1. Carass A, Cuzzocreo J, Wheeler MB, Bazin PL, Resnick SM, Prince JL. Simple paradigm for extra-cerebral tissue removal: algorithm and analysis. Neuroimage 2011;56:1982-92. PMID: 21458576; PMCID: PMC3105165; DOI: 10.1016/j.neuroimage.2011.03.045.
Abstract
Extraction of the brain (i.e., cerebrum, cerebellum, and brain stem) from T1-weighted structural magnetic resonance images is an important initial step in neuroimage analysis. Although automatic algorithms are available, their inconsistent handling of the cortical mantle often requires manual interaction, thereby reducing their effectiveness. This paper presents a fully automated brain extraction algorithm that incorporates elastic registration, tissue segmentation, and morphological techniques, combined by a watershed principle, while paying special attention to preserving the boundary between the gray matter and the cerebrospinal fluid. The approach was evaluated by comparison to a manual rater and to several other leading algorithms on a publicly available data set of brain images, using the Dice coefficient and containment index as performance metrics. The qualitative and quantitative impact of this initial step on subsequent cortical surface generation is also presented. Our experiments demonstrate that our approach is quantitatively better than six other leading algorithms (with statistical significance on modern T1-weighted MR data). We also validated the robustness of the algorithm on a very large data set of over one thousand subjects, and showed that it can replace an experienced manual rater as preprocessing for a cortical surface extraction algorithm, with statistically insignificant differences in cortical surface position.
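As an illustration, here is a minimal sketch of the two overlap metrics used in this evaluation: the Dice coefficient and a containment index. The containment definition below (fraction of the reference mask contained in the test mask) is an assumed interpretation; the paper defines its own variant.

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def containment_index(reference: np.ndarray, test: np.ndarray) -> float:
    """Fraction of reference voxels that fall inside the test mask (assumed definition)."""
    reference, test = reference.astype(bool), test.astype(bool)
    return np.logical_and(reference, test).sum() / reference.sum()

# Toy example: two overlapping 3D masks standing in for brain segmentations.
ref = np.zeros((10, 10, 10), bool); ref[2:8, 2:8, 2:8] = True
seg = np.zeros((10, 10, 10), bool); seg[3:9, 2:8, 2:8] = True
print(dice_coefficient(ref, seg), containment_index(ref, seg))
```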
2. Villarroel M, Guazzi A, Jorge J, Davis S, Watkinson P, Green G, Shenvi A, McCormick K, Tarassenko L. Continuous non-contact vital sign monitoring in neonatal intensive care unit. Healthc Technol Lett 2014;1:87-91. PMID: 26609384; PMCID: PMC4612732; DOI: 10.1049/htl.2014.0077.
Abstract
Current technologies to allow continuous monitoring of vital signs in pre-term infants in the hospital require adhesive electrodes or sensors to be in direct contact with the patient. These can cause stress, pain, and also damage the fragile skin of the infants. It has been established previously that the colour and volume changes in superficial blood vessels during the cardiac cycle can be measured using a digital video camera and ambient light, making it possible to obtain estimates of heart rate or breathing rate. Most of the papers in the literature on non-contact vital sign monitoring report results on adult healthy human volunteers in controlled environments for short periods of time. The authors' current clinical study involves the continuous monitoring of pre-term infants, for at least four consecutive days each, in the high-dependency care area of the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford. The authors have further developed their video-based, non-contact monitoring methods to obtain continuous estimates of heart rate, respiratory rate and oxygen saturation for infants nursed in incubators. In this Letter, it is shown that continuous estimates of these three parameters can be computed with an accuracy which is clinically useful. During stable sections with minimal infant motion, the mean absolute error between the camera-derived estimates of heart rate and the reference value derived from the ECG is similar to the mean absolute error between the ECG-derived value and the heart rate value from a pulse oximeter. Continuous non-contact vital sign monitoring in the NICU using ambient light is feasible, and the authors have shown that clinically important events such as a bradycardia accompanied by a major desaturation can be identified with their algorithms for processing the video signal.
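A minimal sketch, under assumed parameters, of how a heart-rate estimate can be derived from a camera: average the pixel intensity over a skin region of interest frame by frame, then locate the dominant spectral peak in the cardiac band. The neonatal band limits and window choice are assumptions; the authors' actual pipeline is considerably more elaborate.

```python
import numpy as np

def heart_rate_bpm(roi_means: np.ndarray, fps: float, band=(1.5, 4.0)) -> float:
    """Dominant frequency (beats/min) of a PPG-like intensity trace.
    `band` is an assumed cardiac range for neonates (90-240 bpm)."""
    x = roi_means - roi_means.mean()                 # remove the DC component
    spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(spectrum[mask])]

# Synthetic 20 s trace at 30 fps with a 2.5 Hz (150 bpm) pulsatile component.
fps = 30.0
t = np.arange(0, 20, 1 / fps)
trace = 0.02 * np.sin(2 * np.pi * 2.5 * t) + 0.005 * np.random.randn(t.size)
print(heart_rate_bpm(trace, fps))  # ~150 bpm
```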
3. Léger É, Drouin S, Collins DL, Popa T, Kersten-Oertel M. Quantifying attention shifts in augmented reality image-guided neurosurgery. Healthc Technol Lett 2017;4:188-192. PMID: 29184663; PMCID: PMC5683248; DOI: 10.1049/htl.2017.0062.
Abstract
Image-guided surgery (IGS) has allowed for more minimally invasive procedures, leading to better patient outcomes, reduced risk of infection, less pain, shorter hospital stays and faster recoveries. One drawback that has emerged with IGS is that the surgeon must shift their attention from the patient to the monitor for guidance, yet both cognitive and motor tasks are negatively affected by attention shifts. Augmented reality (AR), which merges the real-world surgical scene with preoperative virtual patient images and plans, has been proposed as a solution to this drawback. In this work, we studied the impact of two different types of AR IGS set-ups (mobile AR and desktop AR) and traditional navigation on attention shifts for the specific task of craniotomy planning. We found a significant difference in the time taken to perform the task and in attention shifts between traditional navigation and the AR set-ups, but no significant difference between the two AR set-ups. With mobile AR, however, users felt that the system was easier to use and that their performance was better. These results suggest that regardless of where the AR visualisation is shown to the surgeon, AR may reduce attention shifts, leading to more streamlined and focused procedures.
4. Frantz T, Jansen B, Duerinck J, Vandemeulebroucke J. Augmenting Microsoft's HoloLens with Vuforia tracking for neuronavigation. Healthc Technol Lett 2018;5:221-225. PMID: 30464854; PMCID: PMC6222243; DOI: 10.1049/htl.2018.5079.
Abstract
Major hurdles for Microsoft's HoloLens as a tool in medicine have been accessing tracking data, as well as a relatively high localisation error of the displayed information, cumulatively resulting in its limited use and minimal quantification. The following work investigates the augmentation of HoloLens with the proprietary image processing SDK Vuforia, integrating data from its front-facing RGB camera to provide more spatially stable holograms for neuronavigational use. Continuous camera tracking was able to maintain hologram registration with a mean perceived drift of 1.41 mm, as well as a mean sub-2-mm surface point localisation accuracy of 53%, all while allowing the researcher to walk about a test area. This represents a 68% improvement for the latter and a 34% improvement for the former compared with a typical HoloLens deployment used as a control. Both represent a significant improvement in hologram stability given the current state of the art, and to the best of the authors' knowledge these are the first quantified measurements of hologram stability augmented using data from the RGB sensor.
5. Alam MM, Islam MT. Machine learning approach of automatic identification and counting of blood cells. Healthc Technol Lett 2019;6:103-108. PMID: 31531224; PMCID: PMC6718065; DOI: 10.1049/htl.2018.5098.
Abstract
A complete blood cell count is an important test in medical diagnosis to evaluate overall health condition. Traditionally, blood cells are counted manually using a haemocytometer along with other laboratory equipment and chemical compounds, which is a time-consuming and tedious task. In this work, the authors present a machine learning approach for automatic identification and counting of three types of blood cells using the 'you only look once' (YOLO) object detection and classification algorithm. The YOLO framework has been trained with a modified configuration of the BCCD dataset of blood smear images to automatically identify and count red blood cells, white blood cells, and platelets. Moreover, the study compares this framework with other convolutional neural network architectures in terms of architectural complexity, reported accuracy, and running time, and compares the accuracy of the models for blood cell detection. The authors also tested the trained model on smear images from a different dataset and found that the learned models generalise. Overall, the computer-aided detection and counting system counts blood cells from smear images in less than a second, which is useful for practical applications.
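A minimal sketch of the counting step, assuming detections have already been produced by a YOLO-style detector as (label, confidence, box) tuples; the class names and confidence threshold below are assumptions, not the authors' configuration.

```python
from collections import Counter

def count_cells(detections, conf_threshold=0.5):
    """Count detections per class above a confidence threshold."""
    counts = Counter()
    for label, confidence, _box in detections:
        if confidence >= conf_threshold:
            counts[label] += 1
    return counts

# Hypothetical detector output: (class, confidence, (x1, y1, x2, y2)).
detections = [
    ("RBC", 0.92, (10, 10, 30, 30)),
    ("RBC", 0.81, (42, 15, 60, 33)),
    ("WBC", 0.97, (70, 70, 110, 110)),
    ("Platelet", 0.44, (5, 90, 12, 97)),   # below threshold, not counted
]
print(count_cells(detections))  # Counter({'RBC': 2, 'WBC': 1})
```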
6. Sadoughi F, Kazemy Z, Hamedan F, Owji L, Rahmanikatigari M, Azadboni TT. Artificial intelligence methods for the diagnosis of breast cancer by image processing: a review. Breast Cancer: Targets and Therapy 2018;10:219-230. PMID: 30555254; PMCID: PMC6278839; DOI: 10.2147/bctt.s175311.
Abstract
Breast cancer is the most common cancer among women around the world. Despite enormous medical progress, breast cancer has remained the second leading cause of death worldwide; thus, its early diagnosis has a significant impact on reducing mortality. However, it is often difficult to diagnose breast abnormalities. Different tools such as mammography, ultrasound, and thermography have been developed to screen for breast cancer. Here, the computer helps radiologists identify breast abnormalities more efficiently using image processing and artificial intelligence (AI) tools. This article examined various methods of AI using image processing to diagnose breast cancer. It was a review study conducted through library and Internet searches. Databases such as Medical Literature Analysis and Retrieval System Online (MEDLINE) via PubMed, Springer, IEEE, ScienceDirect, and gray literature (including Google Scholar, conference articles, government technical reports, and other materials not controlled by scientific publishers) were searched for breast cancer keywords, and AI and medical image processing techniques were extracted. The results were provided in tables to demonstrate different techniques and their results over recent years. In this study, 18,651 articles published from 2007 to 2017 were retrieved. Among them, those that used similar techniques and reported similar results were excluded, and 40 articles were finally examined. Since each of the articles used image processing, a list of image features used in each article was also provided. The results showed that support vector machines had the highest accuracy percentage for different types of images (ultrasound: 95.85%, mammography: 93.069%, thermography: 100%). Computerized diagnosis of breast cancer has greatly contributed to the development of medicine, is constantly used by radiologists, and its ethical and medical effects are well recognized. Computer-assisted methods increase diagnosis accuracy by reducing false positives.
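A minimal sketch of the classifier family this review found most accurate: a support vector machine trained on feature vectors extracted from images. The synthetic features below are assumptions standing in for real mammographic or ultrasound descriptors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                  # 12 image-derived features per case
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # benign (0) vs malignant (1), toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
```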
7. Kamerling CP, Fast MF, Ziegenhein P, Menten MJ, Nill S, Oelfke U. Real-time 4D dose reconstruction for tracked dynamic MLC deliveries for lung SBRT. Med Phys 2016;43:6072. PMID: 27806589; PMCID: PMC5965366; DOI: 10.1118/1.4965045.
Abstract
PURPOSE: This study provides a proof of concept for real-time 4D dose reconstruction for lung stereotactic body radiation therapy (SBRT) with multileaf collimator (MLC) tracking and assesses the impact of tumor tracking on the size of target margins.
METHODS: The authors have implemented real-time 4D dose reconstruction by connecting their tracking and delivery software to an Agility MLC at an Elekta Synergy linac and to their in-house treatment planning software (TPS). Actual MLC apertures and (simulated) target positions are reported to the TPS every 40 ms. The dose is calculated in real-time from 4DCT data directly after each reported aperture by utilising precalculated dose-influence data based on a Monte Carlo algorithm. The dose is accumulated onto the peak-exhale (reference) phase using energy-mass transfer mapping. To investigate the impact of a potentially reducible safety margin, the authors created and delivered treatment plans designed for a conventional internal target volume (ITV) + 5 mm, a midventilation approach, and three tracking scenarios for four lung SBRT patients. For the tracking plans, a moving target volume (MTV) was established by delineating the gross target volume (GTV) on every 4DCT phase. These were rigidly aligned to the reference phase, resulting in a unified maximum GTV to which a 1, 3, or 5 mm isotropic margin was added. All scenarios were planned for 9-beam step-and-shoot IMRT to meet the criteria of RTOG 1021 (3 × 18 Gy). The GTV 3D center-of-volume shift varied from 6 to 14 mm.
RESULTS: Real-time dose reconstruction at 25 Hz could be realized on a single workstation due to the highly efficient implementation of dose calculation and dose accumulation. Decreased PTV margins resulted in inadequate target coverage during untracked deliveries for patients with substantial tumor motion. MLC tracking could ensure the GTV target dose for these patients. Organ-at-risk (OAR) doses were consistently reduced by decreased PTV margins. The tracked MTV + 1 mm deliveries resulted in the following OAR dose reductions: lung V20 up to 3.5%, spinal cord D2 up to 0.9 Gy/Fx, and proximal airways D2 up to 1.4 Gy/Fx.
CONCLUSIONS: The authors show that, for patient data at clinical resolution and realistic motion conditions, the delivered dose can be reconstructed in 4D for the whole lung volume in real-time. The dose distributions show that reduced margins yield lower doses to healthy tissue, whilst target dose can be maintained using dynamic MLC tracking.
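A minimal sketch, under heavy simplifying assumptions, of why precalculated dose-influence data makes real-time reconstruction feasible: once a matrix D (voxels × beamlets) is computed offline, the dose of each reported 40 ms aperture reduces to a matrix-vector product that is accumulated. The sizes, weights and phase handling below are placeholders, not the authors' implementation.

```python
import numpy as np

n_voxels, n_beamlets = 20_000, 400
# Stand-in for the offline Monte Carlo dose-influence matrix (voxels x beamlets).
D = np.random.rand(n_voxels, n_beamlets).astype(np.float32)

accumulated = np.zeros(n_voxels, dtype=np.float32)

def report_aperture(open_beamlets, meterset_weight):
    """Called every 40 ms: add the dose of the currently open aperture."""
    fluence = np.zeros(n_beamlets, dtype=np.float32)
    fluence[open_beamlets] = meterset_weight
    # In the 4D setting the result would additionally be mapped from the
    # current breathing phase onto the reference phase before accumulation.
    np.add(accumulated, D @ fluence, out=accumulated)

report_aperture(open_beamlets=range(120, 180), meterset_weight=0.01)
print(accumulated.max())
```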
8. Wheeler G, Deng S, Toussaint N, Pushparajah K, Schnabel JA, Simpson JM, Gomez A. Virtual interaction and visualisation of 3D medical imaging data with VTK and Unity. Healthc Technol Lett 2018;5:148-153. PMID: 30800321; PMCID: PMC6372083; DOI: 10.1049/htl.2018.5064.
Abstract
The authors present a method to interconnect the Visualisation Toolkit (VTK) and Unity. This integration enables them to exploit the visualisation capabilities of VTK with Unity's widespread support of virtual, augmented, and mixed reality displays, and interaction and manipulation devices, for the development of medical image applications for virtual environments. The proposed method utilises OpenGL context sharing between Unity and VTK to render VTK objects into the Unity scene via a Unity native plugin. The proposed method is demonstrated in a simple Unity application that performs VTK volume rendering to display thoracic computed tomography and cardiac magnetic resonance images. Quantitative measurements of the achieved frame rates show that this approach provides over 90 fps using standard hardware, which is suitable for current augmented reality/virtual reality display devices.
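A minimal sketch of the VTK side of such a pipeline: a smart volume mapper rendering a synthetic CT-like volume. In the authors' system the renderer is driven from a Unity native plugin over a shared OpenGL context; here plain VTK windowing stands in so the example stays self-contained, and the random volume is a placeholder for real image data.

```python
import numpy as np
import vtk
from vtk.util import numpy_support

vol = (np.random.rand(64, 64, 64) * 255).astype(np.uint8)   # stand-in volume
image = vtk.vtkImageData()
image.SetDimensions(64, 64, 64)
image.GetPointData().SetScalars(numpy_support.numpy_to_vtk(vol.ravel(), deep=True))

# Simple grayscale transfer functions for the volume rendering.
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0); opacity.AddPoint(255, 0.2)
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0, 0, 0); color.AddRGBPoint(255, 1, 1, 1)

prop = vtk.vtkVolumeProperty()
prop.SetScalarOpacity(opacity); prop.SetColor(color)
mapper = vtk.vtkSmartVolumeMapper()
mapper.SetInputData(image)
volume = vtk.vtkVolume(); volume.SetMapper(mapper); volume.SetProperty(prop)

renderer = vtk.vtkRenderer(); renderer.AddVolume(volume)
window = vtk.vtkRenderWindow(); window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor(); interactor.SetRenderWindow(window)
window.Render(); interactor.Start()
```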
9. Moreta-Martinez R, García-Mato D, García-Sevilla M, Pérez-Mañanes R, Calvo-Haro J, Pascau J. Augmented reality in computer-assisted interventions based on patient-specific 3D printed reference. Healthc Technol Lett 2018;5:162-166. PMID: 30464847; PMCID: PMC6222179; DOI: 10.1049/htl.2018.5072.
Abstract
Augmented reality (AR) can be an interesting technology for clinical scenarios as an alternative to conventional surgical navigation. However, the registration between augmented data and real-world spaces is a limiting factor. In this study, the authors propose a method based on desktop three-dimensional (3D) printing to create patient-specific tools containing a visual pattern that enables automatic registration. This specific tool fits on the patient only in the location it was designed for, avoiding placement errors. The solution has been developed as a software application running on Microsoft HoloLens. The workflow was validated on a 3D printed phantom replicating the anatomy of a patient presenting an extraosseous Ewing's sarcoma, and then tested during the actual surgical intervention. The application allowed physicians to visualise the skin, bone and tumour location overlaid on the phantom and patient. This workflow could be extended to many clinical applications in the surgical field, and also to training and simulation, in cases where hard body structures are involved. Although the authors have tested their workflow on an AR head-mounted display, they believe that a similar approach can be applied to other devices such as tablets or smartphones.
10. Demir Ö, Yılmaz Çamurcu A. Computer-aided detection of lung nodules using outer surface features. Biomed Mater Eng 2016;26 Suppl 1:S1213-22. PMID: 26405880; DOI: 10.3233/bme-151418.
Abstract
In this study, a computer-aided detection (CAD) system was developed for the detection of lung nodules in computed tomography images. The CAD system consists of four phases, including two-dimensional and three-dimensional preprocessing phases. In the feature extraction phase, four groups of features are extracted from volumes of interest: morphological features, statistical and histogram features, statistical and histogram features of the outer surface, and texture features of the outer surface. The support vector machine classifier is optimized using particle swarm optimization. Using three groups of classification features, the CAD system provides 97.37% sensitivity, 86.38% selectivity, 88.97% accuracy and 2.7 false positives per scan. After the inclusion of outer surface texture features, the CAD system reaches 98.03% sensitivity, 87.71% selectivity, 90.12% accuracy and 2.45 false positives per scan. Experimental results demonstrate that outer surface texture features of nodule candidates are useful for increasing sensitivity and decreasing the number of false positives in the detection of lung nodules in computed tomography images.
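A minimal sketch of tuning an SVM with particle swarm optimisation, the combination used in the classification stage above. The swarm settings, search ranges and synthetic data are assumptions; the paper's feature sets are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def fitness(log_c, log_gamma):
    """Cross-validated accuracy of an RBF SVM with the given hyperparameters."""
    svc = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(svc, X, y, cv=3).mean()

rng = np.random.default_rng(0)
n_particles, n_iters = 10, 15
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # log10(C), log10(gamma)
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(*p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    # Standard PSO update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(*p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(f"best C=10^{gbest[0]:.2f}, gamma=10^{gbest[1]:.2f}, "
      f"CV accuracy={pbest_val.max():.3f}")
```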
11. Lee SC, Fuerst B, Tateno K, Johnson A, Fotouhi J, Osgood G, Tombari F, Navab N. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery. Healthc Technol Lett 2017;4:168-173. PMID: 29184659; PMCID: PMC5683202; DOI: 10.1049/htl.2017.0066.
Abstract
Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system which incorporates multi-modal data fusion and model-based surgical tool tracking to create a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds with synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows tracking of surgical tools occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
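A minimal sketch of the rigid alignment at the heart of such an RGBD-to-CBCT calibration: a point-to-point ICP loop whose per-iteration rotation and translation are obtained in closed form (Kabsch/SVD). Real calibrations need good initialisation, outlier rejection and far denser point sets; the synthetic clouds below are placeholders.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation R and translation t mapping src -> dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iters=30):
    """Iterative closest point with brute-force correspondences (small clouds)."""
    cur = src.copy()
    for _ in range(n_iters):
        nn = dst[np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
    return best_rigid_transform(src, cur)        # total transform src -> dst

rng = np.random.default_rng(1)
surface = rng.normal(size=(200, 3))              # e.g. CBCT-derived surface points
angle = np.deg2rad(12)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
cloud = surface @ R_true.T + np.array([0.05, -0.02, 0.1])   # RGBD point cloud
R, t = icp(cloud, surface)
print(np.linalg.norm(R @ R_true - np.eye(3)))    # ~0 when converged
```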
12. Niu S, Liu M, Liu Y, Wang J, Song H. Distant Domain Transfer Learning for Medical Imaging. IEEE J Biomed Health Inform 2021;25:3784-3793. PMID: 33449887; PMCID: PMC8545174; DOI: 10.1109/jbhi.2021.3051470.
Abstract
Medical image processing is one of the most important topics in the Internet of Medical Things (IoMT). Recently, deep learning methods have achieved state-of-the-art performance on medical imaging tasks. In this paper, we propose a novel transfer learning framework for medical image classification and apply it to COVID-19 diagnosis with lung computed tomography (CT) images, where well-labeled training data sets cannot be easily accessed due to the disease's novelty and privacy policies. The proposed method has two components: a reduced-size U-Net segmentation model and a Distant Feature Fusion (DFF) classification model. This study addresses an under-investigated but important transfer learning problem, termed Distant Domain Transfer Learning (DDTL). We develop a DDTL model for COVID-19 diagnosis using the unlabeled Office-31, Caltech-256, and chest X-ray image data sets as source data, and a small set of labeled COVID-19 lung CT scans as target data. The main contributions of this study are: 1) the proposed method benefits from unlabeled data in distant domains, which can be easily accessed; 2) it can effectively handle the distribution shift between the training data and the testing data; 3) it achieves 96% classification accuracy, which is 13% higher than "non-transfer" algorithms and 8% higher than existing transfer and distant transfer algorithms.
13. Ruvio G, Cuccaro A, Solimene R, Brancaccio A, Basile B, Ammann MJ. Microwave bone imaging: a preliminary scanning system for proof-of-concept. Healthc Technol Lett 2016;3:218-221. PMID: 27733930; PMCID: PMC5047277; DOI: 10.1049/htl.2016.0003.
Abstract
This Letter introduces a feasibility study of a scanning system for applications in biomedical bone imaging operating in the microwave range 0.5–4 GHz. Mechanical uncertainties and data acquisition time are minimised by using a fully automated scanner that controls two antipodal Vivaldi antennas. Accurate antenna positioning and synchronisation with data acquisition enable a rigorous proof-of-concept for the microwave imaging procedure on a multi-layer phantom including skin, fat, muscle and bone tissues. The presence of a suitable coupling medium enables antenna miniaturisation and mitigates the impedance mismatch between antennas and phantom. The three-dimensional image of the tibia and fibula is successfully reconstructed by scanning the multi-layer phantom, owing to the distinctive dielectric contrast between target and surrounding tissues. These results show the viability of a microwave bone imaging technology which is low cost, portable, non-ionising, and does not require specially trained personnel. In fact, as no a priori characterisation of the antenna is required, the image formation procedure is conveniently simplified.
14. Zhao Z, Cai T, Chang F, Cheng X. Real-time surgical instrument detection in robot-assisted surgery using a convolutional neural network cascade. Healthc Technol Lett 2019;6:275-279. PMID: 32038871; PMCID: PMC6952255; DOI: 10.1049/htl.2019.0064.
Abstract
Surgical instrument detection in robot-assisted surgery videos is an important vision component for these systems. Most current deep learning methods focus on single-tool detection and suffer from low detection speed. To address this, the authors propose a novel frame-by-frame detection method using a cascading convolutional neural network (CNN) which consists of two different CNNs for real-time multi-tool detection. An hourglass network and a modified visual geometry group (VGG) network are applied to jointly predict the localisation: the former outputs detection heatmaps representing the location of tool tip areas, and the latter performs bounding-box regression for tool tip areas on these heatmaps stacked with the input RGB image frames. The authors' method is tested on the publicly available EndoVis Challenge dataset and the ATLAS Dione dataset. The experimental results show that their method achieves better performance than mainstream detection methods in terms of detection accuracy and speed.
15. Hemorrhage Detection Based on 3D CNN Deep Learning Framework and Feature Fusion for Evaluating Retinal Abnormality in Diabetic Patients. Sensors (Basel) 2021;21:3865. PMID: 34205120; PMCID: PMC8199947; DOI: 10.3390/s21113865.
Abstract
Diabetic retinopathy (DR) is the main cause of blindness in diabetic patients. Early and accurate diagnosis can improve the analysis and prognosis of the disease. One of the earliest symptoms of DR is hemorrhages in the retina; therefore, we propose a new method for accurate hemorrhage detection from retinal fundus images. First, the proposed method uses a modified contrast enhancement method to improve the edge details of the input retinal fundus images. In the second stage, a new convolutional neural network (CNN) architecture is proposed to detect hemorrhages, and a modified pre-trained CNN model is used to extract features from the detected hemorrhages. In the third stage, all extracted feature vectors are fused using the convolutional sparse image decomposition method, and finally the best features are selected using the multi-logistic regression controlled entropy variance approach. The proposed method is evaluated on 1509 images from the HRF, DRIVE, STARE, MESSIDOR, DIARETDB0, and DIARETDB1 databases and achieves an average accuracy of 97.71%, which is superior to previous works. Moreover, the proposed hemorrhage detection system attains better performance, in terms of visual quality and quantitative analysis with high accuracy, in comparison with state-of-the-art methods.
16. Kubicek J, Tomanec F, Cerny M, Vilimek D, Kalova M, Oczka D. Recent Trends, Technical Concepts and Components of Computer-Assisted Orthopedic Surgery Systems: A Comprehensive Review. Sensors (Basel) 2019;19:E5199. PMID: 31783631; PMCID: PMC6929084; DOI: 10.3390/s19235199.
Abstract
Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging types of system in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases employing modern clinical navigation systems and surgical tools. This paper presents a comprehensive review of recent trends and possibilities of CAOS systems. There are three types of surgical planning systems: those based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound images), those that utilize 2D or 3D fluoroscopic images, and those that utilize kinetic information about the joints and morphological information about the target bones. This review focuses on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools used in them. The review also outlines the possibility of using ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems.
17. Boone JM. Reply to "Comment on the 'Report of AAPM TG 204: Size-specific dose estimates (SSDE) in pediatric and adult body CT examinations'" [AAPM Report 204, 2011]. Med Phys 2012;39:4615-4616. PMID: 28516563; DOI: 10.1118/1.4725757.
18. Lahmiri S, Boukadoum M. New approach for automatic classification of Alzheimer's disease, mild cognitive impairment and healthy brain magnetic resonance images. Healthc Technol Lett 2014;1:32-6. PMID: 26609373; DOI: 10.1049/htl.2013.0022.
Abstract
Explored is the utility of modelling brain magnetic resonance images as a fractal object for the classification of healthy brain images against those with Alzheimer's disease (AD) or mild cognitive impairment (MCI). More precisely, fractal multi-scale analysis is used to build feature vectors from the derived Hurst exponents, which are then classified by support vector machines (SVMs). Three experiments were conducted: in the first, the SVM was trained to classify AD against healthy images; in the second, AD against MCI; and in the third, a multiclass SVM was trained to classify all three types of images. The experimental results, using the 10-fold cross-validation technique, indicate that the SVM achieved a 97.08% ± 0.05 correct classification rate, 98.09% ± 0.04 sensitivity and 96.07% ± 0.07 specificity for the classification of healthy against MCI images, thus outperforming recent works found in the literature. For the classification of MCI against AD, the SVM achieved a 97.5% ± 0.04 correct classification rate, 100% sensitivity and 94.93% ± 0.08 specificity. The third experiment also showed that the multiclass SVM provided highly accurate classification results. The processing time for a given image was 25 s. These findings suggest that this approach is efficient and may be promising for clinical applications.
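A minimal sketch of estimating a Hurst exponent with the aggregated-variance method, one common multi-scale estimator; the paper's exact multi-scale analysis and its extension to 2D MR images are not reproduced here, and the block sizes are assumptions.

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(4, 8, 16, 32, 64)):
    """Estimate H from the scaling of block-mean variance: var ~ m^(2H-2)."""
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = x.size // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]   # slope = 2H - 2
    return 1.0 + slope / 2.0

rng = np.random.default_rng(0)
white = rng.normal(size=4096)                # uncorrelated noise, expect H ~ 0.5
print(f"H ~ {hurst_aggregated_variance(white):.2f}")
```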
19. Ekanayake SW, Morris AJ, Forrester M, Pathirana PN. BioKin: an ambulatory platform for gait kinematic and feature assessment. Healthc Technol Lett 2015;2:40-5. PMID: 26609403; DOI: 10.1049/htl.2014.0094.
Abstract
A platform is introduced that moves gait analysis, normally restricted to a clinical environment in a well-equipped gait laboratory, to an ambulatory system that can potentially be used in non-clinical settings. This novel system can provide functional measurements to guide therapeutic interventions for people requiring rehabilitation who have limited access to such gait laboratories. The BioKin system consists of three layers: a low-cost wearable wireless motion capture sensor, a data collection and storage engine, and the motion analysis and visualisation platform. Moreover, a novel limb orientation estimation algorithm is implemented in the motion analysis platform. The performance of the orientation estimation algorithm is validated against orientation results from a commercial optical motion analysis system and an instrumented treadmill. The results demonstrate a root-mean-square error of less than 4° and a correlation coefficient of more than 0.95 when compared with the industry-standard system. These results indicate that the proposed motion analysis platform is a potential addition to existing gait laboratories and can facilitate gait analysis in remote locations.
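A minimal sketch of the two agreement statistics used in this validation: root-mean-square error and Pearson correlation between the wearable-derived and reference (optical motion capture) orientation traces. The synthetic knee-angle trace is an assumption for illustration.

```python
import numpy as np

def agreement(estimate: np.ndarray, reference: np.ndarray):
    """RMSE and Pearson correlation between two equal-length traces."""
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    r = np.corrcoef(estimate, reference)[0, 1]
    return rmse, r

t = np.linspace(0, 10, 1000)
reference = 30 * np.sin(2 * np.pi * 1.0 * t)            # toy joint angle, degrees
estimate = reference + np.random.default_rng(0).normal(0, 2, t.size)
rmse, r = agreement(estimate, reference)
print(f"RMSE = {rmse:.2f} deg, r = {r:.3f}")             # here: <4 deg and >0.95
```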
20. Lahmiri S. Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function. Healthc Technol Lett 2016;3:67-71. PMID: 27222723; DOI: 10.1049/htl.2015.0007.
Abstract
Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in tBEMD domain are employed; namely, fourth order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for experiments. The original images were corrupted with additive Gaussian noise with three different levels. Based on peak-signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications.
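A minimal sketch of the robustness idea: weighting samples by a Student's t density so extreme values contribute less to a local mean than under a plain average. How this enters the BEMD envelope computation is paraphrased here, not the author's exact formulation; the centring on the median and the MAD scale are assumptions.

```python
import numpy as np
from scipy import stats

def t_weighted_mean(x: np.ndarray, df: float = 3.0) -> float:
    """Mean with Student's-t weights centred on the median (robust to outliers)."""
    scale = np.median(np.abs(x - np.median(x))) + 1e-12   # robust scale (MAD)
    w = stats.t.pdf((x - np.median(x)) / scale, df)
    return float(np.sum(w * x) / np.sum(w))

x = np.concatenate([np.random.default_rng(0).normal(5.0, 1.0, 100), [50.0, 60.0]])
print(f"plain mean: {x.mean():.2f}, t-weighted mean: {t_weighted_mean(x):.2f}")
```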
21. Fletcher E, DeCarli C, Fan AP, Knaack A. Convolutional Neural Net Learning Can Achieve Production-Level Brain Segmentation in Structural Magnetic Resonance Imaging. Front Neurosci 2021;15:683426. PMID: 34234642; PMCID: PMC8255694; DOI: 10.3389/fnins.2021.683426.
Abstract
Deep learning implementations using convolutional neural nets have recently demonstrated promise in many areas of medical imaging. In this article we lay out the methods by which we have achieved consistently high quality, high throughput computation of intra-cranial segmentation from whole head magnetic resonance images, an essential but typically time-consuming bottleneck for brain image analysis. We refer to this output as “production-level” because it is suitable for routine use in processing pipelines. Training and testing with an extremely large archive of structural images, our segmentation algorithm performs uniformly well over a wide variety of separate national imaging cohorts, giving Dice metric scores exceeding those of other recent deep learning brain extractions. We describe the components involved to achieve this performance, including size, variety and quality of ground truth, and appropriate neural net architecture. We demonstrate the crucial role of appropriately large and varied datasets, suggesting a less prominent role for algorithm development beyond a threshold of capability.
22. Vassallo R, Kasuya H, Lo BWY, Peters T, Xiao Y. Augmented reality guidance in cerebrovascular surgery using microscopic video enhancement. Healthc Technol Lett 2018;5:158-161. PMID: 30464846; PMCID: PMC6222178; DOI: 10.1049/htl.2018.5069.
Abstract
Cerebrovascular surgery treats vessel abnormalities in the brain and spinal cord, including arteriovenous malformations (AVMs) and aneurysms. These procedures often involve clipping the vessels feeding blood to these abnormalities, making accurate classification of blood vessel types (feeding versus draining) important during surgery. Previous approaches to guiding the intraoperative identification of vessels have included augmented reality (AR) using pre-operative images, injected dyes, and Doppler ultrasound, each with its drawbacks. The authors propose and demonstrate a novel technique to help differentiate vessels by enhancing short videos of a few seconds from the surgical microscope using motion magnification and spectral analysis, and by constructing AR views that fuse the analysis results, as intuitive colourmaps, with the surgical microscope view. They demonstrated the proposed technique retrospectively on two real cerebrovascular surgical cases: one AVM and one aneurysm. The results showed that the proposed technique can help characterise different vessel types (feeding and draining the abnormality), in agreement with those identified by the operating surgeon.
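A minimal sketch of the spectral-analysis half of the idea: for each pixel of a short video clip, measure the power in an assumed cardiac band, producing a map that could be overlaid on the microscope view. Motion magnification and the clinical colourmap construction are not reproduced.

```python
import numpy as np

def pulsatility_map(clip: np.ndarray, fps: float, band=(0.8, 2.0)) -> np.ndarray:
    """clip: (frames, H, W) grayscale video. Returns per-pixel band power."""
    x = clip - clip.mean(axis=0)                     # remove static background
    spectrum = np.abs(np.fft.rfft(x, axis=0)) ** 2
    freqs = np.fft.rfftfreq(clip.shape[0], d=1.0 / fps)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].sum(axis=0)

# Synthetic 4 s clip at 25 fps: one region pulsates at 1.2 Hz (a "feeder").
fps, n = 25.0, 100
t = np.arange(n) / fps
clip = np.random.rand(n, 32, 32) * 0.01
clip[:, 8:16, 8:16] += 0.2 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
m = pulsatility_map(clip, fps)
print(m[10, 10] > 10 * m[25, 25])                    # pulsating region stands out
```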
23. Muzammil SR, Maqsood S, Haider S, Damaševičius R. CSID: A Novel Multimodal Image Fusion Algorithm for Enhanced Clinical Diagnosis. Diagnostics (Basel) 2020;10:E904. PMID: 33167376; PMCID: PMC7694345; DOI: 10.3390/diagnostics10110904.
Abstract
Technology-assisted clinical diagnosis has gained tremendous importance in modern healthcare systems. To this end, multimodal medical image fusion has attracted great attention from the research community. Several fusion algorithms merge Computed Tomography (CT) and Magnetic Resonance (MR) images to extract detailed information, which is used to enhance clinical diagnosis. However, these algorithms exhibit several limitations, such as blurred edges during decomposition, excessive information loss that gives rise to false structural artifacts, and high spatial distortion due to inadequate contrast. To resolve these issues, this paper proposes a novel algorithm, namely Convolutional Sparse Image Decomposition (CSID), that fuses CT and MR images. CSID uses contrast stretching and the spatial gradient method to identify edges in the source images and employs cartoon-texture decomposition, which creates an overcomplete dictionary. Moreover, this work proposes a modified convolutional sparse coding method and employs improved decision maps and a fusion rule to obtain the final fused image. Simulation results using six datasets of multimodal images demonstrate that CSID achieves superior performance, in terms of visual quality and enriched information extraction, in comparison with eminent image fusion algorithms.
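A minimal sketch of a classic two-scale fusion baseline, not the CSID algorithm itself: split each registered CT/MR slice into base and detail layers, average the bases, and keep the stronger detail per pixel. The smoothing scale and the random stand-in images are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse(ct: np.ndarray, mr: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Two-scale fusion: averaged base layers plus per-pixel max-abs detail."""
    base_ct, base_mr = gaussian_filter(ct, sigma), gaussian_filter(mr, sigma)
    detail_ct, detail_mr = ct - base_ct, mr - base_mr
    detail = np.where(np.abs(detail_ct) >= np.abs(detail_mr),
                      detail_ct, detail_mr)
    return 0.5 * (base_ct + base_mr) + detail

rng = np.random.default_rng(0)
ct, mr = rng.random((128, 128)), rng.random((128, 128))   # stand-in slices
print(fuse(ct, mr).shape)
```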
24. Unberath M, Fotouhi J, Hajek J, Maier A, Osgood G, Taylor R, Armand M, Navab N. Augmented reality-based feedback for technician-in-the-loop C-arm repositioning. Healthc Technol Lett 2018;5:143-147. PMID: 30464844; PMCID: PMC6222181; DOI: 10.1049/htl.2018.5066.
Abstract
Interventional C-arm imaging is crucial to percutaneous orthopedic procedures as it enables the surgeon to monitor the progress of surgery at the anatomy level. Minimally invasive interventions require repeated acquisition of X-ray images from different anatomical views to verify tool placement. Achieving and reproducing these views often comes at the cost of increased surgical time and radiation. We propose a marker-free 'technician-in-the-loop' Augmented Reality (AR) solution for C-arm repositioning. The X-ray technician operating the C-arm interventionally is equipped with a head-mounted display capable of recording desired C-arm poses in 3D via an integrated infrared sensor. For C-arm repositioning to a target view, the recorded pose is restored as a virtual object and visualised in an AR environment, serving as a perceptual reference for the technician. Our proof-of-principle findings from a simulated trauma surgery indicate that the proposed system can decrease the number of X-ray images required to re-align the scanner with an intra-operatively recorded C-arm view from an average of 2.76 down to zero, suggesting substantial reductions in radiation dose. The proposed AR solution is a first step towards facilitating communication between the surgeon and the surgical staff, improving the quality of surgical image acquisition, and enabling context-aware guidance for surgery rooms of the future.
25. Farzaneh N, Williamson CA, Jiang C, Srinivasan A, Bapuraj JR, Gryak J, Najarian K, Soroushmehr SMR. Automated Segmentation and Severity Analysis of Subdural Hematoma for Patients with Traumatic Brain Injuries. Diagnostics (Basel) 2020;10:E773. PMID: 33007929; PMCID: PMC7600198; DOI: 10.3390/diagnostics10100773.
Abstract
Detection and severity assessment of subdural hematoma is a major step in the evaluation of traumatic brain injuries. This is a retrospective study of 110 computed tomography (CT) scans from patients admitted to the Michigan Medicine Neurological Intensive Care Unit or Emergency Department. A machine learning pipeline was developed to segment and assess the severity of subdural hematoma. First, the probability of each point belonging to the hematoma region was determined using a combination of hand-crafted and deep features. This probability provided the initial state of the segmentation. Next, a 3D post-processing model was applied to evolve the initial state and delineate the hematoma. The recall, precision, and Dice similarity coefficient of the proposed segmentation method were 78.61%, 76.12%, and 75.35%, respectively, for the entire population. The Dice similarity coefficient was 79.97% for clinically significant hematomas, which compared favorably to an inter-rater Dice similarity coefficient. In volume-based severity analysis, the proposed model yielded an F1, recall, and specificity of 98.22%, 98.81%, and 92.31%, respectively, in detecting moderate and severe subdural hematomas based on hematoma volume. These results show that the combination of classical image processing and deep learning can outperform deep learning only methods to achieve greater average performance and robustness. Such a system can aid critical care physicians in reducing time to intervention and thereby improve long-term patient outcomes.
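A minimal sketch of the volume-based severity step: convert a binary segmentation into millilitres using the CT voxel spacing and apply a threshold. The spacing and the 25 ml cut-off are assumptions for illustration, not the study's clinical criterion.

```python
import numpy as np

def hematoma_volume_ml(mask: np.ndarray, spacing_mm=(5.0, 0.5, 0.5)) -> float:
    """Volume of a binary mask in ml given voxel spacing (slice, row, col) in mm."""
    voxel_ml = np.prod(spacing_mm) / 1000.0          # mm^3 -> ml
    return float(mask.sum() * voxel_ml)

def severity(volume_ml: float, cutoff_ml: float = 25.0) -> str:
    return "moderate/severe" if volume_ml >= cutoff_ml else "mild"

mask = np.zeros((30, 512, 512), dtype=bool)
mask[10:16, 200:260, 200:280] = True                 # toy subdural collection
vol = hematoma_volume_ml(mask)
print(f"{vol:.1f} ml -> {severity(vol)}")
```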