1
Ervik Ø, Tveten I, Hofstad EF, Langø T, Leira HO, Amundsen T, Sorger H. Automatic Segmentation of Mediastinal Lymph Nodes and Blood Vessels in Endobronchial Ultrasound (EBUS) Images Using Deep Learning. J Imaging 2024; 10:190. PMID: 39194979. DOI: 10.3390/jimaging10080190.
Abstract
Endobronchial ultrasound (EBUS) is used for minimally invasive sampling of thoracic lymph nodes. In lung cancer staging, accurate assessment of mediastinal structures is essential but is challenged by variations in anatomy, image quality, and operator-dependent image interpretation. This study aimed to automatically detect and segment mediastinal lymph nodes and blood vessels in EBUS images using a novel U-Net architecture-based approach. A total of 1161 EBUS images from 40 patients were annotated. For training and validation, 882 images from 30 patients and 145 images from 5 patients were used, respectively; a separate set of 134 images was reserved for testing. For lymph node and blood vessel segmentation, respectively, the mean ± standard deviation (SD) values were: Dice similarity coefficient 0.71 ± 0.35 and 0.76 ± 0.38, precision 0.69 ± 0.36 and 0.82 ± 0.22, sensitivity 0.71 ± 0.38 and 0.80 ± 0.25, specificity 0.98 ± 0.02 and 0.99 ± 0.01, and F1 score 0.85 ± 0.16 and 0.81 ± 0.21. The average processing and segmentation run time per image was 55 ± 1 ms (mean ± SD). The new U-Net architecture-based approach (EBUS-AI) automatically detected and segmented mediastinal lymph nodes and blood vessels in EBUS images. The method performed well, was feasible and fast, and enabled real-time automatic labeling.
Affiliation(s)
- Øyvind Ervik
- Clinic of Medicine, Nord-Trøndelag Hospital Trust, Levanger Hospital, 7601 Levanger, Norway
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, 7030 Trondheim, Norway
- Ingrid Tveten
- Department of Health Research, SINTEF Digital, 7034 Trondheim, Norway
- Thomas Langø
- Department of Health Research, SINTEF Digital, 7034 Trondheim, Norway
- Department of Research, St. Olavs Hospital, 7030 Trondheim, Norway
- Håkon Olav Leira
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, 7030 Trondheim, Norway
- Department of Thoracic Medicine, St Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Tore Amundsen
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, 7030 Trondheim, Norway
- Department of Thoracic Medicine, St Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Hanne Sorger
- Clinic of Medicine, Nord-Trøndelag Hospital Trust, Levanger Hospital, 7601 Levanger, Norway
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, 7030 Trondheim, Norway
2
Rettenberger L, Schilling M, Elser S, Bohland M, Reischl M. Self-Supervised Learning for Annotation Efficient Biomedical Image Segmentation. IEEE Trans Biomed Eng 2023; 70:2519-2528. PMID: 37028023. DOI: 10.1109/tbme.2023.3252889.
Abstract
OBJECTIVE The scarcity of high-quality annotated data is omnipresent in machine learning. In biomedical segmentation applications especially, experts must spend much of their time annotating because of the task's complexity. Hence, methods to reduce such effort are desired. METHODS Self-Supervised Learning (SSL) is an emerging field that increases performance when unannotated data are available. However, thorough studies of segmentation tasks and small datasets are still lacking. We conduct a comprehensive qualitative and quantitative evaluation examining SSL's applicability, with a focus on biomedical imaging. We consider various metrics and introduce multiple novel application-specific measures. All metrics and state-of-the-art methods are provided in a directly applicable software package (https://osf.io/gu2t8/). RESULTS We show that SSL can lead to performance improvements of up to 10%, which is especially notable for methods designed for segmentation tasks. CONCLUSION SSL is a sensible approach to data-efficient learning, especially for biomedical applications, where generating annotations requires much effort. Additionally, our extensive evaluation pipeline is vital, since there are significant differences between the various approaches. SIGNIFICANCE We provide biomedical practitioners with an overview of innovative data-efficient solutions and a novel toolbox for applying new approaches. Our pipeline for analyzing SSL methods is provided as a ready-to-use software package.
3
Multimodal Registration for Image-Guided EBUS Bronchoscopy. J Imaging 2022; 8:189. PMID: 35877633. PMCID: PMC9320860. DOI: 10.3390/jimaging8070189.
Abstract
The state-of-the-art procedure for examining the lymph nodes in a lung cancer patient involves using an endobronchial ultrasound (EBUS) bronchoscope. The EBUS bronchoscope integrates two modalities into one device: (1) videobronchoscopy, which gives video images of the airway walls; and (2) convex-probe EBUS, which gives 2D fan-shaped views of extraluminal structures situated outside the airways. During the procedure, the physician first employs videobronchoscopy to navigate the device through the airways. Next, upon reaching a given node's approximate vicinity, the physician probes the airway walls using EBUS to localize the node. Because lymph nodes lie beyond the airways, EBUS is essential for confirming a node's location. Unfortunately, it is well documented that EBUS is difficult to use. In addition, while new image-guided bronchoscopy systems provide effective guidance for videobronchoscopic navigation, they offer no assistance for guiding EBUS localization. We propose a method for registering a patient's chest CT scan to live surgical EBUS views, thereby facilitating accurate image-guided EBUS bronchoscopy. The method entails an optimization process that registers CT-based virtual EBUS views to live EBUS probe views. Results using lung cancer patient data show that the method correctly registered 28/28 (100%) lymph nodes scanned by EBUS, with a mean registration time of 3.4 s. The mean position and direction errors of registered sites were 2.2 mm and 11.8°, respectively. Furthermore, sensitivity studies show the method's robustness to parameter variations. Lastly, we demonstrate the method's use in an image-guided system designed for guiding both phases of EBUS bronchoscopy.
4
Ma J, Bao L, Lou Q, Kong D. Transfer learning for automatic joint segmentation of thyroid and breast lesions from ultrasound images. Int J Comput Assist Radiol Surg 2021; 17:363-372. PMID: 34881409. DOI: 10.1007/s11548-021-02505-y.
Abstract
PURPOSE Accurate and automatic segmentation of lesions from ultrasound (US) images plays a significant role in clinical applications. Nevertheless, it is extremely challenging because distinct components of heterogeneous lesions are similar to the background in US images. In this study, a transfer learning-based method is developed for fully automatic joint segmentation of nodular lesions. METHODS Transfer learning is a widely used method for building high-performing computer vision models. Our transfer learning model is a novel type of densely connected convolutional network (SDenseNet). Specifically, we pre-train SDenseNet on the ImageNet dataset. SDenseNet is then designed as a multi-channel model (denoted Mul-DenseNet) for automatic joint segmentation of lesions. As a comparison, our SDenseNet with different transfer learning schemes is applied to segmenting nodules. We find that more pre-training data and multiple rounds of pre-training do not always improve nodule segmentation, and that the performance of transfer learning depends on a judicious choice of dataset and the characteristics of the targets. RESULTS Experimental results illustrate a significant performance advantage of Mul-DenseNet over the other methods in the study. Specifically, for thyroid nodule segmentation, the overlap metric (OM), Dice ratio (DR), true-positive rate (TPR), false-positive rate (FPR), and modified Hausdorff distance (MHD) are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] mm, respectively; for breast nodule segmentation, OM, DR, TPR, FPR, and MHD are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] mm, respectively. CONCLUSIONS The experimental results illustrate that our transfer learning models are very effective in lesion segmentation, demonstrating the potential of the proposed Mul-DenseNet model in clinical applications. The model can reduce physicians' heavy workload and help avoid misdiagnoses due to excessive fatigue. Moreover, lesion detection becomes easy and reproducible without requiring medical expertise.
Affiliation(s)
- Jinlian Ma
- School of Microelectronics, Shandong University, Jinan, China; Shenzhen Research Institute of Shandong University, A301 Virtual University Park in South District of Shenzhen, Shenzhen, China; State Key Lab of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Lingyun Bao
- Department of Ultrasound, Hangzhou First People's Hospital, Zhejiang University, Hangzhou, China
- Qiong Lou
- School of Science, Zhejiang University of Science and Technology, Hangzhou, China
- Dexing Kong
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China.
5
Abstract
The staging of the central-chest lymph nodes is a major step in the management of lung-cancer patients. For this purpose, the physician uses a device that integrates videobronchoscopy and an endobronchial ultrasound (EBUS) probe. To biopsy a lymph node, the physician first uses videobronchoscopy to navigate through the airways and then invokes EBUS to localize and biopsy the node. Unfortunately, this process proves difficult for many physicians, with the choice of biopsy site found by trial and error. We present a complete image-guided EBUS bronchoscopy system tailored to lymph-node staging. The system accepts a patient's 3D chest CT scan, an optional PET scan, and the EBUS bronchoscope's video sources as inputs. System workflow follows two phases: (1) procedure planning and (2) image-guided EBUS bronchoscopy. Procedure planning derives airway guidance routes that facilitate optimal EBUS scanning and nodal biopsy. During the live procedure, the system's graphical display suggests a series of device maneuvers to perform and provides multimodal visual cues for locating suitable biopsy sites. To this end, the system exploits data fusion to drive a multimodal virtual bronchoscope and other visualization tools that lead the physician through the process of device navigation and localization. A retrospective lung-cancer patient study and follow-on prospective patient study, performed within the standard clinical workflow, demonstrate the system's feasibility and functionality. For the prospective study, 60/60 selected lymph nodes (100%) were correctly localized using the system, and 30/33 biopsied nodes (91%) gave adequate tissue samples. Also, the mean procedure time including all user interactions was 6 min 43 s. All of these measures improve upon benchmarks reported for other state-of-the-art systems and current practice. Overall, the system enabled safe, efficient EBUS-based localization and biopsy of lymph nodes.
6
Lian J, Zhang M, Jiang N, Bi W, Dong X. Feature Extraction of Kidney Tissue Image Based on Ultrasound Image Segmentation. J Healthc Eng 2021; 2021:9915697. PMID: 33986943. PMCID: PMC8093061. DOI: 10.1155/2021/9915697.
Abstract
Kidney tissue images are affected by interference from other tissue, which makes it difficult to extract kidney tissue image features and to judge lesion characteristics and types by automatic feature recognition. To improve the efficiency and accuracy of kidney tissue image feature extraction, this study draws on the analysis of ultrasonic cardiac images and applies it to kidney tissue, proposing a feature extraction method based on ultrasound image segmentation. The study combines the optical flow method and a speckle tracking algorithm to select the best image tracking method, and optimizes the algorithm's speed through the full search method and the two-dimensional log search method. The method's performance is verified through comparative experiments, with the data analyzed using statistical methods. The results show that the proposed algorithm is effective.
Affiliation(s)
- Jie Lian
- Department of Ultrasound, Harbin Medical University Fourth Hospital, Harbin 150001, Heilongjiang, China
- Mingyu Zhang
- Department of Cardiology, Harbin Medical University Fourth Hospital, Harbin 150001, Heilongjiang, China
- Na Jiang
- Department of Ultrasound, Harbin Medical University Fourth Hospital, Harbin 150001, Heilongjiang, China
- Wei Bi
- Department of Ultrasound, Harbin Medical University Fourth Hospital, Harbin 150001, Heilongjiang, China
- Xiaoqiu Dong
- Department of Ultrasound, Harbin Medical University Fourth Hospital, Harbin 150001, Heilongjiang, China
7
A hybrid active contour model for ultrasound image segmentation. Soft Comput 2020. DOI: 10.1007/s00500-020-05097-y.
8
Chen J, You H, Li K. A review of thyroid gland segmentation and thyroid nodule segmentation methods for medical ultrasound images. Comput Methods Programs Biomed 2020; 185:105329. PMID: 31955006. DOI: 10.1016/j.cmpb.2020.105329.
Abstract
Background and objective: Thyroid image segmentation is an indispensable part of computer-aided diagnosis systems and medical image diagnosis of thyroid diseases. There have been dozens of studies on thyroid gland segmentation and thyroid nodule segmentation in ultrasound images. The aim of this work is to categorize and review thyroid gland and thyroid nodule segmentation methods in medical ultrasound. Methods: This work proposes a categorization of thyroid gland and thyroid nodule segmentation methods according to their theoretical bases. The segmentation methods are categorized into four groups: contour- and shape-based methods, region-based methods, machine and deep learning methods, and hybrid methods. Representative articles are reviewed with detailed descriptions of the methods and analyses of the correlations between them. The evaluation metrics for the reviewed segmentation methods are named uniformly in this work, and the segmentation performance results are compared using these uniformly named metrics. Results: After careful investigation, 28 representative papers were selected for comprehensive analysis and comparison in this review. The dominant thyroid gland segmentation methods are machine and deep learning methods. Training on massive data gives these models better segmentation performance and robustness, but deep learning models usually require plenty of labeled training data and long training times. For thyroid nodule segmentation, the most common methods are contour- and shape-based methods, which have good segmentation performance; however, most of them are tested on small datasets. Conclusions: Based on comprehensive consideration of the application scenario, image features, method practicability, and segmentation performance, an appropriate segmentation method can be selected for a specific situation. Furthermore, several limitations of current thyroid ultrasound image segmentation methods are presented that may be overcome in future studies, such as the segmentation of pathological or abnormal thyroid glands, identification of specific nodular diseases, and standard thyroid ultrasound image datasets.
Affiliation(s)
- Junying Chen
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China.
- Haijun You
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China.
- Kai Li
- Department of Ultrasound, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, Guangdong 510630, China.
9
Zang X, Gibbs JD, Cheirsilp R, Byrnes PD, Toth J, Bascom R, Higgins WE. Optimal route planning for image-guided EBUS bronchoscopy. Comput Biol Med 2019; 112:103361. PMID: 31362107. PMCID: PMC6820695. DOI: 10.1016/j.compbiomed.2019.103361.
Abstract
The staging of the central-chest lymph nodes is a major lung-cancer management procedure. To perform a staging procedure, the physician first uses a patient's 3D X-ray computed-tomography (CT) chest scan to interactively plan airway routes leading to selected target lymph nodes. Next, using an integrated EBUS bronchoscope (EBUS = endobronchial ultrasound), the physician uses videobronchoscopy to navigate through the airways toward a target node's general vicinity and then invokes EBUS to localize the node for biopsy. Unfortunately, during the procedure, the physician has difficulty in translating the preplanned airway routes into safe, effective biopsy sites. We propose an automatic route-planning method for EBUS bronchoscopy that gives optimal localization of safe, effective nodal biopsy sites. To run the method, a 3D chest model is first computed from a patient's chest CT scan. Next, an optimization method derives feasible airway routes that enable maximal tissue sampling of target lymph nodes while safely avoiding major blood vessels. In a lung-cancer patient study entailing 31 nodes (long-axis range: [9.0 mm, 44.5 mm]), 25/31 nodes yielded safe airway routes, with an optimal tissue sample size of 8.4 mm (range: [1.0 mm, 18.6 mm]) and sample adequacy of 0.42 (range: [0.05, 0.93]). Quantitative results indicate that the method potentially enables successful biopsies in essentially 100% of selected lymph nodes, versus the 70-94% success rate of other approaches. The method also potentially facilitates adequate tissue biopsies for nearly 100% of selected nodes, as opposed to the 55-77% tissue adequacy rates of standard methods. The remaining nodes did not yield a safe route within the preset safety-margin constraints, with 3 nodes never yielding a route even under the most lenient safety-margin conditions.
Thus, the method not only helps determine effective airway routes and expected sample quality for nodal biopsy, but it also helps point out situations where biopsy may not be advisable. We also demonstrate the methodology in an image-guided EBUS bronchoscopy system, used successfully in live lung-cancer patient studies. During a live procedure, the method provides dynamic real-time sample size visualization in an enhanced virtual bronchoscopy viewer. In this way, the physician vividly sees the most promising biopsy sites along the airway walls as the bronchoscope moves through the airways.
Affiliation(s)
- Xiaonan Zang
- School of Electrical Engineering and Computer Science, USA; EDDA Technologies, Princeton, NJ, 08540, USA
- Jason D Gibbs
- School of Electrical Engineering and Computer Science, USA; X-Nav Technologies, Lansdale, PA, 19446, USA
- Ronnarit Cheirsilp
- School of Electrical Engineering and Computer Science, USA; Broncus Medical, San Jose, CA, USA
- Jennifer Toth
- Department of Medicine, Division of Pulmonary, Allergy, and Critical Care, Penn State University, University Park and Hershey, PA, USA
- Rebecca Bascom
- Department of Medicine, Division of Pulmonary, Allergy, and Critical Care, Penn State University, University Park and Hershey, PA, USA
10
Mishra D, Chaudhury S, Sarkar M, Soin AS. Ultrasound Image Segmentation: A Deeply Supervised Network With Attention to Boundaries. IEEE Trans Biomed Eng 2018; 66:1637-1648. PMID: 30346279. DOI: 10.1109/tbme.2018.2877577.
Abstract
OBJECTIVE Segmentation of anatomical structures in ultrasound images requires vast radiological knowledge and experience. Moreover, manual segmentation often results in subjective variations; therefore, automatic segmentation is desirable. We aim to develop a fully convolutional neural network (FCNN) with attentional deep supervision for automatic and accurate segmentation of ultrasound images. METHOD FCNNs/CNNs are used to infer high-level context from low-level image features. In this paper, sub-problem-specific deep supervision of the FCNN is performed. The attention of fine-resolution layers is steered to learn object boundary definitions using auxiliary losses, whereas coarse-resolution layers are trained to discriminate object regions from the background. Furthermore, a customized scheme for downweighting the auxiliary losses and a trainable fusion layer are introduced. This produces an accurate segmentation and helps in dealing with the broken boundaries usually found in ultrasound images. RESULTS The proposed network is first tested for blood vessel segmentation in liver images. It achieves an F1 score, mean intersection over union, and Dice index of 0.83, 0.83, and 0.79, respectively; the best values among existing approaches are produced by U-Net, at 0.74, 0.81, and 0.75, respectively. The proposed network also achieves a Dice index of 0.91 in the lumen segmentation experiments on the MICCAI 2011 IVUS challenge dataset, close to the provided reference value of 0.93. Improvements similar to those in the vessel segmentation experiments are also observed in the lesion segmentation experiment. CONCLUSION Deep supervision of the network based on the input-output characteristics of the layers improves overall segmentation accuracy. SIGNIFICANCE Sub-problem-specific deep supervision for ultrasound image segmentation is the main contribution of this paper. Currently, the network is trained and tested on fixed-size inputs; this requires image resizing and limits performance on small images.
11
Abstract
Bronchoscopy enables many minimally invasive chest procedures for diseases such as lung cancer and asthma. Guided by the bronchoscope's video stream, a physician can navigate the complex three-dimensional (3-D) airway tree to collect tissue samples or administer a disease treatment. Unfortunately, physicians currently discard procedural video because of the overwhelming amount of data generated. Hence, they must rely on memory and anecdotal snapshots to document a procedure. We propose a robust automatic method for summarizing an endobronchial video stream. Inspired by the multimedia concept of the video summary and by research in other endoscopy domains, our method consists of three main steps: 1) shot segmentation, 2) motion analysis, and 3) keyframe selection. Overall, the method derives a true hierarchical decomposition, consisting of a shot set and constituent keyframe set, for a given procedural video. No other method to our knowledge gives such a structured summary for the raw, unscripted, unedited videos arising in endoscopy. Results show that our method more efficiently covers the observed endobronchial regions than other keyframe-selection approaches and is robust to parameter variations. Over a wide range of video sequences, our method required on average only 6.5% of available video frames to achieve a video coverage of 92.7%. We also demonstrate how the derived video summary facilitates direct fusion with a patient's 3-D chest computed-tomography scan in a system under development, thereby enabling efficient video browsing and retrieval through the complex airway tree.
12
Lazzaro D, Morigi S, Melpignano P, Loli Piccolomini E, Benini L. Image enhancement variational methods for enabling strong cost reduction in OLED-based point-of-care immunofluorescent diagnostic systems. Int J Numer Method Biomed Eng 2018; 34:e2932. PMID: 29076644. DOI: 10.1002/cnm.2932.
Abstract
The cost of immunofluorescence diagnostic systems is often dominated by the high-sensitivity, low-noise CCD-based cameras used to acquire the fluorescence images. In this paper, we investigate the use of low-cost CMOS sensors in a point-of-care immunofluorescence diagnostic application for the detection and discrimination of 4 different serotypes of the Dengue virus in a set of human samples. A 2-phase postprocessing software pipeline is proposed, consisting of a first image-enhancement stage for resolution increase and segmentation, and a second diagnosis stage for computing the output concentrations. We present a novel variational coupled model for the joint super-resolution and segmentation stage and an innovative automatic image analysis for the diagnosis. A specially designed forward-backward-based numerical algorithm is introduced, and its convergence is proved under mild conditions. We present results on a cheap prototype CMOS camera, compared with the results of a more expensive CCD device, for the detection of the Dengue virus with a low-cost OLED light source. The combination of the CMOS sensor and the developed postprocessing software allows the different Dengue serotypes to be identified correctly using an automated procedure. The results demonstrate that our diagnostic imaging system enables camera cost reductions of up to 99%, at an acceptable diagnostic accuracy, with respect to the reference CCD-based camera system. The correct detection and identification of the Dengue serotypes have been confirmed by standard diagnostic methods (RT-PCR and ELISA).
Affiliation(s)
- D Lazzaro
- Department of Mathematics, University of Bologna, Bologna, Italy
- S Morigi
- Department of Mathematics, University of Bologna, Bologna, Italy
- P Melpignano
- Or-el d.o.o. Organska elektronika, Kobarid, Slovenia
- L Benini
- Department of Electrical, Electronic, and Information Engineering, University of Bologna, Bologna, Italy
13
Texture Based Quality Analysis of Simulated Synthetic Ultrasound Images Using Local Binary Patterns. J Imaging 2017. DOI: 10.3390/jimaging4010003.