1
Wu L, Chen A, Salama P, Winfree S, Dunn KW, Delp EJ. NISNet3D: three-dimensional nuclear synthesis and instance segmentation for fluorescence microscopy images. Sci Rep 2023;13:9533. [PMID: 37308499] [DOI: 10.1038/s41598-023-36243-9]
Abstract
The primary step in tissue cytometry is the automated distinction of individual cells (segmentation). Since cell borders are seldom labeled, cells are generally segmented by their nuclei. While tools have been developed for segmenting nuclei in two dimensions, segmentation of nuclei in three-dimensional volumes remains a challenging task. The lack of effective methods for three-dimensional segmentation represents a bottleneck in the realization of the potential of tissue cytometry, particularly as methods of tissue clearing present the opportunity to characterize entire organs. Methods based on deep learning have shown enormous promise, but their implementation is hampered by the need for large amounts of manually annotated training data. In this paper, we describe the 3D Nuclei Instance Segmentation Network (NISNet3D), which directly segments 3D volumes through the use of a modified 3D U-Net, a 3D marker-controlled watershed transform, and a nuclei instance segmentation system for separating touching nuclei. NISNet3D is unique in that it provides accurate segmentation of even challenging image volumes using a network trained on large amounts of synthetic nuclei derived from relatively few annotated volumes, or on synthetic data obtained without annotated volumes. We present a quantitative comparison of results obtained from NISNet3D with results obtained from a variety of existing nuclei segmentation techniques. We also examine the performance of the methods when no ground truth is available and only synthetic volumes are used for training.
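Editor's note: the instance-separation step named above (a 3D marker-controlled watershed on touching nuclei) can be sketched with standard SciPy tools. This is not NISNet3D itself; the function name, threshold, and the 0.6 peak fraction are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi


def segment_nuclei_3d(volume, threshold):
    """Separate touching nuclei in a 3D volume with a marker-controlled watershed.

    Illustrative sketch: in NISNet3D the foreground would come from the
    trained 3D U-Net rather than a fixed intensity threshold.
    """
    mask = volume > threshold
    # Distance from each foreground voxel to the background; peaks sit at nuclei centers.
    dist = ndi.distance_transform_edt(mask)
    # Seed markers: one connected component per distance-map peak region
    # (0.6 is an assumed peak fraction, not a published parameter).
    peaks = dist > 0.6 * dist.max()
    markers, n = ndi.label(peaks)
    # Watershed on the inverted distance map; the IFT variant needs an integer image.
    elevation = (dist.max() - dist).astype(np.uint16)
    labels = ndi.watershed_ift(elevation, markers.astype(np.int32))
    labels[~mask] = 0  # keep labels only on the foreground
    return labels, n
```

Two overlapping spheres (touching "nuclei") are split into two labels, which is the behavior the marker-controlled step is responsible for.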
Affiliation(s)
- Liming Wu
- Video and Image Processing Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Alain Chen
- Video and Image Processing Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Paul Salama
- Department of Electrical and Computer Engineering, Indiana University-Purdue University Indianapolis, Indianapolis, IN, 46202, USA
- Seth Winfree
- Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Kenneth W Dunn
- School of Medicine, Indiana University, Indianapolis, IN, 46202, USA
- Edward J Delp
- Video and Image Processing Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, 47907, USA
2
Parvathi S, Vaishnavi P. An efficient breast cancer detection with secured cloud storage & reliability analysis using FMEA. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-221973]
Abstract
Breast cancer is among the most dangerous cancers found in women. Around 2.3 million women in the world are affected by it, and it is rarely curable unless treated at an early stage. Early diagnosis of this disease is therefore essential to saving the lives of millions of women. Many machine learning models have evolved in recent years for breast cancer detection. However, currently available work focuses only on improving prediction accuracy; more attention is needed on providing reliable services. This work presents an efficient breast cancer detection mechanism using deep learning strategies. Variations such as breast image shape, image intensity, image regions, illumination, and contrast are the factors that govern breast cancer identification. This study offers a robust detection process for breast cancer mammography images that considers the whole slide image. The preprocessing stage removes the noise present in the input image using a Gaussian Filter (GF). The preprocessed image then moves to segmentation and feature extraction, using Cauchy distribution-based segmentation and Shearlet-based feature extraction. Discriminative features are then isolated using entropy-PCA-based feature selection. Finally, the breast cancer region is classified as benign or malignant using unified probability with LSTM neural network classification (UP-LSTM) on the whole slide image (WSI). The detection outcomes are stored in the cloud under a security mechanism for further monitoring: a Bio-inspired Iterative Honey Bee (BI-IHB) encryption scheme is employed, with decryption on user request. The reliability of the stored data is then assessed using the FMEA (failure mode and effects analysis) approach. Experimental analysis shows that the UP-LSTM classifier offers an accuracy of 99.26%, a sensitivity of 100%, and a precision of 98.59%, outperforming other state-of-the-art techniques.
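Editor's note: the GF preprocessing stage described above is a standard denoising step. A minimal sketch, assuming a grayscale mammogram as a NumPy array; the function name and the intensity rescaling to [0, 1] are our own additions, not the paper's code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def preprocess_mammogram(image, sigma=1.0):
    """Denoise an image with a Gaussian Filter (GF), as in the paper's
    preprocessing stage, then rescale intensities to [0, 1]."""
    smoothed = gaussian_filter(np.asarray(image, float), sigma=sigma)
    lo, hi = smoothed.min(), smoothed.max()
    # Guard against constant images to avoid division by zero.
    return (smoothed - lo) / (hi - lo) if hi > lo else smoothed
```

The normalized output would then feed the Cauchy distribution-based segmentation stage.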
Affiliation(s)
- S. Parvathi
- Department of Computer Applications, UCE, Anna University, BIT Campus, Trichy, India
- P. Vaishnavi
- Department of Computer Applications, UCE, Anna University, BIT Campus, Trichy, India
3
Dong C, Xu S, Li Z. A novel end-to-end deep learning solution for coronary artery segmentation from CCTA. Med Phys 2022;49:6945-6959. [PMID: 35770676] [DOI: 10.1002/mp.15842]
Abstract
PURPOSE Coronary computed tomographic angiography (CCTA) plays a vital role in the diagnosis of cardiovascular diseases, for which automatic coronary artery segmentation (CAS) is one of the most challenging tasks. To computationally assist the task, this paper proposes a novel end-to-end deep learning-based (DL) solution for automatic CAS. METHODS Inspired by the Di-Vnet network, a fully automatic multistage DL solution is proposed. The new solution aims to preserve the integrity of blood vessels in terms of both their shape details and continuity. The solution is developed using 338 CCTA cases, among which 133 cases (33,865 axial images) have their ground-truth cardiac masks pre-annotated and 205 cases (53,365 axial images) have their ground-truth coronary artery (CA) masks pre-annotated. The solution's accuracy is measured using the Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (95% HD), Recall, and Precision scores for CAS. RESULTS The proposed solution attains 90.29% in DSC, 2.11 mm in 95% HD, 97.02% in Recall, and 92.17% in Precision, respectively, while consuming 0.112 s per image and 30 s per case on average. This performance is superior to that of other state-of-the-art segmentation methods. CONCLUSIONS The novel DL solution automatically learns to perform CAS in an end-to-end fashion, attaining high accuracy, efficiency, and robustness simultaneously.
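Editor's note: the overlap metrics reported above are standard and easy to reproduce from binary masks; a minimal sketch follows (function name ours). The 95% HD additionally requires surface-distance computations and is omitted here.

```python
import numpy as np


def segmentation_scores(pred, gt):
    """Dice similarity coefficient (DSC), Recall, and Precision
    for a pair of binary segmentation masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.sum(pred & gt)          # true-positive voxel count
    dsc = 2 * tp / (pred.sum() + gt.sum())
    recall = tp / gt.sum()          # fraction of ground truth recovered
    precision = tp / pred.sum()     # fraction of prediction that is correct
    return float(dsc), float(recall), float(precision)
```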
Affiliation(s)
- Caixia Dong
- Institute of Medical Artificial Intelligence, The Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Songhua Xu
- Institute of Medical Artificial Intelligence, The Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Zongfang Li
- Institute of Medical Artificial Intelligence, The Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
4
Musa Jaber M, Yussof S, S. Elameer A, Yeng Weng L, Khalil Abd S, Nayyar A. Medical image analysis using deep learning and distribution pattern matching algorithm. Comput Mater Contin 2022;72:2175-2190. [DOI: 10.32604/cmc.2022.023387]
5
A digital cardiac disease biomarker from a generative progressive cardiac cine-MRI representation. Biomed Eng Lett 2021;12:75-84. [DOI: 10.1007/s13534-021-00212-w]
6
Pena H, Gomez S, Romo-Bucheli D, Martinez F. Cardiac disease representation conditioned by spatio-temporal priors in cine-MRI sequences using generative embedding vectors. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:5570-5573. [PMID: 34892386] [DOI: 10.1109/embc46164.2021.9630115]
Abstract
Cardiac cine-MRI is one of the most important diagnostic tools for characterizing heart-related pathologies. This imaging technique allows clinicians to assess the morphology and physiology of the heart during the cardiac cycle. Nonetheless, the analysis of cardiac cine-MRI is highly dependent on observer expertise, and high inter-reader variability is frequently observed. Alternatively, the ejection fraction, a quantitative measure of heart dynamics, is used to identify potential cardiac diseases. Unfortunately, this type of measurement is insufficient to distinguish among different cardiac pathologies, and it does not exploit all the functional information about the heart conveyed by cine-MRI sequences. Automatic image analysis might help to identify visual patterns associated with cardiac diseases in cine-MRI sequences and highlight potential biomarkers. This paper introduces a conditional generative adversarial network that learns a mapping between the latent space and a generated cine-MRI data distribution covering five different cardiac pathologies. The network is guided by the left ventricle segmentation and the velocity field, which are computed as prior information to focus the deep representation on salient cardiac patterns. Once the deep neural networks are trained, a set of validation cine-MRI slices is represented in the embedding space: the associated embedding descriptor in the latent space is found by minimizing a reconstruction error at the generator output. We evaluated the obtained embedded representation as a disease marker by using different classification models on 16,000 pathological cine-MRI slices. The representation retrieved with the best conditional generative model configuration yielded an average accuracy of 90.04% and an average F1-score of 89.97% on the classification task. Clinical relevance: construction of a topological embedding space, from a generative representation, that fully exploits hidden relationships in cine-MRI and represents cardiac diseases.
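Editor's note: the embedding step above (finding the latent descriptor that minimizes the generator's reconstruction error) can be sketched with a toy linear "generator" W standing in for the trained network. Everything here (the function name, W, step size, iteration count) is an assumption for illustration only.

```python
import numpy as np


def embed_linear(x, W, steps=5000, lr=0.01, seed=0):
    """Recover the latent code z minimizing ||W z - x||^2 by gradient descent,
    mimicking how a validation slice is projected into a trained generator's
    latent space (toy linear generator for illustration)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=W.shape[1])          # random initial latent code
    for _ in range(steps):
        z -= lr * 2 * W.T @ (W @ z - x)      # gradient of the reconstruction error
    return z
```

With a real GAN, the same loop would backpropagate the reconstruction loss through the (nonlinear) generator instead.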
7
Li J, Wang P, Zhou Y, Liang H, Luan K. Different machine learning and deep learning methods for the classification of colorectal cancer lymph node metastasis images. Front Bioeng Biotechnol 2021;8:620257. [PMID: 33520971] [PMCID: PMC7841386] [DOI: 10.3389/fbioe.2020.620257]
Abstract
The classification of colorectal cancer (CRC) lymph node metastasis (LNM) is a vital clinical issue related to recurrence and the design of treatment plans. However, it remains unclear which method is effective in automatically classifying CRC LNM. Hence, this study compared the performance of existing classification methods, i.e., machine learning, deep learning, and deep transfer learning, to identify the most effective one. A total of 3,364 samples (1,646 positive and 1,718 negative) from Harbin Medical University Cancer Hospital were collected. All patches were manually segmented by experienced radiologists, and the patch size was determined by the lesion region to be cropped. Two classes of global features and one class of local features were extracted from the patches. These features were used in eight machine learning algorithms, while the other models used raw data. Experimental results showed that deep transfer learning was the most effective method, with an accuracy of 0.7583 and an area under the curve of 0.7941. Furthermore, to improve the interpretability of the results from the deep learning and deep transfer learning models, classification heat maps were used, which display the regions driving feature extraction superimposed on the raw data. The research findings are expected to promote the use of effective methods in CRC LNM detection and hence facilitate the design of proper treatment plans.
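Editor's note: the comparison above ranks methods by accuracy and area under the ROC curve. Both can be computed directly from classifier scores; a self-contained sketch (function names ours), with AUC obtained via the Mann-Whitney statistic:

```python
import numpy as np


def accuracy(scores, labels, threshold=0.5):
    """Fraction of samples correctly classified at a fixed score threshold."""
    preds = np.asarray(scores, float) >= threshold
    return float(np.mean(preds == np.asarray(labels, bool)))


def roc_auc(scores, labels):
    """Area under the ROC curve: the probability that a random positive
    sample is scored above a random negative one (ties count half)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (pos.size * neg.size))
```

The pairwise formulation is O(n^2) but exact; for large sample counts a rank-based implementation is preferable.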
Affiliation(s)
- Jin Li
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
- Peng Wang
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
- Yang Zhou
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, China
- Hong Liang
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
- Kuan Luan
- College of Intelligent System Science and Engineering, Harbin Engineering University, Harbin, China
8
Khadangi A, Boudier T, Rajagopal V. EM-stellar: benchmarking deep learning for electron microscopy image segmentation. Bioinformatics 2021;37:97-106. [PMID: 33416852] [PMCID: PMC8034537] [DOI: 10.1093/bioinformatics/btaa1094]
Abstract
MOTIVATION The inherent low contrast of electron microscopy (EM) datasets presents a significant challenge for rapid segmentation of cellular ultrastructures from EM data. This challenge is particularly prominent when working with the high-resolution, large datasets that are now acquired using electron tomography and serial block-face imaging techniques. Deep learning (DL) methods offer an exciting opportunity to automate the segmentation process by learning from manual annotations of a small sample of EM data. While many DL methods are being rapidly adopted to segment EM data, no benchmark analysis has been conducted on these methods to date. RESULTS We present EM-stellar, a platform hosted on Google Colab that can be used to benchmark the performance of a range of state-of-the-art DL methods on user-provided datasets. Using EM-stellar, we show that the performance of any DL method depends on the properties of the images being segmented. It also follows that no single DL method performs consistently across all performance evaluation metrics. AVAILABILITY EM-stellar (code and data) is written in Python and is freely available under the MIT license on GitHub (https://github.com/cellsmb/em-stellar). SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
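Editor's note: the benchmarking idea above (scoring each method under several metrics and observing that no single method dominates) can be sketched as follows. The metric set and names are a generic illustration, not the EM-stellar API.

```python
import numpy as np

# Common overlap metrics expressed in terms of true/false positive and
# false negative pixel counts.
METRICS = {
    "jaccard":   lambda tp, fp, fn: tp / (tp + fp + fn),
    "dice":      lambda tp, fp, fn: 2 * tp / (2 * tp + fp + fn),
    "precision": lambda tp, fp, fn: tp / (tp + fp),
    "recall":    lambda tp, fp, fn: tp / (tp + fn),
}


def benchmark(predictions, ground_truth):
    """Score several methods' binary masks against one ground truth,
    returning a {method: {metric: value}} grid as in a benchmark report."""
    gt = np.asarray(ground_truth, bool)
    grid = {}
    for name, pred in predictions.items():
        p = np.asarray(pred, bool)
        tp = np.sum(p & gt)
        fp = np.sum(p & ~gt)
        fn = np.sum(~p & gt)
        grid[name] = {m: float(f(tp, fp, fn)) for m, f in METRICS.items()}
    return grid
```

In the toy example below, method "A" wins on precision while "B" wins on recall, mirroring the observation that no single method leads on every metric.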
Affiliation(s)
- Afshin Khadangi
- Department of Biomedical Engineering, University of Melbourne, Victoria, Australia
- Thomas Boudier
- Institute of Molecular Biology, Academia Sinica, Taipei, Taiwan
- Vijay Rajagopal
- Department of Biomedical Engineering, University of Melbourne, Victoria, Australia
9
Abstract
The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method consisting of layers that transform data nonlinearly, thereby revealing hierarchical relationships and structures. In this review, we survey deep learning application papers that use structured data, signal modalities, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, and propose certain directions as the most viable for clinical use.
10
Unberath M, Taubmann O, Aichert A, Achenbach S, Maier A. Prior-free respiratory motion estimation in rotational angiography. IEEE Trans Med Imaging 2018;37:1999-2009. [PMID: 29994629] [DOI: 10.1109/tmi.2018.2806310]
Abstract
Rotational coronary angiography using C-arm angiography systems enables intra-procedural 3-D imaging that is considered beneficial for diagnostic assessment and interventional guidance. Despite previous efforts, rotational angiography was not yet successfully established in clinical practice for coronary artery procedures due to challenges associated with substantial intra-scan respiratory and cardiac motion. While gating handles cardiac motion during reconstruction, respiratory motion requires compensation. State-of-the-art algorithms rely on 3-D / 2-D registration that requires an uncompensated reconstruction of sufficient quality. To overcome this limitation, we investigate two prior-free respiratory motion estimation methods based on the optimization of: 1) epipolar consistency conditions (ECCs) and 2) a task-based auto-focus measure (AFM). The methods assess redundancies in projection images or impose favorable properties of 3-D space, respectively, and are used to estimate the respiratory motion of the coronary arteries within rotational angiograms. We evaluate our algorithms on the publicly available CAVAREV benchmark and on clinical data. We quantify reductions in error due to respiratory motion compensation using a dedicated reconstruction domain metric. Moreover, we study the improvements in image quality when using an analytic and a novel temporal total variation regularized algebraic reconstruction algorithm. We observed substantial improvement in all figures of merit compared with the uncompensated case. Improvements in image quality presented as a reduction of double edges, blurring, and noise. Benefits of the proposed corrections were notable even in cases suffering little corruption from respiratory motion, translating to an improvement in the vessel sharpness of (6.08 ± 4.46)% and (14.7 ± 8.80)% when the ECC-based and the AFM-based compensation were applied. 
On the CAVAREV data, our motion compensation approach exhibits an improvement of (27.6 ± 7.5)% and (97.0 ± 17.7)% when the ECC and AFM were used, respectively. At the time of writing, our method based on the AFM is leading the CAVAREV scoreboard. Both motion estimation strategies are purely image-based and accurately estimate the displacements of the coronary arteries due to respiration. While current evidence suggests the superior performance of the AFM, future work will further investigate the use of ECCs in the context of angiography, as they rely solely on geometric calibration and projection-domain images.
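Editor's note: a minimal sketch of the task-based auto-focus idea above, under the assumption that "favorable 3-D image properties" are captured by a sharpness score. The gradient-energy measure here is a generic stand-in, not the paper's exact AFM, and the function names are ours.

```python
import numpy as np


def autofocus_measure(volume):
    """Gradient energy of a reconstruction: residual motion spreads edges
    and lowers this score, so sharper (better-compensated) volumes score higher."""
    grads = np.gradient(np.asarray(volume, float))
    return float(sum(np.sum(g * g) for g in grads))


def pick_sharpest(candidates):
    """Select the candidate reconstruction maximizing the measure,
    mimicking an auto-focus search over motion parameters."""
    return max(candidates, key=autofocus_measure)
```

In the actual method, the candidates would be reconstructions under different respiratory motion-model parameters rather than pre-blurred images.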