1.
Innani S, Dutande P, Baid U, Pokuri V, Bakas S, Talbar S, Baheti B, Guntuku SC. Generative adversarial networks based skin lesion segmentation. Sci Rep 2023; 13:13467. [PMID: 37596306] [PMCID: PMC10439152] [DOI: 10.1038/s41598-023-39648-8] [Received: 04/23/2023] [Accepted: 07/28/2023]
Abstract
Skin cancer is a serious condition that requires accurate diagnosis and treatment. One way to assist clinicians in this task is with computer-aided diagnosis tools that automatically segment skin lesions from dermoscopic images. We propose a novel adversarial learning-based framework called Efficient-GAN (EGAN) that uses an unsupervised generative network to produce accurate lesion masks. It consists of a generator module with a top-down squeeze-excitation-based compound scaled path and an asymmetric lateral connection-based bottom-up path, and a discriminator module that distinguishes between original and synthetic masks. A morphology-based smoothing loss is also implemented to encourage the network to create smooth semantic boundaries of lesions. The framework is evaluated on the International Skin Imaging Collaboration Lesion Dataset and outperforms current state-of-the-art skin lesion segmentation approaches, with a Dice coefficient, Jaccard similarity, and accuracy of 90.1%, 83.6%, and 94.5%, respectively. We also design a lightweight segmentation framework called Mobile-GAN (MGAN) that achieves performance comparable to EGAN with an order of magnitude fewer training parameters, resulting in faster inference for low-compute settings.
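The Dice coefficient and Jaccard similarity reported in this abstract compare a predicted binary lesion mask against the ground-truth mask. A minimal NumPy sketch of the two metrics (illustrative only; not the authors' code, and the function name is our own):

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, target: np.ndarray):
    """Compute Dice coefficient and Jaccard similarity for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * intersection / (pred.sum() + target.sum())
    jaccard = intersection / union
    return dice, jaccard

# Toy example: two overlapping 4x4 masks
pred = np.array([[1, 1, 0, 0]] * 4)    # 8 foreground pixels
target = np.array([[1, 0, 0, 0]] * 4)  # 4 foreground pixels, all inside pred
d, j = dice_jaccard(pred, target)      # dice = 2*4/(8+4) ≈ 0.667, jaccard = 4/8 = 0.5
```

Dice and Jaccard are monotonically related (Dice = 2J / (1 + J)), which is why papers often report both from the same overlap counts.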
Affiliation(s)
- Shubham Innani: Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Prasad Dutande: Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Ujjwal Baid: Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Venu Pokuri: Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
- Spyridon Bakas: Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sanjay Talbar: Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India
- Bhakti Baheti: Center of Excellence in Signal and Image Processing, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra, India; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sharath Chandra Guntuku: Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
2.
Dutande P, Baid U, Talbar S. Deep Residual Separable Convolutional Neural Network for lung tumor segmentation. Comput Biol Med 2022; 141:105161. [PMID: 34999468] [DOI: 10.1016/j.compbiomed.2021.105161] [Received: 04/15/2021] [Revised: 12/19/2021] [Accepted: 12/19/2021]
Abstract
Lung cancer is one of the deadliest types of cancer. Computed Tomography (CT) is a widely used technique to detect tumors inside the lungs, and delineation of such tumors is essential for analysis and treatment. With advances in hardware, Machine Learning and Deep Learning methods are outperforming traditional methods in medical imaging. To delineate lung tumors, we propose a deep learning-based methodology that includes a maximum intensity projection-based pre-processing method, two novel deep learning networks, and an ensemble strategy. The two proposed networks, Deep Residual Separable Convolutional Neural Networks 1 and 2 (DRS-CNN1 and DRS-CNN2), achieved better performance than the state-of-the-art U-Net and other segmentation networks. For a fair comparison, we evaluated all networks on the Medical Segmentation Decathlon (MSD) and StructSeg 2019 datasets. DRS-CNN2 achieved a mean Dice Similarity Coefficient (DSC) of 0.649, a mean 95th-percentile Hausdorff Distance (HD95) of 18.26, a mean Sensitivity of 0.737, and a mean Precision of 0.765 on independent test sets.
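The maximum intensity projection (MIP) pre-processing mentioned in this abstract collapses a 3-D CT volume along one axis by keeping the brightest voxel along each projection ray, which can make dense structures such as tumors stand out. A hedged sketch of the general technique (the paper's exact pre-processing parameters are not given here, and the function name is our own):

```python
import numpy as np

def max_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Project a 3-D volume to 2-D by taking the maximum voxel value along `axis`."""
    return volume.max(axis=axis)

# Toy example: a 3-slice volume with one bright "tumor" voxel in the middle slice
vol = np.zeros((3, 2, 2))
vol[1, 0, 0] = 100.0
mip = max_intensity_projection(vol, axis=0)  # shape (2, 2); mip[0, 0] == 100.0
```

Projecting along different axes (axial, coronal, sagittal) yields different 2-D views of the same volume, which is one way such projections are used to enrich 2-D segmentation inputs.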
Affiliation(s)
- Prasad Dutande: Center of Excellence in Signal and Image Processing, SGGS Institute of Engineering and Technology, Nanded, India
- Ujjwal Baid: Center of Excellence in Signal and Image Processing, SGGS Institute of Engineering and Technology, Nanded, India
- Sanjay Talbar: Center of Excellence in Signal and Image Processing, SGGS Institute of Engineering and Technology, Nanded, India
3.
Verma R, Kumar N, Patil A, Kurian NC, Rane S, Graham S, Vu QD, Zwager M, Raza SEA, Rajpoot N, Wu X, Chen H, Huang Y, Wang L, Jung H, Brown GT, Liu Y, Liu S, Jahromi SAF, Khani AA, Montahaei E, Baghshah MS, Behroozi H, Semkin P, Rassadin A, Dutande P, Lodaya R, Baid U, Baheti B, Talbar S, Mahbod A, Ecker R, Ellinger I, Luo Z, Dong B, Xu Z, Yao Y, Lv S, Feng M, Xu K, Zunair H, Hamza AB, Smiley S, Yin TK, Fang QR, Srivastava S, Mahapatra D, Trnavska L, Zhang H, Narayanan PL, Law J, Yuan Y, Tejomay A, Mitkari A, Koka D, Ramachandra V, Kini L, Sethi A. MoNuSAC2020: A Multi-Organ Nuclei Segmentation and Classification Challenge. IEEE Trans Med Imaging 2021; 40:3413-3423. [PMID: 34086562] [DOI: 10.1109/tmi.2021.3085712]
Abstract
Detecting various types of cells in and around the tumor matrix holds special significance in characterizing the tumor micro-environment for cancer prognostication and research. Automating the tasks of detecting, segmenting, and classifying nuclei can free up pathologists' time for higher-value tasks and reduce errors due to fatigue and subjectivity. To encourage the computer vision research community to develop and test algorithms for these tasks, we prepared a large and diverse dataset of nucleus boundary annotations and class labels. The dataset has over 46,000 nuclei from 37 hospitals, 71 patients, four organs, and four nucleus types. We also organized a challenge around this dataset as a satellite event at the International Symposium on Biomedical Imaging (ISBI) in April 2020. The challenge saw wide participation from across the world, and the top methods were able to match inter-human concordance on the challenge metric. In this paper, we summarize the dataset and the key findings of the challenge, including the commonalities and differences between the methods developed by various participants. We have released the MoNuSAC2020 dataset to the public.