1. Strzelecki M, Kociołek M, Strąkowska M, Kozłowski M, Grzybowski A, Szczypiński PM. Artificial intelligence in the detection of skin cancer: State of the art. Clin Dermatol 2024;42:280-295. PMID: 38181888. DOI: 10.1016/j.clindermatol.2023.12.022.
Abstract
The incidence of melanoma is increasing rapidly. This cancer has a good prognosis if detected early. For this reason, various systems for skin lesion image analysis, which support imaging diagnostics of this neoplasm, are developing very dynamically. To detect and recognize neoplastic lesions, such systems use various artificial intelligence (AI) algorithms. This area of computer science has recently undergone dynamic development, producing a number of solutions that effectively support diagnosticians in many medical specialties. In this contribution, applications of different classes of AI algorithms for the detection of melanoma are presented and evaluated. Both classic systems based on the analysis of dermatoscopic images and total-body systems, which analyze the patient's whole body to detect moles and pathologic changes, are discussed. Increasingly popular applications that allow the analysis of lesion images using smartphones are also described. A quantitative evaluation of the discussed systems is presented, with particular emphasis on how the implemented algorithms were validated. Finally, the advantages and limitations of AI in the analysis of lesion images are discussed, and problems that must be solved for more effective use of AI in dermatology are identified.
Affiliations
- Michał Strzelecki: Institute of Electronics, Lodz University of Technology, Łódź, Poland
- Marcin Kociołek: Institute of Electronics, Lodz University of Technology, Łódź, Poland
- Maria Strąkowska: Institute of Electronics, Lodz University of Technology, Łódź, Poland
- Michał Kozłowski: Department of Mechatronics and Technical and IT Education, Faculty of Technical Science, University of Warmia and Mazury, Olsztyn, Poland
- Andrzej Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
2. Ichim L, Mitrica RI, Serghei MO, Popescu D. Detection of Malignant Skin Lesions Based on Decision Fusion of Ensembles of Neural Networks. Cancers (Basel) 2023;15:4946. PMID: 37894313. PMCID: PMC10605379. DOI: 10.3390/cancers15204946.
Abstract
Today, skin cancer, and especially melanoma, is an increasingly common and dangerous disease. Some types of skin cancer carry a high mortality rate and therefore need to be detected in the early stages and treated urgently. The use of neural network ensembles for detecting objects of interest in images has gained more and more attention due to the improved performance of the results. In this sense, this paper proposes two ensembles of neural networks, based on fusion of the decisions of the component networks, for the detection of four skin lesion types (basal cell carcinoma, melanoma, benign keratosis, and melanocytic nevi). The first system is based on separate training of three neural networks (MobileNet V2, DenseNet 169, and EfficientNet B2), with multiple weights for the four lesion classes and a weighted overall prediction. The second system is made up of six binary models (one for each pair of classes) for each network; fusion and prediction are conducted by weighted summation per class and per model. In total, 18 such binary models are considered. The 91.04% global accuracy of this set of binary models is superior to that of the first system (89.62%). Individually, accuracy was better only for the binary classifications within the system. The individual F1 scores for each class and for the global system varied from 81.36% to 94.17%. Finally, a critical comparison is made with similar works from the literature.
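The weighted decision fusion described in this abstract (per-model, per-class weights applied to each network's class probabilities, summed per class) can be sketched as follows. All numbers, weights, and the three-model layout are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Softmax outputs of each component network for one image.
# Rows: models; columns: the four lesion classes.
probs = np.array([
    [0.70, 0.10, 0.10, 0.10],   # e.g. MobileNet V2 (illustrative)
    [0.60, 0.20, 0.10, 0.10],   # e.g. DenseNet 169 (illustrative)
    [0.50, 0.30, 0.10, 0.10],   # e.g. EfficientNet B2 (illustrative)
])

# Per-model, per-class fusion weights, e.g. tuned on a validation set.
weights = np.array([
    [1.0, 0.8, 0.9, 1.0],
    [0.9, 1.0, 0.8, 0.9],
    [1.1, 0.9, 1.0, 0.8],
])

# Weighted summation per class across models, then argmax over classes.
fused = (weights * probs).sum(axis=0)
predicted_class = int(np.argmax(fused))
```

With these placeholder values the first class accumulates the largest weighted score, so it is the fused prediction.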
Affiliations
- Loretta Ichim: Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania; “Ștefan S. Nicolau” Institute of Virology, 030304 Bucharest, Romania
- Razvan-Ionut Mitrica: Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Madalina-Oana Serghei: Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Dan Popescu: Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
3. Huang Y, Jiao J, Yu J, Zheng Y, Wang Y. Si-MSPDNet: A multiscale Siamese network with parallel partial decoders for the 3-D measurement of spines in 3D ultrasonic images. Comput Med Imaging Graph 2023;108:102262. PMID: 37385048. DOI: 10.1016/j.compmedimag.2023.102262.
Abstract
Early screening and frequent monitoring effectively decrease the risk of severe scoliosis, but radiation exposure is a consequence of traditional radiograph examinations. Additionally, traditional X-ray images on the coronal or sagittal plane have difficulty providing three-dimensional (3-D) information on spinal deformities. The Scolioscan system provides an innovative 3-D spine imaging approach via ultrasonic scanning, and its feasibility has been demonstrated in numerous studies. In this paper, to further examine the potential of spinal ultrasonic data for describing 3-D spinal deformities, we propose a novel deep-learning tracker named Si-MSPDNet for extracting a widely employed landmark, the spinous process (SP), in ultrasonic images of spines, and we establish a 3-D spinal profile to measure 3-D spinal deformities. Si-MSPDNet has a Siamese architecture. First, we employ two efficient two-stage encoders to extract features from the uncropped ultrasonic image and from the patch centered on the SP cut from that image. Then, a fusion block is designed to strengthen the communication between encoded features and to further refine them from channel and spatial perspectives. The SP is a very small target in ultrasonic images, so its representation is weak in the highest-level feature maps. To overcome this, we ignore the highest-level feature maps and introduce parallel partial decoders to localize the SP. The correlation evaluation in the traditional Siamese network is also expanded to multiple scales to enhance cooperation. Furthermore, we propose a binary guided mask based on vertebral anatomical prior knowledge, which further improves the performance of our tracker by highlighting the potential region containing the SP. The binary guided mask is also utilized for fully automatic initialization in tracking.
We collected spinal ultrasonic data and corresponding radiographs on the coronal and sagittal planes from 150 patients to evaluate the tracking precision of Si-MSPDNet and the performance of the generated 3-D spinal profile. Experimental results revealed that our tracker achieved a tracking success rate of 100% and a mean IoU of 0.882, outperforming some commonly used tracking and real-time detection models. Furthermore, a high correlation existed on both the coronal and sagittal planes between our projected spinal curve and that extracted from the spinal annotation in X-ray images. The correlation between the tracking results of the SP and their ground truths on other projected planes was also satisfactory. More importantly, the difference in mean curvatures was slight on all projected planes between tracking results and ground truths. Thus, this study effectively demonstrates the promising potential of our 3-D spinal profile extraction method for the 3-D measurement of spinal deformities using 3-D ultrasound data.
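The mean IoU of 0.882 reported here is the standard intersection-over-union overlap between predicted and ground-truth regions. As a reminder of the metric, a minimal generic sketch for axis-aligned boxes (not the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp at zero: disjoint boxes have no overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes score 1.0; disjoint boxes score 0.0, so a mean IoU near 0.9 indicates tight localization.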
Affiliations
- Yi Huang: Biomedical Engineering Center, Fudan University, Shanghai 200433, China
- Jing Jiao: Biomedical Engineering Center, Fudan University, Shanghai 200433, China
- Jinhua Yu: Biomedical Engineering Center, Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Fudan University, 200433, China
- Yongping Zheng: Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China; Research Institute for Smart Ageing, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China
- Yuanyuan Wang: Biomedical Engineering Center, Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Fudan University, 200433, China
4. Effective extraction of ventricles and myocardium objects from cardiac magnetic resonance images with a multi-task learning U-Net. Pattern Recognit Lett 2022. DOI: 10.1016/j.patrec.2021.10.025.
5. Yu Z, Nguyen J, Nguyen TD, Kelly J, Mclean C, Bonnington P, Zhang L, Mar V, Ge Z. Early Melanoma Diagnosis With Sequential Dermoscopic Images. IEEE Trans Med Imaging 2022;41:633-646. PMID: 34648437. DOI: 10.1109/tmi.2021.3120091.
Abstract
Dermatologists often diagnose or rule out early melanoma by evaluating follow-up dermoscopic images of skin lesions. However, existing algorithms for early melanoma diagnosis are developed using single time-point images of lesions. Ignoring the temporal, morphological changes of lesions can lead to misdiagnosis in borderline cases. In this study, we propose a framework for automated early melanoma diagnosis using sequential dermoscopic images. To this end, we construct our method in three steps. First, we align sequential dermoscopic images of skin lesions using estimated Euclidean transformations and extract the lesion growth region by computing image differences among the consecutive images; we then propose a spatio-temporal network to capture the dermoscopic changes from the aligned lesion images and the corresponding difference images. Finally, we develop an early diagnosis module to compute probability scores of malignancy for lesion images over time. We collected 179 serial dermoscopic image sequences from 122 patients to verify our method. Extensive experiments show that the proposed model outperforms other commonly used sequence models. We also compared the diagnostic results of our model with those of seven experienced dermatologists and five registrars. Our model achieved higher diagnostic accuracy than the clinicians (63.69% vs. 54.33%) and provided an earlier diagnosis of melanoma (60.7% vs. 32.7% of melanomas correctly diagnosed on the first follow-up images). These results demonstrate that our model can identify melanocytic lesions at high risk of malignant transformation earlier in the disease process and thereby redefine what is possible in the early detection of melanoma.
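The difference-image step described in this abstract (absolute differences between consecutive, already-aligned images, exposing the lesion growth region) can be sketched as follows. This is an illustrative reconstruction of the idea, not the authors' implementation.

```python
import numpy as np

def difference_images(seq):
    """Absolute differences between consecutive aligned images.

    seq: list of equally sized 2-D float arrays, ordered in time
         (assumed already registered, e.g. via Euclidean transforms).
    Returns one difference image per consecutive pair.
    """
    return [np.abs(b - a) for a, b in zip(seq, seq[1:])]

# Toy sequence: a lesion whose intensity grows uniformly over three visits.
seq = [np.zeros((4, 4)), np.full((4, 4), 0.5), np.ones((4, 4))]
diffs = difference_images(seq)
```

A spatio-temporal network would then consume both the aligned images and these difference maps, so that regions of change are presented to the model explicitly.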
6. Popescu D, El-Khatib M, El-Khatib H, Ichim L. New Trends in Melanoma Detection Using Neural Networks: A Systematic Review. Sensors (Basel) 2022;22:496. PMID: 35062458. PMCID: PMC8778535. DOI: 10.3390/s22020496.
Abstract
Due to its increasing incidence, skin cancer, and especially melanoma, is a serious health problem today. The high mortality rate associated with melanoma makes early detection and prompt, proper treatment essential. This is why many researchers in this domain have sought to build accurate computer-aided diagnosis systems to assist in the early detection and diagnosis of such diseases. This paper presents a systematic review of recent advances in an area of increased interest for cancer prediction, with a focus on a comparative perspective of melanoma detection using artificial intelligence, especially neural-network-based systems. Such structures can be considered intelligent support systems for dermatologists. Theoretical and applied contributions were investigated in the new development trend of multiple neural network architectures based on decision fusion. The most representative articles covering melanoma detection based on neural networks, published in journals and at high-impact conferences between 2015 and 2021, were investigated, focusing on the interval 2018-2021 for new trends. Also presented are the main databases and trends in their use for training neural networks to detect melanomas. Finally, a research agenda was highlighted to advance the field towards the new trends.
Affiliations
- Dan Popescu: Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
7. Winkler JK, Tschandl P, Toberer F, Sies K, Fink C, Enk A, Kittler H, Haenssle HA. Monitoring patients at risk for melanoma: May convolutional neural networks replace the strategy of sequential digital dermoscopy? Eur J Cancer 2021;160:180-188. PMID: 34840028. DOI: 10.1016/j.ejca.2021.10.030.
Abstract
BACKGROUND: Sequential digital dermoscopy (SDD) is applied for early melanoma detection by uncovering dynamic changes in monitored lesions. Convolutional neural networks (CNN) are capable of high diagnostic accuracy similar to that of trained dermatologists.
OBJECTIVES: To investigate whether CNN can correctly classify melanomas originally diagnosed by mere dynamic changes during SDD.
METHODS: A retrospective cross-sectional study using image quartets of 59 high-risk patients, each quartet containing one melanoma diagnosed by dynamic changes during SDD and three nevi (236 lesions in total). Two validated CNN classified quartets at baseline or after SDD follow-up at the time of melanoma diagnosis. Moreover, baseline quartets were rated by 26 dermatologists. The main outcome was the number of quartets classified correctly.
RESULTS: CNN-1 correctly classified 9 (15.3%) and CNN-2 correctly classified 8 (13.6%) of 59 baseline quartets. In baseline images, CNN-1 attained a sensitivity of 25.4% (16.1%-37.8%) and a specificity of 92.7% (87.8%-95.7%), whereas CNN-2 attained 28.8% (18.8%-41.4%) and 75.7% (68.9%-81.4%). Expectedly, after SDD follow-up the CNN more readily detected melanomas, resulting in improved sensitivities (CNN-1: 44.1% [32.2%-56.7%]; CNN-2: 49.2% [36.8%-61.6%]). Dermatologists, who were told that each baseline quartet contained one melanoma, on average correctly classified 24 (22-27) of 59 quartets. Correspondingly, accepting a baseline quartet as appropriately classified whenever the highest malignancy score was assigned to the melanoma within it, CNN-1 and CNN-2 correctly classified 28 (47.5%) and 22 (37.3%) of 59 quartets, respectively.
CONCLUSIONS: The tested CNN could not replace the strategy of SDD. There is a need for CNN capable of integrating information on dynamic changes into their analyses.
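The sensitivity and specificity figures above reduce to the usual confusion-matrix ratios. A minimal sketch with illustrative counts (for instance, flagging 15 of 59 melanomas gives 15/59 ≈ 25.4%, matching the order of the baseline result; the counts below are not taken from the study):

```python
def sensitivity(tp, fn):
    # True-positive rate: melanomas correctly flagged / all melanomas.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True-negative rate: nevi correctly passed / all nevi.
    return tn / (tn + fp)

# Illustrative: 15 of 59 melanomas flagged at baseline.
baseline_sens = sensitivity(15, 44)
```

The trade-off visible in the abstract (CNN-2 more sensitive but less specific than CNN-1) is exactly a shift along these two ratios.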
Affiliations
- Julia K Winkler: Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Philipp Tschandl: Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Ferdinand Toberer: Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Katharina Sies: Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Christine Fink: Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Alexander Enk: Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Harald Kittler: Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Holger A Haenssle: Department of Dermatology, University of Heidelberg, Heidelberg, Germany
8. Lin L, Tao X, Yang W, Pang S, Su Z, Lu H, Li S, Feng Q, Chen B. Quantifying Axial Spine Images Using Object-Specific Bi-Path Network. IEEE J Biomed Health Inform 2021;25:2978-2987. PMID: 33788697. DOI: 10.1109/jbhi.2021.3070235.
Abstract
Automatic estimation of indices from medical images is the main goal of computer-aided quantification (CADq), which speeds up diagnosis and lightens the workload of radiologists. Deep learning is a good choice for implementing CADq. Usually, to achieve high-accuracy quantification, a specific network architecture needs to be designed for a given CADq task. In this study, considering that the target organs are the intervertebral disc and the dural sac, we propose an object-specific bi-path network (OSBP-Net) for axial spine image quantification. Each path of the OSBP-Net comprises a shallow feature extraction layer (SFE) and a deep feature extraction sub-network (DFE). The SFEs use different convolution strides because the two target organs have different anatomical sizes. The DFEs use average pooling for downsampling, based on the observation that the target organs have lower intensity than the background. In addition, an inter-path dissimilarity constraint is proposed and applied to the output of the SFEs, reflecting that the activated regions in the feature maps of the two paths should, in theory, be different. An inter-index correlation regularization is introduced and applied to the output of the DFEs, based on the observation that the diameter and area of the same object express an approximately linear relation. The prediction results of OSBP-Net are compared to several state-of-the-art machine-learning-based CADq methods. The comparison reveals that the proposed method substantially outperforms the competing methods, indicating its great potential for spine CADq.
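One way to read the inter-path dissimilarity constraint is as a penalty on overlapping activations between the two paths' shallow feature maps: if both paths light up the same regions, the penalty is high. The cosine-overlap form below is a hypothetical illustration of that idea, not the paper's exact formulation.

```python
import numpy as np

def dissimilarity_penalty(feat_a, feat_b, eps=1e-8):
    """Normalized overlap between two feature maps (lower is better).

    feat_a, feat_b: arrays of identical shape (one map per path).
    Returns the cosine similarity of the flattened maps, which a
    training loss would minimize to push the paths toward activating
    in different spatial regions.
    """
    a = np.asarray(feat_a, dtype=float).ravel()
    b = np.asarray(feat_b, dtype=float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```

Maps that activate in disjoint regions score near 0; identical maps score near 1, so adding this term to the task losses discourages the two paths from learning redundant features.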
9. Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021;11:1373. PMID: 34441307. PMCID: PMC8393354. DOI: 10.3390/diagnostics11081373.
Abstract
The growing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes, and the time and attention this demands of the doctor, have encouraged the development of deep learning (DL) models as constructive and effective support. DL has experienced exponential development in recent years, with a major impact on the interpretation of medical images. This has influenced the development, diversification, and quality of scientific data, the development of knowledge-construction methods, and the improvement of DL models used in medical applications. Existing research papers focus on describing, highlighting, or classifying individual constituent elements of DL models used in the interpretation of medical images, and do not provide a unified picture of the importance and impact of each constituent on the performance of DL models. The novelty of our paper consists primarily in a unitary approach to the constituent elements of DL models, namely the data, the tools used by DL architectures, and specifically constructed DL architecture combinations, highlighting their "key" features for completing tasks in current applications in the interpretation of medical images. The use of "key" characteristics specific to each constituent of DL models, and the correct determination of their correlations, may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.
Affiliations
- Tudor Florin Ursuleanu: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania; Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Stefan Iancu: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Maria Hlusneac: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristina Preda: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania