1
Azhideh A, Pooyan A, Alipour E, Haseli S, Hosseini N, Chalian M. The Role of Artificial Intelligence in Osteoarthritis. Semin Roentgenol 2024;59:518-525. [PMID: 39490044] [DOI: 10.1053/j.ro.2024.07.004]
Affiliation(s)
- Arash Azhideh
- Department of Radiology, Division of Musculoskeletal and Intervention, University of Washington, Seattle, WA
- Atefe Pooyan
- Department of Radiology, Division of Musculoskeletal and Intervention, University of Washington, Seattle, WA
- Ehsan Alipour
- Department of Radiology, Division of Musculoskeletal and Intervention, University of Washington, Seattle, WA
- Sara Haseli
- Department of Radiology, Division of Musculoskeletal and Intervention, University of Washington, Seattle, WA
- Nastaran Hosseini
- Department of Radiology, Division of Musculoskeletal and Intervention, University of Washington, Seattle, WA
- Majid Chalian
- Department of Radiology, Division of Musculoskeletal and Intervention, University of Washington, Seattle, WA.
2
Zheng F, Yin P, Liang K, Liu T, Wang Y, Hao W, Hao Q, Hong N. Comparison of Different Fusion Radiomics for Predicting Benign and Malignant Sacral Tumors: A Pilot Study. J Imaging Inform Med 2024;37:2415-2427. [PMID: 38717515] [PMCID: PMC11522258] [DOI: 10.1007/s10278-024-01134-6]
Abstract
Differentiating between benign and malignant sacral tumors is crucial for determining appropriate treatment. This study aimed to develop two benchmark fusion models and a deep learning radiomic nomogram (DLRN) capable of distinguishing benign from malignant sacral tumors using multiple imaging modalities. We reviewed axial T2-weighted imaging (T2WI) and non-contrast computed tomography (NCCT) of 134 patients with pathologically confirmed sacral tumors. The two benchmark fusion models were developed using fused deep learning (DL) features and fused classical machine learning (CML) features from multiple imaging modalities, employing logistic regression, K-nearest neighbor classification, and extremely randomized trees. The two benchmark models with the most robust predictive performance were merged with clinical data to form the DLRN. Performance assessment involved computing the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, negative predictive value (NPV), and positive predictive value (PPV). The DL benchmark fusion model outperformed the CML fusion model. The DLRN, identified as the optimal model, exhibited the highest predictive performance, achieving an accuracy of 0.889 and an AUC of 0.961 in the test sets. Calibration curves were used to evaluate the predictive capability of the models, and decision curve analysis (DCA) was conducted to assess the clinical net benefit of the DLRN. The DLRN could serve as a practical predictive tool for distinguishing benign from malignant sacral tumors, offering valuable information for risk counseling and aiding clinical treatment decisions.
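The feature-fusion idea described in this abstract, concatenating deep-learning and classical machine-learning feature vectors before fitting a classifier, can be illustrated with a minimal numpy sketch on synthetic data. All names, dimensions, and offsets below are hypothetical; the study itself applied logistic regression, K-nearest neighbors, and extremely randomized trees to real radiomic features.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_features(n, offset):
    # Hypothetical stand-ins for deep-learning (8-dim) and classical
    # machine-learning radiomics (4-dim) features for n cases
    dl = rng.normal(offset, 1.0, size=(n, 8))
    cml = rng.normal(offset, 1.0, size=(n, 4))
    return np.hstack([dl, cml])  # feature-level fusion by concatenation

X = np.vstack([make_features(60, -1.0), make_features(60, 1.0)])
y = np.array([0] * 60 + [1] * 60)  # 0 = benign, 1 = malignant

# Plain logistic regression fitted by gradient descent on the fused features
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30.0, 30.0)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (X @ w + b >= 0.0).astype(int)
accuracy = float(np.mean(pred == y))
```

On such well-separated synthetic classes the fused linear classifier fits essentially perfectly; the point is only the shape of the pipeline, not the numbers.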
Affiliation(s)
- Fei Zheng
- Department of Radiology, Peking University People's Hospital, No. 11 Xizhimen South Street, Xicheng District, Beijing, 100044, People's Republic of China
- Ping Yin
- Department of Radiology, Peking University People's Hospital, No. 11 Xizhimen South Street, Xicheng District, Beijing, 100044, People's Republic of China
- Kewei Liang
- Intelligent Manufacturing Research Institute, Visual 3D Medical Science and Technology Development, Fengtai District, No. 186 South Fourth Ring Road West, Beijing, 100071, People's Republic of China
- Tao Liu
- Department of Radiology, Peking University People's Hospital, No. 11 Xizhimen South Street, Xicheng District, Beijing, 100044, People's Republic of China
- Yujian Wang
- Department of Radiology, Peking University People's Hospital, No. 11 Xizhimen South Street, Xicheng District, Beijing, 100044, People's Republic of China
- Wenhan Hao
- Department of Radiology, Peking University People's Hospital, No. 11 Xizhimen South Street, Xicheng District, Beijing, 100044, People's Republic of China
- Qi Hao
- Department of Radiology, Peking University People's Hospital, No. 11 Xizhimen South Street, Xicheng District, Beijing, 100044, People's Republic of China
- Nan Hong
- Department of Radiology, Peking University People's Hospital, No. 11 Xizhimen South Street, Xicheng District, Beijing, 100044, People's Republic of China.
3
Rizk PA, Gonzalez MR, Galoaa BM, Girgis AG, Van Der Linden L, Chang CY, Lozano-Calderon SA. Machine Learning-Assisted Decision Making in Orthopaedic Oncology. JBJS Rev 2024;12:01874474-202407000-00005. [PMID: 38991098] [DOI: 10.2106/jbjs.rvw.24.00057]
Abstract
» Artificial intelligence is an umbrella term for computational methods designed to mimic human intelligence and problem-solving capabilities, although in the future this may become an incomplete definition. Machine learning (ML) encompasses the development of algorithms or predictive models that generate outputs without explicit instructions, assisting in clinical predictions based on large data sets. Deep learning is a subset of ML that utilizes layers of networks with various inter-relational connections to define and generalize data.
» ML algorithms can enhance radiomics techniques for improved image evaluation and diagnosis. While ML shows promise with the advent of radiomics, there are still obstacles to overcome.
» Several calculators leveraging ML algorithms have been developed to predict survival in primary sarcomas and metastatic bone disease using patient-specific data. While these models often report exceptionally accurate performance, it is crucial to evaluate their robustness using standardized guidelines.
» While increased computing power suggests continuous improvement of ML algorithms, these advancements must be balanced against challenges such as diversifying data, addressing ethical concerns, and enhancing model interpretability.
Affiliation(s)
- Paul A Rizk
- Division of Orthopaedic Oncology, Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Marcos R Gonzalez
- Division of Orthopaedic Oncology, Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Bishoy M Galoaa
- Interdisciplinary Science & Engineering Complex (ISEC), Northeastern University, Boston, Massachusetts
- Andrew G Girgis
- Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts
- Lotte Van Der Linden
- Division of Orthopaedic Oncology, Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Connie Y Chang
- Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Santiago A Lozano-Calderon
- Division of Orthopaedic Oncology, Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
4
Droppelmann G, Rodríguez C, Jorquera C, Feijoo F. Artificial intelligence in diagnosing upper limb musculoskeletal disorders: a systematic review and meta-analysis of diagnostic tests. EFORT Open Rev 2024;9:241-251. [PMID: 38579757] [PMCID: PMC11044087] [DOI: 10.1530/eor-23-0174]
Abstract
Purpose: The integration of artificial intelligence (AI) in radiology has transformed diagnostics, improving precision and decision-making. In musculoskeletal imaging specifically, AI tools can improve diagnostic accuracy for upper-extremity pathologies. This study aimed to assess the diagnostic performance of AI models in detecting musculoskeletal pathologies of the upper extremity across different imaging modalities.
Methods: A meta-analysis was conducted, with searches on MEDLINE/PubMed, SCOPUS, Cochrane Library, Lilacs, and SciELO. Study quality was assessed using the QUADAS-2 tool. Diagnostic accuracy measures, including sensitivity, specificity, diagnostic odds ratio (DOR), positive and negative likelihood ratios (PLR, NLR), area under the curve (AUC), and the summary receiver operating characteristic, were pooled using a random-effects model. Heterogeneity and subgroup analyses were also included. All statistical analyses and plots were performed in R.
Results: Thirteen models from ten articles were analyzed. The sensitivity and specificity of the AI models for detecting musculoskeletal conditions in the upper extremity were 0.926 (95% CI: 0.900; 0.945) and 0.908 (95% CI: 0.810; 0.958). The PLR, NLR, lnDOR, and AUC estimates were 19.18 (95% CI: 8.90; 29.34), 0.11 (95% CI: 0.18; 0.46), 4.62 (95% CI: 4.02; 5.22; P < 0.001), and 0.95, respectively.
Conclusion: The AI models exhibited strong univariate and bivariate performance in detecting both positive and negative cases within the analyzed dataset of upper-extremity musculoskeletal pathologies.
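The random-effects pooling this review performed in R can be sketched in Python. The per-study (TP, FN) counts below are hypothetical; the sketch pools logit-transformed sensitivities with the DerSimonian-Laird between-study variance estimate, which is one standard way such meta-analyses are computed.

```python
import math

# Hypothetical per-study (true positive, false negative) counts
studies = [(45, 5), (90, 8), (60, 4), (120, 12), (30, 3)]

# Logit-transform each study's sensitivity, with the usual approximate variance
effects, variances = [], []
for tp, fn in studies:
    sens = tp / (tp + fn)
    effects.append(math.log(sens / (1 - sens)))
    variances.append(1 / tp + 1 / fn)  # variance of the logit

# DerSimonian-Laird estimate of between-study variance tau^2
w = [1 / v for v in variances]
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled logit, and back-transformed pooled sensitivity
w_re = [1 / (v + tau2) for v in variances]
pooled_logit = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
pooled_sens = 1 / (1 + math.exp(-pooled_logit))
```

Pooled specificity follows the same recipe using (TN, FP) counts; bivariate models used in published meta-analyses additionally model the correlation between the two.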
Affiliation(s)
- Guillermo Droppelmann
- Research Center on Medicine, Exercise, Sport and Health, MEDS Clinic, Santiago, RM, Chile
- Health Sciences PhD Program, Universidad Católica de Murcia UCAM, Murcia, Spain
- Harvard T.H. Chan School of Public Health, Boston, Massachusetts, USA
- Carlos Jorquera
- Facultad de Ciencias, Escuela de Nutrición y Dietética, Universidad Mayor, Santiago, RM, Chile
- Felipe Feijoo
- School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
5
Salehi MA, Mohammadi S, Harandi H, Zakavi SS, Jahanshahi A, Shahrabi Farahani M, Wu JS. Diagnostic Performance of Artificial Intelligence in Detection of Primary Malignant Bone Tumors: a Meta-Analysis. J Imaging Inform Med 2024;37:766-777. [PMID: 38343243] [PMCID: PMC11031503] [DOI: 10.1007/s10278-023-00945-3]
Abstract
We aim to conduct a meta-analysis of studies that evaluated the diagnostic performance of artificial intelligence (AI) algorithms in the detection of primary bone tumors, distinguishing them from other bone lesions, and comparing them with clinician assessment. A systematic search was conducted using a combination of keywords related to bone tumors and AI. After extracting contingency tables from all included studies, we performed a meta-analysis using a random-effects model to determine the pooled sensitivity and specificity, accompanied by their respective 95% confidence intervals (CI). Quality assessment used a modified version of the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) checklist and the Prediction model Risk Of Bias ASsessment Tool (PROBAST). On internal validation test sets, the pooled sensitivities for AI algorithms and clinicians in detecting bone neoplasms were 84% (95% CI: 79-88) and 76% (95% CI: 64-85), and the pooled specificities were 86% (95% CI: 81-90) and 64% (95% CI: 55-72), respectively. At external validation, the pooled sensitivity and specificity for AI algorithms were 84% (95% CI: 75-90) and 91% (95% CI: 83-96), respectively; for clinicians, they were 85% (95% CI: 73-92) and 94% (95% CI: 89-97). The sensitivity and specificity for clinicians with AI assistance were 95% (95% CI: 86-98) and 57% (95% CI: 48-66). Caution is needed when interpreting these findings due to potential limitations of the included studies, and further research is needed before such algorithms can be implemented effectively in medical practice.
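Pooled estimates like those above start from per-study contingency tables. A small sketch of computing sensitivity and specificity with approximate Wald 95% CIs from a single 2x2 table; the counts are hypothetical:

```python
import math

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity and specificity with approximate Wald 95% CIs
    from a 2x2 contingency table."""
    def prop_ci(x, n):
        p = x / n
        se = math.sqrt(p * (1 - p) / n)
        return p, (p - 1.96 * se, p + 1.96 * se)
    sens, sens_ci = prop_ci(tp, tp + fn)   # TP / (TP + FN)
    spec, spec_ci = prop_ci(tn, tn + fp)   # TN / (TN + FP)
    return sens, sens_ci, spec, spec_ci

# Hypothetical counts chosen to echo the pooled point estimates above
sens, sens_ci, spec, spec_ci = diagnostic_metrics(tp=84, fp=14, fn=16, tn=86)
```

Published meta-analyses typically use logit or exact (Clopper-Pearson) intervals rather than the Wald interval, which misbehaves near 0 or 1; the Wald form is used here only for brevity.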
Affiliation(s)
- Mohammad Amin Salehi
- School of Medicine, Tehran University of Medical Sciences, Pour Sina St, Keshavarz Blvd, Tehran, 1417613151, Iran
- Soheil Mohammadi
- School of Medicine, Tehran University of Medical Sciences, Pour Sina St, Keshavarz Blvd, Tehran, 1417613151, Iran.
- Hamid Harandi
- School of Medicine, Tehran University of Medical Sciences, Pour Sina St, Keshavarz Blvd, Tehran, 1417613151, Iran
- Seyed Sina Zakavi
- School of Medicine, Tabriz University of Medical Sciences, Tabriz, Iran
- Ali Jahanshahi
- School of Medicine, Guilan University of Medical Sciences, Rasht, Iran
- Jim S Wu
- Department of Radiology, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, 02215, USA
6
Sampath K, Rajagopal S, Chintanpalli A. A comparative analysis of CNN-based deep learning architectures for early diagnosis of bone cancer using CT images. Sci Rep 2024;14:2144. [PMID: 38273131] [PMCID: PMC10811327] [DOI: 10.1038/s41598-024-52719-8]
Abstract
Bone cancer is a rare disease in which cells in the bone grow out of control, destroying normal bone tissue. A benign type of bone tumor is harmless and does not spread to other body parts, whereas a malignant type can spread to other body parts and be harmful. According to Cancer Research UK (2021), the survival rate for patients with bone cancer is 40%, and early detection can increase the chances of survival by allowing treatment at the initial stages. Early detection of these lumps or masses can reduce the risk of death. The goal of the current study is to utilize image processing techniques and deep learning-based convolutional neural networks (CNN) to classify normal and cancerous bone images. Medical image processing techniques, such as pre-processing (median filtering), K-means clustering segmentation, and Canny edge detection, were used to detect the cancer region in computed tomography (CT) images for parosteal osteosarcoma, enchondroma, and osteochondroma types of bone cancer. After segmentation, the normal and cancer-affected images were classified using various existing CNN-based models. The results revealed that the AlexNet model performed best, with a training accuracy of 98%, validation accuracy of 98%, and testing accuracy of 100%.
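The pre-processing and segmentation steps named above (median filtering followed by K-means clustering on pixel intensities) can be sketched in plain numpy. The synthetic "lesion" image below is hypothetical, and the Canny edge-detection step is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "CT slice": a bright square (lesion stand-in) on a noisy background
img = rng.normal(50.0, 5.0, size=(64, 64))
img[20:40, 20:40] += 100.0

def median_filter3(a):
    """3x3 median filter via stacked shifted copies (edges wrap around,
    which is acceptable for a sketch)."""
    shifts = [np.roll(np.roll(a, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.median(np.stack(shifts), axis=0)

def kmeans_1d(values, k=2, iters=20):
    """Plain k-means on pixel intensities (1D feature)."""
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

smooth = median_filter3(img)
labels, centers = kmeans_1d(smooth.ravel(), k=2)
# The brighter cluster is taken as the candidate lesion region
mask = (labels == np.argmax(centers)).reshape(img.shape)
```

In practice one would run an edge detector (e.g. Canny) or morphological clean-up on `mask` before cropping the region for a CNN, as the study describes.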
Affiliation(s)
- Kanimozhi Sampath
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, 632014, India
- Sivakumar Rajagopal
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, 632014, India.
- Ananthakrishna Chintanpalli
- Department of Communication Engineering, School of Electronics Engineering, Vellore Institute of Technology, Vellore, 632014, India
7
Yildiz Potter I, Yeritsyan D, Mahar S, Wu J, Nazarian A, Vaziri A, Vaziri A. Automated Bone Tumor Segmentation and Classification as Benign or Malignant Using Computed Tomographic Imaging. J Digit Imaging 2023;36:869-878. [PMID: 36627518] [PMCID: PMC10287871] [DOI: 10.1007/s10278-022-00771-z]
Abstract
The purpose of this study was to pair computed tomography (CT) imaging and machine learning for automated bone tumor segmentation and classification to aid clinicians in determining the need for biopsy. In this retrospective study (March 2005-October 2020), a dataset of 84 femur CT scans (50 females and 34 males, 20 years and older) with definitive histologic confirmation of bone lesion (71% malignant) were leveraged to perform automated tumor segmentation and classification. Our method involves a deep learning architecture that receives a DICOM slice and predicts (i) a segmentation mask over the estimated tumor region, and (ii) a corresponding class as benign or malignant. Class prediction for each case is then determined via majority voting. Statistical analysis was conducted via fivefold cross validation, with results reported as averages along with 95% confidence intervals. Despite the imbalance between benign and malignant cases in our dataset, our approach attains similar classification performances in specificity (75%) and sensitivity (79%). Average segmentation performance attains 56% Dice score and reaches up to 80% for an image slice in each scan. The proposed approach establishes the first steps in developing an automated deep learning method on bone tumor segmentation and classification from CT imaging. Our approach attains comparable quantitative performance to existing deep learning models using other imaging modalities, including X-ray. Moreover, visual analysis of bone tumor segmentation indicates that our model is capable of learning typical tumor characteristics and provides a promising direction in aiding the clinical decision process for biopsy.
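Two quantities this abstract reports, the Dice score for segmentation overlap and per-slice majority voting for the scan-level class, can be sketched directly; the masks and slice predictions below are hypothetical:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def majority_vote(slice_labels):
    """Scan-level class from per-slice predictions (1 = malignant)."""
    return int(np.mean(slice_labels) >= 0.5)

# Hypothetical ground-truth and predicted masks, offset by two pixels
truth = np.zeros((32, 32), dtype=bool); truth[8:24, 8:24] = True
pred = np.zeros((32, 32), dtype=bool);  pred[10:26, 10:26] = True

score = dice(pred, truth)
scan_class = majority_vote([1, 1, 0, 1, 0])  # 3 of 5 slices voted malignant
```

Here both masks cover 256 pixels and overlap on a 14x14 region, giving a Dice score of 2*196/512.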
Affiliation(s)
- Diana Yeritsyan
- Beth Israel Deaconess Medical Center (BIDMC), Harvard Medical School, 330 Brookline Avenue, Boston, MA, 02215, USA
- Sarah Mahar
- Beth Israel Deaconess Medical Center (BIDMC), Harvard Medical School, 330 Brookline Avenue, Boston, MA, 02215, USA
- Jim Wu
- Beth Israel Deaconess Medical Center (BIDMC), Harvard Medical School, 330 Brookline Avenue, Boston, MA, 02215, USA
- Ara Nazarian
- Beth Israel Deaconess Medical Center (BIDMC), Harvard Medical School, 330 Brookline Avenue, Boston, MA, 02215, USA
- Aidin Vaziri
- BioSensics LLC, 57 Chapel Street, Newton, MA, 02458, USA
- Ashkan Vaziri
- BioSensics LLC, 57 Chapel Street, Newton, MA, 02458, USA
8
Ong W, Zhu L, Tan YL, Teo EC, Tan JH, Kumar N, Vellayappan BA, Ooi BC, Quek ST, Makmur A, Hallinan JTPD. Application of Machine Learning for Differentiating Bone Malignancy on Imaging: A Systematic Review. Cancers (Basel) 2023;15:1837. [PMID: 36980722] [PMCID: PMC10047175] [DOI: 10.3390/cancers15061837]
Abstract
An accurate diagnosis of bone tumours on imaging is crucial for appropriate and successful treatment. The advent of artificial intelligence (AI) and machine learning methods to characterize and assess bone tumours on various imaging modalities may assist in the diagnostic workflow. The purpose of this review article is to summarise the most recent evidence for AI techniques using imaging to differentiate benign from malignant lesions, the characterization of various malignant bone lesions, and their potential clinical application. A systematic search of electronic databases (PubMed, MEDLINE, Web of Science, and clinicaltrials.gov) was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 34 articles reporting the use of AI techniques to distinguish benign from malignant bone lesions were retrieved, and their key findings were compiled and summarised: 12 (35.3%) focused on radiographs, 12 (35.3%) on MRI, 5 (14.7%) on CT, and 5 (14.7%) on PET/CT. The overall reported accuracy, sensitivity, and specificity of AI in distinguishing benign from malignant bone lesions range from 0.44-0.99, 0.63-1.00, and 0.73-0.96, respectively, with AUCs of 0.73-0.96. In conclusion, the use of AI to discriminate bone lesions on imaging has achieved relatively good performance across imaging modalities, with high sensitivity, specificity, and accuracy for distinguishing benign from malignant lesions in several cohort studies. However, further research is necessary to test the clinical performance of these algorithms before they can be integrated into routine clinical practice.
Affiliation(s)
- Wilson Ong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Correspondence: Tel.: +65-67725207
- Lei Zhu
- Department of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore 117417, Singapore
- Yi Liang Tan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Ee Chin Teo
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Jiong Hao Tan
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- Naresh Kumar
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- Balamurugan A. Vellayappan
- Department of Radiation Oncology, National University Cancer Institute Singapore, National University Hospital, 5 Lower Kent Ridge Road, Singapore 119074, Singapore
- Beng Chin Ooi
- Department of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore 117417, Singapore
- Swee Tian Quek
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
9
An Automated Method for Classifying Liver Lesions in Contrast-Enhanced Ultrasound Imaging Based on Deep Learning Algorithms. Diagnostics (Basel) 2023;13:1062. [PMID: 36980369] [PMCID: PMC10047233] [DOI: 10.3390/diagnostics13061062]
Abstract
Background: Contrast-enhanced ultrasound (CEUS) is an important imaging modality in the diagnosis of liver tumors. Use of a contrast agent yields a more detailed image. Time-intensity curves (TIC) can be extracted using specialized software, and the signal can then be analyzed further.
Methods: The purpose of the study was to build an automated method for extracting TICs and classifying liver lesions in CEUS liver investigations. The cohort contained 50 anonymized video investigations from 49 patients; clinical data from the patients were also provided. A method comprising three modules was proposed. The first module, a lesion-segmentation deep learning (DL) model, predicted a mask (region of interest) frame by frame. The second module dilated the mask and, after applying a colormap to the image, extracted the TIC and its parameters (area under the curve, time to peak, mean transit time, and maximum intensity). The third module, a feed-forward neural network, predicted the final diagnosis; it was trained on the TIC parameters extracted by the second module together with other data: gender, age, hepatitis history, and cirrhosis history.
Results: Five classes were chosen for the feed-forward classifier: hepatocarcinoma, metastasis, other malignant lesions, hemangioma, and other benign lesions. As a multiclass classifier, appropriate performance metrics were observed: categorical accuracy, F1 micro, F1 macro, and Matthews correlation coefficient. Due to class imbalance, the classifier could not always predict lesions from the minority classes with high accuracy; on the majority classes, however, it predicted the lesion type with high accuracy.
Conclusions: The main goal of the study was to develop an automated method of classifying liver lesions in CEUS video investigations. Being modular, the system can be a useful tool for gastroenterologists or medical students, either as a second-opinion system or as a tool to automatically extract TICs.
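The TIC parameters listed above (area under the curve, time to peak, maximum intensity, mean transit time) can be computed from a sampled curve. A sketch on a hypothetical gamma-variate-like bolus curve, with the mean transit time taken as the intensity-weighted mean time:

```python
import numpy as np

# Hypothetical time-intensity curve: bolus wash-in followed by wash-out
t = np.linspace(0.0, 60.0, 601)   # seconds, 0.1 s sampling
tic = t * np.exp(-t / 8.0)        # arbitrary intensity units, peak at t = 8 s

dt = t[1] - t[0]
auc = float(np.sum(tic) * dt)                 # rectangular-rule area under the curve
peak_idx = int(np.argmax(tic))
max_intensity = float(tic[peak_idx])
time_to_peak = float(t[peak_idx])
# Mean transit time as the intensity-weighted mean of time
mean_transit_time = float(np.sum(t * tic) / np.sum(tic))
```

Real pipelines would first average intensity inside the dilated lesion mask per frame and smooth the resulting curve before extracting these parameters.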
10
Manganelli Conforti P, D’Acunto M, Russo P. Deep Learning for Chondrogenic Tumor Classification through Wavelet Transform of Raman Spectra. Sensors (Basel) 2022;22:7492. [PMID: 36236597] [PMCID: PMC9571786] [DOI: 10.3390/s22197492]
Abstract
The grading of cancer tissues is still one of the main challenges for pathologists, so the development of enhanced analysis strategies is crucial for accurately identifying and managing each individual case. Raman spectroscopy (RS) is a promising tool for the classification of tumor tissues, as it allows us to obtain biochemical maps of the tissues under analysis and to observe their evolution in terms of biomolecules, proteins, lipid structures, DNA, vitamins, and so on. Its potential could be further improved by a classification system able to recognize the tumor category of a sample from the raw Raman signal; this could provide more reliable responses on shorter time scales and could reduce or eliminate false-positive or false-negative diagnoses. Deep learning techniques have become ubiquitous in recent years, with models able to perform highly accurate classification in the most diverse fields of research, e.g., natural language processing, computer vision, and medical imaging. However, deep models often rely on huge labeled datasets to reach reasonable accuracy, and run into overfitting when training data are insufficient. In this paper, we propose CLARA (chondrogenic tumor CLAssification through wavelet transform of RAman spectra), which classifies Raman spectra obtained from bone tissues with high accuracy. CLARA recognizes and grades the tumors in the evaluated dataset with 97% accuracy by dividing the original task into two binary classification steps: the first is performed on the original RS signals, while the second uses a hybrid temporal-frequency 2D transform.
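The idea of turning a 1D spectrum into a 2D temporal-frequency image for a CNN can be sketched with a simple wavelet scalogram. The Ricker (Mexican-hat) wavelet and the synthetic two-band "spectrum" below are illustrative assumptions, not the paper's exact transform:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "Raman spectrum": two Gaussian bands on a low-noise baseline
x = np.arange(512, dtype=float)
spectrum = (np.exp(-((x - 150.0) / 8.0) ** 2)
            + 0.6 * np.exp(-((x - 350.0) / 20.0) ** 2)
            + 0.02 * rng.normal(size=512))

def scalogram(signal, scales):
    """Turn a 1D signal into a 2D image by convolving it with a
    Ricker (Mexican-hat) wavelet at several widths."""
    rows = []
    for s in scales:
        u = np.arange(-4 * s, 4 * s + 1) / s
        wavelet = (1.0 - u ** 2) * np.exp(-u ** 2 / 2.0)  # Ricker wavelet
        rows.append(np.convolve(signal, wavelet / s, mode="same"))
    return np.stack(rows)  # shape (len(scales), len(signal)): a 2D CNN input

img = scalogram(spectrum, scales=[2, 4, 8, 16, 32])
```

Each row responds most strongly where a spectral band's width matches the wavelet scale, so the resulting image encodes both position and width of the bands, which is what makes it a useful input for a 2D CNN.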
Affiliation(s)
- Mario D’Acunto
- CNR-IBF, Istituto di Biofisica, Via Moruzzi 1, 56124 Pisa, Italy
- Paolo Russo
- DIAG Department, Sapienza University of Rome, Via Ariosto 25, 00185 Roma, Italy