1. Chang JY, Makary MS. Evolving and Novel Applications of Artificial Intelligence in Thoracic Imaging. Diagnostics (Basel) 2024; 14:1456. [PMID: 39001346] [PMCID: PMC11240935] [DOI: 10.3390/diagnostics14131456]
Abstract
The advent of artificial intelligence (AI) is revolutionizing medicine, particularly radiology. With the development of newer models, AI applications are demonstrating improved performance and versatile utility in the clinical setting. Thoracic imaging is an area of profound interest, given the prevalence of chest imaging and the significant health implications of thoracic diseases. This review aims to highlight the promising applications of AI within thoracic imaging. It examines the role of AI, including its contributions to improving diagnostic evaluation and interpretation, enhancing workflow, and aiding in invasive procedures. It then highlights the current challenges and limitations faced by AI, such as the necessity of 'big data', ethical and legal considerations, and bias in representation. Lastly, it explores potential directions for the application of AI in thoracic radiology.
Affiliation(s)
- Jin Y Chang
- Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
- Mina S Makary
- Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
- Division of Vascular and Interventional Radiology, Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
2. Dissler N, Nogueira D, Keppi B, Sanguinet P, Ozanon C, Geoffroy-Siraudin C, Pollet-Villard X, Boussommier-Calleja A. Artificial intelligence-powered assisted ranking of sibling embryos to increase first cycle pregnancy rate. Reprod Biomed Online 2024; 49:103887. [PMID: 38701632] [DOI: 10.1016/j.rbmo.2024.103887]
Abstract
RESEARCH QUESTION: Could EMBRYOLY, an artificial intelligence embryo evaluation tool, help embryologists increase the first cycle pregnancy rate and reduce cycles to pregnancy for patients?
DESIGN: Data from 11,988 embryos were collected via EMBRYOLY from 2666 egg retrievals (2019-2022) across 11 centres in France, Spain and Morocco using three time-lapse systems (TLS). Data from two independent clinics were also examined. EMBRYOLY's transformer-based model was applied to transferred embryos to evaluate its ranking performance against pregnancy and birth outcomes. It was applied to cohorts to rank sibling embryos (including non-transferred ones) according to their likelihood of clinical pregnancy and to compute agreement with the embryologist's highest-ranked embryo. Its effect on time to pregnancy and first cycle pregnancy rate was evaluated on cohorts with multiple single blastocyst transfers, assuming the embryologist would have considered EMBRYOLY's ranking of the embryos favoured for transfer.
RESULTS: EMBRYOLY's score correlated significantly with clinical pregnancies and live births for cleavage-stage and blastocyst transfers. This held true for clinical pregnancies from blastocyst transfers in the two independent clinics. In cases of multiple single embryo transfers, embryologists achieved a 19.8% first cycle pregnancy rate, which could have been improved to 44.1% with the adjunctive use of EMBRYOLY (McNemar's test: P < 0.001). This could have reduced cycles to clinical pregnancy from 2.01 to 1.66 (Wilcoxon test: P < 0.001).
CONCLUSIONS: EMBRYOLY's potential to enhance first cycle pregnancy rates when combined with embryologists' expertise is highlighted. It reduces the number of unsuccessful cycles for patients across TLS and IVF centres.
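McNemar's test, as cited in the results above, compares paired binary outcomes (here, whether a cohort would have reached a first-cycle pregnancy under the embryologist's choice versus the AI-assisted choice). A minimal stdlib sketch with the standard continuity-corrected statistic; the discordant-pair counts below are hypothetical, not the study's data:

```python
from math import erfc, sqrt

def mcnemar(b: int, c: int) -> tuple[float, float]:
    """McNemar's test for paired binary outcomes.

    b = pairs where only method A succeeded,
    c = pairs where only method B succeeded.
    Returns (continuity-corrected chi-square statistic, two-sided p-value).
    """
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # Chi-square with 1 df: survival function via the complementary error function
    p = erfc(sqrt(stat / 2))
    return stat, p

# Hypothetical discordant-pair counts: in 40 cohorts only the AI-assisted
# ranking would have yielded a first-cycle pregnancy, in 5 only the
# embryologist's original choice did.
stat, p = mcnemar(40, 5)
print(f"chi2 = {stat:.2f}, p = {p:.2e}")
```

Only the discordant pairs enter the statistic; cohorts where both rankings succeed (or both fail) carry no information about which method is better.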
Affiliation(s)
- Nina Dissler
- ImVitro, 130 Rue de Lourmel, 75015 Paris, France
- Daniela Nogueira
- INOVIE Fertilité, Institut de Fertilité La Croix Du Sud, Clinique la Croix du Sud, Toulouse, France; Art Fertility Clinics, IVF laboratory, Abu Dhabi, United Arab Emirates
- Bertrand Keppi
- INOVIE Group, INOVIE Fertilité, Gen-Bio, 63100 Clermont-Ferrand, France
- Pierre Sanguinet
- INOVIE Group, INOVIE Fertilité, LaboSud, 34000 Montpellier, France
- Xavier Pollet-Villard
- MLAB Groupe, Centre d'Assistance Médicale à la Procréation Nataliance, Pôle Santé Oréliance, Saran, France
3. Ramireddy JK, Sathya A, Sasidharan BK, Varghese AJ, Sathyamurthy A, John NO, Chandramohan A, Singh A, Joel A, Mittal R, Masih D, Varghese K, Rebekah G, Ram TS, Thomas HMT. Can Pretreatment MRI and Planning CT Radiomics Improve Prediction of Complete Pathological Response in Locally Advanced Rectal Cancer Following Neoadjuvant Treatment? J Gastrointest Cancer 2024. [PMID: 38856797] [DOI: 10.1007/s12029-024-01073-z]
Abstract
OBJECTIVE(S): The treatment response to neoadjuvant chemoradiation (nCRT) differs largely among individuals treated for rectal cancer. In this study, we investigated the role of radiomics in predicting the pathological response in locally advanced rectal cancers at two treatment time points: (1) before the start of any treatment, using baseline T2-weighted MRI (T2W-MR), and (2) at the start of radiation treatment, using planning CT.
METHODS: Patients on nCRT followed by surgery between June 2017 and December 2019 were included in the study. Histopathological tumour response grading (TRG) was used for classification, and the gross tumour volume was defined by the radiation oncologists. Following resampling, 100 and 103 pyradiomics features were extracted from the T2W-MR and planning CT images, respectively. The synthetic minority oversampling technique (SMOTE) was used to address class imbalance. Four machine learning classifiers were used to build clinical, radiomic, and merged models. Model performances were evaluated on a held-out test dataset following 3-fold cross-validation, using the area under the receiver operating characteristic curve (AUC) with bootstrap 95% confidence intervals.
RESULTS: One hundred and fifty patients were included; 58/150 with TRG 1 were classified as complete responders, and the rest as incomplete responders. Clinical models performed better (AUC = 0.68) than radiomics models (AUC = 0.62). Overall, the clinical + T2W-MR model showed the best performance (AUC = 0.72) in predicting the pathological response prior to therapy. The clinical + planning CT merged models achieved a highest AUC of only 0.66.
CONCLUSION: Merging clinical and baseline T2W-MR radiomics enhances prediction of pathological response in rectal cancer. Validation in larger cohorts is warranted, especially for watch-and-wait strategies.
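The SMOTE step used to balance the 58 complete against 92 incomplete responders can be illustrated with a minimal numpy sketch. The feature matrix below is synthetic and the `smote` helper is an illustrative re-implementation, not the authors' code (published pipelines typically use `imblearn.over_sampling.SMOTE`):

```python
import numpy as np

def smote(X_min: np.ndarray, n_new: int, k: int = 5, rng=None) -> np.ndarray:
    """Minimal SMOTE: synthesize minority-class samples by interpolating
    between a random minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Euclidean distances from sample i to every minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                    # interpolation factor in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

# Toy radiomics-like minority class: 58 "complete responders" with 10 features,
# oversampled to match the 92 incomplete responders
rng = np.random.default_rng(0)
X_min = rng.normal(0, 1, size=(58, 10))
X_syn = smote(X_min, n_new=92 - 58, rng=1)
X_balanced = np.vstack([X_min, X_syn])
print(X_balanced.shape)   # (92, 10)
```

Because the synthetic points lie on segments between real minority samples, SMOTE densifies the minority region instead of merely duplicating cases, which is why it pairs well with the cross-validated classifiers described above.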
Grants
- Fluid research major grant Christian Medical College, Vellore
Affiliation(s)
- Jeba Karunya Ramireddy
- Quantitative Imaging Research and Artificial Intelligence Lab, Department of Radiation Oncology, Unit 2, Dr Ida B Scudder Cancer Centre, Christian Medical College, Vellore, Tamil Nadu, 632004, India
- A Sathya
- Quantitative Imaging Research and Artificial Intelligence Lab, Department of Radiation Oncology, Unit 2, Dr Ida B Scudder Cancer Centre, Christian Medical College, Vellore, Tamil Nadu, 632004, India
- Balu Krishna Sasidharan
- Quantitative Imaging Research and Artificial Intelligence Lab, Department of Radiation Oncology, Unit 2, Dr Ida B Scudder Cancer Centre, Christian Medical College, Vellore, Tamil Nadu, 632004, India
- Amal Joseph Varghese
- Quantitative Imaging Research and Artificial Intelligence Lab, Department of Radiation Oncology, Unit 2, Dr Ida B Scudder Cancer Centre, Christian Medical College, Vellore, Tamil Nadu, 632004, India
- Arvind Sathyamurthy
- Quantitative Imaging Research and Artificial Intelligence Lab, Department of Radiation Oncology, Unit 2, Dr Ida B Scudder Cancer Centre, Christian Medical College, Vellore, Tamil Nadu, 632004, India
- Neenu Oliver John
- Quantitative Imaging Research and Artificial Intelligence Lab, Department of Radiation Oncology, Unit 2, Dr Ida B Scudder Cancer Centre, Christian Medical College, Vellore, Tamil Nadu, 632004, India
- Ashish Singh
- Department of Medical Oncology, Christian Medical College, Vellore, India
- Anjana Joel
- Department of Medical Oncology, Christian Medical College, Vellore, India
- Rohin Mittal
- Department of General Surgery, Christian Medical College, Vellore, India
- Dipti Masih
- Department of Pathology, Christian Medical College, Vellore, India
- Kripa Varghese
- Department of Pathology, Christian Medical College, Vellore, India
- Grace Rebekah
- Department of Biostatistics, Christian Medical College, Vellore, India
- Thomas Samuel Ram
- Quantitative Imaging Research and Artificial Intelligence Lab, Department of Radiation Oncology, Unit 2, Dr Ida B Scudder Cancer Centre, Christian Medical College, Vellore, Tamil Nadu, 632004, India
- Hannah Mary T Thomas
- Quantitative Imaging Research and Artificial Intelligence Lab, Department of Radiation Oncology, Unit 2, Dr Ida B Scudder Cancer Centre, Christian Medical College, Vellore, Tamil Nadu, 632004, India
4. Uhlig A, Uhlig J, Leha A, Biggemann L, Bachanek S, Stöckle M, Reichert M, Lotz J, Zeuschner P, Maßmann A. Radiomics and machine learning for renal tumor subtype assessment using multiphase computed tomography in a multicenter setting. Eur Radiol 2024. [PMID: 38634876] [DOI: 10.1007/s00330-024-10731-6]
Abstract
OBJECTIVES: To distinguish histological subtypes of renal tumors using radiomic features and machine learning (ML) based on multiphase computed tomography (CT).
MATERIAL AND METHODS: Patients who underwent surgical treatment for renal tumors at two tertiary centers from 2012 to 2022 were included retrospectively. Preoperative arterial (corticomedullary) and venous (nephrogenic) phase CT scans from these centers, as well as from external imaging facilities, were manually segmented, and standardized radiomic features were extracted. Following preprocessing and addressing of the class imbalance, an ML algorithm based on extreme gradient boosting trees (XGB) was employed to predict renal tumor subtypes using 10-fold cross-validation. The evaluation was conducted using the multiclass area under the receiver operating characteristic curve (AUC). Algorithms were trained on data from one center and independently tested on data from the other center.
RESULTS: The training cohort comprised n = 297 patients (64.3% clear cell renal cell carcinoma [RCC], 13.5% papillary RCC, 7.4% chromophobe RCC, 9.4% oncocytomas, and 5.4% angiomyolipomas [AML]), and the testing cohort n = 121 patients (56.2%/16.5%/3.3%/21.5%/2.5%). The XGB algorithm demonstrated a diagnostic performance of AUC = 0.81/0.64/0.8 for venous/arterial/combined contrast phase CT in the training cohort, and AUC = 0.75/0.67/0.75 in the independent testing cohort. In pairwise comparisons, the lowest diagnostic accuracy was evident for the identification of oncocytomas (AUC = 0.57-0.69), and the highest for the identification of AMLs (AUC = 0.9-0.94).
CONCLUSION: Radiomic feature analyses can distinguish renal tumor subtypes on routinely acquired CTs, with oncocytomas being the hardest subtype to identify.
CLINICAL RELEVANCE STATEMENT: Radiomic feature analyses yield robust results for renal tumor assessment on routine CTs. Although radiologists routinely rely on arterial phase CT for renal tumor assessment and operative planning, radiomic features derived from the arterial phase did not improve the accuracy of renal tumor subtype identification in our cohort.
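The multiclass AUC used for evaluation is conventionally computed one-vs-rest and macro-averaged. A minimal numpy sketch (the rank-sum formulation below ignores ties between positive and negative scores, and the toy probabilities are invented for illustration, not the study's predictions):

```python
import numpy as np

def auc_binary(scores: np.ndarray, labels: np.ndarray) -> float:
    """Binary AUC via the rank-sum (Mann-Whitney U) formulation."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    u = ranks[pos].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

def macro_ovr_auc(proba: np.ndarray, y: np.ndarray) -> float:
    """One-vs-rest multiclass AUC, macro-averaged over classes."""
    classes = np.unique(y)
    return float(np.mean([auc_binary(proba[:, k], (y == c).astype(int))
                          for k, c in enumerate(classes)]))

# Tiny hypothetical 3-class example with well-separated scores
proba = np.array([[0.8, 0.1, 0.1],
                  [0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.2, 0.7],
                  [0.2, 0.1, 0.7]])
y = np.array([0, 0, 1, 1, 2, 2])
print(macro_ovr_auc(proba, y))   # 1.0 for perfectly separated classes
```

Macro-averaging weights the five tumor subtypes equally, which matters here because the rarer classes (chromophobe RCC, AML) would otherwise barely influence the score.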
Affiliation(s)
- Annemarie Uhlig
- Department of Urology, University Medical Center Goettingen, Goettingen, Germany
- Johannes Uhlig
- Department of Clinical and Interventional Radiology, University Medical Center Goettingen, Goettingen, Germany
- Andreas Leha
- Department of Medical Statistics, University Medical Center Goettingen, Goettingen, Germany
- Lorenz Biggemann
- Department of Clinical and Interventional Radiology, University Medical Center Goettingen, Goettingen, Germany
- Sophie Bachanek
- Department of Clinical and Interventional Radiology, University Medical Center Goettingen, Goettingen, Germany
- Michael Stöckle
- Department of Urology and Pediatric Urology, Saarland University, Homburg, Germany
- Mathias Reichert
- Department of Urology, University Medical Center Goettingen, Goettingen, Germany
- Joachim Lotz
- Department of Cardiac Imaging, University Medical Center Goettingen, Goettingen, Germany
- Philip Zeuschner
- Department of Urology and Pediatric Urology, Saarland University, Homburg, Germany
- Alexander Maßmann
- Department of Radiology and Nuclear Medicine, Robert-Bosch-Clinic, Stuttgart, Germany
5. Nigam S, Gjelaj E, Wang R, Wei GW, Wang P. Machine Learning and Deep Learning Applications in Magnetic Particle Imaging. J Magn Reson Imaging 2024. [PMID: 38358090] [DOI: 10.1002/jmri.29294]
Abstract
In recent years, magnetic particle imaging (MPI) has emerged as a promising imaging technique offering high sensitivity and spatial resolution. It originated in the early 2000s as a new approach to overcome the low spatial resolution of relaxometry-based measurement of magnetic fields. MPI provides 2D and 3D images with high temporal resolution, no ionizing radiation, and optimal visual contrast due to its lack of background tissue signal. Traditionally, images have been reconstructed from the induced-voltage signal using system-matrix and X-space-based methods. Because image reconstruction and analysis play an integral role in obtaining precise information from MPI signals, newer artificial intelligence-based methods are continuously being researched and developed. In this work, we summarize and review the significance and employment of machine learning and deep learning models in MPI applications and the potential they hold for the future. LEVEL OF EVIDENCE: 5. TECHNICAL EFFICACY: Stage 1.
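System-matrix reconstruction, mentioned above, treats MPI as a linear inverse problem: the measured signal u is modeled as u = S c, and the tracer concentration c is recovered by regularized inversion. A toy numpy sketch with an invented system matrix (real MPI system matrices are measured by scanning a point sample or simulated, and are far larger):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy system matrix S: maps a 16-pixel concentration image to 64 signal samples
S = rng.normal(size=(64, 16))
c_true = np.zeros(16)
c_true[[3, 7, 8]] = [1.0, 0.5, 0.8]          # sparse "tracer" distribution
u = S @ c_true + 0.01 * rng.normal(size=64)  # measured signal with noise

# Tikhonov-regularized least squares: argmin ||S c - u||^2 + lam * ||c||^2
lam = 1e-2
c_rec = np.linalg.solve(S.T @ S + lam * np.eye(16), S.T @ u)

print(np.max(np.abs(c_rec - c_true)))  # small reconstruction error
```

The regularization term stabilizes the inversion against noise; the learning-based methods the review surveys can be seen as replacing or augmenting this hand-tuned inverse step.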
Affiliation(s)
- Saumya Nigam
- Precision Health Program, Michigan State University, East Lansing, Michigan, USA
- Department of Radiology, College of Human Medicine, Michigan State University, East Lansing, Michigan, USA
- Elvira Gjelaj
- Precision Health Program, Michigan State University, East Lansing, Michigan, USA
- Lyman Briggs College, Michigan State University, East Lansing, Michigan, USA
- Rui Wang
- Department of Mathematics, College of Natural Science, Michigan State University, East Lansing, Michigan, USA
- Guo-Wei Wei
- Department of Mathematics, College of Natural Science, Michigan State University, East Lansing, Michigan, USA
- Department of Electrical and Computer Engineering, College of Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Biochemistry and Molecular Biology, College of Natural Science, Michigan State University, East Lansing, Michigan, USA
- Ping Wang
- Precision Health Program, Michigan State University, East Lansing, Michigan, USA
- Department of Radiology, College of Human Medicine, Michigan State University, East Lansing, Michigan, USA
6. Wang S, Chen L, Sun H. Interpretable artificial intelligence-assisted embryo selection improved single-blastocyst transfer outcomes: a prospective cohort study. Reprod Biomed Online 2023; 47:103371. [PMID: 37839212] [DOI: 10.1016/j.rbmo.2023.103371]
Abstract
RESEARCH QUESTION: What are the pregnancy and neonatal outcomes of an interpretable artificial intelligence (AI) model for embryo selection in a prospective clinical trial?
DESIGN: This single-centre prospective cohort study was carried out from October 2021 to March 2022. A total of 330 eligible patients were assigned to their preferred groups, with 250 patients undergoing a fresh single-blastocyst transfer cycle after the exclusion criteria had been applied. In the AI-assisted group (AAG), embryologists selected the embryos for transfer based on the ranking recommendations provided by an interpretable AI system, while in the manual group, embryologists used the Gardner grading system to make their decisions.
RESULTS: The implantation rate was significantly higher in the AAG than in the manual group (80.87% versus 68.15%, P = 0.022). No significant difference was found between the groups in monozygotic twin rate, miscarriage rate, live birth rate or ectopic pregnancy rate. Furthermore, there was no significant difference in neonatal outcomes, including gestational weeks, premature birth rate, birth height, birthweight, sex ratio at birth and newborn malformation rate. The consensus rate between the AI and the embryologists' retrospective analysis was significantly higher for good-quality embryos (grade 4BB or higher) than for poor-quality embryos (less than 4BB) (84.71% versus 25%, P < 0.001).
CONCLUSIONS: These prospective trial results suggest that the proposed AI system could effectively help embryologists to improve the implantation rate with single-blastocyst transfer compared with traditional manual evaluation methods.
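The reported implantation-rate difference (80.87% versus 68.15%, P = 0.022) can be illustrated with a two-proportion z-test. The group sizes below are a hypothetical split of the 250 transfers chosen to match the reported rates, since the abstract does not give the per-group denominators:

```python
from math import erfc, sqrt

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test with pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))              # two-sided p-value

# Hypothetical split: 115 AI-assisted vs 135 manual transfers, with
# implantation counts matching the reported ~80.87% and ~68.15% rates
z, p = two_proportion_z(93, 115, 92, 135)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With this assumed split the test gives z ≈ 2.29 and p ≈ 0.02, consistent in magnitude with the reported P = 0.022 (which the authors may have obtained with a chi-square test, the asymptotically equivalent form).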
Affiliation(s)
- Shanshan Wang
- Center for Reproductive Medicine and Obstetrics and Gynecology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, China
- Lei Chen
- Center for Reproductive Medicine and Obstetrics and Gynecology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, China
- Haixiang Sun
- Center for Reproductive Medicine and Obstetrics and Gynecology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, China
7. Shanbhag NM, Bin Sumaida A, Binz T, Hasnain SM, El-Koha O, Al Kaabi K, Saleh M, Al Qawasmeh K, Balaraj K. Integrating Artificial Intelligence Into Radiation Oncology: Can Humans Spot AI? Cureus 2023; 15:e50486. [PMID: 38098735] [PMCID: PMC10719429] [DOI: 10.7759/cureus.50486]
Abstract
Introduction: Artificial intelligence (AI) is transforming healthcare, particularly radiation oncology. AI-based contouring tools like Limbus are designed to delineate organs at risk (OAR) and target volumes quickly. This study evaluates the accuracy and efficiency of AI contouring compared to human radiation oncologists, and the ability of professionals to differentiate between AI-generated and human-generated contours.
Methods: At a recent AI conference in Abu Dhabi, a blind comparative analysis was performed to assess AI's performance in radiation oncology. Participants included four human radiation oncologists and the Limbus AI software. They contoured specific regions from CT scans of a breast cancer patient. The audience, consisting of healthcare professionals and AI experts, was challenged to identify the AI-generated contours. The exercise was repeated twice to observe any learning effects. The time taken for contouring and the audience's identification accuracy were recorded.
Results: Initially, only 28% of the audience correctly identified the AI contours, which increased slightly to 31% in the second attempt, indicating a difficulty in distinguishing between AI and human expertise. The AI completed contouring in up to 60 seconds, significantly faster than the human average of 8 minutes.
Discussion: The results indicate that AI can perform radiation contouring comparably to human oncologists but much faster. The difficulty professionals faced in identifying AI versus human contours highlights AI's advanced capabilities in medical tasks.
Conclusion: AI shows promise in enhancing the radiation oncology workflow by reducing contouring time without compromising quality. Further research is needed to confirm AI contouring's clinical efficacy and its integration into routine practice.
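With one AI contour among five candidate sets, chance-level identification is 20%, so the observed 28% can be checked against chance with an exact one-sided binomial tail. The audience size of 100 voters below is a hypothetical assumption, as the abstract does not report it:

```python
from math import comb

def binom_tail(k: int, n: int, p0: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p0): exact one-sided tail probability."""
    return sum(comb(n, i) * p0 ** i * (1 - p0) ** (n - i) for i in range(k, n + 1))

# Did 28 correct identifications out of a (hypothetical) 100 voters
# beat the 20% chance level of picking the AI contour at random?
p_value = binom_tail(28, 100, 0.2)
print(f"P(X >= 28 | chance) = {p_value:.3f}")
```

Whether 28% is meaningfully above chance thus depends directly on the audience size, which is one reason the study's "humans can't spot AI" reading should be interpreted cautiously.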
Affiliation(s)
- Nandan M Shanbhag
- Oncology/Palliative Care, Tawam Hospital, Al Ain, ARE
- Oncology/Radiation Oncology, Tawam Hospital, Al Ain, ARE
- Theresa Binz
- Radiotherapy Technology, Tawam Hospital, Al Ain, ARE
- Khalid Balaraj
- Oncology/Radiation Oncology, Tawam Hospital, Al Ain, ARE
8. Liu C, Liu Z, Holmes J, Zhang L, Zhang L, Ding Y, Shu P, Wu Z, Dai H, Li Y, Shen D, Liu N, Li Q, Li X, Zhu D, Liu T, Liu W. Artificial general intelligence for radiation oncology. Meta-Radiology 2023; 1:100045. [PMID: 38344271] [PMCID: PMC10857824] [DOI: 10.1016/j.metrad.2023.100045]
Abstract
The emergence of artificial general intelligence (AGI) is transforming radiation oncology. As prominent vanguards of AGI, large language models (LLMs) such as GPT-4 and PaLM 2 can process extensive texts, and large vision models (LVMs) such as the Segment Anything Model (SAM) can process extensive imaging data, to enhance the efficiency and precision of radiation therapy. This paper explores full-spectrum applications of AGI across radiation oncology, including initial consultation, simulation, treatment planning, treatment delivery, treatment verification, and patient follow-up. The fusion of vision data with LLMs also creates powerful multimodal models that elucidate nuanced clinical patterns. Together, AGI promises to catalyze a shift towards data-driven, personalized radiation therapy. However, these models should complement human expertise and care. This paper provides an overview of how AGI can transform radiation oncology to elevate the standard of patient care, with the key insight being AGI's ability to exploit multimodal clinical data at scale.
Affiliation(s)
- Chenbin Liu
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, Guangdong, China
- Jason Holmes
- Department of Radiation Oncology, Mayo Clinic, USA
- Lu Zhang
- Department of Computer Science and Engineering, The University of Texas at Arlington, USA
- Lian Zhang
- Department of Radiation Oncology, Mayo Clinic, USA
- Yuzhen Ding
- Department of Radiation Oncology, Mayo Clinic, USA
- Peng Shu
- School of Computing, University of Georgia, USA
- Zihao Wu
- School of Computing, University of Georgia, USA
- Haixing Dai
- School of Computing, University of Georgia, USA
- Yiwei Li
- School of Computing, University of Georgia, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, China
- Shanghai United Imaging Intelligence Co., Ltd, China
- Shanghai Clinical Research and Trial Center, China
- Ninghao Liu
- School of Computing, University of Georgia, USA
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Xiang Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Dajiang Zhu
- Department of Computer Science and Engineering, The University of Texas at Arlington, USA
- Wei Liu
- Department of Radiation Oncology, Mayo Clinic, USA
9. Garg P, Mohanty A, Ramisetty S, Kulkarni P, Horne D, Pisick E, Salgia R, Singhal SS. Artificial intelligence and allied subsets in early detection and preclusion of gynecological cancers. Biochim Biophys Acta Rev Cancer 2023; 1878:189026. [PMID: 37980945] [DOI: 10.1016/j.bbcan.2023.189026]
Abstract
Gynecological cancers, including breast, cervical, ovarian, uterine, and vaginal cancers, pose a grave threat to world health, with early identification being crucial to patient outcomes and survival rates. The application of machine learning (ML) and artificial intelligence (AI) approaches to the study of gynecological cancer has shown potential to revolutionize cancer detection and diagnosis. The current review outlines the significant advancements, obstacles, and prospects brought about by AI and ML technologies in the timely identification and accurate diagnosis of different types of gynecological cancers. AI-powered technologies can use genomic data to discover genetic alterations and biomarkers linked to a particular form of gynecologic cancer, assisting in the creation of targeted treatments. Furthermore, it has been shown that AI and ML technologies can greatly increase the accuracy and efficacy of cancer diagnosis in gynecologic tumors, reduce diagnostic delays, and possibly eliminate the need for needless invasive operations. In conclusion, the review focuses on the integrative role of AI- and ML-based tools and techniques in the early detection and exclusion of various cancer types, and suggests that collaborative coordination between research clinicians, data scientists, and regulatory authorities will be needed to realize the full potential of AI and ML in gynecologic cancer care.
Affiliation(s)
- Pankaj Garg
- Department of Chemistry, GLA University, Mathura, Uttar Pradesh 281406, India
- Atish Mohanty
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Sravani Ramisetty
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Prakash Kulkarni
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- David Horne
- Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Evan Pisick
- Department of Medical Oncology, City of Hope, Chicago, IL 60099, USA
- Ravi Salgia
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Sharad S Singhal
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
10. Fiagbedzi E, Hasford F, Tagoe SN. The influence of artificial intelligence on the work of the medical physicist in radiotherapy practice: a short review. BJR Open 2023; 5:20230003. [PMID: 37942499] [PMCID: PMC10630976] [DOI: 10.1259/bjro.20230003]
Abstract
Artificial intelligence (AI) has found applications in, and influenced, many sectors and their professionals, and radiotherapy and the medical physicist are no exception. AI and technological advances, including modern image-guided treatment technology, have changed the roles of medical physicists in the radiotherapy treatment of cancer patients. Given the medical physicist's changing role in ensuring patient safety and optimal care, AI can reshape radiotherapy practice now and in the years to come. Medical physicists' roles in radiotherapy practice have evolved alongside technology to support better patient care in the age of modern radiotherapy. This short review provides insight into the influence of AI on the changing role of medical physicists at each step of the radiotherapy workflow in which they are involved.
Affiliation(s)
- Francis Hasford
- Department of Medical Physics, University of Ghana, Accra, Ghana
- Samuel Nii Tagoe
- Department of Medical Physics, University of Ghana, Accra, Ghana
11. Oliveira-Saraiva D, Mendes J, Leote J, Gonzalez FA, Garcia N, Ferreira HA, Matela N. Make It Less Complex: Autoencoder for Speckle Noise Removal-Application to Breast and Lung Ultrasound. J Imaging 2023; 9:217. [PMID: 37888324] [PMCID: PMC10607564] [DOI: 10.3390/jimaging9100217]
Abstract
Ultrasound (US) imaging is used in the diagnosis and monitoring of COVID-19 and breast cancer. The presence of Speckle Noise (SN) is a downside to its usage since it decreases lesion conspicuity. Filters can be used to remove SN, but they involve time-consuming computation and parameter tuning. Several researchers have been developing complex Deep Learning (DL) models (150,000-500,000 parameters) for the removal of simulated added SN, without focusing on the real-world application of removing naturally occurring SN from original US images. Here, a simpler (<30,000 parameters) Convolutional Neural Network Autoencoder (CNN-AE) to remove SN from US images of the breast and lung is proposed. In order to do so, simulated SN was added to such US images, considering four different noise levels (σ = 0.05, 0.1, 0.2, 0.5). The original US images (N = 1227, breast + lung) were given as targets, while the noised US images served as the input. The Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR) were used to compare the output of the CNN-AE and of the Median and Lee filters with the original US images. The CNN-AE outperformed the use of these classic filters for every noise level. To see how well the model removed naturally occurring SN from the original US images and to test its real-world applicability, a CNN model that differentiates malignant from benign breast lesions was developed. Several inputs were used to train the model (original, CNN-AE denoised, filter denoised, and noised US images). The use of the original US images resulted in the highest Matthews Correlation Coefficient (MCC) and accuracy values, while for sensitivity and negative predicted values, the CNN-AE-denoised US images (for higher σ values) achieved the best results. 
Our results demonstrate that the application of a simpler DL model for SN removal results in fewer misclassifications of malignant breast lesions in comparison to the use of original US images and the application of the Median filter. This shows that the use of a less-complex model and the focus on clinical practice applicability are relevant and should be considered in future studies.
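The simulated-noise setup described in this abstract can be sketched in a few lines. The multiplicative noise model, image size, and function names below are illustrative assumptions rather than the authors' code, and the SSIM comparison they also report would need a separate SSIM implementation, omitted here:

```python
import numpy as np

def add_speckle(img, sigma, rng):
    """Multiplicative speckle model: noisy = img * (1 + n), with n ~ N(0, sigma^2)."""
    noise = rng.normal(0.0, sigma, img.shape)
    return np.clip(img * (1.0 + noise), 0.0, 1.0)

def psnr(ref, test, max_val=1.0):
    """Peak Signal-to-Noise Ratio (dB) between a reference and a test image."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))            # stand-in for a normalized US image
for sigma in (0.05, 0.1, 0.2, 0.5):     # the four noise levels used in the study
    noisy = add_speckle(clean, sigma, rng)
    print(f"sigma={sigma}: PSNR={psnr(clean, noisy):.1f} dB")
```

In the study's setup, the noisy image would be the autoencoder's input and the clean image its training target; PSNR then measures how close the denoised output gets back to the clean reference.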
Affiliation(s)
- Duarte Oliveira-Saraiva: Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal; LASIGE, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal
- João Mendes: Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal; LASIGE, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal
- João Leote: Critical Care Department, Hospital Garcia de Orta E.P.E, 2805-267 Almada, Portugal
- Nuno Garcia: LASIGE, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal
- Hugo Alexandre Ferreira: Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal
- Nuno Matela: Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal
12
Steenbruggen I, McCormack MC. Artificial intelligence: do we really need it in pulmonary function interpretation? Eur Respir J 2023; 61:2300625. [PMID: 37208036 DOI: 10.1183/13993003.00625-2023]
13
Synergy between artificial intelligence and precision medicine for computer-assisted oral and maxillofacial surgical planning. Clin Oral Investig 2023; 27:897-906. [PMID: 36323803 DOI: 10.1007/s00784-022-04706-4]
Abstract
OBJECTIVES The aim of this review was to investigate the application of artificial intelligence (AI) in maxillofacial computer-assisted surgical planning (CASP) workflows with the discussion of limitations and possible future directions. MATERIALS AND METHODS An in-depth search of the literature was undertaken to review articles concerned with the application of AI for segmentation, multimodal image registration, virtual surgical planning (VSP), and three-dimensional (3D) printing steps of the maxillofacial CASP workflows. RESULTS The existing AI models were trained to address individual steps of CASP, and no single intelligent workflow was found encompassing all steps of the planning process. Segmentation of dentomaxillofacial tissue from computed tomography (CT)/cone-beam CT imaging was the most commonly explored area which could be applicable in a clinical setting. Nevertheless, a lack of generalizability was the main issue, as the majority of models were trained with the data derived from a single device and imaging protocol which might not offer similar performance when considering other devices. In relation to registration, VSP and 3D printing, the presence of inadequate heterogeneous data limits the automatization of these tasks. CONCLUSION The synergy between AI and CASP workflows has the potential to improve the planning precision and efficacy. However, there is a need for future studies with big data before the emergent technology finds application in a real clinical setting. CLINICAL RELEVANCE The implementation of AI models in maxillofacial CASP workflows could minimize a surgeon's workload and increase efficiency and consistency of the planning process, meanwhile enhancing the patient-specific predictability.
14
von Ende E, Ryan S, Crain MA, Makary MS. Artificial Intelligence, Augmented Reality, and Virtual Reality Advances and Applications in Interventional Radiology. Diagnostics (Basel) 2023; 13:892. [PMID: 36900036 PMCID: PMC10000832 DOI: 10.3390/diagnostics13050892]
Abstract
Artificial intelligence (AI) uses computer algorithms to process and interpret data as well as perform tasks, while continuously redefining itself. Machine learning, a subset of AI, is based on reverse training in which evaluation and extraction of data occur from exposure to labeled examples. AI is capable of using neural networks to extract more complex, high-level data, even from unlabeled data sets, and better emulate, or even exceed, the human brain. Advances in AI have and will continue to revolutionize medicine, especially the field of radiology. Compared to the field of interventional radiology (IR), AI innovations in the field of diagnostic radiology are more widely understood and used, although still with significant potential and growth on the horizon. Additionally, AI is closely related and often incorporated into the technology and programming of augmented reality, virtual reality, and radiogenomic innovations, which have the potential to enhance the efficiency and accuracy of radiological diagnoses and treatment planning. Many barriers still limit the integration of AI applications into the clinical practice and dynamic procedures of interventional radiology. Despite these barriers to implementation, AI in IR continues to advance, and the continued development of machine learning and deep learning places interventional radiology in a unique position for exponential growth. This review describes the current and possible future applications of artificial intelligence, radiogenomics, and augmented and virtual reality in interventional radiology, while also describing the challenges and limitations that must be addressed before these applications can be fully implemented into common clinical practice.
15
Hernández-González J, Valls O, Torres-Martín A, Cerquides J. Modeling three sources of uncertainty in assisted reproductive technologies with probabilistic graphical models. Comput Biol Med 2022; 150:106160. [PMID: 36242813 DOI: 10.1016/j.compbiomed.2022.106160]
Abstract
Embryo selection is a critical step in assisted reproduction: good selection criteria are expected to increase the probability of inducing a pregnancy. Machine learning techniques have been applied for implantation prediction or embryo quality assessment, which embryologists can use to make a decision about embryo selection. However, this is a highly uncertain real-world problem, and current proposals do not always model all the sources of uncertainty. We present a novel probabilistic graphical model that accounts for three different sources of uncertainty: the standard embryo and cycle viability, and a third one that represents any unknown factor that can drive a treatment to failure in otherwise perfect conditions. We derive a parametric learning method based on the Expectation-Maximization strategy, which accounts for uncertainty issues. We empirically analyze the model on a real database consisting of 604 cycles (3125 embryos) carried out at Hospital Donostia (Spain). Embryologists followed the protocol of the Spanish Association for Reproduction Biology Studies (ASEBIR), based on morphological features, for embryo selection. Our model predictions are correlated with the ASEBIR protocol, which validates our model. The benefits of accounting for the different sources of uncertainty and the importance of the cycle characteristics are shown. Considering only transferred embryos, our model does not further discriminate them as implanted or failed, suggesting that the ASEBIR protocol could be understood as a thorough summary of the available morphological features.
Affiliation(s)
- Olga Valls: Departament de Matemàtiques i Informàtica, Universitat de Barcelona (UB), 08007 Barcelona, Spain
- Adrián Torres-Martín: Department of Information and Communications Engineering, Universitat Autònoma de Barcelona, 08193 Cerdanyola del Vallès, Spain
- Jesús Cerquides: Artificial Intelligence Research Institute (IIIA-CSIC), 08193 Bellaterra, Spain
16
Merchant SA, Nadkarni P, Shaikh MJS. Augmentation of literature review of COVID-19 radiology. World J Radiol 2022; 14:342-351. [PMID: 36186515 PMCID: PMC9521431 DOI: 10.4329/wjr.v14.i9.342]
Abstract
We suggest an augmentation of the excellent comprehensive review article titled “Comprehensive literature review on the radiographic findings, imaging modalities, and the role of radiology in the coronavirus disease 2019 (COVID-19) pandemic” under the following categories: (1) “Inclusion of additional radiological features, related to pulmonary infarcts and to COVID-19 pneumonia”; (2) “Amplified discussion of cardiovascular COVID-19 manifestations and the role of cardiac magnetic resonance imaging in monitoring and prognosis”; (3) “Imaging findings related to fluorodeoxyglucose positron emission tomography, optical, thermal and other imaging modalities/devices, including ‘intelligent edge’ and other remote monitoring devices”; (4) “Artificial intelligence in COVID-19 imaging”; (5) “Additional annotations to the radiological images in the manuscript to illustrate the additional signs discussed”; and (6) “A minor correction to a passage on pulmonary destruction”.
Affiliation(s)
- Prakash Nadkarni: College of Nursing, University of Iowa, Iowa City, IA 52242, United States
- Mohd Javed Saifullah Shaikh: Department of Radiology, North Bengal Neuro Centre - Jupiter MRI & Diagnostic Centre, Siliguri 734003, West Bengal, India
17
Laino ME, Ammirabile A, Lofino L, Mannelli L, Fiz F, Francone M, Chiti A, Saba L, Orlandi MA, Savevski V. Artificial Intelligence Applied to Pancreatic Imaging: A Narrative Review. Healthcare (Basel) 2022; 10:1511. [PMID: 36011168 PMCID: PMC9408381 DOI: 10.3390/healthcare10081511]
Abstract
The diagnosis, evaluation, and treatment planning of pancreatic pathologies usually require the combined use of different imaging modalities, mainly, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). Artificial intelligence (AI) has the potential to transform the clinical practice of medical imaging and has been applied to various radiological techniques for different purposes, such as segmentation, lesion detection, characterization, risk stratification, or prediction of response to treatments. The aim of the present narrative review is to assess the available literature on the role of AI applied to pancreatic imaging. Up to now, the use of computer-aided diagnosis (CAD) and radiomics in pancreatic imaging has proven to be useful for both non-oncological and oncological purposes and represents a promising tool for personalized approaches to patients. Although great developments have occurred in recent years, it is important to address the obstacles that still need to be overcome before these technologies can be implemented into our clinical routine, mainly considering the heterogeneity among studies.
Affiliation(s)
- Maria Elena Laino (corresponding author): Artificial Intelligence Center, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Angela Ammirabile (corresponding author): Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy; Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Ludovica Lofino: Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy; Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Francesco Fiz: Nuclear Medicine Unit, Department of Diagnostic Imaging, E.O. Ospedali Galliera, 56321 Genoa, Italy; Department of Nuclear Medicine and Clinical Molecular Imaging, University Hospital, 72074 Tübingen, Germany
- Marco Francone: Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy; Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Arturo Chiti: Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy; Department of Nuclear Medicine, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Luca Saba: Department of Radiology, University of Cagliari, 09124 Cagliari, Italy
- Victor Savevski: Artificial Intelligence Center, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
18
Tekin AB, Yassa M, Birol İlter P, Yavuz E, Önden B, Usta C, Budak D, Günkaya OS, Çavuşoğlu G, Taymur BD, Tuğ N. COVID-19 related maternal mortality cases in associated with Delta and Omicron waves and the role of lung ultrasound. Turk J Obstet Gynecol 2022; 19:88-97. [PMID: 35770508 PMCID: PMC9249361 DOI: 10.4274/tjod.galenos.2022.36937]
Abstract
Objective: To present coronavirus disease-2019 (COVID-19) related maternal mortality in relation to the Delta and Omicron waves and to investigate the role of lung ultrasound (LUS) in estimating mortality. Materials and Methods: This retrospective cohort study was conducted in the obstetrics and gynecology clinic of a tertiary pandemic hospital between March 2020 and January 2022. Hospitalized pregnant women with a COVID-19 diagnosis and maternal deaths were studied in relation to the Delta and Omicron waves. The relationship between LUS scores of hospitalized patients and maternal mortality was explored. Results: One thousand and sixty-five pregnant women were hospitalized because of COVID-19 infection. Fifty-one (4.79%) of these patients had critical illness, 96 (9.01%) had severe illness, 62 (5.82%) were admitted to the intensive care unit, and 28 (2.63%) of all hospitalized pregnant women died. Of the 1,065 patients, 783 (73.5%) were hospitalized before the Delta wave, with a maternal mortality rate of 1.28% (10/783); 243 (22.8%) were hospitalized during the Delta wave, with a maternal mortality rate of 7% (17/243) [relative risk (RR)=5.478, 95% confidence interval (CI) (2.54-11.8), z=4.342, p<0.001]. During the Omicron wave, 39 (3.66%) patients were hospitalized and the maternal mortality rate was 2.56% (1/39). Maternal mortality rates according to LUS scores were 0.37% (1/273) for LUS 0, 0.72% (2/277) for LUS 1, 2.58% (10/387) for LUS 2, and 11.72% (15/128) for LUS 3 [LUS 3 vs. others: RR=8.447, 95% CI (4.11-17.34), z=5.814, p<0.0001]. There were no vaccinated patients in the study cohort. Conclusion: The maternal mortality rate was relatively high, particularly during the Delta wave, at our referral center. The Delta wave, delayed vaccination, and vaccine hesitancy of pregnant women might have played important roles in maternal mortality. Higher LUS scores should warn clinicians of an increased risk of maternal death.
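The relative risk, confidence interval, and z value quoted in this abstract (RR=5.478, 95% CI 2.54-11.8, z=4.342) can be reproduced from the raw counts with the standard log-RR normal approximation. This is the generic textbook formula, not the study's own code:

```python
import math

def relative_risk(a, n1, c, n2, z_crit=1.96):
    """Relative risk of two proportions (a/n1 vs. c/n2) with a 95% CI on the log scale."""
    rr = (a / n1) / (c / n2)
    # Standard error of ln(RR) for two independent binomial samples
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - z_crit * se)
    hi = math.exp(math.log(rr) + z_crit * se)
    z = math.log(rr) / se
    return rr, (lo, hi), z

# Delta wave (17 deaths / 243 admissions) vs. pre-Delta (10 / 783)
rr, (lo, hi), z = relative_risk(17, 243, 10, 783)
print(f"RR={rr:.3f}, 95% CI ({lo:.2f}-{hi:.2f}), z={z:.3f}")
```

The same function applied to the LUS 3 group (15/128) against all other LUS scores combined (13/937) reproduces the reported RR=8.447.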
Affiliation(s)
- Arzu Bilge Tekin, Pınar Birol İlter, Emre Yavuz, Betül Önden, Canberk Usta, Doğuş Budak, Osman Samet Günkaya, Gül Çavuşoğlu, Bilge Doğan Taymur, Niyazi Tuğ: University of Health Sciences Turkey, Şehit Prof. Dr. İlhan Varank Sancaktepe Training and Research Hospital, Clinic of Obstetrics and Gynecology, İstanbul, Turkey
- Murat Yassa: Bahçeşehir University, VM Medical Park Maltepe Hospital, Clinic of Obstetrics and Gynecology, İstanbul, Turkey
19
Militello C, Rundo L, Dimarco M, Orlando A, Woitek R, D'Angelo I, Russo G, Bartolotta TV. 3D DCE-MRI Radiomic Analysis for Malignant Lesion Prediction in Breast Cancer Patients. Acad Radiol 2022; 29:830-840. [PMID: 34600805 DOI: 10.1016/j.acra.2021.08.024]
Abstract
RATIONALE AND OBJECTIVES To develop and validate a radiomic model, with radiomic features extracted from breast Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) from a 1.5T scanner, for predicting the malignancy of masses with enhancement. Images were acquired using an 8-channel breast coil in the axial plane. The rationale behind this study is to show the feasibility of a radiomics-powered model that could be integrated into the clinical practice by exploiting only standard-of-care DCE-MRI with the goal of reducing the required image pre-processing (ie, normalization and quantitative imaging map generation). MATERIALS AND METHODS 107 radiomic features were extracted from a manually annotated dataset of 111 patients, which was split into discovery and test sets. A feature calibration and pre-processing step was performed to find only robust non-redundant features. An in-depth discovery analysis was performed to define a predictive model: for this purpose, a Support Vector Machine (SVM) was trained in a nested 5-fold cross-validation scheme, by exploiting several unsupervised feature selection methods. The predictive model performance was evaluated in terms of Area Under the Receiver Operating Characteristic (AUROC), specificity, sensitivity, PPV and NPV. The test was performed on unseen held-out data. RESULTS The model combining Unsupervised Discriminative Feature Selection (UDFS) and SVMs on average achieved the best performance on the blinded test set: AUROC = 0.725±0.091, sensitivity = 0.709±0.176, specificity = 0.741±0.114, PPV = 0.72±0.093, and NPV = 0.75±0.114. CONCLUSION In this study, we built a radiomic predictive model based on breast DCE-MRI, using only the strongest enhancement phase, with promising results in terms of accuracy and specificity in the differentiation of malignant from benign breast lesions.
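The nested 5-fold cross-validation scheme described in this abstract can be illustrated at the index level. The function names below are illustrative, and the actual UDFS feature selection and SVM training (which would run inside the inner loop) are omitted:

```python
import numpy as np

def kfold_indices(n, k, rng):
    """Shuffled k-fold split over n samples: yields (train_idx, test_idx) pairs."""
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

def nested_cv(n, outer_k=5, inner_k=5, seed=0):
    """Outer (train, test) splits plus inner splits of each outer train set.
    Feature selection and hyper-parameter tuning happen only in the inner loop,
    so the outer test fold is never seen during model selection.
    Inner indices are positions within the corresponding outer train set."""
    rng = np.random.default_rng(seed)
    plan = []
    for outer_train, outer_test in kfold_indices(n, outer_k, rng):
        inner = list(kfold_indices(len(outer_train), inner_k, rng))
        plan.append((outer_train, outer_test, inner))
    return plan

plan = nested_cv(111)  # 111 patients, as in the dataset above
outer_train, outer_test, inner = plan[0]
assert set(outer_train).isdisjoint(outer_test)  # no leakage into evaluation
```

The point of the nesting is that the performance reported on each outer test fold reflects a model whose feature selection never saw that fold, which is what makes the held-out estimates honest.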
20
Wickramasinghe SU, Weerakoon TI, Gamage DPJ, Bandara DMS, Pallewatte DA. Identification of Radiomic Features as an Imaging Marker to Differentiate Benign and Malignant Breast Masses Based on Magnetic Resonance Imaging. IMAGING 2022. [DOI: 10.1556/1647.2022.00065]
Abstract
Background: Breast cancer is one of the most common cancers among women globally, and early identification is known to improve patient outcomes. Therefore, the main aim of this study is to identify the essential radiomic features as an image marker and compare the diagnostic feasibility of feature parameters derived from radiomics analysis and conventional Magnetic Resonance Imaging (MRI) to differentiate benign and malignant breast masses. Methods and Material: T1-weighted Dynamic Contrast-Enhanced (DCE) breast MR axial images of 151 (benign (79) and malignant (72)) patients were chosen. Regions of interest were selected from each lesion using both manual and semi-automatic segmentation, and 382 radiomic features were computed on the selected regions. A random forest model was employed to detect the most important features that differentiate benign and malignant breast masses. The ten most important radiomic features, ranked by the Gini index, were obtained from manual and semi-automatic segmentation and used to train a support vector machine. MATLAB and IBM SPSS Statistics software were used for statistical analysis. Results: The accuracy (sensitivity) of the models built from the ten most significant features obtained from manual and semi-automatic segmentation was 0.815 (0.84) and 0.821 (0.87), respectively. The top 10 features obtained from manual delineation and semi-automatic segmentation showed a significant difference (p<0.05) between benign and malignant breast lesions. Conclusion: This radiomics analysis based on DCE-BMRI revealed distinct radiomic features to differentiate benign and malignant breast masses. Therefore, radiomics analysis can be used as a supporting tool in detecting breast MRI lesions.
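The Gini index used in this study to rank features can be illustrated with the generic impurity definition G = 1 − Σ p_k². The split below is illustrative; only the 79 benign / 72 malignant class counts come from the abstract:

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity G = 1 - sum(p_k^2) of a label multiset."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_gain(parent, left, right):
    """Impurity decrease produced by a split; in a random forest, features whose
    splits yield larger average decreases receive higher Gini importance."""
    n = len(parent)
    weighted = (len(left) / n) * gini_impurity(left) + \
               (len(right) / n) * gini_impurity(right)
    return gini_impurity(parent) - weighted

# 79 benign ('B') vs. 72 malignant ('M') lesions, as in the dataset above
parent = ['B'] * 79 + ['M'] * 72
# A feature that splits the classes perfectly recovers the full parent impurity
perfect = gini_gain(parent, ['B'] * 79, ['M'] * 72)
print(round(gini_impurity(parent), 3), round(perfect, 3))
```

Ranking features by this averaged impurity decrease, then feeding the top ten into a separate classifier, mirrors the forest-then-SVM pipeline the abstract describes.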
Affiliation(s)
- Sachini Udara Wickramasinghe: BSc (Hons) Radiography, Department of Radiography and Radiotherapy, Faculty of Allied Health Sciences, General Sir John Kotelawala Defence University, Rathmalana, Sri Lanka
- Thushara Indika Weerakoon: BSc (Hons) Radiography, Department of Radiography and Radiotherapy, Faculty of Allied Health Sciences, General Sir John Kotelawala Defence University, Rathmalana, Sri Lanka
21
Common and Uncommon Errors in Emergency Ultrasound. Diagnostics (Basel) 2022; 12:631. [PMID: 35328184 PMCID: PMC8947314 DOI: 10.3390/diagnostics12030631]
Abstract
Errors in emergency ultrasound (US) have become an increasing problem in recent years, owing to several unique features related both to the inherent characteristics of the discipline and to its latest developments, which every medical operator should be aware of. Because of the subjective nature of the interpretation of emergency US findings, the modality is more prone to errors than other diagnostic imaging modalities, and the misinterpretation of US images should therefore be considered a serious risk in diagnosis. The etiology of error is multi-factorial: it depends on environmental factors, the patient, and the technical skills of the operator; it is influenced by intrinsic US artifacts, poor clinical correlation, US-setting errors, and anatomical variants; and it is also conditioned by the lack of a methodologically correct clinical approach and by excessive diagnostic confidence. In this review, we evaluate the common and uncommon sources of diagnostic errors in emergency US during clinical practice, showing how to recognize and avoid them.
22
Tahara H. Pictorial research of pancreas with artificial intelligence and simulacra in the works of Fellini. Artif Intell Med Imaging 2021; 2:115-117. [DOI: 10.35711/aimi.v2.i6.115]
Abstract
This commentary arises from my reading of "Acute pancreatitis: A pictorial review of early pancreatic fluid collections" by Xiao. This perspective, related to the works of Fellini, might contribute to the future development of research on pancreatic diseases.
Affiliation(s)
- Hiroki Tahara: Faculty of Integrated Human Studies, Kyoto University, Kyoto 606-8501, Japan
23
Ballotin VR, Bigarella LG, Soldera J, Soldera J. Deep learning applied to the imaging diagnosis of hepatocellular carcinoma. Artif Intell Gastrointest Endosc 2021; 2:127-135. [DOI: 10.37126/aige.v2.i4.127]
Abstract
Each year, hepatocellular carcinoma is diagnosed in more than half a million people worldwide. It is the fifth most common cancer in men and the seventh most common cancer in women. Its diagnosis is currently made using imaging techniques, such as computed tomography and magnetic resonance imaging. For most cirrhotic patients, these methods are sufficient for diagnosis, foregoing the necessity of a liver biopsy. In order to improve outcomes and bypass obstacles, many companies and clinical centers have been trying to develop deep learning systems able to diagnose and classify liver nodules in the cirrhotic liver, among which neural networks are one of the most efficient approaches for accurate diagnosis. Despite the advances in deep learning systems for imaging diagnosis, many issues still need further development before such technologies become more useful in daily practice.
Affiliation(s)
- John Soldera: Computer Science, Federal Institute of Education, Science and Technology Farroupilha, Santo Ângelo 98806-700, RS, Brazil
- Jonathan Soldera: Clinical Gastroenterology, Universidade de Caxias do Sul, Caxias do Sul 95070-560, RS, Brazil
24
Stanzione A, Verde F, Romeo V, Boccadifuoco F, Mainenti PP, Maurea S. Radiomics and machine learning applications in rectal cancer: Current update and future perspectives. World J Gastroenterol 2021; 27:5306-5321. [PMID: 34539134 PMCID: PMC8409167 DOI: 10.3748/wjg.v27.i32.5306]
Abstract
The high incidence of rectal cancer in both sexes makes it one of the most common tumors, with significant morbidity and mortality rates. To define the best treatment option and optimize patient outcome, several rectal cancer biological variables must be evaluated. Currently, medical imaging plays a crucial role in the characterization of this disease, and it often requires a multimodal approach. Magnetic resonance imaging is the first-choice imaging modality for local staging and restaging and can be used to detect high-risk prognostic factors. Computed tomography is widely adopted for the detection of distant metastases. However, conventional imaging has recognized limitations, and many rectal cancer characteristics remain assessable only after surgery and histopathology evaluation. There is a growing interest in artificial intelligence applications in medicine, and imaging is by no means an exception. The introduction of radiomics, which allows the extraction of quantitative features that reflect tumor heterogeneity, enables the mining of data in medical images and has paved the way for the identification of potential new imaging biomarkers. To manage such a huge amount of data, the use of machine learning algorithms has been proposed. Indeed, without prior explicit programming, they can be employed to build prediction models to support clinical decision making. In this review, current applications and future perspectives of artificial intelligence in medical imaging of rectal cancer are presented, with an imaging modality-based approach and a keen eye on unsolved issues. The results are promising, but the road ahead for translation in clinical practice is rather long.
Affiliation(s)
- Arnaldo Stanzione, Francesco Verde, Valeria Romeo, Francesca Boccadifuoco, Simone Maurea: Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples 80131, Italy
- Pier Paolo Mainenti: Institute of Biostructures and Bioimaging, National Council of Research, Napoli 80131, Italy
25
Kragh MF, Karstoft H. Embryo selection with artificial intelligence: how to evaluate and compare methods? J Assist Reprod Genet 2021; 38:1675-1689. [PMID: 34173914 PMCID: PMC8324599 DOI: 10.1007/s10815-021-02254-6]
Abstract
Embryo selection within in vitro fertilization (IVF) is the process of evaluating qualities of fertilized oocytes (embryos) and selecting the best embryo(s) available within a patient cohort for subsequent transfer or cryopreservation. In recent years, artificial intelligence (AI) has been used extensively to improve and automate the embryo ranking and selection procedure by extracting relevant information from embryo microscopy images. The AI models are evaluated based on their ability to identify the embryo(s) with the highest chance(s) of achieving a successful pregnancy. Whether such evaluations should be based on ranking performance or pregnancy prediction, however, seems to divide studies. As such, a variety of performance metrics are reported, and comparisons between studies are often made on different outcomes and data foundations. Moreover, superiority of AI methods over manual human evaluation is often claimed based on retrospective data, without any mentions of potential bias. In this paper, we provide a technical view on some of the major topics that divide how current AI models are trained, evaluated and compared. We explain and discuss the most common evaluation metrics and relate them to the two separate evaluation objectives, ranking and prediction. We also discuss when and how to compare AI models across studies and explain in detail how a selection bias is inevitable when comparing AI models against current embryo selection practice in retrospective cohort studies.
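A concrete way to see the ranking-versus-prediction distinction discussed in this abstract is that AUC equals the probability that a randomly chosen positive (e.g. an implanted embryo) is scored above a randomly chosen negative, so it is invariant to any order-preserving rescaling of the scores. A minimal sketch with made-up scores:

```python
def auc_by_ranking(pos_scores, neg_scores):
    """AUC = P(score of a positive > score of a negative), ties counted as 1/2.
    It depends only on the ordering of scores, which is why a model can rank
    embryos well even when its scores are poorly calibrated as probabilities."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.4]  # illustrative model scores for positive outcomes
neg = [0.7, 0.3, 0.2]  # illustrative model scores for negative outcomes

# Monotonically rescaling the scores changes calibration but not the AUC
assert auc_by_ranking(pos, neg) == auc_by_ranking(
    [s ** 3 for s in pos], [s ** 3 for s in neg]
)
```

This invariance is exactly why a high-AUC ranking model need not be a good pregnancy predictor: prediction additionally requires calibrated scores, which ranking metrics never test.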
Affiliation(s)
- Mikkel Fly Kragh: Department of Electrical and Computer Engineering, Aarhus University, Aarhus N, Denmark; Vitrolife A/S, Viby J, Denmark.
- Henrik Karstoft: Department of Electrical and Computer Engineering, Aarhus University, Aarhus N, Denmark.
26
Miyagi Y, Hata T, Bouno S, Koyanagi A, Miyake T. Recognition of facial expression of fetuses by artificial intelligence (AI). J Perinat Med 2021; 49:596-603. [PMID: 33548168] [DOI: 10.1515/jpm-2020-0537]
Abstract
OBJECTIVES To develop an artificial intelligence (AI) classifier that recognizes fetal facial expressions, which are considered related to fetal brain development, in a retrospective, non-interventional pilot study. METHODS Sonographic images of fetal faces were collected during routine practice from outpatient pregnant women with a singleton fetus at 19 to 38 weeks of gestation, from January 1, 2020, to September 30, 2020, with completely de-identified data. The images were classified into seven categories: eye blinking, mouthing, face without any expression, scowling, smiling, tongue expulsion, and yawning. Categories with fewer than 10 fetuses were eliminated before preparation. A deep learning AI classifier was then trained on the data, and statistics such as accuracy on the test dataset and AI confidence score profiles for each category per image were obtained for all data. RESULTS The numbers of fetuses/images in the rated categories were 14/147, 23/302, 33/320, 8/55, and 10/72 for eye blinking, mouthing, face without any expression, scowling, and yawning, respectively. The accuracy of the AI fetal facial expression classifier on the entire test dataset was 0.985. The accuracy/sensitivity/specificity values were 0.996/0.993/1.000, 0.992/0.986/1.000, 0.985/1.000/0.979, 0.996/0.888/1.000, and 1.000/1.000/1.000 for the eye blinking, mouthing, face without any expression, scowling, and yawning categories, respectively. CONCLUSIONS The AI classifier has the potential to objectively classify fetal facial expressions, and AI may advance fetal brain development research using ultrasound.
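The per-category accuracy/sensitivity/specificity triples reported in this abstract are standard one-vs-rest statistics over a multiclass confusion table. A minimal sketch, using hypothetical labels and a hypothetical helper name (none of this is the paper's code or data): each category in turn is treated as the positive class and all others as negative.

```python
def one_vs_rest_metrics(y_true, y_pred, cls):
    """Accuracy, sensitivity, and specificity for one class vs. all others."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p != cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")  # recall on cls
    specificity = tn / (tn + fp) if tn + fp else float("nan")  # recall on rest
    return accuracy, sensitivity, specificity

# Hypothetical true and predicted labels over three expression categories.
y_true = ["blink", "blink", "mouth", "mouth", "yawn", "yawn"]
y_pred = ["blink", "mouth", "mouth", "mouth", "yawn", "blink"]
print(one_vs_rest_metrics(y_true, y_pred, "blink"))  # accuracy 4/6, sens 0.5, spec 0.75
```

This is why the abstract can report a distinct triple for each category: the same set of predictions yields a different binary confusion table for each class taken as positive.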
Affiliation(s)
- Yasunari Miyagi: Department of Gynecology, Miyake Ofuku Clinic, Okayama, Japan; Medical Data Labo, Okayama, Japan; Department of Gynecologic Oncology, Saitama Medical University International Medical Center, Hidaka, Japan.
- Toshiyuki Hata: Department of Obstetrics and Gynecology, Miyake Clinic, Okayama, Japan; Department of Perinatology and Gynecology, Kagawa University Graduate School of Medicine, Kagawa, Japan.
- Saori Bouno: Department of Obstetrics and Gynecology, Miyake Clinic, Okayama, Japan.
- Aya Koyanagi: Department of Obstetrics and Gynecology, Miyake Clinic, Okayama, Japan.
- Takahito Miyake: Department of Gynecology, Miyake Ofuku Clinic, Okayama, Japan; Department of Obstetrics and Gynecology, Miyake Clinic, Okayama, Japan.
27
A Novel Approach for Coronary Artery Disease Diagnosis using Hybrid Particle Swarm Optimization based Emotional Neural Network. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.09.005]