1.
Rossi J, Wingfield R, Cimino-Mathews A. Breast calcifications on mammography from systemic amyloidosis: A case report. Radiol Case Rep 2024;19:3740-3747. PMID: 38983295; PMCID: PMC11231514; DOI: 10.1016/j.radcr.2024.05.083.
Abstract
Calcifications on mammography from systemic disease at times meet diagnostic criteria for histologic sampling to exclude malignancy. We present a case of bilateral groups of new calcifications on mammography that yielded amyloidosis on core biopsy. Awareness of our patient's known diagnosis of systemic light chain amyloidosis (AL) prompted use of Congo red staining to confirm the histologic diagnosis. Knowledge of systemic diseases with possible manifestations on mammography can facilitate cogent and clinically relevant radiology-pathology correlation.
Affiliation(s)
- Joanna Rossi
- Johns Hopkins University School of Medicine, 601 N. Caroline Street, Baltimore, MD 21287, USA
- Rebecca Wingfield
- Johns Hopkins University School of Medicine, 601 N. Caroline Street, Baltimore, MD 21287, USA
- Ashley Cimino-Mathews
- Johns Hopkins University School of Medicine, 601 N. Caroline Street, Baltimore, MD 21287, USA
2.
Wang R, Guo W, Wang Y, Zhou X, Leung JC, Yan S, Cui L. Hybrid multimodal fusion for graph learning in disease prediction. Methods 2024;229:41-48. PMID: 38880433; DOI: 10.1016/j.ymeth.2024.06.003.
Abstract
Graph neural networks (GNNs) have gained significant attention in disease prediction where the latent embeddings of patients are modeled as nodes and the similarities among patients are represented through edges. The graph structure, which determines how information is aggregated and propagated, plays a crucial role in graph learning. Recent approaches typically create graphs based on patients' latent embeddings, which may not accurately reflect their real-world closeness. Our analysis reveals that raw data, such as demographic attributes and laboratory results, offers a wealth of information for assessing patient similarities and can serve as a compensatory measure for graphs constructed exclusively from latent embeddings. In this study, we first construct adaptive graphs from both latent representations and raw data respectively, and then merge these graphs via weighted summation. Given that the graphs may contain extraneous and noisy connections, we apply degree-sensitive edge pruning and kNN sparsification techniques to selectively sparsify and prune these edges. We conducted intensive experiments on two diagnostic prediction datasets, and the results demonstrate that our proposed method surpasses current state-of-the-art techniques.
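A minimal sketch of the two core graph operations this abstract describes, building similarity graphs from latent embeddings and from raw data, fusing them by weighted summation, and sparsifying with kNN, is given below. All tensor shapes and the fusion weight are illustrative assumptions, and the paper's degree-sensitive edge pruning and learned components are omitted:

```python
import torch

def cosine_adjacency(x: torch.Tensor) -> torch.Tensor:
    """Dense similarity graph over rows (patients) of a feature matrix."""
    x = torch.nn.functional.normalize(x, dim=1)
    return x @ x.T

def knn_sparsify(adj: torch.Tensor, k: int) -> torch.Tensor:
    """Keep only each node's k strongest edges, then symmetrize."""
    vals, idx = torch.topk(adj, k, dim=1)
    sparse = torch.zeros_like(adj).scatter_(1, idx, vals)
    return torch.maximum(sparse, sparse.T)

# Toy cohort: 100 patients with 32-dim latent embeddings and 16 raw attributes.
latent = torch.randn(100, 32)
raw = torch.randn(100, 16)

alpha = 0.6  # hypothetical fusion weight between the two graphs
fused = alpha * cosine_adjacency(latent) + (1 - alpha) * cosine_adjacency(raw)
fused = knn_sparsify(fused, k=10)  # prune extraneous and noisy connections
```

The fused adjacency matrix would then drive neighborhood aggregation in a GNN, with the raw-data graph acting as the compensatory signal the authors describe.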
Affiliation(s)
- Wei Guo
- Shandong University, Jinan, 250210, China.
- Xin Zhou
- Nanyang Technological University, Singapore.
- Shuo Yan
- Shandong University, Jinan, 250210, China.
- Lizhen Cui
- Shandong University, Jinan, 250210, China.
3.
Man K. Multimodal Data Fusion to Detect Preknowledge Test-Taking Behavior Using Machine Learning. Educ Psychol Meas 2024;84:753-779. PMID: 39055093; PMCID: PMC11268392; DOI: 10.1177/00131644231193625.
Abstract
In various fields, including college admission, medical board certifications, and military recruitment, high-stakes decisions are frequently made based on scores obtained from large-scale assessments. These decisions necessitate precise and reliable scores that enable valid inferences to be drawn about test-takers. However, the ability of such tests to provide reliable, accurate inference on a test-taker's performance could be jeopardized by aberrant test-taking practices, for instance, practicing real items prior to the test. As a result, it is crucial for administrators of such assessments to develop strategies that detect potential aberrant test-takers after data collection. The aim of this study is to explore the implementation of machine learning methods in combination with multimodal data fusion strategies that integrate bio-information technology, such as eye-tracking, and psychometric measures, including response times and item responses, to detect aberrant test-taking behaviors in technology-assisted remote testing settings.
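The fusion strategy sketched in this abstract is, at its simplest, feature-level concatenation of per-modality blocks followed by an off-the-shelf classifier. The toy example below invents all feature names and shapes purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500  # test-takers

eye_tracking = rng.normal(size=(n, 12))    # e.g., fixation/saccade summaries
response_times = rng.normal(size=(n, 40))  # log response time per item
item_responses = rng.integers(0, 2, size=(n, 40)).astype(float)
labels = rng.integers(0, 2, size=n)        # 1 = flagged aberrant behavior

# Feature-level fusion: concatenate the modality blocks column-wise.
fused = np.hstack([eye_tracking, response_times, item_responses])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, fused, labels, cv=5, scoring="roc_auc").mean())
```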
Affiliation(s)
- Kaiwen Man
- The University of Alabama, Tuscaloosa, USA
4.
Cai JC, Nakai H, Kuanar S, Froemming AT, Bolan CW, Kawashima A, Takahashi H, Mynderse LA, Dora CD, Humphreys MR, Korfiatis P, Rouzrokh P, Bratt AK, Conte GM, Erickson BJ, Takahashi N. Fully Automated Deep Learning Model to Detect Clinically Significant Prostate Cancer at MRI. Radiology 2024;312:e232635. PMID: 39105640; DOI: 10.1148/radiol.232635.
Abstract
Background: Multiparametric MRI can help identify clinically significant prostate cancer (csPCa) (Gleason score ≥7) but is limited by reader experience and interobserver variability. In contrast, deep learning (DL) produces deterministic outputs.
Purpose: To develop a DL model to predict the presence of csPCa by using patient-level labels without information about tumor location and to compare its performance with that of radiologists.
Materials and Methods: Data from patients without known csPCa who underwent MRI from January 2017 to December 2019 at one of multiple sites of a single academic institution were retrospectively reviewed. A convolutional neural network was trained to predict csPCa from T2-weighted images, diffusion-weighted images, apparent diffusion coefficient maps, and T1-weighted contrast-enhanced images. The reference standard was pathologic diagnosis. Radiologist performance was evaluated as follows: radiology reports were used for the internal test set, and four radiologists' PI-RADS ratings were used for the external (ProstateX) test set. The performance was compared using areas under the receiver operating characteristic curves (AUCs) and the DeLong test. Gradient-weighted class activation maps (Grad-CAMs) were used to show tumor localization.
Results: Among 5735 examinations in 5215 patients (mean age, 66 years ± 8 [SD]; all male), 1514 examinations (1454 patients) showed csPCa. In the internal test set (400 examinations), the AUC was 0.89 for both the DL classifier and the radiologists (P = .88). In the external test set (204 examinations), the AUC was 0.86 for the DL classifier and 0.84 for the radiologists (P = .68). The DL classifier plus radiologists had an AUC of 0.89 (P < .001). Grad-CAMs demonstrated activation over the csPCa lesion in 35 of 38 and 56 of 58 true-positive examinations in the internal and external test sets, respectively.
Conclusion: The performance of the DL model was not different from that of radiologists in the detection of csPCa at MRI, and Grad-CAMs localized the tumor. © RSNA, 2024.
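The Grad-CAM localization reported here follows a standard recipe: backpropagate a class score to the last convolutional block, average the gradients into channel weights, and form a weighted sum of the activations. The sketch below is a generic minimal version (not the authors' code), shown on an untrained ResNet50:

```python
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()  # a trained csPCa classifier would go here
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)  # toy stand-in for a multiparametric input
model(x)[0].max().backward()     # gradient of the top class score

w = grads["g"].mean(dim=(2, 3), keepdim=True)  # channel importance weights
cam = torch.relu((w * acts["a"]).sum(dim=1))   # coarse class-activation map
cam = cam / cam.max()                          # normalize for overlay
```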
Affiliation(s)
- Jason C Cai, Hirotsugu Nakai, Shiba Kuanar, Adam T Froemming, Candice W Bolan, Akira Kawashima, Hiroaki Takahashi, Lance A Mynderse, Chandler D Dora, Mitchell R Humphreys, Panagiotis Korfiatis, Pouria Rouzrokh, Alexander K Bratt, Gian Marco Conte, Bradley J Erickson, Naoki Takahashi
- From the Departments of Radiology (J.C.C., H.N., S.K., A.T.F., H.T., P.K., P.R., A.K.B., G.M.C., B.J.E., N.T.) and Urology (L.A.M.), Mayo Clinic, 200 First St SW, Rochester, MN 55905; Department of Radiology, Massachusetts General Hospital, Boston, Mass (J.C.C.); Departments of Radiology (C.W.B.) and Urology (C.D.D.), Mayo Clinic, Jacksonville, Fla; and Departments of Radiology (A.K.) and Urology (M.R.H.), Mayo Clinic, Scottsdale, Ariz
5.
Ma W, Chen C, Gong Y, Chan NY, Jiang M, Mak CHK, Abrigo JM, Dou Q. Causal Effect Estimation on Imaging and Clinical Data for Treatment Decision Support of Aneurysmal Subarachnoid Hemorrhage. IEEE Trans Med Imaging 2024;43:2778-2789. PMID: 38635381; DOI: 10.1109/tmi.2024.3390812.
Abstract
Aneurysmal subarachnoid hemorrhage is a medical emergency of the brain with high mortality and poor prognosis. Causal effect estimation of treatment strategies on patient outcomes is crucial for treatment decision-making in aneurysmal subarachnoid hemorrhage. However, most existing studies on treatment decision support for this disease are unable to simultaneously compare the potential outcomes of different treatments for a patient. Furthermore, these studies fail to harmoniously integrate imaging data with non-imaging clinical data, both of which are useful in clinical scenarios. In this paper, we estimate the causal effect of various treatments on patients with aneurysmal subarachnoid hemorrhage by integrating plain CT with non-imaging clinical data, which are represented as structured tabular data. Specifically, we first propose a novel scheme that uses a multi-modality confounders distillation architecture to predict the treatment outcome and treatment assignment simultaneously. With these distilled confounder features, we design an imaging and non-imaging interaction representation learning strategy that uses the complementary information extracted from different modalities to balance the feature distribution of different treatment groups. We conducted extensive experiments using a clinical dataset of 656 subarachnoid hemorrhage cases collected from the Hospital Authority Data Collaboration Laboratory in Hong Kong. Our method shows consistent improvements on the evaluation metrics of treatment effect estimation, achieving state-of-the-art results over strong competitors. Code is released at https://github.com/med-air/TOP-aSAH.
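Predicting "treatment outcome and treatment assignment simultaneously" is structurally similar to causal-inference networks with a shared representation and separate outcome and propensity heads. The sketch below is a generic stand-in for that pattern (binary treatment, invented sizes), not the paper's multi-modality confounder-distillation architecture:

```python
import torch
import torch.nn as nn

class TwoHeadCausalNet(nn.Module):
    """Shared confounder encoding with per-treatment outcome heads and a
    treatment-assignment (propensity) head."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.outcome_t0 = nn.Linear(hidden, 1)  # potential outcome, untreated
        self.outcome_t1 = nn.Linear(hidden, 1)  # potential outcome, treated
        self.propensity = nn.Linear(hidden, 1)

    def forward(self, x):
        z = self.encoder(x)
        return (self.outcome_t0(z), self.outcome_t1(z),
                torch.sigmoid(self.propensity(z)))

net = TwoHeadCausalNet(in_dim=32)
x = torch.randn(8, 32)        # toy fused CT + clinical features
y0, y1, e = net(x)
cate = (y1 - y0).squeeze(1)   # per-patient estimated treatment effect
```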
6.
Lin J, Yang J, Yin M, Tang Y, Chen L, Xu C, Zhu S, Gao J, Liu L, Liu X, Gu C, Huang Z, Wei Y, Zhu J. Development and Validation of Multimodal Models to Predict the 30-Day Mortality of ICU Patients Based on Clinical Parameters and Chest X-Rays. J Imaging Inform Med 2024;37:1312-1322. PMID: 38448758; DOI: 10.1007/s10278-024-01066-1.
Abstract
We aimed to develop and validate multimodal ICU patient prognosis models that combine clinical parameter data and chest X-ray (CXR) images. A total of 3798 subjects with clinical parameters and CXR images were extracted from the Medical Information Mart for Intensive Care IV (MIMIC-IV) database and an external hospital (the test set). The primary outcome was 30-day mortality after ICU admission. Automated machine learning (AutoML) and convolutional neural networks (CNNs) were used to construct single-modal models based on clinical parameters and CXR separately. An early fusion approach was used to integrate both modalities (clinical parameters and CXR) into a multimodal model named PrismICU. Compared with the single-modal models, namely the clinical-parameter model (AUC = 0.80, F1-score = 0.43) and the CXR model (AUC = 0.76, F1-score = 0.45), and with the APACHE II scoring system (AUC = 0.83, F1-score = 0.77), PrismICU (AUC = 0.95, F1-score = 0.95) showed improved performance in predicting 30-day mortality in the validation set. In the test set, PrismICU (AUC = 0.82, F1-score = 0.61) also outperformed the clinical-parameter model (AUC = 0.72, F1-score = 0.50), the CXR model (AUC = 0.71, F1-score = 0.36), and APACHE II (AUC = 0.62, F1-score = 0.50). PrismICU, which integrates clinical parameter data and CXR images, performed better than the single-modal models and the existing scoring system, supporting the potential of multimodal models based on structured data and imaging in clinical management.
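Early fusion of the kind used for PrismICU can be sketched as concatenating a CXR embedding with the clinical-parameter vector before a joint classification head; the backbone choice and layer sizes below are assumptions for illustration, not the authors' configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class EarlyFusionModel(nn.Module):
    def __init__(self, n_clinical: int):
        super().__init__()
        cnn = resnet18(weights=None)  # CXR encoder; load trained weights here
        cnn.fc = nn.Identity()        # expose the 512-dim image embedding
        self.cnn = cnn
        self.head = nn.Sequential(
            nn.Linear(512 + n_clinical, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, cxr, clinical):
        fused = torch.cat([self.cnn(cxr), clinical], dim=1)  # early fusion
        return self.head(fused)  # logit for 30-day mortality

model = EarlyFusionModel(n_clinical=20)
logit = model(torch.randn(4, 3, 224, 224), torch.randn(4, 20))
```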
Affiliation(s)
- Jiaxi Lin
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Jin Yang
- Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Minyue Yin
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Yuxiu Tang
- Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Liquan Chen
- Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Chang Xu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Shiqi Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Jingwen Gao
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Lu Liu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Xiaolin Liu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Chenqi Gu
- Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Zhou Huang
- Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Yao Wei
- Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China.
- Jinzhou Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China.
- Suzhou Clinical Center of Digestive Diseases, Suzhou, China.
7.
Sukegawa S, Tanaka F, Nakano K, Hara T, Ochiai T, Shimada K, Inoue Y, Taki Y, Nakai F, Nakai Y, Ishihama T, Miyazaki R, Murakami S, Nagatsuka H, Miyake M. Training high-performance deep learning classifier for diagnosis in oral cytology using diverse annotations. Sci Rep 2024;14:17591. PMID: 39080384; PMCID: PMC11289412; DOI: 10.1038/s41598-024-67879-w.
Abstract
The uncertainty of true labels in medical images hinders diagnosis owing to the variability across professionals when applying deep learning models. We used deep learning to obtain an optimal convolutional neural network (CNN) by adequately annotating data for oral exfoliative cytology, considering labels from multiple oral pathologists. Six whole-slide images were processed using QuPath and segmented into tiles. The images were labeled by three oral pathologists, resulting in 14,535 images with the corresponding pathologists' annotations. Data for which all three pathologists provided the same diagnosis were labeled as ground truth (GT) and used for testing. We investigated six models trained using the annotations of (1) pathologist A, (2) pathologist B, (3) pathologist C, (4) the GT, (5) majority voting, and (6) a probabilistic model. We divided the test data by cross-validation per slide dataset and examined the classification performance of the CNN with a ResNet50 baseline. Statistical evaluation was performed repeatedly and independently, using every slide 10 times as test data. For the area under the curve, three cases showed the highest values (0.861, 0.955, and 0.991) for the probabilistic model. Regarding accuracy, two cases showed the highest values (0.988 and 0.967). For the models using the individual pathologists' and GT annotations, many slides showed very low accuracy and large variations across tests. Hence, the classifier trained with probabilistic labels provided the optimal CNN for oral exfoliative cytology considering diagnoses from multiple pathologists. These results may lead to trusted medical artificial intelligence solutions that reflect the diverse diagnoses of various professionals.
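Training against probabilistic labels, the strategy that won out here, reduces to cross-entropy with soft targets derived from the pathologists' votes. A minimal sketch of that loss (our construction, not the authors' code):

```python
import torch

def soft_cross_entropy(logits, soft_targets):
    """Cross-entropy against a probability distribution over classes."""
    return -(soft_targets * torch.log_softmax(logits, dim=1)).sum(1).mean()

# Toy example: three annotators labeling 4 tiles with 2 classes.
votes = torch.tensor([[0, 0, 1], [1, 1, 1], [0, 1, 1], [0, 0, 0]])
soft = torch.stack([(votes == c).float().mean(1) for c in (0, 1)], dim=1)

logits = torch.randn(4, 2, requires_grad=True)  # stand-in for CNN outputs
loss = soft_cross_entropy(logits, soft)
loss.backward()
```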
Affiliation(s)
- Shintaro Sukegawa
- Department of Oral and Maxillofacial Surgery, Kagawa University Faculty of Medicine, 1750-1, Ikenobe, Miki-Cho, Kita-Gun, Kagawa, 761-0793, Japan.
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan.
- Futa Tanaka
- Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Keisuke Nakano
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan
- Takeshi Hara
- Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Center for Research, Education, and Development for Healthcare Life Design, Tokai National Higher Education and Research System, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Takanaga Ochiai
- Division of Oral Pathogenesis and Disease Control, Department of Oral Pathology, Asahi University School of Dentistry, Mizuho, 501-0296, Japan
- Katsumitsu Shimada
- Department of Oral Pathology, Graduate School of Oral Medicine, Matsumoto Dental University, 1780 Hirooka-Gobara, Shiojiri, Nagano, 399-0781, Japan
- Yuta Inoue
- Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Yoshihiro Taki
- Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Gifu, 501-1193, Japan
- Fumi Nakai
- Department of Oral and Maxillofacial Surgery, Kagawa University Faculty of Medicine, 1750-1, Ikenobe, Miki-Cho, Kita-Gun, Kagawa, 761-0793, Japan
- Yasuhiro Nakai
- Department of Oral and Maxillofacial Surgery, Kagawa University Faculty of Medicine, 1750-1, Ikenobe, Miki-Cho, Kita-Gun, Kagawa, 761-0793, Japan
- Takanori Ishihama
- Department of Oral and Maxillofacial Surgery, Kagawa University Faculty of Medicine, 1750-1, Ikenobe, Miki-Cho, Kita-Gun, Kagawa, 761-0793, Japan
- Ryo Miyazaki
- Stony Brook Cancer Center, Stony Brook University, Stony Brook, NY, 11794, USA
- Satoshi Murakami
- Department of Oral Pathology, Graduate School of Oral Medicine, Matsumoto Dental University, 1780 Hirooka-Gobara, Shiojiri, Nagano, 399-0781, Japan
- Hitoshi Nagatsuka
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, 700-8558, Japan
- Minoru Miyake
- Department of Oral and Maxillofacial Surgery, Kagawa University Faculty of Medicine, 1750-1, Ikenobe, Miki-Cho, Kita-Gun, Kagawa, 761-0793, Japan
8.
Gunashekar DD, Bielak L, Oerther B, Benndorf M, Nedelcu A, Hickey S, Zamboglou C, Grosu AL, Bock M. Comparison of data fusion strategies for automated prostate lesion detection using mpMRI correlated with whole mount histology. Radiat Oncol 2024;19:96. PMID: 39080735; PMCID: PMC11287985; DOI: 10.1186/s13014-024-02471-0.
Abstract
Background: In this work, we compare input-level, feature-level and decision-level data fusion techniques for automatic detection of clinically significant prostate lesions (csPCa).
Methods: Multiple deep learning CNN architectures were developed using the U-Net as the baseline. The CNNs use both multiparametric MRI images (T2W, ADC, and high b-value) and quantitative clinical data (prostate-specific antigen (PSA), PSA density (PSAD), prostate gland volume and gross tumor volume (GTV)), or only mpMRI images (n = 118), as input. In addition, co-registered ground truth data from whole mount histopathology images (n = 22) were used as a test set for evaluation.
Results: For early/intermediate/late fusion, the CNNs achieved a precision of 0.41/0.51/0.61, a recall of 0.18/0.22/0.25, an average precision of 0.13/0.19/0.27, and F scores of 0.55/0.67/0.76. The Dice Sørensen Coefficient (DSC) was used to evaluate the influence of combining mpMRI with parametric clinical data for the detection of csPCa. Against the ground truth, the predictions of the CNNs trained with mpMRI plus parametric clinical data reached a DSC of 0.30/0.34/0.36, versus 0.26/0.33/0.34 for the CNNs trained with only mpMRI images as input. Additionally, we evaluated the influence of each mpMRI input channel for the task of csPCa detection and obtained a DSC of 0.14/0.25/0.28.
Conclusion: The results show that the decision-level fusion network performs better for the task of prostate lesion detection. Combining mpMRI data with quantitative clinical data does not show significant differences between these networks (p = 0.26/0.62/0.85). The results also show that CNNs trained with all mpMRI channels outperform CNNs with fewer input channels, which is consistent with current clinical protocols, where the same input is used for PI-RADS lesion scoring.
Trial Registration: The trial was registered retrospectively at the German Register for Clinical Studies (DRKS) under proposal number 476/14 & 476/19.
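Both the evaluation metric and the winning fusion strategy are easy to state in code: the Dice Sørensen Coefficient compares binary masks, and decision-level (late) fusion averages the per-pixel probabilities of separately trained branches before thresholding. A toy sketch with synthetic arrays:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice Sorensen Coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return float(2 * inter / (pred.sum() + truth.sum() + eps))

rng = np.random.default_rng(0)
truth = rng.random((64, 64)) > 0.7  # stand-in histology-derived lesion mask

# Decision-level fusion: average the probability maps of two branches
# (e.g., image-only and image-plus-clinical), then threshold.
p_a, p_b = rng.random((64, 64)), rng.random((64, 64))
fused_pred = (p_a + p_b) / 2 > 0.5
print(dice(fused_pred, truth))
```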
Affiliation(s)
- Deepa Darshini Gunashekar
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany.
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany.
- Lars Bielak
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Benedict Oerther
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Matthias Benndorf
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Andrea Nedelcu
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Samantha Hickey
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Constantinos Zamboglou
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Oncology Center, European University Cyprus, Limassol, Cyprus
- Anca-Ligia Grosu
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Michael Bock
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
9.
Pahud de Mortanges A, Luo H, Shu SZ, Kamath A, Suter Y, Shelan M, Pöllinger A, Reyes M. Orchestrating explainable artificial intelligence for multimodal and longitudinal data in medical imaging. NPJ Digit Med 2024;7:195. PMID: 39039248; PMCID: PMC11263688; DOI: 10.1038/s41746-024-01190-w.
Abstract
Explainable artificial intelligence (XAI) has experienced a vast increase in recognition over the last few years. While the technical developments are manifold, less focus has been placed on the clinical applicability and usability of systems. Moreover, not much attention has been given to XAI systems that can handle multimodal and longitudinal data, which we postulate are important features in many clinical workflows. In this study, we review, from a clinical perspective, the current state of XAI for multimodal and longitudinal datasets and highlight the challenges thereof. Additionally, we propose the XAI orchestrator, an instance that aims to help clinicians with the synopsis of multimodal and longitudinal data, the resulting AI predictions, and the corresponding explainability output. We propose several desirable properties of the XAI orchestrator, such as being adaptive, hierarchical, interactive, and uncertainty-aware.
Affiliation(s)
- Haozhe Luo
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Shelley Zixin Shu
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Amith Kamath
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Yannick Suter
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Mohamed Shelan
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Alexander Pöllinger
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, Bern, Switzerland
- Mauricio Reyes
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
10.
Jeon K, Park WY, Kahn CE, Nagy P, You SC, Yoon SH. Advancing Medical Imaging Research Through Standardization: The Path to Rapid Development, Rigorous Validation, and Robust Reproducibility. Invest Radiol 2024 (online ahead of print). PMID: 38985896; DOI: 10.1097/rli.0000000000001106.
Abstract
Artificial intelligence (AI) has made significant advances in radiology. Nonetheless, challenges in AI development, validation, and reproducibility persist, primarily due to the lack of high-quality, large-scale, standardized data across the world. Addressing these challenges requires comprehensive standardization of medical imaging data and seamless integration with structured medical data.
Developed by the Observational Health Data Sciences and Informatics community, the OMOP Common Data Model enables large-scale international collaborations with structured medical data. It ensures syntactic and semantic interoperability, while supporting the privacy-protected distribution of research across borders. The recently proposed Medical Imaging Common Data Model is designed to encompass all DICOM-formatted medical imaging data and integrate imaging-derived features with clinical data, ensuring their provenance.
The harmonization of medical imaging data and its seamless integration with structured clinical data at a global scale will pave the way for advanced AI research in radiology. This standardization will enable federated learning, ensuring privacy-preserving collaboration across institutions and promoting equitable AI through the inclusion of diverse patient populations. Moreover, it will facilitate the development of foundation models trained on large-scale, multimodal datasets, serving as powerful starting points for specialized AI applications. Objective and transparent algorithm validation on a standardized data infrastructure will enhance reproducibility and interoperability of AI systems, driving innovation and reliability in clinical applications.
Affiliation(s)
- Kyulee Jeon
- From the Department of Biomedical Systems Informatics, Yonsei University, Seoul, South Korea (K.J., S.C.Y.); Institution for Innovation in Digital Healthcare, Yonsei University, Seoul, South Korea (K.J., S.C.Y.); Biomedical Informatics and Data Science, Johns Hopkins University, Baltimore, MD (W.Y.P., P.N.); Department of Radiology, University of Pennsylvania, Philadelphia, PA (C.E.K.); and Department of Radiology, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, South Korea (S.H.Y.)
11.
Wagner L, Schneider DN, Mayer L, Jell A, Müller C, Lenz A, Knoll A, Wilhelm D. Towards multimodal graph neural networks for surgical instrument anticipation. Int J Comput Assist Radiol Surg 2024 (online ahead of print). PMID: 38985412; DOI: 10.1007/s11548-024-03226-8.
Abstract
Purpose: Decision support systems and context-aware assistance in the operating room have emerged as the key clinical applications supporting surgeons in their daily work and are generally based on single modalities. The model- and knowledge-based integration of multimodal data as a basis for decision support systems that can dynamically adapt to the surgical workflow has not yet been established. Therefore, we propose a knowledge-enhanced method for fusing multimodal data for anticipation tasks.
Methods: We developed a holistic, multimodal graph-based approach combining imaging and non-imaging information in a knowledge graph representing the intraoperative scene of a surgery. Node and edge features of the knowledge graph are extracted from suitable data sources in the operating room using machine learning. A spatiotemporal graph neural network architecture subsequently allows for interpretation of relational and temporal patterns within the knowledge graph. We apply our approach to the downstream task of instrument anticipation while presenting a suitable modeling and evaluation strategy for this task.
Results: Our approach achieves an F1 score of 66.86% in terms of instrument anticipation, allowing for a seamless surgical workflow and adding a valuable impact for surgical decision support systems. A resting recall of 63.33% indicates the non-prematurity of the anticipations.
Conclusion: This work shows how multimodal data can be combined with the topological properties of an operating room in a graph-based approach. Our multimodal graph architecture serves as a basis for context-sensitive decision support systems in laparoscopic surgery considering a comprehensive intraoperative operating scene.
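The relational aggregation at the heart of such a graph architecture can be written without a graph library; the toy step below (scene semantics invented for illustration) is the building block a spatiotemporal GNN would stack and unroll over time:

```python
import torch

def gcn_step(h, adj, w):
    """One graph-convolution step: self-loops, mean aggregation, linear map."""
    a = adj + torch.eye(adj.size(0))
    deg = a.sum(dim=1, keepdim=True)
    return torch.relu((a / deg) @ h @ w)

# Toy intraoperative scene graph: surgeon, two instruments, table, monitor.
h = torch.randn(5, 8)  # node features extracted from OR data sources
adj = torch.tensor([[0, 1, 1, 1, 0], [1, 0, 0, 1, 0], [1, 0, 0, 1, 0],
                    [1, 1, 1, 0, 1], [0, 0, 0, 1, 0]], dtype=torch.float)
w = torch.randn(8, 8)
h_next = gcn_step(h, adj, w)  # updated node embeddings for anticipation heads
```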
Affiliation(s)
- Lars Wagner
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Research Group MITI, Munich, Germany.
- Dennis N Schneider
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Research Group MITI, Munich, Germany
- Leon Mayer
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Research Group MITI, Munich, Germany
- Alissa Jell
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Research Group MITI, Munich, Germany
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Department of Surgery, Munich, Germany
- Carolin Müller
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Research Group MITI, Munich, Germany
- Alexander Lenz
- Technical University of Munich, TUM School of Computation, Information and Technology, Chair of Robotics, Artificial Intelligence and Real-Time Systems, Munich, Germany
- Alois Knoll
- Technical University of Munich, TUM School of Computation, Information and Technology, Chair of Robotics, Artificial Intelligence and Real-Time Systems, Munich, Germany
- Dirk Wilhelm
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Research Group MITI, Munich, Germany
- Technical University of Munich, TUM School of Medicine and Health, Klinikum rechts der Isar, Department of Surgery, Munich, Germany
12.
Chang JY, Makary MS. Evolving and Novel Applications of Artificial Intelligence in Thoracic Imaging. Diagnostics (Basel) 2024;14:1456. PMID: 39001346; PMCID: PMC11240935; DOI: 10.3390/diagnostics14131456.
Abstract
The advent of artificial intelligence (AI) is revolutionizing medicine, particularly radiology. With the development of newer models, AI applications are demonstrating improved performance and versatile utility in the clinical setting. Thoracic imaging is an area of profound interest, given the prevalence of chest imaging and the significant health implications of thoracic diseases. This review aims to highlight the promising applications of AI within thoracic imaging. It examines the role of AI, including its contributions to improving diagnostic evaluation and interpretation, enhancing workflow, and aiding in invasive procedures. Next, it further highlights the current challenges and limitations faced by AI, such as the necessity of 'big data', ethical and legal considerations, and bias in representation. Lastly, it explores the potential directions for the application of AI in thoracic radiology.
Affiliation(s)
- Jin Y Chang
- Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
- Mina S Makary
- Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
- Division of Vascular and Interventional Radiology, Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
13.
White A, Saranti M, d'Avila Garcez A, Hope TMH, Price CJ, Bowman H. Predicting recovery following stroke: Deep learning, multimodal data and feature selection using explainable AI. Neuroimage Clin 2024;43:103638. PMID: 39002223; DOI: 10.1016/j.nicl.2024.103638.
Abstract
Machine learning offers great potential for automated prediction of post-stroke symptoms and their response to rehabilitation. Major challenges for this endeavour include the very high dimensionality of neuroimaging data, the relatively small size of the datasets available for learning and interpreting the predictive features, as well as how to effectively combine neuroimaging and tabular data (e.g. demographic information and clinical characteristics). This paper evaluates several solutions based on two strategies. The first is to use 2D images that summarise MRI scans. The second is to select key features that improve classification accuracy. Additionally, we introduce the novel approach of training a convolutional neural network (CNN) on images that combine regions of interest (ROIs) extracted from MRIs with symbolic representations of tabular data. We evaluate a series of CNN architectures (both 2D and 3D) that are trained on different representations of MRI and tabular data to predict whether a composite measure of post-stroke spoken picture description ability is in the aphasic or non-aphasic range. MRI and tabular data were acquired from 758 English-speaking stroke survivors who participated in the PLORAS study. Each participant was assigned to one of five different groups that were matched for initial severity of symptoms, recovery time, left lesion size and the months or years post-stroke at which spoken description scores were collected. Training and validation were carried out on the first four groups. The fifth (lock-box/test set) group was used to test how well model accuracy generalises to new (unseen) data. The classification accuracy for a baseline logistic regression was 0.678 based on lesion size alone, rising to 0.757 and 0.813 when initial symptom severity and recovery time were successively added. The highest classification accuracy (0.854), area under the curve (0.899) and F1 score (0.901) were observed when 8 regions of interest were extracted from each MRI scan and combined with lesion size, initial severity and recovery time in a 2D Residual Neural Network (ResNet). This was also the best model when data were limited to the 286 participants with moderate or severe initial aphasia (area under the curve = 0.865), a group that would be considered more difficult to classify. Our findings demonstrate how imaging and tabular data can be combined to achieve high post-stroke classification accuracy, even when the dataset is small in machine-learning terms. We conclude by proposing how the current models could be improved to achieve even higher levels of accuracy using images from hospital scanners.
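The paper's trick of feeding "images that combine ROIs with symbolic representations of tabular data" can be pictured as rendering each scalar as a constant-intensity bar appended to the image; the layout and scaling below are our own illustrative choices, not the authors' encoding:

```python
import numpy as np

def append_tabular_bars(image: np.ndarray, values, bar_height: int = 8):
    """Encode scalars in [0, 1] as gray bars stacked beneath a 2D image."""
    _, w = image.shape
    bars = [np.full((bar_height, w), float(v)) for v in values]
    return np.vstack([image] + bars)

roi_montage = np.random.rand(96, 96)  # toy composite of MRI-derived ROIs
tabular = [0.42, 0.80, 0.15]  # e.g., scaled lesion size, severity, recovery time
cnn_input = append_tabular_bars(roi_montage, tabular)
print(cnn_input.shape)  # (96 + 3 * 8, 96): one image for a standard 2D CNN
```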
Affiliation(s)
- Adam White
- Department of Computer Science, City, University of London, UK
- Thomas M H Hope
- Wellcome Centre for Human Neuroimaging, University College London, UK
- Cathy J Price
- Wellcome Centre for Human Neuroimaging, University College London, UK
- Howard Bowman
- School of Psychology, University of Birmingham, UK; School of Computer Science, University of Birmingham, UK
14.
Coll L, Pareto D, Carbonell-Mirabent P, Cobo-Calvo Á, Arrambide G, Vidal-Jordana Á, Comabella M, Castilló J, Rodríguez-Acevedo B, Zabalza A, Galán I, Midaglia L, Nos C, Auger C, Alberich M, Río J, Sastre-Garriga J, Oliver A, Montalban X, Rovira À, Tintoré M, Lladó X, Tur C. Global and Regional Deep Learning Models for Multiple Sclerosis Stratification From MRI. J Magn Reson Imaging 2024;60:258-267. PMID: 37803817; DOI: 10.1002/jmri.29046.
Abstract
Background: The combination of anatomical MRI and deep learning-based methods such as convolutional neural networks (CNNs) is a promising strategy to build predictive models of multiple sclerosis (MS) prognosis. However, studies assessing the effect of different input strategies on model performance are lacking.
Purpose: To compare whole-brain input sampling strategies and regional/specific-tissue strategies, which focus on a priori known relevant areas for disability accrual, to stratify MS patients based on their disability level.
Study Type: Retrospective.
Subjects: Three hundred nineteen MS patients (382 brain MRI scans) with clinical assessment of disability level performed within the following 6 months (~70% training/~15% validation/~15% inference in-house dataset) and 440 MS patients from multiple centers (independent external validation cohort).
Field Strength/Sequence: Single vendor 1.5 T or 3.0 T. Magnetization-Prepared Rapid Gradient-Echo and Fluid-Attenuated Inversion Recovery sequences.
Assessment: A 7-fold patient cross-validation strategy was used to train a 3D-CNN to classify patients into two groups, Expanded Disability Status Scale (EDSS) score ≥ 3.0 or EDSS < 3.0. Two strategies were investigated: 1) a global approach, taking the whole brain volume as input, and 2) regional approaches using five different regions of interest: white matter, gray matter, subcortical gray matter, ventricles, and brainstem structures. The performance of the models was assessed in the in-house and independent external cohorts.
Statistical Tests: Balanced accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC).
Results: With the in-house dataset, the gray matter regional model showed the highest stratification accuracy (81%), followed by the global approach (79%). In the external dataset, without any further retraining, an accuracy of 72% was achieved for the white matter model and 71% for the global approach.
Data Conclusion: The global approach offered the best trade-off between internal performance and external validation to stratify MS patients based on accumulated disability.
Evidence Level: 4. Technical Efficacy: Stage 2.
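At the preprocessing level, the regional strategies compared here amount to masking the brain volume with a tissue-specific ROI before the 3D-CNN; a minimal sketch with synthetic arrays (shapes are arbitrary assumptions):

```python
import numpy as np

def regional_input(volume: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Zero out everything outside the region of interest."""
    assert volume.shape == roi_mask.shape
    return volume * roi_mask

brain = np.random.rand(96, 96, 96)          # toy anatomical volume
gm_mask = np.random.rand(96, 96, 96) > 0.5  # stand-in gray-matter mask
gray_matter_input = regional_input(brain, gm_mask)  # regional model input
whole_brain_input = brain                           # global model input
```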
Affiliation(s)
- Llucia Coll, Pere Carbonell-Mirabent, Álvaro Cobo-Calvo, Georgina Arrambide, Ángela Vidal-Jordana, Manuel Comabella, Joaquín Castilló, Breogán Rodríguez-Acevedo, Ana Zabalza, Ingrid Galán, Luciana Midaglia, Carlos Nos, Jordi Río, Jaume Sastre-Garriga, Xavier Montalban, Mar Tintoré, Carmen Tur
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Deborah Pareto, Cristina Auger, Manel Alberich, Àlex Rovira
- Section of Neuroradiology, Department of Radiology, Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Arnau Oliver, Xavier Lladó
- Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
15.
Tandon P, Nguyen KAN, Edalati M, Parchure P, Raut G, Reich DL, Freeman R, Levin MA, Timsina P, Powell CA, Fayad ZA, Kia A. Development and Validation of a Deep Learning Classifier Using Chest Radiographs to Predict Extubation Success in Patients Undergoing Invasive Mechanical Ventilation. Bioengineering (Basel) 2024;11:626. PMID: 38927862; PMCID: PMC11200686; DOI: 10.3390/bioengineering11060626.
Abstract
The decision to extubate patients on invasive mechanical ventilation is critical; however, clinician performance in identifying patients to liberate from the ventilator is poor. Machine Learning-based predictors using tabular data have been developed; however, these fail to capture the wide spectrum of data available. Here, we develop and validate a deep learning-based model using routinely collected chest X-rays to predict the outcome of attempted extubation. We included 2288 serial patients admitted to the Medical ICU at an urban academic medical center, who underwent invasive mechanical ventilation, with at least one intubated CXR, and a documented extubation attempt. The last CXR before extubation for each patient was taken and split 79/21 for training/testing sets, then transfer learning with k-fold cross-validation was used on a pre-trained ResNet50 deep learning architecture. The top three models were ensembled to form a final classifier. The Grad-CAM technique was used to visualize image regions driving predictions. The model achieved an AUC of 0.66, AUPRC of 0.94, sensitivity of 0.62, and specificity of 0.60. The model performance was improved compared to the Rapid Shallow Breathing Index (AUC 0.61) and the only identified previous study in this domain (AUC 0.55), but significant room for improvement and experimentation remains.
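The transfer-learning-plus-ensembling recipe maps directly to code: fine-tune a pretrained ResNet50 per fold, then average the sigmoid outputs of the top models. A hedged sketch (the frozen backbone and all hyperparameters are assumptions, and a recent torchvision is required for the weights enum):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

def make_model() -> nn.Module:
    m = resnet50(weights=ResNet50_Weights.DEFAULT)  # ImageNet-pretrained
    for p in m.parameters():
        p.requires_grad = False                     # freeze the backbone
    m.fc = nn.Linear(m.fc.in_features, 1)           # extubation-success head
    return m

# Stand-ins for the top three cross-validation models.
ensemble = [make_model().eval() for _ in range(3)]
cxr = torch.randn(1, 3, 224, 224)  # toy pre-extubation chest radiograph
with torch.no_grad():
    prob = torch.stack([torch.sigmoid(m(cxr)) for m in ensemble]).mean()
```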
Affiliation(s)
- Pranai Tandon
- Department of Medicine Division of Pulmonary, Critical Care, and Sleep Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Kim-Anh-Nhi Nguyen
- Institute for Healthcare Delivery Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; (K.-A.-N.N.); (M.E.); (P.P.); (G.R.); (R.F.); (P.T.); (A.K.)
- Masoud Edalati
- Institute for Healthcare Delivery Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; (K.-A.-N.N.); (M.E.); (P.P.); (G.R.); (R.F.); (P.T.); (A.K.)
- Prathamesh Parchure
- Institute for Healthcare Delivery Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; (K.-A.-N.N.); (M.E.); (P.P.); (G.R.); (R.F.); (P.T.); (A.K.)
- Ganesh Raut
- Institute for Healthcare Delivery Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; (K.-A.-N.N.); (M.E.); (P.P.); (G.R.); (R.F.); (P.T.); (A.K.)
- David L. Reich
- Department of Anesthesiology, Perioperative and Pain Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA (M.A.L.)
- Robert Freeman
- Institute for Healthcare Delivery Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; (K.-A.-N.N.); (M.E.); (P.P.); (G.R.); (R.F.); (P.T.); (A.K.)
| | - Matthew A. Levin
- Department of Anesthesiology, Perioperative and Pain Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA (M.A.L.)
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Windreich Department of Artificial Intelligence and Human Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Prem Timsina
- Institute for Healthcare Delivery Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; (K.-A.-N.N.); (M.E.); (P.P.); (G.R.); (R.F.); (P.T.); (A.K.)
| | - Charles A. Powell
- Department of Medicine Division of Pulmonary, Critical Care, and Sleep Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Zahi A. Fayad
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
| | - Arash Kia
- Institute for Healthcare Delivery Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; (K.-A.-N.N.); (M.E.); (P.P.); (G.R.); (R.F.); (P.T.); (A.K.)
- Department of Anesthesiology, Perioperative and Pain Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA (M.A.L.)
| |
Collapse
|
16
|
Moukheiber D, Restrepo D, Cajas SA, Montoya MPA, Celi LA, Kuo KT, López DM, Moukheiber L, Moukheiber M, Moukheiber S, Osorio-Valencia JS, Purkayastha S, Paddo AR, Wu C, Kuo PC. A multimodal framework for extraction and fusion of satellite images and public health data. Sci Data 2024; 11:634. [PMID: 38879585 PMCID: PMC11180113 DOI: 10.1038/s41597-024-03366-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Accepted: 05/10/2024] [Indexed: 06/19/2024] Open
Abstract
In low- and middle-income countries, the substantial costs associated with traditional data collection pose an obstacle to facilitating decision-making in the field of public health. Satellite imagery offers a potential solution, but image extraction and analysis can be costly and require specialized expertise. We introduce SatelliteBench, a scalable framework for satellite image extraction and vector embedding generation. We also propose a novel multimodal fusion pipeline that utilizes a series of satellite images and metadata. The framework was evaluated by generating a dataset with a collection of 12,636 images and embeddings, accompanied by comprehensive metadata, from 81 municipalities in Colombia between 2016 and 2018. The dataset was then evaluated on 3 tasks: dengue case prediction, poverty assessment, and access to education. The performance showcases the versatility and practicality of SatelliteBench, offering a reproducible, accessible, and open tool to enhance decision-making in public health.
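A minimal sketch of feature-level fusion of precomputed image embeddings with tabular metadata, in the spirit of the pipeline described above; this is not the SatelliteBench code, and the embedding size, metadata columns, and labels are placeholders.

```python
# Illustrative sketch: fuse satellite-image embeddings with tabular metadata
# by concatenation, then train a simple classifier on the fused vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
img_emb = rng.normal(size=(1000, 512))   # placeholder image embeddings
metadata = rng.normal(size=(1000, 8))    # placeholder rainfall, population, etc.
labels = rng.integers(0, 2, size=1000)   # placeholder outbreak labels

fused = np.hstack([img_emb, metadata])   # feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(
    fused, labels, test_size=0.2, stratify=labels, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```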
Collapse
Affiliation(s)
- Dana Moukheiber
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
| | - David Restrepo
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA.
- Departamento de Telemática, Universidad del Cauca, Popayán, Cauca, Colombia.
| | - Sebastián Andrés Cajas
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Boston, Massachusetts, USA
- School of Computer Science, University College Dublin, Dublin, Ireland
| | | | - Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Department of Biostatistics, Harvard TH Chan School of Public Health, Boston, Massachusetts, USA
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
| | - Kuan-Ting Kuo
- Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
| | - Diego M López
- Departamento de Telemática, Universidad del Cauca, Popayán, Cauca, Colombia
| | - Lama Moukheiber
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
| | - Mira Moukheiber
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
| | - Sulaiman Moukheiber
- Department of Computer Science, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
| | | | - Saptarshi Purkayastha
- Department of BioHealth Informatics, Indiana University Luddy School of Informatics, Computing, and Engineering, Indianapolis, Indiana, USA
| | - Atika Rahman Paddo
- Department of BioHealth Informatics, Indiana University Luddy School of Informatics, Computing, and Engineering, Indianapolis, Indiana, USA
| | - Chenwei Wu
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan, USA
| | - Po-Chih Kuo
- Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan.
| |
Collapse
|
17
|
Teuho J, Schultz J, Klén R, Juarez-Orozco LE, Knuuti J, Saraste A, Ono N, Kanaya S. Explainable deep-learning-based ischemia detection using hybrid O-15 H2O perfusion positron emission tomography and computed tomography imaging with clinical data. J Nucl Cardiol 2024:101889. [PMID: 38852900 DOI: 10.1016/j.nuclcard.2024.101889] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2024] [Revised: 04/30/2024] [Accepted: 05/25/2024] [Indexed: 06/11/2024]
Abstract
BACKGROUND We developed an explainable deep-learning (DL)-based classifier to identify flow-limiting coronary artery disease (CAD) by O-15 H2O perfusion positron emission tomography computed tomography (PET/CT) and coronary CT angiography (CTA) imaging. The classifier uses polar map images with numerical data and visualizes data findings. METHODS A DL model was implemented and evaluated on 138 individuals, consisting of a combined image- and data-based classifier considering 35 clinical, CTA, and PET variables. Data from invasive coronary angiography were used as reference. Performance was evaluated with clinical classification using accuracy (ACC), area under the receiver operating characteristic curve (AUC), F1 score (F1S), sensitivity (SEN), specificity (SPE), precision (PRE), net benefit, and Cohen's Kappa. Statistical testing was conducted using McNemar's test. RESULTS The DL model had a median ACC = 0.8478, AUC = 0.8481, F1S = 0.8293, SEN = 0.8500, SPE = 0.8846, and PRE = 0.8500. Improved detection of true-positive and false-negative cases, increased net benefit at thresholds up to 34%, and comparable Cohen's kappa were seen, reaching similar performance to clinical reading. Statistical testing revealed no significant differences between the DL model and clinical reading. CONCLUSIONS The combined DL model is a feasible and effective method for the detection of CAD, allowing important data findings to be highlighted individually in an interpretable manner.
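The abstract reports net benefit across decision thresholds; the snippet below is a generic illustration (not the study's code) of how net benefit is computed for a binary classifier and compared with a "treat all" strategy, as in decision-curve analysis. The toy labels and probabilities are placeholders.

```python
# Illustrative sketch: net benefit of a classifier across decision thresholds.
import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    y_true = np.asarray(y_true)
    n = len(y_true)
    out = []
    for pt in thresholds:
        pred = y_prob >= pt
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        out.append(tp / n - fp / n * (pt / (1 - pt)))
    return np.array(out)

y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])
y_prob = np.array([0.2, 0.8, 0.6, 0.3, 0.9, 0.1, 0.4, 0.7])
ths = np.arange(0.05, 0.35, 0.05)

prevalence = y_true.mean()
treat_all = prevalence - (1 - prevalence) * (ths / (1 - ths))
print("model net benefit:", net_benefit(y_true, y_prob, ths))
print("treat-all net benefit:", treat_all)
```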
Collapse
Affiliation(s)
- Jarmo Teuho
- Data Science Center, Nara Institute of Science and Technology, Nara, Japan; Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland.
| | - Jussi Schultz
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
| | - Riku Klén
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
| | - Luis Eduardo Juarez-Orozco
- Department of Cardiology, Division Heart and Lungs, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands; Department of Cardiology, Meander Medical Center, Amersfoort, the Netherlands
| | - Juhani Knuuti
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
| | - Antti Saraste
- Turku PET Centre, Turku University Hospital, Turku, Finland; Heart Centre, Turku University Hospital and University of Turku, Turku, Finland
| | - Naoaki Ono
- Data Science Center, Nara Institute of Science and Technology, Nara, Japan; Department of Science and Technology, Nara Institute of Science and Technology, Nara, Japan
| | - Shigehiko Kanaya
- Data Science Center, Nara Institute of Science and Technology, Nara, Japan; Department of Science and Technology, Nara Institute of Science and Technology, Nara, Japan
| |
Collapse
|
18
|
Fleurence RL, Kent S, Adamson B, Tcheng J, Balicer R, Ross JS, Haynes K, Muller P, Campbell J, Bouée-Benhamiche E, García Martí S, Ramsey S. Assessing Real-World Data From Electronic Health Records for Health Technology Assessment: The SUITABILITY Checklist: A Good Practices Report of an ISPOR Task Force. VALUE IN HEALTH : THE JOURNAL OF THE INTERNATIONAL SOCIETY FOR PHARMACOECONOMICS AND OUTCOMES RESEARCH 2024; 27:692-701. [PMID: 38871437 PMCID: PMC11182651 DOI: 10.1016/j.jval.2024.01.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/10/2024] [Accepted: 01/23/2024] [Indexed: 06/15/2024]
Abstract
This ISPOR Good Practices report provides a framework for assessing the suitability of electronic health records data for use in health technology assessments (HTAs). Although electronic health record (EHR) data can fill evidence gaps and improve decisions, several important limitations can affect its validity and relevance. The ISPOR framework includes 2 components: data delineation and data fitness for purpose. Data delineation provides a complete understanding of the data and an assessment of its trustworthiness by describing (1) data characteristics; (2) data provenance; and (3) data governance. Fitness for purpose comprises (1) data reliability items, ie, how accurate and complete the estimates are for answering the question at hand and (2) data relevance items, which assess how well the data are suited to answer the particular question from a decision-making perspective. The report includes a checklist specific to EHR data reporting: the ISPOR SUITABILITY Checklist. It also provides recommendations for HTA agencies and policy makers to improve the use of EHR-derived data over time. The report concludes with a discussion of limitations and future directions in the field, including the potential impact from the substantial and rapid advances in the diffusion and capabilities of large language models and generative artificial intelligence. The report's immediate audiences are HTA evidence developers and users. We anticipate that it will also be useful to other stakeholders, particularly regulators and manufacturers, in the future.
Collapse
Affiliation(s)
| | - Seamus Kent
- Erasmus School of Health & Policy Management, Erasmus University, Rotterdam, The Netherlands
| | | | | | | | - Joseph S Ross
- Yale School of Medicine, Yale University, New Haven, CT, USA
| | - Kevin Haynes
- Janssen Research and Development, Titusville, NJ, USA
| | - Patrick Muller
- Centre for Guidelines, National Institute for Health and Care Excellence, Manchester or London, England, UK
| | - Jon Campbell
- National Pharmaceutical Council, Washington, DC, USA
| | - Elsa Bouée-Benhamiche
- Public Health and Healthcare Division, Institut National du Cancer, Boulogne-Billancourt, France
| | - Sebastián García Martí
- Health Technology Assessment and Health Economics Department, Institute for Clinical Effectiveness and Health Policy, Buenos Aires, Argentina
| | - Scott Ramsey
- Hutchinson Institute for Cancer Outcomes Research, Fred Hutchinson Cancer Center, Seattle, WA, USA.
| |
Collapse
|
19
|
Nordblom N, Büttner M, Schwendicke F. Artificial Intelligence in Orthodontics: Critical Review. J Dent Res 2024; 103:577-584. [PMID: 38682436 PMCID: PMC11118788 DOI: 10.1177/00220345241235606] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/01/2024] Open
Abstract
With increasing digitalization in orthodontics, certain orthodontic manufacturing processes such as the fabrication of indirect bonding trays, aligner production, or wire bending can be automated. However, orthodontic treatment planning and evaluation remains a specialist's task and responsibility. As the prediction of growth in orthodontic patients and response to orthodontic treatment is inherently complex and individual, orthodontists make use of features gathered from longitudinal, multimodal, and standardized orthodontic data sets. Currently, these data sets are used by the orthodontist to make informed, rule-based treatment decisions. In research, artificial intelligence (AI) has been successfully applied to assist orthodontists with the extraction of relevant data from such data sets. Here, AI has been applied for the analysis of clinical imagery, such as automated landmark detection in lateral cephalograms but also for evaluation of intraoral scans or photographic data. Furthermore, AI is applied to help orthodontists with decision support for treatment decisions such as the need for orthognathic surgery or for orthodontic tooth extractions. One major challenge in current AI research in orthodontics is the limited generalizability, as most studies use unicentric data with high risks of bias. Moreover, comparing AI across different studies and tasks is virtually impossible as both outcomes and outcome metrics vary widely, and underlying data sets are not standardized. Notably, only few AI applications in orthodontics have reached full clinical maturity and regulatory approval, and researchers in the field are tasked with tackling real-world evaluation and implementation of AI into the orthodontic workflow.
Collapse
Affiliation(s)
- N.F. Nordblom
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, Berlin, Germany
| | - M. Büttner
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, Berlin, Germany
| | - F. Schwendicke
- Department of Conservative Dentistry and Periodontology, University Hospital, Ludwig-Maximilians University of Munich, Munich, Germany
| |
Collapse
|
20
|
Regulski PA. Editorial for "Deep Learning Model for Grading and Localization of Lumbar Disc Herniation on Magnetic Resonance Imaging". J Magn Reson Imaging 2024. [PMID: 38804734 DOI: 10.1002/jmri.29457] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2024] [Revised: 05/09/2024] [Accepted: 05/09/2024] [Indexed: 05/29/2024] Open
Affiliation(s)
- Piotr A Regulski
- Laboratory of Digital Imaging and Virtual Reality, Department of Dental and Maxillofacial Radiology, Medical University of Warsaw, Warsaw, Poland
| |
Collapse
|
21
|
Kapitany V, Fatima A, Zickus V, Whitelaw J, McGhee E, Insall R, Machesky L, Faccio D. Single-sample image-fusion upsampling of fluorescence lifetime images. SCIENCE ADVANCES 2024; 10:eadn0139. [PMID: 38781345 PMCID: PMC11114222 DOI: 10.1126/sciadv.adn0139] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/17/2023] [Accepted: 04/17/2024] [Indexed: 05/25/2024]
Abstract
Fluorescence lifetime imaging microscopy (FLIM) provides detailed information about molecular interactions and biological processes. A major bottleneck for FLIM is image resolution at high acquisition speeds due to the engineering and signal-processing limitations of time-resolved imaging technology. Here, we present single-sample image-fusion upsampling, a data-fusion approach to computational FLIM super-resolution that combines measurements from a low-resolution time-resolved detector (that measures photon arrival time) and a high-resolution camera (that measures intensity only). To solve this otherwise ill-posed inverse retrieval problem, we introduce statistically informed priors that encode local and global correlations between the two "single-sample" measurements. This bypasses the risk of out-of-distribution hallucination as in traditional data-driven approaches and delivers enhanced images compared, for example, to standard bilinear interpolation. The general approach laid out by single-sample image-fusion upsampling can be applied to other image super-resolution problems where two different datasets are available.
Collapse
Affiliation(s)
- Valentin Kapitany
- School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
| | - Areeba Fatima
- School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
| | - Vytautas Zickus
- School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
- Department of Laser Technologies, Center for Physical Sciences and Technology, LT-10257 Vilnius, Lithuania
| | | | - Ewan McGhee
- School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
- Cancer Research UK, Beatson Institute, Glasgow, UK
| | | | | | - Daniele Faccio
- School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
| |
Collapse
|
22
|
Călburean PA, Pannone L, Monaco C, Rocca DD, Sorgente A, Almorad A, Bala G, Aglietti F, Ramak R, Overeinder I, Ströker E, Pappaert G, Măruşteri M, Harpa M, La Meir M, Brugada P, Sieira J, Sarkozy A, Chierchia GB, de Asmundis C. Predicting and Recognizing Drug-Induced Type I Brugada Pattern Using ECG-Based Deep Learning. J Am Heart Assoc 2024; 13:e033148. [PMID: 38726893 PMCID: PMC11179812 DOI: 10.1161/jaha.123.033148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/18/2023] [Accepted: 02/28/2024] [Indexed: 05/22/2024]
Abstract
BACKGROUND Brugada syndrome (BrS) has been associated with sudden cardiac death in otherwise healthy subjects, and drug-induced BrS accounts for 55% to 70% of all patients with BrS. This study aims to develop a deep convolutional neural network and evaluate its performance in recognizing and predicting BrS diagnosis. METHODS AND RESULTS Consecutive patients who underwent ajmaline testing for BrS following a standardized protocol were included. ECG tracings from baseline and during ajmaline were transformed using wavelet analysis, and a deep convolutional neural network was separately trained to (1) recognize and (2) predict BrS type I pattern. The resultant networks are referred to as BrS-Net. A total of 1188 patients were included, of whom 361 (30.3%) developed BrS type I pattern during ajmaline infusion. When trained and evaluated on ECG tracings during ajmaline, BrS-Net recognized a BrS type I pattern with an AUC-ROC of 0.945 (0.921-0.969) and an AUC-PR of 0.892 (0.815-0.939). When trained and evaluated on ECG tracings at baseline, BrS-Net predicted a BrS type I pattern during ajmaline with an AUC-ROC of 0.805 (0.736-0.845) and an AUC-PR of 0.605 (0.460-0.664). CONCLUSIONS BrS-Net, a deep convolutional neural network, can identify a BrS type I pattern with high performance. BrS-Net can predict from baseline ECG the development of a BrS type I pattern after ajmaline with good performance in an unselected population.
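To make the wavelet step concrete, the sketch below (an assumption, not BrS-Net) converts a single synthetic ECG lead into a continuous-wavelet-transform scalogram of the kind that can be fed to an image CNN; the sampling rate, scales, and mother wavelet are illustrative choices only.

```python
# Illustrative sketch: turn a 1-D ECG lead into a 2-D CWT scalogram image.
import numpy as np
import pywt

fs = 500                                   # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)               # 10 s of signal
ecg = np.sin(2 * np.pi * 1.2 * t)          # placeholder for a real ECG lead

scales = np.arange(1, 129)                 # 128 scales -> 128-row scalogram
coeffs, freqs = pywt.cwt(ecg, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                 # (128, len(ecg)) time-frequency image

# Normalize before cropping/resizing and stacking leads as CNN channels.
scalogram = (scalogram - scalogram.min()) / (np.ptp(scalogram) + 1e-8)
print(scalogram.shape, freqs.min(), freqs.max())
```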
Collapse
Affiliation(s)
- Paul-Adrian Călburean
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
- University of Medicine, Pharmacy, Science and Technology "George Emil Palade" of Târgu Mureş Târgu Mureş Romania
| | - Luigi Pannone
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Cinzia Monaco
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Domenico Della Rocca
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Antonio Sorgente
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Alexandre Almorad
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Gezim Bala
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Filippo Aglietti
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Robbert Ramak
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Ingrid Overeinder
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Erwin Ströker
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Gudrun Pappaert
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Marius Măruşteri
- University of Medicine, Pharmacy, Science and Technology "George Emil Palade" of Târgu Mureş Târgu Mureş Romania
| | - Marius Harpa
- University of Medicine, Pharmacy, Science and Technology "George Emil Palade" of Târgu Mureş Târgu Mureş Romania
| | - Mark La Meir
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Pedro Brugada
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Juan Sieira
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Andrea Sarkozy
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Gian-Battista Chierchia
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| | - Carlo de Asmundis
- Heart Rhythm Management Centre, Postgraduate Program in Cardiac Electrophysiology and Pacing Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, European Reference Networks Guard-Heart Brussels Belgium
| |
Collapse
|
23
|
Baheti B, Innani S, Nasrallah M, Bakas S. Prognostic stratification of glioblastoma patients by unsupervised clustering of morphology patterns on whole slide images furthering our disease understanding. Front Neurosci 2024; 18:1304191. [PMID: 38831756 PMCID: PMC11146603 DOI: 10.3389/fnins.2024.1304191] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Accepted: 04/25/2024] [Indexed: 06/05/2024] Open
Abstract
Introduction Glioblastoma (GBM) is a highly aggressive malignant tumor of the central nervous system that displays varying molecular and morphological profiles, leading to challenging prognostic assessments. Stratifying GBM patients according to overall survival (OS) from H&E-stained whole slide images (WSI) using advanced computational methods is challenging but has direct clinical implications. Methods This work focuses on GBM (IDH-wildtype, CNS WHO Gr.4) cases, identified from the TCGA-GBM and TCGA-LGG collections after considering the 2021 WHO classification criteria. The proposed approach starts with patch extraction in each WSI, followed by comprehensive patch-level curation to discard artifactual content, i.e., glass reflections, pen markings, dust on the slide, and tissue tearing. Each patch is then computationally described as a feature vector defined by a pre-trained VGG16 convolutional neural network. Principal component analysis provides a feature representation of reduced dimensionality, further facilitating identification of distinct groups of morphology patterns via unsupervised k-means clustering. Results The optimal number of clusters, according to cluster reproducibility and separability, is automatically determined based on the Rand index and silhouette coefficient, respectively. Our proposed approach achieved prognostic stratification accuracy of 83.33% on a multi-institutional independent unseen hold-out test set with sensitivity and specificity of 83.33%. Discussion We hypothesize that the quantification of these clusters of morphology patterns reflects the tumor's spatial heterogeneity and yields prognostically relevant information to distinguish between short and long survivors using a decision tree classifier. The interpretability analysis of the obtained results can contribute to furthering and quantifying our understanding of GBM and potentially improving our diagnostic and prognostic predictions.
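A compact sketch of the patch-clustering idea described above: a pre-trained VGG16 describes each patch, PCA reduces dimensionality, and k-means with the silhouette coefficient selects the number of clusters. It is illustrative only; the patch shapes, number of components, and k range are assumptions rather than the authors' settings.

```python
# Illustrative sketch: WSI patch features -> PCA -> k-means morphology clusters.
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def patch_features(patches):
    """patches: (N, 224, 224, 3) float array of curated WSI patches."""
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       pooling="avg")
    x = tf.keras.applications.vgg16.preprocess_input(patches.copy())
    return base.predict(x, verbose=0)              # (N, 512) feature vectors

def cluster_patterns(features, k_range=range(2, 11)):
    # assumes N well above 50 so a 50-component PCA is meaningful
    reduced = PCA(n_components=50, random_state=0).fit_transform(features)
    best_k, best_score, best_labels = None, -1.0, None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(reduced)
        score = silhouette_score(reduced, labels)  # separability criterion
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels
```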
Collapse
Affiliation(s)
- Bhakti Baheti
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, United States
- Center for Artificial Intelligence and Data Science for Integrated Diagnostics (AI2D) and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
| | - Shubham Innani
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, United States
- Center for Artificial Intelligence and Data Science for Integrated Diagnostics (AI2D) and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
| | - MacLean Nasrallah
- Center for Artificial Intelligence and Data Science for Integrated Diagnostics (AI2D) and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
| | - Spyridon Bakas
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, United States
- Center for Artificial Intelligence and Data Science for Integrated Diagnostics (AI2D) and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, United States
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Department of Computer Science, Luddy School of Informatics, Computing, and Engineering, Indiana University, Indianapolis, IN, United States
| |
Collapse
|
24
|
Rubegni G, Cartocci A, Tognetti L, Tosi G, Salfi M, Caruso A, Castellino N, Orione M, Cappellani F, Fallico M, D'Esposito F, Russo A, Gagliano C, Avitabile T. Design of a new 3D printed all-in-one magnetic smartphone adapter for fundus and anterior segment imaging. Eur J Ophthalmol 2024:11206721241246187. [PMID: 38644806 DOI: 10.1177/11206721241246187] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/23/2024]
Abstract
PURPOSE To describe and validate a 3D-printed adapter tool that can be used with either a slit lamp or a condensing lens, interchangeable between devices through magnetic fastening, in order to provide physicians with a quick, easy, and effective method of obtaining clinical photos. MATERIALS AND METHODS Three specialists, each with at least 4 years' experience in ophthalmology, rated the quality of the images obtained with our device and graded their diagnostic confidence. Each of the 3 specialists conducted 13 or 14 examinations with the smartphone and magnetic adapter. At the end of the evaluation, they rated on a Likert scale the ease of use of the device in obtaining clinical images of the anterior segment and ocular fundus, respectively. RESULTS Ratings of image quality and diagnostic confidence were high and not dissimilar to the "de visu" eye examination. Moreover, the instrument we designed proved to be very user friendly. CONCLUSION Our adapter, coupled with a modern smartphone, was able to obtain 4K images and videos of the anterior segment and of the central and peripheral fundus in an easy and inexpensive way.
Collapse
Affiliation(s)
- Giovanni Rubegni
- Ophthalmology Unit, Department of Medicine, Surgery and Neurosciences, University of Siena, Siena, Italy
| | - Alessandra Cartocci
- Dermatology Unit, Department of Medical, Surgical and Neurosciences, University of Siena, Siena, Italy
| | - Linda Tognetti
- Dermatology Unit, Department of Medical, Surgical and Neurosciences, University of Siena, Siena, Italy
| | - Gianmarco Tosi
- Ophthalmology Unit, Department of Medicine, Surgery and Neurosciences, University of Siena, Siena, Italy
| | - Massimiliano Salfi
- Department of Mathematics and Informatics, University of Catania, Catania, Italy
| | - Andrea Caruso
- Department of Mathematics and Informatics, University of Catania, Catania, Italy
| | | | - Matteo Orione
- Department of Ophthalmology, University of Catania, Catania, Italy
| | | | - Matteo Fallico
- Department of Ophthalmology, University of Catania, Catania, Italy
| | - Fabiana D'Esposito
- Department of Neurosciences, Reproductive Sciences and Dentistry, University of Naples Federico II, Napoli, Italy
| | - Andrea Russo
- Department of Ophthalmology, University of Catania, Catania, Italy
| | | | | |
Collapse
|
25
|
El-Ateif S, Idri A. Multimodality Fusion Strategies in Eye Disease Diagnosis. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024:10.1007/s10278-024-01105-x. [PMID: 38639808 DOI: 10.1007/s10278-024-01105-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/05/2024] [Revised: 03/08/2024] [Accepted: 03/26/2024] [Indexed: 04/20/2024]
Abstract
Multimodality fusion has gained significance in medical applications, particularly in diagnosing challenging diseases like eye diseases, notably diabetic eye diseases that pose risks of vision loss and blindness. Mono-modality eye disease diagnosis proves difficult, often missing crucial disease indicators. In response, researchers advocate multimodality-based approaches to enhance diagnostics. This study is a unique exploration, evaluating three multimodality fusion strategies (early, joint, and late) in conjunction with state-of-the-art convolutional neural network models for automated eye disease binary detection across three datasets: fundus fluorescein angiography, macula, and a combination of digital retinal images for vessel extraction, structured analysis of the retina, and high-resolution fundus. Findings reveal the efficacy of each fusion strategy: type 0 early fusion with DenseNet121 achieves an impressive 99.45% average accuracy. InceptionResNetV2 emerges as the top-performing joint fusion architecture with an average accuracy of 99.58%. Late fusion ResNet50V2 achieves a perfect score of 100% across all metrics, surpassing both early and joint fusion. Comparative analysis demonstrates that late fusion ResNet50V2 matches the accuracy of a state-of-the-art feature-level fusion model for multiview learning. In conclusion, this study substantiates late fusion as the optimal strategy for eye disease diagnosis compared to early and joint fusion, showcasing its superiority in leveraging multimodal information.
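As a minimal illustration of decision-level (late) fusion, the sketch below trains one classifier per modality on synthetic features and averages their predicted probabilities; it is not the paper's implementation, and the feature dimensions and data are placeholders.

```python
# Illustrative sketch: late fusion by averaging per-modality probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
y = rng.integers(0, 2, size=n)
# Placeholders for per-modality feature vectors (e.g., embeddings of fundus
# photographs and of angiography frames), with a small injected signal.
modality_a = rng.normal(size=(n, 64)) + y[:, None] * 0.5
modality_b = rng.normal(size=(n, 64)) + y[:, None] * 0.3

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3,
                                  stratify=y, random_state=0)
clf_a = LogisticRegression(max_iter=1000).fit(modality_a[idx_tr], y[idx_tr])
clf_b = LogisticRegression(max_iter=1000).fit(modality_b[idx_tr], y[idx_tr])

p_a = clf_a.predict_proba(modality_a[idx_te])[:, 1]
p_b = clf_b.predict_proba(modality_b[idx_te])[:, 1]
p_late = (p_a + p_b) / 2                  # decision-level (late) fusion

for name, p in [("modality A", p_a), ("modality B", p_b), ("late fusion", p_late)]:
    print(name, roc_auc_score(y[idx_te], p))
```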
Collapse
Affiliation(s)
- Sara El-Ateif
- Software Project Management Research Team, ENSIAS, Mohammed V University, BP 713, Agdal, Rabat, Morocco
| | - Ali Idri
- Software Project Management Research Team, ENSIAS, Mohammed V University, BP 713, Agdal, Rabat, Morocco.
- Faculty of Medical Sciences, Mohammed VI Polytechnic University, Marrakech-Rhamna, Benguerir, Morocco.
| |
Collapse
|
26
|
Li T, Guo Y, Zhao Z, Chen M, Lin Q, Hu X, Yao Z, Hu B. Automated Diagnosis of Major Depressive Disorder With Multi-Modal MRIs Based on Contrastive Learning: A Few-Shot Study. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1566-1576. [PMID: 38512734 DOI: 10.1109/tnsre.2024.3380357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/23/2024]
Abstract
Depression ranks among the most prevalent mood-related psychiatric disorders. Existing clinical diagnostic approaches relying on scale interviews are susceptible to individual and environmental variations. In contrast, the integration of neuroimaging techniques and computer science has provided compelling evidence for the quantitative assessment of major depressive disorder (MDD). However, one of the major challenges in computer-aided diagnosis of MDD is to automatically and effectively mine the complementary cross-modal information from limited datasets. In this study, we proposed a few-shot learning framework that integrates multi-modal MRI data based on contrastive learning. In the upstream task, it is designed to extract knowledge from heterogeneous data. Subsequently, the downstream task is dedicated to transferring the acquired knowledge to the target dataset, where a hierarchical fusion paradigm is also designed to integrate features across inter- and intra-modalities. Lastly, the proposed model was evaluated on a set of multi-modal clinical data, achieving average scores of 73.52% and 73.09% for accuracy and AUC, respectively. Our findings also reveal that the brain regions within the default mode network and cerebellum play a crucial role in the diagnosis, which provides further direction in exploring reproducible biomarkers for MDD diagnosis.
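A minimal sketch of the cross-modal contrastive idea (an assumption, not the authors' implementation): an InfoNCE-style loss that aligns paired embeddings of the same subject from two MRI modalities while pushing apart embeddings from different subjects. Embedding sizes and the temperature are illustrative.

```python
# Illustrative sketch: symmetric InfoNCE loss over paired multimodal embeddings.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of the same subjects from two modalities."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # symmetric loss: modality 1 -> modality 2 and modality 2 -> modality 1
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

z_fmri = torch.randn(16, 128, requires_grad=True)   # placeholder embeddings
z_smri = torch.randn(16, 128, requires_grad=True)
loss = info_nce(z_fmri, z_smri)
loss.backward()
print(float(loss))
```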
Collapse
|
27
|
Machado Reyes D, Chao H, Hahn J, Shen L, Yan P. Identifying Progression-Specific Alzheimer's Subtypes Using Multimodal Transformer. J Pers Med 2024; 14:421. [PMID: 38673048 PMCID: PMC11051083 DOI: 10.3390/jpm14040421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2024] [Revised: 04/01/2024] [Accepted: 04/08/2024] [Indexed: 04/28/2024] Open
Abstract
Alzheimer's disease (AD) is the most prevalent neurodegenerative disease, yet its current treatments are limited to stopping disease progression. Moreover, the effectiveness of these treatments remains uncertain due to the heterogeneity of the disease. Therefore, it is essential to identify disease subtypes at a very early stage. Current data-driven approaches can be used to classify subtypes during later stages of AD or related disorders, but making predictions in the asymptomatic or prodromal stage is challenging. Furthermore, the classifications of most existing models lack explainability, and these models rely solely on a single modality for assessment, limiting the scope of their analysis. Thus, we propose a multimodal framework that utilizes early-stage indicators, including imaging, genetics, and clinical assessments, to classify AD patients into progression-specific subtypes at an early stage. In our framework, we introduce a tri-modal co-attention mechanism (Tri-COAT) to explicitly capture cross-modal feature associations. Data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) (slow progressing = 177, intermediate = 302, and fast = 15) were used to train and evaluate Tri-COAT using a 10-fold stratified cross-testing approach. Our proposed model outperforms baseline models and sheds light on essential associations across multimodal features supported by known biological mechanisms. The multimodal design behind Tri-COAT allows it to achieve the highest classification area under the receiver operating characteristic curve while simultaneously providing interpretability to the model predictions through the co-attention mechanism.
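To illustrate the co-attention idea, the sketch below (not Tri-COAT itself) uses a standard multi-head cross-attention layer in which imaging tokens attend over embedded genetic/clinical tokens, and returns the attention weights that could support interpretation; token counts and dimensions are assumptions.

```python
# Illustrative sketch: cross-modal attention with imaging features as queries.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim=64, heads=4, n_classes=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, imaging_tokens, other_tokens):
        # imaging_tokens: (B, Ti, dim); other_tokens: (B, To, dim)
        fused, weights = self.attn(query=imaging_tokens,
                                   key=other_tokens, value=other_tokens)
        logits = self.head(fused.mean(dim=1))   # pool over imaging tokens
        return logits, weights                  # weights support interpretation

model = CrossModalAttention()
img = torch.randn(8, 10, 64)   # e.g., 10 regional imaging features per subject
gen = torch.randn(8, 20, 64)   # e.g., 20 embedded genetic/clinical features
logits, attn_weights = model(img, gen)
print(logits.shape, attn_weights.shape)
```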
Collapse
Affiliation(s)
- Diego Machado Reyes
- Department of Biomedical Engineering, Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA; (D.M.R.); (H.C.); (J.H.)
| | - Hanqing Chao
- Department of Biomedical Engineering, Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA; (D.M.R.); (H.C.); (J.H.)
| | - Juergen Hahn
- Department of Biomedical Engineering, Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA; (D.M.R.); (H.C.); (J.H.)
| | - Li Shen
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA;
| | - Pingkun Yan
- Department of Biomedical Engineering, Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA; (D.M.R.); (H.C.); (J.H.)
| | | |
Collapse
|
28
|
Zhao J, Vaios E, Wang Y, Yang Z, Cui Y, Reitman ZJ, Lafata KJ, Fecci P, Kirkpatrick J, Fang Yin F, Floyd S, Wang C. Dose-Incorporated Deep Ensemble Learning for Improving Brain Metastasis Stereotactic Radiosurgery Outcome Prediction. Int J Radiat Oncol Biol Phys 2024:S0360-3016(24)00505-4. [PMID: 38615888 DOI: 10.1016/j.ijrobp.2024.04.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Revised: 03/19/2024] [Accepted: 04/02/2024] [Indexed: 04/16/2024]
Abstract
PURPOSE To develop a novel deep ensemble learning model for accurate prediction of brain metastasis (BM) local control outcomes after stereotactic radiosurgery (SRS). METHODS AND MATERIALS A total of 114 brain metastases (BMs) from 82 patients were evaluated, including 26 BMs that developed biopsy-confirmed local failure post-SRS. The SRS spatial dose distribution (Dmap) of each BM was registered to the planning contrast-enhanced T1 (T1-CE) magnetic resonance imaging (MRI). Axial slices of the Dmap, T1-CE, and planning target volume (PTV) segmentation (PTVseg) intersecting the BM center were extracted within a fixed field of view determined by the 60% isodose volume in Dmap. A spherical projection was implemented to transform planar image content onto a spherical surface using multiple projection centers, and the resultant T1-CE/Dmap/PTVseg projections were stacked as a 3-channel variable. Four Visual Geometry Group (VGG-19) deep encoders were used in an ensemble design, with each submodel using a different spherical projection formula as input for BM outcome prediction. In each submodel, clinical features after positional encoding were fused with VGG-19 deep features to generate logit results. The ensemble's outcome was synthesized from the 4 submodel results via logistic regression. In total, 10 model versions with random validation sample assignments were trained to study model robustness. Performance was compared with (1) a single VGG-19 encoder, (2) an ensemble with a T1-CE MRI as the sole image input after projections, and (3) an ensemble with the same image input design without clinical feature inclusion. RESULTS The ensemble model achieved an excellent area under the receiver operating characteristic curve (AUCROC: 0.89 ± 0.02) with high sensitivity (0.82 ± 0.05), specificity (0.84 ± 0.11), and accuracy (0.84 ± 0.08) results. This outperformed the MRI-only VGG-19 encoder (sensitivity: 0.35 ± 0.01, AUCROC: 0.64 ± 0.08), the MRI-only deep ensemble (sensitivity: 0.60 ± 0.09, AUCROC: 0.68 ± 0.06), and the 3-channel ensemble without clinical feature fusion (sensitivity: 0.78 ± 0.08, AUCROC: 0.84 ± 0.03). CONCLUSIONS Facilitated by the spherical image projection method, a deep ensemble model incorporating Dmap and clinical variables demonstrated excellent performance in predicting BM post-SRS local failure. Our novel approach could improve other radiation therapy outcome models and warrants further evaluation.
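The final ensembling step, synthesizing sub-model outputs via logistic regression, can be sketched as a simple stacked meta-learner; the example below uses synthetic logits and is not the authors' code.

```python
# Illustrative sketch: logistic-regression meta-learner over sub-model logits.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, size=200)
# Placeholder per-submodel logits on a validation split (4 sub-models here).
submodel_logits = rng.normal(size=(200, 4)) + y_val[:, None]

meta = LogisticRegression().fit(submodel_logits, y_val)

y_test = rng.integers(0, 2, size=100)
test_logits = rng.normal(size=(100, 4)) + y_test[:, None]
ensemble_prob = meta.predict_proba(test_logits)[:, 1]
print("ensemble AUC:", roc_auc_score(y_test, ensemble_prob))
```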
Collapse
Affiliation(s)
- Jingtong Zhao
- Duke University Medical Center, Durham, North Carolina
| | - Eugene Vaios
- Duke University Medical Center, Durham, North Carolina
| | - Yuqi Wang
- Duke University Medical Center, Durham, North Carolina
| | - Zhenyu Yang
- Duke University Medical Center, Durham, North Carolina
| | - Yunfeng Cui
- Duke University Medical Center, Durham, North Carolina
| | | | - Kyle J Lafata
- Duke University Medical Center, Durham, North Carolina
| | - Peter Fecci
- Duke University Medical Center, Durham, North Carolina
| | | | | | - Scott Floyd
- Duke University Medical Center, Durham, North Carolina
| | - Chunhao Wang
- Duke University Medical Center, Durham, North Carolina.
| |
Collapse
|
29
|
Jones CH, Dolsten M. Healthcare on the brink: navigating the challenges of an aging society in the United States. NPJ AGING 2024; 10:22. [PMID: 38582901 PMCID: PMC10998868 DOI: 10.1038/s41514-024-00148-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/24/2024] [Accepted: 03/21/2024] [Indexed: 04/08/2024]
Abstract
The US healthcare system is at a crossroads. With an aging population requiring more care and a strained system facing workforce shortages, capacity issues, and fragmentation, innovative solutions and policy reforms are needed. This paper aims to spark dialogue and collaboration among healthcare stakeholders and inspire action to meet the needs of the aging population. Through a comprehensive analysis of the impact of an aging society, this work highlights the urgency of addressing this issue and the importance of restructuring the healthcare system to be more efficient, equitable, and responsive.
Collapse
Affiliation(s)
- Charles H Jones
- Pfizer, 66 Hudson Boulevard, New York, New York, 10018, USA.
| | - Mikael Dolsten
- Pfizer, 66 Hudson Boulevard, New York, New York, 10018, USA.
| |
Collapse
|
30
|
Muse ED, Topol EJ. Transforming the cardiometabolic disease landscape: Multimodal AI-powered approaches in prevention and management. Cell Metab 2024; 36:670-683. [PMID: 38428435 PMCID: PMC10990799 DOI: 10.1016/j.cmet.2024.02.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/26/2023] [Revised: 01/25/2024] [Accepted: 02/06/2024] [Indexed: 03/03/2024]
Abstract
The rise of artificial intelligence (AI) has revolutionized various scientific fields, particularly in medicine, where it has enabled the modeling of complex relationships from massive datasets. Initially, AI algorithms focused on improved interpretation of diagnostic studies such as chest X-rays and electrocardiograms in addition to predicting patient outcomes and future disease onset. However, AI has evolved with the introduction of transformer models, allowing analysis of the diverse, multimodal data sources existing in medicine today. Multimodal AI holds great promise in more accurate disease risk assessment and stratification as well as optimizing the key driving factors in cardiometabolic disease: blood pressure, sleep, stress, glucose control, weight, nutrition, and physical activity. In this article we outline the current state of medical AI in cardiometabolic disease, highlighting the potential of multimodal AI to augment personalized prevention and treatment strategies in cardiometabolic disease.
Collapse
Affiliation(s)
- Evan D Muse
- Scripps Research Translational Institute, Scripps Research, La Jolla, CA 92037, USA; Division of Cardiovascular Diseases, Scripps Clinic, La Jolla, CA 92037, USA
| | - Eric J Topol
- Scripps Research Translational Institute, Scripps Research, La Jolla, CA 92037, USA; Division of Cardiovascular Diseases, Scripps Clinic, La Jolla, CA 92037, USA.
| |
Collapse
|
31
|
Ferreira A, Li J, Pomykala KL, Kleesiek J, Alves V, Egger J. GAN-based generation of realistic 3D volumetric data: A systematic review and taxonomy. Med Image Anal 2024; 93:103100. [PMID: 38340545 DOI: 10.1016/j.media.2024.103100] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 11/20/2023] [Accepted: 01/30/2024] [Indexed: 02/12/2024]
Abstract
With the massive proliferation of data-driven algorithms, such as deep learning-based approaches, the availability of high-quality data is of great interest. Volumetric data are very important in medicine, with uses ranging from disease diagnosis to therapy monitoring. When the dataset is sufficient, models can be trained to help doctors with these tasks. Unfortunately, there are scenarios where large amounts of data are unavailable. For example, rare diseases and privacy issues can lead to restricted data availability. In non-medical fields, the high cost of obtaining enough high-quality data can also be a concern. A solution to these problems can be the generation of realistic synthetic data using Generative Adversarial Networks (GANs). The existence of these mechanisms is a good asset, especially in healthcare, as the data must be of good quality, realistic, and without privacy issues. Therefore, most of the publications on volumetric GANs are within the medical domain. In this review, we provide a summary of works that generate realistic volumetric synthetic data using GANs. We therefore outline GAN-based methods in these areas with common architectures, loss functions, and evaluation metrics, including their advantages and disadvantages. We present a novel taxonomy, evaluations, challenges, and research opportunities to provide a holistic overview of the current state of volumetric GANs.
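For readers unfamiliar with volumetric GANs, the sketch below shows a minimal DCGAN-style generator/discriminator pair built from 3-D (transposed) convolutions; it is a generic illustration, not a method from the review, and the volume size and channel widths are arbitrary.

```python
# Illustrative sketch: minimal 3-D GAN generator and discriminator.
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 128, 4, 1, 0), nn.BatchNorm3d(128), nn.ReLU(True),
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.BatchNorm3d(64), nn.ReLU(True),
            nn.ConvTranspose3d(64, 32, 4, 2, 1), nn.BatchNorm3d(32), nn.ReLU(True),
            nn.ConvTranspose3d(32, 1, 4, 2, 1), nn.Tanh(),   # -> (1, 32, 32, 32)
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

class Discriminator3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv3d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv3d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv3d(128, 1, 4, 1, 0),                      # -> one score per volume
        )

    def forward(self, x):
        return self.net(x).view(x.size(0))

g, d = Generator3D(), Discriminator3D()
fake = g(torch.randn(2, 128))
print(fake.shape, d(fake).shape)  # (2, 1, 32, 32, 32) and (2,)
```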
Collapse
Affiliation(s)
- André Ferreira
- Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal; Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, 52074 Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, 52074 Aachen, Germany.
| | - Jianning Li
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany.
| | - Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany.
| | - Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, Essen, 45147, Germany; TU Dortmund University, Department of Physics, Otto-Hahn-Straße 4, 44227 Dortmund, Germany.
| | - Victor Alves
- Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal.
| | - Jan Egger
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz, 801, Austria.
| |
Collapse
|
32
|
Coroller T, Sahiner B, Amatya A, Gossmann A, Karagiannis K, Moloney C, Samala RK, Santana-Quintero L, Solovieff N, Wang C, Amiri-Kordestani L, Cao Q, Cha KH, Charlab R, Cross FH, Hu T, Huang R, Kraft J, Krusche P, Li Y, Li Z, Mazo I, Paul R, Schnakenberg S, Serra P, Smith S, Song C, Su F, Tiwari M, Vechery C, Xiong X, Zarate JP, Zhu H, Chakravartty A, Liu Q, Ohlssen D, Petrick N, Schneider JA, Walderhaug M, Zuber E. Methodology for Good Machine Learning with Multi-Omics Data. Clin Pharmacol Ther 2024; 115:745-757. [PMID: 37965805 DOI: 10.1002/cpt.3105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2023] [Accepted: 10/20/2023] [Indexed: 11/16/2023]
Abstract
In 2020, Novartis Pharmaceuticals Corporation and the U.S. Food and Drug Administration (FDA) started a 4-year scientific collaboration to approach complex new data modalities and advanced analytics. The scientific question was to find novel radio-genomics-based prognostic and predictive factors for HR+/HER2- metastatic breast cancer under a Research Collaboration Agreement. This collaboration has been providing valuable insights to help successfully implement future scientific projects, particularly using artificial intelligence and machine learning. This tutorial aims to provide tangible guidelines for a multi-omics project that includes multidisciplinary expert teams spanning different institutions. We cover key ideas, such as "maintaining effective communication" and "following good data science practices," followed by the four steps of exploratory projects, namely (1) plan, (2) design, (3) develop, and (4) disseminate. We break each step into smaller concepts with strategies for implementation and provide illustrations from our collaboration to give readers actionable guidance.
Collapse
Affiliation(s)
| | - Berkman Sahiner
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Anup Amatya
- Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Alexej Gossmann
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Konstantinos Karagiannis
- Center for Biologics Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | | | - Ravi K Samala
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Luis Santana-Quintero
- Center for Biologics Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Nadia Solovieff
- Novartis Pharmaceutical Company, East Hanover, New Jersey, USA
| | - Craig Wang
- Novartis Pharma AG, Rotkreuz, Switzerland
| | - Laleh Amiri-Kordestani
- Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Qian Cao
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Kenny H Cha
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Rosane Charlab
- Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Frank H Cross
- Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Tingting Hu
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Ruihao Huang
- Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Jeffrey Kraft
- Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | | | - Yutong Li
- Novartis Pharmaceutical Company, East Hanover, New Jersey, USA
| | - Zheng Li
- Novartis Pharmaceutical Company, East Hanover, New Jersey, USA
| | - Ilya Mazo
- Center for Biologics Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Rahul Paul
- Center for Biologics Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | | | - Paolo Serra
- Novartis Pharmaceutical Company, East Hanover, New Jersey, USA
| | - Sean Smith
- Center for Biologics Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Chi Song
- Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Fei Su
- Novartis Pharmaceutical Company, East Hanover, New Jersey, USA
| | - Mohit Tiwari
- Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Colin Vechery
- Novartis Pharmaceutical Company, East Hanover, New Jersey, USA
| | - Xin Xiong
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | | | - Hao Zhu
- Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | | | - Qi Liu
- Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - David Ohlssen
- Novartis Pharmaceutical Company, East Hanover, New Jersey, USA
| | - Nicholas Petrick
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Julie A Schneider
- Oncology Center of Excellence, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | - Mark Walderhaug
- Center for Biologics Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
| | | |
Collapse
|
33
|
Park WY, Jeon K, Schmidt TS, Kondylakis H, Alkasab T, Dewey BE, You SC, Nagy P. Development of Medical Imaging Data Standardization for Imaging-Based Observational Research: OMOP Common Data Model Extension. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024; 37:899-908. [PMID: 38315345 PMCID: PMC11031512 DOI: 10.1007/s10278-024-00982-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/04/2023] [Revised: 11/10/2023] [Accepted: 11/14/2023] [Indexed: 02/07/2024]
Abstract
The rapid growth of artificial intelligence (AI) and deep learning techniques requires access to large inter-institutional cohorts of data to enable the development of robust models, e.g., targeting the identification of disease biomarkers and quantifying disease progression and treatment efficacy. The Observational Medical Outcomes Partnership Common Data Model (OMOP CDM) has been designed to accommodate a harmonized representation of observational healthcare data. This study proposes the Medical Imaging CDM (MI-CDM) extension, adding two new tables and two vocabularies to the OMOP CDM to address the structural and semantic requirements to support imaging research. The tables provide the capability to link DICOM data sources and to track the provenance of imaging features derived from those images. The implementation of the extension enables phenotype definitions using imaging features and expands the set of standardized, computable imaging biomarkers. This proposal offers a comprehensive and unified approach for conducting imaging research and outcome studies utilizing imaging features.
Affiliation(s)
- Woo Yeon Park
- Biomedical Informatics and Data Science, Johns Hopkins University, 855 N Wolfe St, Rangos 616, Baltimore, MD, USA.
| | - Kyulee Jeon
- Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Korea
- Institute for Innovation in Digital Healthcare, Yonsei University, Seoul, Korea
| | - Teri Sippel Schmidt
- Biomedical Informatics and Data Science, Johns Hopkins University, 855 N Wolfe St, Rangos 616, Baltimore, MD, USA
| | - Haridimos Kondylakis
- Institute of Computer Science, Foundation of Research & Technology-Hellas (FORTH), Heraklion, Greece
| | - Tarik Alkasab
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
| | - Blake E Dewey
- Department of Neurology, Johns Hopkins University, Baltimore, MD, USA
| | - Seng Chan You
- Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Korea
- Institute for Innovation in Digital Healthcare, Yonsei University, Seoul, Korea
| | - Paul Nagy
- Biomedical Informatics and Data Science, Johns Hopkins University, 855 N Wolfe St, Rangos 616, Baltimore, MD, USA
34
Yin M, Lin J, Wang Y, Liu Y, Zhang R, Duan W, Zhou Z, Zhu S, Gao J, Liu L, Liu X, Gu C, Huang Z, Xu X, Xu C, Zhu J. Development and validation of a multimodal model in predicting severe acute pancreatitis based on radiomics and deep learning. Int J Med Inform 2024; 184:105341. [PMID: 38290243 DOI: 10.1016/j.ijmedinf.2024.105341]
Abstract
OBJECTIVE: To establish a multimodal model for predicting severe acute pancreatitis (SAP) using machine learning (ML) and deep learning (DL).
METHODS: In this multicentre retrospective study, patients diagnosed with acute pancreatitis at admission were enrolled from January 2017 to December 2021. Clinical information within 24 h and CT scans within 72 h of admission were collected. First, we trained Model α on clinical features selected by least absolute shrinkage and selection operator (LASSO) analysis. Second, radiomics features were extracted from 3D-CT scans, and Model β was developed on these features after dimensionality reduction using principal component analysis (PCA). Third, Model γ was trained on 2D-CT images. Lastly, a multimodal model, named PrismSAP, was constructed from the aforementioned features in the training set. The predictive accuracy of PrismSAP was verified in the validation and internal test sets and further validated in the external test set. Model performance was evaluated using area under the curve (AUC), accuracy, sensitivity, specificity, recall, precision and F1-score.
RESULTS: A total of 1,221 eligible patients were randomly split into a training set (n = 864), a validation set (n = 209) and an internal test set (n = 148). Data from 266 patients were used for external testing. In the external test set, PrismSAP performed best, with the highest AUC of 0.916 (0.873-0.960) among all models [Model α: 0.709 (0.618-0.800); Model β: 0.749 (0.675-0.824); Model γ: 0.687 (0.592-0.782); MCTSI: 0.778 (0.698-0.857); RANSON: 0.642 (0.559-0.725); BISAP: 0.751 (0.668-0.833); SABP: 0.710 (0.621-0.798)].
CONCLUSION: The proposed multimodal model outperformed all single-modality models and traditional scoring systems.
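A minimal sketch of the modality-specific reduction steps the abstract describes (L1-penalized screening for clinical variables, PCA for radiomics) followed by a simple fused classifier, on synthetic placeholder data. This is not the authors' PrismSAP code; sizes and the final classifier are assumptions.

```python
# Sketch only: synthetic data, illustrative feature counts.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X_clin = rng.normal(size=(n, 20))      # clinical features within 24 h
X_rad = rng.normal(size=(n, 900))      # radiomics features from 3D-CT
y = rng.integers(0, 2, size=n)         # SAP vs non-SAP label (synthetic)

Xc_tr, Xc_te, Xr_tr, Xr_te, y_tr, y_te = train_test_split(
    X_clin, X_rad, y, test_size=0.2, random_state=0)

# L1-penalized selection of clinical features (stands in for LASSO screening).
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xc_tr, y_tr)
keep = np.flatnonzero(lasso.coef_.ravel() != 0)

# PCA-based dimensionality reduction of the radiomics block.
pca = PCA(n_components=16).fit(Xr_tr)

# Fuse the two reduced representations and fit the final classifier.
Z_tr = np.hstack([Xc_tr[:, keep], pca.transform(Xr_tr)])
Z_te = np.hstack([Xc_te[:, keep], pca.transform(Xr_te)])
clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(Z_te)[:, 1]))
```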
Affiliation(s)
- Minyue Yin
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu 215006, China
| | - Jiaxi Lin
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu 215006, China
| | - Yu Wang
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China; Department of General Surgery, Jintan Hospital Affiliated to Jiangsu University, Changzhou, Jiangsu 213299, China
| | - Yuanjun Liu
- School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China
| | - Rufa Zhang
- Department of Gastroenterology, Changshu Hospital Affiliated to Soochow University, Changshu No. 1 People's Hospital, Suzhou, Jiangsu 215500, China
| | - Wenbin Duan
- Department of Hepatobiliary Surgery, the People's Hospital of Hunan Province, Changsha, Hunan 410002, China
| | - Zhirun Zhou
- Department of Obstetrics and Gynaecology, the Second Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215004, China
| | - Shiqi Zhu
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu 215006, China
| | - Jingwen Gao
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu 215006, China
| | - Lu Liu
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu 215006, China
| | - Xiaolin Liu
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu 215006, China
| | - Chenqi Gu
- Department of Radiology, the First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China
| | - Zhou Huang
- Department of Radiology, the First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China
| | - Xiaodan Xu
- Department of Gastroenterology, Changshu Hospital Affiliated to Soochow University, Changshu No. 1 People's Hospital, Suzhou, Jiangsu 215500, China.
| | - Chunfang Xu
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu 215006, China.
| | - Jinzhou Zhu
- Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu 215006, China; Key Laboratory of Hepatosplenic Surgery, Ministry of Education, The First Affiliated Hospital of Harbin Medical University, Harbin 150000, China.
35
Maleki Varnosfaderani S, Forouzanfar M. The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century. Bioengineering (Basel) 2024; 11:337. [PMID: 38671759 PMCID: PMC11047988 DOI: 10.3390/bioengineering11040337]
Abstract
As healthcare systems around the world face challenges such as escalating costs, limited access, and growing demand for personalized care, artificial intelligence (AI) is emerging as a key force for transformation. This review is motivated by the urgent need to harness AI's potential to mitigate these issues and aims to critically assess AI's integration in different healthcare domains. We explore how AI empowers clinical decision-making, optimizes hospital operations and management, refines medical image analysis, and revolutionizes patient care and monitoring through AI-powered wearables. Through several case studies, we review how AI has transformed specific healthcare domains and discuss the remaining challenges and possible solutions. Additionally, we discuss methodologies for assessing AI healthcare solutions, ethical challenges of AI deployment, and the importance of data privacy and bias mitigation for responsible technology use. By presenting a critical assessment of AI's transformative potential, this review equips researchers with a deeper understanding of AI's current and future impact on healthcare. It encourages an interdisciplinary dialogue between researchers, clinicians, and technologists to navigate the complexities of AI implementation, fostering the development of AI-driven solutions that prioritize ethical standards, equity, and a patient-centered approach.
Affiliation(s)
| | - Mohamad Forouzanfar
- Département de Génie des Systèmes, École de Technologie Supérieure (ÉTS), Université du Québec, Montréal, QC H3C 1K3, Canada
- Centre de Recherche de L’institut Universitaire de Gériatrie de Montréal (CRIUGM), Montréal, QC H3W 1W5, Canada
36
Zheng H, Hung ALY, Miao Q, Song W, Scalzo F, Raman SS, Zhao K, Sung K. AtPCa-Net: anatomical-aware prostate cancer detection network on multi-parametric MRI. Sci Rep 2024; 14:5740. [PMID: 38459100 PMCID: PMC10923873 DOI: 10.1038/s41598-024-56405-7]
Abstract
Multi-parametric MRI (mpMRI) is widely used for prostate cancer (PCa) diagnosis. Deep learning models show good performance in detecting PCa on mpMRI, but domain-specific, PCa-related anatomical information is sometimes overlooked and not fully exploited, even by state-of-the-art deep learning models, potentially causing suboptimal performance in PCa detection. Symmetry-related anatomical information is commonly used when distinguishing PCa lesions from other visually similar but benign prostate tissue. In addition, different combinations of mpMRI findings are used to evaluate the aggressiveness of PCa for abnormal findings located in different prostate zones. In this study, we investigate these domain-specific anatomical properties in PCa diagnosis and how we can incorporate them into a deep learning framework to improve the model's detection performance. We propose an anatomical-aware PCa detection Network (AtPCa-Net) for PCa detection on mpMRI. Experiments show that AtPCa-Net can better utilize anatomy-related information, and the proposed anatomical-aware designs help improve overall model performance on both PCa detection and patient-level classification.
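As a purely illustrative aside (an assumption on our part, not AtPCa-Net's actual design), one simple way to expose bilateral-symmetry information to a detection network is to append a left-right asymmetry map as an extra input channel:

```python
# Illustrative symmetry encoding; not the paper's architecture.
import torch

mri = torch.randn(2, 3, 128, 128)           # batch, mpMRI channels (e.g., T2/ADC/DWI), h, w
flipped = torch.flip(mri, dims=[-1])        # mirror each slice across the midline
asym = (mri - flipped).abs().mean(1, keepdim=True)  # per-pixel asymmetry map
x = torch.cat([mri, asym], dim=1)           # 4-channel "anatomy-aware" input
print(x.shape)                              # torch.Size([2, 4, 128, 128])
```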
Affiliation(s)
- Haoxin Zheng
- Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA.
- Computer Science, University of California, Los Angeles, Los Angeles, 90095, USA.
| | - Alex Ling Yu Hung
- Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
- Computer Science, University of California, Los Angeles, Los Angeles, 90095, USA
| | - Qi Miao
- Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
| | - Weinan Song
- Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, 90095, USA
| | - Fabien Scalzo
- Computer Science, University of California, Los Angeles, Los Angeles, 90095, USA
- The Seaver College, Pepperdine University, Los Angeles, 90363, USA
| | - Steven S Raman
- Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
| | - Kai Zhao
- Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
| | - Kyunghyun Sung
- Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
37
Chen J, Wen Y, Pokojovy M, Tseng TLB, McCaffrey P, Vo A, Walser E, Moen S. Multi-modal learning for inpatient length of stay prediction. Comput Biol Med 2024; 171:108121. [PMID: 38382388 DOI: 10.1016/j.compbiomed.2024.108121]
Abstract
Predicting inpatient length of stay (LoS) is important for hospitals aiming to improve service efficiency and enhance management capabilities. Patient medical records are strongly associated with LoS. However, due to the diverse modalities, heterogeneity, and complexity of the data, it is challenging to leverage these heterogeneous data effectively to build a model that can accurately predict LoS. To address this challenge, this study establishes a novel data-fusion model, termed DF-Mdl, that integrates heterogeneous clinical data to predict the LoS of inpatients between hospital admission and discharge. Multi-modal data such as demographic data, clinical notes, laboratory test results, and medical images are utilized in the proposed methodology, with an individual "basic" sub-model applied to each data modality. Specifically, a convolutional neural network (CNN) model, which we termed CRXMDL, is designed for chest X-ray (CXR) image data, two long short-term memory networks are used to extract features from long text data, and a novel attention-embedded 1D convolutional neural network is developed to extract useful information from numerical data. Finally, these basic models are integrated to form the data-fusion model (DF-Mdl) for inpatient LoS prediction. The proposed method attains the best R2 and EVAR values of 0.6039 and 0.6042 among competitors for LoS prediction on the Medical Information Mart for Intensive Care (MIMIC)-IV test dataset. Empirical evidence suggests better performance compared with other state-of-the-art (SOTA) methods, which demonstrates the effectiveness and feasibility of the proposed approach.
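The following condensed PyTorch sketch shows the general shape of such a fusion model: one sub-network per modality whose outputs are concatenated into a regression head. The toy CNN, layer sizes, and vocabulary are placeholders; this is not the authors' DF-Mdl implementation.

```python
# Condensed multimodal-fusion sketch (assumed sizes, synthetic inputs).
import torch
import torch.nn as nn

class FusionLoS(nn.Module):
    def __init__(self, vocab=1000, n_num=30):
        super().__init__()
        # Image branch: tiny CNN standing in for the paper's CXR model (CRXMDL).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 32))
        # Text branch: embedding + LSTM over tokenized clinical notes.
        self.emb = nn.Embedding(vocab, 32)
        self.lstm = nn.LSTM(32, 32, batch_first=True)
        # Numeric branch: 1D convolution over laboratory/demographic values.
        self.num = nn.Sequential(
            nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4), nn.Flatten(), nn.Linear(32, 32))
        self.head = nn.Linear(32 * 3, 1)    # fused regression head (LoS in days)

    def forward(self, img, text, nums):
        h_img = self.cnn(img)
        _, (h_txt, _) = self.lstm(self.emb(text))
        h_num = self.num(nums.unsqueeze(1))
        return self.head(torch.cat([h_img, h_txt[-1], h_num], dim=1))

model = FusionLoS()
img = torch.randn(4, 1, 64, 64)             # CXR images
text = torch.randint(0, 1000, (4, 50))      # tokenized notes
nums = torch.randn(4, 30)                   # lab results
print(model(img, text, nums).shape)         # torch.Size([4, 1])
```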
Affiliation(s)
- Junde Chen
- Dale E. and Sarah Ann Fowler School of Engineering, Chapman University, Orange, CA, 92866, USA
| | - Yuxin Wen
- Dale E. and Sarah Ann Fowler School of Engineering, Chapman University, Orange, CA, 92866, USA.
| | - Michael Pokojovy
- Department of Mathematics and Statistics, Old Dominion University, Norfolk, VA, 23529, USA
| | - Tzu-Liang Bill Tseng
- Department of Industrial, Manufacturing and Systems Engineering, The University of Texas at El Paso, El Paso, TX, 79968, USA
| | - Peter McCaffrey
- University of Texas Medical Branch, Galveston, TX, 77550, USA
| | - Alexander Vo
- University of Texas Medical Branch, Galveston, TX, 77550, USA
| | - Eric Walser
- University of Texas Medical Branch, Galveston, TX, 77550, USA
| | - Scott Moen
- University of Texas Medical Branch, Galveston, TX, 77550, USA
38
Wang Z, Zheng C, Han X, Chen W, Lu L. An Innovative and Efficient Diagnostic Prediction Flow for Head and Neck Cancer: A Deep Learning Approach for Multi-Modal Survival Analysis Prediction Based on Text and Multi-Center PET/CT Images. Diagnostics (Basel) 2024; 14:448. [PMID: 38396486 PMCID: PMC10888043 DOI: 10.3390/diagnostics14040448]
Abstract
Objective: To comprehensively capture intra-tumor heterogeneity in head and neck cancer (HNC) and maximize the use of valid information collected in the clinical field, we propose a novel multi-modal image-text fusion strategy aimed at improving prognosis.
Method: We developed a tailored diagnostic algorithm for HNC, leveraging a deep learning-based model that integrates both image and clinical text information. For image fusion, we used a cross-attention mechanism to fuse information between PET and CT, and for the fusion of text and image, we used the Q-former architecture. We also improved the traditional prognostic model by introducing time as a variable in its construction, and finally obtained the corresponding prognostic results.
Result: We assessed the efficacy of our methodology on a compiled multicenter dataset, achieving commendable outcomes in multicenter validations. Notably, our results for metastasis-free survival (MFS), recurrence-free survival (RFS), overall survival (OS), and progression-free survival (PFS) were 0.796, 0.626, 0.641, and 0.691, respectively. These results are notably better than those obtained using CT or PET alone, and exceed those derived without the clinical text information.
Conclusions: Our model not only validates the effectiveness of multi-modal fusion in aiding diagnosis, but also provides insights for optimizing survival analysis. The study underscores the potential of our approach in enhancing prognosis and contributing to the advancement of personalized medicine in HNC.
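For readers unfamiliar with cross-attention fusion, the hedged sketch below shows PET feature tokens attending to CT tokens with a standard multi-head attention layer. The paper's full architecture, including its Q-former text fusion, is considerably more involved; token counts and dimensions here are arbitrary assumptions.

```python
# Cross-attention fusion sketch (illustrative sizes, random tensors).
import torch
import torch.nn as nn

B, N, D = 2, 49, 64                 # batch, tokens per volume, embedding dim
pet = torch.randn(B, N, D)          # PET feature tokens
ct = torch.randn(B, N, D)           # CT feature tokens

attn = nn.MultiheadAttention(embed_dim=D, num_heads=4, batch_first=True)

# PET queries attend to CT keys/values, injecting CT context into PET features;
# the symmetric direction can be computed the same way and the results merged.
fused_pet, _ = attn(query=pet, key=ct, value=ct)
print(fused_pet.shape)              # torch.Size([2, 49, 64])
```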
Affiliation(s)
- Zhaonian Wang
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
| | - Chundan Zheng
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Pazhou Lab, Guangzhou 510330, China
| | - Xu Han
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Pazhou Lab, Guangzhou 510330, China
| | - Wufan Chen
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
| | - Lijun Lu
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Pazhou Lab, Guangzhou 510330, China
39
van de Beld JJ, Crull D, Mikhal J, Geerdink J, Veldhuis A, Poel M, Kouwenhoven EA. Complication Prediction after Esophagectomy with Machine Learning. Diagnostics (Basel) 2024; 14:439. [PMID: 38396478 PMCID: PMC10888312 DOI: 10.3390/diagnostics14040439]
Abstract
Esophageal cancer can be treated effectively with esophagectomy; however, the postoperative complication rate is high. In this paper, we study to what extent machine learning methods can predict anastomotic leakage and pneumonia up to two days in advance. We use a dataset with 417 patients who underwent esophagectomy between 2011 and 2021. The dataset contains multimodal temporal information, specifically, laboratory results, vital signs, thorax images, and preoperative patient characteristics. The best models scored mean test set AUROCs of 0.87 and 0.82 for leakage 1 and 2 days ahead, respectively. For pneumonia, this was 0.74 and 0.61 for 1 and 2 days ahead, respectively. We conclude that machine learning models can effectively predict anastomotic leakage and pneumonia after esophagectomy.
Affiliation(s)
- Jorn-Jan van de Beld
- Faculty of EEMCS, University of Twente, 7500 AE Enschede, The Netherlands
- Hospital Group Twente (ZGT), 7609 PP Almelo, The Netherlands
| | - David Crull
- Hospital Group Twente (ZGT), 7609 PP Almelo, The Netherlands
| | - Julia Mikhal
- Hospital Group Twente (ZGT), 7609 PP Almelo, The Netherlands
- Faculty of BMS, University of Twente, 7500 AE Enschede, The Netherlands
| | - Jeroen Geerdink
- Hospital Group Twente (ZGT), 7609 PP Almelo, The Netherlands
| | - Anouk Veldhuis
- Hospital Group Twente (ZGT), 7609 PP Almelo, The Netherlands
| | - Mannes Poel
- Faculty of EEMCS, University of Twente, 7500 AE Enschede, The Netherlands
40
Wenk J, Voigt I, Inojosa H, Schlieter H, Ziemssen T. Building digital patient pathways for the management and treatment of multiple sclerosis. Front Immunol 2024; 15:1356436. [PMID: 38433832 PMCID: PMC10906094 DOI: 10.3389/fimmu.2024.1356436]
Abstract
Recent advances in the field of artificial intelligence (AI) could yield new insights into the potential causes of multiple sclerosis (MS) and the factors influencing its course, as AI opens new possibilities for interpreting and using big data from not only a cross-sectional but also a longitudinal perspective. For each patient with MS, a vast amount of multimodal data accumulates over time. But for the application of AI and related technologies, these data need to be available in a machine-readable format and need to be collected in a standardized and structured manner. Through the use of mobile electronic devices and the internet, it has also become possible to provide healthcare services remotely and to collect information on a patient's state of health outside of regular on-site check-ups. Against this background, we argue that the concept of pathways in healthcare can now be applied to structure the collection of information across multiple devices and stakeholders in the virtual sphere, enabling us to exploit the full potential of AI technology, e.g., by building digital twins. By going digital and using pathways, we can virtually link patients and their caregivers. Stakeholders could then rely on digital pathways for evidence-based guidance in the sequencing of procedures and the selection of therapy options based on advanced analytics supported by AI, as well as for communication and education purposes. As far as we are aware, however, pathway modelling with respect to MS management and treatment has not yet been thoroughly investigated and still needs to be discussed. In this paper, we therefore present our ideas for a modular-integrative framework for the development of digital patient pathways for MS treatment.
Affiliation(s)
- Judith Wenk
- Center of Clinical Neuroscience, Department of Neurology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
| | - Isabel Voigt
- Center of Clinical Neuroscience, Department of Neurology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
| | - Hernan Inojosa
- Center of Clinical Neuroscience, Department of Neurology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
| | - Hannes Schlieter
- Research Group Digital Health, Faculty of Business and Economics, Technische Universität Dresden, Dresden, Germany
| | - Tjalf Ziemssen
- Center of Clinical Neuroscience, Department of Neurology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
41
Fiedler HC, Prager R, Smith D, Wu D, Dave C, Tschirhart J, Wu B, Van Berlo B, Malthaner R, Arntfield R. Automated Real-Time Detection of Lung Sliding Using Artificial Intelligence: A Prospective Diagnostic Accuracy Study. Chest 2024:S0012-3692(24)00157-0. [PMID: 38365174 DOI: 10.1016/j.chest.2024.02.011]
Abstract
BACKGROUND: Rapid evaluation for pneumothorax is a common clinical priority. Although lung ultrasound (LUS) often is used to assess for pneumothorax, its diagnostic accuracy varies based on patient and provider factors. To enhance the performance of LUS for pulmonary pathologic features, artificial intelligence (AI)-assisted imaging has been adopted; however, the diagnostic accuracy of AI-assisted LUS (AI-LUS) deployed in real time to diagnose pneumothorax remains unknown.
RESEARCH QUESTION: In patients with suspected pneumothorax, what is the real-time diagnostic accuracy of AI-LUS to recognize the absence of lung sliding?
STUDY DESIGN AND METHODS: We performed a prospective AI-assisted diagnostic accuracy study of AI-LUS to recognize the absence of lung sliding in a convenience sample of patients with suspected pneumothorax. After calibrating the model parameters and imaging settings for bedside deployment, we prospectively evaluated its diagnostic accuracy for lung sliding compared with a reference standard of expert consensus.
RESULTS: Two hundred forty-one lung sliding evaluations were derived from 62 patients. AI-LUS showed a sensitivity of 0.921 (95% CI, 0.792-0.973), specificity of 0.802 (95% CI, 0.735-0.856), area under the receiver operating characteristic curve of 0.885 (95% CI, 0.828-0.956), and accuracy of 0.824 (95% CI, 0.766-0.870) for the diagnosis of absent lung sliding.
INTERPRETATION: Real-time AI-LUS shows high sensitivity and moderate specificity to identify the absence of lung sliding. Further research to improve model performance and optimize the integration of AI-LUS into existing diagnostic pathways is warranted.
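The operating characteristics reported above follow directly from a confusion matrix and the ROC curve. A small sketch with synthetic labels and scores (not the study's data) showing how such estimates and a percentile-bootstrap confidence interval can be computed:

```python
# Diagnostic-accuracy metrics on synthetic data; numbers are NOT the study's.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 241)                    # absent lung sliding = 1
score = np.clip(y * 0.6 + rng.normal(0.3, 0.25, 241), 0, 1)
pred = (score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy:", (tp + tn) / len(y))
print("AUC:", roc_auc_score(y, score))

# Percentile-bootstrap 95% CI for the AUC.
aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) == 2:            # need both classes in the resample
        aucs.append(roc_auc_score(y[idx], score[idx]))
print("AUC 95% CI:", np.percentile(aucs, [2.5, 97.5]))
```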
Affiliation(s)
| | - Ross Prager
- Division of Critical Care Medicine, Western University, London, ON, Canada
| | - Delaney Smith
- Lawson Health Research Institute, London, ON, Canada
| | - Derek Wu
- Lawson Health Research Institute, London, ON, Canada
| | - Chintan Dave
- Lawson Health Research Institute, London, ON, Canada
| | - Jared Tschirhart
- Departments of Surgery, Oncology, and Epidemiology and Biostatistics, Schulich School of Medicine, Western University, London, ON, Canada
| | - Ben Wu
- Lawson Health Research Institute, London, ON, Canada
| | - Blake Van Berlo
- Faculty of Mathematics, University of Waterloo, Waterloo, ON, Canada
| | - Richard Malthaner
- Division of Thoracic Surgery, Western University, London, ON, Canada
| | - Robert Arntfield
- Division of Critical Care Medicine, Western University, London, ON, Canada
42
Trinh M, Shahbaba R, Stark C, Ren Y. Alzheimer's disease detection using data fusion with a deep supervised encoder. Front Dement 2024; 3:1332928. [PMID: 39055313 PMCID: PMC11271260 DOI: 10.3389/frdem.2024.1332928]
Abstract
Alzheimer's disease (AD) is affecting a growing number of individuals. As a result, there is a pressing need for accurate and early diagnosis methods. This study aims to achieve this goal by developing an optimal data analysis strategy to enhance computational diagnosis. Although various modalities of AD diagnostic data are collected, past research on computational methods of AD diagnosis has mainly focused on using single-modal inputs. We hypothesize that integrating, or "fusing," various data modalities as inputs to prediction models could enhance diagnostic accuracy by offering a more comprehensive view of an individual's health profile. However, a potential challenge arises as this fusion of multiple modalities may result in significantly higher dimensional data. We hypothesize that employing suitable dimensionality reduction methods across heterogeneous modalities would not only help diagnosis models extract latent information but also enhance accuracy. Therefore, it is imperative to identify optimal strategies for both data fusion and dimensionality reduction. In this paper, we have conducted a comprehensive comparison of over 80 statistical machine learning methods, considering various classifiers, dimensionality reduction techniques, and data fusion strategies to assess our hypotheses. Specifically, we have explored three primary strategies: (1) Simple data fusion, which involves straightforward concatenation (fusion) of datasets before inputting them into a classifier; (2) Early data fusion, in which datasets are concatenated first, and then a dimensionality reduction technique is applied before feeding the resulting data into a classifier; and (3) Intermediate data fusion, in which dimensionality reduction methods are applied individually to each dataset before concatenating them to construct a classifier. For dimensionality reduction, we have explored several commonly-used techniques such as principal component analysis (PCA), autoencoder (AE), and LASSO. Additionally, we have implemented a new dimensionality-reduction method called the supervised encoder (SE), which involves slight modifications to standard deep neural networks. Our results show that SE substantially improves prediction accuracy compared to PCA, AE, and LASSO, especially in combination with intermediate fusion for multiclass diagnosis prediction.
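A compact sketch of the intermediate-fusion variant with a supervised encoder, as we read the idea: each modality gets a small classifier whose hidden layer serves as a label-aware low-dimensional representation, and the per-modality representations are concatenated for a downstream classifier. All sizes and data below are synthetic assumptions, not the paper's configuration.

```python
# Intermediate fusion with a supervised-encoder-style representation (sketch).
import torch
import torch.nn as nn

class SupervisedEncoder(nn.Module):
    """Small classifier whose hidden layer doubles as a label-aware,
    low-dimensional representation (our reading of the SE idea)."""
    def __init__(self, d_in, d_lat, n_cls):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                     nn.Linear(64, d_lat), nn.ReLU())
        self.cls = nn.Linear(d_lat, n_cls)   # training signal shapes the encoder

    def forward(self, x):
        z = self.encoder(x)
        return self.cls(z), z

# Synthetic stand-ins for two data modalities and a 3-class diagnosis label.
xa, xb = torch.randn(128, 40), torch.randn(128, 300)
y = torch.randint(0, 3, (128,))

# Intermediate fusion: reduce each modality separately, then concatenate.
# (Early fusion would instead concatenate xa and xb first and train one encoder.)
enc_a, enc_b = SupervisedEncoder(40, 8, 3), SupervisedEncoder(300, 8, 3)
opt = torch.optim.Adam([*enc_a.parameters(), *enc_b.parameters()], lr=1e-3)
for _ in range(100):
    (la, za), (lb, zb) = enc_a(xa), enc_b(xb)
    loss = nn.functional.cross_entropy(la, y) + nn.functional.cross_entropy(lb, y)
    opt.zero_grad(); loss.backward(); opt.step()

z_fused = torch.cat([za.detach(), zb.detach()], dim=1)  # input to a final classifier
print(z_fused.shape)  # torch.Size([128, 16])
```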
Affiliation(s)
- Minh Trinh
- Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, United States
| | | | - Craig Stark
- Department of Neurobiology and Behavior, University of California, Irvine, Irvine, CA, United States
- Mathematical, Computational and Systems Biology, University of California, Irvine, Irvine, CA, United States
| | - Yueqi Ren
- Mathematical, Computational and Systems Biology, University of California, Irvine, Irvine, CA, United States
- Medical Scientist Training Program, University of California, Irvine, Irvine, CA, United States
43
Maki S, Furuya T, Inoue M, Shiga Y, Inage K, Eguchi Y, Orita S, Ohtori S. Machine Learning and Deep Learning in Spinal Injury: A Narrative Review of Algorithms in Diagnosis and Prognosis. J Clin Med 2024; 13:705. [PMID: 38337399 PMCID: PMC10856760 DOI: 10.3390/jcm13030705]
Abstract
Spinal injuries, including cervical and thoracolumbar fractures, continue to be a major public health concern. Recent advancements in machine learning and deep learning technologies offer exciting prospects for improving both diagnostic and prognostic approaches in spinal injury care. This narrative review systematically explores the practical utility of these computational methods, with a focus on their application in imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI), as well as in structured clinical data. Of the 39 studies included, 34 were focused on diagnostic applications, chiefly using deep learning to carry out tasks like vertebral fracture identification, differentiation between benign and malignant fractures, and AO fracture classification. The remaining five were prognostic, using machine learning to analyze parameters for predicting outcomes such as vertebral collapse and future fracture risk. This review highlights the potential benefit of machine learning and deep learning in spinal injury care, especially their roles in enhancing diagnostic capabilities, detailed fracture characterization, risk assessments, and individualized treatment planning.
Affiliation(s)
- Satoshi Maki
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Center for Frontier Medical Engineering, Chiba University, Chiba 263-8522, Japan
| | - Takeo Furuya
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
| | - Masahiro Inoue
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
| | - Yasuhiro Shiga
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
| | - Kazuhide Inage
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
| | - Yawara Eguchi
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
| | - Sumihisa Orita
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Center for Frontier Medical Engineering, Chiba University, Chiba 263-8522, Japan
| | - Seiji Ohtori
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
44
Jeong J, Chao CJ, Arsanjani R, Ayoub C, Lester SJ, Pereyra M, Said EF, Roarke M, Tagle-Cornell C, Koepke LM, Tsai YL, Jung-Hsuan C, Chang CC, Farina JM, Trivedi H, Patel BN, Banerjee I. Opportunistic screening for coronary artery calcium deposition using chest radiographs - a multi-objective models with multi-modal data fusion. medRxiv 2024:2024.01.10.23299699. [PMID: 38260571 PMCID: PMC10802643 DOI: 10.1101/2024.01.10.23299699]
Abstract
Background: To create an opportunistic screening strategy using multitask deep learning methods to stratify prediction of coronary artery calcium (CAC) and associated cardiovascular risk from frontal chest x-rays (CXR) and minimal data from electronic health records (EHR).
Methods: In this retrospective study, 2,121 patients with available computed tomography (CT) scans and corresponding CXR images were collected internally (Mayo Enterprise), with calculated CAC scores binned into 3 categories (0, 1-99, and 100+) as ground truths for model training. Results from the internal training were tested on multiple external datasets (domestic (EUH) and foreign (VGHTPE)) with significant racial and ethnic differences, and classification performance was compared.
Findings: Classification among the 0, 1-99, and 100+ CAC score categories was moderate on both the internal test and external datasets, reaching an average f1-score of 0.66 for Mayo, 0.62 for EUH and 0.61 for VGHTPE. For the clinically relevant binary task of 0 vs 400+ CAC classification, the performance of our model on the internal test and external datasets reached an average AUCROC of 0.84.
Interpretation: The fusion model trained on CXR performed better (0.84 average AUROC on the internal and external datasets) than existing state-of-the-art models predicting CAC scores from images alone (0.73 AUROC internally), with robust performance on external datasets. Thus, our proposed model may be used as a robust, first-pass opportunistic screening method for cardiovascular risk from regular chest radiographs. For community use, the trained model and the inference code can be downloaded with an academic open-source license from https://github.com/jeong-jasonji/MTL_CAC_classification .
Funding: The study was partially supported by National Institutes of Health award 1R01HL155410-01A1.
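A rough sketch of a multitask fusion head of the kind described: a shared image encoder and a small EHR branch feeding both a 3-way CAC-bin head and an auxiliary binary risk head. The structure and sizes are illustrative assumptions, not the released model.

```python
# Multitask image+EHR fusion sketch (assumed structure, synthetic inputs).
import torch
import torch.nn as nn

class MultiTaskCAC(nn.Module):
    def __init__(self, n_ehr=4):
        super().__init__()
        self.backbone = nn.Sequential(              # stands in for a CXR encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.ehr = nn.Sequential(nn.Linear(n_ehr, 16), nn.ReLU())
        self.head_bins = nn.Linear(32, 3)            # CAC 0 / 1-99 / 100+
        self.head_risk = nn.Linear(32, 1)            # auxiliary binary risk task

    def forward(self, cxr, ehr):
        h = torch.cat([self.backbone(cxr), self.ehr(ehr)], dim=1)
        return self.head_bins(h), self.head_risk(h)

model = MultiTaskCAC()
logits_bins, logit_risk = model(torch.randn(2, 1, 224, 224), torch.randn(2, 4))
# Training would combine both task losses, e.g.:
# loss = ce(logits_bins, bins) + bce(logit_risk.squeeze(1), risk)
print(logits_bins.shape, logit_risk.shape)
```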
45
Saponaro S, Lizzi F, Serra G, Mainas F, Oliva P, Giuliano A, Calderoni S, Retico A. Deep learning based joint fusion approach to exploit anatomical and functional brain information in autism spectrum disorders. Brain Inform 2024; 11:2. [PMID: 38194126 PMCID: PMC10776521 DOI: 10.1186/s40708-023-00217-4]
Abstract
BACKGROUND: The integration of the information encoded in multiparametric MRI images can enhance the performance of machine-learning classifiers. In this study, we investigate whether the combination of structural and functional MRI might improve the performance of a deep learning (DL) model trained to discriminate subjects with Autism Spectrum Disorders (ASD) from typically developing controls (TD).
MATERIAL AND METHODS: We analyzed both structural and functional MRI brain scans publicly available within the ABIDE I and II data collections. We considered 1383 male subjects with ages between 5 and 40 years, including 680 subjects with ASD and 703 TD from 35 different acquisition sites. We extracted morphometric and functional brain features from the MRI scans with the Freesurfer and CPAC analysis packages, respectively. Then, due to the multisite nature of the dataset, we implemented a data harmonization protocol. The ASD vs. TD classification was carried out with a multiple-input DL model, consisting of a neural network which generates a fixed-length feature representation of the data of each modality (FR-NN), and a Dense Neural Network for classification (C-NN). Specifically, we implemented a joint fusion approach to multiple source data integration. The main advantage of the latter is that the loss is propagated back to the FR-NN during training, thus creating informative feature representations for each data modality. Then, a C-NN, with the number of layers and neurons per layer optimized during model training, performs the ASD-TD discrimination. The performance was evaluated by computing the area under the receiver operating characteristic curve (AUC) within a nested 10-fold cross-validation. The brain features that drive the DL classification were identified by the SHAP explainability framework.
RESULTS: AUC values of 0.66±0.05 and 0.76±0.04 were obtained in the ASD vs. TD discrimination when only structural or functional features were considered, respectively. The joint fusion approach led to an AUC of 0.78±0.04. The set of structural and functional connectivity features identified as the most important for the two-class discrimination supports the idea that brain changes in individuals with ASD tend to occur in regions belonging to the Default Mode Network and the Social Brain.
CONCLUSIONS: Our results demonstrate that the multimodal joint fusion approach outperforms the classification results obtained with data acquired by a single MRI modality, as it efficiently exploits the complementarity of structural and functional brain information.
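The joint-fusion idea, in which the classification loss backpropagates through the per-modality feature-representation networks, can be sketched in a few lines of PyTorch on synthetic data; layer sizes below are arbitrary assumptions, not the authors' tuned architecture.

```python
# Joint fusion in the spirit of FR-NN + C-NN (sketch, synthetic data).
import torch
import torch.nn as nn

frnn_s = nn.Sequential(nn.Linear(200, 32), nn.ReLU())   # structural features
frnn_f = nn.Sequential(nn.Linear(500, 32), nn.ReLU())   # functional features
cnn = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))  # C-NN

xs, xf = torch.randn(64, 200), torch.randn(64, 500)
y = torch.randint(0, 2, (64,))                          # ASD vs TD (synthetic)

params = [*frnn_s.parameters(), *frnn_f.parameters(), *cnn.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
for _ in range(100):
    z = torch.cat([frnn_s(xs), frnn_f(xf)], dim=1)      # fixed-length fused rep
    loss = nn.functional.cross_entropy(cnn(z), y)       # loss backprops into FR-NNs
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```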
Affiliation(s)
- Sara Saponaro
- Medical Physics School, University of Pisa, Pisa, Italy.
- National Institute for Nuclear Physics (INFN), Pisa Division, Pisa, Italy.
| | - Francesca Lizzi
- National Institute for Nuclear Physics (INFN), Pisa Division, Pisa, Italy
| | - Giacomo Serra
- Department of Physics, University of Cagliari, Cagliari, Italy
- INFN, Cagliari Division, Cagliari, Italy
| | - Francesca Mainas
- INFN, Cagliari Division, Cagliari, Italy
- Department of Computer Science, University of Pisa, Pisa, Italy
| | - Piernicola Oliva
- INFN, Cagliari Division, Cagliari, Italy
- Department of Chemical, Physical, Mathematical and Natural Sciences, University of Sassari, Sassari, Italy
| | - Alessia Giuliano
- Unit of Medical Physics, Pisa University Hospital "Azienda Ospedaliero-Universitaria Pisana", Pisa, Italy
| | - Sara Calderoni
- Developmental Psychiatry Unit - IRCCS Stella Maris Foundation, Pisa, Italy
- Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy
| | - Alessandra Retico
- National Institute for Nuclear Physics (INFN), Pisa Division, Pisa, Italy
46
Lin AC, Liu Z, Lee J, Ranvier GF, Taye A, Owen R, Matteson DS, Lee D. Generating a multimodal artificial intelligence model to differentiate benign and malignant follicular neoplasms of the thyroid: A proof-of-concept study. Surgery 2024; 175:121-127. [PMID: 37925261 DOI: 10.1016/j.surg.2023.06.053]
Abstract
BACKGROUND: Machine learning has been increasingly used to develop algorithms that can improve medical diagnostics and prognostication, and has shown promise in improving the classification of thyroid ultrasound images. This proof-of-concept study aims to develop a multimodal machine-learning model to distinguish follicular carcinoma from adenoma.
METHODS: This is a retrospective study of patients with follicular adenoma or carcinoma at a single institution between 2010 and 2022. Demographic, imaging, and perioperative variables were collected. The region of interest was annotated on ultrasound and used to perform radiomics analysis. Imaging features and clinical variables were then used to create a random forest classifier to predict malignancy. Leave-one-out cross-validation was conducted to evaluate classifier performance using the area under the receiver operating characteristic curve (AUC).
RESULTS: Patients with follicular adenomas (n = 7) and carcinomas (n = 11) with complete imaging and perioperative data were included. A total of 910 features were extracted from each image. The t-distributed stochastic neighbor embedding method reduced the feature space to 2 components. The random forest classifier achieved an AUC of 0.76 (clinical data only), 0.29 (image data only), and 0.79 (multimodal data).
CONCLUSION: Our multimodal machine learning model demonstrates promising results in distinguishing follicular carcinoma from adenoma. This approach can potentially be applied in future studies to generate models for preoperative differentiation of follicular thyroid neoplasms.
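The evaluation pattern described (a random forest with leave-one-out cross-validation, appropriate for a cohort of only 18 nodules) looks roughly like the following sketch on synthetic stand-in features; the feature matrix and hyperparameters are assumptions.

```python
# Random forest + LOOCV sketch on synthetic stand-ins for 18 nodules.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(18, 12))        # reduced multimodal features per nodule
y = np.array([0] * 7 + [1] * 11)     # adenoma (0) vs carcinoma (1)

scores = np.zeros(len(y))
for train, test in LeaveOneOut().split(X):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[train], y[train])
    scores[test] = clf.predict_proba(X[test])[:, 1]  # held-out probability

print("LOOCV AUC:", roc_auc_score(y, scores))
```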
Affiliation(s)
- Ann C Lin
- Department of Surgery, Icahn School of Medicine at Mount Sinai, New York City, NY
| | - Zelong Liu
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York City, NY
| | - Justine Lee
- Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York City, NY
| | | | - Aida Taye
- Department of Surgery, Icahn School of Medicine at Mount Sinai, New York City, NY
| | - Randall Owen
- Department of Surgery, Icahn School of Medicine at Mount Sinai, New York City, NY
| | - David S Matteson
- Department of Statistics and Data Science, Cornell University, Ithaca, NY
| | - Denise Lee
- Department of Surgery, Icahn School of Medicine at Mount Sinai, New York City, NY.
47
Bai L, Wu Y, Li G, Zhang W, Zhang H, Su J. AI-enabled organoids: Construction, analysis, and application. Bioact Mater 2024; 31:525-548. [PMID: 37746662 PMCID: PMC10511344 DOI: 10.1016/j.bioactmat.2023.09.005]
Abstract
Organoids, miniature and simplified in vitro model systems that mimic the structure and function of organs, have attracted considerable interest due to their promising applications in disease modeling, drug screening, personalized medicine, and tissue engineering. Despite the substantial success in cultivating physiologically relevant organoids, challenges remain concerning the complexities of their assembly and the difficulties associated with data analysis. The advent of AI-Enabled Organoids, which interfaces with artificial intelligence (AI), holds the potential to revolutionize the field by offering novel insights and methodologies that can expedite the development and clinical application of organoids. This review succinctly delineates the fundamental concepts and mechanisms underlying AI-Enabled Organoids, summarizing the prospective applications on rapid screening of construction strategies, cost-effective extraction of multiscale image features, streamlined analysis of multi-omics data, and precise preclinical evaluation and application. We also explore the challenges and limitations of interfacing organoids with AI, and discuss the future direction of the field. Taken together, the AI-Enabled Organoids hold significant promise for advancing our understanding of organ development and disease progression, ultimately laying the groundwork for clinical application.
Affiliation(s)
- Long Bai
- Department of Orthopedics, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Organoid Research Center, Institute of Translational Medicine, Shanghai University, Shanghai, 200444, China
- National Center for Translational Medicine (Shanghai) SHU Branch, Shanghai University, Shanghai, 200444, China
- Wenzhou Institute of Shanghai University, Wenzhou, 325000, China
| | - Yan Wu
- Department of Orthopedics, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Organoid Research Center, Institute of Translational Medicine, Shanghai University, Shanghai, 200444, China
- National Center for Translational Medicine (Shanghai) SHU Branch, Shanghai University, Shanghai, 200444, China
| | - Guangfeng Li
- Organoid Research Center, Institute of Translational Medicine, Shanghai University, Shanghai, 200444, China
- National Center for Translational Medicine (Shanghai) SHU Branch, Shanghai University, Shanghai, 200444, China
- Department of Orthopedics, Shanghai Zhongye Hospital, Shanghai, 201941, China
| | - Wencai Zhang
- Department of Orthopedics, First Affiliated Hospital, Jinan University, Guangzhou, 510632, China
| | - Hao Zhang
- Department of Orthopedics, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Organoid Research Center, Institute of Translational Medicine, Shanghai University, Shanghai, 200444, China
- National Center for Translational Medicine (Shanghai) SHU Branch, Shanghai University, Shanghai, 200444, China
| | - Jiacan Su
- Department of Orthopedics, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Organoid Research Center, Institute of Translational Medicine, Shanghai University, Shanghai, 200444, China
- National Center for Translational Medicine (Shanghai) SHU Branch, Shanghai University, Shanghai, 200444, China
48
Luo Y, Chen W, Zhan L, Qiu J, Jia T. Multi-feature concatenation and multi-classifier stacking: An interpretable and generalizable machine learning method for MDD discrimination with rsfMRI. Neuroimage 2024; 285:120497. [PMID: 38142755 DOI: 10.1016/j.neuroimage.2023.120497]
Abstract
Major depressive disorder (MDD) is a serious and heterogeneous psychiatric disorder that needs accurate diagnosis. Resting-state functional MRI (rsfMRI), which captures multiple perspectives on brain structure, function, and connectivity, is increasingly applied in the diagnosis and pathological research of MDD. Different machine learning algorithms are then developed to exploit the rich information in rsfMRI and discriminate MDD patients from normal controls. Despite recent advances reported, the MDD discrimination accuracy has room for further improvement. The generalizability and interpretability of the discrimination method are not sufficiently addressed either. Here, we propose a machine learning method (MFMC) for MDD discrimination by concatenating multiple features and stacking multiple classifiers. MFMC is tested on the REST-meta-MDD data set that contains 2428 subjects collected from 25 different sites. MFMC yields 96.9% MDD discrimination accuracy, demonstrating a significant improvement over existing methods. In addition, the generalizability of MFMC is validated by the good performance when the training and testing subjects are from independent sites. The use of XGBoost as the meta classifier allows us to probe the decision process of MFMC. We identify 13 feature values related to 9 brain regions including the posterior cingulate gyrus, superior frontal gyrus orbital part, and angular gyrus, which contribute most to the classification and also demonstrate significant differences at the group level. The use of these 13 feature values alone can reach 87% of MFMC's full performance when taking all feature values. These features may serve as clinically useful diagnostic and prognostic biomarkers for MDD in the future.
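Stacking of this kind is straightforward to reproduce with scikit-learn: out-of-fold predictions from several base classifiers feed a boosted-tree meta-classifier. In the hedged sketch below, GradientBoostingClassifier stands in for the paper's XGBoost meta-classifier to keep the example dependency-free, and the data are synthetic.

```python
# Multi-classifier stacking sketch (synthetic data; GBM substitutes for XGBoost).
import numpy as np
from sklearn.ensemble import (StackingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))       # concatenated multi-feature rsfMRI vector
y = rng.integers(0, 2, 300)          # MDD vs control (synthetic)

stack = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True))],
    final_estimator=GradientBoostingClassifier(),  # stands in for XGBoost
    cv=5)                                          # out-of-fold base predictions
stack.fit(X, y)
print("training accuracy:", stack.score(X, y))
```

Using cross-validated (out-of-fold) base predictions for the meta-learner, as StackingClassifier does internally, is what keeps the meta-classifier from simply memorizing base-model overfitting.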
Affiliation(s)
- Yunsong Luo
- College of Computer and Information Science, Southwest University, Chongqing, 400715, PR China.
| | - Wenyu Chen
- College of Computer and Information Science, Southwest University, Chongqing, 400715, PR China.
| | - Ling Zhan
- College of Computer and Information Science, Southwest University, Chongqing, 400715, PR China.
| | - Jiang Qiu
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing, 400715, PR China; School of Psychology, Southwest University (SWU), Chongqing, 400715, PR China; Southwest University Branch, Collaborative Innovation Center of Assessment Toward Basic Education Quality at Beijing Normal University, Chongqing, 400715, PR China.
| | - Tao Jia
- College of Computer and Information Science, Southwest University, Chongqing, 400715, PR China.
49
Schilcher J, Nilsson A, Andlid O, Eklund A. Fusion of electronic health records and radiographic images for a multimodal deep learning prediction model of atypical femur fractures. Comput Biol Med 2024; 168:107704. [PMID: 37980797 DOI: 10.1016/j.compbiomed.2023.107704]
Abstract
Atypical femur fractures (AFF) represent a very rare type of fracture that can be difficult to discriminate radiologically from normal femur fractures (NFF). AFFs are associated with drugs that are administered to prevent osteoporosis-related fragility fractures, which are highly prevalent in the elderly population. Given that these fractures are rare and the radiologic changes are subtle, currently only 7% of AFFs are correctly identified, which hinders adequate treatment for most patients with AFF. Deep learning models could be trained to classify a fracture automatically as AFF or NFF, thereby assisting radiologists in detecting these rare fractures. Historically, only imaging data have been used for this classification task, using convolutional neural networks (CNN) or vision transformers applied to radiographs. However, to mimic situations in which all available data are used to arrive at a diagnosis, we adopted a deep learning approach based on the integration of image data and tabular data (from electronic health records) for 159 patients with AFF and 914 patients with NFF. We hypothesized that the combined data, compiled from all the radiology departments of 72 hospitals in Sweden and the Swedish National Patient Register, would improve classification accuracy compared to using only one modality. At the patient level, the area under the ROC curve (AUC) increased from 0.966 to 0.987 when using the integrated set of imaging data and seven pre-selected variables, compared to using only imaging data. More importantly, the sensitivity increased from 0.796 to 0.903. We found a greater impact of data fusion when only a randomly selected subset of available images was used to make the image and tabular data more balanced for each patient. The AUC then increased from 0.949 to 0.984, and the sensitivity increased from 0.727 to 0.849. These AUC improvements are not large, mainly because of the already excellent performance of the CNN (AUC of 0.966) when only images are used. However, the improvement is clinically highly relevant considering the importance of accuracy in medical diagnostics. We expect an even greater effect when imaging data from a clinical workflow, comprising a more diverse set of diagnostic images, are used.
Affiliation(s)
- Jörg Schilcher
- Department of Orthopedics and Experimental and Clinical Medicine, Faculty of Health Science, Linköping University, Linköping, Sweden; Wallenberg Centre for Molecular Medicine, Linköping University, Linköping, Sweden; Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
| | - Alva Nilsson
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden
| | - Oliver Andlid
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden
| | - Anders Eklund
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden; Division of Statistics and Machine Learning, Department of Computer and Information Science, Linköping University, Linköping, Sweden; Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden.
50
Xie J, Zhong W, Yang R, Wang L, Zhen X. Discriminative fusion of moments-aligned latent representation of multimodality medical data. Phys Med Biol 2023; 69:015015. [PMID: 38052076 DOI: 10.1088/1361-6560/ad1271]
Abstract
Fusion of multimodal medical data provides multifaceted, disease-relevant information for diagnosis or prognosis prediction modeling. Traditional fusion strategies such as feature concatenation often fail to learn the hidden complementary and discriminative manifestations of high-dimensional multimodal data. To this end, we propose a methodology for integrating multimodality medical data by matching their moments in a latent space, where the hidden, shared information of the multimodal data is gradually learned through optimization with multiple feature collinearity and correlation constraints. We first obtained the multimodal hidden representations by learning mappings between the original domain and a shared latent space. Within this shared space, we utilized several relational regularizations, including data attribute preservation, feature collinearity and feature-task correlation, to encourage learning of the underlying associations inherent in the multimodal data. The fused multimodal latent features were finally fed to a logistic regression classifier for diagnostic prediction. Extensive evaluations on three independent clinical datasets have demonstrated the effectiveness of the proposed method in fusing multimodal data for medical prediction modeling.
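One plausible reading of the moment-matching idea is sketched below: two modality encoders are trained with the task loss plus a penalty aligning the first and second moments (mean and variance) of their latent embeddings. The paper's full constraint set (collinearity, feature-task correlation) is richer; everything here, including the penalty weight, is an illustrative assumption.

```python
# Moment-matching latent alignment sketch (assumed form, synthetic data).
import torch
import torch.nn as nn

enc_a = nn.Sequential(nn.Linear(100, 32))   # encoder for modality A
enc_b = nn.Sequential(nn.Linear(80, 32))    # encoder for modality B

def moment_loss(za, zb):
    # Align first and second moments of the two latent distributions.
    mean_gap = (za.mean(0) - zb.mean(0)).pow(2).sum()
    var_gap = (za.var(0) - zb.var(0)).pow(2).sum()
    return mean_gap + var_gap

xa, xb = torch.randn(64, 100), torch.randn(64, 80)
y = torch.randint(0, 2, (64,)).float()
head = nn.Linear(64, 1)                      # classifier on the fused latents

opt = torch.optim.Adam([*enc_a.parameters(), *enc_b.parameters(),
                        *head.parameters()], lr=1e-3)
for _ in range(100):
    za, zb = enc_a(xa), enc_b(xb)
    logit = head(torch.cat([za, zb], dim=1)).squeeze(1)
    loss = (nn.functional.binary_cross_entropy_with_logits(logit, y)
            + 0.1 * moment_loss(za, zb))     # task loss + alignment penalty
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```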
Affiliation(s)
- Jincheng Xie
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
| | - Weixiong Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
| | - Ruimeng Yang
- Department of Radiology, the Second Affiliated Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, 510180, People's Republic of China
| | - Linjing Wang
- Radiotherapy Center, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou, Guangdong 510095, People's Republic of China
| | - Xin Zhen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China