1
Piraianu AI, Fulga A, Musat CL, Ciobotaru OR, Poalelungi DG, Stamate E, Ciobotaru O, Fulga I. Enhancing the Evidence with Algorithms: How Artificial Intelligence Is Transforming Forensic Medicine. Diagnostics (Basel) 2023;13:2992. [PMID: 37761359] [PMCID: PMC10529115] [DOI: 10.3390/diagnostics13182992]
Abstract
BACKGROUND The integration of artificial intelligence (AI) into various fields has ushered in a new era of multidisciplinary progress. Defined as the ability of a system to interpret external data, learn from it, and adapt to specific tasks, AI is poised to revolutionize the world. In forensic medicine and pathology, algorithms play a crucial role in data analysis, pattern recognition, anomaly identification, and decision making. This review explores the diverse applications of AI in forensic medicine, encompassing fields such as forensic identification, ballistics, traumatic injuries, post-mortem interval estimation, forensic toxicology, and more. RESULTS A thorough review of 113 articles revealed a subset of 32 papers directly relevant to the research, covering a wide range of applications. These included forensic identification, ballistics and additional factors of shooting, traumatic injuries, post-mortem interval estimation, forensic toxicology, sexual assaults/rape, crime scene reconstruction, virtual autopsy, and medical act quality evaluation. The studies demonstrated the feasibility and advantages of employing AI technology in various facets of forensic medicine and pathology. CONCLUSIONS The integration of AI in forensic medicine and pathology offers promising prospects for improving accuracy and efficiency in medico-legal practices. From forensic identification to post-mortem interval estimation, AI algorithms have shown the potential to reduce human subjectivity, mitigate errors, and provide cost-effective solutions. While challenges surrounding ethical considerations, data security, and algorithmic correctness persist, continued research and technological advancements hold the key to realizing the full potential of AI in forensic applications. As the field of AI continues to evolve, it is poised to play an increasingly pivotal role in the future of forensic medicine and pathology.
Affiliation(s)
- Ana Fulga
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza St., 800010 Galati, Romania; (A.-I.P.); (C.L.M.); (O.-R.C.); (D.G.P.); (O.C.); (I.F.)
- Elena Stamate
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza St., 800010 Galati, Romania; (A.-I.P.); (C.L.M.); (O.-R.C.); (D.G.P.); (O.C.); (I.F.)
2
Dual-stream parallel model of cartilage injury diagnosis based on local centroid optimization. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104229]
3
Nakagawa J, Fujima N, Hirata K, Tang M, Tsuneta S, Suzuki J, Harada T, Ikebe Y, Homma A, Kano S, Minowa K, Kudo K. Utility of the deep learning technique for the diagnosis of orbital invasion on CT in patients with a nasal or sinonasal tumor. Cancer Imaging 2022;22:52. [PMID: 36138422] [PMCID: PMC9502604] [DOI: 10.1186/s40644-022-00492-0]
Abstract
Background In nasal or sinonasal tumors, orbital invasion beyond the periorbita is one of the important criteria in the selection of the surgical procedure. We investigated the usefulness of the convolutional neural network (CNN)-based deep learning technique for the diagnosis of orbital invasion, using computed tomography (CT) images. Methods A total of 168 lesions with malignant nasal or sinonasal tumors were divided into a training dataset (n = 119) and a test dataset (n = 49). The final diagnosis (invasion-positive or -negative) was determined by experienced radiologists who carefully reviewed all of the CT images. In a CNN-based deep learning analysis, a slice of the square target region that included the orbital bone wall was extracted and fed into a deep-learning training session to create a diagnostic model using transfer learning with the Visual Geometry Group 16 (VGG16) model. The test dataset was subsequently evaluated by the CNN-based diagnostic models and by two other radiologists who were not specialized in head and neck radiology. Approximately two months after the first reading session, the two radiologists again reviewed all of the images in the test dataset, this time referring to the diagnoses provided by the trained CNN-based diagnostic model. Results The diagnostic accuracy was 0.92 with the CNN-based diagnostic models, whereas the diagnostic accuracies of the two radiologists at the first reading session were 0.49 and 0.45, respectively. In the second reading session (diagnosing with the assistance of the CNN-based diagnostic model), marked improvements in diagnostic accuracy were observed (0.94 and 1.00, respectively). Conclusion The CNN-based deep learning technique can be a useful support tool for assessing the presence of orbital invasion on CT images, especially for non-specialized radiologists.
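The preprocessing step this abstract describes, cropping a square target region around the orbital bone wall and resizing it to the fixed input size a VGG16-style network expects (224 x 224 for the original VGG16), can be sketched in plain numpy. The patch size, center coordinates, and nearest-neighbour resize below are illustrative assumptions, not the paper's exact pipeline, and the deep-learning call itself is omitted:

```python
import numpy as np

def extract_roi(ct_slice, center, size):
    """Crop a square patch around a target point from a 2-D CT slice.

    Out-of-bounds start coordinates are clipped to the image border.
    """
    r, c = center
    half = size // 2
    r0, c0 = max(r - half, 0), max(c - half, 0)
    return ct_slice[r0:r0 + size, c0:c0 + size]

def resize_nearest(patch, out_size):
    """Nearest-neighbour resize to the fixed square input size of the network."""
    rows = (np.arange(out_size) * patch.shape[0] / out_size).astype(int)
    cols = (np.arange(out_size) * patch.shape[1] / out_size).astype(int)
    return patch[np.ix_(rows, cols)]

ct_slice = np.random.rand(512, 512)              # stand-in for one CT slice
patch = extract_roi(ct_slice, center=(300, 120), size=96)
model_input = resize_nearest(patch, 224)
print(patch.shape, model_input.shape)            # (96, 96) (224, 224)
```

In a real transfer-learning setup the resized patch would then be normalized and passed to a pre-trained VGG16 whose final layers are re-trained for the invasion-positive/-negative task.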
Affiliation(s)
- Junichi Nakagawa
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Department of Nuclear Medicine, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan; Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Minghui Tang
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Satonori Tsuneta
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan
- Jun Suzuki
- Department of Radiology, Teine Keijinkai Hospital, 1-40, Maeda 1-12, Teine-ku, Sapporo, Hokkaido, 006-8555, Japan
- Taisuke Harada
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan
- Yohei Ikebe
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan; Center for Cause of Death Investigation, Faculty of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Akihiro Homma
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, 060-8638, Japan
- Satoshi Kano
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, 060-8638, Japan
- Kazuyuki Minowa
- Faculty of Dental Medicine, Department of Radiology, Hokkaido University, N13 W7, Kita-Ku, Sapporo, Hokkaido, 060-8586, Japan
- Kohsuke Kudo
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan; Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, N15 W7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan; Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, N14 W5, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
4
Rao D, K P, Singh R, J V. Automated segmentation of the larynx on computed tomography images: a review. Biomed Eng Lett 2022;12:175-183. [PMID: 35529346] [PMCID: PMC9046475] [DOI: 10.1007/s13534-022-00221-3]
Abstract
The larynx, or the voice box, is a common site of occurrence of Head and Neck cancers. Yet, automated segmentation of the larynx has received very little attention. Segmentation of organs is an essential step in cancer treatment planning. Computed tomography scans are routinely used to assess the extent of tumor spread in the Head and Neck, as they are fast to acquire and tolerant to some movement. This paper reviews various automated detection and segmentation methods used for the larynx on computed tomography images. Image registration and deep learning approaches to segmenting the laryngeal anatomy are compared, highlighting their strengths and shortcomings. A list of available annotated laryngeal computed tomography datasets is compiled to encourage further research, and commercial software currently available for larynx contouring is briefly described. We conclude that the lack of standardisation on larynx boundaries and the complexity of the relatively small structure make automated segmentation of the larynx on computed tomography images a challenge. Reliable computer-aided intervention in the contouring and segmentation process will help clinicians easily verify their findings and look for oversights in diagnosis. This review is useful for research that applies artificial intelligence to Head and Neck cancer, specifically work that deals with the segmentation of laryngeal anatomy.
5
Li MD, Ahmed SR, Choy E, Lozano-Calderon SA, Kalpathy-Cramer J, Chang CY. Artificial intelligence applied to musculoskeletal oncology: a systematic review. Skeletal Radiol 2022;51:245-256. [PMID: 34013447] [DOI: 10.1007/s00256-021-03820-w]
Abstract
Developments in artificial intelligence have the potential to improve the care of patients with musculoskeletal tumors. We performed a systematic review of the published scientific literature to identify the current state of the art of artificial intelligence applied to musculoskeletal oncology, including both primary and metastatic tumors, and across the radiology, nuclear medicine, pathology, clinical research, and molecular biology literature. Through this search, we identified 252 primary research articles, of which 58 used deep learning and 194 used other machine learning techniques. Articles involving deep learning have mostly involved bone scintigraphy, histopathology, and radiologic imaging. Articles involving other machine learning techniques have mostly involved transcriptomic analyses, radiomics, and clinical outcome prediction models using medical records. These articles predominantly present proof-of-concept work, other than the automated bone scan index for bone metastasis quantification, which has translated to clinical workflows in some regions. We systematically review and discuss this literature, highlight opportunities for multidisciplinary collaboration, and identify potentially clinically useful topics with a relative paucity of research attention. Musculoskeletal oncology is an inherently multidisciplinary field, and future research will need to integrate and synthesize noisy siloed data from across clinical, imaging, and molecular datasets. Building the data infrastructure for collaboration will help to accelerate progress towards making artificial intelligence truly useful in musculoskeletal oncology.
Affiliation(s)
- Matthew D Li
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Syed Rakin Ahmed
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Harvard Medical School, Harvard Graduate Program in Biophysics, Harvard University, Cambridge, MA, USA; Geisel School of Medicine at Dartmouth, Dartmouth College, Hanover, NH, USA
- Edwin Choy
- Division of Hematology Oncology, Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Santiago A Lozano-Calderon
- Department of Orthopedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Connie Y Chang
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
6
Matsuda S, Yoshimura H. Personal identification with artificial intelligence under COVID-19 crisis: a scoping review. Syst Rev 2022;11:7. [PMID: 34991695] [PMCID: PMC8735726] [DOI: 10.1186/s13643-021-01879-z]
Abstract
BACKGROUND Artificial intelligence is useful for building objective and rapid personal identification systems. It is important to research and develop personal identification methods as social and institutional infrastructure. A critical consideration during the coronavirus disease 2019 pandemic is that there is no contact between the subjects and personal identification systems. The aim of this study was to organize the recent 5-year development of contactless personal identification methods that use artificial intelligence. METHODS This study used a scoping review approach to map the progression of contactless personal identification systems using artificial intelligence over the past 5 years. An electronic systematic literature search was conducted using the PubMed, Web of Science, Cochrane Library, CINAHL, and IEEE Xplore databases. Studies published between January 2016 and December 2020 were included in the study. RESULTS By performing an electronic literature search, 83 articles were extracted. Based on the PRISMA flow diagram, 8 eligible articles were included in this study. These eligible articles were divided based on the analysis targets as follows: (1) face and/or body, (2) eye, and (3) forearm and/or hand. Artificial intelligence, including convolutional neural networks, contributed to the progress of research on contactless personal identification methods. CONCLUSIONS This study clarified that contactless personal identification methods using artificial intelligence have progressed and that they have used information obtained from the face and/or body, eyes, and forearm and/or hand.
Affiliation(s)
- Shinpei Matsuda
- Department of Dentistry and Oral Surgery, Unit of Sensory and Locomotor Medicine, Division of Medicine, Faculty of Medical Sciences, University of Fukui, 23-3 Matsuokashimoaizuki, Eiheiji-cho, Yoshida-gun, 910-1193, Fukui, Japan.
- Hitoshi Yoshimura
- Department of Dentistry and Oral Surgery, Unit of Sensory and Locomotor Medicine, Division of Medicine, Faculty of Medical Sciences, University of Fukui, 23-3 Matsuokashimoaizuki, Eiheiji-cho, Yoshida-gun, 910-1193, Fukui, Japan
7
Lassau N, Bousaid I, Chouzenoux E, Verdon A, Balleyguier C, Bidault F, Mousseaux E, Harguem-Zayani S, Gaillandre L, Bensalah Z, Doutriaux-Dumoulin I, Monroc M, Haquin A, Ceugnart L, Bachelle F, Charlot M, Thomassin-Naggara I, Fourquet T, Dapvril H, Orabona J, Chamming's F, El Haik M, Zhang-Yin J, Guillot MS, Ohana M, Caramella T, Diascorn Y, Airaud JY, Cuingnet P, Gencer U, Lawrance L, Luciani A, Cotten A, Meder JF. Three artificial intelligence data challenges based on CT and ultrasound. Diagn Interv Imaging 2021;102:669-674. [PMID: 34312111] [DOI: 10.1016/j.diii.2021.06.005]
Abstract
PURPOSE The 2020 edition of these Data Challenges was organized by the French Society of Radiology (SFR) from September 28 to September 30, 2020. The goals were to propose innovative artificial intelligence solutions for current relevant problems in radiology and to build a large database of multimodal medical images of ultrasound and computed tomography (CT) on these subjects from several French radiology centers. MATERIALS AND METHODS This year, the aim was to create data challenge objectives in line with the clinical routine of radiologists, with less preprocessing of data and annotation, leaving a large part of the preprocessing task to the participating teams. The objectives were proposed by the different organizations depending on their core areas of expertise. A dedicated platform was used to upload the medical image data and to automatically anonymize it. RESULTS Three challenges were proposed: classification of benign or malignant breast nodules on ultrasound examinations, detection and contouring of pathological neck lymph nodes on cervical CT examinations, and classification of the calcium score of coronary calcifications on thoracic CT examinations. A total of 2076 medical examinations were included in the database for the three challenges, within three months, by 18 different centers, of which 12% were excluded. The 39 participants were divided into six multidisciplinary teams; the coronary calcification score challenge was solved with a concordance index > 95%, and the other two with scores of 67% (breast nodule classification) and 63% (neck lymph node detection and contouring).
Affiliation(s)
- Nathalie Lassau
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- Imad Bousaid
- Direction de la Transformation Numérique et des Systèmes d'Information, Institut Gustave Roussy, 94800 Villejuif, France
- Antoine Verdon
- Direction de la Transformation Numérique et des Systèmes d'Information, Institut Gustave Roussy, 94800 Villejuif, France
- Corinne Balleyguier
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- François Bidault
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- Elie Mousseaux
- Unité Fonctionnelle d'Imagerie Cardiovasculaire Non Invasive, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
- Sana Harguem-Zayani
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- Loic Gaillandre
- Centre Libéral d'Imagerie Médicale Agglomération Lille, 59800 Lille, France
- Zoubir Bensalah
- Department of Radiology, Centre Hospitalier St Jean, 66000 Perpignan, France
- Michèle Monroc
- Department of Radiology, Clinique Saint Antoine, 76230 Bois-Guillaume, France
- Audrey Haquin
- Department of Radiology, Hôpital de la Croix-Rousse - HCL, 69004 Lyon, France
- Luc Ceugnart
- Department of Radiology, Centre Oscar Lambret, 59000 Lille, France
- Mathilde Charlot
- Department of Radiology, Hôpital Lyon Sud - HCL, 69310 Pierre-Bénite, France
- Tiphaine Fourquet
- Department of Radiology, Centre Hospitalier Universitaire de Lille, 59000 Lille, France
- Héloise Dapvril
- Service d'Imagerie de la Femme, Centre Hospitalier de Valenciennes, 59300 Valenciennes, France
- Joseph Orabona
- Department of Radiology, Centre Hospitalier de Bastia, 20600 Bastia, France
- Mickael El Haik
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- Jules Zhang-Yin
- Department of Radiology, Hôpital Tenon, AP-HP, 75020 Paris, France
- Marc-Samir Guillot
- Unité Fonctionnelle d'Imagerie Cardiovasculaire Non Invasive, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
- Mickaël Ohana
- Department of Radiology, Centre Hospitalier Universitaire de Strasbourg, 67200 Strasbourg, France
- Thomas Caramella
- Department of Radiology, Institut Arnault Tzanck, 06700 Saint-Laurent du Var, France
- Yann Diascorn
- Department of Radiology, Institut Arnault Tzanck, 06700 Saint-Laurent du Var, France
- Philippe Cuingnet
- Department of Radiology, Centre Hospitalier de Douai, 59507 Douai, France
- Umit Gencer
- Unité Fonctionnelle d'Imagerie Cardiovasculaire Non Invasive, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
- Littisha Lawrance
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France
- Alain Luciani
- Collège des Enseignants de Radiologie de France, 75013 Paris, France; Department of Radiology, Centre Hospitalier Henri Mondor, 94000 Créteil, France
- Anne Cotten
- Musculoskeletal Imaging Department, Lille Regional University Hospital, 59000 Lille, France
- Jean-François Meder
- Department of Neuroradiology, Centre Hospitalier Sainte-Anne, 75014 Paris, France; Université de Paris, Faculté de Médecine, 75006 Paris, France
8
Courot A, Cabrera DLF, Gogin N, Gaillandre L, Rico G, Zhang-Yin J, Elhaik M, Bidault F, Bousaid I, Lassau N. Automatic cervical lymphadenopathy segmentation from CT data using deep learning. Diagn Interv Imaging 2021;102:675-681. [PMID: 34023232] [DOI: 10.1016/j.diii.2021.04.009]
Abstract
PURPOSE The purpose of this study was to develop a fast and automatic algorithm to detect and segment lymphadenopathy from head and neck computed tomography (CT) examination. MATERIALS AND METHODS An ensemble of three convolutional neural networks (CNNs) based on a U-Net architecture were trained to segment the lymphadenopathies in a fully supervised framework. The resulting predictions were assessed using the Dice similarity coefficient (DSC) on examinations presenting one or more adenopathies. On examinations without adenopathies, the score was given by the formula M/(M+A) where M was the mean adenopathy volume per patient and A the volume segmented by the algorithm. The networks were trained on 117 annotated CT acquisitions. RESULTS The test set included 150 additional CT acquisitions unseen during the training. The performance on the test set yielded a mean score of 0.63. CONCLUSION Despite limited available data and partial annotations, our CNN based approach achieved promising results in the task of cervical lymphadenopathy segmentation. It has the potential to bring precise quantification to the clinical workflow and to assist the clinician in the detection task.
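The two evaluation measures this abstract describes are compact enough to state in code: the Dice similarity coefficient (DSC) for examinations with adenopathies, and the M/(M+A) score for examinations without. The sketch below is a plain-numpy illustration; the toy masks are stand-ins, not the study's data:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

def negative_exam_score(mean_adenopathy_volume, segmented_volume):
    """Score for examinations without adenopathies: M / (M + A), where M is
    the mean adenopathy volume per patient and A is the volume (all of it
    false positive) segmented by the algorithm."""
    return mean_adenopathy_volume / (mean_adenopathy_volume + segmented_volume)

truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True  # 16 pixels
pred  = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True   # 16 pixels, 9 overlap
print(round(dice(pred, truth), 4))       # 0.5625
print(negative_exam_score(10.0, 0.0))    # 1.0 when nothing is falsely segmented
```

Note how the negative-exam score equals 1 when the algorithm segments nothing on a truly negative examination and decays toward 0 as the false-positive volume A grows.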
Affiliation(s)
- Diana L F Cabrera
- General Electric Healthcare, 78530 Buc, France; Université de Reims Champagne Ardenne, CReSTIC EA 3804, 51097 Reims, France
- Loic Gaillandre
- Centre Libéral d'Imagerie Médicale de l'Agglomération Lilloise, 59000 Lille, France
- François Bidault
- Department of Radiology, Institut Gustave Roussy, 94800 Villejuif, France; Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France
- Imad Bousaid
- Institut Gustave Roussy, 94800 Villejuif, France
- Nathalie Lassau
- Department of Radiology, Institut Gustave Roussy, 94800 Villejuif, France; Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France
9
Yu Y, Chen X, Zhu X, Zhang P, Hou Y, Zhang R, Wu C. Performance of Deep Transfer Learning for Detecting Abnormal Fundus Images. J Curr Ophthalmol 2021;32:368-374. [PMID: 33553839] [PMCID: PMC7861106] [DOI: 10.4103/joco.joco_123_20]
Abstract
Purpose To develop and validate a deep transfer learning (DTL) algorithm for detecting abnormalities in fundus images from non-mydriatic fundus photography examinations. Methods A total of 1295 fundus images were collected to develop and validate a DTL algorithm for detecting abnormal fundus images. After removing 366 poor-quality images, the DTL model was developed using 929 (370 normal and 559 abnormal) fundus images. Data preprocessing was performed to normalize the images. The Inception-ResNet-v2 architecture was applied to achieve transfer learning. We tested our model using a subset of the publicly available Messidor dataset (366 images) and evaluated the testing performance of the DTL model for detecting abnormal fundus images. Results In the internal validation dataset (n = 273 images), the area under the curve (AUC), sensitivity, accuracy, and specificity of DTL for correctly classified fundus images were 0.997, 97.41%, 97.07%, and 96.82%, respectively. For the test dataset (n = 273 images), the AUC, sensitivity, accuracy, and specificity of the DTL for correctly classifying fundus images were 0.926, 88.17%, 87.18%, and 86.67%, respectively. Conclusion DTL showed high sensitivity and specificity for detecting abnormal fundus-related diseases. Further research is necessary to improve this method and evaluate the applicability of DTL in community health-care centers.
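As a point of reference for the figures above: sensitivity, specificity, and accuracy are derived directly from confusion-matrix counts and are naturally reported as percentages, whereas AUC is a unitless ranking measure in [0, 1] computed from scored predictions. A minimal sketch of the count-based metrics, using made-up counts (not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts.

    tp/fn: abnormal images classified correctly/incorrectly;
    tn/fp: normal images classified correctly/incorrectly.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts only:
sens, spec, acc = classification_metrics(tp=45, fp=5, tn=40, fn=10)
print(f"{sens:.2%} {spec:.2%} {acc:.2%}")   # prints 81.82% 88.89% 85.00%
```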
Affiliation(s)
- Yan Yu
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
- Xiao Chen
- Optoelectronic Technology Research Center, Anhui Normal University, Wuhu, China
- XiangBing Zhu
- Optoelectronic Technology Research Center, Anhui Normal University, Wuhu, China
- PengFei Zhang
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
- YinFen Hou
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
- RongRong Zhang
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
- ChangFan Wu
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
10
Improvement in the Convolutional Neural Network for Computed Tomography Images. Appl Sci (Basel) 2021. [DOI: 10.3390/app11041505]
Abstract
Background and purpose. This study evaluated a modified specialized convolutional neural network (CNN) to improve the accuracy of medical images. Materials and Methods. We defined computed tomography (CT) images as belonging to one of the following 10 classes: head, neck, chest, abdomen, and pelvis with and without contrast media, with 10,000 images per class. We modified the CNN based on the AlexNet with an input size of 512 × 512. We resized the filter sizes of the convolution layer and max pooling. Using these modified CNNs, various models were created and evaluated. The improved CNN was evaluated to classify the presence or absence of the pancreas in the CT images. We compared the overall accuracy, which was calculated from images not used for training, to that of the ResNet. Results. The overall accuracies of the most improved CNN and ResNet in the 10 classes were 94.8% and 89.3%, respectively. The filter sizes of the improved CNN for the convolution layer were (13, 13), (7, 7), (5, 5), (5, 5), and (5, 5) in order from the first layer, and that of max-pooling was (7, 7). The calculation times of the most improved CNN and ResNet were 56 and 120 min, respectively. Regarding the classification of the pancreas, the overall accuracies of the most improved CNN and ResNet were 75.75% and 58.25%, respectively. The calculation times of the most improved CNN and ResNet were 36 and 55 min, respectively. Conclusion. By optimizing the filter size of the convolution layer and max-pooling of 512 × 512 images, we quickly obtained a highly accurate medical image classification model. This improved CNN can be useful for classifying lesions and anatomies for related diagnostic aid applications.
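The effect of the reported filter sizes on a 512 × 512 input can be checked with the standard convolution/pooling output-size formula. The abstract does not give strides or padding, so the AlexNet-like stride choices below are purely illustrative assumptions:

```python
def conv_out(n, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer on a square input."""
    return (n + 2 * pad - kernel) // stride + 1

# First modified conv layer (13 x 13) and the 7 x 7 max-pooling applied to a
# 512 x 512 image; stride 4 then stride 2 are assumptions, not from the paper.
n = conv_out(512, 13, stride=4)   # 125
n = conv_out(n, 7, stride=2)
print(n)                          # 60 under these assumed strides
```

The same formula applied layer by layer shows why enlarging the first-layer filters and pooling windows is needed to bring a 512 × 512 input down to feature-map sizes comparable to AlexNet's original 227 × 227 pipeline.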
11
Chassagnon G, Dohan A. Artificial intelligence: from challenges to clinical implementation. Diagn Interv Imaging 2020;101:763-764. [DOI: 10.1016/j.diii.2020.10.007]
12
Lassau N, Bousaid I, Chouzenoux E, Lamarque J, Charmettant B, Azoulay M, Cotton F, Khalil A, Lucidarme O, Pigneur F, Benaceur Y, Sadate A, Lederlin M, Laurent F, Chassagnon G, Ernst O, Ferreti G, Diascorn Y, Brillet P, Creze M, Cassagnes L, Caramella C, Loubet A, Dallongeville A, Abassebay N, Ohana M, Banaste N, Cadi M, Behr J, Boussel L, Fournier L, Zins M, Beregi J, Luciani A, Cotten A, Meder J. Three artificial intelligence data challenges based on CT and MRI. Diagn Interv Imaging 2020;101:783-788. [DOI: 10.1016/j.diii.2020.03.006]
13
Morid MA, Borjali A, Del Fiol G. A scoping review of transfer learning research on medical image analysis using ImageNet. Comput Biol Med 2020; 128:104115. [PMID: 33227578 DOI: 10.1016/j.compbiomed.2020.104115] [Citation(s) in RCA: 125] [Impact Index Per Article: 31.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2020] [Revised: 10/19/2020] [Accepted: 11/09/2020] [Indexed: 02/06/2023]
Abstract
OBJECTIVE Employing transfer learning (TL) with convolutional neural networks (CNNs) well trained on the non-medical ImageNet dataset has shown promising results for medical image analysis in recent years. We aimed to conduct a scoping review to identify these studies and summarize their characteristics in terms of problem description, input, methodology, and outcome. MATERIALS AND METHODS To identify relevant studies, MEDLINE, IEEE, and the ACM digital library were searched for studies published between June 1st, 2012 and January 2nd, 2020. Two investigators independently reviewed articles to determine eligibility and to extract data according to a study protocol defined a priori. RESULTS After screening of 8421 articles, 102 met the inclusion criteria. Of 22 anatomical areas, eye (18%), breast (14%), and brain (12%) were the most commonly studied. Data augmentation was performed in 72% of fine-tuning TL studies versus 15% of feature-extracting TL studies. Inception models were the most commonly used in breast-related studies (50%), while VGGNet was the most common in eye (44%), skin (50%), and tooth (57%) studies. AlexNet for brain (42%) and DenseNet for lung studies (38%) were the most frequently used models. Inception models were the most frequently used for studies that analyzed ultrasound (55%), endoscopy (57%), and skeletal system X-rays (57%). VGGNet was the most common for fundus (42%) and optical coherence tomography images (50%). AlexNet was the most frequent model for brain MRIs (36%) and breast X-rays (50%). Of the studies, 35% compared their model with other well-trained CNN models, and 33% provided visualization for interpretation. DISCUSSION This study identified the most prevalent tracks of implementation in the literature for data preparation, methodology selection, and output evaluation across various medical image analysis tasks. We also identified several critical research gaps in TL studies on medical image analysis. The findings of this scoping review can be used in future TL studies to guide the selection of appropriate research approaches, as well as to identify research gaps and opportunities for innovation.
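The data augmentation reported in most fine-tuning TL studies above typically means simple label-preserving transforms such as flips and crops. A minimal, dependency-free sketch of a horizontal flip on an image stored as a nested list of pixel rows (illustrative only, not taken from any reviewed study):

```python
def hflip(image):
    """Horizontally flip an image given as a list of pixel rows."""
    return [row[::-1] for row in image]

print(hflip([[1, 2, 3],
             [4, 5, 6]]))  # -> [[3, 2, 1], [6, 5, 4]]
```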
Affiliation(s)
- Mohammad Amin Morid
- Department of Information Systems and Analytics, Leavey School of Business, Santa Clara University, Santa Clara, CA, USA.
- Alireza Borjali
- Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA; Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA
- Guilherme Del Fiol
- Department of Biomedical Informatics, University of Utah, Salt Lake City, UT, USA
14
Blum A, Gillet R, Rauch A, Urbaneja A, Biouichi H, Dodin G, Germain E, Lombard C, Jaquet P, Louis M, Simon L, Gondim Teixeira P. 3D reconstructions, 4D imaging and postprocessing with CT in musculoskeletal disorders: Past, present and future. Diagn Interv Imaging 2020; 101:693-705. [PMID: 33036947 DOI: 10.1016/j.diii.2020.09.008] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2020] [Revised: 09/12/2020] [Accepted: 09/15/2020] [Indexed: 12/30/2022]
Abstract
Three-dimensional (3D) imaging and postprocessing are common tasks performed daily in many disciplines. The purpose of this article is to review the new postprocessing tools available. Although 3D imaging can be applied to all anatomical regions and used with all imaging techniques, its most varied and relevant applications are found with computed tomography (CT) data in musculoskeletal imaging. These new applications include global illumination rendering (GIR), unfolded rib reformations, subtracted CT angiography for bone analysis, dynamic studies, temporal subtraction, and image fusion. In all of these tasks, registration and segmentation are two basic processes that affect the quality of the results. GIR simulates the complete interaction of photons with the scanned object, providing photorealistic volume rendering. Reformations that unfold the rib cage allow more accurate and faster diagnosis of rib lesions. Dynamic CT can be applied to cinematic joint evaluations as well as to perfusion and angiographic studies. Finally, more traditional techniques, such as minimum intensity projection, might find new applications for bone evaluation with the advent of ultra-high-resolution CT scanners. These tools can be used synergistically to provide morphologic, topographic, and functional information and increase the versatility of CT.
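Minimum intensity projection, mentioned above, reduces a volume to a 2D image by keeping the lowest voxel value along the projection axis. A minimal, dependency-free sketch on a volume stored as nested lists (the `volume[z][y][x]` indexing is an assumption for illustration):

```python
def min_intensity_projection(volume):
    """Project a 3D volume (indexed volume[z][y][x]) along z,
    keeping the minimum intensity at each (y, x) position."""
    depth = len(volume)
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[min(volume[z][y][x] for z in range(depth))
             for x in range(cols)]
            for y in range(rows)]

vol = [[[5, 2], [3, 4]],
       [[1, 6], [7, 0]]]
print(min_intensity_projection(vol))  # -> [[1, 2], [3, 0]]
```

Maximum intensity projection, the more familiar variant, is the same computation with `max` in place of `min`.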
Affiliation(s)
- A Blum
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France; Unité INSERM U1254 Imagerie Adaptative Diagnostique et Interventionnelle (IADI), CHRU of Nancy, 54511 Vandœuvre-lès-Nancy, France.
- R Gillet
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- A Rauch
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- A Urbaneja
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- H Biouichi
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- G Dodin
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- E Germain
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- C Lombard
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- P Jaquet
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- M Louis
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- L Simon
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France
- P Gondim Teixeira
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France; Unité INSERM U1254 Imagerie Adaptative Diagnostique et Interventionnelle (IADI), CHRU of Nancy, 54511 Vandœuvre-lès-Nancy, France
15
Abstract
The use of artificial intelligence (AI) is a powerful tool for image analysis that is increasingly being evaluated by radiology professionals. However, because these methods were developed for the analysis of non-medical image data, and because data structures in radiology departments are not "AI ready", implementing AI in radiology is not straightforward. The purpose of this review is to guide the reader through the pipeline of an AI project for automated image analysis in radiology and thereby encourage its implementation in radiology departments. At the same time, this review aims to enable readers to critically appraise articles on AI-based software in radiology.