1
Li W, Liu X. Anxiety about artificial intelligence from patient and doctor-physician. Patient Educ Couns 2024; 133:108619. [PMID: 39721348] [DOI: 10.1016/j.pec.2024.108619]
Abstract
OBJECTIVE: This paper investigates the anxiety surrounding the integration of artificial intelligence (AI) in doctor-patient interactions, analyzing the perspectives of both patients and healthcare providers to identify key concerns and potential solutions.
METHODS: The study employs a comprehensive literature review, examining existing research on AI in healthcare, and synthesizes findings from surveys and studies that explore the attitudes of patients and doctors towards AI applications in medical settings.
RESULTS: The analysis reveals that patient anxiety encompasses algorithm aversion, robophobia, lack of humanistic care, challenges in human-machine interaction, and concerns about AI's universal applicability. Doctors' anxieties stem from fears of replacement, legal liabilities, the emotional impact of changes to the work environment, and technological apprehension. The paper highlights the need for patient participation, humanistic care, improved interaction methods, educational training, and policy guidelines to foster public understanding and trust in AI.
CONCLUSION: Addressing AI anxiety in doctor-patient relationships is crucial for successfully integrating AI in healthcare. The paper emphasizes the importance of respecting patient autonomy, addressing the lack of humanistic care, and improving patient-AI interaction to enhance the patient experience and reduce medical errors.
PRACTICE IMPLICATIONS: Future research should focus on understanding the needs and concerns of patients and doctors, strengthening medical humanities education, and establishing policies to guide the ethical use of AI in medicine. The study also recommends public education to enhance understanding of and trust in AI, in order to improve medical services and ensure professional development and a stable work environment for doctors.
Affiliation(s)
- Wenyu Li
- School of Marxism, Capital Normal University, Beijing, China.
- Xueen Liu
- Beijing Hepingli Hospital, Beijing, China.
2
Aoyama R, Komatsu M, Harada N, Komatsu R, Sakai A, Takeda K, Teraya N, Asada K, Kaneko S, Iwamoto K, Matsuoka R, Sekizawa A, Hamamoto R. Automated Assessment of the Pulmonary Artery-to-Ascending Aorta Ratio in Fetal Cardiac Ultrasound Screening Using Artificial Intelligence. Bioengineering (Basel) 2024; 11:1256. [PMID: 39768074] [PMCID: PMC11673077] [DOI: 10.3390/bioengineering11121256]
Abstract
The three-vessel view (3VV) is a standardized transverse scanning plane used in fetal cardiac ultrasound screening to measure the absolute and relative diameters of the pulmonary artery (PA), ascending aorta (Ao), and superior vena cava, as required. The PA/Ao ratio is used to support the diagnosis of congenital heart disease (CHD). However, vascular diameters are measured manually by examiners, which causes intra- and interobserver variability in clinical practice. In the present study, we aimed to develop an artificial intelligence-based method for the standardized and quantitative evaluation of 3VV. In total, 315 cases and 20 examiners were included in this study. We used the object-detection software YOLOv7 for the automated extraction of 3VV images and compared three segmentation algorithms: DeepLabv3+, UNet3+, and SegFormer. Using the PA/Ao ratios based on vascular segmentation, YOLOv7 plus UNet3+ yielded the most appropriate classification for normal fetuses and those with CHD. Furthermore, YOLOv7 plus UNet3+ achieved an arithmetic mean value of 0.883 for the area under the receiver operating characteristic curve, which was higher than 0.749 for residents and 0.808 for fellows. Our automated method may support unskilled examiners in performing quantitative and objective assessments of 3VV images during fetal cardiac ultrasound screening.
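Once the PA and Ao are segmented, the PA/Ao ratio itself is straightforward to compute. The following is a minimal illustrative sketch, not the authors' implementation: the function names and the equivalent-circle-diameter heuristic for converting a segmentation mask to a vessel diameter are assumptions.

```python
import numpy as np

def equivalent_diameter(mask: np.ndarray, mm_per_px: float = 1.0) -> float:
    """Diameter of a circle with the same area as the segmented region."""
    area_px = float(mask.astype(bool).sum())
    return 2.0 * np.sqrt(area_px / np.pi) * mm_per_px

def pa_ao_ratio(pa_mask: np.ndarray, ao_mask: np.ndarray) -> float:
    """PA/Ao ratio from two binary segmentation masks (the pixel scale cancels)."""
    return equivalent_diameter(pa_mask) / equivalent_diameter(ao_mask)

# Toy example: two filled squares standing in for vessel cross-sections.
pa = np.zeros((64, 64)); pa[10:30, 10:30] = 1   # area 400 px
ao = np.zeros((64, 64)); ao[40:50, 40:50] = 1   # area 100 px
ratio = pa_ao_ratio(pa, ao)                     # diameter ratio = sqrt(400/100) = 2.0
```

In practice the masks would come from the segmentation network (e.g., UNet3+) and the ratio would then feed the normal-vs-CHD classification step described above.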
Affiliation(s)
- Rina Aoyama
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan
- Masaaki Komatsu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Naoaki Harada
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- HLPF Data Analytics Department, Fujitsu Ltd., 1-5 Omiya-cho, Saiwai-ku, Kawasaki 212-0014, Japan
- Department of NCC Cancer Science, Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Institute of Science Tokyo, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Reina Komatsu
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan
- Akira Sakai
- Artificial Intelligence Laboratory, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki 211-8588, Japan
- Katsuji Takeda
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Naoki Teraya
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ken Asada
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Syuzo Kaneko
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Kazuki Iwamoto
- HLPF Data Analytics Department, Fujitsu Ltd., 1-5 Omiya-cho, Saiwai-ku, Kawasaki 212-0014, Japan
- Ryu Matsuoka
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan
- Akihiko Sekizawa
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan
- Ryuji Hamamoto
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of NCC Cancer Science, Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Institute of Science Tokyo, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
3
Mendizabal-Ruiz G, Paredes O, Álvarez Á, Acosta-Gómez F, Hernández-Morales E, González-Sandoval J, Mendez-Zavala C, Borrayo E, Chavez-Badiola A. Artificial Intelligence in Human Reproduction. Arch Med Res 2024; 55:103131. [PMID: 39615376] [DOI: 10.1016/j.arcmed.2024.103131]
Abstract
The use of artificial intelligence (AI) in human reproduction is a rapidly evolving field with both exciting possibilities and ethical considerations. This technology has the potential to improve success rates and reduce the emotional and financial burden of infertility. However, it also raises ethical and privacy concerns. This paper presents an overview of the current and potential applications of AI in human reproduction. It explores the use of AI in various aspects of reproductive medicine, including fertility tracking, assisted reproductive technologies, management of pregnancy complications, and laboratory automation. In addition, we discuss the need for robust ethical frameworks and regulations to ensure the responsible and equitable use of AI in reproductive medicine.
Affiliation(s)
- Gerardo Mendizabal-Ruiz
- Conceivable Life Sciences, Department of Research and Development, Guadalajara, Jalisco, Mexico; Laboratorio de Percepción Computacional, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico.
- Omar Paredes
- Laboratorio de Innovación Biodigital, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico; IVF 2.0 Limited, Department of Research and Development, London, UK
- Ángel Álvarez
- Conceivable Life Sciences, Department of Research and Development, Guadalajara, Jalisco, Mexico; Laboratorio de Percepción Computacional, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
- Fátima Acosta-Gómez
- Conceivable Life Sciences, Department of Research and Development, Guadalajara, Jalisco, Mexico; Laboratorio de Percepción Computacional, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
- Estefanía Hernández-Morales
- Conceivable Life Sciences, Department of Research and Development, Guadalajara, Jalisco, Mexico; Laboratorio de Percepción Computacional, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
- Josué González-Sandoval
- Laboratorio de Percepción Computacional, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
- Celina Mendez-Zavala
- Laboratorio de Percepción Computacional, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
- Ernesto Borrayo
- Laboratorio de Innovación Biodigital, Departamento de Bioingeniería Traslacional, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico
- Alejandro Chavez-Badiola
- Conceivable Life Sciences, Department of Research and Development, Guadalajara, Jalisco, Mexico; IVF 2.0 Limited, Department of Research and Development, London, UK; New Hope Fertility Center, Department of Research, Ciudad de México, Mexico
4
Nekoui M, Seyed Bolouri SE, Forouzandeh A, Dehghan M, Zonoobi D, Jaremko JL, Buchanan B, Nagdev A, Kapur J. Enhancing Lung Ultrasound Diagnostics: A Clinical Study on an Artificial Intelligence Tool for the Detection and Quantification of A-Lines and B-Lines. Diagnostics (Basel) 2024; 14:2526. [PMID: 39594192] [PMCID: PMC11593069] [DOI: 10.3390/diagnostics14222526]
Abstract
BACKGROUND/OBJECTIVE: A-lines and B-lines are key ultrasound markers that differentiate normal from abnormal lung conditions. A-lines are horizontal lines usually seen in normally aerated lungs, while B-lines are vertical linear artifacts associated with lung abnormalities such as pulmonary edema, infection, and COVID-19, where a higher number of B-lines indicates more severe pathology. This paper aimed to evaluate the effectiveness of a newly released lung ultrasound AI tool (ExoLungAI) in the detection of A-lines and the detection/quantification of B-lines, to help clinicians assess pulmonary conditions.
METHODS: The algorithm was evaluated on 692 lung ultrasound scans collected from 48 patients (65% male; age 55 ± 12.9 years) following their admission to an intensive care unit (ICU) for COVID-19 symptoms, including respiratory failure, pneumonia, and other complications.
RESULTS: ExoLungAI achieved a sensitivity of 91% and specificity of 81% for A-line detection. For B-line detection, it attained a sensitivity of 84% and specificity of 86%. In quantifying B-lines, the algorithm achieved a weighted kappa score of 0.77 (95% CI 0.74 to 0.80) and an ICC of 0.87 (95% CI 0.85 to 0.89), showing substantial agreement between the ground-truth and predicted B-line counts.
CONCLUSIONS: ExoLungAI demonstrates reliable performance in A-line detection and B-line detection/quantification. This automated tool offers greater objectivity, consistency, and efficiency compared to manual methods. Many healthcare professionals, including intensivists, radiologists, sonographers, medical trainers, and nurse practitioners, can benefit from such a tool, as it assists the diagnostic capabilities of lung ultrasound and delivers rapid responses.
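The metrics reported above are standard and easy to reproduce. As an illustrative sketch (the paper does not state its exact kappa weighting scheme; quadratic weights are assumed here, and all function names are my own), sensitivity/specificity and a weighted kappa can be computed as:

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for binary labels (1 = finding present)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

def quadratic_weighted_kappa(a, b, k):
    """Chance-corrected agreement between two raters on ordinal counts 0..k-1."""
    a, b = np.asarray(a), np.asarray(b)
    obs = np.zeros((k, k))
    for i, j in zip(a, b):          # observed joint distribution of ratings
        obs[i, j] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # expected under independence
    w = (np.arange(k)[:, None] - np.arange(k)[None, :]) ** 2 / (k - 1) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()
```

Here `a` and `b` would be the expert and algorithm B-line counts per scan; identical ratings give a kappa of 1.0, and larger ordinal disagreements are penalized quadratically.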
Affiliation(s)
- Jacob L. Jaremko
- Department of Radiology & Diagnostic Imaging, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Brian Buchanan
- Department of Critical Care Medicine, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Arun Nagdev
- Alameda Health System, Highland General Hospital, University of California San Francisco, San Francisco, CA 94143, USA
- Jeevesh Kapur
- Department of Diagnostic Imaging, National University of Singapore, Singapore 119228, Singapore
5
Pohlen M. Space Radiology: Emerging Nonsonographic Medical Imaging Techniques and the Potential Applications for Human Spaceflight. Wilderness Environ Med 2024:10806032241283380. [PMID: 39360501] [DOI: 10.1177/10806032241283380]
Abstract
Space medicine is a multidisciplinary field that requires the integration of medical imaging techniques and expertise in diagnosing and treating a wide range of acute and chronic conditions to maintain astronaut health. Medical imaging within this domain has been viewed historically through the lens of inflight point-of-care ultrasound and predominantly research uses of cross-sectional imaging before and after flight. However, space radiology, a subfield defined here as the applications of imaging before, during, and after spaceflight, will grow to necessitate the involvement of more advanced imaging techniques and subspecialist expertise as missions increase in length and complexity. While the performance of imaging in spaceflight is limited by equipment mass and volume, power supply, radiation exposure, communication delays, and personnel training, recent developments in nonsonographic modalities have opened the door to their potential for in-mission use. Additionally, improved exam protocols and scanner technology in combination with artificial intelligence algorithms have greatly advanced the utility of possible pre- and postflight studies. This article reviews the past and present of space radiology and discusses possible use cases, knowledge gaps, and future research directions for radiography, fluoroscopy, computed tomography, and magnetic resonance imaging within space medicine, including both the performance of new exam types for new indications and the increased extraction of information from exams already routinely obtained. Through thoughtfully augmenting the use of these tools, medical mission risk may be reduced substantially through preflight screening, inflight diagnosis and management, and inflight and postflight surveillance.
Affiliation(s)
- Michael Pohlen
- Stanford University School of Medicine, Stanford, CA, USA
6
Ponrani MA, Anand M, Alsaadi M, Dutta AK, Fayaz R, Mathew S, Chaurasia MA, Sunila, Bhende M. Brain-computer interfaces inspired spiking neural network model for depression stage identification. J Neurosci Methods 2024; 409:110203. [PMID: 38880343] [DOI: 10.1016/j.jneumeth.2024.110203]
Abstract
BACKGROUND: Depression is a global mental disorder. Traditional diagnostic methods rely mainly on scales and subjective evaluation by doctors, which cannot reliably identify symptoms and even carry a risk of misdiagnosis. Brain-computer-interface-inspired, deep-learning-assisted diagnosis based on physiological signals holds promise for improving traditional methods that lack a physiological basis, and leads next-generation neurotechnologies. However, traditional deep learning methods rely on immense computational power and mostly involve end-to-end network learning. These methods also lack physiological interpretability, limiting their clinical application in assisted diagnosis.
METHODOLOGY: A brain-like learning model for diagnosing depression using electroencephalogram (EEG) signals is proposed. The study collects EEG data using 128-channel electrodes, producing a 128×128 brain adjacency matrix. Given the assumption of undirected connectivity, the upper half of the 128×128 matrix is chosen to minimize the input parameter size, producing 8,128-dimensional data. After eliminating 28 components derived from irrelevant or reference electrodes, a 90×90 matrix is produced, which can be used as the input for a single-channel brain-computer interface image.
RESULTS: At the functional level, a spiking neural network is constructed to classify individuals with depression and healthy individuals, achieving an accuracy exceeding 97.5%.
COMPARISON WITH EXISTING METHODS: Compared to deep convolutional methods, the spiking method reduces energy consumption.
CONCLUSION: At the structural level, complex networks are utilized to establish the spatial topology of brain connections and analyze their graph features, identifying potentially abnormal functional brain connections in individuals with depression.
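The dimensionality stated in the methodology follows directly from the undirected-connectivity assumption: a symmetric 128×128 adjacency matrix carries 128×127/2 = 8,128 unique off-diagonal values. A minimal sketch of that vectorization step (the connectivity matrix here is random, purely for illustration):

```python
import numpy as np

n_channels = 128
rng = np.random.default_rng(0)

# Symmetric functional-connectivity matrix (undirected => A == A.T).
a = rng.random((n_channels, n_channels))
adjacency = (a + a.T) / 2

# Keep only the strictly upper triangle: n*(n-1)/2 = 128*127/2 = 8128 values.
iu = np.triu_indices(n_channels, k=1)
features = adjacency[iu]
```

The resulting 8,128-dimensional vector is what the abstract describes as the reduced input before the further pruning of reference-electrode components.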
Affiliation(s)
- M Angelin Ponrani
- Department of ECE, St. Joseph's College of Engineering, Chennai-119, India.
- Monika Anand
- Computer Science & Engineering, Chandigarh University, Mohali, India
- Mahmood Alsaadi
- Department of Computer Science, Al-Maarif University College, Al Anbar 31001, Iraq
- Ashit Kumar Dutta
- Department of Computer Science and Information Systems, College of Applied Sciences, AlMaarefa University, Ad Diriyah, Riyadh 13713, Saudi Arabia
- Roma Fayaz
- Department of Computer Science, College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
- Mousmi Ajay Chaurasia
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Hyderabad, India
- Sunila
- Guru Jambheshwar University of Science and Technology, Hisar, Haryana, India
- Manisha Bhende
- Dr. D. Y. Patil Vidyapeeth, Pune, Dr. D. Y. Patil School of Science & Technology, Tathawade, Pune, India
7
Ho-Gotshall S, Wilson C, Jacks E, Kashyap R. Handheld Ultrasound Bladder Volume Assessment Compared to Standard Technique. Cureus 2024; 16:e64649. [PMID: 39149631] [PMCID: PMC11326757] [DOI: 10.7759/cureus.64649]
Abstract
Urinary retention is a common complaint encountered in the emergency department (ED). Current tools for the assessment of urinary retention are either bladder volume estimation with a bladder scanner performed by nursing staff or direct visualization and measurement via bedside ultrasound performed by an emergency physician. Newer handheld ultrasound devices such as the Butterfly iQ have been brought to market to make ultrasound more convenient at the bedside. A recently released handheld auto-calculation tool produces a 3D image of the bladder and an instant bladder volume measurement in milliliters. However, there is a paucity of data assessing the validity of the new Butterfly iQ at the bedside. This study sought to compare the diagnostic accuracy and rated user convenience of the nursing bladder scanner, the cart-based ultrasound machine, and the Butterfly iQ auto-bladder volume tool. ED patients were prospectively enrolled and underwent bladder measurements in a randomized, predetermined order with each modality. Measurements were subsequently compared to the gold standard of catheterization. Cart-based ultrasound had the highest agreement with catheterization when compared to the RN scanner and the Butterfly iQ. However, the Butterfly iQ and RN scanner were both rated more convenient measurement modalities than the cart-based ultrasound. The Butterfly iQ serves as a cost-effective alternative to cart-based ultrasound while providing greater general utility compared to bladder scanners.
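For context on what these devices compute: automated bladder volume tools commonly use an ellipsoid approximation from three orthogonal bladder dimensions. This is a generic sketch of that approximation, not the formula of any device in this study; the correction coefficient k ≈ 0.52 (roughly pi/6) is a typical assumed value.

```python
def bladder_volume_ml(width_cm: float, depth_cm: float, height_cm: float,
                      k: float = 0.52) -> float:
    """Ellipsoid estimate of bladder volume: V ≈ k * W * D * H.

    With dimensions in cm, the result is in milliliters; k ≈ 0.52 is a
    commonly assumed correction coefficient (pi/6 for a perfect ellipsoid).
    """
    return k * width_cm * depth_cm * height_cm

volume = bladder_volume_ml(10.0, 8.0, 6.0)  # ≈ 249.6 ml
```

Direct bedside measurement with a cart-based machine follows the same geometry, which is one reason it can agree more closely with catheterized volume than a fully automated scanner.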
Affiliation(s)
- Casey Wilson
- Emergency Medicine, Grand Strand Medical Center, Myrtle Beach, USA
- Errett Jacks
- Emergency Medicine, Grand Strand Medical Center, Myrtle Beach, USA
- Rahul Kashyap
- Medicine, Drexel University College of Medicine, Philadelphia, USA
- Global Clinical Scholars Research Training (GCSRT), Harvard Medical School, Boston, USA
- Research, Global Remote Research Program, St. Paul, USA
- Critical Care Medicine, Mayo Clinic, Rochester, USA
- Research, WellSpan Health, York, USA
8
Hernandez Torres SI, Holland L, Edwards TH, Venn EC, Snider EJ. Deep learning models for interpretation of point of care ultrasound in military working dogs. Front Vet Sci 2024; 11:1374890. [PMID: 38903685] [PMCID: PMC11187302] [DOI: 10.3389/fvets.2024.1374890]
Abstract
INTRODUCTION: Military working dogs (MWDs) are essential for military operations across a wide range of missions. In this pivotal role, MWDs can become casualties requiring specialized veterinary care that may not always be available far forward on the battlefield. Some injuries, such as pneumothorax, hemothorax, or abdominal hemorrhage, can be diagnosed using point-of-care ultrasound (POCUS) such as the Global FAST® exam. This presents a unique opportunity for artificial intelligence (AI) to aid in the interpretation of ultrasound images. In this article, deep learning classification neural networks were developed for POCUS assessment in MWDs.
METHODS: Images were collected in five MWDs under general anesthesia or deep sedation for all scan points in the Global FAST® exam. For representative injuries, a cadaver model was used from which positive and negative injury images were captured. A total of 327 ultrasound clips were captured and split across scan points for training three different AI network architectures: MobileNetV2, DarkNet-19, and ShrapML. Gradient class activation mapping (GradCAM) overlays were generated for representative images to better explain AI predictions.
RESULTS: Performance of the AI models exceeded 82% accuracy for all scan points. The best-performing model was trained with the MobileNetV2 network for the cystocolic scan point, achieving 99.8% accuracy. Across all trained networks, the diaphragmatic hepatorenal scan point had the best overall performance. However, GradCAM overlays showed that the models with the highest accuracy, like MobileNetV2, were not always identifying relevant features. Conversely, the GradCAM heatmaps for ShrapML showed general agreement with the regions most indicative of fluid accumulation.
DISCUSSION: Overall, the AI models developed can automate POCUS predictions in MWDs. Preliminarily, ShrapML had the strongest performance and prediction rate, paired with accurate tracking of fluid-accumulation sites, making it the most suitable option for eventual real-time deployment with ultrasound systems. Further integration of this technology with imaging systems will expand the use of POCUS-based triage of MWDs.
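The GradCAM overlays mentioned above reduce to a simple channel-weighting step once a framework has produced the convolutional feature maps and the gradients of the class score with respect to them. This is a generic NumPy sketch of that combination step only (extraction of the maps and gradients from the trained network is assumed to happen elsewhere):

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from conv feature maps of shape (C, H, W) and the
    gradients of the class score w.r.t. those maps (same shape)."""
    alphas = gradients.mean(axis=(1, 2))              # one importance weight per channel
    cam = np.tensordot(alphas, feature_maps, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                              # normalize heatmap to [0, 1]
    return cam
```

Upsampled to the input resolution and overlaid on the ultrasound frame, such a heatmap is what reveals whether a model like ShrapML is attending to fluid-accumulation regions or to spurious features.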
Affiliation(s)
- Sofia I. Hernandez Torres
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Lawrence Holland
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Thomas H. Edwards
- Hemorrhage Control and Vascular Dysfunction Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Texas A&M University, School of Veterinary Medicine, College Station, TX, United States
- Emilee C. Venn
- Veterinary Support Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Eric J. Snider
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
9
Chua CYX, Jimenez M, Mozneb M, Traverso G, Lugo R, Sharma A, Svendsen CN, Wagner WR, Langer R, Grattoni A. Advanced material technologies for space and terrestrial medicine. Nat Rev Mater 2024; 9:808-821. [DOI: 10.1038/s41578-024-00691-0]
10
Scharf JL, Dracopoulos C, Gembicki M, Rody A, Welp A, Weichert J. How automated techniques ease functional assessment of the fetal heart: Applicability of two-dimensional speckle-tracking echocardiography for comprehensive analysis of global and segmental cardiac deformation using fetalHQ®. Echocardiography 2024; 41:e15833. [PMID: 38873982] [DOI: 10.1111/echo.15833]
Abstract
BACKGROUND: Prenatal echocardiographic assessment of fetal cardiac function has become increasingly important. Fetal two-dimensional speckle-tracking echocardiography (2D-STE) allows the determination of global and segmental functional cardiac parameters. Prenatal diagnostics relies increasingly on artificial intelligence, whose algorithms transform the way clinicians use ultrasound in their daily workflow. The purpose of this study was to assess whether less experienced operators can handle, and might benefit from, an automated 2D-STE tool in clinical routine.
METHODS: A total of 136 unselected, normal, singleton, second- and third-trimester fetuses with normofrequent heart rates were examined by targeted ultrasound. 2D-STE was performed semiautomatically and separately by a beginner and an expert using a GE Voluson E10 (FetalHQ®, GE Healthcare, Chicago, IL). Several fetal cardiac parameters were calculated (end-diastolic diameter [ED], sphericity index [SI], global longitudinal strain [EndoGLS], fractional shortening [FS]) and assigned to gestational age (GA). Bland-Altman plots were used to test agreement between the two operators.
RESULTS: The mean maternal age was 33 years, and the mean maternal body mass index prior to pregnancy was 24.78 kg/m2. GA ranged from 16.4 to 32.0 weeks (average 22.9 weeks). The mean EndoGLS value for the beginner was -18.57% ± 6.59 percentage points (pp) for the right and -19.58% ± 5.63 pp for the left ventricle; for the expert, -14.33% ± 4.88 pp and -16.37% ± 5.42 pp, respectively. With increasing GA, right ventricular EndoGLS decreased slightly while the left ventricular value was almost constant. For EndoGLS, the Bland-Altman bias was -4.24 pp ± 8.06 pp for the right and -3.21 pp ± 7.11 pp for the left ventricle. The Bland-Altman bias of the ED in both ventricles across all analyzed segments ranged from -0.49 mm ± 1.54 mm to -0.10 mm ± 1.28 mm; for FS, from -0.33 pp ± 11.82 pp to 3.91 pp ± 15.56 pp; and for SI, from -0.38 ± 0.68 to -0.15 ± 0.45.
CONCLUSIONS: Between the two operators, our data indicated that 2D-STE analysis showed excellent agreement for cardiac morphometry parameters (ED and SI) and good agreement for cardiac function parameters (EndoGLS and FS). Due to its complexity, the application of fetal 2D-STE remains the domain of scientific-academic perinatal ultrasound and should preferably be placed in the hands of skilled operators. At present, from our perspective, an implementation into clinical practice "on the fly" cannot be recommended.
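The Bland-Altman statistics quoted above (bias ± spread of the paired differences) are straightforward to compute. A minimal sketch, with purely illustrative measurement values rather than the study's data:

```python
import numpy as np

def bland_altman(x, y):
    """Bias (mean paired difference) and 95% limits of agreement between two raters."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x - y
    bias = diff.mean()
    sd = diff.std(ddof=1)           # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative EndoGLS readings (%) for the same fetuses by two operators.
beginner = np.array([-18.0, -19.5, -17.2, -20.1])
expert = np.array([-14.5, -16.0, -15.0, -16.8])
bias, (loa_low, loa_high) = bland_altman(beginner, expert)
```

A bias near zero with narrow limits of agreement indicates interchangeable measurements; a systematic offset, as reported here for EndoGLS, shows up as a nonzero bias.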
Affiliation(s)
- Jann Lennard Scharf
- Department of Gynecology and Obstetrics, Division of Prenatal Medicine, University Hospital of Schleswig-Holstein, Lübeck, Germany
- Christoph Dracopoulos
- Department of Gynecology and Obstetrics, Division of Prenatal Medicine, University Hospital of Schleswig-Holstein, Lübeck, Germany
- Michael Gembicki
- Department of Gynecology and Obstetrics, Division of Prenatal Medicine, University Hospital of Schleswig-Holstein, Lübeck, Germany
- Achim Rody
- Department of Gynecology and Obstetrics, Division of Prenatal Medicine, University Hospital of Schleswig-Holstein, Lübeck, Germany
- Amrei Welp
- Department of Gynecology and Obstetrics, Division of Prenatal Medicine, University Hospital of Schleswig-Holstein, Lübeck, Germany
- Jan Weichert
- Department of Gynecology and Obstetrics, Division of Prenatal Medicine, University Hospital of Schleswig-Holstein, Lübeck, Germany
11
Serrano RA, Smeltz AM. The Promise of Artificial Intelligence-Assisted Point-of-Care Ultrasonography in Perioperative Care. J Cardiothorac Vasc Anesth 2024; 38:1244-1250. [PMID: 38402063] [DOI: 10.1053/j.jvca.2024.01.034]
Abstract
The role of point-of-care ultrasonography in the perioperative setting has expanded rapidly over recent years. Revolutionizing this technology further is integrating artificial intelligence to assist clinicians in optimizing images, identifying anomalies, performing automated measurements and calculations, and facilitating diagnoses. Artificial intelligence can increase point-of-care ultrasonography efficiency and accuracy, making it an even more valuable point-of-care tool. Given this topic's importance and ever-changing landscape, this review discusses the latest trends to serve as an introduction and update in this area.
Affiliation(s)
- Alan M Smeltz
- University of North Carolina School of Medicine, Chapel Hill, NC
12
Hernandez Torres SI, Ruiz A, Holland L, Ortiz R, Snider EJ. Evaluation of Deep Learning Model Architectures for Point-of-Care Ultrasound Diagnostics. Bioengineering (Basel) 2024; 11:392. [PMID: 38671813] [PMCID: PMC11048259] [DOI: 10.3390/bioengineering11040392]
Abstract
Point-of-care ultrasound imaging is a critical tool for patient triage during trauma, both for diagnosing injuries and for prioritizing limited medical evacuation resources. Specifically, an eFAST exam evaluates whether there is free fluid in the chest or abdomen, but this is only possible if ultrasound scans can be accurately interpreted, a challenge in the pre-hospital setting. In this effort, we evaluated the use of artificial intelligence (AI) models for eFAST image interpretation. Widely used deep learning model architectures were evaluated, as well as Bayesian models, optimized for six different diagnostic tasks: pneumothorax (i) B- or (ii) M-mode, hemothorax (iii) B- or (iv) M-mode, (v) pelvic or bladder abdominal hemorrhage, and (vi) right upper quadrant abdominal hemorrhage. Models were trained using images captured in 27 swine. Using a leave-one-subject-out training approach, the MobileNetV2 and DarkNet53 models surpassed 85% accuracy for each M-mode scan site. The B-mode models performed worse, with accuracies between 68% and 74%, except for the pelvic hemorrhage model, which only reached 62% accuracy across all model architectures. These results highlight which eFAST scan sites can be readily automated with image interpretation models, while other scan sites, such as the pelvic/bladder hemorrhage site, will require more robust model development or data augmentation to improve performance. With these additional improvements, the skill threshold for ultrasound-based triage can be reduced, thus expanding its utility in the pre-hospital setting.
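The leave-one-subject-out protocol used above trains on scans from all but one animal and tests on the held-out one, cycling through every subject so no subject's images appear in both splits. A minimal sketch of such a splitter (illustrative only, not the authors' code; the `(subject_id, image)` pairing is an assumed data layout):

```python
from collections import defaultdict

def leave_one_subject_out(samples):
    """Yield (held_out_id, train, test) splits where each test set
    holds out all samples from exactly one subject.

    `samples` is a list of (subject_id, image) pairs — a hypothetical
    structure standing in for the per-swine scan data.
    """
    by_subject = defaultdict(list)
    for subject_id, image in samples:
        by_subject[subject_id].append((subject_id, image))
    for held_out in sorted(by_subject):
        test = by_subject[held_out]
        # training set = every sample from every other subject
        train = [s for sid in sorted(by_subject) if sid != held_out
                 for s in by_subject[sid]]
        yield held_out, train, test
```

Grouping by subject rather than by image is the point of the design: it prevents near-duplicate frames from the same animal leaking between train and test.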
Affiliation(s)
- Eric J. Snider
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, Joint Base San Antonio, Fort Sam Houston, San Antonio, TX 78234, USA; (S.I.H.T.); (A.R.); (L.H.); (R.O.)
13
Bekedam NM, Idzerda LHW, van Alphen MJA, van Veen RLP, Karssemakers LHE, Karakullukcu MB, Smeele LE. Implementing a deep learning model for automatic tongue tumour segmentation in ex-vivo 3-dimensional ultrasound volumes. Br J Oral Maxillofac Surg 2024; 62:284-289. [PMID: 38402068] [DOI: 10.1016/j.bjoms.2023.12.017]
Abstract
Three-dimensional (3D) ultrasound can assess the margins of resected tongue carcinoma during surgery. Manual segmentation (MS) is time-consuming, labour-intensive, and subject to operator variability. This study investigates the use of a 3D deep learning model for fast intraoperative segmentation of tongue carcinoma in 3D ultrasound volumes, as well as the clinical effect of automatic segmentation. A 3D No New U-Net (nnUNet) was trained on 113 manually annotated ultrasound volumes of resected tongue carcinoma. The model was implemented on a mobile workstation and clinically validated on 16 prospectively included tongue carcinoma patients. Different prediction settings were investigated. Automatic segmentations with multiple islands were adjusted by selecting the best-representing island. The final margin status (FMS) based on automatic, semi-automatic, and manual segmentation was computed and compared with the histopathological margin. The standard 3D nnUNet yielded the best-performing automatic segmentation, with a mean (SD) Dice volumetric score of 0.65 (0.30), Dice surface score of 0.73 (0.26), average surface distance of 0.44 (0.61) mm, Hausdorff distance of 6.65 (8.84) mm, and prediction time of 8 seconds. FMS based on automatic segmentation had a low correlation with histopathology (r = 0.12, p = 0.67); MS showed a moderate but non-significant correlation with histopathology (r = 0.4, p = 0.12, n = 16). Implementing the 3D nnUNet yielded fast, automatic segmentation of tongue carcinoma in 3D ultrasound volumes. The correlation between FMS and histopathology obtained from these segmentations was lower than the moderate correlation between MS and histopathology.
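The Dice volumetric score reported above is the standard overlap metric for segmentation: twice the size of the intersection divided by the summed sizes of the two masks. A minimal sketch (not the authors' implementation), representing binary masks as sets of voxel coordinates:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    Masks are given as sets of voxel coordinates; 1.0 means
    perfect overlap, 0.0 means no overlap at all.
    """
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        # convention: two empty masks agree perfectly
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))
```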
Affiliation(s)
- N M Bekedam
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands; Academic Centre of Dentistry Amsterdam, Vrije Universiteit, Gustav Mahlerlaan 3004, 1081 LA Amsterdam, The Netherlands.
- L H W Idzerda
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- M J A van Alphen
- Department of Head and Neck Surgery and Oncology, Verwelius 3D Lab, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- R L P van Veen
- Department of Head and Neck Surgery and Oncology, Verwelius 3D Lab, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- L H E Karssemakers
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- M B Karakullukcu
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- L E Smeele
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
14
Taksoee-Vester CA, Mikolaj K, Bashir Z, Christensen AN, Petersen OB, Sundberg K, Feragen A, Svendsen MBS, Nielsen M, Tolsgaard MG. AI supported fetal echocardiography with quality assessment. Sci Rep 2024; 14:5809. [PMID: 38461322] [PMCID: PMC10925034] [DOI: 10.1038/s41598-024-56476-6]
Abstract
This study aimed to develop a deep learning model to assess the quality of fetal echocardiography and to perform prospective clinical validation. The model was trained on data from the 18-22-week anomaly scan conducted in seven hospitals from 2008 to 2018. Prospective validation involved 100 patients from two hospitals. A total of 5363 images from 2551 pregnancies were used for training and validation. The model's segmentation accuracy depended on image quality, measured by a quality score (QS). It achieved an overall average accuracy of 0.91 (SD 0.09) across the test set, with images of above-average QS scoring 0.97 (SD 0.03). During prospective validation of 192 images, clinicians rated 44.8% (SD 9.8) of images as equal in quality, 18.69% (SD 5.7) favoring auto-captured images, and 36.51% (SD 9.0) preferring manually captured ones. Images with above-average QS showed better agreement with fetal medicine experts on segmentations (p < 0.001) and QS (p < 0.001). Auto-capture saved additional planes beyond protocol requirements, resulting in more comprehensive echocardiographies. Low QS had an adverse effect on both model performance and clinicians' agreement with model feedback. The findings highlight the importance of developing and evaluating AI models on 'noisy' real-life data rather than pursuing the highest accuracy possible with retrospective academic-grade data.
Affiliation(s)
- Caroline A Taksoee-Vester
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark.
- Center of Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Blegdamsvej 9, Dept. 4071, 2100, Copenhagen, Denmark.
- Copenhagen Academy of Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark.
- Kamil Mikolaj
- DTU Compute, Technical University of Denmark (DTU), Lyngby, Denmark
- Zahra Bashir
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Copenhagen Academy of Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark
- Center for Fetal Medicine, Department of Obstetrics, Slagelse Hospital, Slagelse, Denmark
- Olav B Petersen
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Center of Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Blegdamsvej 9, Dept. 4071, 2100, Copenhagen, Denmark
- Karin Sundberg
- Center of Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Blegdamsvej 9, Dept. 4071, 2100, Copenhagen, Denmark
- Aasa Feragen
- DTU Compute, Technical University of Denmark (DTU), Lyngby, Denmark
- Morten B S Svendsen
- Copenhagen Academy of Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark
- Mads Nielsen
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Martin G Tolsgaard
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Center of Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Blegdamsvej 9, Dept. 4071, 2100, Copenhagen, Denmark
- Copenhagen Academy of Medical Education and Simulation (CAMES), Rigshospitalet, Copenhagen, Denmark
15
Lokaj B, Pugliese MT, Kinkel K, Lovis C, Schmid J. Barriers and facilitators of artificial intelligence conception and implementation for breast imaging diagnosis in clinical practice: a scoping review. Eur Radiol 2024; 34:2096-2109. [PMID: 37658895] [PMCID: PMC10873444] [DOI: 10.1007/s00330-023-10181-6]
Abstract
OBJECTIVE Although artificial intelligence (AI) has demonstrated promise in enhancing breast cancer diagnosis, the implementation of AI algorithms in clinical practice encounters various barriers. This scoping review aims to identify these barriers and facilitators in order to highlight key considerations for developing and implementing AI solutions in breast cancer imaging. METHOD A literature search was conducted from 2012 to 2022 in six databases (PubMed, Web of Science, CINAHL, Embase, IEEE, and arXiv). Articles were included if they described barriers and/or facilitators in the conception or implementation of AI in breast clinical imaging. We excluded research focusing solely on performance, or using data not acquired in a clinical radiology setup and not involving real patients. RESULTS A total of 107 articles were included. We identified six major barriers, related to data (B1), black box and trust (B2), algorithms and conception (B3), evaluation and validation (B4), legal, ethical, and economic issues (B5), and education (B6), and five major facilitators, covering data (F1), clinical impact (F2), algorithms and conception (F3), evaluation and validation (F4), and education (F5). CONCLUSION This scoping review highlights the need to carefully design, deploy, and evaluate AI solutions in clinical practice, involving all stakeholders, to yield improvement in healthcare. CLINICAL RELEVANCE STATEMENT The identification of barriers and facilitators, with suggested solutions, can guide and inform future research and stakeholders to improve the design and implementation of AI for breast cancer detection in clinical practice. KEY POINTS • Six major identified barriers were related to data; black-box and trust; algorithms and conception; evaluation and validation; legal, ethical, and economic issues; and education. • Five major identified facilitators were related to data, clinical impact, algorithms and conception, evaluation and validation, and education. • Coordinated involvement of all stakeholders is required to improve breast cancer diagnosis with AI.
Affiliation(s)
- Belinda Lokaj
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland.
- Faculty of Medicine, University of Geneva, Geneva, Switzerland.
- Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland.
- Marie-Thérèse Pugliese
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
- Karen Kinkel
- Réseau Hospitalier Neuchâtelois, Neuchâtel, Switzerland
- Christian Lovis
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Jérôme Schmid
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
16
Amezcua KL, Collier J, Lopez M, Hernandez Torres SI, Ruiz A, Gathright R, Snider EJ. Design and testing of ultrasound probe adapters for a robotic imaging platform. Sci Rep 2024; 14:5102. [PMID: 38429442] [PMCID: PMC10907673] [DOI: 10.1038/s41598-024-55480-0]
Abstract
Medical imaging-based triage is a critical tool for emergency medicine in both civilian and military settings. Ultrasound imaging can be used to rapidly identify free fluid in abdominal and thoracic cavities, which could necessitate immediate surgical intervention. However, proper ultrasound image capture requires a skilled ultrasonography technician who is likely unavailable at the point of injury, where resources are limited. Instead, robotics and computer vision technology can simplify image acquisition. As a first step towards this larger goal, here we focus on the development of prototypes for ultrasound probe securement using a robotics platform. The ability of four probe adapter technologies to capture images precisely at anatomical locations, repeatedly, and with different ultrasound transducer types was evaluated across more than five scoring criteria. Testing demonstrated that two of the adapters outperformed the traditional robot gripper and manual image capture, with a compact, rotating design compatible with wireless imaging technology being most suitable for use at the point of injury. Next steps will integrate the robotic platform with computer vision and deep learning image interpretation models to automate image capture and diagnosis. This will lower the skill threshold needed for medical imaging-based triage, enabling this procedure to be available at or near the point of injury.
Affiliation(s)
- Krysta-Lynn Amezcua
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA
- James Collier
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA
- Michael Lopez
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA
- Sofia I Hernandez Torres
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA
- Austin Ruiz
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA
- Rachel Gathright
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA
- Eric J Snider
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA.
17
Grignaffini F, Barbuto F, Troiano M, Piazzo L, Simeoni P, Mangini F, De Stefanis C, Onetti Muda A, Frezza F, Alisi A. The Use of Artificial Intelligence in the Liver Histopathology Field: A Systematic Review. Diagnostics (Basel) 2024; 14:388. [PMID: 38396427] [PMCID: PMC10887838] [DOI: 10.3390/diagnostics14040388]
Abstract
Digital pathology (DP) has begun to play a key role in the evaluation of liver specimens. Recent studies have shown that a workflow that combines DP and artificial intelligence (AI) applied to histopathology has potential value in supporting the diagnosis, treatment evaluation, and prognosis prediction of liver diseases. Here, we provide a systematic review of the use of this workflow in the field of hepatology. Based on the PRISMA 2020 criteria, a search of the PubMed, SCOPUS, and Embase electronic databases was conducted, applying inclusion/exclusion filters. The articles were evaluated by two independent reviewers, who extracted the specifications and objectives of each study, the AI tools used, and the results obtained. From the 266 initial records identified, 25 eligible studies were selected, mainly conducted on human liver tissues. Most of the studies were performed using whole-slide imaging systems for imaging acquisition and applying different machine learning and deep learning methods for image pre-processing, segmentation, feature extractions, and classification. Of note, most of the studies selected demonstrated good performance as classifiers of liver histological images compared to pathologist annotations. Promising results to date bode well for the not-too-distant inclusion of these techniques in clinical practice.
Affiliation(s)
- Flavia Grignaffini
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy; (F.G.); (F.B.); (L.P.); (F.M.); (F.F.)
- Francesco Barbuto
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy; (F.G.); (F.B.); (L.P.); (F.M.); (F.F.)
- Maurizio Troiano
- Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (M.T.); (C.D.S.)
- Lorenzo Piazzo
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy; (F.G.); (F.B.); (L.P.); (F.M.); (F.F.)
- Patrizio Simeoni
- National Transport Authority (NTA), D02 WT20 Dublin, Ireland
- Faculty of Lifelong Learning, South East Technological University (SETU), R93 V960 Carlow, Ireland
- Fabio Mangini
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy; (F.G.); (F.B.); (L.P.); (F.M.); (F.F.)
- Cristiano De Stefanis
- Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (M.T.); (C.D.S.)
- Fabrizio Frezza
- Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza”, University of Rome, 00184 Rome, Italy; (F.G.); (F.B.); (L.P.); (F.M.); (F.F.)
- Anna Alisi
- Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (M.T.); (C.D.S.)
18
Dubey G, Srivastava S, Jayswal AK, Saraswat M, Singh P, Memoria M. Fetal Ultrasound Segmentation and Measurements Using Appearance and Shape Prior Based Density Regression with Deep CNN and Robust Ellipse Fitting. Journal of Imaging Informatics in Medicine 2024; 37:247-267. [PMID: 38343234] [DOI: 10.1007/s10278-023-00908-8]
Abstract
Accurately segmenting the structure of the fetal head (FH) and performing biometry measurements, including head circumference (HC) estimation, is a vital requirement for assessing abnormal fetal growth during pregnancy, a task performed by experienced radiologists using ultrasound (US) images. However, accurate segmentation and measurement are challenging due to image artifacts, incomplete ellipse fitting, and fluctuations in FH dimensions across trimesters. The task is also highly time-consuming, and the absence of specialized features leads to low segmentation accuracy. To address these challenges, we propose an automatic density regression approach that incorporates appearance and shape priors into a deep learning-based network model (DR-ASPnet) with robust ellipse fitting using fetal US images. Initially, we employed multiple pre-processing steps to remove unwanted distortions and variable fluctuations and to obtain a clear view of significant features in the US images. Augmentation operations were then applied to increase the diversity of the dataset. Next, we proposed the hierarchical density regression deep convolutional neural network (HDR-DCNN) model, which involves three network models to determine the complex location of the FH for accurate segmentation during training and testing. We then applied post-processing operations using contrast enhancement filtering with a morphological operation model to smooth the region and remove unnecessary artifacts from the segmentation results. After post-processing, we applied the smoothed segmentation result to the robust ellipse fitting-based least squares (REFLS) method for HC estimation. In experiments, the DR-ASPnet model obtained a 98.86% Dice similarity coefficient (DSC) for segmentation accuracy and a 1.67 mm absolute distance (AD) for measurement accuracy, compared to other state-of-the-art methods. Finally, we achieved a 0.99 correlation coefficient (CC) between the measured and predicted HC values on the HC18 dataset.
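Once an ellipse has been fitted to the fetal head, HC is simply the ellipse's perimeter. A common closed-form choice is Ramanujan's first approximation; the sketch below assumes that formula and semi-axes in millimetres (the paper's exact perimeter computation may differ):

```python
import math

def head_circumference(a_mm, b_mm):
    """Approximate head circumference as an ellipse perimeter.

    a_mm, b_mm: semi-major and semi-minor axes in millimetres.
    Uses Ramanujan's first approximation, which is accurate to a
    few parts per million for moderate eccentricities.
    """
    h = ((a_mm - b_mm) / (a_mm + b_mm)) ** 2
    return math.pi * (a_mm + b_mm) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
```

For a circle (a = b) the formula collapses exactly to 2πr, which makes a convenient sanity check.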
Affiliation(s)
- Gaurav Dubey
- Department of Computer Science, KIET Group of Institutions, Delhi-NCR, Ghaziabad, U.P, India
- Mala Saraswat
- Department of Computer Science, Bennett University, Greater Noida, India
- Pooja Singh
- Shiv Nadar University, Greater Noida, Uttar Pradesh, India
- Minakshi Memoria
- CSE Department, UIT, Uttaranchal University, Dehradun, Uttarakhand, India
19
Xie Y, Huang Y, Stevenson HCS, Yin L, Zhang K, Islam ZH, Marcum WA, Johnston C, Hoyt N, Kent EW, Wang B, Hossack JA. A Quantitative Method for the Evaluation of Deep Vein Thrombosis in a Murine Model Using Three-Dimensional Ultrasound Imaging. Biomedicines 2024; 12:200. [PMID: 38255304] [PMCID: PMC11154521] [DOI: 10.3390/biomedicines12010200]
Abstract
Deep vein thrombosis (DVT) is a life-threatening condition that can lead to sequelae such as pulmonary embolism (PE) or post-thrombotic syndrome (PTS). Murine models of DVT are frequently used in early-stage disease research and to assess potential therapies, creating the need for reliable and straightforward quantification of blood clots. In this paper, we present a novel high-frequency 3D ultrasound approach for the quantitative evaluation of DVT volume in an in vitro model and an in vivo murine model. The proposed method involves a high-resolution ultrasound acquisition system and semiautomatic segmentation of the clot. The measured 3D blood clot volumes correlated with in vitro blood clot weights with an R2 of 0.89. Additionally, the method was confirmed in the in vivo mouse model with an R2 of 0.91 against cylindrical volumes from macroscopic measurement. We anticipate that the proposed method will be useful in pharmacological or therapeutic studies in murine models of DVT.
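The R2 values above quantify how well a least-squares line explains the relationship between measured clot volumes and reference weights. A minimal pure-Python sketch of the coefficient of determination for a simple linear fit (illustrative, not the authors' analysis code):

```python
def r_squared(x, y):
    """Coefficient of determination (R^2) for a least-squares
    line fit of y on x; 1.0 means a perfect linear relationship."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    # for simple linear regression, R^2 equals the squared
    # Pearson correlation between x and y
    return (sxy * sxy) / (sxx * syy)
```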
Affiliation(s)
- Yanjun Xie
- Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22908, USA; (Y.X.); (Y.H.); (H.C.S.S.)
- Yi Huang
- Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22908, USA; (Y.X.); (Y.H.); (H.C.S.S.)
- Hugo C. S. Stevenson
- Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22908, USA; (Y.X.); (Y.H.); (H.C.S.S.)
- Li Yin
- Department of Surgery, School of Medicine, University of Virginia, Charlottesville, VA 22908, USA; (L.Y.); (K.Z.); (Z.H.I.); (W.A.M.); (C.J.); (N.H.); (E.W.K.); (B.W.)
- Kaijie Zhang
- Department of Surgery, School of Medicine, University of Virginia, Charlottesville, VA 22908, USA; (L.Y.); (K.Z.); (Z.H.I.); (W.A.M.); (C.J.); (N.H.); (E.W.K.); (B.W.)
- Zain Husain Islam
- Department of Surgery, School of Medicine, University of Virginia, Charlottesville, VA 22908, USA; (L.Y.); (K.Z.); (Z.H.I.); (W.A.M.); (C.J.); (N.H.); (E.W.K.); (B.W.)
- William Aaron Marcum
- Department of Surgery, School of Medicine, University of Virginia, Charlottesville, VA 22908, USA; (L.Y.); (K.Z.); (Z.H.I.); (W.A.M.); (C.J.); (N.H.); (E.W.K.); (B.W.)
- Campbell Johnston
- Department of Surgery, School of Medicine, University of Virginia, Charlottesville, VA 22908, USA; (L.Y.); (K.Z.); (Z.H.I.); (W.A.M.); (C.J.); (N.H.); (E.W.K.); (B.W.)
- Nicholas Hoyt
- Department of Surgery, School of Medicine, University of Virginia, Charlottesville, VA 22908, USA; (L.Y.); (K.Z.); (Z.H.I.); (W.A.M.); (C.J.); (N.H.); (E.W.K.); (B.W.)
- Eric William Kent
- Department of Surgery, School of Medicine, University of Virginia, Charlottesville, VA 22908, USA; (L.Y.); (K.Z.); (Z.H.I.); (W.A.M.); (C.J.); (N.H.); (E.W.K.); (B.W.)
- Bowen Wang
- Department of Surgery, School of Medicine, University of Virginia, Charlottesville, VA 22908, USA; (L.Y.); (K.Z.); (Z.H.I.); (W.A.M.); (C.J.); (N.H.); (E.W.K.); (B.W.)
- John A. Hossack
- Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22908, USA; (Y.X.); (Y.H.); (H.C.S.S.)
20
Zhang L, Xu R, Zhao J. Learning technology for detection and grading of cancer tissue using tumour ultrasound images. Journal of X-Ray Science and Technology 2024; 32:157-171. [PMID: 37424493] [DOI: 10.3233/xst-230085]
Abstract
BACKGROUND Early diagnosis of breast cancer is crucial for effective therapy. Many medical imaging modalities, including MRI, CT, and ultrasound, are used to diagnose cancer. OBJECTIVE This study investigates the feasibility of applying transfer learning techniques to train convolutional neural networks (CNNs) to automatically diagnose breast cancer from ultrasound images. METHODS Transfer learning techniques were used to train CNNs to recognise breast cancer in ultrasound images; each model's training and validation accuracies were assessed on an ultrasound image dataset used for both training and testing. RESULTS MobileNet achieved the greatest accuracy during training and DenseNet121 during validation, indicating that transfer learning algorithms can detect breast cancer in ultrasound images. CONCLUSIONS Based on these results, transfer learning models may be useful for automated breast cancer diagnosis in ultrasound images. However, only a trained medical professional should diagnose cancer, and computational approaches should only be used to support rapid decision-making.
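Transfer learning of the kind described keeps a pretrained backbone frozen and fits only a small classification head on its extracted features. The sketch below illustrates that recipe with a logistic-regression head trained by stochastic gradient descent on caller-supplied features; this is a simplified stand-in (the study fine-tuned full CNNs such as MobileNet and DenseNet121), and the feature layout is an assumption:

```python
import math

def train_linear_probe(features, labels, lr=0.5, epochs=200):
    """Fit a logistic-regression head on frozen features.

    The pretrained backbone is assumed to have already mapped each
    image to a fixed feature vector; only this small head is trained,
    which is the core idea of the transfer-learning recipe above.
    """
    dim = len(features[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Classify a feature vector with the trained head."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

Because only the head's parameters are updated, this approach needs far less labelled data than training a CNN from scratch, which is why it suits modest medical imaging datasets.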
Affiliation(s)
- Liyan Zhang
- Department of Ultrasound, Sunshine Union Hospital, Weifang, China
- Ruiyan Xu
- College of Health, Binzhou Polytechnical College, Binzhou, China
- Jingde Zhao
- Department of Imaging, Qingdao Hospital of Traditional Chinese Medicine (Qingdao HaiCi Hospital), Qingdao, China
21
Rauf F, Khan MA, Bashir AK, Jabeen K, Hamza A, Alzahrani AI, Alalwan N, Masood A. Automated deep bottleneck residual 82-layered architecture with Bayesian optimization for the classification of brain and common maternal fetal ultrasound planes. Front Med (Lausanne) 2023; 10:1330218. [PMID: 38188327] [PMCID: PMC10769562] [DOI: 10.3389/fmed.2023.1330218]
Abstract
Despite a worldwide decline in maternal mortality over the past two decades, a significant gap persists between low- and high-income countries, with 94% of maternal mortality concentrated in low- and middle-income nations. Ultrasound serves as a prevalent diagnostic tool in prenatal care for monitoring fetal growth and development. Nevertheless, acquiring standard fetal ultrasound planes with accurate anatomical structures proves challenging and time-intensive, even for skilled sonographers. An automated computer-aided diagnostic (CAD) system is therefore required for identifying common maternal fetal planes from ultrasound images. A new residual bottleneck mechanism-based deep learning architecture, 82 layers deep, has been proposed. The architecture adds three residual blocks, each including two highway paths and one skip connection, with a 3 × 3 convolutional layer placed before each residual block. In the training process, several hyperparameters were initialized using Bayesian optimization (BO) rather than manual initialization. Deep features are extracted from the average pooling layer and used for classification. Because classification increased the computational time, we proposed an improved search-based moth flame optimization algorithm for optimal feature selection. The data are then classified using neural network classifiers based on the selected features. The experimental phase involved the analysis of ultrasound images, specifically focusing on fetal brain and common maternal fetal images. The proposed method achieved 78.5% and 79.4% accuracy for brain fetal planes and common maternal fetal planes, respectively. Comparison with several pre-trained neural networks and state-of-the-art (SOTA) optimization algorithms shows improved accuracy.
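A residual block of the kind described adds the block's input back onto its transformed output through a skip connection, so the block only has to learn a residual F(x) rather than a full mapping. A simplified dense-layer sketch of that wiring (the paper uses convolutional layers and highway paths; this stand-in only illustrates the skip connection):

```python
def relu(v):
    """Element-wise rectified linear unit."""
    return [max(0.0, x) for x in v]

def linear(weights, bias, v):
    """Dense layer: weights is a list of rows, bias a list of offsets."""
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def residual_block(v, w1, b1, w2, b2):
    """Compute relu(F(v) + v), where F is a two-layer transform
    standing in for the block's convolution pair. The identity
    skip connection means that with F == 0 the block reduces to
    relu(v), which makes very deep stacks easier to train."""
    h = relu(linear(w1, b1, v))
    f = linear(w2, b2, h)
    return relu([fi + vi for fi, vi in zip(f, v)])
```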
Collapse
Affiliation(s)
- Fatima Rauf
- Department of Computer Science, HITEC University, Taxila, Pakistan
| | | | - Ali Kashif Bashir
- Department of Computing and Mathematics, Manchester Metropolitan University, Manchester, United Kingdom
| | - Kiran Jabeen
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Ameer Hamza
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Nasser Alalwan
- Computer Science Department, Community College, King Saud University, Riyadh, Saudi Arabia
- Anum Masood
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Institute of Neurosciences and Medicine (INM), Forschungszentrum Jülich, Jülich, Germany
22
Shi J, Bendig D, Vollmar HC, Rasche P. Mapping the Bibliometrics Landscape of AI in Medicine: Methodological Study. J Med Internet Res 2023; 25:e45815. [PMID: 38064255 PMCID: PMC10746970 DOI: 10.2196/45815] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Revised: 08/16/2023] [Accepted: 09/30/2023] [Indexed: 12/18/2023] Open
Abstract
BACKGROUND Artificial intelligence (AI), conceived in the 1950s, has permeated numerous industries, intensifying in tandem with advancements in computing power. Despite the widespread adoption of AI, its integration into medicine trails other sectors. However, medical AI research has experienced substantial growth, attracting considerable attention from researchers and practitioners. OBJECTIVE In the absence of an existing framework, this study aims to outline the current landscape of medical AI research and provide insights into its future developments by examining all AI-related studies within PubMed over the past 2 decades. We also propose potential data acquisition and analysis methods, developed using Python (version 3.11) and to be executed in Spyder IDE (version 5.4.3), for future analogous research. METHODS Our dual-pronged approach involved (1) retrieving publication metadata related to AI from PubMed (spanning 2000-2022) via Python, including titles, abstracts, authors, journals, countries, and publication years, followed by keyword frequency analysis, and (2) classifying relevant topics using latent Dirichlet allocation, an unsupervised machine learning approach, and defining the research scope of AI in medicine. In the absence of a universal medical AI taxonomy, we used an AI dictionary based on the European Commission Joint Research Centre AI Watch report, which emphasizes 8 domains: reasoning, planning, learning, perception, communication, integration and interaction, service, and AI ethics and philosophy. RESULTS A comprehensive analysis of 307,701 AI-related publications from PubMed between 2000 and 2022 highlighted a 36-fold increase in publication volume. The United States emerged as a clear frontrunner, producing 68,502 of these articles. Despite its substantial contribution in volume, China lagged in citation impact. Among the specific AI domains categorized in the Joint Research Centre AI Watch report, the learning domain emerged as dominant. Our classification analysis traced the nuanced research trajectories across each domain, revealing the multifaceted and evolving nature of AI's application in medicine. CONCLUSIONS The research topics have evolved as the volume of AI studies increases annually. Machine learning remains central to medical AI research, with deep learning expected to maintain its fundamental role. Empowered by predictive algorithms, pattern recognition, and imaging analysis capabilities, future AI research in medicine is anticipated to concentrate on medical diagnosis, robotic intervention, and disease management. Our topic modeling outcomes provide clear insight into the focus of AI research in medicine over the past decades and lay the groundwork for predicting future directions. The domains that have attracted considerable research attention, primarily the learning domain, will continue to shape the trajectory of AI in medicine. Given the observed growing interest, the domain of AI ethics and philosophy also stands out as a prospective area of increased focus.
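The keyword-frequency half of the dual-pronged method above can be sketched with the Python standard library alone; the mini-corpus and stopword list below are invented stand-ins for the retrieved PubMed titles:

```python
import re
from collections import Counter

# Hypothetical stand-ins for retrieved PubMed titles/abstracts.
records = [
    "Deep learning for medical image segmentation",
    "Machine learning and deep learning in clinical diagnosis",
    "Reinforcement learning for robotic surgery planning",
]

STOPWORDS = {"for", "and", "in", "the", "of"}

def keyword_frequencies(texts):
    """Lowercase, tokenize, drop stopwords, and count term frequencies."""
    counts = Counter()
    for text in texts:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts

freq = keyword_frequencies(records)
print(freq.most_common(2))  # → [('learning', 4), ('deep', 2)]
```

A latent Dirichlet allocation pass, the study's second prong, would then operate on the same token streams.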
Affiliation(s)
- Jin Shi
- Institute for Entrepreneurship, University of Münster, Münster, Germany
- David Bendig
- Institute for Entrepreneurship, University of Münster, Münster, Germany
- Peter Rasche
- Department of Healthcare, University of Applied Science - Hochschule Niederrhein, Krefeld, Germany
23
Ur Rehman H, Shuaib M, Ismail EAA, Li S. Enhancing medical ultrasound imaging through fractional mathematical modeling of ultrasound bubble dynamics. ULTRASONICS SONOCHEMISTRY 2023; 100:106603. [PMID: 37741023 PMCID: PMC10523275 DOI: 10.1016/j.ultsonch.2023.106603] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/05/2023] [Revised: 09/06/2023] [Accepted: 09/16/2023] [Indexed: 09/25/2023]
Abstract
Classical mathematical models of the ultrasound acoustic bubble have so far been used to improve medical imaging quality. A clear and visible medical ultrasound image relies on the bubble's diameter, the wavelength, and the intensity of the scattered sound: a bubble with a diameter much smaller than the sound wavelength is regarded as a highly efficient source of sound scattering. The dynamical equation for a medical ultrasound bubble is first modeled as a classical integer-order differential equation. A reduction-of-order technique is then used to convert the modeled dynamic equation for the bubble surface into a system of incommensurate fractional orders. The incommensurate fractional-order values are calculated directly using the Riemann stability region. On the basis of stability, the convergence and accuracy of the numerical scheme are also discussed in detail. It has been found that the system remains stable and chaotic for the incommensurate values α1 < 0.737 and α2 < 2.80, respectively.
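The abstract does not reproduce the numerical scheme itself. A common starting point for discretizing such incommensurate fractional-order systems is the Grünwald-Letnikov definition, sketched below; the function names and the demonstration are illustrative, not taken from the paper:

```python
def gl_coefficients(alpha, n):
    """Grünwald-Letnikov coefficients w_k = (-1)^k * C(alpha, k), via the
    standard recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f_samples, alpha, h):
    """Approximate the order-alpha fractional derivative at the last sample of
    an equally spaced history f_samples with step h."""
    n = len(f_samples) - 1
    w = gl_coefficients(alpha, n)
    return sum(w[k] * f_samples[n - k] for k in range(n + 1)) / h ** alpha
```

For α = 1 the coefficients collapse to (1, -1, 0, …), recovering the ordinary backward difference, which gives a quick sanity check on the recurrence.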
Affiliation(s)
- Hijab Ur Rehman
- City University of Science and Information Technology, Peshawar, Khyber Pakhtunkhwa, Pakistan.
- Muhammad Shuaib
- City University of Science and Information Technology, Peshawar, Khyber Pakhtunkhwa, Pakistan.
- Emad A A Ismail
- Department of Quantitative Analysis, College of Business Administration, King Saud University, P. O. Box 71115, Riyadh 11587, Saudi Arabia.
- Shuo Li
- School of Mathematics and Data Sciences, Changji University, Changji, Xinjiang 831100, PR China.
24
Shu YC, Lo YC, Chiu HC, Chen LR, Lin CY, Wu WT, Özçakar L, Chang KV. Deep learning algorithm for predicting subacromial motion trajectory: Dynamic shoulder ultrasound analysis. ULTRASONICS 2023; 134:107057. [PMID: 37290256 DOI: 10.1016/j.ultras.2023.107057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Revised: 05/14/2023] [Accepted: 05/24/2023] [Indexed: 06/10/2023]
Abstract
Subacromial motion metrics can be extracted from dynamic shoulder ultrasonography, which is useful for identifying abnormal motion patterns in painful shoulders. However, frame-by-frame manual labeling of anatomical landmarks in ultrasound images is time-consuming. The present study aims to investigate the feasibility of a deep learning algorithm for extracting subacromial motion metrics from dynamic ultrasonography. Dynamic ultrasound imaging was acquired by asking 17 participants to perform cyclic shoulder abduction and adduction along the scapular plane, whereby the trajectory of the humeral greater tubercle (in relation to the lateral acromion) was depicted by the deep learning algorithm. Extraction of the subacromial motion metrics was conducted using a convolutional neural network (CNN) or a self-transfer learning-based (STL-)CNN with or without an autoencoder (AE). The mean absolute error (MAE) compared with the manually labeled data (ground truth) served as the main outcome variable. Using eight-fold cross-validation, the average MAE for the relative difference between the greater tubercle and lateral acromion on the horizontal axis proved to be significantly higher with the CNN than with the STL-CNN or STL-CNN+AE. The MAE for the localization of the two landmarks on the vertical axis also appeared larger with the CNN than with the STL-CNN. In the testing dataset, the errors relative to the ground truth for the minimal vertical acromiohumeral distance were 0.081-0.333 cm using the CNN, compared with 0.002-0.007 cm using the STL-CNN. We successfully demonstrated the feasibility of a deep learning algorithm for automatic detection of the greater tubercle and lateral acromion during dynamic shoulder ultrasonography. Our framework also demonstrated the capability of capturing the minimal vertical acromiohumeral distance, which is the most important indicator of subacromial motion metrics in daily clinical practice.
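The two outcome measures named above, MAE against manually labeled landmarks and the minimal vertical acromiohumeral distance, reduce to a few lines. This is a generic sketch with hypothetical coordinate lists, not the authors' pipeline:

```python
def mae(pred, truth):
    """Mean absolute error between predicted and ground-truth coordinates (cm)."""
    assert len(pred) == len(truth)
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

def min_acromiohumeral_distance(acromion_y, tubercle_y):
    """Minimal vertical acromion-to-greater-tubercle distance across all frames
    of one abduction-adduction sweep (per-frame vertical coordinates, cm)."""
    return min(abs(a - t) for a, t in zip(acromion_y, tubercle_y))
```

For instance, per-frame vertical coordinates [2.0, 1.8, 2.1] for the acromion and [1.0, 1.1, 1.0] for the tubercle give a minimal distance of 0.7 cm.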
Affiliation(s)
- Yi-Chung Shu
- Institute of Applied Mechanics, College of Engineering, National Taiwan University, Taipei, Taiwan
- Yu-Cheng Lo
- Institute of Applied Mechanics, College of Engineering, National Taiwan University, Taipei, Taiwan
- Hsiao-Chi Chiu
- Institute of Applied Mechanics, College of Engineering, National Taiwan University, Taipei, Taiwan
- Lan-Rong Chen
- Department of Physical Medicine and Rehabilitation and Community and Geriatric Research Center, National Taiwan University Hospital, Bei-Hu Branch, Taipei, Taiwan
- Che-Yu Lin
- Institute of Applied Mechanics, College of Engineering, National Taiwan University, Taipei, Taiwan
- Wei-Ting Wu
- Department of Physical Medicine and Rehabilitation and Community and Geriatric Research Center, National Taiwan University Hospital, Bei-Hu Branch, Taipei, Taiwan; Department of Physical Medicine and Rehabilitation, National Taiwan University College of Medicine, Taipei, Taiwan
- Levent Özçakar
- Department of Physical and Rehabilitation Medicine, Hacettepe University Medical School, Ankara, Turkey
- Ke-Vin Chang
- Department of Physical Medicine and Rehabilitation and Community and Geriatric Research Center, National Taiwan University Hospital, Bei-Hu Branch, Taipei, Taiwan; Department of Physical Medicine and Rehabilitation, National Taiwan University College of Medicine, Taipei, Taiwan; Center for Regional Anesthesia and Pain Medicine, Wang-Fang Hospital, Taipei Medical University, Taipei, Taiwan.
25
Salih AK, ALWAN AH, Opulencia MJC, Uinarni H, Khamidova FM, Atiyah MS, Awadh SA, Hammid AT, Arzehgar Z. Evaluation of Cholesterol Thickness of Blood Vessels Using Photoacoustic Technology. BIOMED RESEARCH INTERNATIONAL 2023; 2023:2721427. [PMID: 37090193 PMCID: PMC10115531 DOI: 10.1155/2023/2721427] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/17/2022] [Revised: 06/05/2022] [Accepted: 06/24/2022] [Indexed: 04/25/2023]
Abstract
One of the primary indicators of plaque vulnerability is the lipid composition of atherosclerotic plaques. The medical field therefore requires a sensitive method for evaluating necrotic cores in atherosclerosis imaging. In this regard, photoacoustic imaging is a plaque detection method that provides chemical information on lipids and cholesterol thickness in the patient's arterial walls. This study aims to increase the low-frequency axial resolution by developing a new photoacoustic-based system. A photoacoustic system has been developed to detect the cholesterol thickness of blood vessels in order to observe the progression of plaque in the heart's blood vessels. The application of the coherent photoacoustic discontinuous correlation tomography technique, based on novel signal processing, significantly increased sensitivity to cholesterol oleate in plaque necrosis. By enhancing the quality of thickness detection, the minimum cholesterol thickness measurable in blood vessels has been reduced to approximately 23 microns. The results show that the phase spectrum peaked at 58.66 degrees at 100 Hz and reached 46.37 degrees at 400 Hz; the minimum amplitude was 1.95 at 100 Hz and 17.67 at 400 Hz. In conclusion, photoacoustic imaging, a method based on new technologies that uses nonionizing radiation, is of great importance in medical research for performing diagnostics and measuring different types of body tissue.
Affiliation(s)
- Ala Hadi ALWAN
- Ibn Al-Bitar Specialized Center for Cardiac Surgery, Baghdad, Iraq
- Herlina Uinarni
- Atma Jaya Catholic University of Indonesia, Jakarta, Indonesia
- Pantai Indah Kapuk Hospital, North Jakarta, Indonesia
- Firuza M. Khamidova
- Department of Ophthalmology, Samarkand State Medical Institute, Samarkand, Uzbekistan
- Tashkent State Dental Institute, Tashkent, Uzbekistan
- Zeinab Arzehgar
- Department of Chemistry, Payame Noor University, Tehran, Iran
26
Nazir S, Dickson DM, Akram MU. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med 2023; 156:106668. [PMID: 36863192 DOI: 10.1016/j.compbiomed.2023.106668] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2022] [Revised: 01/12/2023] [Accepted: 02/10/2023] [Indexed: 02/21/2023]
Abstract
Artificial intelligence (AI) techniques based on deep learning have revolutionized disease diagnosis with their outstanding image classification performance. In spite of these outstanding results, the widespread adoption of such techniques in clinical practice is still proceeding at a moderate pace. One of the major hindrances is that a trained deep neural network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. Such interpretability is of utmost importance in the regulated healthcare domain, where it increases practitioners', patients', and other stakeholders' trust in an automated diagnosis system. The application of deep learning to medical imaging must be treated with caution owing to health and safety concerns, similar to blame attribution in the case of an accident involving an autonomous car. The consequences of both false positive and false negative cases are far-reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art deep learning algorithms comprise complex interconnected structures and millions of parameters and have a 'black box' nature, offering little understanding of their inner workings, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques help to understand model predictions, which builds trust in the system, accelerates disease diagnosis, and supports adherence to regulatory requirements. This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also provide a categorization of XAI techniques, discuss the open challenges, and outline future directions for XAI that should interest clinicians, regulators, and model developers.
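Among the model-agnostic XAI techniques such a survey covers, occlusion sensitivity is simple enough to sketch without any deep learning framework: slide a patch over the input, replace it with a baseline value, and record how much the model's score drops. The toy "model" below is hypothetical:

```python
def occlusion_map(image, model, patch=2, baseline=0.0):
    """Model-agnostic occlusion sensitivity: occlude each patch with a baseline
    value and record the drop in the model's score. Large drops mark regions
    the prediction depends on."""
    h, w = len(image), len(image[0])
    base_score = model(image)
    heat = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = [row[:] for row in image]  # copy, then blank one patch
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    occluded[di][dj] = baseline
            drop = base_score - model(occluded)
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    heat[di][dj] = drop
    return heat

# Toy "model": score = mean of the top-left 2x2 region, so only that patch matters.
model = lambda img: (img[0][0] + img[0][1] + img[1][0] + img[1][1]) / 4.0
image = [[1.0] * 4 for _ in range(4)]
heat = occlusion_map(image, model)  # 1.0 in the top-left patch, 0.0 elsewhere
```

With a real DNN, `model` would return the class probability of interest, and the heat map would be overlaid on the scan, much as Grad-CAM overlays are.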
Affiliation(s)
- Sajid Nazir
- Department of Computing, Glasgow Caledonian University, Glasgow, UK.
- Diane M Dickson
- Department of Podiatry and Radiography, Research Centre for Health, Glasgow Caledonian University, Glasgow, UK
- Muhammad Usman Akram
- Computer and Software Engineering Department, National University of Sciences and Technology, Islamabad, Pakistan
27
Dicle O. Artificial intelligence in diagnostic ultrasonography. Diagn Interv Radiol 2023; 29:40-45. [PMID: 36959754 PMCID: PMC10679601 DOI: 10.4274/dir.2022.211260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2022] [Accepted: 03/28/2022] [Indexed: 01/15/2023]
Abstract
Artificial intelligence (AI) continues to change paradigms in the field of medicine with new applications that are applicable to daily life. The field of ultrasonography, which has been developing since the 1950s and continues to be one of the most powerful tools in the field of diagnosis, is also the subject of AI studies, despite its unique problems. It is predicted that many operations, such as appropriate diagnostic tool selection, use of the most relevant parameters, improvement of low-quality images, automatic lesion detection and diagnosis from the image, and classification of pathologies, will be performed using AI tools in the near future. Especially with the use of convolutional neural networks, successful results can be obtained for lesion detection, segmentation, and classification from images. In this review, relevant developments are summarized based on the literature, and examples of the tools used in the field are presented.
Affiliation(s)
- Oğuz Dicle
- Department of Radiology, Dokuz Eylül University Faculty of Medicine, İzmir, Turkey
28
Huang H, Ali A, Liu Y, Xie H, Ullah S, Roy S, Song Z, Guo B, Xu J. Advances in image-guided drug delivery for antibacterial therapy. Adv Drug Deliv Rev 2023; 192:114634. [PMID: 36503884 DOI: 10.1016/j.addr.2022.114634] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Revised: 10/20/2022] [Accepted: 11/22/2022] [Indexed: 11/29/2022]
Abstract
The emergence of antibiotic-resistant bacterial strains is seriously endangering the global healthcare system. There is an urgent need to combine imaging with therapy to enable real-time monitoring of pathological conditions and treatment progress. Such imaging also provides guidance for exploring new medicines and enhancing treatment strategies to overcome the resistance bacteria have developed to existing conventional antibiotics. In this review, we provide a thorough overview of the most advanced image-guided approaches for bacterial diagnosis (e.g., computed tomography, magnetic resonance imaging, photoacoustic imaging, ultrasound imaging, fluorescence imaging, positron emission tomography, single-photon emission computed tomography, and multimodal imaging) and therapies (e.g., photothermal therapy, photodynamic therapy, chemodynamic therapy, sonodynamic therapy, immunotherapy, and combination therapies). The review focuses on how to design and fabricate photo-responsive materials for improved image-guided bacterial theranostics. We present potential applications of the different image-guided modalities for both bacterial diagnosis and therapy, with representative examples. Finally, we highlight the current challenges and future perspectives of image-guided approaches for the clinical translation of nano-theranostics in the treatment of bacterial infections. We envision that this review will inform future developments in image-guided systems for bacterial theranostics applications.
Affiliation(s)
- Haiyan Huang
- Institute of Low-Dimensional Materials Genome Initiative, College of Chemistry and Environmental Engineering, Shenzhen University, Shenzhen 518060, China; School of Science and Shenzhen Key Laboratory of Flexible Printed Electronics Technology, Harbin Institute of Technology, Shenzhen 518055, China
- Arbab Ali
- Beijing Key Laboratory of Farmland Soil Pollution Prevention and Remediation, College of Resources and Environmental Sciences, China Agricultural University, Beijing 100193, China; CAS Key Laboratory for Biomedical Effects of Nanomaterials and Nano Safety, CAS Center for Excellence in Nanoscience, National Center for Nanoscience and Technology, Beijing 100190, China
- Yi Liu
- State Key Laboratory of Agricultural Microbiology, College of Science, Huazhong Agricultural University, Wuhan 430070, China
- Hui Xie
- Institute of Low-Dimensional Materials Genome Initiative, College of Chemistry and Environmental Engineering, Shenzhen University, Shenzhen 518060, China; Chengdu Institute of Organic Chemistry, Chinese Academy of Sciences, Chengdu 610041, China
- Sana Ullah
- Department of Biotechnology, Quaid-i-Azam University, Islamabad 45320, Pakistan; Natural and Medical Sciences Research Center, University of Nizwa, P.O. Box: 33, PC: 616, Oman
- Shubham Roy
- School of Science and Shenzhen Key Laboratory of Flexible Printed Electronics Technology, Harbin Institute of Technology, Shenzhen 518055, China
- Zhiyong Song
- State Key Laboratory of Agricultural Microbiology, College of Science, Huazhong Agricultural University, Wuhan 430070, China.
- Bing Guo
- School of Science and Shenzhen Key Laboratory of Flexible Printed Electronics Technology, Harbin Institute of Technology, Shenzhen 518055, China.
- Jian Xu
- Institute of Low-Dimensional Materials Genome Initiative, College of Chemistry and Environmental Engineering, Shenzhen University, Shenzhen 518060, China.
29
Hsu ST, Su YJ, Hung CH, Chen MJ, Lu CH, Kuo CE. Automatic ovarian tumors recognition system based on ensemble convolutional neural network with ultrasound imaging. BMC Med Inform Decis Mak 2022; 22:298. [PMID: 36397100 PMCID: PMC9673368 DOI: 10.1186/s12911-022-02047-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022] Open
Abstract
BACKGROUND Upon the discovery of ovarian cysts, obstetricians, gynecologists, and ultrasound examiners must address the common clinical challenge of distinguishing between benign and malignant ovarian tumors. Numerous types of ovarian tumors exist, many of which exhibit similar characteristics that increase the ambiguity of clinical diagnosis. Using deep learning technology, we aimed to develop a method that rapidly and accurately assists the differential diagnosis of ovarian tumors in ultrasound images. METHODS Based on deep learning methods, we used ten well-known convolutional neural network (CNN) models (e.g., AlexNet, GoogLeNet, and ResNet) for transfer learning. To ensure method stability and robustness, we repeated the random sampling of the training and validation data ten times, and the mean of the ten test results was set as the final assessment. After the training process was completed, the three models with the highest ratio of classification accuracy to computation time were used for ensemble learning, and the ensemble classifier's interpretation results were used as the final results. We also applied ensemble gradient-weighted class activation mapping (Grad-CAM) technology to visualize the decision-making of the models. RESULTS The highest mean accuracy, mean sensitivity, and mean specificity of the ten single CNN models were 90.51 ± 4.36%, 89.77 ± 4.16%, and 92.00 ± 5.95%, respectively. The mean accuracy, mean sensitivity, and mean specificity of the ensemble classifier were 92.15 ± 2.84%, 91.37 ± 3.60%, and 92.92 ± 4.00%, respectively. The ensemble classifier outperformed the single classifiers on all three evaluation metrics, and its lower standard deviations indicate that it is also more stable and robust. CONCLUSION From the comprehensive perspective of data quantity, data diversity, robustness of the validation strategy, and overall accuracy, the proposed method outperformed the methods used in previous studies. In future studies, we will continue to increase the number of authenticated images and apply our proposed method in clinical settings to increase its robustness and reliability.
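The ensemble step, averaging the class probabilities of the top models and reporting each metric as mean ± standard deviation across the repeated splits, can be sketched generically; this is not the authors' code, and the probabilities below are invented:

```python
from statistics import mean, stdev

def soft_vote(prob_lists):
    """Average per-class probabilities from several classifiers, return the argmax class."""
    n_classes = len(prob_lists[0])
    avg = [mean(p[c] for p in prob_lists) for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])

def summarize(scores):
    """Report a cross-validation metric as (mean, standard deviation)."""
    return mean(scores), stdev(scores)

# Three hypothetical classifiers voting on one benign-vs-malignant image.
label = soft_vote([[0.6, 0.4], [0.4, 0.6], [0.7, 0.3]])  # → class 0 (benign)
m, s = summarize([0.90, 0.92, 0.88])  # accuracy over three hypothetical repeats
```

Reporting mean ± standard deviation, as the abstract does, makes both the level and the stability of each classifier visible at a glance.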
Affiliation(s)
- Shih-Tien Hsu
- Department of Obstetrics, Gynecology and Women's Health, Taichung Veterans General Hospital, No. 1650 Sec. 4 Taiwan Blvd. Xitun Dist., Taichung, 407, Taiwan
- Yu-Jie Su
- Master's Program of Biomedical Infomatics and Biomedical Engineering, Feng Chia University, No. 100 Wenhua Rd. Xitun Dist., Taichung, 407, Taiwan
- Chian-Huei Hung
- Department of Obstetrics, Gynecology and Women's Health, Taichung Veterans General Hospital, No. 1650 Sec. 4 Taiwan Blvd. Xitun Dist., Taichung, 407, Taiwan
- Ming-Jer Chen
- Department of Obstetrics, Gynecology and Women's Health, Taichung Veterans General Hospital, No. 1650 Sec. 4 Taiwan Blvd. Xitun Dist., Taichung, 407, Taiwan
- Chien-Hsing Lu
- Department of Obstetrics, Gynecology and Women's Health, Taichung Veterans General Hospital, No. 1650 Sec. 4 Taiwan Blvd. Xitun Dist., Taichung, 407, Taiwan
- Chih-En Kuo
- Department of Applied Mathematics, National Chung Hsing University, No. 145, Xingda Rd., South Dist., Taichung, 402, Taiwan.
30
Hamamoto R, Koyama T, Kouno N, Yasuda T, Yui S, Sudo K, Hirata M, Sunami K, Kubo T, Takasawa K, Takahashi S, Machino H, Kobayashi K, Asada K, Komatsu M, Kaneko S, Yatabe Y, Yamamoto N. Introducing AI to the molecular tumor board: one direction toward the establishment of precision medicine using large-scale cancer clinical and biological information. Exp Hematol Oncol 2022; 11:82. [PMID: 36316731 PMCID: PMC9620610 DOI: 10.1186/s40164-022-00333-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Accepted: 10/05/2022] [Indexed: 11/10/2022] Open
Abstract
Since U.S. President Barack Obama announced the Precision Medicine Initiative in his State of the Union address in 2015, the establishment of precision medicine systems has been emphasized worldwide, particularly in oncology. With the advent of next-generation sequencers in particular, genome analysis technology has made remarkable progress, and there are active efforts to apply genomic information to diagnosis and treatment. Generally, to feed the results of next-generation sequencing analysis back to patients, a molecular tumor board (MTB), consisting of experts in clinical oncology, genetic medicine, and related fields, is established to discuss the results. An MTB, however, currently involves a large amount of work: humans search through vast databases and literature, select the best drug candidates, and manually confirm the status of available clinical trials. As personalized medicine advances, the burden on MTB members is expected to increase further. Under these circumstances, introducing cutting-edge artificial intelligence (AI) and information and communication technology to MTBs, reducing the burden on MTB members while building a platform that enables more accurate and personalized medical care, would be of great benefit to patients. In this review, we introduce the latest status of elemental technologies with potential for AI utilization in MTBs and discuss issues that may arise as AI implementation progresses.
Affiliation(s)
- Ryuji Hamamoto
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Takafumi Koyama
- Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Nobuji Kouno
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Department of Surgery, Graduate School of Medicine, Kyoto University, Yoshida-konoe-cho, Sakyo-ku, Kyoto 606-8303, Japan
- Tomohiro Yasuda
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Research and Development Group, Hitachi, Ltd., 1-280 Higashi-koigakubo, Kokubunji, Tokyo 185-8601, Japan
- Shuntaro Yui
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Research and Development Group, Hitachi, Ltd., 1-280 Higashi-koigakubo, Kokubunji, Tokyo 185-8601, Japan
- Kazuki Sudo
- Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Department of Medical Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Makoto Hirata
- Department of Genetic Medicine and Services, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kuniko Sunami
- Department of Laboratory Medicine, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Takashi Kubo
- Department of Laboratory Medicine, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ken Takasawa
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Satoshi Takahashi
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Hidenori Machino
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Kazuma Kobayashi
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ken Asada
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Masaaki Komatsu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Syuzo Kaneko
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Yasushi Yatabe
- Department of Diagnostic Pathology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Division of Molecular Pathology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Noboru Yamamoto
- Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
Collapse
|
31
|
Corridon PR, Wang X, Shakeel A, Chan V. Digital Technologies: Advancing Individualized Treatments through Gene and Cell Therapies, Pharmacogenetics, and Disease Detection and Diagnostics. Biomedicines 2022; 10:biomedicines10102445. [PMID: 36289707 PMCID: PMC9599083 DOI: 10.3390/biomedicines10102445] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Accepted: 09/25/2022] [Indexed: 11/28/2022] Open
Abstract
Digital technologies are shifting the paradigm of medicine in a way that will transform the healthcare industry. Conventional medical approaches focus on treating symptoms and ailments for large groups of people. These approaches can elicit differences in treatment responses and adverse reactions based on population variations, and are often incapable of treating the inherent pathophysiology of the medical conditions. Advances in genetics and engineering are improving healthcare via individualized treatments that include gene and cell therapies, pharmacogenetics, disease detection, and diagnostics. This paper highlights ways that artificial intelligence can help usher in an age of personalized medicine.
Affiliation(s)
- Peter R. Corridon
- Department of Immunology and Physiology, College of Medicine and Health Sciences, Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates
- Biomedical Engineering and Healthcare Engineering Innovation Center, Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates
- Center for Biotechnology, Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates
- Correspondence:
- Xinyu Wang
- Department of Immunology and Physiology, College of Medicine and Health Sciences, Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates
- Biomedical Engineering and Healthcare Engineering Innovation Center, Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates
- Adeeba Shakeel
- Department of Immunology and Physiology, College of Medicine and Health Sciences, Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates
- Vincent Chan
- Biomedical Engineering and Healthcare Engineering Innovation Center, Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates

32
Segmentation-Based Classification Deep Learning Model Embedded with Explainable AI for COVID-19 Detection in Chest X-ray Scans. Diagnostics (Basel) 2022; 12:diagnostics12092132. [PMID: 36140533 PMCID: PMC9497601 DOI: 10.3390/diagnostics12092132] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Revised: 08/26/2022] [Accepted: 08/30/2022] [Indexed: 12/16/2022] Open
Abstract
Background and Motivation: COVID-19 has resulted in a massive loss of life during the last two years. The current imaging-based diagnostic methods for COVID-19 detection in multiclass pneumonia-type chest X-rays have had limited success in clinical practice due to high error rates. Our hypothesis states that if we can achieve a segmentation-based classification error rate <5%, typically adopted for 510(k) regulatory purposes, the diagnostic system can be adopted in clinical settings. Method: This study proposes 16 types of segmentation-based classification deep learning systems for automatic, rapid, and precise detection of COVID-19. Two deep learning-based segmentation networks, namely UNet and UNet+, along with eight classification models, namely VGG16, VGG19, Xception, InceptionV3, Densenet201, NASNetMobile, Resnet50, and MobileNet, were applied to select the best-suited combination of networks. Using the cross-entropy loss function, system performance was evaluated by Dice, Jaccard, area-under-the-curve (AUC), and receiver operating characteristic (ROC) metrics and validated using Grad-CAM in an explainable AI framework. Results: The best performing segmentation model was UNet, which exhibited accuracy, loss, Dice, Jaccard, and AUC of 96.35%, 0.15%, 94.88%, 90.38%, and 0.99 (p-value <0.0001), respectively. The best performing segmentation-based classification model was UNet+Xception, which exhibited accuracy, precision, recall, F1-score, and AUC of 97.45%, 97.46%, 97.45%, 97.43%, and 0.998 (p-value <0.0001), respectively. Our system outperformed existing segmentation-based classification models; the mean improvement of the UNet+Xception system over all the remaining studies was 8.27%. Conclusion: Segmentation-based classification is a viable option, as the hypothesis (error rate <5%) holds true and is thus adaptable in clinical practice.
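The Dice and Jaccard overlap measures this entry reports can be illustrated with a minimal sketch (not the authors' code; the toy masks below are invented for illustration):

```python
def dice(a, b):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    inter = sum(x & y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0

def jaccard(a, b):
    """Jaccard index (intersection over union) between two binary masks."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 1.0

pred = [1, 1, 1, 0, 0, 0]   # hypothetical predicted segmentation mask
truth = [1, 1, 0, 0, 0, 1]  # hypothetical ground-truth mask
print(dice(pred, truth))     # 2*2/(3+3) ≈ 0.667
print(jaccard(pred, truth))  # 2/4 = 0.5
```

The two scores are monotonically related (J = D / (2 − D)), which is why papers often report both from the same overlap counts.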
33
Hamamoto R, Takasawa K, Machino H, Kobayashi K, Takahashi S, Bolatkan A, Shinkai N, Sakai A, Aoyama R, Yamada M, Asada K, Komatsu M, Okamoto K, Kameoka H, Kaneko S. Application of non-negative matrix factorization in oncology: one approach for establishing precision medicine. Brief Bioinform 2022; 23:6628783. [PMID: 35788277 PMCID: PMC9294421 DOI: 10.1093/bib/bbac246] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 05/06/2022] [Accepted: 05/25/2022] [Indexed: 12/19/2022] Open
Abstract
The increase in expectations of artificial intelligence (AI) technology has led to machine learning being actively used in the medical field. Non-negative matrix factorization (NMF) is a machine learning technique used for image analysis, speech recognition, and language processing; recently, it has also been applied to medical research. Precision medicine, wherein important information is extracted from large-scale medical data to provide optimal medical care for each individual, is considered important in medical policies globally, and machine learning techniques are being applied to this end in several ways; NMF, too, is introduced in different forms owing to the characteristics of its algorithm. In this review, the importance of NMF in the field of medicine, with a focus on oncology, is described by explaining the mathematical science of NMF and the characteristics of its algorithm, providing examples of how NMF can be used to establish precision medicine, and presenting the challenges of NMF. Finally, the direction of the effective use of NMF in the field of oncology is also discussed.
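For orientation, the factorization itself can be sketched with the classic Lee–Seung multiplicative updates for the Frobenius objective (a generic sketch, not the review's method; the toy matrix, rank, and iteration count are invented):

```python
import random

def nmf(V, k, iters=200, eps=1e-9):
    """Factor a non-negative matrix V (list of lists, n x m) into W (n x k)
    and H (k x m) with multiplicative updates minimizing ||V - WH||_F^2."""
    n, m = len(V), len(V[0])
    rng = random.Random(0)
    W = [[rng.random() for _ in range(k)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(k)]

    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

    def transpose(A):
        return [list(r) for r in zip(*A)]

    for _ in range(iters):
        # Update H: H <- H * (W^T V) / (W^T W H)
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)] for i in range(k)]
        # Update W: W <- W * (V H^T) / (W H H^T)
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(n)]
    return W, H

V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy non-negative data matrix
W, H = nmf(V, k=2, iters=500)
# W @ H approximately reconstructs V, with all factors non-negative
```

The non-negativity of W and H is what makes the factors interpretable as additive parts, which is the property the review exploits for omics signatures.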
Affiliation(s)
- Rina Aoyama
- Showa University Graduate School of Medicine
- Ken Asada
- RIKEN Center for Advanced Intelligence Project

34
Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via Ultrasound Images. iScience 2022; 25:104713. [PMID: 35856024 PMCID: PMC9287600 DOI: 10.1016/j.isci.2022.104713] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 06/09/2022] [Accepted: 06/28/2022] [Indexed: 11/26/2022] Open
Abstract
Several reviews have been conducted regarding artificial intelligence (AI) techniques to improve pregnancy outcomes, but they do not focus on ultrasound images. This survey aims to explore how AI can assist with fetal growth monitoring via ultrasound images. We reported our findings using the PRISMA guidelines. We conducted a comprehensive search of eight bibliographic databases. Out of 1269 studies, 107 are included. We found that 2D ultrasound images were more popular (88) than 3D and 4D ultrasound images (19). Classification is the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The most common areas that gained traction within the pregnancy domain were the fetus head (43), fetus body (31), fetus heart (13), fetus abdomen (10), and the fetus face (10). This survey will promote the development of improved AI models for fetal clinical applications. Highlights: artificial intelligence studies to monitor fetal development via ultrasound images; fetal issues categorized based on four categories (general, head, heart, face, abdomen); the most used AI techniques are classification, segmentation, object detection, and reinforcement learning; the research and practical implications are included.
35
Automated Endocardial Border Detection and Left Ventricular Functional Assessment in Echocardiography Using Deep Learning. Biomedicines 2022; 10:biomedicines10051082. [PMID: 35625819 PMCID: PMC9138644 DOI: 10.3390/biomedicines10051082] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 05/02/2022] [Accepted: 05/04/2022] [Indexed: 02/05/2023] Open
Abstract
Endocardial border detection is a key step in assessing left ventricular systolic function in echocardiography. However, this process is still not sufficiently accurate, and manual retracing is often required, which is time-consuming and introduces intra-/inter-observer variability in clinical practice. To address these clinical issues, more accurate and normalized automatic endocardial border detection would be valuable. Here, we develop a deep learning-based method for automated endocardial border detection and left ventricular functional assessment in two-dimensional echocardiographic videos. First, segmentation of the left ventricular cavity was performed in the six representative projections for a cardiac cycle. We employed four segmentation methods: U-Net, UNet++, UNet3+, and Deep Residual U-Net. UNet++ and UNet3+ showed a sufficiently high performance in the mean values of intersection over union and the Dice coefficient. The accuracy of the four segmentation methods was then evaluated by calculating the mean estimation error of the echocardiographic indexes. UNet++ was superior to the other segmentation methods, with acceptable mean estimation errors of 10.8% for left ventricular ejection fraction, 8.5% for global longitudinal strain, and 5.8% for global circumferential strain. Our method using UNet++ demonstrated the best performance. This method may potentially support examiners and improve the workflow in echocardiography.
36
Li Y, Gu H, Wang H, Qin P, Wang J. BUSnet: A Deep Learning Model of Breast Tumor Lesion Detection for Ultrasound Images. Front Oncol 2022; 12:848271. [PMID: 35402269 PMCID: PMC8989926 DOI: 10.3389/fonc.2022.848271] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Accepted: 02/23/2022] [Indexed: 12/01/2022] Open
Abstract
Ultrasound (US) imaging is a primary modality for breast disease screening. Automatically detecting lesions in US images is essential for developing artificial-intelligence-based diagnostic support technologies. However, the intrinsic characteristics of ultrasound imaging, such as speckle noise and acoustic shadow, degrade detection accuracy. In this study, we developed a deep learning model called BUSnet to detect breast tumor lesions in US images with high accuracy. We first developed a two-stage method comprising unsupervised region proposal and bounding-box regression algorithms. Then, we proposed a post-processing method to further enhance detection accuracy. The proposed method was applied to a benchmark dataset, which includes 487 benign samples and 210 malignant samples. The results proved the effectiveness and accuracy of the proposed method.
Affiliation(s)
- Yujie Li
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Hong Gu
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Hongyu Wang
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Pan Qin
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Jia Wang
- Department of Surgery, The Second Hospital of Dalian Medical University, Dalian, China

37
Xia Q, Du M, Li B, Hou L, Chen Z. Interdisciplinary Collaboration Opportunities, Challenges and Solutions for Artificial Intelligence in Ultrasound. Curr Med Imaging 2022; 18:1046-1051. [PMID: 35319383 DOI: 10.2174/1573405618666220321123126] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 12/20/2021] [Accepted: 01/19/2022] [Indexed: 11/22/2022]
Abstract
Ultrasound is one of the most widely utilized imaging tools in clinical practice, with the advantages of a noninvasive nature and ease of use. However, ultrasound examinations have low reproducibility and considerable heterogeneity due to variability among operators, scanners, and patients. In recent years, Artificial Intelligence (AI)-assisted ultrasound has matured and moved closer to routine clinical use. The combination of AI with ultrasound has opened up a world of possibilities for increasing work productivity and precision diagnostics. In this article, we describe AI strategies in ultrasound, from current opportunities and constraints to potential options for AI-assisted ultrasound.
Affiliation(s)
- Qingrong Xia
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
- Institute of Medical Imaging, University of South China, Hengyang, China
- Meng Du
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
- Institute of Medical Imaging, University of South China, Hengyang, China
- Bin Li
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
- Institute of Medical Imaging, University of South China, Hengyang, China
- Likang Hou
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
- Institute of Medical Imaging, University of South China, Hengyang, China
- Zhiyi Chen
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
- Institute of Medical Imaging, University of South China, Hengyang, China

38
Common and Uncommon Errors in Emergency Ultrasound. Diagnostics (Basel) 2022; 12:diagnostics12030631. [PMID: 35328184 PMCID: PMC8947314 DOI: 10.3390/diagnostics12030631] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 02/27/2022] [Accepted: 03/01/2022] [Indexed: 12/19/2022] Open
Abstract
Errors in emergency ultrasound (US) have become an increasing problem in recent years owing to several unique features related both to the inherent characteristics of the discipline and to the latest developments, which every medical operator should be aware of. Because of the subjective nature of the interpretation of emergency US findings, it is more prone to errors than other diagnostic imaging modalities. The misinterpretation of US images should therefore be considered a serious risk in diagnosis. The etiology of error is multi-factorial: it depends on environmental factors, patients, and the technical skills of the operator; it is influenced by intrinsic US artifacts, poor clinical correlation, US-setting errors, and anatomical variants; and it is also conditioned by the lack of a methodologically correct clinical approach and excessive diagnostic confidence. In this review, we evaluate the common and uncommon sources of diagnostic errors in emergency US during clinical practice, showing how to recognize and avoid them.
39
Sakai A, Komatsu M, Komatsu R, Matsuoka R, Yasutomi S, Dozen A, Shozu K, Arakaki T, Machino H, Asada K, Kaneko S, Sekizawa A, Hamamoto R. Medical Professional Enhancement Using Explainable Artificial Intelligence in Fetal Cardiac Ultrasound Screening. Biomedicines 2022; 10:551. [PMID: 35327353 PMCID: PMC8945208 DOI: 10.3390/biomedicines10030551] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 02/18/2022] [Accepted: 02/21/2022] [Indexed: 12/10/2022] Open
Abstract
Diagnostic support tools based on artificial intelligence (AI) have exhibited high performance in various medical fields. However, their clinical application remains challenging because of the lack of explanatory power in AI decisions (black box problem), making it difficult to build trust with medical professionals. Nevertheless, visualizing the internal representation of deep neural networks will increase explanatory power and improve the confidence of medical professionals in AI decisions. We propose a novel deep learning-based explainable representation "graph chart diagram" to support fetal cardiac ultrasound screening, which has low detection rates of congenital heart diseases due to the difficulty in mastering the technique. Screening performance improves using this representation from 0.966 to 0.975 for experts, 0.829 to 0.890 for fellows, and 0.616 to 0.748 for residents in the arithmetic mean of area under the curve of a receiver operating characteristic curve. This is the first demonstration wherein examiners used deep learning-based explainable representation to improve the performance of fetal cardiac ultrasound screening, highlighting the potential of explainable AI to augment examiner capabilities.
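The screening-performance figures this entry reports are ROC AUCs. For reference, AUC can be computed directly from scores and labels via its rank-statistic form, the probability that a randomly chosen positive outranks a randomly chosen negative (a minimal sketch; the scores and labels below are invented, and both classes are assumed present):

```python
def auc(labels, scores):
    """ROC AUC as the probability that a random positive scores higher
    than a random negative; ties count as 0.5 (Mann-Whitney form)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0]            # hypothetical case/control labels
scores = [0.9, 0.8, 0.4, 0.5, 0.2]  # hypothetical screening scores
print(auc(labels, scores))          # 5/6 ≈ 0.833
```

This pairwise form is equivalent to integrating the ROC curve and makes clear why AUC is insensitive to any monotonic rescaling of the scores.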
Affiliation(s)
- Akira Sakai
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki 211-8588, Japan
- RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of NCC Cancer Science, Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Masaaki Komatsu
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Reina Komatsu
- RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan
- Ryu Matsuoka
- RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan
- Suguru Yasutomi
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki 211-8588, Japan
- RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ai Dozen
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kanto Shozu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Tatsuya Arakaki
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan
- Hidenori Machino
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ken Asada
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Syuzo Kaneko
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Akihiko Sekizawa
- Department of Obstetrics and Gynecology, School of Medicine, Showa University, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan
- Ryuji Hamamoto
- Department of NCC Cancer Science, Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan

40
Ahmad B, Sun J, You Q, Palade V, Mao Z. Brain Tumor Classification Using a Combination of Variational Autoencoders and Generative Adversarial Networks. Biomedicines 2022; 10:223. [PMID: 35203433 PMCID: PMC8869455 DOI: 10.3390/biomedicines10020223] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2021] [Revised: 12/23/2021] [Accepted: 01/03/2022] [Indexed: 11/16/2022] Open
Abstract
Brain tumors are a pernicious cancer with one of the lowest five-year survival rates. Neurologists often use magnetic resonance imaging (MRI) to diagnose the type of brain tumor. Automated computer-assisted tools can help speed up the diagnosis process and reduce the burden on health care systems. Recent advances in deep learning for medical imaging have shown remarkable results, especially in the automatic and instant diagnosis of various cancers. However, a large amount of data (images) is needed to train deep learning models in order to obtain good results, and large public datasets are rare in medicine. This paper proposes a framework based on unsupervised deep generative neural networks to address this limitation. We combine two generative models in the proposed framework: variational autoencoders (VAEs) and generative adversarial networks (GANs). We swap the encoder-decoder network after initially training it on the training set of available MR images. The output of this swapped network is a noise vector that carries information about the image manifold, and the cascaded generative adversarial network samples its input from this informative noise vector instead of random Gaussian noise. The proposed method helps the GAN avoid mode collapse and generate realistic-looking brain tumor magnetic resonance images. These artificially generated images could mitigate the limitation of small medical datasets to a reasonable extent and help deep learning models perform acceptably. We used ResNet50 as a classifier, and the artificially generated brain tumor images were used to augment the real and available images during classifier training. We compared the classification results with several existing studies and state-of-the-art machine learning models; our proposed methodology achieved noticeably better results. By using brain tumor images generated artificially by our proposed method, the classification average accuracy improved from 72.63% to 96.25%. For the most severe class of brain tumor, glioma, we achieved recall, specificity, precision, and F1-score values of 0.769, 0.837, 0.833, and 0.80, respectively. The proposed generative model framework could be used to generate medical images in any domain, including PET (positron emission tomography) and MRI scans of various parts of the body, and the results show that it could be a useful clinical tool for medical experts.
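The per-class figures reported for glioma (recall, specificity, precision, F1) all derive from one-vs-rest confusion-matrix counts. A generic sketch of that derivation (the counts below are invented for illustration, not the paper's data):

```python
def class_metrics(tp, fp, fn, tn):
    """Recall, specificity, precision, and F1 from one-vs-rest confusion
    counts: true/false positives (tp, fp) and false/true negatives (fn, tn)."""
    recall = tp / (tp + fn)            # sensitivity: detected positives
    specificity = tn / (tn + fp)       # correctly rejected negatives
    precision = tp / (tp + fp)         # positive predictive value
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return recall, specificity, precision, f1

# hypothetical counts for a single tumor class
r, s, p, f = class_metrics(tp=80, fp=10, fn=20, tn=90)
print(round(r, 3), round(s, 3), round(p, 3), round(f, 3))  # 0.8 0.9 0.889 0.842
```

Note that accuracy alone can mask a weak class; reporting recall and specificity separately, as this entry does for glioma, exposes the trade-off.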
Affiliation(s)
- Bilal Ahmad
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Jun Sun
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Qi You
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Vasile Palade
- Centre for Computational Science and Mathematical Modelling, Coventry University, Coventry CV1 5FB, UK
- Zhongjie Mao
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China

41
Abstract
The delivery of healthcare from a distance, also known as telemedicine, has evolved over the past 50 years, changing the way healthcare is delivered globally. Its integration into numerous domains has permitted high-quality care that transcends the obstacles of geographic distance, lack of access to health care providers, and cost. Ultrasound is an effective diagnostic tool, and its application within telemedicine has advanced substantially in recent years, particularly in high-income settings and low-resource areas. The literature in PubMed from 1960–2020 was assessed with the keywords “ultrasound”, “telemedicine”, “ultrasound remote”, and “tele-ultrasound” to conduct a SWOT analysis (strengths, weaknesses, opportunities, and threats). In addressing strengths and opportunities, we emphasized practical aspects, such as the usefulness of tele-ultrasound and its cost efficiency. Furthermore, aspects of medical education in tele-ultrasound were considered. When it came to weaknesses and threats, we focused on issues that may not be solved immediately and that require careful consideration or further development, such as new software that is not yet available commercially.
42
Asada K, Takasawa K, Machino H, Takahashi S, Shinkai N, Bolatkan A, Kobayashi K, Komatsu M, Kaneko S, Okamoto K, Hamamoto R. Single-Cell Analysis Using Machine Learning Techniques and Its Application to Medical Research. Biomedicines 2021; 9:biomedicines9111513. [PMID: 34829742 PMCID: PMC8614827 DOI: 10.3390/biomedicines9111513] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Revised: 10/06/2021] [Accepted: 10/19/2021] [Indexed: 01/14/2023] Open
Abstract
In recent years, the diversity of cancer cells in tumor tissues as a result of intratumor heterogeneity has attracted attention. In particular, the development of single-cell analysis technology has made a significant contribution to the field; technologies that are centered on single-cell RNA sequencing (scRNA-seq) have been reported to analyze cancer constituent cells, identify cell groups responsible for therapeutic resistance, and analyze gene signatures of resistant cell groups. However, although single-cell analysis is a powerful tool, various issues have been reported, including batch effects and transcriptional noise due to gene expression variation and mRNA degradation. To overcome these issues, machine learning techniques are currently being introduced for single-cell analysis, and promising results are being reported. In addition, machine learning has also been used in various ways for single-cell analysis, such as single-cell assay of transposase accessible chromatin sequencing (ATAC-seq), chromatin immunoprecipitation sequencing (ChIP-seq) analysis, and multi-omics analysis; thus, it contributes to a deeper understanding of the characteristics of human diseases, especially cancer, and supports clinical applications. In this review, we present a comprehensive introduction to the implementation of machine learning techniques in medical research for single-cell analysis, and discuss their usefulness and future potential.
Affiliation(s)
- Ken Asada
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Correspondence: (K.A.); (R.H.); Tel.: +81-3-3547-5271 (R.H.)
- Ken Takasawa
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Hidenori Machino
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Satoshi Takahashi
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Norio Shinkai
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Amina Bolatkan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kazuma Kobayashi
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Masaaki Komatsu
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Syuzo Kaneko
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Koji Okamoto
- Division of Cancer Differentiation, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryuji Hamamoto
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Correspondence: (K.A.); (R.H.); Tel.: +81-3-3547-5271 (R.H.)

43
Asada K, Komatsu M, Shimoyama R, Takasawa K, Shinkai N, Sakai A, Bolatkan A, Yamada M, Takahashi S, Machino H, Kobayashi K, Kaneko S, Hamamoto R. Application of Artificial Intelligence in COVID-19 Diagnosis and Therapeutics. J Pers Med 2021; 11:886. [PMID: 34575663 PMCID: PMC8471764 DOI: 10.3390/jpm11090886] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Revised: 09/01/2021] [Accepted: 09/02/2021] [Indexed: 12/12/2022] Open
Abstract
The coronavirus disease 2019 (COVID-19) pandemic began at the end of December 2019, causing high rates of infection and COVID-19-associated deaths worldwide. The disease was first reported in Wuhan, China, and since then not only global leaders, organizations, and pharmaceutical/biotech companies, but also researchers, have directed their efforts toward overcoming this threat. The use of artificial intelligence (AI) has recently surged internationally and has been applied to many diverse problems. The benefits of using AI are now widely accepted, and many studies have demonstrated success in medical research on tasks such as the classification, detection, and prediction of disease, and even of patient outcomes. In fact, AI technology has been actively employed in COVID-19 research in various ways, and several clinical applications of AI-equipped medical devices for the diagnosis of COVID-19 have already been reported. Hence, in this review, we summarize the latest studies that use AI for medical imaging analysis, drug discovery, therapeutics such as vaccine development, and public health decision-making. This survey clarifies the advantages of using AI in the fight against COVID-19 and provides future directions for tackling the pandemic using AI techniques.
Affiliation(s)
- Ken Asada
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Masaaki Komatsu
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryo Shimoyama
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ken Takasawa
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Norio Shinkai
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
  - Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Akira Sakai
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
  - Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Amina Bolatkan
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Masayoshi Yamada
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
  - Department of Endoscopy, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Satoshi Takahashi
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Hidenori Machino
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kazuma Kobayashi
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Syuzo Kaneko
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryuji Hamamoto
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
  - Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
|