1. S S, Rufus NHA. Investigation on ultrasound images for detection of fetal congenital heart defects. Biomed Phys Eng Express 2024; 10:042001. [PMID: 38781934] [DOI: 10.1088/2057-1976/ad4f91]
Abstract
Congenital heart defects (CHD) are among the serious problems that can arise during pregnancy. Early CHD detection reduces mortality and morbidity but is hampered by the relatively low detection rate (approximately 60%) of current screening technology. The detection rate could be increased by supplementing ultrasound imaging with fetal ultrasound image evaluation (FUSI) using deep learning techniques. As a result, the non-invasive fetal ultrasound image has clear potential in the diagnosis of CHD and should be considered in addition to fetal echocardiography. This review paper highlights cutting-edge technologies for detecting CHD using ultrasound images, covering pre-processing, localization, segmentation, and classification. Existing pre-processing techniques include spatial-domain filters, non-linear mean filters, transform-domain filters, and denoising methods based on Convolutional Neural Networks (CNN); segmentation techniques include thresholding-based, region-growing-based, and edge-detection techniques, Artificial Neural Network (ANN)-based segmentation methods, and other non-deep-learning and deep learning approaches. The paper also suggests future research directions for improving current methodologies.
Affiliation(s)
- Satish S; N Herald Anantha Rufus: Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr Sagunthala R&D Institute of Science and Technology, Chennai-600062, Tamil Nadu, India
2. Pu B, Li K, Chen J, Lu Y, Zeng Q, Yang J, Li S. HFSCCD: A Hybrid Neural Network for Fetal Standard Cardiac Cycle Detection in Ultrasound Videos. IEEE J Biomed Health Inform 2024; 28:2943-2954. [PMID: 38412077] [DOI: 10.1109/jbhi.2024.3370507]
Abstract
In the fetal cardiac ultrasound examination, standard cardiac cycle (SCC) recognition is the essential foundation for diagnosing congenital heart disease. Previous studies have mostly focused on the detection of adult cardiac cycles, which may not be applicable to the fetus. In clinical practice, localization of SCCs requires accurate recognition of end-systole (ES) and end-diastole (ED) frames, ensuring that every frame in the cycle is a standard view. Most existing methods are not based on the detection of key anatomical structures, so they may fail to reject irrelevant views and background frames, may return results containing non-standard frames, or may not work at all in clinical practice. We propose an end-to-end hybrid neural network based on an object detector to detect SCCs from fetal ultrasound videos efficiently, consisting of three modules: Anatomical Structure Detection (ASD), Cardiac Cycle Localization (CCL), and Standard Plane Recognition (SPR). Specifically, ASD uses an object detector to identify 9 key anatomical structures, 3 cardiac motion phases, and the corresponding confidence scores from fetal ultrasound videos. On this basis, we propose a joint probability method in the CCL to learn the cardiac motion cycle based on the 3 cardiac motion phases. In SPR, to reduce the impact of structure detection errors on the accuracy of standard plane recognition, we use the XGBoost algorithm to learn relational knowledge of the detected anatomical structures. We evaluate our method on test fetal ultrasound video datasets and clinical examination cases and achieve remarkable results. This study may pave the way for clinical practice.
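A minimal sketch of how per-frame detections could feed an XGBoost classifier for standard-plane recognition, in the spirit of the SPR module described above. The feature layout (one confidence per structure), hyperparameters, and the random data are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: XGBoost on detected-structure confidences for standard/non-standard frames.
import numpy as np
from xgboost import XGBClassifier

N_STRUCTURES = 9  # the paper reports 9 key anatomical structures

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, N_STRUCTURES))   # placeholder detector confidences
y = (X.mean(axis=1) > 0.5).astype(int)                # placeholder standard / non-standard labels

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X, y)

frame_features = rng.uniform(0.0, 1.0, size=(1, N_STRUCTURES))
print("P(standard plane) =", clf.predict_proba(frame_features)[0, 1])
```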
3. Zhang J, Xiao S, Zhu Y, Zhang Z, Cao H, Xie M, Zhang L. Advances in the Application of Artificial Intelligence in Fetal Echocardiography. J Am Soc Echocardiogr 2024; 37:550-561. [PMID: 38199332] [DOI: 10.1016/j.echo.2023.12.013]
Abstract
Congenital heart disease is a severe health risk for newborns. Early detection of abnormalities in fetal cardiac structure and function during pregnancy can help patients seek timely diagnostic and therapeutic advice, and early intervention planning can significantly improve fetal survival rates. Echocardiography is one of the most accessible and widely used tools in the diagnosis of fetal congenital heart disease. However, traditional fetal echocardiography has limitations due to fetal, maternal, and ultrasound equipment factors and is highly dependent on the skill level of the operator. Artificial intelligence (AI) technology, with its rapid development utilizing advanced computer algorithms, has great potential to help sonographers save time, diagnose accurately, and bridge the skill gap across regions. In recent years, AI-assisted fetal echocardiography has been successfully applied to a wide range of ultrasound diagnoses. This article systematically reviews the applications of AI in fetal echocardiography in terms of image processing, biometrics, and disease diagnosis and provides an outlook for future research.
Affiliation(s)
- Junmin Zhang; Sushan Xiao; Ye Zhu; Zisang Zhang; Haiyan Cao; Mingxing Xie; Li Zhang: Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Clinical Research Center for Medical Imaging, Hubei Province, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
4. Shen W, Zhou M, Luo J, Li Z, Kwong S. Graph-Represented Distribution Similarity Index for Full-Reference Image Quality Assessment. IEEE Trans Image Process 2024; 33:3075-3089. [PMID: 38656839] [DOI: 10.1109/tip.2024.3390565]
Abstract
In this paper, we propose a graph-represented image distribution similarity (GRIDS) index for full-reference (FR) image quality assessment (IQA), which can measure the perceptual distance between distorted and reference images by assessing the disparities between their distribution patterns under a graph-based representation. First, we transform the input image into a graph-based representation, which is proven to be a versatile and effective choice for capturing visual perception features. This is achieved through the automatic generation of a vision graph from the given image content, leading to holistic perceptual associations for irregular image regions. Second, to reflect the perceived image distribution, we decompose the undirected graph into cliques and then calculate the product of the potential functions for the cliques to obtain the joint probability distribution of the undirected graph. Finally, we compare the distances between the graph feature distributions of the distorted and reference images at different stages; thus, we combine the distortion distribution measurements derived from different graph model depths to determine the perceived quality of the distorted images. The empirical results obtained from an extensive array of experiments underscore the competitive nature of our proposed method, which achieves performance on par with that of the state-of-the-art methods, demonstrating its exceptional predictive accuracy and ability to maintain consistent and monotonic behaviour in image quality prediction tasks. The source code is publicly available at the following website https://github.com/Land5cape/GRIDS.
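A toy sketch of the general factorization the abstract describes: the joint probability of an undirected graph written as a normalized product of clique potential functions. The graph, binary node states, and potential function below are invented for illustration and are not the GRIDS feature distributions.

```python
# Hedged sketch: joint probability from maximal-clique potentials on a small undirected graph.
import itertools
import numpy as np
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3)])   # small undirected graph (toy)
observed = {0: 1, 1: 1, 2: 1, 3: 0}              # binary node states (toy)

def potential(clique, assignment):
    # Positive potential that rewards cliques whose members agree.
    vals = [assignment[n] for n in clique]
    return np.exp(1.0 if len(set(vals)) == 1 else -1.0)

cliques = list(nx.find_cliques(G))               # maximal cliques of G

def unnormalized(assignment):
    return np.prod([potential(c, assignment) for c in cliques])

# Partition function: sum over all 2^4 binary assignments.
nodes = list(G.nodes)
Z = sum(unnormalized(dict(zip(nodes, states)))
        for states in itertools.product([0, 1], repeat=len(nodes)))

print("cliques:", cliques)
print("P(observed) =", unnormalized(observed) / Z)
```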
5. Dubey G, Srivastava S, Jayswal AK, Saraswat M, Singh P, Memoria M. Fetal Ultrasound Segmentation and Measurements Using Appearance and Shape Prior Based Density Regression with Deep CNN and Robust Ellipse Fitting. J Imaging Inform Med 2024; 37:247-267. [PMID: 38343234] [DOI: 10.1007/s10278-023-00908-8]
Abstract
Accurately segmenting the fetal head (FH) and performing biometry measurements, including head circumference (HC) estimation, is a vital requirement for assessing abnormal fetal growth during pregnancy, a task normally carried out by experienced radiologists using ultrasound (US) images. However, accurate segmentation and measurement are challenging due to image artifacts, incomplete ellipse fitting, and variation in FH dimensions across trimesters. The process is also highly time-consuming, and the absence of specialized features leads to low segmentation accuracy. To address these challenges, we propose an automatic density regression approach that incorporates appearance and shape priors into a deep learning-based network model (DR-ASPnet) with robust ellipse fitting using fetal US images. Initially, we employed multiple pre-processing steps to remove unwanted distortions and variable fluctuations and to obtain a clear view of significant features in the US images. Augmentation operations were then applied to increase the diversity of the dataset. Next, we proposed the hierarchical density regression deep convolutional neural network (HDR-DCNN) model, which involves three network models to determine the complex location of the FH for accurate segmentation during training and testing. We then applied post-processing operations using contrast enhancement filtering with a morphological operation model to smooth the region and remove unnecessary artifacts from the segmentation results. After post-processing, we applied the smoothed segmentation result to the robust ellipse fitting-based least squares (REFLS) method for HC estimation. The DR-ASPnet model achieves a 98.86% Dice similarity coefficient (DSC) for segmentation accuracy and a 1.67 mm absolute distance (AD) for measurement accuracy, outperforming other state-of-the-art methods. Finally, we achieved a 0.99 correlation coefficient (CC) between the measured and predicted HC values on the HC18 dataset.
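A minimal sketch of the final measurement step described above: fitting an ellipse to a segmented head contour and estimating HC from its semi-axes. The synthetic contour, the assumed pixel spacing, and the use of OpenCV's least-squares fit plus Ramanujan's perimeter approximation are illustrative assumptions, not the REFLS method itself.

```python
# Hedged sketch: least-squares ellipse fit on a contour, then HC from the fitted semi-axes.
import numpy as np
import cv2

# Synthetic contour points of an ellipse (in place of a real segmentation boundary).
t = np.linspace(0, 2 * np.pi, 200)
a_px, b_px, cx, cy = 120.0, 90.0, 256.0, 256.0
contour = np.stack([cx + a_px * np.cos(t), cy + b_px * np.sin(t)], axis=1)
contour = contour.astype(np.float32).reshape(-1, 1, 2)

center, axes, angle = cv2.fitEllipse(contour)     # axes are full lengths in pixels
a, b = axes[0] / 2.0, axes[1] / 2.0               # semi-axes in pixels

pixel_spacing_mm = 0.2                            # assumed mm per pixel
a_mm, b_mm = a * pixel_spacing_mm, b * pixel_spacing_mm

# Ramanujan's approximation of the ellipse perimeter as the HC estimate.
h = ((a_mm - b_mm) ** 2) / ((a_mm + b_mm) ** 2)
hc_mm = np.pi * (a_mm + b_mm) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))
print(f"Estimated HC: {hc_mm:.1f} mm")
```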
Affiliation(s)
- Gaurav Dubey: Department of Computer Science, KIET Group of Institutions, Delhi-NCR, Ghaziabad, U.P., India
- Mala Saraswat: Department of Computer Science, Bennett University, Greater Noida, India
- Pooja Singh: Shiv Nadar University, Greater Noida, Uttar Pradesh, India
- Minakshi Memoria: CSE Department, UIT, Uttaranchal University, Dehradun, Uttarakhand, India
6. Li F, Li P, Wu X, Zeng P, Lyu G, Fan Y, Liu P, Song H, Liu Z. FHUSP-NET: A multi-task model for fetal heart ultrasound standard plane recognition and key anatomical structures detection. Comput Biol Med 2024; 168:107741. [PMID: 38042103] [DOI: 10.1016/j.compbiomed.2023.107741]
Abstract
In prenatal ultrasound screening, rapid and accurate recognition of the fetal heart ultrasound standard planes (FHUSPs) can more objectively assess fetal heart growth. However, the small size and movement of the fetal heart make this process difficult. Therefore, we design a deep learning-based FHUSP recognition network (FHUSP-NET), which can automatically recognize the five FHUSPs and detect tiny key anatomical structures at the same time. A total of 3360 ultrasound images of five FHUSPs from 1300 mid-pregnancy women are included in this study, and 10 key fetal heart anatomical structures are manually annotated by experts. We apply spatial pyramid pooling with a fully connected spatial pyramid convolution module to capture information about targets and scenes of different sizes and to improve the perceptual ability and feature representation of the model. Additionally, we adopt squeeze-and-excitation networks to improve the sensitivity of the model to channel features. We also introduce a new loss function, the efficient IoU (EIoU) loss, which makes the model effective at optimizing bounding-box similarity. The results demonstrate the superiority of FHUSP-NET in detecting key fetal heart anatomical structures and recognizing FHUSPs. In the detection task, mAP@0.5, precision, and recall reach 0.955, 0.958, and 0.931, respectively, while accuracy reaches 0.964 in the recognition task. Furthermore, it takes only 13.6 ms to detect and recognize one FHUSP image. This method helps ultrasonographers improve quality control of fetal heart ultrasound standard planes and aids less experienced physicians in identifying fetal heart structures.
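A hedged sketch of an EIoU-style bounding-box loss, following the commonly cited formulation (1 - IoU plus center-distance, width, and height penalties normalized by the enclosing box). This is a generic illustration of the loss family named above, not the exact implementation used in FHUSP-NET.

```python
# Hedged sketch: EIoU-style loss for axis-aligned boxes given as (x1, y1, x2, y2).
def eiou_loss(pred, gt):
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt

    # Intersection over union.
    ix1, iy1 = max(px1, gx1), max(py1, gy1)
    ix2, iy2 = min(px2, gx2), min(py2, gy2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union

    # Smallest enclosing box.
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)

    # Center-distance, width, and height penalties.
    rho2 = ((px1 + px2 - gx1 - gx2) / 2) ** 2 + ((py1 + py2 - gy1 - gy2) / 2) ** 2
    dw2 = ((px2 - px1) - (gx2 - gx1)) ** 2
    dh2 = ((py2 - py1) - (gy2 - gy1)) ** 2
    return 1 - iou + rho2 / (cw ** 2 + ch ** 2) + dw2 / cw ** 2 + dh2 / ch ** 2

print(eiou_loss((10, 10, 50, 60), (12, 14, 48, 58)))
```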
Affiliation(s)
- Furong Li; Haisheng Song: College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou, 730070, China
- Ping Li: Department of Gynecology and Obstetrics, The First Hospital of Quanzhou Affiliated to Fujian Medical University, Quanzhou, China
- Xiuming Wu; Zhonghua Liu: Department of Ultrasound, The First Hospital of Quanzhou Affiliated to Fujian Medical University, Quanzhou, China
- Pan Zeng: College of Medicine, Huaqiao University, Quanzhou, 362021, China
- Guorong Lyu: Department of Ultrasound, The Second Hospital of Quanzhou Affiliated to Fujian Medical University, Quanzhou, China; Collaborative Innovation Center for Maternal and Infant Health Service Application Technology, Quanzhou Medical College, Quanzhou, 362011, China
- Yuling Fan: College of Engineering, Huaqiao University, Quanzhou, 362021, China
- Peizhong Liu: College of Medicine, Huaqiao University, Quanzhou, 362021, China; Collaborative Innovation Center for Maternal and Infant Health Service Application Technology, Quanzhou Medical College, Quanzhou, 362011, China; College of Engineering, Huaqiao University, Quanzhou, 362021, China
7. Zhao H, Zheng Q, Teng C, Yasrab R, Drukker L, Papageorghiou AT, Noble JA. Memory-based unsupervised video clinical quality assessment with multi-modality data in fetal ultrasound. Med Image Anal 2023; 90:102977. [PMID: 37778101] [DOI: 10.1016/j.media.2023.102977]
Abstract
In obstetric sonography, the quality of acquisition of ultrasound scan video is crucial for accurate (manual or automated) biometric measurement and fetal health assessment. However, the nature of fetal ultrasound involves free-hand probe manipulation, and this can make it challenging to capture high-quality videos for fetal biometry, especially for less-experienced sonographers. Manually checking the quality of acquired videos would be time-consuming and subjective and would require a comprehensive understanding of fetal anatomy. Thus, it would be advantageous to develop an automatic quality assessment method to support video standardization and improve the diagnostic accuracy of video-based analysis. In this paper, we propose a general and purely data-driven video-based quality assessment framework which directly learns a distinguishable feature representation from high-quality ultrasound videos alone, without anatomical annotations. Our solution effectively utilizes both spatial and temporal information of ultrasound videos. The spatio-temporal representation is learned by a bi-directional reconstruction between the video space and the feature space, enhanced by a key-query memory module proposed in the feature space. To further improve performance, two additional modalities are introduced in training: the sonographer gaze and optical flow derived from the video. Two different clinical quality assessment tasks in fetal ultrasound are considered in our experiments, i.e., measurement of the fetal head circumference and cerebellar diameter; in both of these, low-quality videos are detected by the large reconstruction error in the feature space. Extensive experimental evaluation demonstrates the merits of our approach.
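An illustrative sketch of the decision rule mentioned above: a clip is flagged as low quality when its feature-space reconstruction error is large. The "reconstruction" below is a stand-in operation and the threshold choice is an assumption; the paper's memory-enhanced bi-directional model is not reproduced here.

```python
# Hedged sketch: flag clips whose feature reconstruction error exceeds a threshold.
import numpy as np

rng = np.random.default_rng(0)
D = 128                                          # feature dimension (assumed)

def reconstruction_error(features, reconstructed):
    # Mean squared error per clip in feature space.
    return np.mean((features - reconstructed) ** 2, axis=1)

# Placeholder features: "high quality" training clips and two test clips.
train_feats = rng.normal(0, 1, size=(200, D))
test_feats = np.vstack([rng.normal(0, 1, size=(1, D)),      # in-distribution clip
                        rng.normal(0, 3, size=(1, D))])     # out-of-distribution clip

# Toy reconstruction: training-like clips reconstruct well, outliers do not.
reconstruct = lambda f: np.clip(f, -2, 2)
train_err = reconstruction_error(train_feats, reconstruct(train_feats))
test_err = reconstruction_error(test_feats, reconstruct(test_feats))

threshold = np.percentile(train_err, 95)         # assumed operating point
print("low-quality flags:", test_err > threshold)
```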
Affiliation(s)
- He Zhao; Clare Teng; Robail Yasrab; J Alison Noble: Institute of Biomedical Engineering, University of Oxford, United Kingdom
- Qingqing Zheng: Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Lior Drukker: Nuffield Department of Women's and Reproductive Health, University of Oxford, United Kingdom; Department of Obstetrics and Gynecology, Tel-Aviv University, Israel
- Aris T Papageorghiou: Nuffield Department of Women's and Reproductive Health, University of Oxford, United Kingdom
8. Slimani S, Hounka S, Mahmoudi A, Rehah T, Laoudiyi D, Saadi H, Bouziyane A, Lamrissi A, Jalal M, Bouhya S, Akiki M, Bouyakhf Y, Badaoui B, Radgui A, Mhlanga M, Bouyakhf EH. Fetal biometry and amniotic fluid volume assessment end-to-end automation using Deep Learning. Nat Commun 2023; 14:7047. [PMID: 37923713] [PMCID: PMC10624828] [DOI: 10.1038/s41467-023-42438-5]
Abstract
Fetal biometry and amniotic fluid volume assessments are two essential yet repetitive tasks in fetal ultrasound screening scans, aiding in the detection of potentially life-threatening conditions. However, these assessment methods can occasionally yield unreliable results. Advances in deep learning have opened up new avenues for automated measurements in fetal ultrasound, demonstrating human-level performance in various fetal ultrasound tasks. Nevertheless, the majority of these studies are retrospective in silico studies, with few including African patients in their datasets. In this study, we developed and prospectively assessed the performance of deep learning models for end-to-end automation of fetal biometry and amniotic fluid volume measurements. These models were trained using a newly constructed database of 172,293 de-identified Moroccan fetal ultrasound images, supplemented with publicly available datasets. The models were then tested on prospectively acquired video clips from 172 pregnant people forming a consecutive series gathered at four healthcare centers in Morocco. Our results demonstrate that the 95% limits of agreement between the models and practitioners were narrower than the reported intra- and inter-observer variability among expert human sonographers for all the parameters under study. This means that these models could be deployed in clinical conditions to alleviate time-consuming, repetitive tasks and make fetal ultrasound more accessible in limited-resource environments.
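A short sketch of the agreement statistic referenced above: Bland-Altman 95% limits of agreement between model and practitioner measurements (mean difference ± 1.96 SD). The measurement arrays are synthetic placeholders; only the formula is the point here.

```python
# Hedged sketch: Bland-Altman bias and 95% limits of agreement on synthetic measurements.
import numpy as np

rng = np.random.default_rng(42)
practitioner_mm = rng.normal(230.0, 20.0, size=100)            # e.g., head circumference (toy)
model_mm = practitioner_mm + rng.normal(0.5, 3.0, size=100)    # simulated model output

diff = model_mm - practitioner_mm
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f} mm, 95% limits of agreement = [{loa_low:.2f}, {loa_high:.2f}] mm")
```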
Affiliation(s)
- Saad Slimani: Deepecho, 10106, Rabat, Morocco; Ibn Rochd University Hospital, Hassan II University, 20100, Casablanca, Morocco
- Salaheddine Hounka: Telecommunications Systems Services and Networks lab (STRS Lab), INPT, 10112, Rabat, Morocco
- Abdelhak Mahmoudi: Deepecho, 10106, Rabat, Morocco; Ecole Normale Supérieure, LIMIARF, Mohammed V University in Rabat, 4014, Rabat, Morocco
- Dalal Laoudiyi: Ibn Rochd University Hospital, Hassan II University, 20100, Casablanca, Morocco
- Hanane Saadi: Mohammed VI University Hospital, 60049, Oujda, Morocco
- Amal Bouziyane: Université Mohammed VI des Sciences de la Santé, Hôpital Universitaire Cheikh Khalifa, 82403, Casablanca, Morocco
- Amine Lamrissi: Ibn Rochd University Hospital, Hassan II University, 20100, Casablanca, Morocco
- Mohamed Jalal: Ibn Rochd University Hospital, Hassan II University, 20100, Casablanca, Morocco
- Said Bouhya: Ibn Rochd University Hospital, Hassan II University, 20100, Casablanca, Morocco
- Bouabid Badaoui: Laboratory of Biodiversity, Ecology, and Genome, Department of Biology, Faculty of Sciences, Mohammed V University in Rabat, 1014, Rabat, Morocco; African Sustainable Agriculture Research Institute (ASARI), Mohammed VI Polytechnic University (UM6P), 43150, Laâyoune, Morocco
- Amina Radgui: Telecommunications Systems Services and Networks lab (STRS Lab), INPT, 10112, Rabat, Morocco
- Musa Mhlanga: Radboud Institute for Molecular Life Sciences, Epigenomics & Single Cell Biophysics, 6525 XZ, Nijmegen, the Netherlands
9. Guo Y, Hu M, Min X, Wang Y, Dai M, Zhai G, Zhang XP, Yang X. Blind Image Quality Assessment for Pathological Microscopic Image Under Screen and Immersion Scenarios. IEEE Trans Med Imaging 2023; 42:3295-3306. [PMID: 37267133] [DOI: 10.1109/tmi.2023.3282387]
Abstract
High-quality pathological microscopic images are essential for physicians and pathologists to make a correct diagnosis. Image quality assessment (IQA) can quantify the degree of visual distortion in images and guide the imaging system to improve image quality, thus raising the quality of pathological microscopic images. Current IQA methods are not ideal for pathological microscopy images due to their specificity. In this paper, we present a deep learning-based blind image quality assessment model with a saliency block and a patch block for pathological microscopic images. The saliency block and the patch block handle local and global distortions, respectively. To better capture the regions of interest of pathologists when viewing pathological images, the saliency block is fine-tuned using eye movement data from pathologists. The patch block captures global information strongly related to image quality via the interaction between image patches from different positions. The performance of the developed model is validated on the home-made Pathological Microscopic Image Quality Database under Screen and Immersion Scenarios (PMIQD-SIS) and cross-validated on five public datasets. Ablation experiments demonstrate the contribution of the added blocks. The dataset and the corresponding code are publicly available at: https://github.com/mikugyf/PMIQD-SIS.
10. Zhang Y, Zhu H, Cheng J, Wang J, Gu X, Han J, Zhang Y, Zhao Y, He Y, Zhang H. Improving the Quality of Fetal Heart Ultrasound Imaging With Multihead Enhanced Self-Attention and Contrastive Learning. IEEE J Biomed Health Inform 2023; 27:5518-5529. [PMID: 37556337] [DOI: 10.1109/jbhi.2023.3303573]
Abstract
Fetal congenital heart disease (FCHD) is a common, serious birth defect affecting ∼1% of newborns annually. Fetal echocardiography is the most effective and important technique for prenatal FCHD diagnosis. The prerequisites for accurate ultrasound FCHD diagnosis are accurate view recognition and high-quality diagnostic view extraction. However, these manual clinical procedures have drawbacks, such as varying technical capability and inefficiency. Therefore, automatic identification of high-quality multiview fetal heart scan images is highly desirable to improve the efficiency and accuracy of prenatal FCHD diagnosis. Here, we present a framework for multiview fetal heart ultrasound image recognition and quality assessment that comprises two parts: a multiview classification and localization network (MCLN) and an improved contrastive learning network (ICLN). In the MCLN, a multihead enhanced self-attention mechanism is applied to construct the classification network and identify six accurate and interpretable views of the fetal heart. In the ICLN, anatomical structure standardization and image clarity are both considered. With contrastive learning, an absolute loss, a feature relative loss, and a predicted-value relative loss are combined to achieve favorable quality assessment results. Experiments show that the MCLN outperforms other state-of-the-art networks by 1.52-13.61% in F1 score across six standard view recognition tasks, and the ICLN is comparable to expert cardiologists in the quality assessment of fetal heart ultrasound images, scoring within 2 points of the experts on 97% of a test set for the four-chamber view task. Thus, our architecture offers great potential for helping cardiologists improve quality control of fetal echocardiographic images in clinical practice.
11. Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298] [PMCID: PMC10649694] [DOI: 10.3390/jcm12216833]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred method. It is considered cost-effective and easily accessible but is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to overview recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with full-text copies were assigned to the OB/GYN sections and their research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost; Philipp Kosian; Jorge Jimenez Cruz; Ulrich Gembruch; Brigitte Strizek; Florian Recker: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni: Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany; Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
12. Wang L, Wang J, Zhu L, Fu H, Li P, Cheng G, Feng Z, Li S, Heng PA. Dual Multiscale Mean Teacher Network for Semi-Supervised Infection Segmentation in Chest CT Volume for COVID-19. IEEE Trans Cybern 2023; 53:6363-6375. [PMID: 37015538] [DOI: 10.1109/tcyb.2022.3223528]
Abstract
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating coronavirus disease 2019 (COVID-19). However, developing such AI systems still faces several challenges: 1) most current COVID-19 infection segmentation methods rely on 2-D CT images, which lack a 3-D sequential constraint; 2) existing 3-D CT segmentation methods focus on single-scale representations, which do not achieve multiple receptive field sizes on 3-D volumes; and 3) the sudden outbreak of COVID-19 makes it hard to annotate sufficient CT volumes for training deep models. To address these issues, we first build a multiple dimensional-attention convolutional neural network (MDA-CNN) to aggregate multiscale information along different dimensions of the input feature maps and impose supervision on multiple predictions from different convolutional neural network (CNN) layers. Second, we use this MDA-CNN as the basic network in a novel dual multiscale mean teacher network (DM [Formula: see text]-Net) for semi-supervised COVID-19 lung infection segmentation on CT volumes, leveraging unlabeled data and exploring multiscale information. Our DM [Formula: see text]-Net encourages multiple predictions at different CNN layers from the student and teacher networks to be consistent, computing a multiscale consistency loss on unlabeled data, which is then added to the supervised loss on the labeled data from multiple predictions of MDA-CNN. Third, we collect two COVID-19 segmentation datasets to evaluate our method. The experimental results show that our network consistently outperforms the compared state-of-the-art methods.
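A generic sketch of the two mean-teacher ingredients the abstract refers to: a consistency (MSE) loss between student and teacher predictions on unlabeled data, and an exponential moving average (EMA) update of the teacher weights. Tensor shapes and the EMA decay are assumptions; the paper's network applies this at multiple scales, which is not reproduced here.

```python
# Hedged sketch: mean-teacher consistency loss and EMA teacher update.
import torch
import torch.nn.functional as F

def consistency_loss(student_logits, teacher_logits):
    # MSE between softened predictions; the teacher branch is not back-propagated through.
    return F.mse_loss(torch.softmax(student_logits, dim=1),
                      torch.softmax(teacher_logits.detach(), dim=1))

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    # Teacher parameters track an exponential moving average of the student's.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

# Toy usage with random volumetric predictions (batch, classes, D, H, W).
student_pred = torch.randn(2, 2, 8, 32, 32, requires_grad=True)
teacher_pred = torch.randn(2, 2, 8, 32, 32)
print(consistency_loss(student_pred, teacher_pred).item())
```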
13. Guo J, Tan G, Wu F, Wen H, Li K. Fetal Ultrasound Standard Plane Detection With Coarse-to-Fine Multi-Task Learning. IEEE J Biomed Health Inform 2023; 27:5023-5031. [PMID: 36173776] [DOI: 10.1109/jbhi.2022.3209589]
Abstract
The ultrasound standard plane plays an important role in prenatal fetal growth parameter measurement and disease diagnosis in prenatal screening. However, obtaining standard planes in a fetal ultrasound video is not only laborious and time-consuming but also depends, to a certain extent, on the clinical experience of sonographers. To improve the acquisition efficiency and accuracy of the ultrasound standard plane, we propose a novel detection framework that utilizes both a coarse-to-fine detection strategy and a multi-task learning mechanism for feature-fused images. First, traditional manually designed features and deep learning-based features are fused to obtain low-level shared features, which can enhance the model's feature expression ability. Inspired by the process of human recognition, ultrasound standard plane detection is divided into a coarse process of plane type classification and a fine process of standard-or-not detection, which is implemented via an end-to-end multi-task learning network. The region of interest is also recognized in our detection framework to suppress the influence of a variable maternal background. Extensive experiments are conducted on three ultrasound planes of the first-class fetal examination, i.e., the femur, thalamus, and abdomen ultrasound images. The experimental results show that our method outperforms competing methods in terms of accuracy, which demonstrates the efficacy of the proposed method and can reduce the workload of sonographers in prenatal screening.
14. Li D, Peng Y, Sun J, Guo Y. A task-unified network with transformer and spatial-temporal convolution for left ventricular quantification. Sci Rep 2023; 13:13529. [PMID: 37598235] [PMCID: PMC10439898] [DOI: 10.1038/s41598-023-40841-y]
Abstract
Quantification of cardiac function is vital for diagnosing and treating cardiovascular diseases. Left ventricular function measurement is the most commonly used way to evaluate cardiac function in clinical practice, and improving the accuracy of left ventricular quantification has long been a focus of medical research. Although considerable effort has been put into measuring the left ventricle (LV) automatically using deep learning methods, accurate quantification remains challenging because of the changing anatomy of the heart across the systolic-diastolic cycle. In addition, most methods use direct regression, which lacks visually interpretable analysis. In this work, a deep learning segmentation and regression task-unified network with a transformer and spatial-temporal convolution is proposed to segment and quantify the LV simultaneously. The segmentation module leverages a U-Net-like 3D Transformer model to predict the contours of three anatomical structures, while the regression module learns spatial-temporal representations from the original images and the reconstructed feature maps from the segmentation path to estimate the desired quantification metrics. Furthermore, we employ a joint task loss function to train the two modules. Our framework is evaluated on the MICCAI 2017 Left Ventricle Full Quantification Challenge dataset. The experimental results demonstrate the effectiveness of our framework, which achieves competitive cardiac quantification results while producing visualized segmentation results that support later analysis.
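A hedged sketch of a joint task loss of the kind mentioned above: a weighted sum of a soft-Dice segmentation loss and an MSE regression loss. The weights, tensor shapes, and the number of regression targets are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: joint segmentation (soft Dice) + regression (MSE) loss.
import torch

def soft_dice_loss(pred_probs, target, eps=1e-6):
    inter = (pred_probs * target).sum(dim=(1, 2, 3))
    denom = pred_probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

def joint_task_loss(seg_probs, seg_target, reg_pred, reg_target,
                    seg_weight=1.0, reg_weight=1.0):
    seg_loss = soft_dice_loss(seg_probs, seg_target)
    reg_loss = torch.nn.functional.mse_loss(reg_pred, reg_target)
    return seg_weight * seg_loss + reg_weight * reg_loss

# Toy tensors: 3 structures on a 64x64 grid, 11 regression targets (assumed count).
seg_probs = torch.rand(2, 3, 64, 64)
seg_target = (torch.rand(2, 3, 64, 64) > 0.5).float()
reg_pred, reg_target = torch.rand(2, 11), torch.rand(2, 11)
print(joint_task_loss(seg_probs, seg_target, reg_pred, reg_target).item())
```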
Affiliation(s)
- Dapeng Li; Jindong Sun; Yanfei Guo: Shandong University of Science and Technology, Qingdao, China
- Yanjun Peng: Shandong University of Science and Technology, Qingdao, China; Shandong Province Key Laboratory of Wisdom Mining Information Technology, Qingdao, China
15. Tang J, Liang Y, Jiang Y, Liu J, Zhang R, Huang D, Pang C, Huang C, Luo D, Zhou X, Li R, Zhang K, Xie B, Hu L, Zhu F, Xia H, Lu L, Wang H. A multicenter study on two-stage transfer learning model for duct-dependent CHDs screening in fetal echocardiography. NPJ Digit Med 2023; 6:143. [PMID: 37573426] [PMCID: PMC10423245] [DOI: 10.1038/s41746-023-00883-y]
Abstract
Duct-dependent congenital heart diseases (CHDs) are a serious form of CHD with a low detection rate, especially in underdeveloped countries and areas. Although existing studies have developed models for fetal heart structure identification, there is a lack of comprehensive evaluation of the long axis of the aorta. In this study, a total of 6698 images and 48 videos are collected to develop and test a two-stage deep transfer learning model named DDCHD-DenseNet for screening critical duct-dependent CHDs. The model achieves a sensitivity of 0.973, 0.843, 0.769, and 0.759, and a specificity of 0.985, 0.967, 0.956, and 0.759, respectively, on the four multicenter test sets. It is expected to be employed as a potential automatic screening tool for hierarchical care and computer-aided diagnosis. Our two-stage strategy effectively improves the robustness of the model and can be extended to screen for other fetal heart development defects.
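A minimal sketch of a two-stage transfer-learning setup with a DenseNet backbone, in the spirit of the model name above: stage one trains only a new classification head on an intermediate task, stage two unfreezes the backbone for the target screening task. Class counts, learning rates, and the staging details are assumptions, not the published DDCHD-DenseNet configuration.

```python
# Hedged sketch: two-stage fine-tuning of a pretrained DenseNet-121 backbone.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)

# Stage 1: freeze the backbone, train a new head on the intermediate (source) task.
for p in model.features.parameters():
    p.requires_grad = False
model.classifier = nn.Linear(model.classifier.in_features, 5)   # e.g., 5 view classes (assumed)
stage1_optim = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)

# Stage 2: unfreeze everything, swap the head for the screening task, use a small LR.
for p in model.parameters():
    p.requires_grad = True
model.classifier = nn.Linear(model.classifier.in_features, 2)   # normal vs. suspected CHD
stage2_optim = torch.optim.Adam(model.parameters(), lr=1e-5)

model.eval()
print(model(torch.randn(1, 3, 224, 224)).shape)                 # torch.Size([1, 2])
```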
Affiliation(s)
- Jiajie Tang; Yuxuan Jiang: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China; School of Information Management, Wuhan University, Wuhan, China
- Yongen Liang; Jinrong Liu; Rui Zhang; Danping Huang; Dongni Luo; Xue Zhou; Huimin Xia; Hongying Wang: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Chengcheng Pang; Lianting Hu: Cardiovascular Pediatrics/Guangdong Cardiovascular Institute/Medical Big Data Center, Guangdong Provincial People's Hospital, Guangzhou, China
- Chen Huang: Department of Medical Ultrasonics/Shenzhen Longgang Maternal and Child Health Hospital, Shenzhen, China
- Ruizhuo Li: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China; School of Medicine, Southern China University of Technology, Guangzhou, China
- Kanghui Zhang; Bingbing Xie; Fanfan Zhu: School of Information Management, Wuhan University, Wuhan, China
- Long Lu: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China; School of Information Management, Wuhan University, Wuhan, China; Center for Healthcare Big Data Research, The Big Data Institute, Wuhan University, Wuhan, China; School of Public Health, Wuhan University, Wuhan, China
16. Wang Z, Song Y, Zhao B, Zhong Z, Yao L, Lv F, Li B, Hu Y. A Soft-Reference Breast Ultrasound Image Quality Assessment Method That Considers the Local Lesion Area. Bioengineering (Basel) 2023; 10:940. [PMID: 37627825] [PMCID: PMC10451797] [DOI: 10.3390/bioengineering10080940]
Abstract
The quality of breast ultrasound images has a significant impact on the accuracy of disease diagnosis. Existing image quality assessment (IQA) methods usually use pixel-level feature statistics or end-to-end deep learning methods, which focus on global image quality but ignore the image quality of the lesion region. In clinical practice, however, doctors' evaluation of ultrasound image quality relies more on the local area of the lesion, which determines the diagnostic value of ultrasound images. In this study, a global-local integrated IQA framework for breast ultrasound images was proposed to learn doctors' clinical evaluation standards. A total of 1285 breast ultrasound images were collected and scored by experienced doctors. After being classified as images with or without lesions, they were evaluated using soft-reference IQA or bilinear CNN IQA, respectively. Experiments showed that for ultrasound images with lesions, the proposed soft-reference IQA achieved a Pearson linear correlation coefficient (PLCC) of 0.8418 with doctors' annotations, while an existing end-to-end deep learning method that did not consider local lesion features achieved a PLCC of only 0.6606. Owing to the accuracy improvement on images with lesions, the proposed global-local integrated IQA framework outperformed the existing end-to-end deep learning method on the IQA task, with PLCC improving from 0.8306 to 0.8851.
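A quick sketch of the evaluation metric used above: the Pearson linear correlation coefficient (PLCC) between predicted quality scores and doctors' scores. The score vectors are made-up placeholders.

```python
# Hedged sketch: PLCC between predicted and doctor-assigned quality scores.
import numpy as np
from scipy.stats import pearsonr

doctor_scores = np.array([4.5, 3.0, 2.5, 5.0, 1.5, 3.5, 4.0])       # toy annotations
predicted_scores = np.array([4.2, 3.1, 2.9, 4.8, 1.9, 3.2, 4.1])    # toy model outputs

plcc, p_value = pearsonr(predicted_scores, doctor_scores)
print(f"PLCC = {plcc:.4f} (p = {p_value:.3g})")
```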
Affiliation(s)
- Ziwen Wang: School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen 518055, China; Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yuxin Song; Baoliang Zhao; Liang Yao; Ying Hu: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Zhaoming Zhong; Faqin Lv: The Second School of Clinical Medicine, Southern Medical University, Guangzhou 510515, China; Department of Ultrasound, The Third Medical Centre of Chinese PLA General Hospital, Beijing 100039, China
- Bing Li: School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen 518055, China
17. Day TG, Matthew J, Budd S, Hajnal JV, Simpson JM, Razavi R, Kainz B. Sonographer interaction with artificial intelligence: collaboration or conflict? Ultrasound Obstet Gynecol 2023; 62:167-174. [PMID: 37523514] [DOI: 10.1002/uog.26238]
Affiliation(s)
- T G Day; J M Simpson; R Razavi: Department of Congenital Cardiology, Evelina London Children's Healthcare, Guy's and St Thomas' NHS Foundation Trust, London, UK; Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- J Matthew; S Budd; J V Hajnal: Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- B Kainz: Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK; Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Department of Computing, Faculty of Engineering, Imperial College London, London, UK
18. Horgan R, Nehme L, Abuhamad A. Artificial intelligence in obstetric ultrasound: A scoping review. Prenat Diagn 2023; 43:1176-1219. [PMID: 37503802] [DOI: 10.1002/pd.6411]
Abstract
The objective is to summarize the current use of artificial intelligence (AI) in obstetric ultrasound. The PubMed, Cochrane Library, and ClinicalTrials.gov databases were searched using the following keywords: "neural networks", OR "artificial intelligence", OR "machine learning", OR "deep learning", AND "obstetrics", OR "obstetrical", OR "fetus", OR "foetus", OR "fetal", OR "foetal", OR "pregnancy", OR "pregnant", AND "ultrasound", from inception through May 2022. The search was limited to the English language. Studies were eligible for inclusion if they described the use of AI in obstetric ultrasound. Obstetric ultrasound was defined as the process of obtaining ultrasound images of a fetus, amniotic fluid, or placenta. AI was defined as the use of neural networks, machine learning, or deep learning methods. The search identified a total of 127 papers that fulfilled the inclusion criteria. Current uses of AI in obstetric ultrasound include first-trimester pregnancy ultrasound, assessment of the placenta, fetal biometry, fetal echocardiography, fetal neurosonography, assessment of fetal anatomy, and other uses including assessment of fetal lung maturity and screening for risk of adverse pregnancy outcomes. AI holds the potential to improve ultrasound efficiency, pregnancy outcomes in low-resource settings, detection of congenital malformations, and prediction of adverse pregnancy outcomes.
Affiliation(s)
- Rebecca Horgan; Lea Nehme; Alfred Abuhamad: Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
19. Zhen C, Wang H, Cheng J, Yang X, Chen C, Hu X, Zhang Y, Cao Y, Ni D, Huang W, Wang P. Locating Multiple Standard Planes in First-Trimester Ultrasound Videos via the Detection and Scoring of Key Anatomical Structures. Ultrasound Med Biol 2023:S0301-5629(23)00163-1. [PMID: 37291008] [DOI: 10.1016/j.ultrasmedbio.2023.05.005]
Abstract
OBJECTIVE: This study aimed to develop a first-trimester standard plane detection (FTSPD) system that can automatically locate nine standard planes in ultrasound videos and to investigate its utility in clinical practice.
METHODS: The FTSPD system, based on the YOLOv3 network, was developed to detect structures and evaluate the quality of plane images using a pre-defined scoring system. A total of 220 videos from two different ultrasound scanners were collected to compare detection performance between our FTSPD system and sonographers with different levels of experience. The quality of the detected standard planes was quantitatively rated by an expert according to a scoring protocol. Kolmogorov-Smirnov analysis was used to compare the distributions of scores across all nine standard planes.
RESULTS: The expert-rated scores indicated that the quality of the standard planes detected by the FTSPD system was on par with that of the planes detected by senior sonographers. There were no significant differences in the distributions of the scores across all nine standard planes. The FTSPD system performed significantly better than junior sonographers for five standard plane types.
CONCLUSION: The results of this study suggest that our FTSPD system has significant potential for detecting standard planes in first-trimester ultrasound screening, which may help to improve the accuracy of fetal ultrasound screening and facilitate early diagnosis of abnormalities. The quality of the standard planes selected by junior sonographers can be significantly improved with the assistance of our FTSPD system.
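A small sketch of the statistical comparison reported above: a two-sample Kolmogorov-Smirnov test between expert ratings of system-detected planes and sonographer-selected planes. The score samples are synthetic placeholders.

```python
# Hedged sketch: two-sample KS test comparing two score distributions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
system_scores = rng.normal(8.0, 1.0, size=60)    # expert ratings of FTSPD-detected planes (toy)
senior_scores = rng.normal(8.1, 1.0, size=60)    # expert ratings of senior sonographers (toy)

stat, p = ks_2samp(system_scores, senior_scores)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")  # large p -> no significant difference
```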
Affiliation(s)
- Chaojiong Zhen: Department of Ultrasonography, Academy of Orthopedics, Third Affiliated Hospital of Southern Medical University, Guangdong Province, Guangzhou, China; Department of Medical Ultrasonics, First People's Hospital of Foshan, Foshan, China
- Hongzhang Wang; Jun Cheng; Xin Yang; Chaoyu Chen; Yuanji Zhang; Yan Cao; Dong Ni: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, Guangdong, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xindi Hu: Shenzhen RayShape Medical Technology Co., Ltd, Shenzhen, Guangdong, China
- Weijun Huang: Department of Medical Ultrasonics, First People's Hospital of Foshan, Foshan, China
- Ping Wang: Department of Ultrasonography, Academy of Orthopedics, Third Affiliated Hospital of Southern Medical University, Guangdong Province, Guangzhou, China
20. Kawanishi K, Kakimoto A, Anegawa K, Tsutsumi M, Yamaguchi I, Kudo S. Automatic Identification of Ultrasound Images of the Tibial Nerve in Different Ankle Positions Using Deep Learning. Sensors (Basel) 2023; 23:4855. [PMID: 37430769] [DOI: 10.3390/s23104855]
Abstract
Peripheral nerve tension is known to be related to the pathophysiology of neuropathy; however, assessing this tension is difficult in a clinical setting. In this study, we aimed to develop a deep learning algorithm for the automatic assessment of tibial nerve tension using B-mode ultrasound imaging. To develop the algorithm, we used 204 ultrasound images of the tibial nerve in three positions: the maximum dorsiflexion position and -10° and -20° plantar flexion from maximum dorsiflexion. The images were taken of 68 healthy volunteers who did not have any abnormalities in the lower limbs at the time of testing. The tibial nerve was manually segmented in all images, and 163 cases were automatically extracted as the training dataset using U-Net. Additionally, convolutional neural network (CNN)-based classification was performed to determine each ankle position. The automatic classification was validated using five-fold cross-validation from the testing data composed of 41 data points. The highest mean accuracy (0.92) was achieved using manual segmentation. The mean accuracy of the full auto-classification of the tibial nerve at each ankle position was more than 0.77 using five-fold cross-validation. Thus, the tension of the tibial nerve can be accurately assessed with different dorsiflexion angles using an ultrasound imaging analysis with U-Net and a CNN.
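The pipeline described above (U-Net segmentation of the nerve followed by CNN classification of the ankle position, validated with five-fold cross-validation) can be illustrated with a minimal sketch. This is not the authors' code: the architecture, tensor shapes, and full-batch training loop are simplified assumptions, and the U-Net masking step is assumed to have already produced the input patches.

```python
# Minimal sketch (assumption): a small CNN that classifies the ankle position
# (max dorsiflexion, -10 deg, -20 deg) from U-Net-masked nerve patches,
# evaluated with five-fold cross-validation.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import KFold

class AnklePositionCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def cross_validate(images: np.ndarray, labels: np.ndarray, epochs: int = 20) -> float:
    """images: (N, 1, H, W) float32 masked patches; labels: (N,) int64 position class."""
    accuracies = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(images):
        model = AnklePositionCNN()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x_tr, y_tr = torch.from_numpy(images[train_idx]), torch.from_numpy(labels[train_idx])
        x_te, y_te = torch.from_numpy(images[test_idx]), torch.from_numpy(labels[test_idx])
        for _ in range(epochs):                     # full-batch training for brevity
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(x_tr), y_tr)
            loss.backward()
            opt.step()
        with torch.no_grad():
            accuracies.append((model(x_te).argmax(1) == y_te).float().mean().item())
    return float(np.mean(accuracies))               # mean five-fold accuracy
```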
Collapse
Affiliation(s)
- Kengo Kawanishi
- Inclusive Medical Science Research Institute, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
- Department of Rehabilitation, Kano General Hospital, Osaka 531-0041, Japan
| | - Akihiro Kakimoto
- Inclusive Medical Science Research Institute, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
- Department of Radiological Sciences, Faculty of Health Sciences, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
| | - Keisuke Anegawa
- Graduate School of Health Science, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
| | - Masahiro Tsutsumi
- Inclusive Medical Science Research Institute, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
- Department of Physical Therapy, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
| | - Isao Yamaguchi
- Inclusive Medical Science Research Institute, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
- Department of Radiological Sciences, Faculty of Health Sciences, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
| | - Shintarou Kudo
- Inclusive Medical Science Research Institute, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
- Department of Physical Therapy, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
- AR-Ex Medical Research Center, Tokyo 158-0082, Japan
| |
Collapse
|
21
|
Automatic No-Reference kidney tissue whole slide image quality assessment based on composite fusion models. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104547] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
|
22
|
Al-Naser YA. The impact of artificial intelligence on radiography as a profession: A narrative review. J Med Imaging Radiat Sci 2023; 54:162-166. [PMID: 36376210 DOI: 10.1016/j.jmir.2022.10.196] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 09/27/2022] [Accepted: 10/14/2022] [Indexed: 11/13/2022]
Abstract
BACKGROUND AND PURPOSE Artificial intelligence (AI) algorithms, particularly deep learning, have made significant strides in image recognition and classification, providing remarkable diagnostic accuracy for various diseases. This domain of AI has been the focus of many research papers as it directly relates to the roles and responsibilities of a radiologist. However, discussions on the impact of such technology on the radiography profession are often overlooked. To address this gap in the literature, this paper examines the application of AI in radiography and how AI's rapid emergence into healthcare is impacting not only standard radiographic protocols but also the role of the radiographic technologist. METHODS A review of the literature on AI and radiography was performed, using databases within PubMed, Google Scholar, and ScienceDirect. Video presentations from YouTube were also utilized to weigh the varying opinions of world leaders at the forefront of artificial intelligence. RESULTS AI can augment routine standard radiographic protocols. It can automatically ensure optimal patient positioning within the gantry as well as automate image processing. As AI technologies continue to emerge in diagnostic imaging, practicing radiologic technologists are urged to achieve threshold computational and technical literacy to operate AI-driven imaging technology. CONCLUSION There are many applications of AI in radiography, including acquisition and image processing. In the near future, it will be important to meet the demand for radiographers skilled in AI-driven technologies.
Collapse
|
23
|
Balagalla UB, Jayasooriya J, de Alwis C, Subasinghe A. Automated segmentation of standard scanning planes to measure biometric parameters in foetal ultrasound images – a survey. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2023. [DOI: 10.1080/21681163.2023.2179343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Affiliation(s)
- U. B. Balagalla
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
| | - J.V.D. Jayasooriya
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
| | - C. de Alwis
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
| | - A. Subasinghe
- Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka
| |
Collapse
|
24
|
Sarno L, Neola D, Carbone L, Saccone G, Carlea A, Miceli M, Iorio GG, Mappa I, Rizzo G, Girolamo RD, D'Antonio F, Guida M, Maruotti GM. Use of artificial intelligence in obstetrics: not quite ready for prime time. Am J Obstet Gynecol MFM 2023; 5:100792. [PMID: 36356939 DOI: 10.1016/j.ajogmf.2022.100792] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 10/18/2022] [Accepted: 10/28/2022] [Indexed: 11/09/2022]
Abstract
Artificial intelligence is finding several applications in healthcare settings. This study aimed to report evidence on the effectiveness of artificial intelligence applications in obstetrics. Through a narrative review of the literature, we described artificial intelligence use in different obstetrical areas as follows: prenatal diagnosis, fetal heart monitoring, prediction and management of pregnancy-related complications (preeclampsia, preterm birth, gestational diabetes mellitus, and placenta accreta spectrum), and labor. Artificial intelligence seems to be a promising tool to help clinicians in daily clinical activity. The main advantages that emerged from this review are related to the reduction of inter- and intraoperator variability, reduced procedure times, and improvement of overall diagnostic performance. However, the diffusion of these systems into routine clinical practice currently raises several issues. Reported evidence is still very limited, and further studies are needed to confirm the clinical applicability of artificial intelligence. Moreover, better training of clinicians in the use of these systems should be ensured, and evidence-based guidelines regarding this topic should be produced to enhance the strengths of artificial systems and minimize their limitations.
Collapse
Affiliation(s)
- Laura Sarno
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
| | - Daniele Neola
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida).
| | - Luigi Carbone
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
| | - Gabriele Saccone
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
| | - Annunziata Carlea
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
| | - Marco Miceli
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida); CEINGE Biotecnologie Avanzate, Naples, Italy (Dr Miceli)
| | - Giuseppe Gabriele Iorio
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
| | - Ilenia Mappa
- Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, University of Rome Tor Vergata, Rome, Italy (Dr Mappa and Dr Rizzo)
| | - Giuseppe Rizzo
- Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, University of Rome Tor Vergata, Rome, Italy (Dr Mappa and Dr Rizzo)
| | - Raffaella Di Girolamo
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
| | - Francesco D'Antonio
- Center for Fetal Care and High Risk Pregnancy, Department of Obstetrics and Gynecology, University G. D'Annunzio of Chieti-Pescara, Chieti, Italy (Dr D'Antonio)
| | - Maurizio Guida
- Gynecology and Obstetrics Unit, Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Sarno, Dr Neola, Dr Carbone, Dr Saccone, Dr Carlea, Dr Miceli, Dr Iorio, Dr Girolamo, and Dr Guida)
| | - Giuseppe Maria Maruotti
- Gynecology and Obstetrics Unit, Department of Public Health, School of Medicine, University of Naples Federico II, Naples, Italy (Dr Maruotti)
| |
Collapse
|
25
|
Taye M, Morrow D, Cull J, Smith DH, Hagan M. Deep Learning for FAST Quality Assessment. JOURNAL OF ULTRASOUND IN MEDICINE : OFFICIAL JOURNAL OF THE AMERICAN INSTITUTE OF ULTRASOUND IN MEDICINE 2023; 42:71-79. [PMID: 35770928 DOI: 10.1002/jum.16045] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Revised: 04/30/2022] [Accepted: 06/04/2022] [Indexed: 06/15/2023]
Abstract
OBJECTIVES To determine the feasibility of using a deep learning (DL) algorithm to assess the quality of focused assessment with sonography in trauma (FAST) exams. METHODS Our dataset consists of 441 FAST exams, classified as good-quality or poor-quality, with 3161 videos. We first used convolutional neural networks (CNNs), pretrained on the ImageNet dataset and fine-tuned on the FAST dataset. Second, we trained a CNN autoencoder to compress FAST images, with a 20:1 compression ratio. The compressed codes were input to a two-layer classifier network. To train the networks, each video was labeled with the quality of the exam, and the frames were labeled with the quality of the video. For inference, a video was classified as poor-quality if half the frames were classified as poor-quality by the network, and an exam was classified as poor-quality if half the videos were classified as poor-quality. RESULTS The results with the encoder-classifier networks were much better than the transfer learning results with CNNs. This was primarily because the ImageNet dataset is not a good match for the ultrasound quality assessment problem. The DL models produced video sensitivities and specificities of 99% and 98% on held-out test sets. CONCLUSIONS Using an autoencoder to compress FAST images is a very effective way to obtain features that can be used to predict exam quality. These features are more suitable than those obtained from CNNs pretrained on ImageNet.
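The frame-to-video-to-exam aggregation rule stated in the abstract is simple enough to show directly. The sketch below is an assumption about how such a rule could be coded, not the authors' implementation; the per-frame predictions are taken as given.

```python
# Minimal sketch (assumption): a video is labelled poor-quality when at least
# half of its frames are predicted poor, and an exam is labelled poor-quality
# when at least half of its videos are poor.
from typing import List

def video_is_poor(frame_poor_flags: List[bool]) -> bool:
    """frame_poor_flags: per-frame poor-quality predictions from the encoder-classifier."""
    return sum(frame_poor_flags) >= len(frame_poor_flags) / 2

def exam_is_poor(videos: List[List[bool]]) -> bool:
    """videos: one list of frame-level flags per video in the exam."""
    video_flags = [video_is_poor(v) for v in videos]
    return sum(video_flags) >= len(video_flags) / 2

# Example: an exam with two mostly-good videos and one mostly-poor video.
exam = [[False, False, True], [False, False, False], [True, True, False]]
print(exam_is_poor(exam))  # False: only 1 of 3 videos meets the poor-quality rule
```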
Collapse
Affiliation(s)
- Mesfin Taye
- School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, USA
- IBM, IBM Cloud, Armonk, New York, USA
| | - Dustin Morrow
- Prisma Health, Department of Emergency Medicine, Division Chief of Emergency Ultrasound, University of South Carolina, School of Medicine Greenville, Greenville, SC, USA
| | - John Cull
- Prisma Health, University of South Carolina School of Medicine-Greenville, Greenville, SC, USA
| | - Dane Hudson Smith
- Holcombe Department of Electrical Engineering, Watt Family Innovation Center, Clemson University, Clemson, SC, USA
| | - Martin Hagan
- Oklahoma State University, School of Electrical and Computer Engineering, Stillwater, OK, USA
| |
Collapse
|
26
|
Dan T, Chen X, He M, Guo H, He X, Chen J, Xian J, Hu Y, Zhang B, Wang N, Xie H, Cai H. DeepGA for automatically estimating fetal gestational age through ultrasound imaging. Artif Intell Med 2023; 135:102453. [PMID: 36628790 DOI: 10.1016/j.artmed.2022.102453] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 08/02/2022] [Accepted: 11/13/2022] [Indexed: 11/18/2022]
Abstract
Accurate estimation of gestational age (GA) is vital for identifying fetal abnormalities. Conventionally, GA is estimated by measuring the morphology of the cranium, abdomen, and femur manually and inputting the measurements into the classic Hadlock formula to assess fetal growth. However, this procedure incurs considerable overhead and suffers from operator bias, yielding suboptimal estimations. To address this challenge, we develop an automatic DeepGA model to achieve fully automatic GA prediction in an end-to-end manner. Our model uses a deep segmentation model (DeepSeg) to accurately identify and segment three critical tissues, the cranium, abdomen, and femur, from which their morphology is automatically extracted. After that, we are able to directly estimate the GA via a deep regression model (DeepReg). We evaluate DeepGA on a large dataset, including 10,413 ultrasound images from 7113 subjects. It achieves superior performance over the traditional measurement approach, with a mean absolute estimation error (MAE) of 5 days. Our DeepGA model is a novel automatic solution based on artificial intelligence that can help radiologists improve the performance of GA estimation in various clinical scenarios, thereby enhancing the efficiency of prenatal examinations.
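The segment-then-regress idea can be sketched with far simpler components than DeepSeg/DeepReg. The code below is an assumption for illustration only: it measures basic morphology from the three binary masks and fits a plain linear regressor to gestational age; the feature choices and regressor are placeholders, not the paper's modules.

```python
# Minimal sketch (assumption): morphology from cranium/abdomen/femur masks,
# then a simple regressor predicting gestational age in days.
import numpy as np
from skimage.measure import label, regionprops
from sklearn.linear_model import LinearRegression

def mask_features(mask: np.ndarray) -> list:
    """Return [area, major axis length, perimeter] of the largest region in a binary mask."""
    regions = regionprops(label(mask.astype(int)))
    if not regions:
        return [0.0, 0.0, 0.0]
    r = max(regions, key=lambda p: p.area)
    return [float(r.area), float(r.major_axis_length), float(r.perimeter)]

def build_feature_matrix(cranium, abdomen, femur) -> np.ndarray:
    """Each argument: list of binary masks, one per subject; returns an (N, 9) matrix."""
    return np.array([mask_features(c) + mask_features(a) + mask_features(f)
                     for c, a, f in zip(cranium, abdomen, femur)])

# X = build_feature_matrix(cranium_masks, abdomen_masks, femur_masks)
# reg = LinearRegression().fit(X_train, y_train_days)
# mae_days = np.mean(np.abs(reg.predict(X_test) - y_test_days))
```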
Collapse
Affiliation(s)
- Tingting Dan
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
| | - Xijie Chen
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
| | - Miao He
- The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
| | - Hongmei Guo
- Dongguan City Maternal and Child Health Hospital, Dongguan, China
| | - Xiaoqin He
- Women and Children's Hospital, School of Medicine, Xiamen University, China
| | - Jiazhou Chen
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
| | - Jianbo Xian
- Guangzhou Aiyunji Information Technology Co., Ltd., Guangzhou, China
| | - Yu Hu
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
| | - Bin Zhang
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
| | - Nan Wang
- Guangzhou Aiyunji Information Technology Co., Ltd., Guangzhou, China
| | - Hongning Xie
- The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China.
| | - Hongmin Cai
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China.
| |
Collapse
|
27
|
Fiorentino MC, Villani FP, Di Cosmo M, Frontoni E, Moccia S. A review on deep-learning algorithms for fetal ultrasound-image analysis. Med Image Anal 2023; 83:102629. [PMID: 36308861 DOI: 10.1016/j.media.2022.102629] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 07/12/2022] [Accepted: 09/10/2022] [Indexed: 11/07/2022]
Abstract
Deep-learning (DL) algorithms are becoming the standard for processing ultrasound (US) fetal images. A number of survey papers are now available in the field, but most of them focus on the broader area of medical-image analysis or do not cover all fetal US DL applications. This paper surveys the most recent work in the field, with a total of 153 research papers published after 2017. Papers are analyzed and discussed from both the methodology and the application perspectives. We categorized the papers into (i) fetal standard-plane detection, (ii) anatomical structure analysis and (iii) biometry parameter estimation. For each category, the main limitations and open issues are presented. Summary tables are included to facilitate the comparison among the different approaches. In addition, emerging applications are also outlined. Publicly available datasets and performance metrics commonly used to assess algorithm performance are summarized, too. This paper ends with a critical summary of the current state of the art on DL algorithms for fetal US image analysis and a discussion of the current challenges that have to be tackled by researchers working in the field to translate the research methodology into actual clinical practice.
Collapse
Affiliation(s)
| | | | - Mariachiara Di Cosmo
- Department of Information Engineering, Università Politecnica delle Marche, Italy
| | - Emanuele Frontoni
- Department of Information Engineering, Università Politecnica delle Marche, Italy; Department of Political Sciences, Communication and International Relations, Università degli Studi di Macerata, Italy
| | - Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Italy
| |
Collapse
|
28
|
Nnabuife SG, Kuang B, Whidborne JF, Rana ZA. Development of Gas-Liquid Flow Regimes Identification Using a Noninvasive Ultrasonic Sensor, Belt-Shape Features, and Convolutional Neural Network in an S-Shaped Riser. IEEE TRANSACTIONS ON CYBERNETICS 2023; 53:3-17. [PMID: 34260363 DOI: 10.1109/tcyb.2021.3084860] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The problem of classifying gas-liquid two-phase flow regimes from ultrasonic signals is considered. A new method, belt-shaped features (BSFs), is proposed for performing feature extraction on the preprocessed data. A convolutional neural network (CNN/ConvNet)-based classifier is then applied to categorize the flow into one of four regimes: 1) annular; 2) churn; 3) slug; or 4) bubbly. The proposed ConvNet classifier includes multiple stages of convolution and pooling layers, which both decrease the dimension and learn the classification features. Using experimental data collected from an industrial-scale multiphase flow facility, the proposed ConvNet classifier achieved 97.40%, 94.57%, and 94.94% accuracy, respectively, for the training set, testing set, and validation set. These results demonstrate the applicability of the BSF features and the ConvNet classifier for flow regime classification in industrial applications.
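A ConvNet with alternating convolution and pooling stages and a four-class head, as described above, can be sketched as follows. This is an illustrative assumption, not the paper's network; layer widths and input size are arbitrary.

```python
# Minimal sketch (assumption): a small ConvNet mapping a belt-shaped feature
# map to one of four flow regimes (annular, churn, slug, bubbly).
import torch
import torch.nn as nn

class FlowRegimeNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.stages = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, 1, H, W) belt-shaped features
        return self.head(self.stages(x).flatten(1))

logits = FlowRegimeNet()(torch.randn(8, 1, 64, 64))
print(logits.shape)  # torch.Size([8, 4])
```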
Collapse
|
29
|
Fleurentin A, Mazellier JP, Meyer A, Montanelli J, Swanstrom L, Gallix B, Sosa Valencia L, Padoy N. Automatic pancreas anatomical part detection in endoscopic ultrasound videos. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2022. [DOI: 10.1080/21681163.2022.2154274] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Affiliation(s)
| | - Jean-Paul Mazellier
- IHU, Strasbourg, France
- ICube, University of Strasbourg, CNRS, Strasbourg, France
| | - Adrien Meyer
- ICube, University of Strasbourg, CNRS, Strasbourg, France
| | | | | | | | | | - Nicolas Padoy
- IHU, Strasbourg, France
- ICube, University of Strasbourg, CNRS, Strasbourg, France
| |
Collapse
|
30
|
Mumuni AN, Hasford F, Udeme NI, Dada MO, Awojoyogbe BO. A SWOT analysis of artificial intelligence in diagnostic imaging in the developing world: making a case for a paradigm shift. PHYSICAL SCIENCES REVIEWS 2022. [DOI: 10.1515/psr-2022-0121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Diagnostic imaging (DI) refers to techniques and methods of creating images of the body's internal parts and organs, with or without the use of ionizing radiation, for purposes of diagnosing, monitoring and characterizing diseases. By default, DI equipment is technology-based, and in recent times there has been widespread automation of DI operations in high-income countries, while low- and middle-income countries (LMICs) are yet to gain traction in automated DI. Advanced DI techniques employ artificial intelligence (AI) protocols to enable imaging equipment to perceive data more accurately than humans do and, either automatically or under expert evaluation, to make clinical decisions such as diagnosis and characterization of diseases. In this narrative review, SWOT analysis is used to examine the strengths, weaknesses, opportunities and threats associated with the deployment of AI-based DI protocols in LMICs. Drawing from this analysis, a case is then made to justify the need for widespread AI applications in DI in resource-poor settings. Among other strengths discussed, AI-based DI systems could enhance accuracy in diagnosis, monitoring and characterization of diseases and offer efficient image acquisition, processing, segmentation and analysis procedures, but they may have weaknesses regarding the need for big data, high initial and maintenance costs, and inadequate technical expertise of professionals. They present opportunities for synthetic modality transfer, increased access to imaging services, and protocol optimization, and threats of biases in input training data, lack of regulatory frameworks and perceived fear of job losses among DI professionals. The analysis showed that successful integration of AI in DI procedures could position LMICs towards achievement of universal health coverage by 2030/2035. LMICs will, however, have to learn from the experiences of advanced settings, train critical staff in relevant areas of AI and proceed to develop in-house AI systems with all relevant stakeholders on board.
Collapse
Affiliation(s)
| | - Francis Hasford
- Department of Medical Physics , University of Ghana, Ghana Atomic Energy Commission , Accra , Ghana
| | | | | | | |
Collapse
|
31
|
Khan ZA, Beghdadi A, Kaaniche M, Alaya-Cheikh F, Gharbi O. A neural network based framework for effective laparoscopic video quality assessment. Comput Med Imaging Graph 2022; 101:102121. [PMID: 36174307 DOI: 10.1016/j.compmedimag.2022.102121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Revised: 08/22/2022] [Accepted: 08/30/2022] [Indexed: 01/27/2023]
Abstract
Video quality assessment is a challenging problem of critical significance in the context of medical imaging. For instance, in laparoscopic surgery, the acquired video data suffers from different kinds of distortion that not only hinder surgical performance but also affect the execution of subsequent tasks in surgical navigation and robotic surgery. For this reason, we propose in this paper neural network-based approaches for distortion classification as well as quality prediction. More precisely, a Residual Network (ResNet)-based approach is first developed for a simultaneous ranking and classification task. Then, this architecture is extended to make it appropriate for the quality prediction task by using an additional Fully Connected Neural Network (FCNN). To train the overall architecture (ResNet and FCNN models), transfer learning and end-to-end learning approaches are investigated. Experimental results, carried out on a new laparoscopic video quality database, have shown the efficiency of the proposed methods compared to recent conventional and deep learning based approaches.
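The ResNet-plus-FCNN arrangement described above can be outlined in a few lines. The sketch below is an assumption about the general architecture, not the authors' code; the head sizes are arbitrary and pretrained weights can optionally be loaded for the transfer-learning variant.

```python
# Minimal sketch (assumption): a ResNet backbone with its final layer removed,
# followed by a small fully connected network that regresses a scalar quality
# score for a laparoscopic frame.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class QualityPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)       # or load a pretrained checkpoint
        backbone.fc = nn.Identity()             # keep the 512-d feature vector
        self.backbone = backbone
        self.fcnn = nn.Sequential(
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 1),                  # scalar quality score
        )

    def forward(self, x):                       # x: (batch, 3, H, W) video frames
        return self.fcnn(self.backbone(x))

score = QualityPredictor()(torch.randn(2, 3, 224, 224))
print(score.shape)  # torch.Size([2, 1])
```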
Collapse
Affiliation(s)
- Zohaib Amjad Khan
- Université Sorbonne Paris Nord, L2TI, UR 3043, F-93430, Villetaneuse, France
| | - Azeddine Beghdadi
- Université Sorbonne Paris Nord, L2TI, UR 3043, F-93430, Villetaneuse, France.
| | - Mounir Kaaniche
- Université Sorbonne Paris Nord, L2TI, UR 3043, F-93430, Villetaneuse, France
| | | | - Osama Gharbi
- Université Sorbonne Paris Nord, L2TI, UR 3043, F-93430, Villetaneuse, France
| |
Collapse
|
32
|
Nageswari CS, Vimal Kumar M, Grace NVA, Thiyagarajan J. Tunicate swarm-based grey wolf algorithm for fetal heart chamber segmentation and classification: a heuristic-based optimal feature selection concept. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-221654] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Ultrasound image quality management and assessment are an important stage in clinical diagnosis. This operation is often carried out manually, which has several issues, including reliance on the operator's experience, lengthy labor, and considerable intra-observer variance. As a result, automatic quality evaluation of ultrasound images is particularly desirable in medical applications. This work performs fetal heart chamber segmentation and classification using a novel hybrid optimization algorithm, the Tunicate Swarm-based Grey Wolf Algorithm (TS-GWA). First, fetal ultrasound image data are collected and preprocessed using a total variation technique. From the preprocessed images, optimal features are extracted using a TF-IDF approach. Segmentation is then performed on the optimally selected features using the Spatially Regularized Discriminative Correlation Filters (SRDCF) method. In the final step, the fetal images are classified using a Modified Long Short-Term Memory (MLSTM) network. The fitness function guiding both the optimal feature selection and the optimization of the MLSTM hidden neurons is the maximization of PSNR and the minimization of MSE. The proposed method improves PSNR by 3.1 to 9.8 and classification accuracy by 1.9 to 12.13 compared with existing techniques. The generalization ability and adaptability of the proposed TS-GWA method are demonstrated through various performance analyses, and extensive results show that the proposed intelligent technique outperforms existing segmentation methods.
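The fitness criterion mentioned above (maximize PSNR, minimize MSE) is easy to make concrete. The sketch below is an assumption of how such a fitness score could be computed for a candidate solution; the weights alpha and beta are illustrative, not from the paper.

```python
# Minimal sketch (assumption): score a processed image against its reference
# by rewarding PSNR and penalising MSE.
import numpy as np

def mse(reference: np.ndarray, test: np.ndarray) -> float:
    return float(np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2))

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    m = mse(reference, test)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

def fitness(reference: np.ndarray, test: np.ndarray,
            alpha: float = 1.0, beta: float = 1.0) -> float:
    """Higher is better: the optimizer would keep candidates with large fitness."""
    return alpha * psnr(reference, test) - beta * mse(reference, test)
```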
Collapse
|
33
|
Zhao H, Zheng Q, Teng C, Yasrab R, Drukker L, Papageorghiou AT, Noble JA. Towards Unsupervised Ultrasound Video Clinical Quality Assessment with Multi-modality Data. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2022; 13434:228-237. [PMID: 36649384 PMCID: PMC7614065 DOI: 10.1007/978-3-031-16440-8_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
Video quality assurance is an important topic in obstetric ultrasound imaging to ensure that captured videos are suitable for biometry and fetal health assessment. Previously, one successful objective approach to automated ultrasound image quality assurance has considered it as a supervised learning task of detecting anatomical structures defined by a clinical protocol. In this paper, we propose an alternative and purely data-driven approach that makes effective use of both spatial and temporal information and the model learns from high-quality videos without any anatomy-specific annotations. This makes it attractive for potentially scalable generalisation. In the proposed model, a 3D encoder and decoder pair bi-directionally learns a spatio-temporal representation between the video space and the feature space. A zoom-in module is introduced to encourage the model to focus on the main object in a frame. A further design novelty is the introduction of two additional modalities in model training (sonographer gaze and optical flow derived from the video). Finally, our approach is applied to identify high-quality videos for fetal head circumference measurement in freehand second-trimester ultrasound scans. Extensive experiments are conducted, and the results demonstrate the effectiveness of our approach with an AUC of 0.911.
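The core of the spatio-temporal representation described above is a 3D encoder-decoder trained on high-quality clips. The sketch below is an illustrative assumption only: the zoom-in module and the gaze/optical-flow modalities are omitted, and reconstruction error is used as a rough stand-in for the learned quality signal.

```python
# Minimal sketch (assumption): a 3D convolutional encoder-decoder that
# reconstructs a short ultrasound clip; high reconstruction error at test time
# can be read as a proxy for a low-quality clip.
import torch
import torch.nn as nn

class Clip3DAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(16, 8, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, kernel_size=2, stride=2),
        )

    def forward(self, clip):                 # clip: (batch, 1, T, H, W)
        return self.decoder(self.encoder(clip))

clip = torch.randn(1, 1, 16, 64, 64)
recon = Clip3DAutoencoder()(clip)
print(nn.functional.mse_loss(recon, clip).item())  # reconstruction error as a quality proxy
```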
Collapse
Affiliation(s)
- He Zhao
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
| | - Qingqing Zheng
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Clare Teng
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
| | - Robail Yasrab
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
| | - Lior Drukker
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Department of Obsterics and Gynecology, Tel-Aviv University, Tel Aviv, Israel
| | - Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
| | - J Alison Noble
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
| |
Collapse
|
34
|
Hu Y, Wen G, Luo M, Dai D, Cao W, Yu Z, Hall W. Inner-Imaging Networks: Put Lenses Into Convolutional Structure. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:8547-8560. [PMID: 34398768 DOI: 10.1109/tcyb.2020.3034605] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Despite the tremendous success of deep convolutional networks in computer vision, they suffer from serious computation costs and redundancies. Although previous works address this by enhancing the diversity of filters, they have not considered the complementarity and the completeness of the internal convolutional structure. To address this problem, we propose a novel inner-imaging (InI) architecture, which allows relationships between channels to meet the above requirement. Specifically, we organize the channel signal points in groups using convolutional kernels to model both the intragroup and intergroup relationships simultaneously. A convolutional filter is a powerful tool for modeling spatial relations and organizing grouped signals, so the proposed methods map the channel signals onto a pseudoimage, like putting a lens into the internal convolution structure. Consequently, not only is the diversity of channels increased, but the complementarity and completeness can also be explicitly enhanced. The proposed architecture is lightweight and easy to implement. It provides an efficient self-organization strategy for convolutional networks to improve their performance. Extensive experiments are conducted on multiple benchmark datasets, including CIFAR, SVHN, and ImageNet. Experimental results verify the effectiveness of the InI mechanism with the most popular convolutional networks as the backbones.
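The pseudoimage idea can be approximated in a few lines. The module below is an assumption for illustration, not the paper's exact design: channel descriptors are rearranged into a 2D grid, a convolution acts as the "lens" over that grid, and the result reweights the channels.

```python
# Minimal sketch (assumption): convolve over a pseudo-image of channel
# descriptors and use the result as channel-wise attention weights.
import torch
import torch.nn as nn

class InnerImagingAttention(nn.Module):
    def __init__(self, channels: int, grid: int):
        super().__init__()
        assert grid * grid == channels, "channels must form a square pseudo-image here"
        self.grid = grid
        self.conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # lens over the pseudo-image

    def forward(self, x):                          # x: (batch, C, H, W)
        b, c, _, _ = x.shape
        desc = x.mean(dim=(2, 3))                  # (batch, C) channel descriptors
        pseudo = desc.view(b, 1, self.grid, self.grid)
        weights = torch.sigmoid(self.conv(pseudo)).view(b, c, 1, 1)
        return x * weights                         # reweighted feature map

out = InnerImagingAttention(channels=64, grid=8)(torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```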
Collapse
|
35
|
Chai HH, Ye RZ, Xiong LF, Xu ZN, Chen X, Xu LJ, Hu X, Jiang LF, Peng CZ. Successful Use of a 5G-Based Robot-Assisted Remote Ultrasound System in a Care Center for Disabled Patients in Rural China. Front Public Health 2022; 10:915071. [PMID: 35923952 PMCID: PMC9339711 DOI: 10.3389/fpubh.2022.915071] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 06/22/2022] [Indexed: 12/07/2022] Open
Abstract
Background Disability has become a global population health challenge. Due to difficulties in self-care or independent living, patients with disability mainly live in community-based care centers or institutions for long-term care. Nonetheless, these settings often lack basic medical resources, such as ultrasonography. Thus, remote ultrasonic robot technology for clinical applications across wide regions is imperative. To date, few experiences of remote diagnostic systems in rural care centers have been reported. Objective To assess the feasibility of a fifth-generation cellular technology (5G)-based robot-assisted remote ultrasound system in a care center for disabled patients in rural China. Methods Patients underwent remote robot-assisted and bedside ultrasound examinations of the liver, gallbladder, spleen, and kidneys. We compared the diagnostic consistency and differences between the two modalities and evaluated the examination duration, image quality, and safety. Results Forty-nine patients were included (21 men; mean age: 61.0 ± 19.0 [range: 19–91] years). Thirty-nine and ten had positive and negative results, respectively; 67 lesions were detected. Comparing the methods, 41 and 8 patients had consistent and inconsistent diagnoses, respectively. The McNemar and kappa values were 0.727 and 0.601, respectively. The mean duration of remote and bedside examinations was 12.2 ± 4.5 (range: 5–26) min and 7.5 ± 1.8 (range: 5–13) min (p < 0.001), respectively. The median image score for original images on the patient side and transmitted images on the doctor side was 5 points (interquartile range: [IQR]: 4.7–5.0) and 4.7 points (IQR: 4.5–5.0) (p = 0.176), respectively. No obvious complications from the examination were reported. Conclusions A 5G-based robot-assisted remote ultrasound system is feasible and has comparable diagnostic efficiency to traditional bedside ultrasound. This system may provide a unique solution for basic ultrasound diagnostic services in primary healthcare settings.
Collapse
Affiliation(s)
- Hui-hui Chai
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China
| | - Rui-zhong Ye
- Emergency and Critical Care Center, Department of Ultrasound Medicine, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
| | - Lin-fei Xiong
- Department of Engineering, BGI Life Science Research Institution, Shenzhen, China
| | - Zi-ning Xu
- Emergency and Critical Care Center, Department of Ultrasound Medicine, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
| | - Xuan Chen
- Department of Engineering, BGI Life Science Research Institution, Shenzhen, China
| | - Li-juan Xu
- Department of General Practice, Yuanshu Disabled Care Center, Huzhou, China
| | - Xin Hu
- Department of General Practice, Yuanshu Disabled Care Center, Huzhou, China
| | - Lian-feng Jiang
- Department of General Practice, Yuanshu Disabled Care Center, Huzhou, China
| | - Cheng-zhong Peng
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China
- Ultrasound Research and Education Institute, Clinical Research Center for Interventional Medicine, Tongji University School of Medicine, Shanghai, China
- Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, Shanghai, China
- *Correspondence: Cheng-zhong Peng
| |
Collapse
|
36
|
Song Y, Zhong Z, Zhao B, Zhang P, Wang Q, Wang Z, Yao L, Lv F, Hu Y. Medical Ultrasound Image Quality Assessment for Autonomous Robotic Screening. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3170209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- Yuxin Song
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Zhaoming Zhong
- The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
| | - Baoliang Zhao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Peng Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Qiong Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Ziwen Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Liang Yao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Faqin Lv
- Department of Ultrasound, The Third Medical Centre of Chinese PLA General Hospital, Beijing, China
| | - Ying Hu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
Collapse
|
37
|
Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via Ultrasound Images. iScience 2022; 25:104713. [PMID: 35856024 PMCID: PMC9287600 DOI: 10.1016/j.isci.2022.104713] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 06/09/2022] [Accepted: 06/28/2022] [Indexed: 11/26/2022] Open
Abstract
Several reviews have been conducted regarding artificial intelligence (AI) techniques to improve pregnancy outcomes, but they do not focus on ultrasound images. This survey aims to explore how AI can assist with fetal growth monitoring via ultrasound images. We reported our findings following the PRISMA guidelines. We conducted a comprehensive search of eight bibliographic databases. Out of 1269 studies, 107 are included. We found that 2D ultrasound images were more popular (88) than 3D and 4D ultrasound images (19). Classification is the most used method (42), followed by segmentation (31), classification integrated with segmentation (16) and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The most common areas that gained traction within the pregnancy domain were the fetus head (43), fetus body (31), fetus heart (13), fetus abdomen (10), and the fetus face (10). This survey will promote the development of improved AI models for fetal clinical applications. Highlights: artificial intelligence studies to monitor fetal development via ultrasound images are surveyed; fetal issues are categorized as general, head, heart, face, and abdomen; the most used AI techniques are classification, segmentation, object detection, and reinforcement learning; the research and practical implications are included.
Collapse
|
38
|
Jia J, Gao Z, Chen K, Hu M, Min X, Zhai G, Yang X. RIHOOP: Robust Invisible Hyperlinks in Offline and Online Photographs. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:7094-7106. [PMID: 33315574 DOI: 10.1109/tcyb.2020.3037208] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
In the era of multimedia and the Internet, the quick response (QR) code helps people move quickly from offline content to online information. However, the QR code is often limited in many scenarios because of its random and dull appearance. Therefore, this article proposes a novel approach to embed hyperlinks into common images, making the hyperlinks invisible to human eyes but detectable for mobile devices equipped with a camera. Our approach is an end-to-end neural network with an encoder to hide messages and a decoder to extract messages. To keep the hidden message resilient to camera capture, we build a distortion network between the encoder and the decoder to augment the encoded images. The distortion network uses differentiable 3-D rendering operations, which can simulate the distortion introduced by camera imaging in both printing and display scenarios. To preserve the visual appeal of the image carrying the hyperlink, a loss function conforming to the human visual system (HVS) is used to supervise the training of the encoder. Experimental results show that the proposed approach outperforms the previous work on both robustness and quality. Based on the proposed approach, many applications become possible, for example, "image hyperlinks" for advertisements on TV, websites, or posters, and "invisible watermarks" for copyright protection on digital resources or product packaging.
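The encoder/distortion/decoder loop can be caricatured with very small networks. The sketch below is a loose assumption, far simpler than RIHOOP: Gaussian noise stands in for the differentiable 3-D rendering pipeline, and the residual amplitude and bit length are arbitrary.

```python
# Minimal sketch (assumption): an encoder hides a bit string as a low-amplitude
# residual, a differentiable distortion perturbs the image, and a decoder
# recovers the bits; all trainable end to end with a bit-recovery loss.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_bits: int = 32):
        super().__init__()
        self.expand = nn.Linear(n_bits, 3 * 64 * 64)
        self.mix = nn.Conv2d(6, 3, kernel_size=3, padding=1)

    def forward(self, image, bits):                 # image: (B,3,64,64), bits: (B,n_bits)
        msg = self.expand(bits).view(-1, 3, 64, 64)
        residual = torch.tanh(self.mix(torch.cat([image, msg], dim=1)))
        return image + 0.05 * residual              # keep the hidden message near-invisible

class Decoder(nn.Module):
    def __init__(self, n_bits: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                                 nn.Linear(8 * 8 * 8, n_bits))

    def forward(self, image):
        return self.net(image)                      # logits for each hidden bit

def distort(image):                                 # stand-in for camera/print distortion
    return image + 0.02 * torch.randn_like(image)

enc, dec = Encoder(), Decoder()
bits = torch.randint(0, 2, (4, 32)).float()
encoded = enc(torch.rand(4, 3, 64, 64), bits)
loss = nn.functional.binary_cross_entropy_with_logits(dec(distort(encoded)), bits)
```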
Collapse
|
39
|
Breast Tumor Ultrasound Image Segmentation Method Based on Improved Residual U-Net Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3905998. [PMID: 35795762 PMCID: PMC9252688 DOI: 10.1155/2022/3905998] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Revised: 05/19/2022] [Accepted: 05/31/2022] [Indexed: 11/25/2022]
Abstract
In order to achieve efficient and accurate breast tumor recognition and diagnosis, this paper proposes a breast tumor ultrasound image segmentation method based on the U-Net framework, combined with residual blocks and an attention mechanism. In this method, residual blocks are introduced into the U-Net network to avoid the performance degradation caused by vanishing gradients and to reduce the training difficulty of deep networks. At the same time, a fusion attention mechanism that considers both spatial and channel attention is introduced into the image analysis model to improve its ability to capture feature information from ultrasound images and to achieve accurate recognition and extraction of breast tumors. The experimental results show that the Dice index of the proposed method reaches 0.921, demonstrating excellent image segmentation performance.
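The building blocks named above (residual block, fused channel-and-spatial attention, Dice index) can be sketched individually. The code below is an assumption of typical forms of these components, not the paper's implementation.

```python
# Minimal sketch (assumption): a residual block, a fused channel+spatial
# attention gate, and the Dice index used to score segmentation overlap.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))      # skip connection eases gradient flow

class FusedAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.channel = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        return x * self.channel(x) * self.spatial(x)

def dice_index(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """pred, target: binary masks of equal shape."""
    inter = (pred * target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))
```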
Collapse
|
40
|
Reddy CD, Van den Eynde J, Kutty S. Artificial intelligence in perinatal diagnosis and management of congenital heart disease. Semin Perinatol 2022; 46:151588. [PMID: 35396036 DOI: 10.1016/j.semperi.2022.151588] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Prenatal diagnosis and management of congenital heart disease (CHD) has progressed substantially in the past few decades. Fetal echocardiography can accurately detect and diagnose approximately 85% of cardiac anomalies. The prenatal diagnosis of CHD results in improved care, with improved risk stratification, perioperative status and survival. However, there is much work to be done. A minority of CHD is actually identified prenatally. This seemingly incongruous gap is due, in part, to diminished recognition of an anomaly even when present in the images and the need for increased training to obtain specialized cardiac views. Artificial intelligence (AI) is a field within computer science that focuses on the development of algorithms that "learn, reason, and self-correct" in a human-like fashion. When applied to fetal echocardiography, AI has the potential to improve image acquisition, image optimization, automated measurements, identification of outliers, classification of diagnoses, and prediction of outcomes. Adoption of AI in the field has been thus far limited by a paucity of data, limited resources to implement new technologies, and legal and ethical concerns. Despite these barriers, recognition of the potential benefits will push us to a future in which AI will become a routine part of clinical practice.
Collapse
Affiliation(s)
- Charitha D Reddy
- Division of Pediatric Cardiology, Stanford University, Palo Alto, CA, USA.
| | - Jef Van den Eynde
- Helen B. Taussig Heart Center, The Johns Hopkins Hospital and School of Medicine, Baltimore, MD, USA; Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
| | - Shelby Kutty
- Helen B. Taussig Heart Center, The Johns Hopkins Hospital and School of Medicine, Baltimore, MD, USA
| |
Collapse
|
41
|
Saeed SU, Fu Y, Stavrinides V, Baum ZMC, Yang Q, Rusu M, Fan RE, Sonn GA, Noble JA, Barratt DC, Hu Y. Image quality assessment for machine learning tasks using meta-reinforcement learning. Med Image Anal 2022; 78:102427. [PMID: 35344824 DOI: 10.1016/j.media.2022.102427] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Revised: 01/24/2022] [Accepted: 03/18/2022] [Indexed: 11/23/2022]
Abstract
In this paper, we consider image quality assessment (IQA) as a measure of how images are amenable with respect to a given downstream task, or task amenability. When the task is performed using machine learning algorithms, such as a neural-network-based task predictor for image classification or segmentation, the performance of the task predictor provides an objective estimate of task amenability. In this work, we use an IQA controller to predict the task amenability which, itself being parameterised by neural networks, can be trained simultaneously with the task predictor. We further develop a meta-reinforcement learning framework to improve the adaptability for both IQA controllers and task predictors, such that they can be fine-tuned efficiently on new datasets or meta-tasks. We demonstrate the efficacy of the proposed task-specific, adaptable IQA approach, using two clinical applications for ultrasound-guided prostate intervention and pneumonia detection on X-ray images.
Collapse
Affiliation(s)
- Shaheer U Saeed
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK.
| | - Yunguan Fu
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK; InstaDeep, London, UK
| | - Vasilis Stavrinides
- Division of Surgery & Interventional Science, University College London, London, UK; Department of Urology, University College Hospital NHS Foundation Trust, London, UK
| | - Zachary M C Baum
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK
| | - Qianye Yang
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK
| | - Mirabela Rusu
- Department of Radiology, Stanford University, Stanford, California, USA
| | - Richard E Fan
- Department of Urology, Stanford University, Stanford, California, USA
| | - Geoffrey A Sonn
- Department of Radiology, Stanford University, Stanford, California, USA; Department of Urology, Stanford University, Stanford, California, USA
| | - J Alison Noble
- Department of Engineering Science, University of Oxford, Oxford, UK
| | - Dean C Barratt
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK
| | - Yipeng Hu
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, London, UK; Department of Engineering Science, University of Oxford, Oxford, UK
| |
Collapse
|
42
|
A mutual promotion encoder-decoder method for ultrasonic hydronephrosis diagnosis. Methods 2022; 203:78-89. [DOI: 10.1016/j.ymeth.2022.03.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Revised: 01/28/2022] [Accepted: 03/24/2022] [Indexed: 11/17/2022] Open
|
43
|
Lin M, He X, Guo H, He M, Zhang L, Xian J, Lei T, Xu Q, Zheng J, Feng J, Hao C, Yang Y, Wang N, Xie H. Use of real-time artificial intelligence in detection of abnormal image patterns in standard sonographic reference planes in screening for fetal intracranial malformations. ULTRASOUND IN OBSTETRICS & GYNECOLOGY : THE OFFICIAL JOURNAL OF THE INTERNATIONAL SOCIETY OF ULTRASOUND IN OBSTETRICS AND GYNECOLOGY 2022; 59:304-316. [PMID: 34940999 DOI: 10.1002/uog.24843] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2021] [Revised: 11/02/2021] [Accepted: 11/25/2021] [Indexed: 06/14/2023]
Abstract
OBJECTIVES To develop and validate an artificial intelligence system, the Prenatal ultrasound diagnosis Artificial Intelligence Conduct System (PAICS), to detect different patterns of fetal intracranial abnormality in standard sonographic reference planes for screening for congenital central nervous system (CNS) malformations. METHODS Neurosonographic images from normal fetuses and fetuses with CNS malformations at 18-40 gestational weeks were retrieved from the databases of two tertiary hospitals in China and assigned randomly (ratio, 8:1:1) to training, fine-tuning and internal validation datasets to develop and evaluate the PAICS. The system was built based on a real-time convolutional neural network (CNN) algorithm, You Only Look Once, version 3 (YOLOv3). An image dataset from a third tertiary hospital was used to further validate, externally, the performance of the PAICS and to compare its performance with that of sonologists with different levels of expertise. Furthermore, a prospective video dataset was employed to evaluate the performance of the PAICS in a real-time scan scenario. The diagnostic accuracy, sensitivity, specificity and area under the receiver-operating-characteristics curve (AUC) were calculated to assess the performance of the PAICS and to compare this with the performance of sonologists with different levels of experience. RESULTS In total, 43 890 images from 16 297 pregnancies and 169 videos from 166 pregnancies were used to develop and validate the PAICS. The system achieved excellent performance in identifying 10 types of intracranial image pattern, with macro- and microaverage AUCs, respectively, of 0.933 (95% CI, 0.798-1.000) and 0.977 (95% CI, 0.970-0.985) for the internal validation image dataset, 0.902 (95% CI, 0.816-0.989) and 0.898 (95% CI, 0.885-0.911) for the external validation image dataset and 0.969 (95% CI, 0.886-1.000) and 0.981 (95% CI, 0.974-0.988) in the real-time scan setting. The performance of the PAICS was comparable to that of expert sonologists in terms of macro- and microaverage accuracy (P = 0.863 and P = 0.775, respectively), sensitivity (P = 0.883, P = 0.846) and AUC (P = 0.891, P = 0.788), but required significantly less time (0.025 s per image for PAICS vs 4.4 s for experts, P < 0.001). CONCLUSIONS Both in the image dataset and in the real-time scan setting, the PAICS achieved excellent diagnostic performance for various fetal CNS abnormalities. Its performance was comparable to that of experts, but it required less time. A CNN algorithm can be trained to detect fetal CNS abnormalities. The PAICS has the potential to be an effective and efficient tool in screening for fetal CNS malformations in clinical practice. © 2021 International Society of Ultrasound in Obstetrics and Gynecology.
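The macro- and micro-average AUCs quoted above summarize per-class detection scores in two different ways. The snippet below is an assumption showing how such values can be computed on synthetic indicator labels for the 10 image patterns; it is not derived from the PAICS data.

```python
# Minimal sketch (assumption): macro- vs micro-average AUC over 10 pattern
# classes, using binary indicator labels and toy confidence scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 10
y_true = rng.integers(0, 2, size=(n_samples, n_classes))                 # one column per pattern
y_score = np.clip(y_true + 0.4 * rng.normal(size=y_true.shape), 0, 1)    # toy detector confidences

macro_auc = roc_auc_score(y_true, y_score, average="macro")   # mean of per-class AUCs
micro_auc = roc_auc_score(y_true, y_score, average="micro")   # pooled over all decisions
print(round(macro_auc, 3), round(micro_auc, 3))
```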
Collapse
Affiliation(s)
- M Lin
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| | - X He
- Department of Ultrasound, Women and Children's Hospital affiliated to Xiamen University, Fujian, China
| | - H Guo
- Department of Ultrasound, Dongguan Maternal and Child Health Hospital, Dongguan, Guangdong, China
| | - M He
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| | - L Zhang
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| | - J Xian
- Guangzhou Aiyunji Information Technology Co., Ltd, Guangdong China & School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
| | - T Lei
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Q Xu
- Department of Ultrasound, Dongguan Maternal and Child Health Hospital, Dongguan, Guangdong, China
| | - J Zheng
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| | - J Feng
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| | - C Hao
- Department of Medical Statistics & Sun Yat-sen Global Health Institute, School of Public Health and Institute of State Governance, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Y Yang
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| | - N Wang
- Guangzhou Aiyunji Information Technology Co., Ltd, Guangdong, China
| | - H Xie
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| |
Collapse
|
44
|
High-Frequency Ultrasound Dataset for Deep Learning-Based Image Quality Assessment. SENSORS 2022; 22:s22041478. [PMID: 35214381 PMCID: PMC8875486 DOI: 10.3390/s22041478] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 02/09/2022] [Accepted: 02/12/2022] [Indexed: 12/04/2022]
Abstract
This study aims at high-frequency ultrasound image quality assessment for computer-aided diagnosis of the skin. In recent decades, high-frequency ultrasound imaging has opened up new opportunities in dermatology, utilizing the most recent deep learning-based algorithms for automated image analysis. An individual dermatological examination contains either a single image, a couple of pictures, or an image series acquired during the probe movement. The estimated skin parameters might depend on the probe position, orientation, or acquisition setup. Consequently, the more images analyzed, the more precise the obtained measurements. Therefore, for automated measurements, the best choice is to acquire an image series and then analyze its parameters statistically. However, besides the correctly acquired images, the resulting series contains plenty of non-informative data: images with various artifacts or noise, and images acquired at time points when the ultrasound probe has no contact with the patient's skin. All of these influence further analysis, leading to misclassification or incorrect image segmentation. Therefore, an automated image selection step is crucial. To meet this need, we collected and shared 17,425 high-frequency images of the facial skin from 516 measurements of 44 patients. Two experts annotated each image as correct or not. The proposed framework utilizes a deep convolutional neural network followed by a fuzzy reasoning system to assess the acquired data's quality automatically. Different approaches to binary and multi-class image analysis, based on the VGG-16 model, were developed and compared. The best classification results reach 91.7% accuracy for the binary analysis and 82.3% for the multi-class analysis.
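A VGG-16-based correct/incorrect classifier of the kind described above can be set up as follows. This is an assumption for illustration and omits the fuzzy reasoning stage; freezing the convolutional features and replacing the final layer is one common transfer-learning choice, not necessarily the authors'.

```python
# Minimal sketch (assumption): VGG-16 backbone with a replaced head for the
# binary correct/incorrect image-quality decision.
import torch
import torch.nn as nn
from torchvision.models import vgg16

def build_quality_classifier(n_classes: int = 2) -> nn.Module:
    model = vgg16(weights=None)                   # pretrained weights can be loaded instead
    for p in model.features.parameters():
        p.requires_grad = False                   # freeze convolutional layers
    model.classifier[6] = nn.Linear(4096, n_classes)
    return model

clf = build_quality_classifier()
logits = clf(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```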
Collapse
|
45
|
Sanchez-Martinez S, Camara O, Piella G, Cikes M, González-Ballester MÁ, Miron M, Vellido A, Gómez E, Fraser AG, Bijnens B. Machine Learning for Clinical Decision-Making: Challenges and Opportunities in Cardiovascular Imaging. Front Cardiovasc Med 2022; 8:765693. [PMID: 35059445 PMCID: PMC8764455 DOI: 10.3389/fcvm.2021.765693] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2021] [Accepted: 12/07/2021] [Indexed: 11/30/2022] Open
Abstract
The use of machine learning (ML) approaches to target clinical problems is expected to revolutionize clinical decision-making in cardiology. The success of these tools depends on understanding the intrinsic processes at work in the conventional pathway by which clinicians make decisions. In parallel with this pathway, ML can have an impact at four levels: in data acquisition, predominantly by extracting standardized, high-quality information with the smallest possible learning curve; in feature extraction, by relieving healthcare practitioners of tedious measurements on raw data; in interpretation, by digesting complex, heterogeneous data to augment the understanding of the patient's status; and in decision support, by leveraging the previous steps to predict clinical outcomes or response to treatment, or to recommend a specific intervention. This paper discusses the state of the art, the current clinical status, and the challenges associated with the two latter tasks of interpretation and decision support, together with the challenges related to the learning process, auditability/traceability, system infrastructure, and integration within clinical processes in cardiovascular imaging.
Collapse
Affiliation(s)
| | - Oscar Camara
- Department of Information and Communication Technologies, University Pompeu Fabra, Barcelona, Spain
| | - Gemma Piella
- Department of Information and Communication Technologies, University Pompeu Fabra, Barcelona, Spain
| | - Maja Cikes
- Department of Cardiovascular Diseases, University of Zagreb School of Medicine, University Hospital Centre Zagreb, Zagreb, Croatia
| | | | - Marius Miron
- Joint Research Centre, European Commission, Seville, Spain
| | - Alfredo Vellido
- Computer Science Department, Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, Universitat Politècnica de Catalunya, Barcelona, Spain
| | - Emilia Gómez
- Department of Information and Communication Technologies, University Pompeu Fabra, Barcelona, Spain
- Joint Research Centre, European Commission, Seville, Spain
| | - Alan G. Fraser
- School of Medicine, Cardiff University, Cardiff, United Kingdom
| | - Bart Bijnens
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain
- ICREA, Barcelona, Spain
- Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
| |
Collapse
|
46
|
Hossain MM, Hasan MM, Rahim MA, Rahman MM, Yousuf MA, Al-Ashhab S, Akhdar HF, Alyami SA, Azad A, Moni MA. Particle Swarm Optimized Fuzzy CNN With Quantitative Feature Fusion for Ultrasound Image Quality Identification. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2022; 10:1800712. [PMID: 36226132 PMCID: PMC9550163 DOI: 10.1109/jtehm.2022.3197923] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 07/04/2022] [Accepted: 08/03/2022] [Indexed: 11/07/2022]
Abstract
Ultrasound images are inherently susceptible to noise, which leads to several image quality issues. Hence, rating an image's quality is crucial, since diagnosing diseases requires accurate, high-quality ultrasound images. This research presents an intelligent architecture to rate the quality of ultrasound images. The formulated image quality recognition approach fuses features from a fuzzy convolutional neural network (fuzzy CNN) and a handcrafted feature extraction method. We implement the fuzzy layer between the last max-pooling layer and the fully connected layer of multiple state-of-the-art CNN models to handle the uncertainty of information. Moreover, the fuzzy CNN uses particle swarm optimization (PSO) as its optimizer. In addition, a novel quantitative feature extraction machine (QFEM) extracts handcrafted features from ultrasound images. The proposed method then uses different classifiers to predict image quality. The classifiers categorize ultrasound images into four types (normal, noisy, blurry, and distorted) instead of a binary classification into good- or poor-quality images. The proposed method exhibits significant performance in accuracy (99.62%), precision (99.62%), recall (99.61%), and F1-score (99.61%). This method can assist physicians in automatically rating informative ultrasound images, with dependable operation in real-time medical diagnosis.
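As a rough illustration of the feature-fusion idea, the sketch below concatenates pooled CNN features with a few simple handcrafted statistics and feeds them to a conventional classifier. The fuzzy layer, PSO optimizer, and QFEM features used by the authors are not reproduced; the backbone, the chosen statistics, and the SVM classifier are assumptions for illustration only.

```python
# Minimal sketch, assuming a generic backbone and simple statistics in place of
# the authors' fuzzy CNN + QFEM pipeline: deep and handcrafted features are
# concatenated and passed to a conventional classifier.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

backbone = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

def deep_features(images):
    """Pooled CNN features for a batch of images (N, 224, 224, 3)."""
    x = tf.keras.applications.vgg16.preprocess_input(images.astype("float32"))
    feats = backbone(x, training=False)
    return tf.reduce_mean(feats, axis=[1, 2]).numpy()  # global average pooling

def handcrafted_features(images):
    """Simple per-image quantitative statistics (mean, std, gradient energy)."""
    gray = images.mean(axis=-1)
    gy, gx = np.gradient(gray, axis=(1, 2))
    return np.stack([gray.mean(axis=(1, 2)),
                     gray.std(axis=(1, 2)),
                     (gx ** 2 + gy ** 2).mean(axis=(1, 2))], axis=1)

def fused_features(images):
    return np.concatenate([deep_features(images),
                           handcrafted_features(images)], axis=1)

# Hypothetical usage with four quality labels: 0=normal, 1=noisy, 2=blurry, 3=distorted.
# clf = SVC().fit(fused_features(train_images), train_labels)
# y_pred = clf.predict(fused_features(test_images))
```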
Collapse
Affiliation(s)
- Muhammad Minoar Hossain
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
| | - Md. Mahmodul Hasan
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
| | - Md. Abdur Rahim
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
| | - Mohammad Motiur Rahman
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
| | - Mohammad Abu Yousuf
- Institute of Information Technology, Jahangirnagar University, Savar, Dhaka, Bangladesh
| | - Samer Al-Ashhab
- Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
| | - Hanan F. Akhdar
- Department of Physics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
| | - Salem A. Alyami
- Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
| | - Akm Azad
- Faculty of Science, Engineering and Technology, Swinburne University of Technology Sydney, Parramatta, NSW, Australia
| | - Mohammad Ali Moni
- School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, QLD, Australia
| |
Collapse
|
47
|
Abstract
The operational challenge of a photovoltaic (PV) integrated system is the uncertainty (irregularity) of its future power output. Integration and correct operation can be achieved with accurate forecasting of the PV output power. A distinct artificial intelligence method was employed in the present study to forecast the PV output power and investigate its accuracy using endogenous data. Discrete wavelet transforms were used to decompose the PV output power into approximation and detail components. The decomposed PV output was fed into an adaptive neuro-fuzzy inference system (ANFIS) input model to forecast the short-term PV power output. Various wavelet mother functions were investigated, including Haar, Daubechies, Coiflets, and Symlets. The proposed model's performance was highly dependent on the input set and the wavelet mother function. The statistical performance of the wavelet-ANFIS was found to be better than that of the ANFIS and ANN models. In addition, wavelet-ANFIS with coif2 and sym4 offers the best precision among all the studied models. The results highlight that the combination of wavelet decomposition and the ANFIS model can be a helpful tool for accurate short-term PV output forecasting, yielding better efficiency and performance than the conventional model.
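The hybrid pipeline can be sketched as follows: the power series is split into approximation and detail components with a discrete wavelet transform, each component is forecast one step ahead from its own lagged values, and the component forecasts are summed. In the sketch below an MLP regressor stands in for the ANFIS model, and the wavelet ('coif2'), decomposition level, and lag length are assumptions.

```python
# Minimal sketch, assuming a generic regressor in place of ANFIS and an
# arbitrary wavelet/level/lag configuration, of wavelet-decomposition-based
# one-step-ahead forecasting.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def decompose(series, wavelet="coif2", level=2):
    """Return per-level reconstructed components that sum back to the series."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    components = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(keep, wavelet)[: len(series)])
    return components  # [approximation, detail_level, ..., detail_1]

def lagged(series, n_lags):
    X = np.stack([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

def forecast_next(series, n_lags=24):
    """One-step-ahead forecast: fit a model per component and sum the outputs."""
    total = 0.0
    for comp in decompose(series):
        X, y = lagged(comp, n_lags)
        model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
        total += model.predict(comp[-n_lags:].reshape(1, -1))[0]
    return total

# Hypothetical usage with an hourly PV output series (1-D array, kW):
# pv = np.loadtxt("pv_output.csv")
# print(forecast_next(pv))
```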
Collapse
|
48
|
Yang X, Dou H, Huang R, Xue W, Huang Y, Qian J, Zhang Y, Luo H, Guo H, Wang T, Xiong Y, Ni D. Agent With Warm Start and Adaptive Dynamic Termination for Plane Localization in 3D Ultrasound. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1950-1961. [PMID: 33784618 DOI: 10.1109/tmi.2021.3069663] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Accurate standard plane (SP) localization is the fundamental step for prenatal ultrasound (US) diagnosis. Typically, dozens of US SPs are collected to determine the clinical diagnosis. 2D US has to perform a scan for each SP, which is time-consuming and operator-dependent, whereas 3D US, containing multiple SPs in one shot, has the inherent advantages of less user dependency and greater efficiency. Automatically locating SPs in 3D US is very challenging due to the huge search space and large variations in fetal posture. Our previous study proposed a deep reinforcement learning (RL) framework with an alignment module and active termination to localize SPs in 3D US automatically. However, termination of the agent's search in RL is important and affects practical deployment. In this study, we enhance our previous RL framework with a newly designed adaptive dynamic termination that enables an early stop of the agent's search, saving up to 67% of inference time and thus boosting both the accuracy and efficiency of the RL framework. Furthermore, we validate the effectiveness and generalizability of our algorithm extensively on our in-house multi-organ datasets containing 433 fetal brain volumes, 519 fetal abdomen volumes, and 683 uterus volumes. Our approach achieves localization errors of 2.52 mm/10.26°, 2.48 mm/10.39°, 2.02 mm/10.48°, 2.00 mm/14.57°, 2.61 mm/9.71°, 3.09 mm/9.58°, and 1.49 mm/7.54° for the transcerebellar, transventricular, and transthalamic planes in the fetal brain, the abdominal plane in the fetal abdomen, and the mid-sagittal, transverse, and coronal planes in the uterus, respectively. Experimental results show that our method is general and has the potential to improve the efficiency and standardization of US scanning.
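Purely as an illustration of how an adaptive termination rule can shorten an iterative plane search (this is not the authors' exact mechanism), the sketch below stops the agent as soon as the plane parameters stop changing within a small tolerance over a recent window, instead of always running a fixed number of steps. The environment and agent interfaces are hypothetical placeholders.

```python
# Illustrative sketch of adaptive early termination for an iterative plane
# search. `env` and `agent` are hypothetical placeholders, not a real API.
import numpy as np

def search_plane(env, agent, max_steps=75, window=8, tol=1e-2):
    """Run the agent until the plane parameters converge or max_steps is reached."""
    plane = env.reset()                        # plane parameters, e.g. (cos_x, cos_y, cos_z, d)
    history = [np.asarray(plane, dtype=float)]
    steps_used = 0
    for step in range(max_steps):
        action = agent.act(env.observation())  # agent proposes a discrete adjustment
        plane = env.step(action)                # environment applies it to the plane
        history.append(np.asarray(plane, dtype=float))
        steps_used = step + 1
        # Adaptive termination: recent plane updates have stopped moving.
        if len(history) > window:
            recent = np.stack(history[-window:])
            if np.ptp(recent, axis=0).max() < tol:
                break                           # early stop saves the remaining steps
    return plane, steps_used
```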
Collapse
|
49
|
Hu X, Wang L, Yang X, Zhou X, Xue W, Cao Y, Liu S, Huang Y, Guo S, Shang N, Ni D, Gu N. Joint Landmark and Structure Learning for Automatic Evaluation of Developmental Dysplasia of the Hip. IEEE J Biomed Health Inform 2021; 26:345-358. [PMID: 34101608 DOI: 10.1109/jbhi.2021.3087494] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Ultrasound (US) screening of the infant hip is vital for the early diagnosis of developmental dysplasia of the hip (DDH). US diagnosis of DDH relies on measuring the alpha and beta angles that quantify hip joint development. These two angles are calculated from key anatomical landmarks and structures of the hip. However, this measurement process is not trivial for sonographers and usually requires a thorough understanding of complex anatomical structures. In this study, we propose a multi-task framework to jointly learn the relationships among landmarks and structures and automatically evaluate DDH. Our multi-task network is equipped with three novel modules. First, we adopt Mask R-CNN as the basic framework to detect and segment key anatomical structures, and we add a landmark detection branch to form a new multi-task framework. Second, we propose a novel shape similarity loss to refine incomplete anatomical structure predictions robustly and accurately. Third, we further incorporate a landmark-structure consistency prior to ensure agreement between the bony rim estimated from the segmented structure and the detected landmark. In our experiments, 1,231 US images of the infant hip from 632 patients were collected, of which 247 images from 126 patients were used for testing. The average errors in the alpha and beta angles are 2.221° and 2.899°, respectively. About 93% and 85% of the estimates of the alpha and beta angles, respectively, have errors of less than 5 degrees. Experimental results demonstrate that the proposed method can accurately and robustly automate the evaluation of DDH, showing great potential for clinical application.
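Once the landmarks are available, the angle measurement itself reduces to simple geometry. The sketch below computes alpha and beta as the angles between a baseline and the bony-roof and cartilage-roof lines, a simplified reading of the Graf method; the landmark names and coordinates are hypothetical, and the detection network itself is not shown.

```python
# Minimal sketch, assuming hypothetical landmark coordinates, of computing the
# Graf-style alpha and beta angles from detected lines in an infant hip image.
import numpy as np

def angle_between(p1, p2, q1, q2):
    """Acute angle in degrees between line p1->p2 and line q1->q2."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cos_t = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Hypothetical landmark coordinates (x, y) in pixels from the detector:
baseline = [(120, 40), (120, 260)]           # along the iliac wing
bony_roof = [(120, 200), (200, 250)]         # bony-roof line
cartilage_roof = [(120, 200), (190, 150)]    # cartilage-roof line toward the labrum

alpha = angle_between(*baseline, *bony_roof)
beta = angle_between(*baseline, *cartilage_roof)
print(f"alpha = {alpha:.1f} deg, beta = {beta:.1f} deg")
```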
Collapse
|
50
|
Zhen X, Qu R, Chen W, Wu W, Jiang X. The development of phosphorescent probes for in vitro and in vivo bioimaging. Biomater Sci 2021; 9:285-300. [PMID: 32756681 DOI: 10.1039/d0bm00819b] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Phosphorescence is a process that slowly releases photoexcitation energy after the removal of the excitation source. Although transition metal complexes and purely organic room-temperature phosphorescence (RTP) materials show excellent phosphorescence properties, their applications in in vitro and in vivo bioimaging are limited by their poor solubility in water. To overcome this issue, phosphorescent materials are modified with amphiphilic or hydrophilic polymers to endow them with biocompatibility. This review focuses on recent advances in the development of phosphorescent probes for in vitro and in vivo bioimaging. The photophysical mechanism and the design principles of transition metal complexes and purely organic RTP materials for stabilizing the triplet excited state to enhance phosphorescence are first discussed. Then, applications in in vitro and in vivo bioimaging using transition metal complexes, including iridium(III) complexes, platinum(II) complexes, and rhodium(I) complexes, as well as purely organic RTP materials, are summarized. Finally, the current challenges and perspectives for these emerging materials in bioimaging are discussed.
Collapse
Affiliation(s)
- Xu Zhen
- MOE Key Laboratory of High Performance Polymer Materials and Technology, and Department of Polymer Science & Engineering, College of Chemistry & Chemical Engineering, Nanjing University, Nanjing, 210093, P. R. China.
| | - Rui Qu
- MOE Key Laboratory of High Performance Polymer Materials and Technology, and Department of Polymer Science & Engineering, College of Chemistry & Chemical Engineering, Nanjing University, Nanjing, 210093, P. R. China.
| | - Weizhi Chen
- MOE Key Laboratory of High Performance Polymer Materials and Technology, and Department of Polymer Science & Engineering, College of Chemistry & Chemical Engineering, Nanjing University, Nanjing, 210093, P. R. China.
| | - Wei Wu
- MOE Key Laboratory of High Performance Polymer Materials and Technology, and Department of Polymer Science & Engineering, College of Chemistry & Chemical Engineering, Nanjing University, Nanjing, 210093, P. R. China.
| | - Xiqun Jiang
- MOE Key Laboratory of High Performance Polymer Materials and Technology, and Department of Polymer Science & Engineering, College of Chemistry & Chemical Engineering, Nanjing University, Nanjing, 210093, P. R. China.
| |
Collapse
|