1
Hild O, Berriet P, Nallet J, Salvi L, Lenoir M, Henriet J, Thiran JP, Auber F, Chaussy Y. Automation of Wilms' tumor segmentation by artificial intelligence. Cancer Imaging 2024; 24:83. PMID: 38956718; PMCID: PMC11218149; DOI: 10.1186/s40644-024-00729-0.
Abstract
BACKGROUND 3D reconstruction of Wilms' tumor offers several advantages but is not systematically performed because manual segmentation is extremely time-consuming. The objective of our study was to develop an artificial intelligence tool to automate the segmentation of tumors and kidneys in children. METHODS Manual segmentation was carried out by two experts on 14 CT scans. Segmentation of the Wilms' tumor and the neoplastic kidney was then performed automatically using the CNN U-Net, both as-is and trained according to the OV2ASSION method. The time saved for the expert was estimated as a function of the number of sections segmented automatically. RESULTS When segmentations were performed manually by two experts, inter-individual variability resulted in a Dice index of 0.95 for the tumor and 0.87 for the kidney. Fully automatic segmentation with the CNN U-Net yielded a poor Dice index of 0.69 for the Wilms' tumor and 0.27 for the kidney. With the OV2ASSION method, the Dice index varied with the number of manually segmented sections: for the Wilms' tumor and the neoplastic kidney respectively, from 0.97 and 0.94 for a gap of 1 (2 out of 3 sections segmented manually) to 0.94 and 0.86 for a gap of 10 (1 section out of 6 segmented manually). CONCLUSION Fully automated segmentation remains a challenge in medical image processing. Although existing neural networks such as U-Net can be used, we found their results unsatisfactory for segmenting neoplastic kidneys and Wilms' tumors in children. We developed an innovative CNN U-Net training method that segments the kidney and its tumor with the same precision as an expert while reducing the expert's intervention time by 80%.
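The Dice index used throughout this study measures the overlap between two segmentation masks. A minimal NumPy sketch (illustrative only, not the authors' code; the 4x4 masks are hypothetical):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient of two binary masks:
    2*|A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two toy 4x4 "segmentations" that disagree on a single pixel
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 foreground pixels
b = a.copy(); b[1, 1] = False                         # 3 foreground pixels
print(round(dice(a, b), 3))  # 2*3/(4+3) -> 0.857
```

A Dice of 0.95 between two experts, as reported above, therefore reflects near-complete but not pixel-perfect overlap.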
Affiliation(s)
- Olivier Hild: Department of Pediatric Surgery, CHU Besançon, 3 boulevard Fleming, Besançon, F-25000, France
- Pierre Berriet: Université de Franche-Comté, FEMTO-ST Institute, DISC, Besançon, F-25000, France
- Jérémie Nallet: Department of Pediatric Surgery, CHU Besançon, 3 boulevard Fleming, Besançon, F-25000, France
- Lorédane Salvi: Department of Pediatric Surgery, CHU Besançon, 3 boulevard Fleming, Besançon, F-25000, France
- Marion Lenoir: Department of Radiology, CHU Besançon, Besançon, F-25000, France
- Julien Henriet: Université de Franche-Comté, FEMTO-ST Institute, DISC, Besançon, F-25000, France
- Jean-Philippe Thiran: Signal Processing Laboratory 5 (LTS5), Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, 1015, Switzerland; University Hospital Center (CHUV) and University of Lausanne (UNIL), Lausanne, 1011, Switzerland
- Frédéric Auber: Department of Pediatric Surgery, CHU Besançon, 3 boulevard Fleming, Besançon, F-25000, France; Université de Franche-Comté, SINERGIES, Besançon, F-25000, France
- Yann Chaussy: Department of Pediatric Surgery, CHU Besançon, 3 boulevard Fleming, Besançon, F-25000, France; Université de Franche-Comté, SINERGIES, Besançon, F-25000, France
2
Li S, Wang H, Meng Y, Zhang C, Song Z. Multi-organ segmentation: a progressive exploration of learning paradigms under scarce annotation. Phys Med Biol 2024; 69:11TR01. PMID: 38479023; DOI: 10.1088/1361-6560/ad33b5.
Abstract
Precise delineation of multiple organs or abnormal regions in the human body from medical images plays an essential role in computer-aided diagnosis, surgical simulation, image-guided interventions, and especially radiotherapy treatment planning. It is therefore of great significance to explore automatic segmentation approaches, among which deep learning-based approaches have evolved rapidly and made remarkable progress in multi-organ segmentation. However, obtaining an appropriately sized, fine-grained annotated dataset of multiple organs is extremely hard and expensive. Such annotation scarcity limits the development of high-performance multi-organ segmentation models but has promoted many annotation-efficient learning paradigms. Among these, transfer learning leveraging external datasets, semi-supervised learning incorporating unannotated data, and partially supervised learning integrating partially labeled datasets have become the dominant ways to address this dilemma in multi-organ segmentation. We first review fully supervised methods, then present a comprehensive and systematic elaboration of the three aforementioned learning paradigms in the context of multi-organ segmentation from both technical and methodological perspectives, and finally summarize their challenges and future trends.
Affiliation(s)
- Shiman Li, Haoran Wang, Yucong Meng, Chenxi Zhang, Zhijian Song (all authors): Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
3
Zhao Q, Chang CW, Yang X, Zhao L. Robust explanation supervision for false positive reduction in pulmonary nodule detection. Med Phys 2024; 51:1687-1701. PMID: 38224306; PMCID: PMC10939846; DOI: 10.1002/mp.16937.
Abstract
BACKGROUND Lung cancer is the deadliest and second most common cancer in the United States, in part because the lack of early symptoms delays diagnosis. Pulmonary nodules are small abnormal regions that can be potentially correlated with the occurrence of lung cancer. Early detection of these nodules is critical because it can significantly improve patient survival rates. Thoracic thin-sliced computed tomography (CT) scanning has emerged as a widely used method for the diagnosis and prognosis of lung abnormalities. PURPOSE The standard clinical workflow for detecting pulmonary nodules relies on radiologists analyzing CT images to assess the risk factors of cancerous nodules. However, this approach can be error-prone due to the varied causes of nodule formation, such as pollutants and infections. Deep learning (DL) algorithms have recently demonstrated remarkable success in medical image classification and segmentation. As DL becomes an ever more important assistant to radiologists in nodule detection, it is imperative that the DL algorithm and the radiologist understand each other's decisions. This study aims to develop a framework integrating explainable AI methods to achieve accurate pulmonary nodule detection. METHODS A robust and explainable detection (RXD) framework is proposed, focusing on reducing false positives in pulmonary nodule detection. Its implementation is based on an explanation supervision method, which uses radiologists' nodule contours as supervision signals to force the model to learn nodule morphologies, improving its ability to learn from small datasets. In addition, two imputation methods are applied to the nodule region annotations to reduce the noise within human annotations and give the model robust attributions that meet human expectations. The 480, 265, and 265 CT image sets from the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset are used for training, validation, and testing. RESULTS Using only 10, 30, 50, and 100 training samples sequentially, our method consistently improves the classification performance and explanation quality over the baseline in terms of Area Under the Curve (AUC) and Intersection over Union (IoU). In particular, our framework with a learnable imputation kernel improves IoU over the baseline by 24.0% to 80.0%. A pre-defined Gaussian imputation kernel achieves an even greater improvement, from 38.4% to 118.8% over the baseline. Compared to the baseline trained on 100 samples, our method shows a smaller drop in AUC when trained on fewer samples. A comprehensive comparison of interpretability shows that our method aligns better with expert opinion. CONCLUSIONS A pulmonary nodule detection framework was demonstrated using public thoracic CT image datasets. The framework integrates the robust explanation supervision (RES) technique to ensure the performance of nodule classification and morphology. The method can reduce radiologists' workload and enable them to focus on the diagnosis and prognosis of potentially cancerous pulmonary nodules at an early stage, improving outcomes for lung cancer patients.
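The IoU metric used to score explanation quality above can be sketched in a few lines of NumPy (illustrative only; the toy masks are hypothetical, and the relation to the Dice score is noted for comparison):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over Union (Jaccard index) of two binary masks.
    Related to the Dice score by: Dice = 2*IoU / (1 + IoU)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(a, b).sum() / union

pred = np.zeros((4, 4), dtype=bool); pred[0:2, 0:2] = True  # model attribution
true = np.zeros((4, 4), dtype=bool); true[1:3, 0:2] = True  # expert contour
print(round(iou(pred, true), 3))  # overlap 2 px, union 6 px -> 0.333
```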
Affiliation(s)
- Qilong Zhao: Department of Computer Science, Emory University, Atlanta, GA 30308
- Chih-Wei Chang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30308
- Liang Zhao: Department of Computer Science, Emory University, Atlanta, GA 30308
4
Hossain MSA, Gul S, Chowdhury MEH, Khan MS, Sumon MSI, Bhuiyan EH, Khandakar A, Hossain M, Sadique A, Al-Hashimi I, Ayari MA, Mahmud S, Alqahtani A. Deep Learning Framework for Liver Segmentation from T1-Weighted MRI Images. Sensors (Basel) 2023; 23:8890. PMID: 37960589; PMCID: PMC10650219; DOI: 10.3390/s23218890.
Abstract
The human liver exhibits variable characteristics and anatomical information that is often ambiguous in radiological images. Machine learning can be of great assistance in automatically segmenting the liver in radiological images, which can be further processed for computer-aided diagnosis. Magnetic resonance imaging (MRI) is preferred by clinicians over volumetric abdominal computerized tomography (CT) for liver pathology diagnosis due to its superior representation of soft tissues. The convenience of Hounsfield unit (HU) based preprocessing in CT scans is not available in MRI, making automatic segmentation of MR images challenging. This study investigates multiple state-of-the-art segmentation networks for liver segmentation from volumetric MRI images. Here, T1-weighted (in-phase) scans are investigated using expert-labeled liver masks from a public dataset of 20 patients (647 MR slices) from the Combined Healthy Abdominal Organ Segmentation (CHAOS) grand challenge. T1-weighted images were chosen because they demonstrate brighter fat content, providing enhanced input for the segmentation task. Twenty-four state-of-the-art segmentation networks with varying depths of dense, residual, and inception encoder and decoder backbones were investigated, and a novel cascaded network is proposed to segment axial liver slices. The proposed framework outperforms existing approaches reported in the literature for the liver segmentation task (on the same test set), with a Dice similarity coefficient (DSC) of 95.15% and an intersection over union (IoU) of 92.10%.
Affiliation(s)
- Md. Sakib Abrar Hossain: NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh; Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Sidra Gul: Department of Computer Systems Engineering, University of Engineering and Technology Peshawar, Peshawar 25000, Pakistan; Artificial Intelligence in Healthcare, IIPL, National Center of Artificial Intelligence, Peshawar 25000, Pakistan
- Enamul Haque Bhuiyan: Center for Magnetic Resonance Research, University of Illinois Chicago, Chicago, IL 60607, USA
- Amith Khandakar: Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Maqsud Hossain: NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
- Abdus Sadique: NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
- Sakib Mahmud: Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Abdulrahman Alqahtani: Department of Medical Equipment Technology, College of Applied Medical Sciences, Majmaah University, Majmaah City 11952, Saudi Arabia; Department of Biomedical Technology, College of Applied Medical Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
5
Feng R, Deb B, Ganesan P, Tjong FVY, Rogers AJ, Ruipérez-Campillo S, Somani S, Clopton P, Baykaner T, Rodrigo M, Zou J, Haddad F, Zahari M, Narayan SM. Segmenting computed tomograms for cardiac ablation using machine learning leveraged by domain knowledge encoding. Front Cardiovasc Med 2023; 10:1189293. PMID: 37849936; PMCID: PMC10577270; DOI: 10.3389/fcvm.2023.1189293.
Abstract
Background Segmentation of computed tomography (CT) is important for many clinical procedures, including personalized cardiac ablation for the management of cardiac arrhythmias. While segmentation can be automated by machine learning (ML), it is limited by the need for large labeled training datasets that may be difficult to obtain. We set out to combine ML of cardiac CT with domain knowledge that encodes cardiac geometry, reducing the need for large training datasets, and tested the approach in independent datasets and in a prospective study of atrial fibrillation (AF) ablation. Methods We mathematically represented atrial anatomy with simple geometric shapes and derived a model to parse cardiac structures in a small set of N = 6 digital hearts. The model, termed "virtual dissection," was used to train ML to segment cardiac CT in N = 20 patients, then tested in independent datasets and in a prospective study. Results In independent test cohorts (N = 160) from two institutions with different CT scanners, atrial structures were accurately segmented, with Dice scores of 96.7% in internal (IQR: 95.3%-97.7%) and 93.5% in external (IQR: 91.9%-94.7%) test data, and good agreement with experts (r = 0.99; p < 0.0001). In a prospective study of 42 patients at ablation, this approach reduced segmentation time by 85% (2.3 ± 0.8 vs. 15.0 ± 6.9 min, p < 0.0001), yet provided Dice scores similar to experts (93.9% (IQR: 93.0%-94.6%) vs. 94.4% (IQR: 92.8%-95.7%), p = NS). Conclusions Encoding cardiac geometry with mathematical models greatly accelerated the training of ML to segment CT, reducing the need for large training sets while retaining accuracy in independent test data. Combining ML with domain knowledge may have broad applications.
Affiliation(s)
- Ruibin Feng: Department of Medicine and Cardiovascular Institute, Stanford University, Stanford, CA, United States
- Brototo Deb: Department of Medicine and Cardiovascular Institute, Stanford University, Stanford, CA, United States
- Prasanth Ganesan: Department of Medicine and Cardiovascular Institute, Stanford University, Stanford, CA, United States
- Fleur V. Y. Tjong: Department of Medicine and Cardiovascular Institute, Stanford University, Stanford, CA, United States; Heart Center, Department of Clinical and Experimental Cardiology, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands
- Albert J. Rogers: Department of Medicine and Cardiovascular Institute, Stanford University, Stanford, CA, United States
- Samuel Ruipérez-Campillo: Department of Medicine and Cardiovascular Institute, Stanford University, Stanford, CA, United States; Bioengineering Department, University of California, Berkeley, Berkeley, CA, United States
- Sulaiman Somani: Department of Medicine and Cardiovascular Institute, Stanford University, Stanford, CA, United States
- Paul Clopton: Department of Medicine and Cardiovascular Institute, Stanford University, Stanford, CA, United States
- Tina Baykaner: Department of Medicine and Cardiovascular Institute, Stanford University, Stanford, CA, United States
- Miguel Rodrigo: Department of Medicine and Cardiovascular Institute, Stanford University, Stanford, CA, United States; CoMMLab, Universitat Politècnica de València, Valencia, Spain
- James Zou: Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Francois Haddad: Department of Medicine and Cardiovascular Institute, Stanford University, Stanford, CA, United States
- Matei Zahari: Department of Computer Science, Stanford University, Stanford, CA, United States
- Sanjiv M. Narayan: Department of Medicine and Cardiovascular Institute, Stanford University, Stanford, CA, United States
6
Jayaprakash N, Song W, Toth V, Vardhan A, Levy T, Tomaio J, Qanud K, Mughrabi I, Chang YC, Rob M, Daytz A, Abbas A, Nassrallah Z, Volpe BT, Tracey KJ, Al-Abed Y, Datta-Chaudhuri T, Miller L, Barbe MF, Lee SC, Zanos TP, Zanos S. Organ- and function-specific anatomical organization of vagal fibers supports fascicular vagus nerve stimulation. Brain Stimul 2023; 16:484-506. PMID: 36773779; DOI: 10.1016/j.brs.2023.02.003.
Abstract
Vagal fibers travel inside fascicles and form branches to innervate organs and regulate organ functions. Existing vagus nerve stimulation (VNS) therapies activate vagal fibers non-selectively, often resulting in reduced efficacy and side effects from non-targeted organs. The transverse and longitudinal arrangement of fibers inside the vagal trunk, with respect to the functions they mediate and the organs they innervate, is unknown; however, it is crucial for selective VNS. Using micro-computed tomography imaging, we tracked fascicular trajectories and found that, in swine, sensory and motor fascicles are spatially separated cephalad, close to the nodose ganglion, and merge caudad, towards the lower cervical and upper thoracic region; larynx-, heart- and lung-specific fascicles are separated caudad and progressively merge cephalad. Using quantified immunohistochemistry at the single-fiber level, we identified and characterized all vagal fibers and found that fibers of different morphological types are differentially distributed in fascicles: myelinated afferents and efferents occupy separate fascicles, myelinated and unmyelinated efferents also occupy separate fascicles, and small unmyelinated afferents are widely distributed within most fascicles. We developed a multi-contact cuff electrode to accommodate the fascicular structure of the vagal trunk and used it to deliver fascicle-selective cervical VNS in anesthetized and awake swine. Compound action potentials from distinct fiber types, and physiological responses from different organs, including laryngeal muscle, cough, breathing, and heart rate responses, were elicited in a radially asymmetric manner, with consistent angular separations that agree with the documented fascicular organization. These results indicate that fibers in the trunk of the vagus nerve are anatomically organized according to the functions they mediate and the organs they innervate, and can be asymmetrically activated by fascicular cervical VNS.
Affiliation(s)
- Weiguo Song: Feinstein Institutes for Medical Research, Manhasset, NY, USA
- Viktor Toth: Feinstein Institutes for Medical Research, Manhasset, NY, USA
- Todd Levy: Feinstein Institutes for Medical Research, Manhasset, NY, USA
- Khaled Qanud: Feinstein Institutes for Medical Research, Manhasset, NY, USA
- Yao-Chuan Chang: Feinstein Institutes for Medical Research, Manhasset, NY, USA
- Moontahinaz Rob: Feinstein Institutes for Medical Research, Manhasset, NY, USA
- Anna Daytz: Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Adam Abbas: Feinstein Institutes for Medical Research, Manhasset, NY, USA
- Zeinab Nassrallah: Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Bruce T Volpe: Feinstein Institutes for Medical Research, Manhasset, NY, USA
- Kevin J Tracey: Feinstein Institutes for Medical Research, Manhasset, NY, USA
- Yousef Al-Abed: Feinstein Institutes for Medical Research, Manhasset, NY, USA
- Larry Miller: Feinstein Institutes for Medical Research, Manhasset, NY, USA
- Sunhee C Lee: Feinstein Institutes for Medical Research, Manhasset, NY, USA
- Stavros Zanos: Feinstein Institutes for Medical Research, Manhasset, NY, USA; Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA; Elmezzi Graduate School of Molecular Medicine, Manhasset, NY, USA
7
Li Y, Liu J, Yang X, Xu F, Wang L, He C, Lin L, Qing H, Ren J, Zhou P. Radiomic and quantitative-semantic models of low-dose computed tomography for predicting the poorly differentiated invasive non-mucinous pulmonary adenocarcinoma. La Radiologia Medica 2023; 128:191-202. PMID: 36637740; DOI: 10.1007/s11547-023-01591-z.
Abstract
PURPOSE Poorly differentiated invasive non-mucinous pulmonary adenocarcinoma (IPA), under the novel grading system, is related to poor prognosis, with a high risk of lymph node metastasis and local recurrence. This study aimed to build radiomic and quantitative-semantic models of low-dose computed tomography (LDCT) to preoperatively predict poorly differentiated IPA in nodules with a solid component, and to compare their diagnostic performance with that of radiologists. MATERIALS AND METHODS A total of 396 nodules from 388 eligible patients, who underwent LDCT scans within 2 weeks before surgery and were pathologically diagnosed with IPA, were retrospectively enrolled between July 2018 and December 2021. Nodules were divided into two independent cohorts according to scanner: a primary cohort (195 well/moderately differentiated and 64 poorly differentiated) and a validation cohort (104 well/moderately differentiated and 33 poorly differentiated). The radiomic and quantitative-semantic models were built using multivariable logistic regression. The diagnostic performance of the models and radiologists was assessed by the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity. RESULTS No significant differences in AUC were found between the radiomic and quantitative-semantic models in the primary and validation cohorts (0.921 vs. 0.923, P = 0.846 and 0.938 vs. 0.911, P = 0.161). Both models outperformed three radiologists in the validation cohort (all P < 0.05). CONCLUSIONS The radiomic and quantitative-semantic models of LDCT, which identified poorly differentiated IPA with excellent diagnostic performance, might guide therapeutic decision making, such as choosing an appropriate surgical method or adjuvant chemotherapy.
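The AUC used to compare these models equals the probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal rank-based NumPy sketch of that view (the scores below are hypothetical, not the study's data):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC as the normalized Mann-Whitney U statistic:
    P(score_pos > score_neg) + 0.5 * P(tie)."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    # Compare every positive score against every negative score
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical model scores: poorly vs. well/moderately differentiated nodules
auc = roc_auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])
print(round(auc, 3))  # 8 of 9 pairs correctly ordered -> 0.889
```

This pairwise form is O(n*m) and meant only to show what an AUC of 0.92 asserts; production code would use a sorting-based implementation.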
Affiliation(s)
- Yong Li, Jieke Liu, Xi Yang, Fuyang Xu, Lu Wang, Changjiu He, Libo Lin, Haomiao Qing, Jing Ren, Peng Zhou (all authors): Department of Radiology, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, No. 55, Section 4, South Renmin Road, Chengdu, 610041, Sichuan, China
8
Shi H, Lee WL. Image segmentation using K-means clustering, Gabor filter and moving mesh method. The Imaging Science Journal 2023. DOI: 10.1080/13682199.2022.2161159.
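Of the title's three-stage pipeline, the K-means stage is the simplest to sketch. Below is a minimal NumPy version of Lloyd's algorithm on scalar per-pixel features, run on a hypothetical toy image; the Gabor filtering and moving-mesh stages of the paper are omitted, and this is not the authors' implementation:

```python
import numpy as np

def kmeans_1d(values, k, iters=20):
    """Plain Lloyd's K-means on scalar per-pixel features
    (raw intensities here; Gabor responses in the paper's pipeline)."""
    v = np.asarray(values, dtype=float).ravel()
    centers = np.quantile(v, np.linspace(0.0, 1.0, k))  # spread initial centers
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            members = labels == j
            if members.any():
                centers[j] = v[members].mean()
    return labels, centers

# Toy "image": dark background (~0.1) with a bright 4x4 object (~0.9)
img = np.full((8, 8), 0.1)
img[2:6, 2:6] = 0.9
labels, centers = kmeans_1d(img, k=2)
mask = labels.reshape(img.shape) == np.argmax(centers)  # brighter cluster
print(mask.sum())  # the 16-pixel object is recovered
```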
Affiliation(s)
- Hongjian Shi: Faculty of Science and Technology, BNU-HKBU United International College, Zhuhai, People's Republic of China
- Wan-Lung Lee: Faculty of Science and Technology, BNU-HKBU United International College, Zhuhai, People's Republic of China
9
Liu BS, Valenzuela CD, Mentzer KL, Wagner WL, Khalil HA, Chen Z, Ackermann M, Mentzer SJ. Topography of pleural epithelial structure enabled by en face isolation and machine learning. J Cell Physiol 2023; 238:274-284. PMID: 36502471; PMCID: PMC9845181; DOI: 10.1002/jcp.30927.
Abstract
Pleural epithelial adaptations to mechanical stress are relevant to both normal lung function and parenchymal lung diseases. Assessing regional differences in mechanical stress, however, has been complicated by the nonlinear stress-strain properties of the lung and the large displacements that occur with ventilation. Moreover, there is no reliable method of isolating pleural epithelium for structural studies. To define the topographic variation in pleural structure, we developed a method of en face harvest of murine pleural epithelium. Silver staining was used to highlight cell borders and facilitate imaging with light microscopy. Machine learning and watershed segmentation were used to measure the area and perimeter of the isolated pleural epithelial cells. In the deflated lung at residual volume, pleural epithelial cells were significantly larger at the apex (624 ± 247 μm2) than in basilar regions of the lung (471 ± 119 μm2) (p < 0.001). The distortion of apical epithelial cells was consistent with a vertical gradient of pleural pressures. To assess epithelial changes with inflation, the pleura was studied at total lung capacity. Between residual volume and total lung capacity, the average epithelial cell area increased 57% and the average perimeter increased 27%; this increase was less than half the percent change predicted by uniform or isotropic expansion of the lung. We conclude that structured analysis of pleural epithelial cells complements studies of pulmonary microstructure and provides useful insights into the regional distribution of mechanical stresses in the lung.
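The per-cell area and perimeter measurements described above start from a labeled segmentation (e.g. a watershed output). A hedged NumPy sketch of that downstream measurement step only, using a hypothetical toy label image (the watershed itself would come from a library such as scikit-image):

```python
import numpy as np

def region_stats(labels, px_area=1.0, px_len=1.0):
    """Per-region area and 4-connected boundary length from a labeled
    segmentation (label 0 = background), e.g. a watershed output."""
    labels = np.asarray(labels)
    n = labels.max()
    areas = np.bincount(labels.ravel(), minlength=n + 1)[1:] * px_area
    # Boundary length: count pixel edges where the label changes,
    # including edges on the image border (zero padding).
    padded = np.pad(labels, 1, constant_values=0)
    perim = np.zeros(n + 1, dtype=float)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        neigh = np.roll(padded, shift, axis=(0, 1))
        edge = (padded != neigh) & (padded > 0)
        perim += np.bincount(padded[edge], minlength=n + 1)
    return areas, perim[1:] * px_len

# Two toy "cells": a 2x2 square and a 3x1 strip
lab = np.zeros((5, 5), dtype=int)
lab[1:3, 1:3] = 1
lab[4, 0:3] = 2
areas, perims = region_stats(lab)  # areas [4, 3], boundary lengths [8, 8]
```

With real data, `px_area` and `px_len` would carry the microscope's pixel calibration so areas come out in μm2.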
Affiliation(s)
- Betty S. Liu: Laboratory of Adaptive and Regenerative Biology, Brigham & Women’s Hospital, Harvard Medical School, Boston, MA
- Cristian D. Valenzuela: Laboratory of Adaptive and Regenerative Biology, Brigham & Women’s Hospital, Harvard Medical School, Boston, MA
- Katherine L. Mentzer: Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA
- Willi L. Wagner: Translational Lung Research Center, Department of Diagnostic and Interventional Radiology, University of Heidelberg, Heidelberg, Germany
- Hassan A. Khalil: Laboratory of Adaptive and Regenerative Biology, Brigham & Women’s Hospital, Harvard Medical School, Boston, MA
- Zi Chen: Laboratory of Adaptive and Regenerative Biology, Brigham & Women’s Hospital, Harvard Medical School, Boston, MA
- Maximilian Ackermann: Institute of Functional and Clinical Anatomy, University Medical Center of the Johannes Gutenberg-University, Mainz, Germany
- Steven J. Mentzer: Laboratory of Adaptive and Regenerative Biology, Brigham & Women’s Hospital, Harvard Medical School, Boston, MA
10
Tamash Y, Hammer N, Varga I, Supilnikov A, Iukhimetc S. Arterial Blood Supply of the Mesosalpinx Appears Segmentally Organized in Absence of Uterine Tubes Arteries. Physiol Res 2022. DOI: 10.33549/physiolres.935015.
Abstract
Arterial branches to the uterus and ovaries that pass through the mesosalpinx contribute significantly to the maintenance of the ovarian reserve. The arterial supply of the uterine tube in particular is provided by a number of anastomoses between the uterine and ovarian vessels. Knowledge of these morphologic peculiarities will allow the main contributors to be identified, especially during blood-flow ultrasound examination for the purpose of ovary-preserving surgery. This study aimed at identifying landmarks, especially for so-called low-flow tubal vessels. Arteries of 17 female Thiel-embalmed bodies were studied along three preselected paramedian segments, and measurements were taken. A section was made through the center of the ovary perpendicular to the uterine tube, and the mesosalpinx was divided into three equivalent zones: upper, middle, and lower thirds. The surface area of the mesosalpinx averaged 1088 ± 62 mm2. Macroscopically visible vessels were present in 47.7 ± 7.1% of the mesosalpinx zones. The lower third of the mesosalpinx was the thickest, averaging 2.4 ± 1.5 mm. One to three tubal branches were identified in the middle third of the mesosalpinx. Arterial anastomoses were found in the upper segment of the mesosalpinx, but no marginal vessel supplying the fallopian tube could be found. Statistically significant moderate positive correlations were established between the diameters of the mesosalpingeal arteries across the three zones. The mesosalpinx, uterine tube, and ovary form areas of segmental blood supply. Variants of tubal vessels appear to be a sparse source of blood supply.
11
Nie K, Xiao Y. Radiomics in clinical trials: perspectives on standardization. Phys Med Biol 2022; 68. [PMID: 36384049 DOI: 10.1088/1361-6560/aca388]
Abstract
The term biomarker describes a biological measure of disease behavior. Existing imaging biomarkers are associated with known tissue biological characteristics and follow a well-established roadmap for implementation in routine clinical practice. Recently, a new quantitative imaging analysis approach named radiomics has emerged. It refers to the extraction of a large number of advanced imaging features with high-throughput computing. Extensive research has demonstrated its value in predicting disease behavior, progression, and response to therapeutic options. However, there are numerous challenges to establishing it as a clinically viable solution, including lack of reproducibility and transparency. Its data-driven nature also does not offer insights into the underpinning biology of the observed relationships. As such, additional effort is needed to establish it as a qualified biomarker to inform clinical decisions. Here we review the technical difficulties encountered in the clinical applications of radiomics and current efforts to address some of these challenges in clinical trial designs. By addressing these challenges, the true potential of radiomics can be unleashed.
Affiliation(s)
- Ke Nie
- Rutgers-Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, Department of Radiation Oncology, New Brunswick, NJ, 08901, United States of America
- Ying Xiao
- University of Pennsylvania, Department of Radiation Oncology, 3400 Civic Center Blvd, TRC-2 West Philadelphia, PA 19104, United States of America
12
Amruthalingam L, Gottfrois P, Gonzalez Jimenez A, Gökduman B, Kunz M, Koller T, Pouly M, Navarini A. Improved diagnosis by automated macro- and micro-anatomical region mapping of skin photographs. J Eur Acad Dermatol Venereol 2022; 36:2525-2532. [PMID: 35924423 PMCID: PMC9804282 DOI: 10.1111/jdv.18476]
Abstract
BACKGROUND The exact location of skin lesions is key in clinical dermatology. On one hand, it supports differential diagnosis (DD) since most skin conditions have specific predilection sites. On the other hand, location matters for dermatosurgical interventions. In practice, lesion evaluation is not well standardized and anatomical descriptions vary or are lacking altogether. Automated determination of anatomical location could benefit both situations. OBJECTIVE Establish an automated method to determine anatomical regions in clinical patient pictures and evaluate the gain in DD performance of a deep learning model (DLM) when trained with lesion locations and images. METHODS Retrospective study based on three datasets: macro-anatomy for the main body regions with 6000 patient pictures partially labelled by a student, micro-anatomy for the ear region with 182 pictures labelled by a student, and DD with 3347 pictures of 16 diseases determined by dermatologists in clinical settings. For each dataset, a DLM was trained and evaluated on an independent test set. The primary outcome measures were the precision and sensitivity with 95% CI. For DD, we compared the performance of a DLM trained with lesion pictures only with a DLM trained with both pictures and locations. RESULTS The average precision and sensitivity were 85% (CI 84-86), 84% (CI 83-85) for macro-anatomy, 81% (CI 80-83), 80% (CI 77-83) for micro-anatomy and 82% (CI 78-85), 81% (CI 77-84) for DD. We observed an improvement in DD performance of 6% (McNemar test P-value 0.0009) for both average precision and sensitivity when training with both lesion pictures and locations. CONCLUSION Including location can be beneficial for DD DLM performance. The proposed method can generate body region maps from patient pictures and even reaches surgery-relevant anatomical precision, e.g., in the ear region. Our method enables automated search of large clinical databases and makes targeted anatomical image retrieval possible.
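The McNemar test cited in the results compares two paired classifiers (here, the DLM with and without location input) using only the discordant predictions. A minimal exact-binomial sketch in Python; the discordant counts below are illustrative, not taken from the paper:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value from the discordant counts:
    b = cases model A got right and model B wrong, c = the reverse."""
    n = b + c
    k = min(b, c)
    # Under H0 (no difference), the discordant counts follow Binomial(n, 0.5).
    p = 2 * sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(1.0, p)

# Illustrative: 1 vs 9 discordant pairs out of a paired test set.
print(round(mcnemar_exact(1, 9), 4))  # 0.0215 -> significant at 0.05
```

Only the disagreements between the two models enter the test, which is why it suits paired comparisons on the same test images.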
Affiliation(s)
- L. Amruthalingam
- Department of Biomedical Engineering, University of Basel, Basel, Switzerland; Lucerne School of Computer Science and Information Technology, Lucerne University of Applied Sciences and Arts, Lucerne, Switzerland
- P. Gottfrois
- Department of Biomedical Engineering, University of Basel, Basel, Switzerland
- B. Gökduman
- Lucerne School of Computer Science and Information Technology, Lucerne University of Applied Sciences and Arts, Lucerne, Switzerland
- M. Kunz
- Department of Health Sciences and Technology, Swiss Federal Institute of Technology, Zurich, Switzerland
- T. Koller
- Department of Dermatology, University Hospital of Basel, Basel, Switzerland
- M. Pouly
- Department of Dermatology, University Hospital of Basel, Basel, Switzerland
- A.A. Navarini
- Department of Health Sciences and Technology, Swiss Federal Institute of Technology, Zurich, Switzerland
13
Montalt-Tordera J, Pajaziti E, Jones R, Sauvage E, Puranik R, Singh AAV, Capelli C, Steeden J, Schievano S, Muthurangu V. Automatic segmentation of the great arteries for computational hemodynamic assessment. J Cardiovasc Magn Reson 2022; 24:57. [PMID: 36336682 PMCID: PMC9639271 DOI: 10.1186/s12968-022-00891-z]
Abstract
BACKGROUND Computational fluid dynamics (CFD) is increasingly used for the assessment of blood flow conditions in patients with congenital heart disease (CHD). This requires patient-specific anatomy, typically obtained from segmented 3D cardiovascular magnetic resonance (CMR) images. However, segmentation is time-consuming and requires expert input. This study aims to develop and validate a machine learning (ML) method for segmentation of the aorta and pulmonary arteries for CFD studies. METHODS 90 CHD patients were retrospectively selected for this study. 3D CMR images were manually segmented to obtain ground-truth (GT) background, aorta and pulmonary artery labels. These were used to train and optimize a U-Net model, using a 70-10-10 train-validation-test split. Segmentation performance was primarily evaluated using Dice score. CFD simulations were set up from GT and ML segmentations using a semi-automatic meshing and simulation pipeline. Mean pressure and velocity fields across 99 planes along the vessel centrelines were extracted, and a mean average percentage error (MAPE) was calculated for each vessel pair (ML vs GT). A second observer (SO) segmented the test dataset for assessment of inter-observer variability. Friedman tests were used to compare ML vs GT, SO vs GT and ML vs SO metrics, and pressure/velocity field errors. RESULTS The network's Dice score (ML vs GT) was 0.945 (interquartile range: 0.929-0.955) for the aorta and 0.885 (0.851-0.899) for the pulmonary arteries. Differences with the inter-observer Dice score (SO vs GT) and ML vs SO Dice scores were not statistically significant for either aorta or pulmonary arteries (p = 0.741, p = 0.061). The ML vs GT MAPEs for pressure and velocity in the aorta were 10.1% (8.5-15.7%) and 4.1% (3.1-6.9%), respectively, and for the pulmonary arteries 14.6% (11.5-23.2%) and 6.3% (4.3-7.9%), respectively. Inter-observer (SO vs GT) and ML vs SO pressure and velocity MAPEs were of a similar magnitude to ML vs GT (p > 0.2). CONCLUSIONS ML can successfully segment the great vessels for CFD, with errors similar to inter-observer variability. This fast, automatic method reduces the time and effort needed for CFD analysis, making it more attractive for routine clinical use.
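The Dice score used throughout these segmentation studies is twice the overlap of two masks divided by their combined size. A minimal sketch with plain Python sets of voxel indices (the masks below are illustrative):

```python
def dice(a: set, b: set) -> float:
    """Dice similarity coefficient between two voxel-index sets."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly, by convention
    return 2 * len(a & b) / (len(a) + len(b))

# Toy 2D masks: ground truth vs an imperfect prediction.
ground_truth = {(0, 0), (0, 1), (1, 0), (1, 1)}
prediction   = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(ground_truth, prediction))  # 2*3/(4+4) = 0.75
```

A Dice of 0.945, as reported for the aorta here, therefore means the predicted and ground-truth masks share almost all of their voxels.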
Affiliation(s)
- Rod Jones
- Great Ormond Street Hospital, London, UK
- Emilie Sauvage
- UCL Institute of Cardiovascular Science, UCL, London, UK
- Rajesh Puranik
- Children’s Hospital at Westmead, Sydney, Australia
- Faculty of Medicine and Health, University of Sydney, Sydney, Australia
- Aakansha Ajay Vir Singh
- Children’s Hospital at Westmead, Sydney, Australia
- Faculty of Medicine and Health, University of Sydney, Sydney, Australia
14
Du X, He Y. Application of CT Multimodal Images in Rehabilitation Monitoring of Long-Distance Running. SCANNING 2022; 2022:6425448. [PMID: 36263095 PMCID: PMC9553550 DOI: 10.1155/2022/6425448]
Abstract
In order to monitor the rehabilitation of athletes injured in long-distance running, the authors propose a rehabilitation monitoring method based on CT multimodal images. The method integrates the latest multimodal imaging techniques into CT to improve accuracy, segments the CT multimodal images with medical segmentation methods, and analyzes the segmented images to support rehabilitation treatment of long-distance runners. Experimental results show that the total time taken by the proposed method is 10.9 hours, with an average time of 8 seconds, which is much shorter than the other two control methods. In conclusion, the proposed method allows for better rehabilitation monitoring of long-distance running sports injuries.
Affiliation(s)
- Xufeng Du
- Physical Education Department, Shanxi Medical University, Taiyuan, Shanxi 030001, China
- Yaye He
- Science and Education Department, Taiyuan Central Hospital, Taiyuan, Shanxi 030001, China
15
Analysis of facial ultrasonography images based on deep learning. Sci Rep 2022; 12:16480. [PMID: 36182939 PMCID: PMC9526737 DOI: 10.1038/s41598-022-20969-z]
Abstract
Transfer learning using a pre-trained model with the ImageNet database is frequently used when obtaining large datasets in the medical imaging field is challenging. We sought to estimate the value of deep learning for facial US images by assessing classification performance through transfer learning with current representative deep learning models and analyzing the classification criteria. For this clinical study, we recruited 86 individuals from whom we acquired ultrasound images of nine facial regions. To classify these facial regions, 15 deep learning models were trained using augmented or non-augmented datasets and their performance was evaluated. The average F-measure score across all models was about 93% regardless of dataset augmentation, and the best-performing models were the classic VGGs. The models regarded the contours of skin and bones, rather than muscles and blood vessels, as distinct features for distinguishing regions in the facial US images. The results of this study can be used as reference data for future deep learning research on facial US images and content development.
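The F-measure reported for each model is the harmonic mean of precision and recall. A one-function sketch (the input values are illustrative, not the paper's per-model numbers):

```python
def f_measure(precision: float, recall: float) -> float:
    """F1 score: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0  # degenerate case: no correct positive predictions
    return 2 * precision * recall / (precision + recall)

# e.g. a classifier with 93% precision and 93% recall
print(round(f_measure(0.93, 0.93), 2))  # 0.93
```

Because it is a harmonic mean, the F-measure is pulled toward the weaker of the two components, so a model cannot score well by trading recall for precision or vice versa.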
16
Xu J, Zeng B, Egger J, Wang C, Smedby Ö, Jiang X, Chen X. A review on AI-based medical image computing in head and neck surgery. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac840f]
Abstract
Head and neck surgery is a fine surgical procedure with a complex anatomical space, difficult operation and high risk. Medical image computing (MIC) that enables accurate and reliable preoperative planning is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC in head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, MICCAI, etc. Among them, 65 references are on automatic segmentation, 15 references on automatic landmark detection, and eight references on automatic registration. First, an overview of deep learning in MIC is presented. Then, the application of deep learning methods is systematically summarized according to clinical needs and generalized into segmentation, landmark detection and registration of head and neck medical images. In segmentation, the focus is mainly on the automatic segmentation of high-risk organs, head and neck tumors, skull structure and teeth, including analysis of their advantages, differences and shortcomings. In landmark detection, the focus is mainly on landmark detection in cephalometric and craniomaxillofacial images, and the analysis of their advantages and disadvantages. In registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, their shortcomings and future development directions are systematically discussed. The study aims to serve as a reference and guidance for researchers, engineers or doctors engaged in medical image analysis of head and neck surgery.
17
Mancosu P, Lambri N, Castiglioni I, Dei D, Iori M, Loiacono D, Russo S, Talamonti C, Villaggi E, Scorsetti M, Avanzo M. Applications of artificial intelligence in stereotactic body radiation therapy. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7e18]
Abstract
This topical review focuses on the applications of artificial intelligence (AI) tools to stereotactic body radiation therapy (SBRT). The high dose per fraction and the limited number of fractions in SBRT require stricter accuracy than standard radiation therapy. The intent of this review is to describe the development of AI tools and evaluate their possible benefit when integrated into the radiation oncology workflow for SBRT automation. The selected papers were subdivided into four sections, representative of the whole radiotherapy process: ‘AI in SBRT target and organs at risk contouring’, ‘AI in SBRT planning’, ‘AI during the SBRT delivery’, and ‘AI for outcome prediction after SBRT’. Each section summarises the challenges, as well as the limits and needs for improvement, to achieve better integration of AI tools in the clinical workflow.
18
Segmentation of Pancreatic Subregions in Computed Tomography Images. J Imaging 2022; 8:jimaging8070195. [PMID: 35877639 PMCID: PMC9317715 DOI: 10.3390/jimaging8070195]
Abstract
The accurate segmentation of pancreatic subregions (head, body, and tail) in CT images provides an opportunity to examine the local morphological and textural changes in the pancreas. Quantifying such changes aids in understanding the spatial heterogeneity of the pancreas and assists in the diagnosis and treatment planning of pancreatic cancer. Manual outlining of pancreatic subregions is tedious, time-consuming, and prone to subjective inconsistency. This paper presents a multistage anatomy-guided framework for accurate and automatic 3D segmentation of pancreatic subregions in CT images. Using the delineated pancreas, two soft-label maps were estimated for subregional segmentation—one by training a fully supervised naïve Bayes model that considers the length and volumetric proportions of each subregional structure based on their anatomical arrangement, and the other by using the conventional deep learning U-Net architecture for 3D segmentation. The U-Net model then estimates the joint probability of the two maps and performs optimal segmentation of subregions. Model performance was assessed using three datasets of contrast-enhanced abdominal CT scans: one public NIH dataset of the healthy pancreas, and two datasets D1 and D2 (one for each of pre-cancerous and cancerous pancreas). The model demonstrated excellent performance during the multifold cross-validation using the NIH dataset, and external validation using D1 and D2. To the best of our knowledge, this is the first automated model for the segmentation of pancreatic subregions in CT images. A dataset consisting of reference anatomical labels for subregions in all images of the NIH dataset is also established.
19
Kim M, Lee S, Dan I, Tak S. A deep convolutional neural network for estimating hemodynamic response function with reduction of motion artifacts in fNIRS. J Neural Eng 2022; 19. [PMID: 35038682 DOI: 10.1088/1741-2552/ac4bfc]
Abstract
OBJECTIVE Functional near-infrared spectroscopy (fNIRS) is a neuroimaging technique for monitoring hemoglobin concentration changes in a non-invasive manner. However, subject movements are often significant sources of artifacts. While several methods have been developed for suppressing this confounding noise, the conventional techniques have limitations on optimal selections of model parameters across participants or brain regions. To address this shortcoming, we aim to propose a method based on a deep convolutional neural network (CNN). APPROACH The U-net is employed as a CNN architecture. Specifically, large-scale training and testing data are generated by combining variants of hemodynamic response function (HRF) with experimental measurements of motion noises. The neural network is then trained to reconstruct hemodynamic response coupled to neuronal activity with a reduction of motion artifacts. MAIN RESULTS Using extensive analysis, we show that the proposed method estimates the task-related HRF more accurately than the existing methods of wavelet decomposition and autoregressive models. Specifically, the mean squared error and variance of HRF estimates, based on the CNN, are the smallest among all methods considered in this study. These results are more prominent when the semi-simulated data contains variants of shapes and amplitudes of HRF. SIGNIFICANCE The proposed CNN method allows for accurately estimating amplitude and shape of HRF with significant reduction of motion artifacts. This method may have a great potential for monitoring HRF changes in real-life settings that involve excessive motion artifacts.
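A canonical double-gamma HRF, of the kind such models are trained to recover, can be generated from the standard parameterization (a gamma response peaking near 5 s minus a scaled undershoot). The shape parameters below are the conventional defaults, not values taken from this paper:

```python
import math

def gamma_pdf(t: float, shape: float) -> float:
    """Gamma density with unit scale parameter."""
    if t <= 0:
        return 0.0
    return t ** (shape - 1) * math.exp(-t) / math.gamma(shape)

def canonical_hrf(t: float) -> float:
    """Double-gamma HRF: response peak (shape 6) minus undershoot (shape 16)/6."""
    return gamma_pdf(t, 6) - gamma_pdf(t, 16) / 6

ts = [i * 0.1 for i in range(301)]      # 0..30 s at 0.1 s resolution
peak_t = max(ts, key=canonical_hrf)
print(f"HRF peaks at ~{peak_t:.1f} s")  # ~5 s
```

Convolving a stimulus train with this kernel gives the noise-free regressor that the CNN described above is trained to separate from motion artifacts.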
Affiliation(s)
- MinWoo Kim
- School of Biomedical Convergence Engineering, Pusan National University, 49 Busandaehak-ro, Mulgeum-eup, Yangsan-si, Gyeongsangnam-do, Yangsan 50612, Republic of Korea
- Seonjin Lee
- Research Center for Bioconvergence Analysis, Korea Basic Science Institute, 162 Yeongudanji-ro, Cheongwon-gu, Ochang-eup, Cheongju 28119, Republic of Korea
- Ippeita Dan
- Faculty of Science and Engineering, Chuo University, Tama Campus, 742-1 Higashinakano, Hachioji-shi, Tokyo 192-0393, Japan
- Sungho Tak
- Research Center for Bioconvergence Analysis, Korea Basic Science Institute, 162 Yeongudanji-ro, Cheongwon-gu, Ochang-eup, Cheongju 28119, Republic of Korea
20
Cheng R, Crouzier M, Hug F, Tucker K, Juneau P, McCreedy E, Gandler W, McAuliffe MJ, Sheehan FT. Automatic quadriceps and patellae segmentation of MRI with cascaded U2-Net and SASSNet deep learning model. Med Phys 2022; 49:443-460. [PMID: 34755359 PMCID: PMC8758556 DOI: 10.1002/mp.15335]
Abstract
PURPOSE Automatic muscle segmentation is critical for advancing our understanding of human physiology, biomechanics, and musculoskeletal pathologies, as it allows for timely exploration of large multi-dimensional image sets. Segmentation models are rarely developed/validated for pediatric populations. As such, autosegmentation is not available to explore how muscle architecture changes during development and how disease/pathology affects the developing musculoskeletal system. Thus, we aimed to develop and validate an end-to-end, fully automated, deep learning model for accurate segmentation of the rectus femoris and vastus lateralis, medialis, and intermedialis using a pediatric database. METHODS We developed a two-stage cascaded deep learning model in a coarse-to-fine manner. In the first stage, U2-Net roughly detects the muscle subcompartment region. Then, in the second stage, the shape-aware 3D semantic segmentation method SASSNet refines the cropped target regions to generate finer and more accurate segmentation masks. We utilized multifeature image maps in both stages to stabilize performance and validated their use with an ablation study. The second-stage SASSNet was independently run and evaluated with three different cropped region resolutions: the original image resolution, and images downsampled 2× and 4× (high, mid, and low). The relationship between image resolution and segmentation accuracy was explored. In addition, the patella was included as a comparator to past work. We evaluated segmentation accuracy using leave-one-out testing on a database of 3D MR images (0.43 × 0.43 × 2 mm) from 40 pediatric participants (age 15.3 ± 1.9 years, 55.8 ± 11.8 kg, 164.2 ± 7.9 cm, 38 F/2 M). RESULTS The mid-resolution second stage produced the best results for the vastus medialis, rectus femoris, and patella (Dice similarity coefficient = 95.0%, 95.1%, 93.7%), whereas the low-resolution second stage produced the best results for the vastus lateralis and vastus intermedialis (DSC = 94.5% and 93.7%). In comparing the low- to mid-resolution cases, the vastus intermedialis, vastus medialis, rectus femoris, and patella produced significant differences (p = 0.0015, p = 0.0101, p < 0.0001, p = 0.0003) and the vastus lateralis did not (p = 0.2177). The high-resolution stage 2 had significantly lower accuracy (1.0 to 4.4 Dice percentage points) compared to both the mid- and low-resolution routines (p value ranged from < 0.001 to 0.04). The one exception was the rectus femoris, where there was no difference between the low- and high-resolution cases. The ablation study demonstrated that the multifeature is more reliable than the single feature. CONCLUSIONS Our successful implementation of this two-stage segmentation pipeline provides a critical tool for expanding pediatric muscle physiology and clinical research. With a relatively small and variable dataset, our fully automatic segmentation technique produces accuracies that matched or exceeded the current state of the art. The two-stage segmentation avoids memory issues and excessive run times by using a first stage focused on cropping out unnecessary data. The excellent Dice similarity coefficients improve upon previous template-based automatic and semiautomatic methodologies targeting the leg musculature. More importantly, with a naturally variable dataset (size, shape, etc.), the proposed model demonstrates slightly improved accuracies, compared to previous neural networks methods.
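Leave-one-out testing, as used on the 40-participant database above, trains on all subjects but one and evaluates on the held-out subject, cycling through every subject in turn. A generic split-generator sketch (the participant IDs are illustrative):

```python
def leave_one_out(ids):
    """Yield (train, test) splits in which each subject is held out once."""
    for i, held_out in enumerate(ids):
        yield ids[:i] + ids[i + 1:], held_out

subjects = [f"P{n:02d}" for n in range(1, 6)]
for train, test in leave_one_out(subjects):
    print(test, "<- evaluate; train on", len(train), "subjects")
```

With small datasets this makes maximal use of the data: every subject contributes to evaluation exactly once while the model still trains on nearly the full cohort each round.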
Affiliation(s)
- Ruida Cheng
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA
- Marion Crouzier
- University of Nantes, Movement, Interactions, Performance, MIP, EA 4334, F-44000 Nantes, France; The University of Queensland, School of Biomedical Sciences, Brisbane, Australia
- François Hug
- Institut Universitaire de France (IUF), Paris, France; Université Côte d’Azur, LAMHESS, Nice, France
- Kylie Tucker
- The University of Queensland, School of Biomedical Sciences, Brisbane, Australia
- Paul Juneau
- NIH Library, Office of Research Services, National Institutes of Health, Bethesda, MD, USA
- Evan McCreedy
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA
- William Gandler
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA
- Matthew J. McAuliffe
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA
- Frances T. Sheehan
- Rehabilitation Medicine Department, National Institutes of Health Clinical Center, Bethesda, MD, USA
21
Chen X, Yang B, Li J, Zhu J, Ma X, Chen D, Hu Z, Men K, Dai J. A deep-learning method for generating synthetic kV-CT and improving tumor segmentation for helical tomotherapy of nasopharyngeal carcinoma. Phys Med Biol 2021; 66. [PMID: 34700300 DOI: 10.1088/1361-6560/ac3345]
Abstract
Objective: Megavoltage computed tomography (MV-CT) is used for setup verification and adaptive radiotherapy in tomotherapy. However, its low contrast and high noise lead to poor image quality. This study aimed to develop a deep-learning-based method to generate synthetic kilovoltage CT (skV-CT) and then evaluate its ability to improve image quality and tumor segmentation. Approach: The planning kV-CT and MV-CT images of 270 patients with nasopharyngeal carcinoma (NPC) treated on an Accuray TomoHD system were used. An improved cycle-consistent adversarial network which used residual blocks as its generator was adopted to learn the mapping between MV-CT and kV-CT and then generate skV-CT from MV-CT. A Catphan 700 phantom and 30 patients with NPC were used to evaluate image quality. The quantitative indices included contrast-to-noise ratio (CNR), uniformity and signal-to-noise ratio (SNR) for the phantom, and the structural similarity index measure (SSIM), mean absolute error (MAE), and peak signal-to-noise ratio (PSNR) for patients. Next, we trained three models for segmentation of the clinical target volume (CTV): MV-CT, skV-CT, and MV-CT combined with skV-CT. The segmentation accuracy was compared with indices of the Dice similarity coefficient (DSC) and mean distance agreement (MDA). Main results: Compared with MV-CT, skV-CT showed significant improvement in CNR (184.0%), image uniformity (34.7%), and SNR (199.0%) in the phantom study and improved SSIM (1.7%), MAE (24.7%), and PSNR (7.5%) in the patient study. For CTV segmentation with only MV-CT, only skV-CT, and MV-CT combined with skV-CT, the DSCs were 0.75 ± 0.04, 0.78 ± 0.04, and 0.79 ± 0.03, respectively, and the MDAs (in mm) were 3.69 ± 0.81, 3.14 ± 0.80, and 2.90 ± 0.62, respectively. Significance: The proposed method improved the image quality of MV-CT and thus tumor segmentation in helical tomotherapy. The method potentially can benefit adaptive radiotherapy.
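The MAE and PSNR image-quality metrics quoted above have standard definitions: MAE is the mean absolute pixel difference, and PSNR is the squared dynamic range over the mean squared error, in decibels. A pure-Python sketch on toy pixel lists (values illustrative):

```python
import math

def mae(pred, ref):
    """Mean absolute error between two equal-length pixel sequences."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def psnr(pred, ref, max_val=255.0):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

a = [100, 110, 120, 130]  # e.g. synthetic skV-CT pixels
b = [100, 120, 120, 140]  # e.g. reference kV-CT pixels
print(mae(a, b))              # 5.0
print(round(psnr(a, b), 2))   # 31.14 (dB)
```

Higher PSNR and lower MAE both indicate the synthetic image is closer to the reference, which is why the reported 7.5% PSNR gain and 24.7% MAE reduction point in the same direction.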
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Jingwen Li
- Cloud Computing and Big Data Research Institute, China Academy of Information and Communications Technology, People's Republic of China
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Xiangyu Ma
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Deqi Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Zhihui Hu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China