1
Zhang Z, Zhou X, Fang Y, Xiong Z, Zhang T. AI-driven 3D bioprinting for regenerative medicine: From bench to bedside. Bioact Mater 2025;45:201-230. [PMID: 39651398] [PMCID: PMC11625302] [DOI: 10.1016/j.bioactmat.2024.11.021]
Abstract
In recent decades, 3D bioprinting has garnered significant research attention for its ability to precisely manipulate biomaterials and cells into complex structures. However, owing to technological and cost constraints, the clinical translation of 3D bioprinted products (BPPs) from bench to bedside has been hindered by challenges in personalized design and production scale-up. Recently, emerging artificial intelligence (AI) technologies have significantly improved the performance of 3D bioprinting. Nevertheless, the existing literature lacks a methodological exploration of how AI can overcome these challenges and advance 3D bioprinting toward clinical application. This paper presents a systematic methodology for AI-driven 3D bioprinting, structured within the theoretical framework of Quality by Design (QbD). It begins by introducing QbD theory into 3D bioprinting, then summarizes the technology roadmap of AI integration in 3D bioprinting, including multi-scale and multi-modal sensing, data-driven design, and in-line process control. It further describes specific AI applications in the key elements of 3D bioprinting: bioink formulation, model structure, printing process, and function regulation. Finally, the paper discusses the prospects and challenges of AI technologies in further advancing the clinical translation of 3D bioprinting.
Affiliation(s)
- Zhenrui Zhang
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Xianhao Zhou
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Yongcong Fang
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, PR China
- Zhuo Xiong
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- Ting Zhang
- Biomanufacturing Center, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, PR China
- Biomanufacturing and Rapid Forming Technology Key Laboratory of Beijing, Beijing, 100084, PR China
- “Biomanufacturing and Engineering Living Systems” Innovation International Talents Base (111 Base), Beijing, 100084, PR China
- State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, PR China
2
Hochreuter KM, Ren J, Nijkamp J, Korreman SS, Lukacova S, Kallehauge JF, Trip AK. The effect of editing clinical contours on deep-learning segmentation accuracy of the gross tumor volume in glioblastoma. Phys Imaging Radiat Oncol 2024;31:100620. [PMID: 39220114] [PMCID: PMC11364127] [DOI: 10.1016/j.phro.2024.100620]
Abstract
Background and purpose: Deep-learning (DL) models for segmentation of the gross tumor volume (GTV) in radiotherapy are generally based on clinical delineations, which suffer from inter-observer variability. The aim of this study was to compare the performance of a DL model based on clinical glioblastoma GTVs to that of a model based on a single-observer edited version of the same GTVs. Materials and methods: The dataset included imaging data (computed tomography (CT), T1, contrast-enhanced T1 (T1C), and fluid-attenuated inversion recovery (FLAIR)) of 259 glioblastoma patients treated with post-operative radiotherapy between 2012 and 2019 at a single institute. The clinical GTVs were edited using all imaging data. The dataset was split into 207 cases for training/validation and 52 for testing. GTV segmentation models (nnUNet) were trained on clinical and edited GTVs separately and compared using surface Dice with 1 mm tolerance (sDSC1mm). We also evaluated model performance with respect to extent of resection (EOR) and different imaging combinations (T1C/T1/FLAIR/CT, T1C/FLAIR/CT, T1C/FLAIR, T1C/CT, T1C/T1, T1C). A Wilcoxon test was used for significance testing. Results: The median (range) sDSC1mm of the clinical-GTV model and the edited-GTV model, both evaluated against the edited contours, was 0.76 (0.43-0.94) vs. 0.92 (0.60-0.98), respectively (p < 0.001). sDSC1mm was not significantly different between patients with a biopsy, partial, and complete resection. T1C as a single input performed as well as combinations of imaging modalities. Conclusions: High segmentation accuracy was obtained by the DL models. Editing the clinical GTVs significantly increased DL performance with a relevant effect size. DL performance was robust to EOR and highly accurate using only T1C.
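The surface Dice metric reported in this study compares segmentation boundaries rather than volumes: it is the fraction of boundary points of each contour that lie within a tolerance (here 1 mm) of the other contour's boundary. As an illustrative brute-force sketch over two point sets (not the evaluation code used in the study):

```python
import numpy as np

def surface_dice(surf_a, surf_b, tol=1.0):
    """Surface Dice at tolerance `tol`: fraction of boundary points of each
    surface lying within `tol` (e.g. 1 mm) of the other surface.
    surf_a, surf_b: (N, d) arrays of boundary point coordinates."""
    def nearest(x, y):
        # distance from each point in x to its nearest neighbour in y
        d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
        return d.min(axis=1)
    a_close = (nearest(surf_a, surf_b) <= tol).sum()
    b_close = (nearest(surf_b, surf_a) <= tol).sum()
    return (a_close + b_close) / (len(surf_a) + len(surf_b))

# identical contours give a surface Dice of 1.0
square = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(surface_dice(square, square))  # → 1.0
```

Production implementations work on voxelized surfaces with distance transforms rather than all-pairs distances, but the definition is the same.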
Affiliation(s)
- Kim M. Hochreuter
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Jintao Ren
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Jasper Nijkamp
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Stine S. Korreman
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Slávka Lukacova
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Jesper F. Kallehauge
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Anouk K. Trip
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
3
Liu P, Sun Y, Zhao X, Yan Y. Deep learning algorithm performance in contouring head and neck organs at risk: a systematic review and single-arm meta-analysis. Biomed Eng Online 2023;22:104. [PMID: 37915046] [PMCID: PMC10621161] [DOI: 10.1186/s12938-023-01159-y]
Abstract
PURPOSE: The contouring of organs at risk (OARs) in head and neck cancer radiation treatment planning is a crucial yet repetitive and time-consuming process. Recent studies have applied deep learning (DL) algorithms to automatically contour head and neck OARs. This study conducts a systematic review and meta-analysis to summarize and analyze the performance of DL algorithms in contouring head and neck OARs, and to assess their advantages and limitations. METHODS: A literature search of the PubMed, Embase, and Cochrane Library databases was conducted to identify studies on DL contouring of head and neck OARs; the Dice similarity coefficient (DSC) of four categories of OARs reported in each study was selected as the effect size for meta-analysis. A subgroup analysis of OARs by image modality and image type was also conducted. RESULTS: 149 articles were retrieved, and 22 studies were included in the meta-analysis after removal of duplicates, primary screening, and re-screening. The pooled effect sizes of DSC for the brainstem, spinal cord, mandible, left eye, right eye, left optic nerve, right optic nerve, optic chiasm, left parotid, right parotid, left submandibular, and right submandibular glands were 0.87, 0.83, 0.92, 0.90, 0.90, 0.71, 0.74, 0.62, 0.85, 0.85, 0.82, and 0.82, respectively. In the subgroup analysis, the pooled effect sizes for segmentation of the brainstem, mandible, left optic nerve, and left parotid gland were 0.86/0.92, 0.92/0.90, 0.71/0.73, and 0.84/0.87 using CT/MRI images, and 0.88/0.87, 0.92/0.92, 0.75/0.71, and 0.87/0.85 using 2D/3D images.
CONCLUSIONS: Automated contouring based on DL algorithms is an essential tool for contouring head and neck OARs: it achieves high accuracy, reduces the workload of clinical radiation oncologists, and supports individualized, standardized, and refined treatment plans for "precision radiotherapy". Improving DL performance requires the construction of high-quality datasets and further algorithm optimization and innovation.
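The Dice similarity coefficient pooled in this meta-analysis measures volumetric overlap between two binary masks A and B as 2|A ∩ B| / (|A| + |B|). A minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Volumetric Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# two 4x4 masks of 8 voxels each, overlapping in 4 voxels -> DSC = 0.5
a = np.zeros((4, 4)); a[:2, :] = 1
b = np.zeros((4, 4)); b[1:3, :] = 1
print(dice(a, b))  # → 0.5
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which is why the pooled values near 0.9 for large organs such as the mandible indicate close agreement, while values near 0.6-0.7 for small structures such as the optic chiasm indicate substantially weaker agreement.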
Affiliation(s)
- Peiru Liu
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
- Beifang Hospital of China Medical University, Shenyang, China
- Ying Sun
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
- Xinzhuo Zhao
- Shenyang University of Technology, School of Electrical Engineering, Shenyang, China
- Ying Yan
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
4
Wang J, Peng Y. MHL-Net: A Multistage Hierarchical Learning Network for Head and Neck Multiorgan Segmentation. IEEE J Biomed Health Inform 2023;27:4074-4085. [PMID: 37171918] [DOI: 10.1109/jbhi.2023.3275746]
Abstract
Accurate segmentation of head and neck organs at risk is crucial in radiotherapy. However, existing methods suffer from incomplete feature mining, insufficient information utilization, and difficulty in simultaneously improving the segmentation of both small and large organs. In this paper, a multistage hierarchical learning network is designed to fully extract multidimensional features, combining anatomical prior information with imaging features and using multistage subnetworks to improve segmentation performance. First, multilevel subnetworks are constructed for primary segmentation, localization, and fine segmentation by dividing organs into two levels, large and small. Each subnetwork has its own learning focus while reusing features and sharing information with the others, which comprehensively improves the segmentation performance for all organs. Second, an anatomical prior probability map and a boundary contour attention mechanism are developed to address complex anatomical shapes; prior information and boundary contour features effectively assist in detecting and segmenting these shapes. Finally, a multidimensional combination attention mechanism is proposed to analyze axial, coronal, and sagittal information, capture spatial and channel features, and maximize the use of the structural information and semantic features of 3D medical images. Experimental results on several datasets show that the method is competitive with state-of-the-art methods and improves segmentation results for multiscale organs.
5
Henderson EGA, Vasquez Osorio EM, van Herk M, Brouwer CL, Steenbakkers RJHM, Green AF. Accurate segmentation of head and neck radiotherapy CT scans with 3D CNNs: consistency is key. Phys Med Biol 2023;68:085003. [PMID: 36893469] [DOI: 10.1088/1361-6560/acc309]
Abstract
Objective. Automatic segmentation of organs-at-risk in radiotherapy planning computed tomography (CT) scans using convolutional neural networks (CNNs) is an active research area. Very large datasets are usually required to train such CNN models. In radiotherapy, large, high-quality datasets are scarce, and combining data from several sources can reduce the consistency of training segmentations. It is therefore important to understand the impact of training data quality on the performance of auto-segmentation models for radiotherapy. Approach. In this study, we took an existing 3D CNN architecture for head and neck CT auto-segmentation and compared the performance of models trained with a small, well-curated dataset (n = 34) and a far larger dataset (n = 185) containing less consistent training segmentations. We performed 5-fold cross-validation in each dataset and tested segmentation performance using the 95th percentile Hausdorff distance and mean distance-to-agreement metrics. Finally, we validated the generalisability of our models on an external cohort of patient data (n = 12) with five expert annotators. Main results. The models trained with the large dataset were greatly outperformed by models (of identical architecture) trained with the smaller but more consistent set of training samples. Our models trained with the small dataset produced segmentations of similar accuracy to expert human observers and generalised well to new data, performing within inter-observer variation. Significance. We empirically demonstrate the importance of highly consistent training samples when training a 3D auto-segmentation model for use in radiotherapy. Crucially, it is the consistency of the training segmentations that has a greater impact on model performance than the size of the dataset used.
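The 95th percentile Hausdorff distance used for testing here is a robust variant of the maximum boundary-to-boundary distance: instead of the outlier-sensitive maximum, it takes the 95th percentile of nearest-neighbour distances, symmetrized over both directions. A minimal brute-force sketch over two surface point sets (illustrative only, not the authors' implementation):

```python
import numpy as np

def hd95(surf_a, surf_b):
    """95th-percentile Hausdorff distance between two surface point sets
    (shape (N, d)): the larger of the two directed 95th-percentile
    nearest-neighbour distances."""
    d = np.linalg.norm(surf_a[:, None, :] - surf_b[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),   # A -> B
               np.percentile(d.min(axis=0), 95))   # B -> A

# identical surfaces -> 0; a rigid 3 mm shift -> 3 mm
square = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(hd95(square, square))              # → 0.0
print(hd95(square, square + [3.0, 0.0])) # → 3.0
```

As with surface Dice, practical implementations compute these distances on voxelized surfaces via distance transforms rather than all-pairs comparisons.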
Affiliation(s)
- Edward G A Henderson
- Division of Cancer Sciences, The University of Manchester, M13 9PL Manchester, United Kingdom
- Eliana M Vasquez Osorio
- Division of Cancer Sciences, The University of Manchester, M13 9PL Manchester, United Kingdom
- Department of Radiotherapy Related Research, The Christie NHS Foundation Trust, M20 4BX Manchester, United Kingdom
- Marcel van Herk
- Division of Cancer Sciences, The University of Manchester, M13 9PL Manchester, United Kingdom
- Department of Radiotherapy Related Research, The Christie NHS Foundation Trust, M20 4BX Manchester, United Kingdom
- Charlotte L Brouwer
- Department of Radiation Oncology, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
- Roel J H M Steenbakkers
- Department of Radiation Oncology, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
- Andrew F Green
- Division of Cancer Sciences, The University of Manchester, M13 9PL Manchester, United Kingdom
- Department of Radiotherapy Related Research, The Christie NHS Foundation Trust, M20 4BX Manchester, United Kingdom
6
Towards real-time radiotherapy planning: The role of autonomous treatment strategies. Phys Imaging Radiat Oncol 2022;24:136-137. [DOI: 10.1016/j.phro.2022.11.006]
7
Nijkamp J. Challenges and chances for deep-learning based target and organ at risk segmentation in radiotherapy of head and neck cancer. Phys Imaging Radiat Oncol 2022;23:150-152. [PMID: 36035089] [PMCID: PMC9405092] [DOI: 10.1016/j.phro.2022.08.003]
Affiliation(s)
- Jasper Nijkamp
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark