1. Warin K, Limprasert W, Paipongna T, Chaowchuen S, Vicharueang S. Deep convolutional neural network for automatic segmentation and classification of jaw tumors in contrast-enhanced computed tomography images. Int J Oral Maxillofac Surg 2024:S0901-5027(24)00382-5. PMID: 39414518. DOI: 10.1016/j.ijom.2024.10.004.
Abstract
The purpose of this study was to evaluate the performance of convolutional neural network (CNN)-based image segmentation models for the segmentation and classification of benign and malignant jaw tumors in contrast-enhanced computed tomography (CT) images. A dataset comprising 3416 CT images (1163 showing benign jaw tumors, 1253 showing malignant jaw tumors, and 1000 without pathological lesions) was obtained retrospectively from a cancer hospital and two regional hospitals in Thailand; the images were from 150 patients presenting with jaw tumors between 2016 and 2020. U-Net and Mask R-CNN image segmentation models were adopted and trained to distinguish between benign and malignant jaw tumors and to segment the tumors in order to identify their boundaries in the CT images. The performance of each model in segmenting the jaw tumors in the CT images was evaluated on a test dataset. All models yielded high accuracy, with a Dice coefficient of 0.90-0.98 and a Jaccard index of 0.82-0.97 for segmentation, and an area under the precision-recall curve of 0.63-0.85 for the classification of benign and malignant jaw tumors. In conclusion, CNN-based segmentation models demonstrated high potential for the automated segmentation and classification of jaw tumors in contrast-enhanced CT images.
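For readers unfamiliar with these metrics, the Dice coefficient and Jaccard index reported above are overlap measures computed between a predicted and a reference binary mask. The sketch below is not code from the study; it is a minimal NumPy illustration assuming both segmentations are available as boolean arrays of the same shape.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard index (intersection over union) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union > 0 else 1.0

# Toy 2D example; in practice the masks would be per-slice or volumetric CT segmentations.
pred = np.zeros((64, 64), dtype=bool)
truth = np.zeros((64, 64), dtype=bool)
pred[20:40, 20:40] = True
truth[22:42, 22:42] = True
print(f"Dice: {dice_coefficient(pred, truth):.3f}, Jaccard: {jaccard_index(pred, truth):.3f}")
```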
Affiliation(s)
- K Warin
- Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand.
- W Limprasert
- College of Interdisciplinary Studies, Thammasat University, Pathum Thani, Thailand.
- T Paipongna
- Sakon Nakhon Hospital, Mueang Sakon Nakhon, Sakon Nakhon, Thailand.
- S Chaowchuen
- Udonthani Cancer Hospital, Mueang Udon Thani, Udon Thani, Thailand.
- S Vicharueang
- StoreMesh, Thailand Science Park, Pathum Thani, Thailand.
2. Hu KG, Aral A, Rancu A, Alperovich M. Computerized Surgical Planning for Mandibular Distraction Osteogenesis. Semin Plast Surg 2024;38:234-241. PMID: 39118864. PMCID: PMC11305829. DOI: 10.1055/s-0044-1786757.
Abstract
Mandibular distraction osteogenesis is a technically challenging procedure because of the complex mandibular anatomy, especially in the treatment of Pierre Robin sequence, given the variable bone thickness of the infant mandible and the presence of tooth buds. Computerized surgical planning (CSP) simplifies the procedure by preoperatively visualizing critical structures, producing cutting guides, and planning distractor placement. This paper describes the process of using CSP to plan mandibular distraction osteogenesis, including a discussion of recent advances in the use of custom distractors.
Affiliation(s)
- Kevin G. Hu
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Yale School of Medicine, New Haven, Connecticut
- Ali Aral
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Yale School of Medicine, New Haven, Connecticut
- Albert Rancu
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Yale School of Medicine, New Haven, Connecticut
- Michael Alperovich
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Yale School of Medicine, New Haven, Connecticut
3. Baecher H, Hoch CC, Knoedler S, Maheta BJ, Kauke-Navarro M, Safi AF, Alfertshofer M, Knoedler L. From bench to bedside - current clinical and translational challenges in fibula free flap reconstruction. Front Med (Lausanne) 2023;10:1246690. PMID: 37886365. PMCID: PMC10598714. DOI: 10.3389/fmed.2023.1246690.
Abstract
Fibula free flaps (FFF) represent a workhorse for different reconstructive scenarios in facial surgery. While FFF were initially established for mandible reconstruction, advancements in planning for microsurgical techniques have paved the way toward a broader spectrum of indications, including maxillary defects. Essential factors to improve patient outcomes following FFF include minimal donor site morbidity, adequate bone length, and dual blood supply. Yet, persisting clinical and translational challenges hamper the effectiveness of FFF. In the preoperative phase, virtual surgical planning and artificial intelligence tools carry untapped potential, while the intraoperative role of individualized surgical templates and bioprinted prostheses remains to be summarized. Further, the integration of novel flap monitoring technologies into postoperative patient management has been the subject of translational and clinical research efforts. Overall, there is a paucity of studies condensing the body of knowledge on emerging technologies and techniques in FFF surgery. Herein, we aim to review current challenges and possible solutions in FFF. This line of research may serve as a pocket guide on cutting-edge developments and facilitate future targeted research in FFF.
Affiliation(s)
- Helena Baecher
- Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, Regensburg, Germany
- Cosima C. Hoch
- Medical Faculty, Friedrich Schiller University Jena, Jena, Germany
- Samuel Knoedler
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
- Division of Plastic Surgery, Department of Surgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Department of Plastic Surgery and Hand Surgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Bhagvat J. Maheta
- College of Medicine, California Northstate University, Elk Grove, CA, United States
- Martin Kauke-Navarro
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
- Ali-Farid Safi
- Craniologicum, Center for Cranio-Maxillo-Facial Surgery, Bern, Switzerland
- Faculty of Medicine, University of Bern, Bern, Switzerland
- Michael Alfertshofer
- Division of Hand, Plastic and Aesthetic Surgery, Ludwig-Maximilians-University Munich, Munich, Germany
- Leonard Knoedler
- Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, Regensburg, Germany
- Division of Plastic Surgery, Department of Surgery, Yale New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
4. Zhong NN, Wang HQ, Huang XY, Li ZZ, Cao LM, Huo FY, Liu B, Bu LL. Enhancing head and neck tumor management with artificial intelligence: Integration and perspectives. Semin Cancer Biol 2023;95:52-74. PMID: 37473825. DOI: 10.1016/j.semcancer.2023.07.002.
Abstract
Head and neck tumors (HNTs) constitute a multifaceted ensemble of pathologies that primarily involve regions such as the oral cavity, pharynx, and nasal cavity. The intricate anatomical structure of these regions poses considerable challenges to efficacious treatment strategies. Despite the availability of myriad treatment modalities, the overall therapeutic efficacy for HNTs remains limited. In recent years, the deployment of artificial intelligence (AI) in healthcare practices has garnered noteworthy attention. AI modalities, including machine learning (ML), neural networks (NNs), and deep learning (DL), when integrated into the holistic management of HNTs, promise to augment the precision, safety, and efficacy of treatment regimens. The integration of AI within HNT management is intricately intertwined with domains such as medical imaging, bioinformatics, and medical robotics. This article intends to scrutinize the cutting-edge advancements and prospective applications of AI in the realm of HNTs, elucidating AI's indispensable role in prevention, diagnosis, treatment, prognostication, research, and inter-sectoral integration. The overarching objective is to stimulate scholarly discourse and invigorate insights among medical practitioners and researchers to propel further exploration, thereby facilitating superior therapeutic alternatives for patients.
Affiliation(s)
- Nian-Nian Zhong
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Han-Qi Wang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Xin-Yue Huang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Zi-Zhan Li
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Lei-Ming Cao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Fang-Yi Huo
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Bing Liu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China.
- Lin-Lin Bu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China.
5. Morita D, Mazen S, Tsujiko S, Otake Y, Sato Y, Numajiri T. Deep-learning-based automatic facial bone segmentation using a two-dimensional U-Net. Int J Oral Maxillofac Surg 2023;52:787-792. PMID: 36328865. DOI: 10.1016/j.ijom.2022.10.015.
Abstract
The use of deep learning (DL) in medical imaging is becoming increasingly widespread. Although DL has been used previously for the segmentation of facial bones in computed tomography (CT) images, there are few reports of segmentation involving multiple areas. In this study, a U-Net was used to investigate the automatic segmentation of facial bones into eight areas, with the aim of facilitating virtual surgical planning (VSP) and computer-aided design and manufacturing (CAD/CAM) in maxillofacial surgery. CT data from 50 patients were prepared and used for training, and five-fold cross-validation was performed. The output results generated by the DL model were validated by Dice coefficient and average symmetric surface distance (ASSD). The automatic segmentation was successful in all cases, with a mean ± standard deviation Dice coefficient of 0.897 ± 0.077 and ASSD of 1.168 ± 1.962 mm. The accuracy was very high for the mandible (Dice coefficient 0.984, ASSD 0.324 mm) and zygomatic bones (Dice coefficient 0.931, ASSD 0.487 mm), and these could be introduced for VSP and CAD/CAM without any modification. The results for other areas, particularly the teeth, were slightly inferior, with possible reasons being the effects of defects, bonded maxillary and mandibular teeth, and metal artefacts. A limitation of this study is that the data were from a single institution. Hence further research is required to improve the accuracy for some facial areas and to validate the results in larger and more diverse populations.
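As context for the metrics above, the average symmetric surface distance (ASSD) averages the distances between the boundary voxels of the two segmentations in both directions. The following is a minimal sketch, not the authors' implementation, assuming binary 3D masks with a known voxel spacing in millimetres and using SciPy's distance transform as an approximation on the voxel grid.

```python
import numpy as np
from scipy import ndimage

def _surface_distances(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Distances (mm) from boundary voxels of mask a to the nearest boundary voxel of mask b."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)  # boundary voxels of a
    surf_b = b ^ ndimage.binary_erosion(b)  # boundary voxels of b
    # Euclidean distance of every voxel to the boundary of b, scaled by voxel spacing.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def assd(pred: np.ndarray, truth: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Average symmetric surface distance between two non-empty binary masks."""
    d_pt = _surface_distances(pred, truth, spacing)
    d_tp = _surface_distances(truth, pred, spacing)
    return float((d_pt.sum() + d_tp.sum()) / (len(d_pt) + len(d_tp)))
```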
Affiliation(s)
- D Morita
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan.
- S Mazen
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- S Tsujiko
- Department of Plastic and Reconstructive Surgery, Saiseikai Shigaken Hospital, Shiga, Japan
- Y Otake
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Y Sato
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- T Numajiri
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
6. Ileșan RR, Beyer M, Kunz C, Thieringer FM. Comparison of Artificial Intelligence-Based Applications for Mandible Segmentation: From Established Platforms to In-House-Developed Software. Bioengineering (Basel) 2023;10:604. PMID: 37237673. PMCID: PMC10215609. DOI: 10.3390/bioengineering10050604.
Abstract
Medical image segmentation, whether performed semi-automatically or manually, is labor-intensive, subjective, and requires specialized personnel. Fully automated segmentation has recently gained importance owing to the better design and understanding of CNNs. Considering this, we decided to develop our in-house segmentation software and compare it to the systems of established companies, an inexperienced user, and an expert as ground truth. The companies included in the study offer a cloud-based option that performs accurately in clinical routine (Dice similarity coefficient of 0.912 to 0.949) with an average segmentation time ranging from 3'54″ to 85'54″. Our in-house model achieved an accuracy of 94.24% compared to the best-performing software and had the shortest mean segmentation time of 2'03″. During the study, developing in-house segmentation software gave us a glimpse into the strenuous work that companies face when offering clinically relevant solutions. All the problems encountered were discussed with the companies and solved, so both parties benefited from this experience. In doing so, we demonstrated that fully automated segmentation needs further research and collaboration between academia and the private sector to achieve full acceptance in clinical routine.
Affiliation(s)
- Robert R. Ileșan
- Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, 4031 Basel, Switzerland
- Michel Beyer
- Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, 4031 Basel, Switzerland
- Medical Additive Manufacturing Research Group (Swiss MAM), Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Christoph Kunz
- Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, 4031 Basel, Switzerland
- Florian M. Thieringer
- Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, 4031 Basel, Switzerland
- Medical Additive Manufacturing Research Group (Swiss MAM), Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
7. Paxton NC. Navigating the intersection of 3D printing, software regulation and quality control for point-of-care manufacturing of personalized anatomical models. 3D Print Med 2023;9:9. PMID: 37024730. PMCID: PMC10080800. DOI: 10.1186/s41205-023-00175-x.
Abstract
3D printing technology has become increasingly popular in healthcare settings, with applications of 3D printed anatomical models ranging from diagnostics and surgical planning to patient education. However, as the use of 3D printed anatomical models becomes more widespread, there is a growing need for regulation and quality control to ensure their accuracy and safety. This literature review examines the current state of 3D printing in hospitals and the FDA regulation process for software intended for use in producing 3D printed models, and provides for the first time a comprehensive list of approved software platforms alongside the 3D printers that have been validated with each for producing 3D printed anatomical models. The process for verification and validation of these 3D printed products, as well as the potential for inaccuracy in these models, is discussed, including methods for testing accuracy, limits, and standards for accuracy testing. This article emphasizes the importance of regulation and quality control in the use of 3D printing technology in healthcare, the need for clear guidelines and standards for both the software and the printed products to ensure the safety and accuracy of 3D printed anatomical models, and the opportunity to expand the library of regulated 3D printers.
Affiliation(s)
- Naomi C Paxton
- Phil & Penny Knight Campus for Accelerating Scientific Impact, University of Oregon, Eugene, OR, USA.
8. Synergy between artificial intelligence and precision medicine for computer-assisted oral and maxillofacial surgical planning. Clin Oral Investig 2023;27:897-906. PMID: 36323803. DOI: 10.1007/s00784-022-04706-4.
Abstract
OBJECTIVES: The aim of this review was to investigate the application of artificial intelligence (AI) in maxillofacial computer-assisted surgical planning (CASP) workflows with the discussion of limitations and possible future directions.
MATERIALS AND METHODS: An in-depth search of the literature was undertaken to review articles concerned with the application of AI for segmentation, multimodal image registration, virtual surgical planning (VSP), and three-dimensional (3D) printing steps of the maxillofacial CASP workflows.
RESULTS: The existing AI models were trained to address individual steps of CASP, and no single intelligent workflow was found encompassing all steps of the planning process. Segmentation of dentomaxillofacial tissue from computed tomography (CT)/cone-beam CT imaging was the most commonly explored area which could be applicable in a clinical setting. Nevertheless, a lack of generalizability was the main issue, as the majority of models were trained with the data derived from a single device and imaging protocol which might not offer similar performance when considering other devices. In relation to registration, VSP and 3D printing, the presence of inadequate heterogeneous data limits the automatization of these tasks.
CONCLUSION: The synergy between AI and CASP workflows has the potential to improve the planning precision and efficacy. However, there is a need for future studies with big data before the emergent technology finds application in a real clinical setting.
CLINICAL RELEVANCE: The implementation of AI models in maxillofacial CASP workflows could minimize a surgeon's workload and increase efficiency and consistency of the planning process, meanwhile enhancing the patient-specific predictability.
9. Steybe D, Poxleitner P, Metzger MC, Brandenburg LS, Schmelzeisen R, Bamberg F, Tran PH, Kellner E, Reisert M, Russe MF. Automated segmentation of head CT scans for computer-assisted craniomaxillofacial surgery applying a hierarchical patch-based stack of convolutional neural networks. Int J Comput Assist Radiol Surg 2022;17:2093-2101. PMID: 35665881. PMCID: PMC9515026. DOI: 10.1007/s11548-022-02673-5.
Abstract
PURPOSE: Computer-assisted techniques play an important role in craniomaxillofacial surgery. As segmentation of three-dimensional medical imaging represents a cornerstone for these procedures, the present study aimed to investigate a deep learning approach for automated segmentation of head CT scans.
METHODS: The deep learning approach of this study was based on the patchwork toolbox, using a multiscale stack of 3D convolutional neural networks. The images were split into nested patches using a fixed 3D matrix size with decreasing physical size in a pyramid format of four scale depths. Manual segmentation of 18 craniomaxillofacial structures was performed in 20 CT scans, of which 15 were used for training of the deep learning network and five were used for validation of the results of automated segmentation. Segmentation accuracy was evaluated by Dice similarity coefficient (DSC), surface DSC, 95% Hausdorff distance (95HD), and average symmetric surface distance (ASSD).
RESULTS: Mean DSC was 0.81 ± 0.13 (range: 0.61 [mental foramen] to 0.98 [mandible]). Mean surface DSC was 0.94 ± 0.06 (range: 0.87 [mental foramen] to 0.99 [mandible]), with values > 0.9 for all structures but the mental foramen. Mean 95HD was 1.93 ± 2.05 mm (range: 1.00 mm [mandible] to 4.12 mm [maxillary sinus]), and mean ASSD was 0.42 ± 0.44 mm (range: 0.09 mm [mandible] to 1.19 mm [mental foramen]), with values < 1 mm for all structures but the mental foramen.
CONCLUSION: In this study, high accuracy of automated segmentation of a variety of craniomaxillofacial structures could be demonstrated, suggesting this approach to be suitable for incorporation into a computer-assisted craniomaxillofacial surgery workflow. The small amount of training data required and the flexibility of an open source-based network architecture enable a broad variety of clinical and research applications.
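For readers less familiar with the boundary metrics reported here (surface DSC and 95HD), the sketch below illustrates how they can be approximated on voxel grids with SciPy. It is not the patchwork-toolbox code used in the study, and the 1 mm tolerance for the surface DSC is an assumed value for illustration; the masks are assumed to be non-empty binary arrays with voxel spacing given in millimetres.

```python
import numpy as np
from scipy import ndimage

def _surface_distances(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Distances (mm) from boundary voxels of mask a to the nearest boundary voxel of mask b."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    return ndimage.distance_transform_edt(~surf_b, sampling=spacing)[surf_a]

def hd95(pred: np.ndarray, truth: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance (95HD)."""
    d = np.concatenate([_surface_distances(pred, truth, spacing),
                        _surface_distances(truth, pred, spacing)])
    return float(np.percentile(d, 95))

def surface_dsc(pred: np.ndarray, truth: np.ndarray, tolerance_mm=1.0,
                spacing=(1.0, 1.0, 1.0)) -> float:
    """Surface Dice: fraction of both boundaries lying within the tolerance of the other boundary."""
    d_pt = _surface_distances(pred, truth, spacing)
    d_tp = _surface_distances(truth, pred, spacing)
    within = (d_pt <= tolerance_mm).sum() + (d_tp <= tolerance_mm).sum()
    return float(within / (len(d_pt) + len(d_tp)))
```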
Affiliation(s)
- David Steybe
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany.
- Philipp Poxleitner
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Berta-Ottenstein-Programme for Clinician Scientists, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marc Christian Metzger
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Leonard Simon Brandenburg
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Rainer Schmelzeisen
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Phuong Hien Tran
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Elias Kellner
- Department of Medical Physics, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marco Reisert
- Department of Medical Physics, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Maximilian Frederik Russe
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
10. Xu J, Zeng B, Egger J, Wang C, Smedby Ö, Jiang X, Chen X. A review on AI-based medical image computing in head and neck surgery. Phys Med Biol 2022;67. DOI: 10.1088/1361-6560/ac840f.
Abstract
Head and neck surgery is a delicate surgical procedure involving a complex anatomical space, difficult operations, and high risk. Medical image computing (MIC) that enables accurate and reliable preoperative planning is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC in head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, MICCAI, etc. Among them, 65 references are on automatic segmentation, 15 references on automatic landmark detection, and eight references on automatic registration. In the elaboration of the review, first, an overview of deep learning in MIC is presented. Then, the application of deep learning methods is systematically summarized according to clinical needs and generalized into segmentation, landmark detection, and registration of head and neck medical images. In segmentation, the focus is mainly on the automatic segmentation of high-risk organs, head and neck tumors, skull structure, and teeth, including an analysis of their advantages, differences, and shortcomings. In landmark detection, the focus is mainly on the introduction of landmark detection in cephalometric and craniomaxillofacial images, and an analysis of their advantages and disadvantages. In registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, their shortcomings and future development directions are systematically discussed. The study aims to serve as a reference and guidance for researchers, engineers, or doctors engaged in medical image analysis of head and neck surgery.
11. Su YX, Thieringer FM, Fernandes R, Parmar S. Editorial: Virtual surgical planning and 3D printing in head and neck tumor resection and reconstruction. Front Oncol 2022;12:960545. PMID: 36003774. PMCID: PMC9394458. DOI: 10.3389/fonc.2022.960545.
Affiliation(s)
- Yu-xiong Su
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Florian M. Thieringer
- Department of Oral and Maxillofacial Surgery, University Hospital of Basel, Basel, Switzerland
- Rui Fernandes
- Department of Oral and Maxillofacial Surgery, College of Medicine - Jacksonville, University of Florida, Jacksonville, FL, United States
- Sat Parmar
- Department of Oral and Maxillofacial Surgery, University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
12. Pu JJ, Hakim SG, Melville JC, Su YX. Current Trends in the Reconstruction and Rehabilitation of Jaw following Ablative Surgery. Cancers (Basel) 2022;14:3308. PMID: 35884369. PMCID: PMC9320033. DOI: 10.3390/cancers14143308.
Abstract
Simple Summary: The maxilla and mandible provide skeletal support for the middle and lower thirds of the face, allowing for the normal functioning of breathing, chewing, swallowing, and speech. In the past, ablative surgery of the jaws often led to serious disfigurement and disruption in form and function. However, with recent strides made in computer-assisted surgery and patient-specific implants, individualized functional reconstruction of the jaw is evolving rapidly, and prompt rehabilitation of both masticatory function and aesthetics after jaw resection has been made possible. In the present review, recent advancements in jaw reconstruction technology and future perspectives are discussed.
Abstract: The reconstruction and rehabilitation of jaws following ablative surgery have been transformed in recent years by the development of computer-assisted surgery and virtual surgical planning. In this narrative literature review, we aim to discuss the current state of the art in jaw reconstruction and to preview potential future developments. The application of patient-specific implants and the “jaw-in-a-day” technique has made fast restoration of the jaws’ function and aesthetics possible. The improved efficiency of primary reconstructive surgery allows for the rehabilitation of neurosensory function following ablative surgery. Currently, a great deal of research is being conducted on augmented/mixed reality, artificial intelligence, virtual surgical planning for soft tissue reconstruction, and rehabilitation of the stomatognathic system. This will lead to an even more exciting future for the functional reconstruction and rehabilitation of the jaw following ablative surgery.
Affiliation(s)
- Jane J. Pu
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong
- Samer G. Hakim
- Department of Oral and Maxillofacial Surgery, University Hospital of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany
- James C. Melville
- Department of Oral and Maxillofacial Surgery, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Yu-Xiong Su
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong