1
Cai Y, Zhang X, Cao J, Grzybowski A, Ye J, Lou L. Application of artificial intelligence in oculoplastics. Clin Dermatol 2024; 42:259-267. [PMID: 38184122 DOI: 10.1016/j.clindermatol.2023.12.019]
Abstract
Oculoplastics is a subspecialty of ophthalmology/dermatology concerned with eyelid, orbital, and lacrimal diseases. Artificial intelligence (AI), with its powerful ability to analyze large data sets, has dramatically benefited oculoplastics. Cutting-edge AI technology is widely applied to extract ocular parameters and to use these results for further assessment, such as screening and diagnosis of blepharoptosis and predicting the progression of thyroid eye disease. AI also assists in treatment procedures, such as surgical strategy planning in blepharoptosis. High efficiency and reliability are the most apparent advantages of AI, and its prospects are promising. Future possibilities for AI in oculoplastics may lie in three-dimensional modeling technology and image generation. We retrospectively summarize AI applications involving eyelid, orbital, and lacrimal diseases in oculoplastics, and we also examine the strengths and weaknesses of AI technology in this field.
Affiliation(s)
- Yilu Cai
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, China
- Xuan Zhang
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, China
- Jing Cao
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, China
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Juan Ye
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, China
- Lixia Lou
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, China
2
Smadar L, Arazi M, Greenberg G, Haviv L, Benifla O, Zabatani A, Fabian I, Dagan M, Gutovitz JM, Ben Simon GJ, Landau-Prat D. Semiautomated MRI-Based Method for Orbital Volume and Contour Analysis. Ophthalmic Plast Reconstr Surg 2024:00002341-990000000-00370. [PMID: 38534059 DOI: 10.1097/iop.0000000000002656]
Abstract
OBJECTIVE The architecture of the orbital cavity is intricate, and precise measurement of its growth is essential for managing ocular and orbital pathologies. Most such measurements are made with CT imaging, although MRI is indicated for soft tissue assessment in many cases, specifically pediatric patients. This study introduces a novel semiautomated MRI-based approach for depicting orbital shape and dimensions. DESIGN A retrospective cohort study. PARTICIPANTS Patients with at least 1 normal orbit who underwent both CT and MRI at a single center from 2015 to 2023. METHODS Orbital dimensions included volume, horizontal and vertical lengths, and depth. These were determined by manual segmentation followed by 3-dimensional image processing software. MAIN OUTCOME MEASURES Differences in orbital measurements between MRI and CT scans. RESULTS Thirty-one patients (mean age 47.7 ± 23.8 years; 21 [67.7%] female) were included. The mean differences between orbital measurements on CT versus MRI were: volume 0.03 ± 2.01 ml, horizontal length 0.53 ± 2.12 mm, vertical length 0.36 ± 2.53 mm, and depth 0.97 ± 3.90 mm. The CT and MRI orbital measurements were strongly correlated: volume (r = 0.92, p < 0.001), horizontal length (r = 0.65, p < 0.001), vertical length (r = 0.57, p = 0.001), and depth (r = 0.46, p = 0.009). The mean values of all measurements were similar on the paired-samples t test: p = 0.9 for volume (30.86 ± 5.04 ml on CT and 30.88 ± 4.92 ml on MRI), p = 0.2 for horizontal length, p = 0.4 for vertical length, and p = 0.2 for depth. CONCLUSIONS We present an innovative semiautomated method capable of calculating orbital volume and demonstrating orbital contour by MRI, validated against gold standard CT-based measurements. This method can serve as a valuable tool for evaluating diverse orbital processes.
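The agreement analysis described above (Pearson correlation plus a paired-samples t test on CT versus MRI measurements) can be sketched as follows; the volumes are synthetic stand-ins, not the study's data:

```python
# Sketch of the CT-vs-MRI agreement analysis: Pearson correlation for linear
# agreement, paired-samples t test for systematic bias. Illustrative values only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ct_volume = rng.normal(30.9, 5.0, size=31)              # ml, one value per orbit
mri_volume = ct_volume + rng.normal(0.0, 2.0, size=31)  # MRI tracks CT closely

r, r_p = stats.pearsonr(ct_volume, mri_volume)   # strength of linear agreement
t, t_p = stats.ttest_rel(ct_volume, mri_volume)  # tests for a systematic offset

print(f"r = {r:.2f} (p = {r_p:.3g}), paired t test p = {t_p:.2f}")
```

A high r with a non-significant paired t test, as in the study, indicates the two modalities track each other without a systematic offset.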
Affiliation(s)
- Lital Smadar
- Orbital Ophthalmic Plastic & Lacrimal Surgery Institute, Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer
- School of Medicine, Faculty of Medical and Health Sciences, Tel Aviv University, Ramat Aviv
- Mattan Arazi
- Orbital Ophthalmic Plastic & Lacrimal Surgery Institute, Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer
- School of Medicine, Faculty of Medical and Health Sciences, Tel Aviv University, Ramat Aviv
- Gahl Greenberg
- School of Medicine, Faculty of Medical and Health Sciences, Tel Aviv University, Ramat Aviv
- Department of Diagnostic Imaging, Section of Neuroradiology, Sheba Medical Center, Ramat Gan
- Limor Haviv
- School of Medicine, Faculty of Medical and Health Sciences, Tel Aviv University, Ramat Aviv
- PlanNet - The Sheba 3D Lab, Sheba Medical Center
- Or Benifla
- School of Medicine, Faculty of Medical and Health Sciences, Tel Aviv University, Ramat Aviv
- PlanNet - The Sheba 3D Lab, Sheba Medical Center
- Amit Zabatani
- School of Medicine, Faculty of Medical and Health Sciences, Tel Aviv University, Ramat Aviv
- Department of Orthopedics, Sheba Medical Center
- The Sheba Talpiot Medical Leadership Program, Sheba Medical Center, Tel Hashomer
- Ina Fabian
- Department of Cell and Developmental Biology, School of Medicine, Tel Aviv University, Tel Aviv
- Mayan Dagan
- Adelson School of Medicine, Ariel University, Ariel, Israel
- Joel M Gutovitz
- Orbital Ophthalmic Plastic & Lacrimal Surgery Institute, Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer
- School of Medicine, Faculty of Medical and Health Sciences, Tel Aviv University, Ramat Aviv
- Guy J Ben Simon
- Orbital Ophthalmic Plastic & Lacrimal Surgery Institute, Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer
- School of Medicine, Faculty of Medical and Health Sciences, Tel Aviv University, Ramat Aviv
- The Sheba Talpiot Medical Leadership Program, Sheba Medical Center, Tel Hashomer
- Daphna Landau-Prat
- Orbital Ophthalmic Plastic & Lacrimal Surgery Institute, Goldschleger Eye Institute, Sheba Medical Center, Tel Hashomer
- School of Medicine, Faculty of Medical and Health Sciences, Tel Aviv University, Ramat Aviv
- The Sheba Talpiot Medical Leadership Program, Sheba Medical Center, Tel Hashomer
3
Li W, Song H, Ai D, Shi J, Wang Y, Wu W, Yang J. Semi-supervised segmentation of orbit in CT images with paired copy-paste strategy. Comput Biol Med 2024; 171:108176. [PMID: 38401453 DOI: 10.1016/j.compbiomed.2024.108176]
Abstract
The segmentation of the orbit in computed tomography (CT) images plays a crucial role in the quantitative analysis of orbital decompression surgery for patients with thyroid-associated ophthalmopathy (TAO). However, orbit segmentation, particularly in postoperative images, remains challenging due to significant shape variation and the limited amount of labeled data. In this paper, we present a two-stage semi-supervised framework for the automatic segmentation of the orbit in both preoperative and postoperative images, consisting of a pseudo-label generation stage and a semi-supervised segmentation stage. A Paired Copy-Paste strategy is also introduced to combine features extracted from preoperative and postoperative images, strengthening the network's ability to discern changes in orbital boundaries. More specifically, we employ a random cropping technique to transfer regions from labeled preoperative images (foreground) onto unlabeled postoperative images (background), as well as from unlabeled preoperative images (foreground) onto labeled postoperative images (background). Notably, each pair of preoperative and postoperative images belongs to the same patient. The semi-supervised segmentation network (stage 2) uses a combination of mixed supervisory signals from pseudo labels (stage 1) and ground truth to process the two mixed images. The proposed method was trained and tested on a CT dataset obtained from the Eye Hospital of Wenzhou Medical University. The experimental results demonstrate that the proposed method achieves a mean Dice similarity coefficient (DSC) of 91.92% with only 5% labeled data, surpassing the performance of the current state-of-the-art method by 2.4%.
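A minimal sketch of the Paired Copy-Paste idea described above: a random crop from one image of a patient's preoperative/postoperative pair is pasted onto the other image of the same pair, so the mixed image contains both appearances. Shapes, crop size, and the random data are illustrative assumptions:

```python
# Paired Copy-Paste sketch: paste a random crop x crop patch from fg_img
# (e.g. a labeled preoperative slice) onto bg_img (the same patient's
# unlabeled postoperative slice). Illustrative arrays, not CT data.
import numpy as np

def paired_copy_paste(fg_img, bg_img, crop=32, rng=None):
    """Return bg_img with a random crop x crop patch of fg_img pasted in."""
    rng = rng or np.random.default_rng()
    h, w = fg_img.shape
    y = rng.integers(0, h - crop)   # random top-left corner of the patch
    x = rng.integers(0, w - crop)
    mixed = bg_img.copy()
    mixed[y:y + crop, x:x + crop] = fg_img[y:y + crop, x:x + crop]
    return mixed

rng = np.random.default_rng(0)
preop = rng.random((128, 128))    # stand-in labeled preoperative slice
postop = rng.random((128, 128))   # stand-in unlabeled postoperative slice
mixed = paired_copy_paste(preop, postop, crop=32, rng=rng)
print(mixed.shape)
```

In the paper the same cropping is applied to the corresponding label maps, so the mixed image can be supervised by a mix of ground truth and pseudo labels.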
Affiliation(s)
- Wentao Li
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jieliang Shi
- Eye Hospital of Wenzhou Medical University, Wenzhou, 325027, China
- Yuanyuan Wang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Wencan Wu
- Eye Hospital of Wenzhou Medical University, Wenzhou, 325027, China
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
4
Fu SJ, Yang EC. Neuroplasticity in honey bee brains: An enhanced micro-computed tomography protocol for precise mushroom body volume measurement. J Neurosci Methods 2024; 403:110040. [PMID: 38135123 DOI: 10.1016/j.jneumeth.2023.110040]
Abstract
BACKGROUND In insect brains, mushroom bodies are associated with memory and learning behavior. It has been demonstrated that the volume of the mushroom bodies in the brain of a worker honey bee changes during the adult stage. Changes in mushroom body volume imply high neuroplasticity in the brain and may be related to the age polyethism of honey bees. A suitable volume measurement method is needed to understand the correlation between behavioral changes and mushroom body volume changes in honey bees. NEW METHOD We developed a new protocol for insect micro-computed tomography by modifying a previously reported method. Permount™ mounting medium was used as the embedding medium for micro-computed tomography scanning. RESULTS This protocol can generate images with high contrast inside the brain and reduce marked shape changes during specimen processing. From the resulting high-contrast images, we used freeware to generate a three-dimensional model and calculate the volumes of the mushroom bodies in honey bees. The measured volumes of the mushroom bodies were larger than the values reported in most previous studies. There was no significant difference between the left and right mushroom body volumes, but the volumes of honey bee mushroom bodies significantly increased with age. COMPARISON WITH EXISTING METHODS Previous protocols for micro-computed tomography using dried samples caused brain shrinkage; protocols using ethanol-preserved or resin-embedded samples generated images with lower contrast. CONCLUSIONS The embedding protocol for micro-computed tomography is suitable for calculating the volumes of the mushroom bodies in honey bee brains.
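Once a segmentation exists, the volume calculation described above reduces to counting labeled voxels and scaling by the voxel size; the mask and the 5 µm voxel edge below are assumed illustrative values, not the protocol's actual resolution:

```python
# Volume from a binary segmentation: count voxels, multiply by voxel volume.
# The mask and voxel size are illustrative stand-ins for a micro-CT stack.
import numpy as np

voxel_size_um = 5.0                          # assumed isotropic voxel edge, in µm
mask = np.zeros((100, 100, 100), dtype=bool)
mask[20:60, 30:70, 10:50] = True             # stand-in for a segmented mushroom body

volume_um3 = mask.sum() * voxel_size_um**3   # 40*40*40 voxels * 125 µm³ each
print(f"{volume_um3:.0f} µm³")
```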
Affiliation(s)
- Shang-Jui Fu
- Department of Entomology, National Taiwan University, Taiwan
- En-Cheng Yang
- Department of Entomology, National Taiwan University, Taiwan
5
Sigron GR, Britschgi CL, Gahl B, Thieringer FM. Insights into Orbital Symmetry: A Comprehensive Retrospective Study of 372 Computed Tomography Scans. J Clin Med 2024; 13:1041. [PMID: 38398354 PMCID: PMC10889405 DOI: 10.3390/jcm13041041]
Abstract
Background: The operation planning and production of individualized implants with the help of AI-based software after orbital fractures have become increasingly important in recent years. This retrospective study aimed to investigate the healthy orbits of 372 patients from CT images in the bone and soft tissue windows using the Disior™ Bonelogic™ CMF Orbital software (version 2.1.28). Methods: We analyzed the variables orbital volume, length, and area as a function of age and gender and compared the bone and soft tissue windows. Results: For all variables, the intraclass correlation showed excellent agreement between the bone and soft tissue windows (p < 0.001). All variables showed higher values when calculated from the bone window, with, on average, 1 mL more volume, 0.35 mm more length, and 0.71 cm2 more area (p < 0.001). Across all age groups, men displayed higher values than women, with, on average, 8.1 mL larger volume, a 4.78 mm longer orbit, and an 8.5 cm2 larger orbital area (p < 0.001). There was also a non-significant trend toward growth with increasing age in all variables and both sexes. Conclusions: These results mean that, owing to the symmetry of the orbits in both the bone and soft tissue windows, the healthy orbit can be mirrored for surgical planning in the event of a fracture.
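The mirroring step suggested in the conclusions is, on a voxel mask, a flip across the sagittal midline; a minimal sketch (the z, y, x axis order with x as the left-right axis is an assumption about the volume's orientation):

```python
# Mirror a healthy-orbit mask across the midline by flipping the left-right
# axis. Tiny illustrative mask; real planning would use the CT-aligned volume.
import numpy as np

orbit_mask = np.zeros((4, 4, 4), dtype=bool)  # (z, y, x), x = left-right
orbit_mask[1:3, 1:3, 0:2] = True              # healthy orbit on one side

mirrored = np.flip(orbit_mask, axis=2)        # mirror across the midline
print(mirrored.sum(), orbit_mask.sum())       # volume is preserved by the flip
```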
Affiliation(s)
- Guido R. Sigron
- Department of Oral and Cranio-Maxillofacial Surgery and 3D Print Lab, University Hospital Basel, CH-4031 Basel, Switzerland
- Medical Additive Manufacturing Research Group (Swiss MAM), Department of Biomedical Engineering, University of Basel, CH-4123 Allschwil, Switzerland
- Céline L. Britschgi
- Department of Oral and Cranio-Maxillofacial Surgery and 3D Print Lab, University Hospital Basel, CH-4031 Basel, Switzerland
- Medical Additive Manufacturing Research Group (Swiss MAM), Department of Biomedical Engineering, University of Basel, CH-4123 Allschwil, Switzerland
- Brigitta Gahl
- Surgical Outcome Research Center, Department of Clinical Research, University Hospital Basel, University of Basel, CH-4031 Basel, Switzerland
- Florian M. Thieringer
- Department of Oral and Cranio-Maxillofacial Surgery and 3D Print Lab, University Hospital Basel, CH-4031 Basel, Switzerland
- Medical Additive Manufacturing Research Group (Swiss MAM), Department of Biomedical Engineering, University of Basel, CH-4123 Allschwil, Switzerland
6
Wu KY, Kulbay M, Daigle P, Nguyen BH, Tran SD. Nonspecific Orbital Inflammation (NSOI): Unraveling the Molecular Pathogenesis, Diagnostic Modalities, and Therapeutic Interventions. Int J Mol Sci 2024; 25:1553. [PMID: 38338832 PMCID: PMC10855920 DOI: 10.3390/ijms25031553]
Abstract
Nonspecific orbital inflammation (NSOI), colloquially known as orbital pseudotumor, sometimes presents a diagnostic and therapeutic challenge in ophthalmology. This review aims to dissect NSOI through a molecular lens, offering a comprehensive overview of its pathogenesis, clinical presentation, diagnostic methods, and management strategies. The article delves into the underpinnings of NSOI, examining immunological and environmental factors alongside intricate molecular mechanisms involving signaling pathways, cytokines, and mediators. Special emphasis is placed on emerging molecular discoveries and approaches, highlighting the significance of understanding molecular mechanisms in NSOI for the development of novel diagnostic and therapeutic tools. Various diagnostic modalities are scrutinized for their utility and limitations. Therapeutic interventions encompass medical treatments with corticosteroids and immunomodulatory agents, all discussed in light of current molecular understanding. More importantly, this review offers a novel molecular perspective on NSOI, dissecting its pathogenesis and management with an emphasis on the latest molecular discoveries. It introduces an integrated approach combining advanced molecular diagnostics with current clinical assessments and explores emerging targeted therapies. By synthesizing these facets, the review aims to inform clinicians and researchers alike, paving the way for molecularly informed, precision-based strategies for managing NSOI.
Affiliation(s)
- Kevin Y. Wu
- Department of Surgery, Division of Ophthalmology, University of Sherbrooke, Sherbrooke, QC J1G 2E8, Canada
- Merve Kulbay
- Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC H4A 0A4, Canada
- Patrick Daigle
- Department of Surgery, Division of Ophthalmology, University of Sherbrooke, Sherbrooke, QC J1G 2E8, Canada
- Bich H. Nguyen
- CHU Sainte Justine Hospital, Montreal, QC H3T 1C5, Canada
- Simon D. Tran
- Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, QC H3A 1G1, Canada
7
Kim H, Jeon YD, Park KB, Cha H, Kim MS, You J, Lee SW, Shin SH, Chung YG, Kang SB, Jang WS, Yoon DK. Automatic segmentation of inconstant fractured fragments for tibia/fibula from CT images using deep learning. Sci Rep 2023; 13:20431. [PMID: 37993627 PMCID: PMC10665312 DOI: 10.1038/s41598-023-47706-4]
Abstract
Orthopaedic surgeons need to correctly identify bone fragments using 2D/3D CT images before trauma surgery. Advances in deep learning technology offer advantages over manual diagnosis in trauma surgery. This study demonstrates the application of a DeepLab v3+-based deep learning model for the automatic segmentation of fragments of the fractured tibia and fibula from CT images and evaluates its performance. The deep learning model, which was trained using over 11 million images, showed good performance, with a global accuracy of 98.92%, a weighted intersection over union of 0.9841, and a mean boundary F1 score of 0.8921. Moreover, the model segmented fragments 5-8 times faster than manual recognition by experts, with comparable results. This study will play an important role in convenient and rapid preoperative surgical planning for trauma surgery.
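The aggregate metrics quoted above (global accuracy and class-frequency-weighted intersection over union) have simple definitions; a minimal sketch on a toy two-class label map, with illustrative data rather than the study's CT images:

```python
# Global (pixel) accuracy and class-frequency-weighted IoU on a toy label map.
import numpy as np

def global_accuracy(pred, gt):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return (pred == gt).mean()

def weighted_iou(pred, gt, n_classes=2):
    """Per-class IoU averaged with weights proportional to class pixel counts."""
    ious, weights = [], []
    for c in range(n_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:          # class absent from both masks: skip it
            continue
        ious.append(np.logical_and(p, g).sum() / union)
        weights.append(g.sum())
    return np.average(ious, weights=weights)

gt = np.array([[0, 0, 1, 1]] * 4)    # ground truth: left half class 0, right half class 1
pred = np.array([[0, 0, 0, 1]] * 4)  # prediction misses one column of class 1
print(global_accuracy(pred, gt), round(weighted_iou(pred, gt), 3))
```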
Affiliation(s)
- Hyeonjoo Kim
- Department of Medical Device Engineering and Management, College of Medicine, Yonsei University, Seoul, Republic of Korea
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Young Dae Jeon
- Department of Orthopedic Surgery, University of Ulsan, College of Medicine, Ulsan University Hospital, Ulsan, Republic of Korea
- Ki Bong Park
- Department of Orthopedic Surgery, University of Ulsan, College of Medicine, Ulsan University Hospital, Ulsan, Republic of Korea
- Hayeong Cha
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Moo-Sub Kim
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Juyeon You
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Se-Won Lee
- Department of Orthopedic Surgery, Yeouido St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Seung-Han Shin
- Department of Orthopedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Yang-Guk Chung
- Department of Orthopedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Sung Bin Kang
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Won Seuk Jang
- Department of Medical Device Engineering and Management, College of Medicine, Yonsei University, Seoul, Republic of Korea
- Do-Kun Yoon
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
8
Xu J, Zhang D, Wang C, Zhou H, Li Y, Chen X. Automatic segmentation of orbital wall from CT images via a thin wall region supervision-based multi-scale feature search network. Int J Comput Assist Radiol Surg 2023; 18:2051-2062. [PMID: 37219805 DOI: 10.1007/s11548-023-02924-z]
Abstract
PURPOSE Orbital wall segmentation is critical for orbital measurement and reconstruction. However, the orbital floor and medial wall are made up of thin walls (TW) with low gradient values, making the blurred areas of CT images difficult to segment. Clinically, doctors have to manually repair the missing parts of the TW, which is time-consuming and laborious. METHODS To address these issues, this paper proposes an automatic orbital wall segmentation method based on TW region supervision using a multi-scale feature search network. First, in the encoding branch, densely connected atrous spatial pyramid pooling based on residual connections is adopted to achieve a multi-scale feature search. Then, for feature enhancement, multi-scale up-sampling and residual connections are applied to perform skip connection of features in multi-scale convolution. Finally, we explore a strategy for improving the loss function based on TW region supervision, which effectively increases TW region segmentation accuracy. RESULTS The test results show that the proposed network performs well in terms of automatic segmentation. For the whole orbital wall region, the Dice coefficient (Dice) reaches 96.086 ± 1.049%, the Intersection over Union (IoU) reaches 92.486 ± 1.924%, and the 95% Hausdorff distance (HD) reaches 0.509 ± 0.166 mm. For the TW region, the Dice reaches 91.470 ± 1.739%, the IoU reaches 84.327 ± 2.938%, and the 95% HD reaches 0.481 ± 0.082 mm. Compared with other segmentation networks, the proposed network improves segmentation accuracy while filling the missing parts in the TW region. CONCLUSION With the proposed network, the average segmentation time for each orbital wall is only 4.05 s, substantially improving doctors' segmentation efficiency. In the future, it may have practical significance in clinical applications such as preoperative planning for orbital reconstruction, orbital modeling, and orbital implant design.
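The Dice coefficient and IoU reported above can be computed directly from binary masks; a minimal sketch with tiny illustrative arrays, not orbital CT data:

```python
# Overlap metrics between a predicted and a ground-truth binary mask:
# Dice = 2|A∩B| / (|A|+|B|), IoU = |A∩B| / |A∪B|.
import numpy as np

def dice_iou(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dice, iou

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 0, 0], [0, 1, 1]])
dice, iou = dice_iou(pred, gt)
print(f"Dice = {dice:.3f}, IoU = {iou:.3f}")
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers often report both for the same masks.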
Affiliation(s)
- Jiangchang Xu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, Room 925, School of Mechanical Engineering, Shanghai Jiao Tong University, Dongchuan Road 800, Minhang District, Shanghai, 200240, China
- Dingzhong Zhang
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, Room 925, School of Mechanical Engineering, Shanghai Jiao Tong University, Dongchuan Road 800, Minhang District, Shanghai, 200240, China
- Chunliang Wang
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden
- Huifang Zhou
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Yinwei Li
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, Room 925, School of Mechanical Engineering, Shanghai Jiao Tong University, Dongchuan Road 800, Minhang District, Shanghai, 200240, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
9
Morita D, Kawarazaki A, Koimizu J, Tsujiko S, Soufi M, Otake Y, Sato Y, Numajiri T. Automatic orbital segmentation using deep learning-based 2D U-net and accuracy evaluation: A retrospective study. J Craniomaxillofac Surg 2023; 51:609-613. [PMID: 37813770 DOI: 10.1016/j.jcms.2023.09.003]
Abstract
The purpose of this study was to verify whether the accuracy of automatic segmentation (AS) of computed tomography (CT) images of fractured orbits using deep learning (DL) is sufficient for clinical application. In the surgery of orbital fractures, many methods have been reported to create a 3D anatomical model for use as a reference. However, because the orbit bone is thin and complex, creating a segmentation model for 3D printing is complicated and time-consuming. Here, the training of DL was performed using U-Net as the DL model, and the AS output was validated with Dice coefficients and average symmetry surface distance (ASSD). In addition, the AS output was 3D printed and evaluated for accuracy by four surgeons, each with over 15 years of clinical experience. One hundred twenty-five CT images were prepared, and manual orbital segmentation was performed in all cases. Ten orbital fracture cases were randomly selected as validation data, and the remaining 115 were set as training data. AS was successful in all cases, with good accuracy: Dice, 0.860 ± 0.033 (mean ± SD); ASSD, 0.713 ± 0.212 mm. In evaluating AS accuracy, the expert surgeons generally considered that it could be used for surgical support without further modification. The orbital AS algorithm developed using DL in this study is extremely accurate and can create 3D models rapidly at low cost, potentially enabling safer and more accurate surgeries.
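The average symmetric surface distance (ASSD) used for validation above can be sketched with a Euclidean distance transform; the 2D masks and unit voxel spacing here are illustrative simplifications of the 3D, millimeter-spaced case:

```python
# ASSD sketch: average distance from each boundary voxel of one mask to the
# nearest boundary voxel of the other, symmetrized. Illustrative 2D masks.
import numpy as np
from scipy import ndimage

def surface(mask):
    """Boundary voxels: mask minus its one-voxel erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def assd(a, b):
    sa, sb = surface(a), surface(b)
    # distance_transform_edt gives, per voxel, the distance to the nearest
    # zero voxel, so ~sb yields distances to b's surface; index with a's surface
    da = ndimage.distance_transform_edt(~sb)[sa]  # A-surface -> B-surface
    db = ndimage.distance_transform_edt(~sa)[sb]  # B-surface -> A-surface
    return (da.sum() + db.sum()) / (len(da) + len(db))

a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
print(f"ASSD = {assd(a, b):.3f}")
```

For real CT data the distances would be scaled by the voxel spacing so the result comes out in millimeters, as reported in the study.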
Affiliation(s)
- Daiki Morita
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Ayako Kawarazaki
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Jungen Koimizu
- Department of Plastic and Reconstructive Surgery, Omihachiman Community Medical Center, Shiga, Japan
- Shoko Tsujiko
- Department of Plastic and Reconstructive Surgery, Saiseikai Shigaken Hospital, Shiga, Japan
- Mazen Soufi
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshito Otake
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshinobu Sato
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Toshiaki Numajiri
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
10
Milham N, Schmutz B, Cooper T, Hsu E, Hutmacher DW, Lynham A. Are Magnetic Resonance Imaging-Generated 3-Dimensional Models Comparable to Computed Tomography-Generated 3-Dimensional Models for Orbital Fracture Reconstruction? An In-Vitro Volumetric Analysis. J Oral Maxillofac Surg 2023; 81:1116-1123. [PMID: 37336493 DOI: 10.1016/j.joms.2023.05.015]
Abstract
BACKGROUND Magnetic resonance imaging (MRI) is being increasingly considered as an alternative for the evaluation and reconstruction of orbital fractures. No previous research has compared the orbital volume of an MRI-imaged, three-dimensional (3D), reconstructed, and virtually restored bony orbit to the gold standard of computed tomography (CT). PURPOSE To measure the orbital volumes generated from MRI-based 3D models of fractured bony orbits with virtually positioned prebent fan plates in situ and compare them to the volumes of CT-based virtually reconstructed orbital models. STUDY DESIGN This retrospective in-vitro study used CT and MRI data from adult patients with orbital trauma assessed at the Royal Brisbane and Women's Hospital Outpatient Maxillofacial Clinic from 2011 to 2012. Only those with orbital blowout fractures were included in the study. PREDICTOR VARIABLE The primary predictor variable was imaging modality, with CT- and MRI-based 3D models used for plate bending and placement. MAIN OUTCOME VARIABLE The primary outcome variable was the orbital volume of the enclosed 3D models. COVARIATES Additional data collected was age, sex, and side of fractured orbit. The effect of operator variability on plate contouring and orbital volume was quantified. ANALYSES The Wilcoxon signed rank test was used to assess differences between orbital volumes with a significance level P < .05. RESULTS Of 11 eligible participants, six patients (four male and two female; mean age 31 ± 8.6 years) were enrolled. Two sets of six CT-based virtually restored orbits were smaller than the intact contralateral CT models by an average of 1.02 cm3 (95% CI -0.07 to 2.11 cm3; P = .028) and 0.99 cm3 (95% CI 0.07 to 1.91 cm3; P = .028), respectively. The average volume difference between the MRI-based virtually restored orbit and the intact contralateral MRI model was 0.97 cm3 (95% CI -1.08 to 1.94 cm3; P = .75). 
Imaging modality did affect orbital volume difference for 1 set of CT and MRI models (0.63 cm3; 95% CI -0.11 to 1.29 cm3; P = .046) but not the other (0.69 cm3; 95% CI -0.11 to 1.23 cm3; P = .075). Single operator variability in plate bending did not result in significant (P = .75) volume differences. CONCLUSIONS MRI can be used to reconstruct orbital volume with a clinically acceptable level of accuracy.
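The Wilcoxon signed rank test used in the analysis above compares paired orbital volumes; a minimal sketch with made-up paired values (cm³), not the study's measurements:

```python
# Wilcoxon signed rank test on paired orbital volumes (illustrative values).
from scipy import stats

restored = [24.1, 26.3, 25.0, 27.8, 23.5, 26.9]       # virtually restored orbits
contralateral = [25.2, 27.2, 26.4, 28.3, 25.1, 27.5]  # intact contralateral orbits

# Nonparametric paired test: ranks the absolute volume differences, so it
# suits the small sample size (n = 6) reported in the study.
stat, p = stats.wilcoxon(restored, contralateral)
print(f"W = {stat}, p = {p:.3f}")
```

Here every restored orbit is smaller than its contralateral, so the rank-sum statistic is 0 and the difference is significant even at n = 6.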
Affiliation(s)
- Nicole Milham
- Registrar, Department of Oral and Maxillofacial Surgery, Royal Brisbane and Women's Hospital, Brisbane, Australia.
| | - Beat Schmutz
- Principal Research Fellow, School of Mechanical, Medical and Process Engineering, Faculty of Engineering, Queensland University of Technology; Jamieson Trauma Institute, Metro North Hospital and Health Service; Centre for Biomedical Technologies, Queensland University of Technology; ARC Training Centre for Multiscale 3D Imaging, Modelling, Manufacturing, Queensland University of Technology, Brisbane, Australia
| | - Thomas Cooper
- Fellow in Oral and Maxillofacial Surgery, Canberra Hospital, Canberra, Australia
| | - Edward Hsu
- Senior Staff Specialist, Department of Oral and Maxillofacial Surgery, Royal Brisbane and Women's Hospital, Brisbane, Australia
| | - Dietmar W Hutmacher
- Distinguished Professor, School of Mechanical, Medical and Process Engineering, Faculty of Engineering, Queensland University of Technology, Centre for Biomedical Technologies, Queensland University of Technology, ARC Training Centre for Multiscale 3D Imaging, Modelling, Manufacturing, Queensland University of Technology, Max Planck Queensland Centre for the Materials Science of Extracellular Matrices, Jamieson Trauma Institute, Metro North Hospital and Health Service, Brisbane, Australia
| | - Anthony Lynham
- Associate Professor, Jamieson Trauma Institute, Metro North Hospital and Health Service, Brisbane, Australia
11
Lee J, Lee S, Lee WJ, Moon NJ, Lee JK. Neural network application for assessing thyroid-associated orbitopathy activity using orbital computed tomography. Sci Rep 2023; 13:13018. [PMID: 37563272] [PMCID: PMC10415276] [DOI: 10.1038/s41598-023-40331-1]
Abstract
This study aimed to propose a neural network (NN)-based method for evaluating thyroid-associated orbitopathy (TAO) patient activity using orbital computed tomography (CT). Orbital CT scans were obtained from 144 active and 288 inactive TAO patients. These CT scans were preprocessed by selecting eleven slices from the axial, coronal, and sagittal planes and segmenting the region of interest. We devised an NN that employs information extracted from 13 pipelines to assess these slices, together with patients' age and sex, for TAO activity evaluation. In distinguishing active from inactive TAO patients, the proposed NN achieved an area under the receiver operating characteristic curve (AUROC) of 0.871, a sensitivity of 0.786, and a specificity of 0.779. In contrast, the comparison models CSPDenseNet and ConvNeXt were significantly inferior to the proposed model, with AUROC values of 0.819 (p = 0.029) and 0.774 (p = 0.04), respectively. Ablation studies based on the Sequential Forward Selection algorithm identified the information vital for optimal performance and showed that the NNs performed best with three to five active pipelines. This study establishes a promising tool for diagnosing TAO activity, pending further validation.
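A minimal sketch of the three evaluation metrics reported above, computed from hypothetical activity scores rather than the study's predictions:

```python
def auroc(scores, labels):
    """Area under the ROC curve via its rank interpretation: the probability
    that a randomly chosen positive outscores a randomly chosen negative
    (ties counted as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(scores, labels, threshold=0.5):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) at a given cutoff."""
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
    tn = sum(s < threshold and y == 0 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical activity scores for four patients (1 = active TAO, 0 = inactive)
scores, labels = [0.9, 0.4, 0.6, 0.1], [1, 1, 0, 0]
print(auroc(scores, labels))                    # 0.75
print(sensitivity_specificity(scores, labels))  # (0.5, 0.5)
```

Unlike sensitivity and specificity, AUROC is threshold-free, which is why the abstract reports it alongside the single-cutoff pair.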
Affiliation(s)
- Jaesung Lee
- Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea
- AI/ML Research Innovation Center, Chung-Ang University, Seoul, Korea
| | - Sanghyuck Lee
- Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea
| | - Won Jun Lee
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, 102 Heukseok-Ro, Dongjak-Gu, Seoul, 06973, Korea
| | - Nam Ju Moon
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, 102 Heukseok-Ro, Dongjak-Gu, Seoul, 06973, Korea
| | - Jeong Kyu Lee
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, 102 Heukseok-Ro, Dongjak-Gu, Seoul, 06973, Korea.
12
Rabbat N, Qureshi A, Hsu KT, Asif Z, Chitnis P, Shobeiri SA, Wei Q. Automated Segmentation of Levator Ani Muscle from 3D Endovaginal Ultrasound Images. Bioengineering (Basel) 2023; 10:894. [PMID: 37627779] [PMCID: PMC10451809] [DOI: 10.3390/bioengineering10080894]
Abstract
Levator ani muscle (LAM) avulsion is a common complication of vaginal childbirth and is linked to several pelvic floor disorders. Diagnosing and treating these conditions require imaging of the pelvic floor and examination of the obtained images, which is a time-consuming process subject to operator variability. In our study, we proposed using deep learning (DL) to automate the segmentation of the LAM from 3D endovaginal ultrasound (EVUS) images to improve diagnostic accuracy and efficiency. Over one thousand images extracted from the 3D EVUS data of healthy subjects and patients with pelvic floor disorders were utilized for the automated LAM segmentation. A U-Net model was implemented, with Intersection over Union (IoU) and Dice metrics used for model performance evaluation. The model achieved a mean Dice score of 0.86, demonstrating better performance than existing works. The mean IoU was 0.76, indicative of a high degree of overlap between the automated and manual segmentations of the LAM. Three other models, Attention U-Net, FD-UNet, and Dense-UNet, were also applied to the same images and showed comparable results. Our study demonstrated the feasibility and accuracy of DL segmentation with the U-Net architecture for automating LAM segmentation, reducing the time and resources required for manual segmentation of 3D EVUS images. The proposed method could become an important component of AI-based diagnostic tools, particularly in low socioeconomic regions where access to healthcare resources is limited. By improving the management of pelvic floor disorders, our approach may contribute to better patient outcomes in these underserved areas.
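The two overlap metrics reported here can be sketched on toy binary masks (the flat lists below are illustrative, not EVUS data). The two metrics are linked by the fixed relation Dice = 2·IoU/(1 + IoU), so the reported Dice of 0.86 corresponds to an IoU of about 0.75, consistent with the reported 0.76.

```python
def dice(pred, truth):
    """Dice coefficient for flat binary masks (1 = LAM pixel, 0 = background)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def iou(pred, truth):
    """Intersection over Union for the same masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    union = sum(max(p, t) for p, t in zip(pred, truth))
    return inter / union if union else 1.0

pred  = [1, 1, 1, 0, 0, 0]   # toy predicted mask
truth = [0, 1, 1, 1, 0, 0]   # toy manual (ground-truth) mask
print(dice(pred, truth))  # 2*2/(3+3) = 0.666...
print(iou(pred, truth))   # 2/4 = 0.5
```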
Affiliation(s)
- Nada Rabbat
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA; (N.R.); (A.Q.); (K.-T.H.); (P.C.); (S.A.S.)
| | - Amad Qureshi
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA; (N.R.); (A.Q.); (K.-T.H.); (P.C.); (S.A.S.)
| | - Ko-Tsung Hsu
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA; (N.R.); (A.Q.); (K.-T.H.); (P.C.); (S.A.S.)
| | - Zara Asif
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA; (N.R.); (A.Q.); (K.-T.H.); (P.C.); (S.A.S.)
| | - Parag Chitnis
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA; (N.R.); (A.Q.); (K.-T.H.); (P.C.); (S.A.S.)
| | - Seyed Abbas Shobeiri
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA; (N.R.); (A.Q.); (K.-T.H.); (P.C.); (S.A.S.)
- Inova Fairfax Hospital, Fairfax, VA 22042, USA
| | - Qi Wei
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA; (N.R.); (A.Q.); (K.-T.H.); (P.C.); (S.A.S.)
13
Qureshi A, Lim S, Suh SY, Mutawak B, Chitnis PV, Demer JL, Wei Q. Deep-Learning-Based Segmentation of Extraocular Muscles from Magnetic Resonance Images. Bioengineering (Basel) 2023; 10:699. [PMID: 37370630] [DOI: 10.3390/bioengineering10060699]
Abstract
In this study, we investigated the performance of four deep learning frameworks, U-Net, U-NeXt, DeepLabV3+, and ConResNet, in multi-class pixel-based segmentation of the extraocular muscles (EOMs) from coronal MRI. Performances of the four models were evaluated and compared using the standard F-measure-based metrics of intersection over union (IoU) and Dice, where U-Net achieved the highest overall IoU and Dice scores of 0.77 and 0.85, respectively. The centroid distance offset between identified and ground-truth EOM centroids was also measured; U-Net and DeepLabV3+ achieved comparably low offsets (p > 0.05) of 0.33 mm and 0.35 mm, respectively. Our results also demonstrated that segmentation accuracy varies across spatially different image planes. This study systematically compared factors that impact the variability of segmentation and morphometric accuracy of deep learning models when applied to segmenting EOMs from MRI.
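The centroid-offset metric can be sketched as follows; the toy masks and the 0.5 mm pixel spacing are assumed values for illustration, not the study's imaging parameters.

```python
import math

def centroid(mask):
    """Centroid (row, col) of a 2D binary mask given as nested lists."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    return (sum(r for r, _ in pts) / len(pts), sum(c for _, c in pts) / len(pts))

def centroid_offset_mm(pred, truth, spacing_mm):
    """Euclidean distance between mask centroids, scaled by pixel spacing."""
    (r1, c1), (r2, c2) = centroid(pred), centroid(truth)
    return math.hypot(r1 - r2, c1 - c2) * spacing_mm

pred  = [[0, 1],
         [0, 1]]   # toy predicted EOM mask
truth = [[1, 0],
         [1, 0]]   # toy ground-truth mask, shifted one column
print(centroid_offset_mm(pred, truth, spacing_mm=0.5))  # 0.5
```

A centroid offset complements Dice/IoU: two masks can overlap poorly yet still localize the muscle belly at nearly the same point, which matters for morphometry.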
Affiliation(s)
- Amad Qureshi
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
| | - Seongjin Lim
- Department of Ophthalmology, Neurology and Bioengineering, Jules Stein Eye Institute, University of California, Los Angeles, CA 90095, USA
| | - Soh Youn Suh
- Department of Ophthalmology, Neurology and Bioengineering, Jules Stein Eye Institute, University of California, Los Angeles, CA 90095, USA
| | - Bassam Mutawak
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
| | - Parag V Chitnis
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
| | - Joseph L Demer
- Department of Ophthalmology, Neurology and Bioengineering, Jules Stein Eye Institute, University of California, Los Angeles, CA 90095, USA
| | - Qi Wei
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
14
Wawer Matos PA, Reimer RP, Rokohl AC, Caldeira L, Heindl LM, Große Hokamp N. Artificial Intelligence in Ophthalmology - Status Quo and Future Perspectives. Semin Ophthalmol 2023; 38:226-237. [PMID: 36356300] [DOI: 10.1080/08820538.2022.2139625]
Abstract
Artificial intelligence (AI) is an emerging technology in healthcare and holds the potential to disrupt many areas of medical care. In particular, disciplines that use medical imaging modalities, including radiology and ophthalmology, are already confronted with a wide variety of AI implications. In ophthalmologic research, AI has demonstrated promising results, though limited to specific diseases and imaging tools. Yet implementation of AI in clinical routine is not widespread, owing to limited availability and to heterogeneity in imaging techniques and AI methods. To describe the status quo, this narrative review provides a brief introduction to AI ("what the ophthalmologist needs to know"), followed by an overview of different AI-based applications in ophthalmology and a discussion of future challenges. Abbreviations: Age-related macular degeneration, AMD; Artificial intelligence, AI; Anterior segment OCT, AS-OCT; Coronary artery calcium score, CACS; Convolutional neural network, CNN; Deep convolutional neural network, DCNN; Diabetic retinopathy, DR; Machine learning, ML; Optical coherence tomography, OCT; Retinopathy of prematurity, ROP; Support vector machine, SVM; Thyroid-associated ophthalmopathy, TAO.
Affiliation(s)
| | - Robert P Reimer
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
| | - Alexander C Rokohl
- Department of Ophthalmology, University Hospital of Cologne, Köln, Germany
| | - Liliana Caldeira
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
| | - Ludwig M Heindl
- Department of Ophthalmology, University Hospital of Cologne, Köln, Germany
| | - Nils Große Hokamp
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
15
Jonsson T. Micro-CT and deep learning: Modern techniques and applications in insect morphology and neuroscience. Front Insect Sci 2023; 3:1016277. [PMID: 38469492] [PMCID: PMC10926430] [DOI: 10.3389/finsc.2023.1016277]
Abstract
Advances in modern imaging and computer technologies have led to a steady rise in the use of micro-computed tomography (µCT) in many biological areas. In zoological research, this fast and non-destructive method for producing high-resolution, two- and three-dimensional images is increasingly being used for the functional analysis of the external and internal anatomy of animals. µCT is no longer limited to the analysis of specific biological tissues in a medical or preclinical context but can be combined with a variety of contrast agents to study form and function of all kinds of tissues and species, from mammals and reptiles to fish and microscopic invertebrates. Concurrently, advances in the field of artificial intelligence, especially in deep learning, have revolutionised computer vision and facilitated the automatic, fast and ever more accurate analysis of two- and three-dimensional image datasets. Here, I give a brief overview of both micro-computed tomography and deep learning and present their recent applications, especially within the field of insect science. Furthermore, the combination of both approaches to investigate neural tissues and the resulting potential for the analysis of insect sensory systems, from receptor structures via neuronal pathways to the brain, are discussed.
Affiliation(s)
- Thorin Jonsson
- Institute of Biology, Karl-Franzens-University Graz, Graz, Austria
16
Bonaldi L, Pretto A, Pirri C, Uccheddu F, Fontanella CG, Stecco C. Deep Learning-Based Medical Images Segmentation of Musculoskeletal Anatomical Structures: A Survey of Bottlenecks and Strategies. Bioengineering (Basel) 2023; 10:137. [PMID: 36829631] [PMCID: PMC9952222] [DOI: 10.3390/bioengineering10020137]
Abstract
By leveraging the recent development of artificial intelligence algorithms, several medical sectors have benefited from using automatic segmentation tools from bioimaging to segment anatomical structures. Segmentation of the musculoskeletal system is key for studying alterations in anatomical tissue and supporting medical interventions. The clinical use of such tools requires an understanding of the proper method for interpreting data and evaluating their performance. The current systematic review aims to present the common bottlenecks in the analysis of musculoskeletal structures (e.g., small sample size, data inhomogeneity) and the related strategies utilized by different authors. A search was performed using the PubMed database with the following keywords: deep learning, musculoskeletal system, segmentation. A total of 140 articles published up until February 2022 were obtained and analyzed according to the PRISMA framework in terms of anatomical structures, bioimaging techniques, pre/post-processing operations, training/validation/testing subset creation, network architecture, loss functions, performance indicators and so on. Several common trends emerged from this survey; however, the different methods need to be compared and discussed based on each specific case study (anatomical region, medical imaging acquisition setting, study population, etc.). These findings can be used to guide clinicians (as end users) to better understand the potential benefits and limitations of these tools.
Affiliation(s)
- Lorenza Bonaldi
- Department of Civil, Environmental and Architectural Engineering, University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
| | - Andrea Pretto
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
| | - Carmelo Pirri
- Department of Neuroscience, University of Padova, Via A. Gabelli 65, 35121 Padova, Italy
| | - Francesca Uccheddu
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
| | - Chiara Giulia Fontanella
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Correspondence: ; Tel.: +39-049-8276754
| | - Carla Stecco
- Department of Neuroscience, University of Padova, Via A. Gabelli 65, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
17
Lee SH, Lee S, Lee J, Lee JK, Moon NJ. Effective encoder-decoder neural network for segmentation of orbital tissue in computed tomography images of Graves' orbitopathy patients. PLoS One 2023; 18:e0285488. [PMID: 37163543] [PMCID: PMC10171592] [DOI: 10.1371/journal.pone.0285488]
Abstract
PURPOSE To propose a neural network (NN) that can effectively segment orbital tissue in computed tomography (CT) images of Graves' orbitopathy (GO) patients. METHODS We analyzed orbital CT scans from 701 GO patients diagnosed between 2010 and 2019 and devised an NN specializing in semantic segmentation of orbital tissue in GO patients' CT images. After training four conventional NNs (Attention U-Net, DeepLab V3+, SegNet, and HarDNet-MSEG) and the proposed NN on the manual orbital tissue segmentations, we calculated the Dice coefficient and Intersection over Union for comparison. RESULTS CT images of the eyeball, the four rectus muscles, the optic nerve, and the lacrimal gland from all 701 patients were analyzed in this study. In the axial image with the largest eyeball area, the proposed NN achieved the best performance, with Dice coefficients of 98.2% for the eyeball, 94.1% for the optic nerve, 93.0% for the medial rectus muscle, and 91.1% for the lateral rectus muscle. The proposed NN also gave the best performance for the coronal image. Our qualitative analysis demonstrated that the proposed NN provided more sophisticated orbital tissue segmentations for GO patients than the conventional NNs. CONCLUSION Our proposed NN exhibited improved CT image segmentation for GO patients over conventional NNs designed for semantic segmentation tasks.
Affiliation(s)
- Seung Hyeun Lee
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul, Korea
| | - Sanghyuck Lee
- Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea
| | - Jaesung Lee
- Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea
| | - Jeong Kyu Lee
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul, Korea
| | - Nam Ju Moon
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul, Korea
18
Bao XL, Sun YJ, Zhan X, Li GY. Orbital and eyelid diseases: The next breakthrough in artificial intelligence? Front Cell Dev Biol 2022; 10:1069248. [PMID: 36467418] [PMCID: PMC9716028] [DOI: 10.3389/fcell.2022.1069248]
Abstract
Orbital and eyelid disorders affect normal visual functions and facial appearance, and precise oculoplastic and reconstructive surgeries are crucial. Artificial intelligence (AI) network models exhibit a remarkable ability to analyze large sets of medical images to locate lesions. Currently, AI-based technology can automatically diagnose and grade orbital and eyelid diseases, such as thyroid-associated ophthalmopathy (TAO), as well as measure eyelid morphological parameters based on external ocular photographs to assist surgical strategies. The various types of imaging data for orbital and eyelid diseases provide a large amount of training data for network models, which might be the next breakthrough in AI-related research. This paper retrospectively summarizes different imaging data aspects addressed in AI-related research on orbital and eyelid diseases, and discusses the advantages and limitations of this research field.
Affiliation(s)
- Xiao-Li Bao
- Department of Ophthalmology, Second Hospital of Jilin University, Changchun, China
| | - Ying-Jian Sun
- Department of Ophthalmology, Second Hospital of Jilin University, Changchun, China
| | - Xi Zhan
- Department of Engineering, The Army Engineering University of PLA, Nanjing, China
| | - Guang-Yu Li
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
19
Ali TM, Nawaz A, Ur Rehman A, Ahmad RZ, Javed AR, Gadekallu TR, Chen CL, Wu CM. A Sequential Machine Learning-cum-Attention Mechanism for Effective Segmentation of Brain Tumor. Front Oncol 2022; 12:873268. [PMID: 35719987] [PMCID: PMC9202559] [DOI: 10.3389/fonc.2022.873268]
Abstract
Magnetic resonance imaging is the most widely utilized imaging modality that permits radiologists to look inside the brain, using radio waves and magnets, for tumor identification. However, identifying the tumorous and nontumorous regions is tedious and complex due to the complexity of the tumorous region. Therefore, reliable and automatic segmentation and prediction are necessary for the segmentation of brain tumors. This paper proposes a reliable and efficient neural network variant, i.e., an attention-based convolutional neural network, for brain tumor segmentation. Specifically, the encoder part of the U-Net is a pre-trained VGG19 network, followed by the adjacent decoder parts with an attention gate, segmentation noise induction, and a denoising mechanism for avoiding overfitting. The dataset used for segmentation is BraTS 2020, which comprises four different MRI modalities and one target mask file. The proposed algorithm achieved Dice similarity coefficients of 0.83, 0.86, and 0.90 for the enhancing, core, and whole tumors, respectively.
Affiliation(s)
- Tahir Mohammad Ali
- Department of Computer Science, GULF University for Science and Technology, Mishref, Kuwait
| | - Ali Nawaz
- Department of Computer Science, GULF University for Science and Technology, Mishref, Kuwait
| | - Attique Ur Rehman
- Department of Computer Science, GULF University for Science and Technology, Mishref, Kuwait
- Department of Software Engineering, University of Sialkot, Sialkot, Pakistan
| | - Rana Zeeshan Ahmad
- Department of Information Technology, University of Sialkot, Sialkot, Pakistan
| | | | - Thippa Reddy Gadekallu
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
| | - Chin-Ling Chen
- School of Information Engineering, Changchun Sci-Tech University, Changchun, China
- School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China
- Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung, Taiwan
| | - Chih-Ming Wu
- School of Civil Engineering and Architecture, Xiamen University of Technology, Xiamen, China
20
Agarwal C, Gupta S, Najjar M, Weaver TE, Zhou XJ, Schonfeld D, Prasad B. Deep Learning Analyses of Brain MRI to Identify Sustained Attention Deficit in Treated Obstructive Sleep Apnea: A Pilot Study. Sleep Vigil 2022; 6:179-184. [PMID: 35813983] [PMCID: PMC9269966] [DOI: 10.1007/s41782-021-00190-0]
Abstract
Purpose Persistent sustained attention deficit (SAD) after continuous positive airway pressure (CPAP) treatment is a source of quality-of-life and occupational impairment in obstructive sleep apnea (OSA). However, persistent SAD is difficult to predict in patients initiated on CPAP treatment. We performed secondary analyses of brain magnetic resonance (MR) images in treated OSA participants, using deep learning, to predict SAD. Methods 26 middle-aged men with CPAP use of more than 6 hours daily and MR imaging were included. SAD was defined as more than two psychomotor vigilance task lapses; 17 participants had SAD and 9 were without SAD. A convolutional neural network (CNN) model was used to classify the MR images into +SAD and -SAD categories. Results The CNN model achieved an accuracy of 97.02±0.80% in classifying MR images into +SAD and -SAD categories. Assuming a threshold of 90% probability for an MR image being correctly classified, the model provided a participant-level accuracy of 99.11±0.55% and a stable image-level accuracy of 97.45±0.63%. Conclusion Deep learning methods, such as the proposed CNN model, can accurately predict persistent SAD based on MR images. Further replication of these findings will allow early initiation of adjunctive pharmacologic treatment in high-risk patients, along with CPAP, to improve quality of life and occupational fitness. Future augmentation of this approach with explainable artificial intelligence methods may elucidate the neuroanatomical areas underlying persistent SAD, providing mechanistic insights and novel therapeutic targets.
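The 90%-threshold, participant-level aggregation can be sketched as below. The abstract does not spell out the exact aggregation rule, so the confidence filter, the majority vote, and the probabilities are all assumptions for illustration only.

```python
def participant_label(image_probs, threshold=0.9):
    """Aggregate per-image P(+SAD) into one participant-level call: keep only
    images classified with at least `threshold` confidence for either class,
    then take the majority vote over the retained images (an assumed rule)."""
    confident = [p for p in image_probs if p >= threshold or p <= 1 - threshold]
    plus_votes = sum(p >= threshold for p in confident)
    return "+SAD" if plus_votes > len(confident) / 2 else "-SAD"

# Hypothetical per-image probabilities for two participants
print(participant_label([0.95, 0.97, 0.50, 0.92]))  # +SAD (the 0.50 image is discarded)
print(participant_label([0.05, 0.03, 0.60]))        # -SAD
```

Discarding low-confidence images before voting is one plausible way a participant-level accuracy can exceed the image-level accuracy, as reported above.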
Affiliation(s)
- Chirag Agarwal
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
| | - Saransh Gupta
- Department of Medicine, University of Illinois Chicago, IL, USA
| | - Muhammad Najjar
- Department of Medicine, University of Illinois Chicago, IL, USA
- Jesse Brown VA Medical Center, Chicago, IL, USA
| | - Terri E. Weaver
- Biobehavioral Nursing Science, College of Nursing, University of Illinois Chicago, IL, USA
| | - Xiaohong Joe Zhou
- Center for Magnetic Resonance Research, University of Illinois Chicago, IL, USA
- Departments of Radiology, Neurosurgery, and Bioengineering, University of Illinois Chicago, IL, USA
| | - Dan Schonfeld
- Department of Electrical and Computer Engineering, University of Illinois Chicago, IL, USA
| | - Bharati Prasad
- Department of Medicine, University of Illinois Chicago, IL, USA
- Jesse Brown VA Medical Center, Chicago, IL, USA