1
Cai D, Zhou Y, He W, Yuan J, Liu C, Li R, Wang Y, Xia J. Automatic segmentation of knee CT images of tibial plateau fractures based on three-dimensional U-Net: Assisting junior physicians with Schatzker classification. Eur J Radiol 2024; 178:111605. [PMID: 39059081] [DOI: 10.1016/j.ejrad.2024.111605]
Abstract
PURPOSE This study aimed to automatically segment knee computed tomography (CT) images of tibial plateau fractures using a three-dimensional (3D) U-Net-based method, accurately construct 3D maps of tibial plateau fractures, and examine their usefulness for Schatzker classification in clinical practice. METHODS We retrospectively enrolled 234 cases of tibial plateau fracture from our hospital. The four constituent bones of the knee were manually annotated using ITK-SNAP software, and image features were then extracted using deep learning. The usefulness of the results for Schatzker classification was examined by an orthopaedic resident and a radiology resident. RESULTS On average, our model required < 40 s to process a 3D CT scan of the knee. The average Dice coefficient for each of the four knee bones exceeded 0.950, and highly accurate 3D maps of the tibia were produced. With the aid of our model's output, the accuracy, sensitivity, and specificity of both residents' Schatzker classifications improved. CONCLUSIONS The proposed method can rapidly and accurately segment knee CT images of tibial plateau fractures and assist residents with Schatzker classification, which can help improve diagnostic efficiency and reduce the workload of junior doctors in clinical practice.
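The Dice coefficient used to evaluate these segmentations measures voxel overlap between a predicted mask and a manual annotation. A minimal sketch of the standard computation (the function name and flattened toy masks are illustrative, not from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A intersect B| / (|A| + |B|) over binary voxel masks."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy flattened masks: 3 overlapping voxels, 4 predicted, 4 reference
pred  = [1, 1, 1, 1, 0, 0]
truth = [1, 1, 1, 0, 1, 0]
print(dice_coefficient(pred, truth))  # → 0.75
```

A value above 0.950, as reported here, means the predicted and reference bone masks are nearly identical voxel-for-voxel.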
Affiliation(s)
- Die Cai
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen Second People's Hospital, 3002 SunGang Road West, Shenzhen 518035, Guangdong Province, China
- Yu Zhou
- Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China
- Wenjie He
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen Second People's Hospital, 3002 SunGang Road West, Shenzhen 518035, Guangdong Province, China
- Jichun Yuan
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen Second People's Hospital, 3002 SunGang Road West, Shenzhen 518035, Guangdong Province, China
- Chenyuan Liu
- Five-year Clinical Medicine, Xiangya School of Medicine, Central South University, Changsha 410083, Hunan, China
- Rui Li
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen Second People's Hospital, 3002 SunGang Road West, Shenzhen 518035, Guangdong Province, China
- Yi Wang
- Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China.
- Jun Xia
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen Second People's Hospital, 3002 SunGang Road West, Shenzhen 518035, Guangdong Province, China.
2
Chen C, Chen Y, Li X, Ning H, Xiao R. Linear semantic transformation for semi-supervised medical image segmentation. Comput Biol Med 2024; 173:108331. [PMID: 38522252] [DOI: 10.1016/j.compbiomed.2024.108331]
Abstract
Medical image segmentation is a research focus and a foundation for developing intelligent medical systems. Recently, deep learning has become the standard approach to medical image segmentation and has achieved significant success, promoting progress in disease diagnosis, reconstruction, and surgical planning. However, semantic learning is often inefficient owing to the lack of supervision of feature maps, so high-quality segmentation models still rely on numerous, accurate data annotations. Learning robust semantic representations in latent spaces remains a challenge. In this paper, we propose a novel semi-supervised learning framework that learns vital attributes in medical images, constructing generalized representations from diverse semantics to realize medical image segmentation. We first build a self-supervised learning part that achieves context recovery by reconstructing the space and intensity of medical images, yielding semantic representations for the feature maps. Subsequently, we combine the semantic-rich feature maps and apply a simple linear semantic transformation to convert them into an image segmentation. The proposed framework was tested on five medical segmentation datasets. Quantitative assessments indicate the highest scores of our method on the IXI (73.78%), ScaF (47.50%), COVID-19-Seg (50.72%), PC-Seg (65.06%), and Brain-MR (72.63%) datasets. Finally, we compared our method with the latest semi-supervised learning methods and obtained DSC values of 77.15% and 75.22%, ranking first on two representative datasets. The experimental results not only prove that the proposed linear semantic transformation is effective for medical image segmentation but also demonstrate its simplicity and ease of use for pursuing robust segmentation in semi-supervised learning. Our code is available at: https://github.com/QingYunA/Linear-Semantic-Transformation-for-Semi-Supervised-Medical-Image-Segmentation.
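A "linear semantic transformation" of feature maps is, in its simplest form, a learned per-pixel linear map from feature channels to class scores, the shape a 1×1 convolution takes. The sketch below illustrates that generic form with invented weights; it is an assumption about the family of operation, not the authors' implementation:

```python
def linear_projection(features, weights, bias):
    """Map each pixel's C-dim feature vector to K class scores: y = Wx + b.
    Equivalent to a 1x1 convolution applied across a feature map."""
    out = []
    for x in features:  # one C-dim feature vector per pixel
        scores = [sum(w * xi for w, xi in zip(row, x)) + b
                  for row, b in zip(weights, bias)]
        out.append(scores)
    return out

# 2 pixels with 3-channel features, projected to 2 class scores (toy weights)
feats = [[1.0, 0.0, 2.0], [0.0, 1.0, 0.0]]
W = [[0.5, 0.0, 0.5],   # class-0 weight row
     [0.0, 1.0, 0.0]]   # class-1 weight row
b = [0.0, 0.1]
print(linear_projection(feats, W, b))  # → [[1.5, 0.1], [0.0, 1.1]]
```

Taking the argmax of each pixel's score vector then yields a segmentation map.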
Affiliation(s)
- Cheng Chen
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Yunqing Chen
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Xiaoheng Li
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Huansheng Ning
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Ruoxiu Xiao
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China; Shunde Innovation School, University of Science and Technology Beijing, Foshan, 100024, China.
3
Teule EHS, Lessmann N, van der Heijden EPA, Hummelink S. Automatic segmentation and labelling of wrist bones in four-dimensional computed tomography datasets via deep learning. J Hand Surg Eur Vol 2024; 49:507-509. [PMID: 37882645] [DOI: 10.1177/17531934231209876]
Abstract
This study developed a deep learning model for fully automatic segmentation and labelling of wrist bones from four-dimensional computed tomography (4DCT) scans. This is a crucial step towards implementing 4DCT for diagnosing wrist ligament lesions, reducing time-consuming analysis of extensive data.
Affiliation(s)
- E H S Teule
- Technical Medicine, University of Twente, Enschede, The Netherlands
- Department of Plastic, Reconstructive, and Hand Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- N Lessmann
- Department of Radiology, Radboud University Medical Center, Nijmegen, The Netherlands
- E P A van der Heijden
- Department of Plastic, Reconstructive, and Hand Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- Department of Plastic, Reconstructive, and Hand Surgery, Jeroen Bosch Hospital, 's-Hertogenbosch, The Netherlands
- S Hummelink
- Department of Plastic, Reconstructive, and Hand Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
4
Li Y, Tian T, Hu J, Yuan C. SUTrans-NET: a hybrid transformer approach to skin lesion segmentation. PeerJ Comput Sci 2024; 10:e1935. [PMID: 38660200] [PMCID: PMC11042008] [DOI: 10.7717/peerj-cs.1935]
Abstract
Melanoma is a malignant skin tumor that threatens human life and health, and early detection is essential for effective treatment. However, the low contrast between melanoma lesions and normal skin, together with irregularity in lesion size and shape, makes skin lesions difficult to detect with the naked eye in the early stages, making skin lesion segmentation a challenging task. Traditional encoder-decoders built on U-shaped convolutional neural networks (CNNs) have limitations in establishing long-term dependencies and global contextual connections, while the Transformer architecture is limited in its applicability to small medical datasets. To address these issues, we propose a new skin lesion segmentation network, SUTrans-NET, which combines a CNN and a Transformer in parallel to form a dual encoder in which both branches dynamically and interactively fuse image information at each layer. We also introduce our multi-grouping module, SpatialGroupAttention (SGA), to complement the spatial and texture information of the Transformer branch, and adopt the Focus idea of YOLOv5 to construct the Patch Embedding module of the Transformer and prevent the loss of pixel accuracy. In addition, we design a decoder with full-scale information fusion capability to fully fuse shallow and deep features from different stages of the encoder. The effectiveness of our method is demonstrated on the ISIC 2016, ISIC 2017, ISIC 2018 and PH2 datasets, and its advantages over existing methods are verified.
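The Focus idea borrowed from YOLOv5 rearranges an image so that spatial resolution halves while no pixel is discarded: the four interleaved sub-grids become channels. A small sketch of that slicing on a single-channel toy input (the helper name is illustrative):

```python
def focus_slice(img):
    """YOLOv5-style Focus: split an HxW map into four H/2 x W/2 slices
    (even/odd rows x even/odd cols) stacked as channels, so spatial
    resolution halves without discarding any pixel."""
    h, w = len(img), len(img[0])
    slices = []
    for dr in (0, 1):
        for dc in (0, 1):
            slices.append([[img[r][c] for c in range(dc, w, 2)]
                           for r in range(dr, h, 2)])
    return slices

img = [[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]]
chans = focus_slice(img)
print(chans[0])    # even rows, even cols → [[0, 2], [8, 10]]
print(len(chans))  # → 4
```

Because the slicing is lossless, a convolution over the stacked slices still sees every original pixel, which is the property the abstract invokes to "prevent the loss of pixel accuracy".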
Affiliation(s)
- Yaqin Li
- School of Mathematics and Computer Science, Wuhan Polytechnic University School, Wuhan, Hubei, China
- Tonghe Tian
- School of Mathematics and Computer Science, Wuhan Polytechnic University School, Wuhan, Hubei, China
- Jing Hu
- School of Mathematics and Computer Science, Wuhan Polytechnic University School, Wuhan, Hubei, China
- Cao Yuan
- School of Mathematics and Computer Science, Wuhan Polytechnic University School, Wuhan, Hubei, China
5
Russe MF, Rebmann P, Tran PH, Kellner E, Reisert M, Bamberg F, Kotter E, Kim S. AI-based X-ray fracture analysis of the distal radius: accuracy between representative classification, detection and segmentation deep learning models for clinical practice. BMJ Open 2024; 14:e076954. [PMID: 38262641] [PMCID: PMC10823998] [DOI: 10.1136/bmjopen-2023-076954]
Abstract
OBJECTIVES To aid in selecting the optimal artificial intelligence (AI) solution for clinical application, we directly compared the performance of selected representative custom-trained or commercial classification, detection and segmentation models for fracture detection on musculoskeletal radiographs of the distal radius by aligning their outputs. DESIGN AND SETTING This single-centre retrospective study was conducted on a random subset of emergency department radiographs of the distal radius from 2008 to 2018 in Germany. MATERIALS AND METHODS An image set was created to be compatible with training and testing both classification and segmentation models by annotating examinations for fractures and overlaying fracture masks, where applicable. Representative classification and segmentation models were trained on 80% of the data. After output binarisation, their derived fracture detection performance, as well as that of a standard commercially available solution, was compared on the remaining 20% of X-rays using mainly accuracy and area under the receiver operating characteristic curve (AUROC). RESULTS A total of 2856 examinations with 712 (24.9%) fractures were included in the analysis. Accuracies reached up to 0.97 for the classification model, 0.94 for the segmentation model and 0.95 for BoneView. Cohen's kappa was at least 0.80 in pairwise comparisons, while Fleiss' kappa was 0.83 across all models. Fracture predictions were visualised by the three approaches at different levels of detail, ranging from a downsampled image region for classification, through a bounding box for detection, to single-pixel delineation for segmentation. CONCLUSIONS All three investigated approaches reached high performance for detection of distal radius fractures, with simple preprocessing and postprocessing protocols for the custom-trained models. Despite their underlying structural differences, the selection of a fracture analysis AI tool within the frame of this study reduces to the desired flavour of automation: automated classification, AI-assisted manual fracture reading or minimised false negatives.
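Cohen's kappa, used here to quantify pairwise model agreement, corrects raw agreement for the agreement expected by chance. A minimal sketch for two binary raters (the labels below are invented, not study data):

```python
def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters/models:
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n            # each rater's positive rate
    p_e = pa1 * pb1 + (1 - pa1) * (1 - pb1)      # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Two models labelling 10 radiographs (1 = fracture)
m1 = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
m2 = [1, 1, 0, 0, 0, 0, 0, 0, 1, 1]
print(round(cohens_kappa(m1, m2), 3))  # → 0.583
```

Fleiss' kappa generalises the same idea to more than two raters, which is how a single 0.83 can summarise all three models at once.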
Affiliation(s)
- Maximilian Frederik Russe
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Philipp Rebmann
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Phuong Hien Tran
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Elias Kellner
- Department of Medical Physics, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Marco Reisert
- Department of Medical Physics, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Suam Kim
- Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
6
Tanino H, Mitsutake R, Ito H. Measurement accuracy of the acetabular cup position using an inertial portable hip navigation system with patients in the lateral decubitus position. Sci Rep 2024; 14:1158. [PMID: 38212422] [PMCID: PMC10784560] [DOI: 10.1038/s41598-024-51785-2]
Abstract
Accurate cup placement is critical to satisfactory outcomes after total hip arthroplasty. Portable hip navigation systems are novel intraoperative guidance tools that achieve accurate cup placement in the supine position; however, their accuracy in the lateral decubitus position is under debate. A new inertial portable navigation system has recently become available. The present study investigated the accuracy of cup-position measurements made with this system in 54 patients in the lateral decubitus position and compared it with that of a goniometer. After cup placement, cup abduction and anteversion were measured using the system and the goniometer, and then compared with postoperatively measured angles. Absolute measurement errors with the system were 2.8° ± 2.6° for cup abduction and 3.9° ± 2.9° for anteversion. The system achieved 98% and 96% measurement accuracy within 10° for cup abduction and anteversion, respectively. The system was more accurate than the goniometer for cup anteversion (p < 0.001), but not for abduction (p = 0.537). The system uses a new registration method for the pelvic reference plane and corrects intraoperative pelvic motion errors, which may affect measurement accuracy. In the present study, reliable and reproducible intraoperative measurements of the cup position were obtained using the inertial portable navigation system.
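The accuracy figures reported here summarise absolute angular errors against the postoperative reference, plus the fraction of measurements falling within 10°. A small sketch of those two statistics (the angle values below are invented):

```python
def error_stats(measured, reference, tol=10.0):
    """Mean absolute angular error (degrees) and the share of
    measurements within tol degrees of the reference."""
    errs = [abs(m - r) for m, r in zip(measured, reference)]
    mean_err = sum(errs) / len(errs)
    within = sum(e <= tol for e in errs) / len(errs)
    return mean_err, within

# Intraoperative vs postoperative cup-abduction angles (toy values, degrees)
intra = [40.1, 42.5, 38.0, 45.2, 41.0]
post  = [41.0, 40.0, 39.5, 44.0, 52.0]
mean_err, within10 = error_stats(intra, post)
print(round(mean_err, 2), within10)  # → 3.42 0.8
```

In the study's terms, a "98% measurement accuracy within 10°" corresponds to `within10 = 0.98` over the 54 patients.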
Affiliation(s)
- Hiromasa Tanino
- Department of Orthopaedic Surgery, Asahikawa Medical University, Midorigaoka-Higashi 2-1-1-1, Asahikawa, 078-8510, Japan.
- Ryo Mitsutake
- Department of Orthopaedic Surgery, Asahikawa Medical University, Midorigaoka-Higashi 2-1-1-1, Asahikawa, 078-8510, Japan
- Hiroshi Ito
- Department of Orthopaedic Surgery, Asahikawa Medical University, Midorigaoka-Higashi 2-1-1-1, Asahikawa, 078-8510, Japan
7
Liu X, Tian L, Deng Z, Guo Y, Zhang S. Zoledronic Acid Accelerates Bone Healing in Carpal Navicular Fracture via Silencing Long Non-coding RNA Growth Arrest Specificity 5 to Modulate MicroRNA-29a-3p Expression. Mol Biotechnol 2023. [PMID: 37861953] [DOI: 10.1007/s12033-023-00931-8]
Abstract
Carpal navicular fractures are the most common carpal fractures. This study explores the specific mechanism by which zoledronic acid (ZA) promotes carpal navicular fracture healing via the long non-coding RNA (lncRNA) growth arrest specificity 5 (GAS5) and its mediation of microRNA (miR)-29a-3p. A fracture rat model was constructed. Two weeks later, systemic ZA was injected subcutaneously, and plasmid vectors interfering with GAS5 or miR-29a-3p expression were injected at the fracture site. Osteocalcin (OCN) and bone morphogenetic protein-2 (BMP-2) were determined, as were serum levels of alkaline phosphatase (ALP), osteopontin (OPN) and osteoprotegerin (OPG), and bone mineral density. MC3T3-E1 cells were transfected with plasmid vectors interfering with GAS5 or miR-29a-3p, and cell proliferation and apoptosis were analyzed. GAS5 and miR-29a-3p expression in fractured rats was tested, together with their binding relationship. ZA promoted OCN and BMP-2 expression and increased bone mineral density and serum levels of ALP, OPN and OPG in fractured rats. GAS5 was upregulated and miR-29a-3p downregulated in fractured rats. Downregulation of GAS5 or upregulation of miR-29a-3p further promoted bone healing in fractured rats. GAS5 targets miR-29a-3p, and downregulation of miR-29a-3p can reverse the effect of GAS5 downregulation on bone healing. ZA promoted the proliferation of MC3T3-E1 cells and inhibited apoptosis by regulating the GAS5/miR-29a-3p axis. In summary, ZA regulates miR-29a-3p expression by downregulating GAS5, thereby promoting carpal navicular fracture healing, promoting MC3T3-E1 cell proliferation, and inhibiting apoptosis.
Affiliation(s)
- Xing Liu
- Department of Orthopaedic Trauma 2, The Third Hospital of ShiJiaZhuang, No. 15 Tiyu South Street, Chang'an District, Shijiazhuang City, 050011, Hebei Province, China.
- LiJun Tian
- Department of Orthopaedic Trauma 2, The Third Hospital of ShiJiaZhuang, No. 15 Tiyu South Street, Chang'an District, Shijiazhuang City, 050011, Hebei Province, China
- ZhiGang Deng
- Department of Orthopaedic Trauma 2, The Third Hospital of ShiJiaZhuang, No. 15 Tiyu South Street, Chang'an District, Shijiazhuang City, 050011, Hebei Province, China
- YuSong Guo
- Department of Orthopaedic Trauma 2, The Third Hospital of ShiJiaZhuang, No. 15 Tiyu South Street, Chang'an District, Shijiazhuang City, 050011, Hebei Province, China
- SanBing Zhang
- Department of Hand/Foot and Ankle Surgery, The Third Hospital of ShiJiaZhuang, Shijiazhuang City, 050011, Hebei Province, China
8
Khader A, Alquran H. Automated Prediction of Osteoarthritis Level in Human Osteochondral Tissue Using Histopathological Images. Bioengineering (Basel) 2023; 10:764. [PMID: 37508791] [PMCID: PMC10376879] [DOI: 10.3390/bioengineering10070764]
Abstract
Osteoarthritis (OA) is the most common arthritis and the leading cause of lower-extremity disability in older adults. Understanding OA progression is important for developing patient-specific therapeutic techniques at the early stage of OA rather than at the end stage. Histopathology scoring systems are usually used to evaluate OA progression and the mechanisms involved in OA development. This study aims to classify histopathological images of cartilage specimens automatically using artificial intelligence algorithms. Hematoxylin and eosin (HE)- and safranin O and fast green (SafO)-stained images of human cartilage specimens were divided into early, mild, moderate, and severe OA. Pre-trained convolutional networks (DarkNet-19, MobileNet, ResNet-101, NasNet) were utilized to extract twenty features from the last fully connected layers for both the SafO and HE scenarios. Principal component analysis (PCA) and ant lion optimization (ALO) were utilized to obtain the best-weighted features. A support vector machine classifier was trained and tested on the selected descriptors, achieving the highest accuracies of 98.04% and 97.03% on HE and SafO, respectively. Using the ALO algorithm, the F1 scores were 0.97, 0.991, 1, and 1 for the HE images and 1, 0.991, 0.97, and 1 for the SafO images for the early, mild, moderate, and severe classes, respectively. This algorithm may be a useful tool for researchers to evaluate histopathological images of OA without the need for experts in histopathology scoring systems or the need to train new experts. Incorporating automated deep features could help to improve the characterization and understanding of OA progression and development.
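The per-class F1 scores quoted above are the harmonic mean of precision and recall; from binary confusion counts this reduces to 2TP / (2TP + FP + FN). A minimal sketch (the counts are illustrative, not the study's):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy confusion counts for one OA-severity class
print(round(f1_score(tp=97, fp=2, fn=4), 3))  # → 0.97
```

An F1 of 1 for a class, as reported for several classes here, means that class had no false positives and no false negatives on the test set.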
Affiliation(s)
- Ateka Khader
- Department of Biomedical Systems and Informatics Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid 21163, Jordan
- Hiam Alquran
- Department of Biomedical Systems and Informatics Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid 21163, Jordan
9
Shen T, Huang F, Zhang X. CT medical image segmentation algorithm based on deep learning technology. Math Biosci Eng 2023; 20:10954-10976. [PMID: 37322967] [DOI: 10.3934/mbe.2023485]
Abstract
To address the problems of blurred edges, uneven background distribution, and abundant noise interference in medical image segmentation, we propose a medical image segmentation algorithm based on deep neural network technology. It adopts a U-Net-like backbone structure comprising an encoding part and a decoding part. First, images are passed through the encoder path, built from residual and convolutional structures, to extract image feature information. We add an attention mechanism module to the network's skip connections to address the problems of redundant channel dimensions and low spatial awareness of complex lesions. Finally, the medical image segmentation results are obtained using the decoder path, also built from residual and convolutional structures. To verify the validity of the model, we conducted a comparative experimental analysis; the results show that the DICE and IOU of the proposed model are 0.7826, 0.9683, 0.8904, 0.8069, and 0.9462, 0.9537 for the DRIVE, ISIC2018 and COVID-19 CT datasets, respectively. The segmentation accuracy is effectively improved for medical images with complex shapes and with adhesions between lesions and normal tissues.
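DICE and IOU, the two overlap metrics reported above, are monotonically related, so they rank segmentations identically. A small sketch of the IoU computation and the Dice conversion (toy masks, not the paper's data):

```python
def iou(pred, truth):
    """Intersection over union of two binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union

def dice_from_iou(j):
    """Dice and IoU are interchangeable summaries: Dice = 2*IoU / (1 + IoU)."""
    return 2 * j / (1 + j)

pred  = [1, 1, 1, 0]
truth = [1, 1, 0, 1]
j = iou(pred, truth)        # 2 overlapping / 4 in the union
print(j, dice_from_iou(j))  # → 0.5 0.6666666666666666
```

Because Dice is always at least as large as IoU on the same masks, reporting both mainly aids comparison with prior work that used one or the other.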
Affiliation(s)
- Tongping Shen
- School of Information Engineering, Anhui University of Chinese Medicine, Hefei, 230012, China
- Graduate School, Angeles University Foundation, Angeles 2009, Philippines
- Fangliang Huang
- School of Information Engineering, Anhui University of Chinese Medicine, Hefei, 230012, China
- Xusong Zhang
- Graduate School, Angeles University Foundation, Angeles 2009, Philippines
10
Chen C, Zhou K, Qi S, Lu T, Xiao R. A learnable Gabor Convolution kernel for vessel segmentation. Comput Biol Med 2023; 158:106892. [PMID: 37028143] [DOI: 10.1016/j.compbiomed.2023.106892]
Abstract
Vessel segmentation is significant for characterizing vascular diseases and has received wide attention from researchers. Common vessel segmentation methods are mainly based on convolutional neural networks (CNNs), which have excellent feature learning capabilities. Because CNNs cannot predict the learning direction, they rely on wide channels or deep architectures to obtain sufficient features, which may introduce redundant parameters. Drawing on the performance of Gabor filters in vessel enhancement, we built a Gabor convolution kernel and designed its optimization. Unlike traditional filter usage and common modulation, its parameters are automatically updated using gradients in backpropagation. Since the structural shape of the Gabor convolution kernel is the same as that of a regular convolution kernel, it can be integrated into any CNN architecture. We built a Gabor ConvNet using Gabor convolution kernels and tested it on three vessel datasets. It scored 85.06%, 70.52% and 67.11%, respectively, ranking first on all three datasets. The results show that our method outperforms advanced models in vessel segmentation. Ablations also proved that the Gabor kernel has better vessel extraction ability than the regular convolution kernel.
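A Gabor kernel is a Gaussian envelope modulated by a sinusoidal carrier; in this paper's setting its parameters become learnable, but the kernel itself is built from the classical formula. A sketch of the real part (parameter values below are arbitrary, not the paper's):

```python
import math

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Real part of a 2-D Gabor filter: a Gaussian envelope times a
    cosine carrier with wavelength lam, oriented at angle theta (radians)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            carrier = math.cos(2 * math.pi * xr / lam + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

k = gabor_kernel(size=5, sigma=2.0, theta=0.0, lam=4.0)
print(k[2][2])  # centre value: envelope = 1, cos(0) = 1 → 1.0
```

Making `sigma`, `theta`, `lam`, `psi` and `gamma` trainable tensors, rather than fixed constants, is what turns this fixed filter into the learnable kernel the abstract describes.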
11
Chen C, Zhou K, Wang Z, Xiao R. Generative Consistency for Semi-Supervised Cerebrovascular Segmentation From TOF-MRA. IEEE Trans Med Imaging 2023; 42:346-353. [PMID: 35727774] [DOI: 10.1109/tmi.2022.3184675]
Abstract
Cerebrovascular segmentation from time-of-flight magnetic resonance angiography (TOF-MRA) is a critical step in computer-aided diagnosis. In recent years, deep learning models have proved their powerful feature extraction capability for cerebrovascular segmentation. However, they require many labeled datasets for effective training, and such labels are expensive and require professional expertise. In this paper, we propose a generative consistency semi-supervised (GCS) model. Considering the rich information contained in the feature maps, the GCS model utilizes generation results to constrain the segmentation model. The generated data come from labeled data, unlabeled data, and unlabeled data after perturbation, respectively. The GCS model also calculates the consistency of the perturbed data to improve its feature mining ability. We further propose a new model as the backbone of the GCS model; it transfers TOF-MRA into graph space and establishes correlations using a Transformer. We demonstrated the effectiveness of the proposed model on TOF-MRA representations, and tested the GCS model against state-of-the-art semi-supervised methods using the proposed model as the backbone. The experiments prove the important role of the GCS model in cerebrovascular segmentation. Code is available at https://github.com/MontaEllis/SSL-For-Medical-Segmentation.
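The consistency idea at the heart of such semi-supervised training penalises disagreement between predictions on an unlabeled input and on a perturbed copy of it, so no ground-truth labels are needed for that term. A minimal sketch using mean squared error (values are invented; the paper's actual loss may differ):

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def consistency_loss(pred_clean, pred_perturbed):
    """Semi-supervised consistency term: penalise disagreement between
    the model's predictions on an unlabeled volume and on a perturbed
    copy of it; no ground-truth labels are required."""
    return mse(pred_clean, pred_perturbed)

# Toy per-voxel vessel probabilities before/after input perturbation
clean     = [0.9, 0.8, 0.1, 0.0]
perturbed = [0.8, 0.8, 0.3, 0.0]
print(round(consistency_loss(clean, perturbed), 4))  # → 0.0125
```

In training, this term is added to the supervised loss on labeled volumes, letting the unlabeled TOF-MRA data shape the decision boundary.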
12
Chen C, Qi S, Zhou K, Lu T, Ning H, Xiao R. Pairwise attention-enhanced adversarial model for automatic bone segmentation in CT images. Phys Med Biol 2023; 68. [PMID: 36634367] [DOI: 10.1088/1361-6560/acb2ab]
Abstract
Objective. Bone segmentation is a critical step in screw placement navigation. Although deep learning methods have driven rapid progress in bone segmentation, separating individual bones locally remains challenging due to irregular shapes and similar representational features. Approach. In this paper, we propose the pairwise attention-enhanced adversarial model (Pair-SegAM) for automatic bone segmentation in computed tomography images, which comprises a segmentation model and a discriminator. Considering that the distributions of the predictions from the segmentation model contain complicated semantics, we improve the discriminator to strengthen its awareness of the target region, improving the parsing of semantic features. Pair-SegAM has a pairwise structure that uses two calculation mechanisms to set up pairwise attention maps; we then utilize semantic fusion to filter unstable regions. The improved discriminator therefore provides more refined information for capturing the bone outline, effectively enhancing the segmentation model. Main results. To test Pair-SegAM, we selected two bone datasets for assessment and evaluated our method against several bone segmentation models and the latest adversarial models on both datasets. The experimental results show that our method not only exhibits superior bone segmentation performance but also demonstrates effective generalization. Significance. Our method provides more efficient segmentation of specific bones and has the potential to be extended to other semantic segmentation domains.
Affiliation(s)
- Cheng Chen
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Siyu Qi
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Kangneng Zhou
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Tong Lu
- Visual 3D Medical Science and Technology Development Co. Ltd, Beijing 100082, People's Republic of China
- Huansheng Ning
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Ruoxiu Xiao
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China; Shunde Innovation School, University of Science and Technology Beijing, Foshan 100024, People's Republic of China
13
Mahato D, Aharwal VK, Sinha A. Multi-objective optimisation model and hybrid optimization algorithm for Electric Vehicle Charge Scheduling. J Exp Theor Artif Intell 2023. [DOI: 10.1080/0952813x.2023.2165719]
Affiliation(s)
- Durga Mahato
- Department of EE, Dr. A.P.J. Abdul Kalam University, Indore, Madhya Pradesh, India
- Vikas Kumar Aharwal
- Department of EE, Dr. A.P.J. Abdul Kalam University, Indore, Madhya Pradesh, India
- Apurba Sinha
- Department of CSE, Jharkhand Technical University, Jharkhand, India
14
Fan X, Zhu Q, Tu P, Joskowicz L, Chen X. A review of advances in image-guided orthopedic surgery. Phys Med Biol 2023; 68. [PMID: 36595258] [DOI: 10.1088/1361-6560/acaae9]
Abstract
Orthopedic surgery remains technically demanding due to complex anatomical structures and cumbersome surgical procedures. The introduction of image-guided orthopedic surgery (IGOS) has significantly decreased surgical risk and improved operative results. This review focuses on the application of recent advances in artificial intelligence (AI), deep learning (DL), augmented reality (AR) and robotics to image-guided spine surgery, joint arthroplasty, fracture reduction and bone tumor resection. For the pre-operative stage, key technologies of AI- and DL-based medical image segmentation, 3D visualization and surgical planning are systematically reviewed. For the intra-operative stage, the development of novel image registration, surgical tool calibration and real-time navigation is reviewed, and the combination of surgical navigation systems with AR and robotic technology is also discussed. Finally, the current issues and prospects of IGOS systems are discussed, with the goal of establishing a reference and providing guidance for surgeons, engineers and researchers involved in research and development in this area.
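Among the intra-operative technologies this review covers, rigid point-based registration (aligning a pre-operative model with intra-operative fiducials) is a core step. One classical closed-form solution, not specific to any system in the review, is the SVD-based Kabsch method; a minimal NumPy sketch assuming known point correspondences:

```python
import numpy as np

def kabsch(P: np.ndarray, Q: np.ndarray):
    """Closed-form rigid registration: find R, t minimizing ||R @ p + t - q||
    over corresponding rows of P and Q (both shaped (N, 3))."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known rotation + translation from noiseless correspondences
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = kabsch(P, Q)
print(np.allclose(R, R_true) and np.allclose(t, t_true))  # True
```

In practice intra-operative points are noisy and correspondences unknown, so such a solver is typically wrapped in an iterative scheme (e.g., ICP); this sketch shows only the closed-form core.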
Affiliation(s)
- Xingqi Fan
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Qiyang Zhu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Puxun Tu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Leo Joskowicz
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
15
Chen C, Zhou K, Guo X, Wang Z, Xiao R, Wang G. Cerebrovascular segmentation in phase-contrast magnetic resonance angiography by multi-feature fusion and vessel completion. Comput Med Imaging Graph 2022; 98:102070. [DOI: 10.1016/j.compmedimag.2022.102070]
16
Yang TH, Horng MH, Li RS, Sun YN. Scaphoid Fracture Detection by Using Convolutional Neural Network. Diagnostics (Basel) 2022; 12:895. [PMID: 35453943] [PMCID: PMC9024757] [DOI: 10.3390/diagnostics12040895]
Abstract
Scaphoid fractures appear frequently on injury radiographs, but approximately 20% are occult. The few existing studies on fracture detection in scaphoid X-ray images have shown limited effectiveness. Traditional image processing techniques have been applied to segment regions of interest in X-ray images, but they require manual intervention and considerable computation time. Convolutional neural networks are now widely applied to medical image recognition, so this study proposes a two-stage convolutional neural network to detect scaphoid fractures. In the first stage, the scaphoid bone is localized in the X-ray image using a Faster R-CNN network. The second stage uses a ResNet model as the backbone for feature extraction, together with a feature pyramid network and a convolutional block attention module, to build the detection and classification models for scaphoid fractures. Metrics including recall, precision, sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve (AUC) were used to evaluate the proposed method. Scaphoid bone detection achieved an accuracy of 99.70%. Scaphoid fracture detection with a rotational bounding box achieved a recall of 0.789, precision of 0.894, accuracy of 0.853, sensitivity of 0.789, specificity of 0.90, and AUC of 0.920. Scaphoid fracture classification achieved a recall of 0.735, precision of 0.898, accuracy of 0.829, sensitivity of 0.735, specificity of 0.920, and AUC of 0.917. These results indicate that the proposed method can provide an effective reference for detecting scaphoid fractures. In future work, integrating the anterior-posterior and lateral views of each participant may yield more powerful convolutional neural networks for fracture detection in X-ray radiographs.
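All the metrics reported above derive from confusion-matrix counts, which is also why recall and sensitivity coincide in each experiment (they are the same quantity). A minimal sketch with hypothetical counts, not the paper's data:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    recall = tp / (tp + fn)                    # == sensitivity
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"recall": recall, "precision": precision,
            "sensitivity": recall, "specificity": specificity,
            "accuracy": accuracy}

# Hypothetical counts chosen so the numbers are easy to verify by hand
m = binary_metrics(tp=75, fp=10, tn=90, fn=25)
print(m["recall"], m["specificity"], m["accuracy"])  # 0.75 0.9 0.825
```

AUC is the exception: it summarizes the ranking of scores across all thresholds rather than a single confusion matrix, so it cannot be computed from these four counts alone.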
Affiliation(s)
- Tai-Hua Yang
- Department of Biomedical Engineering, National Cheng Kung University, Tainan 701, Taiwan
- Department of Orthopedic Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan 704, Taiwan
- Ming-Huwi Horng
- Department of Computer Science and Information Engineering, National Pingtung University, Pingtung 912, Taiwan
- Rong-Shiang Li
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan 701, Taiwan
- Yung-Nien Sun
- Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan 701, Taiwan
- Correspondence: ; Tel.: +886-6-2757575 (ext. 62526)