1. Kondou H, Morohashi R, Kimura S, Idota N, Matsunari R, Ichioka H, Bandou R, Kawamoto M, Ting D, Ikegaya H. Artificial intelligence-based forensic sex determination of East Asian cadavers from skull morphology. Sci Rep 2023;13:21026. PMID: 38030742; PMCID: PMC10686987; DOI: 10.1038/s41598-023-48363-3.
Abstract
Identification of unknown cadavers is an important task for forensic scientists, who attempt to identify skeletal remains from factors such as age, sex, and evidence of dental treatment. Sex is commonly assessed from skull or pelvic shape; however, such evaluations require substantial experience and knowledge and lack objectivity and reproducibility. To ensure objectivity and reproducibility in sex evaluation, we applied a gated attention-based multiple-instance learning model to three-dimensional (3D) skull images reconstructed from postmortem head computed tomography scans. We preprocessed the images, trained the model on 864 training samples, validated it on 124 validation samples, and evaluated its accuracy on 246 test samples. Three forensic scientists also evaluated the 3D skull images, and their performance was compared with that of the model. Our model achieved an accuracy of 0.93, higher than that of the forensic scientists. Attention visualization showed that the model attended to the skull as a whole and less to the areas forensic scientists typically investigate. In summary, our model may serve as a supportive tool for identifying cadaver sex from skull shape. Further studies are required to improve its performance.
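The gated attention pooling at the heart of such a multiple-instance learning model (in the style of Ilse et al.'s attention-based MIL; the shapes, variable names, and random parameters below are illustrative assumptions, not the authors' implementation) can be sketched in NumPy. Each 3D image is treated as a bag of instance embeddings; a gated attention score weights each instance before pooling, and those weights are what attention visualization inspects:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_attention_pool(H, V, U, w):
    """Pool a bag of K instance embeddings H (K x D) into one bag vector.

    a_k is proportional to exp(w^T (tanh(V h_k) * sigmoid(U h_k)))  -- gated attention
    z = sum_k a_k h_k                                               -- weighted pooling
    """
    gate = np.tanh(H @ V.T) * (1.0 / (1.0 + np.exp(-(H @ U.T))))  # (K, L)
    scores = gate @ w                                              # (K,)
    a = softmax(scores)                                            # attention weights
    z = a @ H                                                      # (D,) bag embedding
    return z, a

rng = np.random.default_rng(0)
K, D, L = 6, 16, 8            # instances per bag, embedding dim, attention dim
H = rng.normal(size=(K, D))   # toy instance embeddings
V = rng.normal(size=(L, D))
U = rng.normal(size=(L, D))
w = rng.normal(size=L)
z, a = gated_attention_pool(H, V, U, w)
```

The attention weights `a` sum to 1, so mapping them back onto the instances' spatial locations yields the kind of "where did the model look" visualization described in the abstract.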
Affiliation(s)
- Hiroki Kondou, Rina Morohashi, Satoko Kimura, Nozomi Idota, Ryota Matsunari, Hiroaki Ichioka, Risa Bandou, Masataka Kawamoto, Deng Ting, Hiroshi Ikegaya: Department of Forensic Medicine, Graduate School of Medicine, Kyoto Prefectural University of Medicine, 465 Kajiicho, Kawaramachi-Dori Hirokoji-Agaru, Kamigyo-Ku, Kyoto, 602-8566, Japan
2. Rai HM, Yoo J. A comprehensive analysis of recent advancements in cancer detection using machine learning and deep learning models for improved diagnostics. J Cancer Res Clin Oncol 2023;149:14365-14408. PMID: 37540254; DOI: 10.1007/s00432-023-05216-w.
Abstract
PURPOSE Millions of people lose their lives to fatal diseases. Cancer is among the most fatal; contributing factors include obesity, alcohol consumption, infections, ultraviolet radiation, smoking, and unhealthy lifestyles. Cancer is abnormal, uncontrolled tissue growth that can spread to body parts beyond where it originated, so diagnosing it at an early stage is essential for correct and timely treatment. Because manual diagnosis and diagnostic error can cost patients' lives, much research is ongoing on the automatic, accurate detection of cancer at an early stage. METHODS In this paper, we present a comparative analysis of recent advances in the detection of various cancer types using traditional machine learning (ML) and deep learning (DL) models. We cover four cancer types, brain, lung, skin, and breast, and their detection using ML and DL techniques. Our extensive review includes 130 studies, of which 56 concern ML-based and 74 DL-based cancer detection techniques. Only peer-reviewed papers published in the recent five-year span (2018-2023) were included, analyzed by year of publication, features utilized, best model, dataset/images utilized, and best accuracy. We review ML-based and DL-based techniques separately and use accuracy as the performance evaluation metric to maintain homogeneity when comparing classifier efficiency. RESULTS Among all reviewed studies, DL techniques achieved the highest accuracy, 100%, while ML techniques achieved 99.89%. The lowest accuracies achieved with DL and ML approaches were 70% and 75.48%, respectively. The difference between the highest- and lowest-performing models is about 28.8% for skin cancer detection.
In addition, the key findings and challenges for each type of cancer detection using ML and DL techniques are presented. A comparative analysis between the best-performing and worst-performing models, along with overall key findings and challenges, is provided for future research purposes. Although the analysis is based on accuracy as the performance metric and on the parameters above, the results demonstrate significant scope for improvement in classification efficiency. CONCLUSION The paper concludes that both ML and DL techniques hold promise for the early detection of various cancer types. However, the study identifies specific challenges that need to be addressed before these techniques can be widely implemented in clinical settings. The presented results offer valuable guidance for future research in cancer detection, emphasizing the need for continued advancements in ML- and DL-based approaches to improve diagnostic accuracy and ultimately save more lives.
Affiliation(s)
- Hari Mohan Rai, Joon Yoo: School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
3. Jenkin Suji R, Bhadauria SS, Wilfred Godfrey W. A survey and taxonomy of 2.5D approaches for lung segmentation and nodule detection in CT images. Comput Biol Med 2023;165:107437. PMID: 37717526; DOI: 10.1016/j.compbiomed.2023.107437.
Abstract
CAD systems for lung cancer diagnosis and detection can offer unbiased, tireless diagnostics with minimal variance, decreasing the mortality rate and improving the five-year survival rate. Lung segmentation and lung nodule detection are critical steps in the lung cancer CAD pipeline. The literature on these tasks mostly comprises techniques, and surveys of techniques, that process 3-D volumes or 2-D slices; surveys highlighting 2.5D techniques for lung segmentation and lung nodule detection are still lacking. This paper presents background and discussion on 2.5D methods to fill that gap. It also gives a taxonomy of 2.5D approaches with detailed descriptions, clusters existing 2.5D techniques for lung segmentation and lung nodule detection under that taxonomy, and outlines possible future work in this direction.
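One common 2.5D representation (a family such surveys cover; the function name, slice axis, and neighbor count below are illustrative assumptions) stacks each CT slice with its axial neighbors as input channels, so a 2-D network sees local 3-D context without the cost of full 3-D convolution:

```python
import numpy as np

def to_2p5d(volume, n_neighbors=1):
    """Turn a CT volume (S x H x W) into 2.5D samples.

    Each slice s becomes a (2*n_neighbors+1, H, W) stack of itself and its
    axial neighbors; edge slices are padded by repeating the boundary slice.
    """
    S = volume.shape[0]
    padded = np.concatenate(
        [volume[:1]] * n_neighbors + [volume] + [volume[-1:]] * n_neighbors,
        axis=0,
    )
    # One (channels, H, W) stack per original slice.
    return np.stack(
        [padded[s:s + 2 * n_neighbors + 1] for s in range(S)], axis=0
    )

ct = np.zeros((10, 64, 64), dtype=np.float32)   # toy volume: 10 slices
samples = to_2p5d(ct, n_neighbors=1)            # shape (10, 3, 64, 64)
```

Each sample can then be fed to an ordinary 2-D segmentation or detection network with a 3-channel input, which is the basic trade-off 2.5D approaches exploit.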
4. Hayee S, Hussain F, Yousaf MH. A Novel FDLSR-Based Technique for View-Independent Vehicle Make and Model Recognition. Sensors (Basel) 2023;23:7920. PMID: 37765976; PMCID: PMC10537004; DOI: 10.3390/s23187920.
Abstract
Vehicle make and model recognition (VMMR) is an important aspect of intelligent transportation systems (ITS). In VMMR systems, surveillance cameras capture vehicle images for real-time vehicle detection and recognition. These captured images pose challenges, including shadows, reflections, changes in weather and illumination, occlusions, and perspective distortion. Another significant challenge in VMMR is multiclass classification, which raises two main issues: (a) multiplicity and (b) ambiguity. Multiplicity concerns the different forms a single car model from one manufacturer can take, while ambiguity arises when multiple models from the same manufacturer look visually similar, or when models of different makes have comparable rear/front views. This paper introduces a novel and robust VMMR model that addresses these issues with accuracy comparable to state-of-the-art methods. Our proposed hybrid CNN model selects the most descriptive fine-grained features with the help of Fisher Discriminative Least Squares Regression (FDLSR). These features are extracted from a deep CNN model fine-tuned on the fine-grained vehicle datasets Stanford-196 and BoxCars21k. Using ResNet-152 features, our proposed model outperformed the SVM and FC layers in accuracy by 0.5% and 4% on Stanford-196 and by 0.4% and 1% on BoxCars21k, respectively. Moreover, the model is well suited to small-scale fine-grained vehicle datasets.
Affiliation(s)
- Sobia Hayee, Fawad Hussain, Muhammad Haroon Yousaf: Department of Computer Engineering, University of Engineering & Technology, Taxila 47050, Pakistan
- Muhammad Haroon Yousaf: also SWARM Robotics Lab, National Center of Robotics & Automation (NCRA), Taxila 47050, Pakistan
5. Rai HM. Cancer detection and segmentation using machine learning and deep learning techniques: a review. Multimedia Tools and Applications 2023. DOI: 10.1007/s11042-023-16520-5.
6. Mokoatle M, Marivate V, Mapiye D, Bornman R, Hayes VM. A review and comparative study of cancer detection using machine learning: SBERT and SimCSE application. BMC Bioinformatics 2023;24:112. PMID: 36959534; PMCID: PMC10037872; DOI: 10.1186/s12859-023-05235-x.
Abstract
BACKGROUND Using visual, biological, and electronic health record data as the sole input, pretrained convolutional neural networks and conventional machine learning methods have been widely employed to identify various malignancies. Typically, a series of preprocessing and image segmentation steps first extracts region-of-interest features from noisy data; the extracted features are then fed to machine learning and deep learning methods for cancer detection. METHODS This work reviews the methods that have been applied to develop machine learning algorithms for cancer detection. With more than 100 types of cancer, the study examines only research on the four most common and prevalent cancers worldwide: lung, breast, prostate, and colorectal. It then proposes a new cancer detection methodology using state-of-the-art sentence transformers, namely SBERT (2019) and the unsupervised SimCSE (2021), requiring only raw DNA sequences of matched tumor/normal pairs as input. The DNA representations learned by SBERT and SimCSE are then passed to machine learning algorithms (XGBoost, Random Forest, LightGBM, and CNNs) for classification. To our knowledge, the SBERT and SimCSE transformers have not previously been applied to represent DNA sequences in cancer detection settings. RESULTS The best-performing classifier was XGBoost, with the highest overall accuracy of 73 ± 0.13% using SBERT embeddings and 75 ± 0.12% using SimCSE embeddings. In light of these findings, incorporating sentence representations from SimCSE's sentence transformer only marginally improved the performance of the machine learning models.
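Before a sentence transformer such as SBERT or SimCSE can embed raw DNA, each sequence is typically rewritten as a "sentence" of overlapping k-mer tokens. The sketch below shows that preprocessing step only, with a toy hashed one-hot mean-pooled vector standing in for the real transformer embedding; the function names, the choice of k=6, and the stand-in embedding are my assumptions for illustration, not the authors' code:

```python
import numpy as np

def kmer_sentence(seq, k=6):
    """Rewrite a DNA string as a space-separated sentence of overlapping k-mers."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

def toy_embedding(sentence, dim=64):
    """Stand-in for SBERT/SimCSE: mean-pool a hashed one-hot per token.

    A real pipeline would call a pretrained sentence transformer here and
    pass the resulting vectors to XGBoost / Random Forest / LightGBM / a CNN.
    (Python's hash() is salted per process, so this toy is not reproducible
    across runs; a learned embedding model would be.)
    """
    tokens = sentence.split()
    vecs = np.zeros((len(tokens), dim))
    for i, tok in enumerate(tokens):
        vecs[i, hash(tok) % dim] = 1.0
    return vecs.mean(axis=0)

sent = kmer_sentence("ACGTACGTAC", k=6)   # 5 overlapping 6-mers
emb = toy_embedding(sent)                 # fixed-length vector per sequence
```

Each tumor/normal sequence thus becomes one fixed-length vector, which is the shape of input a downstream tree-based or CNN classifier expects.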
Affiliation(s)
- Mpho Mokoatle, Vukosi Marivate: Department of Computer Science, University of Pretoria, Pretoria, South Africa
- Riana Bornman: School of Health Systems and Public Health, University of Pretoria, Pretoria, South Africa
- Vanessa M Hayes: School of Medical Sciences, The University of Sydney, Sydney, Australia; School of Health Systems and Public Health, University of Pretoria, Pretoria, South Africa
7. Naqvi RA, Arsalan M, Qaiser T, Khan TM, Razzak I. Sensor Data Fusion Based on Deep Learning for Computer Vision Applications and Medical Applications. Sensors (Basel) 2022;22:8058. PMID: 36298412; PMCID: PMC9609765; DOI: 10.3390/s22208058.
Abstract
Sensor fusion is the process of merging data from many sources, such as radar, lidar, and camera sensors, to provide less uncertain information than can be collected from a single source [...].
Affiliation(s)
- Rizwan Ali Naqvi: School of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
- Muhammad Arsalan: Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Talha Qaiser: Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- Tariq Mahmood Khan, Imran Razzak: School of Computer Science and Engineering, University of New South Wales, Sydney 1466, Australia