1
Saygin M, Bekmezci M, Dinçer E. Artificial Intelligence Model ChatGPT-4: Entrepreneur Candidate and Entrepreneurship Example. F1000Res 2024; 13:308. PMID: 38845823; PMCID: PMC11153998; DOI: 10.12688/f1000research.144671.2. Open access.
Abstract
Background Although artificial intelligence technologies are still in their infancy, they inspire both hope and anxiety about the future. This research examines ChatGPT-4, one of the best-known artificial intelligence applications and one claimed to be capable of self-learning, in the context of business establishment processes. Methods The assessment questions in the Entrepreneurship Handbook, published open access by the Small and Medium Enterprises Development Organization of Turkey to guide entrepreneurial processes in Turkey and shape the perception of entrepreneurship, were posed to the artificial intelligence model ChatGPT-4 and analysed in three stages. The model's way of solving the questions and the answers it provided were compared with the entrepreneurship literature. Results ChatGPT-4, itself an outstanding example of entrepreneurship, succeeded in answering the questions posed across the 16 modules of the entrepreneurship handbook in an original way and with deep analysis. Conclusion The model also proved quite creative in developing new alternatives to the correct answers specified in the entrepreneurship handbook. The originality of the research lies in its being one of the pioneering studies of artificial intelligence and entrepreneurship in the literature.
2
Tournois L, Trousset V, Hatsch D, Delabarde T, Ludes B, Lefèvre T. Artificial intelligence in the practice of forensic medicine: a scoping review. Int J Legal Med 2024; 138:1023-1037. PMID: 38087052; PMCID: PMC11003914; DOI: 10.1007/s00414-023-03140-9.
Abstract
Forensic medicine is a thriving application field for artificial intelligence (AI). Indeed, AI applications intended for forensic pathologists or forensic physicians have emerged over the last decade. For example, AI models were developed to help estimate the biological age of migrants or human remains. However, the uses of AI applications by forensic pathologists or physicians, and their levels of integration in medicolegal practices, are not yet well described. Therefore, a scoping review was conducted on the PubMed, ScienceDirect, and Scopus databases. This review included articles that mention any AI application used by forensic pathologists or physicians in practice, or any AI model applied in one field of expertise of the forensic pathologist or physician. Articles were excluded if they were in languages other than English or French, dealt mainly with complementary analyses handled by experts who are not forensic pathologists or physicians, or used AI to analyze data for research purposes in forensic medicine. All the relevant information was retrieved from each article using an analysis grid derived and adapted from the TRIPOD checklist. This review included 35 articles and revealed that AI applications are being developed both in thanatology and in clinical forensic medicine. However, these applications appear to remain largely at the research and development stage. Indeed, the use of AI applications by forensic pathologists or physicians is not yet a reality, owing to issues discussed in this article. Finally, the integration of AI in daily medicolegal practice involves not only forensic pathologists or physicians but also legal professionals.
Affiliation(s)
- Laurent Tournois
- Université Paris Cité, CNRS UMR 8045, 75006, Paris, France
- BioSilicium, Riom, France
- Victor Trousset
- IRIS Institut de Recherche Interdisciplinaire Sur Les Enjeux Sociaux, UMR8156 CNRS - U997 Inserm - EHESS - Université Sorbonne Paris Nord, Paris, France
- Department of Forensic and Social Medicine, AP-HP, Jean Verdier Hospital, Bondy, France
- Tania Delabarde
- Université Paris Cité, CNRS UMR 8045, 75006, Paris, France
- Institut Médico-Légal de Paris, Paris, France
- Bertrand Ludes
- Université Paris Cité, CNRS UMR 8045, 75006, Paris, France
- Institut Médico-Légal de Paris, Paris, France
- Thomas Lefèvre
- IRIS Institut de Recherche Interdisciplinaire Sur Les Enjeux Sociaux, UMR8156 CNRS - U997 Inserm - EHESS - Université Sorbonne Paris Nord, Paris, France
- Department of Forensic and Social Medicine, AP-HP, Jean Verdier Hospital, Bondy, France
3
Wang G, Hu J, Zhang Y, Xiao Z, Huang M, He Z, Chen J, Bai Z. A modified U-Net convolutional neural network for segmenting periprostatic adipose tissue based on contour feature learning. Heliyon 2024; 10:e25030. PMID: 38318024; PMCID: PMC10839980; DOI: 10.1016/j.heliyon.2024.e25030. Open access.
Abstract
Objective This study trains a U-shaped fully convolutional neural network (U-Net) model based on peripheral contour measures to achieve rapid, accurate, automated identification and segmentation of periprostatic adipose tissue (PPAT). Methods To date, no studies have used deep learning methods to discriminate and segment periprostatic adipose tissue. This paper proposes a novel, modified U-shaped convolutional neural network that learns contour control points from a small dataset of MRI T2W images of PPAT combined with their gradient images, a feature-learning strategy that reduces the feature ambiguity caused by differences in PPAT contours between patients. This paper adopts supervised learning on the labeled dataset, combines the probability and spatial distribution of control points, and proposes a weighted loss function to optimize the neural network's convergence speed and detection performance. Building on high-precision detection of control points, convex curve fitting yields the final PPAT contour. The segmentation results were compared with those of a fully convolutional network (FCN), U-Net, and a semantic segmentation convolutional network (SegNet) on three evaluation metrics: Dice similarity coefficient (DSC), Hausdorff distance (HD), and intersection over union (IoU). Results Cropped images with a 270 × 270-pixel matrix had DSC, HD, and IoU values of 70.1%, 27 mm, and 56.1%, respectively; downscaled images with a 256 × 256-pixel matrix had values of 68.7%, 26.7 mm, and 54.1%. A U-Net based on peripheral contour characteristics predicted the complete periprostatic adipose tissue contours on T2W images at different levels, whereas FCN, U-Net, and SegNet could not completely predict them. Conclusion This U-Net convolutional neural network based on peripheral contour features can identify and segment periprostatic adipose tissue quite well. Cropped images with a 270 × 270-pixel matrix are more appropriate for use with this network; reducing the resolution of the original image lowers its accuracy. FCN and SegNet are not appropriate for identifying PPAT on T2-sequence MR images. Our method can segment PPAT automatically, rapidly, and accurately, laying a foundation for PPAT image analysis.
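The DSC and IoU figures above are standard overlap measures between a predicted and a ground-truth mask. A minimal pure-Python sketch (not the authors' code; toy flat binary masks stand in for MRI segmentations):

```python
def dice(pred, target):
    """Dice similarity coefficient: 2|P∩T| / (|P|+|T|) on flat 0/1 masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0

def iou(pred, target):
    """Intersection over union: |P∩T| / |P∪T| on flat 0/1 masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0
```

For a perfect prediction both metrics equal 1.0; partially overlapping masks score lower on IoU than on DSC, which is why papers often report both.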
Affiliation(s)
- Gang Wang
- Department of Urology, Affiliated Haikou Hospital of Xiangya Medical College, Central South University, Haikou, 570208, Hainan Province, China
- Jinyue Hu
- Department of Radiology, Affiliated Haikou Hospital of Xiangya Medical College, Central South University, Haikou, 570208, Hainan Province, China
- Yu Zhang
- College of Computer Science and Cyberspace Security, Hainan University, Haikou, 570228, China
- Zhaolin Xiao
- College of Computer Science, Xi'an University of Technology, Xi'an, 710048, China
- Mengxing Huang
- College of Information and Communication Engineering, Hainan University, Haikou, 70208, China
- Zhanping He
- Department of Radiology, Affiliated Haikou Hospital of Xiangya Medical College, Central South University, Haikou, 570208, Hainan Province, China
- Jing Chen
- Department of Radiology, Affiliated Haikou Hospital of Xiangya Medical College, Central South University, Haikou, 570208, Hainan Province, China
- Zhiming Bai
- Department of Urology, Affiliated Haikou Hospital of Xiangya Medical College, Central South University, Haikou, 570208, Hainan Province, China
4
Perkins M, Roe J. Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis. F1000Res 2024; 12:1398. PMID: 38322309; PMCID: PMC10844801; DOI: 10.12688/f1000research.142411.2. Open access.
Abstract
Background As Artificial Intelligence (AI) technologies such as Generative AI (GenAI) have become more common in academic settings, it is necessary to examine how these tools interact with issues of authorship, academic integrity, and research methodologies. The current landscape lacks cohesive policies and guidelines for regulating AI's role in academic research, which has prompted discussions among publishers, authors, and institutions. Methods This study employs inductive thematic analysis to explore publisher policies regarding AI-assisted authorship and academic work. Our methods involved a two-fold analysis, using both AI-assisted and traditional unassisted techniques to examine the available policies from leading academic publishers and other publishing or academic entities. The framework was designed to offer multiple perspectives, harnessing the strengths of AI for pattern recognition while leveraging human expertise for nuanced interpretation. The results of these two analyses were combined to form the final themes. Results Our findings indicate six overall themes, three of which were independently identified in both the AI-assisted analysis and the unassisted, manual analysis using common software tools. A broad consensus appears among publishers that human authorship remains paramount and that the use of GenAI tools is permissible but must be disclosed. At the same time, GenAI tools are increasingly acknowledged for their supportive roles, including text generation and data analysis. The study also discusses the inherent limitations and biases of AI-assisted analysis, which necessitate rigorous scrutiny by authors, reviewers, and editors. Conclusions There is a growing recognition of AI's role as a valuable auxiliary tool in academic research, but one that comes with caveats pertaining to integrity, accountability, and interpretive limitations. This study used a novel GenAI-supported analysis to identify themes emerging in the policy landscape, underscoring the need for an informed, flexible approach to policy formulation that can adapt to the rapidly evolving landscape of AI technologies.
Affiliation(s)
- Jasper Roe
- James Cook University Singapore, Singapore, Singapore
5
Ebigbo A, Messmann H. Surfing the AI wave: Insights and challenges. Endoscopy 2024; 56:70-71. PMID: 37890515; DOI: 10.1055/a-2182-6188.
Affiliation(s)
- Alanna Ebigbo
- Gastroenterology, Universitätsklinikum Augsburg, Augsburg, Germany
- Helmut Messmann
- Gastroenterology, Universitätsklinikum Augsburg, Augsburg, Germany
6
Santorsola M, Lescai F. The promise of explainable deep learning for omics data analysis: Adding new discovery tools to AI. N Biotechnol 2023; 77:1-11. PMID: 37329982; DOI: 10.1016/j.nbt.2023.06.002.
Abstract
Deep learning has already revolutionised the way a wide range of data is processed in many areas of daily life. The ability to learn abstractions and relationships from heterogeneous data has provided impressively accurate prediction and classification tools for increasingly big datasets. This has a significant impact on the growing wealth of omics datasets, offering an unprecedented opportunity to better understand the complexity of living organisms. While this revolution is transforming the way these data are analyzed, explainable deep learning is emerging as an additional tool with the potential to change the way biological data are interpreted. Explainability addresses critical issues such as transparency, which is especially important when computational tools are introduced in clinical environments. Moreover, it empowers artificial intelligence with the capability to provide new insights into the input data, adding an element of discovery to these already powerful resources. In this review, we provide an overview of the transformative effects explainable deep learning is having on multiple sectors, ranging from genome engineering and genomics through radiomics to drug design and clinical trials. We offer life scientists a perspective from which to better understand the potential of these tools, and a motivation to implement them in their research, by suggesting learning resources they can use to take their first steps in this field.
Affiliation(s)
- Francesco Lescai
- Department of Biology and Biotechnology, University of Pavia, Pavia, Italy.
7
Yan P, Sun W, Li X, Li M, Jiang Y, Luo H. PKDN: Prior Knowledge Distillation Network for bronchoscopy diagnosis. Comput Biol Med 2023; 166:107486. PMID: 37757599; DOI: 10.1016/j.compbiomed.2023.107486.
Abstract
Bronchoscopy plays a crucial role in diagnosing and treating lung diseases. The deep learning-based diagnostic system for bronchoscopic images can assist physicians in accurately and efficiently diagnosing lung diseases, enabling patients to undergo timely pathological examinations and receive appropriate treatment. However, the existing diagnostic methods overlook the utilization of prior knowledge of medical images, and the limited feature extraction capability hinders precise focus on lesion regions, consequently affecting the overall diagnostic effectiveness. To address these challenges, this paper proposes a prior knowledge distillation network (PKDN) for identifying lung diseases through bronchoscopic images. The proposed method extracts color and edge features from lesion images using the prior knowledge guidance module, and subsequently enhances spatial and channel features by employing the dynamic spatial attention module and gated channel attention module, respectively. Finally, the extracted features undergo refinement and self-regulation through feature distillation. Furthermore, decoupled distillation is implemented to balance the importance of target and non-target class distillation, thereby enhancing the diagnostic performance of the network. The effectiveness of the proposed method is validated on the bronchoscopic dataset provided by Harbin Medical University Cancer Hospital, which consists of 2,029 bronchoscopic images from 200 patients. Experimental results demonstrate that the proposed method achieves an accuracy of 94.78% and an AUC of 98.17%, outperforming other methods significantly in diagnostic performance. These results indicate that the computer-aided diagnostic system based on PKDN provides satisfactory accuracy in diagnosing lung diseases during bronchoscopy.
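The distillation step described above rests on matching softened teacher and student output distributions. A minimal sketch of a generic temperature-scaled distillation loss (not the paper's decoupled target/non-target variant, which splits the two contributions):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_kl(teacher_logits, student_logits, temperature=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    as is conventional so gradients keep a comparable magnitude."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return temperature ** 2 * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when student and teacher agree exactly and grows as their softened distributions diverge; decoupled distillation additionally reweights the target-class and non-target-class parts of this sum.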
Affiliation(s)
- Pengfei Yan
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Weiling Sun
- Department of Endoscope, Harbin Medical University Cancer Hospital, Harbin 150040, China
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
8
Zhang H, Ogasawara K. Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing. Bioengineering (Basel) 2023; 10:1070. PMID: 37760173; PMCID: PMC10525184; DOI: 10.3390/bioengineering10091070. Open access.
Abstract
The opacity of deep learning makes its application challenging in the medical field. There is therefore a need for explainable artificial intelligence (XAI) in medicine, to ensure that models and their results can be explained in a manner that humans can understand. This study transfers a high-accuracy computer vision model to medical text tasks and uses the explanatory visualization method known as gradient-weighted class activation mapping (Grad-CAM) to generate heat maps, so that the basis for the model's decisions can be presented intuitively. The system comprises four modules: pre-processing, word embedding, classifier, and visualization. We used Word2Vec and BERT to compare word embeddings, and ResNet and one-dimensional convolutional neural networks (1D CNNs) to compare classifiers; a Bi-LSTM text classifier served as a direct baseline. After 25 epochs, the model that used pre-trained ResNet on the formalized text performed best (recall of 90.9%, precision of 91.1%, and a weighted F1 score of 90.2%). This study thus processes medical texts with ResNet through Grad-CAM-based explainable artificial intelligence and obtains highly accurate classification; at the same time, Grad-CAM visualization intuitively shows the words to which the model attends when making predictions.
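The Grad-CAM heat maps described above weight each feature channel by its global-average-pooled gradient and pass the weighted sum through a ReLU. A minimal sketch of that weighting step on plain nested lists, assuming the activations and gradients have already been extracted from a trained network (not the authors' pipeline):

```python
def grad_cam(activations, gradients):
    """Grad-CAM map from one conv layer.

    activations, gradients: nested lists of shape [C][H][W], i.e. the
    layer's feature maps and the gradients of the class score w.r.t. them.
    Returns an [H][W] heat map (unnormalised).
    """
    C, H, W = len(activations), len(activations[0]), len(activations[0][0])
    # Channel importance: global average pooling of the gradients.
    weights = [sum(sum(row) for row in gradients[c]) / (H * W) for c in range(C)]
    # Weighted sum over channels, then ReLU to keep positive evidence only.
    return [[max(0.0, sum(weights[c] * activations[c][i][j] for c in range(C)))
             for j in range(W)] for i in range(H)]
```

For text models the spatial map is one-dimensional over token positions, but the weighting step is the same.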
Affiliation(s)
- Katsuhiko Ogasawara
- Graduate School of Health Science, Hokkaido University, N12-W5, Kitaku, Sapporo 060-0812, Japan
9
Zheng C, Bouazizi M, Ohtsuki T, Kitazawa M, Horigome T, Kishimoto T. Detecting Dementia from Face-Related Features with Automated Computational Methods. Bioengineering (Basel) 2023; 10:862. PMID: 37508889; PMCID: PMC10376259; DOI: 10.3390/bioengineering10070862. Open access.
Abstract
Alzheimer's disease (AD) is a type of dementia that becomes more likely to occur as people age. It currently has no known cure. As the world's population ages rapidly, early screening for AD has become increasingly important. Traditional screening methods such as brain scans or psychiatric tests are stressful and costly, so patients are likely to be reluctant to undergo such screenings and fail to receive timely intervention. While researchers have explored the use of language in dementia detection, less attention has been given to face-related features. This paper investigates how face-related features can aid dementia detection, using the PROMPT dataset, which contains video data collected from patients with dementia during interviews. In this work, we extracted three types of features from the videos: face mesh, Histogram of Oriented Gradients (HOG) features, and Action Units (AU). We trained traditional machine learning models and deep learning models on the extracted features and investigated their effectiveness in dementia detection. Our experiments show that HOG features achieved the highest dementia-detection accuracy of 79%, followed by AU features with 71% and face mesh features with 66%. These results show that face-related features have the potential to be a crucial indicator in automated computational dementia detection.
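The HOG features that performed best above are histograms of gradient orientations accumulated over small image cells. A toy single-cell version of that building block (real HOG additionally tiles the image into cells and normalises over blocks, as in scikit-image's implementation):

```python
import math

def hog_cell_histogram(cell, n_bins=9):
    """Unsigned-orientation gradient histogram for one grayscale cell.

    cell: 2D list of pixel intensities. Each interior pixel votes its
    gradient magnitude into one of n_bins orientation bins over [0, 180).
    """
    H, W = len(cell), len(cell[0])
    bin_width = 180.0 / n_bins
    hist = [0.0] * n_bins
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            gx = cell[i][j + 1] - cell[i][j - 1]   # central difference, x
            gy = cell[i + 1][j] - cell[i - 1][j]   # central difference, y
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(angle / bin_width) % n_bins] += magnitude
    return hist
```

A vertical edge, for example, produces purely horizontal gradients, so all of its mass lands in the 0-degree bin.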
Affiliation(s)
- Chuheng Zheng
- Graduate School of Science and Technology, Keio University, Yokohama 223-0061, Kanagawa, Japan
- Mondher Bouazizi
- Faculty of Science and Technology, Keio University, Yokohama 223-0061, Kanagawa, Japan
- Tomoaki Ohtsuki
- Faculty of Science and Technology, Keio University, Yokohama 223-0061, Kanagawa, Japan
- Momoko Kitazawa
- School of Medicine, Keio University, 35 Shinanomachi, Shinjuku-ku, Tokyo 160-8582, Japan
- Toshiro Horigome
- School of Medicine, Keio University, 35 Shinanomachi, Shinjuku-ku, Tokyo 160-8582, Japan
- Taishiro Kishimoto
- School of Medicine, Keio University, 35 Shinanomachi, Shinjuku-ku, Tokyo 160-8582, Japan
10
Chen S, Li R, Wang C, Liang J, Yue K, Li W, Li Y. Attention-Based Convolutional Neural Network for Ingredients Identification. Entropy (Basel) 2023; 25:388. PMID: 36832753; PMCID: PMC9955413; DOI: 10.3390/e25020388.
Abstract
In recent years, with the development of artificial intelligence, smart catering has become one of the most popular research fields, and ingredient identification is a necessary and significant link in it. Automatic identification of ingredients can effectively reduce labor costs in the acceptance stage of the catering process. Although a few methods for ingredient classification exist, most suffer from low recognition accuracy and poor flexibility. To solve these problems, in this paper we construct a large-scale fresh-ingredient database and design an end-to-end multi-attention-based convolutional neural network model for ingredient identification. Our method achieves an accuracy of 95.90% on the classification task, which covers 170 kinds of ingredients. The experimental results indicate that this is the state-of-the-art method for automatic ingredient identification. In addition, because new categories beyond our training list may suddenly appear in actual applications, we introduce an open-set recognition module to flag samples outside the training set as unknown. The accuracy of open-set recognition reaches 74.6%. Our algorithm has been deployed successfully in smart catering systems. It achieves an average accuracy of 92% in actual use and saves 60% of the time compared with manual operation, according to statistics from actual application scenarios.
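The open-set recognition step above can be approximated by a common baseline: reject a prediction as "unknown" whenever the maximum softmax probability falls below a threshold. The paper's actual module may differ; the labels and threshold here are illustrative only:

```python
import math

def predict_open_set(logits, labels, threshold=0.7):
    """Return the predicted label, or 'unknown' when the classifier's
    maximum softmax probability falls below the rejection threshold."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # shift for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best] if probs[best] >= threshold else "unknown"
```

A confident in-distribution sample keeps its class label, while a flat, uncertain distribution is routed to the unknown bucket.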
11
NextDet: Efficient Sparse-to-Dense Object Detection with Attentive Feature Aggregation. Future Internet 2022. DOI: 10.3390/fi14120355. Open access.
Abstract
Object detection is a computer vision task of detecting instances of objects of a certain class, identifying their types, determining their locations, and accurately labelling them in an input image or video. This paper proposes a modern object detection network called NextDet that efficiently detects objects of multiple classes. It uses CondenseNeXt, an award-winning lightweight image-classification convolutional neural network with a reduced number of FLOPs and parameters, as the backbone to extract and aggregate image features at different granularities, together with other novel and modified strategies such as attentive feature aggregation in the head, to perform object detection and draw bounding boxes around the detected objects. Extensive experiments and ablation tests, as outlined in this paper, are performed on the Argoverse-HD and COCO datasets, which provide numerous temporally sparse to dense annotated images. They demonstrate that the proposed object detection algorithm with CondenseNeXt as the backbone increases mean Average Precision (mAP) performance and interpretability on Argoverse-HD's monocular ego-vehicle camera captured scenarios by up to 17.39%, as well as on COCO's large set of images of everyday scenes of real-world common objects by up to 14.62%.
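The mAP numbers above are built on the intersection-over-union test between predicted and ground-truth bounding boxes, which decides whether a detection counts as a true positive. A minimal sketch of that test (not the authors' evaluation code):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # zero when boxes are disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

COCO-style evaluation sweeps this score against thresholds (typically 0.5 to 0.95) and averages the resulting precision over classes to obtain mAP.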
12
Devedzic V. Identity of AI. Discover Artificial Intelligence 2022. DOI: 10.1007/s44163-022-00038-0.
Abstract
With the explosion of Artificial Intelligence (AI) as an area of study and practice, it has gradually become very difficult to mark its boundaries precisely and specify what exactly it encompasses. Many other areas of study are interwoven with AI, and new research and development topics that require an interdisciplinary approach frequently attract attention. In addition, several AI subfields and topics are home to long-time controversies that give rise to seemingly never-ending debates that further obfuscate the entire area of AI and make its boundaries even more indistinct. To tackle such problems in a systematic way, this paper introduces the concept of the identity of AI (viewed as an area of study) and discusses its dynamics, controversies, contradictions, and opposing opinions and approaches, coming from different sources and stakeholders. The concept of the identity of AI emerges as a set of characteristics that shape the current outlook on AI from epistemological, philosophical, ethical, technological, and social perspectives.
13
Li M, Li X, Jiang Y, Zhang J, Luo H, Yin S. Explainable multi-instance and multi-task learning for COVID-19 diagnosis and lesion segmentation in CT images. Knowl Based Syst 2022; 252:109278. PMID: 35783000; PMCID: PMC9235304; DOI: 10.1016/j.knosys.2022.109278.
Abstract
Coronavirus Disease 2019 (COVID-19) still shows a pandemic trend globally. Detecting infected individuals and analyzing their status can provide patients with proper healthcare while protecting the general population. Chest CT (computed tomography) is an effective tool for COVID-19 screening, as it displays detailed pathology-related information. Convolutional neural networks (CNNs) have become the mainstream method for automated COVID-19 diagnosis and lung CT image segmentation. However, most previous works treat automated diagnosis and image segmentation as two independent tasks, with some focusing on lung-field segmentation and others on single-lesion segmentation. Moreover, a lack of clinical explainability is a common problem for CNN-based methods. In this context, we develop a multi-task learning framework in which COVID-19 diagnosis and multi-lesion recognition (segmentation of CT images) are achieved simultaneously. The core of the proposed framework is an explainable multi-instance multi-task network. The network learns task-related features adaptively with learnable weights, and gives explainable diagnosis results by suggesting local CT images with lesions as additional evidence. Severity assessment of COVID-19 and lesion quantification are then performed to analyze patient status. Extensive experimental results on real-world datasets show that the proposed framework outperforms all compared approaches for COVID-19 diagnosis and multi-lesion segmentation.
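One common way to realise "learnable weights" for combining several task losses is homoscedastic-uncertainty weighting in the style of Kendall et al.; the paper's adaptive scheme may differ, so this is a generic sketch of the idea only:

```python
import math

def multitask_loss(losses, log_vars):
    """Combine per-task losses with learnable log-variance weights.

    Each task loss L_i is scaled by exp(-s_i) and a regulariser s_i is
    added, so the optimiser can downweight noisy tasks but pays a
    penalty for doing so. log_vars would be trained jointly with the
    network parameters.
    """
    total = 0.0
    for loss, log_var in zip(losses, log_vars):
        total += math.exp(-log_var) * loss + log_var
    return total
```

With both log-variances at zero the losses simply add; raising a task's log-variance shrinks that task's contribution while the additive term discourages the trivial solution of ignoring every task.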
Affiliation(s)
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Jiusi Zhang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, Heilongjiang, China
- Shen Yin
- Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, 7034, Norway