1
Burti S, Banzato T, Coghlan S, Wodzinski M, Bendazzoli M, Zotti A. Artificial intelligence in veterinary diagnostic imaging: Perspectives and limitations. Res Vet Sci 2024; 175:105317. PMID: 38843690; DOI: 10.1016/j.rvsc.2024.105317.
Abstract
The field of veterinary diagnostic imaging is undergoing significant transformation with the integration of artificial intelligence (AI) tools. This manuscript provides an overview of the current state and future prospects of AI in veterinary diagnostic imaging, surveying applications across imaging modalities such as radiography, ultrasound, computed tomography, and magnetic resonance imaging. Examples are provided for each modality, ranging from orthopaedics to internal medicine, cardiology, and more, and notable studies demonstrating AI's potential for improved accuracy in detecting and classifying abnormalities are discussed. The ethical considerations of using AI in veterinary diagnostics are also explored, highlighting the need for transparent AI development, accurate training data, awareness of the limitations of AI models, and the importance of maintaining human expertise in the decision-making process; AI is framed as a decision-support tool rather than a replacement for human judgement. In conclusion, the manuscript assesses the benefits and challenges of integrating AI into clinical practice while emphasizing the critical role of ethics and human expertise in ensuring the wellbeing of veterinary patients.
Affiliation(s)
- Silvia Burti
- Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Legnaro, 35020 Padua, Italy
- Tommaso Banzato
- Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Legnaro, 35020 Padua, Italy
- Simon Coghlan
- School of Computing and Information Systems, Centre for AI and Digital Ethics, Australian Research Council Centre of Excellence for Automated Decision-Making and Society, University of Melbourne, 3052 Melbourne, Australia
- Marek Wodzinski
- Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Krakow, 30059 Kraków, Poland; Information Systems Institute, University of Applied Sciences - Western Switzerland (HES-SO Valais), 3960 Sierre, Switzerland
- Margherita Bendazzoli
- Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Legnaro, 35020 Padua, Italy
- Alessandro Zotti
- Department of Animal Medicine, Production and Health, University of Padua, Viale dell'Università 16, Legnaro, 35020 Padua, Italy
2
Suksangvoravong H, Choisunirachon N, Tongloy T, Chuwongin S, Boonsang S, Kittichai V, Thanaboonnipat C. Automatic classification and grading of canine tracheal collapse on thoracic radiographs by using deep learning. Vet Radiol Ultrasound 2024. PMID: 39012062; DOI: 10.1111/vru.13413.
Abstract
Tracheal collapse is a chronic, progressively worsening disease; the severity of clinical signs depends on the degree of airway collapse. Automated tools are needed to modernize radiographic disease screening across veterinary settings, such as animal clinics and hospitals, primarily because radiographic interpretation carries inherent uncertainty that varies among veterinarians. In this study, an artificial intelligence model was developed to screen for canine tracheal collapse using archived lateral cervicothoracic radiographs. The model differentiates a normal trachea from a collapsed one, from early to severe degrees. The you-only-look-once (YOLO) models, including YOLO v3, YOLO v4, and YOLO v4 tiny, were used to train and test the data sets under the in-house XXX platform. The YOLO v4 tiny-416 model performed satisfactorily in discriminating among the normal trachea, grade 1-2 tracheal collapse, and grade 3-4 tracheal collapse, with 98.30% sensitivity, 99.20% specificity, and 98.90% accuracy. The area under the precision-recall curve was >0.8, demonstrating high diagnostic accuracy. Agreement between the deep learning model and radiologists was κ = 0.975 (P < .001), with all observers in excellent agreement (κ = 1.00, P < .001), and the intraclass correlation coefficient between observers was >0.90, representing excellent consistency. The deep learning model can therefore be a useful and reliable method for screening and grading tracheal collapse on routine lateral cervicothoracic radiographs.
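The sensitivity, specificity, and accuracy figures reported above follow the standard confusion-matrix definitions. A minimal sketch; the per-class counts below are hypothetical (the study's raw counts are not given here), chosen only so the rounded metrics land near the reported values:

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard binary screening metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall fraction correct
    return sensitivity, specificity, accuracy

# Hypothetical counts, for illustration only
sens, spec, acc = screening_metrics(tp=58, fp=1, tn=124, fn=1)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, accuracy={acc:.1%}")
# → sensitivity=98.3%, specificity=99.2%, accuracy=98.9%
```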
Affiliation(s)
- Hathaiphat Suksangvoravong
- Department of Veterinary Surgery, Faculty of Veterinary Science, Chulalongkorn University, Bangkok, Thailand
- Nan Choisunirachon
- Department of Veterinary Surgery, Faculty of Veterinary Science, Chulalongkorn University, Bangkok, Thailand
- Teerawat Tongloy
- College of Advanced Manufacturing Innovation, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand
- Santhad Chuwongin
- College of Advanced Manufacturing Innovation, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand
- Siridech Boonsang
- Department of Electrical Engineering, Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand
- Veerayuth Kittichai
- Faculty of Medicine, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand
- Chutimon Thanaboonnipat
- Department of Veterinary Surgery, Faculty of Veterinary Science, Chulalongkorn University, Bangkok, Thailand
3
Chu CP. ChatGPT in veterinary medicine: a practical guidance of generative artificial intelligence in clinics, education, and research. Front Vet Sci 2024; 11:1395934. PMID: 38911678; PMCID: PMC11192069; DOI: 10.3389/fvets.2024.1395934.
Abstract
ChatGPT, the most accessible generative artificial intelligence (AI) tool, offers considerable potential for veterinary medicine, yet a dedicated review of its specific applications is lacking. This review concisely synthesizes the latest research and practical applications of ChatGPT within the clinical, educational, and research domains of veterinary medicine. It intends to provide specific guidance and actionable examples of how generative AI can be directly utilized by veterinary professionals without a programming background. For practitioners, ChatGPT can extract patient data, generate progress notes, and potentially assist in diagnosing complex cases. Veterinary educators can create custom GPTs for student support, while students can utilize ChatGPT for exam preparation. In research, ChatGPT can aid academic writing, though veterinary publishers have set specific requirements for authors to follow. Despite its transformative potential, careful use is essential to avoid pitfalls like hallucination. This review addresses ethical considerations, provides learning resources, and offers tangible examples to guide responsible implementation; a table of key takeaways summarizes the review. By highlighting potential benefits and limitations, this review equips veterinarians, educators, and researchers to harness the power of ChatGPT effectively.
Affiliation(s)
- Candice P. Chu
- Department of Veterinary Pathobiology, College of Veterinary Medicine & Biomedical Sciences, Texas A&M University, College Station, TX, United States
4
Hernandez Torres SI, Holland L, Edwards TH, Venn EC, Snider EJ. Deep learning models for interpretation of point of care ultrasound in military working dogs. Front Vet Sci 2024; 11:1374890. PMID: 38903685; PMCID: PMC11187302; DOI: 10.3389/fvets.2024.1374890.
Abstract
Introduction: Military working dogs (MWDs) are essential to military operations across a wide range of missions. In this pivotal role, MWDs can become casualties requiring specialized veterinary care that may not always be available far forward on the battlefield. Some injuries, such as pneumothorax, hemothorax, or abdominal hemorrhage, can be diagnosed using point of care ultrasound (POCUS) such as the Global FAST® exam, presenting a unique opportunity for artificial intelligence (AI) to aid in the interpretation of ultrasound images. In this article, deep learning classification neural networks were developed for POCUS assessment in MWDs. Methods: Images were collected in five MWDs under general anesthesia or deep sedation for all scan points in the Global FAST® exam. For representative injuries, a cadaver model was used, from which positive and negative injury images were captured. A total of 327 ultrasound clips were captured and split across scan points for training three different AI network architectures: MobileNetV2, DarkNet-19, and ShrapML. Gradient class activation mapping (GradCAM) overlays were generated for representative images to better explain AI predictions. Results: AI model performance reached over 82% accuracy for all scan points. The highest-performing model was trained with the MobileNetV2 network for the cystocolic scan point, achieving 99.8% accuracy. Across all trained networks, the diaphragmatic hepatorenal scan point had the best overall performance. However, GradCAM overlays showed that the models with the highest accuracy, like MobileNetV2, were not always identifying relevant features, whereas the GradCAM heatmaps for ShrapML generally agreed with the regions most indicative of fluid accumulation. Discussion: Overall, the AI models developed can automate POCUS predictions in MWDs. Preliminarily, ShrapML had the strongest performance and prediction rate paired with accurate tracking of fluid accumulation sites, making it the most suitable option for eventual real-time deployment with ultrasound systems. Further integration of this technology with imaging systems will expand POCUS-based triage of MWDs.
Affiliation(s)
- Sofia I. Hernandez Torres
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Lawrence Holland
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Thomas H. Edwards
- Hemorrhage Control and Vascular Dysfunction Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Texas A&M University, School of Veterinary Medicine, College Station, TX, United States
- Emilee C. Venn
- Veterinary Support Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Eric J. Snider
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
5
Worthing KA, Roberts M, Šlapeta J. Surveyed veterinary students in Australia find ChatGPT practical and relevant while expressing no concern about artificial intelligence replacing veterinarians. Vet Rec Open 2024; 11:e280. PMID: 38854916; PMCID: PMC11162838; DOI: 10.1002/vro2.80.
Abstract
Background: Chat Generative Pre-trained Transformer (ChatGPT) is a freely available online artificial intelligence (AI) program capable of understanding and generating human-like language. This study assessed veterinary students' perceptions about ChatGPT in education and practice. It compared perceptions about ChatGPT between students who had completed a critical analysis task and those who had not. Methods: This cross-sectional study surveyed 498 Doctor of Veterinary Medicine (DVM) students at The University of Sydney, Australia. Second-year DVM students researched a veterinary pathogen and then completed a critical analysis of ChatGPT (version 3.5) output for the same pathogen. A survey based on the Technology Acceptance Model was then delivered to all DVM students from all years of the programme, collecting data using Likert-style, categorical and free-text items. Results: Over 75% of the 100 respondents reported having used ChatGPT. The students found ChatGPT's output relevant and practical for their use but perceived it as inaccurate. They perceived ChatGPT output to be more useful for veterinary students than for pet owners or veterinarians. Those who had completed the critical analysis assignment had a more positive view of ChatGPT's practicality for veterinary students but noted its authoritative tone even when delivering inaccurate information. Over 50% of the students agreed that information about tools such as ChatGPT should be included in the veterinary curriculum. Students agreed that veterinarians should embrace AI but disagreed that AI would eventually replace the need for veterinarians. Conclusions: A critical appraisal of outputs from AI tools such as ChatGPT may help prepare future veterinarians for the effective use of these tools.
Affiliation(s)
- Kate A. Worthing
- Sydney School of Veterinary Science, Faculty of Science, The University of Sydney, Sydney, New South Wales, Australia
- Sydney Infectious Diseases Institute, The University of Sydney, Sydney, New South Wales, Australia
- Madeleine Roberts
- Sydney School of Veterinary Science, Faculty of Science, The University of Sydney, Sydney, New South Wales, Australia
- Jan Šlapeta
- Sydney School of Veterinary Science, Faculty of Science, The University of Sydney, Sydney, New South Wales, Australia
6
Scharre A, Scholler D, Gesell-May S, Müller T, Zablotski Y, Ertel W, May A. Comparison of veterinarians and a deep learning tool in the diagnosis of equine ophthalmic diseases. Equine Vet J 2024. PMID: 38567426; DOI: 10.1111/evj.14087.
Abstract
BACKGROUND/OBJECTIVES The aim was to compare ophthalmic diagnoses made by veterinarians with those of a deep learning (artificial intelligence, AI) software tool developed to aid in the diagnosis of equine ophthalmic diseases. As equine ophthalmology is a highly specialised field of equine medicine, the tool may help in diagnosing equine ophthalmic emergencies such as uveitis. STUDY DESIGN In silico tool development and assessment of diagnostic performance. METHODS Convolutional neural networks (CNNs) were trained on 2346 photographs of equine eyes, augmented to 9384 images, and 261 separate unmodified images were used to evaluate the trained network. The trained tool was then tested on 40 photographs displaying various equine ophthalmic diseases (10 healthy, 12 uveitis, 18 other diseases). The same data set was shown to groups of veterinarians (equine, small animal, mixed practice, other) in an opinion poll of 148 veterinarians to compare diagnostic performance. RESULTS The probability of a correct answer was 93% for the AI programme, 76% for equine veterinarians, and 67% for other veterinarians. MAIN LIMITATIONS Diagnosis was based solely on images of equine eyes, without the possibility of evaluating the inner eye. CONCLUSIONS The deep learning tool proved at least equivalent to veterinarians in assessing ophthalmic diseases from photographs, and we therefore conclude it may be useful in detecting potential emergency cases. In this context, blindness in horses may be prevented, as the horse can receive prompt treatment or referral to an equine hospital. Furthermore, the tool gives less experienced veterinarians support in differentiating uveitis from other ocular anterior segment diseases and in their decision-making regarding treatment.
Affiliation(s)
- Annabel Scharre
- Equine Clinic, Ludwig Maximilians University, Oberschleissheim, Germany
- Dominik Scholler
- Equine Clinic, Ludwig Maximilians University, Oberschleissheim, Germany
- Yury Zablotski
- Clinic for Ruminants, Ludwig Maximilians University, Oberschleissheim, Germany
- Wolfgang Ertel
- Institute for Artificial Intelligence, Ravensburg-Weingarten University, Weingarten, Germany
- Anna May
- Equine Clinic, Ludwig Maximilians University, Oberschleissheim, Germany
7
Nyquist ML, Fink LA, Mauldin GE, Coffman CR. Evaluation of a Novel Veterinary Dental Radiography Artificial Intelligence Software Program. J Vet Dent 2024:8987564231221071. PMID: 38321886; DOI: 10.1177/08987564231221071.
Abstract
There is a growing trend of artificial intelligence (AI) applications in veterinary medicine, with the potential to assist veterinarians in clinical decisions. A commercially available, AI-based software program (AISP) for detecting common radiographic dental pathologies in dogs and cats was assessed for agreement with two human evaluators. Furcation bone loss, periapical lucency, resorptive lesion, retained tooth root, attachment (alveolar bone) loss and tooth fracture were assessed. The AISP does not attempt to diagnose or provide treatment recommendations, nor has it been trained to identify other types of radiographic pathology. Inter-rater reliability for detecting pathologies was measured by absolute percent agreement and Gwet's agreement coefficient. There was good to excellent inter-rater reliability among all raters, suggesting the AISP performs similarly to human evaluators at detecting the specified pathologies. Sensitivity and specificity for the AISP were assessed using the human evaluators as the reference standard. The results revealed a trend of low sensitivity and high specificity: the AISP may produce a high rate of false negatives and may therefore not be a good tool for initial screening, but its low rate of false positives suggests it may be beneficial as a "second set of eyes," because when it detects a specific pathology, there is a high likelihood that the pathology is present. Understood as an aid rather than a substitute for veterinarians, the AISP may increase dental radiography utilization and diagnostic potential.
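Gwet's agreement coefficient, used above for inter-rater reliability, replaces Cohen's κ's chance-agreement term with one based on overall category prevalence, which keeps the statistic stable when one category dominates. A minimal two-rater, binary-rating sketch of the AC1 variant (the ratings below are hypothetical, not the study's data):

```python
def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 for two raters assigning binary (0/1) ratings."""
    n = len(ratings_a)
    # Observed agreement: fraction of subjects the raters agree on
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Mean prevalence of category 1 across both raters
    pi = (sum(ratings_a) + sum(ratings_b)) / (2 * n)
    # AC1 chance-agreement term for two categories
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)

# Hypothetical: software vs. human marking a pathology present (1) / absent (0)
ai_rater    = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
human_rater = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
print(round(gwet_ac1(ai_rater, human_rater), 3))  # → 0.817
```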
Affiliation(s)
- Lisa A Fink
- Arizona Veterinary Dental Specialists, Scottsdale, AZ, USA
- Curt R Coffman
- Arizona Veterinary Dental Specialists, Scottsdale, AZ, USA
8
Li J, Zhang Y. Regressive vision transformer for dog cardiomegaly assessment. Sci Rep 2024; 14:1539. PMID: 38233422; PMCID: PMC10794204; DOI: 10.1038/s41598-023-50063-x.
Abstract
Cardiac disease is one of the leading causes of death in dogs. Automatic cardiomegaly detection can help clinicians improve diagnostic accuracy. Deep learning methods show promising results for cardiomegaly classification, yet they are still not widely applied in clinical trials because predicted results are difficult to map back to the input radiographs. To overcome these challenges, we first collect large-scale dog heart X-ray images. We then develop a dog heart labeling tool and apply a few-shot generalization strategy to accelerate labeling. We also develop a regressive vision transformer model with an orthogonal layer to bridge the traditional, clinically used vertebral heart scale (VHS) metric with deep learning models. Extensive experimental results demonstrate that the proposed model achieves state-of-the-art performance.
Affiliation(s)
- Jialu Li
- Master of Public Administration, Cornell University, Ithaca, NY, 14853, USA
- Youshan Zhang
- Computer Science and Artificial Intelligence, Yeshiva University, New York, NY, 10033, USA
9
Borowska M, Jasiński T, Gierasimiuk S, Pauk J, Turek B, Górski K, Domino M. Three-Dimensional Segmentation Assisted with Clustering Analysis for Surface and Volume Measurements of Equine Incisor in Multidetector Computed Tomography Data Sets. Sensors (Basel) 2023; 23:8940. PMID: 37960639; PMCID: PMC10650163; DOI: 10.3390/s23218940.
Abstract
Dental diagnostic imaging has progressed towards advanced technologies such as 3D image processing. Since multidetector computed tomography (CT) is widely available in equine clinics, CT-based anatomical 3D models, segmentations, and measurements have become clinically applicable. This study aimed to use 3D segmentation of CT images and volumetric measurements to investigate differences in the surface area and volume of equine incisors. 3D Slicer was used to segment single incisors from 50 horses' heads and to extract volumetric features. The incisors showed vertical, but not horizontal, axial symmetry. Surface area and volume differed significantly between temporary and permanent incisors, allowing easy eruption-related clustering of the CT-based 3D images with an accuracy of >0.75. The volumetric features differed partially between central, intermediate, and corner incisors, allowing moderate location-related clustering with an accuracy of >0.69. The volumetric features separated degrees of equine odontoclastic tooth resorption and hypercementosis (EOTRH) better for mandibular than for maxillary incisors; accordingly, the accuracy of EOTRH degree-related clustering was >0.72 for the mandible and >0.33 for the maxilla. CT-based 3D images of equine incisors can be successfully segmented using routinely acquired multidetector CT data sets and the proposed data-processing approaches.
Affiliation(s)
- Marta Borowska
- Institute of Biomedical Engineering, Faculty of Mechanical Engineering, Białystok University of Technology, 15-351 Bialystok, Poland
- Tomasz Jasiński
- Department of Large Animal Diseases and Clinic, Institute of Veterinary Medicine, Warsaw University of Life Sciences, 02-787 Warsaw, Poland
- Sylwia Gierasimiuk
- Institute of Biomedical Engineering, Faculty of Mechanical Engineering, Białystok University of Technology, 15-351 Bialystok, Poland
- Jolanta Pauk
- Institute of Biomedical Engineering, Faculty of Mechanical Engineering, Białystok University of Technology, 15-351 Bialystok, Poland
- Bernard Turek
- Department of Large Animal Diseases and Clinic, Institute of Veterinary Medicine, Warsaw University of Life Sciences, 02-787 Warsaw, Poland
- Kamil Górski
- Department of Large Animal Diseases and Clinic, Institute of Veterinary Medicine, Warsaw University of Life Sciences, 02-787 Warsaw, Poland
- Małgorzata Domino
- Department of Large Animal Diseases and Clinic, Institute of Veterinary Medicine, Warsaw University of Life Sciences, 02-787 Warsaw, Poland
10
Nakazawa Y, Ohshima T, Kanemoto H, Fujiwara-Igarashi A. Construction of diagnostic prediction model for canine nasal diseases using less invasive examinations without anesthesia. J Vet Med Sci 2023; 85:1083-1093. PMID: 37661430; PMCID: PMC10600536; DOI: 10.1292/jvms.23-0315.
Abstract
Advanced imaging under general anesthesia is frequently employed to achieve a definitive diagnosis of canine nasal diseases, but such examinations cannot always be performed immediately. This study aimed to construct prediction models for canine nasal diseases using less invasive examinations such as clinical signs and radiography. Dogs diagnosed with nasal disease between 2010 and 2020 were retrospectively investigated to construct the prediction models (Group M; GM), and dogs diagnosed between 2020 and 2021 were prospectively investigated to validate their efficacy (Group V; GV). Prediction models were created using two methods: manual (Model 1) and LASSO logistic regression analysis (Model 2). In total, 103 and 86 dogs were included in GM and GV, respectively. In Model 1, the sensitivity and specificity for neoplasia (NP) and sino-nasal aspergillosis (SNA) were 0.88 and 0.81 in GM and 0.92 and 0.78 in GV, respectively; those for non-infectious rhinitis (NIR) and rhinitis secondary to dental disease (DD) were 0.78 and 0.88 in GM and 0.64 and 0.80 in GV. In Model 2, the sensitivity and specificity for NP and SNA were 0.93 and 1 in GM and 0.93 and 0.75 in GV, respectively; those for NIR and DD were 0.96 and 0.89 in GM and 0.80 and 0.79 in GV. This study suggests that prediction models can be built from less invasive examinations; utilizing them may lead to appropriate referral for examinations under general anesthesia and for treatment.
Affiliation(s)
- Yuta Nakazawa
- Laboratory of Veterinary Radiology, School of Veterinary Medicine, Nippon Veterinary and Life Science University, Tokyo, Japan
- Takafumi Ohshima
- Laboratory of Veterinary Radiology, School of Veterinary Medicine, Nippon Veterinary and Life Science University, Tokyo, Japan
- Hideyuki Kanemoto
- DVMs Animal Medical Center Yokohama, Kanagawa, Japan
- ER Hachioji Advanced Veterinary Medical Emergency and Critical Care Center, Tokyo, Japan
- Aki Fujiwara-Igarashi
- Laboratory of Veterinary Radiology, School of Veterinary Medicine, Nippon Veterinary and Life Science University, Tokyo, Japan
11
Valente C, Wodzinski M, Guglielmini C, Poser H, Chiavegato D, Zotti A, Venturini R, Banzato T. Development of an artificial intelligence-based method for the diagnosis of the severity of myxomatous mitral valve disease from canine chest radiographs. Front Vet Sci 2023; 10:1227009. PMID: 37808107; PMCID: PMC10556456; DOI: 10.3389/fvets.2023.1227009.
Abstract
An algorithm based on artificial intelligence (AI) was developed and tested to classify stages of myxomatous mitral valve disease (MMVD) from canine thoracic radiographs. The radiographs were selected from the medical databases of two institutions, considering dogs over 6 years of age that had undergone both chest radiography and echocardiographic examination; only radiographs clearly showing the cardiac silhouette were considered. The convolutional neural network (CNN) was trained on right and left lateral and/or ventro-dorsal or dorso-ventral views. Each dog was classified according to the American College of Veterinary Internal Medicine (ACVIM) guidelines as stage B1, B2, or C + D. A ResNet18 CNN was used as the classification network, and the results were evaluated using confusion matrices, receiver operating characteristic curves, and t-SNE and UMAP projections. The CNN performed well in determining MMVD stage from the lateral views, with an area under the curve (AUC) of 0.87, 0.77, and 0.88 for stages B1, B2, and C + D, respectively. The algorithm's high accuracy in predicting MMVD stage suggests that it could stand as a useful support tool in the interpretation of canine thoracic radiographs.
Affiliation(s)
- Carlotta Valente
- Department of Animal Medicine, Production and Health, University of Padua, Padua, Italy
- Marek Wodzinski
- Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland
- Information Systems Institute, University of Applied Sciences—Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Carlo Guglielmini
- Department of Animal Medicine, Production and Health, University of Padua, Padua, Italy
- Helen Poser
- Department of Animal Medicine, Production and Health, University of Padua, Padua, Italy
- Alessandro Zotti
- Department of Animal Medicine, Production and Health, University of Padua, Padua, Italy
- Tommaso Banzato
- Department of Animal Medicine, Production and Health, University of Padua, Padua, Italy
12
Pomerantz LK, Solano M, Kalosa-Kenyon E. Performance of a commercially available artificial intelligence software for the detection of confirmed pulmonary nodules and masses in canine thoracic radiography. Vet Radiol Ultrasound 2023; 64:881-889. PMID: 37549965; DOI: 10.1111/vru.13287.
Abstract
Advancements in the field of artificial intelligence (AI) are modest in veterinary medicine relative to their substantial growth in human medicine. However, interest in this field is increasing, and commercially available veterinary AI products are already on the market. In this retrospective, diagnostic accuracy study, the accuracy of a commercially available convolutional neural network AI product (Vetology AI®) is assessed on 56 thoracic radiographic studies of pulmonary nodules and masses, as well as 32 control cases. Positive cases were confirmed to have pulmonary pathology consistent with a nodule/mass either by CT, cytology, or histopathology. The AI software detected pulmonary nodules/masses in 31 of 56 confirmed cases and correctly classified 30 of 32 control cases. The AI model accuracy is 69.3%, balanced accuracy 74.6%, F1-score 0.7, sensitivity 55.4%, and specificity 93.75%. Building on these results, both the current clinical relevance of AI and how veterinarians can be expected to use available commercial products are discussed.
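The reported figures follow directly from the counts in the abstract: 31 of 56 confirmed cases detected (TP = 31, FN = 25) and 30 of 32 controls correctly classified (TN = 30, FP = 2):

```python
tp, fn, tn, fp = 31, 25, 30, 2  # counts stated in the abstract

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)
balanced_accuracy = (sensitivity + specificity) / 2
precision = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(f"accuracy={accuracy:.1%}, balanced={balanced_accuracy:.1%}, F1={f1:.1f}, "
      f"sensitivity={sensitivity:.1%}, specificity={specificity:.2%}")
# → accuracy=69.3%, balanced=74.6%, F1=0.7, sensitivity=55.4%, specificity=93.75%
```

The asymmetry between sensitivity and specificity is exactly what motivates the "second set of eyes" conclusion: few false positives, many false negatives.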
Affiliation(s)
- Leah Kathleen Pomerantz
- Department of Clinical Sciences, Tufts University Cummings School of Veterinary Medicine, North Grafton, Massachusetts, USA
- Mauricio Solano
- Department of Clinical Sciences, Tufts University Cummings School of Veterinary Medicine, North Grafton, Massachusetts, USA
- Eric Kalosa-Kenyon
- Department of Clinical Sciences, Tufts University Cummings School of Veterinary Medicine, North Grafton, Massachusetts, USA

13
Szatmári V, Hofman ZMM, van Bijsterveldt NJ, Tellegen AR, Vilaplana Grosso FR. A Novel Standardized Method for Aiding to Determine Left Atrial Enlargement on Lateral Thoracic Radiographs in Dogs. Animals (Basel) 2023; 13:2178. [PMID: 37443976] [DOI: 10.3390/ani13132178] [Received: 06/12/2023] [Revised: 06/16/2023] [Accepted: 06/21/2023] [Indexed: 07/15/2023]
Abstract
BACKGROUND Left atrial enlargement indicates severe cardiac disease. Although the gold standard for determining left atrial size is echocardiography, many veterinary practices lack the necessary equipment and expertise. Thoracic radiography is therefore often used to differentiate cardiogenic pulmonary edema from primary respiratory disease and to help distinguish dogs with stage B1 from stage B2 mitral valve degeneration. METHODS The goal was to test a new standardized method for identifying radiographic left atrial enlargement. On a lateral radiograph, a straight line was drawn from the dorsal border of the tracheal bifurcation to the crossing point of the dorsal border of the caudal vena cava and the most cranial crus of the diaphragm. If any part of the left atrium extended dorsal to this line, the atrium was considered enlarged. The echocardiographic left atrial-to-aortic ratio (LA:Ao) served as the reference standard. Thirty-nine observers with various levels of experience evaluated 90 radiographs, first subjectively and then with the new method. RESULTS The new method correlated moderately with LA:Ao (r = 0.56-0.66) in all groups. The diagnostic accuracy of subjective assessment and of the new method did not differ (72-74%). CONCLUSIONS Although the new method was not superior to subjective assessment, it may facilitate learning and support subjective interpretation.
Affiliation(s)
- Viktor Szatmári
- Department of Clinical Sciences, Faculty of Veterinary Medicine, Utrecht University, Yalelaan 108, 3584 CM Utrecht, The Netherlands
- Zelie M M Hofman
- Department of Clinical Sciences, Faculty of Veterinary Medicine, Utrecht University, Yalelaan 108, 3584 CM Utrecht, The Netherlands
- Nynke J van Bijsterveldt
- Department of Clinical Sciences, Faculty of Veterinary Medicine, Utrecht University, Yalelaan 108, 3584 CM Utrecht, The Netherlands
- Anna R Tellegen
- Department of Clinical Sciences, Faculty of Veterinary Medicine, Utrecht University, Yalelaan 108, 3584 CM Utrecht, The Netherlands
- Federico R Vilaplana Grosso
- Department of Clinical Sciences, Faculty of Veterinary Medicine, Utrecht University, Yalelaan 108, 3584 CM Utrecht, The Netherlands

14
Pereira AI, Franco-Gonçalo P, Leite P, Ribeiro A, Alves-Pimenta MS, Colaço B, Loureiro C, Gonçalves L, Filipe V, Ginja M. Artificial Intelligence in Veterinary Imaging: An Overview. Vet Sci 2023; 10:320. [PMID: 37235403] [DOI: 10.3390/vetsci10050320] [Received: 03/06/2023] [Revised: 04/21/2023] [Accepted: 04/25/2023] [Indexed: 05/28/2023]
Abstract
Artificial intelligence and machine learning have been used increasingly in the medical imaging field in the past few years. The evaluation of medical images is subjective and complex, so applying artificial intelligence and deep learning methods to automate the analysis process would be very beneficial. Many researchers have been applying these methods to diagnostic image analysis, developing software capable of assisting veterinary doctors or radiologists in their daily practice. This article details the main methodologies used to develop machine learning software applications and how veterinarians with an interest in this field can benefit from them. The main goal of this study is to offer veterinary professionals a simple guide to the basics of artificial intelligence and machine learning, including concepts such as deep learning, convolutional neural networks, transfer learning, and performance evaluation methods. The language is adapted for medical technicians, and the work already published in this field is reviewed for application in the imaging diagnosis of different animal body systems: musculoskeletal, thoracic, nervous, and abdominal.
Affiliation(s)
- Ana Inês Pereira
- Department of Veterinary Science, University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Pedro Franco-Gonçalo
- Department of Veterinary Science, University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Veterinary and Animal Research Centre (CECAV), University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Associate Laboratory for Animal and Veterinary Sciences (AL4AnimalS), 5000-801 Vila Real, Portugal
- Pedro Leite
- Neadvance Machine Vision SA, 4705-002 Braga, Portugal
- Maria Sofia Alves-Pimenta
- Veterinary and Animal Research Centre (CECAV), University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Associate Laboratory for Animal and Veterinary Sciences (AL4AnimalS), 5000-801 Vila Real, Portugal
- Department of Animal Science, University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Bruno Colaço
- Veterinary and Animal Research Centre (CECAV), University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Associate Laboratory for Animal and Veterinary Sciences (AL4AnimalS), 5000-801 Vila Real, Portugal
- Department of Animal Science, University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Cátia Loureiro
- School of Science and Technology, University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Department of Engineering, University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Lio Gonçalves
- School of Science and Technology, University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Department of Engineering, University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Institute for Systems and Computer Engineering (INESC-TEC), Technology and Science, 4200-465 Porto, Portugal
- Vítor Filipe
- School of Science and Technology, University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Department of Engineering, University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Institute for Systems and Computer Engineering (INESC-TEC), Technology and Science, 4200-465 Porto, Portugal
- Mário Ginja
- Department of Veterinary Science, University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Veterinary and Animal Research Centre (CECAV), University of Trás-os-Montes and Alto Douro (UTAD), 5000-801 Vila Real, Portugal
- Associate Laboratory for Animal and Veterinary Sciences (AL4AnimalS), 5000-801 Vila Real, Portugal