1. Kabir MA, Samad S, Ahmed F, Naher S, Featherston J, Laird C, Ahmed S. Mobile Apps for Wound Assessment and Monitoring: Limitations, Advancements and Opportunities. J Med Syst 2024; 48:80. PMID: 39180710; PMCID: PMC11344716; DOI: 10.1007/s10916-024-02091-x.
Abstract
With the proliferation of wound assessment apps across app stores and the increasing integration of artificial intelligence (AI) in healthcare apps, there is a growing need for a comprehensive evaluation system. Current apps lack sufficient evidence-based reliability, prompting the need for a systematic assessment. The objectives of this study are to evaluate wound assessment and monitoring apps, identify their limitations, and outline opportunities for future app development. An electronic search across two major app stores (Google Play Store and Apple App Store) was conducted, and the selected apps were rated by three independent raters. A total of 170 apps were discovered, and 10 were selected for review based on a set of inclusion and exclusion criteria. By modifying existing scales, we created an app rating scale for wound assessment apps and used it to evaluate the ten selected apps. Our rating scale evaluates apps' functionality and software quality characteristics. According to our evaluation, most apps in the app stores do not meet the overall requirements for wound monitoring and assessment, and all of the reviewed apps are aimed at practitioners and doctors. The app ImitoWound achieved the highest mean score of 4.24, yet it meets only 7 of our 11 functionality criteria. Finally, we recommend future opportunities to leverage advanced techniques, particularly those involving artificial intelligence, to enhance the functionality and efficacy of wound assessment apps. This research serves as a valuable resource for future developers and researchers seeking to improve the design of wound assessment applications, encompassing improvements in both software quality and functionality.
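Purely as an illustration of the rating-scale aggregation described above (not the authors' instrument or data), the sketch below averages per-criterion scores from several independent raters into an overall mean score per app; the app names, criteria, and score values are hypothetical placeholders.

```python
# Hypothetical sketch of aggregating multi-rater app scores into a mean rating,
# in the spirit of the rating scale described above. Names and numbers are
# illustrative placeholders, not the study's actual data.
import pandas as pd

# One row per (app, rater, criterion) with a 1-5 score.
scores = pd.DataFrame(
    {
        "app": ["ImitoWound"] * 4 + ["OtherApp"] * 4,
        "rater": ["R1", "R2", "R1", "R2"] * 2,
        "criterion": ["functionality", "functionality", "software_quality", "software_quality"] * 2,
        "score": [4.5, 4.0, 4.2, 4.3, 3.0, 3.2, 2.8, 3.1],
    }
)

# Mean score per app across raters and criteria (the overall rating).
overall = scores.groupby("app")["score"].mean().round(2)
print(overall)

# Mean per app and criterion group, useful for reporting functionality
# versus software-quality sub-scores separately.
per_dimension = scores.groupby(["app", "criterion"])["score"].mean().unstack()
print(per_dimension)
```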
Affiliation(s)
- Muhammad Ashad Kabir
- School of Computing, Mathematics and Engineering, Charles Sturt University, Bathurst, 2795, NSW, Australia.
- Sabiha Samad
- Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Chattogram, 4349, Chattogram, Bangladesh
- Fahmida Ahmed
- Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Chattogram, 4349, Chattogram, Bangladesh
- Samsun Naher
- Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Chattogram, 4349, Chattogram, Bangladesh
- Jill Featherston
- School of Medicine, Cardiff University, Cardiff, CF14 4YS, Wales, United Kingdom
- Craig Laird
- Principal Pedorthist, Walk Easy Pedorthics Pty. Ltd., Tamworth, 2340, NSW, Australia
- Sayed Ahmed
- Principal Pedorthist, Foot Balance Technology Pty Ltd, Westmead, 2145, NSW, Australia
- Offloading Clinic, Nepean Hospital, Kingswood, 2750, NSW, Australia
2. Castillo-Valdez PF, Rodriguez-Salvador M, Ho YS. Scientific Production Dynamics in mHealth for Diabetes: Scientometric Analysis. JMIR Diabetes 2024; 9:e52196. PMID: 39172508; PMCID: PMC11377915; DOI: 10.2196/52196.
Abstract
BACKGROUND The widespread use of mobile technologies in health care (mobile health; mHealth) has facilitated disease management, especially for chronic illnesses such as diabetes. mHealth for diabetes is an attractive alternative to reduce costs and overcome geographical and temporal barriers to improve patients' conditions. OBJECTIVE This study aims to reveal the dynamics of scientific publications on mHealth for diabetes to gain insights into who the most prominent authors, countries, institutions, and journals are and what the most cited documents and current hot spots are. METHODS A scientometric analysis based on a competitive technology intelligence methodology was conducted. An innovative 8-step methodology supported by experts was executed, considering scientific documents published between 1998 and 2021 in the Science Citation Index Expanded database. Publication language, publication output characteristics, journals, countries and institutions, authors, and the most cited and most impactful articles were identified. RESULTS A total of 1574 scientific articles were published by 7922 authors from 90 countries, with an average of 15 (SD 38) citations and 6.5 (SD 4.4) authors per article. These documents were published in 491 journals and 92 Web of Science categories. The most productive country was the United States, followed by the United Kingdom, China, Australia, and South Korea, and the top 3 most productive institutions came from the United States, whereas the top 3 most cited articles were published in 2016, 2009, and 2017 and the top 3 most impactful articles were published in 2016 and 2017. CONCLUSIONS This approach provides a comprehensive knowledge panorama of research productivity in mHealth for diabetes, identifying new insights and opportunities for research, development, and innovation, including collaboration with other entities, new areas of specialization, and human resource development. The findings are useful for decision-making in policy planning, resource allocation, and the identification of research opportunities, benefiting researchers, health professionals, and decision makers in their efforts to make significant contributions to the advancement of diabetes science.
3. Almufadi N, Alhasson HF. Classification of Diabetic Foot Ulcers from Images Using Machine Learning Approach. Diagnostics (Basel) 2024; 14:1807. PMID: 39202295; PMCID: PMC11353632; DOI: 10.3390/diagnostics14161807.
Abstract
Diabetic foot ulcers (DFUs) represent a significant and serious challenge associated with diabetes. It is estimated that approximately one third of individuals with diabetes will develop a DFU at some point in their lives. This common complication can lead to serious health issues if not properly managed, and early diagnosis and treatment of DFUs are crucial to prevent severe complications, including lower limb amputation. DFUs can be categorized into two states, ischemia and infection, and accurate classification is required to avoid misdiagnosis due to the similarities between these two states. This study aimed to develop an effective system for the binary classification of DFU states (ischemia and infection) by combining pre-trained convolutional neural network (CNN) models with machine learning classifiers. Several CNN models, namely EfficientNetB0, DenseNet121, ResNet101, VGG16, InceptionV3, MobileNetV2, and InceptionResNetV2, were pre-trained through transfer learning and evaluated with hyperparameter tuning, chosen for their excellent performance in diverse computer vision tasks. A proposed head model serves as the final decision-making component, using the features produced by the preceding layers to make predictions. The outputs of each CNN backbone with the proposed head model were then fed into different machine learning classifiers to determine which combinations are most effective. The best result for ischemia classification is a 97% accuracy rate, achieved by combining the proposed head model with the EfficientNetB0 model and feeding the outputs into a logistic regression classifier. The same EfficientNetB0 model with the proposed modifications attains 93% accuracy in infection classification when its outputs are fed to an AdaBoost classifier.
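As a hedged, generic illustration of the transfer-learning pattern this abstract describes (not the authors' code), the sketch below extracts pooled features with a frozen, ImageNet-pretrained EfficientNetB0 and trains a scikit-learn logistic regression on them; the image arrays and labels are random placeholders standing in for real DFU patches.

```python
# Illustrative transfer-learning pattern: a frozen EfficientNetB0 backbone as a
# feature extractor, followed by a classical classifier on the pooled features.
# This is a generic sketch, not the authors' implementation.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Frozen ImageNet backbone with global average pooling as the feature extractor.
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(224, 224, 3)
)
backbone.trainable = False

def extract_features(images: np.ndarray) -> np.ndarray:
    """Map a batch of RGB images (N, 224, 224, 3) to pooled feature vectors."""
    x = tf.keras.applications.efficientnet.preprocess_input(images.astype("float32"))
    return backbone.predict(x, verbose=0)

# X: image array, y: binary labels (e.g., ischemia vs. non-ischemia); placeholders here.
X = np.random.randint(0, 255, size=(32, 224, 224, 3))  # stand-in for real DFU patches
y = np.random.randint(0, 2, size=32)                    # stand-in for real labels

features = extract_features(X)
X_train, X_test, y_train, y_test = train_test_split(features, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```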
Affiliation(s)
- Haifa F. Alhasson
- Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
4. Sheng B, Pushpanathan K, Guan Z, Lim QH, Lim ZW, Yew SME, Goh JHL, Bee YM, Sabanayagam C, Sevdalis N, Lim CC, Lim CT, Shaw J, Jia W, Ekinci EI, Simó R, Lim LL, Li H, Tham YC. Artificial intelligence for diabetes care: current and future prospects. Lancet Diabetes Endocrinol 2024; 12:569-595. PMID: 39054035; DOI: 10.1016/S2213-8587(24)00154-2.
Abstract
Artificial intelligence (AI) use in diabetes care is increasingly being explored to personalise care for people with diabetes and to adapt treatments for complex presentations. However, the rapid advancement of AI also introduces challenges such as potential biases, ethical considerations, and implementation hurdles in ensuring that its deployment is equitable. Ensuring inclusive and ethical development of AI technology can empower both health-care providers and people with diabetes in managing the condition. In this Review, we explore and summarise the current and future prospects of AI across the diabetes care continuum, from enhancing screening and diagnosis to optimising treatment and predicting and managing complications.
Affiliation(s)
- Bin Sheng
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China; Key Laboratory of Artificial Intelligence, Ministry of Education, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Krithi Pushpanathan
- Centre of Innovation and Precision Eye Health, Department of Ophthalmology, National University of Singapore, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Zhouyu Guan
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Quan Hziung Lim
- Department of Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Zhi Wei Lim
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Samantha Min Er Yew
- Centre of Innovation and Precision Eye Health, Department of Ophthalmology, National University of Singapore, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Yong Mong Bee
- Department of Endocrinology, Singapore General Hospital, Singapore; SingHealth Duke-National University of Singapore Diabetes Centre, Singapore Health Services, Singapore
- Charumathi Sabanayagam
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Nick Sevdalis
- Centre for Behavioural and Implementation Science Interventions, National University of Singapore, Singapore
- Chwee Teck Lim
- Department of Biomedical Engineering, National University of Singapore, Singapore; Institute for Health Innovation and Technology, National University of Singapore, Singapore; Mechanobiology Institute, National University of Singapore, Singapore
- Jonathan Shaw
- Baker Heart and Diabetes Institute, Melbourne, VIC, Australia
- Weiping Jia
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China
- Elif Ilhan Ekinci
- Australian Centre for Accelerating Diabetes Innovations, Melbourne Medical School and Department of Medicine, University of Melbourne, Melbourne, VIC, Australia; Department of Endocrinology, Austin Health, Melbourne, VIC, Australia
- Rafael Simó
- Diabetes and Metabolism Research Unit, Vall d'Hebron University Hospital and Vall d'Hebron Research Institute, Barcelona, Spain; Centro de Investigación Biomédica en Red de Diabetes y Enfermedades Metabólicas Asociadas, Instituto de Salud Carlos III, Madrid, Spain
- Lee-Ling Lim
- Department of Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia; Department of Medicine and Therapeutics, Chinese University of Hong Kong, Hong Kong Special Administrative Region, China; Asia Diabetes Foundation, Hong Kong Special Administrative Region, China
- Huating Li
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai, China.
- Yih-Chung Tham
- Centre of Innovation and Precision Eye Health, Department of Ophthalmology, National University of Singapore, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore.
5. Zhao N, Yu L, Fu X, Dai W, Han H, Bai J, Xu J, Hu J, Zhou Q. Application of a Diabetic Foot Smart APP in the measurement of diabetic foot ulcers. Int J Orthop Trauma Nurs 2024; 54:101095. PMID: 38599150; DOI: 10.1016/j.ijotn.2024.101095.
Abstract
AIMS In an earlier stage of this work, we developed an intelligent measurement app for diabetic foot ulcers, named Diabetic Foot Smart APP. This study aimed to validate the APP for measuring the ulcer area of diabetic foot ulcers (DFUs). METHODS We selected 150 DFU images and measured the ulcer areas using three assessment tools: the Smart APP software, the ruler method, and the gold-standard ImageJ software, and compared the measurement results and measurement times of the three tools. Intra-rater and inter-rater reliability were described by the Pearson correlation coefficient, the intraclass correlation coefficient, and the coefficient of variation. RESULTS ImageJ showed a median ulcer area of 4.02 cm2, with a mean measurement time of 66.37 ± 7.95 s. The ruler method showed a median ulcer area of 5.14 cm2, with a mean measurement time of 171.47 ± 46.43 s. The APP software showed a median ulcer area of 3.70 cm2, with a mean measurement time of 38.25 ± 6.81 s. There was a significant difference between the ruler method and the gold-standard ImageJ software (Z = -4.123, p < 0.05), but no significant difference between the APP software and ImageJ (Z = 1.103, p > 0.05). The APP software also showed good inter-rater and intra-rater reliability, with both reaching 0.99. CONCLUSION The Diabetic Foot Smart APP is a fast and reliable measurement tool with high accuracy that can easily be used in clinical practice to measure the ulcer area of DFUs. TRIAL REGISTRATION Chinese clinical trial registration number: ChiCTR2100047210.
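The following sketch illustrates, with made-up numbers, the style of agreement analysis reported above: a Wilcoxon signed-rank test between paired area measurements from two tools, plus a Pearson correlation and coefficients of variation as simple reliability summaries. It is not the study's data or analysis code.

```python
# Illustrative sketch of the kind of agreement analysis described above:
# a paired non-parametric comparison of areas measured by two tools, plus
# simple reliability summaries (Pearson correlation and coefficient of
# variation). The numbers below are made-up placeholders, not study data.
import numpy as np
from scipy import stats

# Paired ulcer-area measurements (cm^2) of the same wounds from two tools.
area_imagej = np.array([4.0, 5.2, 3.1, 6.8, 2.4, 7.5, 4.9, 3.6])
area_app    = np.array([3.8, 5.0, 3.3, 6.5, 2.5, 7.2, 4.7, 3.7])

stat, p = stats.wilcoxon(area_imagej, area_app)   # paired, non-parametric test
print(f"Wilcoxon statistic = {stat:.2f}, p = {p:.3f}")

# Inter-rater agreement for one tool: two raters measuring the same wounds.
rater1 = np.array([3.9, 5.1, 3.2, 6.6, 2.5, 7.3, 4.8, 3.6])
rater2 = np.array([3.8, 5.0, 3.3, 6.5, 2.4, 7.2, 4.7, 3.7])
r, _ = stats.pearsonr(rater1, rater2)
print(f"Pearson r between raters = {r:.3f}")

# Coefficient of variation of each rater's measurements.
for name, vals in (("rater 1", rater1), ("rater 2", rater2)):
    print(f"CV {name} = {np.std(vals, ddof=1) / np.mean(vals):.1%}")
```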
Affiliation(s)
- Nan Zhao
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China; Zhengzhou Shuqing Medical College, Henan, 450052, China
- Ling Yu
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China
- Xiaoai Fu
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China
- Weiwei Dai
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China; Department of Stoma Wound Care Center, Xiangya Hospital, Central South University, Changsha, 410008, China
- Huiwu Han
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China; Department of Nursing, Xiangya Hospital, Central South University, Changsha, 410008, China
- Jiaojiao Bai
- Department of Nursing, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Jingcan Xu
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China; Department of Nursing, Xiangya Hospital, Central South University, Changsha, 410008, China
- Jianzhong Hu
- Xiangya Hospital, Central South University, Changsha, 410008, China
- Qiuhong Zhou
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, 410008, China.
6. Aman A, Bhunia M, Mukhopadhyay S, Gupta R. Machine learning assisted classification between diabetic polyneuropathy and healthy subjects using plantar pressure and temperature data: a feasibility study. Comput Methods Biomech Biomed Engin 2024:1-12. PMID: 38826026; DOI: 10.1080/10255842.2024.2359041.
Abstract
Automated and early detection of diabetic polyneuropathy in an ambulatory health monitoring setup may reduce major risk factors for diabetic patients. Increased, localized plantar pressure combined with impaired pain and temperature sensation contributes to the development of foot ulcers in subjects with polyneuropathy. Although many interesting research works have been reported in this area, most of them emphasize the signal acquisition process and the plantar pressure distribution in the foot region. In this work, a machine learning assisted, low-complexity technique was developed that uses plantar pressure and temperature signals to classify diabetic polyneuropathy patients versus healthy subjects. Principal component analysis (PCA) and maximum relevance minimum redundancy (mRMR) methods were used for feature extraction and selection, respectively, followed by a k-NN classifier for binary classification. The proposed technique was evaluated with 100 min of publicly available annotated data from 43 subjects and provides a blind-test accuracy, sensitivity, precision, F1-score, and area under the curve (AUC) of 99.58%, 99.50%, 99.44%, 99.47%, and 99.56%, respectively. A low-resource hardware implementation on an ARM v6 controller required an average memory usage of 81.2 kB and a latency of 1.31 s to process 9 s of pressure and temperature data collected from 16 sensor channels for each foot region.
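A minimal scikit-learn sketch of the pipeline this abstract describes: PCA for feature extraction, a feature-selection step, and a k-NN classifier. Since scikit-learn has no built-in mRMR, mutual-information-based selection is used here as a simplified stand-in, and the synthetic arrays are placeholders for real plantar pressure and temperature features.

```python
# Generic sketch of the pipeline described above: feature extraction with PCA,
# feature selection, and k-NN classification. scikit-learn has no built-in mRMR,
# so mutual-information-based selection is used here as a simplified stand-in.
# The synthetic data below is a placeholder for real pressure/temperature features.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64))          # e.g., statistical features from 16 channels
y = rng.integers(0, 2, size=120)        # 0 = healthy, 1 = diabetic polyneuropathy

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=20)),
    ("select", SelectKBest(mutual_info_classif, k=10)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])

scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```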
Affiliation(s)
- Ayush Aman
- Institute of Radio Physics and Electronics, University of Calcutta, Kolkata, India
- Mousam Bhunia
- Institute of Radio Physics and Electronics, University of Calcutta, Kolkata, India
- Sumitra Mukhopadhyay
- Institute of Radio Physics and Electronics, University of Calcutta, Kolkata, India
- Rajarshi Gupta
- Electrical Engineering, Department of Applied Physics, University of Calcutta, Kolkata, India
7. Rippon MG, Fleming L, Chen T, Rogers AA, Ousey K. Artificial intelligence in wound care: diagnosis, assessment and treatment of hard-to-heal wounds: a narrative review. J Wound Care 2024; 33:229-242. PMID: 38573907; DOI: 10.12968/jowc.2024.33.4.229.
Abstract
OBJECTIVE The effective assessment of wounds, both acute and hard-to-heal, is an important component in the delivery of efficacious wound care by wound care practitioners. Improved wound diagnosis, optimised wound treatment regimens, and enhanced wound prevention help to provide patients with a better quality of life (QoL). There is significant potential for the use of artificial intelligence (AI) in health-related areas such as wound care. However, AI-based systems have yet to be developed to the point where they can be used clinically to deliver high-quality wound care. We have carried out a narrative review of the development and use of AI in the diagnosis, assessment and treatment of hard-to-heal wounds. We retrieved 145 articles from several online databases and other online resources, and 81 of them were included in this narrative review. Our review shows that AI application in wound care offers benefits in the assessment/diagnosis, monitoring and treatment of acute and hard-to-heal wounds. As well as offering patients the potential of improved QoL, AI may also enable better use of healthcare resources.
Affiliation(s)
- Mark G Rippon
- University of Huddersfield, Huddersfield, UK
- Daneriver Consultancy Ltd, Holmes Chapel, UK
- Leigh Fleming
- School of Computing and Engineering, University of Huddersfield, Huddersfield, UK
- Tianhua Chen
- School of Computing and Engineering, University of Huddersfield, Huddersfield, UK
- Karen Ousey
- University of Huddersfield Department of Nursing and Midwifery, Huddersfield, UK
- Adjunct Professor, School of Nursing, Faculty of Health at the Queensland University of Technology, Australia
- Visiting Professor, Royal College of Surgeons in Ireland, Dublin, Ireland
- Chair, International Wound Infection Institute
- President Elect, International Skin Tear Advisory Panel
8. Patel Y, Shah T, Dhar MK, Zhang T, Niezgoda J, Gopalakrishnan S, Yu Z. Integrated image and location analysis for wound classification: a deep learning approach. Sci Rep 2024; 14:7043. PMID: 38528003; DOI: 10.1038/s41598-024-56626-w.
Abstract
The global burden of acute and chronic wounds presents a compelling case for enhancing wound classification methods, a vital step in diagnosing and determining optimal treatments. Recognizing this need, we introduce an innovative multi-modal network based on a deep convolutional neural network for categorizing wounds into four categories: diabetic, pressure, surgical, and venous ulcers. Our multi-modal network uses wound images and their corresponding body locations for more precise classification. A unique aspect of our methodology is incorporating a body map system that facilitates accurate wound location tagging, improving upon traditional wound image classification techniques. A distinctive feature of our approach is the integration of models such as VGG16, ResNet152, and EfficientNet within a novel architecture. This architecture includes elements like spatial and channel-wise Squeeze-and-Excitation modules, Axial Attention, and an Adaptive Gated Multi-Layer Perceptron, providing a robust foundation for classification. Our multi-modal network was trained and evaluated on two distinct datasets comprising relevant images and corresponding location information. Notably, our proposed network outperformed traditional methods, reaching an accuracy range of 74.79-100% for Region of Interest (ROI) without location classifications, 73.98-100% for ROI with location classifications, and 78.10-100% for whole image classifications. This marks a significant enhancement over previously reported performance metrics in the literature. Our results indicate the potential of our multi-modal network as an effective decision-support tool for wound image classification, paving the way for its application in various clinical contexts.
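As a conceptual sketch of the multi-modal idea described above (not the published architecture, which additionally uses Squeeze-and-Excitation, Axial Attention, and an Adaptive Gated MLP), the code below fuses a CNN image embedding with an embedding of a one-hot body-map location before classification; the backbone choice, location vocabulary size, and class count are assumptions.

```python
# Conceptual sketch (not the authors' architecture) of a multi-modal wound
# classifier that fuses image features with a body-location encoding, as
# described above. Class count and location vocabulary size are placeholders.
import torch
import torch.nn as nn
import torchvision.models as models

class ImageLocationFusionNet(nn.Module):
    def __init__(self, num_locations: int = 30, num_classes: int = 4):
        super().__init__()
        # Image branch: a small ResNet backbone producing a 512-d feature vector.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()
        self.image_branch = backbone
        # Location branch: embed the one-hot body-map location into 64 dims.
        self.location_branch = nn.Sequential(
            nn.Linear(num_locations, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU()
        )
        # Fusion head: concatenate both modalities and classify.
        self.classifier = nn.Sequential(
            nn.Linear(512 + 64, 128), nn.ReLU(), nn.Dropout(0.3), nn.Linear(128, num_classes)
        )

    def forward(self, image: torch.Tensor, location_onehot: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_branch(image)                # (B, 512)
        loc_feat = self.location_branch(location_onehot)   # (B, 64)
        fused = torch.cat([img_feat, loc_feat], dim=1)
        return self.classifier(fused)

# Smoke test with dummy inputs.
model = ImageLocationFusionNet()
logits = model(torch.randn(2, 3, 224, 224), torch.zeros(2, 30))
print(logits.shape)  # torch.Size([2, 4])
```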
Affiliation(s)
- Yash Patel
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
- Tirth Shah
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
- Mrinal Kanti Dhar
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
- Taiyu Zhang
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
- Jeffrey Niezgoda
- Advancing the Zenith of Healthcare (AZH) Wound and Vascular Center, Milwaukee, WI, USA
- Zeyun Yu
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA.
- Department of Biomedical Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI, USA.
9. Gade A, Vijaya Baskar V, Panneerselvam J. Exhaled breath signal analysis for diabetes detection: an optimized deep learning approach. Comput Methods Biomech Biomed Engin 2024; 27:443-458. PMID: 38062773; DOI: 10.1080/10255842.2023.2289344.
Abstract
In this study, a flexible deep learning system for breath analysis is created using an optimal hybrid deep learning model. To improve the quality of the gathered breath signals, the raw data are first pre-processed. Then, the most relevant features, such as improved IMFCC, bark frequency cepstral coefficients (BFCC), DWT features, peak detection, QT intervals, and PR intervals, are extracted. Using these features, the hybrid classifier built into the diabetes detection phase is trained. The detection phase is modelled with an optimized DBN and a Bi-GRU model. To enhance the detection accuracy of the proposed model, the weight function of the DBN is fine-tuned with the newly projected Sine Customized by Marine Predators (SCMP) model, which conceptually blends the standard MPA and SCA models. The final outcomes from the optimized DBN and the Bi-GRU are combined to obtain the ultimate detection result. Further, to validate the efficiency of the projected model, a comparative evaluation was carried out. Accordingly, the accuracy of the proposed model is above 98%, and it is 54.6%, 56.9%, 56.95%, 44.55%, 57%, 56.95%, 18.2%, and 56.9% higher than that of traditional models such as CNN + LSTM, CNN + LSTM, CNN, LSTM, RNN, SVM, RF, and DBN, respectively, at the 60th learning percentage.
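To make the Bi-GRU branch of the described hybrid concrete, here is a minimal PyTorch sketch of a bidirectional GRU classifier over frame-level breath-signal features; the DBN branch, the SCMP weight tuning, and the score fusion are omitted, and all dimensions are assumptions rather than values from the paper.

```python
# Minimal sketch of a bidirectional GRU classifier over per-frame breath-signal
# features, illustrating the Bi-GRU branch of the hybrid model described above.
# Feature dimensions and the DBN branch / metaheuristic tuning are omitted.
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    def __init__(self, n_features: int = 40, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=1,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # 2x for the two directions

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) sequence of frame-level features.
        out, _ = self.gru(x)
        pooled = out.mean(dim=1)      # average over time steps
        return self.head(pooled)

model = BiGRUClassifier()
dummy = torch.randn(4, 100, 40)       # 4 recordings, 100 frames, 40 features each
print(model(dummy).shape)             # torch.Size([4, 2])
```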
Affiliation(s)
- Anita Gade
- Department of Electronics Engineering, Sathyabama Institute of Science and Technology, Chennai, India
- V Vijaya Baskar
- Department of Electronics and Communication Engineering, Sathyabama Institute of Science and Technology, Chennai, India
- John Panneerselvam
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, United Kingdom
10. Khosa I, Raza A, Anjum M, Ahmad W, Shahab S. Automatic Diabetic Foot Ulcer Recognition Using Multi-Level Thermographic Image Data. Diagnostics (Basel) 2023; 13:2637. PMID: 37627896; PMCID: PMC10453276; DOI: 10.3390/diagnostics13162637.
Abstract
Lower extremity diabetic foot ulcers (DFUs) are a severe consequence of diabetes mellitus (DM). It has been estimated that people with diabetes have a 15% to 25% lifetime risk of acquiring a DFU, which, due to poor diagnosis and treatment, carries a risk of lower limb amputation of up to 85%. The diabetic foot develops plantar ulcers, and thermography is used to detect changes in plantar temperature. In this study, publicly available thermographic image data including both control-group and diabetic-group patients are used. Thermograms at the image level as well as the patch level are utilized for DFU detection. For DFU recognition, several machine-learning-based classification approaches are employed with hand-crafted features. Moreover, two convolutional neural network models, ResNet50 and DenseNet121, are evaluated for DFU recognition. Finally, a custom-developed CNN-based model is proposed for the recognition task. The results are produced using image-level data, patch-level data, and combined image-patch data. The proposed CNN-based model outperformed the utilized models as well as the state-of-the-art models in terms of AUC and accuracy. Moreover, the recognition accuracy of both the machine-learning and deep-learning approaches was higher for the image-level thermogram data than for the patch-level or combined image-patch thermograms.
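The sketch below shows a small custom CNN of the kind the abstract refers to, applied to single-channel thermograms for a binary control-vs-diabetic decision; the layer sizes and input resolution are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a small custom CNN for binary thermogram classification (DFU risk
# vs. control), in the spirit of the custom model described above. The layer
# sizes are illustrative and not taken from the paper.
import torch
import torch.nn as nn

class ThermogramCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: single-channel thermograms, e.g. (B, 1, 224, 224)
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

model = ThermogramCNN()
print(model(torch.randn(2, 1, 224, 224)).shape)  # torch.Size([2, 2])
```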
Affiliation(s)
- Ikramullah Khosa
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Lahore Campus, Lahore 54000, Pakistan
- Awais Raza
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Lahore Campus, Lahore 54000, Pakistan
- Mohd Anjum
- Department of Computer Engineering, Aligarh Muslim University, Aligarh 202002, India
- Waseem Ahmad
- Department of Computer Science and Engineering, Meerut Institute of Engineering and Technology, Meerut 250005, India
- Sana Shahab
- Department of Business Administration, College of Business Administration, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
11. Chun JW, Kim HS. The Present and Future of Artificial Intelligence-Based Medical Image in Diabetes Mellitus: Focus on Analytical Methods and Limitations of Clinical Use. J Korean Med Sci 2023; 38:e253. PMID: 37550811; PMCID: PMC10412032; DOI: 10.3346/jkms.2023.38.e253.
Abstract
Artificial intelligence (AI)-based diagnostic technology using medical images can be used to increase examination accessibility and support clinical decision-making for screening and diagnosis. To identify machine learning algorithms for diabetes complications, a literature review of studies using medical image-based AI technology was conducted using the National Library of Medicine PubMed and Excerpta Medica databases. Lists of studies retrieved using diabetes diagnostic images and AI as keywords were combined. In total, 227 appropriate studies were selected. Diabetic retinopathy studies using an AI model were the most frequent (85.0%, 193/227 cases), followed by diabetic foot (7.9%, 18/227 cases) and diabetic neuropathy (2.7%, 6/227 cases). The studies used open datasets (42.3%, 96/227 cases) or data constructed directly from fundoscopy or optical coherence tomography (57.7%, 131/227 cases). The major limitations in AI-based detection of diabetes complications using medical images were the lack of datasets (36.1%, 82/227 cases) and severity misclassification (26.4%, 60/227 cases). Although it remains difficult to use and fully trust AI-based imaging analysis technology clinically, it reduces clinicians' time and labor, and the expectations for its decision-support roles are high. Various developments in data collection and synthetic data technology according to disease severity are required to solve the data imbalance.
Affiliation(s)
- Ji-Won Chun
- Department of Medical Informatics, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Hun-Sung Kim
- Department of Medical Informatics, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Division of Endocrinology and Metabolism, Department of Internal Medicine, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea.
12. Du H, Yao MMS, Liu S, Chen L, Chan WP, Feng M. Automatic Calcification Morphology and Distribution Classification for Breast Mammograms With Multi-Task Graph Convolutional Neural Network. IEEE J Biomed Health Inform 2023; 27:3782-3793. PMID: 37027577; DOI: 10.1109/JBHI.2023.3249404.
Abstract
The morphology and distribution of microcalcifications are the most important descriptors for radiologists to diagnose breast cancer based on mammograms. However, it is very challenging and time-consuming for radiologists to characterize these descriptors manually, and effective, automatic solutions for this problem are still lacking. We observed that the distribution and morphology descriptors are determined by radiologists based on the spatial and visual relationships among calcifications. Thus, we hypothesize that this information can be effectively modelled by learning a relationship-aware representation using graph convolutional networks (GCNs). In this study, we propose a multi-task deep GCN method for automatic characterization of both the morphology and distribution of microcalcifications in mammograms. Our proposed method transforms morphology and distribution characterization into node and graph classification problems and learns the representations concurrently. We trained and validated the proposed method on an in-house dataset and the public DDSM dataset with 195 and 583 cases, respectively. The proposed method achieves good and stable results, with distribution AUCs of 0.812 ± 0.043 and 0.873 ± 0.019 and morphology AUCs of 0.663 ± 0.016 and 0.700 ± 0.044 on the in-house and public datasets, respectively. In both datasets, our proposed method demonstrates statistically significant improvements compared to the baseline models. The performance improvements brought by our proposed multi-task mechanism can be attributed to the association between the distribution and morphology of calcifications in mammograms, which is interpretable using graphical visualizations and consistent with the definitions of descriptors in the standard BI-RADS guideline. In short, we explore, for the first time, the application of GCNs in microcalcification characterization, which suggests the potential of using graph learning for a more robust understanding of medical images.
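For readers unfamiliar with the multi-task graph formulation sketched above, the following from-scratch example pairs one shared GCN trunk with a node-level head and a graph-level head (mean pooling over nodes); it is a simplified teaching sketch, not the authors' network, and all dimensions are assumptions.

```python
# From-scratch sketch of a graph convolution with two task heads (node-level and
# graph-level classification), illustrating the multi-task idea described above.
# This is a simplified educational example, not the authors' network.
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One GCN layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_hat = adj + torch.eye(adj.size(0))            # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt        # symmetric normalization
        return torch.relu(self.linear(a_norm @ h))

class MultiTaskGCN(nn.Module):
    def __init__(self, in_dim=16, hidden=32, n_node_classes=4, n_graph_classes=5):
        super().__init__()
        self.gc1 = SimpleGCNLayer(in_dim, hidden)
        self.gc2 = SimpleGCNLayer(hidden, hidden)
        self.node_head = nn.Linear(hidden, n_node_classes)    # e.g., morphology per calcification
        self.graph_head = nn.Linear(hidden, n_graph_classes)  # e.g., distribution per mammogram

    def forward(self, x, adj):
        h = self.gc2(self.gc1(x, adj), adj)
        node_logits = self.node_head(h)                 # (num_nodes, n_node_classes)
        graph_logits = self.graph_head(h.mean(dim=0))   # mean-pool nodes -> (n_graph_classes,)
        return node_logits, graph_logits

# Smoke test: 10 calcifications with 16-d features and a random symmetric adjacency.
x = torch.randn(10, 16)
adj = (torch.rand(10, 10) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()
node_logits, graph_logits = MultiTaskGCN()(x, adj)
print(node_logits.shape, graph_logits.shape)
```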
13. Mostafa Abd El-Aal El-Kady A, Mostafa M, Hamdy Ali Hussien H, Ali Moussa F. Comparative Analysis: Deep vs. Machine Learning for Early DFU Detection in Medical Imaging. 2023 Intelligent Methods, Systems, and Applications (IMSA), 2023. DOI: 10.1109/IMSA58542.2023.10217437.
Affiliation(s)
- Mohamed Mostafa
- Beni-Suef University, Faculty of Computers & Artificial Intelligence, Information Technology Dept., Beni-Suef, Egypt
- Heba Hamdy Ali Hussien
- Beni-Suef University, Faculty of Computers & Artificial Intelligence, Multimedia Dept. (Assistant Professor), Beni-Suef, Egypt
- Farid Ali Moussa
- Beni-Suef University, Faculty of Computers & Artificial Intelligence, Information Technology Dept., Beni-Suef, Egypt
14. Dabas M, Schwartz D, Beeckman D, Gefen A. Application of Artificial Intelligence Methodologies to Chronic Wound Care and Management: A Scoping Review. Adv Wound Care (New Rochelle) 2023; 12:205-240. PMID: 35438547; DOI: 10.1089/wound.2021.0144.
Abstract
Significance: As the number of hard-to-heal wound cases rises with the aging of the population and the spread of chronic diseases, health care professionals struggle to provide safe and effective care to all their patients simultaneously. This study aimed at providing an in-depth overview of the relevant methodologies of artificial intelligence (AI) and their potential implementation to support these growing needs of wound care and management. Recent Advances: MEDLINE, Compendex, Scopus, Web of Science, and IEEE databases were all searched for new AI methods or novel uses of existing AI methods for the diagnosis or management of hard-to-heal wounds. We only included English peer-reviewed original articles, conference proceedings, published patent applications, or granted patents (not older than 2010) where the performance of the utilized AI algorithms was reported. Based on these criteria, a total of 75 studies were eligible for inclusion. These varied by the type of AI methodology utilized, the wound type, the medical record/database configuration, and the research goal. Critical Issues: AI methodologies appear to have a strong positive impact and prospects in the wound care and management arena. Another important development that emerged from the findings is AI-based remote consultation systems utilizing smartphones and tablets for data collection and connectivity. Future Directions: The implementation of machine-learning algorithms in the diagnosis and management of hard-to-heal wounds is a promising approach for improving the wound care delivered to hospitalized patients, while allowing health care professionals to manage their working time more efficiently.
Affiliation(s)
- Mai Dabas
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel
- Dafna Schwartz
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel
- Dimitri Beeckman
- Skin Integrity Research Group (SKINT), University Centre for Nursing and Midwifery, Department of Public Health, Ghent University, Ghent, Belgium; Swedish Centre for Skin and Wound Research, School of Health Sciences, Örebro University, Örebro, Sweden
- Amit Gefen
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel; The Herbert J. Berman Chair in Vascular Bioengineering, Tel Aviv University, Tel Aviv, Israel
15. Kairys A, Pauliukiene R, Raudonis V, Ceponis J. Towards Home-Based Diabetic Foot Ulcer Monitoring: A Systematic Review. Sensors (Basel) 2023; 23:3618. PMID: 37050678; PMCID: PMC10099334; DOI: 10.3390/s23073618.
Abstract
It is estimated that 1 in 10 adults worldwide have diabetes. Diabetic foot ulcers are some of the most common complications of diabetes, and they are associated with a high risk of lower-limb amputation and, as a result, reduced life expectancy. Timely detection and periodic ulcer monitoring can considerably decrease amputation rates. Recent research has demonstrated that computer vision can be used to identify foot ulcers and perform non-contact telemetry by using ulcer and tissue area segmentation. However, the applications are limited to controlled lighting conditions, and expert knowledge is required for dataset annotation. This paper reviews the latest publications on the use of artificial intelligence for ulcer area detection and segmentation. The PRISMA methodology was used to search for and select articles, and the selected articles were reviewed to collect quantitative and qualitative data. Qualitative data were used to describe the methodologies used in individual studies, while quantitative data were used for generalization in terms of dataset preparation and feature extraction. Publicly available datasets were accounted for, and methods for preprocessing, augmentation, and feature extraction were evaluated. It was concluded that public datasets can be combined to form bigger, more diverse datasets, and that the prospects of wider image preprocessing and the adoption of augmentation require further research.
Affiliation(s)
- Arturas Kairys
- Automation Department, Electrical and Electronics Faculty, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Renata Pauliukiene
- Department of Endocrinology, Lithuanian University of Health Sciences, 50161 Kaunas, Lithuania
- Vidas Raudonis
- Automation Department, Electrical and Electronics Faculty, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Jonas Ceponis
- Institute of Endocrinology, Lithuanian University of Health Sciences, 44307 Kaunas, Lithuania
16. GLAN: GAN Assisted Lightweight Attention Network for Biomedical Imaging Based Diagnostics. Cognit Comput 2023. DOI: 10.1007/s12559-023-10131-w.
17. Biglari A, Tang W. A Review of Embedded Machine Learning Based on Hardware, Application, and Sensing Scheme. Sensors (Basel) 2023; 23:2131. PMID: 36850729; PMCID: PMC9959746; DOI: 10.3390/s23042131.
Abstract
Machine learning is an expanding field with an ever-increasing role in everyday life, with its utility in the industrial, agricultural, and medical sectors being undeniable. Recently, this utility has come in the form of machine learning implementation on embedded system devices. While there have been steady advances in the performance, memory, and power consumption of embedded devices, most machine learning algorithms still have a very high power consumption and computational demand, making the implementation of embedded machine learning somewhat difficult. However, different devices can be implemented for different applications based on their overall processing power and performance. This paper presents an overview of several different implementations of machine learning on embedded systems, divided by their specific device, application, specific machine learning algorithm, and sensors. We will mainly focus on NVIDIA Jetson and Raspberry Pi devices, along with a few less commonly utilized embedded computers, as well as which of these devices were more commonly used for specific applications in different fields. We will also briefly analyze the specific ML models most commonly implemented on the devices and the specific sensors that were used to gather input from the field. All of the papers included in this review were selected using Google Scholar and the IEEE Xplore database. The selection criterion for these papers was the usage of embedded computing systems in either a theoretical study or a practical implementation of machine learning models. The papers needed to have provided either one or, preferably, all of the following results in their studies: the overall accuracy of the models on the system, the overall power consumption of the embedded machine learning system, and the inference time of their models on the embedded system. Embedded machine learning is experiencing an explosion in both scale and scope, due to advances in system performance and machine learning models as well as the greater affordability and accessibility of both. Improvements are noted in quality, power usage, and effectiveness.
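Of the three results the review asks embedded-ML studies to report, inference time is the simplest to measure; the sketch below shows a generic warm-up-then-average timing loop around a tiny placeholder model. The model, input size, and iteration counts are assumptions, and on a real Jetson or Raspberry Pi the same loop would simply wrap the deployed model.

```python
# Generic sketch of measuring the inference latency of a small model, one of the
# metrics the review above expects embedded-ML studies to report. The model and
# input sizes are placeholders; on a real embedded board the same timing loop applies.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
x = torch.randn(1, 64)

with torch.no_grad():
    # Warm-up iterations so one-off setup costs do not skew the measurement.
    for _ in range(10):
        model(x)

    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    elapsed = time.perf_counter() - start

print(f"mean inference latency: {1000 * elapsed / runs:.3f} ms per sample")
```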
18. Hernandez-Guedes A, Arteaga-Marrero N, Villa E, Callico GM, Ruiz-Alzola J. Feature Ranking by Variational Dropout for Classification Using Thermograms from Diabetic Foot Ulcers. Sensors (Basel) 2023; 23:757. PMID: 36679552; PMCID: PMC9867159; DOI: 10.3390/s23020757.
Abstract
Diabetes mellitus presents a high prevalence around the world. A common and long-term derived complication is diabetic foot ulcers (DFUs), which have a global prevalence of roughly 6.3%, and a lifetime incidence of up to 34%. Infrared thermograms, covering the entire plantar aspect of both feet, can be employed to monitor the risk of developing a foot ulcer, because diabetic patients exhibit an abnormal pattern that may indicate a foot disorder. In this study, the publicly available INAOE dataset composed of thermogram images of healthy and diabetic subjects was employed to extract relevant features aiming to establish a set of state-of-the-art features that efficiently classify DFU. This database was extended and balanced by fusing it with private local thermograms from healthy volunteers and generating synthetic data via synthetic minority oversampling technique (SMOTE). State-of-the-art features were extracted using two classical approaches, LASSO and random forest, as well as two variational deep learning (DL)-based ones: concrete and variational dropout. Then, the most relevant features were detected and ranked. Subsequently, the extracted features were employed to classify subjects at risk of developing an ulcer using as reference a support vector machine (SVM) classifier with a fixed hyperparameter configuration to evaluate the robustness of the selected features. The new set of features extracted considerably differed from those currently considered state-of-the-art but provided a fair performance. Among the implemented extraction approaches, the variational DL ones, particularly the concrete dropout, performed the best, reporting an F1 score of 90% using the aforementioned SVM classifier. In comparison with features previously considered as the state-of-the-art, approximately 15% better performance was achieved for classification.
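As a simplified, classical-only sketch of the workflow described above (class balancing with SMOTE, feature ranking, and evaluation with a fixed-hyperparameter SVM), the code below uses random-forest importances in place of the paper's LASSO and variational-dropout rankers; the data are synthetic placeholders for thermogram-derived features.

```python
# Simplified sketch of the classical branch of the pipeline described above:
# balance the classes with SMOTE, rank features with a random forest, keep the
# top-ranked ones, and evaluate them with a fixed-hyperparameter SVM. The
# synthetic data stands in for real thermogram-derived features.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                      # placeholder thermogram features
y = np.array([0] * 150 + [1] * 50)                  # imbalanced: 1 = at-risk (diabetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class only on the training split.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)

# Rank features by random-forest importance and keep the top 15.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
top_idx = np.argsort(rf.feature_importances_)[::-1][:15]

# Fixed-configuration SVM used only to assess the selected feature subset.
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_bal[:, top_idx], y_bal)
pred = svm.predict(X_test[:, top_idx])
print("F1 score on held-out data:", round(f1_score(y_test, pred), 3))
```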
Affiliation(s)
- Abian Hernandez-Guedes
- Instituto Universitario de Investigaciones Biomédicas y Sanitarias (IUIBS), Universidad de Las Palmas de Gran Canaria, 35016 Las Palmas de Gran Canaria, Spain
- Instituto Universitario de Microelectrónica Aplicada (IUMA), Universidad de Las Palmas de Gran Canaria, 35017 Las Palmas de Gran Canaria, Spain
- Natalia Arteaga-Marrero
- Grupo Tecnología Médica IACTEC, Instituto de Astrofísica de Canarias (IAC), 38205 San Cristóbal de La Laguna, Spain
- Enrique Villa
- Grupo Tecnología Médica IACTEC, Instituto de Astrofísica de Canarias (IAC), 38205 San Cristóbal de La Laguna, Spain
- Gustavo M. Callico
- Instituto Universitario de Microelectrónica Aplicada (IUMA), Universidad de Las Palmas de Gran Canaria, 35017 Las Palmas de Gran Canaria, Spain
- Juan Ruiz-Alzola
- Instituto Universitario de Investigaciones Biomédicas y Sanitarias (IUIBS), Universidad de Las Palmas de Gran Canaria, 35016 Las Palmas de Gran Canaria, Spain
- Grupo Tecnología Médica IACTEC, Instituto de Astrofísica de Canarias (IAC), 38205 San Cristóbal de La Laguna, Spain
- Departamento de Señales y Comunicaciones, Universidad de Las Palmas de Gran Canaria, 35016 Las Palmas de Gran Canaria, Spain
19. Huang J, Yeung AM, Armstrong DG, Battarbee AN, Cuadros J, Espinoza JC, Kleinberg S, Mathioudakis N, Swerdlow MA, Klonoff DC. Artificial Intelligence for Predicting and Diagnosing Complications of Diabetes. J Diabetes Sci Technol 2023; 17:224-238. PMID: 36121302; PMCID: PMC9846408; DOI: 10.1177/19322968221124583.
Abstract
Artificial intelligence can use real-world data to create models capable of making predictions and medical diagnosis for diabetes and its complications. The aim of this commentary article is to provide a general perspective and present recent advances on how artificial intelligence can be applied to improve the prediction and diagnosis of six significant complications of diabetes including (1) gestational diabetes, (2) hypoglycemia in the hospital, (3) diabetic retinopathy, (4) diabetic foot ulcers, (5) diabetic peripheral neuropathy, and (6) diabetic nephropathy.
Affiliation(s)
- David G. Armstrong
- Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Ashley N. Battarbee
- Center for Women’s Reproductive Health, The University of Alabama at Birmingham, Birmingham, AL, USA
- Jorge Cuadros
- Meredith Morgan Optometric Eye Center, University of California, Berkeley, Berkeley, CA, USA
- Juan C. Espinoza
- Children’s Hospital Los Angeles, University of Southern California, Los Angeles, CA, USA
- Mark A. Swerdlow
- Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- David C. Klonoff
- Diabetes Technology Society, Burlingame, CA, USA
- Diabetes Research Institute, Mills-Peninsula Medical Center, San Mateo, CA, USA
20. Swerdlow M, Shin L, D’Huyvetter K, Mack WJ, Armstrong DG. Initial Clinical Experience with a Simple, Home System for Early Detection and Monitoring of Diabetic Foot Ulcers: The Foot Selfie. J Diabetes Sci Technol 2023; 17:79-88. PMID: 34719973; PMCID: PMC9846401; DOI: 10.1177/19322968211053348.
Abstract
BACKGROUND Diabetic foot ulcers (DFUs) are a leading cause of disability and morbidity. There is an unmet need for a simple, practical, home method to detect DFUs early and remotely monitor their healing. METHOD We developed a simple, inexpensive, smartphone-based, "Foot Selfie" system that enables patients to photograph the plantar surface of their feet without assistance and transmit images to a remote server. In a pilot study, patients from a limb-salvage clinic were asked to image their feet daily for six months and to evaluate the system by questionnaire at five time points. Transmitted results were reviewed weekly. RESULTS Fifteen patients (10 male) used the system after approximately 5 minutes of instruction. Participants uploaded images on a median of 76% of eligible study days. The system captured and transmitted diagnostic quality images of the entire plantar surface of both feet, permitting clinical-management decisions on a remote basis. We monitored 12 active wounds and 39 pre-ulcerative lesions (five wounds and 13 pre-ulcerative lesions at study outset); we observed healing of seven wounds and reversal of 20 pre-ulcerative lesions. Participants rated the system as useful, empowering, and preferable to their previous methods of foot screening. CONCLUSIONS With minimal training, patients transmitted diagnostic-quality images from home on most days, allowing clinicians to review serial images. This system permits inexpensive home foot screening and monitoring of DFUs. Further studies are needed to determine whether it can reduce morbidity of DFUs and/or the associated cost of care. Artificial intelligence integration could improve scalability.
Affiliation(s)
- Mark Swerdlow
- Department of Surgery, Southwestern Academic Limb Salvage Alliance, Keck School of Medicine of University of Southern California, Los Angeles, CA, USA
- Center to Stream Healthcare in Place, Keck School of Medicine of University of Southern California, Los Angeles, CA, USA
- Laura Shin
- Department of Surgery, Southwestern Academic Limb Salvage Alliance, Keck School of Medicine of University of Southern California, Los Angeles, CA, USA
- Center to Stream Healthcare in Place, Keck School of Medicine of University of Southern California, Los Angeles, CA, USA
- Karen D’Huyvetter
- Department of Surgery, Southwestern Academic Limb Salvage Alliance, Keck School of Medicine of University of Southern California, Los Angeles, CA, USA
- Center to Stream Healthcare in Place, Keck School of Medicine of University of Southern California, Los Angeles, CA, USA
- Wendy J. Mack
- Department of Population and Public Health Sciences, Keck School of Medicine of University of Southern California, Los Angeles, CA, USA
- David G. Armstrong
- Department of Surgery, Southwestern Academic Limb Salvage Alliance, Keck School of Medicine of University of Southern California, Los Angeles, CA, USA
- Center to Stream Healthcare in Place, Keck School of Medicine of University of Southern California, Los Angeles, CA, USA
21. Lan T, Li Z, Chen J. FusionSegNet: Fusing global foot features and local wound features to diagnose diabetic foot. Comput Biol Med 2023; 152:106456. PMID: 36571939; DOI: 10.1016/j.compbiomed.2022.106456.
Abstract
The diabetic foot (DF) threatens the health of every diabetic patient. Every year, more than one million people worldwide undergo amputation due to a lack of timely DF diagnosis. Diagnosing DF at an early stage is essential to improve patients' survival rate and quality of life. However, inexperienced doctors can easily confuse DFU wounds with other specific ulcer wounds when patients' health records are lacking, as in underdeveloped areas, so distinguishing diabetic foot ulcers from other chronic wounds is of great value, and the characteristics of deep learning can be well applied in this field. In this paper, we propose FusionSegNet, which fuses global foot features and local wound features to identify DF images among foot ulcer images. In particular, we apply a wound segmentation module to segment foot ulcer wounds, which guides the network to pay attention to the wound area, and FusionSegNet combines the two kinds of features to make the final prediction. Our method is evaluated on a dataset collected by Shanghai Municipal Eighth People's Hospital in a clinical environment. In the training-validation stage, we collected 1211 images for 5-fold cross-validation. Our method classifies DF and non-DF images with an area under the receiver operating characteristic curve (AUC) of 98.93%, accuracy of 95.78%, sensitivity of 94.27%, specificity of 96.88%, and F1-score of 94.91%. With this excellent performance, the proposed method can accurately extract wound features and greatly improve classification performance. In general, the method proposed in this paper can help clinicians make more accurate judgements of the diabetic foot and has great potential in clinical auxiliary diagnosis.
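The sketch below is a loose, conceptual rendering (not the published FusionSegNet) of the fusion idea: one encoder sees the whole foot image for global features, a second encoder sees the image masked by a predicted wound segmentation for local features, and the two embeddings are concatenated for the final decision; the backbones, feature sizes, and the external segmentation module are assumptions.

```python
# Conceptual sketch (not the published FusionSegNet) of fusing a global foot
# representation with a local, segmentation-guided wound representation for a
# binary DF / non-DF decision. The predicted wound mask is assumed to come from
# a separate segmentation module, which is omitted here.
import torch
import torch.nn as nn
import torchvision.models as models

class GlobalLocalFusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        def encoder():
            m = models.resnet18(weights=None)
            m.fc = nn.Identity()          # expose the 512-d pooled feature
            return m
        self.global_encoder = encoder()   # sees the whole foot image
        self.local_encoder = encoder()    # sees the image masked to the wound
        self.head = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, num_classes))

    def forward(self, image: torch.Tensor, wound_mask: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W); wound_mask: (B, 1, H, W) in [0, 1] from the segmenter.
        global_feat = self.global_encoder(image)
        local_feat = self.local_encoder(image * wound_mask)  # attend to the wound area
        return self.head(torch.cat([global_feat, local_feat], dim=1))

model = GlobalLocalFusionClassifier()
out = model(torch.randn(2, 3, 224, 224), torch.rand(2, 1, 224, 224))
print(out.shape)  # torch.Size([2, 2])
```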
Collapse
Affiliation(s)
- Tiancai Lan
- School of Mathematics and Information Engineering, Longyan University, Fujian, 364012, China
| | - Zhiwei Li
- School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
| | - Jun Chen
- Second Hospital of Longyan, Fujian, 364000, China.
| |
Collapse
|
22
|
Anisuzzaman DM, Wang C, Rostami B, Gopalakrishnan S, Niezgoda J, Yu Z. Image-Based Artificial Intelligence in Wound Assessment: A Systematic Review. Adv Wound Care (New Rochelle) 2022; 11:687-709. [PMID: 34544270 DOI: 10.1089/wound.2021.0091] [Citation(s) in RCA: 33] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023] Open
Abstract
Significance: Accurately predicting wound healing trajectories is difficult for wound care clinicians due to the complex and dynamic processes involved in wound healing. Wound care teams capture images of wounds during clinical visits generating big datasets over time. Developing novel artificial intelligence (AI) systems can help clinicians diagnose, assess the effectiveness of therapy, and predict healing outcomes. Recent Advances: Rapid developments in computer processing have enabled the development of AI-based systems that can improve the diagnosis and effectiveness of therapy in various clinical specializations. In the past decade, we have witnessed AI revolutionizing all types of medical imaging like X-ray, ultrasound, computed tomography, magnetic resonance imaging, etc., but AI-based systems remain to be developed clinically and computationally for high-quality wound care that can result in better patient outcomes. Critical Issues: In the current standard of care, collecting wound images on every clinical visit, interpreting and archiving the data are cumbersome and time consuming. Commercial platforms are developed to capture images, perform wound measurements, and provide clinicians with a workflow for diagnosis, but AI-based systems are still in their infancy. This systematic review summarizes the breadth and depth of the most recent and relevant work in intelligent image-based data analysis and system developments for wound assessment. Future Directions: With increasing availabilities of massive data (wound images, wound-specific electronic health records, etc.) as well as powerful computing resources, AI-based digital platforms will play a significant role in delivering data-driven care to people suffering from debilitating chronic wounds.
Collapse
Affiliation(s)
- D M Anisuzzaman
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA
| | - Chuanbo Wang
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA
| | - Behrouz Rostami
- Department of Electrical Engineering, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA
| | | | | | - Zeyun Yu
- Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA
| |
Collapse
|
23
|
ACTNet: asymmetric convolutional transformer network for diabetic foot ulcers classification. Phys Eng Sci Med 2022; 45:1175-1181. [PMID: 36279078 DOI: 10.1007/s13246-022-01185-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Accepted: 10/01/2022] [Indexed: 12/15/2022]
Abstract
Most existing image classification methods have achieved significant progress in the field of natural images. However, in the field of diabetic foot ulcer (DFU) where data is scarce and complex, the accurate classification of data is still a thorny problem. In this paper, we propose an Asymmetric Convolutional Transformer Network (ACTNet) for the multi-class (4-class) classification task of DFU. Specifically, in order to strengthen the expressive ability of the network, we design an asymmetric convolutional module in the front part of the network to model the relationship between local pixels, extract the underlying features of the image, and guide the network to focus on the central region in the image that contains more information. Furthermore, a novel pooling layer is added between the encoder and the classification head in the Transformer, which weights the data sequence generated by the encoder to better correlate the features between the input data. Finally, to fully exploit the performance of the model, we pretrained our model on ImageNet and fine-tune it on DFU images. The model is validated on the DFUC2021 test set, and the F1-score and AUC value are 0.593 and 0.824, respectively. The experiments show that our model has excellent performance even in the case of a small dataset.
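As a rough illustration of the asymmetric convolution idea mentioned above, the sketch below sums a square k x k convolution with parallel 1 x k and k x 1 branches; the kernel size, normalization, and activation are assumptions, and this is not the published ACTNet code.

```python
# Hedged sketch of an "asymmetric convolution" block: a square k x k
# convolution supplemented by parallel 1 x k and k x 1 branches whose outputs
# are summed, biasing the block toward the central cross of the receptive field.
import torch
import torch.nn as nn

class AsymmetricConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        pad = k // 2
        self.square = nn.Conv2d(in_ch, out_ch, (k, k), padding=(pad, pad))
        self.horizontal = nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, pad))
        self.vertical = nn.Conv2d(in_ch, out_ch, (k, 1), padding=(pad, 0))
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # All three branches see the same input and preserve spatial size.
        return self.act(self.bn(self.square(x) + self.horizontal(x) + self.vertical(x)))

if __name__ == "__main__":
    block = AsymmetricConvBlock(3, 16)
    print(block(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 16, 224, 224])
```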
Collapse
|
24
|
Chemello G, Salvatori B, Morettini M, Tura A. Artificial Intelligence Methodologies Applied to Technologies for Screening, Diagnosis and Care of the Diabetic Foot: A Narrative Review. BIOSENSORS 2022; 12:985. [PMID: 36354494 PMCID: PMC9688674 DOI: 10.3390/bios12110985] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Revised: 10/26/2022] [Accepted: 11/04/2022] [Indexed: 06/16/2023]
Abstract
Diabetic foot syndrome is a multifactorial pathology with at least three main etiological factors, i.e., peripheral neuropathy, peripheral arterial disease, and infection. In addition to complexity, another distinctive trait of diabetic foot syndrome is its insidiousness, due to a frequent lack of early symptoms. In recent years, it has become clear that the prevalence of diabetic foot syndrome is increasing, and it is among the diabetes complications with a stronger impact on patient's quality of life. Considering the complex nature of this syndrome, artificial intelligence (AI) methodologies appear adequate to address aspects such as timely screening for the identification of the risk for foot ulcers (or, even worse, for amputation), based on appropriate sensor technologies. In this review, we summarize the main findings of the pertinent studies in the field, paying attention to both the AI-based methodological aspects and the main physiological/clinical study outcomes. The analyzed studies show that AI application to data derived by different technologies provides promising results, but in our opinion future studies may benefit from inclusion of quantitative measures based on simple sensors, which are still scarcely exploited.
Collapse
Affiliation(s)
- Gaetano Chemello
- CNR Institute of Neuroscience, Corso Stati Uniti 4, 35127 Padova, Italy
| | | | - Micaela Morettini
- Department of Information Engineering, Università Politecnica delle Marche, Via Brecce Bianche, 12, 60131 Ancona, Italy
| | - Andrea Tura
- CNR Institute of Neuroscience, Corso Stati Uniti 4, 35127 Padova, Italy
| |
Collapse
|
25
|
Lau CH, Yu KHO, Yip TF, Luk LY, Wai AKC, Sit TY, Wong JYH, Ho JWK. An artificial intelligence-enabled smartphone app for real-time pressure injury assessment. FRONTIERS IN MEDICAL TECHNOLOGY 2022; 4:905074. [PMID: 36212608 PMCID: PMC9541137 DOI: 10.3389/fmedt.2022.905074] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2022] [Accepted: 09/01/2022] [Indexed: 11/29/2022] Open
Abstract
The management of chronic wounds in the elderly such as pressure injury (also known as bedsore or pressure ulcer) is increasingly important in an ageing population. Accurate classification of the stage of pressure injury is important for wound care planning. Nonetheless, the expertise required for staging is often not available in a residential care home setting. Artificial intelligence (AI)-based computer vision techniques have opened up opportunities to harness the inbuilt camera in modern smartphones to support pressure injury staging by nursing home carers. In this paper, we summarise the recent development of smartphone- or tablet-based applications for wound assessment. Furthermore, we present a new smartphone application (app) that performs real-time detection and staging classification of pressure injury wounds using a deep learning-based object detection system, YOLOv4. Based on our validation set of 144 photos, our app obtained an overall prediction accuracy of 63.2%. The per-class prediction specificity is generally high (85.1%–100%), but sensitivity is variable: 73.3% (stage 1 vs. others), 37% (stage 2 vs. others), 76.7% (stage 3 vs. others), 70% (stage 4 vs. others), and 55.6% (unstageable vs. others). Using another independent test set, 8 out of 10 images were predicted correctly by the YOLOv4 model. When deployed in a real-life setting with two different ambient brightness levels and three different Android phone models, the prediction accuracy on the 10 test images ranged from 80% to 90%, which highlights the importance of evaluating mobile health (mHealth) applications in a simulated real-life setting. This study details the development and evaluation process and demonstrates the feasibility of applying such a real-time staging app in wound care management.
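To make the deployment step concrete, here is a hedged sketch of how a YOLOv4 Darknet model can be run on a wound photo with OpenCV's DNN module. The cfg/weights file names, the image path, the stage label list, and the thresholds are placeholders, not artifacts released with this study.

```python
# Illustrative sketch only: running a YOLOv4 Darknet model on a wound photo
# with OpenCV to obtain stage detections. File names and labels are placeholders.
import cv2
import numpy as np

STAGE_NAMES = ["stage1", "stage2", "stage3", "stage4", "unstageable"]  # assumed labels

net = cv2.dnn.readNetFromDarknet("yolov4-pressure-injury.cfg",      # hypothetical files
                                 "yolov4-pressure-injury.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

image = cv2.imread("wound_photo.jpg")  # placeholder path
class_ids, confidences, boxes = model.detect(image, confThreshold=0.25, nmsThreshold=0.45)
for cls, conf, box in zip(np.array(class_ids).flatten(),
                          np.array(confidences).flatten(), boxes):
    x, y, w, h = box
    print(f"{STAGE_NAMES[int(cls)]}: {float(conf):.2f} at x={x}, y={y}, w={w}, h={h}")
```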
Collapse
Affiliation(s)
- Chun Hon Lau
- Laboratory of Data Discovery for Health Limited (D24H), Hong Kong Science Park, Hong Kong SAR, China
- School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
| | - Ken Hung-On Yu
- Laboratory of Data Discovery for Health Limited (D24H), Hong Kong Science Park, Hong Kong SAR, China
- School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
| | - Tsz Fung Yip
- Laboratory of Data Discovery for Health Limited (D24H), Hong Kong Science Park, Hong Kong SAR, China
- School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
| | - Luke Yik Fung Luk
- Laboratory of Data Discovery for Health Limited (D24H), Hong Kong Science Park, Hong Kong SAR, China
- School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
| | - Abraham Ka Chung Wai
- Department of Emergency Medicine, School of Clinical Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
| | - Tin-Yan Sit
- School of Nursing, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
| | - Janet Yuen-Ha Wong
- School of Nursing, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- School of Nursing / Health Studies, Hong Kong Metropolitan University, Ho Man Tin, Hong Kong SAR, China
- Correspondence: Janet Yuen-Ha Wong; Joshua Wing Kei Ho
| | - Joshua Wing Kei Ho
- Laboratory of Data Discovery for Health Limited (D24H), Hong Kong Science Park, Hong Kong SAR, China
- School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Correspondence: Janet Yuen-Ha Wong; Joshua Wing Kei Ho
| |
Collapse
|
26
|
Deep Learning Approaches for Automatic Localization in Medical Images. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:6347307. [PMID: 35814554 PMCID: PMC9259335 DOI: 10.1155/2022/6347307] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Accepted: 05/23/2022] [Indexed: 12/21/2022]
Abstract
Recent revolutionary advances in deep learning (DL) have fueled several breakthrough achievements in various complicated computer vision tasks. The remarkable successes and achievements started in 2012 when deep learning neural networks (DNNs) outperformed the shallow machine learning models on a number of significant benchmarks. Significant advances were made in computer vision by conducting very complex image interpretation tasks with outstanding accuracy. These achievements have shown great promise in a wide variety of fields, especially in medical image analysis by creating opportunities to diagnose and treat diseases earlier. In recent years, the application of the DNN for object localization has gained the attention of researchers due to its success over conventional methods, especially in object localization. As this has become a very broad and rapidly growing field, this study presents a short review of DNN implementation for medical images and validates its efficacy on benchmarks. This study presents the first review that focuses on object localization using the DNN in medical images. The key aim of this study was to summarize the recent studies based on the DNN for medical image localization and to highlight the research gaps that can provide worthwhile ideas to shape future research related to object localization tasks. It starts with an overview on the importance of medical image analysis and existing technology in this space. The discussion then proceeds to the dominant DNN utilized in the current literature. Finally, we conclude by discussing the challenges associated with the application of the DNN for medical image localization which can drive further studies in identifying potential future developments in the relevant field of study.
Collapse
|
27
|
Wang TY, Chen YH, Chen JT, Liu JT, Wu PY, Chang SY, Lee YW, Su KC, Chen CL. Diabetic Macular Edema Detection Using End-to-End Deep Fusion Model and Anatomical Landmark Visualization on an Edge Computing Device. Front Med (Lausanne) 2022; 9:851644. [PMID: 35445051 PMCID: PMC9014123 DOI: 10.3389/fmed.2022.851644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Accepted: 03/14/2022] [Indexed: 11/23/2022] Open
Abstract
Purpose Diabetic macular edema (DME) is a common cause of vision impairment and blindness in patients with diabetes. However, vision loss can be prevented by regular eye examinations during primary care. This study aimed to design an artificial intelligence (AI) system to facilitate ophthalmology referrals by physicians. Methods We developed an end-to-end deep fusion model for DME classification and hard exudate (HE) detection. Based on the architecture of fusion model, we also applied a dual model which included an independent classifier and object detector to perform these two tasks separately. We used 35,001 annotated fundus images from three hospitals between 2007 and 2018 in Taiwan to create a private dataset. The Private dataset, Messidor-1 and Messidor-2 were used to assess the performance of the fusion model for DME classification and HE detection. A second object detector was trained to identify anatomical landmarks (optic disc and macula). We integrated the fusion model and the anatomical landmark detector, and evaluated their performance on an edge device, a device with limited compute resources. Results For DME classification of our private testing dataset, Messidor-1 and Messidor-2, the area under the receiver operating characteristic curve (AUC) for the fusion model had values of 98.1, 95.2, and 95.8%, the sensitivities were 96.4, 88.7, and 87.4%, the specificities were 90.1, 90.2, and 90.2%, and the accuracies were 90.8, 90.0, and 89.9%, respectively. In addition, the AUC was not significantly different for the fusion and dual models for the three datasets (p = 0.743, 0.942, and 0.114, respectively). For HE detection, the fusion model achieved a sensitivity of 79.5%, a specificity of 87.7%, and an accuracy of 86.3% using our private testing dataset. The sensitivity of the fusion model was higher than that of the dual model (p = 0.048). For optic disc and macula detection, the second object detector achieved accuracies of 98.4% (optic disc) and 99.3% (macula). The fusion model and the anatomical landmark detector can be deployed on a portable edge device. Conclusion This portable AI system exhibited excellent performance for the classification of DME, and the visualization of HE and anatomical locations. It facilitates interpretability and can serve as a clinical reference for physicians. Clinically, this system could be applied to diabetic eye screening to improve the interpretation of fundus imaging in patients with DME.
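The reported AUC, sensitivity, specificity, and accuracy values can be reproduced from any binary classifier's outputs with a few lines of scikit-learn; the toy labels and scores below are illustrative only and unrelated to the study's data.

```python
# Small, self-contained sketch (not from the paper) of how the reported binary
# classification metrics are computed from predicted probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                   # toy ground-truth labels
y_prob = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])   # toy model scores
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC        :", roc_auc_score(y_true, y_prob))
print("Sensitivity:", tp / (tp + fn))
print("Specificity:", tn / (tn + fp))
print("Accuracy   :", accuracy_score(y_true, y_pred))
```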
Collapse
Affiliation(s)
- Ting-Yuan Wang
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Yi-Hao Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Jiann-Torng Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - Jung-Tzu Liu
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Po-Yi Wu
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Sung-Yen Chang
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Ya-Wen Lee
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
| | - Kuo-Chen Su
- Department of Optometry, Chung Shan Medical University, Taichung, Taiwan
| | - Ching-Long Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| |
Collapse
|
28
|
Amin J, Anjum MA, Sharif A, Sharif MI. A modified classical-quantum model for diabetic foot ulcer classification. INTELLIGENT DECISION TECHNOLOGIES 2022. [DOI: 10.3233/idt-210017] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Diabetic foot ulcer (DFU) is one of the fastest-spreading complications of diabetes: every year, more than one million patients undergo lower-limb amputation because the condition is not recognized early enough for them to receive proper treatment from doctors. There is therefore an urgent need to develop a computer-aided diagnosis (CAD) system that can readily detect DFU. In this study, a pre-trained ResNet-50 model and a modified classical-quantum model are utilized to classify diabetic foot ulcer images into the corresponding classes, such as normal/abnormal and ischaemia/non-ischaemia. The presented approach achieved classification accuracy greater than 0.90 on normal/abnormal, ischaemia/non-ischaemia, and infection/non-infection foot images. The reported results indicate that the proposed method outperforms recently published work in the domain of diabetic foot ulcers.
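The classical part of the recipe, reusing a pretrained ResNet-50 as a frozen feature extractor with a new classification head, can be sketched as follows; the quantum layer described in the paper is not reproduced, and the two-class head is an assumption.

```python
# Sketch of the classical part only: a pretrained ResNet-50 backbone reused as
# a frozen feature extractor with a new binary head (e.g. normal vs. abnormal).
# The paper's quantum component is not reproduced here.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False                                # freeze pretrained weights
backbone.fc = nn.Linear(backbone.fc.in_features, 2)        # new trainable head

x = torch.randn(4, 3, 224, 224)   # a dummy batch of foot images
print(backbone(x).shape)          # torch.Size([4, 2])
```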
Collapse
Affiliation(s)
- Javeria Amin
- Department of Computer Science, University of Wah, Wah Cantt, Pakistan
| | | | - Abida Sharif
- Department of Computer Science, COMSATS University Islamabad, Vehari Campus, Pakistan
| | | |
Collapse
|
29
|
Triantafyllidis A, Kondylakis H, Katehakis D, Kouroubali A, Koumakis L, Marias K, Alexiadis A, Votis K, Tzovaras D. Deep Learning in mHealth for Cardiovascular Disease, Diabetes, and Cancer: Systematic Review. JMIR Mhealth Uhealth 2022; 10:e32344. [PMID: 35377325 PMCID: PMC9016515 DOI: 10.2196/32344] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Revised: 01/26/2022] [Accepted: 02/22/2022] [Indexed: 12/30/2022] Open
Abstract
Background Major chronic diseases such as cardiovascular disease (CVD), diabetes, and cancer impose a significant burden on people and health care systems around the globe. Recently, deep learning (DL) has shown great potential for the development of intelligent mobile health (mHealth) interventions for chronic diseases that could revolutionize the delivery of health care anytime, anywhere. Objective The aim of this study is to present a systematic review of studies that have used DL based on mHealth data for the diagnosis, prognosis, management, and treatment of major chronic diseases and advance our understanding of the progress made in this rapidly developing field. Methods A search was conducted on the bibliographic databases Scopus and PubMed to identify papers with a focus on the deployment of DL algorithms that used data captured from mobile devices (eg, smartphones, smartwatches, and other wearable devices) targeting CVD, diabetes, or cancer. The identified studies were synthesized according to the target disease, the number of enrolled participants and their age, and the study period as well as the DL algorithm used, the main DL outcome, the data set used, the features selected, and the achieved performance. Results In total, 20 studies were included in the review. A total of 35% (7/20) of DL studies targeted CVD, 45% (9/20) of studies targeted diabetes, and 20% (4/20) of studies targeted cancer. The most common DL outcome was the diagnosis of the patient’s condition for the CVD studies, prediction of blood glucose levels for the studies in diabetes, and early detection of cancer. Most of the DL algorithms used were convolutional neural networks in studies on CVD and cancer and recurrent neural networks in studies on diabetes. The performance of DL was found overall to be satisfactory, reaching >84% accuracy in most studies. In comparison with classic machine learning approaches, DL was found to achieve better performance in almost all studies that reported such comparison outcomes. Most of the studies did not provide details on the explainability of DL outcomes. Conclusions The use of DL can facilitate the diagnosis, management, and treatment of major chronic diseases by harnessing mHealth data. Prospective studies are now required to demonstrate the value of applied DL in real-life mHealth tools and interventions.
Collapse
Affiliation(s)
- Andreas Triantafyllidis
- Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece
| | - Haridimos Kondylakis
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
| | - Dimitrios Katehakis
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
| | - Angelina Kouroubali
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
| | - Lefteris Koumakis
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
| | - Kostas Marias
- Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece
| | - Anastasios Alexiadis
- Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece
| | - Konstantinos Votis
- Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece
| | - Dimitrios Tzovaras
- Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece
| |
Collapse
|
30
|
Rismayanti IDA, Nursalam N, Farida VN, Dewi NWS, Utami R, Aris A, Agustini NLPIB. Early detection to prevent foot ulceration among type 2 diabetes mellitus patient: A multi-intervention review. J Public Health Res 2022; 11. [PMID: 35315261 PMCID: PMC8973203 DOI: 10.4081/jphr.2022.2752] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Accepted: 01/10/2022] [Indexed: 12/02/2022] Open
Abstract
Foot ulceration is one of the most serious complications experienced by patients with type 2 diabetes, and both its severity and the occurrence of new wounds can be reduced through early detection interventions. This systematic review aims to describe and compare various interventions that have been developed to prevent diabetic foot ulcers (DFU). We searched Scopus, Science Direct, PubMed, CINAHL, SAGE, and ProQuest for English-language experimental studies published between 2016 and 2021 that tested early detection for preventing diabetic foot ulcers in diabetic patients. The Joanna Briggs Institute guidelines were used to assess eligibility, and the PRISMA checklist was used to guide this review. Twenty-five studies matched the specified inclusion criteria, all with experimental designs. The majority of participants were type 2 diabetes patients who had not yet experienced ulceration. Based on the review, three main types of interventions are used for the early detection of DFU: 1) conventional intervention/physical assessment, 2) 3D thermal camera assessment systems, and 3) DFU screening instruments. Each type has advantages and disadvantages, so its use needs to be adjusted to the patient's condition and needs, and early detection interventions for DFU risk require further development. Integration with modern technology can also increase the accuracy of results and the ease of examination procedures. Significance for public health: This systematic review describes various digital and conventional early detection interventions, along with their advantages and disadvantages, that can be used to assess risk factors for DFU in patients with diabetes mellitus (DM). Several existing studies discuss only one model of early detection of DFU in DM patients, whereas studies describing the range of interventions available for early detection have been lacking. Knowledge of several DFU prevention interventions is expected to increase the independence of patients and families in preventing complications such as the diabetic foot.
Collapse
Affiliation(s)
| | | | | | | | - Resti Utami
- Faculty of Nursing, Universitas Airlangga, Surabaya, East Java.
| | - Arifal Aris
- Faculty of Nursing, Universitas Airlangga, Surabaya, East Java.
| | | |
Collapse
|
31
|
Venkatesan C, Sumithra MG, Murugappan M. NFU-Net: An Automated Framework for the Detection of Neurotrophic Foot Ulcer Using Deep Convolutional Neural Network. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10782-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
32
|
Carrión H, Jafari M, Bagood MD, Yang HY, Isseroff RR, Gomez M. Automatic wound detection and size estimation using deep learning algorithms. PLoS Comput Biol 2022; 18:e1009852. [PMID: 35275923 PMCID: PMC8942216 DOI: 10.1371/journal.pcbi.1009852] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2021] [Revised: 03/23/2022] [Accepted: 01/20/2022] [Indexed: 11/17/2022] Open
Abstract
Evaluating and tracking wound size is a fundamental part of the wound assessment process. Good location and size estimates can enable proper diagnosis and effective treatment. Traditionally, laboratory wound healing studies include a collection of images at uniform time intervals exhibiting the wounded area and the healing process in the test animal, often a mouse. These images are then manually observed to determine key metrics, such as wound size progress, relevant to the study. However, this task is a time-consuming and laborious process. In addition, defining the wound edge can be subjective and can vary from one individual to another, even among experts. Furthermore, as our understanding of the healing process grows, so does our need to efficiently and accurately track these key factors for high throughput (e.g., over large-scale and long-term experiments). Thus, in this study, we develop a deep learning-based image analysis pipeline that aims to take in non-uniform wound images and extract relevant information such as the location of interest, wound-only image crops, and wound periphery size over time. In particular, our work focuses on images of wounded laboratory mice that are used widely for translationally relevant wound studies and leverages a commonly used ring-shaped splint present in most images to predict wound size. We apply the method to a dataset that was never meant to be quantified and, thus, presents many visual challenges. Additionally, the data set was not meant for training deep learning models and so is relatively small in size with only 256 images. We compare results to those of expert measurements and demonstrate preservation of information relevant to predicting wound closure despite variability from machine-to-expert and even expert-to-expert. The proposed system produced high-fidelity results on unseen data with minimal human intervention. Furthermore, the pipeline estimates acceptable wound sizes when less than 50% of the images are missing reference objects.
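The reference-object idea can be illustrated with a toy calculation: if the splint's true area is known, the wound area follows from the ratio of wound pixels to splint pixels in the two segmentation masks. The masks and the assumed splint area below are synthetic and only demonstrate the arithmetic, not the authors' pipeline.

```python
# Toy sketch of the reference-object idea: wound area in mm^2 is estimated from
# the ratio of wound pixels to splint pixels, given the splint's known area.
import numpy as np

def wound_area_mm2(wound_mask: np.ndarray,
                   splint_mask: np.ndarray,
                   splint_area_mm2: float) -> float:
    """Estimate wound area from binary masks and the splint's known real-world area."""
    mm2_per_pixel = splint_area_mm2 / splint_mask.sum()
    return float(wound_mask.sum() * mm2_per_pixel)

# Synthetic example: a 10x10-pixel wound and a splint covering 400 pixels that
# is assumed (hypothetically) to span 200 mm^2 in reality.
wound = np.zeros((128, 128), dtype=np.uint8);  wound[10:20, 10:20] = 1
splint = np.zeros((128, 128), dtype=np.uint8); splint[60:80, 60:80] = 1
print(wound_area_mm2(wound, splint, splint_area_mm2=200.0))  # 50.0
```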
Collapse
Affiliation(s)
- Héctor Carrión
- Department of Computer Science and Engineering, University of California, Santa Cruz, California, United States of America
| | - Mohammad Jafari
- Department of Earth and Space Sciences, Columbus State University, Columbus, Georgia, United States of America
| | - Michelle Dawn Bagood
- Department of Dermatology, University of California, Davis, Sacramento, California, United States of America
| | - Hsin-ya Yang
- Department of Dermatology, University of California, Davis, Sacramento, California, United States of America
| | - Roslyn Rivkah Isseroff
- Department of Dermatology, University of California, Davis, Sacramento, California, United States of America
| | - Marcella Gomez
- Department of Applied Mathematics, University of California, Santa Cruz, California, United States of America
| |
Collapse
|
33
|
Štotl I, Blagus R, Urbančič-Rovan V. Individualised screening of diabetic foot: creation of a prediction model based on penalised regression and assessment of theoretical efficacy. Diabetologia 2022; 65:291-300. [PMID: 34741637 DOI: 10.1007/s00125-021-05604-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Accepted: 08/23/2021] [Indexed: 01/22/2023]
Abstract
AIMS/HYPOTHESIS A large proportion of people with diabetes do not receive proper foot screening due to insufficiencies in healthcare systems. Introducing an effective risk prediction model into the screening protocol would potentially reduce the required screening frequency for those considered at low risk for diabetic foot complications. The main aim of the study was to investigate the value of individualised risk assignment for foot complications for optimisation of screening. METHODS From 2015 to 2020, 11,878 routine follow-up foot investigations were performed in the tertiary diabetes clinic. From these, 4282 screening investigations with complete data containing all of 18 designated variables collected at regular clinical and foot screening visits were selected for the study sample. Penalised logistic regression models for the prediction of loss of protective sensation (LOPS) and loss of peripheral pulses (LPP) were developed and evaluated. RESULTS Using leave-one-out cross validation (LOOCV), the penalised regression model showed an AUC of 0.84 (95% CI 0.82, 0.85) for prediction of LOPS and 0.80 (95% CI 0.78, 0.83) for prediction of LPP. Calibration analysis (based on LOOCV) presented consistent recall of probabilities, with a Brier score of 0.08 (intercept 0.01 [95% CI -0.09, 0.12], slope 1.00 [95% CI 0.92, 1.09]) for LOPS and a Brier score of 0.05 (intercept 0.01 [95% CI -0.12, 0.14], slope 1.09 [95% CI 0.95, 1.22]) for LPP. In a hypothetical follow-up period of 2 years, the regular screening interval was increased from 1 year to 2 years for individuals at low risk. In individuals with an International Working Group on the Diabetic Foot (IWGDF) risk 0, we could show a 40.5% reduction in the absolute number of screening examinations (3614 instead of 6074 screenings) when a 10% risk cut-off was used and a 26.5% reduction (4463 instead of 6074 screenings) when the risk cut-off was set to 5%. CONCLUSIONS/INTERPRETATION Enhancement of the protocol for diabetic foot screening by inclusion of a prediction model allows differentiation of individuals with diabetes based on the likelihood of complications. This could potentially reduce the number of screenings needed in those considered at low risk of diabetic foot complications. The proposed model requires further refinement and external validation, but it shows the potential for improving compliance with screening guidelines.
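The modelling recipe described above, a penalised logistic regression evaluated with leave-one-out cross-validation and summarised by AUC and Brier score, can be sketched with scikit-learn as follows; the synthetic features stand in for the 18 screening variables, which are not reproduced here, and the penalty strength is an assumption.

```python
# Hedged sketch of the modelling recipe: penalised logistic regression with
# leave-one-out cross-validation, summarised by discrimination (AUC) and
# calibration (Brier score). The data are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=18, random_state=0)  # 18 toy predictors
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l2", C=0.5, max_iter=1000))

# Out-of-sample probabilities via LOOCV, then discrimination and calibration.
proba = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("AUC  :", round(roc_auc_score(y, proba), 3))
print("Brier:", round(brier_score_loss(y, proba), 3))
```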
Collapse
Affiliation(s)
- Iztok Štotl
- Department of Endocrinology, Diabetes and Metabolic Diseases, University Medical Centre Ljubljana, Ljubljana, Slovenia.
| | - Rok Blagus
- Institute for Biostatistics and Medical Informatics, Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia
- Faculty of Sports, University of Ljubljana, Ljubljana, Slovenia
| | - Vilma Urbančič-Rovan
- Department of Endocrinology, Diabetes and Metabolic Diseases, University Medical Centre Ljubljana, Ljubljana, Slovenia
- Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia
| |
Collapse
|
34
|
Scebba G, Zhang J, Catanzaro S, Mihai C, Distler O, Berli M, Karlen W. Detect-and-segment: A deep learning approach to automate wound image segmentation. INFORMATICS IN MEDICINE UNLOCKED 2022. [DOI: 10.1016/j.imu.2022.100884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
|
35
|
Zhang J, Qiu Y, Peng L, Zhou Q, Wang Z, Qi M. A comprehensive review of methods based on deep learning for diabetes-related foot ulcers. Front Endocrinol (Lausanne) 2022; 13:945020. [PMID: 36004341 PMCID: PMC9394750 DOI: 10.3389/fendo.2022.945020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Accepted: 07/04/2022] [Indexed: 12/23/2022] Open
Abstract
BACKGROUND Diabetes mellitus (DM) is a chronic disease characterized by hyperglycemia. If not treated in time, it may lead to lower limb amputation. At the initial stage, the detection of diabetes-related foot ulcer (DFU) is very difficult. Deep learning has demonstrated state-of-the-art performance in various fields and has been used to analyze images of DFUs. OBJECTIVE This article reviewed current applications of deep learning to the early detection of DFU to avoid limb amputation or infection. METHODS Relevant literature on deep learning models, including for the classification, object detection, and semantic segmentation of DFU images, published during the past 10 years, was analyzed. RESULTS Currently, the primary uses of deep learning in early DFU detection are related to different algorithms. For classification tasks, improved classification models were all based on convolutional neural networks (CNNs). The model with parallel convolutional layers based on GoogLeNet and the ensemble model outperformed the other models in classification accuracy. For object detection tasks, the models were based on architectures such as Faster R-CNN, You-Only-Look-Once (YOLO) v3, YOLO v5, or EfficientDet. The refinements of YOLO v3 models achieved an accuracy of 91.95%, and the model with an adaptive Faster R-CNN architecture achieved a mean average precision (mAP) of 91.4%, which outperformed the other models. For semantic segmentation tasks, the models were based on architectures such as fully convolutional networks (FCNs), U-Net, V-Net, or SegNet. The model with U-Net outperformed the other models with an accuracy of 94.96%. For instance segmentation tasks, the models were based on architectures such as Mask R-CNN. The model with Mask R-CNN obtained a precision of 0.8632 and a mAP of 0.5084. CONCLUSION Although current research is promising in the ability of deep learning to improve a patient's quality of life, further research is required to better understand the mechanisms of deep learning for DFUs.
Collapse
Affiliation(s)
- Jianglin Zhang
- Department of Dermatology, Shenzhen Peoples Hospital, The Second Clinical Medica College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen, China
| | - Yue Qiu
- Dermatology Department of Xiangya Hospital, Central South University, Changsha, China
| | - Li Peng
- School of Computer Science, Hunan First Normal University, Changsha, China
| | - Qiuhong Zhou
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, China
| | - Zheng Wang
- School of Computer Science, Hunan First Normal University, Changsha, China
- *Correspondence: Zheng Wang; Min Qi
| | - Min Qi
- Department of Plastic Surgery, Xiangya Hospital, Central South University, Changsha, China
- *Correspondence: Zheng Wang; Min Qi
| |
Collapse
|
36
|
Cassidy B, Reeves ND, Pappachan JM, Ahmad N, Haycocks S, Gillespie D, Yap MH. A Cloud-Based Deep Learning Framework for Remote Detection of Diabetic Foot Ulcers. IEEE PERVASIVE COMPUTING 2022. [DOI: 10.1109/mprv.2021.3135686] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
|
37
|
Vehi J, Mujahid O, Contreras I. Aim and Diabetes. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_158] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
38
|
Al-Garaawi N, Ebsim R, Alharan AFH, Yap MH. Diabetic foot ulcer classification using mapped binary patterns and convolutional neural networks. Comput Biol Med 2022; 140:105055. [PMID: 34839183 DOI: 10.1016/j.compbiomed.2021.105055] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 11/17/2021] [Accepted: 11/18/2021] [Indexed: 12/15/2022]
Abstract
Diabetic foot ulcer (DFU) is a major complication of diabetes and can lead to lower limb amputation if not treated early and properly. In addition to the traditional clinical approaches, in recent years research on automation using computer vision and machine learning methods has played an important role in DFU classification, achieving promising results. The most recent automatic approaches to DFU classification are based on convolutional neural networks (CNNs), using solely RGB images as input. In this paper, we present a CNN-based DFU classification method in which we show that feeding an appropriate feature (texture information) to the CNN model provides performance complementary to the standard RGB-based deep models on the DFU classification task, and better performance can be obtained if both RGB images and their texture features are combined and used as input to the CNN. To this end, the proposed method consists of two main stages. The first stage extracts texture information from the RGB image using the mapped binary patterns technique. The obtained mapped image is used to aid the second stage in recognizing DFU, as it contains texture information of the ulcer. The stack of RGB and mapped binary patterns images is fed to the CNN as a tensor input or as a fused image, which is a linear combination of the RGB and mapped binary patterns images. The performance of the proposed approach was evaluated using two recently published DFU datasets: the Part-A dataset of healthy and unhealthy (DFU) cases [17] and the Part-B dataset of ischaemia and infection cases [18]. The results showed that the proposed methods provided better performance than the state-of-the-art CNN-based methods, with an AUC of 0.981 and an F-measure of 0.952 on the Part-A dataset, an AUC of 0.995 and an F-measure of 0.990 on the Part-B ischaemia dataset, and an AUC of 0.820 and an F-measure of 0.744 on the Part-B infection dataset.
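The general idea of pairing RGB input with a texture map can be illustrated with an ordinary local binary pattern (LBP) transform, used here only as a stand-in for the paper's mapped binary patterns; the LBP parameters and the 4-channel stacking are assumptions.

```python
# Illustrative sketch, not the paper's exact "mapped binary patterns": an LBP
# texture map is computed from the grey-scale image and stacked with the RGB
# channels so that a CNN receives a 4-channel tensor input.
import numpy as np
import torch
import torch.nn as nn
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

rgb = np.random.rand(224, 224, 3).astype(np.float32)     # placeholder DFU image in [0, 1]
gray = (rgb2gray(rgb) * 255).astype(np.uint8)
lbp = local_binary_pattern(gray, P=8, R=1.0, method="uniform")
lbp = lbp / lbp.max()                                     # scale texture map to [0, 1]

four_channel = np.concatenate([rgb, lbp[..., None]], axis=-1)          # H x W x 4
x = torch.from_numpy(four_channel).permute(2, 0, 1).unsqueeze(0).float()

# Any CNN whose first convolution accepts 4 input channels can consume this tensor.
stem = nn.Conv2d(in_channels=4, out_channels=16, kernel_size=3, padding=1)
print(stem(x).shape)   # torch.Size([1, 16, 224, 224])
```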
Collapse
Affiliation(s)
- Nora Al-Garaawi
- Department of Computer Science, Faculty of Education for Girls, University of Kufa, Najaf, Iraq.
| | - Raja Ebsim
- Division of Informatics, Imaging and Data Sciences, The University of Manchester, Manchester, UK
| | - Abbas F H Alharan
- Department of Computer Science, Faculty of Education for Girls, University of Kufa, Najaf, Iraq
| | - Moi Hoon Yap
- Centre for Advanced Computational Science, Manchester Metropolitan University, Manchester, UK
| |
Collapse
|
39
|
Agarwal R, Yap MH, Hasan MK, Zwiggelaar R, Martí R. Deep Learning in Mammography Breast Cancer Detection. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_157] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
40
|
Güley O, Pati S, Bakas S. Classification of Infection and Ischemia in Diabetic Foot Ulcers Using VGG Architectures. DIABETIC FOOT ULCERS GRAND CHALLENGE : SECOND CHALLENGE, DFUC 2021, HELD IN CONJUNCTION WITH MICCAI 2021, STRASBOURG, FRANCE, SEPTEMBER 27, 2021 : PROCEEDINGS. DFUC (CONFERENCE) (2ND : 2021 : ONLINE) 2022; 13183:76-89. [PMID: 35465060 PMCID: PMC9026672 DOI: 10.1007/978-3-030-94907-5_6] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Diabetic foot ulceration (DFU) is a serious complication of diabetes, and a major challenge for healthcare systems around the world. Further infection and ischemia in DFU can significantly prolong treatment and often result in limb amputation, with more severe cases resulting in terminal illness. Thus, early identification and regular monitoring is necessary to improve care, and reduce the burden on healthcare systems. With that in mind, this study attempts to address the problem of infection and ischemia classification in diabetic foot ulcers, in four distinct classes. We have evaluated a series of VGG architectures with different layers, following numerous training strategies, including k-fold cross validation, data pre-processing options, augmentation techniques, and weighted loss calculations. In favor of transparency and reproducibility, we make all the implementations available through the Generally Nuanced Deep Learning Framework (GaNDLF, github.com/CBICA/GaNDLF). Our best model was evaluated during the DFU Challenge 2021, and was ranked 2nd, 5th, and 7th based on the macro-averaged AUC (area under the curve), macro-averaged F1 score, and macro-averaged recall metrics, respectively. Our findings support that current state-of-the-art architectures provide good results for the DFU image classification task, and further experimentation is required to study the effects of pre-processing and augmentation strategies.
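A minimal sketch of the general training setup named above (a VGG classifier with a four-class head and a class-weighted loss) is given below; the class counts, weighting scheme, and head replacement are illustrative assumptions rather than the GaNDLF configuration used in the challenge.

```python
# Minimal sketch under assumptions: VGG-16 with a four-class head and an
# inverse-frequency weighted cross-entropy loss. Not the challenge configuration.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g. none / infection / ischaemia / both

model = models.vgg16(weights=None)  # ImageNet weights would be loaded in practice
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

# Weighted loss: rarer classes receive proportionally larger weights.
class_counts = torch.tensor([2500.0, 2500.0, 250.0, 600.0])   # illustrative counts only
weights = class_counts.sum() / (NUM_CLASSES * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = model(torch.randn(2, 3, 224, 224))
loss = criterion(logits, torch.tensor([0, 2]))
print(logits.shape, float(loss))
```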
Collapse
Affiliation(s)
- Orhun Güley
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Informatics, Technical University of Munich, Munich, Germany
| | - Sarthak Pati
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Informatics, Technical University of Munich, Munich, Germany
| | - Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| |
Collapse
|
41
|
Yap MH, Hachiuma R, Alavi A, Brüngel R, Cassidy B, Goyal M, Zhu H, Rückert J, Olshansky M, Huang X, Saito H, Hassanpour S, Friedrich CM, Ascher DB, Song A, Kajita H, Gillespie D, Reeves ND, Pappachan JM, O'Shea C, Frank E. Deep learning in diabetic foot ulcers detection: A comprehensive evaluation. Comput Biol Med 2021; 135:104596. [PMID: 34247133 DOI: 10.1016/j.compbiomed.2021.104596] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Revised: 06/17/2021] [Accepted: 06/17/2021] [Indexed: 02/08/2023]
Abstract
There has been a substantial amount of research involving computer methods and technology for the detection and recognition of diabetic foot ulcers (DFUs), but there is a lack of systematic comparisons of state-of-the-art deep learning object detection frameworks applied to this problem. DFUC2020 provided participants with a comprehensive dataset consisting of 2,000 images for training and 2,000 images for testing. This paper summarizes the results of DFUC2020 by comparing the deep learning-based algorithms proposed by the winning teams: Faster R-CNN, three variants of Faster R-CNN and an ensemble method; YOLOv3; YOLOv5; EfficientDet; and a new Cascade Attention Network. For each deep learning method, we provide a detailed description of model architecture, parameter settings for training and additional stages including pre-processing, data augmentation and post-processing. We provide a comprehensive evaluation for each method. All the methods required a data augmentation stage to increase the number of images available for training and a post-processing stage to remove false positives. The best performance was obtained from Deformable Convolution, a variant of Faster R-CNN, with a mean average precision (mAP) of 0.6940 and an F1-Score of 0.7434. Finally, we demonstrate that the ensemble method based on different deep learning methods can enhance the F1-Score but not the mAP.
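None of the winning implementations is reproduced here, but the following generic torchvision snippet shows what a Faster R-CNN detector configured for a single ulcer class looks like at inference time, which may help readers interpret the mAP/F1 comparison above; the class count and input size are assumptions.

```python
# Generic, hedged example (not any team's winning entry): torchvision's Faster
# R-CNN configured for background + one "ulcer" class, run in inference mode on
# a dummy image to show the detection output format.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None)  # random init for the sketch
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)  # background + ulcer
model.eval()

with torch.no_grad():
    predictions = model([torch.rand(3, 480, 640)])   # list of dicts: boxes, labels, scores
print(predictions[0]["boxes"].shape, predictions[0]["scores"].shape)
```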
Collapse
Affiliation(s)
- Moi Hoon Yap
- Faculty of Science and Engineering, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester, M1 5GD, UK.
| | | | - Azadeh Alavi
- Baker Heart and Diabetes Institute, 20 Commercial Road, Melbourne, VIC, 3000, Australia
| | - Raphael Brüngel
- Department of Computer Science, University of Applied Sciences and Arts Dortmund (FH Dortmund), Emil-Figge-Str. 42, 44227 Dortmund, Germany; Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, Hufelandstr. 55, 45122, Essen, Germany
| | - Bill Cassidy
- Faculty of Science and Engineering, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester, M1 5GD, UK
| | - Manu Goyal
- Department of Biomedical Data Science, Dartmouth College, Hanover, NH, USA
| | - Hongtao Zhu
- Shanghai University, Shanghai, 200444, China
| | - Johannes Rückert
- Department of Computer Science, University of Applied Sciences and Arts Dortmund (FH Dortmund), Emil-Figge-Str. 42, 44227 Dortmund, Germany
| | - Moshe Olshansky
- Baker Heart and Diabetes Institute, 20 Commercial Road, Melbourne, VIC, 3000, Australia
| | - Xiao Huang
- Shanghai University, Shanghai, 200444, China
| | | | - Saeed Hassanpour
- Department of Biomedical Data Science, Dartmouth College, Hanover, NH, USA
| | - Christoph M Friedrich
- Department of Computer Science, University of Applied Sciences and Arts Dortmund (FH Dortmund), Emil-Figge-Str. 42, 44227 Dortmund, Germany; Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, Hufelandstr. 55, 45122, Essen, Germany
| | - David B Ascher
- Baker Heart and Diabetes Institute, 20 Commercial Road, Melbourne, VIC, 3000, Australia
| | - Anping Song
- Shanghai University, Shanghai, 200444, China
| | - Hiroki Kajita
- Keio University School of Medicine, Shinanomachi, Tokyo, Japan
| | - David Gillespie
- Faculty of Science and Engineering, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester, M1 5GD, UK
| | - Neil D Reeves
- Faculty of Science and Engineering, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester, M1 5GD, UK
| | | | - Claire O'Shea
- Waikato Diabetes Health Board, Hamilton, New Zealand
| | - Eibe Frank
- Department of Computer Science, University of Waikato, Hamilton, New Zealand
| |
Collapse
|
42
|
Zhang J, Mihai C, Tüshaus L, Scebba G, Distler O, Karlen W. Wound Image Quality From a Mobile Health Tool for Home-Based Chronic Wound Management With Real-Time Quality Feedback: Randomized Feasibility Study. JMIR Mhealth Uhealth 2021; 9:e26149. [PMID: 34328440 PMCID: PMC8367165 DOI: 10.2196/26149] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2021] [Revised: 04/30/2021] [Accepted: 05/19/2021] [Indexed: 12/23/2022] Open
Abstract
Background Travel to clinics for chronic wound management is burdensome to patients. Remote assessment and management of wounds using mobile and telehealth approaches can reduce this burden and improve patient outcomes. An essential step in wound documentation is the capture of wound images, but poor image quality can have a negative influence on the reliability of the assessment. To date, no study has investigated the quality of remotely acquired wound images and whether these are suitable for wound self-management and telemedical interpretation of wound status. Objective Our goal was to develop a mobile health (mHealth) tool for the remote self-assessment of digital ulcers (DUs) in patients with systemic sclerosis (SSc). We aimed to define and validate objective measures for assessing the image quality, evaluate whether an automated feedback feature based on real-time assessment of image quality improves the overall quality of acquired wound images, and evaluate the feasibility of deploying the mHealth tool for home-based chronic wound self-monitoring by patients with SSc. Methods We developed an mHealth tool composed of a wound imaging and management app, a custom color reference sticker, and a smartphone holder. We introduced 2 objective image quality parameters based on the sharpness and presence of the color checker to assess the quality of the image during acquisition and enable a quality feedback mechanism in an advanced version of the app. We randomly assigned patients with SSc and DU to the 2 device groups (basic and feedback) to self-document their DU at home over 8 weeks. The color checker detection ratio (CCDR) and color checker sharpness (CCS) were compared between the 2 groups. We evaluated the feasibility of the mHealth tool by analyzing the usability feedback from questionnaires, user behavior and timings, and the overall quality of the wound images. Results A total of 21 patients were enrolled, of which 15 patients were included in the image quality analysis. The average CCDR was 0.96 (191/199) in the feedback group and 0.86 (158/183) in the basic group. The feedback group showed significantly higher (P<.001) CCS compared to the basic group. The usability questionnaire results showed that the majority of patients were satisfied with the tool, but could benefit from disease-specific adaptations. The median assessment duration was <50 seconds in all patients, indicating the mHealth tool was efficient to use and could be integrated into the daily routine of patients. Conclusions We developed an mHealth tool that enables patients with SSc to acquire good-quality DU images and demonstrated that it is feasible to deploy such an app in this patient group. The feedback mechanism improved the overall image quality. The introduced technical solutions consist of a further step towards reliable and trustworthy digital health for home-based self-management of wounds.
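A simple way to picture the real-time quality feedback is an automated blur check; the variance-of-Laplacian score below is a common sharpness proxy and is used here only as a stand-in for the paper's colour-checker sharpness measure, with an assumed threshold and a synthetic placeholder image.

```python
# Stand-in sharpness check (not the paper's CCS metric): variance of the
# Laplacian as a blur score, with an assumed accept/retake threshold.
import cv2
import numpy as np

def sharpness_score(gray: np.ndarray) -> float:
    """Higher values indicate a sharper (less blurry) photo."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

BLUR_THRESHOLD = 100.0  # assumed cut-off; in practice tuned per device

# Placeholder frame; in the app this would be the grey-scale wound photo.
gray = cv2.cvtColor((np.random.rand(480, 640, 3) * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY)
score = sharpness_score(gray)
print("retake photo" if score < BLUR_THRESHOLD else "image accepted", round(score, 1))
```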
Collapse
Affiliation(s)
- Jia Zhang
- Mobile Health Systems Lab, Institute of Robotics and Intelligent Systems, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
| | - Carina Mihai
- Department of Rheumatology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
| | - Laura Tüshaus
- Mobile Health Systems Lab, Institute of Robotics and Intelligent Systems, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
| | - Gaetano Scebba
- Mobile Health Systems Lab, Institute of Robotics and Intelligent Systems, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
| | - Oliver Distler
- Department of Rheumatology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
| | - Walter Karlen
- Mobile Health Systems Lab, Institute of Robotics and Intelligent Systems, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
| |
Collapse
|
43
|
Rostami B, Anisuzzaman DM, Wang C, Gopalakrishnan S, Niezgoda J, Yu Z. Multiclass wound image classification using an ensemble deep CNN-based classifier. Comput Biol Med 2021; 134:104536. [PMID: 34126281 DOI: 10.1016/j.compbiomed.2021.104536] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 05/21/2021] [Accepted: 05/22/2021] [Indexed: 10/21/2022]
Abstract
Acute and chronic wounds are a challenge to healthcare systems around the world and affect many people's lives annually. Wound classification is a key step in wound diagnosis that would help clinicians to identify an optimal treatment procedure. Hence, having a high-performance classifier assists wound specialists to classify wound types with less financial and time costs. Different wound classification methods based on machine learning and deep learning have been proposed in the literature. In this study, we have developed an ensemble Deep Convolutional Neural Network-based classifier to categorize wound images into multiple classes including surgical, diabetic, and venous ulcers. The output classification scores of two classifiers (namely, patch-wise and image-wise) are fed into a Multilayer Perceptron to provide a superior classification performance. A 5-fold cross-validation approach is used to evaluate the proposed method. We obtained maximum and average classification accuracy values of 96.4% and 94.28% for binary and 91.9% and 87.7% for 3-class classification problems. The proposed classifier was compared with some common deep classifiers and showed significantly higher accuracy metrics. We also tested the proposed method on the Medetec wound image dataset, and the accuracy values of 91.2% and 82.9% were obtained for binary and 3-class classifications. The results show that our proposed method can be used effectively as a decision support system in classification of wound images or other related clinical applications.
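The stacking step, feeding the output scores of two base classifiers into a multilayer perceptron, can be sketched on synthetic data as follows; the two scikit-learn base models merely stand in for the paper's patch-wise and image-wise CNNs.

```python
# Schematic sketch of the stacking idea (not the authors' networks): class
# probabilities from two base classifiers are concatenated and fed to an MLP.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=40, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Two stand-in base classifiers (the paper uses patch-wise and image-wise CNNs).
base_a = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
base_b = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

stack_tr = np.hstack([base_a.predict_proba(X_tr), base_b.predict_proba(X_tr)])
stack_te = np.hstack([base_a.predict_proba(X_te), base_b.predict_proba(X_te)])

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(stack_tr, y_tr)
print("ensemble accuracy:", round(mlp.score(stack_te, y_te), 3))
```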
Collapse
Affiliation(s)
- Behrouz Rostami
- Electrical Engineering Department, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
| | - D M Anisuzzaman
- Computer Science Department, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
| | - Chuanbo Wang
- Computer Science Department, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
| | | | - Jeffrey Niezgoda
- Advancing the Zenith of Healthcare (AZH) Wound and Vascular Center, Milwaukee, WI, USA
| | - Zeyun Yu
- Electrical Engineering Department, University of Wisconsin-Milwaukee, Milwaukee, WI, USA; Computer Science Department, University of Wisconsin-Milwaukee, Milwaukee, WI, USA.
| |
Collapse
|
44
|
Cassidy B, Reeves ND, Pappachan JM, Gillespie D, O’Shea C, Rajbhandari S, Maiya AG, Frank E, Boulton AJM, Armstrong DG, Najafi B, Wu J, Kochhar RS, Yap MH. The DFUC 2020 Dataset: Analysis Towards Diabetic Foot Ulcer Detection. TOUCHREVIEWS IN ENDOCRINOLOGY 2021; 17:5-11. [PMID: 35118441 PMCID: PMC8320006 DOI: 10.17925/ee.2021.17.1.5] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Accepted: 03/03/2020] [Indexed: 02/05/2023]
Abstract
Every 20 seconds a limb is amputated somewhere in the world due to diabetes. This is a global health problem that requires a global solution. The International Conference on Medical Image Computing and Computer Assisted Intervention challenge, which concerns the automated detection of diabetic foot ulcers (DFUs) using machine learning techniques, will accelerate the development of innovative healthcare technology to address this unmet medical need. In an effort to improve patient care and reduce the strain on healthcare systems, recent research has focused on the creation of cloud-based detection algorithms. These can be consumed as a service by a mobile app that patients (or a carer, partner or family member) could use themselves at home to monitor their condition and to detect the appearance of a DFU. Collaborative work between Manchester Metropolitan University, Lancashire Teaching Hospitals and the Manchester University NHS Foundation Trust has created a repository of 4,000 DFU images for the purpose of supporting research toward more advanced methods of DFU detection. This paper presents a dataset description and analysis, assessment methods, benchmark algorithms and initial evaluation results. It facilitates the challenge by providing useful insights into state-of-the-art and ongoing research.
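The cloud-hosted detection service consumed by a mobile app, as described above, typically reduces on the client side to a simple image upload followed by parsing of the returned detections. The sketch below is a hypothetical illustration of that pattern; the URL, endpoint, and JSON fields are invented placeholders and do not correspond to any real DFUC service.

```python
# Hypothetical client for a cloud-based DFU detection service: upload a photo,
# read back detected ulcer regions. The URL and response schema are placeholders.
import requests

def detect_dfu(image_path: str, service_url: str = "https://example.org/api/dfu/detect") -> list:
    """Upload an image and return a list of detected bounding boxes."""
    with open(image_path, "rb") as fh:
        response = requests.post(service_url, files={"image": fh}, timeout=30)
    response.raise_for_status()
    # Assumed response shape: {"detections": [{"box": [x1, y1, x2, y2], "score": 0.93}, ...]}
    return response.json().get("detections", [])

# Example call (requires a running service at the placeholder URL):
# boxes = detect_dfu("foot_photo.jpg")
```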
Affiliation(s)
- Bill Cassidy: Centre for Applied Computational Science, Faculty of Science and Engineering, Manchester Metropolitan University, Manchester, UK
- Neil D Reeves: Research Centre for Musculoskeletal Science & Sports Medicine, Faculty of Science and Engineering, Manchester Metropolitan University, Manchester, UK
- Joseph M Pappachan: Research Centre for Musculoskeletal Science & Sports Medicine, Faculty of Science and Engineering, Manchester Metropolitan University, Manchester, UK; Lancashire Teaching Hospitals, Preston, UK; School of Medical Sciences, University of Manchester, Manchester, UK
- David Gillespie: Centre for Applied Computational Science, Faculty of Science and Engineering, Manchester Metropolitan University, Manchester, UK
- Claire O'Shea: Waikato District Health Board, Hamilton, New Zealand
- Arun G Maiya: Manipal College of Health Professions, Karnataka, India
- Eibe Frank: Department of Computer Science, University of Waikato, Hamilton, New Zealand
- Andrew JM Boulton: School of Medical Sciences, University of Manchester, Manchester, UK
- David G Armstrong: Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Justina Wu: Waikato District Health Board, Hamilton, New Zealand
- Moi Hoon Yap: Centre for Applied Computational Science, Faculty of Science and Engineering, Manchester Metropolitan University, Manchester, UK
45
Aim and Diabetes. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_158-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/20/2022]
46
Cassidy B, Reeves ND, Pappachan JM, Gillespie D, O'Shea C, Rajbhandari S, Maiya AG, Frank E, Boulton AJM, Armstrong DG, Najafi B, Wu J, Kochhar RS, Yap MH. The DFUC 2020 Dataset: Analysis Towards Diabetic Foot Ulcer Detection. European Endocrinology 2021. [DOI: 10.17925/ee.2021.1.1.5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 02/05/2023]
47
Goyal M, Knackstedt T, Yan S, Hassanpour S. Artificial intelligence-based image classification methods for diagnosis of skin cancer: Challenges and opportunities. Comput Biol Med 2020; 127:104065. [PMID: 33246265] [PMCID: PMC8290363] [DOI: 10.1016/j.compbiomed.2020.104065] [Citation(s) in RCA: 108] [Impact Index Per Article: 27.0] [Received: 08/14/2020] [Revised: 10/15/2020] [Accepted: 10/15/2020] [Indexed: 01/13/2023]
Abstract
Recently, there has been great interest in developing Artificial Intelligence (AI) enabled computer-aided diagnostics solutions for the diagnosis of skin cancer. With the increasing incidence of skin cancers, low awareness among a growing population, and a lack of adequate clinical expertise and services, there is an immediate need for AI systems to assist clinicians in this domain. A large number of skin lesion datasets are available publicly, and researchers have developed AI solutions, particularly deep learning algorithms, to distinguish malignant skin lesions from benign lesions in different image modalities such as dermoscopic, clinical, and histopathology images. Despite the various claims of AI systems achieving higher accuracy than dermatologists in the classification of different skin lesions, these AI systems are still in the very early stages of clinical application in terms of being ready to aid clinicians in the diagnosis of skin cancers. In this review, we discuss advancements in the digital image-based AI solutions for the diagnosis of skin cancer, along with some challenges and future opportunities to improve these AI systems to support dermatologists and enhance their ability to diagnose skin cancer.
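As a concrete illustration of the deep-learning recipe this review surveys, the sketch below fine-tunes an ImageNet-pretrained CNN with a benign-versus-malignant classification head. It assumes torchvision 0.13 or later and uses a random stand-in batch; it is not the pipeline of any specific study cited here.

```python
# Minimal sketch of the common transfer-learning recipe: adapt an
# ImageNet-pretrained CNN to separate malignant from benign lesion images.
# The random batch below is a stand-in for real dermoscopic/clinical images.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs. malignant head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)   # stand-in for a batch of lesion images
labels = torch.randint(0, 2, (8,))     # stand-in labels

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```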
Affiliation(s)
- Manu Goyal: Department of Biomedical Data Science, Dartmouth College, Hanover, NH, USA
- Thomas Knackstedt: Department of Dermatology, MetroHealth System and School of Medicine, Case Western Reserve University, Cleveland, OH, USA
- Shaofeng Yan: Section of Dermatopathology, Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Geisel School of Medicine at Dartmouth, Lebanon, NH, USA
- Saeed Hassanpour: Departments of Biomedical Data Science, Computer Science, and Epidemiology, Dartmouth College, Hanover, NH, USA
48
Wagh A, Jain S, Mukherjee A, Agu E, Pedersen P, Strong D, Tulu B, Lindsay C, Liu Z. Semantic Segmentation of Smartphone Wound Images: Comparative Analysis of AHRF and CNN-Based Approaches. IEEE Access 2020; 8:181590-181604. [PMID: 33251080] [PMCID: PMC7695230] [DOI: 10.1109/access.2020.3014175] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Indexed: 05/19/2023]
Abstract
Smartphone wound image analysis has recently emerged as a viable way to assess healing progress and provide actionable feedback to patients and caregivers between hospital appointments. Segmentation is a key image analysis step, after which attributes of the wound segment (e.g. wound area and tissue composition) can be analyzed. The Associated Hierarchical Random Field (AHRF) formulates the image segmentation problem as a graph optimization problem. Handcrafted features are extracted, which are then classified using machine learning classifiers. More recently deep learning approaches have emerged and demonstrated superior performance for a wide range of image analysis tasks. FCN, U-Net and DeepLabV3 are Convolutional Neural Networks used for semantic segmentation. While in separate experiments each of these methods have shown promising results, no prior work has comprehensively and systematically compared the approaches on the same large wound image dataset, or more generally compared deep learning vs non-deep learning wound image segmentation approaches. In this paper, we compare the segmentation performance of AHRF and CNN approaches (FCN, U-Net, DeepLabV3) using various metrics including segmentation accuracy (dice score), inference time, amount of training data required and performance on diverse wound sizes and tissue types. Improvements possible using various image pre- and post-processing techniques are also explored. As access to adequate medical images/data is a common constraint, we explore the sensitivity of the approaches to the size of the wound dataset. We found that for small datasets (< 300 images), AHRF is more accurate than U-Net but not as accurate as FCN and DeepLabV3. AHRF is also over 1000x slower. For larger datasets (> 300 images), AHRF saturates quickly, and all CNN approaches (FCN, U-Net and DeepLabV3) are significantly more accurate than AHRF.
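The Dice score used above to compare AHRF and CNN segmentations can be computed directly from binary masks. The sketch below is a generic implementation with toy masks, not code from the paper.

```python
# Generic Dice overlap between a predicted wound mask and a ground-truth mask.
# The 4x4 toy masks below are illustrative only.
import numpy as np

def dice_score(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2 * |pred & true| / (|pred| + |true|) for binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

true = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
pred = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]])
print(round(dice_score(pred, true), 3))  # prediction covers 3 of 4 true pixels -> ~0.857
```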
Affiliation(s)
- Ameya Wagh: Computer Science Department, Worcester Polytechnic Institute, Worcester, MA 01609, USA
- Shubham Jain: Computer Science Department, Worcester Polytechnic Institute, Worcester, MA 01609, USA
- Apratim Mukherjee: Computer Science Department, Manipal Institute of Technology, Manipal, Karnataka 576104, India
- Emmanuel Agu: Computer Science Department, Worcester Polytechnic Institute, Worcester, MA 01609, USA
- Peder Pedersen: Computer Science Department, Worcester Polytechnic Institute, Worcester, MA 01609, USA
- Diane Strong: Computer Science Department, Worcester Polytechnic Institute, Worcester, MA 01609, USA
- Bengisu Tulu: Computer Science Department, Worcester Polytechnic Institute, Worcester, MA 01609, USA
- Clifford Lindsay: Radiology Department, University of Massachusetts Medical School, Worcester, MA 01655, USA
- Ziyang Liu: Computer Science Department, Worcester Polytechnic Institute, Worcester, MA 01609, USA
49
Jinnai S, Yamazaki N, Hirano Y, Sugawara Y, Ohe Y, Hamamoto R. The Development of a Skin Cancer Classification System for Pigmented Skin Lesions Using Deep Learning. Biomolecules 2020; 10:1123. [PMID: 32751349] [PMCID: PMC7465007] [DOI: 10.3390/biom10081123] [Citation(s) in RCA: 61] [Impact Index Per Article: 15.3] [Received: 06/23/2020] [Revised: 07/25/2020] [Accepted: 07/28/2020] [Indexed: 12/13/2022]
Abstract
Recent studies have demonstrated the usefulness of convolutional neural networks (CNNs) to classify images of melanoma, with accuracies comparable to those achieved by dermatologists. However, the performance of a CNN trained with only clinical images of a pigmented skin lesion in a clinical image classification task, in competition with dermatologists, has not been reported to date. In this study, we extracted 5846 clinical images of pigmented skin lesions from 3551 patients. Pigmented skin lesions included malignant tumors (malignant melanoma and basal cell carcinoma) and benign tumors (nevus, seborrhoeic keratosis, senile lentigo, and hematoma/hemangioma). We created the test dataset by randomly selecting 666 patients out of them and picking one image per patient, and created the training dataset by giving bounding-box annotations to the rest of the images (4732 images, 2885 patients). Subsequently, we trained a faster, region-based CNN (FRCNN) with the training dataset and checked the performance of the model on the test dataset. In addition, ten board-certified dermatologists (BCDs) and ten dermatologic trainees (TRNs) took the same tests, and we compared their diagnostic accuracy with FRCNN. For six-class classification, the accuracy of FRCNN was 86.2%, and that of the BCDs and TRNs was 79.5% (p = 0.0081) and 75.1% (p < 0.00001), respectively. For two-class classification (benign or malignant), the accuracy, sensitivity, and specificity were 91.5%, 83.3%, and 94.5% by FRCNN; 86.6%, 86.3%, and 86.6% by BCD; and 85.3%, 83.5%, and 85.9% by TRN, respectively. False positive rates and positive predictive values were 5.5% and 84.7% by FRCNN, 13.4% and 70.5% by BCD, and 14.1% and 68.5% by TRN, respectively. We compared the classification performance of FRCNN with 20 dermatologists. As a result, the classification accuracy of FRCNN was better than that of the dermatologists. In the future, we plan to implement this system in society and have it used by the general public, in order to improve the prognosis of skin cancer.
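The two-class figures quoted above (accuracy, sensitivity, specificity, false positive rate, positive predictive value) all derive from the same confusion-matrix counts. The sketch below shows the standard formulas; the counts passed in are illustrative and are not the study's data.

```python
# Standard binary (benign vs. malignant) metrics from confusion-matrix counts.
# The example counts are made up for illustration.
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),             # recall on malignant cases
        "specificity": tn / (tn + fp),
        "false_positive_rate": fp / (fp + tn),     # 1 - specificity
        "positive_predictive_value": tp / (tp + fp),
    }

print(binary_metrics(tp=83, fp=15, tn=260, fn=17))
```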
Affiliation(s)
- Shunichi Jinnai (corresponding author): Department of Dermatologic Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Naoya Yamazaki: Department of Dermatologic Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Yuichiro Hirano: Preferred Networks, 1-6-1 Otemachi, Chiyoda-ku, Tokyo 100-0004, Japan
- Yohei Sugawara: Preferred Networks, 1-6-1 Otemachi, Chiyoda-ku, Tokyo 100-0004, Japan
- Yuichiro Ohe: Department of Thoracic Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryuji Hamamoto (corresponding author): Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
50
Yap MH, Goyal M, Osman F, Martí R, Denton E, Juette A, Zwiggelaar R. Breast ultrasound region of interest detection and lesion localisation. Artif Intell Med 2020; 107:101880. [PMID: 32828439] [DOI: 10.1016/j.artmed.2020.101880] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Received: 08/06/2019] [Revised: 05/06/2020] [Accepted: 05/12/2020] [Indexed: 11/29/2022]
Abstract
In current breast ultrasound computer aided diagnosis systems, the radiologist preselects a region of interest (ROI) as an input for computerised breast ultrasound image analysis. This task is time consuming and there is inconsistency among human experts. Researchers attempting to automate the process of obtaining the ROIs have been relying on image processing and conventional machine learning methods. We propose the use of a deep learning method for breast ultrasound ROI detection and lesion localisation. We use the most accurate object detection deep learning framework - Faster-RCNN with Inception-ResNet-v2 - as our deep learning network. Due to the lack of datasets, we use transfer learning and propose a new 3-channel artificial RGB method to improve the overall performance. We evaluate and compare the performance of our proposed methods on two datasets (namely, Dataset A and Dataset B), i.e. within individual datasets and composite dataset. We report the lesion detection results with two types of analysis: (1) detected point (centre of the segmented region or the detected bounding box) and (2) Intersection over Union (IoU). Our results demonstrate that the proposed methods achieved comparable results on detected point but with notable improvement on IoU. In addition, our proposed 3-channel artificial RGB method improves the recall of Dataset A. Finally, we outline some future directions for the research.
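The Intersection over Union (IoU) analysis mentioned above scores each detected bounding box against its ground-truth box. The sketch below is a generic implementation with made-up box coordinates, not the authors' evaluation code.

```python
# Generic IoU between two axis-aligned boxes given as (x_min, y_min, x_max, y_max).
# The example coordinates are illustrative only.
def iou(box_a: tuple, box_b: tuple) -> float:
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter else 0.0

print(round(iou((10, 10, 60, 60), (30, 30, 80, 80)), 3))  # 0.22
```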
Affiliation(s)
- Moi Hoon Yap: Department of Computing and Mathematics, Manchester Metropolitan University, UK
- Manu Goyal: Department of Computing and Mathematics, Manchester Metropolitan University, UK
- Fatima Osman: Department of Computer Science, Sudan University of Science and Technology, Sudan
- Robert Martí: Computer Vision and Robotics Institute, University of Girona, Spain
- Erika Denton: Norfolk and Norwich University Hospital Foundation Trust, Norwich, UK
- Arne Juette: Norfolk and Norwich University Hospital Foundation Trust, Norwich, UK