1
Tajabadi M, Martin R, Heider D. Privacy-preserving decentralized learning methods for biomedical applications. Comput Struct Biotechnol J 2024; 23:3281-3287. [PMID: 39296807 PMCID: PMC11408144 DOI: 10.1016/j.csbj.2024.08.024]
Abstract
In recent years, decentralized machine learning has emerged as a significant advancement in biomedical applications, offering robust solutions for data privacy, security, and collaboration across diverse healthcare environments. In this review, we examine various decentralized learning methodologies, including federated learning, split learning, swarm learning, gossip learning, edge learning, and some of their applications in the biomedical field. We delve into the underlying principles, network topologies, and communication strategies of each approach, highlighting their advantages and limitations. Ultimately, the selection of a suitable method should be based on specific needs, infrastructures, and computational capabilities.
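The federated-averaging idea at the heart of several of the methods surveyed above can be sketched in a few lines. This is an illustrative toy, not code from the review; the function name and data layout are our own assumptions. Sites share only model parameters, never raw patient data, and a coordinator combines them weighted by local dataset size:

```python
# Minimal sketch of federated averaging (FedAvg): each client trains
# locally and sends only its parameter vector; the server aggregates
# the vectors weighted by each client's local sample count.

def fed_avg(client_weights, client_sizes):
    """Combine client parameter vectors into one global parameter vector."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

In a real deployment this would be one communication round: each clinic retrains locally on the returned global model and the cycle repeats.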
Affiliation(s)
- Mohammad Tajabadi
- Institute of Computer Science, Heinrich-Heine-University Duesseldorf, Graf-Adolf-Str. 63, Duesseldorf, 40215, North Rhine-Westphalia, Germany
- Center for Digital Medicine, Heinrich-Heine-University Duesseldorf, Moorenstr. 5, Duesseldorf, 40215, North Rhine-Westphalia, Germany
- Roman Martin
- Institute of Computer Science, Heinrich-Heine-University Duesseldorf, Graf-Adolf-Str. 63, Duesseldorf, 40215, North Rhine-Westphalia, Germany
- Center for Digital Medicine, Heinrich-Heine-University Duesseldorf, Moorenstr. 5, Duesseldorf, 40215, North Rhine-Westphalia, Germany
- Dominik Heider
- Institute of Computer Science, Heinrich-Heine-University Duesseldorf, Graf-Adolf-Str. 63, Duesseldorf, 40215, North Rhine-Westphalia, Germany
- Center for Digital Medicine, Heinrich-Heine-University Duesseldorf, Moorenstr. 5, Duesseldorf, 40215, North Rhine-Westphalia, Germany
2
Pazhanivel DB, Velu AN, Palaniappan BS. Design and Enhancement of a Fog-Enabled Air Quality Monitoring and Prediction System: An Optimized Lightweight Deep Learning Model for a Smart Fog Environmental Gateway. Sensors (Basel) 2024; 24:5069. [PMID: 39124116 PMCID: PMC11315033 DOI: 10.3390/s24155069]
Abstract
Effective air quality monitoring and forecasting are essential for safeguarding public health, protecting the environment, and promoting sustainable development in smart cities. Conventional systems are cloud-based, incur high costs, lack accurate Deep Learning (DL) models for multi-step forecasting, and fail to optimize DL models for fog nodes. To address these challenges, this paper proposes a Fog-enabled Air Quality Monitoring and Prediction (FAQMP) system that integrates the Internet of Things (IoT), Fog Computing (FC), Low-Power Wide-Area Networks (LPWANs), and DL for improved accuracy and efficiency in monitoring and forecasting air quality levels. The three-layered FAQMP system includes a low-cost Air Quality Monitoring (AQM) node transmitting data via LoRa to the Fog Computing layer and then to the cloud layer for complex processing. The Smart Fog Environmental Gateway (SFEG) in the FC layer introduces efficient Fog Intelligence by employing an optimized lightweight DL-based Sequence-to-Sequence (Seq2Seq) Gated Recurrent Unit (GRU) attention model, enabling real-time processing, accurate forecasting, and timely warnings of dangerous AQI levels while optimizing fog resource usage. The Seq2Seq GRU attention model, validated for multi-step forecasting, outperformed state-of-the-art DL methods with an average RMSE of 5.5576, MAE of 3.4975, MAPE of 19.1991%, R2 of 0.6926, and Theil's U1 of 0.1325. The model was then made lightweight and optimized using post-training quantization (PTQ), specifically dynamic range quantization, which reduced the model size to less than a quarter of the original and improved execution time by 81.53% while maintaining forecast accuracy. This optimization enables efficient deployment on resource-constrained fog nodes like the SFEG by balancing performance and computational efficiency. The FAQMP system, supported by the EnviroWeb application, provides real-time AQI updates, forecasts, and alerts, helping authorities proactively address pollution concerns, maintain air quality standards, and foster a healthier and more sustainable environment.
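The dynamic-range quantization step described above can be illustrated with a toy symmetric 8-bit scheme: weights are stored as int8 plus one float scale and dequantized at inference. This is a sketch of the general idea only; the paper's pipeline uses a framework-provided PTQ, and these function names are ours:

```python
def quantize_dynamic(weights):
    """Symmetric 8-bit quantization: int8 values plus a single float scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # guard all-zero case
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return [v * scale for v in q]
```

Storing one byte per weight instead of four is what yields the roughly 4x size reduction ("less than a quarter of the original") reported above.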
Affiliation(s)
- Anantha Narayanan Velu
- Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Coimbatore 641112, India
3
Saad M, Hefner S, Donovan S, Bernhard D, Tripathi R, Factor SA, Powell JM, Kwon H, Sameni R, Esper CD, McKay JL. Development of a Tremor Detection Algorithm for Use in an Academic Movement Disorders Center. Sensors (Basel) 2024; 24:4960. [PMID: 39124007 PMCID: PMC11314995 DOI: 10.3390/s24154960]
Abstract
Tremor, defined as an "involuntary, rhythmic, oscillatory movement of a body part", is a key feature of many neurological conditions, including Parkinson's disease and essential tremor. Clinical assessment continues to be performed by visual observation with quantification on clinical scales. Methodologies for objectively quantifying tremor are promising but remain non-standardized across centers. Our center performs full-body behavioral testing with 3D motion capture for clinical and research purposes in patients with Parkinson's disease, essential tremor, and other conditions. The objective of this study was to assess the ability of several candidate processing pipelines to identify the presence or absence of tremor in kinematic data from patients with confirmed movement disorders and to compare them to expert ratings from movement disorders specialists. We curated a database of 2272 separate kinematic data recordings from our center, each of which was contemporaneously annotated as tremor present or absent by a movement disorders physician. We compared the ability of six separate processing pipelines to recreate clinician ratings based on F1 score, in addition to accuracy, precision, and recall. Performance across algorithms was generally comparable: the average F1 score was 0.84±0.02 (mean ± SD; range 0.81-0.87). The second-highest-performing algorithm (cross-validated F1 = 0.87) was a hybrid that combined engineered features adapted from an algorithm in longstanding clinical use with a modern Support Vector Machine classifier. Taken together, our results suggest the potential to update legacy clinical decision support systems with modern machine learning classifiers to create better-performing tools.
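A typical engineered feature for this kind of tremor classifier is the fraction of kinematic signal power falling in the tremor frequency band. The sketch below is our own illustration, not the authors' pipeline; the band edges (3-8 Hz) are an assumption:

```python
import math

def tremor_band_power(signal, fs, lo=3.0, hi=8.0):
    """Fraction of spectral power between lo and hi Hz, via a naive DFT."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]          # remove DC offset
    band, total = 0.0, 0.0
    for k in range(1, n // 2):              # positive-frequency bins only
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        total += p
        if lo <= k * fs / n <= hi:
            band += p
    return band / total if total else 0.0
```

A feature like this, computed per recording, is the kind of scalar input an SVM classifier would consume.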
Affiliation(s)
- Mark Saad
- Jean and Paul Amos Parkinson’s Disease and Movement Disorders Program, Department of Neurology, School of Medicine, Emory University, Atlanta, GA 30322, USA
- Sofia Hefner
- Department of Neuroscience, Georgia Institute of Technology, Atlanta, GA 30322, USA
- Suzann Donovan
- Department of Neuroscience and Behavioral Biology, College of Arts and Sciences, Emory University, Atlanta, GA 30322, USA
- Doug Bernhard
- Jean and Paul Amos Parkinson’s Disease and Movement Disorders Program, Department of Neurology, School of Medicine, Emory University, Atlanta, GA 30322, USA
- Richa Tripathi
- Jean and Paul Amos Parkinson’s Disease and Movement Disorders Program, Department of Neurology, School of Medicine, Emory University, Atlanta, GA 30322, USA
- Stewart A. Factor
- Jean and Paul Amos Parkinson’s Disease and Movement Disorders Program, Department of Neurology, School of Medicine, Emory University, Atlanta, GA 30322, USA
- Jeanne M. Powell
- Department of Psychology, Laney Graduate School, Emory University, Atlanta, GA 30322, USA
- Hyeokhyen Kwon
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
- Reza Sameni
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
- Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30322, USA
- Christine D. Esper
- Jean and Paul Amos Parkinson’s Disease and Movement Disorders Program, Department of Neurology, School of Medicine, Emory University, Atlanta, GA 30322, USA
- J. Lucas McKay
- Jean and Paul Amos Parkinson’s Disease and Movement Disorders Program, Department of Neurology, School of Medicine, Emory University, Atlanta, GA 30322, USA
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
4
Delacour C, Carapezzi S, Abernot M, Todri-Sanial A. Energy-Performance Assessment of Oscillatory Neural Networks Based on VO2 Devices for Future Edge AI Computing. IEEE Trans Neural Netw Learn Syst 2024; 35:10045-10058. [PMID: 37022082 DOI: 10.1109/tnnls.2023.3238473]
Abstract
An oscillatory neural network (ONN) is an emerging neuromorphic architecture composed of oscillators that implement neurons and are coupled by synapses. ONNs exhibit rich dynamics and associative properties, which can be used to solve problems in the analog domain according to the "let physics compute" paradigm. For example, compact oscillators made of VO2 material are good candidates for building low-power ONN architectures dedicated to AI applications at the edge, such as pattern recognition. However, little is known about ONN scalability and performance when implemented in hardware. Before deploying an ONN, it is necessary to assess its computation time, energy consumption, performance, and accuracy for a given application. Here, we consider a VO2 oscillator as the ONN building block and perform circuit-level simulations to evaluate ONN performance at the architecture level. Notably, we investigate how ONN computation time, energy, and memory capacity scale with the number of oscillators. The ONN energy grows linearly when scaling up the network, making it suitable for large-scale integration at the edge. Furthermore, we investigate the design knobs for minimizing ONN energy. Assisted by technology computer-aided design (TCAD) simulations, we report on scaling down the dimensions of VO2 devices in crossbar (CB) geometry to decrease the oscillator voltage and energy. We benchmark the ONN against state-of-the-art architectures and observe that the ONN paradigm is a competitive, energy-efficient solution for scaled VO2 devices oscillating above 100 MHz. Finally, we present how ONNs can efficiently detect edges in images captured on low-power edge devices and compare the results with Sobel and Canny edge detectors.
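The associative dynamics of coupled oscillators can be illustrated with a Kuramoto-style phase model. This is a generic sketch, not the authors' VO2 circuit model; the coupling matrix, gain, and step size are arbitrary choices of ours:

```python
import math

def kuramoto_step(phases, coupling, k=0.5, dt=0.05):
    """One Euler step: each oscillator is pulled toward its coupled neighbours."""
    n = len(phases)
    return [
        (p + dt * k * sum(coupling[i][j] * math.sin(phases[j] - p)
                          for j in range(n))) % (2 * math.pi)
        for i, p in enumerate(phases)
    ]
```

Positive coupling drives oscillators toward in-phase synchronization, which is the mechanism that lets such a network settle into a stored pattern.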
5
Tezsezen E, Yigci D, Ahmadpour A, Tasoglu S. AI-Based Metamaterial Design. ACS Appl Mater Interfaces 2024; 16:29547-29569. [PMID: 38808674 PMCID: PMC11181287 DOI: 10.1021/acsami.4c04486]
Abstract
The use of metamaterials in various devices has revolutionized applications in optics, healthcare, acoustics, and power systems. Advancements in these fields demand novel or superior metamaterials that demonstrate targeted control of the electromagnetic, mechanical, and thermal properties of matter. Traditional design systems and methods often require manual manipulation, which is time-consuming and resource-intensive. The integration of artificial intelligence (AI) into metamaterial design can be employed to explore variant disciplines and address design bottlenecks. AI-based metamaterial design can also enable the development of novel metamaterials by optimizing design parameters beyond the reach of traditional methods. AI can be leveraged to accelerate the analysis of vast data sets as well as to better utilize limited data sets via generative models. This review covers the transformative impact of AI and AI-based metamaterial design on optics, acoustics, healthcare, and power systems. The current challenges, emerging fields, future directions, and bottlenecks within each domain are discussed.
Affiliation(s)
- Ece Tezsezen
- Graduate School of Science and Engineering, Koç University, Istanbul 34450, Türkiye
- Defne Yigci
- School of Medicine, Koç University, Istanbul 34450, Türkiye
- Abdollah Ahmadpour
- Department of Mechanical Engineering, Koç University Sariyer, Istanbul 34450, Türkiye
- Savas Tasoglu
- Department of Mechanical Engineering, Koç University Sariyer, Istanbul 34450, Türkiye
- Koç University Translational Medicine Research Center (KUTTAM), Koç University, Istanbul 34450, Türkiye
- Bogaziçi Institute of Biomedical Engineering, Bogaziçi University, Istanbul 34684, Türkiye
- Koç University Arçelik Research Center for Creative Industries (KUAR), Koç University, Istanbul 34450, Türkiye
6
Suppiah R, Noori K, Abidi K, Sharma A. Real-time edge computing design for physiological signal analysis and classification. Biomed Phys Eng Express 2024; 10:045034. [PMID: 38781938 DOI: 10.1088/2057-1976/ad4f8d]
Abstract
Physiological signals like electromyography (EMG) and electroencephalography (EEG) can be analysed and decoded to provide vital information usable in a range of applications, such as rehabilitative robotics and remote device control. Acquiring and using these signals requires many compute-intensive tasks, including signal acquisition, signal processing, feature extraction, and machine learning. Performing these activities on a PC-based system with well-established software tools like Python and MATLAB is the first step in designing solutions based upon these signals. In the application domain of rehabilitative robotics, one of the main goals is to develop solutions that can be deployed for the use of individuals who need them in improving their Activities of Daily Living (ADL). To achieve this objective, the final solution must be deployed onto an embedded platform that allows high portability and ease of use. Porting a solution from a PC-based environment onto a resource-constrained one such as a microcontroller poses many challenges. In this research paper, we propose the use of an ARM-based Cortex-M4 processor. We explore the various stages of the design, from initial testing and validation to deployment of the proposed algorithm on the controller, and further investigate the use of cepstrum features to obtain high classification accuracy with minimal input features. The proposed solution achieves an average classification accuracy of 95.34% for all five classes in the EMG domain and 96.16% in the EEG domain on the embedded board.
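Cepstral features of the kind used above are obtained as the inverse transform of the log magnitude spectrum of a signal frame. The following is a minimal pure-Python sketch for illustration; the paper's exact feature set, frame length, and windowing are not reproduced here:

```python
import cmath
import math

def cepstral_coeffs(x, n_coeffs=5):
    """First real-cepstrum coefficients of a signal frame (naive DFT version)."""
    n = len(x)
    spectrum = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    log_mag = [math.log(abs(s) + 1e-12) for s in spectrum]  # avoid log(0)
    # Inverse cosine transform of the log spectrum, truncated to n_coeffs.
    return [sum(log_mag[k] * math.cos(2 * math.pi * k * q / n)
                for k in range(n)) / n
            for q in range(n_coeffs)]
```

Keeping only the first few coefficients is what makes cepstral features attractive on a microcontroller: a compact input vector for the classifier.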
Affiliation(s)
- Ravi Suppiah
- Electrical and Electronic Engineering, Newcastle University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Kim Noori
- Electrical and Electronic Engineering, Newcastle University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Electrical Power Engineering, Newcastle University in Singapore, Singapore, 609607, Singapore
- Purdue Polytechnic Institute, Purdue University, West Lafayette, IN, 47907, United States of America
- Khalid Abidi
- Electrical and Electronic Engineering, Newcastle University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Electrical Power Engineering, Newcastle University in Singapore, Singapore, 609607, Singapore
- Anurag Sharma
- Electrical and Electronic Engineering, Newcastle University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Electrical Power Engineering, Newcastle University in Singapore, Singapore, 609607, Singapore
7
Mahbub T, Bhagwagar A, Chand P, Zualkernan I, Judas J, Dghaym D. Bat2Web: A Framework for Real-Time Classification of Bat Species Echolocation Signals Using Audio Sensor Data. Sensors (Basel) 2024; 24:2899. [PMID: 38733008 PMCID: PMC11086295 DOI: 10.3390/s24092899]
Abstract
Bats play a pivotal role in maintaining ecological balance, and studying their behaviors offers vital insights into environmental health and aids in conservation efforts. Determining the presence of various bat species in an environment is essential for many bat studies. Specialized audio sensors can be used to record bat echolocation calls that can then be used to identify bat species. However, the complexity of bat calls presents a significant challenge, necessitating expert analysis and extensive time for accurate interpretation. Recent advances in neural networks can help identify bat species automatically from their echolocation calls. Such neural networks can be integrated into a complete end-to-end system that leverages recent internet of things (IoT) technologies with long-range, low-powered communication protocols to implement automated acoustical monitoring. This paper presents the design and implementation of such a system that uses a tiny neural network for interpreting sensor data derived from bat echolocation signals. A highly compact convolutional neural network (CNN) model was developed that demonstrated excellent performance in bat species identification, achieving an F1-score of 0.9578 and an accuracy rate of 97.5%. The neural network was deployed, and its performance was evaluated on various alternative edge devices, including the NVIDIA Jetson Nano and Google Coral.
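Before a call recording reaches a CNN like the one above, it is typically sliced into short overlapping frames for spectrogram computation. The sketch below is our own illustration of that preprocessing step; the frame and hop sizes are arbitrary, not the paper's values:

```python
def frame_signal(samples, frame_len, hop):
    """Slice a 1-D audio signal into overlapping fixed-length frames."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]
```

Each frame would then be transformed (e.g., to a spectrogram column) and the stacked result fed to the classifier.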
Affiliation(s)
- Taslim Mahbub
- Department of Computer Science and Engineering, American University of Sharjah, Sharjah 26666, United Arab Emirates
- Azadan Bhagwagar
- Department of Computer Science and Engineering, American University of Sharjah, Sharjah 26666, United Arab Emirates
- Priyanka Chand
- Department of Computer Science and Engineering, American University of Sharjah, Sharjah 26666, United Arab Emirates
- Imran Zualkernan
- Department of Computer Science and Engineering, American University of Sharjah, Sharjah 26666, United Arab Emirates
- Jacky Judas
- Nature & Ecosystem Restoration, Soudah Development, Riyadh 13519, Saudi Arabia
- Dana Dghaym
- Department of Computer Science and Engineering, American University of Sharjah, Sharjah 26666, United Arab Emirates
8
Nguyen TPV, Yang W, Tang Z, Xia X, Mullens AB, Dean JA, Li Y. Lightweight federated learning for STIs/HIV prediction. Sci Rep 2024; 14:6560. [PMID: 38503789 PMCID: PMC10950866 DOI: 10.1038/s41598-024-56115-0]
Abstract
This paper presents a solution that prioritises high privacy protection and improves communication throughput for predicting the risk of sexually transmissible infections/human immunodeficiency virus (STIs/HIV). The approach utilised Federated Learning (FL) to construct a model from multiple clinics and key stakeholders. FL ensured that only models were shared between clinics, minimising the risk of personal information leakage. Additionally, an algorithm was explored on the FL manager side to construct a global model that aligns with the communication status of the system. Our proposed method introduced Random Forest Federated Learning for assessing the risk of STIs/HIV, incorporating a flexible aggregation process that can be adjusted to the capacity of the communication system. Experimental results demonstrated the significant potential of the solution for estimating STIs/HIV risk. In comparison with recent studies, our approach yielded superior results in terms of AUC (0.97) and accuracy (93%). Despite these promising findings, a limitation of the study lies in its reliance on men's self-reported data covering sensitive content, which may be subject to participant bias. Future research could evaluate the performance of the proposed framework in partnership with high-risk populations (e.g., men who have sex with men) to provide a more comprehensive understanding of its impact and ultimately aim to improve health outcomes and health service optimisation.
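Forest-based federated aggregation can be caricatured as pooling locally trained trees into one global ensemble that votes at prediction time. This toy represents trees as plain callables and is ours for illustration; the paper's actual aggregation and weighting scheme is not shown here:

```python
def merge_forests(client_forests):
    """Global model = union of trees trained at each clinic; no raw data moves."""
    return [tree for forest in client_forests for tree in forest]

def predict(forest, x):
    """Majority vote over all pooled trees (1 = at-risk, 0 = not)."""
    votes = sum(tree(x) for tree in forest)
    return 1 if 2 * votes >= len(forest) else 0
```

Because a forest is just a bag of trees, clinics can contribute different numbers of trees, which is one way an aggregator can adapt to each site's communication capacity.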
Affiliation(s)
- Thi Phuoc Van Nguyen
- School of Mathematics, Physics and Computing, Centre for Health Research, University of Southern Queensland, Toowoomba Campus, Toowoomba, 4350, QLD, Australia
- Wencheng Yang
- School of Mathematics, Physics and Computing, Centre for Health Research, University of Southern Queensland, Toowoomba Campus, Toowoomba, 4350, QLD, Australia
- Zhaohui Tang
- School of Mathematics, Physics and Computing, Centre for Health Research, University of Southern Queensland, Toowoomba Campus, Toowoomba, 4350, QLD, Australia
- Xiaoyu Xia
- School of Computing Technologies, RMIT University, GPO Box 2476, Melbourne, 3001, VIC, Australia
- Amy B Mullens
- School of Psychology and Wellbeing, Institute for Resilient Regions, Centre for Health Research, University of Southern Queensland, Ipswich Campus, Ipswich, 4305, Australia
- Judith A Dean
- School of Public Health, Faculty of Medicine, The University of Queensland, Herston Road, Brisbane, 4006, QLD, Australia
- Yan Li
- School of Mathematics, Physics and Computing, Centre for Health Research, University of Southern Queensland, Toowoomba Campus, Toowoomba, 4350, QLD, Australia
9
Pyun KR, Kwon K, Yoo MJ, Kim KK, Gong D, Yeo WH, Han S, Ko SH. Machine-learned wearable sensors for real-time hand-motion recognition: toward practical applications. Natl Sci Rev 2024; 11:nwad298. [PMID: 38213520 PMCID: PMC10776364 DOI: 10.1093/nsr/nwad298]
Abstract
Soft electromechanical sensors have led to a new paradigm of electronic devices for novel motion-based wearable applications in our daily lives. However, the vast amount of random and unidentified signals generated by complex body motions has hindered the precise recognition and practical application of this technology. Recent advancements in artificial-intelligence technology have enabled significant strides in extracting features from massive and intricate data sets, thereby presenting a breakthrough in utilizing wearable sensors for practical applications. Beyond traditional machine-learning techniques for classifying simple gestures, advanced machine-learning algorithms have been developed to handle more complex and nuanced motion-based tasks with restricted training data sets. These techniques have improved perception capabilities, and machine-learned wearable soft sensors have thus enabled accurate and rapid human-gesture recognition, providing real-time feedback to users. This forms a crucial component of future wearable electronics, contributing to a robust human-machine interface. In this review, we provide a comprehensive summary covering materials, structures, and machine-learning algorithms for hand-gesture recognition, as well as possible practical applications of machine-learned wearable electromechanical sensors.
Affiliation(s)
- Kyung Rok Pyun
- Department of Mechanical Engineering, Seoul National University, Seoul 08826, South Korea
- Kangkyu Kwon
- Department of Mechanical Engineering, Seoul National University, Seoul 08826, South Korea
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Myung Jin Yoo
- Department of Mechanical Engineering, Seoul National University, Seoul 08826, South Korea
- Kyun Kyu Kim
- Department of Chemical Engineering, Stanford University, Stanford, CA 94305, USA
- Dohyeon Gong
- Department of Mechanical Engineering, Ajou University, Suwon-si 16499, South Korea
- Woon-Hong Yeo
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Seungyong Han
- Department of Mechanical Engineering, Ajou University, Suwon-si 16499, South Korea
- Seung Hwan Ko
- Department of Mechanical Engineering, Seoul National University, Seoul 08826, South Korea
- Institute of Advanced Machinery and Design (SNU-IAMD), Seoul National University, Seoul 08826, South Korea
10
Atanane O, Mourhir A, Benamar N, Zennaro M. Smart Buildings: Water Leakage Detection Using TinyML. Sensors (Basel) 2023; 23:9210. [PMID: 38005596 PMCID: PMC10675406 DOI: 10.3390/s23229210]
Abstract
The escalating global water usage and the increasing strain on major cities due to water shortages highlight the critical need for efficient water management practices. In water-stressed regions worldwide, significant water wastage is primarily attributed to leakages, inefficient use, and aging infrastructure. Undetected water leakages in buildings' pipelines contribute to the water waste problem. To address this issue, an effective water leak detection method is required. In this paper, we explore the application of edge computing in smart buildings to enhance water management. By integrating sensors and embedded machine learning models, known as TinyML, smart water management systems can collect real-time data, analyze it, and make accurate decisions for efficient water utilization. The transition to TinyML enables faster and more cost-effective local decision-making, reducing the dependence on centralized entities. In this work, we propose a solution that can be adapted for effective leakage detection in real-world scenarios with minimal human intervention using TinyML. We follow an approach similar to a typical machine learning lifecycle in production, spanning stages including data collection, training, hyperparameter tuning, offline evaluation, and model optimization for on-device resource efficiency before deployment. We considered an existing water leakage acoustic dataset for polyvinyl chloride pipelines. To prepare the acoustic data for analysis, we performed preprocessing to transform it into scalograms. We devised a water leak detection method by applying transfer learning to five distinct Convolutional Neural Network (CNN) variants, namely EfficientNet, ResNet, AlexNet, MobileNet V1, and MobileNet V2. The CNN models were able to detect leakages, with a maximum testing accuracy, recall, precision, and F1 score of 97.45%, 98.57%, 96.70%, and 97.63%, respectively, observed using the EfficientNet model. To enable seamless deployment on the Arduino Nano 33 BLE edge device, the EfficientNet model was compressed using quantization, resulting in a low inference time of 1932 ms, a peak RAM usage of 255.3 kilobytes, and a flash usage of merely 48.7 kilobytes.
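The transfer-learning pattern used above, a pretrained backbone reused as a frozen feature extractor with only a small head retrained on the new task, can be caricatured in a few lines. This is an illustrative toy of ours (a perceptron head on a 2-D feature vector), not the paper's CNN pipeline:

```python
def train_head(features_fn, data, lr=0.1, epochs=20):
    """Caricature of transfer learning: a frozen feature extractor
    (features_fn) feeds a small trainable perceptron head."""
    w, b = [0.0, 0.0], 0.0           # head weights for a 2-D feature vector
    for _ in range(epochs):
        for x, y in data:
            f = features_fn(x)       # "frozen backbone": never updated
            pred = 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * fi for wi, fi in zip(w, f)]
            b += lr * err
    return w, b
```

Only the head's handful of parameters are trained, which is why transfer learning needs far less labeled data than training a CNN from scratch.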
Affiliation(s)
- Othmane Atanane
- School of Science and Engineering, Al Akhawayn University in Ifrane, P.O. Box 104, Hassan II Avenue, Ifrane 53000, Morocco
- Asmaa Mourhir
- School of Science and Engineering, Al Akhawayn University in Ifrane, P.O. Box 104, Hassan II Avenue, Ifrane 53000, Morocco
- Nabil Benamar
- School of Science and Engineering, Al Akhawayn University in Ifrane, P.O. Box 104, Hassan II Avenue, Ifrane 53000, Morocco
- School of Technology, Moulay Ismail University of Meknes, Meknes 50050, Morocco
- Marco Zennaro
- The Abdus Salam International Centre for Theoretical Physics, 34151 Trieste, Italy
11
Shumba AT, Montanaro T, Sergi I, Bramanti A, Ciccarelli M, Rispoli A, Carrizzo A, De Vittorio M, Patrono L. Wearable Technologies and AI at the Far Edge for Chronic Heart Failure Prevention and Management: A Systematic Review and Prospects. Sensors (Basel) 2023; 23:6896. [PMID: 37571678 PMCID: PMC10422393 DOI: 10.3390/s23156896]
Abstract
Smart wearable devices enable personalized at-home healthcare by unobtrusively collecting patient health data and facilitating the development of intelligent platforms to support patient care and management. The accurate analysis of data obtained from wearable devices is crucial for interpreting and contextualizing health data and facilitating the reliable diagnosis and management of critical and chronic diseases. The combination of edge computing and artificial intelligence has provided real-time, time-critical, and privacy-preserving data analysis solutions. However, based on the envisioned service, evaluating the additive value of edge intelligence to the overall architecture is essential before implementation. This article aims to comprehensively analyze the current state of the art on smart health infrastructures implementing wearable and AI technologies at the far edge to support patients with chronic heart failure (CHF). In particular, we highlight the contribution of edge intelligence in supporting the integration of wearable devices into IoT-aware technology infrastructures that provide services for patient diagnosis and management. We also offer an in-depth analysis of open challenges and provide potential solutions to facilitate the integration of wearable devices with edge AI solutions to provide innovative technological infrastructures and interactive services for patients and doctors.
Affiliation(s)
- Angela-Tafadzwa Shumba: Department of Engineering for Innovation, University of Salento, 73100 Lecce, Italy; Istituto Italiano di Tecnologia, Centre for Biomolecular Nanotechnologies, 73010 Arnesano, Italy
- Teodoro Montanaro: Department of Engineering for Innovation, University of Salento, 73100 Lecce, Italy
- Ilaria Sergi: Department of Engineering for Innovation, University of Salento, 73100 Lecce, Italy
- Alessia Bramanti: Dipartimento di Medicina, Chirurgia e Odontoiatria “Scuola Medica Salernitana” (DIPMED), University of Salerno, 84081 Baronissi, Italy
- Michele Ciccarelli: Dipartimento di Medicina, Chirurgia e Odontoiatria “Scuola Medica Salernitana” (DIPMED), University of Salerno, 84081 Baronissi, Italy
- Antonella Rispoli: Dipartimento di Medicina, Chirurgia e Odontoiatria “Scuola Medica Salernitana” (DIPMED), University of Salerno, 84081 Baronissi, Italy
- Albino Carrizzo: Dipartimento di Medicina, Chirurgia e Odontoiatria “Scuola Medica Salernitana” (DIPMED), University of Salerno, 84081 Baronissi, Italy
- Massimo De Vittorio: Department of Engineering for Innovation, University of Salento, 73100 Lecce, Italy; Istituto Italiano di Tecnologia, Centre for Biomolecular Nanotechnologies, 73010 Arnesano, Italy
- Luigi Patrono: Department of Engineering for Innovation, University of Salento, 73100 Lecce, Italy
12
Alajlan NN, Ibrahim DM. DDD TinyML: A TinyML-Based Driver Drowsiness Detection Model Using Deep Learning. Sensors (Basel) 2023; 23:5696. [PMID: 37420860 DOI: 10.3390/s23125696] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 05/12/2023] [Revised: 06/07/2023] [Accepted: 06/14/2023] [Indexed: 07/09/2023]
Abstract
Driver drowsiness is one of the main causes of traffic accidents today. In recent years, driver drowsiness detection has struggled to integrate deep learning (DL) with Internet-of-Things (IoT) devices, because the limited resources of IoT devices cannot accommodate DL models that demand large storage and computation. Real-time driver drowsiness detection applications therefore face the twin requirements of short latency and lightweight computation. To this end, we applied Tiny Machine Learning (TinyML) to a driver drowsiness detection case study. In this paper, we first present an overview of TinyML. After conducting some preliminary experiments, we proposed five lightweight DL models that can be deployed on a microcontroller: three DL models (SqueezeNet, AlexNet, and a CNN) and two pretrained models (MobileNet-V2 and MobileNet-V3), compared to find the best model in terms of size and accuracy. We then optimized the DL models using three quantization methods: quantization-aware training (QAT), full-integer quantization (FIQ), and dynamic range quantization (DRQ). In terms of model size, the CNN model achieved the smallest size of 0.05 MB using the DRQ method, followed by SqueezeNet, AlexNet, MobileNet-V3, and MobileNet-V2, with 0.141 MB, 0.58 MB, 1.16 MB, and 1.55 MB, respectively. In terms of accuracy after optimization, the MobileNet-V2 model with DRQ outperformed the other models at 0.9964, followed by the SqueezeNet and AlexNet models, with 0.9951 and 0.9924, respectively, also using DRQ.
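The dynamic range quantization (DRQ) idea, storing weights as int8 with a single float scale that is applied again at inference, can be illustrated outside any framework. The NumPy sketch below shows only the principle; it is not the TensorFlow Lite implementation the study used:

```python
import numpy as np

def quantize_dynamic_range(w):
    """Symmetric per-tensor int8 quantization: keep one float scale per tensor."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights at inference time."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)  # stand-in layer weights
q, scale = quantize_dynamic_range(w)

storage_ratio = w.nbytes / q.nbytes               # int8 is 4x smaller than float32
max_error = float(np.max(np.abs(dequantize(q, scale) - w)))
```

The 4x storage reduction is the mechanism behind model-size numbers like those reported above, and accuracy survives as long as the round-trip error, bounded by half a quantization step, stays small relative to the weight distribution.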
Affiliation(s)
- Norah N Alajlan: Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
- Dina M Ibrahim: Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia; Department of Computers and Control Engineering, Faculty of Engineering, Tanta University, Tanta 31733, Egypt
13
Zhang N, Wood O, Yang Z, Xie J. AI-Guided Computing Insights into a Thermostat Monitoring Neonatal Intensive Care Unit (NICU). Sensors (Basel) 2023; 23:4492. [PMID: 37177696 PMCID: PMC10181714 DOI: 10.3390/s23094492] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/15/2023] [Revised: 04/19/2023] [Accepted: 05/03/2023] [Indexed: 05/15/2023]
Abstract
In any healthcare setting, it is important to monitor and control airflow and ventilation with a thermostat. Computational fluid dynamics (CFD) simulations can be carried out to investigate the airflow and heat transfer taking place inside a neonatal intensive care unit (NICU). In the present study, the NICU is modeled on the realistic dimensions of a single-patient room, in compliance with the appropriate square footage allocated per incubator. The physics of flow in the NICU is predicted from the Navier-Stokes conservation equations for an incompressible flow, with thermophysical properties suited to the indoor climate. The results show sensible flow structures and heat transfer, as expected for an indoor climate with this configuration. Furthermore, a machine learning (ML) model has been adopted that takes the important geometric parameter values from our CFD settings as input and provides accurate real-time predictions of the thermal performance (i.e., temperature evaluation) associated with a given design. Besides the geometric parameters, three thermophysical variables are of interest: the mass flow rate (i.e., inlet velocity), the heat flux of the radiator (i.e., heat source), and the temperature gradient caused by convection. These thermophysical variables recover the physics of convective flows and the heat transfer throughout the incubator. Importantly, the AI model is trained not only to improve the turbulence modeling but also to capture the large temperature gradient between the infant and the surrounding air. These physics-informed (PI) computing insights make the AI model more general, reproducing the fluid flow and heat transfer with high numerical accuracy. It can be concluded that AI can aid in dealing with large datasets such as those produced in the NICU and, in turn, ML can identify patterns in the data and support sensor readings in healthcare.
Affiliation(s)
- Ning Zhang: Faculty of Arts and Sciences, Beijing Normal University at Zhuhai, Zhuhai 519087, China
- Olivia Wood: Galliford Try, Staffordshire Technology Park, Stafford ST18 0GP, UK
- Zhiyin Yang: School of Computing and Engineering, University of Derby, Derby DE22 3AW, UK
- Jianfei Xie: School of Computing and Engineering, University of Derby, Derby DE22 3AW, UK
14
Zhu Y, Li J, Kim J, Li S, Zhao Y, Bahari J, Eliahoo P, Li G, Kawakita S, Haghniaz R, Gao X, Falcone N, Ermis M, Kang H, Liu H, Kim H, Tabish T, Yu H, Li B, Akbari M, Emaminejad S, Khademhosseini A. Skin-interfaced electronics: A promising and intelligent paradigm for personalized healthcare. Biomaterials 2023; 296:122075. [PMID: 36931103 PMCID: PMC10085866 DOI: 10.1016/j.biomaterials.2023.122075] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Received: 10/26/2022] [Revised: 02/23/2023] [Accepted: 03/02/2023] [Indexed: 03/09/2023]
Abstract
Skin-interfaced electronics (skintronics) have received considerable attention due to their thinness, skin-like mechanical softness, excellent conformability, and multifunctional integration. Current advancements in skintronics have enabled health monitoring and digital medicine. Particularly, skintronics offer a personalized platform for early-stage disease diagnosis and treatment. In this comprehensive review, we discuss (1) the state-of-the-art skintronic devices, (2) material selections and platform considerations of future skintronics toward intelligent healthcare, (3) device fabrication and system integrations of skintronics, (4) an overview of the skintronic platform for personalized healthcare applications, including biosensing as well as wound healing, sleep monitoring, the assessment of SARS-CoV-2, and augmented reality-/virtual reality-enhanced human-machine interfaces, and (5) current challenges and future opportunities of skintronics, including their potential for clinical translation and commercialization. The field of skintronics will not only minimize physical and physiological mismatches with the skin but also shift the paradigm in intelligent and personalized healthcare and offer unprecedented promise to revolutionize conventional medical practices.
Affiliation(s)
- Yangzhi Zhu: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States
- Jinghang Li: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States
- Jinjoo Kim: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States
- Shaopei Li: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States
- Yichao Zhao: Interconnected and Integrated Bioelectronics Lab, Department of Electrical and Computer Engineering, and Materials Science and Engineering, University of California, Los Angeles, CA, 90095, United States
- Jamal Bahari: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States
- Payam Eliahoo: Biomedical Engineering Department, University of Southern California, Los Angeles, CA, 90007, United States
- Guanghui Li: The Centre of Nanoscale Science and Technology and Key Laboratory of Functional Polymer Materials, Institute of Polymer Chemistry, College of Chemistry, Nankai University, Tianjin, 300071, China; Renewable Energy Conversion and Storage Center (RECAST), Nankai University, Tianjin, 300071, China
- Satoru Kawakita: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States
- Reihaneh Haghniaz: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States
- Xiaoxiang Gao: Department of Nanoengineering, University of California, San Diego, La Jolla, CA, 92093, United States
- Natashya Falcone: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States
- Menekse Ermis: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States
- Heemin Kang: Department of Materials Science and Engineering, Korea University, Seoul, 02841, Republic of Korea
- Hao Liu: Bioinspired Engineering and Biomechanics Center (BEBC), Xi'an Jiaotong University, Xi'an, 710049, PR China
- HanJun Kim: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States; College of Pharmacy, Korea University, Sejong, 30019, Republic of Korea
- Tanveer Tabish: Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, OX3 7BN, United Kingdom
- Haidong Yu: Frontiers Science Center for Flexible Electronics, Xi'an Institute of Flexible Electronics (IFE) and Xi'an Institute of Biomedical Materials & Engineering, Northwestern Polytechnical University, Xi'an, 710072, PR China
- Bingbing Li: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States; Department of Manufacturing Systems Engineering and Management, California State University, Northridge, CA, 91330, United States
- Mohsen Akbari: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States; Laboratory for Innovation in Microengineering (LiME), Department of Mechanical Engineering, Center for Biomedical Research, University of Victoria, Victoria, BC V8P 2C5, Canada
- Sam Emaminejad: Interconnected and Integrated Bioelectronics Lab, Department of Electrical and Computer Engineering, and Materials Science and Engineering, University of California, Los Angeles, CA, 90095, United States
- Ali Khademhosseini: Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, United States
15
Kim MC, Lee JH, Wang DH, Lee IS. Induction Motor Fault Diagnosis Using Support Vector Machine, Neural Networks, and Boosting Methods. Sensors (Basel) 2023; 23:2585. [PMID: 36904787 PMCID: PMC10007536 DOI: 10.3390/s23052585] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Received: 02/01/2023] [Revised: 02/21/2023] [Accepted: 02/23/2023] [Indexed: 06/12/2023]
Abstract
Induction motors are robust and cost-effective; thus, they are commonly used as power sources in various industrial applications. However, because of the characteristics of induction motors, industrial processes can stop when motor failures occur. Research is therefore required to realize the quick and accurate diagnosis of faults in induction motors. In this study, we constructed an induction motor simulator with normal, rotor-failure, and bearing-failure states. Using this simulator, 1240 vibration datasets, each comprising 1024 data samples, were obtained for each state. Fault diagnosis was then performed on the acquired data using support vector machine, multilayer neural network, convolutional neural network, gradient boosting machine, and XGBoost machine learning models. The diagnostic accuracies and calculation speeds of these models were verified via stratified K-fold cross-validation. In addition, a graphical user interface was designed and implemented for the proposed fault diagnosis technique. The experimental results demonstrate that the proposed fault diagnosis technique is suitable for diagnosing faults in induction motors.
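Stratified K-fold cross-validation, as used for the verification above, partitions the data so that every test fold preserves the class balance of the three motor states. A minimal pure-NumPy splitter sketches the idea (illustrative only; the study presumably relied on a standard library implementation, and the dataset sizes below are hypothetical):

```python
import numpy as np

def stratified_kfold(labels, k, seed=0):
    """Yield (train_idx, test_idx) pairs whose test folds preserve the
    per-class proportions of `labels`."""
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    # Shuffle each class separately, then deal its samples across the folds
    for cls in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == cls))
        for i, chunk in enumerate(np.array_split(idx, k)):
            folds[i].extend(chunk.tolist())
    for i in range(k):
        test = np.sort(folds[i])
        train = np.sort([j for f in folds[:i] + folds[i + 1:] for j in f])
        yield train, test

# Hypothetical balanced dataset: 40 recordings per motor state
labels = np.repeat([0, 1, 2], 40)   # 0 = normal, 1 = rotor fault, 2 = bearing fault
splits = list(stratified_kfold(labels, 5))
```

Because each fold mirrors the class proportions, per-fold accuracy estimates are not skewed toward whichever fault state happens to dominate a random split.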
Affiliation(s)
- In-Soo Lee: Correspondence: ; Tel.: +82-10-5312-5324
16
Mialland A, Atallah I, Bonvilain A. Toward a robust swallowing detection for an implantable active artificial larynx: a survey. Med Biol Eng Comput 2023; 61:1299-1327. [PMID: 36792845 DOI: 10.1007/s11517-023-02772-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/17/2022] [Accepted: 01/04/2023] [Indexed: 02/17/2023]
Abstract
Total laryngectomy consists in the removal of the larynx and is intended as a curative treatment for laryngeal cancer, but it leaves the patient with no possibility of breathing, talking, or swallowing normally anymore. A tracheostomy is created to restore breathing through the throat, but the aero-digestive tracts are permanently separated, and the air no longer passes through the nasal tracts, which provided filtration, warming, humidification, olfaction, and acceleration of the air for better tissue oxygenation. As for phonation restoration, various techniques allow the patient to talk again. The main one is a tracheo-esophageal valve prosthesis that passes air from the esophagus to the pharynx and makes it vibrate, allowing speech through articulation. Finally, swallowing is possible through the original tract, as it is now isolated from the trachea. Many methods exist to detect and assess swallowing, but none is intended as a definitive restoration technique for the natural airway that would permanently close the tracheostomy and avoid its adverse effects. In addition, these methods are non-invasive and lack detection accuracy. Effective early detection of swallowing would make it possible to further develop an implantable active artificial larynx and therefore restore the aero-digestive tracts. A previous attempt was made with an artificial larynx implanted in 2012, but no active detection was included and the system was completely mechanical. This led to residues in the airway because of the imperfect sealing of the mechanism. Active swallowing detection coupled with indwelling measurements would thus likely add significant reliability to such a system, as it would allow an artificial larynx to be actively closed. So, after a brief explanation of the swallowing mechanism, this survey first provides a detailed consideration of the anatomical region involved in swallowing, from a detection perspective. Second, the swallowing mechanism following total laryngectomy surgery is detailed. Third, the current non-invasive swallowing detection techniques and their limitations are discussed. Finally, the previous points are examined with regard to the inherent requirements for an effective swallowing detection for an artificial larynx.
Affiliation(s)
- Adrien Mialland: Institute of Engineering and Management Univ. Grenoble Alpes, Univ. Grenoble Alpes, CNRS, Grenoble INP, Gipsa-lab, 38000, Grenoble, France
- Ihab Atallah: Institute of Engineering and Management Univ. Grenoble Alpes, Otorhinolaryngology, CHU Grenoble Alpes, 38700, La Tronche, France
- Agnès Bonvilain: Institute of Engineering and Management Univ. Grenoble Alpes, Univ. Grenoble Alpes, CNRS, Grenoble INP, Gipsa-lab, 38000, Grenoble, France
17
Rodriguez-Conde I, Campos C, Fdez-Riverola F. Horizontally Distributed Inference of Deep Neural Networks for AI-Enabled IoT. Sensors (Basel) 2023; 23:1911. [PMID: 36850508 PMCID: PMC9958567 DOI: 10.3390/s23041911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/31/2022] [Revised: 02/02/2023] [Accepted: 02/05/2023] [Indexed: 06/18/2023]
Abstract
Motivated by the pervasiveness of artificial intelligence (AI) and the Internet of Things (IoT) in the current "smart everything" scenario, this article provides a comprehensive overview of the most recent research at the intersection of both domains. It focuses on the design and development of mechanisms for collaborative inference across edge devices, towards the in situ execution of highly complex state-of-the-art deep neural networks (DNNs) despite the resource-constrained nature of such infrastructures. In particular, the review discusses the most salient approaches conceived along those lines, elaborating on the specificities of the partitioning schemes and the parallelism paradigms explored. It provides an organized, schematic discussion of the underlying workflows and associated communication patterns, as well as the architectural aspects of the DNNs that have driven the design of such techniques, and it highlights both the primary challenges encountered at the design and operational levels and the specific adjustments or enhancements explored in response to them.
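One partitioning scheme in this line of work, horizontal (output-neuron) partitioning of a fully connected layer across devices, can be sketched in a few lines of NumPy. The device count and layer sizes below are hypothetical, and real systems must also handle the communication of the partial results:

```python
import numpy as np

def device_forward(x, w_shard, b_shard):
    """Each edge device computes only its assigned slice of output neurons."""
    return x @ w_shard + b_shard

rng = np.random.default_rng(1)
x = rng.normal(size=(1, 128))                    # input activations
W = rng.normal(size=(128, 96))                   # full layer weights
b = rng.normal(size=96)

# Partition the layer's 96 output neurons across 3 devices, then gather
shards = np.split(np.arange(96), 3)
partials = [device_forward(x, W[:, s], b[s]) for s in shards]
y = np.concatenate(partials, axis=1)             # gather step
```

The gathered result is numerically identical to the undistributed layer, which is why this form of parallelism trades only communication cost, not accuracy.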
Affiliation(s)
- Ivan Rodriguez-Conde: Department of Computer Science, University of Arkansas at Little Rock, 2801 South University Avenue, Little Rock, AR 72204, USA
- Celso Campos: Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- Florentino Fdez-Riverola: CINBIO, Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain; SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
18
Surianarayanan C, Lawrence JJ, Chelliah PR, Prakash E, Hewage C. A Survey on Optimization Techniques for Edge Artificial Intelligence (AI). Sensors (Basel) 2023; 23:1279. [PMID: 36772319 PMCID: PMC9919555 DOI: 10.3390/s23031279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/31/2022] [Revised: 01/12/2023] [Accepted: 01/19/2023] [Indexed: 06/18/2023]
Abstract
Artificial intelligence (AI) models are being produced and used to solve a variety of current and future business and technical problems. Therefore, AI model engineering processes, platforms, and products are acquiring special significance across industry verticals. To achieve deeper automation, the number of data features used in generating highly promising and productive AI models is large, and hence the resulting AI models are bulky. Such heavyweight models consume a lot of computation, storage, networking, and energy resources. On the other hand, AI models are increasingly being deployed in IoT devices to ensure real-time knowledge discovery and dissemination. Real-time insights are of paramount importance in producing and releasing real-time, intelligent services and applications. Thus, edge intelligence through on-device data processing has laid a stimulating foundation for real-time intelligent enterprises and environments. With these emerging requirements, the focus has turned towards competent and cognitive techniques for maximally compressing huge AI models without sacrificing model performance. AI researchers have accordingly come up with a number of powerful optimization techniques and tools. This paper digs deep to describe model optimization at different levels and layers and, having surveyed the optimization methods, highlights the importance of an enabling AI model optimization framework.
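Magnitude pruning is one of the compression techniques such surveys typically cover: the smallest-magnitude weights are zeroed so the model can be stored and executed sparsely. The NumPy sketch below is illustrative only; the paper surveys optimization at many levels rather than prescribing this particular code:

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the `sparsity` fraction of smallest-magnitude weights."""
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    mask = np.abs(w) > threshold       # keep only weights above the cutoff
    return w * mask, mask

rng = np.random.default_rng(2)
w = rng.normal(size=(100, 100))        # stand-in dense layer
pruned, mask = magnitude_prune(w, 0.9) # keep only the largest 10% of weights
```

Pruning composes with quantization and other techniques; the survivors are unchanged, so any accuracy loss comes solely from the removed small-magnitude weights and is commonly recovered by brief fine-tuning.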
Affiliation(s)
- Chellammal Surianarayanan: Centre for Distance and Online Education, Bharathidasan University, Tiruchirappalli 620024, Tamilnadu, India
- Pethuru Raj Chelliah: Edge AI Division, Reliance Jio Platforms Ltd., Bangalore 560103, Karnataka, India
- Edmond Prakash: Research Center for Creative Arts, University for the Creative Arts (UCA), Farnham GU9 7DS, UK
- Chaminda Hewage: Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff CF5 2YB, UK
19
Homrighausen J, Horsthemke L, Pogorzelski J, Trinschek S, Glösekötter P, Gregor M. Edge-Machine-Learning-Assisted Robust Magnetometer Based on Randomly Oriented NV-Ensembles in Diamond. Sensors (Basel) 2023; 23:1119. [PMID: 36772156 PMCID: PMC9920683 DOI: 10.3390/s23031119] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 12/02/2022] [Revised: 01/06/2023] [Accepted: 01/12/2023] [Indexed: 06/18/2023]
Abstract
Quantum magnetometry based on optically detected magnetic resonance (ODMR) of nitrogen vacancy centers in nano- or micro-diamonds is a promising technology for precise magnetic-field sensors. Here, we propose a new, low-cost and stand-alone sensor setup that employs machine learning on an embedded device, so-called edge machine learning. We train an artificial neural network with data acquired from a continuous-wave ODMR setup and subsequently use this pre-trained network on the sensor device to deduce the magnitude of the magnetic field from recorded ODMR spectra. In our proposed sensor setup, a low-cost and low-power ESP32 microcontroller development board is employed to control data recording and perform inference of the network. In a proof-of-concept study, we show that the setup is capable of measuring magnetic fields with high precision and has the potential to enable robust and accessible sensor applications with a wide measuring range.
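For context on what the network learns to invert: for a single, known NV orientation the field magnitude follows in closed form from the Zeeman splitting of the two ODMR resonances, whereas the randomly oriented ensembles used here admit no such closed form, which motivates the learned model. A sketch of the single-axis textbook relation, with approximate constants:

```python
# Single NV orientation: the two ODMR resonances sit at
# f_pm = D +/- gamma * B_parallel, so their splitting gives the field.
D = 2.870e9        # zero-field splitting, Hz
GAMMA = 28.0e9     # NV gyromagnetic ratio, roughly 28 GHz per tesla (approximate)

def field_from_resonances(f_minus, f_plus):
    """Magnetic-field projection on the NV axis, in tesla."""
    return (f_plus - f_minus) / (2.0 * GAMMA)

# Example spectrum: resonances at 2.842 GHz and 2.898 GHz
b_parallel = field_from_resonances(2.842e9, 2.898e9)
```

With randomly oriented crystallites every NV axis sees a different projection, so the spectrum becomes a superposition that this formula cannot invert; the pre-trained network maps the full spectrum to the field magnitude instead.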
Affiliation(s)
- Jonas Homrighausen: Department of Engineering Physics, Münster University of Applied Sciences, Stegerwaldstraße 39, 48565 Steinfurt, Germany
- Ludwig Horsthemke: Department of Electrical Engineering and Computer Science, Münster University of Applied Sciences, Stegerwaldstraße 39, 48565 Steinfurt, Germany
- Jens Pogorzelski: Department of Electrical Engineering and Computer Science, Münster University of Applied Sciences, Stegerwaldstraße 39, 48565 Steinfurt, Germany
- Sarah Trinschek: Department of Engineering Physics, Münster University of Applied Sciences, Stegerwaldstraße 39, 48565 Steinfurt, Germany
- Peter Glösekötter: Department of Electrical Engineering and Computer Science, Münster University of Applied Sciences, Stegerwaldstraße 39, 48565 Steinfurt, Germany
- Markus Gregor: Department of Engineering Physics, Münster University of Applied Sciences, Stegerwaldstraße 39, 48565 Steinfurt, Germany
20
Loukatos D, Kondoyanni M, Alexopoulos G, Maraveas C, Arvanitis KG. On-Device Intelligence for Malfunction Detection of Water Pump Equipment in Agricultural Premises: Feasibility and Experimentation. Sensors (Basel) 2023; 23:839. [PMID: 36679636 PMCID: PMC9860875 DOI: 10.3390/s23020839] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/01/2022] [Revised: 12/28/2022] [Accepted: 01/01/2023] [Indexed: 06/17/2023]
Abstract
The digital transformation of agriculture is a promising necessity for tackling the increasing nutritional needs on Earth and the degradation of natural resources. Toward this direction, the availability of innovative electronic components and of the accompanying software programs can be exploited to detect malfunctions in typical agricultural equipment, such as water pumps, thereby preventing potential failures and water and economic losses. In this context, this article highlights the steps for adding intelligence to sensors installed on pumps in order to intercept and deliver malfunction alerts, based on cheap in situ microcontrollers, sensors, and radios and easy-to-use software tools. This involves efficient data gathering, neural network model training, generation, optimization, and execution procedures, which are further facilitated by the deployment of an experimental platform for generating diverse disturbances of the water pump operation. The best-performing variant of the malfunction detection model can achieve an accuracy rate of about 93% based on the vibration data. The system being implemented follows the on-device intelligence approach that decentralizes processing and networking tasks, thereby aiming to simplify the installation process and reduce the overall costs. In addition to highlighting the necessary implementation variants and details, a characteristic set of evaluation results is also presented, as well as directions for future exploitation.
21
Peruzzi G, Pozzebon A, Van Der Meer M. Fight Fire with Fire: Detecting Forest Fires with Embedded Machine Learning Models Dealing with Audio and Images on Low Power IoT Devices. Sensors (Basel) 2023; 23:783. [PMID: 36679579 PMCID: PMC9863941 DOI: 10.3390/s23020783] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/07/2022] [Revised: 01/03/2023] [Accepted: 01/05/2023] [Indexed: 06/17/2023]
Abstract
Forest fires are the main cause of desertification, and they have a disastrous impact on agricultural and forest ecosystems. Modern fire detection and warning systems rely on several techniques: satellite monitoring, sensor networks, image processing, data fusion, etc. Recently, Artificial Intelligence (AI) algorithms have been applied to fire recognition systems, enhancing their efficiency and reliability. However, these devices usually need constant data transmission along with a proper amount of computing power, entailing high costs and energy consumption. This paper presents the prototype of a Video Surveillance Unit (VSU) for recognising and signalling the presence of forest fires by exploiting two embedded Machine Learning (ML) algorithms running on a low power device. The ML models take audio samples and images as their respective inputs, allowing for timely fire detection. The main result is that while the performances of the two models are comparable when they work independently, their joint usage according to the proposed methodology provides higher accuracy, precision, recall, and F1 score (96.15%, 92.30%, 100.00%, and 96.00%, respectively). Finally, each event is remotely signalled by making use of the Long Range Wide Area Network (LoRaWAN) protocol to ensure that the personnel in charge are able to operate promptly.
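The benefit of joint usage can be seen with a simple OR-style decision fusion: an alarm is raised if either the audio or the image model fires, which lifts recall whenever the two models miss different events. The per-event decisions below are hypothetical, and this is a schematic fusion rule, not necessarily the paper's exact methodology:

```python
def prf(pred, truth):
    """Precision, recall, F1 for binary fire/no-fire decisions."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical per-event decisions (1 = fire detected)
truth = [1, 1, 1, 1, 0, 0, 0, 0]
audio = [1, 1, 0, 1, 0, 1, 0, 0]   # misses one fire, raises one false alarm
image = [1, 0, 1, 1, 0, 0, 0, 0]   # misses a different fire

fused = [a or v for a, v in zip(audio, image)]  # alarm if either model fires
```

Plain OR-fusion generally trades some precision for recall; the paper's methodology reports both improving, so the real fusion rule is necessarily more discriminating than this sketch.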
Affiliation(s)
- Giacomo Peruzzi: Department of Information Engineering, University of Padova, 35131 Padova, Italy
- Alessandro Pozzebon: Department of Information Engineering, University of Padova, 35131 Padova, Italy
- Mattia Van Der Meer: Department of Information Engineering and Mathematics, University of Siena, 53100 Siena, Italy
22
Strantzalis K, Gioulekas F, Katsaros P, Symeonidis A. Operational State Recognition of a DC Motor Using Edge Artificial Intelligence. Sensors (Basel) 2022; 22:9658. [PMID: 36560026 PMCID: PMC9783357 DOI: 10.3390/s22249658] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/28/2022] [Revised: 12/01/2022] [Accepted: 12/05/2022] [Indexed: 06/17/2023]
Abstract
Edge artificial intelligence (EDGE-AI) refers to the execution of artificial intelligence algorithms on hardware devices while processing sensor data/signals in order to extract information and identify patterns, without utilizing the cloud. In the field of predictive maintenance for industrial applications, EDGE-AI systems can provide operational state recognition for machines and production chains, almost in real time. This work presents two methodological approaches for the detection of the operational states of a DC motor, based on sound data. Initially, features were extracted using an audio dataset. Two different Convolutional Neural Network (CNN) models were trained for the particular classification problem. These two models were subjected to post-training quantization and an appropriate conversion/compression in order to be deployed to microcontroller units (MCUs) using appropriate software tools. A real-time validation experiment was conducted, including the simulation of a custom stress-test environment, to check the deployed models' performance on the recognition of the motor's operational states and the response time for the transition between states. Finally, the two implementations were compared in terms of classification accuracy, latency, and resource utilization, leading to promising results.
Affiliation(s)
- Konstantinos Strantzalis: School of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 541 24 Thessaloniki, Greece
- Panagiotis Katsaros: School of Informatics, Aristotle University of Thessaloniki, 541 24 Thessaloniki, Greece
- Andreas Symeonidis: School of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 541 24 Thessaloniki, Greece
23
TinyML for Ultra-Low Power AI and Large Scale IoT Deployments: A Systematic Review. Future Internet 2022. [DOI: 10.3390/fi14120363] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/12/2022] Open
Abstract
The rapid emergence of low-power embedded devices and modern machine learning (ML) algorithms has created a new Internet of Things (IoT) era in which lightweight ML frameworks such as TinyML have created new opportunities for running ML algorithms within edge devices. In particular, the TinyML framework in such devices aims to deliver reduced latency, efficient bandwidth consumption, improved data security, increased privacy, and lower overall cost in cloud environments. Its ability to enable IoT devices to work effectively without constant connectivity to cloud services, while nevertheless providing accurate ML services, offers a viable alternative for IoT applications seeking cost-effective solutions. TinyML intends to deliver on-premises analytics that bring significant value to IoT services, particularly in environments with limited connectivity. This review article defines TinyML, presents an overview of its benefits and uses, and provides background information based on up-to-date literature. We then demonstrate the TensorFlow Lite framework, which supports TinyML, along with the steps of ML model creation. In addition, we explore the integration of TinyML with network technologies such as 5G and LPWAN. Ultimately, we anticipate that this analysis will serve as an informational pillar for the IoT/Cloud research community and pave the way for future studies.
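The bandwidth argument made here can be made concrete with a back-of-the-envelope sketch; every figure below (sampling rate, event count, payload sizes) is an illustrative assumption of mine, not a measurement from the review:

```python
# Rough comparison: streaming raw sensor data to the cloud vs. sending
# only on-device (TinyML) inference results. All figures are assumed
# example values, not measurements.

SAMPLE_RATE_HZ = 100          # accelerometer sampling rate (assumed)
BYTES_PER_SAMPLE = 6          # 3 axes x 16-bit samples
SECONDS_PER_DAY = 86_400

raw_bytes_per_day = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SECONDS_PER_DAY

EVENTS_PER_DAY = 50           # classified events worth reporting (assumed)
BYTES_PER_EVENT = 32          # timestamp + label + confidence (assumed)

event_bytes_per_day = EVENTS_PER_DAY * BYTES_PER_EVENT
reduction = raw_bytes_per_day / event_bytes_per_day

print(f"raw: {raw_bytes_per_day / 1e6:.1f} MB/day, "
      f"events: {event_bytes_per_day} B/day, "
      f"~{reduction:,.0f}x less uplink traffic")
```

Even with these modest assumed numbers, on-device inference cuts uplink traffic by several orders of magnitude, which is the core of TinyML's bandwidth and cost claim.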
24
Carapezzi S, Delacour C, Plews A, Nejim A, Karg S, Todri-Sanial A. Role of ambient temperature in modulation of behavior of vanadium dioxide volatile memristors and oscillators for neuromorphic applications. Sci Rep 2022; 12:19377. [PMID: 36371590 PMCID: PMC9653463 DOI: 10.1038/s41598-022-23629-4]
Abstract
Volatile memristors are versatile devices whose operating mechanism is based on an abrupt and volatile change of resistivity. This switching between high and low resistance states underlies cutting-edge technological implementations such as neural/synaptic devices and random number generators. A detailed understanding of this operating mechanism is an essential prerequisite for exploiting the full potential of volatile memristors. In this respect, multi-physics device simulations provide a powerful tool to single out the material properties and device features that are key to achieving desired behaviors. In this paper, we perform 3D electrothermal simulations of volatile memristors based on vanadium dioxide (VO2) to accurately investigate the interplay among the Joule effect, heat dissipation, and the external temperature over their resistive switching mechanism. In particular, we extract from our simulations a simplified model for the effect of the external temperature on the negative differential resistance (NDR) region of such devices. The NDR of VO2 devices is pivotal for building VO2 oscillators, which have recently been shown to be essential elements of oscillatory neural networks (ONNs). ONNs are innovative neuromorphic circuits that harness oscillators' phases to compute. Our simulations quantify the impact of the external temperature on figures of merit of the VO2 oscillator, such as frequency, voltage amplitude, and average power per cycle. Our findings shed light on the interlinked thermal and electrical behavior of VO2 volatile memristors and oscillators, and provide a roadmap for the development of ONN technology.
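A VO2-based relaxation oscillator of the kind discussed here can be approximated, very crudely, by a two-state RC model: the capacitor charges until the device's insulator-to-metal threshold, then discharges until the metal-to-insulator threshold. The sketch below uses this idealized model; the supply voltage, thresholds, and R/C values are invented for illustration, and the paper's 3D electrothermal simulations capture far more physics (Joule heating, ambient temperature) than this toy formula.

```python
import math

# Idealized relaxation oscillator built around a volatile threshold switch:
# the capacitor charges through r_charge until V reaches v_high (insulator ->
# metal transition), then discharges through r_discharge until v_low
# (metal -> insulator). Component values and thresholds are invented.

def rc_period(vdd, v_low, v_high, r_charge, r_discharge, c):
    t_up = r_charge * c * math.log((vdd - v_low) / (vdd - v_high))
    t_down = r_discharge * c * math.log(v_high / v_low)
    return t_up + t_down

period = rc_period(vdd=3.3, v_low=0.8, v_high=2.0,
                   r_charge=10e3, r_discharge=1e3, c=10e-9)
print(f"period ~ {period * 1e6:.2f} us, frequency ~ {1 / period / 1e3:.1f} kHz")
```

In the paper's framing, the external temperature shifts the switching thresholds and device resistances, which this formula shows would directly move the oscillation frequency, one of the figures of merit the authors quantify.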
Affiliation(s)
- Stefania Carapezzi
- Microelectronics Department, LIRMM, University of Montpellier, CNRS, Montpellier, 34095, France
- Corentin Delacour
- Microelectronics Department, LIRMM, University of Montpellier, CNRS, Montpellier, 34095, France
- Siegfried Karg
- Department of Science and Technology, IBM Research Europe - Zurich, Ruschlikon, 8803, Switzerland
- Aida Todri-Sanial
- Microelectronics Department, LIRMM, University of Montpellier, CNRS, Montpellier, 34095, France
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, 5612 AP, The Netherlands
25
Yang J, Wu S, Dai R, Yu W, Chen Y. Publication trends of artificial intelligence in retina in 10 years: Where do we stand? Front Med (Lausanne) 2022; 9:1001673. [PMID: 36405613 PMCID: PMC9666394 DOI: 10.3389/fmed.2022.1001673]
Abstract
PURPOSE Artificial intelligence (AI) has been applied in the field of the retina. The purpose of this study was to analyze study trends in AI in retina by reporting publication trends and identifying the journals, countries, authors, international collaborations, and keywords involved. MATERIALS AND METHODS A cross-sectional study. Bibliometric methods were used to evaluate global production and development trends in AI in retina since 2012 using the Web of Science Core Collection. RESULTS A total of 599 publications were ultimately retrieved. We found that AI in retina is a very attractive topic in the scientific and medical communities. No journal was found to specialize in AI in retina. The USA, China, and India were the three most productive countries. Authors from Austria, Singapore, and England also had worldwide academic influence. China has shown the most rapid increase in publication numbers. International collaboration could increase influence in this field. Keywords revealed that diabetic retinopathy, optical coherence tomography in multiple diseases, and algorithms were three popular topics in the field. Most of the top journals and top publications on AI in retina focused on engineering and computing rather than medicine. CONCLUSION These results help clarify the current status and future trends of research on AI in retina. This study may be useful for clinicians and scientists seeking a general overview of this field and a better understanding of its main actors (authors, journals, and countries). Future research should focus on more retinal diseases, multimodal imaging, and the performance of AI models in real-world clinical applications. Collaboration among countries and institutions is common in current research on AI in retina.
Affiliation(s)
- Jingyuan Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Shan Wu
- Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Rongping Dai
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Correspondence: Youxin Chen
26
Datta A, Nicolaï B, Vitrac O, Verboven P, Erdogdu F, Marra F, Sarghini F, Koh C. Computer-aided food engineering. NATURE FOOD 2022; 3:894-904. [PMID: 37118206 DOI: 10.1038/s43016-022-00617-5]
Abstract
Computer-aided food engineering (CAFE) can reduce resource use in product, process and equipment development, improve time-to-market performance, and drive high-level innovation in food safety and quality. Yet, CAFE is challenged by the complexity and variability of food composition and structure, by the transformations food undergoes during processing and the limited availability of comprehensive mechanistic frameworks describing those transformations. Here we introduce frameworks to model food processes and predict physiochemical properties that will accelerate CAFE. We review how investments in open access, such as code sharing, and capacity-building through specialized courses could facilitate the use of CAFE in the transformation already underway in digital food systems.
Affiliation(s)
- Ashim Datta
- Department of Biological and Environmental Engineering, Cornell University, Ithaca, NY, USA
- Bart Nicolaï
- Biosystems Department - MeBioS Division, Katholieke Universiteit Leuven, Leuven, Belgium
- Olivier Vitrac
- Université Paris-Saclay, INRAE, AgroParisTech, UMR 0782 SayFood, Massy, France
- Pieter Verboven
- Biosystems Department - MeBioS Division, Katholieke Universiteit Leuven, Leuven, Belgium
- Ferruh Erdogdu
- Department of Food Engineering, Ankara University, Golbasi-Ankara, Turkey
- Francesco Marra
- Department of Industrial Engineering, University of Salerno, Fisciano, Italy
- Fabrizio Sarghini
- Department of Agricultural Sciences, Agricultural and Biosystems Engineering, University of Naples Federico II, Portici, Italy
- Chris Koh
- PepsiCo R&D, PepsiCo, Plano, TX, USA
27
D’Souza O, Mukhopadhyay SC, Sheng M. Health, Security and Fire Safety Process Optimisation Using Intelligence at the Edge. SENSORS (BASEL, SWITZERLAND) 2022; 22:8143. [PMID: 36365840 PMCID: PMC9659114 DOI: 10.3390/s22218143]
Abstract
The proliferation of sensors capturing parametric measures or event data over a myriad of networking topologies is growing exponentially to improve our daily lives. Large amounts of data must be shared on constrained network infrastructure, increasing delays and the loss of valuable real-time information. Our research presents a solution for the health, security, safety, and fire domains that obtains temporally synchronous, credible, and high-resolution data from sensors while maintaining the temporal hierarchy of reported events. We developed a multisensor fusion framework that conserves energy via domain-specific "wake up" triggers, which turn on low-power microcontrollers running machine learning (TinyML) models. We investigated optimisation techniques using anomaly detection modes to deliver real-time insights in demanding life-saving situations. Using energy-efficient methods to analyse sensor data at the point of creation, we facilitated a pathway to provide sensor customisation at the "edge", where and when it is most needed. We present the application and generalised results in a real-life health care scenario and explain the approach's application and benefits in the other researched domains.
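A minimal, hypothetical version of such a "wake up" trigger is a cheap rolling-statistics anomaly check that keeps the power-hungry pipeline asleep until a reading deviates strongly from recent history. The window size, sigma threshold, and data stream below are invented for illustration, not taken from the paper.

```python
from collections import deque
import statistics

# Hypothetical "wake up" trigger: a cheap rolling-statistics anomaly check
# that keeps the main processing pipeline asleep until a sensor reading
# deviates strongly from recent history. Window size and threshold are
# illustrative assumptions, not values from the paper.

class WakeUpTrigger:
    def __init__(self, window=20, n_sigmas=4.0):
        self.history = deque(maxlen=window)
        self.n_sigmas = n_sigmas

    def update(self, reading):
        """Return True if the reading should wake the main pipeline."""
        if len(self.history) >= 5:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(reading - mean) > self.n_sigmas * stdev
        else:
            anomalous = False  # not enough context yet: stay asleep
        self.history.append(reading)
        return anomalous

trigger = WakeUpTrigger()
stream = [20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 20.0, 55.0]  # temperature-like
wake_events = [i for i, x in enumerate(stream) if trigger.update(x)]
print(wake_events)  # only the outlier at index 7 wakes the pipeline
```

A check like this runs in a few instructions per sample, which is what makes it suitable as a gate in front of a heavier TinyML model.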
Affiliation(s)
- Ollencio D’Souza
- School of Engineering, Faculty of Science and Engineering, North Ryde Campus, Macquarie University, Sydney, NSW 2109, Australia
- Subhas Chandra Mukhopadhyay
- School of Engineering, Faculty of Science and Engineering, North Ryde Campus, Macquarie University, Sydney, NSW 2109, Australia
- Michael Sheng
- Department of Computing, Macquarie University, Sydney, NSW 2109, Australia
28
Shumba AT, Montanaro T, Sergi I, Fachechi L, De Vittorio M, Patrono L. Leveraging IoT-Aware Technologies and AI Techniques for Real-Time Critical Healthcare Applications. SENSORS (BASEL, SWITZERLAND) 2022; 22:7675. [PMID: 36236773 PMCID: PMC9571691 DOI: 10.3390/s22197675]
Abstract
Personalised healthcare has seen significant improvements due to the introduction of health monitoring technologies that allow wearable devices to unintrusively monitor physiological parameters such as heart health, blood pressure, sleep patterns, and blood glucose levels, among others. Additionally, utilising advanced sensing technologies based on flexible and innovative biocompatible materials in wearable devices allows high-accuracy and high-precision measurement of biological signals. Furthermore, applying real-time machine learning algorithms to highly accurate physiological parameters allows precise identification of unusual patterns in the data, providing health event predictions and warnings for timely intervention. However, in the predominantly adopted architectures, health event predictions based on machine learning are typically obtained by leveraging cloud infrastructures, which suffer from shortcomings such as delayed response times and privacy issues. Fortunately, recent works highlight that a new paradigm based on Edge Computing technologies and on-device Artificial Intelligence significantly improves latency and privacy. Applying this new paradigm to personalised healthcare architectures can significantly improve their efficiency and efficacy. Therefore, this paper reviews existing IoT healthcare architectures that utilise wearable devices and subsequently presents a scalable and modular system architecture that leverages emerging technologies to solve the identified shortcomings. The defined architecture includes ultrathin, skin-compatible, flexible, high-precision piezoelectric sensors, low-cost communication technologies, on-device intelligence, Edge Intelligence, and Edge Computing technologies.
To provide development guidelines and define a consistent reference architecture for improved scalable wearable IoT-based critical healthcare architectures, this manuscript outlines the essential functional and non-functional requirements based on deductions from existing architectures and emerging technology trends. The presented system architecture can be applied to many scenarios, including ambient assisted living, where continuous surveillance and issuance of timely warnings can afford independence to the elderly and chronically ill. We conclude that the distribution and modularity of architecture layers, local AI-based elaboration, and data packaging consistency are the most essential functional requirements for critical healthcare application use cases. We also identify fast response time, utility, comfort, and low cost as the essential non-functional requirements for the defined system architecture.
Affiliation(s)
- Angela-Tafadzwa Shumba
- Department of Engineering for Innovation, University of Salento, 73100 Lecce, Italy
- Istituto Italiano di Tecnologia, Center for Biomolecular Nanotechnologies, Arnesano, 73010 Lecce, Italy
- Teodoro Montanaro
- Department of Engineering for Innovation, University of Salento, 73100 Lecce, Italy
- Ilaria Sergi
- Department of Engineering for Innovation, University of Salento, 73100 Lecce, Italy
- Luca Fachechi
- Istituto Italiano di Tecnologia, Center for Biomolecular Nanotechnologies, Arnesano, 73010 Lecce, Italy
- Massimo De Vittorio
- Department of Engineering for Innovation, University of Salento, 73100 Lecce, Italy
- Istituto Italiano di Tecnologia, Center for Biomolecular Nanotechnologies, Arnesano, 73010 Lecce, Italy
- Luigi Patrono
- Department of Engineering for Innovation, University of Salento, 73100 Lecce, Italy
29
Context-Aware Edge-Based AI Models for Wireless Sensor Networks-An Overview. SENSORS 2022; 22:s22155544. [PMID: 35898044 PMCID: PMC9371178 DOI: 10.3390/s22155544]
Abstract
Recent advances in sensor technology are expected to lead to greater use of wireless sensor networks (WSNs) in industry, logistics, healthcare, etc. On the other hand, advances in artificial intelligence (AI), machine learning (ML), and deep learning (DL) are becoming dominant solutions for processing large amounts of data from edge-synthesized heterogeneous sensors and drawing accurate conclusions with a better understanding of the situation. Integration of the two areas, WSNs and AI, has resulted in more accurate measurements and the context-aware analysis and prediction useful for smart sensing applications. In this paper, a comprehensive overview of the latest developments in context-aware intelligent systems using sensor technology is provided. It also discusses the areas in which they are used, the related challenges, and the motivations for adopting AI solutions, focusing on edge computing, i.e., sensor and AI techniques, along with an analysis of existing research gaps. Another contribution of this study is the use of a semantic-aware approach to extract survey-relevant subjects. The latter specifically identifies eleven main research topics supported by the articles included in the work. These are analyzed from various angles to answer five main research questions. Finally, potential future research directions are also discussed.
30
A Systematic Review of Wi-Fi and Machine Learning Integration with Topic Modeling Techniques. SENSORS 2022; 22:s22134925. [PMID: 35808430 PMCID: PMC9269691 DOI: 10.3390/s22134925]
Abstract
Wireless networks have drastically influenced our lifestyle, changing our workplaces and society. Among the variety of wireless technology, Wi-Fi surely plays a leading role, especially in local area networks. The spread of mobiles and tablets, and more recently, the advent of Internet of Things, have resulted in a multitude of Wi-Fi-enabled devices continuously sending data to the Internet and between each other. At the same time, Machine Learning has proven to be one of the most effective and versatile tools for the analysis of fast streaming data. This systematic review aims at studying the interaction between these technologies and how it has developed throughout their lifetimes. We used Scopus, Web of Science, and IEEE Xplore databases to retrieve paper abstracts and leveraged a topic modeling technique, namely, BERTopic, to analyze the resulting document corpus. After these steps, we inspected the obtained clusters and computed statistics to characterize and interpret the topics they refer to. Our results include both the applications of Wi-Fi sensing and the variety of Machine Learning algorithms used to tackle them. We also report how the Wi-Fi advances have affected sensing applications and the choice of the most suitable Machine Learning models.
31
Loukatos D, Lygkoura KA, Maraveas C, Arvanitis KG. Enriching IoT Modules with Edge AI Functionality to Detect Water Misuse Events in a Decentralized Manner. SENSORS 2022; 22:s22134874. [PMID: 35808373 PMCID: PMC9269755 DOI: 10.3390/s22134874]
Abstract
The digital transformation of agriculture is a promising necessity for tackling the increasing nutritional needs of the population on Earth and the degradation of natural resources. Focusing on the “hot” area of natural resource preservation, the recent appearance of more efficient and cheaper microcontrollers, the advances in low-power and long-range radios, and the availability of accompanying software tools are exploited in order to monitor water consumption and to detect and report misuse events, with reduced power and network bandwidth requirements. Quite often, large quantities of water are wasted for a variety of reasons; from broken irrigation pipes to people’s negligence. To tackle this problem, the necessary design and implementation details are highlighted for an experimental water usage reporting system that exhibits Edge Artificial Intelligence (Edge AI) functionality. By combining modern technologies, such as Internet of Things (IoT), Edge Computing (EC) and Machine Learning (ML), the deployment of a compact automated detection mechanism can be easier than before, while the information that has to travel from the edges of the network to the cloud and thus the corresponding energy footprint are drastically reduced. In parallel, characteristic implementation challenges are discussed, and a first set of corresponding evaluation results is presented.
32
Benchmarking Object Detection Deep Learning Models in Embedded Devices. SENSORS 2022; 22:s22114205. [PMID: 35684827 PMCID: PMC9185277 DOI: 10.3390/s22114205]
Abstract
Object detection is an essential capability for performing complex tasks in robotic applications. Today, deep learning (DL) approaches are the basis of state-of-the-art solutions in computer vision, where they provide very high accuracy albeit with high computational costs. Due to the physical limitations of robotic platforms, embedded devices are not as powerful as desktop computers, and adjustments have to be made to deep learning models before transferring them to robotic applications. This work benchmarks deep learning object detection models in embedded devices. Furthermore, some hardware selection guidelines are included, together with a description of the most relevant features of the two boards selected for this benchmark. Embedded electronic devices integrate a powerful AI co-processor to accelerate DL applications. To take advantage of these co-processors, models must be converted to a specific embedded runtime format. Five quantization levels applied to a collection of DL models are considered; two of them allow the execution of models in the embedded general-purpose CPU and are used as the baseline to assess the improvements obtained when running the same models with the three remaining quantization levels in the AI co-processors. The benchmark procedure is explained in detail, and a comprehensive analysis of the collected data is presented. Finally, the feasibility and challenges of the implementation of embedded object detection applications are discussed.
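The benchmarking procedure this abstract describes boils down to timing repeated inferences and summarizing the latency distribution. A generic, model-agnostic harness might look like the sketch below; since no model from the paper is available here, `fake_inference` is a stand-in workload, and the warmup/run counts are assumptions, not the authors' protocol.

```python
import time
import statistics

# Generic latency-benchmark harness of the kind used when comparing
# quantized models on embedded boards: run warmup iterations, then time
# N inferences and report mean and (approximate) p95 latency.

def benchmark(infer, warmup=5, runs=50):
    for _ in range(warmup):          # let caches and frequency scaling settle
        infer()
    timings = []
    for _ in range(runs):
        t0 = time.perf_counter()
        infer()
        timings.append(time.perf_counter() - t0)
    timings.sort()
    return {
        "mean_ms": statistics.fmean(timings) * 1e3,
        "p95_ms": timings[int(0.95 * len(timings)) - 1] * 1e3,
    }

def fake_inference():                 # stand-in workload, not a real model
    sum(i * i for i in range(20_000))

stats = benchmark(fake_inference)
print(f"mean {stats['mean_ms']:.2f} ms, p95 {stats['p95_ms']:.2f} ms")
```

Reporting a high percentile alongside the mean matters on embedded boards, where thermal throttling and background tasks produce occasional slow inferences that a mean alone hides.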
33
Alajlan NN, Ibrahim DM. TinyML: Enabling of Inference Deep Learning Models on Ultra-Low-Power IoT Edge Devices for AI Applications. MICROMACHINES 2022; 13:mi13060851. [PMID: 35744466 PMCID: PMC9227753 DOI: 10.3390/mi13060851]
Abstract
Recently, the Internet of Things (IoT) has gained a lot of attention, since IoT devices are deployed in various fields. Many of these devices are based on machine learning (ML) models, which render them intelligent and able to make decisions. IoT devices typically have limited resources, which restricts the execution of complex ML models such as deep learning (DL) on them. In addition, connecting IoT devices to the cloud to transfer raw data and perform processing causes delayed system responses, exposes private data, and increases communication costs. To tackle these issues, a new technology called Tiny Machine Learning (TinyML) has paved the way to meet the challenges of IoT devices. This technology allows data to be processed locally on the device without the need to send it to the cloud. Moreover, TinyML permits on-device inference of ML models, including DL models, on microcontrollers with limited resources. The aim of this paper is to provide an overview of the TinyML revolution and a review of TinyML studies; the main contribution is an analysis of the types of ML models used in TinyML studies, together with details of the datasets and the types and characteristics of the devices, with the aim of clarifying the state of the art and envisioning development requirements.
Affiliation(s)
- Norah N. Alajlan
- Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
- Dina M. Ibrahim
- Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
- Department of Computers and Control Engineering, Faculty of Engineering, Tanta University, Tanta 31733, Egypt
34
Todri-Sanial A, Carapezzi S, Delacour C, Abernot M, Gil T, Corti E, Karg SF, Nunez J, Jimenez M, Avedillo MJ, Linares-Barranco B. How Frequency Injection Locking Can Train Oscillatory Neural Networks to Compute in Phase. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:1996-2009. [PMID: 34495849 DOI: 10.1109/tnnls.2021.3107771]
Abstract
Brain-inspired computing employs devices and architectures that emulate biological functions for more adaptive and energy-efficient systems. Oscillatory neural networks (ONNs) are an alternative approach to emulating biological functions of the human brain and are suitable for solving large and complex associative problems. In this work, we investigate the dynamics of coupled oscillators to implement such ONNs. By harnessing the complex dynamics of coupled oscillatory systems, we forge a novel computation model: information is encoded in the phase of oscillations. Coupled interconnected oscillators can exhibit various behaviors depending on the strength of the coupling. In this article, we present a novel method based on subharmonic injection locking (SHIL) for controlling the oscillatory states of coupled oscillators, allowing them to lock in frequency with distinct phase differences. Circuit-level simulation results indicate SHIL's effectiveness and its applicability to large-scale oscillatory networks for pattern recognition.
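The core idea, that a second-harmonic injection term pulls each oscillator's phase toward one of two stable values (0 or pi), can be sketched with a toy phase model. This Kuramoto-style abstraction and all constants are my simplification for illustration; the paper works at circuit level, not with this reduced model.

```python
import math

# Toy phase model of subharmonic injection locking (SHIL): each oscillator
# phase feels (i) coupling to the other oscillators and (ii) a
# second-harmonic injection term -k_shil*sin(2*theta) that makes theta = 0
# and theta = pi the only stable phases, i.e. a binary phase encoding.
# Constants are invented; the paper's analysis is circuit-level.

def simulate(theta0, k_couple=0.2, k_shil=1.0, dt=0.01, steps=5000):
    th = list(theta0)
    for _ in range(steps):
        new = []
        for i, t in enumerate(th):
            coupling = sum(math.sin(tj - t) for j, tj in enumerate(th) if j != i)
            new.append(t + dt * (k_couple * coupling - k_shil * math.sin(2 * t)))
        th = new
    return [t % (2 * math.pi) for t in th]

final = simulate([0.4, 2.8])           # start near 0 and near pi
print([round(t, 2) for t in final])    # phases settle close to 0 or pi
```

Because the injection term dominates the coupling here, each phase snaps to the nearest binary state, which is the mechanism that lets ONN phases represent and retrieve patterns.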
35
Machine Learning for Healthcare Wearable Devices: The Big Picture. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:4653923. [PMID: 35480146 PMCID: PMC9038375 DOI: 10.1155/2022/4653923]
Abstract
Using artificial intelligence and machine learning techniques in healthcare applications has been actively researched over the last few years. It holds promising opportunities as it is used to track human activities and vital signs using wearable devices and assist in diseases' diagnosis, and it can play a great role in elderly care and patient's health monitoring and diagnostics. With the great technological advances in medical sensors and miniaturization of electronic chips in the recent five years, more applications are being researched and developed for wearable devices. Despite the remarkable growth of using smart watches and other wearable devices, a few of these massive research efforts for machine learning applications have found their way to market. In this study, a review of the different areas of the recent machine learning research for healthcare wearable devices is presented. Different challenges facing machine learning applications on wearable devices are discussed. Potential solutions from the literature are presented, and areas open for improvement and further research are highlighted.
36
Chaudhri SN, Rajput NS, Alsamhi SH, Shvetsov AV, Almalki FA. Zero-Padding and Spatial Augmentation-Based Gas Sensor Node Optimization Approach in Resource-Constrained 6G-IoT Paradigm. SENSORS 2022; 22:s22083039. [PMID: 35459024 PMCID: PMC9028001 DOI: 10.3390/s22083039]
Abstract
Ultra-low power consumption is a key performance indicator in 6G-IoT ecosystems. Sensor nodes in this ecosystem are also capable of running lightweight artificial intelligence (AI) models. In this work, we have achieved high performance in a gas sensor system using a Convolutional Neural Network (CNN) with a smaller number of gas sensor elements. We identified redundant gas sensor elements in a gas sensor array and removed them to reduce power consumption without significant deviation in the node's performance. The inevitable variation in performance due to removing redundant sensor elements has been compensated for using specialized data pre-processing (zero-padded virtual sensors and spatial augmentation) and the CNN. The experiment is demonstrated on the classification and quantification of four hazardous gases, viz., acetone, carbon tetrachloride, ethyl methyl ketone, and xylene. The performance of the unoptimized gas sensor array was taken as a baseline for comparing the performance of the optimized gas sensor array. Our proposed approach reduces power consumption from 10 W to 5 W; classification performance is sustained at 100 percent, while quantification performance is compensated to a mean squared error (MSE) of 1.12 × 10−2. Thus, our power-efficient optimization paves the way to "computation on edge", even in the resource-constrained 6G-IoT paradigm.
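The "zero-padded virtual sensors" idea, keeping the CNN's expected input shape after physically removing redundant sensor elements, can be sketched as follows. Which indices are redundant, and the array size, are invented for this example and are not the paper's result.

```python
# Sketch of the zero-padded "virtual sensor" idea: after removing redundant
# sensor elements from the array, their positions in the CNN input vector
# are filled with zeros so the model's expected input shape is preserved.
# The removed indices and readings below are an invented example.

def pad_virtual_sensors(readings, removed_indices, full_size):
    """Re-expand a reduced sensor reading to the original array layout."""
    it = iter(readings)
    return [0.0 if i in removed_indices else next(it) for i in range(full_size)]

FULL_SIZE = 8
REMOVED = {2, 5, 7}                       # assumed redundant elements
reduced = [0.61, 0.18, 0.94, 0.33, 0.72]  # readings from the 5 kept sensors

padded = pad_virtual_sensors(reduced, REMOVED, FULL_SIZE)
print(padded)  # [0.61, 0.18, 0.0, 0.94, 0.33, 0.0, 0.72, 0.0]
```

The point of padding rather than retraining on a smaller input is that the original network architecture (and its spatial-augmentation pipeline) can be reused unchanged while the physical array draws less power.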
Affiliation(s)
- Shiv Nath Chaudhri
- Department of Electronics Engineering, Indian Institute of Technology (BHU), Varanasi 221005, Uttar Pradesh, India
- Navin Singh Rajput
- Department of Electronics Engineering, Indian Institute of Technology (BHU), Varanasi 221005, Uttar Pradesh, India
- Saeed Hamood Alsamhi
- Software Research Institute, Technological University of the Shannon, Midlands Midwest, N37HD68 Athlone, Ireland
- Faculty of Engineering, IBB University, Ibb 70270, Yemen
- Alexey V. Shvetsov
- Department of Operation of Road Transport and Car Service, North-Eastern Federal University, 677000 Yakutsk, Russia
- Department of Transport and Technological Processes, Vladivostok State University of Economics and Service, 690014 Vladivostok, Russia
- Faris A. Almalki
- Department of Computer Engineering, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
Collapse
|
37
|
Wang A, Togo R, Ogawa T, Haseyama M. Defect Detection of Subway Tunnels Using Advanced U-Net Network. SENSORS 2022; 22:s22062330. [PMID: 35336501 PMCID: PMC8955254 DOI: 10.3390/s22062330] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 03/08/2022] [Accepted: 03/13/2022] [Indexed: 12/02/2022]
Abstract
In this paper, we present a novel defect detection model based on an improved U-Net architecture. Framed as a semantic segmentation task, defect detection in real-world data suffers from background–foreground imbalance, multi-scale targets, and feature similarity between the background and defects. Conventional convolutional neural network (CNN)-based models are designed mainly for natural-image tasks and are insensitive to these problems. The proposed method extends the U-Net architecture for multi-scale segmentation with an atrous spatial pyramid pooling (ASPP) module and an inception module, and can detect more types of defects than conventional, simpler CNN-based methods. In experiments on a real-world subway tunnel image dataset, the proposed method outperformed general semantic segmentation models, including state-of-the-art methods. We also show that our method achieves an excellent detection balance across multi-scale defects.
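The atrous sampling behind ASPP can be illustrated with a toy 1-D convolution: spacing the kernel taps `dilation` samples apart enlarges the receptive field without adding parameters, which is why ASPP stacks several dilation rates to cover multi-scale targets. This sketch is illustrative only and is not the paper's network.

```python
def atrous_conv1d(signal, kernel, dilation=1):
    """1-D dilated ('atrous') convolution: kernel taps are spaced
    `dilation` samples apart, so a k-tap kernel spans
    (k - 1) * dilation + 1 input samples."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field size
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * dilation] for j in range(k)))
    return out
```

For a difference kernel `[1, 0, -1]`, dilation 1 compares samples two apart, while dilation 2 compares samples four apart: the same three weights see a wider context, exactly the effect ASPP exploits at several rates in parallel.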
Collapse
Affiliation(s)
- An Wang
- Graduate School of Information Science and Technology, Hokkaido University, N-14, W-9, Kita-ku, Sapporo 060-0814, Japan
- Correspondence:
| | - Ren Togo
- Faculty of Information Science and Technology, Hokkaido University, N-14, W-9, Kita-ku, Sapporo 060-0814, Japan; (R.T.); (T.O.); (M.H.)
| | - Takahiro Ogawa
- Faculty of Information Science and Technology, Hokkaido University, N-14, W-9, Kita-ku, Sapporo 060-0814, Japan; (R.T.); (T.O.); (M.H.)
| | - Miki Haseyama
- Faculty of Information Science and Technology, Hokkaido University, N-14, W-9, Kita-ku, Sapporo 060-0814, Japan; (R.T.); (T.O.); (M.H.)
| |
Collapse
|
38
|
Design and Development of Internet of Things-Driven Fault Detection of Indoor Thermal Comfort: HVAC System Problems Case Study. SENSORS 2022; 22:s22051925. [PMID: 35271075 PMCID: PMC8914663 DOI: 10.3390/s22051925] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/12/2022] [Revised: 02/25/2022] [Accepted: 02/27/2022] [Indexed: 02/01/2023]
Abstract
Controlling thermal comfort in indoor environments is an important research topic because thermal comfort is fundamental to occupants' health, wellbeing, and working productivity. Maintaining it requires monitoring and balancing complex factors from heating, ventilation, and air-conditioning (HVAC) systems and from the outdoor and indoor environments. Traditionally, engineers and technicians observe the relevant factors on the physical site and rely on their experience to detect problems early and prevent them from worsening. This is a labor-intensive and time-consuming task, and experts capable of diagnosing issues and producing proactive plans are in short supply. This research addresses these limitations by proposing a new Internet of Things (IoT)-driven fault detection system for indoor thermal comfort. We focus on the well-known problem of an HVAC system that cannot transfer heat from indoors to outdoors, which normally requires an engineer to diagnose. An IoT device is developed to capture perceptual information from the physical site as system input, and prior knowledge from existing research and from experts is encoded to help the system detect problems with human-like intelligence. Three standard categories of machine learning (ML), based on geometry, probability, and logical expressions, are applied to learn HVAC system problems. The results show that incorporating prior knowledge improved overall ML performance by around 10% compared to using perceptual information alone. Well-designed IoT devices with prior knowledge reduced false positives and false negatives in prediction, helping the system reach satisfactory performance.
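One way to encode expert prior knowledge of the heat-transfer fault is as a rule-derived feature fed to the ML models alongside raw sensor readings. The rule below ("cooling is running but the indoor temperature is not approaching the setpoint") is a hypothetical form of such knowledge, not the paper's actual encoding.

```python
def heat_transfer_fault_feature(indoor_temps, setpoint, window=3):
    """Binary feature encoding the expert rule: over the last `window`
    readings, the indoor temperature is non-decreasing while still above
    the cooling setpoint -- suggesting the HVAC cannot reject heat.
    (Hypothetical rule; the paper's knowledge encoding may differ.)"""
    recent = indoor_temps[-window:]
    not_cooling = all(b >= a for a, b in zip(recent, recent[1:]))
    above_setpoint = recent[-1] > setpoint
    return 1 if (not_cooling and above_setpoint) else 0
```

A geometric, probabilistic, or logical classifier can then learn on perceptual features plus this rule feature, which is how prior knowledge can lift performance over raw readings alone.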
Collapse
|
39
|
Automated Deep Learning for Medical Imaging. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_269] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
40
|
Rocha-Jácome C, Carvajal RG, Chavero FM, Guevara-Cabezas E, Hidalgo Fort E. Industry 4.0: A Proposal of Paradigm Organization Schemes from a Systematic Literature Review. SENSORS (BASEL, SWITZERLAND) 2021; 22:66. [PMID: 35009609 PMCID: PMC8747394 DOI: 10.3390/s22010066] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 12/18/2021] [Accepted: 12/21/2021] [Indexed: 06/14/2023]
Abstract
The concept of Industry 4.0 is now well known, but it remains extremely complex: it evolves and innovates constantly, involves many disciplines and areas of knowledge, and integrates many technologies, both mature and emerging, that must work in collaboration under the novel criteria of cyber-physical systems. This study starts with an exhaustive search for up-to-date scientific literature, on which a bibliometric analysis is performed and presented in tables and graphs. Subsequently, based on a qualitative analysis of the references, we present two proposals for schematic analyses of Industry 4.0 that will help academia and companies support digital transformation studies. The results enable a simple alternative analysis of Industry 4.0 for understanding the functions and scope of its integrating technologies, fostering better collaboration between areas of knowledge and professionals while considering the potential and limitations of each, and supporting the planning of an appropriate strategy, especially in human resource management, for the successful execution of the industry's digital transformation.
Collapse
|
41
|
Edge Computing Using Embedded Webserver with Mobile Device for Diagnosis and Prediction of Metastasis in Histopathological Images. INT J COMPUT INT SYS 2021. [DOI: 10.1007/s44196-021-00040-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
Abstract
Diagnosis of breast cancer stages from histopathology whole-slide images is the gold standard for grading tissue metastasis. Traditional diagnosis involves labor-intensive procedures and is prone to human error. Computer-aided diagnosis assists medical experts as a second-opinion tool for early detection, which prevents further proliferation. Computing facilities have advanced to the point where algorithms can attain near-human accuracy in disease prediction, enabling better treatment. The work introduced in this paper provides a mobile-platform interface through which the user can submit a histopathology image and obtain prediction results with class probabilities via an embedded web server. A trained deep convolutional neural network model is deployed onto a microcomputer-based embedded system after hyper-parameter tuning, offering consistent performance. The implementation results show that the embedded platform with a custom-trained CNN model is suitable for medical image classification, with low execution and mean prediction times. A customized CNN classifier model also outperforms pre-trained models when used on embedded platforms for prediction and classification of histopathology images. This work further emphasizes the relevance of portable, flexible embedded devices in real-time clinical applications.
Collapse
|
42
|
Andreadis A, Giambene G, Zambon R. Monitoring Illegal Tree Cutting through Ultra-Low-Power Smart IoT Devices. SENSORS 2021; 21:s21227593. [PMID: 34833669 PMCID: PMC8624687 DOI: 10.3390/s21227593] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 10/31/2021] [Accepted: 11/11/2021] [Indexed: 11/16/2022]
Abstract
Forests play a fundamental role in preserving the environment and fighting global warming. Unfortunately, they are continuously reduced by human interventions such as deforestation and fires. This paper proposes and evaluates a framework for automatically detecting illegal tree-cutting activity in forests through audio event classification. We envisage ultra-low-power tiny devices, embedding edge-computing microcontrollers and long-range wireless communication, to cover vast forest areas. To reduce the energy footprint and resource consumption required for effective and pervasive detection of illegal tree cutting, we propose an efficient and accurate audio classification solution based on convolutional neural networks, designed specifically for resource-constrained wireless edge devices. With respect to previous works, the proposed system recognizes a wider range of deforestation-related threats through a distributed and pervasive edge-computing technique. Different pre-processing techniques have been evaluated, focusing on the trade-off between classification accuracy and computational resources, memory, and energy footprint. Furthermore, long-range communication tests have been conducted in real environments. The experimental results show that the proposed solution can detect and report tree-cutting events for efficient and cost-effective forest monitoring through smart IoT, with an accuracy of 85%.
Collapse
|
43
|
Configurable Hardware Core for IoT Object Detection. FUTURE INTERNET 2021. [DOI: 10.3390/fi13110280] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Object detection is an important task for many applications, like transportation, security, and medical applications. Many of these applications are needed on edge devices to make local decisions. Therefore, it is necessary to provide low-cost, fast solutions for object detection. This work proposes a configurable hardware core on a field-programmable gate array (FPGA) for object detection. The configurability of the core allows its deployment on target devices with diverse hardware resources. The object detection accelerator is based on YOLO, for its good accuracy at moderate computational complexity. The solution was applied to the design of a core to accelerate the Tiny-YOLOv3, based on a CNN developed for constrained environments. However, it can be applied to other YOLO versions. The core was integrated into a full system-on-chip solution and tested with the COCO dataset. It achieved a performance from 7 to 14 FPS in a low-cost ZYNQ7020 FPGA, depending on the quantization, with an accuracy reduction from 2.1 to 1.4 points of mAP50.
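The quantization trade-off mentioned in this abstract (lower precision buys FPGA throughput at a small mAP cost) can be sketched with a uniform symmetric quantizer, the common scheme for mapping CNN weights such as Tiny-YOLOv3's onto fixed-point hardware. This is an illustrative textbook quantizer, not necessarily the core's exact scheme.

```python
def quantize_symmetric(weights, bits=8):
    """Uniform symmetric quantization of a list of float weights to
    `bits`-wide signed integers, plus the scale needed to map back."""
    qmax = (1 << (bits - 1)) - 1  # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integers."""
    return [v * scale for v in q]
```

Fewer bits shrink multipliers and memory on the FPGA; the rounding error (bounded by half a scale step per weight) is what shows up as the one-to-two-point mAP50 reduction reported above.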
Collapse
|
44
|
O'Byrne C, Abbas A, Korot E, Keane PA. Automated deep learning in ophthalmology: AI that can build AI. Curr Opin Ophthalmol 2021; 32:406-412. [PMID: 34231529 DOI: 10.1097/icu.0000000000000779] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
PURPOSE OF REVIEW The purpose of this review is to describe the current status of automated deep learning in healthcare and to explore and detail the development of these models using commercially available platforms. We highlight key studies demonstrating the effectiveness of this technique and discuss current challenges and future directions of automated deep learning. RECENT FINDINGS There are several commercially available automated deep learning platforms. Although specific features differ between platforms, they utilise the common approach of supervised learning. Ophthalmology is an exemplar speciality in the area, with a number of recent proof-of-concept studies exploring classification of retinal fundus photographs, optical coherence tomography images and indocyanine green angiography images. Automated deep learning has also demonstrated impressive results in other specialities such as dermatology, radiology and histopathology. SUMMARY Automated deep learning allows users without coding expertise to develop deep learning algorithms. It is rapidly establishing itself as a valuable tool for those with limited technical experience. Despite residual challenges, it offers considerable potential in the future of patient management, clinical research and medical education. VIDEO ABSTRACT http://links.lww.com/COOP/A44.
Collapse
Affiliation(s)
- Ciara O'Byrne
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Trinity College School of Medicine, Dublin, Ireland
| | - Abdallah Abbas
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- University College London Medical School, London, UK
| | - Edward Korot
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Byers Eye Institute, Stanford University, Stanford, California, USA
| | - Pearse A Keane
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation, London, UK
| |
Collapse
|
45
|
Characterization of task response time in a fog-enabled IoT network using queueing models with general service times. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2021. [DOI: 10.1016/j.jksuci.2021.09.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
46
|
Green IoT and Edge AI as Key Technological Enablers for a Sustainable Digital Transition towards a Smart Circular Economy: An Industry 5.0 Use Case. SENSORS 2021; 21:s21175745. [PMID: 34502637 PMCID: PMC8434294 DOI: 10.3390/s21175745] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 08/20/2021] [Accepted: 08/23/2021] [Indexed: 02/05/2023]
Abstract
The Internet of Things (IoT) can help pave the way to the circular economy and a more sustainable world by enabling the digitalization of many operations and processes, such as water distribution, preventive maintenance, and smart manufacturing. Paradoxically, although IoT technologies and paradigms such as edge computing have huge potential for the digital transition towards sustainability, they are not yet contributing to the sustainable development of the IoT sector itself. In fact, the sector has a significant carbon footprint due to its use of scarce raw materials and its energy consumption in manufacturing, operating, and recycling processes. To tackle these issues, the Green IoT (G-IoT) paradigm has emerged as a research area aimed at reducing this carbon footprint; however, its sustainability vision collides directly with the advent of Edge Artificial Intelligence (Edge AI), which consumes additional energy. This article addresses this problem by exploring the different aspects that impact the design and development of Edge-AI G-IoT systems. Moreover, it presents a practical Industry 5.0 use case that illustrates the concepts analyzed throughout the article: a smart workshop that seeks to improve operator safety and operation tracking. The application makes use of a mist computing architecture composed of AI-enabled IoT nodes. After describing the application case, we evaluate its energy consumption and analyze the impact its carbon footprint may have in different countries. Overall, this article provides guidelines to help future developers face the challenges of creating the next generation of Edge-AI G-IoT systems.
Collapse
|
47
|
On IoT-Friendly Skewness Monitoring for Skewness-Aware Online Edge Learning. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11167461] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Machine learning techniques generally require or assume balanced datasets; with highly skewed data, a machine learning system may never function properly no matter how carefully its parameters are tuned. A common remedy is therefore to pre-process the data (e.g., with a log transformation) before applying machine learning to real-world problems. This pre-processing strategy, however, cannot be employed for online machine learning, especially in edge computing, because the continuous data flow can hardly be foreseen or stored on IoT devices at the edge. It is thus crucial and valuable to enable skewness monitoring in real time. Unfortunately, there is a surprising gap between practitioners' needs and scientific research on running statistics for monitoring real-time skewness, not to mention the lack of suitable remedies for skewed data at runtime. Inspired by Welford's algorithm, the most efficient approach to calculating running variance, this research developed efficient calculation methods for three versions of running skewness. These methods can conveniently be implemented as skewness-monitoring modules affordable for IoT devices in different edge learning scenarios. Such IoT-friendly skewness monitoring ultimately acts as a cornerstone for developing the research field of skewness-aware online edge learning. Having initially validated the usefulness and significance of skewness awareness in edge learning implementations, we also argue that joint research efforts from the relevant communities are needed to advance this promising field.
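A Welford-style running skewness monitor can be sketched from the standard one-pass update formulas for the second and third central moments. The paper derives three variants; the sketch below is the textbook single-pass version, shown only to illustrate the constant-memory, per-sample update that makes such monitoring affordable on IoT devices.

```python
import math

class RunningSkewness:
    """One-pass running skewness in the spirit of Welford's algorithm:
    each update touches only the count, mean, and 2nd/3rd central
    moments (m2, m3), so memory stays constant regardless of stream length."""

    def __init__(self):
        self.n, self.mean, self.m2, self.m3 = 0, 0.0, 0.0, 0.0

    def update(self, x):
        n1 = self.n
        self.n += 1
        delta = x - self.mean
        delta_n = delta / self.n
        term1 = delta * delta_n * n1
        self.mean += delta_n
        # m3 must be updated before m2, since its update uses the old m2
        self.m3 += term1 * delta_n * (self.n - 2) - 3 * delta_n * self.m2
        self.m2 += term1

    def skewness(self):
        """Sample skewness g1 = sqrt(n) * m3 / m2^(3/2)."""
        if self.m2 == 0:
            return 0.0
        return math.sqrt(self.n) * self.m3 / self.m2 ** 1.5
```

Because the state is just four numbers, the monitor can run on a microcontroller and trigger a runtime remedy (or model switch) whenever the skewness of the incoming data stream drifts past a threshold.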
Collapse
|
48
|
Abstract
Morphological operators are nonlinear transformations commonly used in image processing. Their theoretical foundation is based on lattice theory, and it is a well-known result that a large class of image operators can be expressed in terms of two basic ones, the erosions and the dilations. In practice, useful operators can be built by combining these two operators, and the new operators can be further combined to implement more complex transformations. The possibility of implementing a compact combination that performs a complex transformation of images is particularly appealing in resource-constrained hardware scenarios. However, finding a proper combination may require a considerable trial-and-error effort. This difficulty has motivated the development of machine-learning-based approaches for designing morphological image operators. In this work, we present an overview of this topic, divided in three parts. First, we review and discuss the representation structure of morphological image operators. Then we address the problem of learning morphological image operators from data, and how representation manifests in the formulation of this problem as well as in the learned operators. In the last part we focus on recent morphological image operator learning methods that take advantage of deep-learning frameworks. We close with discussions and a list of prospective future research directions.
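The two basic operators and their composition can be written compactly for binary images represented as sets of foreground pixel coordinates. This is a minimal set-based sketch (using a symmetric structuring element, so reflection can be ignored) of how complex operators such as opening are built from erosions and dilations.

```python
def dilate(image, se):
    """Binary dilation: union of the structuring element `se`
    translated to every foreground pixel (pixels as (row, col) sets)."""
    return {(r + dr, c + dc) for (r, c) in image for (dr, dc) in se}

def erode(image, se):
    """Binary erosion: keep a pixel only if every translate of `se`
    anchored there lands on the foreground."""
    return {(r, c) for (r, c) in image
            if all((r + dr, c + dc) in image for (dr, dc) in se)}

def opening(image, se):
    """Opening = erosion followed by dilation; removes structures
    smaller than the structuring element."""
    return dilate(erode(image, se), se)
```

Applied with a cross-shaped element, opening erases an isolated noise pixel while preserving the core of a solid block, which is the kind of compact composed transformation that is attractive in resource-constrained hardware.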
Collapse
|
49
|
Huč A, Šalej J, Trebar M. Analysis of Machine Learning Algorithms for Anomaly Detection on Edge Devices. SENSORS (BASEL, SWITZERLAND) 2021; 21:4946. [PMID: 34300686 PMCID: PMC8309800 DOI: 10.3390/s21144946] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 07/11/2021] [Accepted: 07/16/2021] [Indexed: 11/16/2022]
Abstract
The Internet of Things (IoT) consists of small devices or networks of sensors that permanently generate huge amounts of data. These devices usually have limited computing power or memory, so raw data are transferred to central systems or the cloud for analysis. Lately, moving intelligence to the IoT has become feasible, with machine learning (ML) running on edge devices. The aim of this study is to provide an experimental analysis of processing a large imbalanced dataset (DS2OS), split into a training dataset (80%) and a test dataset (20%). The training dataset was reduced by randomly selecting smaller numbers of samples to create new datasets Di (i = 1, 2, 5, 10, 15, 20, 40, 60, 80%). These were then used with several machine learning algorithms to identify the size at which the performance metrics saturate and classification results stop improving, with an F1 score of 0.95 or higher; this occurred at 20% of the training dataset. Two further solutions for reducing the number of samples while providing a balanced dataset are given. In the first, datasets DRi consist of all anomalous samples in seven classes plus a reduced majority class ('NL') with i = 0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20 percent of randomly selected samples. In the second, datasets DCi are generated from representative samples determined by clustering the training dataset. All three dataset reduction methods showed comparable performance. Further evaluation of training times and memory usage on a Raspberry Pi 4 shows that ML algorithms can be run with limited-size datasets on edge devices.
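The DR-style reduction described above (keep every anomalous sample, randomly subsample the majority 'NL' class) amounts to simple random undersampling. The sketch below illustrates that idea; the function name and seeding are illustrative, not the paper's code.

```python
import random

def undersample_majority(samples, labels, majority_label, keep_fraction, seed=42):
    """Keep all minority-class samples and a random `keep_fraction` of
    the majority class, yielding a smaller, more balanced training set."""
    rng = random.Random(seed)  # fixed seed for reproducible subsets
    majority = [i for i, y in enumerate(labels) if y == majority_label]
    minority = [i for i, y in enumerate(labels) if y != majority_label]
    kept = rng.sample(majority, int(len(majority) * keep_fraction))
    idx = sorted(minority + kept)
    return [samples[i] for i in idx], [labels[i] for i in idx]
```

Varying `keep_fraction` over the i values listed in the abstract produces the family of DRi datasets whose size/performance trade-off the study measures on the Raspberry Pi 4.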
Collapse
Affiliation(s)
| | | | - Mira Trebar
- Faculty of Computer and Information Science, University of Ljubljana, Večna Pot 113, SI-1000 Ljubljana, Slovenia; (A.H.); (J.Š.)
| |
Collapse
|
50
|
Abbas ZH, Ali Z, Abbas G, Jiao L, Bilal M, Suh DY, Piran MJ. Computational Offloading in Mobile Edge with Comprehensive and Energy Efficient Cost Function: A Deep Learning Approach. SENSORS 2021; 21:s21103523. [PMID: 34069364 PMCID: PMC8158712 DOI: 10.3390/s21103523] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 05/08/2021] [Accepted: 05/13/2021] [Indexed: 11/16/2022]
Abstract
In mobile edge computing (MEC), partial computational offloading can reduce the energy consumption and service delay of user equipment (UE) by dividing a single task into components, some of which execute locally on the UE while the remaining ones are offloaded to a mobile edge server (MES). In this paper, we investigate partial offloading in MEC using a supervised deep learning approach. The proposed technique, the comprehensive and energy-efficient deep learning-based offloading technique (CEDOT), intelligently selects both the offloading policy and the size of each task component to reduce the service delay and energy consumption of UEs. Deep learning is used to find, simultaneously, the best partitioning of a single task together with the best offloading policy. The deep neural network (DNN) is trained on a comprehensive dataset generated from our mathematical model; because evaluating that model is computationally complex, the trained DNN replaces it at decision time, minimizing runtime complexity and computation. We propose a comprehensive cost function that depends on various delays, energy consumption, radio resources, and computation resources, and also accounts for the energy and delay incurred by the task-division process itself in partial offloading. Prior work does not consider task partitioning jointly with the offloading policy, and hence ignores the time and energy consumed by the task-division process in the cost function. The proposed work considers all these parameters in the cost function and generates a comprehensive training dataset; once the dataset is available, the trained DNN provides fast decision making with low complexity and energy consumption.
Simulation results demonstrate the superior performance of the proposed technique, with high DNN accuracy in deciding the offloading policy and task partitioning at minimum delay and energy consumption for the UE. More than 70% accuracy of the trained DNN is achieved with the comprehensive training dataset. The simulation results also show that the DNN's accuracy remains constant when the UEs are moving, meaning that offloading-policy and partitioning decisions are not affected by UE mobility.
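The shape of such a delay/energy cost function, and the exhaustive search a trained DNN would replace, can be sketched as follows. All parameters, the parallel-execution delay model, and the weighting are hypothetical illustrations, not CEDOT's exact formulation.

```python
def total_cost(alpha, task_bits, f_local, f_mes, rate, p_tx, p_cpu,
               w_delay=0.5, w_energy=0.5, cycles_per_bit=1000):
    """Hypothetical cost of offloading a fraction `alpha` of a task:
    local compute runs in parallel with transmission plus remote compute;
    only UE-side energy (CPU + radio) is counted."""
    local_bits = (1 - alpha) * task_bits
    t_local = local_bits * cycles_per_bit / f_local
    t_tx = alpha * task_bits / rate
    t_mes = alpha * task_bits * cycles_per_bit / f_mes
    delay = max(t_local, t_tx + t_mes)      # parallel execution paths
    energy = p_cpu * t_local + p_tx * t_tx  # UE energy only
    return w_delay * delay + w_energy * energy

def best_partition(**kw):
    """Brute-force search over offloading fractions; a trained DNN
    would learn this parameters-to-alpha mapping instead of searching."""
    alphas = [i / 100 for i in range(101)]
    return min(alphas, key=lambda a: total_cost(a, **kw))
```

With a slow local CPU and a fast link, the search favors full offloading; the point of the DNN in the paper is to output such decisions directly, avoiding this per-task search on the UE.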
Collapse
Affiliation(s)
- Ziaul Haq Abbas
- Faculty of Electrical Engineering, GIK Institute of Engineering Sciences and Technology, Topi 23640, Pakistan;
| | - Zaiwar Ali
- Telecommunications and Networking Research Center, GIK Institute of Engineering Sciences and Technology, Topi 23640, Pakistan;
| | - Ghulam Abbas
- Faculty of Computer Science and Engineering, GIK Institute of Engineering Sciences and Technology, Topi 23640, Pakistan;
| | - Lei Jiao
- Department of Information and Communication Technology, University of Agder (UiA), 4898 Grimstad, Norway;
| | - Muhammad Bilal
- Department of Computer Engineering, Hankuk University of Foreign Studies, Yongin-si 17035, Korea
- Correspondence: (M.B.); (D.-Y.S.)
| | - Doug-Young Suh
- Department of Electronics and Software Convergence, Kyung Hee University, Yongin-si 17035, Korea
- Correspondence: (M.B.); (D.-Y.S.)
| | - Md. Jalil Piran
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea;
| |
Collapse
|