1
El Marhraoui Y, Bouilland S, Boukallel M, Anastassova M, Ammi M. CNN-Based Self-Attention Weight Extraction for Fall Event Prediction Using Balance Test Score. Sensors (Basel) 2023; 23:9194. [PMID: 38005580] [PMCID: PMC10675741] [DOI: 10.3390/s23229194]
Abstract
Injury, hospitalization, and even death are common consequences of falling for elderly people. Therefore, early and robust identification of people at risk of recurrent falling is crucial from a preventive point of view. This study aims to evaluate the effectiveness of an interpretable semi-supervised approach in identifying individuals at risk of falls by using the data provided by ankle-mounted IMU sensors. Our method benefits from the cause-effect link between a fall event and balance ability to pinpoint the moments with the highest fall probability. This framework also has the advantage of training on unlabeled data, and one can exploit its interpretation capacities to detect the target while only using patient metadata, especially those in relation to balance characteristics. This study shows that a visual-based self-attention model is able to infer the relationship between a fall event and loss of balance by attributing high values of weight to moments where the vertical acceleration component of the IMU sensors exceeds 5 m/s² during an especially short period. This semi-supervised approach uses interpretable features to highlight the moments of the recording that may explain the score of balance, thus revealing the moments with the highest risk of falling. Our model allows for the detection of 71% of the possible falling risk events in a window of 1 s (500 ms before and after the target) when compared with threshold-based approaches. This type of framework plays a paramount role in reducing the costs of annotation in the case of fall prevention when using wearable devices. Overall, this adaptive tool can provide valuable data to healthcare professionals, and it can assist them in enhancing fall prevention efforts on a larger scale with lower costs.
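A minimal sketch of the kind of threshold-based baseline the abstract compares against: flag moments where the vertical acceleration magnitude exceeds 5 m/s², merging hits that land inside the same 1 s window (500 ms before and after). The function name and the merging rule are illustrative assumptions, not the authors' implementation.

```python
def flag_fall_risk(accel_z, fs, threshold=5.0, window_s=1.0):
    """Flag sample indices where |vertical acceleration| exceeds
    `threshold` (m/s^2). Hits closer together than half a window
    (500 ms at the defaults) are merged, so each burst counts as
    one candidate fall-risk event."""
    half = int(fs * window_s / 2)  # samples in half a window
    events = []
    for i, a in enumerate(accel_z):
        if abs(a) > threshold and (not events or i - events[-1] > half):
            events.append(i)
    return events
```

At a 10 Hz sampling rate, two spikes eight samples apart are reported as two events, while spikes inside the same half-window collapse into one.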
Affiliation(s)
- Youness El Marhraoui
- CLI Department, University of Paris 8, 93200 Saint-Denis, France;
- Laboratoire Analyse, Géométrie et Applications, University of Sorbonne Paris Nord, 93430 Villetaneuse, France
- Mehdi Boukallel
- Laboratory for Integration of Systems and Technology, CEA, 91120 Palaiseau, France; (M.B.); (M.A.)
- Margarita Anastassova
- Laboratory for Integration of Systems and Technology, CEA, 91120 Palaiseau, France; (M.B.); (M.A.)
- Mehdi Ammi
- CLI Department, University of Paris 8, 93200 Saint-Denis, France;
2
Zheng K, Li B, Li Y, Chang P, Sun G, Li H, Zhang J. Fall detection based on dynamic key points incorporating preposed attention. Math Biosci Eng 2023; 20:11238-11259. [PMID: 37322980] [DOI: 10.3934/mbe.2023498]
Abstract
Accidental falls pose a significant threat to the elderly population, and accurate fall detection from surveillance videos can significantly reduce the negative impact of falls. Although most fall detection algorithms based on video deep learning focus on training and detecting human posture or key points in pictures or videos, we have found that the human pose-based model and key points-based model can complement each other to improve fall detection accuracy. In this paper, we propose a preposed attention capture mechanism for images that will be fed into the training network, and a fall detection model based on this mechanism. We accomplish this by fusing the human dynamic key point information with the original human posture image. We first propose the concept of dynamic key points to account for incomplete pose key point information in the fall state. We then introduce an attention expectation that predicates the original attention mechanism of the depth model by automatically labeling dynamic key points. Finally, the depth model trained with human dynamic key points is used to correct the detection errors of the depth model with raw human pose images. Our experiments on the Fall Detection Dataset and the UP-Fall Detection Dataset demonstrate that our proposed fall detection algorithm can effectively improve the accuracy of fall detection and provide better support for elderly care.
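One plausible reading of the dynamic-key-point idea, sketched under the assumption that joints missing in the fall state are carried forward from the most recent frame in which they were observed. This is an illustration of the completion step, not the authors' labeling procedure.

```python
def complete_keypoints(frames):
    """Fill key points missing in the fall state (None) with the most
    recent observation of that joint from earlier frames, yielding
    key points that stay defined across the whole sequence.
    `frames` is a list of {joint_name: (x, y) or None} dicts."""
    last = {}       # most recently seen position per joint
    completed = []
    for frame in frames:
        out = {}
        for joint, pt in frame.items():
            if pt is None:
                pt = last.get(joint)   # carry forward last seen position
            else:
                last[joint] = pt
            out[joint] = pt
        completed.append(out)
    return completed
```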
Affiliation(s)
- Kun Zheng
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Bin Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Yu Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Peng Chang
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Guangmin Sun
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Hui Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Junjie Zhang
- Smart Learning Institute, Beijing Normal University, Beijing 100875, China
3
Wu L, Huang C, Zhao S, Li J, Zhao J, Cui Z, Yu Z, Xu Y, Zhang M. Robust fall detection in video surveillance based on weakly supervised learning. Neural Netw 2023; 163:286-297. [PMID: 37086545] [DOI: 10.1016/j.neunet.2023.03.042]
Abstract
Fall event detection has been a research hotspot in recent years in the fields of medicine and health. Currently, vision-based fall detection methods have been considered the most promising methods due to their advantages of a non-contact characteristic and easy deployment. However, the existing vision-based fall detection methods mainly use supervised learning in model training and require much time and energy for data annotations. To address these limitations, this work proposes a detection method that uses a weakly supervised learning-based dual-modal network. The proposed method adopts a deep multiple instance learning framework to learn the fall events using weak labels. As a result, the proposed method does not require time-consuming fine-grained annotations. The final detection result of each video is obtained by integrating the information obtained from two streams of the dual-modal network using the proposed dual-modal fusion strategy. Experimental results on two public benchmark datasets and a proposed dataset demonstrate the superiority of the proposed method over the current state-of-the-art methods.
Affiliation(s)
- Lian Wu
- College of Computer Science and Technology, GuiZhou University, Guiyang, 550025, China; School of Mathematics and Big Data, GuiZhou Education University, Guiyang, 550018, China
- Chao Huang
- School of Cyber Science and Technology, Sun Yat-sen University (Shenzhen Campus), Shenzhen, 518107, China
- Shuping Zhao
- Faculty of Computer Science, Guangdong University of Technology, Guangzhou, 510006, China
- Jinkai Li
- School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, 518055, China
- Jianchuan Zhao
- College of Computer Science and Technology, GuiZhou University, Guiyang, 550025, China; School of Mathematics and Big Data, GuiZhou Education University, Guiyang, 550018, China
- Zhongwei Cui
- School of Mathematics and Big Data, GuiZhou Education University, Guiyang, 550018, China
- Zhen Yu
- School of Mathematics and Big Data, GuiZhou Education University, Guiyang, 550018, China
- Yong Xu
- School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, 518055, China
- Min Zhang
- School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, 518055, China
4
Chan HL, Ouyang Y, Chen RS, Lai YH, Kuo CC, Liao GS, Hsu WY, Chang YJ. Deep Neural Network for the Detections of Fall and Physical Activities Using Foot Pressures and Inertial Sensing. Sensors (Basel) 2023; 23:495. [PMID: 36617087] [PMCID: PMC9824659] [DOI: 10.3390/s23010495]
Abstract
Fall detection and physical activity (PA) classification are important health maintenance issues for the elderly and people with mobility dysfunctions. The literature review showed that most studies concerning fall detection and PA classification addressed these issues individually, and many were based on inertial sensing from the trunk and upper extremities. While shoes are common footwear in daily off-bed activities, most of the aforementioned studies did not focus much on shoe-based measurements. In this paper, we propose a novel footwear approach to detect falls and classify various types of PAs based on a convolutional neural network and recurrent neural network hybrid. The footwear-based detections using deep-learning technology were demonstrated to be efficient based on the data collected from 32 participants, each performing simulated falls and various types of PAs: fall detection with inertial measures had a higher F1-score than detection using foot pressures; the detections of dynamic PAs (jump, jog, walks) had higher F1-scores while using inertial measures, whereas the detections of static PAs (sit, stand) had higher F1-scores while using foot pressures; the combination of foot pressures and inertial measures was most efficient in detecting fall, static, and dynamic PAs.
Affiliation(s)
- Hsiao-Lung Chan
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Department of Biomedical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Neuroscience Research Center, Chang Gung Memorial Hospital, Linkou, Taoyuan 333, Taiwan
- Yuan Ouyang
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Department of Neurology, Chang Gung Memorial Hospital, Linkou, Taoyuan 333, Taiwan
- Rou-Shayn Chen
- Neuroscience Research Center, Chang Gung Memorial Hospital, Linkou, Taoyuan 333, Taiwan
- Department of Neurology, Chang Gung Memorial Hospital, Linkou, Taoyuan 333, Taiwan
- Yen-Hung Lai
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Cheng-Chung Kuo
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Guo-Sheng Liao
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Wen-Yen Hsu
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Ya-Ju Chang
- Neuroscience Research Center, Chang Gung Memorial Hospital, Linkou, Taoyuan 333, Taiwan
- School of Physical Therapy and Graduate Institute of Rehabilitation Science, College of Medicine, and Health Aging Research Center, Chang Gung University, Taoyuan 333, Taiwan
5
Chan PY, Tay A, Chen D, De Freitas M, Millet C, Nguyen-Duc T, Duke G, Lyall J, Nguyen JT, McNeil J, Hopper I. Ambient intelligence-based monitoring of staff and patient activity in the intensive care unit. Aust Crit Care 2023; 36:92-98. [PMID: 36244918] [DOI: 10.1016/j.aucc.2022.08.011]
Abstract
BACKGROUND Caregiver workload in the ICU setting is difficult to quantify numerically. Ambient intelligence, which uses computer-vision-guided neural networks to continuously monitor multiple datapoints in video feeds, has become increasingly efficient at automatically tracking various aspects of human movement. OBJECTIVES To assess the feasibility of using ambient intelligence to track and quantify all patient and caregiver activity within a bedspace over the course of an ICU admission, and to establish patient-specific factors, as well as environmental factors such as time of day, that might contribute to an increased workload for ICU workers. METHODS 5000 images were manually annotated and then used to train You Only Look Once (YOLOv4), an open-source computer vision algorithm. Patient motion and caregiver activity were then compared between these patients. RESULTS The algorithm was deployed on 14 patients, comprising 1,762,800 frames of new, untrained data. There was a strong correlation between the number of caregivers in the room and the standardized movement of the patient (p < 0.0001), with more caregivers associated with more movement. There were significant differences in caregiver activity by time of day (p < 0.05), HDU vs. ICU status (p < 0.05), delirious vs. non-delirious patients (p < 0.05), and intubated vs. non-intubated patients (p < 0.05). Caregiver activity was lowest between 0400 and 0800 (average 0.71 ± 0.026 caregivers per hour), with statistically significant differences in activity compared to 0800-2400 (p < 0.05). Caregiver activity was highest between 1200 and 1600 (1.02 ± 0.031 caregivers per hour), with a statistically significant difference compared to activity from 1600 to 0800 (p < 0.05). The three most dominant predictors of worker activity were patient motion (standardized dominance 78.6%), mechanical ventilation (standardized dominance 7.9%), and delirium (standardized dominance 6.2%).
CONCLUSION Ambient intelligence could potentially be used to derive a single standardized metric that could be applied to patients to illustrate their overall workload. This could be used to predict workflow demands for better staff deployment, to monitor caregiver workload, and potentially as a tool to predict burnout.
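As an illustration of how per-frame person counts could be reduced to the caregivers-per-hour figures reported above, here is a hedged sketch; the aggregation function is an assumption (the study's actual pipeline ran YOLOv4 on video frames to produce the counts).

```python
from collections import defaultdict

def caregivers_per_hour(detections):
    """Average per-frame caregiver count, bucketed by hour of day.
    `detections` is a list of (hour_of_day, caregiver_count) pairs,
    one pair per sampled frame."""
    totals = defaultdict(lambda: [0, 0])  # hour -> [sum of counts, frames]
    for hour, count in detections:
        totals[hour][0] += count
        totals[hour][1] += 1
    return {h: s / n for h, (s, n) in totals.items()}
```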
Affiliation(s)
- Peter Y Chan
- Department of Intensive Care Medicine, Eastern Health, Melbourne, Victoria, Australia; School of Public Health and Prevention Medicine, Monash University, Melbourne, Victoria, Australia.
- Andrew Tay
- Department of Intensive Care Medicine, Eastern Health, Melbourne, Victoria, Australia
- David Chen
- Department of Intensive Care Medicine, Eastern Health, Melbourne, Victoria, Australia
- Maria De Freitas
- Department of Intensive Care Medicine, Eastern Health, Melbourne, Victoria, Australia
- Coralie Millet
- Department of Intensive Care Medicine, Eastern Health, Melbourne, Victoria, Australia
- Thanh Nguyen-Duc
- School of Public Health and Prevention Medicine, Monash University, Melbourne, Victoria, Australia
- Graeme Duke
- Department of Intensive Care Medicine, Eastern Health, Melbourne, Victoria, Australia
- Jessica Lyall
- Department of Intensive Care Medicine, Eastern Health, Melbourne, Victoria, Australia
- John T Nguyen
- School of Public Health and Prevention Medicine, Monash University, Melbourne, Victoria, Australia
- John McNeil
- School of Public Health and Prevention Medicine, Monash University, Melbourne, Victoria, Australia
- Ingrid Hopper
- School of Public Health and Prevention Medicine, Monash University, Melbourne, Victoria, Australia
6
Anwary AR, Rahman MA, Muzahid AJM, Ul Ashraf AW, Patwary M, Hussain A. Deep Learning enabled Fall Detection exploiting Gait Analysis. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:4683-4686. [PMID: 36086537] [DOI: 10.1109/embc48229.2022.9871964]
Abstract
Fall-associated injuries often result not only in increased medical, social, and care costs but also in loss of mobility, impaired chronic health, and even a potential risk of fatality. Because of elderly population growth, falling is one of the major global public health problems. To address this issue, we present a Deep Learning enabled Fall Detection (DLFD) method exploiting gait analysis. In more detail, we first propose a framework for a fall detection system. Second, we discuss the proposed DLFD method, which exploits fall and non-fall RGB video to extract gait features using the MediaPipe framework, applies a normalization algorithm, and classifies using a bi-directional Long Short-Term Memory (bi-LSTM) model. Finally, the model is tested on three public datasets of 434x2 videos (more than 1 million frames), which consist of different activities and varieties of falls. The experimental results show that the model achieves an accuracy of 96.35% and reveal the effectiveness of the proposal. This could play a significant role in alleviating the falls problem by immediately alerting emergency and relevant teams to take necessary actions. This would speed up assistance, reduce the risk of prolonged injury, and save lives.
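The normalization step in the pipeline above might look like the following sketch, which rescales each (x, y) landmark sequence into [0, 1] per coordinate. The exact algorithm the authors used is not specified, so this min-max variant is an assumption.

```python
def normalize_landmarks(seq):
    """Min-max scale a sequence of (x, y) gait landmarks into [0, 1]
    per coordinate, so features are comparable across subjects and
    camera setups. Degenerate axes (constant value) map to 0.0."""
    xs = [p[0] for p in seq]
    ys = [p[1] for p in seq]

    def scale(v, lo, hi):
        return 0.0 if hi == lo else (v - lo) / (hi - lo)

    return [(scale(x, min(xs), max(xs)), scale(y, min(ys), max(ys)))
            for x, y in seq]
```

The normalized sequences would then be stacked into fixed-length windows before being fed to the bi-LSTM classifier.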
|
Vision-based human fall detection systems using deep learning: A review. Comput Biol Med 2022; 146:105626. [DOI: 10.1016/j.compbiomed.2022.105626]
8
Human fall detection and activity monitoring: a comparative analysis of vision-based methods for classification and detection techniques. Soft Comput 2022. [DOI: 10.1007/s00500-021-06717-x]
9
Imitating Emergencies: Generating Thermal Surveillance Fall Data Using Low-Cost Human-like Dolls. Sensors (Basel) 2022; 22:825. [PMID: 35161571] [PMCID: PMC8840151] [DOI: 10.3390/s22030825]
Abstract
Outdoor fall detection, in the context of accidents such as falling from heights or into water, is a research area that has not received as much attention as other automated surveillance areas. Gathering sufficient data for developing deep-learning models for such applications has also proven not to be a straightforward task. Normally, footage of volunteers falling is used to provide data, but that can be a complicated and dangerous process. In this paper, we propose an application of thermal imaging to a low-cost rubber doll falling in a harbor, to simulate real emergencies. We achieve thermal signatures similar to a human’s on different parts of the doll’s body. The change of these thermal signatures over time is measured, and their stability is verified. We demonstrate that, even with the doll’s size and weight differences, the produced videos of falls have motion and appearance similar to what is expected from real people. We show that the captured thermal doll data can be used for the real-world application of pedestrian detection by running the captured data through a state-of-the-art object detector trained on real people. An average confidence score of 0.730 is achieved, compared to a confidence score of 0.761 when using footage of real people falling. The fall sequences captured using the doll can thus be used as a substitute for sequences of real people.
10
Predicting Human Motion Signals Using Modern Deep Learning Techniques and Smartphone Sensors. Sensors (Basel) 2021; 21:8270. [PMID: 34960368] [PMCID: PMC8703955] [DOI: 10.3390/s21248270]
Abstract
The global adoption of smartphone technology affords many conveniences, and not surprisingly, healthcare applications using wearable sensors like smartphones have received much attention. Among the various potential applications and research related to healthcare, recent studies have addressed recognizing human activities and characterizing human motions, often with wearable sensors whose signals generally take the form of time series. In most studies, these sensor signals are used after pre-processing, e.g., by converting them into an image format rather than using the raw signals directly. Several methods have been used for converting time series data to image formats, such as spectrograms, raw plots, and recurrence plots. In this paper, we address the healthcare task of predicting human motion signals obtained from sensors attached to persons. We convert the motion signals into image format with the recurrence plot method and use them as input to a deep learning model. For predicting subsequent motion signals, we utilize a recently introduced deep learning model combining neural networks and the Fourier transform: the Fourier neural operator. The model can be viewed as a Fourier-transform-based extension of a convolutional neural network (CNN), and in our experiments we compare its results to those of a CNN model. The results of the proposed method show better performance than those of the CNN model, and, furthermore, we confirm that it can be utilized to detect potential accidental falls more quickly via the predicted motion signals.
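The recurrence-plot conversion described above can be sketched directly from its definition: R[i][j] = 1 when |x_i - x_j| < eps. The threshold parameter value is an assumption; in practice the resulting binary matrix is rendered as an image and fed to the CNN or Fourier neural operator.

```python
def recurrence_plot(signal, eps):
    """Binary recurrence matrix of a 1-D signal:
    R[i][j] = 1 when |signal[i] - signal[j]| < eps, else 0."""
    n = len(signal)
    return [[1 if abs(signal[i] - signal[j]) < eps else 0
             for j in range(n)]
            for i in range(n)]
```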
11
Chouai M, Dolezel P, Stursa D, Nemec Z. New End-to-End Strategy Based on DeepLabv3+ Semantic Segmentation for Human Head Detection. Sensors (Basel) 2021; 21:5848. [PMID: 34502738] [PMCID: PMC8434303] [DOI: 10.3390/s21175848]
Abstract
In the field of computer vision, object detection consists of automatically finding objects in images and giving their positions. The most common fields of application are safety systems (pedestrian detection, identification of behavior) and control systems. Another important application is head/person detection, which is primary material for road safety, rescue, surveillance, etc. In this study, we developed a new approach based on two parallel DeepLabv3+ networks to improve the performance of the person detection system. For the implementation of our semantic segmentation model, we established a working methodology with two types of ground truths extracted from the bounding boxes given by the original ground truths. The approach has been evaluated on our two private datasets as well as on a public dataset. To show the performance of the proposed system, a comparative analysis was carried out against two state-of-the-art deep learning semantic segmentation models: SegNet and U-Net. By achieving 99.14% global accuracy, the results demonstrate that the developed strategy can be an efficient way to build a deep neural network model for semantic segmentation. This strategy can be used not only for the detection of the human head but also in several other semantic segmentation applications.
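Since the strategy derives its segmentation ground truths from bounding boxes, the reverse mapping, recovering a detection-style box from a predicted binary mask, is the natural post-processing step. The following is an illustration of that step, not the authors' implementation.

```python
def mask_to_bbox(mask):
    """Derive a detection-style bounding box (x_min, y_min, x_max, y_max)
    from a binary segmentation mask given as a list of rows of 0/1.
    Returns None for an empty mask."""
    coords = [(x, y) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    if not coords:
        return None
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return (min(xs), min(ys), max(xs), max(ys))
```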
12
Survey and Synthesis of State of the Art in Driver Monitoring. Sensors (Basel) 2021; 21:5558. [PMID: 34450999] [PMCID: PMC8402294] [DOI: 10.3390/s21165558]
Abstract
Road vehicle accidents are mostly due to human errors, and many such accidents could be avoided by continuously monitoring the driver. Driver monitoring (DM) is a topic of growing interest in the automotive industry, and it will remain relevant for all vehicles that are not fully autonomous, and thus for decades for the average vehicle owner. The present paper focuses on the first step of DM, which consists of characterizing the state of the driver. Since DM will be increasingly linked to driving automation (DA), this paper presents a clear view of the role of DM at each of the six SAE levels of DA. This paper surveys the state of the art of DM, and then synthesizes it, providing a unique, structured, polychotomous view of the many characterization techniques of DM. Informed by the survey, the paper characterizes the driver state along the five main dimensions—called here “(sub)states”—of drowsiness, mental workload, distraction, emotions, and under the influence. The polychotomous view of DM is presented through a pair of interlocked tables that relate these states to their indicators (e.g., the eye-blink rate) and the sensors that can access each of these indicators (e.g., a camera). The tables factor in not only the effects linked directly to the driver, but also those linked to the (driven) vehicle and the (driving) environment. They show, at a glance, to concerned researchers, equipment providers, and vehicle manufacturers (1) most of the options they have to implement various forms of advanced DM systems, and (2) fruitful areas for further research and innovation.
13
Bhattacharjee P, Biswas S. Smart walking assistant (SWA) for elderly care using an intelligent realtime hybrid model. Evolving Systems 2021. [DOI: 10.1007/s12530-021-09382-5]