1
Evaluation of Eligibility Criteria Relevance for the Purpose of IT-Supported Trial Recruitment: Descriptive Quantitative Analysis. JMIR Form Res 2024; 8:e49347. [PMID: 38294862 PMCID: PMC10867759 DOI: 10.2196/49347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/25/2023] [Revised: 09/28/2023] [Accepted: 11/22/2023] [Indexed: 02/01/2024]
Abstract
BACKGROUND Clinical trials (CTs) are crucial for medical research; however, they frequently fall short of the requisite number of participants who meet all eligibility criteria (EC). A clinical trial recruitment support system (CTRSS) can help identify potential participants by performing a search on a specific data pool. The accuracy of the search results is directly related to the quality of the data used for comparison. Data accessibility can present challenges, making it crucial to identify the necessary data for a CTRSS to query. Prior research has examined the data elements frequently used in CT EC but has not evaluated which criteria are actually used to search for participants. Although all EC must be met to enroll a person in a CT, not all criteria have the same importance when searching for potential participants in an existing data pool, such as an electronic health record, because some of the criteria are only relevant at the time of enrollment. OBJECTIVE In this study, we investigated which groups of data elements are relevant in practice for finding suitable participants and whether there are typical elements that are not relevant and can therefore be omitted. METHODS We asked trial experts and CTRSS developers to first categorize the EC of their CTs according to data element groups and then to classify them into 1 of 3 categories: necessary, complementary, and irrelevant. In addition, the experts assessed whether a criterion was documented (on paper or digitally) or whether it was information known only to the treating physicians or patients. RESULTS We reviewed 82 CTs with 1132 unique EC. Of these 1132 EC, 350 (30.9%) were considered necessary, 224 (19.8%) complementary, and 341 (30.1%) irrelevant. To identify the most relevant data elements, we introduced the data element relevance index (DERI).
This describes the percentage of studies in which the corresponding data element occurs and is also classified as necessary or complementary. We found that the query of "diagnosis" was relevant for finding participants in 79 (96.3%) of the CTs. This group was followed by "date of birth/age" with a DERI of 85.4% (n=70) and "procedure" with a DERI of 35.4% (n=29). CONCLUSIONS The distribution of data element groups in CTs has been heterogeneously described in previous works. Therefore, we recommend identifying the percentage of CTs in which data element groups can be found as a more reliable way to determine the relevance of EC. Only necessary and complementary criteria should be included in this DERI.
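As a sketch of how a DERI could be computed from categorized criteria, one can count the trials in which an element group is rated necessary or complementary. The data structure and trial names below are illustrative, not taken from the study:

```python
def deri(trials, element_group):
    """Data element relevance index: percentage of trials in which
    `element_group` occurs with a 'necessary' or 'complementary' rating.

    `trials` maps a trial id to {element_group: category}; the structure
    is an assumption made for this sketch.
    """
    relevant = sum(
        1 for criteria in trials.values()
        if criteria.get(element_group) in ("necessary", "complementary")
    )
    return 100.0 * relevant / len(trials)

trials = {
    "CT-01": {"diagnosis": "necessary", "procedure": "irrelevant"},
    "CT-02": {"diagnosis": "complementary"},
    "CT-03": {"procedure": "necessary"},
    "CT-04": {"diagnosis": "necessary", "procedure": "complementary"},
}
print(deri(trials, "diagnosis"))  # -> 75.0 (rated relevant in 3 of 4 trials)
```

Irrelevant ratings are deliberately excluded from the numerator, matching the recommendation that only necessary and complementary criteria count toward the index.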
2
A Preliminary Study on the Effectiveness of K-Nearest Neighbor Algorithm Based on Local Mean for Classification of Bioinformatics Data. Stud Health Technol Inform 2023; 308:410-416. [PMID: 38007767 DOI: 10.3233/shti230867] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/28/2023]
Abstract
The preliminary classification of biological class data is of great importance for bioinformatics. One can quickly classify object data by comparing their existing features with known traits. The k-nearest neighbor algorithm is easy to apply in this field, but its drawbacks make it less meaningful to improve the efficiency of the algorithm by simply changing the distance model, so this study uses a local mean-based k-nearest neighbor classifier and compares the prediction accuracy of six different distance models. The prediction accuracies in the experimental results were all greater than 70%, and for all distance models the highest accuracy was achieved on different data sets with K=2; the prediction accuracy of the Minkowski distance with different parameters showed the highest volatility in the test, and the experimental results can serve as a reference for related practitioners.
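The local mean-based k-NN rule summarized above can be sketched as follows; this is a generic illustration (Minkowski distance with a tunable parameter p), not the authors' implementation:

```python
def minkowski(a, b, p=2):
    # p = 2 gives Euclidean distance; other p values give the
    # Minkowski family of distance models compared in the study
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def lmknn_predict(X, y, query, k=2, p=2):
    """Local mean-based k-NN: for each class, average that class's k
    nearest neighbours of the query and assign the class whose local
    mean vector is closest. A minimal sketch with invented data below."""
    best_class, best_dist = None, float("inf")
    for cls in set(y):
        pts = [x for x, lbl in zip(X, y) if lbl == cls]
        pts.sort(key=lambda x: minkowski(x, query, p))
        nearest = pts[:k]
        mean = [sum(col) / len(nearest) for col in zip(*nearest)]
        d = minkowski(mean, query, p)
        if d < best_dist:
            best_class, best_dist = cls, d
    return best_class

X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
y = ["A", "A", "A", "B", "B", "B"]
print(lmknn_predict(X, y, (0.15, 0.15), k=2))  # prints "A"
```

Averaging the per-class neighbours before measuring distance is what distinguishes this rule from plain k-NN, and it is less sensitive to a single outlying neighbour.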
3
Characterizing the Differences in Descriptions of Violence on Reddit During the COVID-19 Pandemic. JOURNAL OF INTERPERSONAL VIOLENCE 2023; 38:9290-9314. [PMID: 36987388 PMCID: PMC10064198 DOI: 10.1177/08862605231163885] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/19/2023]
Abstract
Concerns have been raised over experiences of violence such as domestic violence (DV) and intimate partner violence (IPV) during the COVID-19 pandemic. Social media platforms such as Reddit represent an alternative outlet for reporting experiences of violence where healthcare access has been limited. This study analyzed seven violence-related subreddits to investigate trends in different violence patterns from January 2018 to February 2022, with the aim of enhancing health-service providers' existing services and providing new perspectives for violence research. Specifically, we collected violence-related texts from Reddit using keyword searching and identified six major types with supervised machine learning classifiers: DV, IPV, physical violence, sexual violence, emotional violence, and nonspecific violence or others. The increase rate (IR) of each violence type was calculated and temporally compared across five phases of the pandemic: one pre-pandemic phase (Phase 0, before February 26, 2020) and four pandemic phases (Phases 1-4) with separation dates of June 17, 2020, September 7, 2020, and June 4, 2021. We found that the number of IPV-related posts increased most in the earliest phase, whereas the number for COVID-citing IPV was highest in the mid-pandemic phase. IRs for DV, IPV, and emotional violence also showed increases across all pandemic phases, with IRs of 26.9%, 58.8%, and 28.8%, respectively, from the pre-pandemic to the first pandemic phase. In the other three pandemic phases, all the IRs for these three types of violence were positive, though lower than in the first pandemic phase. The findings highlight the importance of identifying and providing help to those who suffer such violent experiences and support the role of social media monitoring as a means of informative surveillance for help-providing authorities and violence research groups.
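The increase rate used above is a simple percentage change in post volume relative to the pre-pandemic phase; the counts in this sketch are invented for illustration, not the study's data:

```python
def increase_rate(pre_count, phase_count):
    """Percentage change of a pandemic-phase post count relative to the
    pre-pandemic phase -- a sketch of the IR metric described above."""
    return 100.0 * (phase_count - pre_count) / pre_count

# With illustrative counts, a rise from 1000 to 1269 posts reproduces
# the 26.9% IR reported for DV in the first pandemic phase
print(round(increase_rate(1000, 1269), 1))  # -> 26.9
```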
4
Diagnosis of the Pneumatic Wheel Condition Based on Vibration Analysis of the Sprung Mass in the Vehicle Self-Diagnostics System. SENSORS (BASEL, SWITZERLAND) 2023; 23:2326. [PMID: 36850924 PMCID: PMC9965739 DOI: 10.3390/s23042326] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/31/2022] [Revised: 02/12/2023] [Accepted: 02/17/2023] [Indexed: 06/18/2023]
Abstract
This paper presents a method for the multi-criteria classification of data in terms of identifying pneumatic wheel imbalance on the basis of vehicle body vibrations in normal operation conditions. The paper uses an expert system based on search graphs that apply source features of objects and distances from points in the space of classified objects (the metric used). Rules generated for data obtained from tests performed under stationary and road conditions using a chassis dynamometer were used to develop the expert system. The recorded linear acceleration signals of the vehicle body were analyzed in the frequency domain for which the power spectral density was determined. The power field values for selected harmonics of the spectrum consistent with the angular velocity of the wheel were adopted for further analysis. In the developed expert system, the Kamada-Kawai model was used to arrange the nodes of the decision tree graph. Based on the developed database containing learning and testing data for each vehicle speed and wheel balance condition, the probability of the wheel imbalance condition was determined. As a result of the analysis, it was determined that the highest probability of identifying wheel imbalance equal to almost 100% was obtained in the vehicle speed range of 50 km/h to 70 km/h. This is known as the pre-resonance range in relation to the eigenfrequency of the wheel vibrations. As the vehicle speed increases, the accuracy of the data classification for identifying wheel imbalance in relation to the learning data decreases to 50% for the speed of 90 km/h.
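The central spectral step, estimating signal power at a harmonic matching the wheel's angular velocity, can be illustrated with a direct DFT projection at a single frequency. The sampling rate, wheel frequency, and synthetic acceleration signal are assumptions for the example, not values from the paper:

```python
import math

def power_at_frequency(signal, freq_hz, fs):
    """Power of `signal` at one frequency via direct DFT projection --
    a simplified stand-in for the power-spectral-density harmonics
    analysed in the paper."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / fs) for i, s in enumerate(signal))
    return (re * re + im * im) / n

fs = 200.0        # sampling rate in Hz (assumed)
wheel_hz = 8.0    # wheel rotation frequency, illustrative of ~55 km/h
t = [i / fs for i in range(400)]
# Synthetic body acceleration dominated by an imbalance component
accel = [0.5 * math.sin(2 * math.pi * wheel_hz * ti) for ti in t]

p_wheel = power_at_frequency(accel, wheel_hz, fs)
p_off = power_at_frequency(accel, 3.0, fs)
print(p_wheel > p_off)  # True: power concentrates at the wheel harmonic
```

An imbalanced wheel would raise the power at this harmonic relative to a balanced reference, which is the feature the expert system classifies.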
5
Interaction of Secure Cloud Network and Crowd Computing for Smart City Data Obfuscation. SENSORS (BASEL, SWITZERLAND) 2022; 22:7169. [PMID: 36236264 PMCID: PMC9572171 DOI: 10.3390/s22197169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/28/2022] [Revised: 09/05/2022] [Accepted: 09/07/2022] [Indexed: 06/16/2023]
Abstract
There can be many inherent issues in the process of managing cloud infrastructure and the cloud platform. The cloud platform manages cloud software, legality issues in making contracts, and the process of managing cloud software services and legal contract-based segmentation. In this paper, we tackle these issues directly with some feasible solutions. For these constraints, the Averaged One-Dependence Estimators (AODE) classifier and the SELECT Applicable Only to Parallel Server (SELECT-APSL ASA) method are proposed to separate the data related to the place. The AODE classifier is used to separate the data from smart city data based on the hybrid data obfuscation technique, which manages 50% of the raw data, while 50% of the hospital data is masked using the proposed transmission. The analysis of energy consumption before the cryptosystem shows about 71.66% of total packets delivered, compared with existing algorithms; after the cryptosystem, it shows 47.34% consumption compared to existing state-of-the-art algorithms. The average energy consumption decreased by 2.47% before data obfuscation and by 9.90% after data obfuscation. The makespan time decreased by 33.71% before data obfuscation and by 1.3% after data obfuscation, compared to existing state-of-the-art algorithms. These results show the strength of our methodology.
6
Applying the Properties of Neurons in Machine Learning: A Brain-like Neural Model with Interactive Stimulation for Data Classification. Brain Sci 2022; 12:brainsci12091191. [PMID: 36138927 PMCID: PMC9496749 DOI: 10.3390/brainsci12091191] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/15/2022] [Revised: 08/26/2022] [Accepted: 08/29/2022] [Indexed: 11/16/2022]
Abstract
Some neural models achieve outstanding results in image recognition, semantic segmentation and natural language processing. However, their classification performance on structured and small-scale datasets that do not involve feature extraction is worse than that of traditional algorithms, although they require more time to train. In this paper, we propose a brain-like neural model with interactive stimulation (NMIS) that focuses on data classification. It consists of a primary neural field and a senior neural field that play different cognitive roles. The former is used to correspond to real instances in the feature space, and the latter stores the category pattern. Neurons in the primary field exchange information through interactive stimulation and their activation is transmitted to the senior field via inter-field interaction, simulating the mechanisms of neuronal interaction and synaptic plasticity, respectively. The proposed NMIS is biologically plausible and does not involve complex optimization processes. Therefore, it exhibits better learning ability on small-scale and structured datasets than traditional BP neural networks. For large-scale data classification, a nearest neighbor NMIS (NN_NMIS), an optimized version of NMIS, is proposed to improve computational efficiency. Numerical experiments performed on some UCI datasets show that the proposed NMIS and NN_NMIS are significantly superior to some classification algorithms that are widely used in machine learning.
7
A hybrid feature selection model based on butterfly optimization algorithm: COVID-19 as a case study. EXPERT SYSTEMS 2022; 39:e12786. [PMID: 34511693 PMCID: PMC8420334 DOI: 10.1111/exsy.12786] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Received: 04/09/2021] [Revised: 06/26/2021] [Accepted: 07/12/2021] [Indexed: 06/13/2023]
Abstract
The need to evolve a novel feature selection (FS) approach was motivated by the need for a robust FS system, the time-consuming exhaustive search in traditional methods, and the favourable swarming behaviour of various optimization techniques. Many datasets have high dimensionality, and since not all features are crucial to the problem, this reduces an algorithm's accuracy and efficiency. This article presents a hybrid feature selection approach to address the low precision and slow convergence of the butterfly optimization algorithm (BOA). The proposed method combines BOA with particle swarm optimization (PSO) as a search methodology within a wrapper framework. In the proposed approach, BOA is initialized with a one-dimensional cubic map, and a non-linear parameter control technique is also implemented. To boost the basic BOA for global optimization, the PSO algorithm is combined with the butterfly optimization algorithm (BOAPSO). Twenty-five datasets are used to evaluate the proposed BOAPSO and determine its efficiency with three metrics: classification precision, the number of selected features, and computational time. A COVID-19 dataset has also been used to evaluate the proposed approach. Compared to previous approaches, the findings show the supremacy of BOAPSO for enhancing performance precision and minimizing the number of chosen features. Concerning accuracy, the experimental outcomes demonstrate that the proposed model converges rapidly and performs better than PSO, BOA, and GWO, with percentages of 91.07%, 87.2%, 87.8%, and 87.3%, respectively. Moreover, the proposed model's average number of selected features is 5.7, compared to 22.5, 18.05, and 23.1 for PSO, BOA, and GWO, respectively.
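Wrapper-based feature selection such as BOAPSO needs a fitness function that trades classification error against subset size; the abstract does not state the exact weighting used, so the alpha below is a conventional assumption from the FS literature:

```python
def fs_fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Wrapper-style fitness (lower is better): a weighted sum of the
    classification error and the relative size of the feature subset.
    alpha = 0.99 is a common convention, not BOAPSO's stated setting."""
    return alpha * error_rate + (1 - alpha) * (n_selected / n_total)

# With equal error, the smaller subset wins -- this is what drives the
# average selected-feature counts (5.7 vs 22.5) reported above
full = fs_fitness(0.10, n_selected=22, n_total=25)
slim = fs_fitness(0.10, n_selected=6, n_total=25)
print(slim < full)  # True
```

Each candidate subset produced by the swarm would be scored by training a classifier on just those features and plugging its error rate into this function.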
8
Use of Remote Structural Tap Testing Devices Deployed via Ground Vehicle for Health Monitoring of Transportation Infrastructure. SENSORS 2022; 22:s22041458. [PMID: 35214358 PMCID: PMC8880419 DOI: 10.3390/s22041458] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/15/2021] [Revised: 02/07/2022] [Accepted: 02/08/2022] [Indexed: 11/16/2022]
Abstract
Transportation infrastructure is an integral part of the world's overall functionality; however, current transportation infrastructure has aged since it was first developed and implemented. Consequently, given its condition, preservation has become a main priority for transportation agencies. Billions of dollars annually are required to maintain the United States' transportation system; however, with limited budgets the prioritization of maintenance and repairs is key. Structural Health Monitoring (SHM) methods can efficiently inform the prioritization of preservation efforts. This paper presents an acoustic monitoring SHM method, termed tap testing, which is used to detect signs of deterioration in structural/mechanical surfaces through nondestructive means. This method is proposed as a tool to assist bridge inspectors, who already utilize a costly form of SHM methodology when conducting inspections in the field. Challenges arise with this method of testing, especially when SHM device deployment is done by hand and when the results are based solely upon a given inspector's abilities. This type of monitoring solution is also, in general, only available to experts, and is associated with special cases that justify its cost. With the creation of a low-cost, cyber-physical system that interrogates and classifies the mechanical health of given surfaces, we lower the cost of SHM, decrease the challenges faced when conducting such tests, and provide communities with a solution that is adaptable to their needs. The authors of this paper created and tested a low-cost, interrogating robot that informs users of structural/mechanical defects. This research describes the further development, validation of, and experimentation with, a tap testing device that utilizes remote technology.
9
Understanding the Nature of Metadata: Systematic Review. J Med Internet Res 2022; 24:e25440. [PMID: 35014967 PMCID: PMC8790684 DOI: 10.2196/25440] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 11/02/2020] [Revised: 01/28/2021] [Accepted: 10/14/2021] [Indexed: 01/11/2023]
Abstract
Background Metadata are created to describe the corresponding data in a detailed and unambiguous way and are used for various applications in different research areas, for example, data identification and classification. However, a clear definition of metadata is crucial for further use. Unfortunately, extensive experience with the processing and management of metadata has shown that the term “metadata” and its use are not always unambiguous. Objective This study aimed to understand the definition of metadata and the challenges resulting from metadata reuse. Methods A systematic literature search was performed in this study following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines for reporting on systematic reviews. Five research questions were identified to streamline the review process, addressing metadata characteristics, metadata standards, use cases, and problems encountered. This review was preceded by a harmonization process to achieve a general understanding of the terms used. Results The harmonization process resulted in a clear set of definitions for metadata processing focusing on data integration. The following literature review was conducted by 10 reviewers with different backgrounds and using the harmonized definitions. This study included 81 peer-reviewed papers from the last decade after applying various filtering steps to identify the most relevant papers. The 5 research questions could be answered, resulting in a broad overview of the standards, use cases, problems, and corresponding solutions for the application of metadata in different research areas. Conclusions Metadata can be a powerful tool for identifying, describing, and processing information, but its meaningful creation is costly and challenging.
The presented harmonized definitions and the new schema have the potential to improve the classification and generation of metadata by creating a shared understanding of metadata and its context.
10
Monitoring Weeder Robots and Anticipating Their Functioning by Using Advanced Topological Data Analysis. Front Artif Intell 2021; 4:761123. [PMID: 34966892 PMCID: PMC8710805 DOI: 10.3389/frai.2021.761123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/19/2021] [Accepted: 11/17/2021] [Indexed: 11/22/2022]
Abstract
The present paper analyzes the topological content of the complex trajectories that autonomous weeder robots follow in operation. We show that the topological descriptors of these trajectories are affected by the robot environment as well as by the robot state with respect to maintenance operations. Most existing methodologies enabling efficient diagnosis are based on data analysis, and in particular on statistical quantities derived from the data. The present work explores an original approach that, instead of analyzing quantities derived from the data, analyzes the “shape” of the data, that is, the time-series topology based on persistent homology. We show that this procedure is able to extract valuable patterns that discriminate the trajectories the robot follows depending on the particular patch in which it operates, as well as to differentiate the robot's behavior before and after undergoing a maintenance operation. Although this is preliminary work and does not attempt to compare its performance with other existing technologies, it opens new perspectives by considering natural and simple descriptors based on the intrinsic information the data contain, with the aim of performing efficient diagnosis and prognosis.
11
SS-RNN: A Strengthened Skip Algorithm for Data Classification Based on Recurrent Neural Networks. Front Genet 2021; 12:746181. [PMID: 34721533 PMCID: PMC8548744 DOI: 10.3389/fgene.2021.746181] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/23/2021] [Accepted: 09/14/2021] [Indexed: 11/13/2022]
Abstract
Recurrent neural networks are widely used in time series prediction and classification. However, they have problems such as insufficient memory ability and difficulty in gradient back propagation. To solve these problems, this paper proposes a new algorithm called SS-RNN, which directly uses multiple historical states to predict the current time step. This enhances long-term memory ability and, along the time direction, improves the correlation of states at different moments. To include the historical information, we design two different processing methods for the SS-RNN, handling it in continuous and discontinuous ways, respectively. For each method, there are two ways of adding historical information: 1) direct addition and 2) weighted addition with a function mapping applied through the activation function. This provides six pathways for fully and deeply exploring the effect and influence of historical information on RNNs. By comparing the average accuracy on real datasets with long short-term memory, Bi-LSTM, gated recurrent units, and MCNN, and calculating the main indexes (Accuracy, Precision, Recall, and F1-score), we observe that our method can improve the average accuracy, optimize the structure of the recurrent neural network, and effectively alleviate the problems of exploding and vanishing gradients.
12
A Smart IoT System for Detecting the Position of a Lying Person Using a Novel Textile Pressure Sensor. SENSORS 2020; 21:s21010206. [PMID: 33396203 PMCID: PMC7795588 DOI: 10.3390/s21010206] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Received: 12/04/2020] [Revised: 12/23/2020] [Accepted: 12/27/2020] [Indexed: 11/18/2022]
Abstract
Bedsores are one of the severe problems that can affect long-term lying subjects in hospitals or hospices. To prevent bedsores, we present a smart Internet of Things (IoT) system for detecting the position of a lying person using novel textile pressure sensors. To build such a system, it is necessary to use different technologies and techniques. We used sixty-four of our novel textile pressure sensors, based on electrically conductive yarn and the Velostat, to collect information about the pressure distribution of the lying person. Using the Message Queuing Telemetry Transport (MQTT) protocol and Arduino-based hardware, we send the measured data to the server. On the server side, a Node-RED application is responsible for data collection, evaluation, and provisioning. We use a neural network to classify the subject's lying posture on a separate device because of the computational complexity. We created a challenging dataset from observations of twenty-one people in four lying positions. We achieved the best classification precision, 92%, for the fourth class (right-side posture type), while the best recall (91%) and the best F1 score (84%) were obtained for the first class (supine posture type). After the classification, we send the information to the staff desktop application. The application reminds employees when it is necessary to change the lying position of individual subjects and thus prevent bedsores.
13
A Novel Framework Using Deep Auto-Encoders Based Linear Model for Data Classification. SENSORS 2020; 20:s20216378. [PMID: 33182270 PMCID: PMC7664945 DOI: 10.3390/s20216378] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Received: 09/27/2020] [Revised: 11/03/2020] [Accepted: 11/05/2020] [Indexed: 11/25/2022]
Abstract
This paper proposes a novel data classification framework, combining sparse auto-encoders (SAEs) and a post-processing system consisting of a linear system model relying on Particle Swarm Optimization (PSO) algorithm. All the sensitive and high-level features are extracted by using the first auto-encoder which is wired to the second auto-encoder, followed by a Softmax function layer to classify the extracted features obtained from the second layer. The two auto-encoders and the Softmax classifier are stacked in order to be trained in a supervised approach using the well-known backpropagation algorithm to enhance the performance of the neural network. Afterwards, the linear model transforms the calculated output of the deep stacked sparse auto-encoder to a value close to the anticipated output. This simple transformation increases the overall data classification performance of the stacked sparse auto-encoder architecture. The PSO algorithm allows the estimation of the parameters of the linear model in a metaheuristic policy. The proposed framework is validated by using three public datasets, which present promising results when compared with the current literature. Furthermore, the framework can be applied to any data classification problem by considering minor updates such as altering some parameters including input features, hidden neurons and output classes.
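The post-processing idea, fitting a linear correction y' = a·y + b to the stacked auto-encoder's output with PSO, can be sketched with a toy swarm. The swarm constants, search bounds, and data are assumptions for illustration, not the paper's configuration:

```python
import random

random.seed(0)  # deterministic for the example

def mse(params, outputs, targets):
    a, b = params
    return sum((a * o + b - t) ** 2 for o, t in zip(outputs, targets)) / len(outputs)

def pso_fit_linear(outputs, targets, n_particles=20, iters=80):
    """Tiny particle swarm estimating (a, b) of the linear
    post-processing model. Inertia (0.7) and acceleration (1.5)
    constants are generic PSO defaults, not the paper's settings."""
    pos = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [mse(p, outputs, targets) for p in pos]
    g_f = min(pbest_f)
    g = pbest[pbest_f.index(g_f)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = mse(pos[i], outputs, targets)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < g_f:
                    g, g_f = pos[i][:], f
    return g

# Raw network outputs that systematically miss the targets by a linear factor
raw = [0.1, 0.3, 0.5, 0.7, 0.9]
wanted = [0.2 * o + 0.05 for o in raw]  # ideal a = 0.2, b = 0.05
a, b = pso_fit_linear(raw, wanted)
print("fitted a=%.3f b=%.3f" % (a, b))
```

Because the objective is a smooth quadratic in (a, b), even this small swarm converges quickly; the paper applies the same metaheuristic estimation to pull the auto-encoder's output toward the anticipated value.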
14
A Smart Architecture for Diabetic Patient Monitoring Using Machine Learning Algorithms. Healthcare (Basel) 2020; 8:healthcare8030348. [PMID: 32961757 PMCID: PMC7551629 DOI: 10.3390/healthcare8030348] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 07/21/2020] [Revised: 08/19/2020] [Accepted: 08/27/2020] [Indexed: 11/17/2022]
Abstract
Continuous monitoring of diabetic patients improves their quality of life. The use of multiple technologies such as the Internet of Things (IoT), embedded systems, communication technologies, artificial intelligence, and smart devices can reduce the economic costs of the healthcare system. Different communication technologies have made it possible to provide personalized and remote health services. In order to respond to the needs of future intelligent e-health applications, we are called to develop intelligent healthcare systems and expand the number of applications connected to the network. Therefore, the 5G network should support intelligent healthcare applications, to meet some important requirements such as high bandwidth and high energy efficiency. This article presents an intelligent architecture for monitoring diabetic patients by using machine learning algorithms. The architecture elements included smart devices, sensors, and smartphones to collect measurements from the body. The intelligent system collected the data received from the patient, and performed data classification using machine learning in order to make a diagnosis. The proposed prediction system was evaluated by several machine learning algorithms, and the simulation results demonstrated that the sequential minimal optimization (SMO) algorithm gives superior classification accuracy, sensitivity, and precision compared to other algorithms.
15
Data classification based on fractional order gradient descent with momentum for RBF neural network. NETWORK (BRISTOL, ENGLAND) 2020; 31:166-185. [PMID: 33283569 DOI: 10.1080/0954898x.2020.1849842] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 09/09/2019] [Revised: 05/01/2020] [Accepted: 11/05/2020] [Indexed: 06/12/2023]
Abstract
Weight-updating methods play an important role in improving the performance of neural networks. To ameliorate the oscillating phenomenon in training radial basis function (RBF) neural networks, a fractional order gradient descent with momentum method for updating the weights of an RBF neural network (FOGDM-RBF) is proposed for data classification, and its convergence is proved. In order to speed up the convergence process, an adaptive learning rate is used to adjust the training process. The Iris and MNIST data sets are used to test the proposed algorithm. The results verify the theoretical properties of the proposed algorithm, such as its monotonicity and convergence. Non-parametric statistical tests such as the Friedman test and the Quade test are used to compare the proposed algorithm with other algorithms. The influence of fractional order, learning rate, and batch size is analysed and compared. Error analysis shows that the algorithm can effectively accelerate the convergence of gradient descent and improve its performance with high accuracy and validity.
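A common Caputo-style approximation of a fractional-order gradient update, with momentum added, can be sketched on a one-dimensional quadratic. The exact FOGDM-RBF rule and its adaptive learning rate are not given in the abstract, so the update form, constants, and the small epsilon guard below are all assumptions:

```python
import math

def fogdm_step(w, w0, grad, velocity, lr=0.1, alpha=0.9, mu=0.5):
    """One fractional-order gradient-descent-with-momentum update using
    the common Caputo-style factor |w - w0|^(1-alpha) / Gamma(2-alpha),
    where w0 is the lower terminal (initial weight). A generic sketch,
    not the paper's exact rule; 1e-8 avoids a zero factor at the start."""
    frac = grad * (abs(w - w0) + 1e-8) ** (1 - alpha) / math.gamma(2 - alpha)
    velocity = mu * velocity - lr * frac
    return w + velocity, velocity

# Minimise f(w) = (w - 3)^2 starting from w = 0
w, v = 0.0, 0.0
w0 = w  # lower terminal of the fractional derivative
for _ in range(200):
    w, v = fogdm_step(w, w0, 2 * (w - 3.0), v)
print(round(w, 3))  # settles at the minimiser, 3.0
```

With alpha = 1 the fractional factor reduces to the ordinary gradient (Gamma(1) = 1), so classical gradient descent with momentum is recovered as a special case.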
16
A Dual Neural Architecture Combined SqueezeNet with OctConv for LiDAR Data Classification. SENSORS 2019; 19:s19224927. [PMID: 31726726 PMCID: PMC6891785 DOI: 10.3390/s19224927] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Received: 10/07/2019] [Revised: 11/06/2019] [Accepted: 11/09/2019] [Indexed: 11/17/2022]
Abstract
Light detection and ranging (LiDAR) is a frequently used technique of data acquisition, and it is widely used in diverse practical applications. In recent years, deep convolutional neural networks (CNNs) have shown their effectiveness for LiDAR-derived rasterized digital surface model (LiDAR-DSM) data classification. However, many excellent CNNs have too many parameters due to their depth and complexity, and traditional CNNs have spatial redundancy because different convolution kernels scan and store information independently. SqueezeNet replaces a portion of the 3 × 3 convolution kernels in CNNs with 1 × 1 convolution kernels, decomposes the original single convolution layer into two layers, and encapsulates them into a Fire module. This structure reduces the number of network parameters. Octave Convolution (OctConv) first pools some feature maps and stores them separately from the feature maps of the original size; it reduces spatial redundancy by sharing information between the two groups. In this article, to improve the accuracy and efficiency of the network simultaneously, the Fire modules of SqueezeNet are used to replace the traditional convolution layers in OctConv to form a new dual neural architecture: OctSqueezeNet. Our experiments, conducted on two well-known LiDAR datasets against several classical state-of-the-art classification methods, revealed that our proposed classification approach based on OctSqueezeNet provides competitive advantages in terms of both classification accuracy and computational cost.
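The parameter saving from a Fire module is easy to verify by counting weights: a 1 × 1 squeeze layer followed by parallel 1 × 1 and 3 × 3 expand layers replaces one dense 3 × 3 layer. The channel widths below follow SqueezeNet's published fire2 configuration; treating it as a drop-in for a 128-channel 3 × 3 layer is an illustrative comparison, not a claim from this article:

```python
def conv_params(in_ch, out_ch, k):
    # weight count of a k x k convolution (biases omitted for simplicity)
    return in_ch * out_ch * k * k

def fire_params(in_ch, squeeze_ch, expand_ch):
    """Weights in a Fire module: 1x1 squeeze, then parallel 1x1 and 3x3
    expand layers of expand_ch channels each (outputs are concatenated,
    so the module emits 2 * expand_ch channels)."""
    squeeze = conv_params(in_ch, squeeze_ch, 1)
    expand = conv_params(squeeze_ch, expand_ch, 1) + conv_params(squeeze_ch, expand_ch, 3)
    return squeeze + expand

plain = conv_params(128, 128, 3)   # plain 3x3 layer, 128 -> 128 channels
fire = fire_params(128, 16, 64)    # squeeze to 16, expand to 64 + 64 = 128
print(plain, fire)  # -> 147456 12288
```

For the same 128-channel output width, the Fire module uses 12,288 weights versus 147,456 for the plain 3 × 3 layer, a 12× reduction, which is the kind of saving that motivates swapping Fire modules into OctConv.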
|
17
|
Abstract
This study proposes a decision tree-based e-visit classification approach (DTEVCA) to determine which clinic visits qualify as e-visits, using clinics' medical records and patients' demographic data. The study assumes that health care insurance will subsidise e-visit service costs, in which case identifying the patients who benefit most from e-visit services is essential. Using a large data set from Taiwan's National Health Insurance, this study verifies the efficiency and validity of the DTEVCA. Results indicate that the approach can accurately identify in-office clinic visits that could switch to e-visit services. The straightforward rules of the decision tree also give insurance agencies a clear guideline for understanding the circumstances in which e-visits are used and for predicting the effects of implementing e-visits in Taiwan. The results can help countries improve their policy formulation processes and can serve physicians' practice or academic research. The DTEVCA can update its classification rules with new data to correct biases and ensure the stability of the e-visit system. In addition, the concept is feasible not only for e-visit services but also for other 'new services', such as new products or new policies.
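The core step any such decision tree repeats recursively is a search for the split with the highest information gain. A minimal generic sketch of that step (ID3/C4.5-style entropy splitting; the toy features and labels are hypothetical, not the DTEVCA's actual variables):

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_split(rows, labels):
    """Return (gain, feature_index, threshold) for the highest-gain
    binary split over all features and observed thresholds."""
    base = entropy(labels)
    best = None
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [l for r, l in zip(rows, labels) if r[f] <= t]
            right = [l for r, l in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue
            weighted = (len(left) * entropy(left)
                        + len(right) * entropy(right)) / len(labels)
            gain = base - weighted
            if best is None or gain > best[0]:
                best = (gain, f, t)
    return best

# Hypothetical example: feature 0 = prior visits, feature 1 = patient age.
rows = [(1, 30), (2, 35), (8, 60), (9, 65)]
labels = ["evisit", "evisit", "office", "office"]
print(best_split(rows, labels))  # a perfect split: (1.0, 0, 2)
```

Recursing on each side of the chosen split until the leaves are pure (or a depth limit is hit) yields the readable rule set the abstract highlights.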
|
18
|
A Weighted Combination Method for Conflicting Evidence in Multi-Sensor Data Fusion. SENSORS 2018; 18:s18051487. [PMID: 29747419 PMCID: PMC5982568 DOI: 10.3390/s18051487] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/30/2018] [Revised: 05/01/2018] [Accepted: 05/01/2018] [Indexed: 11/16/2022]
Abstract
Dempster-Shafer evidence theory is widely applied in various fields related to information fusion. However, how to avoid the counter-intuitive results is an open issue when combining highly conflicting pieces of evidence. In order to handle such a problem, a weighted combination method for conflicting pieces of evidence in multi-sensor data fusion is proposed by considering both the interplay between the pieces of evidence and the impacts of the pieces of evidence themselves. First, the degree of credibility of the evidence is determined on the basis of the modified cosine similarity measure of basic probability assignment. Then, the degree of credibility of the evidence is adjusted by leveraging the belief entropy function to measure the information volume of the evidence. Finally, the final weight of each piece of evidence generated from the above steps is obtained and adopted to modify the bodies of evidence before using Dempster's combination rule. A numerical example is provided to illustrate that the proposed method is reasonable and efficient in handling the conflicting pieces of evidence. In addition, applications in data classification and motor rotor fault diagnosis validate the practicability of the proposed method with better accuracy.
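The pipeline the abstract describes can be sketched generically. This is a reconstruction under stated assumptions, not the paper's exact formulation: BPAs are dictionaries keyed by frozenset focal elements, plain cosine similarity stands in for the paper's modified measure, information volume is taken as exp of the Deng belief entropy, and the weighted-average evidence is fused n - 1 times with Dempster's rule, Murphy-style:

```python
import math

def dempster(m1, m2):
    """Dempster's combination rule for two BPAs over frozenset focal elements."""
    out, conflict = {}, 0.0
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            C = A & B
            if C:
                out[C] = out.get(C, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {A: v / (1.0 - conflict) for A, v in out.items()}

def cosine_sim(m1, m2):
    """Cosine similarity between two BPAs viewed as vectors over focal elements."""
    keys = set(m1) | set(m2)
    dot = sum(m1.get(k, 0.0) * m2.get(k, 0.0) for k in keys)
    n1 = math.sqrt(sum(v * v for v in m1.values()))
    n2 = math.sqrt(sum(v * v for v in m2.values()))
    return dot / (n1 * n2)

def deng_entropy(m):
    """Belief (Deng) entropy, used here as a proxy for information volume."""
    return -sum(v * math.log2(v / (2 ** len(A) - 1)) for A, v in m.items() if v > 0)

def weighted_combine(bodies):
    """Credibility from pairwise similarity, adjusted by exp(belief entropy),
    normalized to weights; the weighted-average BPA is then fused n - 1 times."""
    n = len(bodies)
    crd = [sum(cosine_sim(bodies[i], bodies[j]) for j in range(n) if j != i)
           for i in range(n)]
    adj = [c * math.exp(deng_entropy(m)) for c, m in zip(crd, bodies)]
    w = [a / sum(adj) for a in adj]
    keys = set().union(*bodies)
    avg = {k: sum(wi * m.get(k, 0.0) for wi, m in zip(w, bodies)) for k in keys}
    fused = avg
    for _ in range(n - 1):
        fused = dempster(fused, avg)
    return fused
```

Because the conflicting body of evidence receives a low similarity-based weight, it no longer dominates the fusion the way it would under a direct application of Dempster's rule.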
|
19
|
Power Allocation Based on Data Classification in Wireless Sensor Networks. SENSORS 2017; 17:s17051107. [PMID: 28498346 PMCID: PMC5470783 DOI: 10.3390/s17051107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/29/2017] [Revised: 05/08/2017] [Accepted: 05/10/2017] [Indexed: 06/07/2023]
Abstract
Limited node energy in wireless sensor networks is a crucial factor affecting the monitoring of equipment operation and working conditions in coal mines. In addition, because of heterogeneous nodes and different data acquisition rates, the number of packets arriving in a queue network can differ, which may lead some queue lengths to reach their maximum value earlier than others. To tackle these two problems, an optimal power allocation strategy based on classified data is proposed in this paper. Arriving data is classified into dissimilar classes depending on the number of arriving packets. The problem is formulated as a Lyapunov drift optimization with the objective of minimizing the weighted sum of average power consumption and average data class. As a result, a suboptimal distributed algorithm requiring no knowledge of system statistics is presented. The simulations, conducted for both the perfect channel state information (CSI) case and the imperfect CSI case, reveal that the utility can be pushed arbitrarily close to optimal by increasing the parameter V, at the cost of a corresponding growth in average delay, and that the tunable parameter W and the classification method inside the utility function can trade power optimality for an increased average data class. These results show that data in a high class is given priority over data in a low class for processing, and that energy consumption can be minimized under this resource allocation strategy.
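The per-slot decision in a Lyapunov drift-plus-penalty scheme of this kind reduces to a greedy minimization. A minimal sketch of the generic rule, not the paper's exact utility; the power levels and the concave rate curve are made-up placeholders:

```python
import math

def choose_power(queue_len, V, powers, service):
    """One drift-plus-penalty decision: pick the power level minimizing
    V * power - queue_len * service(power), i.e. trade the penalty term
    against the expected reduction in queue backlog."""
    return min(powers, key=lambda p: V * p - queue_len * service(p))

# Illustrative setup: four discrete power levels, log-shaped service rate.
powers = [0.0, 0.5, 1.0, 2.0]
rate = lambda p: math.log(1.0 + p)

print(choose_power(1, 10, powers, rate))    # short queue -> 0.0 (save power)
print(choose_power(50, 10, powers, rate))   # long queue  -> 2.0 (drain backlog)
```

Increasing V shifts every decision toward lower power, which is exactly the utility-versus-delay trade-off the simulations report.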
|
20
|
Non-invasive method to analyse the risk of developing diabetic foot. Healthc Technol Lett 2015; 1:109-13. [PMID: 26609394 DOI: 10.1049/htl.2014.0076] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2014] [Revised: 10/14/2014] [Accepted: 10/14/2014] [Indexed: 11/20/2022] Open
Abstract
Foot complications (diabetic foot) are among the most serious and costly complications of diabetes mellitus. Amputation of all or part of a lower extremity is usually preceded by a foot ulcer. To prevent diabetic foot, an automatic non-invasive method is proposed to identify patients with diabetes who have a high risk of developing it. To design the method, information concerning the social scope and self-care of 153 diabetic patients was presented to the K-means clustering algorithm, which divided the data into two groups: high risk and low risk of developing diabetic foot. In the operational stage, the Euclidean distance from the information vector to the centroid of each risk group is used as the criterion for classification. Both real and simulated data were used to evaluate the method; promising results were achieved, with an accuracy of 0.97 ± 0.06 for simulated data and 0.68 ± 0.16 for real data, taking the specialists' classification as the gold standard. The method requires only simple computational processing and can be useful in basic health units for triaging diabetic patients, helping the health-care team reduce the number of cases of diabetic foot.
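The two stages the abstract describes (k = 2 clustering at design time, nearest-centroid assignment at operation time) can be sketched generically. The toy vectors and group labels below are placeholders, not the study's social-scope and self-care features:

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans2(points, iters=20):
    """Lloyd's algorithm with k = 2, using deterministic farthest-point seeding."""
    c0 = points[0]
    c1 = max(points, key=lambda p: dist(p, c0))
    centroids = [c0, c1]
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            i = 0 if dist(p, centroids[0]) <= dist(p, centroids[1]) else 1
            groups[i].append(p)
        centroids = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids

def classify(vec, centroids, labels=("group 0", "group 1")):
    """Operational stage: label by Euclidean distance to the nearest centroid.
    Which group corresponds to 'high risk' must be read off the training data."""
    return labels[0 if dist(vec, centroids[0]) <= dist(vec, centroids[1]) else 1]
```

Because the operational stage is just two distance computations, the method runs comfortably on the modest hardware of a basic health unit.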
|
21
|
Evaluation of Three State-of-the-Art Classifiers for Recognition of Activities of Daily Living from Smart Home Ambient Data. SENSORS 2015; 15:11725-40. [PMID: 26007727 PMCID: PMC4481906 DOI: 10.3390/s150511725] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/10/2015] [Revised: 05/13/2015] [Accepted: 05/14/2015] [Indexed: 11/16/2022]
Abstract
Smart homes for the aging population have recently started attracting the attention of the research community. The “health state” of a smart home comprises many different levels: starting with the physical health of citizens, it also includes longer-term health norms and outcomes, as well as the arena of positive behavior changes. One of the problems of interest is to monitor the activities of daily living (ADL) of the elderly, aiming at their protection and well-being. For this purpose, we installed passive infrared (PIR) sensors to detect motion in specific areas inside a smart apartment and used them to collect a set of ADL. In a novel approach, we describe a technology that allows the ground truth collected in one smart home to train activity recognition systems for other smart homes. We asked the users to label all instances of all ADL only once and subsequently applied data mining techniques to cluster in-home sensor firings, so that each cluster represents the instances of the same activity. Once the clusters were associated with their corresponding activities, our system was able to recognize future activities. To improve recognition accuracy, our system preprocessed raw sensor data by identifying overlapping activities. To evaluate recognition performance on a 200-day dataset, we implemented three different active learning classification algorithms and compared their performance: naive Bayesian (NB), support vector machine (SVM) and random forest (RF). Based on our results, the RF classifier recognized activities with an average specificity of 96.53%, a sensitivity of 68.49%, a precision of 74.41% and an F-measure of 71.33%, outperforming both the NB and SVM classifiers. Further clustering markedly improved the results of the RF classifier. An activity recognition system based on PIR sensors in conjunction with a clustering classification approach was able to detect ADL from datasets collected from different homes.
Thus, our PIR-based smart home technology could improve care and provide valuable information to better understand the functioning of our societies, as well as to inform both individual and collective action in a smart city scenario.
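The four figures reported for the RF classifier follow from standard one-vs-rest confusion-matrix arithmetic. A small sketch; the counts below are illustrative, not values from the 200-day dataset:

```python
def metrics(tp, fp, tn, fn):
    """Sensitivity (recall), specificity, precision and F-measure
    from one activity's binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, f_measure

# Hypothetical counts for one activity class:
sens, spec, prec, f = metrics(tp=70, fp=25, tn=875, fn=30)
print(round(sens, 3), round(spec, 3), round(prec, 3), round(f, 3))
# -> 0.7 0.972 0.737 0.718
```

A pattern like this one (high specificity, markedly lower sensitivity) matches the study's results: the many true negatives from other activities inflate specificity even when a fair share of a class's own instances are missed.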
|
22
|
Plurigon: three dimensional visualization and classification of high-dimensionality data. Front Physiol 2013; 4:190. [PMID: 23885241 PMCID: PMC3717481 DOI: 10.3389/fphys.2013.00190] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2013] [Accepted: 07/01/2013] [Indexed: 01/02/2023] Open
Abstract
High-dimensionality data is rapidly becoming the norm for biomedical sciences and many other analytical disciplines. Not only is the collection and processing time for such data becoming problematic, but it has become increasingly difficult to form a comprehensive appreciation of high-dimensionality data. Though data analysis methods for coping with multivariate data are well documented in technical fields such as computer science, little effort is currently being expended to condense data vectors that exist beyond the realm of physical space into an easily interpretable and aesthetic form. To address this important need, we have developed Plurigon, a data visualization and classification tool that integrates high-dimensionality visualization algorithms with a user-friendly, interactive graphical interface. Unlike existing data visualization methods, which focus on an ensemble of data points, Plurigon places a strong emphasis upon the visualization of a single data point and its determining characteristics. Multivariate data vectors are represented in the form of a deformed sphere with a distinct topology of hills, valleys, plateaus, peaks, and crevices. The gestalt structure of the resultant Plurigon object generates an easily appreciable model. User interaction with the Plurigon is extensive; zoom, rotation, axial and vector display, feature extraction, and anaglyph stereoscopy are currently supported. With Plurigon and its ability to analyze high-complexity data, we hope to see a unification of biomedical and computational sciences as well as practical applications in a wide array of scientific disciplines. Increased accessibility to the analysis of high-dimensionality data may increase the number of new discoveries and breakthroughs, ranging from drug screening to disease diagnosis to medical literature mining.
|
23
|
Alteration and reorganization of functional networks: a new perspective in brain injury study. Front Hum Neurosci 2011; 5:90. [PMID: 21960965 PMCID: PMC3177176 DOI: 10.3389/fnhum.2011.00090] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2010] [Accepted: 08/11/2011] [Indexed: 11/29/2022] Open
Abstract
Plasticity is the mechanism underlying the brain’s potential capability to compensate for injury. Recently, several studies have shown how functional connections among brain areas are severely altered by brain injury, with plasticity leading to a reorganization of the networks. This new approach studies the impact of brain injury by means of alterations in functional interactions. The concept of functional connectivity refers to the statistical interdependencies between physiological time series simultaneously recorded in various areas of the brain; it could be an essential tool for studies of brain function, with deviation from a healthy reference serving as an indicator of damage. In this article, we review studies investigating functional connectivity changes after brain injury and subsequent recovery, providing an accessible introduction to common mathematical methods for inferring functional connectivity and exploring their capabilities, future perspectives, and clinical uses in brain injury studies.
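Among the mathematical methods such reviews cover, linear correlation is the simplest functional-connectivity estimator: every pair of simultaneously recorded time series gets one interdependency score. A generic sketch (Pearson correlation; real studies often use coherence, phase synchronization, or other measures as well):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equally sampled time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def connectivity_matrix(signals):
    """Pairwise functional-connectivity matrix over all recorded channels."""
    n = len(signals)
    return [[pearson(signals[i], signals[j]) for j in range(n)] for i in range(n)]
```

Comparing such a matrix against a healthy-control reference, entry by entry, is one concrete way to quantify the "deviation from healthy" that the abstract treats as a damage indicator.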
|
24
|
Real-time imaging of human brain function by near-infrared spectroscopy using an adaptive general linear model. Neuroimage 2009; 46:133-43. [PMID: 19457389 PMCID: PMC2758631 DOI: 10.1016/j.neuroimage.2009.01.033] [Citation(s) in RCA: 162] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2008] [Revised: 01/15/2009] [Accepted: 01/23/2009] [Indexed: 11/18/2022] Open
Abstract
Near-infrared spectroscopy is a non-invasive neuroimaging method that uses light to measure changes in cerebral blood oxygenation associated with brain activity. In this work, we demonstrate the ability to record and analyze images of brain activity in real time using a 16-channel continuous-wave optical NIRS system. We propose a novel real-time analysis framework using an adaptive Kalman filter and a state-space model based on a canonical general linear model of brain activity. We show that our adaptive model has the ability to estimate single-trial brain activity events as we apply this method to track and classify experimental data acquired during an alternating bilateral self-paced finger tapping task.
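The adaptive estimator can be sketched as a random-walk Kalman filter over the regression weights of a general linear model. This is a generic reconstruction, not the authors' exact state-space model; the process and observation noise variances q and r are placeholders:

```python
def kalman_glm_step(beta, P, x, y, q=1e-4, r=1.0):
    """One Kalman update for a random-walk GLM: state = regression weights beta,
    observation y_t = x_t . beta_t + noise, beta_t = beta_{t-1} + drift."""
    n = len(beta)
    # Predict: random-walk state model adds q to the covariance diagonal.
    P = [[P[i][j] + (q if i == j else 0.0) for j in range(n)] for i in range(n)]
    # Innovation and Kalman gain.
    yhat = sum(b * xi for b, xi in zip(beta, x))
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    s = sum(xi * pxi for xi, pxi in zip(x, Px)) + r
    K = [pxi / s for pxi in Px]
    # Correct the weights and the covariance.
    beta = [b + k * (y - yhat) for b, k in zip(beta, K)]
    P = [[P[i][j] - K[i] * Px[j] for j in range(n)] for i in range(n)]
    return beta, P

# Toy run: one regressor whose true weight is 2.0, noise-free observations.
beta, P = [0.0], [[1.0]]
for _ in range(200):
    beta, P = kalman_glm_step(beta, P, x=[1.0], y=2.0)
print(beta[0])  # converges close to 2.0
```

In the real-time setting, x would hold the canonical hemodynamic regressors for the current sample, so the filter tracks single-trial activation estimates as data streams in rather than waiting for a batch fit.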
|