701

Hughes C, Musselman EA, Walsh L, Mariscal T, Warner S, Hintze A, Rashidi N, Gordon-Murer C, Tanha T, Licudo F, Ng R, Tran J. The mPOWERED Electronic Learning System for Intimate Partner Violence Education: Mixed Methods Usability Study. JMIR Nurs 2020; 3:e15828. [PMID: 34345778] [PMCID: PMC8279438] [DOI: 10.2196/15828]
Abstract
Background Nurse practitioners are a common resource for victims of intimate partner violence (IPV) presenting to health care settings. However, they often have inadequate knowledge about IPV and lack self-efficacy and confidence to be able to screen for IPV and communicate effectively with patients. Objective The aim of this study was to develop and test the usability of a blended learning system aimed at educating nurse practitioner students on topics related to IPV (ie, the mPOWERED system [Health Equity Institute]). Methods Development of the mPOWERED system involved usability testing with 7 nurse educators (NEs) and 18 nurse practitioner students. Users were asked to complete usability testing using a speak-aloud procedure and then complete a satisfaction and usability questionnaire. Results Overall, the mPOWERED system was deemed to have high usability and was positively evaluated by both NEs and nurse practitioner students. Respondents provided critical feedback that will be used to improve the system. Conclusions By including target end users in the design and evaluation of the mPOWERED system, we have developed a blended IPV learning system that can easily be integrated into health care education. Larger-scale evaluation of the pedagogical impact of this system is underway.
Affiliation(s)
- Charmayne Hughes, Health Equity Institute, San Francisco State University, San Francisco, CA, United States
- Elaine A Musselman, School of Nursing, San Francisco State University, San Francisco, CA, United States
- Lilia Walsh, Health Equity Institute, San Francisco State University, San Francisco, CA, United States
- Tatiana Mariscal, Health Equity Institute, San Francisco State University, San Francisco, CA, United States
- Sam Warner, Health Equity Institute, San Francisco State University, San Francisco, CA, United States
- Amy Hintze, Health Equity Institute, San Francisco State University, San Francisco, CA, United States
- Neela Rashidi, Health Equity Institute, San Francisco State University, San Francisco, CA, United States
- Chloe Gordon-Murer, Health Equity Institute, San Francisco State University, San Francisco, CA, United States
- Tiana Tanha, Health Equity Institute, San Francisco State University, San Francisco, CA, United States
- Fahrial Licudo, Health Equity Institute, San Francisco State University, San Francisco, CA, United States
- Rachel Ng, Health Equity Institute, San Francisco State University, San Francisco, CA, United States
- Jenna Tran, Health Equity Institute, San Francisco State University, San Francisco, CA, United States
702

Wang Y, Xue M, Wang Y, Yan X, Chen B, Fu CW, Hurter C. Interactive Structure-aware Blending of Diverse Edge Bundling Visualizations. IEEE Transactions on Visualization and Computer Graphics 2020; 26:687-696. [PMID: 31443025] [DOI: 10.1109/tvcg.2019.2934805]
Abstract
Many edge bundling techniques (i.e., data simplification as a support for data visualization and decision making) exist, but they are not directly applicable to every kind of dataset, and their parameters are often too abstract and difficult to set up. As a result, this hinders the user's ability to create efficient aggregated visualizations. To address these issues, we investigated a novel way of handling visual aggregation with a task-driven and user-centered approach. Given a graph, our approach produces a decluttered view as follows: first, the user investigates different edge bundling results and specifies areas where certain edge bundling techniques would provide the desired results. Second, our system computes a smooth, structure-preserving transition between these specified areas. Lastly, the user can further fine-tune the global visualization with a direct manipulation technique to remove local ambiguity and to apply different visual deformations. In this paper, we provide details of our design rationale and implementation. We also show how our algorithm gives more suitable results than current edge bundling techniques, and we provide concrete usage instances in which the algorithm combines various edge bundling results to support diverse data exploration and visualizations.
703

Optimizing global food supply chains: The case for blockchain and GSI standards. Building the Future of Food Safety Technology 2020. [PMCID: PMC7561516] [DOI: 10.1016/b978-0-12-818956-6.00017-8]
Abstract
This chapter examines the integration of GS1 standards with the functional components of blockchain technology as an approach to realize a coherent standardized framework of industry-based tools for successful food supply chains (FSCs) transformation. The globalization of food systems has engendered significant changes to the operation and structure of FSCs. Alongside increasing consumer demands for safe and sustainable food products, FSCs are challenged with issues related to information transparency and consumer trust. Uncertainty in matters of transparency and trust arises from the growing information asymmetry between food producers and food consumers, in particular, how and where food is cultivated, harvested, processed, and under what conditions. FSCs are tasked with guaranteeing the highest standards in food quality and food safety—ensuring the use of safe and authentic ingredients, limiting product perishability, and mitigating the risk of opportunism, such as quality cheating or falsification of information. A sustainable, food-secure world will require multidirectional sharing of information and enhanced information symmetry between food producers and food consumers. The need for information symmetry will drive transformational changes in FSCs methods of practice and will require a coherent standardized framework of best practice recommendations to manage logistic units in the food chain. A standardized framework will enhance food traceability, drive FSC efficiencies, enable data interoperability, improve data governance practices, and set supply chain identification standards for products and assets (what), exchange parties (who), locations (where), business processes (why), and sequence (when).
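As a hedged illustration of the traceability idea discussed above (not code from the chapter), the sketch below hash-chains supply-chain events that carry GS1-style identifiers; the field names (gtin, gln, step) and identifier values are purely illustrative assumptions.

```python
# Minimal sketch: hash-chained traceability events carrying GS1-style identifiers.
# Field names and values are illustrative, not taken from the chapter.
import hashlib
import json
import time

def make_block(event: dict, prev_hash: str) -> dict:
    """Bind a traceability event to the previous block via a SHA-256 hash."""
    payload = {"event": event, "prev_hash": prev_hash, "timestamp": time.time()}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {**payload, "hash": digest}

chain = [make_block({"gtin": "09506000134352", "gln": "5412345000013",
                     "step": "harvested"}, prev_hash="0" * 64)]
chain.append(make_block({"gtin": "09506000134352", "gln": "5412345000037",
                         "step": "processed"}, chain[-1]["hash"]))

# Editing an earlier event changes its hash and breaks the chain, which is the
# property that supports trusted farm-to-fork traceability.
for prev, block in zip(chain, chain[1:]):
    assert block["prev_hash"] == prev["hash"]
```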
704

Vochozka M, Horak J, Krulicky T. Innovations in Management Forecast: Time Development of Stock Prices with Neural Networks. Marketing and Management of Innovations 2020. [DOI: 10.21272/mmi.2020.2-24]
Abstract
Accurate prediction of stock market values has been a challenging task for decades. Predicting stock prices offers numerous benefits, including, but not limited to, helping investors make wise decisions and accumulate profits. The development of a share price is a dynamic and nonlinear process affected by several factors, and share prices have proven unpredictable in events such as the global financial crisis. Classical methods are no longer sufficient for predicting share price development; moreover, over-relying on prediction data can lead to losses in the case of software malfunction. This paper aims to innovate prediction management when forecasting share price development over time with neural networks. For this contribution, data on the prices of CEZ, a.s. shares were obtained from the Prague Stock Exchange database for the period 2012-2017. In Statistica software, multilayer perceptron networks (MLP) and radial basis function networks (RBF) are generated; in Matlab software, Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN) models are generated. The networks with the best characteristics are retained based on the statistical interpretation of the results, and all are applicable in practice. Across all data sets, MLP networks show more stable performance than the SVR and BPNN networks. In the final assessment, a deviation of 2.26% occurs in the largest difference between the maximal and the minimal prediction. This is not necessarily significant for the price of one stock; however, when purchasing or selling a large number of stocks, the difference may be substantial. Therefore, the application of two networks is recommended in practice: MLP 1-2-1 and MLP 1-5-1. The first network always represents a pessimistic, minimal prediction; the second an optimistic, maximal prediction. The actual situation should fall within the interval between the optimistic and pessimistic predictions.
Keywords:
Statistica software, Matlab software, stock price development, neural networks, prediction.
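The study used Statistica and Matlab models; purely as a rough sketch of the general idea (lagged prices fed to a small MLP to predict the next closing price), the following scikit-learn snippet uses synthetic data and an assumed lag window rather than the CEZ, a.s. series.

```python
# Rough sketch of the general idea (not the paper's Statistica/Matlab setup):
# predict the next closing price from a window of lagged prices with an MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

prices = np.cumsum(np.random.randn(1500)) + 500.0  # synthetic stand-in for closing prices
window = 5                                          # lag length is an assumption

X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

split = int(0.8 * len(X))                           # keep chronological order when splitting
model = MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"MAPE on the held-out tail: {mape:.2f}%")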
Affiliation(s)
- Marek Vochozka, Institute of Technology and Business in Ceske Budejovice, Czech Republic
- Jakub Horak, Institute of Technology and Business in Ceske Budejovice, Czech Republic
- Tomas Krulicky, Institute of Technology and Business in Ceske Budejovice, Czech Republic
705

706

Ma Y, Xie T, Li J, Maciejewski R. Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 2020; 26:1075-1085. [PMID: 31478859] [DOI: 10.1109/tvcg.2019.2934631]
Abstract
Machine learning models are currently being deployed in a variety of real-world applications where model predictions are used to make decisions about healthcare, bank loans, and numerous other critical tasks. As the deployment of artificial intelligence technologies becomes ubiquitous, it is unsurprising that adversaries have begun developing methods to manipulate machine learning models to their advantage. While the visual analytics community has developed methods for opening the black box of machine learning models, little work has focused on helping the user understand their model vulnerabilities in the context of adversarial attacks. In this paper, we present a visual analytics framework for explaining and exploring model vulnerabilities to adversarial attacks. Our framework employs a multi-faceted visualization scheme designed to support the analysis of data poisoning attacks from the perspective of models, data instances, features, and local structures. We demonstrate our framework through two case studies on binary classifiers and illustrate model vulnerabilities with respect to varying attack strategies.
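To make the class of attack being visualized concrete, here is a tiny, hedged sketch of a data-poisoning (label-flipping) attack on a binary classifier using scikit-learn; it is not the authors' framework, and the dataset and number of flipped labels are assumptions.

```python
# Tiny illustration of the attack class the framework analyzes (not the authors'
# tool): flip a few training labels near the decision boundary of a binary
# classifier and measure the accuracy drop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poison: flip the labels of the 20 training points the clean model is least sure about.
margin = np.abs(clean.decision_function(X_tr))
flip = np.argsort(margin)[:20]
y_poisoned = y_tr.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```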
707

El Moataz A, Mammass D, Mansouri A, Nouboud F. A Bottom-Up Approach for Pig Skeleton Extraction Using RGB Data. Lecture Notes in Computer Science 2020. [PMCID: PMC7340904] [DOI: 10.1007/978-3-030-51935-3_6]
Affiliation(s)
- Driss Mammass, IRF-SIC, Faculty of Sciences, Ibn Zohr University, Agadir, Morocco
708

Goble C, Cohen-Boulakia S, Soiland-Reyes S, Garijo D, Gil Y, Crusoe MR, Peters K, Schober D. FAIR Computational Workflows. Data Intelligence 2020. [DOI: 10.1162/dint_a_00033]
Abstract
Computational workflows describe the complex multi-step methods that are used for data collection, data preparation, analytics, predictive modelling, and simulation that lead to new data products. They can inherently contribute to the FAIR data principles: by processing data according to established metadata; by creating metadata themselves during the processing of data; and by tracking and recording data provenance. These properties aid data quality assessment and contribute to secondary data usage. Moreover, workflows are digital objects in their own right. This paper argues that FAIR principles for workflows need to address their specific nature in terms of their composition of executable software steps, their provenance, and their development.
Affiliation(s)
- Carole Goble, Department of Computer Science, The University of Manchester, Oxford Road, Manchester M13 9PL, UK
- Sarah Cohen-Boulakia, Laboratoire de Recherche en Informatique, CNRS, Université Paris-Saclay, Batiment 650, Université Paris-Sud, 91405 Orsay Cedex, France
- Stian Soiland-Reyes, Department of Computer Science, The University of Manchester, Oxford Road, Manchester M13 9PL, UK; Common Workflow Language project, Software Freedom Conservancy, Inc., 137 Montague St STE 380, NY 11201-3548, USA
- Daniel Garijo, Information Sciences Institute, University of Southern California, Marina Del Rey, CA 90292, USA
- Yolanda Gil, Information Sciences Institute, University of Southern California, Marina Del Rey, CA 90292, USA
- Michael R. Crusoe, Common Workflow Language project, Software Freedom Conservancy, Inc., 137 Montague St STE 380, NY 11201-3548, USA
- Kristian Peters, Leibniz Institute of Plant Biochemistry (IPB Halle), Department of Biochemistry of Plant Interactions, Weinberg 3, 06120 Halle (Saale), Germany
- Daniel Schober, Leibniz Institute of Plant Biochemistry (IPB Halle), Department of Biochemistry of Plant Interactions, Weinberg 3, 06120 Halle (Saale), Germany
709

Neural Network Ensembles for Sensor-Based Human Activity Recognition Within Smart Environments. Sensors 2019; 20:s20010216. [PMID: 31905991] [PMCID: PMC6982871] [DOI: 10.3390/s20010216]
Abstract
In this paper, we focus on data-driven approaches to human activity recognition (HAR). Data-driven approaches rely on good quality data during training, however, a shortage of high quality, large-scale, and accurately annotated HAR datasets exists for recognizing activities of daily living (ADLs) within smart environments. The contributions of this paper involve improving the quality of an openly available HAR dataset for the purpose of data-driven HAR and proposing a new ensemble of neural networks as a data-driven HAR classifier. Specifically, we propose a homogeneous ensemble neural network approach for the purpose of recognizing activities of daily living within a smart home setting. Four base models were generated and integrated using a support function fusion method which involved computing an output decision score for each base classifier. The contribution of this work also involved exploring several approaches to resolving conflicts between the base models. Experimental results demonstrated that distributing data at a class level greatly reduces the number of conflicts that occur between the base models, leading to an increased performance prior to the application of conflict resolution techniques. Overall, the best HAR performance of 80.39% was achieved through distributing data at a class level in conjunction with a conflict resolution approach, which involved calculating the difference between the highest and second highest predictions per conflicting model and awarding the final decision to the model with the highest differential value.
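A minimal sketch of the score-level fusion and the "highest differential" conflict-resolution idea described above follows; the per-model score vectors are placeholders rather than outputs of the paper's networks.

```python
# Minimal sketch of support-function fusion across base models, with the
# "highest differential" tie-break described in the abstract. Score vectors
# are placeholders, not outputs of the paper's trained networks.
import numpy as np

# Per-class decision scores from four base models for one window of sensor data.
base_scores = np.array([
    [0.10, 0.70, 0.20],   # model 1
    [0.15, 0.60, 0.25],   # model 2
    [0.55, 0.30, 0.15],   # model 3 (disagrees with the others)
    [0.20, 0.50, 0.30],   # model 4
])

summed = base_scores.sum(axis=0)            # support-function fusion
top_per_model = base_scores.argmax(axis=1)

if len(set(top_per_model)) > 1:             # conflict between base models
    # Award the decision to the model with the largest gap between its
    # highest and second-highest class scores.
    sorted_scores = np.sort(base_scores, axis=1)
    differential = sorted_scores[:, -1] - sorted_scores[:, -2]
    decision = top_per_model[differential.argmax()]
else:
    decision = summed.argmax()

print("predicted activity class:", decision)
```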
710

Urakubo H, Bullmann T, Kubota Y, Oba S, Ishii S. UNI-EM: An Environment for Deep Neural Network-Based Automated Segmentation of Neuronal Electron Microscopic Images. Sci Rep 2019; 9:19413. [PMID: 31857624] [PMCID: PMC6923391] [DOI: 10.1038/s41598-019-55431-0]
Abstract
Recently, there has been rapid expansion in the field of micro-connectomics, which targets the three-dimensional (3D) reconstruction of neuronal networks from stacks of two-dimensional (2D) electron microscopy (EM) images. The spatial scale of the 3D reconstruction increases rapidly owing to deep convolutional neural networks (CNNs) that enable automated image segmentation. Several research teams have developed their own software pipelines for CNN-based segmentation. However, the complexity of such pipelines makes their use difficult even for computer experts and impossible for non-experts. In this study, we developed a new software program, called UNI-EM, for 2D and 3D CNN-based segmentation. UNI-EM is a software collection for CNN-based EM image segmentation, including ground truth generation, training, inference, postprocessing, proofreading, and visualization. UNI-EM incorporates a set of 2D CNNs, i.e., U-Net, ResNet, HighwayNet, and DenseNet. We further wrapped flood-filling networks (FFNs) as a representative 3D CNN-based neuron segmentation algorithm. The 2D- and 3D-CNNs are known to demonstrate state-of-the-art level segmentation performance. We then provided two example workflows: mitochondria segmentation using a 2D CNN and neuron segmentation using FFNs. By following these example workflows, users can benefit from CNN-based segmentation without possessing knowledge of Python programming or CNN frameworks.
Affiliation(s)
- Hidetoshi Urakubo, Integrated Systems Biology Laboratory, Department of Systems Science, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi 36-1, Sakyo-ku, Kyoto 606-8501, Japan
- Torsten Bullmann, Integrated Systems Biology Laboratory, Department of Systems Science, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi 36-1, Sakyo-ku, Kyoto 606-8501, Japan; Carl-Ludwig-Institute for Physiology, University Leipzig, Liebigstr. 27, 04103 Leipzig, Germany
- Yoshiyuki Kubota, Division of Cerebral Circuitry, National Institute for Physiological Sciences, 5-1 Myodaiji-Higashiyama, Okazaki, Aichi 444-8787, Japan; Department of Physiological Sciences, The Graduate University for Advanced Studies (SOKENDAI), 5-1 Myodaiji-Higashiyama, Okazaki, Aichi 444-8787, Japan
- Shigeyuki Oba, Integrated Systems Biology Laboratory, Department of Systems Science, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi 36-1, Sakyo-ku, Kyoto 606-8501, Japan
- Shin Ishii, Integrated Systems Biology Laboratory, Department of Systems Science, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi 36-1, Sakyo-ku, Kyoto 606-8501, Japan
711

Aivazoglou M, Roussos AO, Margaris D, Vassilakis C, Ioannidis S, Polakis J, Spiliotopoulos D. A fine-grained social network recommender system. Social Network Analysis and Mining 2019. [DOI: 10.1007/s13278-019-0621-7]
712

Ukoha C, Stranieri A. Criteria to Measure Social Media Value in Health Care Settings: Narrative Literature Review. J Med Internet Res 2019; 21:e14684. [PMID: 31841114] [PMCID: PMC6937544] [DOI: 10.2196/14684]
Abstract
BACKGROUND With the growing use of social media in health care settings, there is a need to measure outcomes resulting from its use to ensure continuous performance improvement. Despite the need for measurement, a unified approach for measuring the value of social media used in health care remains elusive. OBJECTIVE This study aimed to elucidate how the value of social media in health care settings can be ascertained and to taxonomically identify steps and techniques in social media measurement from a review of relevant literature. METHODS A total of 65 relevant articles drawn from 341 articles on the subject of measuring social media in health care settings were qualitatively analyzed and synthesized. The articles were selected from the literature from diverse disciplines including business, information systems, medical informatics, and medicine. RESULTS The review of the literature showed different levels and focus of analysis when measuring the value of social media in health care settings. It equally showed that there are various metrics for measurement, levels of measurement, approaches to measurement, and scales of measurement. Each may be relevant, depending on the use case of social media in health care. CONCLUSIONS A comprehensive yardstick is required to simplify the measurement of outcomes resulting from the use of social media in health care. At the moment, there is neither a consensus on what indicators to measure nor on how to measure them. We hope that this review is used as a starting point to create a comprehensive measurement criterion for social media used in health care.
Affiliation(s)
- Chukwuma Ukoha, Centre for Informatics and Applied Optimisation, Federation University Australia, Ballarat, Australia
- Andrew Stranieri, Centre for Informatics and Applied Optimisation, Federation University Australia, Ballarat, Australia
713

Hakim NL, Shih TK, Kasthuri Arachchi SP, Aditya W, Chen YC, Lin CY. Dynamic Hand Gesture Recognition Using 3DCNN and LSTM with FSM Context-Aware Model. Sensors 2019; 19:s19245429. [PMID: 31835404] [PMCID: PMC6961023] [DOI: 10.3390/s19245429]
Abstract
With the recent growth of Smart TV technology, the demand for unique and beneficial applications motivates the study of a gesture-based system for a smart TV-like environment. A single system that combines movie recommendation, a social media platform, a call-a-friend application, weather updates, a chatting app, and a tourism platform, regulated by a natural-like gesture controller, is proposed to allow ease of use and natural interaction. The gesture recognition problem was addressed with a vocabulary of 24 gestures, 13 static and 11 dynamic, suited to the environment. A dataset of RGB and depth image sequences was collected, preprocessed, and used to train the proposed deep learning architecture. A three-dimensional Convolutional Neural Network (3DCNN) followed by a Long Short-Term Memory (LSTM) model was used to extract spatio-temporal features. After classification, a Finite State Machine (FSM) mediates the model's class decisions based on the application context. The results suggest that the combined depth and RGB data achieve a 97.8% accuracy rate on eight selected gestures, while the FSM improved the real-time recognition rate from 89% to 91%.
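A schematic sketch of the context-aware FSM idea follows: the classifier proposes a gesture, and the state machine only accepts gestures valid in the current application state. The state and gesture names are invented for illustration and are not the paper's 24-gesture vocabulary.

```python
# Schematic sketch of the context-aware FSM: accept only gestures that are
# valid in the current application state. Names are illustrative only.
ALLOWED = {
    "home":    {"open_movies", "open_weather", "call_friend"},
    "movies":  {"scroll_left", "scroll_right", "select", "back"},
    "weather": {"next_day", "previous_day", "back"},
}
TRANSITIONS = {
    ("home", "open_movies"): "movies",
    ("home", "open_weather"): "weather",
    ("movies", "back"): "home",
    ("weather", "back"): "home",
}

def step(state, predicted_gesture):
    """Ignore gestures that are invalid in this context; otherwise transition (or stay)."""
    if predicted_gesture not in ALLOWED[state]:
        return state                                  # filtered out by the FSM
    return TRANSITIONS.get((state, predicted_gesture), state)

state = "home"
for g in ["scroll_left", "open_movies", "scroll_left", "back"]:
    state = step(state, g)
    print(g, "->", state)
```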
Affiliation(s)
- Noorkholis Luthfil Hakim, Department of Computer Science and Information Engineering, National Central University, Taoyuan 32001, Taiwan
- Timothy K. Shih, Department of Computer Science and Information Engineering, National Central University, Taoyuan 32001, Taiwan
- Wisnu Aditya, Department of Computer Science and Information Engineering, National Central University, Taoyuan 32001, Taiwan
- Yi-Cheng Chen, Department of Information Management, National Central University, Taoyuan 32001, Taiwan
- Chih-Yang Lin, Department of Electrical Engineering, Yuan Ze University, Taoyuan 32003, Taiwan
714

Rollin G, Lages J, Serebriyskaya TS, Shepelyansky DL. Interactions of pharmaceutical companies with world countries, cancers and rare diseases from Wikipedia network analysis. PLoS One 2019; 14:e0225500. [PMID: 31800621] [PMCID: PMC6892506] [DOI: 10.1371/journal.pone.0225500]
Abstract
Using the English Wikipedia network of more than 5 million articles, we analyze interactions and interlinks between the 34 largest pharmaceutical companies, 195 world countries, 47 rare renal diseases and 37 types of cancer. The recently developed algorithm using a reduced Google matrix (REGOMAX) allows us to take into account both direct Markov transitions between these articles and indirect transitions generated by the pathways between them via the global Wikipedia network. This approach therefore provides a compact description of interactions between these articles that allows us to determine the friendship networks between them, as well as the PageRank sensitivity of countries to pharmaceutical companies and rare renal diseases. We also show that the top pharmaceutical companies in terms of their Wikipedia PageRank are not those with the highest market capitalization.
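The REGOMAX reduction itself is more involved, but the PageRank computation underlying this kind of analysis can be sketched as a power iteration on a toy directed graph; the damping factor 0.85 is the conventional choice, assumed here rather than taken from the paper.

```python
# Sketch of the PageRank power iteration that underlies Google matrix analysis
# of the Wikipedia hyperlink network. Toy graph; alpha = 0.85 is assumed.
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10):
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Column-stochastic Google matrix; dangling nodes link uniformly to all nodes.
    S = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n).T
    G = alpha * S + (1 - alpha) / n
    p = np.full(n, 1.0 / n)
    while True:
        p_next = G @ p
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Articles 0..3 with directed hyperlinks between them.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj))
```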
Affiliation(s)
- Guillaume Rollin, Institut UTINAM, CNRS, UMR 6213, OSU THETA, Université de Bourgogne Franche-Comté, Besançon, France
- José Lages, Institut UTINAM, CNRS, UMR 6213, OSU THETA, Université de Bourgogne Franche-Comté, Besançon, France
- Tatiana S. Serebriyskaya, Laboratory for Translational Research and Personalized Medicine, Moscow Institute of Physics and Technology, Moscow, Russia
- Dima L. Shepelyansky, Laboratoire de Physique Théorique, IRSAMC, Université de Toulouse, CNRS, UPS, 31062 Toulouse, France
715

Ashaye OR, Irani Z. The role of stakeholders in the effective use of e-government resources in public services. International Journal of Information Management 2019. [DOI: 10.1016/j.ijinfomgt.2019.05.016]
716

Cero Dinarević E, Baraković Husić J, Baraković S. Step by Step Towards Effective Human Activity Recognition: A Balance between Energy Consumption and Latency in Health and Wellbeing Applications. Sensors (Basel, Switzerland) 2019; 19:E5206. [PMID: 31783705] [PMCID: PMC6928889] [DOI: 10.3390/s19235206]
Abstract
Human activity recognition (HAR) is a classification process that is used for recognizing human motions. A comprehensive review of currently considered approaches in each stage of HAR, as well as the influence of each HAR stage on energy consumption and latency is presented in this paper. It highlights various methods for the optimization of energy consumption and latency in each stage of HAR that has been used in literature and was analyzed in order to provide direction for the implementation of HAR in health and wellbeing applications. This paper analyses if and how each stage of the HAR process affects energy consumption and latency. It shows that data collection and filtering and data segmentation and classification stand out as key stages in achieving a balance between energy consumption and latency. Since latency is only critical for real-time HAR applications, the energy consumption of sensors and devices stands out as a key challenge for HAR implementation in health and wellbeing applications. Most of the approaches in overcoming challenges related to HAR implementation take place in the data collection, filtering and classification stages, while the data segmentation stage needs further exploration. Finally, this paper recommends a balance between energy consumption and latency for HAR in health and wellbeing applications, which takes into account the context and health of the target population.
Affiliation(s)
- Enida Cero Dinarević, Department for Information Technology, American University in Bosnia and Herzegovina, 75000 Tuzla, Bosnia and Herzegovina
- Jasmina Baraković Husić, Department of Telecommunications, Faculty of Electrical Engineering, University of Sarajevo, 71000 Sarajevo, Bosnia and Herzegovina
- Sabina Baraković, Department for IT and Telecommunication Systems, Ministry of Security of Bosnia and Herzegovina, 71000 Sarajevo, Bosnia and Herzegovina
717
Abstract
One positive impact of smart cities is reducing energy consumption and CO2 emission through the use of information and communication technologies (ICT). Energy transition pursues systematic changes to the low-carbon society, and it can benefit from technological and institutional advancement in smart cities. The integration of the energy transition to smart city development has not been thoroughly studied yet. The purpose of this study is to find empirical evidence of smart cities’ contributions to energy transition. The hypothesis is that there is a significant difference between smart and non-smart cities in the performance of energy transition. The Smart Energy Transition Index is introduced. Index is useful to summarize the smart city component’s contribution to energy transition and to enable comparison among cities. The cities in South Korea are divided into three groups: (1) first-wave smart cities that focus on smart transportation and security services; (2) second-wave smart cities that provide comprehensive urban services; and (3) non-smart cities. The results showed that second-wave smart cities scored higher than first-wave and non-smart cities, and there is a statistically significant difference among city groups. This confirms the hypothesis of this paper that smart city development can contribute to the energy transition.
718

Khan FZ, Soiland-Reyes S, Sinnott RO, Lonie A, Goble C, Crusoe MR. Sharing interoperable workflow provenance: A review of best practices and their practical application in CWLProv. Gigascience 2019; 8:giz095. [PMID: 31675414] [PMCID: PMC6824458] [DOI: 10.1093/gigascience/giz095]
Abstract
BACKGROUND The automation of data analysis in the form of scientific workflows has become a widely adopted practice in many fields of research. Computationally driven data-intensive experiments using workflows enable automation, scaling, adaptation, and provenance support. However, there are still several challenges associated with the effective sharing, publication, and reproducibility of such workflows due to the incomplete capture of provenance and lack of interoperability between different technical (software) platforms. RESULTS Based on best-practice recommendations identified from the literature on workflow design, sharing, and publishing, we define a hierarchical provenance framework to achieve uniformity in provenance and support comprehensive and fully re-executable workflows equipped with domain-specific information. To realize this framework, we present CWLProv, a standard-based format to represent any workflow-based computational analysis to produce workflow output artefacts that satisfy the various levels of provenance. We use open source community-driven standards, interoperable workflow definitions in Common Workflow Language (CWL), structured provenance representation using the W3C PROV model, and resource aggregation and sharing as workflow-centric research objects generated along with the final outputs of a given workflow enactment. We demonstrate the utility of this approach through a practical implementation of CWLProv and evaluation using real-life genomic workflows developed by independent groups. CONCLUSIONS The underlying principles of the standards utilized by CWLProv enable semantically rich and executable research objects that capture computational workflows with retrospective provenance such that any platform supporting CWL will be able to understand the analysis, reuse the methods for partial reruns, or reproduce the analysis to validate the published findings.
Affiliation(s)
- Farah Zaib Khan, School of Computing and Information Systems, The University of Melbourne, Doug McDonnell Building, Parkville 3052, Australia; Common Workflow Language Project
- Richard O Sinnott, School of Computing and Information Systems, The University of Melbourne, Doug McDonnell Building, Parkville 3052, Australia
- Andrew Lonie, School of Computing and Information Systems, The University of Melbourne, Doug McDonnell Building, Parkville 3052, Australia
719

Batmaz Z, Kaleli C. AE-MCCF: An Autoencoder-Based Multi-criteria Recommendation Algorithm. Arabian Journal for Science and Engineering 2019. [DOI: 10.1007/s13369-019-03946-z]
720

Experience Design, Virtual Reality and Media Hybridization for the Digital Communication Inside Museums. Applied System Innovation 2019. [DOI: 10.3390/asi2040035]
Abstract
Experience design, in both real and virtual museums, is very complex to plan, even more so when digital content is juxtaposed with real collections. Researchers in this field, curators, creatives, and software developers must work together to evolve towards a more efficient interconnection among visitors, collections, and digital applications. This paper deals with such an interconnection, providing a theoretical background and practical guidelines based on museum studies and the author's research experience in this domain, supported by the results of surveys of European museum visitors on digital technologies. Media hybridization has a long and evolving tradition, and it can help transmit culture to the public in an engaging way while respecting the scientific plausibility of the content. The choice of narrative structures and styles, as well as of interaction paradigms and technologies, is deeply conditioned by a series of factors that are examined in detail. A general “direction” is needed to arouse in the visitor a feeling of confidence and trust, expectation and discovery, so that he or she feels at the center of an emotional and creative experience and of a progressive appropriation of meaning. Various typologies of experiences are discussed and compared.
721

Schöpfel J, Azeroual O, Saake G. Implementation and user acceptance of research information systems. Data Technologies and Applications 2019. [DOI: 10.1108/dta-01-2019-0009]
Abstract
Purpose
The purpose of this paper is to present empirical evidence on the implementation, acceptance and quality-related aspects of research information systems (RIS) in academic institutions.
Design/methodology/approach
The study is based on a 2018 survey with 160 German universities and research institutions.
Findings
The paper presents recent figures about the implementation of RIS in German academic institutions, including results on the satisfaction, perceived usefulness and ease of use. It contains also information about the perceived data quality and the preferred quality management. RIS acceptance can be achieved only if the highest possible quality of the data is to be ensured. For this reason, the impact of data quality on the technology acceptance model (TAM) is examined, and the relation between the level of data quality and user acceptance of the associated institutional RIS is addressed.
Research limitations/implications
The data provide empirical elements for a better understanding of the role of the data quality for the acceptance of RIS, in the framework of a TAM. The study puts the focus on commercial and open-source solutions while in-house developments have been excluded. Also, mainly because of the small sample size, the data analysis was limited to descriptive statistics.
Practical implications
The results are helpful for the management of RIS projects, to increase acceptance and satisfaction with the system, and for the further development of RIS functionalities.
Originality/value
The number of empirical studies on the implementation and acceptance of RIS is low, and very few address in this context the question of data quality. The study tries to fill the gap.
722
Abstract
This research aims to explore how to enhance student engagement in higher education institutions (HEIs) while using a novel conversational system (chatbots). The principal research methodology for this study is design science research (DSR), which is executed in three iterations: personas elicitation, a survey and development of student engagement factor models (SEFMs), and chatbot interaction analysis. This paper focuses on the first iteration, personas elicitation, which proposes a data-driven persona development method (DDPDM) that utilises machine learning, specifically the K-means clustering technique. Data analysis is conducted using two datasets. Three methods are used to find the K-values: the elbow, gap statistic, and silhouette methods. Subsequently, the silhouette coefficient is used to find the optimal value of K. Eight personas are produced from the two data analyses. The pragmatic findings from this study make two contributions to the current literature. Firstly, the proposed DDPDM uses machine learning, specifically K-means clustering, to build data-driven personas. Secondly, the persona template is designed for university students, which supports the construction of data-driven personas. Future work will cover the second and third iterations. It will cover building SEFMs, building tailored interaction models for these personas and then evaluating them using chatbot technology.
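As a minimal sketch of the persona-clustering step described above (K-means with the silhouette coefficient used to choose K), the following scikit-learn snippet uses a synthetic feature matrix as a stand-in for the study's survey datasets.

```python
# Minimal sketch of the persona-clustering step: K-means over student survey
# features, with the silhouette coefficient used to choose K. Synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(300, 6)))  # 300 students, 6 survey features

scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
personas = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(X)
print("chosen K:", best_k)
print("persona centroids (one row per persona):")
print(personas.cluster_centers_.round(2))
```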
723

Zmitri M, Fourati H, Vuillerme N. Human Activities and Postures Recognition: From Inertial Measurements to Quaternion-Based Approaches. Sensors 2019; 19:s19194058. [PMID: 31547055] [PMCID: PMC6806241] [DOI: 10.3390/s19194058]
Abstract
This paper presents two approaches to assess the effect of the number of inertial sensors and their placement locations on the recognition of human postures and activities. Inertial and Magnetic Measurement Units (IMMUs), which consist of a triad of three-axis accelerometer, three-axis gyroscope, and three-axis magnetometer sensors, are used in this work. Five IMMUs are initially used and attached to different body segments. Placements of up to three IMMUs are then considered: back, left foot, and left thigh. The subspace k-nearest neighbors (KNN) classifier is used for the supervised learning process and the recognition task. In a first approach, we feed raw data from the three-axis accelerometer and three-axis gyroscope into the classifier without any filtering or pre-processing, unlike what is usually reported in the state of the art, where statistical features are computed instead. Results show the efficiency of this method for the recognition of the studied activities and postures. With the proposed algorithm, more than 80% of the activities and postures are correctly classified using one IMMU, placed on the lower back, left thigh, or left foot, and more than 90% when combining all three placements. In a second approach, we extract attitude, in the form of quaternions, from the IMMUs in order to carry out the recognition process more precisely. The obtained accuracy results are compared to those obtained when only raw data is exploited. Results show that the use of attitude significantly improves the performance of the classifier, especially for certain specific activities. In that case, it was further shown that using a smaller number of quaternion-based features in the recognition process leads to lower computation time and better accuracy.
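As one hedged illustration of how a quaternion attitude feature can be obtained from an IMMU, the sketch below shows only the standard gyroscope-integration step; the authors' attitude estimation also fuses accelerometer and magnetometer data, and the sampling rate and angular rate here are assumptions.

```python
# Sketch of propagating a quaternion attitude from gyroscope readings
# (q_dot = 0.5 * q * [0, omega]). Only the gyro step is shown; the paper's
# attitude filter additionally fuses accelerometer and magnetometer data.
import numpy as np

def quat_multiply(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """One integration step of the attitude quaternion from angular rate (rad/s)."""
    q_dot = 0.5 * quat_multiply(q, np.concatenate(([0.0], omega)))
    q = q + q_dot * dt
    return q / np.linalg.norm(q)          # keep it a unit quaternion

q = np.array([1.0, 0.0, 0.0, 0.0])        # identity attitude
for omega in np.tile([0.0, 0.0, np.pi / 2], (100, 1)):   # 1 s of yaw at pi/2 rad/s, 100 Hz
    q = integrate_gyro(q, omega, dt=0.01)
print("attitude quaternion after 1 s:", q.round(3))      # approx. 90-degree rotation about z
```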
Affiliation(s)
- Makia Zmitri, GIPSA-Lab, Department of Automatic Control, University Grenoble Alpes, 38000 Grenoble, France; Univ. Grenoble Alpes, AGEIS, 38000 Grenoble, France
- Hassen Fourati, GIPSA-Lab, Department of Automatic Control, University Grenoble Alpes, 38000 Grenoble, France
- Nicolas Vuillerme, Univ. Grenoble Alpes, AGEIS, 38000 Grenoble, France; Institut Universitaire de France, 75231 Paris, France
724

Rollin G, Lages J, Shepelyansky DL. Wikipedia network analysis of cancer interactions and world influence. PLoS One 2019; 14:e0222508. [PMID: 31536541] [PMCID: PMC6752824] [DOI: 10.1371/journal.pone.0222508]
Abstract
We apply the Google matrix algorithms to the analysis of interactions and influence of 37 cancer types, 203 cancer drugs and 195 world countries using the network of 5,416,537 English Wikipedia articles with all their directed hyperlinks. The PageRank algorithm provides a ranking of cancers that has 60% and 70% overlap with the top 10 deadliest cancers extracted from World Health Organization GLOBOCAN 2018 and the Global Burden of Diseases Study 2017, respectively. The recently developed reduced Google matrix algorithm gives networks of interactions between cancers, drugs and countries, taking into account all direct and indirect links between these selected 435 entities. These reduced networks allow us to obtain the sensitivity of countries to specific cancers and drugs. The strongest links between cancers and drugs are in good agreement with the approved medical prescriptions of specific drugs for specific cancers. We argue that this analysis of knowledge accumulated in Wikipedia provides useful complementary global information about interdependencies between cancers, drugs and world countries.
Affiliation(s)
- Guillaume Rollin, Institut UTINAM, CNRS, UMR 6213, OSU THETA, Université de Bourgogne Franche-Comté, Besançon, France
- José Lages, Institut UTINAM, CNRS, UMR 6213, OSU THETA, Université de Bourgogne Franche-Comté, Besançon, France
- Dima L. Shepelyansky, Laboratoire de Physique Théorique, IRSAMC, Université de Toulouse, CNRS, UPS, Toulouse, France
725

Dementia Patient Segmentation Using EMR Data Visualization: A Design Study. International Journal of Environmental Research and Public Health 2019; 16:ijerph16183438. [PMID: 31527556] [PMCID: PMC6765847] [DOI: 10.3390/ijerph16183438]
Abstract
(1) Background: The Electronic Medical Record system, which is a digital medical record management architecture, is critical for reliable medical research. It facilitates the investigation of disease patterns and efficient treatment via collaboration with data scientists. (2) Methods: In this study, we present multidimensional visual tools for the analysis of multidimensional datasets via a combination of 3-dimensional radial coordinate visualization (3D RadVis) and many-objective optimization (e.g., Parallel Coordinates). Also, we propose a user-driven research design to facilitate visualization. We followed a design process to (1) understand the demands of domain experts, (2) define the problems based on relevant works, (3) design visualization, (4) implement visualization, and (5) enable qualitative evaluation by domain experts. (3) Results: This study provides clinical insight into dementia based on EMR data via visual analysis. Results of a case study based on questionnaires surveying daily living activities indicated that daily behaviors influenced the progression of dementia. (4) Conclusions: This study provides a visual analytical tool to support cluster segmentation. Using this tool, we segmented dementia patients into clusters and interpreted the behavioral patterns of each group. This study contributes to biomedical data interpretation based on a visual approach.
726
Abstract
Tourism forecasting is a significant tool in the tourist industry, providing for careful planning and management of tourism resources. Although accurate tourist volume prediction is a very challenging task, reliable and precise predictions offer the opportunity to gain major profits. Thus, the development and implementation of more sophisticated and advanced machine learning algorithms can be beneficial for the tourism forecasting industry. In this work, we explore the prediction performance of Weight Constrained Neural Networks (WCNNs) for forecasting tourist arrivals in Greece. WCNNs constitute a new machine learning prediction model characterized by the application of box constraints on the weights of the network. Our experimental results indicate that WCNNs outperform classical neural networks and state-of-the-art regression models: support vector regression, k-nearest neighbor regression, radial basis function neural networks, M5 decision trees and Gaussian processes.
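As a hedged PyTorch sketch of one simple way to impose such box constraints, the snippet below clamps every weight into a fixed interval after each optimizer step; the bound, architecture, and data are assumptions, and the authors' constrained training algorithm may enforce the box differently.

```python
# Hedged sketch of box-constrained weights: clamp all weights into
# [-bound, bound] after each optimizer step. Bound, model, and data are assumed.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(256, 8)                       # stand-in for monthly arrival features
y = X @ torch.randn(8, 1) + 0.1 * torch.randn(256, 1)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
bound = 0.5                                   # assumed box constraint on the weights

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    with torch.no_grad():                     # project weights back into the box
        for p in model.parameters():
            p.clamp_(-bound, bound)

print("final training MSE:", loss.item())
```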
727

Livieris IE, Pintelas P. An adaptive nonmonotone active set – weight constrained – neural network training algorithm. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.06.033]
728

An Energy-Efficient Method for Human Activity Recognition with Segment-Level Change Detection and Deep Learning. Sensors 2019; 19:s19173688. [PMID: 31450654] [PMCID: PMC6749525] [DOI: 10.3390/s19173688]
Abstract
Human activity recognition (HAR), which is important in context awareness services, needs to occur continuously in daily life, owing to which an energy-efficient method is needed. However, because human activities have a longer cycle than HAR methods, which have analysis cycles of a few seconds, continuous classification of human activities using these methods is computationally and energy inefficient. Therefore, we propose segment-level change detection to identify activity change with very low computational complexity. Additionally, a fully convolutional network (FCN) with a high recognition rate is used to classify the activity only when activity change occurs. We compared the accuracy and energy consumption of the proposed method with that of a method based on a convolutional neural network (CNN) by using a public dataset on different embedded platforms. The experimental results showed that, although the recognition rate of the proposed FCN model is similar to that of the CNN model, the former requires only 10% of the network parameters of the CNN model. In addition, our experiments to measure the energy consumption on the embedded platforms showed that the proposed method uses as much as 6.5 times less energy than the CNN-based method when only HAR energy consumption is compared.
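A minimal sketch of the energy-saving gating idea described above follows: compare a cheap statistic between consecutive sensor segments and invoke the heavy classifier only when a change is detected. The statistic, threshold, and the stub standing in for the FCN are assumptions, not the paper's exact design.

```python
# Minimal sketch of segment-level change detection as a gate in front of a
# heavy classifier. Statistic, threshold, and classifier stub are assumptions.
import numpy as np

def segment_stat(segment):
    """Cheap per-segment summary: mean and standard deviation per axis."""
    return np.concatenate([segment.mean(axis=0), segment.std(axis=0)])

def heavy_classifier(segment):
    return "walking"            # stand-in for the FCN inference call

rng = np.random.default_rng(0)
stream = rng.normal(size=(2000, 3))            # 3-axis accelerometer stream
stream[1000:] += 2.0                           # activity change halfway through

seg_len, threshold = 100, 1.0
label, prev_stat, classifier_calls = None, None, 0
for start in range(0, len(stream), seg_len):
    seg = stream[start:start + seg_len]
    stat = segment_stat(seg)
    if prev_stat is None or np.linalg.norm(stat - prev_stat) > threshold:
        label = heavy_classifier(seg)          # only run the network on change
        classifier_calls += 1
    prev_stat = stat

print("segments:", len(stream) // seg_len, "classifier calls:", classifier_calls)
```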
729

Multilingual Ranking of Wikipedia Articles with Quality and Popularity Assessment in Different Topics. Computers 2019. [DOI: 10.3390/computers8030060]
Abstract
On Wikipedia, articles about various topics can be created and edited independently in each language version. Therefore, the quality of information about the same topic depends on the language. Any interested user can improve an article and that improvement may depend on the popularity of the article. The goal of this study is to show what topics are best represented in different language versions of Wikipedia using results of quality assessment for over 39 million articles in 55 languages. In this paper, we also analyze how popular selected topics are among readers and authors in various languages. We used two approaches to assign articles to various topics. First, we selected 27 main multilingual categories and analyzed all their connections with sub-categories based on information extracted from over 10 million categories in 55 language versions. To classify the articles to one of the 27 main categories, we took into account over 400 million links from articles to over 10 million categories and over 26 million links between categories. In the second approach, we used data from DBpedia and Wikidata. We also showed how the results of the study can be used to build local and global rankings of the Wikipedia content.
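As a small, hedged sketch of the first approach described above (walking category-to-subcategory links to decide which main category an article's categories fall under), the snippet below uses a toy category graph invented for illustration rather than Wikipedia data.

```python
# Small sketch of climbing category links (breadth-first) to map an article's
# categories to a main category. The toy graph is illustrative only.
from collections import deque

MAIN = {"Science", "Sports"}
subcategory_links = {                      # parent category -> sub-categories
    "Science": ["Physics", "Biology"],
    "Physics": ["Quantum mechanics"],
    "Sports": ["Ball games"],
    "Ball games": ["Football"],
}

# Invert the links so we can climb from a category toward the main categories.
parents = {}
for parent, children in subcategory_links.items():
    for child in children:
        parents.setdefault(child, []).append(parent)

def main_category(category):
    queue, seen = deque([category]), {category}
    while queue:
        cat = queue.popleft()
        if cat in MAIN:
            return cat
        for p in parents.get(cat, []):
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return None

article_categories = ["Quantum mechanics"]     # categories attached to one article
print({c: main_category(c) for c in article_categories})
```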
730
Abstract
This contribution provides a systematic literature review of Human Activity Recognition for Production and Logistics. An initial list of 1243 publications that complies with predefined Inclusion Criteria was surveyed by three reviewers. Fifty-two publications that comply with the Content Criteria were analysed regarding the observed activities, sensor attachment, utilised datasets, sensor technology and the applied methods of HAR. This review is focused on applications that use marker-based Motion Capturing or Inertial Measurement Units. The analysed methods can be deployed in industrial application of Production and Logistics or transferred from related domains into this field. The findings provide an overview of the specifications of state-of-the-art HAR approaches, statistical pattern recognition and deep architectures and they outline a future road map for further research from a practitioner’s perspective.
731

Process Innovation and Improvement Using Business Object-Oriented Process Modelling (BOOPM) Framework. Applied System Innovation 2019. [DOI: 10.3390/asi2030023]
Abstract
In the past decades, a number of methodologies have been proposed to innovate and improve business processes that play an important role in enhancing the operational efficiency of an organisation in order to attain business competitiveness. Traditional business process modelling (BPM) approaches are process-centric and focus on the workflow, ignoring the data modelling aspects that are essential for today’s data-centric landscape of modern businesses. Hence, a majority of BPM initiatives have failed in several organisations due to the lack of data-driven insights into their business performance. On the other hand, the information systems of today focus more on dataflows using object-oriented modelling (OOM) approaches. Even standard OOM approaches, such as unified modelling language (UML) methods, exhibit inherent weaknesses due to their lack of formalized innovation with business objects and the dynamic control-flows of complex business processes. In addition to these issues, both BPM and OOM approaches have been augmented with an array of complex software tools and techniques which have confused businesses. There is a lack of a common generalized framework that integrates the well-formalised control-flow based BPM approach and the dataflow based OOM approach that is suitable for today’s enterprise systems in order to support organisations to achieve successful business process improvements. This paper takes a modest step to fill this gap. We propose a framework using a structured six-step business process modelling (BPM) guideline combined with a business object-oriented methodology (BOOM) in a unique and practical way that could be adopted for improving an organisation’s process efficiency and business performance in contemporary enterprise systems. Our proposed business object-oriented process modelling (BOOPM) framework is applied to a business case study in order to demonstrate the practical implementation and process efficiency improvements that can be achieved in enterprise systems using such a structured and integrated approach.
732

Helou S, Abou-Khalil V, Yamamoto G, Kondoh E, Tamura H, Hiragi S, Sugiyama O, Okamoto K, Nambu M, Kuroda T. Understanding the Situated Roles of Electronic Medical Record Systems to Enable Redesign: Mixed Methods Study. JMIR Hum Factors 2019; 6:e13812. [PMID: 31290398] [PMCID: PMC6647759] [DOI: 10.2196/13812]
Abstract
BACKGROUND Redesigning electronic medical record (EMR) systems is needed to improve their usability and usefulness. Similar to other artifacts, EMR systems can evolve with time and exhibit situated roles. Situated roles refer to the ways in which a system is appropriated by its users, that is, the unintended ways the users engage with, relate to, and perceive the system in its context of use. These situated roles are usually unknown to the designers as they emerge and evolve as a response by the users to a contextual need or constraint. Understanding the system's situated roles can expose the unarticulated needs of the users and enable redesign opportunities. OBJECTIVE This study aimed to find EMR redesign opportunities by understanding the situated roles of EMR systems in prenatal care settings. METHODS We conducted a field-based observational study at a Japanese prenatal care clinic. We observed 3 obstetricians and 6 midwives providing prenatal care to 37 pregnant women. We looked at how the EMR system is used during the checkups. We analyzed the observational data following a thematic analysis approach and identified the situated roles of the EMR system. Finally, we administered a survey to 5 obstetricians and 10 midwives to validate our results and understand the attitudes of the prenatal care staff regarding the situated roles of the EMR system. RESULTS We identified 10 distinct situated roles that EMR systems play in prenatal care settings. Among them, 4 roles were regarded as favorable as most users wanted to experience them more frequently, and 4 roles were regarded as unfavorable as most users wanted to experience them less frequently; 2 ambivalent roles highlighted the providers' reluctance to document sensitive psychosocial information in the EMR and their use of the EMR system as an accomplice to pause communication during the checkups. To improve the usability and usefulness of EMR systems, designers can amplify the favorable roles and minimize the unfavorable roles. Our results also showed that obstetricians and midwives may have different experiences, wants, and priorities regarding the use of the EMR system. CONCLUSIONS Currently, EMR systems are mainly viewed as tools that support the clinical workflow. Redesigning EMR systems is needed to amplify their roles as communication support tools. Our results provided multiple EMR redesign opportunities to improve the usability and usefulness of EMR systems in prenatal care. Designers can use the results to guide their EMR redesign activities and align them with the users' wants and priorities. The biggest challenge is to redesign EMR systems in a way that amplifies their favorable roles for all the stakeholders concurrently.
Collapse
Affiliation(s)
- Samar Helou
- Department of Social Informatics, Graduate School of Informatics, Kyoto University, Kyoto, Japan
| | - Victoria Abou-Khalil
- Department of Social Informatics, Graduate School of Informatics, Kyoto University, Kyoto, Japan
| | - Goshiro Yamamoto
- Division of Medical Information Technology and Administration Planning, Kyoto University Hospital, Kyoto, Japan
| | - Eiji Kondoh
- Department of Gynecology and Obstetrics, Graduate School of Medicine, Kyoto University, Kyoto, Japan
| | - Hiroshi Tamura
- Center for Innovative Research and Education in Data Science, Kyoto University, Kyoto, Japan
| | - Shusuke Hiragi
- Division of Medical Information Technology and Administration Planning, Kyoto University Hospital, Kyoto, Japan
| | - Osamu Sugiyama
- Preemptive Medicine and Lifestyle Related Diseases Research Center, Kyoto University Hospital, Kyoto, Japan
| | - Kazuya Okamoto
- Division of Medical Information Technology and Administration Planning, Kyoto University Hospital, Kyoto, Japan
| | - Masayuki Nambu
- Preemptive Medicine and Lifestyle Related Diseases Research Center, Kyoto University Hospital, Kyoto, Japan
| | - Tomohiro Kuroda
- Division of Medical Information Technology and Administration Planning, Kyoto University Hospital, Kyoto, Japan
| |
Collapse
|
733
|
|
734
|
IVAN: An Interactive Herlofson’s Nomogram Visualizer for Local Weather Forecast. COMPUTERS 2019. [DOI: 10.3390/computers8030053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In 1947, N. Herlofson proposed a modification of Heinrich Hertz's 1884 Emagram that provides larger angles between isotherms and adiabats, with the goal of making hand-made weather forecasts more precise. Since then, Herlofson's nomogram has been used daily to visualize the readings of the roughly 800 radiosonde balloons released worldwide twice a day, which sound the atmosphere and record pressure, altitude, temperature, dew point, and wind velocity. Weather forecasters use this information to predict fog, cloud height, rain, thunderstorms, and so on. However, despite its wide diffusion, non-technical users (e.g., private glider pilots) rarely use Herlofson's nomogram because they often find it confusing and hard to interpret. This paper addresses this problem by presenting a visualization-based environment that renders Herlofson's nomogram in an easier-to-interpret way, allowing users to select the appropriate level of detail while inspecting both the raw sounding data and the plotted diagram. In a formal user study, our visual environment was compared with the classic representation of Herlofson's nomogram, demonstrating the higher efficacy and better comprehensibility of the proposed solution.
Collapse
|
735
|
Abstract
When using computer-aided translation systems in a typical professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view and from a purely technological perspective. This paper describes the SCATE research on improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human-computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.
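One of the SCATE topics is improved fuzzy matching against a translation memory. As a rough illustration of the general idea only (not the project's actual matching metric), the following Python sketch retrieves translation-memory entries using a simple character-based similarity ratio; the toy TM entries, placeholder targets, and threshold are all hypothetical.

```python
from difflib import SequenceMatcher

# Toy translation memory: (source segment, target segment) pairs.
# Targets are placeholders; a real TM would hold actual translations.
TM = [
    ("The system saves the file automatically.", "<translation 1>"),
    ("Click the button to save the file.", "<translation 2>"),
    ("Open the settings menu to change the language.", "<translation 3>"),
]

def fuzzy_matches(query, tm, threshold=0.6):
    """Rank TM entries by a character-based similarity ratio between the
    query and the stored source segment (a stand-in for the improved
    fuzzy-matching metrics investigated in the project)."""
    scored = []
    for src, tgt in tm:
        score = SequenceMatcher(None, query.lower(), src.lower()).ratio()
        if score >= threshold:
            scored.append((score, src, tgt))
    return sorted(scored, reverse=True)

for score, src, tgt in fuzzy_matches("Click a button to save a file.", TM):
    print(f"{score:.2f}  {src}  ->  {tgt}")
```

In a real workflow, the retrieved target segments would be offered to the translator as pre-translation suggestions, ranked by the match score.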
Collapse
|
736
|
Factors Affecting the Performance of Recommender Systems in a Smart TV Environment. TECHNOLOGIES 2019. [DOI: 10.3390/technologies7020041] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Recommender systems are deployed on the Web to reduce cognitive overload. They take different parameters, such as profile information, feedback, and history, as input and recommend items to a user or group of users. Such parameters are easy to predict and calculate for a single user on a personalized device, such as a personal computer or smartphone. However, watching Web content on a smart TV differs significantly from using other connected devices. For example, a smart TV is a multi-user, lean-back device that is normally enjoyed in groups. Moreover, the performance of a recommender system is questionable given the dynamic interests of the groups in front of a smart TV. This paper discusses existing recommender system approaches in the context of the smart TV environment in detail. Moreover, it highlights the issues and challenges in existing recommendations for smart TV viewers and presents some research opportunities to cope with these issues. The paper further reports some overlooked factors that affect the recommendation process on a smart TV. A subjective study of viewers' watching behavior on a smart TV is also presented to validate these factors. Results show that, despite all technological advances, viewers treat the smart TV as a passive, lean-back device, mostly used for watching live channels and videos on a big screen. Furthermore, in most households the smart TV is a shared device enjoyed in groups, which hinders personalized recommendation because predicting the group's members and satisfying each of them remains an open issue. The findings of this study suggest that, to deliver precise and relevant recommendations on smart TVs, recommender systems need to adapt to viewers' varying watching behavior.
Collapse
|
737
|
AirInsight: Visual Exploration and Interpretation of Latent Patterns and Anomalies in Air Quality Data. SUSTAINABILITY 2019. [DOI: 10.3390/su11102944] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Nowadays, the huge volume of air quality data provides unprecedented opportunities for analyzing pollution. However, owing to its high complexity, most traditional analytical methods focus on abstracting the data, discarding its original structure and limiting the interpretability of the results. Visual analysis is a powerful technique for exploring unknown patterns, since it retains the details of the original data and gives visual feedback to users. In this paper, we focus on air quality data and propose AirInsight, an interactive visual analytics system for recognizing, exploring, and summarizing regular patterns, as well as detecting, classifying, and interpreting abnormal cases. Based on the time-varying and multivariate nature of air quality data, a dimension reduction method, Composite Least Square Projection (CLSP), is proposed, which allows data patterns to be appreciated and interpreted in the context of their attributes. On the basis of the observed regular patterns, multiple kinds of abnormal cases are further detected: multivariate anomalies by the proposed Noise Hierarchical Clustering (NHC) method, abruptly changing timestamps by the Time Diversity (TD) indicator, and cities with unique patterns by the Geographical Surprise (GS) measure. Moreover, we combine TD and GS to group anomalies based on their underlying spatiotemporal correlations. AirInsight includes multiple coordinated views and rich interactive functions to provide contextual information from different aspects and to facilitate a comprehensive understanding. In particular, a pair of glyphs is designed to provide a visual representation of the temporal variation in air quality conditions for a user-selected city. Experiments show that CLSP improves the accuracy of Least Square Projection (LSP) and that NHC is able to separate noise. Meanwhile, several case studies and a task-based user evaluation demonstrate that our system is effective and practical for exploring and interpreting multivariate spatiotemporal patterns and anomalies in air quality data.
Collapse
|
738
|
Abstract
Research and development activities are one of the main drivers of progress, economic growth, and wellbeing in many societies. This article proposes a text mining approach applied to a large amount of data extracted from job vacancy advertisements, aiming to shed light on the main skills and demands that characterize first-stage research positions in Europe. Results show that data handling and processing skills are essential for early-career researchers, irrespective of their research field. Also, since many of the analyzed first-stage research positions are affiliated with universities, they often include teaching activities. Management of time, risks, projects, and resources plays an important part in the job requirements included in the analyzed advertisements. Such information is relevant not only for early-career researchers, who select jobs by matching the skills they possess against those required, but also for the educational institutions responsible for developing the skills of future R&D professionals.
Collapse
|
739
|
Ramachandran S, Palivela LH. An intelligent system to detect human suspicious activity using deep neural networks. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2019. [DOI: 10.3233/jifs-179003] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Sumalatha Ramachandran
- Department of Information Technology, Madras Institute of Technology, Anna University, Chennai, Tamilnadu, India
| | - Lakshmi Harika Palivela
- Department of Information Technology, Madras Institute of Technology, Anna University, Chennai, Tamilnadu, India
| |
Collapse
|
740
|
Align My Curriculum: A Framework to Bridge the Gap between Acquired University Curriculum and Required Market Skills. SUSTAINABILITY 2019. [DOI: 10.3390/su11092607] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
With the advancement of technology, academics and curriculum developers are under constant pressure to provide students with skills that match the market's requirements. A systematic and continuous examination of the market is needed to stay up to date with the required skills and to update the curriculum accordingly. In this article, we present a framework referred to as Align My Curriculum (AMC). The AMC framework aims to facilitate alignment between the outcomes of a university curriculum and the skills required by the market. It can be used to classify, compare, and visualize data from university curricula and job vacancies in the market. The framework benefits academics and curriculum developers by helping them improve courses and thereby bridge the skills gap. Stakeholders from both academia and industry can gain insight into the predominant required and acquired skills, and the framework may also be useful for analysts, students, and job applicants. This article describes the architecture, implementation, and experimental results, with visual analyses to support decision- and policy-makers.
Collapse
|
741
|
Data-Driven Model-Free Tracking Reinforcement Learning Control with VRFT-based Adaptive Actor-Critic. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9091807] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
This paper proposes a neural network (NN)-based control scheme in an Adaptive Actor-Critic (AAC) learning framework designed for output reference model tracking, as a representative deep-learning application. The control learning scheme is model-free with respect to the process model. AAC designs usually require an initial controller to start the learning process; however, systematic guidelines for choosing the initial controller are not offered in the literature, especially in a model-free manner. Virtual Reference Feedback Tuning (VRFT) is proposed for obtaining an initially stabilizing NN nonlinear state-feedback controller, designed from input-state-output data collected from the process in an open-loop setting. The solution thus offers systematic guidelines for initial controller design. The resulting suboptimal state-feedback controller is then improved under the AAC learning framework by online adaptation of a critic NN and a controller NN. The mixed VRFT-AAC approach is validated on a multi-input multi-output, nonlinear, constrained, coupled vertical two-tank system. Discussions of the control system behavior are offered, together with comparisons with similar approaches.
Collapse
|
742
|
Abstract
Most existing multi-label data stream classification methods focus on extending single-label stream classification approaches to multi-label cases, without considering the special characteristics of multi-label stream data, such as label dependency, concept drift, and recurrent concepts. Motivated by these challenges, we devise an efficient ensemble paradigm for multi-label data stream classification. The algorithm deploys a novel change detection mechanism based on the Jensen–Shannon divergence to identify different kinds of concept drift in data streams. Moreover, our method accounts for label dependency by pruning away infrequent label combinations to enhance classification performance. Empirical results on both synthetic and real-world datasets demonstrate its effectiveness.
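The abstract does not spell out the detector, but a generic Jensen–Shannon divergence check over windowed label distributions conveys the core idea. The Python sketch below is an illustration under that assumption, not the paper's algorithm; the window contents, label histogramming, and threshold are invented.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen–Shannon divergence (base 2) between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log2(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def drift_detected(ref_window, cur_window, n_labels, threshold=0.1):
    """Flag concept drift when the label-frequency distributions of a
    reference window and the current window diverge beyond a threshold."""
    def label_hist(window):
        counts = np.zeros(n_labels)
        for labelset in window:      # each instance carries a set of label indices
            for label in labelset:
                counts[label] += 1
        return counts
    return js_divergence(label_hist(ref_window), label_hist(cur_window)) > threshold

# Toy example: the dominant labels shift from {0, 1} to {2}.
ref = [{0, 1}, {0}, {1}, {0, 1}]
cur = [{2}, {2}, {1, 2}, {2}]
print(drift_detected(ref, cur, n_labels=3))   # True for this toy threshold
```

An ensemble built on top of such a detector would typically reset or reweight its members whenever drift is flagged.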
Collapse
|
743
|
Abstract
Evidence transfer for clustering is a deep learning method that manipulates the latent representations of an autoencoder according to external categorical evidence in order to improve the clustering outcome. The method is designed to be robust when presented with low-quality evidence, while increasing clustering accuracy when the corresponding evidence is relevant. We interpret the effects of evidence transfer on the latent representation of an autoencoder by comparing our method to the information bottleneck method. The information bottleneck is an optimisation problem that seeks the best tradeoff between maximising the mutual information between the data representations and a task outcome and, at the same time, effectively compressing the original data source. We posit that the evidence transfer method has essentially the same objective with regard to the latent representations produced by an autoencoder. We verify our hypothesis using information-theoretic metrics from feature selection in order to perform an empirical analysis of the information carried through the bottleneck of the latent space. We use the relevance metric to compare the overall mutual information between the latent representations and the ground-truth labels before and after their incremental manipulation, as well as to study the effects of evidence transfer on the significance of each latent feature.
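The relevance comparison described above can be approximated with off-the-shelf mutual-information estimators. The sketch below is a hypothetical illustration rather than the paper's code: it measures per-feature mutual information between synthetic latent codes and labels before and after a simulated manipulation that injects label information into two dimensions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=500)                 # ground-truth labels

# Hypothetical latent codes: after the simulated "manipulation", the first
# two dimensions carry more information about y.
Z_before = rng.normal(size=(500, 8))
Z_after = Z_before.copy()
Z_after[:, 0] += 0.8 * y
Z_after[:, 1] -= 0.8 * y

def relevance(Z, y):
    """Per-feature mutual information with the labels, used here as the
    'relevance' of each latent dimension."""
    return mutual_info_classif(Z, y, random_state=0)

print("relevance before:", relevance(Z_before, y).round(3))
print("relevance after :", relevance(Z_after, y).round(3))
```

Under this toy setup, the relevance of the manipulated dimensions rises while the remaining dimensions stay near zero, which mirrors the kind of before/after comparison the abstract describes.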
Collapse
|
744
|
Forecasting Economy-Related Data Utilizing Weight-Constrained Recurrent Neural Networks. ALGORITHMS 2019. [DOI: 10.3390/a12040085] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
During the last few decades, machine learning has become a significant tool for extracting useful knowledge from economic data to assist decision-making. In this work, we evaluate the performance of weight-constrained recurrent neural networks in forecasting economic classification problems. These networks are efficiently trained with a recently proposed training algorithm that has two major advantages: firstly, it exploits the numerical efficiency and very low memory requirements of limited-memory BFGS matrices; secondly, it utilizes a gradient-projection strategy for handling the bounds on the weights. The reported numerical experiments present the classification accuracy of the proposed model, providing empirical evidence that applying bounds to the weights of the recurrent neural network yields more stable and reliable learning.
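The gradient-projection idea for bound-constrained weights amounts to projecting each update back into a box. A minimal sketch of that single step is shown below, assuming simple box constraints and a plain gradient step rather than the limited-memory BFGS machinery the abstract refers to; the bounds and learning rate are illustrative.

```python
import numpy as np

def project(w, low=-1.0, high=1.0):
    """Project the weights back onto the box constraints."""
    return np.clip(w, low, high)

def projected_gradient_step(w, grad, lr=0.05, low=-1.0, high=1.0):
    """One bound-handling update: plain gradient step followed by projection
    (the paper's algorithm wraps this idea in limited-memory BFGS updates)."""
    return project(w - lr * grad, low, high)

# Toy example: the unconstrained step would push w[0] to 1.05; the
# projection keeps it at the bound.
w = np.array([0.95, -0.20, 0.40])
grad = np.array([-2.0, 0.50, 0.10])
print(projected_gradient_step(w, grad))   # [ 1.    -0.225  0.395]
```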
Collapse
|
745
|
Abstract
Key performance indicators (KPIs) are time series in (timestamp, value) format. The accuracy of KPI anomaly detection sometimes falls far short of initial expectations. The reasons include the unbalanced distribution between normal data and anomalies, as well as the existence of many different types of KPI data curves. In this paper, we propose a new anomaly detection model based on mining six local data features that serve as the input to a back-propagation (BP) neural network. By describing a normalized dataset in vectorized form, the local geometric characteristics of a time series curve can be captured in a precise mathematical way. Unlike traditional statistical characteristics that describe the overall variation of a sequence, the six mined local features give a subtle insight into local dynamics by describing the local monotonicity, local convexity/concavity, local inflection properties, and peak distribution of a KPI time series. To demonstrate the validity of the proposed model, we applied our method to 14 classical KPI time series datasets. Numerical results show that the proposed scheme achieves an average F1-score above 90%, and comparisons show that it detects anomalies more precisely.
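As a rough reading of the feature-mining step (the paper's exact six features are not spelled out in the abstract), the sketch below computes a handful of local-geometry descriptors for a normalized KPI window; the specific descriptors and the example window are assumptions made for illustration.

```python
import numpy as np

def local_features(window):
    """Hypothetical local-geometry descriptors for a normalized KPI window;
    the paper's six features may differ, this only illustrates the idea."""
    w = np.asarray(window, dtype=float)
    d1 = np.diff(w)                       # first differences  -> local monotonicity
    d2 = np.diff(w, n=2)                  # second differences -> convexity/concavity
    peaks = int(np.sum((w[1:-1] > w[:-2]) & (w[1:-1] > w[2:])))
    inflections = float(np.mean(np.diff(np.sign(d2)) != 0)) if len(d2) > 1 else 0.0
    return np.array([
        d1.mean(),                        # average local slope
        float(np.mean(d1 > 0)),           # fraction of increasing steps
        d2.mean(),                        # average local curvature
        inflections,                      # rate of curvature sign changes
        peaks,                            # number of local maxima
        w.std(),                          # local dispersion
    ])

# Feature vectors like these would then be fed to a small back-propagation
# network (e.g. an MLP classifier) trained to label windows as anomalous.
print(local_features([0.1, 0.3, 0.2, 0.6, 0.9, 0.7, 0.8]))
```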
Collapse
|
746
|
Abstract
RadViz is one of the few methods in Visual Analytics able to project high-dimensional data and explain formed structures in terms of data variables. However, RadViz methods have several limitations in terms of scalability in the number of variables, ambiguities created in the projection by the placement of variables along the circular design space, and ability to segregate similar instances into visual clusters. To address these limitations, we propose RadViz++, a set of techniques for interactive exploration of high-dimensional data using a RadViz-type metaphor. We demonstrate the added value of our method by comparing it with existing high-dimensional visualization methods, and also by analyzing a complex real-world dataset having over a hundred variables.
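For readers unfamiliar with the underlying metaphor, the classic RadViz projection that RadViz++ extends can be written in a few lines: each variable gets an anchor on a unit circle, and every instance is mapped to the normalized weighted average of those anchors. The Python sketch below shows only this basic projection, not the RadViz++ techniques; the example data are invented.

```python
import numpy as np

def radviz_project(X):
    """Classic RadViz: one anchor per variable on a unit circle; each
    instance maps to the weighted average of the anchors, with weights
    given by its normalized variable values."""
    X = np.asarray(X, dtype=float)
    V = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)   # scale each variable to [0, 1]
    n_vars = X.shape[1]
    angles = 2.0 * np.pi * np.arange(n_vars) / n_vars
    anchors = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (n_vars, 2)
    weights = V / (V.sum(axis=1, keepdims=True) + 1e-12)
    return weights @ anchors                                       # (n_points, 2)

# Toy example: the instance dominated by variable 0 lands near anchor 0.
X = np.array([[9.0, 1.0, 1.0],
              [1.0, 9.0, 1.0],
              [3.0, 3.0, 3.0]])
print(radviz_project(X).round(2))
```

The ambiguity the abstract mentions follows directly from this formula: very different value combinations can map to the same 2D point, which is part of what RadViz++ sets out to mitigate.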
Collapse
|
747
|
|
748
|
Abstract
Progressive visualization offers a great deal of promise for big data visualization; however, current progressive visualization systems do not allow for continuous interaction. What if users want to see more confident results on a subset of the visualization? This can happen when users are in exploratory analysis mode but want to ask some directed questions of the data as well. In a progressive visualization system, the online aggregation algorithm determines the database sampling rate and resulting convergence rate, not the user. In this paper, we extend a recent method in online aggregation, called Wander Join, that is optimized for queries that join tables, one of the most computationally expensive operations. This extension leverages importance sampling to enable user-driven sampling when data joins are in the query. We applied user interaction techniques that allow the user to view and adjust the convergence rate, providing more transparency and control over the online aggregation process. By leveraging importance sampling, our extension of Wander Join also allows for stratified sampling of groups when there is data distribution skew. We also improve the convergence rate of filtering queries, but with additional overhead costs not needed in the original Wander Join algorithm.
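The key ingredient of the extension is importance sampling: rows from the user's subset of interest are sampled more often, and the estimate is kept unbiased by dividing each sampled value by its sampling probability. The sketch below illustrates that correction on a toy table rather than the actual Wander Join random walks over joined tables; the group column, oversampling factor, and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "joined" rows: a numeric value and a group the user is focusing on.
values = rng.exponential(10.0, size=10_000)
groups = rng.integers(0, 5, size=10_000)
focus_group = 2                                    # hypothetical user-selected subset

# Importance sampling: oversample rows from the focus group so its estimate
# converges faster, then divide by the sampling probability to stay unbiased.
base = np.where(groups == focus_group, 5.0, 1.0)   # 5x oversampling of the focus group
p = base / base.sum()

n = 500                                            # progressive sample drawn so far
idx = rng.choice(values.size, size=n, replace=True, p=p)
est_total = np.mean(values[idx] / p[idx])          # Horvitz–Thompson style estimate of SUM(value)
est_focus = np.mean(np.where(groups[idx] == focus_group, values[idx], 0.0) / p[idx])

print(f"estimated SUM       = {est_total:,.0f}   (true {values.sum():,.0f})")
print(f"estimated focus SUM = {est_focus:,.0f}   (true {values[groups == focus_group].sum():,.0f})")
```

In a progressive visualization, a user-adjustable oversampling factor plays the role of the convergence-rate control described in the abstract: a larger factor tightens the focus group's estimate sooner at the cost of slower convergence elsewhere.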
Collapse
|
749
|
A Weighted Voting Ensemble Self-Labeled Algorithm for the Detection of Lung Abnormalities from X-Rays. ALGORITHMS 2019. [DOI: 10.3390/a12030064] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
During the last decades, intensive efforts have been devoted to extracting useful knowledge from large volumes of medical data by employing advanced machine learning and data mining techniques. Advances in digital chest radiography have enabled research and medical centers to accumulate large repositories of images, some classified (labeled) by human experts but mostly unclassified (unlabeled). Machine learning methods such as semi-supervised learning algorithms have been proposed as a new direction to address the shortage of available labeled data by combining the explicit classification information of labeled data with the information hidden in the unlabeled data. In the present work, we propose a new ensemble semi-supervised learning algorithm for the classification of lung abnormalities from chest X-rays based on a new weighted voting scheme. The proposed algorithm assigns a vector of weights to each component classifier of the ensemble based on its accuracy on each class. Our numerical experiments illustrate the efficiency of the proposed ensemble methodology against other state-of-the-art classification methods.
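The weighted voting scheme itself is easy to sketch: each base classifier earns one weight per class from its accuracy on that class, and at prediction time it votes for its predicted class with the corresponding weight. The Python sketch below shows only this voting rule on fully labeled toy data; the semi-supervised self-labeling loop of the actual algorithm, and the particular base classifiers, are not taken from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_classes=3, n_informative=5, random_state=0)
X_lab, X_test, y_lab, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
n_classes = 3

classifiers = [DecisionTreeClassifier(max_depth=5, random_state=0),
               GaussianNB(),
               LogisticRegression(max_iter=1000)]

# Each classifier earns one weight per class: its accuracy on that class.
weights = []
for clf in classifiers:
    clf.fit(X_lab, y_lab)
    pred = clf.predict(X_lab)
    weights.append([float(np.mean(pred[y_lab == c] == c)) for c in range(n_classes)])
weights = np.array(weights)                        # shape: (n_classifiers, n_classes)

def weighted_vote(x):
    """Each classifier votes for its predicted class with the weight it
    earned on that class; the class with the largest total weight wins."""
    scores = np.zeros(n_classes)
    for clf, w in zip(classifiers, weights):
        c = int(clf.predict(x.reshape(1, -1))[0])
        scores[c] += w[c]
    return int(np.argmax(scores))

preds = np.array([weighted_vote(x) for x in X_test])
print("weighted-vote accuracy:", float(np.mean(preds == y_test)))
```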
Collapse
|
750
|
IGR Token-Raw Material and Ingredient Certification of Recipe Based Foods Using Smart Contracts. INFORMATICS 2019. [DOI: 10.3390/informatics6010011] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
The use of smart contracts and blockchain tokens to implement a consumer-trustworthy ingredient certification scheme for commingled (i.e., recipe-based) food products is described. The proposed framework allows ingredients that carry any desired property (including social or environmental value perceived by customers) to be certified by any certification authority, at the moment of harvest or extraction, using the IGR Ethereum token. The mechanism transfers tokens containing the URL published on the authority's website from the farmer along the supply chain to the final consumer at each transfer of custody of the ingredient, following the Critical Tracking Event/Key Data Elements (CTE/KDE) philosophy of the Institute of Food Technologists (IFT). This allows the end consumer to easily inspect and be assured of the origin of the ingredient by means of a mobile application. An implementation of the framework was deployed, tested, and is running as a beta version on the live Ethereum blockchain as the IGR token. The main contribution of the framework is that it can assure the customer of the true origin of any instance or lot of an ingredient within a recipe, without harming the food processor's legitimate right to protect its recipes and suppliers.
Collapse
|