51
|
Thompson S, Dowrick T, Xiao G, Ramalhinho J, Robu M, Ahmad M, Taylor D, Clarkson MJ. SnappySonic: An Ultrasound Acquisition Replay Simulator. JOURNAL OF OPEN RESEARCH SOFTWARE 2020; 8:8. [PMID: 32395246 PMCID: PMC7212065 DOI: 10.5334/jors.289] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 06/11/2023]
Abstract
SnappySonic provides an ultrasound acquisition replay simulator designed for public engagement and training. It provides a simple interface to allow users to experience ultrasound acquisition without the need for specialist hardware or acoustically compatible phantoms. The software is implemented in Python, built on top of a set of open source Python modules targeted at surgical innovation. The library has high potential for reuse, most obviously for those who want to simulate ultrasound acquisition, but it could also be used as a user interface for displaying high dimensional images or video data.
|
52
|
Spurney RJ, Van den Broeck L, Clark NM, Fisher AP, de Luis Balaguer MA, Sozzani R. tuxnet: a simple interface to process RNA sequencing data and infer gene regulatory networks. THE PLANT JOURNAL : FOR CELL AND MOLECULAR BIOLOGY 2020; 101:716-730. [PMID: 31571287 DOI: 10.1111/tpj.14558] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Received: 05/10/2019] [Revised: 08/20/2019] [Accepted: 09/17/2019] [Indexed: 06/10/2023]
Abstract
Predicting gene regulatory networks (GRNs) from expression profiles is a common approach for identifying important biological regulators. Despite the increased use of inference methods, existing computational approaches often do not integrate RNA-sequencing data analysis, are not automated or are restricted to users with bioinformatics backgrounds. To address these limitations, we developed tuxnet, a user-friendly platform that can process raw RNA-sequencing data from any organism with an existing reference genome using a modified tuxedo pipeline (hisat2 + cufflinks) and infer GRNs from these processed data. tuxnet is implemented as a graphical user interface and can mine gene regulations, either by applying a dynamic Bayesian network (DBN) inference algorithm, genist, or a regression tree-based pipeline, rtp-star. We obtained time-course expression data of a PERIANTHIA (PAN) inducible line and inferred a GRN using genist to illustrate the use of tuxnet while gaining insight into the regulations downstream of the Arabidopsis root stem cell regulator PAN. Using rtp-star, we inferred the network of ATHB13, a downstream gene of PAN, for which we obtained wild-type and mutant expression profiles. Additionally, we generated two networks using temporal data from developmental leaf data and spatial data from root cell-type data to highlight the use of tuxnet to form new testable hypotheses from previously explored data. Our case studies showcase the versatility of tuxnet when using different types of gene expression data to infer networks, and its accessibility as a pipeline for non-bioinformaticians to analyze transcriptome data, predict causal regulations, assess network topology and identify key regulators.
|
53
|
Zhang XZ, Feng N, Ma AJ, Li BQ. Aligning retention time shifts in HPLC three-dimensional spectra by icoshift approach combined with data arrangement methods and the release of a graphical user interface. J Sep Sci 2019; 43:552-560. [PMID: 31670445 DOI: 10.1002/jssc.201900791] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 08/03/2019] [Revised: 10/28/2019] [Accepted: 10/28/2019] [Indexed: 11/07/2022]
Abstract
High-performance liquid chromatography coupled with photodiode array detection has been extensively applied in many fields, and the peaks among analyzed samples can shift owing to variations in instrumental and experimental conditions. In multivariate analysis, retention time alignment is an important pretreatment step; hence, the shifted peaks in high-performance liquid chromatography coupled with photodiode array detection three-dimensional spectra should be aligned before further analysis. Motivated by this, the interval correlated shifting method combined with the proposed data arrangement methods is recommended and demonstrated on high-performance liquid chromatography coupled with photodiode array detection data. We validated the alignment performance of the proposed method by comparing the consistency of the retention times before and after alignment. The results demonstrate that the proposed method successfully aligns the employed data. Additionally, the interval correlated shifting method combined with the data arrangement modes is implemented in an easy-to-use graphical user interface environment, so it can be operated easily by users not familiar with programming languages.
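The core idea behind interval-wise correlation shifting can be sketched as follows: within each interval, find the integer lag that maximizes the correlation with a reference signal. This is a simplified single-interval illustration under stated assumptions, not the icoshift implementation; all names are invented.

```python
import numpy as np

def align_interval(signal, target, max_shift):
    """Shift `signal` by the integer lag (|lag| <= max_shift) that
    maximizes its correlation with `target` (circular shift)."""
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_shift, max_shift + 1):
        shifted = np.roll(signal, lag)
        # compare only the central region to limit wrap-around artefacts
        core = slice(max_shift, len(signal) - max_shift)
        corr = np.corrcoef(shifted[core], target[core])[0, 1]
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return np.roll(signal, best_lag), best_lag

# toy chromatographic peak that elutes 3 samples early
t = np.arange(100)
target = np.exp(-0.5 * ((t - 50) / 4.0) ** 2)
sample = np.roll(target, -3)
aligned, lag = align_interval(sample, target, max_shift=10)
```

In a full icoshift-style procedure this search would be repeated per interval across all samples, with a common reference chromatogram.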
|
54
|
Naik GR, Gargiulo GD, Serrador JM, Breen PP. Groundtruth: A Matlab GUI for Artifact and Feature Identification in Physiological Signals. Front Physiol 2019; 10:850. [PMID: 31481893 PMCID: PMC6710362 DOI: 10.3389/fphys.2019.00850] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 03/04/2019] [Accepted: 06/20/2019] [Indexed: 12/03/2022]
Abstract
Groundtruth is a Matlab Graphical User Interface (GUI) developed for the identification of key features and artifacts within physiological signals. The ultimate aim of this GUI is to provide a simple means of assessing the performance of new sensors. Secondary to this is providing marked data, enabling assessment of automated artifact rejection and feature identification algorithms. With the emergence of new wearable sensor technologies, there is an unmet need for convenient assessment of device performance and a faster means of assessing new algorithms. The proposed GUI allows interactive marking of artifact regions as well as simultaneous interactive identification of key features, e.g., respiration peaks in respiration signals, R-peaks in electrocardiography signals, etc. In this paper, we present the base structure of the system, together with an example of its use for two simultaneously worn respiration sensors. The respiration rates are computed for both the original and the artifact-removed data and validated using Bland–Altman plots. The respiration rates computed with the proposed GUI were consistent across the two respiration sensors after artifact removal. Groundtruth is customizable, and alternative processing modules are easy to add or remove. Groundtruth is intended for open-source use.
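The Bland–Altman validation mentioned above reduces to the bias (mean difference) and 95% limits of agreement of the paired differences. A minimal sketch with invented helper names and toy data:

```python
import numpy as np

def bland_altman(a, b):
    """Return the bias (mean difference) and the 95% limits of
    agreement between two paired measurement series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# toy respiration rates (breaths/min) from two simultaneously worn sensors
sensor_a = [12.1, 14.0, 15.8, 13.2, 16.5]
sensor_b = [12.4, 13.6, 15.9, 13.0, 16.1]
bias, (lo, hi) = bland_altman(sensor_a, sensor_b)
```

A Bland–Altman plot then scatters the pairwise means against the differences, with horizontal lines at `bias`, `lo` and `hi`.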
|
55
|
Smith C, Kenney L, Howard D, Waring K, Sun M, Luckie H, Hardiker N, Cotterill S. Prediction of setup times for an advanced upper limb functional electrical stimulation system. J Rehabil Assist Technol Eng 2019; 5:2055668318802561. [PMID: 31191957 PMCID: PMC6531802 DOI: 10.1177/2055668318802561] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 12/22/2017] [Accepted: 08/24/2018] [Indexed: 11/16/2022]
Abstract
Introduction Rehabilitation devices take time to don, and longer or unpredictable setup time impacts on usage. This paper reports on the development of a model to predict setup time for upper limb functional electrical stimulation. Methods Participants' level of impairment (Fugl-Meyer Upper Extremity Scale), function (Action Research Arm Test) and mental status (Mini Mental Scale) were measured. Setup times for each stage of the setup process and total setup times were recorded. A predictive model of setup time was devised using upper limb impairment and task complexity. Results Six participants with stroke were recruited, mean age 60 (±17) years and mean time since stroke 9.8 (±9.6) years. Mean Fugl-Meyer Upper Extremity score was 31.1 (±6), Action Research Arm Test 10.4 (±7.9) and Mini Mental Scale 26.1 (±2.7). Linear regression analysis showed that upper limb impairment and task complexity most effectively predicted setup time (51% as compared with 39%) (F(2,21) = 12.782, adjusted R2 = 0.506; p < .05). Conclusions A model to predict setup time based on upper limb impairment and task complexity accounted for 51% of the variation in setup time. Further studies are required to test the model in real-world settings and to identify other contributing factors.
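The kind of model reported above — setup time regressed on impairment and task complexity, summarized with an adjusted R² — can be sketched with ordinary least squares. The data, coefficients and names below are synthetic illustrations, not the study's values.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept; returns the coefficient
    vector [intercept, b1, b2, ...] and the adjusted R-squared."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape                       # p counts the intercept column
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p)
    return beta, adj_r2

# hypothetical data: setup time (s) from impairment score and task complexity
rng = np.random.default_rng(0)
impairment = rng.uniform(20, 45, 24)
complexity = rng.integers(1, 4, 24).astype(float)
setup_time = 120 - 2.0 * impairment + 15.0 * complexity + rng.normal(0, 5, 24)
beta, adj_r2 = fit_ols(np.column_stack([impairment, complexity]), setup_time)
```

With real data one would additionally report the F statistic and p-value, as the paper does.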
|
56
|
Magezi DA. Corrigendum: Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui). Front Psychol 2019; 10:489. [PMID: 30914999 PMCID: PMC6422962 DOI: 10.3389/fpsyg.2019.00489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/15/2019] [Accepted: 02/19/2019] [Indexed: 11/15/2022]
|
57
|
Oscarsson M, Beteva A, Flot D, Gordon E, Guijarro M, Leonard G, McSweeney S, Monaco S, Mueller-Dieckmann C, Nanao M, Nurizzo D, Popov AN, von Stetten D, Svensson O, Rey-Bakaikoa V, Chado I, Chavas LMG, Gadea L, Gourhant P, Isabet T, Legrand P, Savko M, Sirigu S, Shepard W, Thompson A, Mueller U, Nan J, Eguiraun M, Bolmsten F, Nardella A, Milàn-Otero A, Thunnissen M, Hellmig M, Kastner A, Schmuckermaier L, Gerlach M, Feiler C, Weiss MS, Bowler MW, Gobbo A, Papp G, Sinoir J, McCarthy AA, Karpics I, Nikolova M, Bourenkov G, Schneider T, Andreu J, Cuní G, Juanhuix J, Boer R, Fogh R, Keller P, Flensburg C, Paciorek W, Vonrhein C, Bricogne G, de Sanctis D. MXCuBE2: the dawn of MXCuBE Collaboration. JOURNAL OF SYNCHROTRON RADIATION 2019; 26:393-405. [PMID: 30855248 PMCID: PMC6412183 DOI: 10.1107/s1600577519001267] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Received: 09/21/2018] [Accepted: 01/23/2019] [Indexed: 05/22/2023]
Abstract
MXCuBE2 is the second-generation evolution of the MXCuBE beamline control software, initially developed and used at ESRF - the European Synchrotron. MXCuBE2 extends, in an intuitive graphical user interface (GUI), the functionalities and data collection methods available to users while keeping all previously available features and allowing for the straightforward incorporation of ongoing and future developments. MXCuBE2 introduces an extended abstraction layer that allows easy interfacing of any kind of macromolecular crystallography (MX) hardware component, whether this is a diffractometer, sample changer, detector or optical element. MXCuBE2 also works in strong synergy with the ISPyB Laboratory Information Management System, accessing the list of samples available for a particular experimental session and associating, either from instructions contained in ISPyB or from user input via the MXCuBE2 GUI, different data collection types to them. The development of MXCuBE2 forms the core of a fruitful collaboration which brings together several European synchrotrons and a software development factory and, as such, defines a new paradigm for the development of beamline control platforms for the European MX user community.
|
58
|
Sokolovskis J, Herremans D, Chew E. A Novel Interface for the Graphical Analysis of Music Practice Behaviors. Front Psychol 2018; 9:2292. [PMID: 30534100 PMCID: PMC6275316 DOI: 10.3389/fpsyg.2018.02292] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/07/2018] [Accepted: 11/02/2018] [Indexed: 11/13/2022]
Abstract
Practice is an essential part of music training, but critical content-based analyses of practice behaviors still lack tools for conveying informative representations of practice sessions. To bridge this gap, we present a novel visualization system, the Music Practice Browser, for representing, identifying, and analysing music practice behaviors. The Music Practice Browser provides a graphical interface for reviewing recorded practice sessions, which allows musicians, teachers, and researchers to examine aspects and features of music practice behaviors. The system takes beat and practice segment information together with a musical score in XML format as input, and produces a number of different visualizations: Practice Session Work Maps give an overview of contiguous practice segments; Practice Segment Arcs make transitions and repeated segments evident; Practice Session Precision Maps facilitate the identification of errors; Tempo-Loudness Evolution Graphs track expressive variations over the course of a practice session. We then test the new system on practice sessions of pianists of varying levels of expertise, ranging from novice to expert. The practice patterns found include Drill-Correct, Drill-Smooth, Memorization Strategy, Review and Explore, and Expressive Evolution. The analysis reveals practice patterns and behavior differences between beginners and experts, such as a higher proportion of Drill-Smooth patterns in expert practice.
|
59
|
Kar A, Corcoran P. Performance Evaluation Strategies for Eye Gaze Estimation Systems with Quantitative Metrics and Visualizations. SENSORS 2018; 18:s18093151. [PMID: 30231547 PMCID: PMC6165570 DOI: 10.3390/s18093151] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Received: 08/10/2018] [Revised: 09/07/2018] [Accepted: 09/15/2018] [Indexed: 11/16/2022]
Abstract
An eye tracker’s accuracy and system behavior play critical roles in determining the reliability and usability of the eye gaze data obtained from it. However, in contemporary eye gaze research, there is considerable ambiguity in the definitions of gaze estimation accuracy parameters and a lack of well-defined methods for evaluating the performance of eye tracking systems. In this paper, a set of fully defined evaluation metrics is therefore developed and presented for complete performance characterization of generic commercial eye trackers operating under varying conditions on desktop or mobile platforms. In addition, some useful visualization methods are implemented, which will help in studying the performance and data quality of eye trackers irrespective of their design principles and application areas. We also propose GazeVisual v1.1, graphical user interface software that would integrate all of these methods and enable general users to effortlessly access the described metrics, generate visualizations and extract valuable information from their own gaze datasets. We intend to release these tools as open resources to the eye gaze research community for use and further advancement, as a contribution towards standardization of gaze research outputs and analysis.
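One of the standard accuracy parameters such evaluations rest on is angular gaze error: the visual angle between the estimated gaze point and the target on the screen plane. The sketch below uses the generic small-screen-offset formula; the function and parameter names are invented and need not match GazeVisual's implementation.

```python
import math

def gaze_angular_error(gaze_xy, target_xy, distance_mm):
    """Angular error (degrees) between a gaze point and a target on the
    screen plane, for a viewer `distance_mm` from the screen."""
    dx = gaze_xy[0] - target_xy[0]
    dy = gaze_xy[1] - target_xy[1]
    offset = math.hypot(dx, dy)            # on-screen error in mm
    return math.degrees(math.atan2(offset, distance_mm))

# a 5 mm on-screen error viewed from 500 mm is roughly 0.57 degrees
err = gaze_angular_error((105.0, 200.0), (100.0, 200.0), 500.0)
```

Averaging this error over a grid of calibration targets gives the familiar "accuracy in degrees of visual angle" figure quoted for commercial trackers.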
|
60
|
Ledieu T, Bouzillé G, Polard E, Plaisant C, Thiessard F, Cuggia M. Clinical Data Analytics With Time-Related Graphical User Interfaces: Application to Pharmacovigilance. Front Pharmacol 2018; 9:717. [PMID: 30233354 PMCID: PMC6127627 DOI: 10.3389/fphar.2018.00717] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Received: 03/02/2018] [Accepted: 06/13/2018] [Indexed: 11/13/2022]
Abstract
Pharmacovigilance consists of monitoring and preventing the occurrence of adverse drug reactions. This activity can be time-consuming because it requires the collection of both patient and medication information. In this paper, we present two visualization and data mining applications that make this task easier for the practitioner. These tools were developed and tested using eHOP (Hospital Biomedical Data Warehouse), the biomedical data warehouse of the Rennes University Hospital Centre. The first application is a tool that visualizes the patient electronic health record as a timeline: all patient data are collected and displayed chronologically. The usability test of the timeline was very positive (SUS score: 82.5), and the tool is now available to practitioners in their daily practice. The second application is a tool for visualizing and searching the event sequences of a patient cohort. The visual interface allows users to quickly visualize sequences, and a query builder lets them search for sequences related to a reference sequence, such as a prescription sequence followed by an abnormal biological value. Matching sequences are then visually aligned with the reference sequence and ranked by similarity. The GSP (Generalized Sequential Pattern) and Apriori algorithms are used to display a summary of the sequence list by searching for common sequences and associations. The tool was tested on a use case consisting of the detection of inappropriate drug administration. Compared to a random order, we showed that this ranking system saved the practitioner time in this task (to analyze one sequence, 3.49 ± 3.54 vs. 2.26 ± 2.86 s, p = 0.0003). These two visualization and data mining applications will help the daily practice of pharmacovigilance.
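The ranking step — ordering a cohort's event sequences by similarity to a reference sequence — can be illustrated with a small sketch. This uses Python's `difflib` ratio as a stand-in similarity measure; the paper's actual metric and data model may differ, and the event names are invented.

```python
from difflib import SequenceMatcher

def rank_by_similarity(sequences, reference):
    """Rank event sequences by similarity to a reference sequence,
    most similar first (difflib SequenceMatcher ratio)."""
    scored = [(SequenceMatcher(None, seq, reference).ratio(), seq)
              for seq in sequences]
    return [seq for score, seq in sorted(scored, key=lambda s: -s[0])]

# toy event sequences: prescriptions followed by a lab result
reference = ["drug_A", "drug_B", "abnormal_K"]
cohort = [
    ["drug_C", "normal_K"],
    ["drug_A", "drug_B", "abnormal_K"],   # exact match with the reference
    ["drug_A", "abnormal_K"],
]
ranked = rank_by_similarity(cohort, reference)
```

Presenting the cohort in this order is what lets a reviewer examine the most reference-like patient trajectories first, which is where the reported time saving comes from.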
|
61
|
ABE-VIEW: Android Interface for Wireless Data Acquisition and Control. SENSORS 2018; 18:s18082647. [PMID: 30104474 PMCID: PMC6111993 DOI: 10.3390/s18082647] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Received: 06/27/2018] [Revised: 08/04/2018] [Accepted: 08/09/2018] [Indexed: 01/19/2023]
Abstract
Advances in scientific knowledge are increasingly supported by a growing community of developers freely sharing new hardware and software tools. In this spirit we have developed a free Android app, ABE-VIEW, that provides a flexible graphical user interface (GUI) populated entirely from a remote instrument by ASCII-coded instructions communicated wirelessly over Bluetooth. Options include an interactive chart for plotting data in real time, up to 16 data fields, and virtual controls including buttons, numerical controls with user-defined range and resolution, and radio buttons, which the user can use to send coded instructions back to the instrument. Data can be recorded into comma-delimited files interactively at the user’s discretion. The original objective of the project was to make data acquisition and control for undergraduate engineering labs more modular and affordable, but we have also found the tool highly useful for rapidly testing novel sensor systems for iterative improvement. Here we document the operation of the app and the syntax for communicating with it. We also illustrate its application in undergraduate engineering labs on dynamic systems modeling, as well as in identifying the source of harmonic distortion affecting electrochemical impedance measurements at certain frequencies in a novel wireless potentiostat.
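The general pattern of a GUI populated by ASCII-coded instructions can be sketched as a tiny command parser. To be clear, the syntax below is hypothetical, invented for illustration; ABE-VIEW's actual command syntax is documented in the paper.

```python
def parse_commands(stream):
    """Parse a hypothetical ASCII command stream of the kind a
    GUI-defining app might receive: one 'NAME:arg1,arg2,...' per line.
    (Illustrative only -- not ABE-VIEW's actual syntax.)"""
    widgets = []
    for line in stream.strip().splitlines():
        name, _, args = line.partition(":")
        widgets.append((name.strip(),
                        [a.strip() for a in args.split(",") if a.strip()]))
    return widgets

# e.g. a button, a numeric control with a range and step, and a data field
stream = """\
BUTTON:start
NUM:gain,0,100,0.5
FIELD:temperature
"""
widgets = parse_commands(stream)
```

The appeal of this design is that the instrument, not the app, defines the interface: new controls appear on the phone without any app update.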
|
62
|
Bennett KB, Bryant A, Sushereba C. Ecological Interface Design for Computer Network Defense. HUMAN FACTORS 2018; 60:610-625. [PMID: 29741960 DOI: 10.1177/0018720818769233] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Indexed: 06/08/2023]
Abstract
OBJECTIVE A prototype ecological interface for computer network defense (CND) was developed. BACKGROUND Concerns about CND run high. Although there is a vast literature on CND, there is some indication that this research is not being translated into operational contexts. Part of the reason may be that CND has historically been treated as a strictly technical problem, rather than as a socio-technical problem. METHODS The cognitive systems engineering (CSE)/ecological interface design (EID) framework was used in the analysis and design of the prototype interface. A brief overview of CSE/EID is provided. EID principles of design (i.e., direct perception, direct manipulation and visual momentum) are described and illustrated through concrete examples from the ecological interface. RESULTS Key features of the ecological interface include (a) a wide variety of alternative visual displays, (b) controls that allow easy, dynamic reconfiguration of these displays, (c) visual highlighting of functionally related information across displays, (d) control mechanisms to selectively filter massive data sets, and (e) the capability for easy expansion. Cyber attacks from a well-known data set are illustrated through screen shots. CONCLUSION CND support needs to be developed with a triadic focus (i.e., humans interacting with technology to accomplish work) if it is to be effective. Iterative design and formal evaluation is also required. The discipline of human factors has a long tradition of success on both counts; it is time that HF became fully involved in CND. APPLICATION Direct application in supporting cyber analysts.
|
63
|
Mittal V, Hung LH, Keswani J, Kristiyanto D, Lee SB, Yeung KY. GUIdock-VNC: using a graphical desktop sharing system to provide a browser-based interface for containerized software. Gigascience 2018; 6:1-6. [PMID: 28327936 PMCID: PMC5530313 DOI: 10.1093/gigascience/giw013] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Received: 09/09/2016] [Accepted: 12/16/2016] [Indexed: 11/30/2022]
Abstract
Background: Software container technology such as Docker can be used to package and distribute bioinformatics workflows consisting of multiple software implementations and dependencies. However, Docker is a command line–based tool, and many bioinformatics pipelines consist of components that require a graphical user interface. Results: We present a container tool called GUIdock-VNC that uses a graphical desktop sharing system to provide a browser-based interface for containerized software. GUIdock-VNC uses the Virtual Network Computing protocol to render the graphics within most commonly used browsers. We also present a minimal image builder that can add our proposed graphical desktop sharing system to any Docker package, with the end result that any Docker package can be run using a graphical desktop within a browser. In addition, GUIdock-VNC uses the OAuth2 authentication protocol when deployed on the cloud. Conclusions: As a proof-of-concept, we demonstrated the utility of GUIdock-VNC in gene network inference. We benchmarked our container implementation on various operating systems and showed that our solution creates minimal overhead.
|
64
|
Sherfey JS, Soplata AE, Ardid S, Roberts EA, Stanley DA, Pittman-Polletta BR, Kopell NJ. DynaSim: A MATLAB Toolbox for Neural Modeling and Simulation. Front Neuroinform 2018; 12:10. [PMID: 29599715 PMCID: PMC5862864 DOI: 10.3389/fninf.2018.00010] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Received: 11/04/2017] [Accepted: 02/21/2018] [Indexed: 11/13/2022]
Abstract
DynaSim is an open-source MATLAB/GNU Octave toolbox for rapid prototyping of neural models and batch simulation management. It is designed to speed up and simplify the process of generating, sharing, and exploring network models of neurons with one or more compartments. Models can be specified by equations directly (similar to XPP or the Brian simulator) or by lists of predefined or custom model components. The higher-level specification supports arbitrarily complex population models and networks of interconnected populations. DynaSim also includes a large set of features that simplify exploring model dynamics over parameter spaces, running simulations in parallel using both multicore processors and high-performance computer clusters, and analyzing and plotting large numbers of simulated data sets in parallel. It also includes a graphical user interface (DynaSim GUI) that supports full functionality without requiring user programming. The software has been implemented in MATLAB to enable advanced neural modeling using MATLAB, given its popularity and a growing interest in modeling neural systems. The design of DynaSim incorporates a novel schema for model specification to facilitate future interoperability with other specifications (e.g., NeuroML, SBML), simulators (e.g., NEURON, Brian, NEST), and web-based applications (e.g., Geppetto) outside MATLAB. DynaSim is freely available at http://dynasimtoolbox.org. This tool promises to reduce barriers for investigating dynamics in large neural models, facilitate collaborative modeling, and complement other tools being developed in the neuroinformatics community.
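DynaSim itself is a MATLAB/GNU Octave toolbox; as a language-neutral illustration of the equation-driven simulation it automates, here is a minimal forward-Euler sketch in Python. All names are invented and the integrator is deliberately simple (DynaSim's solvers are more sophisticated).

```python
import numpy as np

def simulate(f, y0, t_end, dt):
    """Forward-Euler integration of dy/dt = f(t, y) -- a minimal stand-in
    for specifying a model by its equations and simulating it."""
    ts = np.arange(0.0, t_end, dt)
    ys = np.empty((len(ts), len(y0)))
    y = np.array(y0, float)
    for i, t in enumerate(ts):
        ys[i] = y
        y = y + dt * f(t, y)
    return ts, ys

# leaky integrator: dv/dt = (-v + I) / tau, relaxing toward I = 1
tau, I = 10.0, 1.0
ts, ys = simulate(lambda t, y: (-y + I) / tau, [0.0], t_end=100.0, dt=0.1)
```

Batch exploration over a parameter space, as DynaSim supports, would simply loop this call over a grid of `tau` and `I` values, ideally in parallel.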
|
65
|
Wojdyla JA, Kaminski JW, Panepucci E, Ebner S, Wang X, Gabadinho J, Wang M. DA+ data acquisition and analysis software at the Swiss Light Source macromolecular crystallography beamlines. JOURNAL OF SYNCHROTRON RADIATION 2018; 25:293-303. [PMID: 29271779 PMCID: PMC5741135 DOI: 10.1107/s1600577517014503] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Received: 05/04/2017] [Accepted: 10/08/2017] [Indexed: 05/19/2023]
Abstract
Data acquisition software is an essential component of modern macromolecular crystallography (MX) beamlines, enabling efficient use of beam time at synchrotron facilities. Developed at the Paul Scherrer Institute, the DA+ data acquisition software is implemented at all three Swiss Light Source (SLS) MX beamlines. DA+ consists of distributed services and components written in Python and Java, which communicate via messaging and streaming technologies. The major components of DA+ are the user interface, acquisition engine, online processing and database. Immediate data quality feedback is achieved with distributed automatic data analysis routines. The software architecture enables exploration of the full potential of the latest instrumentation at the SLS MX beamlines, such as the SmarGon goniometer and the EIGER X 16M detector, and development of new data collection methods.
|
66
|
Moriarty NW, Liebschner D, Klei HE, Echols N, Afonine PV, Headd JJ, Poon BK, Adams PD. Interactive comparison and remediation of collections of macromolecular structures. Protein Sci 2017; 27:182-194. [PMID: 28901593 DOI: 10.1002/pro.3296] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Received: 07/18/2017] [Revised: 09/08/2017] [Accepted: 09/11/2017] [Indexed: 11/09/2022]
Abstract
Often similar structures need to be compared to reveal local differences throughout the entire model or between related copies within the model. Therefore, a program to compare multiple structures and enable correction of any differences not supported by the density map was written within the Phenix framework (Adams et al., Acta Cryst 2010; D66:213-221). This program, called Structure Comparison, can also be used for structures with multiple copies of the same protein chain in the asymmetric unit, that is, as a result of non-crystallographic symmetry (NCS). Structure Comparison was designed to interface with Coot (Emsley et al., Acta Cryst 2010; D66:486-501) and PyMOL (DeLano, PyMOL 0.99; 2002) to facilitate comparison of large numbers of related structures. Structure Comparison analyzes collections of protein structures using several metrics, such as the rotamer conformation of equivalent residues, displays the results in tabular form, and allows superimposed protein chains and density maps to be quickly inspected and edited (via the tools in Coot) for consistency, completeness and correctness.
|
67
|
Al-Naji A, Chahl J. Simultaneous Tracking of Cardiorespiratory Signals for Multiple Persons Using a Machine Vision System With Noise Artifact Removal. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2017; 5:1900510. [PMID: 29043113 PMCID: PMC5642312 DOI: 10.1109/jtehm.2017.2757485] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Received: 06/26/2017] [Revised: 09/20/2017] [Accepted: 09/22/2017] [Indexed: 11/09/2022]
Abstract
Most existing non-contact monitoring systems are limited to detecting physiological signs from a single subject at a time. Another challenge facing these systems is that they are prone to noise artifacts resulting from subject motion, facial expressions, talking, skin tone, and illumination variations. This paper proposes an efficient non-contact system based on a digital camera to track the cardiorespiratory signal from multiple subjects (up to six persons) at the same time, with a new method for noise artifact removal. The proposed system relies on the physiological and physical effects of cardiovascular and respiratory activity, such as skin color changes and head motion. Since these effects are imperceptible to the human eye and highly affected by noise variations, we used advanced signal and video processing techniques, including a video magnification technique, complete ensemble empirical mode decomposition with adaptive noise, and canonical correlation analysis, to extract the heart rate and respiratory rate from multiple subjects under the noise artifact assumptions. The experimental results of the proposed system showed significant agreement with conventional contact methods (pulse oximeter and piezorespiratory belt; Pearson's correlation coefficient = 0.9994, Spearman correlation coefficient = 0.9987, and root mean square error = 0.32), which makes the proposed system a promising candidate for novel applications.
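The agreement statistics quoted above are standard; two of them (Pearson's r and RMSE) are sketched here on toy data to show exactly what is being computed. Data and names are invented, not the study's measurements.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def rmse(x, y):
    """Root mean square error between paired measurements."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

# toy heart rates (bpm): camera-based estimate vs. pulse oximeter
camera = [61.2, 72.5, 80.1, 66.0, 90.3]
oximeter = [61.0, 72.8, 79.9, 66.4, 90.0]
r = pearson(camera, oximeter)
err = rmse(camera, oximeter)
```

Spearman's coefficient is the same Pearson formula applied to the ranks of the two series, which is why it is robust to monotonic but nonlinear relationships.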
|
68
|
Combrisson E, Vallat R, Eichenlaub JB, O'Reilly C, Lajnef T, Guillot A, Ruby PM, Jerbi K. Sleep: An Open-Source Python Software for Visualization, Analysis, and Staging of Sleep Data. Front Neuroinform 2017; 11:60. [PMID: 28983246 PMCID: PMC5613192 DOI: 10.3389/fninf.2017.00060] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Received: 07/03/2017] [Accepted: 09/06/2017] [Indexed: 11/13/2022]
Abstract
We introduce Sleep, a new open-source Python graphical user interface (GUI) dedicated to the visualization, scoring and analysis of sleep data. Among its most prominent features are: (1) dynamic display of polysomnographic data, spectrogram, hypnogram and topographic maps with several customizable parameters; (2) implementation of several automatic detection methods for sleep features such as spindles, K-complexes, slow waves, and rapid eye movements (REM); (3) implementation of practical signal processing tools such as re-referencing and filtering; and (4) display of main descriptive statistics, including publication-ready tables and figures. The software package supports loading and reading raw EEG data from standard file formats such as European Data Format, in addition to a range of commercial data formats. Most importantly, Sleep is built on top of the VisPy library, which provides GPU-based, fast, high-level visualization. As a result, it is capable of efficiently handling and displaying large sleep datasets. Sleep is freely available (http://visbrain.org/sleep) and comes with sample datasets and extensive documentation. Novel functionalities will continue to be added, and open-science community efforts are expected to enhance the capacities of this module.
|
69
|
Yun Y, Carass A, Lang A, Prince JL, Antony BJ. Collaborative SDOCT Segmentation and Analysis Software. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2017; 10138. [PMID: 28919660 DOI: 10.1117/12.2254050] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Spectral domain optical coherence tomography (SDOCT) is routinely used in the management and diagnosis of a variety of ocular diseases. This imaging modality also finds widespread use in research, where quantitative measurements obtained from the images are used to track disease progression. In recent years, the number of available scanners and imaging protocols has grown, and there is a distinct absence of a unified tool capable of visualizing, segmenting, and analyzing the data. This is especially problematic in longitudinal studies, where data from older scanners and/or protocols may need to be analyzed. Here, we present a graphical user interface (GUI) that allows users to visualize and analyze SDOCT images obtained from two commonly used scanners. The retinal surfaces in the scans can be segmented using a previously described method, and the retinal layer thicknesses can be compared to a normative database. If necessary, the segmented surfaces can also be corrected and the changes applied. The interface also allows users to import and export retinal layer thickness data to and from an SQL database, thereby allowing the collation of data from a number of collaborating sites.
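The normative-database comparison amounts to turning segmented surface positions into per-layer thicknesses and z-scores. A minimal sketch, where the layer indices and normative values are invented for illustration:

```python
def layer_zscores(surfaces, normative):
    """surfaces: per-surface depth profiles (top to bottom), one list of
    depths (um) per surface across A-scans. normative: dict mapping
    layer index -> (mean_um, sd_um). Returns, per layer, the mean
    thickness and its z-score against the normative values."""
    out = {}
    for i in range(len(surfaces) - 1):
        # thickness at each A-scan = lower surface depth - upper surface depth
        diffs = [lo - hi for hi, lo in zip(surfaces[i], surfaces[i + 1])]
        mean_t = sum(diffs) / len(diffs)
        mu, sd = normative[i]
        out[i] = (mean_t, (mean_t - mu) / sd)
    return out

# three surfaces -> two layers, four A-scans; hypothetical normative table
surfaces = [[0, 0, 0, 0], [52, 50, 48, 50], [120, 118, 122, 120]]
normative = {0: (45.0, 5.0), 1: (65.0, 10.0)}
print(layer_zscores(surfaces, normative))  # {0: (50.0, 1.0), 1: (70.0, 0.5)}
```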
|
70
|
Sallaz-Damaz Y, Ferrer JL. WIFIP: a web-based user interface for automated synchrotron beamlines. JOURNAL OF SYNCHROTRON RADIATION 2017; 24:1105-1111. [PMID: 28862636 DOI: 10.1107/s1600577517009080] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2016] [Accepted: 06/18/2017] [Indexed: 06/07/2023]
Abstract
The beamline control software, through its associated graphical user interface (GUI), is the user's access point to the experiment, interacting with synchrotron beamline components and providing automated routines. FIP, the French beamline for the Investigation of Proteins, is a highly automated macromolecular crystallography (MX) beamline at the European Synchrotron Radiation Facility. On such a beamline, a significant number of users choose to control their experiment remotely, often over limited bandwidth and from a wide range of computers and operating systems. Furthermore, this must remain possible in a rapidly evolving experimental environment, where new developments have to be integrated easily. To meet these challenges, a lightweight, platform-independent control software package and associated GUI are required. Here, WIFIP, a web-based user interface developed at FIP, is described. Beyond being the present FIP control interface, WIFIP is also a proof of concept for future MX control software.
|
71
|
Etienne E, Le Breton N, Martinho M, Mileo E, Belle V. SimLabel: a graphical user interface to simulate continuous wave EPR spectra from site-directed spin labeling experiments. MAGNETIC RESONANCE IN CHEMISTRY : MRC 2017; 55:714-719. [PMID: 28078740 DOI: 10.1002/mrc.4578] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/27/2016] [Revised: 01/03/2017] [Accepted: 01/05/2017] [Indexed: 05/24/2023]
Abstract
Site-directed spin labeling (SDSL) combined with continuous wave electron paramagnetic resonance (cw EPR) spectroscopy is a powerful technique for revealing structural transitions in proteins at the residue level. SDSL-EPR is based on the selective grafting of a paramagnetic label onto the protein under study, followed by cw EPR analysis. To extract valuable quantitative information from SDSL-EPR spectra, and thus give a reliable interpretation of biological system dynamics, numerical simulation of the spectra is required. Such simulations can be carried out in MATLAB using functions from the EasySpin toolbox; for non-expert users of MATLAB, however, this can be a complex task, or may even prevent the use of such a simulation tool. We have developed a graphical user interface called SimLabel dedicated to running cw EPR spectral simulations, particularly those arising from SDSL-EPR experiments. SimLabel provides an intuitive way to visualize, simulate, and fit such cw EPR spectra. An example of SDSL-EPR spectral simulation concerning the study of an intrinsically disordered region undergoing local induced folding is described and discussed. We believe that this new tool will help users rapidly obtain reliable simulated spectra and hence facilitate the interpretation of their results.
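A core operation behind such fitting is expressing a measured spectrum as a weighted sum of simulated component spectra (e.g., ordered vs. disordered populations). A minimal linear least-squares sketch for two components, written in Python rather than MATLAB/EasySpin, with made-up basis spectra:

```python
def two_component_weights(spec, c1, c2):
    """Least-squares weights (w1, w2) such that spec ~= w1*c1 + w2*c2,
    via the 2x2 normal equations -- a toy stand-in for fitting a
    two-population SDSL-EPR spectrum from simulated components."""
    a11 = sum(x * x for x in c1)
    a22 = sum(y * y for y in c2)
    a12 = sum(x * y for x, y in zip(c1, c2))
    b1 = sum(s * x for s, x in zip(spec, c1))
    b2 = sum(s * y for s, y in zip(spec, c2))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det

# a "measured" spectrum built as 30% component 1 + 70% component 2
c1 = [1.0, 0.0, 1.0, 0.0]
c2 = [0.0, 1.0, 0.0, 1.0]
spec = [0.3 * x + 0.7 * y for x, y in zip(c1, c2)]
w1, w2 = two_component_weights(spec, c1, c2)
```

Recovering the population weights this way is what lets a fit report, for instance, the folded fraction of a disordered region.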
|
72
|
Integrating macromolecular X-ray diffraction data with the graphical user interface iMosflm. Nat Protoc 2017; 12:1310-1325. [PMID: 28569763 PMCID: PMC5562275 DOI: 10.1038/nprot.2017.037] [Citation(s) in RCA: 58] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
X-ray crystallography is the predominant source of structural information for biological macromolecules, providing fundamental insights into biological function. The availability of robust and user-friendly software to process the collected X-ray diffraction images makes the technique accessible to a wider range of scientists. iMosflm/MOSFLM (http://www.mrc-lmb.cam.ac.uk/harry/imosflm) is a software package designed to achieve this goal. The graphical user interface (GUI) version of MOSFLM (called iMosflm) is designed to guide inexperienced users through the steps of data integration, while retaining powerful features for more experienced users. Images from almost all commercially available X-ray detectors can be handled using this software. Although the program uses only 2D profile fitting, it can readily integrate data collected in the 'fine phi-slicing' mode (in which the rotation angle per image is at most half the crystal mosaic spread), which is commonly used with modern very fast readout detectors. The GUI provides real-time feedback on the success of the indexing step and the progress of data processing, including the ability to monitor detector and crystal parameter refinement and to display the average spot shape in different regions of the detector. Data scaling and merging tasks can be initiated directly from the interface. Using this protocol, a data set of 360 images with ∼2,000 reflections per image can be processed in ∼4 min.
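The factor-of-two fine-slicing rule translates directly into a collection plan. A tiny helper (function name is illustrative) computing the per-image rotation and the resulting image count:

```python
import math

def fine_slicing_plan(total_rotation_deg, mosaicity_deg):
    """Fine phi-slicing: choose a rotation per image of at most half the
    crystal mosaic spread, then count the images needed to cover the
    requested total rotation range."""
    per_image_deg = mosaicity_deg / 2.0
    n_images = math.ceil(total_rotation_deg / per_image_deg)
    return per_image_deg, n_images

# e.g. a 0.2 deg mosaic crystal collected over 36 deg -> 0.1 deg slices
print(fine_slicing_plan(36.0, 0.2))  # (0.1, 360)
```

The example deliberately lands on 360 images, the size of the data set the protocol quotes as processing in about 4 minutes.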
|
73
|
Uhlirova H, Tian P, Kılıç K, Thunemann M, Sridhar VB, Bartsch H, Dale AM, Devor A, Saisan PA. Neurovascular Network Explorer 2.0: A Database of 2-Photon Single-Vessel Diameter Measurements from Mouse SI Cortex in Response To Optogenetic Stimulation. Front Neuroinform 2017; 11:4. [PMID: 28203155 PMCID: PMC5285378 DOI: 10.3389/fninf.2017.00004] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2016] [Accepted: 01/13/2017] [Indexed: 11/13/2022] Open
|
74
|
Knoblauch J, Sethuraman A, Hey J. IMGui-A Desktop GUI Application for Isolation with Migration Analyses. Mol Biol Evol 2017; 34:500-504. [PMID: 28025276 DOI: 10.1093/molbev/msw252] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
The Isolation with Migration (IM) programs (e.g., IMa2) have been used extensively by evolutionary biologists for model-based inference of demographic parameters, including effective population sizes, migration rates, and divergence times. Here, we describe a graphical user interface for the latest IM program. IMGui provides a comprehensive set of tools for performing demographic analyses, tracking the progress of runs, and visualizing results. Developed using Node.js and the Electron framework, IMGui runs on any desktop operating system and is available for download at https://github.com/jaredgk/IMgui-electron-packages.
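IMGui itself is built in Node.js, but the underlying pattern of "tracking the progress of runs" is language-neutral: launch the analysis as a child process and consume its output incrementally. A Python sketch of that pattern (the demo command is a stand-in, not an actual IM invocation):

```python
import subprocess
import sys

def run_and_track(cmd):
    """Launch a long-running analysis as a child process and collect its
    stdout line by line -- the generic pattern a GUI wrapper uses to
    track run progress (illustrative; IMGui itself is Node.js)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    progress = []
    for line in proc.stdout:
        progress.append(line.rstrip())  # a GUI would update a view here
    return proc.wait(), progress

# demo: a stand-in "run" that just prints three progress lines
code, log = run_and_track([sys.executable, "-c",
                           "print('step 1'); print('step 2'); print('done')"])
```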
|
75
|
Zaimi A, Duval T, Gasecka A, Côté D, Stikov N, Cohen-Adad J. AxonSeg: Open Source Software for Axon and Myelin Segmentation and Morphometric Analysis. Front Neuroinform 2016; 10:37. [PMID: 27594833 PMCID: PMC4990549 DOI: 10.3389/fninf.2016.00037] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2016] [Accepted: 08/08/2016] [Indexed: 01/21/2023] Open
Abstract
Segmenting axons and myelin from microscopy images is relevant for studying the peripheral and central nervous systems and for validating new MRI techniques that aim to quantify tissue microstructure. While several software packages have been proposed, their interfaces are sometimes limited and/or they are designed to work with a specific modality (e.g., scanning electron microscopy (SEM) only). Here we introduce AxonSeg, which performs automatic axon and myelin segmentation on histology images and extracts relevant morphometric information, such as the axon diameter distribution, axon density, and the myelin g-ratio. AxonSeg includes a simple and intuitive MATLAB-based graphical user interface (GUI) and can easily be adapted to a variety of imaging modalities. The main steps of AxonSeg are: (i) image pre-processing; (ii) pre-segmentation of axons over a cropped image, with discriminant analysis (DA) to select the best parameters based on axon shape and intensity information; (iii) automatic axon and myelin segmentation over the full image; and (iv) atlas-based statistics to extract morphometric information. Segmentation results from standard optical microscopy (OM), SEM, and coherent anti-Stokes Raman scattering (CARS) microscopy are presented, along with validation against manual segmentations. Because AxonSeg is fully automatic after a quick manual intervention on a cropped image, we believe it will be useful to researchers interested in high-throughput histology. AxonSeg is open source and freely available at: https://github.com/neuropoly/axonseg.
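The g-ratio reported by such morphometric analyses is conventionally the inner (axon) diameter divided by the outer (axon + myelin) diameter; with equivalent-circle diameters computed from segmented areas, this reduces to the square root of an area ratio. A minimal sketch of that convention (not AxonSeg's MATLAB code):

```python
import math

def equivalent_diameter(area):
    """Diameter of the circle with the same area as a segmented region."""
    return 2.0 * math.sqrt(area / math.pi)

def g_ratio(axon_area, fiber_area):
    """Myelin g-ratio: inner (axon) diameter over outer (axon + myelin)
    diameter, from segmented areas via the equivalent-circle convention."""
    return equivalent_diameter(axon_area) / equivalent_diameter(fiber_area)

# an axon of area pi (diameter 2) inside a fiber of area 4*pi (diameter 4)
print(equivalent_diameter(math.pi))     # 2.0
print(g_ratio(math.pi, 4.0 * math.pi))  # 0.5
```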
|