1
Ben Bouchta Y, Gardner M, Sengupta C, Johnson J, Keall P. The Remove-the-Mask Open-Source head and neck Surface-Guided radiation therapy system. Phys Imaging Radiat Oncol 2024; 29:100541. [PMID: 38327762 PMCID: PMC10847032 DOI: 10.1016/j.phro.2024.100541]
Abstract
Background and Purpose: Surface-Guided Radiotherapy (SGRT) for head and neck radiotherapy is challenging because obstructions are common and non-rigid facial motion can compromise surface accuracy. The purpose of this work was to develop and benchmark the Remove the Mask (RtM) SGRT system, an open-source system designed specifically to address the challenges of head and neck radiotherapy. Materials and Methods: The accuracy of the RtM SGRT system was benchmarked using a head phantom positioned on a robotic motion platform with sub-millimetre accuracy, which was used to induce unidirectional shifts and to reproduce three real head motion traces. We also assessed the accuracy of the system in ten human volunteers. The ground-truth motion of the volunteers was obtained using a commercial motion capture system with an accuracy < 0.3 mm. Results: The mean tracking error of the RtM SGRT system for the ten volunteers was -0.1 ± 0.4 mm, -0.6 ± 0.6 mm and 0.3 ± 0.2 mm for translations, and 0.0 ± 0.2°, 0.0 ± 0.1° and 0.0 ± 0.2° for rotations, along the left-right, superior-inferior and anterior-posterior axes respectively; measurements with the head phantom gave similar results. Forced facial motion was associated with lower tracking accuracy. The RtM SGRT system achieved sub-millimetre accuracy. Conclusion: The RtM SGRT system is a low-cost, easy-to-build, open-source SGRT system that can achieve an accuracy meeting international commissioning guidelines. Its open-source, modular design allows for the development and easy translation of novel surface-tracking techniques.
Affiliation(s)
- Mark Gardner
- The University of Sydney, Camperdown, NSW 2050, Australia
- Julia Johnson
- The University of Sydney, Camperdown, NSW 2050, Australia
- Paul Keall
- The University of Sydney, Camperdown, NSW 2050, Australia
2
Martinez Vargas S, Vitale AJ, Genchi SA, Nogueira SF, Arias AH, Perillo GM, Siben A, Delrieux CA. Monitoring multiple parameters in complex water scenarios using a low-cost open-source data acquisition platform. HardwareX 2023; 16:e00492. [PMID: 38148972 PMCID: PMC10749909 DOI: 10.1016/j.ohx.2023.e00492]
Abstract
Water monitoring faces challenges driven by infrastructure, protection, financial resources, and science and innovation policies, among others. A modular, low-cost, fully open-source, small-sized Unmanned Surface Vessel (USV) called EMAC-USV (EMAC: Estación de Monitoreo Ambiental Costero) is proposed for monitoring bathymetry and water quality parameters (i.e., temperature, suspended solids concentration and hydrocarbon concentration) in complex water scenarios. A detailed description of each part of the platform, as well as of all electronic connections and its functioning, is presented. Fieldwork was carried out in two small waste stabilization ponds and in a portion of the main tidal channel of the Bahía Blanca port. The EMAC-USV is the result of a careful design that balances performance, communications and payload capacity, among other factors.
Affiliation(s)
- Steven Martinez Vargas
- Instituto Argentino de Oceanografía (IADO), CONICET-Universidad Nacional del Sur (UNS), B8000FWB, Bahía Blanca, Argentina
- Departamento de Ingeniería Eléctrica y de Computadoras, UNS, Bahía Blanca, Argentina
- Alejandro J. Vitale
- Instituto Argentino de Oceanografía (IADO), CONICET-Universidad Nacional del Sur (UNS), B8000FWB, Bahía Blanca, Argentina
- Departamento de Ingeniería Eléctrica y de Computadoras, UNS, Bahía Blanca, Argentina
- Departamento de Geografía y Turismo, UNS, Bahía Blanca, Argentina
- Sibila A. Genchi
- Instituto Argentino de Oceanografía (IADO), CONICET-Universidad Nacional del Sur (UNS), B8000FWB, Bahía Blanca, Argentina
- Departamento de Geografía y Turismo, UNS, Bahía Blanca, Argentina
- Simón F. Nogueira
- Instituto Argentino de Oceanografía (IADO), CONICET-Universidad Nacional del Sur (UNS), B8000FWB, Bahía Blanca, Argentina
- Departamento de Ingeniería, UNS, Bahía Blanca, Argentina
- Andrés H. Arias
- Instituto Argentino de Oceanografía (IADO), CONICET-Universidad Nacional del Sur (UNS), B8000FWB, Bahía Blanca, Argentina
- Departamento de Química, UNS, Bahía Blanca, Argentina
- Gerardo M.E. Perillo
- Instituto Argentino de Oceanografía (IADO), CONICET-Universidad Nacional del Sur (UNS), B8000FWB, Bahía Blanca, Argentina
- Departamento de Geología, UNS, Bahía Blanca, Argentina
- Agustín Siben
- Instituto Argentino de Oceanografía (IADO), CONICET-Universidad Nacional del Sur (UNS), B8000FWB, Bahía Blanca, Argentina
- Departamento de Ingeniería Eléctrica y de Computadoras, UNS, Bahía Blanca, Argentina
- Claudio A. Delrieux
- Departamento de Ingeniería Eléctrica y de Computadoras, UNS, Bahía Blanca, Argentina
- Instituto de Ciencias e Ingeniería de la Computación, CONICET-UNS, Bahía Blanca, Argentina
3
Preciado-Marquez D, Becker L, Storck M, Greulich L, Dugas M, Brix TJ. MainzelHandler: A Library for a Simple Integration and Usage of the Mainzelliste. Stud Health Technol Inform 2021; 281:233-7. [PMID: 34042740 DOI: 10.3233/SHTI210155]
Abstract
Pseudonymization plays a vital role in medical research. In Germany, the Technologie- und Methodenplattform für die vernetzte medizinische Forschung e.V. (TMF) has developed guidelines on how to create pseudonyms and how to handle personally identifiable information (PII) during this process. An open-source implementation of a pseudonymization service that follows these guidelines, and is therefore recommended by the TMF, is the so-called "Mainzelliste". This web application provides a REST-API for (de-)pseudonymization. For security reasons, each (de-)pseudonymization requires a complex session and token mechanism, as well as careful interaction between frontend and backend to ensure correct handling of PII. The objective of this work is the development of a library that simplifies the integration and usage of the Mainzelliste's API in a TMF-conform way. The frontend library uses JavaScript, while the backend component is based on Java with an optional Spring Boot extension. The library is available under the MIT open-source license from https://github.com/DanielPreciado-Marquez/MainzelHandler.
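The session-and-token handshake the abstract alludes to can be sketched with a small in-memory stub. The class and method names below are illustrative assumptions modelled on the general Mainzelliste REST-API pattern (open session, mint a single-use token, redeem it with the PII payload); they are not the actual MainzelHandler or Mainzelliste interfaces.

```python
# Hedged in-memory sketch of the session/token flow required before each
# (de-)pseudonymization. Names are illustrative assumptions, not the real
# Mainzelliste or MainzelHandler API.
import secrets

class PseudonymServiceStub:
    """Three-step flow: open session -> mint single-use token -> redeem
    the token together with the PII to obtain a stable pseudonym."""
    def __init__(self):
        self._sessions = set()
        self._tokens = {}       # token -> owning session
        self._pseudonyms = {}   # PII tuple -> stable pseudonym

    def create_session(self) -> str:
        sid = secrets.token_hex(8)
        self._sessions.add(sid)
        return sid

    def create_token(self, session_id: str) -> str:
        if session_id not in self._sessions:
            raise PermissionError("unknown session")
        tok = secrets.token_hex(8)
        self._tokens[tok] = session_id
        return tok

    def add_patient(self, token: str, pii: tuple) -> str:
        # Tokens are single-use: redeem and invalidate in one step.
        self._tokens.pop(token)  # raises KeyError if unknown or reused
        # Identical PII always maps to the same pseudonym.
        return self._pseudonyms.setdefault(pii, "PSN-" + secrets.token_hex(4))
```

Registering the same PII twice (with fresh tokens) yields the same pseudonym, while reusing a spent token fails, which mirrors why front- and backend must coordinate carefully.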
4
Greulich L, Brix TJ, Storck M, Dugas M. A Seamless Pseudonymization and Randomization Workflow for REDCap. Stud Health Technol Inform 2021; 281:952-6. [PMID: 34042814 DOI: 10.3233/SHTI210319]
Abstract
The interaction of multiple computer systems during multi-center randomized controlled trials (RCTs) is a hurdle for IT specialists as well as medical staff. A common workflow for the initial registration of a patient requires the generation of a pseudonym by a pseudonymization service, a manual transmission of the pseudonym to a randomization service, and a manual transfer of the pseudonym and assigned study arm into an electronic data capture (EDC) system. This interaction is often time-consuming and error-prone due to the multiple system changes involved. The objective of this work is to enhance a commonly used EDC system, Research Electronic Data Capture (REDCap), as a single source of interaction for multi-center RCTs. This is achieved by providing two modules for a seamless integration of a pseudonymization service, i.e., Mainzelliste, and a randomization service, i.e., RandIMI. Thus, no site-specific system changes are required, which increases time efficiency and reduces errors. From a technical perspective, authentication credentials and firewall exposure must be managed for only a single system. To evaluate the usability of our implementation, the System Usability Scale was employed. The increase in time efficiency was measured under laboratory conditions by comparing the time for patient registrations with and without our modules. The usability was rated "excellent", and an average time reduction of nearly 64 % was measured. Both open-source modules are available from the REDCap Repository of External Modules.
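The single-source registration idea (pseudonymize, randomize, and record in the EDC in one interaction instead of three manual system changes) can be sketched as below. All service and class names are illustrative stand-ins, not the actual Mainzelliste, RandIMI, or REDCap module APIs.

```python
# Hedged sketch: one registration call replaces three manual system
# changes. The stand-in methods mark where the pseudonymization service,
# randomization service, and EDC write would happen in a real deployment.
import itertools
import random

class Registration:
    def __init__(self, arms=("intervention", "control"), seed=0):
        self._counter = itertools.count(1)
        self._rng = random.Random(seed)   # fixed seed for a reproducible demo
        self._arms = arms
        self.edc_records = {}             # pseudonym -> assigned study arm

    def _pseudonymize(self) -> str:       # stand-in for the pseudonymization service
        return f"PSN-{next(self._counter):04d}"

    def _randomize(self) -> str:          # stand-in for the randomization service
        return self._rng.choice(self._arms)

    def register_patient(self) -> tuple:
        """One interaction: pseudonym + study arm, stored in the EDC record."""
        psn = self._pseudonymize()
        arm = self._randomize()
        self.edc_records[psn] = arm       # stand-in for the EDC write
        return psn, arm
```

Because the pseudonym never leaves the workflow for manual copying, the two transcription steps that the abstract identifies as error sources disappear.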
5
Abstract
Background: The identification of non-specifically cleaved peptides in proteomics and peptidomics poses a significant computational challenge. Current strategies for the identification of such peptides are typically time-consuming and hinder routine data analysis. Objective: We aimed to design an algorithm that would improve the speed of semi- and non-specific enzyme searches and could be applied to existing search programs. Method: We developed a novel search algorithm that leverages fragment-ion redundancy to search multiple non-specifically cleaved peptides simultaneously. Briefly, a theoretical peptide tandem mass spectrum is generated using only the fragment-ion series from a single terminus. This spectrum serves as a proxy for several shorter theoretical peptides sharing the same terminus. After database searching, amino acids are removed from the opposing terminus until the observed and theoretical precursor masses match within a given mass tolerance. Results: The algorithm was implemented in the search program MetaMorpheus and found to perform an order of magnitude faster than the traditional MetaMorpheus search while producing superior results. Conclusion: We report a fast, open-source non-specific enzyme search algorithm that enables search programs to exploit fragment-ion redundancy for a notable increase in search speed.
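The precursor-trimming step described in the abstract can be sketched in a few lines: a long proxy peptide, matched via its N-terminal fragment ions only, is shortened from the C-terminus until its theoretical mass matches the observed precursor mass. The residue mass table and tolerance below are illustrative, not MetaMorpheus internals.

```python
# Hedged sketch of the trimming idea from the abstract. Residue masses
# (monoisotopic, Da) and the tolerance are illustrative assumptions.
WATER = 18.0105646863  # monoisotopic mass of H2O

RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
    "D": 115.02694, "K": 128.09496, "E": 129.04259, "F": 147.06841,
}

def peptide_mass(seq: str) -> float:
    """Neutral monoisotopic mass of a linear peptide."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

def trim_to_precursor(seq: str, observed_mass: float, tol: float = 0.01):
    """Remove residues from the C-terminus (the terminus opposite the
    matched fragment-ion series) until the theoretical mass matches the
    observed precursor mass within tol (Da); return None on failure."""
    while seq:
        if abs(peptide_mass(seq) - observed_mass) <= tol:
            return seq
        seq = seq[:-1]  # drop one residue from the opposing terminus
    return None

# Example: the spectrum came from "PEPT", but the search matched the
# longer proxy "PEPTKDE" through their shared N-terminal fragments.
# trim_to_precursor("PEPTKDE", 442.206) recovers "PEPT".
```

Because one proxy spectrum stands in for every shorter peptide sharing that terminus, the database search itself scores far fewer candidates, which is where the order-of-magnitude speedup comes from.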
Affiliation(s)
- Zach Rolfs
- Department of Chemistry, University of Wisconsin-Madison, Madison, Wisconsin 53706
- Robert J Millikin
- Department of Chemistry, University of Wisconsin-Madison, Madison, Wisconsin 53706
- Lloyd M Smith
- Department of Chemistry, University of Wisconsin-Madison, Madison, Wisconsin 53706
6
Baker D. The Future of the Pharmaceutical Industry: Beyond Government-Granted Monopolies. J Law Med Ethics 2021; 49:25-29. [PMID: 33966644 DOI: 10.1017/jme.2021.5]
Abstract
Just as tariffs lead to economic distortions and provide incentives for corruption, so do patent monopolies on prescription drugs, except the impact is often an order of magnitude larger.
7
Owen SF, Kreitzer AC. An open-source control system for in vivo fluorescence measurements from deep-brain structures. J Neurosci Methods 2018; 311:170-177. [PMID: 30342106 DOI: 10.1016/j.jneumeth.2018.10.022]
Abstract
BACKGROUND Intracranial photometry through chronically implanted optical fibers is a widely adopted technique for measuring signals from fluorescent probes in deep-brain structures. The recent proliferation of bright, photo-stable, and specific genetically encoded fluorescent reporters for calcium and for other neuromodulators has greatly increased the utility and popularity of this technique. NEW METHOD Here we describe an open-source, cost-effective, microcontroller-based solution for controlling optical components in an intracranial photometry system and processing the resulting signal. RESULTS We show proof-of-principle that this system supports high quality intracranial photometry recordings from dorsal striatum in freely moving mice. A single system supports simultaneous fluorescence measurements in two independent color channels, but multiple systems can be integrated together if additional fluorescence channels are required. This system is designed to work in combination with either commercially available or custom-built optical components. Parts can be purchased for less than one tenth the cost of commercially available alternatives and complete assembly takes less than one day for an inexperienced user. COMPARISON WITH EXISTING METHOD(S) Currently available hardware draws on a variety of commercial, custom-built, or hybrid elements for both optical and electronic components. Many of these hardware systems are either specialized and inflexible, or over-engineered and expensive. CONCLUSIONS This open-source system increases experimental flexibility while reducing cost relative to current commercially available components. All software and firmware are open-source and customizable, affording a degree of experimental flexibility that is not available in current commercial systems.
Affiliation(s)
- Anatol C Kreitzer
- Gladstone Institutes, United States; Department of Neurology, UCSF, United States; Kavli Institute for Fundamental Neuroscience, United States; UCSF Weill Institute for Neurosciences, United States; Department of Physiology, UCSF, United States
8
Abstract
dbVar houses over 3 million submitted structural variants (SSV) from 120 human studies, including copy number variations (CNV), insertions, deletions, inversions, translocations, and complex chromosomal rearrangements. Users can submit multiple SSVs to dbVar that are presumably identical but were ascertained on different platforms and samples, to calculate whether a variant is rare or common in the population and to allow for cross-validation. However, because SSV genomic location reporting can vary – including fuzzy locations where the start and/or end points are not precisely known – analysis, comparison, annotation, and reporting of SSVs across studies can be difficult. This project was initiated by the Structural Variant Comparison Group for the purpose of generating a non-redundant set of genomic regions, defined by counts of concordance, for all human SSVs placed on RefSeq assembly GRCh38 (RefSeq accession GCF_000001405.26). We intend that the availability of these regions, called structural variant clusters (SVCs), will facilitate the analysis, annotation, and exchange of SV data and allow for simplified display in genomic sequence viewers for improved variant interpretation. Sets of SVCs were generated by variant type for each of the 120 studies as well as for a combined set across all studies. Starting from 3.64 million SSVs, 2.5 million and 3.4 million non-redundant SVCs with count >= 1 were generated by variant type for each study and across all studies, respectively. In addition, we have developed utilities for annotating, searching, and filtering SVC data in GVF format, for computing summary statistics, for exporting data to genomic viewers, and for annotating SVCs using external data sources.
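Collapsing redundant SV calls into clusters with concordance counts can be sketched as a simple interval merge. The merge rule here (any base-pair overlap between same-type calls on the same chromosome joins a cluster) is an illustrative assumption, not the Structural Variant Comparison Group's actual concordance definition, which must also handle fuzzy endpoints.

```python
# Hedged sketch: collapse overlapping same-type SV calls into
# non-redundant clusters, each carrying a concordance count (the number
# of submitted calls supporting it). The "any overlap merges" rule is an
# illustrative assumption, not dbVar's actual algorithm.
from collections import defaultdict

def cluster_variants(calls):
    """calls: iterable of (chrom, start, end, svtype) tuples.
    Returns a list of (chrom, start, end, svtype, count) clusters."""
    by_key = defaultdict(list)
    for chrom, start, end, svtype in calls:
        by_key[(chrom, svtype)].append((start, end))
    clusters = []
    for (chrom, svtype), ivals in by_key.items():
        ivals.sort()
        cur_s, cur_e, n = ivals[0][0], ivals[0][1], 1
        for s, e in ivals[1:]:
            if s <= cur_e:               # overlaps the open cluster: merge
                cur_e = max(cur_e, e)
                n += 1
            else:                        # gap: emit cluster, start a new one
                clusters.append((chrom, cur_s, cur_e, svtype, n))
                cur_s, cur_e, n = s, e, 1
        clusters.append((chrom, cur_s, cur_e, svtype, n))
    return clusters
```

Two overlapping deletions collapse into one cluster with count 2, while a duplication over the same coordinates stays separate because clustering is per variant type, mirroring the per-type SVC sets described above.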
Affiliation(s)
- Lon Phan
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Jeffrey Hsu
- Cleveland Clinic Lerner Research Institute, Cleveland, OH, USA
- Le Quang Minh Tri
- Department of Biotechnology, Ho Chi Minh City International University, Ho Chi Minh, Vietnam
- Michaela Willi
- Laboratory of Genetics and Physiology, National Institute of Diabetes, Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, USA; Division of Bioinformatics, Biocenter, Medical University Innsbruck, Innsbruck, Austria
- Tamer Mansour
- Lab for Data Intensive Biology, Department of Population Health and Reproduction, University of California, Davis, CA, USA; Department of Clinical Pathology, University of Mansoura, Mansoura, Egypt
- Yan Kai
- Cancer Epigenetics Laboratory, Department of Anatomy and Regenerative Biology, The George Washington University, Washington, DC, USA; Department of Physics, The George Washington University, Washington, DC, USA
- John Garner
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- John Lopez
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Ben Busby
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
9
Abstract
In genomics, bioinformatics and other areas of data science, gaps exist between extant public datasets and the open-source software tools built by the community to analyze similar data types. The purpose of biological data science hackathons is to assemble groups of genomics or bioinformatics professionals and software developers to rapidly prototype software to address these gaps. The only two rules for the NCBI-assisted hackathons run so far are that 1) data either must be housed in public data repositories or be deposited to such repositories shortly after the hackathon's conclusion, and 2) all software comprising the final pipeline must be open-source or open-use. Proposed topics, as well as suggested tools and approaches, are distributed to participants at the beginning of each hackathon and refined during the event. Software, scripts, and pipelines are developed and published on GitHub, a web service providing publicly available, free-usage tiers for collaborative software development. The code resulting from each hackathon is published at https://github.com/NCBI-Hackathons/, with separate directories or repositories for each team.
Affiliation(s)
- Ben Busby
- National Center for Biotechnology Information (NCBI), National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Matthew Lesko
- National Center for Biotechnology Information (NCBI), National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Lisa Federer
- NIH Library, Division of Library Services, Office of Research Services, National Institutes of Health, Bethesda, MD, USA