1. Manjón JV, Coupé P. volBrain: An Online MRI Brain Volumetry System. Front Neuroinform 2016; 10:30. PMID: 27512372; PMCID: PMC4961698; DOI: 10.3389/fninf.2016.00030. Journal Article; cited 358 times in RCA.
Abstract
The amount of medical image data produced in clinical and research settings is growing rapidly, resulting in vast amounts of data to analyze. Automatic and reliable quantitative analysis tools, including segmentation, make it possible to analyze brain development and to understand specific patterns of many neurological diseases. This field has recently seen many advances, with successful techniques based on non-linear warping and label fusion. In this work we present a novel, fully automatic pipeline for volumetric brain analysis based on multi-atlas label fusion technology that provides accurate volumetric information at different levels of detail in a short time. This method is available through the volBrain online web interface (http://volbrain.upv.es), which is publicly and freely accessible to the scientific community. Our new framework has been compared with current state-of-the-art methods, showing very competitive results.
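The multi-atlas label fusion that volBrain builds on can be illustrated with a minimal per-voxel majority-vote sketch (a toy illustration, not volBrain's actual pipeline; the function name and the tiny "atlas" label lists are invented):

```python
from collections import Counter

def majority_vote_fusion(atlas_labels):
    """Fuse several candidate segmentations by per-voxel majority vote.

    atlas_labels: list of equally sized label sequences (one per atlas),
    each element an integer tissue/structure label for that voxel.
    Returns the fused label list.
    """
    if not atlas_labels:
        raise ValueError("need at least one atlas")
    n_voxels = len(atlas_labels[0])
    fused = []
    for v in range(n_voxels):
        # Count the label each registered atlas proposes for this voxel
        votes = Counter(labels[v] for labels in atlas_labels)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Three toy "atlases" labelling the same 6 voxels (0 = background, 1 = structure)
a1 = [0, 1, 1, 0, 1, 0]
a2 = [0, 1, 0, 0, 1, 0]
a3 = [1, 1, 1, 0, 0, 0]
fused = majority_vote_fusion([a1, a2, a3])  # -> [0, 1, 1, 0, 1, 0]
```

Real pipelines weight each atlas's vote by local image similarity rather than counting equally, but the fusion step has this shape.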
2. Alyass A, Turcotte M, Meyre D. From big data analysis to personalized medicine for all: challenges and opportunities. BMC Med Genomics 2015; 8:33. PMID: 26112054; PMCID: PMC4482045; DOI: 10.1186/s12920-015-0108-y. Review; cited 244 times in RCA.
Abstract
Recent advances in high-throughput technologies have led to the emergence of systems biology as a holistic science to achieve more precise modeling of complex diseases. Many predict the emergence of personalized medicine in the near future. We are, however, moving from two-tiered health systems to a two-tiered personalized medicine. Omics facilities are restricted to affluent regions, and personalized medicine is likely to widen the growing gap in health systems between high- and low-income countries. This is mirrored by an increasing lag between our ability to generate big data and our ability to analyze it. Several bottlenecks slow down the transition from conventional to personalized medicine: generation of cost-effective high-throughput data; hybrid education and multidisciplinary teams; data storage and processing; data integration and interpretation; and individual and global economic relevance. This review provides an update on important developments in the analysis of big data and forward strategies to accelerate the global transition to personalized medicine.
3. Mobile Crowd Sensing for Traffic Prediction in Internet of Vehicles. Sensors 2016; 16:s16010088. PMID: 26761013; PMCID: PMC4732121; DOI: 10.3390/s16010088. Journal Article; cited 165 times in RCA.
Abstract
The advances in wireless communication techniques, mobile cloud computing, and automotive and intelligent terminal technology are driving the evolution of vehicular ad hoc networks into the Internet of Vehicles (IoV) paradigm. This leads to a change in the vehicle routing problem from a calculation based on static data towards real-time traffic prediction. In this paper, we first address the taxonomy of cloud-assisted IoV from the viewpoint of the service relationship between cloud computing and IoV. Then, we review the traditional traffic prediction approaches used by both Vehicle to Infrastructure (V2I) and Vehicle to Vehicle (V2V) communications. On this basis, we propose a mobile crowd sensing technology to support the creation of dynamic route choices for drivers wishing to avoid congestion. Experiments were carried out to verify the proposed approaches. Finally, we discuss the outlook for reliable traffic prediction.
4. A Review on Internet of Things for Defense and Public Safety. Sensors 2016; 16:s16101644. PMID: 27782052; PMCID: PMC5087432; DOI: 10.3390/s16101644. Review; cited 143 times in RCA.
Abstract
The Internet of Things (IoT) is undeniably transforming the way that organizations communicate and organize everyday businesses and industrial procedures. Its adoption has proven well suited for sectors that manage a large number of assets and coordinate complex and distributed processes. This survey analyzes the great potential for applying IoT technologies (i.e., data-driven applications or embedded automation and intelligent adaptive systems) to revolutionize modern warfare and provide benefits similar to those in industry. It identifies scenarios where Defense and Public Safety (PS) could leverage better commercial IoT capabilities to deliver greater survivability to the warfighter or first responders, while reducing costs and increasing operation efficiency and effectiveness. This article reviews the main tactical requirements and the architecture, examining gaps and shortcomings in existing IoT systems across the military field and mission-critical scenarios. The review characterizes the open challenges for a broad deployment and presents a research roadmap for enabling an affordable IoT for defense and PS.
5. [Citation details missing in source.] Review; cited 140 times in RCA.
Abstract
A new generation of mobile sensing approaches offers significant advantages over traditional platforms in terms of test speed, control, low cost, ease of operation, and data management, and requires minimal equipment and user involvement. The marriage of novel sensing technologies with cellphones enables the development of powerful lab-on-smartphone platforms for many important applications including medical diagnosis, environmental monitoring, and food safety analysis. This paper reviews the recent advancements and developments in the field of smartphone-based food diagnostic technologies, with an emphasis on custom modules to enhance smartphone sensing capabilities. These devices typically comprise multiple components such as detectors, sample processors, disposable chips, batteries and software, which are integrated with a commercial smartphone. One of the most important aspects of developing these systems is the integration of these components onto a compact and lightweight platform that requires minimal power. To date, researchers have demonstrated several promising approaches employing various sensing techniques and device configurations. We aim to provide a systematic classification according to the detection strategy, providing a critical discussion of strengths and weaknesses. We have also extended the analysis to the food scanning devices that are increasingly populating the Internet of Things (IoT) market, demonstrating how this field is indeed promising, as the research outputs are quickly capitalized on by new start-up companies.
6. Singharoy A, Teo I, McGreevy R, Stone JE, Zhao J, Schulten K. Molecular dynamics-based refinement and validation for sub-5 Å cryo-electron microscopy maps. eLife 2016; 5. PMID: 27383269; PMCID: PMC4990421; DOI: 10.7554/elife.16105. Research Support, N.I.H., Extramural; cited 123 times in RCA.
Abstract
Two structure determination methods, based on the molecular dynamics flexible fitting (MDFF) paradigm, are presented that resolve sub-5 Å cryo-electron microscopy (EM) maps with either single structures or ensembles of such structures. The methods, denoted cascade MDFF and resolution exchange MDFF, sequentially re-refine a search model against a series of maps of progressively higher resolutions, which ends with the original experimental resolution. Application of sequential re-refinement enables MDFF to achieve a radius of convergence of ~25 Å, demonstrated with the accurate modeling of β-galactosidase and TRPV1 proteins at 3.2 Å and 3.4 Å resolution, respectively. The MDFF refinements uniquely offer map-model validation and B-factor determination criteria based on the inherent dynamics of the macromolecules studied, captured by means of local root mean square fluctuations. The MDFF tools described are available to researchers through an easy-to-use and cost-effective cloud computing resource on Amazon Web Services.

eLife digest: To understand the roles that proteins and other large molecules play inside cells, it is important to determine their structures. One of the techniques that researchers can use to do this is called cryo-electron microscopy (cryo-EM), which rapidly freezes molecules to fix them in position before imaging them in fine detail. The cryo-EM images are like maps that show the approximate position of atoms. These images must then be processed in order to build a three-dimensional model of the protein that shows how its atoms are arranged relative to each other. One computational approach called Molecular Dynamics Flexible Fitting (MDFF) works by flexibly fitting possible atomic structures into cryo-EM maps. Although this approach works well with relatively undetailed (or 'low resolution') cryo-EM images, it struggles to handle the high-resolution cryo-EM maps now being generated. Singharoy, Teo, McGreevy et al. have now developed two MDFF methods, called cascade MDFF and resolution exchange MDFF, that help to resolve atomic models of biological molecules from cryo-EM images. Each method can refine poorly guessed models into ones that are consistent with the high-resolution experimental images. The refinement is achieved by interpreting a range of images that starts with a 'fuzzy' image. The contrast of the image is then progressively improved until an image is produced that has a resolution good enough to almost distinguish individual atoms. The method works because each cryo-EM image shows not just one, but a collection of atomic structures that the molecule can take on, with the fuzzier parts of the image representing the more flexible parts of the molecule. By taking this flexibility into account, the large-scale features of the protein structure can be determined first from the fuzzier images, and increasing the contrast of the images allows smaller-scale refinements to be made to the structure. The MDFF tools have been designed to be easy to use and are available to researchers at low cost through cloud computing platforms. They can now be used to unravel the structure of many different proteins and protein complexes, including those involved in photosynthesis, respiration and protein synthesis.
7. Ohno-Machado L, Bafna V, Boxwala AA, Chapman BE, Chapman WW, Chaudhuri K, Day ME, Farcas C, Heintzman ND, Jiang X, Kim H, Kim J, Matheny ME, Resnic FS, Vinterbo SA. iDASH: integrating data for analysis, anonymization, and sharing. J Am Med Inform Assoc 2012; 19:196-201. PMID: 22081224; PMCID: PMC3277627; DOI: 10.1136/amiajnl-2011-000538. Research Support, N.I.H., Extramural; cited 115 times in RCA.
Abstract
iDASH (integrating data for analysis, anonymization, and sharing) is the newest National Center for Biomedical Computing funded by the NIH. It focuses on algorithms and tools for sharing data in a privacy-preserving manner. Foundational privacy technology research performed within iDASH is coupled with innovative engineering for collaborative tool development and data-sharing capabilities in a private Health Insurance Portability and Accountability Act (HIPAA)-certified cloud. Driving Biological Projects, which span different biological levels (from molecules to individuals to populations) and focus on various health conditions, help guide research and development within this Center. Furthermore, training and dissemination efforts connect the Center with its stakeholders and educate data owners and data consumers on how to share and use clinical and biological data. Through these various mechanisms, iDASH implements its goal of providing biomedical and behavioral researchers with access to data, software, and a high-performance computing environment, thus enabling them to generate and test new hypotheses.
8. Connor TR, Loman NJ, Thompson S, Smith A, Southgate J, Poplawski R, Bull MJ, Richardson E, Ismail M, Thompson SE, Kitchen C, Guest M, Bakke M, Sheppard SK, Pallen MJ. CLIMB (the Cloud Infrastructure for Microbial Bioinformatics): an online resource for the medical microbiology community. Microb Genom 2016; 2:e000086. PMID: 28785418; PMCID: PMC5537631; DOI: 10.1099/mgen.0.000086. Cited 115 times in RCA.
Abstract
The increasing availability and decreasing cost of high-throughput sequencing has transformed academic medical microbiology, delivering an explosion in available genomes while also driving advances in bioinformatics. However, many microbiologists are unable to exploit the resulting large genomics datasets because they do not have access to relevant computational resources and to an appropriate bioinformatics infrastructure. Here, we present the Cloud Infrastructure for Microbial Bioinformatics (CLIMB) facility, a shared computing infrastructure that has been designed from the ground up to provide an environment where microbiologists can share and reuse methods and data.
9. Sherif T, Rioux P, Rousseau ME, Kassis N, Beck N, Adalat R, Das S, Glatard T, Evans AC. CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research. Front Neuroinform 2014; 8:54. PMID: 24904400; PMCID: PMC4033081; DOI: 10.3389/fninf.2014.00054. Journal Article; cited 114 times in RCA.
Abstract
The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource interoperability in a manner transparent to the end user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN served over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as autism, Parkinson's and Alzheimer's diseases, and multiple sclerosis, as well as on normal brain structure and development. This technical report presents the CBRAIN platform, its current deployment and usage, and future directions.
10. An Edge Computing Based Smart Healthcare Framework for Resource Management. Sensors 2018; 18:s18124307. PMID: 30563267; PMCID: PMC6308405; DOI: 10.3390/s18124307. Journal Article; cited 103 times in RCA.
Abstract
The revolution in information technologies, and the spread of the Internet of Things (IoT) and smart city industrial systems, have fostered widespread use of smart systems. As a complex, 24/7 service, healthcare requires efficient and reliable follow-up on daily operations, service and resources. Cloud and edge computing are essential for smart and efficient healthcare systems in smart cities. Emergency departments (ED) are real-time systems with complex dynamic behavior, and they require tailored techniques to model, simulate and optimize system resources and service flow. ED issues are mainly due to resource shortage and resource assignment efficiency. In this paper, we propose a resource preservation net (RPN) framework using Petri net, integrated with custom cloud and edge computing suitable for ED systems. The proposed framework is designed to model non-consumable resources and is theoretically described and validated. RPN is applicable to a real-life scenario where key performance indicators such as patient length of stay (LoS), resource utilization rate and average patient waiting time are modeled and optimized. As the system must be reliable, efficient and secure, the use of cloud and edge computing is critical. The proposed framework is simulated, which highlights significant improvements in LoS, resource utilization and patient waiting time.
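The resource-limited patient flow that the RPN framework models can be mimicked with a few lines of discrete-event bookkeeping (a toy queueing sketch under invented arrival and service times, not the authors' Petri-net formalism):

```python
import heapq

def simulate_ed(arrivals, service_times, n_servers):
    """Toy ED sketch: each patient is handled by the first free resource
    (e.g. a staffed bay). Returns (average wait, average length of stay)."""
    free_at = [0.0] * n_servers     # times at which each resource frees up
    heapq.heapify(free_at)
    waits, lengths_of_stay = [], []
    for arrive, service in zip(arrivals, service_times):
        earliest = heapq.heappop(free_at)   # soonest-available resource
        start = max(arrive, earliest)       # wait if nothing is free yet
        waits.append(start - arrive)
        lengths_of_stay.append(start - arrive + service)
        heapq.heappush(free_at, start + service)
    return sum(waits) / len(waits), sum(lengths_of_stay) / len(lengths_of_stay)

# Four patients, two staffed bays, 3-hour treatments (made-up numbers)
avg_wait, avg_los = simulate_ed([0, 0, 1, 2], [3, 3, 3, 3], n_servers=2)
```

Optimizing the KPIs the abstract names (LoS, utilization, waiting time) amounts to searching over resource counts and schedules in a model of this kind, which the paper does formally with Petri nets.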
11. Lightbody G, Haberland V, Browne F, Taggart L, Zheng H, Parkes E, Blayney JK. Review of applications of high-throughput sequencing in personalized medicine: barriers and facilitators of future progress in research and clinical application. Brief Bioinform 2019; 20:1795-1811. PMID: 30084865; PMCID: PMC6917217; DOI: 10.1093/bib/bby051. Review; cited 99 times in RCA.
Abstract
There has been an exponential growth in the performance and output of sequencing technologies (omics data), with full genome sequencing now producing gigabases of reads on a daily basis. These data may hold the promise of personalized medicine, leading to routinely available sequencing tests that can guide patient treatment decisions. In the era of high-throughput sequencing (HTS), computational considerations, data governance and clinical translation are the greatest rate-limiting steps. To ensure that the analysis, management and interpretation of such extensive omics data is exploited to its full potential, key factors, including sample sourcing, technology selection, and computational expertise and resources, need to be considered, leading to an integrated set of high-performance tools and systems. This article provides an up-to-date overview of the evolution of HTS and the accompanying tools, infrastructure and data management approaches that are emerging in this space, which, if used within a multidisciplinary context, may ultimately facilitate the development of personalized medicine.
12. Navarro E, Costa N, Pereira A. A Systematic Review of IoT Solutions for Smart Farming. Sensors 2020; 20:s20154231. PMID: 32751366; PMCID: PMC7436012; DOI: 10.3390/s20154231. Systematic Review; cited 74 times in RCA.
Abstract
World population growth is increasing the demand for food production. Furthermore, the shrinking workforce in rural areas and rising production costs are challenges for food production nowadays. Smart farming is a farm management concept that may use the Internet of Things (IoT) to overcome the current challenges of food production. This work uses the Preferred Reporting Items for Systematic Reviews (PRISMA) methodology to systematically review the existing literature on smart farming with IoT. The review aims to identify the main devices, platforms, network protocols, data processing technologies, and the applicability of smart farming with IoT to agriculture. The review shows an evolution in the way data are processed in recent years: traditional approaches mostly used data in a reactive manner, whereas more recent approaches exploit new technological developments to use data to prevent crop problems and to improve the accuracy of crop diagnosis.
13. A Blockchain-Based Location Privacy Protection Incentive Mechanism in Crowd Sensing Networks. Sensors 2018; 18:s18113894. PMID: 30424534; PMCID: PMC6263764; DOI: 10.3390/s18113894. Journal Article; cited 73 times in RCA.
Abstract
Crowd sensing is a perception mode that recruits mobile device users to complete tasks such as data collection and cloud computing. For the cloud computing platform, crowd sensing not only enables users to collaborate on large-scale sensing tasks but also provides the platform with user types, social attributes, and other information. To improve the effectiveness of crowd sensing, many incentive mechanisms have been proposed; common incentives are monetary reward, entertainment and gamification, social relation, and virtual credit. Incentives based on privacy protection, however, are rare. In this paper, we propose a mixed incentive mechanism that combines privacy protection and virtual credit: a blockchain-based location privacy protection incentive mechanism for crowd sensing networks. Its network structure can be divided into three parts: intelligent crowd sensing networks, a confusion mechanism, and a blockchain. We conducted experiments in a campus environment, and the results show that the proposed incentive mechanism is effective in stimulating user participation.
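The hash-chained ledger idea underlying the paper's blockchain component can be sketched in a few lines (an illustrative toy, not the authors' protocol; the coarse grid-cell payloads are invented stand-ins for obfuscated location reports):

```python
import hashlib
import json

def make_block(prev_hash, payload):
    """Create a block whose hash commits to the payload and the previous hash."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return {"prev": prev_hash, "payload": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def chain_valid(chain):
    """Verify every block's hash and every link to its predecessor."""
    for i, block in enumerate(chain):
        body = json.dumps({"prev": block["prev"], "payload": block["payload"]},
                          sort_keys=True)
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Two obfuscated location reports with virtual-credit rewards (made-up data)
genesis = make_block("0" * 64, {"user": "anon-1", "cell": "grid-17", "credit": 5})
chain = [genesis,
         make_block(genesis["hash"], {"user": "anon-2", "cell": "grid-03", "credit": 3})]
```

Because each hash commits to the previous block, retroactively altering a report (or a credit balance) invalidates every later link, which is what makes such a ledger usable as a tamper-evident incentive record.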
14. Kim S, Song SM, Yoon YI. Smart learning services based on smart cloud computing. Sensors 2011; 11:7835-50. PMID: 22164048; PMCID: PMC3231729; DOI: 10.3390/s110807835. Research Support, Non-U.S. Gov't; cited 70 times in RCA.
Abstract
Context-aware technologies can make e-learning services smarter and more efficient, since context-aware services are based on the user's behavior. To add those technologies to existing e-learning services, a service architecture model is needed to transform the existing e-learning environment, which is situation-aware, into one that understands context as well. Context-awareness in e-learning may include awareness of the user profile and terminal context. In this paper, we propose a new notion of service that provides context-awareness to smart learning content in a cloud computing environment. We suggest the elastic four smarts (E4S) concept (smart pull, smart prospect, smart content, and smart push) for cloud services so that smart learning services become possible. The E4S focuses on meeting users' needs by collecting and analyzing user behavior, prospecting future services, building corresponding content, and delivering that content through the cloud computing environment. User behavior can be collected through mobile devices such as smartphones with built-in sensors. As a result, the proposed smart e-learning model in a cloud computing environment provides personalized and customized learning services to its users.
15. Yan Z. Unprecedented pandemic, unprecedented shift, and unprecedented opportunity. Hum Behav Emerg Technol 2020; 2:110-112. PMID: 32427197; PMCID: PMC7228313; DOI: 10.1002/hbe2.192. Journal Article; cited 65 times in RCA.
16. Griebel L, Prokosch HU, Köpcke F, Toddenroth D, Christoph J, Leb I, Engel I, Sedlmayr M. A scoping review of cloud computing in healthcare. BMC Med Inform Decis Mak 2015; 15:17. PMID: 25888747; PMCID: PMC4372226; DOI: 10.1186/s12911-015-0145-7. Scoping Review; cited 63 times in RCA.
Abstract
Background: Cloud computing is a recent and fast-growing area of development in healthcare. Ubiquitous, on-demand access to virtually endless resources, combined with a pay-per-use model, allows for new ways of developing, delivering and using services. Cloud computing is often used in an "omics" context, e.g. for computing in genomics, proteomics and molecular medicine, while other fields of application still seem to be underrepresented. Thus, the objective of this scoping review was to identify the current state and hot topics in research on cloud computing in healthcare beyond this traditional domain.

Methods: MEDLINE was searched in July 2013 and in December 2014 for publications containing the terms "cloud computing" and "cloud-based". Each journal and conference article was categorized and summarized independently by two researchers, who then consolidated their findings.

Results: 102 publications were analyzed and six main topics were found: telemedicine/teleconsultation, medical imaging, public health and patient self-management, hospital management and information systems, therapy, and secondary use of data. Commonly used features are broad network access for sharing and accessing data and rapid elasticity to adapt dynamically to computing demands. Eight articles favor the pay-for-use characteristics of cloud-based services, which avoid upfront investments. Nevertheless, while 22 articles present very general potentials of cloud computing in the medical domain and 66 articles describe conceptual or prototypic projects, only 14 articles report successful implementations. Furthermore, in many articles cloud computing is treated as an analogy to internet- or web-based data sharing, and the characteristics of the particular cloud computing approach are not clearly described.

Conclusions: Even though cloud computing in healthcare is of growing interest, only a few successful implementations exist so far, and many papers use the term "cloud" synonymously with "using virtual machines" or "web-based", with no described benefit of the cloud paradigm. The biggest threat to adoption in the healthcare domain comes from involving external cloud partners: many issues of data safety and security remain to be solved. Until then, cloud computing is favored more for individual features such as elasticity, pay-per-use and broad network access than for the cloud paradigm as a whole.
17. Nagasaki H, Mochizuki T, Kodama Y, Saruhashi S, Morizaki S, Sugawara H, Ohyanagi H, Kurata N, Okubo K, Takagi T, Kaminuma E, Nakamura Y. DDBJ read annotation pipeline: a cloud computing-based pipeline for high-throughput analysis of next-generation sequencing data. DNA Res 2013; 20:383-90. PMID: 23657089; PMCID: PMC3738164; DOI: 10.1093/dnares/dst017. Research Support, Non-U.S. Gov't; cited 62 times in RCA.
Abstract
High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and hardware resources that are a challenge for molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets via decentralized processing on NIG supercomputers, currently free of charge. The pipeline consists of two analysis components: a basic analysis for reference genome mapping and de novo assembly, and a subsequent high-level analysis for structural and functional annotation. Users may smoothly switch between the two components, facilitating web-based operation of a supercomputer for high-throughput data analysis. Moreover, public NGS reads from the DDBJ Sequence Read Archive, located on the same supercomputer, can be imported into the pipeline by entering only an accession number. The pipeline will facilitate research by providing unified analytical workflows for NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/.
18. Halligan BD, Geiger JF, Vallejos AK, Greene AS, Twigger SN. Low cost, scalable proteomics data analysis using Amazon's cloud computing services and open source search algorithms. J Proteome Res 2009; 8:3148-53. PMID: 19358578; PMCID: PMC2691775; DOI: 10.1021/pr800970z. Research Support, N.I.H., Extramural; cited 56 times in RCA.
Abstract
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site ( http://proteomics.mcw.edu/vipdac ).
|
Research Support, N.I.H., Extramural |
16 |
56 |
19
|
Heath AP, Greenway M, Powell R, Spring J, Suarez R, Hanley D, Bandlamudi C, McNerney ME, White KP, Grossman RL. Bionimbus: a cloud for managing, analyzing and sharing large genomics datasets. J Am Med Inform Assoc 2014; 21:969-75. [PMID: 24464852 PMCID: PMC4215034 DOI: 10.1136/amiajnl-2013-002155] [Citation(s) in RCA: 55] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2013] [Revised: 11/04/2013] [Accepted: 01/04/2014] [Indexed: 11/27/2022] Open
Abstract
BACKGROUND As large genomics and phenotypic datasets are becoming more common, it is increasingly difficult for most researchers to access, manage, and analyze them. One possible approach is to provide the research community with several petabyte-scale cloud-based computing platforms containing these data, along with tools and resources to analyze them. METHODS Bionimbus is an open source cloud-computing platform based primarily upon OpenStack, which manages on-demand virtual machines that provide the required computational resources, and GlusterFS, a high-performance clustered file system. Bionimbus also includes Tukey, a portal and associated middleware that provide a single entry point and single sign-on for the various Bionimbus resources, and Yates, which automates the installation, configuration, and maintenance of the required software infrastructure. RESULTS Bionimbus is used by a variety of projects to process genomics and phenotypic data. For example, it is used by an acute myeloid leukemia resequencing project at the University of Chicago. The project requires several computational pipelines, including pipelines for quality control, alignment, variant calling, and annotation. For each sample, the alignment step requires eight CPUs for about 12 h. BAM file sizes ranged from 5 GB to 10 GB per sample. CONCLUSIONS Most members of the research community have difficulty downloading large genomics datasets and obtaining sufficient storage and computer resources to manage and analyze the data. Cloud computing platforms, such as Bionimbus, with data commons that contain large genomics datasets, are one choice for broadening access to research data in genomics.
|
Research Support, N.I.H., Extramural |
11 |
55 |
20
|
Goscinski WJ, McIntosh P, Felzmann U, Maksimenko A, Hall CJ, Gureyev T, Thompson D, Janke A, Galloway G, Killeen NEB, Raniga P, Kaluza O, Ng A, Poudel G, Barnes DG, Nguyen T, Bonnington P, Egan GF. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research. Front Neuroinform 2014; 8:30. [PMID: 24734019 PMCID: PMC3973921 DOI: 10.3389/fninf.2014.00030] [Citation(s) in RCA: 55] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2013] [Accepted: 03/10/2014] [Indexed: 11/22/2022] Open
Abstract
The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy, and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research.
|
research-article |
11 |
55 |
21
|
Abd-El-Atty B, Iliyasu AM, Alaskar H, Abd El-Latif AA. A Robust Quasi-Quantum Walks-Based Steganography Protocol for Secure Transmission of Images on Cloud-Based E-healthcare Platforms. SENSORS 2020; 20:s20113108. [PMID: 32486383 PMCID: PMC7309012 DOI: 10.3390/s20113108] [Citation(s) in RCA: 50] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/27/2020] [Revised: 05/28/2020] [Accepted: 05/29/2020] [Indexed: 11/16/2022]
Abstract
Traditionally, tamper-proof steganography involves using efficient protocols to encrypt the stego cover image and/or hidden message prior to embedding it into the carrier object. However, as the inevitable transition to the quantum computing paradigm beckons, its immense computing power will be exploited to violate even the best non-quantum, i.e., classical, stego protocols. For their part, quantum walks can be tailored to exploit their astounding 'quantumness' to propagate nonlinear chaotic behaviours, as well as their sensitivity to alterations in primary key parameters, both important properties for efficient information security. Our study explores using a classical (i.e., quantum-inspired) rendition of the controlled alternate quantum walks (CAQWs) model to fabricate a robust image steganography protocol for cloud-based E-healthcare platforms by locating the content that overlays the secret (or hidden) bits. The design employed in our technique precludes the need for pre- and/or post-encryption of the carrier and secret images. Furthermore, our design simplifies extraction of the confidential (hidden) information, since only the stego image and the primary states to run the CAQWs are required. We validated the proposed protocol on a dataset of medical images, with remarkable outcomes in terms of security, good visual quality, high resistance to data-loss attacks, and high embedding capacity, making the proposed scheme a veritable strategy for efficient medical image steganography.
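For orientation, the classical baseline that CAQW-based schemes improve upon is plain least-significant-bit (LSB) embedding, where secret bits overwrite the lowest bit of selected pixels. The sketch below shows that baseline only — it does not reproduce the paper's quantum-walk position selection, and all names are illustrative:

```python
import numpy as np

def embed(cover, bits):
    """Hide a bit string in the least-significant bits of the first
    len(bits) pixels of the cover image (flattened order)."""
    stego = cover.flatten().copy()
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | int(bit)  # clear LSB, then set it
    return stego.reshape(cover.shape)

def extract(stego, n_bits):
    """Read the hidden bits back out of the stego image."""
    return "".join(str(p & 1) for p in stego.flatten()[:n_bits])

cover = np.arange(64, dtype=np.uint8).reshape(8, 8)  # toy 8x8 "image"
secret = "10110010"
stego = embed(cover, secret)
recovered = extract(stego, len(secret))
```

Because only the lowest bit of each touched pixel changes, no pixel value moves by more than 1 — the visual-quality property the abstract emphasizes — but fixed embedding positions are exactly what makes plain LSB fragile, which is what the quantum-walk-driven position selection addresses.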
|
Journal Article |
5 |
50 |
22
|
Klonoff DC. Fog Computing and Edge Computing Architectures for Processing Data From Diabetes Devices Connected to the Medical Internet of Things. J Diabetes Sci Technol 2017; 11:647-652. [PMID: 28745086 PMCID: PMC5588847 DOI: 10.1177/1932296817717007] [Citation(s) in RCA: 49] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
The Internet of Things (IoT) is generating an immense volume of data. With cloud computing, medical sensor and actuator data can be stored and analyzed remotely by distributed servers. The results can then be delivered via the Internet. IoT devices include such wireless diabetes devices as blood glucose monitors, continuous glucose monitors, insulin pens, insulin pumps, and closed-loop systems. The cloud model for data storage and analysis is increasingly unable to process the data avalanche, and processing is being pushed out to the edge of the network, closer to where the data-generating devices are. Fog computing and edge computing are two architectures for data handling that can offload data from the cloud, process them near the patient, and transmit information machine-to-machine or machine-to-human in milliseconds or seconds. Sensor data can be processed near the sensing and actuating devices with fog computing (with local nodes) and with edge computing (within the sensing devices). Compared to cloud computing, fog computing and edge computing offer five advantages: (1) greater data transmission speed, (2) less dependence on limited bandwidths, (3) greater privacy and security, (4) greater control over data generated in foreign countries, where laws may limit use or permit unwanted governmental access, and (5) lower costs, because more sensor-derived data are used locally and less data are transmitted remotely. Connected diabetes devices almost all use fog computing or edge computing, because diabetes patients require a very rapid response to sensor input and cannot tolerate delays for cloud computing.
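The edge pattern described in this abstract — react to each sensor reading locally on the millisecond path, and send the cloud only windowed summaries to save bandwidth — can be sketched as a toy node. The class name, alert thresholds, and window size below are illustrative assumptions, not values from the paper:

```python
from statistics import mean

class EdgeGlucoseNode:
    """Toy edge node for a glucose sensor: immediate local alerting,
    with only per-window averages forwarded to the cloud uplink."""
    def __init__(self, low=70, high=180, window=5):
        self.low, self.high, self.window = low, high, window
        self.buffer = []   # readings awaiting summarization
        self.uplink = []   # what the cloud actually receives

    def ingest(self, mg_dl):
        # Local, immediate decision - no round trip to a remote server.
        alert = "low" if mg_dl < self.low else "high" if mg_dl > self.high else None
        self.buffer.append(mg_dl)
        if len(self.buffer) == self.window:
            # Forward a summary only: one value per window, not per reading.
            self.uplink.append(round(mean(self.buffer), 1))
            self.buffer = []
        return alert

node = EdgeGlucoseNode()
readings = [95, 102, 60, 110, 98, 190, 105, 99, 101, 100]
alerts = [node.ingest(v) for v in readings]
```

Here ten readings produce ten instant local decisions but only two uplink messages — the trade the abstract describes between response latency and transmitted volume.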
|
Review |
8 |
49 |
23
|
Lahoura V, Singh H, Aggarwal A, Sharma B, Mohammed MA, Damaševičius R, Kadry S, Cengiz K. Cloud Computing-Based Framework for Breast Cancer Diagnosis Using Extreme Learning Machine. Diagnostics (Basel) 2021; 11:241. [PMID: 33557132 PMCID: PMC7913821 DOI: 10.3390/diagnostics11020241] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Revised: 01/28/2021] [Accepted: 01/29/2021] [Indexed: 02/07/2023] Open
Abstract
Globally, breast cancer is one of the most significant causes of death among women. Early detection accompanied by prompt treatment can reduce the risk of death due to breast cancer. Currently, machine learning in cloud computing plays a pivotal role in disease diagnosis, particularly for people living in remote areas where medical facilities are scarce. Diagnosis systems based on machine learning act as secondary readers and assist radiologists in the proper diagnosis of diseases, whereas cloud-based systems can support telehealth services and remote diagnostics. Techniques based on artificial neural networks (ANN) have attracted many researchers to explore their capability for disease diagnosis. The extreme learning machine (ELM) is a variant of the ANN with huge potential for solving various classification problems. The framework proposed in this paper amalgamates three research domains: first, ELM is applied for the diagnosis of breast cancer; second, the gain ratio feature selection method is employed to eliminate insignificant features; and third, a cloud computing-based system for remote diagnosis of breast cancer using ELM is proposed. The performance of the cloud-based ELM is compared with some state-of-the-art technologies for disease diagnosis. The results achieved on the Wisconsin Diagnostic Breast Cancer (WBCD) dataset indicate that the cloud-based ELM technique outperforms the alternatives, and ELM achieved the best performance in both the standalone and cloud environments. The experimental results indicate an accuracy of 0.9868, a recall of 0.9130, a precision of 0.9054, and an F1-score of 0.8129.
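The ELM itself is a standard construction: input weights and biases are drawn at random and frozen, and only the output weights are fitted, in closed form via the Moore-Penrose pseudo-inverse. A minimal sketch on synthetic data — the paper's WBCD pipeline, gain-ratio feature selection, and cloud deployment are not reproduced, and all parameter values here are illustrative:

```python
import numpy as np

def elm_train(X, y, n_hidden=20, seed=0):
    """Fit a single-hidden-layer ELM: random fixed hidden layer,
    output weights solved by least squares (pseudo-inverse)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random biases (never trained)
    H = np.tanh(X @ W + b)                       # hidden-layer activation matrix
    beta = np.linalg.pinv(H) @ y                 # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy binary classification problem: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

W, b, beta = elm_train(X, y)
pred = (elm_predict(X, W, b, beta) > 0.5).astype(int)
accuracy = (pred == y).mean()
```

The appeal for a cloud-hosted diagnosis service is the training cost: there is no iterative back-propagation, only one matrix pseudo-inverse, so retraining on new data is cheap.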
|
research-article |
4 |
49 |
24
|
Christley S, Scarborough W, Salinas E, Rounds WH, Toby IT, Fonner JM, Levin MK, Kim M, Mock SA, Jordan C, Ostmeyer J, Buntzman A, Rubelt F, Davila ML, Monson NL, Scheuermann RH, Cowell LG. VDJServer: A Cloud-Based Analysis Portal and Data Commons for Immune Repertoire Sequences and Rearrangements. Front Immunol 2018; 9:976. [PMID: 29867956 PMCID: PMC5953328 DOI: 10.3389/fimmu.2018.00976] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2018] [Accepted: 04/19/2018] [Indexed: 11/13/2022] Open
Abstract
Background Recent technological advances in immune repertoire sequencing have created tremendous potential for advancing our understanding of adaptive immune response dynamics in various states of health and disease. Immune repertoire sequencing produces large, highly complex data sets, however, which require specialized methods and software tools for their effective analysis and interpretation. Results VDJServer is a cloud-based analysis portal for immune repertoire sequence data that provides access to a suite of tools for a complete analysis workflow, including modules for preprocessing and quality control of sequence reads, V(D)J gene segment assignment, repertoire characterization, and repertoire comparison. VDJServer also provides sophisticated visualizations for exploratory analysis. It is accessible through a standard web browser via a graphical user interface designed for use by immunologists, clinicians, and bioinformatics researchers. VDJServer provides a data commons for public sharing of repertoire sequencing data, as well as private sharing of data between users. We describe the main functionality and architecture of VDJServer and demonstrate its capabilities with use cases from cancer immunology and autoimmunity. Conclusion VDJServer provides a complete analysis suite for human and mouse T-cell and B-cell receptor repertoire sequencing data. The combination of its user-friendly interface and high-performance computing allows large immune repertoire sequencing projects to be analyzed with no programming or software installation required. VDJServer is a web-accessible cloud platform that provides access through a graphical user interface to a data management infrastructure, a collection of analysis tools covering all steps in an analysis, and an infrastructure for sharing data along with workflows, results, and computational provenance. VDJServer is a free, publicly available, and open-source licensed resource.
|
research-article |
7 |
48 |
25
|
Shang M, Luo J. The Tapio Decoupling Principle and Key Strategies for Changing Factors of Chinese Urban Carbon Footprint Based on Cloud Computing. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph18042101. [PMID: 33670040 PMCID: PMC7926756 DOI: 10.3390/ijerph18042101] [Citation(s) in RCA: 48] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/16/2021] [Revised: 02/07/2021] [Accepted: 02/18/2021] [Indexed: 11/23/2022]
Abstract
The expansion of Xi’an City has driven the consumption of energy and land resources, leading to serious environmental pollution problems. This study was therefore carried out to measure the carbon carrying capacity, net carbon footprint, and net carbon footprint pressure index of Xi’an City, and to characterize the carbon sequestration capacity of the Xi’an ecosystem, thereby laying a foundation for developing comprehensive and reasonable low-carbon development measures. The study is intended to provide a reference for China to develop a low-carbon economy through the Tapio decoupling principle. The decoupling relationship between CO2 and its driving factors was explored through the Tapio decoupling model, and time-series data were used to calculate the carbon footprint. The auto-encoder from deep learning was combined with a parallel algorithm in cloud computing: a multilayer perceptron neural network, realized by a parallel back-propagation (BP) learning algorithm based on MapReduce on a cloud computing cluster, was proposed. A partial least squares (PLS) regression model was constructed to analyze the driving factors. The results show that, in terms of city size, the variable importance in projection (VIP) output of the urbanization rate has a strong inhibitory effect on carbon footprint growth, while the VIP value of the permanent population ranks last; in terms of economic development, the impacts of fixed asset investment and the added value of the secondary industry on the carbon footprint rank third and fourth. After economic growth reaches a certain stage, the marginal effect on the carbon footprint exceeds that of economic growth, revealing driving forces and mechanisms that can promote the growth of urban space.
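The Tapio elasticity at the core of this kind of decoupling analysis is simply the ratio of the percentage change in CO2 emissions to the percentage change in GDP over the same period. A minimal sketch, using Tapio's usual 0.8 and 1.2 elasticity bands for a growing economy — the classification helper below is a simplified assumption, not the full eight-state Tapio scheme or anything taken from the paper:

```python
def tapio_index(c0, c1, g0, g1):
    """Tapio decoupling elasticity: (%change in CO2) / (%change in GDP)."""
    return ((c1 - c0) / c0) / ((g1 - g0) / g0)

def classify(e, gdp_growing=True):
    """Map an elasticity to a decoupling state (growing-economy bands only)."""
    if not gdp_growing:
        return "recessive regime (separate bands apply)"
    if e < 0:
        return "strong decoupling"        # emissions fall while GDP grows
    if e < 0.8:
        return "weak decoupling"          # emissions grow slower than GDP
    if e <= 1.2:
        return "expansive coupling"       # emissions track GDP
    return "expansive negative decoupling"  # emissions outpace GDP

# Example: CO2 rises 4% while GDP rises 10% -> elasticity 0.4, weak decoupling.
e = tapio_index(100, 104, 100, 110)
state = classify(e)
```

Computing this elasticity year by year over the time-series data is what yields the decoupling trajectory that the PLS driving-factor analysis then explains.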
|
Journal Article |
4 |
48 |