1. Agulleiro JI, Fernandez JJ. Tomo3D 2.0--exploitation of advanced vector extensions (AVX) for 3D reconstruction. J Struct Biol 2014; 189:147-52. [PMID: 25528570] [DOI: 10.1016/j.jsb.2014.11.009]
Abstract
Tomo3D is a program for fast tomographic reconstruction on multicore computers. Its high speed stems from code optimization, vectorization with Streaming SIMD Extensions (SSE), multithreading and optimization of disk access. Recently, Advanced Vector eXtensions (AVX) have been introduced in the x86 processor architecture. Compared to SSE, AVX doubles the number of simultaneous operations, thus pointing to a potential twofold gain in speed. However, in practice, achieving this potential is extremely difficult. Here, we provide a technical description and an assessment of the optimizations included in Tomo3D to take advantage of AVX instructions. Tomo3D 2.0 allows huge reconstructions to be calculated on standard computers in a matter of minutes. Thus, it will be a valuable tool for electron tomography studies with increasing resolution needs.
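The twofold ceiling mentioned above follows directly from the register widths; a back-of-the-envelope check, assuming the single-precision arithmetic typical of tomographic backprojection:

```latex
\frac{128\ \text{bit (SSE)}}{32\ \text{bit/float}} = 4\ \text{lanes},\qquad
\frac{256\ \text{bit (AVX)}}{32\ \text{bit/float}} = 8\ \text{lanes},\qquad
\frac{8}{4} = 2\times\ \text{theoretical speedup}.
```

Memory traffic and the remaining scalar portions of the code keep measured gains below this bound, which is why the abstract describes the full factor of two as hard to achieve in practice.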

2. Denison RN, Vu AT, Yacoub E, Feinberg DA, Silver MA. Functional mapping of the magnocellular and parvocellular subdivisions of human LGN. Neuroimage 2014; 102 Pt 2:358-69. [PMID: 25038435] [DOI: 10.1016/j.neuroimage.2014.07.019]
Abstract
The magnocellular (M) and parvocellular (P) subdivisions of primate LGN are known to process complementary types of visual stimulus information, but a method for noninvasively defining these subdivisions in humans has proven elusive. As a result, the functional roles of these subdivisions in humans have not been investigated physiologically. To functionally map the M and P subdivisions of human LGN, we used high-resolution fMRI at high field (7 T and 3 T) together with a combination of spatial, temporal, luminance, and chromatic stimulus manipulations. We found that stimulus factors that differentially drive magnocellular and parvocellular neurons in primate LGN also elicit differential BOLD fMRI responses in human LGN and that these responses exhibit a spatial organization consistent with the known anatomical organization of the M and P subdivisions. In test-retest studies, the relative responses of individual voxels to M-type and P-type stimuli were reliable across scanning sessions on separate days and across sessions at different field strengths. The ability to functionally identify magnocellular and parvocellular regions of human LGN with fMRI opens possibilities for investigating the functions of these subdivisions in human visual perception, in patient populations with suspected abnormalities in one of these subdivisions, and in visual cortical processing streams arising from parallel thalamocortical pathways.

3. Samdani A, Vetrivel U. POAP: A GNU parallel based multithreaded pipeline of open babel and AutoDock suite for boosted high throughput virtual screening. Comput Biol Chem 2018. [PMID: 29533817] [DOI: 10.1016/j.compbiolchem.2018.02.012]
Abstract
High throughput virtual screening plays a crucial role in hit identification during the drug discovery process. With the rapid growth of chemical libraries, the virtual screening process becomes computationally challenging, thereby posing a demand for efficiently parallelized software pipelines. Here we present a GNU Parallel based pipeline, POAP, that is programmed to run Open Babel and the AutoDock suite under highly optimized parallelization. The ligand preparation module is a unique feature of POAP, as it offers extensive options for geometry optimization, conformer generation and parallelization, and also quarantines erroneous datasets for seamless operation. POAP also features multi-receptor docking that can be utilized for comparative virtual screening and drug repurposing studies. As demonstrated using different structural datasets, POAP proves to be an efficient pipeline that enables high scalability, seamless operability, dynamic file handling and optimal utilization of CPUs for computationally demanding tasks. POAP is distributed freely under the GNU GPL license and can be downloaded at https://github.com/inpacdb/POAP.
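For readers unfamiliar with how such a pipeline overlaps ligand preparation and docking, a minimal Python sketch of the same idea follows. This is not POAP's code: the choice of AutoDock Vina as the docking engine, the conf.txt receptor configuration, the ligands/ directory and the worker count are assumptions made for illustration.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def prepare_and_dock(smiles_file: Path) -> Path:
    """Convert one ligand to PDBQT with Open Babel, then dock it (here with AutoDock Vina)."""
    pdbqt = smiles_file.with_suffix(".pdbqt")
    docked = smiles_file.with_name(smiles_file.stem + "_docked.pdbqt")
    # Geometry generation and format conversion; a failure here would flag the ligand as erroneous.
    subprocess.run(["obabel", str(smiles_file), "-O", str(pdbqt), "--gen3d"], check=True)
    # Dock against the receptor and search box described in conf.txt (hypothetical file).
    subprocess.run(["vina", "--config", "conf.txt",
                    "--ligand", str(pdbqt), "--out", str(docked)], check=True)
    return docked

if __name__ == "__main__":
    ligands = sorted(Path("ligands").glob("*.smi"))
    with ProcessPoolExecutor(max_workers=8) as pool:  # one worker per CPU core
        for result in pool.map(prepare_and_dock, ligands):
            print("docked:", result)
```

POAP achieves the same overlap with GNU Parallel at the shell level, adding the quarantining and multi-receptor options described in the abstract.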

4. Parallel averaging of size is possible but range-limited: a reply to Marchant, Simons, and De Fockert. Acta Psychol (Amst) 2014; 146:7-18. [PMID: 24361740] [DOI: 10.1016/j.actpsy.2013.11.012]
Abstract
In their recent paper, Marchant, Simons, and De Fockert (2013) claimed that the ability to average between multiple items of different sizes is limited by small samples of arbitrarily attended members of a set. This claim is based on a finding that observers are good at representing the average when an ensemble includes only two sizes distributed among all items (regular sets), but their performance gets worse when the number of sizes increases with the number of items (irregular sets). We argue that an important factor not considered by Marchant et al. (2013) is the range of size variation, which was much larger in their irregular sets. We manipulated this factor across our experiments and found almost the same efficiency of averaging for both regular and irregular sets when the range was stabilized. Moreover, highly regular sets consisting only of small and large items (two-peak distributions) were averaged with greater error than sets with small, large, and intermediate items, suggesting a segmentation threshold that determines whether all variable items are perceived as a single ensemble or as distinct subsets. Our results demonstrate that averaging can indeed operate in parallel, but the visual system has difficulty with it when some items differ too much from the others.

5. Hong FT. The role of pattern recognition in creative problem solving: a case study in search of new mathematics for biology. Prog Biophys Mol Biol 2013; 113:181-215. [PMID: 23597605] [DOI: 10.1016/j.pbiomolbio.2013.03.017]
Abstract
Rosen classified sciences into two categories: formalizable and unformalizable. Whereas formalizable sciences expressed in terms of mathematical theories were highly valued by Rutherford, Hutchins pointed out that unformalizable parts of soft sciences are of genuine interest and importance. Attempts to build mathematical theories for biology in the past century were met with modest and sporadic successes, and only in simple systems. In this article, a qualitative model of humans' high creativity is presented as a starting point to consider whether the gap between soft and hard sciences is bridgeable. Simonton's chance-configuration theory, which mimics the process of evolution, was modified and improved. By treating problem solving as a process of pattern recognition, the known dichotomy of visual thinking vs. verbal thinking can be recast in terms of analog pattern recognition (non-algorithmic process) and digital pattern recognition (algorithmic process), respectively. Additional concepts commonly encountered in computer science, operations research and artificial intelligence were also invoked: heuristic searching, parallel and sequential processing. The refurbished chance-configuration model is now capable of explaining several long-standing puzzles in human cognition: a) why novel discoveries often came without prior warning, b) why some creators had no idea about the source of inspiration even after the fact, c) why some creators were consistently luckier than others, and, last but not least, d) why it was so difficult to explain what intuition, inspiration, insight, hunch, serendipity, etc. are all about. The predictive power of the present model was tested by resolving Zeno's paradox of Achilles and the Tortoise after deliberately invoking visual thinking. Additional evidence of its predictive power must await future large-scale field studies. The analysis was further generalized to the construction of scientific theories in general. This approach is in line with Campbell's evolutionary epistemology. Instead of treating scientific theories as immutable Natural Laws that already existed and were just waiting to be discovered, they are regarded as humans' mental constructs, which must be invented to reconcile with observed natural phenomena. In this way, the pursuit of science is shifted from diligent and systematic (or random) searching for existing Natural Laws to firing up humans' imagination to comprehend Nature's behavioral pattern. The insights gained in understanding human creativity indicate that new mathematics capable of effectively handling parallel processing and human subjectivity is sorely needed. The past classification of formalizability vs. non-formalizability was made in reference to contemporary mathematics. Rosen's conclusion did not preclude future inventions of new biology-friendly mathematics.

6. Chen J, Wrightsman TR, Wessler SR, Stajich JE. RelocaTE2: a high resolution transposable element insertion site mapping tool for population resequencing. PeerJ 2017; 5:e2942. [PMID: 28149701] [PMCID: PMC5274521] [DOI: 10.7717/peerj.2942]
Abstract
Background: Transposable element (TE) polymorphisms are important components of population genetic variation. The functional impacts of TEs in gene regulation and in generating genetic diversity have been observed in multiple species, but the frequency and magnitude of TE variation are underappreciated. Inexpensive and deep sequencing technology has made it affordable to apply population genetic methods to whole genomes with methods that identify single nucleotide and insertion/deletion polymorphisms. However, identifying TE polymorphisms, particularly transposition events or non-reference insertion sites, can be challenging due to the repetitive nature of these sequences, which hampers both the sensitivity and specificity of analysis tools.
Methods: We have developed the tool RelocaTE2 for identification of TE insertion sites at high sensitivity and specificity. RelocaTE2 searches for known TE sequences in whole genome sequencing reads from second generation sequencing platforms such as Illumina. These sequence reads are used as seeds to pinpoint chromosome locations where TEs have transposed. RelocaTE2 detects target site duplication (TSD) of TE insertions, allowing it to report TE polymorphism loci with single base pair precision.
Results and Discussion: The performance of RelocaTE2 was evaluated using both simulated and real sequence data. RelocaTE2 demonstrates a high level of sensitivity and specificity, particularly when the sequence coverage is not shallow. In comparison to other tools tested, RelocaTE2 achieves the best balance between sensitivity and specificity. In particular, RelocaTE2 performs best in prediction of TSDs for TE insertions. Even in highly repetitive regions, such as those tested on rice chromosome 4, RelocaTE2 is able to report up to 95% of simulated TE insertions with less than a 0.1% false positive rate using 10-fold genome coverage resequencing data. RelocaTE2 provides a robust solution to identify TE insertion sites and can be incorporated into analysis workflows in support of describing the complete genotype from light coverage genome sequencing.

7. Han SH, Heo J, Sohn HG, Yu K. Parallel Processing Method for Airborne Laser Scanning Data Using a PC Cluster and a Virtual Grid. Sensors 2009; 9:2555-73. [PMID: 22574032] [PMCID: PMC3348793] [DOI: 10.3390/s90402555]
Abstract
In this study, a parallel processing method using a PC cluster and a virtual grid is proposed for the fast processing of enormous amounts of airborne laser scanning (ALS) data. The method creates a raster digital surface model (DSM) by interpolating point data with inverse distance weighting (IDW), and produces a digital terrain model (DTM) by local minimum filtering of the DSM. To make a consistent comparison of performance between sequential and parallel processing approaches, the means of dealing with boundary data and of selecting interpolation centers were controlled for each processing node in the parallel approach. To test the speedup, efficiency and linearity of the proposed algorithm, actual ALS data of up to 134 million points were processed with a PC cluster consisting of one master node and eight slave nodes. The results showed that parallel processing provides better performance when the computational overhead, the number of processors, and the data size become large. It was verified that the proposed algorithm is a linear-time operation and that the products obtained by parallel processing are identical to those produced by sequential processing.
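The two kernels being parallelized are simple to state; a serial reference sketch in Python/NumPy is given below. Grid coordinates, search radius, IDW power and window size are assumed values, and the actual method distributes virtual-grid tiles (with their boundary data) across cluster nodes rather than looping on one machine.

```python
import numpy as np

def idw_dsm(points, grid_x, grid_y, radius=5.0, power=2.0):
    """Interpolate scattered ALS points (columns x, y, z) onto a raster DSM with
    inverse distance weighting; cells with no neighbours stay NaN."""
    dsm = np.full((len(grid_y), len(grid_x)), np.nan)
    for i, gy in enumerate(grid_y):
        for j, gx in enumerate(grid_x):
            d = np.hypot(points[:, 0] - gx, points[:, 1] - gy)
            near = d < radius                    # interpolation centers within the search radius
            if not near.any():
                continue
            w = 1.0 / np.maximum(d[near], 1e-6) ** power
            dsm[i, j] = np.sum(w * points[near, 2]) / np.sum(w)
    return dsm

def dtm_local_min(dsm, size=3):
    """Approximate the terrain surface by taking the local minimum of the DSM in a size x size window."""
    pad = size // 2
    padded = np.pad(dsm, pad, mode="edge")
    dtm = np.empty_like(dsm)
    for i in range(dsm.shape[0]):
        for j in range(dsm.shape[1]):
            dtm[i, j] = np.nanmin(padded[i:i + size, j:j + size])
    return dtm
```

Because each output cell depends only on points in its neighbourhood, the outer loops partition naturally across nodes, which is consistent with the near-linear scaling reported above.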

8.
Abstract
In the present article, we investigate a largely unstudied cognitive process: word position coding. The question of how readers perceive word order is not trivial: Recent research has suggested that readers associate activated word representations with plausible locations in a sentence-level representation. Rather than simply being dictated by the order in which words are recognized, word position coding may be influenced by bottom-up visual cues (e.g., word length information), as well as by top-down expectations. Here we assessed how flexible word position coding is. We let readers make grammaticality judgments about four-word sentences. The incorrect sentences were constructed by transposing two words in a correct sentence (e.g., “the man can run” became “the can man run”). The critical comparison was between two types of incorrect sentence: one with a transposition of the inner two words, and one with a transposition of the outer two words (“run man can the”). We reasoned that under limited (local) flexibility, it should be easier to classify the outer-transposed sentences as incorrect, because the words were farther away from their plausible locations in this condition. If words were recognized irrespective of location, on the other hand, there should be no difference between these two conditions. As it turned out, we observed longer response times and higher error rates for inner- than for outer-transposed sentences, indicating that local flexibility and top-down expectations can jointly lead the reader to confuse the locations of words, with a probability that increases as the distance between the plausible and actual locations of a word decreases. We conclude that word position coding is subject to a moderate amount of noise.

9. Song Y, Su Q, Yang Q, Zhao R, Yin G, Qin W, Iannetti GD, Yu C, Liang M. Feedforward and feedback pathways of nociceptive and tactile processing in human somatosensory system: A study of dynamic causal modeling of fMRI data. Neuroimage 2021; 234:117957. [PMID: 33744457] [DOI: 10.1016/j.neuroimage.2021.117957]
Abstract
Nociceptive and tactile information is processed in the somatosensory system via reciprocal (i.e., feedforward and feedback) projections between the thalamus, the primary (S1) and secondary (S2) somatosensory cortices. The exact hierarchy of nociceptive and tactile information processing within this 'thalamus-S1-S2' network and whether the processing hierarchy differs between the two somatosensory submodalities remains unclear. In particular, two questions related to the ascending and descending pathways have not been addressed. For the ascending pathways, whether tactile or nociceptive information is processed in parallel (i.e., 'thalamus-S1' and 'thalamus-S2') or in serial (i.e., 'thalamus-S1-S2') remains controversial. For the descending pathways, how corticothalamic feedback regulates nociceptive and tactile processing also remains elusive. Here, we aimed to investigate the hierarchical organization for the processing of nociceptive and tactile information in the 'thalamus-S1-S2' network using dynamic causal modeling (DCM) combined with high-temporal-resolution fMRI. We found that, for both nociceptive and tactile information processing, both S1 and S2 received inputs from thalamus, indicating a parallel structure of ascending pathways for nociceptive and tactile information processing. Furthermore, we observed distinct corticothalamic feedback regulations from S1 and S2, showing that S1 generally exerts inhibitory feedback regulation independent of external stimulation whereas S2 provides additional inhibition to the thalamic activity during nociceptive and tactile information processing in humans. These findings revealed that nociceptive and tactile information processing have similar hierarchical organization within the somatosensory system in the human brain.

10. Lin J, Kramna L, Autio R, Hyöty H, Nykter M, Cinek O. Vipie: web pipeline for parallel characterization of viral populations from multiple NGS samples. BMC Genomics 2017; 18:378. [PMID: 28506246] [PMCID: PMC5430618] [DOI: 10.1186/s12864-017-3721-7]
Abstract
Background: Next generation sequencing (NGS) technology allows laboratories to investigate virome composition in clinical and environmental samples in a culture-independent way. There is a need for bioinformatic tools capable of parallel processing of virome sequencing data by exactly identical methods: this is especially important in studies of multifactorial diseases, or in parallel comparison of laboratory protocols.
Results: We have developed a web-based application allowing direct upload of sequences from multiple virome samples using custom parameters. The samples are then processed in parallel using an identical protocol, and can be easily reanalyzed. The pipeline performs de-novo assembly, taxonomic classification of viruses as well as sample analyses based on user-defined grouping categories. Tables of virus abundance are produced from cross-validation by remapping the sequencing reads to a union of all observed reference viruses. In addition, read sets and reports are created after processing unmapped reads against known human and bacterial ribosome references. Secured interactive results are dynamically plotted with population and diversity charts, clustered heatmaps and a sortable and searchable abundance table.
Conclusions: The Vipie web application is a unique tool for multi-sample metagenomic analysis of viral data, producing searchable hits tables, interactive population maps, alpha diversity measures and clustered heatmaps that are grouped in applicable custom sample categories. Known references such as human genome and bacterial ribosomal genes are optionally removed from unmapped (‘dark matter’) reads. Secured results are accessible and shareable on modern browsers. Vipie is a freely available web-based tool whose code is open source.

11. Ichinose T, Habib S. ON and OFF Signaling Pathways in the Retina and the Visual System. Front Ophthalmol 2022; 2:989002. [PMID: 36926308] [PMCID: PMC10016624] [DOI: 10.3389/fopht.2022.989002]
Abstract
Visual processing starts at the retina of the eye, and signals are then transferred primarily to the visual cortex and the tectum. In the retina, multiple neural networks encode different aspects of visual input, such as color and motion. Subsequently, multiple neural streams in parallel convey unique aspects of visual information to cortical and subcortical regions. Bipolar cells, which are the second order neurons of the retina, separate visual signals evoked by light and dark contrasts and encode them to ON and OFF pathways, respectively. The interplay between ON and OFF neural signals is the foundation for visual processing for object contrast which underlies higher order stimulus processing. ON and OFF pathways have been classically thought to signal in a mirror-symmetric manner. However, while these two pathways contribute synergistically to visual perception in some instances, they have pronounced asymmetries suggesting independent operation in other cases. In this review, we summarize the role of the ON-OFF dichotomy in visual signaling, aiming to contribute to the understanding of visual recognition.

12. Zaslavsky L, Ciufo S, Fedorov B, Tatusova T. Clustering analysis of proteins from microbial genomes at multiple levels of resolution. BMC Bioinformatics 2016; 17 Suppl 8:276. [PMID: 27586436] [PMCID: PMC5009818] [DOI: 10.1186/s12859-016-1112-8]
Abstract
Background: Microbial genomes at the National Center for Biotechnology Information (NCBI) represent a large collection of more than 35,000 assemblies. There are several complexities associated with the data: a great variation in sampling density, since human pathogens are densely sampled while other bacteria are less represented; different protein families occur in annotations with different frequencies; and the quality of genome annotation varies greatly. In order to extract useful information from these sophisticated data, the analysis needs to be performed at multiple levels of phylogenomic resolution and protein similarity, with an adequate sampling strategy.
Results: Protein clustering is used to construct meaningful and stable groups of similar proteins to be used for analysis and functional annotation. Our approach is to create protein clusters at three levels. First, tight clusters in groups of closely-related genomes (species-level clades) are constructed using a combined approach that takes into account both sequence similarity and genome context. Second, clustroids of conservative in-clade clusters are organized into seed global clusters. Finally, global protein clusters are built around the seed clusters. We propose filtering strategies that limit the protein set included in global clustering. The in-clade clustering procedure, subsequent selection of clustroids and organization into seed global clusters provides a robust representation and a high rate of compression. Seed protein clusters are further extended by adding related proteins. Extended seed clusters include a significant part of the data and represent all major known cell machinery. The remaining part, coming from either non-conservative (unique) or rapidly evolving proteins, from rare genomes, or resulting from low-quality annotation, does not group together well. Processing these proteins requires significant computational resources and results in a large number of questionable clusters.
Conclusion: The filtering strategies developed here make it possible to identify and exclude such peripheral proteins, limiting the protein dataset used in global clustering. Overall, the proposed methodology allows the relevant data to be obtained at different levels of detail and data redundancy to be eliminated while keeping biologically interesting variations.

13.
Abstract
The masked-priming lexical decision task has been the paradigm of choice for investigating how readers code for letter identity and position. Insight into the temporal integration of information between prime and target words has pointed out, among other things, that readers do not code for the absolute position of letters. This conception has spurred various accounts of the word recognition process, but the results at present do not favor one account in particular. Thus, employing a new strategy, the present study moves out of the arena of temporal- and into the arena of spatial information integration. We present two lexical decision experiments that tested how the processing of six-letter target words is influenced by simultaneously presented flanking stimuli (each stimulus was presented for 150 ms). We manipulated the orthographic relatedness between the targets and flankers, in terms of both letter identity (same/different letters based on the target's outer/inner letters) and letter position (intact/reversed order of letters and of flankers, contiguous/noncontiguous flankers). Target processing was strongly facilitated by same-letter flankers, and this facilitatory effect was modulated by both letter/flanker order and contiguity. However, when the flankers consisted of the target's inner-positioned letters alone, letter order no longer mattered. These findings suggest that readers may code for the relative position of letters using words' edges as spatial points of reference. We conclude that the flanker paradigm provides a fruitful means to investigate letter-position coding in the fovea and parafovea.

14. gEMfitter: a highly parallel FFT-based 3D density fitting tool with GPU texture memory acceleration. J Struct Biol 2013; 184:348-54. [PMID: 24060989] [DOI: 10.1016/j.jsb.2013.09.010]
Abstract
Fitting high resolution protein structures into low resolution cryo-electron microscopy (cryo-EM) density maps is an important technique for modeling the atomic structures of very large macromolecular assemblies. This article presents "gEMfitter", a highly parallel fast Fourier transform (FFT) EM density fitting program which can exploit the special hardware properties of modern graphics processor units (GPUs) to accelerate both the translational and rotational parts of the correlation search. In particular, by using the GPU's special texture memory hardware to rotate 3D voxel grids, the cost of rotating large 3D density maps is almost completely eliminated. Compared to performing 3D correlations on one core of a contemporary central processor unit (CPU), running gEMfitter on a modern GPU gives up to 26-fold speed-up. Furthermore, using our parallel processing framework, this speed-up increases linearly with the number of CPUs or GPUs used. Thus, it is now possible to use routinely more robust but more expensive 3D correlation techniques. When tested on low resolution experimental cryo-EM data for the GroEL-GroES complex, we demonstrate the satisfactory fitting results that may be achieved by using a locally normalised cross-correlation with a Laplacian pre-filter, while still being up to three orders of magnitude faster than the well-known COLORES program.
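For orientation, the translational part of such a search reduces to one FFT-based cross-correlation per trial rotation. The NumPy sketch below shows that single step only, using a plain unnormalised correlation on a shared voxel grid (an assumption of the sketch); the locally normalised scoring, Laplacian pre-filter and GPU texture-memory rotation described in the abstract are not reproduced.

```python
import numpy as np

def best_translation(em_map, probe):
    """Score every translation of `probe` inside `em_map` with one 3D FFT-based
    cross-correlation and return the best shift and its score.

    Both arrays must share the same grid shape; the probe is an atomic model
    already rasterised to a density volume.
    """
    f_map = np.fft.fftn(em_map)
    f_probe = np.fft.fftn(probe)
    corr = np.fft.ifftn(f_map * np.conj(f_probe)).real   # correlation theorem
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    return shift, corr.max()
```

A full fit would repeat this over a list of rotated copies of the probe and keep the best-scoring pose, which is the part gEMfitter accelerates with GPU texture-memory rotations.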

15. Macpherson T, Matsumoto M, Gomi H, Morimoto J, Uchibe E, Hikida T. Parallel and hierarchical neural mechanisms for adaptive and predictive behavioral control. Neural Netw 2021; 144:507-521. [PMID: 34601363] [DOI: 10.1016/j.neunet.2021.09.009]
Abstract
Our brain can be recognized as a network of largely hierarchically organized neural circuits that operate to control specific functions, but when acting in parallel, enable the performance of complex and simultaneous behaviors. Indeed, many of our daily actions require concurrent information processing in sensorimotor, associative, and limbic circuits that are dynamically and hierarchically modulated by sensory information and previous learning. This organization of information processing in biological organisms has served as a major inspiration for artificial intelligence and has helped to create in silico systems capable of matching or even outperforming humans in several specific tasks, including visual recognition and strategy-based games. However, the development of human-like robots that are able to move as quickly as humans and respond flexibly in various situations remains a major challenge and indicates an area where further use of parallel and hierarchical architectures may hold promise. In this article we review several important neural and behavioral mechanisms organizing hierarchical and predictive processing for the acquisition and realization of flexible behavioral control. Then, inspired by the organizational features of brain circuits, we introduce a multi-timescale parallel and hierarchical learning framework for the realization of versatile and agile movement in humanoid robots.

16. Agliari E, Barra A, Galluzzi A, Guerra F, Tantari D, Tavani F. Hierarchical neural networks perform both serial and parallel processing. Neural Netw 2015; 66:22-35. [PMID: 25795510] [DOI: 10.1016/j.neunet.2015.02.010]
Abstract
In this work we study a Hebbian neural network, where neurons are arranged according to a hierarchical architecture such that their couplings scale with their reciprocal distance. As a full statistical mechanics solution is not yet available, after a streamlined introduction to the state of the art via that route, the problem is consistently approached through the signal-to-noise technique and extensive numerical simulations. Focusing on the low-storage regime, where the number of stored patterns grows at most logarithmically with the system size, we prove that these non-mean-field Hopfield-like networks display a richer phase diagram than their classical counterparts. In particular, these networks are able to perform serial processing (i.e. retrieve one pattern at a time through a complete rearrangement of the whole ensemble of neurons) as well as parallel processing (i.e. retrieve several patterns simultaneously, delegating the management of different patterns to diverse communities that build the network). The tuning between the two regimes is given by the rate of the coupling decay and by the level of noise affecting the system. The price to pay for those remarkable capabilities lies in a network capacity smaller than that of the mean-field counterpart, thus yielding a new budget principle: the wider the multitasking capabilities, the lower the network load and vice versa. This may have important implications in our understanding of biological complexity.

17.
Abstract
Feature Integration Theory (FIT) set out the groundwork for much of the work in visual cognition since its publication. One of the most important legacies of this theory has been the emphasis on feature-specific processing. Nowadays, visual features are thought of as a sort of currency of visual attention (e.g., features can be attended, processing of attended features is enhanced), and attended features are thought to guide attention towards likely targets in a scene. Here we propose an alternative theory - the Target Contrast Signal Theory - based on the idea that when we search for a specific target, it is not the target-specific features that guide our attention towards the target; rather, what determines behavior is the result of an active comparison between the target template in mind and every element present in the scene. This comparison occurs in parallel and is aimed at rejecting from consideration items that peripheral vision can confidently reject as being non-targets. The speed at which each item is evaluated is determined by the overall contrast between that item and the target template. We present computational simulations to demonstrate the workings of the theory as well as eye-movement data that support core predictions of the theory. The theory is discussed in the context of FIT and other important theories of visual search.

18. Minati L, Zacà D, D'Incerti L, Jovicich J. Fast computation of voxel-level brain connectivity maps from resting-state functional MRI using l₁-norm as approximation of Pearson's temporal correlation: proof-of-concept and example vector hardware implementation. Med Eng Phys 2014; 36:1212-7. [PMID: 25023958] [DOI: 10.1016/j.medengphy.2014.06.012]
Abstract
An outstanding issue in graph-based analysis of resting-state functional MRI is the choice of network nodes. Individual consideration of entire brain voxels may represent a less biased approach than parcellating the cortex according to pre-determined atlases, but entails establishing connectedness for 10^9 to 10^11 links, with often prohibitive computational cost. Using a representative Human Connectome Project dataset, we show that, following appropriate time-series normalization, it may be possible to accelerate connectivity determination by replacing Pearson correlation with the l1-norm. Even though the adjacency matrices derived from correlation coefficients and l1-norms are not identical, their similarity is high. Further, we describe and provide in full an example vector hardware implementation of the l1-norm on an array of 4096 zero instruction-set processors. Calculation times <1000 s are attainable, removing the major deterrent to voxel-based resting-state network mapping and revealing fine-grained node degree heterogeneity. The l1-norm should be given consideration as a substitute for correlation in very high-density resting-state functional connectivity analyses.
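The substitution under evaluation is easy to state: for z-scored time series, Pearson's r is an exact function of the squared Euclidean distance, and the cheaper l1 distance is used as an approximately monotone surrogate. A sketch with hypothetical variable names (the mapping onto the 4096-core array is not shown):

```python
import numpy as np

def zscore(ts):
    """Normalise each time series (rows) to zero mean and unit variance."""
    ts = ts - ts.mean(axis=1, keepdims=True)
    return ts / ts.std(axis=1, keepdims=True)

def pearson_vs_l1(ts, i, j):
    """Exact Pearson r between voxels i and j, and the cheap l1 surrogate used in its place."""
    x, y = zscore(ts)[[i, j]]
    T = x.size
    r = np.dot(x, y) / T                 # Pearson correlation of z-scored series
    # For z-scored data, ||x - y||_2^2 = 2T(1 - r) exactly; the l1 distance is only a
    # statistically related surrogate, which is the approximation the paper assesses.
    l1 = np.abs(x - y).sum()
    return r, l1
```

Smaller l1 distances correspond to larger correlations, so thresholding or ranking links by -l1 approximates thresholding or ranking by r while avoiding the multiply-accumulate work of a full Pearson computation.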

19. Liu Z, Li Y, Cutter MG, Paterson KB, Wang J. A transposed-word effect across space and time: Evidence from Chinese. Cognition 2021; 218:104922. [PMID: 34634533] [DOI: 10.1016/j.cognition.2021.104922]
Abstract
A compelling account of the reading process holds that words must be encoded serially, and so recognized strictly one at a time in the order they are encountered. However, this view has been challenged recently, based on evidence showing that readers sometimes fail to notice when adjacent words appear in ungrammatical order. This is argued to show that words are actually encoded in parallel, so that multiple words are processed simultaneously and therefore might be recognized out of order. We tested this account in an experiment in Chinese with 112 skilled readers, employing methods used previously to demonstrate flexible word order processing, and display techniques that allowed or disallowed the parallel encoding of words. The results provided evidence for flexible word order processing even when words must be encoded serially. Accordingly, while word order can be processed flexibly during reading, this need not entail that words are encoded in parallel.

20. Detection of scale-freeness in brain connectivity by functional MRI: signal processing aspects and implementation of an open hardware co-processor. Med Eng Phys 2013; 35:1525-31. [PMID: 23742932] [DOI: 10.1016/j.medengphy.2013.04.013]
Abstract
An outstanding issue in graph-theoretical studies of brain functional connectivity is the lack of formal criteria for choosing parcellation granularity and correlation threshold. Here, we propose detectability of scale-freeness as a benchmark to evaluate time-series extraction settings. Scale-freeness, i.e., power-law distribution of node connections, is a fundamental topological property that is highly conserved across biological networks, and as such needs to be manifest within plausible reconstructions of brain connectivity. We demonstrate that scale-free network topology only emerges when adequately fine cortical parcellations are adopted alongside an appropriate correlation threshold, and provide the full design of the first open-source hardware platform to accelerate the calculation of large linear regression arrays.
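The benchmark itself is compact to express: build the graph at a given parcellation and correlation threshold, then test whether node degrees follow a power law. A toy NumPy sketch follows; the threshold value and the use of a simple log-log least-squares slope are assumptions made for brevity (dedicated maximum-likelihood estimators are preferable in practice).

```python
import numpy as np

def degree_powerlaw_slope(timeseries, threshold=0.3):
    """Threshold the correlation matrix of regional/voxel time series (rows)
    into a binary graph and fit the degree distribution on log-log axes."""
    corr = np.corrcoef(timeseries)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    degrees = adj.sum(axis=1)
    values, counts = np.unique(degrees[degrees > 0], return_counts=True)
    # Slope of log P(k) versus log k; in practice the linearity of this relation,
    # not just the slope, is what indicates a power-law (scale-free) distribution.
    slope, _ = np.polyfit(np.log(values), np.log(counts / counts.sum()), 1)
    return slope
```

Sweeping parcellation granularity and threshold and checking where power-law behaviour emerges is the selection criterion the abstract proposes.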

21. Kim C, Chong SC. Partial awareness can be induced by independent cognitive access to different spatial frequencies. Cognition 2021; 212:104692. [PMID: 33773425] [DOI: 10.1016/j.cognition.2021.104692]
Abstract
Partial awareness, an intermediate state between complete consciousness and unconsciousness, has been explained by independent cognitive access to different levels of representation in hierarchical visual processing. This account, however, cannot explain graded visual experiences in low levels. We aimed to explain partial awareness in low levels of visual processing by independent cognitive access to different spatial frequencies. To observe partial awareness stably, we used a novel method. Stimuli were presented briefly (12 ms) and repeatedly with a specific inter-stimulus interval, ranging from 0 to 235 ms. By using various stimuli containing high and low spatial frequencies (superimposed sinusoidal gratings, Navon letters, and scenes), we found that conscious percept was degraded with increasing inter-stimulus intervals. However, the degree of degradation was smaller for low spatial frequency than for high spatial frequency information. Our results reveal that cognitive access to different spatial frequencies can occur independently and this can explain partial awareness in low levels of visual processing.

22. de Molina C, Serrano E, Garcia-Blas J, Carretero J, Desco M, Abella M. GPU-accelerated iterative reconstruction for limited-data tomography in CBCT systems. BMC Bioinformatics 2018; 19:171. [PMID: 29764362] [PMCID: PMC5952580] [DOI: 10.1186/s12859-018-2169-3]
Abstract
Background: Standard cone-beam computed tomography (CBCT) involves the acquisition of at least 360 projections rotating through 360 degrees. Nevertheless, there are cases in which only a few projections can be taken in a limited angular span, such as during surgery, where rotation of the source-detector pair is limited to less than 180 degrees. Reconstruction of limited data with the conventional method proposed by Feldkamp, Davis and Kress (FDK) results in severe artifacts. Iterative methods may compensate for the lack of data by including additional prior information, although they imply a high computational burden and memory consumption.
Results: We present an accelerated implementation of an iterative method for CBCT following the Split Bregman formulation, which reduces computational time through GPU-accelerated kernels. The implementation enables the reconstruction of large volumes (>1024^3 pixels) using partitioning strategies in forward- and back-projection operations. We evaluated the algorithm on small-animal data for different scenarios with different numbers of projections, angular spans, and projection sizes. Reconstruction time varied linearly with the number of projections and quadratically with projection size but remained almost unchanged with angular span. Forward- and back-projection operations represent 60% of the total computational burden.
Conclusion: Efficient implementation using parallel processing and large-memory management strategies together with GPU kernels enables the use of advanced reconstruction approaches that are needed in limited-data scenarios. Our GPU implementation showed a significant time reduction (up to 48×) compared to a CPU-only implementation, reducing the total reconstruction time from several hours to a few minutes.

23. Shi Y, Veidenbaum AV, Nicolau A, Xu X. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU). J Neurosci Methods 2014; 239:1-10. [PMID: 25277633] [DOI: 10.1016/j.jneumeth.2014.09.022]
Abstract
Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis.
New method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power.
Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU, with up to a 22× speedup depending on the computational task. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop.
Comparison with existing methods: To the best of our knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing.
Conclusions: Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision.

24. Yim WC, Cushman JC. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments. PeerJ 2017; 5:e3486. [PMID: 28652936] [PMCID: PMC5483034] [DOI: 10.7717/peerj.3486]
Abstract
Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. This freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
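The core idea, splitting the query file and running an independent BLAST process per chunk, can be sketched in a few lines of Python. This is not DCBLAST itself, which generates job scripts for HPC schedulers across many nodes; the blastp program, the swissprot database name, the chunk count and file names are assumptions made for illustration.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def split_fasta(fasta: Path, n_chunks: int):
    """Split a multi-sequence FASTA query into roughly equal chunks, one file per chunk."""
    records = fasta.read_text().split(">")[1:]          # drop the empty leading element
    paths = []
    for k in range(n_chunks):
        part = fasta.with_name(f"{fasta.stem}.part{k}.fa")
        part.write_text("".join(">" + r for r in records[k::n_chunks]))
        paths.append(part)
    return paths

def run_blast(chunk: Path) -> Path:
    """Run one independent BLAST search on a query chunk, writing tabular output."""
    out = chunk.with_suffix(".tsv")
    subprocess.run(["blastp", "-query", str(chunk), "-db", "swissprot",
                    "-outfmt", "6", "-out", str(out)], check=True)
    return out

if __name__ == "__main__":
    parts = split_fasta(Path("queries.fa"), n_chunks=8)
    with ProcessPoolExecutor(max_workers=8) as pool:    # one BLAST process per core (or node)
        results = list(pool.map(run_blast, parts))
    Path("all_hits.tsv").write_text("".join(p.read_text() for p in results))
```

Concatenating the per-chunk tabular outputs is valid because each query sequence is searched independently of the others, which is the property DCBLAST exploits to scale across hundreds of cores.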

25. A secure and efficiently searchable health information architecture. J Biomed Inform 2016; 61:237-46. [PMID: 27109933] [DOI: 10.1016/j.jbi.2016.04.004]
Abstract
Patient-centric repositories of health records are an important component of health information infrastructure. However, patient information in a single repository is potentially vulnerable to loss of the entire dataset from a single unauthorized intrusion. A new health record storage architecture, the personal grid, eliminates this risk by separately storing and encrypting each person's record. The tradeoff for this improved security is that a personal grid repository must be sequentially searched, since each record must be individually accessed and decrypted. To allow reasonable search times for large numbers of records, parallel processing with hundreds (or even thousands) of on-demand virtual servers (now available in cloud computing environments) is used. Estimated search times for a 10 million record personal grid using 500 servers vary from 7 to 33 min depending on the complexity of the query. Since extremely rapid searching is not a critical requirement of health information infrastructure, the personal grid may provide a practical and useful alternative architecture that eliminates the large-scale security vulnerabilities of traditional databases by sacrificing unnecessary searching speed.
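The quoted times are consistent with a simple division of labour across the 500 servers (7 min = 420 s, 33 min = 1,980 s):

```latex
\frac{10^{7}\ \text{records}}{500\ \text{servers}} = 2\times10^{4}\ \text{records per server},\qquad
\frac{420\ \text{s}}{2\times10^{4}} \approx 21\ \text{ms},\qquad
\frac{1980\ \text{s}}{2\times10^{4}} \approx 99\ \text{ms per record}.
```

Each server sequentially decrypts and scans only its own slice, so total query latency falls roughly in proportion to the number of on-demand servers provisioned.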