26. Data-driven emergence of convolutional structure in neural networks. Proc Natl Acad Sci U S A 2022;119:e2201854119. PMID: 36161906; DOI: 10.1073/pnas.2201854119.
Abstract
Exploiting data invariances is crucial for efficient learning in both artificial and biological neural circuits. Understanding how neural networks can discover appropriate representations capable of harnessing the underlying symmetries of their inputs is thus a central question in machine learning and neuroscience. Convolutional neural networks, for example, were designed to exploit translation symmetry, and their capabilities triggered the first wave of deep learning successes. However, learning convolutions directly from translation-invariant data with a fully connected network has so far proven elusive. Here we show how initially fully connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs, resulting in localized, space-tiling receptive fields. These receptive fields match the filters of a convolutional network trained on the same task. By carefully designing data models for the visual scene, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs, which has long been recognized as the hallmark of natural images. We provide an analytical and numerical characterization of the pattern formation mechanism responsible for this phenomenon in a simple model and find an unexpected link between receptive field formation and tensor decomposition of higher-order input correlations. These results provide a perspective on the development of low-level feature detectors in various sensory modalities and pave the way for studying the impact of higher-order statistics on learning in neural networks.

27. Srivastava HM, Lone WZ, Shah FA, Zayed AI. Discrete Quadratic-Phase Fourier Transform: Theory and Convolution Structures. Entropy (Basel) 2022;24:1340. PMID: 37420360; DOI: 10.3390/e24101340.
Abstract
The discrete Fourier transform is considered one of the most powerful tools in digital signal processing, enabling us to find the spectrum of finite-duration signals. In this article, we introduce the notion of the discrete quadratic-phase Fourier transform, which encompasses a wider class of discrete Fourier transforms, including the classical discrete Fourier transform, the discrete fractional Fourier transform, the discrete linear canonical transform, the discrete Fresnel transform, and so on. To begin with, we examine the fundamental aspects of the discrete quadratic-phase Fourier transform, including the formulation of Parseval's and reconstruction formulae. To extend the scope of the present study, we establish weighted and non-weighted convolution and correlation structures associated with the discrete quadratic-phase Fourier transform.
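
As a point of reference for the discrete construction, the underlying quadratic-phase Fourier transform is built on a five-parameter chirp kernel; the form below is one common normalization from the literature, which may differ in detail from the authors' conventions:

```latex
% Quadratic-phase Fourier transform with parameter set \Lambda = (a, b, c, d, e), b \neq 0.
% The discrete version samples t and \omega on finite grids.
\mathcal{Q}_{\Lambda}[f](\omega)
  = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} f(t)\,
    e^{-i\left(a t^{2} + b t \omega + c \omega^{2} + d t + e \omega\right)} \,\mathrm{d}t
```

Choosing (a, b, c, d, e) = (0, 1, 0, 0, 0) recovers the classical Fourier transform, while other parameter choices yield the fractional Fourier, linear canonical, and Fresnel cases listed above.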

28. Liang J, Yang C, Zeng M, Wang X. TransConver: transformer and convolution parallel network for developing automatic brain tumor segmentation in MRI images. Quant Imaging Med Surg 2022;12:2397-2415. PMID: 35371952; PMCID: PMC8923874; DOI: 10.21037/qims-21-919.
Abstract
BACKGROUND Medical image segmentation plays a vital role in computer-aided diagnosis (CAD) systems. Both convolutional neural networks (CNNs), with strong local information extraction capacities, and transformers, with excellent global representation capacities, have achieved remarkable performance in medical image segmentation. However, because of the semantic differences between local and global features, how to combine convolution and transformers effectively is an important challenge in medical image segmentation. METHODS In this paper, we proposed TransConver, a U-shaped segmentation network based on convolution and transformer for automatic and accurate brain tumor segmentation in MRI images. Unlike recently proposed transformer- and convolution-based models, we proposed a parallel module named transformer-convolution inception (TC-Inception), which extracts local and global information via convolution blocks and transformer blocks, respectively, and integrates them through a cross-attention fusion with global and local feature (CAFGL) mechanism. Meanwhile, an improved skip connection structure named skip connection with cross-attention fusion (SCCAF) can alleviate the semantic differences between encoder features and decoder features for better feature fusion. In addition, we designed 2D-TransConver and 3D-TransConver for 2D and 3D brain tumor segmentation tasks, respectively, and verified the performance and advantages of our model on brain tumor datasets. RESULTS We trained our model on 335 cases from the training dataset of MICCAI BraTS2019 and evaluated its performance on 66 cases from MICCAI BraTS2018 and 125 cases from MICCAI BraTS2019. TransConver achieved the best average Dice scores of 83.72% and 86.32% on BraTS2019 and BraTS2018, respectively. CONCLUSIONS We proposed a transformer and convolution parallel network named TransConver for brain tumor segmentation. The TC-Inception module effectively extracts global information while retaining local details. The experimental results demonstrated that good segmentation requires the model to extract local fine-grained details and global semantic information simultaneously, and TransConver effectively improves the accuracy of brain tumor segmentation.
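
A minimal PyTorch sketch of the parallel idea described here: a convolutional branch for local features alongside a self-attention branch for global context, fused back to one feature map. The class name and the simple concatenate-and-project fusion are hypothetical simplifications, not the paper's TC-Inception/CAFGL design:

```python
import torch
import torch.nn as nn

class ParallelConvAttnBlock(nn.Module):
    """Toy parallel convolution/transformer block (illustrative only)."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: a small convolutional stack.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global branch: self-attention over flattened spatial positions.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)   # naive fusion

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)              # (B, H*W, C)
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, glob], dim=1))

print(ParallelConvAttnBlock(32)(torch.randn(1, 32, 16, 16)).shape)
```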

29. Lobos RA, Haldar JP. On the shape of convolution kernels in MRI reconstruction: rectangles versus ellipsoids. Magn Reson Med 2022;87:2989-2996. PMID: 35212009; DOI: 10.1002/mrm.29189.
Abstract
PURPOSE Many MRI reconstruction methods (including GRAPPA, SPIRiT, ESPIRiT, LORAKS, and convolutional neural network [CNN] methods) involve shift-invariant convolution models. Rectangular convolution kernel shapes are often chosen by default, although ellipsoidal kernel shapes have potentially appealing theoretical characteristics. In this work, we systematically investigate the differences between different kernel shape choices in several contexts. THEORY It is well-understood that a rectangular region of k-space is associated with anisotropic spatial resolution, while ellipsoidal regions can be associated with more isotropic resolution. Further, for a fixed spatial resolution, ellipsoidal kernels are associated with substantially fewer parameters than rectangular kernels. These characteristics suggest that ellipsoidal kernels may have certain advantages over rectangular kernels. METHODS We used real retrospectively undersampled k-space data to empirically study the characteristics of rectangular and ellipsoidal kernels in the context of seven methods (GRAPPA, SPIRiT, ESPIRiT, SAKE, LORAKS, AC-LORAKS, and CNN-based reconstructions). RESULTS Empirical results suggest that both kernel shapes can produce reconstructed images with similar error metrics, although the ellipsoidal shape can often achieve this with reduced computation time and memory usage and/or fewer model parameters. CONCLUSION Ellipsoidal kernel shapes may offer advantages over rectangular kernel shapes in various MRI applications.
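
The parameter-count argument is easy to verify numerically: at equal half-widths, an ellipsoidal kernel keeps only the k-space offsets inside the inscribed ellipse of the rectangle's bounding box. A small counting sketch (sizes illustrative):

```python
import numpy as np

def kernel_taps(radii, shape="ellipsoid"):
    """Number of free parameters in a kernel with the given half-widths."""
    if shape == "rectangle":
        return int(np.prod([2 * r + 1 for r in radii]))
    axes = [np.arange(-r, r + 1) / r for r in radii]      # normalized offsets
    grid = np.meshgrid(*axes, indexing="ij")
    return int((sum(g ** 2 for g in grid) <= 1.0).sum())  # inside the ellipsoid

for radii in [(3, 3), (5, 5), (7, 7, 7)]:
    print(radii, kernel_taps(radii, "rectangle"), kernel_taps(radii))
```

In 2D the ellipse discards roughly 40% of the rectangle's taps, and in 3D the ellipsoid discards nearly 60%, consistent with the reduced model size and memory use reported above.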

30. Qiu L, Cai W, Zhang M, Zhu W, Wang L. Two-stage ECG signal denoising based on deep convolutional network. Physiol Meas 2021;42. PMID: 34715686; DOI: 10.1088/1361-6579/ac34ea.
Abstract
Background. An electrocardiogram (ECG) is an effective and non-invasive indicator for the detection and prevention of arrhythmia. ECG signals are susceptible to noise contamination, which can lead to errors in ECG interpretation; ECG pretreatment is therefore important for accurate analysis. Methods. The ECG data used are from CPSC2018, and the noise signals are from the MIT-BIH Noise Stress Test Database. In the experiments, the signal-to-noise ratio (SNR), the root mean square error (RMSE), and the correlation coefficient P are used to evaluate the performance of the network. The proposed method is divided into two stages. In the first stage, a Ude-net model is designed to eliminate noise from the ECG signal. The DR-net model in the second stage is used to reconstruct the ECG signal and to correct the waveform distortion caused by noise removal in the first stage. In this paper, the Ude-net and the DR-net are constructed by convolution to achieve end-to-end mapping from noisy ECG signals to clean ECG signals. Results. On the SNR, RMSE and P indicators, the Ude-net + DR-net combination proposed in this paper achieves the best performance compared with the other five schemes (FCN, U-net, etc.). On the three data sets, SNR is increased by 11.61 dB, 13.71 dB and 14.40 dB, and RMSE is reduced by 10.46 × 10^-2, 21.55 × 10^-2 and 15.98 × 10^-2. Conclusions. Although noise removal and detail preservation are conflicting goals, the proposed two-stage method achieves both to a large extent, eliminating noise while preserving the effective details of the signals. The proposed method has good application prospects in clinical practice.
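
The three figures of merit are straightforward to compute; a sketch with the definitions assumed here (the paper may report SNR improvement relative to the noisy input rather than this absolute form):

```python
import numpy as np

def denoising_metrics(clean, denoised):
    """SNR in dB, RMSE, and correlation coefficient P (assumed definitions)."""
    err = clean - denoised
    snr = 10 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))
    rmse = np.sqrt(np.mean(err ** 2))
    p = np.corrcoef(clean, denoised)[0, 1]
    return snr, rmse, p

t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 1.2 * t)                         # toy waveform
noisy = clean + 0.2 * np.random.default_rng(0).standard_normal(t.size)
print(denoising_metrics(clean, noisy))                      # pre-denoising baseline
```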

31. Messenger DA, Bortz DM. Weak SINDy for Partial Differential Equations. J Comput Phys 2021;443:110525. PMID: 34744183; PMCID: PMC8570254; DOI: 10.1016/j.jcp.2021.110525.
Abstract
Sparse Identification of Nonlinear Dynamics (SINDy) is a method of system discovery that has been shown to successfully recover governing dynamical systems from data [6, 39]. Recently, several groups have independently discovered that the weak formulation provides orders of magnitude better robustness to noise. Here we extend our Weak SINDy (WSINDy) framework introduced in [28] to the setting of partial differential equations (PDEs). The elimination of pointwise derivative approximations via the weak form enables effective machine-precision recovery of model coefficients from noise-free data (i.e. below the tolerance of the simulation scheme) as well as robust identification of PDEs in the large noise regime (with signal-to-noise ratio approaching one in many well-known cases). This is accomplished by discretizing a convolutional weak form of the PDE and exploiting separability of test functions for efficient model identification using the Fast Fourier Transform. The resulting WSINDy algorithm for PDEs has a worst-case computational complexity of O(N^(D+1) log(N)) for datasets with N points in each of D + 1 dimensions. Furthermore, our Fourier-based implementation reveals a connection between robustness to noise and the spectra of test functions, which we utilize in an a priori selection algorithm for test functions. Finally, we introduce a learning algorithm for the threshold in sequential-thresholding least-squares (STLS) that enables model identification from large libraries, and we utilize scale invariance at the continuum level to identify PDEs from poorly-scaled datasets. We demonstrate WSINDy's robustness, speed and accuracy on several challenging PDEs. Code is publicly available on GitHub at https://github.com/MathBioCU/WSINDy_PDE.
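
The central trick, replacing derivatives of noisy data with derivatives of a smooth, compactly supported test function inside an FFT-evaluated convolution, fits in a few lines. This is a toy version of the idea, not the WSINDy_PDE implementation; the bump exponent and grid sizes are arbitrary:

```python
import numpy as np
from scipy.signal import fftconvolve

# Test function: polynomial bump (1 - x^2)^p supported on [-1, 1].
x = np.linspace(-1, 1, 65)
phi = (1 - x ** 2) ** 4
phi /= phi.sum() * (x[1] - x[0])      # normalize to unit integral
dphi = np.gradient(phi, x)            # derivative carried by the test function

u = np.random.rand(256, 256)          # stand-in for sampled PDE data u(x, t)
# Integration by parts: <phi, u_x> = -<phi', u>, so weak x-derivatives of u
# come from convolving with -phi' -- no finite differencing of noisy data.
weak_ux = fftconvolve(u, -dphi[:, None], mode="valid")
print(weak_ux.shape)                  # one weak-derivative feature per window
```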

32. Mukhtar H, Qaisar SM, Zaguia A. Deep Convolutional Neural Network Regularization for Alcoholism Detection Using EEG Signals. Sensors (Basel) 2021;21:5456. PMID: 34450899; PMCID: PMC8402228; DOI: 10.3390/s21165456.
Abstract
Alcoholism is attributed to regular or excessive drinking of alcohol and leads to the disturbance of the neuronal system in the human brain. This results in certain malfunctioning of neurons that can be detected by an electroencephalogram (EEG) using several electrodes placed at appropriate positions on the scalp. It is of great interest to be able to classify an EEG activity as that of a normal person or an alcoholic person using data from the minimum possible number of electrodes (or channels). Due to the complex nature of EEG signals, accurate classification of alcoholism using only a small dataset is a challenging task. Artificial neural networks, specifically convolutional neural networks (CNNs), provide efficient and accurate results in various pattern-based classification problems. In this work, we apply a CNN to raw EEG data and demonstrate how we achieved 98% average accuracy by optimizing a baseline CNN model and outperforming its results across a range of performance evaluation metrics on the University of California at Irvine Machine Learning (UCI-ML) EEG dataset. This article explains the stepwise improvement of the baseline model using the dropout, batch normalization, and kernel regularization techniques and provides a comparison of the two models that can be beneficial for aspiring practitioners who aim to develop similar classification models with CNNs. A performance comparison with other approaches using the same dataset is also provided.
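
A compact Keras sketch of the three techniques named above (dropout, batch normalization, L2 kernel regularization) in a small 1D CNN over raw EEG windows; the input shape, filter counts and rates are illustrative, not the paper's tuned values:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Input(shape=(256, 64)),                     # time steps x channels (assumed)
    layers.Conv1D(32, 5, padding="same",
                  kernel_regularizer=regularizers.l2(1e-4)),  # kernel regularization
    layers.BatchNormalization(),                       # batch normalization
    layers.Activation("relu"),
    layers.MaxPooling1D(2),
    layers.Dropout(0.5),                               # dropout
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),             # alcoholic vs. control
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```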

33. Gabrielli F, Megemont M, Dallel R, Luccarini P, Monconduit L. Model-based signal processing enables bidirectional inferring between local field potential and spikes evoked by noxious stimulation. Brain Res Bull 2021;174:212-219. PMID: 34089782; DOI: 10.1016/j.brainresbull.2021.05.025.
Abstract
BACKGROUND Recording spontaneous and evoked activities by means of unitary extracellular recordings and local field potentials (LFP) is key to understanding the mechanisms of neural coding. The LFP is one of the most popular and easiest methods to measure the activity of a population of neurons, but it is also a composite signal known to be difficult to interpret and model. There is a growing need to clarify the relationship between spiking activity and the LFP. Here, we hypothesized that the LFP could be inferred from spikes under evoked noxious conditions. METHOD Recording was performed from the medullary dorsal horn (MDH) in deeply anesthetized rats. We detail a process to isolate the C-fiber (nociceptive) evoked activity by removing the A-fiber evoked activity using a model-based approach. Then, we applied convolution kernel theory and optimization algorithms to infer the C-fiber LFP from the single-cell spikes. Finally, we used a probability density function and an optimization algorithm to infer the spike distribution from the LFP. RESULTS We successfully extracted the C-fiber LFP in all data recordings. We observed that C-fiber spikes preceded the C-fiber LFP and correlated more closely with the LFP derivative. Finally, we inferred the LFP from spikes with an excellent correlation coefficient (r = 0.9) and, in reverse, generated the spike distribution from the LFP with a good correlation coefficient (r = 0.7) on spike counts. CONCLUSION We introduced kernel convolution theory to successfully infer the LFP from spikes, and we demonstrated that we could generate the spike distribution from the LFP.
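
The forward half of the pipeline, modeling the LFP as the spike train convolved with an unknown kernel and fitting that kernel by least squares, can be sketched as follows; the signals are synthetic, and the paper's A-fiber removal and probabilistic inverse step are omitted:

```python
import numpy as np
from scipy.linalg import lstsq, toeplitz

rng = np.random.default_rng(0)
n, klen = 2000, 60
spikes = (rng.random(n) < 0.02).astype(float)            # toy spike train
true_k = np.exp(-np.arange(klen) / 12) * np.sin(np.arange(klen) / 6)
lfp = np.convolve(spikes, true_k)[:n] + 0.05 * rng.standard_normal(n)

# Columns of X are lagged copies of the spike train, so X @ k == spikes * k.
X = toeplitz(spikes, np.zeros(klen))
k_hat, *_ = lstsq(X, lfp)                                # least-squares kernel
print(np.corrcoef(X @ k_hat, lfp)[0, 1])                 # close to 1 on toy data
```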

34. Wu K, He S, Fernie G, Roshan Fekr A. Deep Neural Network for Slip Detection on Ice Surface. Sensors (Basel) 2020;20:6883. PMID: 33276475; PMCID: PMC7730651; DOI: 10.3390/s20236883.
Abstract
Slip-induced falls are among the most common causes of major occupational injuries and economic loss in Canada. Identifying the risk factors associated with slip events is key to developing preventive solutions to reduce falls. One factor is the slip-resistance quality of footwear, which is fundamental to reducing the number of falls. Measuring footwear slip resistance with the recently developed Maximum Achievable Angle (MAA) test requires a trained researcher to identify slip events in a simulated winter environment. The human capacity for information processing is limited and human error is natural, especially in a cold environment. Therefore, to remove conflicts associated with human errors, in this paper a deep three-dimensional convolutional neural network is proposed to detect the slips in real time. The model was trained on a new dataset that includes data from 18 different participants with various clothing, footwear, walking directions, inclined angles, and surface types. The model was evaluated on three types of slips: maxi-slip, midi-slip, and mini-slip. This classification is based on the slip perception and recovery of the participants. The model was evaluated with both 5-fold and Leave-One-Subject-Out (LOSO) cross-validation. The best accuracy of 97% was achieved when identifying the maxi-slips. The minimum accuracy of 77% was achieved when classifying the no-slip and mini-slip trials. The overall slip detection accuracy was 86%, with sensitivity and specificity of 81% and 91%, respectively. The overall accuracy dropped by about 2% under LOSO cross-validation. The proposed slip detection algorithm is not only beneficial for footwear manufacturers seeking to improve the slip resistance of their footwear, but it also has other potential applications, such as improving the slip resistance properties of flooring in healthcare facilities, commercial kitchens, and oil drilling platforms.
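
Leave-One-Subject-Out evaluation, as used above, is easy to set up with scikit-learn's grouped cross-validation; here a random forest on synthetic features stands in for the 3D CNN, and every number is illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((180, 16))         # 180 trials x 16 features (toy data)
y = rng.integers(0, 2, 180)                # slip / no-slip labels
groups = np.repeat(np.arange(18), 10)      # 18 participants, 10 trials each

# Each fold holds out all trials of one participant, so the score reflects
# generalization to unseen subjects rather than memorized gait signatures.
scores = cross_val_score(RandomForestClassifier(n_estimators=100),
                         X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"LOSO accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```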

35. Gopalakrishnan R, Chua Y, Sun P, Sreejith Kumar AJ, Basu A. HFNet: A CNN Architecture Co-designed for Neuromorphic Hardware With a Crossbar Array of Synapses. Front Neurosci 2020;14:907. PMID: 33192236; PMCID: PMC7649386; DOI: 10.3389/fnins.2020.00907.
Abstract
The hardware-software co-optimization of neural network architectures is a field of research that emerged with the advent of commercial neuromorphic chips, such as the IBM TrueNorth and Intel Loihi. Development of simulation and automated mapping software tools in tandem with the design of neuromorphic hardware, whilst taking into consideration the hardware constraints, will play an increasingly significant role in the deployment of system-level applications. This paper illustrates the importance and benefits of co-design of convolutional neural networks (CNN) that are to be mapped onto neuromorphic hardware with a crossbar array of synapses. Toward this end, we first study which convolution techniques are more hardware friendly and propose different mapping techniques for different convolutions. We show that, for a seven-layered CNN, our proposed mapping technique can reduce the number of cores used by 4.9-13.8 times for crossbar sizes ranging from 128 × 256 to 1,024 × 1,024, compared with the Toeplitz method of mapping. We next develop an iterative co-design process for the systematic design of more hardware-friendly CNNs whilst considering hardware constraints, such as core sizes. A Python wrapper, developed for the mapping process, is also useful for validating hardware designs and for studies on traffic volume and energy consumption. Finally, a new neural network dubbed HFNet is proposed using the above co-design process; it achieves a classification accuracy of 71.3% on the ImageNet dataset (comparable to VGG-16) but uses 11 times fewer cores on neuromorphic hardware with a core size of 1,024 × 1,024. We also modified the HFNet to fit onto different core sizes and report the corresponding classification accuracies. Various aspects of the paper are patent pending.
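
For context, the Toeplitz (im2col) baseline mentioned above maps a convolution layer by unrolling its weights into a (k·k·C_in) × C_out matrix and tiling that matrix across fixed-size crossbars. A back-of-the-envelope core count for that baseline (the paper's proposed mappings reduce this count by reusing weights):

```python
import math

def toeplitz_cores(in_ch, out_ch, k, crossbar=(1024, 1024)):
    """Crossbar cores for one conv layer under im2col weight unrolling."""
    rows, cols = k * k * in_ch, out_ch        # unrolled weight matrix shape
    return math.ceil(rows / crossbar[0]) * math.ceil(cols / crossbar[1])

# e.g. a mid-network 3x3 layer with 256 -> 512 channels needs 3 cores
print(toeplitz_cores(256, 512, 3))
```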

36. Kirkland P, Di Caterina G, Soraghan J, Matich G. Perception Understanding Action: Adding Understanding to the Perception Action Cycle With Spiking Segmentation. Front Neurorobot 2020;14:568319. PMID: 33192434; PMCID: PMC7604290; DOI: 10.3389/fnbot.2020.568319.
Abstract
Traditionally, the Perception Action cycle is the first stage of building an autonomous robotic system and a practical way to implement a low-latency reactive system within a low Size, Weight and Power (SWaP) package. However, within complex scenarios, this method can lack contextual understanding of the scene, such as object recognition-based tracking or system attention. Object detection, identification and tracking, along with semantic segmentation and attention, are all modern computer vision tasks in which Convolutional Neural Networks (CNNs) have shown significant success, although such networks often have a large computational overhead and power requirements, which are not ideal in smaller robotics tasks. Furthermore, cloud computing and massively parallel processing like in Graphic Processing Units (GPUs) are outside the specification of many tasks due to their respective latency and SWaP constraints. In response to this, Spiking Convolutional Neural Networks (SCNNs) look to provide the feature extraction benefits of CNNs while maintaining low latency and power overhead thanks to their asynchronous spiking event-based processing. A novel Neuromorphic Perception Understanding Action (PUA) system is presented that aims to combine the feature extraction benefits of CNNs with the low-latency processing of SCNNs. The PUA utilizes a Neuromorphic Vision Sensor for Perception that facilitates asynchronous processing within a Spiking fully Convolutional Neural Network (SpikeCNN) to provide semantic segmentation and Understanding of the scene. The output is fed to a spiking control system providing Actions. With this approach, the aim is to bring features of deep learning into the lower levels of autonomous robotics, while maintaining a biologically plausible STDP rule throughout the learned encoding part of the network. The network is shown to provide more robust and predictable management of spiking activity with an improved thresholding response. The reported experiments show that this system can deliver robust results of over 96% and 81% for accuracy and Intersection over Union, respectively, ensuring such a system can be successfully used within object recognition, classification and tracking problems. This demonstrates that the attention of the system can be tracked accurately, while the asynchronous processing means the controller can give precise track updates with minimal latency.

37. NIMBLE for Bayesian Disease Mapping. Spat Spatiotemporal Epidemiol 2020;33:100323. PMID: 32370936; DOI: 10.1016/j.sste.2020.100323.
Abstract
This tutorial describes the basic implementation of Bayesian hierarchical models for spatial health data using the R package nimble. To quote the nimble R description: "A system for writing hierarchical statistical models largely compatible with 'BUGS' and 'JAGS', writing nimbleFunctions to operate models and do basic R-style math, and compiling both models and nimbleFunctions via custom-generated C++. 'NIMBLE' includes default methods for MCMC, particle filtering, Monte Carlo Expectation Maximization, and some other tools. The nimbleFunction system makes it easy to do things like implement new MCMC samplers from R, customize the assignment of samplers to different parts of a model from R, and compile the new samplers automatically via C++ alongside the samplers 'NIMBLE' provides." Examples of the use of the package for a small range of Bayesian Disease Mapping (BDM) models are explored, and different approaches to model fitting and analysis are discussed. Examples of publicly available small-area health data are used throughout.

38. Kim D, Han TH, Hong SC, Park SJ, Lee YH, Kim H, Park M, Lee J. PLGA Microspheres with Alginate-Coated Large Pores for the Formulation of an Injectable Depot of Donepezil Hydrochloride. Pharmaceutics 2020;12:311. PMID: 32244736; PMCID: PMC7238133; DOI: 10.3390/pharmaceutics12040311.
Abstract
As the main symptom of Alzheimer's disease-related dementia is memory loss, patient compliance for donepezil hydrochloride (donepezil), administered as once-daily oral formulations, is poor. Thus, we aimed to design poly(lactic-co-glycolic acid) (PLGA) microspheres (MS) with alginate-coated large pores as an injectable depot of donepezil exhibiting sustained release over 2-3 weeks. The PLGA MS with large pores provide ample space for loading drugs with high loading capacity, so that sufficient amounts of drug can be delivered with a minimal amount of injected PLGA MS. However, an initial burst release of donepezil from the porous PLGA MS was observed. To reduce this initial burst release, the surface pores were closed with a calcium alginate coating using a spray-ionotropic gelation method. The final pore-closed PLGA MS showed in vitro sustained release for approximately 3 weeks, and the initial burst release was remarkably decreased by the calcium alginate coating. In the prediction of plasma drug concentration profiles using the convolution method, the mean residence time of the pore-closed PLGA MS was 2.7-fold longer than that of the porous PLGA MS. Therefore, our results reveal that our pore-closed PLGA MS formulation is a promising candidate for the treatment of dementia with high patient compliance.
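
The convolution step behind the plasma-profile prediction combines an input (release-rate) function with a unit impulse response; a generic one-compartment sketch in which every rate constant, volume, and dose is purely illustrative:

```python
import numpy as np

dt = 0.5                                    # h
t = np.arange(0, 21 * 24, dt)               # 3 weeks
k_rel = 1 / (7 * 24)                        # first-order release, ~1-week scale
release_rate = k_rel * np.exp(-k_rel * t)   # fraction of dose released per h
ke, V = 0.05, 800.0                         # elimination rate (1/h), volume (L)
uir = np.exp(-ke * t) / V                   # unit impulse response (1/L)

dose = 10.0                                 # mg (illustrative)
conc = dose * np.convolve(release_rate, uir)[: len(t)] * dt   # mg/L
mrt = np.sum(t * conc) / np.sum(conc)       # moment-based mean residence time
print(f"Cmax ~ {conc.max() * 1e3:.1f} ng/mL, MRT ~ {mrt / 24:.1f} days")
```

Slowing the release constant k_rel lengthens the computed mean residence time, which is the comparison drawn between the porous and pore-closed microspheres above.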

39. Tichter T, Schneider J, Andrae D, Gebhard M, Roth C. Universal Algorithm for Simulating and Evaluating Cyclic Voltammetry at Macroporous Electrodes by Considering Random Arrays of Microelectrodes. Chemphyschem 2020;21:428-441. PMID: 31841241; PMCID: PMC7078989; DOI: 10.1002/cphc.201901113.
Abstract
An algorithm for the simulation and evaluation of cyclic voltammetry (CV) at macroporous electrodes such as felts, foams, and layered structures is presented. By considering 1D, 2D, and 3D arrays of electrode sheets, cylindrical microelectrodes, hollow-cylindrical microelectrodes, and hollow-spherical microelectrodes, the internal diffusion domains of the macroporous structures are approximated. A universal algorithm providing the time-dependent surface concentrations of the electrochemically active species, required for simulating cyclic voltammetry responses of the individual planar, cylindrical, and spherical microelectrodes, is presented as well. An essential ingredient of the algorithm, which is based on Laplace integral transformation techniques, is the use of a modified Talbot contour for the inverse Laplace transformation. It is demonstrated that first-order homogeneous chemical kinetics preceding and/or following the electrochemical reaction and electrochemically active species with non-equal diffusion coefficients can be included in all diffusion models as well. The proposed theory is supported by experimental data acquired for a reference reaction, the oxidation of [Fe(CN)6]^4- at platinum electrodes, as well as for a technically relevant reaction, the oxidation of VO^2+ at carbon felt electrodes. Based on our calculation strategy, we provide a powerful open-source tool for simulating and evaluating CV data, implemented in a Python graphical user interface (GUI).
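
The key numerical ingredient named here, inverse Laplace transformation on a Talbot contour, is available off the shelf in mpmath; a quick check against a known transform pair (the paper uses a modified Talbot contour rather than this fixed one):

```python
import mpmath as mp

F = lambda s: 1 / mp.sqrt(s)               # Laplace image with a known inverse
for t in [0.1, 1.0, 10.0]:
    numeric = mp.invertlaplace(F, t, method="talbot")
    exact = 1 / mp.sqrt(mp.pi * t)         # L^{-1}{s^(-1/2)} = 1/sqrt(pi*t)
    print(t, numeric, exact)
```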

40. Pemberton S, Odom TF, Dittmer KE, Kopke MA, Marshall JC, Poirier VJ, Owen MC. The hypoattenuating ocular lens on CT is not always due to cataract formation. Vet Radiol Ultrasound 2019;61:147-156. PMID: 31825152; DOI: 10.1111/vru.12828.
Abstract
Hypoattenuating ocular lenses on CT have been described with cataract formation in humans; however, published studies are currently lacking regarding this finding in veterinary patients. The purpose of this retrospective and prospective study was to describe the varying CT appearances of the ocular lens in vivo and to investigate the causes of CT density variations in a population of cats and dogs. A total of 102 canine and feline patients with CT of the head acquired at the authors' hospital between May 2011 and March 2019 were included. A bilateral hypoattenuating halo surrounding an isoattenuating to mildly hypoattenuating core was seen in the ocular lens center of every cat in which a Philips brand proprietary image reconstruction filter was used. A similar but more varied hypoattenuating region was noted in the lenses of 45.8% of dogs where the same filter was applied, as well as 43.8% of dogs with a second, similar filter. Ophthalmic examination of three live cats and one dog with hypoattenuating lenses demonstrated normal lens translucency, excluding the presence of cataract. The effect of different proprietary filters on lens appearance was also described in three fresh cadavers with normal lenses identified on ophthalmic, macroscopic, and microscopic examination. The etiology of the hypoattenuating areas within the ocular lens was not conclusively determined. Recognition that such a variant may be seen in the absence of cataract is important in order to prevent misdiagnosis.

41. Gomeni R, Bressolle-Gomeni F. Deconvolution Analysis by Non-linear Regression Using a Convolution-Based Model: Comparison of Nonparametric and Parametric Approaches. AAPS J 2019;22:9. PMID: 31820258; DOI: 10.1208/s12248-019-0389-8.
Abstract
The convolution-based modeling approach has been shown to be flexible and easy to implement for performing a deconvolution analysis and for assessing in vitro/in vivo correlation using non-linear regression and a pre-specified model describing the in vivo drug absorption. A generalization of this method has been developed using a nonparametric description of the in vivo drug absorption process in place of a model-based definition. A comparison of the parametric and nonparametric deconvolution and convolution analyses was conducted on the pharmacokinetic (PK) data observed in four published studies after administration of an extended-release formulation of methylphenidate at a dose of 18 mg. All the analyses were conducted using conventional non-linear regression software (NONMEM). The results of the deconvolution analysis indicated that the parametric and nonparametric approaches performed similarly. The parametric approach described the input function using a double Weibull equation (6 parameters), while the nonparametric approach described the input function using a piecewise approximation (12-13 parameters). The results of the deconvolution analysis were validated by comparing observed PK concentrations with those predicted by the convolution analysis. The performance of the parametric and nonparametric approaches was evaluated using the Akaike and Bayesian information criteria. These criteria indicated that, despite the similar results obtained with the two approaches, the nonparametric approach provided better results. In conclusion, these results indicate that the nonparametric approach should be considered the preferred approach for conducting a deconvolution analysis.
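
The parametric input function referred to above, a double Weibull with six parameters, can be written directly; the parameterization below is one common form and may differ in detail from the paper's:

```python
import numpy as np

def double_weibull_input(t, f, td1, s1, td2, s2):
    """Cumulative fraction absorbed as a mixture of two Weibull functions."""
    w1 = 1 - np.exp(-((t / td1) ** s1))
    w2 = 1 - np.exp(-((t / td2) ** s2))
    return f * w1 + (1 - f) * w2

t = np.linspace(0, 12, 121)                                  # h
frac = double_weibull_input(t, f=0.4, td1=0.8, s1=1.2, td2=5.0, s2=2.5)
rate = np.gradient(frac, t)    # absorption rate fed into the convolution step
```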

42. Neph R, Ouyang C, Neylon J, Yang Y, Sheng K. Parallel beamlet dose calculation via beamlet contexts in a distributed multi-GPU framework. Med Phys 2019;46:3719-3733. PMID: 31183871; DOI: 10.1002/mp.13651.
Abstract
PURPOSE Dose calculation is one of the most computationally intensive, yet essential, tasks in the treatment planning process. With the recent interest in automatic beam orientation and arc trajectory optimization techniques, there is a great need for more efficient model-based dose calculation algorithms that can accommodate hundreds to thousands of beam candidates at once. Foundational work has shown the translation of dose calculation algorithms to graphical processing units (GPUs), yielding remarkable gains in processing efficiency. But these methods provide parallelization of dose for only a single beamlet, serializing the calculation of multiple beamlets and under-utilizing the potential of modern GPUs. In this paper, the authors propose a framework enabling parallel computation of many beamlet doses using a novel beamlet context transformation and further embed this approach in a scalable network of multi-GPU computational nodes. METHODS The proposed context-based transformation separates beamlet-local density and TERMA into distinct beamlet contexts that independently provide sufficient data for beamlet dose calculation. Beamlet contexts are arranged in a composite context array with dosimetric isolation, and the context array is subjected to a GPU collapsed-cone convolution superposition (CCCS) procedure, producing the set of beamlet-specific dose distributions in a single pass. Dose from each context is converted to a sparse representation for efficient storage and retrieval during treatment plan optimization. The context radius is a new parameter permitting flexibility between the speed and fidelity of the dose calculation process. A distributed manager-worker architecture is constructed around the context-based GPU dose calculation approach, supporting an arbitrary number of worker nodes and resident GPUs. Phantom experiments were executed to verify the accuracy of the context-based approach against Monte Carlo and a reference CPU-CCCS implementation for single beamlets and broad beams composed by addition of beamlets. Dose for representative 4π beam sets was calculated in lung and prostate cases to compare its efficiency with that of an existing beamlet-sequential GPU-CCCS implementation. Code profiling was also performed to evaluate the scalability of the framework across many networked GPUs. RESULTS The dosimetric accuracy of the context-based method displays average errors of <1.35% and 2.35% relative to the existing serialized CPU-CCCS algorithm and Monte Carlo simulation for beamlet-specific PDDs in water and slab phantoms, respectively. The context-based method demonstrates substantial speedup of up to two orders of magnitude over the beamlet-sequential GPU-CCCS method in the tested configurations. The context-based framework demonstrates near-linear scaling in the number of distributed compute nodes and GPUs employed, indicating that it is flexible enough to meet the performance requirements of most users by simply increasing the hardware utilization. CONCLUSIONS The context-based approach demonstrates a new expectation of performance for beamlet-based dose calculation methods. This approach has been successful in accelerating the dose calculation process for very large-scale treatment planning problems - such as automatic 4π IMRT beam orientation and VMAT arc trajectory selection, with hundreds of thousands of beamlets - in clinically feasible timeframes. The flexibility of this framework makes it a strong candidate for use in a variety of other very large-scale treatment planning tasks and clinical workflows.
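
The sparse-storage step mentioned above can be illustrated in a few lines: doses below a cutoff are dropped and each beamlet's distribution is kept as one sparse column for the optimizer. The matrix sizes and threshold are arbitrary stand-ins:

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 100_000, 400
dose = rng.random((n_voxels, n_beamlets)) ** 8      # skewed, mostly-small doses
dose[dose < 0.05 * dose.max()] = 0.0                # sparsification cutoff
D = sparse.csc_matrix(dose)                         # one beamlet per column
print(f"stored {D.nnz} of {dose.size} entries ({100 * D.nnz / dose.size:.1f}%)")
```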

43. Sun Y, Gu Z, Wang J, Yuan X. Research of Method for Solving Relaxation Modulus Based on Three-Point Bending Creep Test. Materials (Basel) 2019;12:2021. PMID: 31238527; PMCID: PMC6631944; DOI: 10.3390/ma12122021.
Abstract
A method was developed for solving the relaxation modulus of high-viscosity asphalt sand (HVAS) based on the three-point bending creep test and was verified by comparison with experimental results. In this method, a transcendental equation is first obtained by convolution; equations are then derived via Taylor's formula and solved with Newton's method in Mathematica to obtain the relaxation modulus. Subsequently, laboratory investigations of the viscoelastic parameters of the Burgers model for the HVAS were carried out by three-point bending creep tests. The method was verified by comparing the computed relaxation moduli with laboratory relaxation experiments. The results showed that the numerical calculation and the test data were in good agreement and that the relaxation characteristics of the HVAS were reflected more accurately. The method can be used to study the relaxation characteristics of asphalt mixtures effectively. In addition, this study provides a research basis for road crack prevention.
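
The convolution identity linking creep compliance J(t) and relaxation modulus E(t), ∫₀ᵗ E(τ)J(t−τ)dτ = t, can also be solved by direct step-by-step discretization, a useful cross-check on the Taylor/Newton procedure described above; the Kelvin-type compliance below is illustrative:

```python
import numpy as np

dt = 0.01
t = np.arange(dt, 10, dt)
J0, J1, tau = 1.0, 0.5, 2.0
J = J0 + J1 * (1 - np.exp(-t / tau))       # toy creep compliance

# Rectangle-rule discretization of  int_0^t E(s) J(t - s) ds = t,
# solved forward for one value of E per time step.
E = np.zeros_like(t)
E[0] = t[0] / (J[0] * dt)
for n in range(1, len(t)):
    conv = np.dot(E[:n], J[n:0:-1])        # sum of E[k] * J[n-k] for k < n
    E[n] = (t[n] / dt - conv) / J[0]

print(E[0], 1 / J0)                        # instantaneous modulus ~ 1/J(0)
print(E[-1], 1 / (J0 + J1))                # long-time modulus ~ 1/J(inf)
```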

44. Mapping Tobacco Fields Using UAV RGB Images. Sensors (Basel) 2019;19:1791. PMID: 30991636; PMCID: PMC6515098; DOI: 10.3390/s19081791.
Abstract
Tobacco planting information is an important part of tobacco production management. Unmanned aerial vehicle (UAV) remote sensing systems have become a popular topic worldwide because they are mobile, rapid and economical. In this paper, an automatic identification method for tobacco fields based on UAV images is developed by combining supervised classification with image morphological operations; the method was applied in Yunnan Province, the top tobacco-planting province in China. The results show that the producer accuracy, user accuracy, and overall accuracy of tobacco field identification using the proposed method are 92.59%, 96.61% and 95.93%, respectively. The proposed method has the advantages of automation, a streamlined workflow, high accuracy and easy operation, but the ground sampling distance (GSD) of the UAV image affects its accuracy: when the image GSD was reduced to 1 m, the overall accuracy decreased by approximately 10%. To solve this problem, we further introduced a convolution step into the proposed method, which keeps the recognition accuracy of tobacco fields above 90% when the GSD is less than or equal to 1 m. Some other potential improvements of methods for mapping tobacco fields are also discussed in this paper.

45. Tarpey T, Petkova E. Letter to the Editor. Am Stat 2019;73:312. PMID: 33762775; DOI: 10.1080/00031305.2018.1537894.
Abstract
Hutson and Vexler (2018) demonstrate an example of aliasing with the beta and normal distributions. This letter presents another illustration of aliasing using the beta and normal distributions via an infinite mixture model, inspired by the problem of modeling placebo response.

46. Lumley T, Brody J, Peloso G, Morrison A, Rice K. FastSKAT: Sequence kernel association tests for very large sets of markers. Genet Epidemiol 2018;42:516-527. PMID: 29932245; PMCID: PMC6129408; DOI: 10.1002/gepi.22136.
Abstract
The sequence kernel association test (SKAT) is widely used to test for associations between a phenotype and a set of genetic variants that are usually rare. Evaluating tail probabilities or quantiles of the null distribution for SKAT requires computing the eigenvalues of a matrix related to the genotype covariance between markers. Extracting the full set of eigenvalues of this matrix (an n × n matrix, for n subjects) has computational complexity proportional to n^3. As SKAT is often used when n > 10^4, this step becomes a major bottleneck in its use in practice. We therefore propose fastSKAT, a new computationally inexpensive but accurate approximation to the tail probabilities, in which the k largest eigenvalues of a weighted genotype covariance matrix or the largest singular values of a weighted genotype matrix are extracted, and a single term based on the Satterthwaite approximation is used for the remaining eigenvalues. While the method is not particularly sensitive to the choice of k, we also describe how to choose its value and show how fastSKAT can automatically alert users to the rare cases where the choice may affect results. As well as providing a faster implementation of SKAT, the new method also enables entirely new applications of SKAT that were not possible before; we give examples grouping variants by topologically associating domains and comparing chromosome-wide association by class of histone marker.
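
The approximation at the heart of fastSKAT is easy to prototype: keep the k leading eigenvalues exactly and represent the remainder by a Satterthwaite-matched scaled chi-square, using the fact that the trace and squared Frobenius norm give the sum and sum of squares of all eigenvalues. A toy sketch on a random matrix rather than genotype data:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(1)
G = rng.random((500, 200))                  # subjects x variants (toy "genotypes")
A = (G @ G.T) / G.shape[1]                  # n x n covariance-like matrix

k = 20
lam_top = eigsh(A, k=k, which="LM", return_eigenvectors=False)
r1 = np.trace(A) - lam_top.sum()            # remaining eigenvalue mass
r2 = np.sum(A * A) - np.sum(lam_top ** 2)   # remaining sum of squares
scale, df = r2 / r1, r1 ** 2 / r2           # match first two moments
print(f"top-{k} eigenvalues kept; remainder ~ {scale:.3f} * chi2({df:.1f})")
```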

47. Mu R, Xu J. Predicting events in clinical trials using two time-to-event outcomes. Biom J 2018;60:815-826. PMID: 29790186; DOI: 10.1002/bimj.201700083.
Abstract
In clinical trials with time-to-event outcomes, it is of interest to predict when a prespecified number of events will be reached. An interim analysis is conducted to estimate the underlying survival function. When another correlated time-to-event endpoint is available, both outcome variables can be used to improve estimation efficiency. In this paper, we propose to use the convolution of two time-to-event variables to estimate the survival function of interest. Propositions and examples are provided based on exponential models that accommodate possible change points. We further propose a new estimating equation for the expected time that exploits the relationship between the two endpoints. Simulations and the analysis of real data show that the proposed methods with bivariate information yield significant improvement in prediction over the univariate method.
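
To make the convolution idea concrete: if the endpoint decomposes as T = T1 + T2, the density of T is the convolution of the two densities, and the survival function follows by integration. A numerical sketch with exponential pieces (rates illustrative), checked against the closed-form hypoexponential survival:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 30, dt)
f1 = 0.5 * np.exp(-0.5 * t)                # density of T1 ~ Exp(0.5)
f2 = 0.2 * np.exp(-0.2 * t)                # density of T2 ~ Exp(0.2)
f = np.convolve(f1, f2)[: len(t)] * dt     # density of T = T1 + T2
S = 1 - np.cumsum(f) * dt                  # survival function of T

S_exact = (0.5 * np.exp(-0.2 * t) - 0.2 * np.exp(-0.5 * t)) / 0.3
print(np.max(np.abs(S - S_exact)))         # small discretization error
```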

48. Jacob S, Nair AB. An updated overview with simple and practical approach for developing in vitro-in vivo correlation. Drug Dev Res 2018;79:97-110. PMID: 29697151; DOI: 10.1002/ddr.21427.
Abstract
An in vitro-in vivo correlation (IVIVC) is a predictive mathematical model that plays a key role in the development, advancement, evaluation and optimization of extended-release, modified-release and immediate-release pharmaceutical formulations. A validated IVIVC model can serve as a surrogate for bioequivalence studies and subsequently save time, effort and expenditure during pharmaceutical product development. This review discusses different levels of correlation, general approaches to developing an IVIVC by mathematical modelling, validation, data analysis and various applications. In the current setting, the dearth of success associated with IVIVC is due to the complexity of the underlying scientific principles as well as the practice of fitting/matching in vivo plasma level-time data with in vitro dissolution profiles. Hence, a simple, straightforward practical means to predict plasma drug levels by the convolution technique, and the percentage of drug absorbed computed from the in vitro dissolution profile based on the deconvolution method, are illustrated. Bioavailability/bioequivalence assessment and evaluation are frequently validated by pharmacokinetic parameters such as maximum concentration, time to reach maximum concentration, and area under the curve. The implementation of quality-by-design manufacturing based on in vivo bioavailability and clinically relevant dissolution specifications is recommended, because the corresponding safe design space will guarantee that all batches of the relevant products meet sufficient quality and bioperformance. Recently, the United States Food and Drug Administration and the European Medicines Agency have proposed that in silico/physiologically based pharmacokinetic modelling can be used in decision making during preclinical experiments as well as to recognize the dissolution profiles that can forecast and ensure the desired clinical performance.
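
As one concrete instance of the deconvolution side discussed above, the classical Wagner-Nelson calculation recovers the fraction absorbed from plasma data under a one-compartment assumption (the review covers more general model-independent approaches; the elimination rate here is illustrative):

```python
import numpy as np

def fraction_absorbed(t, C, ke):
    """Wagner-Nelson: F(t) = (C(t) + ke*AUC(0..t)) / (ke*AUC(0..inf))."""
    auc_t = np.concatenate([[0.0], np.cumsum((C[1:] + C[:-1]) / 2 * np.diff(t))])
    auc_inf = auc_t[-1] + C[-1] / ke       # tail extrapolation
    return (C + ke * auc_t) / (ke * auc_inf)

t = np.linspace(0, 24, 49)                                  # h
C = 5.0 * (np.exp(-0.1 * t) - np.exp(-0.8 * t))             # toy plasma profile
print(fraction_absorbed(t, C, ke=0.1)[-1])                  # approaches 1
```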

49. Ge Z, Heitjan DF, Gerber DE, Xuan L, Pruitt SL. Estimating lead-time bias in lung cancer diagnosis of patients with previous cancers. Stat Med 2018;37:2516-2529. PMID: 29687467; DOI: 10.1002/sim.7691.
Abstract
Surprisingly, survival from a diagnosis of lung cancer has been found to be longer for those who experienced a previous cancer than for those with no previous cancer. A possible explanation is lead-time bias, which, by advancing the time of diagnosis, apparently extends survival among those with a previous cancer even when they enjoy no real clinical advantage. We propose a discrete parametric model to jointly describe survival in a no-previous-cancer group (where, by definition, lead-time bias cannot exist) and in a previous-cancer group (where lead-time bias is possible). We model the lead time with a negative binomial distribution and the post-lead-time survival with a linear spline on the logit hazard scale, which allows for survival to differ between groups even in the absence of bias; we denote our model Logit-Spline/Negative Binomial. We fit Logit-Spline/Negative Binomial to a propensity-score matched subset of the Surveillance, Epidemiology, and End Results-Medicare linked data set, conducting sensitivity analyses to assess the effects of key assumptions. With lung cancer-specific death as the end point, the estimated mean lead time is roughly 11 months for stage I&II patients; with overall survival, it is roughly 3.4 months in stage I&II. For patients with higher-stage lung cancers, the mean lead time is 1 month or less for both outcomes. Accounting for lead-time bias reduces the survival advantage of the previous-cancer group when one exists, but it does not nullify it in all cases.
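
The mechanism being modeled is easy to simulate: advancing diagnosis by a negative binomial lead time inflates observed post-diagnosis survival even when the clinical course is identical in both groups. A toy illustration (distribution parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
true_survival = rng.exponential(12.0, 100_000)        # months, same in both groups
lead = rng.negative_binomial(2, 0.3, 100_000)         # months of earlier detection
# Apparent survival advantage of the earlier-detected group equals the mean lead time.
print(true_survival.mean(), (true_survival + lead).mean())
```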

50. Fallows P, Wright G, Harrold N, Bownes P. A comparison of the convolution and TMR10 treatment planning algorithms for Gamma Knife® radiosurgery. J Radiosurg SBRT 2018;5:157-167. PMID: 29657896; PMCID: PMC5893456.
Abstract
AIMS To compare the accuracy of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact upon clinical practice of implementing convolution-based treatment planning. METHODS Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing a novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. RESULTS Both algorithms matched point dose measurements within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs. 4.0%), with no discernible differences in relative dose distribution accuracy. In our study, convolution-calculated plans yielded D99% values 6.4% (95% CI: 5.5%-7.3%, p<0.001) lower than shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. CONCLUSIONS Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; therefore, its implementation may require a re-evaluation of prescription doses.
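
For reference, the gamma analysis used in the planning study combines a dose-difference tolerance with a distance-to-agreement tolerance; a 1D sketch of the 1%/1 mm criterion with globally normalized dose difference (clinical implementations work on 3D dose grids):

```python
import numpy as np

def gamma_1d(ref, evl, dx, dose_tol, dist_tol):
    """Gamma index per reference point (1D, global normalization)."""
    pos = np.arange(len(ref)) * dx
    g = np.empty(len(ref))
    for i, (r, p) in enumerate(zip(ref, pos)):
        dd = (evl - r) / (dose_tol * ref.max())       # dose-difference term
        dta = (pos - p) / dist_tol                    # distance term (mm)
        g[i] = np.sqrt(dd ** 2 + dta ** 2).min()
    return g

x = np.linspace(-30, 30, 601)                          # mm
ref = np.exp(-((x / 10) ** 2))                         # toy dose profiles
evl = 1.005 * np.exp(-(((x - 0.4) / 10) ** 2))         # shifted and rescaled
g = gamma_1d(ref, evl, dx=0.1, dose_tol=0.01, dist_tol=1.0)
print(f"gamma (1%/1mm) passing rate: {(g <= 1).mean():.1%}")
```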