1. Jang J, Jang S, Choi S, Wang G. Run-off election-based decision method for the training and inference process in an artificial neural network. Sci Rep 2021; 11:895. PMID: 33441631; PMCID: PMC7806707; DOI: 10.1038/s41598-020-79452-2.
Abstract
Generally, the decision rule for classifying unstructured data in an artificial neural network depends on the ranked outputs of an activation function determined by vector-matrix multiplication between the input bias signal and the analog synaptic weight of each node in a matrix array. Although a sequence-based decision rule can efficiently extract a common feature from a large data set in a short time, it can occasionally fail to classify similar classes because it does not intrinsically consider other quantitative configurations of the activation function that affect the synaptic weight update. In this work, we implemented a simple run-off election-based decision rule via an additional filter evaluation to mitigate the confusion arising from the proximity of output activation functions, enabling improved training and inference performance of an artificial neural network system. Using a filter evaluation selected via the difference among common features of classified images, the recognition accuracy achieved for three types of shoe image data sets reached ~82.03%, outperforming the maximum accuracy of ~79.23% obtained via the sequence-based decision rule in a fully connected single-layer network. This training algorithm with an independent filter can precisely supply the output class in the decision step of the fully connected network.
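The run-off rule summarized in this abstract can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' code: the `margin` threshold and the `filter_scores` input are stand-ins for the paper's independent filter evaluation.

```python
import numpy as np

def runoff_decision(logits, filter_scores, margin=0.1):
    """Sequence-based decision with a run-off: if the two largest
    activations are closer than `margin`, re-decide between just those
    two finalists using an independent filter score."""
    order = np.argsort(logits)[::-1]              # classes ranked by activation
    first, second = order[0], order[1]
    if logits[first] - logits[second] >= margin:
        return int(first)                         # clear winner: ordinary rule
    # run-off election between the two closest candidates
    return int(first if filter_scores[first] >= filter_scores[second] else second)

# A clear winner bypasses the run-off; an ambiguous top pair triggers it.
assert runoff_decision(np.array([0.9, 0.2, 0.1]), np.array([0.1, 0.9, 0.0])) == 0
assert runoff_decision(np.array([0.51, 0.49, 0.1]), np.array([0.2, 0.8, 0.0])) == 1
```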
Affiliation(s)
- Jingon Jang
- KU-KIST Graduate School of Converging Science and Technology, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea
- Seonghoon Jang
- KU-KIST Graduate School of Converging Science and Technology, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea
- Sanghyeon Choi
- KU-KIST Graduate School of Converging Science and Technology, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea
- Gunuk Wang
- KU-KIST Graduate School of Converging Science and Technology, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea
2. Soni M, Dahiya R. Soft eSkin: distributed touch sensing with harmonized energy and computing. Philos Trans A Math Phys Eng Sci 2020; 378:20190156. PMID: 31865882; PMCID: PMC6939237; DOI: 10.1098/rsta.2019.0156.
Abstract
Inspired by biology, significant advances have been made in the field of electronic skin (eSkin), or tactile skin. Many of these advances have come through mimicking the morphology of human skin and by distributing a few touch sensors over an area. However, the complexity of human skin goes beyond mimicking a few morphological features or using a few sensors. For example, embedded computing (e.g. processing of tactile data at the point of contact) is central to human skin, as some neuroscience studies show. Likewise, distributed cellular or molecular energy is a key feature of human skin. An eSkin with such features, along with distributed and embedded sensors/electronics on soft substrates, is an interesting topic to explore. These features also make eSkin significantly different from conventional computing: unlike conventional centralized computing enabled by miniaturized chips, the eSkin could be seen as a flexible and wearable large-area computer with distributed sensors and harmonized energy. This paper discusses these advanced features of eSkin, particularly distributed sensing harmoniously integrated with energy harvesters, storage devices and distributed computing to read and locally process tactile sensory data. Rapid advances in neuromorphic hardware, flexible energy generation, energy-conscious electronics, and flexible and printed electronics are also discussed. This article is part of the theme issue 'Harmonizing energy-autonomous computing and intelligence'.
3. Camuñas-Mesa LA, Linares-Barranco B, Serrano-Gotarredona T. Neuromorphic Spiking Neural Networks and Their Memristor-CMOS Hardware Implementations. Materials (Basel) 2019; 12:E2745. PMID: 31461877; PMCID: PMC6747825; DOI: 10.3390/ma12172745.
Abstract
Inspired by biology, neuromorphic systems have been trying to emulate the human brain for decades, taking advantage of its massive parallelism and sparse information coding. Recently, several large-scale hardware projects have demonstrated the outstanding capabilities of this paradigm for applications related to sensory information processing. These systems allow for the implementation of massive neural networks with millions of neurons and billions of synapses. However, the realization of learning strategies in these systems consumes a significant proportion of resources in terms of area and power. The recent development of nanoscale memristors that can be integrated with Complementary Metal-Oxide-Semiconductor (CMOS) technology offers a very promising route to emulating the behavior of biological synapses. Accordingly, hybrid memristor-CMOS approaches have been proposed to implement large-scale neural networks with learning capabilities, offering a scalable and lower-cost alternative to existing CMOS systems.
Affiliation(s)
- Luis A Camuñas-Mesa
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain
- Bernabé Linares-Barranco
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain
- Teresa Serrano-Gotarredona
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain
4.
Abstract
We present both an overview and a perspective of recent experimental advances and proposed new approaches to performing computation using memristors. A memristor is a two-terminal passive component with a dynamic resistance depending on an internal parameter. We provide a brief historical introduction, as well as an overview of the physical mechanisms that lead to memristive behavior. This review is meant to guide nonpractitioners in the field of memristive circuits and their connection to machine learning and neural computation.
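The internal-parameter picture in this abstract corresponds to the common linear-drift toy model of a memristor, which can be simulated with a few lines of Euler integration. The constants below (`r_on`, `r_off`, the drift rate `k`) are purely illustrative, not fitted to any particular device.

```python
def simulate_memristor(current, dt=1e-3, r_on=100.0, r_off=16e3, k=1e4, x0=0.5):
    """Euler integration of a linear-drift memristor model: the resistance
    R(x) = r_on*x + r_off*(1 - x) depends on an internal state x in [0, 1]
    that drifts with the applied current (dx/dt = k*i)."""
    x = x0
    resistance = []
    for i in current:
        x = min(1.0, max(0.0, x + k * i * dt))   # bounded state update
        resistance.append(r_on * x + r_off * (1.0 - x))
    return resistance

# A sustained positive current drives x toward 1, lowering the resistance;
# the value reached depends on the whole current history, i.e. the device remembers.
rs = simulate_memristor([1e-3] * 50)
assert rs[-1] < rs[0]
```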
5. Liu C, Bellec G, Vogginger B, Kappel D, Partzsch J, Neumärker F, Höppner S, Maass W, Furber SB, Legenstein R, Mayr CG. Memory-Efficient Deep Learning on a SpiNNaker 2 Prototype. Front Neurosci 2018; 12:840. PMID: 30505263; PMCID: PMC6250847; DOI: 10.3389/fnins.2018.00840.
Abstract
The memory requirement of deep learning algorithms is considered incompatible with the memory restrictions of energy-efficient hardware. A low memory footprint can be achieved by pruning obsolete connections or reducing the precision of connection strengths after the network has been trained. Yet, these techniques are not applicable when neural networks have to be trained directly on hardware with hard memory constraints. Deep Rewiring (DEEP R) is a training algorithm that continuously rewires the network while preserving very sparse connectivity throughout the training procedure. We apply DEEP R to a deep neural network implementation on a prototype chip of the second-generation SpiNNaker system. The local memory of a single core on this chip is limited to 64 KB, and a deep network architecture is trained entirely within this constraint without the use of external memory. Throughout training, the proportion of active connections is limited to 1.3%. On the MNIST handwritten digits dataset, this extremely sparse network achieves 96.6% classification accuracy at convergence. Utilizing the multi-processor feature of the SpiNNaker system, we found very good scaling in terms of computation time, per-core memory consumption, and energy consumption. Compared to an x86 CPU implementation, neural network training on the SpiNNaker 2 prototype improves power and energy consumption by two orders of magnitude.
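The rewiring idea behind DEEP R can be sketched as follows. This is a heavily simplified illustration, not the paper's algorithm: DEEP R proper also includes a noise term and an L1 penalty, both omitted here, and the regrowth value `1e-3` is an arbitrary choice.

```python
import random

def deep_r_step(weights, active, grads, lr=0.1):
    """One simplified Deep Rewiring step on a flat weight vector: update
    only the active connections, prune any whose weight would cross zero,
    and regrow the same number at random dormant positions, so the number
    of active connections -- and hence the memory footprint -- stays
    constant."""
    pruned = 0
    for idx in sorted(active):
        weights[idx] -= lr * grads[idx]          # gradient step on active weights
        if weights[idx] <= 0.0:                  # sign constraint violated
            weights[idx] = 0.0
            active.remove(idx)                   # prune this connection
            pruned += 1
    dormant = [i for i in range(len(weights)) if i not in active]
    for idx in random.sample(dormant, pruned):   # regrow elsewhere, at random
        active.add(idx)
        weights[idx] = 1e-3                      # reactivate with a small weight
    return weights, active

w, act = deep_r_step([0.5, 0.0, 0.05, 0.0], {0, 2}, [0.1, 0.0, 1.0, 0.0])
assert len(act) == 2 and 0 in act               # sparsity level is preserved
```

Keeping the active set at a fixed size is what bounds per-core memory on hardware like the SpiNNaker prototype: only the active connections need to be stored.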
Affiliation(s)
- Chen Liu
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
- Guillaume Bellec
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Bernhard Vogginger
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
- David Kappel
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany; Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria; Bernstein Center for Computational Neuroscience, III. Physikalisches Institut - Biophysik, Georg-August Universität, Göttingen, Germany
- Johannes Partzsch
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
- Felix Neumärker
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
- Sebastian Höppner
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
- Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Christian G Mayr
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
6. Detorakis G, Sheik S, Augustine C, Paul S, Pedroni BU, Dutt N, Krichmar J, Cauwenberghs G, Neftci E. Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning. Front Neurosci 2018; 12:583. PMID: 30210274; PMCID: PMC6123384; DOI: 10.3389/fnins.2018.00583.
Abstract
Embedded, continual learning for autonomous and adaptive behavior is a key application of neuromorphic hardware. However, neuromorphic implementations of embedded learning at large scales that are both flexible and efficient have been hindered by the lack of a suitable algorithmic framework. As a result, most neuromorphic hardware is trained off-line on large clusters of dedicated processors or GPUs, and the result is transferred post hoc to the device. We address this by introducing the neural and synaptic array transceiver (NSAT), a neuromorphic computational framework facilitating flexible and efficient embedded learning by matching algorithmic requirements to neural and synaptic dynamics. NSAT supports event-driven supervised, unsupervised and reinforcement learning algorithms, including deep learning. We demonstrate NSAT on a wide range of tasks, including simulation of the Mihalas-Niebur neuron, dynamic neural fields, event-driven random back-propagation for event-based deep learning, event-based contrastive divergence for unsupervised learning, and voltage-based learning rules for sequence learning. We anticipate that this contribution will establish the foundation for a new generation of devices enabling adaptive mobile systems, wearable devices, and robots with data-driven autonomy.
Affiliation(s)
- Georgios Detorakis
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Sadique Sheik
- Biocircuits Institute, University of California, San Diego, La Jolla, CA, United States
- Charles Augustine
- Intel Corporation-Circuit Research Lab, Hillsboro, OR, United States
- Somnath Paul
- Intel Corporation-Circuit Research Lab, Hillsboro, OR, United States
- Bruno U. Pedroni
- Department of Bioengineering and Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Nikil Dutt
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Jeffrey Krichmar
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Gert Cauwenberghs
- Department of Bioengineering and Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Emre Neftci
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
7. Boybat I, Le Gallo M, Nandakumar SR, Moraitis T, Parnell T, Tuma T, Rajendran B, Leblebici Y, Sebastian A, Eleftheriou E. Neuromorphic computing with multi-memristive synapses. Nat Commun 2018; 9:2514. PMID: 29955057; PMCID: PMC6023896; DOI: 10.1038/s41467-018-04933-y.
Abstract
Neuromorphic computing has emerged as a promising avenue towards building the next generation of intelligent computing systems. It has been proposed that memristive devices, which exhibit history-dependent conductivity modulation, could efficiently represent the synaptic weights in artificial neural networks. However, precise modulation of the device conductance over a wide dynamic range, necessary to maintain high network accuracy, is proving to be challenging. To address this, we present a multi-memristive synaptic architecture with an efficient global counter-based arbitration scheme. We focus on phase change memory devices, develop a comprehensive model and demonstrate via simulations the effectiveness of the concept for both spiking and non-spiking neural networks. Moreover, we present experimental results involving over a million phase change memory devices for unsupervised learning of temporal correlations using a spiking neural network. The work presents a significant step towards the realization of large-scale and energy-efficient neuromorphic computing systems. Memristive technology is a promising avenue towards realizing efficient non-von Neumann neuromorphic hardware. Boybat et al. propose a multi-memristive synaptic architecture with a counter-based global arbitration scheme to address challenges associated with non-ideal memristive device behavior.
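The core idea of a multi-memristive synapse with counter-based arbitration can be sketched as follows. This is a simplified illustration under assumed semantics, not the paper's implementation: the class name, the shared-list counter, and the ideal additive conductance updates are all stand-ins.

```python
class MultiMemristiveSynapse:
    """A synaptic weight stored as the sum of several device conductances;
    each programming pulse is applied to a single device selected by a
    shared counter, spreading updates (and wear) across the devices."""

    def __init__(self, n_devices, counter):
        self.g = [0.0] * n_devices   # per-device conductances
        self.counter = counter       # shared one-element list: the global arbiter

    def weight(self):
        return sum(self.g)           # read-out: device conductances sum in parallel

    def update(self, delta):
        idx = self.counter[0] % len(self.g)
        self.g[idx] += delta         # program only the selected device
        self.counter[0] += 1         # advance the shared counter

arbiter = [0]
syn = MultiMemristiveSynapse(3, arbiter)
for _ in range(4):
    syn.update(0.1)                  # four potentiation pulses
assert abs(syn.weight() - 0.4) < 1e-9
assert [round(g, 1) for g in syn.g] == [0.2, 0.1, 0.1]
```

Because the counter is global, successive updates across many synapses land on different devices, which is what averages out device-level nonideality in the architecture described above.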
Affiliation(s)
- Irem Boybat
- IBM Research - Zurich, Säumerstrasse 4, 8803, Rüschlikon, Switzerland; Microelectronic Systems Laboratory, EPFL, Bldg ELD, Station 11, CH-1015, Lausanne, Switzerland
- Manuel Le Gallo
- IBM Research - Zurich, Säumerstrasse 4, 8803, Rüschlikon, Switzerland
- S R Nandakumar
- IBM Research - Zurich, Säumerstrasse 4, 8803, Rüschlikon, Switzerland; Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ, 07102, USA
- Timoleon Moraitis
- IBM Research - Zurich, Säumerstrasse 4, 8803, Rüschlikon, Switzerland
- Thomas Parnell
- IBM Research - Zurich, Säumerstrasse 4, 8803, Rüschlikon, Switzerland
- Tomas Tuma
- IBM Research - Zurich, Säumerstrasse 4, 8803, Rüschlikon, Switzerland
- Bipin Rajendran
- Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ, 07102, USA
- Yusuf Leblebici
- Microelectronic Systems Laboratory, EPFL, Bldg ELD, Station 11, CH-1015, Lausanne, Switzerland
- Abu Sebastian
- IBM Research - Zurich, Säumerstrasse 4, 8803, Rüschlikon, Switzerland
8. Khiat A, Cortese S, Serb A, Prodromakis T. Resistive switching of Pt/TiOx/Pt devices fabricated on flexible Parylene-C substrates. Nanotechnology 2017; 28:025303. PMID: 27924782; DOI: 10.1088/1361-6528/28/2/025303.
Abstract
Pt/TiOx/Pt resistive switching (RS) devices are considered to be amongst the most promising candidates in the memristor family, and transferring the technology to flexible substrates could open the way to new opportunities for flexible memory implementations. Hence, an important goal is to achieve a fully flexible RS memory technology. Nonetheless, several fabrication challenges must be solved before reliable device fabrication and good electronic performance can be achieved. Here, we propose a fabrication method for the successful transfer of the Pt/TiOx/Pt stack onto flexible Parylene-C substrates. The devices were electrically characterised, exhibiting both digital and analogue memory characteristics, obtained by proper adjustment of the pulsing schemes used during testing. This approach could open new application possibilities for these devices in neuromorphic computing, data processing, implantable sensors and bio-compatible neural interfaces.
Affiliation(s)
- Ali Khiat
- Nanoelectronics and Nanotechnology Research Group, Department of Electronics and Computer Science, University of Southampton, University Road, SO17 1BJ, Southampton, UK; Southampton Nanofabrication Centre, University of Southampton, Highfield Campus, Southampton, SO17 1BJ, UK
9. Serb A, Bill J, Khiat A, Berdan R, Legenstein R, Prodromakis T. Unsupervised learning in probabilistic neural networks with multi-state metal-oxide memristive synapses. Nat Commun 2016; 7:12611. PMID: 27681181; PMCID: PMC5056401; DOI: 10.1038/ncomms12611.
Abstract
In an increasingly data-rich world, the need for computing systems that can not only process but ideally also interpret big data is becoming ever more pressing. Brain-inspired concepts have shown great promise towards addressing this need. Here we demonstrate unsupervised learning in a probabilistic neural network that utilizes metal-oxide memristive devices as multi-state synapses. Our approach can be exploited for processing unlabelled data and can adapt to time-varying clusters underlying the incoming data through its capability for reversible unsupervised learning. The potential of this work is showcased through the demonstration of successful learning in the presence of corrupted input data and probabilistic neurons, paving the way towards robust big-data processors.
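The flavor of unsupervised learning with multi-state synapses can be illustrated with a toy winner-take-all step. This sketch is not the paper's network: the discrete weight levels stand in for multi-state memristive conductances, and the deterministic winner selection replaces the probabilistic neurons described above.

```python
def wta_step(weights, x, n_levels=8):
    """One unsupervised winner-take-all step with multi-state synapses:
    each weight occupies one of n_levels discrete levels.  The neuron
    whose weights best match the binary input wins and steps each weight
    one level toward the input, so clusters form without labels; because
    the steps are reversible, the network can track clusters that drift."""
    scores = [sum(wj * xi for wj, xi in zip(w, x)) for w in weights]
    winner = max(range(len(weights)), key=scores.__getitem__)
    for i, xi in enumerate(x):
        if xi:  # potentiate toward active inputs, one level at a time
            weights[winner][i] = min(n_levels - 1, weights[winner][i] + 1)
        else:   # depress weights on inactive inputs
            weights[winner][i] = max(0, weights[winner][i] - 1)
    return winner

w = [[4, 0], [0, 4]]
assert wta_step(w, [1, 0]) == 0   # neuron 0 matches the input and wins
assert w[0] == [5, 0]             # its weights moved one level toward the input
```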
Affiliation(s)
- Alexander Serb
- Electronics and Computer Science Department, University of Southampton, Southampton SO17 1BJ, UK
- Johannes Bill
- Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Heidelberg University, Department of Physics and Astronomy, Kirchhoff Institute for Physics, 69120 Heidelberg, Germany
- Ali Khiat
- Electronics and Computer Science Department, University of Southampton, Southampton SO17 1BJ, UK
- Radu Berdan
- Department of Electrical and Electronic Engineering, Imperial College, London SW7 2AZ, UK
- Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Themis Prodromakis
- Electronics and Computer Science Department, University of Southampton, Southampton SO17 1BJ, UK