1. Wan C, Pei M, Shi K, Cui H, Long H, Qiao L, Xing Q, Wan Q. Toward a Brain-Neuromorphics Interface. Adv Mater 2024:e2311288. PMID: 38339866. DOI: 10.1002/adma.202311288.
Abstract
Brain-computer interfaces (BCIs) that enable human-machine interaction have immense potential for restoring or augmenting human capabilities. Traditional BCIs are built on complementary metal-oxide-semiconductor (CMOS) technology, with complex, bulky circuits of low biocompatibility, and they suffer from the low energy efficiency of the von Neumann architecture. The brain-neuromorphics interface (BNI) offers a promising route to advance BCI technologies and reshape interactions with machines. Neuromorphic devices and systems can provide substantial computational power with extremely high energy efficiency by implementing in-materia computing, such as in situ vector-matrix multiplication (VMM) and physical reservoir computing. Recent progress in integrating neuromorphic components with sensing and/or actuating modules has given rise to neuromorphic afferent nerves, efferent nerves, sensorimotor loops, and related systems, advancing future neurorobotics by achieving sensorimotor capabilities as sophisticated as those of biological systems. With the development of compact artificial spiking neurons and bioelectronic interfaces, seamless communication between a BNI and a biological entity is a reasonable expectation. In this review, upcoming BNIs are profiled by introducing the brief history of neuromorphics, reviewing recent progress in related areas, and discussing the advances and challenges that lie ahead.
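The in situ VMM this abstract mentions can be sketched numerically. The following is an illustrative toy model only (the array size and conductance values are invented, not taken from the review): weights are stored as analog conductances in a crossbar, and Kirchhoff's current law performs the multiply-accumulate physically, since the current collected on each output column is the weighted sum of the input row voltages.

```python
import numpy as np

# Hypothetical 3x4 crossbar: each synaptic weight is stored as a device
# conductance G[i, j] (siemens); inputs arrive as row voltages V (volts).
G = np.array([[1.0e-6, 2.0e-6, 0.5e-6, 1.5e-6],
              [3.0e-6, 1.0e-6, 2.0e-6, 0.5e-6],
              [2.0e-6, 2.0e-6, 1.0e-6, 1.0e-6]])
V = np.array([0.1, 0.0, 0.2])

# Kirchhoff's current law performs the multiply-accumulate in one physical
# step: the current on output column j is I_j = sum_i V_i * G[i, j].
I = V @ G

# The same result computed explicitly, column by column, for comparison:
I_check = np.array([sum(V[i] * G[i, j] for i in range(3)) for j in range(4)])
```

In a real device the matrix product is obtained in a single analog read, which is where the energy advantage over a von Neumann multiply-accumulate loop comes from.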
Affiliation(s)
- Changjin Wan
- Yongjiang Laboratory (Y-LAB), Ningbo, Zhejiang, 315202, China
- School of Electronic Science and Engineering, National Laboratory of Solid-State Microstructures, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, 210093, China
- Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Mengjiao Pei
- School of Electronic Science and Engineering, National Laboratory of Solid-State Microstructures, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, 210093, China
- Kailu Shi
- School of Electronic Science and Engineering, National Laboratory of Solid-State Microstructures, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, 210093, China
- Hangyuan Cui
- School of Electronic Science and Engineering, National Laboratory of Solid-State Microstructures, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, 210093, China
- Haotian Long
- School of Electronic Science and Engineering, National Laboratory of Solid-State Microstructures, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, 210093, China
- Lesheng Qiao
- School of Electronic Science and Engineering, National Laboratory of Solid-State Microstructures, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, 210093, China
- Qianye Xing
- School of Electronic Science and Engineering, National Laboratory of Solid-State Microstructures, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, 210093, China
- Qing Wan
- Yongjiang Laboratory (Y-LAB), Ningbo, Zhejiang, 315202, China
- School of Electronic Science and Engineering, National Laboratory of Solid-State Microstructures, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, 210093, China
- Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
2. Castagnetti A, Pegatoquet A, Miramond B. SPIDEN: deep Spiking Neural Networks for efficient image denoising. Front Neurosci 2023; 17:1224457. PMID: 37638316. PMCID: PMC10450950. DOI: 10.3389/fnins.2023.1224457.
Abstract
In recent years, deep convolutional neural networks (DCNNs) have surpassed classical algorithms on image restoration tasks. However, most of these methods are not designed for computational efficiency. In this work, we investigate spiking neural networks (SNNs) for the specific and largely unexplored case of image denoising, with the goal of matching the performance of conventional DCNNs while reducing the computational cost. This task is challenging for two reasons. First, since denoising is a regression task, the network has to predict a continuous value (i.e., the noise amplitude) for each pixel of the image with high precision. Moreover, state-of-the-art results have been obtained with deep networks, which are notably difficult to train in the spiking domain. To overcome these issues, we propose a formal analysis of the information conversion carried out by integrate-and-fire (IF) spiking neurons and formalize the trade-off between conversion error and activation sparsity in SNNs. We then propose, for the first time, an image denoising solution based on SNNs. The networks are trained directly in the spike domain using surrogate gradient learning and backpropagation through time. Experimental results show that the proposed SNN performs close to state-of-the-art CNN-based solutions. Specifically, our SNN achieves a signal-to-noise ratio of 30.18 dB on the Set12 dataset, only 0.25 dB below the equivalent DCNN. Moreover, we show that this performance can be achieved with low latency, i.e., using few timesteps, and with a significant level of sparsity. Finally, we analyze the energy consumption for different network latencies and sizes. We show that the energy consumption of SNNs increases with longer latencies, making them more energy efficient than CNNs only at very small inference latencies. However, we also show that by increasing the network size, SNNs can provide competitive denoising performance while reducing energy consumption by 20%.
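The conversion error the SPIDEN abstract formalizes can be made concrete with a toy integrate-and-fire neuron. This is an illustrative sketch with invented numbers, not the paper's analysis: an IF neuron driven by a constant input effectively quantizes that input into a firing rate, and the quantization (conversion) error shrinks as the number of timesteps grows, which is exactly the latency/precision trade-off at stake for regression tasks like denoising.

```python
def if_rate(x, T, v_th=1.0):
    """Integrate-and-Fire neuron driven by constant input x for T timesteps.
    Returns the firing rate (spike count / T), a quantized version of x."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x                  # integrate the input
        if v >= v_th:
            spikes += 1
            v -= v_th           # reset by subtraction
    return spikes / T

# The rate code can only represent multiples of 1/T, so the conversion
# error decreases as the simulation window T grows.
errors = {T: abs(if_rate(0.37, T) - 0.37) for T in (10, 100, 1000)}
```

With T = 10 the rate is quantized to tenths (error 0.07 for x = 0.37); with T = 1000 the error drops below 0.01, at the cost of proportionally more spikes and latency.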
3. Masominia A, Calvet LE, Thorpe S, Barbay S. Online spike-based recognition of digits with ultrafast microlaser neurons. Front Comput Neurosci 2023; 17:1164472. PMID: 37465646. PMCID: PMC10350502. DOI: 10.3389/fncom.2023.1164472.
Abstract
Classification and recognition tasks performed on photonic hardware-based neural networks often require at least one offline computational step, as in the increasingly popular reservoir computing paradigm. Removing this offline step can significantly improve the response time and energy efficiency of such systems. We present numerical simulations of different algorithms that use ultrafast photonic spiking neurons as receptive fields to allow image recognition without an offline computing step. In particular, we discuss the merits of event-based, spike-time-based, and rank-order-based algorithms adapted to this system. These techniques have the potential to significantly improve the efficiency and effectiveness of optical classification systems, minimizing the number of spiking nodes required for a given task and leveraging the parallelism offered by photonic hardware.
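The spike-time coding these algorithms rely on can be sketched in a few lines. This is a generic latency-coding illustration, not the authors' microlaser model: stronger receptive-field activations fire earlier, so the rank order of first spikes already carries the stimulus information.

```python
import numpy as np

def intensity_to_spike_time(I, t_max=1.0):
    """Latency code: stronger inputs fire earlier (time-to-first-spike)."""
    I = np.asarray(I, dtype=float)
    return t_max * (1.0 - I / I.max())

patch = np.array([0.9, 0.1, 0.5, 1.0])   # invented receptive-field activations
times = intensity_to_spike_time(patch)
order = np.argsort(times)                 # rank order of first spikes
```

Here the brightest input (index 3) spikes at t = 0 and the dimmest last, so a downstream readout that only observes spike order needs no offline post-processing step.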
Affiliation(s)
- Amir Masominia
- Université Paris-Saclay, CNRS, Centre de Nanosciences et de Nanotechnologies, Palaiseau, France
- Simon Thorpe
- CERCO UMR5549, CNRS—Université Toulouse III, Toulouse, France
- Sylvain Barbay
- Université Paris-Saclay, CNRS, Centre de Nanosciences et de Nanotechnologies, Palaiseau, France
4. Castagnetti A, Pegatoquet A, Miramond B. Trainable quantization for Speedy Spiking Neural Networks. Front Neurosci 2023; 17:1154241. PMID: 36937675. PMCID: PMC10020579. DOI: 10.3389/fnins.2023.1154241.
Abstract
Spiking neural networks (SNNs) are considered the third generation of artificial neural networks (ANNs). SNNs perform computation using neurons and synapses that communicate through binary, asynchronous signals known as spikes. They have attracted significant research interest in recent years because their computing paradigm theoretically allows sparse, low-power operation. This hypothetical gain, assumed since the beginning of neuromorphic research, has however been limited by three main factors: the absence of an efficient learning rule competitive with those of classical deep learning, the lack of mature learning frameworks, and substantial data-processing latency that ultimately generates energy overhead. While the first two limitations have recently been addressed in the literature, the major problem of latency remains unsolved. Indeed, information is not exchanged instantaneously between spiking neurons but gradually builds up over time as spikes are generated and propagated through the network. This paper focuses on quantization error, one of the main consequences of the SNN's discrete representation of information. We argue that quantization error is the main source of the accuracy gap between ANNs and SNNs, and we propose an in-depth characterization of SNN quantization noise. We then propose an end-to-end direct learning approach based on a new trainable spiking neuron model that adapts the threshold of neurons during training and implements efficient quantization strategies. This approach better explains the global behavior of SNNs and minimizes quantization noise during training. The resulting SNN can be trained over a limited number of timesteps, reducing latency, while surpassing state-of-the-art accuracy and preserving high sparsity on the main datasets considered in the neuromorphic community.
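Why the threshold matters for quantization noise can be shown with a minimal sketch (invented numbers, not the paper's trainable model): an IF neuron with threshold theta represents an activation as (spike count) * theta / T, so theta sets the quantization step size, and adapting it changes the representable values within a fixed, small number of timesteps T.

```python
def if_quantize(x, T, theta):
    """IF neuron with threshold theta, driven by constant input x for T steps.
    Returns the dequantized activation s * theta / T (s = spike count)."""
    v, s = 0.0, 0
    for _ in range(T):
        v += x
        if v >= theta:
            s += 1
            v -= theta          # reset by subtraction
    return s * theta / T

# With only T = 8 timesteps, the quantization step is theta / T, so a
# smaller threshold can represent x = 0.2 more precisely.
err_coarse = abs(if_quantize(0.2, 8, 1.0) - 0.2)   # step size 0.125
err_fine   = abs(if_quantize(0.2, 8, 0.5) - 0.2)   # step size 0.0625
```

This is only one ingredient of the paper's approach (the actual model learns thresholds jointly with weights via gradient descent), but it shows why a trainable threshold directly controls quantization noise at short latencies.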
5. Guo W, Fouda ME, Eltawil AM, Salama KN. Efficient training of spiking neural networks with temporally-truncated local backpropagation through time. Front Neurosci 2023; 17:1047008. PMID: 37090791. PMCID: PMC10117667. DOI: 10.3389/fnins.2023.1047008.
Abstract
Directly training spiking neural networks (SNNs) has remained challenging due to complex neural dynamics and the intrinsic non-differentiability of firing functions. The well-known backpropagation through time (BPTT) algorithm used to train SNNs suffers from a large memory footprint and prohibits backward and update unlocking, making it impossible to exploit the potential of locally supervised training methods. This work proposes an efficient, direct training algorithm for SNNs that integrates a locally supervised training method with a temporally truncated BPTT algorithm. The proposed algorithm exploits both temporal and spatial locality in BPTT and contributes to a significant reduction in computational cost, including GPU memory utilization, main memory access, and arithmetic operations. We thoroughly explore the design space of temporal truncation length and local training block size and benchmark their impact on the classification accuracy of different networks running different types of tasks. The results reveal that temporal truncation has a negative effect on accuracy when classifying frame-based datasets but improves accuracy on event-based datasets. Despite the resulting information loss, local training is capable of alleviating overfitting, and the combined effect of temporal truncation and local training can slow the accuracy drop or even improve accuracy. In addition, training a deep SNN model such as AlexNet to classify the CIFAR10-DVS dataset yields a 7.26% increase in accuracy, an 89.94% reduction in GPU memory, a 10.79% reduction in memory access, and a 99.64% reduction in MAC operations compared with standard end-to-end BPTT. Thus, the proposed method shows high potential to enable fast, energy-efficient on-chip training for real-time learning at the edge.
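Why temporal truncation can discard most of the cost while losing little gradient is easy to see on a toy leaky integrator (this is a hand-derived illustration with invented constants, not the paper's algorithm): unrolling v_t = a*v_{t-1} + w*x_t for T steps with loss L = v_T gives dL/dw = sum over t of a^(T-t) * x_t, so contributions from steps far in the past are damped by powers of the leak factor a.

```python
# Leaky integrator v_t = a*v_{t-1} + w*x_t unrolled over T steps; with the
# loss L = v_T, the chain rule through the unrolled graph gives
# dL/dw = sum_{t=1..T} a^(T-t) * x_t.
a, T, K = 0.5, 20, 5           # leak factor, full horizon, truncation window
x = [1.0] * T                  # constant input, purely for illustration

# Full BPTT sums gradient contributions over all T steps ...
full_grad = sum(a ** (T - t) * x[t - 1] for t in range(1, T + 1))
# ... while truncated BPTT keeps only the last K steps of the graph.
trunc_grad = sum(a ** (T - t) * x[t - 1] for t in range(T - K + 1, T + 1))
```

With a = 0.5 and K = 5, the truncated gradient already captures about 97% of the full one, while the stored activation window (and hence training memory) shrinks from T to K steps.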
Affiliation(s)
- Wenzhe Guo
- Sensors Lab, Advanced Membranes and Porous Materials Center (AMPMC), Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Communication and Computing Systems Lab, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Mohammed E. Fouda
- Center for Embedded & Cyber-Physical Systems, University of California, Irvine, Irvine, CA, United States
- Ahmed M. Eltawil
- Communication and Computing Systems Lab, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Center for Embedded & Cyber-Physical Systems, University of California, Irvine, Irvine, CA, United States
- Khaled Nabil Salama
- Sensors Lab, Advanced Membranes and Porous Materials Center (AMPMC), Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Correspondence: Khaled Nabil Salama
6. Liang J, Li R, Wang C, Zhang R, Yue K, Li W, Li Y. A Spiking Neural Network Based on Retinal Ganglion Cells for Automatic Burn Image Segmentation. Entropy (Basel) 2022; 24:1526. PMID: 36359618. PMCID: PMC9689035. DOI: 10.3390/e24111526.
Abstract
Burns are a common traumatic injury. After a severe burn, the body's catabolism increases and burn wounds cause a large loss of body fluid, leading to a high mortality rate. In the early treatment of burn patients, it is therefore essential to calculate the patient's fluid requirement from the percentage of the burn wound area in the total body surface area (TBSA%). However, burn wounds are so complex that clinical assessments show considerable observer variability, making it challenging to locate burn wounds accurately. An objective, accurate method for localizing burn wounds is thus necessary and meaningful. Convolutional neural networks (CNNs) provide a feasible means to this end; however, although CNNs continue to improve accuracy on semantic segmentation tasks, they are often limited by the computing resources of edge hardware, so a lightweight burn wound segmentation model is required. In this work, we constructed a burn image dataset and proposed a U-shaped spiking neural network (SNN) based on retinal ganglion cells (RGC) for segmenting burn and non-burn areas. Moreover, a module with a cross-layer skip-concatenation structure was introduced. Experimental results showed that the pixel accuracy of the proposed model reached 92.89% with only 16.6 MB of network parameters, demonstrating remarkable accuracy while remaining well suited to edge hardware.
7. Bonilla L, Gautrais J, Thorpe S, Masquelier T. Analyzing time-to-first-spike coding schemes: A theoretical approach. Front Neurosci 2022; 16:971937. PMID: 36225737. PMCID: PMC9548614. DOI: 10.3389/fnins.2022.971937.
Abstract
Spiking neural networks (SNNs) using time-to-first-spike (TTFS) codes, in which neurons fire at most once, are appealing for rapid and low power processing. In this theoretical paper, we focus on information coding and decoding in those networks, and introduce a new unifying mathematical framework that allows the comparison of various coding schemes. In an early proposal, called rank-order coding (ROC), neurons are maximally activated when inputs arrive in the order of their synaptic weights, thanks to a shunting inhibition mechanism that progressively desensitizes the neurons as spikes arrive. In another proposal, called NoM coding, only the first N spikes of M input neurons are propagated, and these "first spike patterns" can be read out by downstream neurons with homogeneous weights and no desensitization: as a result, the exact order between the first spikes does not matter. This paper also introduces a third option, "Ranked-NoM" (R-NoM), which combines features from both ROC and NoM coding schemes: only the first N input spikes are propagated, but their order is read out by downstream neurons thanks to inhomogeneous weights and linear desensitization. The unifying mathematical framework allows the three codes to be compared in terms of discriminability, which measures to what extent a neuron responds more strongly to its preferred input spike pattern than to random patterns. This discriminability turns out to be much higher for R-NoM than for the other codes, especially in the early phase of the responses. We also argue that R-NoM is much more hardware-friendly than the original ROC proposal, although NoM remains the easiest to implement in hardware because it only requires binary synapses.
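The rank-order mechanism and the notion of discriminability can be sketched numerically. This is an illustrative toy (invented weights and desensitization factors, not the paper's framework): a readout neuron whose weights decrease with preferred rank, combined with geometric desensitization, responds maximally exactly when spikes arrive in the preferred order, and random spike orders score lower.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                                    # number of input neurons
w = np.arange(M, 0, -1, dtype=float)     # weights: earlier-preferred inputs weigh more
mod = 0.5 ** np.arange(M)                # shunting desensitization: 1, 0.5, 0.25, ...

def roc_response(order):
    """Rank-order response: the k-th arriving spike comes from input order[k]
    and contributes w[order[k]] * mod[k]."""
    return sum(w[order[k]] * mod[k] for k in range(M))

preferred = list(range(M))               # spikes arrive exactly in weight order
best = roc_response(preferred)
random_scores = [roc_response(rng.permutation(M)) for _ in range(1000)]
```

By the rearrangement inequality, the preferred order pairs the largest weights with the least-desensitized ranks, so no permutation can beat it; the gap between `best` and the random-score distribution is a crude analogue of the discriminability the paper quantifies.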
Affiliation(s)
- Lina Bonilla
- CERCO UMR5549, CNRS – Université Toulouse III, Toulouse, France
- Correspondence: Lina Bonilla
- Jacques Gautrais
- Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, Toulouse, France
- CNRS, UPS, Toulouse, France
- Simon Thorpe
- CERCO UMR5549, CNRS – Université Toulouse III, Toulouse, France
8. Chu Y, Li P, Bai Y, Hu Z, Chen Y, Lu J. Group channel pruning and spatial attention distilling for object detection. Appl Intell 2022. DOI: 10.1007/s10489-022-03293-x.
9. Kim Y, Panda P. Optimizing Deeper Spiking Neural Networks for Dynamic Vision Sensing. Neural Netw 2021; 144:686-698. PMID: 34662827. DOI: 10.1016/j.neunet.2021.09.022.
Abstract
Spiking Neural Networks (SNNs) have recently emerged as a new generation of low-power deep neural networks owing to sparse, asynchronous, binary event-driven processing. Most previous deep SNN optimization methods focus on static datasets (e.g., MNIST) from conventional frame-based cameras, whereas optimization techniques for event data from Dynamic Vision Sensor (DVS) cameras are still in their infancy. Most prior SNN techniques handling DVS data are limited to shallow networks and thus show low performance. Generally, we observe that the integrate-and-fire behavior of spiking neurons diminishes spike activity in deeper layers, and this sparse spike activity results in a sub-optimal solution during training (i.e., performance degradation). To address this limitation, we propose novel algorithmic and architectural advances to accelerate the training of very deep SNNs on DVS data. Specifically, we propose Spike Activation Lift Training (SALT), which increases spike activity across all layers by optimizing both weights and thresholds in convolutional layers; after applying SALT, we train the weights based on the cross-entropy loss. SALT helps the networks convey ample information across all layers during training and therefore improves performance. Furthermore, we propose a simple and effective architecture, called Switched-BN, which exploits Batch Normalization (BN). Previous work shows that standard BN is incompatible with the temporal dynamics of SNNs; therefore, in the Switched-BN architecture, we apply BN to the last layer of an SNN after accumulating all the spikes from the previous layer with a spike voltage accumulator (i.e., converting temporal spike information to a float value). Even though we apply BN in just one layer of the SNN, our results demonstrate a considerable performance gain without significant computational overhead. Through extensive experiments, we show the effectiveness of SALT and Switched-BN for training very deep SNNs from scratch on various benchmarks including DVS-CIFAR10, N-Caltech, DHP19, CIFAR10, and CIFAR100. To the best of our knowledge, this is the first work to show state-of-the-art performance with deep SNNs on DVS data.
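The spike voltage accumulator idea behind Switched-BN can be sketched as follows. This is a minimal illustration with invented shapes and spike statistics, not the paper's architecture: binary spike trains are summed over the time axis into one float per neuron, and standard BN is then applied once to that accumulated value, sidestepping BN's incompatibility with per-timestep spike dynamics.

```python
import numpy as np

def spike_voltage_accumulator(spikes):
    """Sum binary spikes over the time axis -> one float count per neuron."""
    return spikes.sum(axis=0)

def batch_norm(x, eps=1e-5):
    """Plain batch normalization (no learned scale/shift, for illustration)."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

T, B, C = 10, 4, 3                       # timesteps, batch size, channels
rng = np.random.default_rng(1)
spikes = (rng.random((T, B, C)) < 0.3).astype(float)   # toy binary spike trains

acc = spike_voltage_accumulator(spikes)  # shape (B, C), float spike counts
out = batch_norm(acc)                    # BN applied once, after accumulation
```

Because BN sees only the time-accumulated floats, it never has to normalize individual binary timesteps, which is the incompatibility the abstract refers to.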
Affiliation(s)
- Youngeun Kim
- Department of Electrical Engineering, Yale University, New Haven, CT, USA.
10. Spaeth A, Tebyani M, Haussler D, Teodorescu M. Spiking neural state machine for gait frequency entrainment in a flexible modular robot. PLoS One 2020; 15:e0240267. PMID: 33085673. PMCID: PMC7577446. DOI: 10.1371/journal.pone.0240267.
Abstract
We propose a modular architecture for neuromorphic closed-loop control based on bistable relaxation oscillator modules consisting of three spiking neurons each. Like its biological prototypes, this basic component is robust to parameter variation but can be modulated by external inputs. By combining these modules, we can construct a neural state machine capable of generating the cyclic or repetitive behaviors necessary for legged locomotion. A concrete case study for the approach is provided by a modular robot constructed from flexible plastic volumetric pixels, in which we produce a forward crawling gait entrained to the natural frequency of the robot by a minimal system of twelve neurons organized into four modules.
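The kind of cyclic behavior such oscillator modules generate can be illustrated with an even simpler construction. The sketch below is a two-neuron half-center oscillator with invented parameters, deliberately smaller than the paper's three-neuron bistable modules: two leaky-free integrate-and-fire neurons under tonic drive inhibit each other, and the mutual inhibition makes them fire in strict alternation, the basic rhythm needed for a gait.

```python
def simulate(steps=30, drive=0.3, theta=1.0, inh=0.6):
    """Two mutually inhibiting integrate-and-fire neurons: a minimal
    half-center oscillator. Returns a list of (timestep, neuron) spikes."""
    v = [0.2, 0.0]                     # asymmetric start breaks the symmetry
    pending = [0.0, 0.0]               # inhibition arriving this step
    events = []
    for t in range(steps):
        fired = []
        for i in (0, 1):
            v[i] = max(0.0, v[i] + drive - pending[i])
            if v[i] >= theta:
                v[i] = 0.0             # reset on spike
                fired.append(i)
        pending = [0.0, 0.0]
        for i in fired:
            events.append((t, i))
            pending[1 - i] = inh       # cross-inhibit the partner next step
    return events

events = simulate(30)
```

With these constants the two units settle into a period-6 rhythm with perfectly alternating spikes; modulating `drive` changes the period, which is the handle a frequency-entrainment scheme would use.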
Affiliation(s)
- Alex Spaeth
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, California, United States of America
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, California, United States of America
- Maryam Tebyani
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, California, United States of America
- David Haussler
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, California, United States of America
- Howard Hughes Medical Institute, University of California, Santa Cruz, Santa Cruz, California, United States of America
- Mircea Teodorescu
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, California, United States of America
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, California, United States of America
11. Dutta S, Schafer C, Gomez J, Ni K, Joshi S, Datta S. Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges. Front Neurosci 2020; 14:634. PMID: 32670012. PMCID: PMC7327100. DOI: 10.3389/fnins.2020.00634.
Abstract
The two possible pathways toward artificial intelligence (AI), namely (i) neuroscience-oriented neuromorphic computing, such as spiking neural networks (SNNs), and (ii) computer-science-driven machine learning, such as deep learning, differ widely in their fundamental formalism and coding schemes (Pei et al., 2019). Deviating from the traditional deep learning approach of relying on neuron models with static nonlinearities, SNNs attempt to capture brain-like features such as computation using spikes, which holds the promise of improving the energy efficiency of computing platforms. To achieve far higher areal and energy efficiency than today's hardware implementations of SNNs, we need to go beyond the traditional route of CMOS-based digital or mixed-signal neuronal circuits and the segregation of computation and memory under the von Neumann architecture. Recently, ferroelectric field-effect transistors (FeFETs) have been explored as a promising alternative for building neuromorphic hardware, exploiting their non-volatile nature and rich polarization-switching dynamics. In this work, we propose an all-FeFET-based SNN hardware that allows low-power spike-based information processing and co-localized memory and computing (a.k.a. in-memory computing). We experimentally demonstrate the essential neuronal and synaptic dynamics in a 28 nm high-K metal-gate FeFET technology. Furthermore, drawing inspiration from the traditional machine learning approach of optimizing a cost function to adjust synaptic weights, we implement a surrogate gradient (SG) learning algorithm on our SNN platform that allows us to perform supervised learning on the MNIST dataset, providing a pathway toward energy-efficient neuromorphic hardware that can support traditional machine learning algorithms. Finally, we undertake synergistic device-algorithm co-design by accounting for the impact of device-level variation (stochasticity) and the limited bit precision of on-chip synaptic weights (available analog states) on classification accuracy.
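The surrogate gradient trick mentioned above can be sketched in a few lines. This is a generic illustration (the fast-sigmoid surrogate and the constant beta = 5 are common choices in the SG literature, not necessarily the exact function used on the FeFET platform): the forward pass uses the non-differentiable Heaviside firing function, while the backward pass substitutes a smooth function peaked at the threshold.

```python
import numpy as np

def spike(v, theta=1.0):
    """Forward pass: non-differentiable Heaviside firing function."""
    return (v >= theta).astype(float)

def surrogate_grad(v, theta=1.0, beta=5.0):
    """Backward pass: fast-sigmoid surrogate for d(spike)/dv, peaked at the
    threshold; beta controls how sharply it concentrates there."""
    return beta / (2.0 * (1.0 + beta * np.abs(v - theta)) ** 2)

v = np.linspace(0.0, 2.0, 5)   # membrane potentials 0, 0.5, 1, 1.5, 2
s = spike(v)                   # binary outputs used in the forward pass
g = surrogate_grad(v)          # smooth pseudo-derivative used for learning
```

Because the surrogate is largest for neurons near threshold, weight updates concentrate on the neurons whose firing decision is actually sensitive to the input, which is what makes gradient-based supervised learning possible on spiking hardware.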
Affiliation(s)
- Sourav Dutta
- Department of Electrical Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
- Clemens Schafer
- Department of Computer Science and Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
- Jorge Gomez
- Department of Electrical Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
- Kai Ni
- Department of Microsystems Engineering, Rochester Institute of Technology, Rochester, NY, United States
- Siddharth Joshi
- Department of Computer Science and Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States
- Suman Datta
- Department of Electrical Engineering, College of Engineering, University of Notre Dame, Notre Dame, IN, United States