tinyML is a fast-growing initiative around low-power machine-learning technologies for edge devices. The scope of tinyML naturally aligns with the field of neuromorphic engineering, whose purpose is to replicate and exploit the way biological systems sense and process information within constrained resources.
In order to build on these synergies, we are excited to announce the first tinyML Forum on Neuromorphic Engineering. During this event, key experts from academia and industry will introduce the main trends in neuromorphic hardware, algorithms, sensors, systems, and applications.
Pacific Time Zone
8:00 am to 8:35 am
Short welcome - Intro to tinyML and event
Neuromorphic intelligence and learning in robotics
Yulia SANDAMIRSKAYA, Applications Research Lead, Intel Labs
Robotics is a flagship use case for more power-, time-, and data-efficient AI. Mobile robots in particular have no time, space, or energy to lose, and require advanced cognitive and spatial processing to move and solve tasks in real-world, dynamic, and unstructured environments. I will present an overview of recent on-chip learning and processing algorithms for perception, planning, and control on neuromorphic hardware, implemented using the open-source software framework Lava and targeting Intel’s neuromorphic research chip Loihi 2.
8:35 am to 8:55 am
The SpiNNaker neuromorphic computing platform
Steve FURBER, ICL Professor of Computer Engineering, The University of Manchester
SpiNNaker (a contraction of Spiking Neural Network Architecture) is a digital many-core neuromorphic computing platform designed primarily to support large-scale models of brain networks in biological real time. In conception for over 20 years and in construction for over 15 years, the million-core SpiNNaker machine at Manchester has been supporting an open neuromorphic computing service under the auspices of the EU Human Brain Project since April 2016, and has been used for real-time modelling of a detailed cortical microcircuit, cerebellar models, the basal ganglia, and other brain areas. In addition to its use in brain modelling, its real-time characteristics render it useful for neurorobotic and other engineering applications. A second-generation SpiNNaker chip has been developed in collaboration with TU Dresden, offering a 10x improvement in functional density and energy efficiency; first silicon is currently being used to support software development. SpiNNaker2 offers state-of-the-art neuromorphic performance and efficiency in a very flexible configuration, building on a decade of experience of deploying SpiNNaker1.
Digital spiking neural network accelerators for neuromorphic edge intelligence
Charlotte FRENKEL, Assistant professor, Delft University of Technology
Taking inspiration from biology is a promising avenue toward endowing edge devices with the ability to adapt, autonomously and within a tight power budget, to their users and environments. In this talk, we will see how custom digital online-learning spiking neural network accelerators support the deployment of neuromorphic edge intelligence by providing a favorable tradeoff between power, performance, area, robustness, flexibility, scalability, and design time.
8:55 am to 9:35 am
Towards “Greener” AI on the Edge: Energy-Efficient Neuromorphic Learning and Inference
Gert CAUWENBERGHS, Professor of Bioengineering, Institute for Neural Computation, UC San Diego
We present neuromorphic cognitive computing systems-on-chip implemented in custom silicon compute-in-memory neural and synaptic crossbar array architectures. These combine the efficiency of local interconnects with flexibility and sparsity in global interconnects, and realize a wide class of deeply layered and recurrent neural network topologies with embedded local plasticity for online learning, at a fraction of the computational and energy cost of implementation on CPU and GPGPU platforms. Adiabatic energy recycling in charge-mode crossbar arrays permits extreme scaling in energy efficiency, approaching that of synaptic transmission in the mammalian brain.
The role of neuromorphic analog computing in solutions for Industry 4.0
Alexander TIMOFEEV, Founder and Chief Executive Officer, Polyn.ai
Vibration-based condition monitoring, which detects machine failures, is one of the basic predictive-maintenance options used in many sensors. Different types of vibration measurements capture displacement, velocity, and acceleration using technologies such as piezoelectric sensors, microelectromechanical sensors, and many others. Today it is the most popular solution implemented by most sensor node makers.
The power-hungry sensor node collects a lot of data for further analysis by machine learning (ML) algorithms. Sending all of this data to the cloud would cost more in communication than it is worth. Data reduction can significantly decrease the amount of data sent to the cloud, saving OPEX and improving latency.
NASP technology enables the generation of a digital imprint (signature) corresponding to different signals — a unique digital array generated by the NASP neural core from the vibration sensor data flow. Digital imprint analysis at the application level makes it possible to identify unusual signal patterns from any source of vibration. The NASP chip is integrated into the sensor node for vibration signal pre-processing. It reduces sensor node power requirements and data bandwidth allocation, enables long-range communications, and improves cloud TCO.
Tiny spiking neural networks for sub-milliwatt AI at the sensor-edge
Sumeet KUMAR, CEO, Innatera
The brain relies on tiny spiking neural networks for sparse, robust, and highly energy-efficient processing of sensory data. In this talk, Innatera CEO Sumeet Kumar explores neuromorphic processing for always-on sensing applications with the company’s Spiking Neural Processor (SNP). The SNP implements a revolutionary architecture for energy-efficient inference of spiking neural networks, enabling full-featured AI applications within a sub-milliwatt power and sub-millisecond latency envelope. Innatera radically simplifies application development through the Talamo software development kit, which brings the power and ease of use of PyTorch to spiking neural networks. Through this combination of energy-efficient neuromorphic silicon and industry-standard software tooling, Innatera’s Spiking Neural Processor enables unprecedented AI capabilities in a wide range of sensor-edge applications in the consumer and industrial domains.
9:35 am to 9:45 am
9:45 am to 10:25 am
Hardware Friendly Learning for Edge ML
Ralph ETIENNE-CUMMINGS, Professor of Electrical and Computer Engineering, Johns Hopkins University
Realizing Hebbian plasticity in large-scale neuromorphic systems is essential for reconfiguring synapses during recognition tasks. Spike-timing-dependent plasticity (STDP), as a tool to this effect, has received a lot of attention in recent years. This phenomenon encodes weight-update information as correlations between presynaptic and postsynaptic event times; as such, it is imperative for each synapse in a silicon neural network to somehow track its activity and keep its own time. Carefully designed synapses that can do this can be incorporated into compact, dense, and energy-efficient learning systems.
Here we present a biologically plausible and optimized Register Transfer Level (RTL) and algorithmic approach to realizing nearest-neighbor STDP, with temporal tracking and management handled by the postsynaptic dendrite on which the synapse sits. We adopt a time-constant-based ramp approximation for ease of RTL implementation and incorporation in large-scale digital neuromorphic systems. We will describe the architecture, circuits, and function of our hardware-realizable STDP-based learning system.
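As a loose illustration of nearest-neighbor STDP with a ramp-shaped trace, the following sketch computes the weight update for a single pre/post spike pair. The amplitudes and window are hypothetical parameters chosen for the example; this is not the RTL design described in the talk.

```python
# Illustrative nearest-neighbor STDP with a linear-ramp trace
# (hypothetical parameters; not the RTL implementation from the talk).
A_PLUS = 0.010    # potentiation amplitude
A_MINUS = 0.012   # depression amplitude
WINDOW = 50.0     # ramp duration in ms, standing in for a time constant

def stdp_dw(t_pre, t_post):
    """Weight change for the nearest pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:  # pre before post -> potentiation, decaying linearly
        return A_PLUS * max(0.0, 1.0 - dt / WINDOW)
    # post before pre -> depression, decaying linearly
    return -A_MINUS * max(0.0, 1.0 + dt / WINDOW)
```

The linear ramp replaces the usual exponential trace with a single subtraction and comparison, which is what makes this style of rule cheap to realize in digital logic.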
Training spiking neural networks end-to-end with surrogate gradients
Friedemann ZENKE, Research group leader, Friedrich Miescher Institute for Biomedical Research
Brains rely on spiking neural networks for ultra-low-power information processing. Integrating similar efficiency into artificial intelligence requires learning algorithms to instantiate complex spiking neural networks and brain-inspired neuromorphic hardware to emulate them efficiently. To this end, I will briefly introduce surrogate gradients as a general framework for training spiking neural networks end-to-end, showcase their power for instantiating spiking neural networks with sparse activity, and demonstrate their use on analog neuromorphic hardware. I will also outline a deep link between approximate surrogate gradients and a family of bio-inspired online learning rules.
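A minimal sketch of the surrogate-gradient idea: the forward pass keeps the hard, non-differentiable spike nonlinearity, while the backward pass substitutes a smooth derivative. The "fast sigmoid" surrogate, threshold, and steepness below are assumed for illustration; the talk covers the general framework.

```python
import numpy as np

THETA = 1.0   # firing threshold (assumed)
BETA = 10.0   # surrogate steepness (assumed)

def spike_forward(v):
    """Forward pass: hard, non-differentiable Heaviside spike function."""
    return (v >= THETA).astype(float)

def spike_surrogate_grad(v):
    """Backward pass: a smooth 'fast sigmoid' surrogate derivative used
    in place of the Heaviside's zero-almost-everywhere true gradient."""
    return 1.0 / (BETA * np.abs(v - THETA) + 1.0) ** 2
```

In autograd frameworks this pair is typically wrapped in a custom function whose backward pass returns the surrogate derivative, so standard backpropagation can flow through the spiking nonlinearity.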
Enabling Neuromorphic Learning Machines with Meta-learning
Emre NEFTCI, Director, Jülich Research Centre
The data-intensive and randomized learning process that characterizes state-of-the-art Spiking Neural Network (SNN) training is incompatible with the physical nature and real-time operation of the brain and neuromorphic hardware. Bi-level learning, such as meta-learning, can be used in deep learning to overcome these limitations. This talk will introduce gradient-based meta-learning methods, namely Model-Agnostic Meta-Learning (MAML), in SNNs in conjunction with the surrogate gradient method. I will further discuss 1) the hardware advantages that accrue from meta-learning: fast learning without the requirement of high-precision weights or gradients, training-to-learn with quantization, and mitigating the effects of approximate synaptic plasticity rules; 2) the requirements with respect to datasets; and 3) how meta-learning can enable new neuromorphic learning technologies for real-world problems.
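To make the bi-level structure concrete, here is a toy MAML step on one-dimensional quadratic tasks. This is entirely illustrative (hypothetical learning rates, toy losses); the talk applies MAML to SNNs trained with surrogate gradients.

```python
# Toy MAML on 1-D quadratic tasks: each task i has loss L_i(w) = (w - c_i)^2.
# Illustrative only; the learning rates are hypothetical.
ALPHA = 0.1         # inner-loop (adaptation) learning rate
BETA_OUTER = 0.05   # outer-loop (meta) learning rate

def inner_update(w, c):
    """One gradient step of task adaptation: w' = w - ALPHA * dL/dw."""
    return w - ALPHA * 2.0 * (w - c)

def maml_step(w, task_centers):
    """One meta-update: differentiate the post-adaptation loss w.r.t. w.
    For this quadratic, d w'/dw = 1 - 2*ALPHA, so the chain rule is exact."""
    outer_grad = 0.0
    for c in task_centers:
        w_adapted = inner_update(w, c)
        outer_grad += 2.0 * (w_adapted - c) * (1.0 - 2.0 * ALPHA)
    return w - BETA_OUTER * outer_grad / len(task_centers)
```

The key point is that the meta-gradient is taken through the inner adaptation step, so the initialization w is optimized for how well it learns, not how well it performs before adaptation.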
Exploring Robustness and Efficiency in Neural Systems with Spike-based Machine Intelligence
Priya PANDA, Assistant Professor, Yale University
Spiking Neural Networks (SNNs) have recently emerged as an alternative to deep learning due to their large energy-efficiency benefits on neuromorphic hardware. In this presentation, I will talk about important techniques for training SNNs that bring large benefits in terms of latency, accuracy, interpretability, and robustness. We will first delve into how training is performed in SNNs. Training SNNs with surrogate gradients offers computational benefits due to short latency. However, due to the non-differentiable nature of spiking neurons, training becomes problematic, and surrogate methods have thus been limited to shallow networks. To address this training issue with surrogate gradients, we will go over a recently proposed method, Batch Normalization Through Time (BNTT), that allows us to train SNNs from scratch with very low latency and enables us to target interesting applications like video segmentation, as well as scenarios beyond traditional learning, like federated training. Another critical limitation of SNNs is the lack of interpretability. While a considerable amount of attention has been given to optimizing SNNs, the development of explainability is still in its infancy. I will talk about our recent work on a bio-plausible visualization tool for SNNs, called Spike Activation Map (SAM), compatible with BNTT training. The proposed SAM highlights spikes with short inter-spike intervals, which contain discriminative information for classification. Finally, with the proposed BNTT and SAM, I will highlight the robustness of SNNs with respect to adversarial attacks. In the end, time permitting, I will talk about interesting prospects of SNNs for non-conventional learning scenarios such as privacy-preserving distributed learning, as well as unraveling temporal correlation in SNNs with feedback connections.
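A simplified sketch of the per-timestep normalization idea behind BNTT: each timestep of the unrolled SNN gets its own batch statistics and its own learnable scale. The shapes and the restriction to a scale-only affine are assumptions for this example; the full method is described in the speaker's work.

```python
import numpy as np

def bntt(x, gamma, eps=1e-5):
    """Per-timestep batch normalization: x has shape (T, batch, features),
    and each timestep t is normalized with its own batch statistics and
    its own learnable scale gamma[t] (shape (T, features))."""
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        mu = x[t].mean(axis=0)
        var = x[t].var(axis=0)
        out[t] = gamma[t] * (x[t] - mu) / np.sqrt(var + eps)
    return out
```

Because spike statistics change over the simulation window, sharing one set of batch-norm statistics across all timesteps (as a naive port of BN would) mismatches early and late timesteps; per-timestep statistics avoid that.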
10:25 am to 10:45 am
Neuromorphic Event-based Vision
Christoph POSCH, CTO, PROPHESEE
Neuromorphic Event-based (EB) vision is an emerging paradigm of acquisition and processing of visual information that takes inspiration from the functioning of the human visual system, trying to recreate its visual information acquisition and processing operations on VLSI silicon chips. In contrast to conventional image sensors, EB sensors do not use one common sampling rate (i.e., frame rate) for all pixels; instead, each pixel defines the timing of its own sampling points in response to its visual input by reacting to changes in the amount of incident light. The highly efficient way of acquiring sparse data, the high temporal resolution, and the robustness to uncontrolled lighting conditions are characteristics of the event sensing process that make EB vision attractive for numerous applications in the industrial, surveillance, IoT, AR/VR, and automotive domains. This short presentation will give an introduction to EB sensing technology and highlight a few exemplary use cases.
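The per-pixel sampling principle can be sketched with a toy model: a pixel emits an ON or OFF event whenever its log-intensity moves by more than a contrast threshold from its last reference level. The threshold value is assumed for illustration; real EB pixels implement this in analog circuitry.

```python
import math

C = 0.15  # contrast threshold (assumed value)

class EventPixel:
    """Toy model of a single event-based pixel: emits ON (+1) or OFF (-1)
    events whenever log-intensity moves by more than C from the last
    reference level, instead of being sampled at a fixed frame rate."""
    def __init__(self, intensity):
        self.ref = math.log(intensity)

    def observe(self, intensity, t):
        events = []
        log_i = math.log(intensity)
        while abs(log_i - self.ref) >= C:
            polarity = 1 if log_i > self.ref else -1
            self.ref += polarity * C     # move reference one threshold step
            events.append((t, polarity))
        return events
```

A static scene produces no events at all, which is where the sparsity and low data rate of EB sensing come from.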
New Architecture for Visual AI, Oculi Technology Enables Edge Solutions At The Speed Of Machines With The Efficiency of Biology
Charbel RIZK, Founder and CEO, Oculi Inc.
Oculi is putting the “Human Eye” in AI: machines outperform humans in most tasks, but human vision remains far superior, delivering the actionable signal in real time while consuming only milliwatts. As biology and nature have inspired much of our technological innovation, developing vision technology that mimics the eye+brain architecture is the logical path. Unlike the photos and videos we collect for personal consumption, machine vision is not about pretty images and the largest number of pixels. Machine vision should extract the “best” actionable information very efficiently (in time and energy) from the available signal (photons). At Oculi, we have developed a new architecture for computer and machine vision that enables dynamic, real-time optimization. The core of this disruptive approach is the Oculi SPU (Sensing & Processing Unit), a software-defined vision sensor combining sensing and processing at the pixel, the true edge for imaging sensors. This presentation will highlight the novel architecture and provide example use cases that are uniquely positioned for tinyML.
10:45 am to 10:55 am
10:55 am to 11:45 am
Event driven signal processing
Sadique SHEIK, Senior Director, Algorithms and Applications, SynSense AG
One of the challenges for neuromorphic engineering over the last decade has been the search for applications that can benefit from neuromorphic sensors and processors. I will give a quick overview of the various application domains and opportunities that SynSense is focused on. I will talk about our development pipeline and show some quick demonstrations of how neuromorphic devices designed at SynSense can be used for real-world applications. I will conclude with some of the current challenges in developing and training applications for event-driven systems.
Fully Spike-based Architecture with Front-end Dynamic Vision Sensor and Back-end Spiking Neural Network
Jae-sun SEO, Associate Professor, Arizona State University
Spiking neural networks (SNNs) mimic the operations in biological nervous systems. By exploiting event-driven computation and data communication, SNNs can achieve very low power consumption. However, two important issues have persisted: (1) directly training SNNs has not resulted in competitive inference accuracy; (2) non-spike inputs (e.g. natural images) need to be converted to trains of spikes, which results in long latency. To exploit event-driven end-to-end operations, integrating spike-based front-end sensors such as dynamic vision sensors (DVS) with back-end SNNs becomes ideal. In addition, it is crucial to have a backpropagation-based training algorithm that can directly train SNNs on continuous input spikes from DVS output. Several works from the literature featuring such neuromorphic algorithms and custom hardware designs will be presented, and design trade-offs such as spike sparsity and inference accuracy will be discussed.
tinyML In-filter Computing using Neuromorphic Cochlea
Chetan SINGH THAKUR, Assistant Professor, Indian Institute of Science (IISc)
Edge devices are often constrained by the available computational power and hardware resources. We present a novel in-filter computing framework that can be used for designing ultra-light classifiers for time-series data. Unlike a conventional pattern recognizer, where the feature extraction and classification are designed independently, this architecture directly integrates the convolution and nonlinear filtering operations into the kernels of a Support Vector Machine (SVM). The result of this integration is a template-based SVM, which does not require the SVM kernel to be positive-definite and allows the user to define memory constraints in terms of fixed template vectors. Here, we use the neuromorphic cochlea as a kernel in our template-based SVM formulation, where it also acts as a feature extractor for time-series data. We prototyped the proposed system on an FPGA and a Cortex-M4 MCU for multiple ecological and healthcare applications using acoustic and IMU sensors.
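The decision rule of a template-based SVM can be sketched as follows. The correlation kernel here is a hypothetical stand-in for the neuromorphic cochlea filter bank used in the talk, and it illustrates why positive-definiteness is not required: the kernel is just a filter response to a stored template.

```python
# Template-based SVM decision function (illustrative sketch; the talk's
# kernel is a neuromorphic cochlea filter bank, not this correlation).
def kernel(template, x):
    """Filter-like response of a stored template to input x; unlike a
    classical SVM kernel, it need not be positive-definite."""
    return sum(t * xi for t, xi in zip(template, x))

def decide(templates, alphas, bias, x):
    """Sign of the weighted sum of template responses plus bias."""
    score = bias + sum(a * kernel(t, x) for a, t in zip(alphas, templates))
    return 1 if score >= 0 else -1
```

Memory use is fixed by the number and length of the stored templates, which is what makes the classifier's footprint easy to budget on an FPGA or MCU.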
Combining Neuromorphic Design Principles with Modern Machine Learning Algorithms
Anil MANKAR, Chief Development Officer, BrainChip
Neuromorphic computing takes inspiration from the structure and function of neural systems and seeks to replicate the energy efficiency, tolerance to noise, representational power, and learning plasticity these systems possess. Current machine learning (ML) algorithms, such as convolutional neural networks (CNNs), are capable of state-of-the-art performance in many computer vision applications such as object classification, detection, and segmentation. In this talk, we discuss how our neuromorphic design architecture, Akida, brings these ML algorithms into the neuromorphic computing domain by executing them as spiking neural networks (SNNs). We highlight how hardware design choices such as the event-based computing paradigm, low-bit-width precision computation, the co-location of processing and memory, distributed computation, and support for efficient on-chip learning algorithms enable low-power, high-performance ML execution at the edge. Finally, we discuss how this architecture supports next-generation SNN algorithms such as binarized CNNs and algorithms that efficiently utilize temporal information to increase accuracy.
Autonomous Agile Drones
Davide SCARAMUZZA, Professor, University of Zurich
I will summarize our latest research in learning deep sensorimotor policies for agile vision-based quadrotor flight. Learning sensorimotor policies represents a holistic approach that is more resilient to noisy sensory observations and imperfect world models. However, training robust policies requires a large amount of data. I will show that simulation data is enough to train policies that transfer to the real world without fine-tuning. We achieve one-shot sim-to-real transfer through the appropriate abstraction of sensory observations and control commands. I will show that these learned policies enable autonomous quadrotors to fly faster and more robustly than before, using only onboard cameras and computation. Applications include acrobatics, high-speed navigation in the wild, and autonomous drone racing.
11:45 am to 12:00 pm
Neuromorphic Engineering needs applications
André van SCHAIK, Director of the International Centre for Neuromorphic Systems, Western Sydney University
In this talk I will demonstrate that Neuromorphic Engineering is a Hot Topic with great promise, and also argue that this means we need to focus urgently on providing applications of neuromorphic technology in the next few years. I will present some examples of these that we are working on at the International Centre for Neuromorphic Systems.
12:00 pm to 12:05 pm
Wrap-up and closing remarks
Schedule subject to change without notice.