tinyML Neuromorphic Engineering Forum

September 27, 2022

About

tinyML is a fast-growing initiative around low-power machine-learning technologies for edge devices. The scope of tinyML naturally aligns with the field of neuromorphic engineering, whose purpose is to replicate and exploit the way biological systems sense and process information within constrained resources.

In order to build on these synergies, we are excited to announce the first tinyML Forum on Neuromorphic Engineering. During this event, key experts from academia and industry will introduce the main trends in neuromorphic hardware, algorithms, sensors, systems, and applications.

Venue

Virtual

Zoom

Contact us

Olga Goremichina

Schedule

Pacific Time Zone

8:00 am to 8:35 am

Short welcome and introduction to tinyML and the event

Neuromorphic intelligence and learning in robotics

Yulia SANDAMIRSKAYA, Applications Research Lead, Intel Labs

Abstract (English)

Robotics is a flagship use case for more power-, time-, and data-efficient AI: mobile robots in particular have no time, space, or energy to lose, yet require advanced cognitive and spatial processing to move and solve tasks in real-world, dynamic, and unstructured environments. I will present an overview of recent on-chip learning and processing algorithms for perception, planning, and control on neuromorphic hardware, implemented using the open-source software framework Lava and targeting Intel’s neuromorphic research chip Loihi 2.
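
As background for the kind of spiking dynamics that frameworks such as Lava simulate, here is a minimal, framework-agnostic sketch of a discrete-time leaky integrate-and-fire (LIF) layer in Python. It is illustrative only; it does not use the Lava API and does not target Loihi 2, and all names and constants are hypothetical.

```python
import numpy as np

def lif_step(v, spikes_in, weights, decay=0.9, v_th=1.0):
    """One discrete-time step of a leaky integrate-and-fire layer.

    v          -- membrane potentials, shape (n_neurons,)
    spikes_in  -- binary input spikes, shape (n_inputs,)
    weights    -- synaptic weights, shape (n_neurons, n_inputs)
    """
    v = decay * v + weights @ spikes_in       # leak + integrate synaptic input
    spikes_out = (v >= v_th).astype(float)    # fire when threshold is crossed
    v = v * (1.0 - spikes_out)                # reset neurons that fired
    return v, spikes_out

# toy usage: 4 inputs driving 3 neurons for 10 time steps
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=(3, 4))
v = np.zeros(3)
for _ in range(10):
    v, s = lif_step(v, rng.integers(0, 2, size=4).astype(float), w)
```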

8:35 am to 8:55 am

Digital

The SpiNNaker neuromorphic computing platform

Steve FURBER, ICL Professor of Computer Engineering, The University of Manchester

Abstract (English)

SpiNNaker (a contraction of Spiking Neural Network Architecture) is a digital many-core neuromorphic computing platform designed primarily to support large-scale models of brain networks in biological real time. In conception for over 20 years and in construction for over 15, the million-core SpiNNaker machine at Manchester has been supporting an open neuromorphic computing service under the auspices of the EU Human Brain Project since April 2016, and has been used for real-time modelling of a detailed cortical microcircuit, cerebellar models, the basal ganglia, and other brain areas. In addition to its use in brain modelling, its real-time characteristics render it useful for neurorobotic and other engineering applications. A second-generation SpiNNaker chip has been developed in collaboration with TU Dresden, offering a 10x improvement in functional density and energy efficiency, and first silicon is currently being used to support software development. SpiNNaker2 offers state-of-the-art neuromorphic performance and efficiency in a very flexible configuration, building on a decade of experience of deploying SpiNNaker1.
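
SpiNNaker machines are commonly programmed through the PyNN network-description language. The sketch below shows the general shape of such a script; it assumes the sPyNNaker backend (module name pyNN.spiNNaker) and standard PyNN cell and synapse types, and exact names and parameters may differ across software versions.

```python
# Minimal PyNN-style network description, assuming the sPyNNaker backend.
import pyNN.spiNNaker as sim

sim.setup(timestep=1.0)                                   # 1 ms resolution

stim = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
neurons = sim.Population(100, sim.IF_curr_exp())          # LIF neurons

sim.Projection(stim, neurons,
               sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

neurons.record("spikes")
sim.run(1000)                                             # simulate 1 s
spikes = neurons.get_data("spikes")
sim.end()
```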

Digital spiking neural network accelerators for neuromorphic edge intelligence

Charlotte FRENKEL, Assistant professor, Delft University of Technology

Abstract (English)

Taking inspiration from biology is a promising avenue toward endowing edge devices with the ability to adapt, autonomously and within a tight power budget, to their users and environments. In this talk, we will see how custom digital online-learning spiking neural network accelerators support the deployment of neuromorphic edge intelligence by providing a favorable tradeoff between power, performance, area, robustness, flexibility, scalability, and design time.

  • YouTube

8:55 am to 9:35 am

Analog

Towards “Greener” AI on the Edge: Energy-Efficient Neuromorphic Learning and Inference

Gert CAUWENBERGHS, Professor of Bioengineering, Institute for Neural Computation, UC San Diego

Abstract (English)

We present neuromorphic cognitive computing systems-on-chip implemented in custom silicon compute-in-memory neural and synaptic crossbar array architectures. These architectures combine the efficiency of local interconnects with flexibility and sparsity in global interconnects, and realize a wide class of deeply layered and recurrent neural network topologies with embedded local plasticity for on-line learning, at a fraction of the computational and energy cost of implementation on CPU and GPGPU platforms. Adiabatic energy recycling in charge-mode crossbar arrays permits extreme scaling in energy efficiency, approaching that of synaptic transmission in the mammalian brain.
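
To make the compute-in-memory idea concrete, the toy sketch below models a crossbar array as a conductance matrix performing a matrix-vector multiply in one "analog" step (Ohm's and Kirchhoff's laws). It is a simplified illustration, not a model of the chips described in the talk; all values and names are hypothetical.

```python
import numpy as np

def crossbar_mvm(conductances, input_voltages):
    """Idealized crossbar: each column current is the dot product of the
    input voltage vector with that column's conductances (I = G^T V)."""
    return conductances.T @ input_voltages

# a 4x3 crossbar mapping 4 inputs to 3 outputs in a single step
G = np.random.uniform(0.0, 1e-6, size=(4, 3))   # conductances in siemens
V = np.array([0.2, 0.0, 0.5, 0.1])              # input voltages in volts
I_out = crossbar_mvm(G, V)                      # column currents in amperes
```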

The role of neuromorphic analog computing in solutions for Industry 4.0

Alexander TIMOFEEV, Founder and Chief Executive Officer, Polyn.ai

Abstract (English)

Vibration-based condition monitoring, which is responsible for detecting machine failures, is one of the basic Predictive Maintenance options used in many sensor nodes. Different types of vibrations help to measure displacement, velocity, and acceleration with different measuring technologies, such as piezoelectric sensors, microelectromechanical sensors, and many others. Today it is the most popular solution implemented by most sensor-node makers.

The power-hungry sensor node collects a lot of data for further analytics by Machine Learning (ML) algorithms. Sending all of this data to the cloud for analysis costs more in communication than it is worth. Data reduction can significantly decrease the amount of data sent to the cloud, saving OPEX and improving latency.

NASP technology enables the generation of a digital imprint (signature) corresponding to different signals: a unique digital array generated by the NASP neural core from the vibration sensor's data flow. Digital imprint analysis at the application level makes it possible to identify unusual signal patterns from any source of vibration. The NASP chip is integrated into the sensor node for vibration-signal pre-processing. It reduces the sensor node's power requirements, reduces data bandwidth allocation, enables long-range communications, and improves cloud TCO.

  • YouTube
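
As a rough illustration of the signature idea only (this is not POLYN's NASP algorithm, and the function below is a hypothetical stand-in), the sketch compresses a window of raw vibration samples into a short spectral fingerprint that could be sent upstream instead of the full data stream.

```python
import numpy as np

def vibration_signature(window, n_bands=8):
    """Compress a raw vibration window into a short band-energy fingerprint.
    Illustrative only; not the NASP neural core's actual imprint."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2          # power spectrum
    bands = np.array_split(spectrum, n_bands)            # coarse frequency bands
    energies = np.array([b.sum() for b in bands])
    return energies / (energies.sum() + 1e-12)           # normalized signature

raw = np.random.randn(1024)          # e.g. one accelerometer window
sig = vibration_signature(raw)       # 8 numbers instead of 1024 samples
```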

Tiny spiking neural networks for sub-milliwatt AI at the sensor-edge

Sumeet KUMAR, CEO, Innatera

Abstract (English)

The brain relies on tiny spiking neural networks for sparse, robust, and highly energy-efficient processing of sensory data. In this talk, Innatera CEO Sumeet Kumar explores neuromorphic processing for always-on sensing applications with the company’s Spiking Neural Processor (SNP). The SNP implements a revolutionary architecture for energy-efficient inference of spiking neural networks, enabling full-featured AI applications within a sub-milliwatt power and sub-millisecond latency envelope. Innatera radically simplifies application development through the Talamo software development kit, which brings the power and ease of use of PyTorch to spiking neural networks. Through this combination of energy-efficient neuromorphic silicon and industry-standard software tooling, Innatera’s Spiking Neural Processor enables unprecedented AI capabilities in a wide range of sensor-edge applications in the consumer and industrial domains.

  • YouTube

What comes after digital? The path forward for brain-like artificial intelligence.

Gordon WILSON, CEO and co-founder, Rain Neuromorphics

Abstract (English)

The deep learning revolution that began in 2012 was sparked by the integration of two preexisting technologies: the backpropagation algorithm for training neural networks and the GPU architecture for scaling them up to large sizes. For a decade, the roadmap defined by these two technologies has enabled immense progress in digital artificial intelligence. Now, the costs associated with this approach make further progress impractical. At Rain, we have focused on developing a new learning algorithm and scaling architecture for neuromorphic hardware. The equilibrium propagation algorithm and the sparse neural array architecture form the backbone for a new roadmap of analog artificial intelligence. In this talk, I will discuss our mission, our past 5 years of research and development, plans for partnerships, and the path forward as we scale our organization after closing Series A funding in January 2022.

  • YouTube

9:35 am to 9:45 am

Short break

9:45 am to 10:25 am

Algorithms

Hardware Friendly Learning for Edge ML

Ralph ETIENNE-CUMMINGS, Professor of Electrical and Computer Engineering, Johns Hopkins University

Abstract (English)

Realizing Hebbian plasticity in large-scale neuromorphic systems is essential for reconfiguring synapses during recognition tasks. Spike-timing-dependent plasticity (STDP), as a tool to this effect, has received a lot of attention in recent years. This phenomenon encodes weight-update information as correlations between presynaptic and postsynaptic event times; as such, it is imperative for each synapse in a silicon neural network to somehow track its activity and keep its own time. Carefully designed synapses that can do this can be incorporated into compact, dense, and energy-efficient learning systems.

Here we present a biologically plausible and optimized Register Transfer Level (RTL) and algorithmic approach to realizing nearest-neighbor STDP, with temporal tracking and management handled by the postsynaptic dendrite on which the synapse sits. We adopt a time-constant-based ramp approximation for ease of RTL implementation and incorporation into large-scale digital neuromorphic systems. We will describe the architecture, circuits, and function of our hardware-realizable STDP-based learning system.
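
For reference, a minimal software model of nearest-neighbor STDP is sketched below. It uses a linear-ramp plasticity window in the spirit of the ramp approximation mentioned above, but the constants, function name, and form are illustrative assumptions, not the talk's RTL design.

```python
def nn_stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                   t_window=20.0, w_min=0.0, w_max=1.0):
    """Nearest-neighbor STDP: only the most recent pre/post spike pair
    contributes. A linear ramp replaces the usual exponential window."""
    dt = t_post - t_pre                      # >0: pre before post
    if abs(dt) >= t_window:
        return w                             # outside the plasticity window
    ramp = 1.0 - abs(dt) / t_window          # ramp approximation of exp(-|dt|/tau)
    if dt > 0:
        w += a_plus * ramp                   # causal pairing -> potentiation
    else:
        w -= a_minus * ramp                  # anti-causal pairing -> depression
    return min(max(w, w_min), w_max)

# example: pre spike at 12 ms, post spike at 15 ms -> potentiation
w = nn_stdp_update(0.5, t_pre=12.0, t_post=15.0)
```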

Exploring Robustness and Efficiency in Neural Systems with Spike-based Machine Intelligence

Priya PANDA, Assistant Professor, Yale University

Abstract (English)

Spiking Neural Networks (SNNs) have recently emerged as an alternative to deep learning due to their huge energy-efficiency benefits on neuromorphic hardware. In this presentation, I will talk about important techniques for training SNNs that bring large benefits in terms of latency, accuracy, interpretability, and robustness. We will first delve into how training is performed in SNNs. Training SNNs with surrogate gradients presents computational benefits due to short latency; however, because of the non-differentiable nature of spiking neurons, training becomes problematic, and surrogate methods have thus been limited to shallow networks. To address this training issue with surrogate gradients, we will go over a recently proposed method, Batch Normalization Through Time (BNTT), that allows us to train SNNs from scratch with very low latency and enables us to target interesting applications like video segmentation and scenarios beyond traditional learning, such as federated training. Another critical limitation of SNNs is the lack of interpretability. While a considerable amount of attention has been given to optimizing SNNs, the development of explainability is still in its infancy. I will talk about our recent work on a bio-plausible visualization tool for SNNs, called Spike Activation Map (SAM), compatible with BNTT training. The proposed SAM highlights spikes with short inter-spike intervals, which contain discriminative information for classification. Finally, with the proposed BNTT and SAM, I will highlight the robustness of SNNs with respect to adversarial attacks. Time permitting, I will end with interesting prospects of SNNs for non-conventional learning scenarios such as privacy-preserving distributed learning, as well as unraveling the temporal correlation in SNNs with feedback connections.

  • YouTube
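
To give a flavor of the BNTT idea described above, the PyTorch sketch below keeps a separate batch-normalization module per time step inside the SNN time loop. It is a simplified illustration under that single assumption (per-timestep BN statistics), not the authors' reference implementation; the class name and constants are hypothetical.

```python
import torch
import torch.nn as nn

class BNTTConv(nn.Module):
    """Convolution followed by a separate BatchNorm per time step
    (the core idea of Batch Normalization Through Time). Sketch only."""
    def __init__(self, in_ch, out_ch, timesteps):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.bntt = nn.ModuleList([nn.BatchNorm2d(out_ch) for _ in range(timesteps)])

    def forward(self, x_t, t):
        # x_t: input at time step t; each step uses its own BN statistics
        return self.bntt[t](self.conv(x_t))

layer = BNTTConv(2, 16, timesteps=10)
x = torch.rand(4, 2, 32, 32)
mem = torch.zeros(4, 16, 32, 32)
for t in range(10):
    mem = 0.9 * mem + layer(x, t)        # leaky integration of BNTT output
    spikes = (mem > 1.0).float()         # threshold -> spikes
    mem = mem * (1.0 - spikes)           # reset where neurons fired
```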

Training spiking neural networks end-to-end with surrogate gradients

Friedemann ZENKE, Research group leader, Friedrich Miescher Institute for Biomedical Research

Abstract (English)

Brains rely on spiking neural networks for ultra-low-power information processing. Integrating similar efficiency into artificial intelligence requires learning algorithms to instantiate complex spiking neural networks and brain-inspired neuromorphic hardware to emulate them efficiently. To this end, I will briefly introduce surrogate gradients as a general framework for training spiking neural networks end-to-end, showcase its ability to instantiate spiking neural networks with sparse activity, and demonstrate it on analog neuromorphic hardware. I will also outline a deep link between approximate surrogate gradients and a family of bio-inspired online learning rules.

  • YouTube
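
For readers unfamiliar with the method, a minimal PyTorch-style surrogate-gradient spike function is sketched below: a hard threshold in the forward pass, with a smooth fast-sigmoid derivative substituted in the backward pass. The exact surrogate shape and scale are illustrative choices, not the specific formulation used in the talk.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth fast-sigmoid
    derivative replaces the ill-defined gradient in the backward pass."""
    scale = 10.0

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrogateSpike.scale * v.abs() + 1.0) ** 2
        return grad_output * surrogate

spike = SurrogateSpike.apply
v = torch.randn(8, requires_grad=True)     # membrane potential minus threshold
loss = spike(v).sum()
loss.backward()                            # gradients flow through the surrogate
```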

Enabling Neuromorphic Learning Machines with Meta-learning

Emre NEFTCI, Director, Jülich Research Centre

Abstract (English)

The data-intensive and randomized learning process that characterizes state-of-the-art Spiking Neural Network (SNN) training is incompatible with the physical nature and real-time operation of the brain and neuromorphic hardware. Bi-level learning, such as meta-learning, can be used in deep learning to overcome these limitations. This talk will introduce gradient-based meta-learning methods, namely Model-Agnostic Meta-Learning (MAML), in SNNs in conjunction with the surrogate gradient method. I will further discuss 1) the hardware advantages that accrue from meta-learning: fast learning without the requirement of high-precision weights or gradients, training-to-learn with quantization, and mitigating the effects of approximate synaptic plasticity rules; 2) the requirements with respect to datasets; and 3) how meta-learning can enable new neuromorphic learning technologies for real-world problems.

  • YouTube
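
A compact sketch of the gradient-based meta-learning pattern (a MAML-style inner/outer loop) is given below for a plain feed-forward network; combining it with surrogate-gradient SNN training, as in the talk, is not shown, and the model and names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def forward(x, params):
    # tiny two-layer MLP written with explicit parameters so the
    # inner-loop update can be differentiated through (MAML-style)
    h = F.relu(F.linear(x, params[0], params[1]))
    return F.linear(h, params[2], params[3])

def maml_meta_loss(params, support, query, inner_lr=0.1):
    xs, ys = support
    xq, yq = query
    inner_loss = F.cross_entropy(forward(xs, params), ys)
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(params, grads)]   # one inner step
    return F.cross_entropy(forward(xq, adapted), yq)              # outer (meta) loss

# toy 5-way task with 20-dimensional inputs
params = [torch.randn(64, 20, requires_grad=True), torch.zeros(64, requires_grad=True),
          torch.randn(5, 64, requires_grad=True), torch.zeros(5, requires_grad=True)]
support = (torch.randn(25, 20), torch.randint(0, 5, (25,)))
query = (torch.randn(25, 20), torch.randint(0, 5, (25,)))
maml_meta_loss(params, support, query).backward()   # meta-gradient w.r.t. params
```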

10:25 am to 10:45 am

Sensors

Neuromorphic Event-based Vision

Christoph POSCH, CTO, PROPHESEE

Abstract (English)

Neuromorphic Event-based (EB) vision is an emerging paradigm for acquiring and processing visual information that takes inspiration from the functioning of the human visual system, trying to recreate its visual information acquisition and processing operations on VLSI silicon chips. In contrast to conventional image sensors, EB sensors do not use one common sampling rate (i.e., frame rate) for all pixels; instead, each pixel defines the timing of its own sampling points in response to its visual input by reacting to changes in the amount of incident light. The highly efficient way of acquiring sparse data, the high temporal resolution, and the robustness to uncontrolled lighting conditions are characteristics of the event-sensing process that make EB vision attractive for numerous applications in the industrial, surveillance, IoT, AR/VR, and automotive domains. This short presentation will give an introduction to EB sensing technology and highlight a few exemplary use cases.
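
The per-pixel sampling principle can be summarized in a few lines: a pixel emits an ON or OFF event whenever its log intensity changes by more than a contrast threshold since its last event. The sketch below is a simplified model of that behavior for a single pixel, not Prophesee's sensor pipeline; the threshold and function name are illustrative.

```python
import numpy as np

def pixel_events(log_intensity, threshold=0.2):
    """Emit (time_index, polarity) events when the log intensity of one
    pixel changes by more than `threshold` since its last event."""
    events, reference = [], log_intensity[0]
    for t, value in enumerate(log_intensity[1:], start=1):
        while value - reference >= threshold:      # brightness increased
            reference += threshold
            events.append((t, +1))
        while reference - value >= threshold:      # brightness decreased
            reference -= threshold
            events.append((t, -1))
    return events

signal = np.log(1.0 + np.abs(np.sin(np.linspace(0, 6, 200))) * 100)
print(pixel_events(signal)[:5])    # sparse events instead of 200 frame samples
```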

New Architecture for Visual AI: Oculi Technology Enables Edge Solutions at the Speed of Machines with the Efficiency of Biology

Charbel RIZK, Founder and CEO, Oculi Inc.

Abstract (English)

Oculi is putting the “Human Eye” in AI: machines outperform humans in most tasks, but human vision remains far superior, delivering the actionable signal in real time while consuming only milliwatts. As biology and nature have inspired much of today's technology, developing vision technology that mimics the eye-plus-brain architecture is the logical path. Unlike the photos and videos we collect for personal consumption, machine vision is not about pretty images or the highest number of pixels; it should extract the best actionable information very efficiently (in time and energy) from the available signal (photons). At Oculi, we have developed a new architecture for computer and machine vision that enables dynamic, real-time optimization. The core of this disruptive approach is the Oculi SPU (Sensing & Processing Unit), a software-defined vision sensor combining sensing and processing at the pixel, the true edge for imaging sensors. This presentation will highlight the novel architecture and provide example use cases that are uniquely positioned for tinyML.

  • YouTube

10:45 am to 10:55 am

Short break

10:55 am to 11:45 am

Systems

Event driven signal processing

Sadique SHEIK, VP of Artificial Intelligence, Head of Algorithms, Architectures and Applications, SynSense AG

Abstract (English)

One of the challenges for neuromorphic engineering over the last decade has been the search for applications that can benefit from neuromorphic sensors and processors. I will give a quick overview of the various application domains and opportunities that SynSense is focused on. I will talk about our development pipeline and show some quick demonstrations of how neuromorphic devices designed at SynSense can be used for real-world applications. I will conclude with some of the current challenges in developing and training applications for event-driven systems.

Fully Spike-based Architecture with Front-end Dynamic Vision Sensor and Back-end Spiking Neural Network

Jae-sun SEO, Associate Professor, Arizona State University

Abstract (English)

Spiking neural networks (SNNs) mimic the operations in biological nervous systems. By exploiting event-driven computation and data communication, SNNs can achieve very low power consumption. However, two important issues have persisted: (1) directly training SNNs has not resulted in competitive inference accuracy; (2) non-spike inputs (e.g. natural images) need to be converted to a train of spikes, which results in long latency. To exploit event-driven end-to-end operation, integrating spike-based front-end sensors such as dynamic vision sensors (DVS) with back-end SNNs becomes ideal. In addition, it is crucial to have a backpropagation-based training algorithm that can directly train SNNs with continuous input spikes from the DVS output. Several works from the literature with such neuromorphic algorithms and custom hardware designs will be presented, and design trade-offs such as spike sparsity and inference accuracy will be discussed.

  • YouTube
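
As context for the input-conversion issue mentioned above, the sketch below shows the common rate-coding workaround: a static image is turned into a train of Bernoulli spikes over T time steps, which is exactly the conversion latency that a spiking front end such as a DVS avoids. The function is an illustrative assumption, not a specific method from the talk.

```python
import torch

def rate_encode(image, timesteps=50):
    """Convert pixel intensities in [0, 1] into a binary spike train of
    shape (timesteps, *image.shape); brighter pixels spike more often."""
    probs = image.clamp(0.0, 1.0).expand(timesteps, *image.shape)
    return torch.bernoulli(probs)

img = torch.rand(1, 28, 28)              # e.g. a normalized grayscale image
spike_train = rate_encode(img)           # (50, 1, 28, 28) binary tensor
print(spike_train.mean().item())         # average firing rate tracks intensity
```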

tinyML In-filter Computing using Neuromorphic Cochlea

Chetan SINGH THAKUR, Assistant Professor, Indian Institute of Science (IISc)

Abstract (English)

Edge devices are often constrained by the available computational power and hardware resources. We present a novel in-filter computing framework that can be used for designing ultra-light classifiers for time-series data. Unlike a conventional pattern recognizer, where feature extraction and classification are designed independently, this architecture directly integrates the convolution and nonlinear filtering operations into the kernels of a Support Vector Machine (SVM). The result of this integration is a template-based SVM, which does not require the SVM kernel to be positive definite and allows the user to define memory constraints in terms of fixed template vectors. Here, we have used the Neuromorphic Cochlea as a kernel in our template-based SVM formulation, which also acts as a feature extractor for time-series data. We prototyped the proposed system on an FPGA and a Cortex-M4 MCU for multiple ecological and healthcare applications using acoustic and IMU sensors.

  • YouTube
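
To illustrate the template-based formulation at a high level, the decision function below scores an input window against a fixed set of stored templates through a similarity kernel and weights the scores with learned SVM coefficients. The Gaussian kernel here is a placeholder for the cochlear filter kernel described in the abstract; the names and values are illustrative.

```python
import numpy as np

def template_svm_decision(x, templates, alphas, bias, gamma=0.1):
    """Template-based SVM: f(x) = sum_i alpha_i * K(template_i, x) + b,
    with memory fixed by the chosen number of template vectors."""
    sims = np.exp(-gamma * np.sum((templates - x) ** 2, axis=1))
    return float(sims @ alphas + bias)

templates = np.random.randn(16, 64)      # 16 stored template vectors
alphas = np.random.randn(16)             # learned SVM coefficients
score = template_svm_decision(np.random.randn(64), templates, alphas, bias=0.0)
label = 1 if score > 0 else -1
```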

Combining Neuromorphic Design Principles with Modern Machine Learning Algorithms

Anil MANKAR, Chief Development Officer, BrainChip

Abstract (English)

Neuromorphic computing takes inspiration from the structure and function of neural systems and seeks to replicate the energy efficiency, tolerance to noise, representational power, and learning plasticity these systems possess. Current machine learning (ML) algorithms, such as convolutional neural networks (CNNs), are capable of state-of-the-art performance in many computer vision applications such as object classification, detection, and segmentation. In this talk, we discuss how our neuromorphic design architecture, Akida, brings these ML algorithms into the neuromorphic computing domain by executing them as spiking neural networks (SNNs). We highlight how hardware design choices such as the event-based computing paradigm, low-bit width precision computation, the co-location of processing and memory, distributed computation, and support for efficient, on-chip learning algorithms enable low-power, high-performance ML execution at the edge. Finally, we discuss how this architecture supports next generation SNN algorithms such as binarized CNNs and algorithms that efficiently utilize temporal information to increase accuracy.

  • YouTube

Autonomous Agile Drones

Davide SCARAMUZZA, Professor, University of Zurich

Abstract (English)

Event cameras are bio-inspired vision sensors with much lower latency, higher dynamic range, and much lower power consumption than standard cameras. This talk will present current trends and opportunities with event cameras, ranging from robotics to virtual reality and smartphones, as well as open challenges and the road ahead.

  • YouTube

11:45 am to 12:00 pm

Applications

Neuromorphic Engineering needs applications

André van SCHAIK, Director of the International Centre for Neuromorphic Systems, Western Sydney University

Abstract (English)

In this talk I will demonstrate that Neuromorphic Engineering is a Hot Topic with great promise, and also argue that this means we need to focus urgently on providing applications of neuromorphic technology in the next few years. I will present some examples of these that we are working on at the International Centre for Neuromorphic Systems.

12:00 pm to 12:05 pm

Wrap-up and closing remarks

Schedule subject to change without notice.

Committee

Charlotte FRENKEL

Chair

Delft University of Technology

Christoph POSCH

PROPHESEE

Jae-sun SEO

Arizona State University

Priya PANDA

Yale University

Sadique SHEIK

SynSense AG

Yulia SANDAMIRSKAYA

Intel Labs

Friedemann ZENKE

Friedrich Miescher Institute for Biomedical Research

University of Basel

André van SCHAIK

Western Sydney University

Speakers

Yulia SANDAMIRSKAYA

Intel Labs

Steve FURBER

The University of Manchester

Charlotte FRENKEL

Delft University of Technology

Gert CAUWENBERGHS

Institute for Neural Computation, UC San Diego

Alexander TIMOFEEV

Polyn.ai

Sumeet KUMAR

Innatera

Gordon WILSON

Rain Neuromorphics

Ralph ETIENNE-CUMMINGS

Johns Hopkins University

Priya PANDA

Yale University

Friedemann ZENKE

Friedrich Miescher Institute for Biomedical Research

University of Basel

Emre NEFTCI

Jülich Research Centre

Christoph POSCH

PROPHESEE

Charbel RIZK

Oculi Inc.

Johns Hopkins ECE

Sadique SHEIK

SynSense AG

Jae-sun SEO

Arizona State University

Chetan SINGH THAKUR

Indian Institute of Science (IISc)

Anil MANKAR

BrainChip

Davide SCARAMUZZA

University of Zurich

André van SCHAIK

Western Sydney University

Downloads

Sponsors
