tinyML On Device Learning Forum 2023

The goal of On Device Learning (ODL) is to make edge devices “smarter” and more efficient by observing changes in the collected data and self-adjusting or reconfiguring the device’s operating model. Optionally, the “knowledge” gained by a device is shared with other deployed devices.

May 16, 2023

About

To date, most ultra-low power machine learning (ML) applications at the edge are trained “off device” (typically in the cloud, where virtually unlimited computing resources are available), while the edge devices perform only the inference. Many successful applications have been deployed in this fashion, as demonstrated by the rapid growth of the tinyML community and the support from industry.

It’s time to move to the next milestone: On Device Learning (ODL). The ambition is to replace off-device training with localized training and adaptive “intelligence”. Industry and academic experts are actively exploring how to better fit edge devices and applications to the time-varying environments in which they are expected to be deployed for long periods.

Never before has ML been characterized by such innovative waves of technology, and the tinyML Foundation is accelerating the growth of this vibrant ecosystem of skills and technologies, resulting in new applications and end uses.

Venue

Virtual

Zoom

Contact us

Olga Goremichina

Schedule

Pacific

8:00 am to 11:15 am

Merging insights from artificial and biological neural networks for neuromorphic edge intelligence

Charlotte FRENKEL, Assistant professor, Delft University of Technology

Abstract (English)

The development of efficient bio-inspired training algorithms and adaptive hardware is currently missing a clear framework. Should we start from the brain’s computational primitives and figure out how to apply them to real-world problems (bottom-up approach), or should we build on working AI solutions and fine-tune them to increase their biological plausibility (top-down approach)? In this talk, we will see why biological plausibility and hardware efficiency are often two sides of the same coin, and how neuroscience- and AI-driven insights can cross-feed each other toward low-cost on-device learning.

Forward Learning with Top-Down Feedback: Solving the Credit Assignment Problem without a Backward Pass

Giorgia DELLAFERRERA, Researcher, Institute of Neuroinformatics Zurich

Abstract (English)

Supervised learning in artificial neural networks typically relies on backpropagation, where the weights are updated based on error-function gradients propagated sequentially from the output layer to the input layer. Although this approach has proven effective in a broad range of applications, it lacks biological plausibility in many regards, including the weight symmetry problem, the dependence of learning on non-local signals, the freezing of neural activity during error propagation, and the update locking problem. Alternative training schemes have been introduced, including sign symmetry, feedback alignment, and direct feedback alignment, but they invariably rely on a backward pass that prevents solving all of these issues simultaneously. “Forward-only” algorithms, which train neural networks while avoiding a backward pass, have recently gained attention as a way of addressing the biologically unrealistic aspects of backpropagation. In this talk, we discuss PEPITA and the Forward-Forward algorithm, which train artificial neural networks by replacing the backward pass of backpropagation with a second forward pass. In the second pass, the input signal is modulated based on the network’s top-down error (PEPITA) or by other input samples (Forward-Forward). We show that these learning rules comprehensively address all of the above-mentioned issues and can be applied to train both fully connected and convolutional models on datasets such as MNIST, CIFAR-10, and CIFAR-100. Furthermore, as they require neither precise knowledge of the gradients nor any non-local information, “forward-only” algorithms are well suited for implementation in neuromorphic hardware.
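As a rough illustration of the idea, the sketch below implements a PEPITA-style update for a two-layer NumPy network: the output error is projected back onto the input through a fixed random matrix (here called F), a second forward pass runs on the modulated input, and each layer is updated from the difference between its two activations. The layer sizes, learning rate, and sign conventions are illustrative assumptions, not the published hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny fully connected network: 784 -> 128 -> 10 (MNIST-sized, for illustration).
n_in, n_hid, n_out = 784, 128, 10
W1 = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_hid, n_in))
W2 = rng.normal(0.0, np.sqrt(2.0 / n_hid), (n_out, n_hid))
# Fixed random projection of the output error back onto the input (PEPITA's "F").
F = rng.normal(0.0, np.sqrt(1.0 / n_out), (n_in, n_out))

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pepita_step(x, y_onehot, lr=0.01):
    """One PEPITA-style update: two forward passes, no backward pass."""
    global W1, W2
    # First (standard) forward pass.
    h1 = relu(W1 @ x)
    e = softmax(W2 @ h1) - y_onehot          # top-down error at the output
    # Second forward pass on the error-modulated input.
    x_mod = x - F @ e
    h1_mod = relu(W1 @ x_mod)
    # Local updates from the difference between the two passes.
    W1 -= lr * np.outer(h1 - h1_mod, x_mod)  # hidden layer
    W2 -= lr * np.outer(e, h1_mod)           # output layer (delta-rule-like)
```

Note that no gradient information crosses layers: calling pepita_step once per labeled sample updates each layer using only activations available locally during the two forward passes.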

  • YouTube

NeuroMem®, Ultra Low Power hardwired incremental learning and parallel pattern recognition

Guy PAILLET, Co-founder and Chairman, General Vision Holdings

Abstract (English)

GV will present a Tiny RTML platform comprising an ST Nucleo64 board together with a NeuroShield carrying 37 parallelized NM500 chips. This allows maintaining a parallel, content-addressable set of, for example, 21,000 Chinese characters.

Submitting the image of a Chinese character (a 16 x 16 pixel pattern) returns a category pointing to its English meaning within a constant search time of 30 microseconds.

Learning an additional character (on-the-spot learning) also takes about 30 microseconds per unknown character.

The just-released ANM5500 will do the same with only 4 chips and 5 times faster, still at milliwatt power levels.

General Vision’s goal is to solve real-world image recognition with learning and recognition running on a small battery in, for example, a standalone (no network connection) Barbie doll; hence the patented “Monolithic Image Perception Device”, successor to the MTVS (Miniature Trainable Vision Sensor), which allows learning and recognition directly on the image sensor.
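For intuition, here is a toy software model of the behavior described above: committed “neurons” hold prototype patterns plus a category, recognition is a nearest-prototype search gated by an influence field, and learning commits a new neuron only for unrecognized patterns. The class name, L1 distance metric, and influence-field default are assumptions for illustration; the actual NM500 performs this search in parallel silicon, which is what yields the constant 30-microsecond response.

```python
import numpy as np

class NeuroMemToy:
    """Toy model of a radial-basis, content-addressable classifier."""

    def __init__(self, max_influence=4096):
        self.prototypes = []     # committed 16x16 patterns, flattened to 256 bytes
        self.categories = []
        self.max_influence = max_influence

    def classify(self, pattern):
        """Nearest-prototype search; fires only within the influence field."""
        if not self.prototypes:
            return None
        d = np.abs(np.asarray(self.prototypes, dtype=int) - pattern).sum(axis=1)
        best = int(d.argmin())
        return self.categories[best] if d[best] <= self.max_influence else None

    def learn(self, pattern, category):
        """On-the-spot learning: commit a new neuron for unrecognized patterns."""
        if self.classify(pattern) != category:
            self.prototypes.append(np.asarray(pattern, dtype=np.uint8))
            self.categories.append(category)
```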

  • YouTube

On-Chip Learning and Implementation Challenges with Oscillatory Neural Networks

Aida TODRI-SANIAL, Full Professor, Department of Electrical Engineering, Eindhoven University of Technology

Abstract (English)

Research on adaptive and continuous learning, beyond supervised or unsupervised learning, is attracting growing interest for training neural networks that evolve as the environment and input data change over time. Moreover, ongoing research efforts on brain-inspired computing provide energy-efficient computing architectures implementable on edge devices. In recent years, computing with coupled oscillators, or oscillatory neural networks (ONNs), has emerged as an alternative computing paradigm offering massive parallelism and energy efficiency. Most research efforts on ONNs focus on hardware implementation (materials, devices, and digital, analog, and mixed-signal circuit design) and on benchmarking AI applications. In this talk, I will focus mainly on how to train ONNs and on possible on-chip learning implementations that take ONN topology and synaptic connections into account.
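As a minimal sketch of the computing paradigm (not of the specific designs discussed in the talk), the snippet below programs a small Kuramoto-style phase-oscillator network as an associative memory: Hebbian-style couplings store a +/-1 pattern, the phase dynamics relax a noisy initial state, and the result is read out from phases relative to a reference oscillator. All function names and constants are illustrative assumptions.

```python
import numpy as np

def hebbian_couplings(patterns):
    """Program the network: Hebbian-style coupling weights from +/-1 patterns."""
    P = np.asarray(patterns, dtype=float)
    J = P.T @ P / P.shape[1]
    np.fill_diagonal(J, 0.0)
    return J

def run_onn(J, phases, steps=200, dt=0.1, coupling=1.0):
    """Kuramoto-style dynamics: d(theta_i)/dt = K * sum_j J_ij sin(theta_j - theta_i)."""
    for _ in range(steps):
        diff = phases[None, :] - phases[:, None]   # diff[i, j] = theta_j - theta_i
        phases = phases + dt * coupling * (J * np.sin(diff)).sum(axis=1)
    return phases

# Store one +/-1 pattern, then retrieve it from a noisy phase initialization.
pattern = np.array([1, -1, 1, -1])
J = hebbian_couplings([pattern])
rng = np.random.default_rng(1)
init = np.where(pattern > 0, 0.0, np.pi) + rng.normal(0.0, 0.3, pattern.size)
final = run_onn(J, init)
retrieved = np.sign(np.cos(final - final[0]))      # phases relative to oscillator 0
```

The stored information lives entirely in the couplings J (the “synaptic connections”), which is why training an ONN amounts to choosing those weights.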

  • YouTube

Online Learning TinyML for Anomaly Detection Based on Extreme Values Theory

Eduardo DOS SANTOS PEREIRA, Technology Expert III, Serviço Nacional de Aprendizagem Industrial São Paulo

Abstract (English)

Anomalies in a system are rare, extreme events that can have a significant impact. Extreme value theory deals with such events, and it has inspired the unsupervised, online-learning TinyML algorithm proposed in this work. The algorithm uses the two-parameter Weibull distribution function to detect anomalies in discrete time series, and it runs on microcontroller unit (MCU) devices. It has the potential to contribute to various industries, from manufacturing to healthcare, by enabling real-time monitoring and predictive maintenance. The ability to detect anomalies is crucial in many applications, including monitoring environmental and location parameters based on sensor readings, and TinyML can be a powerful tool for detecting abnormal or anomalous behavior in real time.
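The abstract does not spell out the estimator, so the sketch below shows one standard way such a detector can work: fit the two-parameter Weibull distribution to a sliding window of (positive) readings via a least-squares fit on a Weibull plot, then flag a new value as anomalous when its survival probability falls below a small threshold. The plotting positions, threshold, and function names are assumptions for illustration, not the paper’s exact method.

```python
import numpy as np

def fit_weibull(window):
    """Least-squares fit of a two-parameter Weibull on a Weibull plot:
    ln(-ln(1 - F)) = k*ln(x) - k*ln(lam), with median-rank plotting positions.
    Assumes strictly positive readings."""
    x = np.sort(np.asarray(window, dtype=float))
    n = x.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Benard's median-rank approximation
    slope, intercept = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
    k = slope                                      # shape parameter
    lam = np.exp(-intercept / k)                   # scale parameter
    return k, lam

def is_anomaly(value, k, lam, p=1e-3):
    """Flag a reading whose Weibull survival probability falls below p."""
    return np.exp(-(value / lam) ** k) < p

# Sliding-window usage: refit on recent readings, then test each new one.
readings = np.abs(np.random.default_rng(2).normal(10.0, 2.0, 200)) + 0.1
k, lam = fit_weibull(readings)
print(is_anomaly(25.0, k, lam))   # an extreme reading relative to the window
```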

  • YouTube

Schedule subject to change without notice.

Committee

Danilo PAU

Chair

STMicroelectronics

Speakers

Charlotte FRENKEL

Delft University of Technology

Giorgia DELLAFERRERA

Institute of Neuroinformatics Zurich

Guy PAILLET

General Vision Holdings

Aida TODRI-SANIAL

Department of Electrical Engineering, Eindhoven University of Technology

French National Centre for Scientific Research (CNRS)

Eduardo DOS SANTOS PEREIRA

Serviço Nacional de Aprendizagem Industrial São Paulo

Sponsors
