embedded world 2023

Visit the tinyML Foundation pavilion at the exhibition and see the technical program at the conference!

March 14-16, 2023

embedded world 2023 - exhibition & conference

Come visit the tinyML Foundation pavilion at embedded world 2023 – March 14-16, 2023! The nine participating companies shown below will be showcasing always-on AI at the edge!

Register for free to visit the exhibition courtesy of tinyML Foundation!

At the technical conference, don’t miss the sessions listed below.

Don’t forget to join our mailing list to stay in touch!

Venue

NürnbergMesse

Messezentrum 1, 90471 Nürnberg, Germany

Contact us

Rosina Haberl

Schedule

2:00 pm to 6:00 pm

An Introduction to TinyML: Bringing Deep Learning to Ultra-low-power Micro-Controllers

The rise of AI drives the next generation of smart systems. Deploying AI models often demands powerful compute machines, usually located in the cloud. For many applications, large gains in energy efficiency, latency, connectivity, privacy, or compute cost are possible by shifting intelligence from the cloud to the edge, close to the devices where the data is captured. TinyML aims to bring AI models to the extreme edge, specifically to ultra-low-power, microcontroller-class devices that are typically part of, or located near, the sensors. This tutorial gives the audience an introduction to the field of TinyML. Memory is most often the limiting resource of TinyML systems, such that every byte of data counts. Hence, the tutorial introduces TinyML compression methods and frameworks such as TensorFlow Lite for Microcontrollers, which optimize the AI model and generate memory-optimized inference code. We give a short introduction to the background of the underlying TinyML methods as well as a practical guide on how to get started with TinyML frameworks. Specifically, the audience will learn about:

• The background and application fields of TinyML for deploying deep neural networks on ultra-low-power, microcontroller-class devices
• The basics of TinyML methods such as model compression (quantization and pruning) to reduce the memory demand of neural network inference
• A practical guide on how to get started with TinyML, with a hands-on example

Session Moderator: Daniel MÜLLER-GRITSCHNEDER, Research Group Leader, Chair of Electronic Design Automation

Session Moderator: Marcus RÜB, Research Assistant, Hahn-Schickard
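The tutorial above names quantization as a core TinyML compression method. As a minimal illustrative sketch (not the TensorFlow Lite for Microcontrollers pipeline itself, which applies this per layer and per channel), symmetric int8 post-training quantization of a weight tensor can be written as:

```python
# Minimal sketch of symmetric int8 post-training quantization,
# one of the compression methods covered in the tutorial.
# Illustrative only: real frameworks such as TensorFlow Lite for
# Microcontrollers choose scales per layer/channel and fold them
# into the generated inference code.

def quantize_int8(weights):
    """Map float weights to int8 using a single symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [qi * scale for qi in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Each recovered weight differs from the original by at most scale / 2,
# while storage per weight drops from 32 bits to 8.
```

The memory saving (4x here) is exactly the kind of byte-level accounting the tutorial motivates, at the cost of a bounded rounding error per weight.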

11:00 am to 12:30 pm

Autonomous & Intelligent Systems | Tiny ML Technologies

Accelerating Binary and Mixed-Precision NNs Inference on STMicroelectronics Embedded NPU with Digital In-Memory-Computing at EW

Fabrizio INDIRLI, Software Design Engineer, STMicroelectronics

Abstract (English)

The proliferation of embedded Neural Processing Units (NPUs) is enabling the adoption of Tiny Machine Learning for numerous cognitive computing applications on the edge, where maximizing energy efficiency is key. To overcome the limitations of traditional von Neumann architectures, novel designs based on computational memories are emerging. STMicroelectronics is developing an experimental low-power NPU that integrates Digital In-Memory Computing (DIMC) SRAM with a modular dataflow inference engine, capable of accelerating a wide range of DNNs. In this work, we present a 40 nm version of this architecture with DIMC-SRAM tiles capable of in-memory binary computations to dramatically increase the computational efficiency of binary layers. We performed power/performance analysis to demonstrate the advantages of this paradigm, which in our tests achieved a TOPS/W efficiency up to 40x higher than software and traditional NPU implementations. We then extended the ST Neural compilation toolchain to automatically map binary and mixed-precision NNs on the NPU, applying high-level optimizations and binding the model's binary GEMM layers to the DIMC tiles.

The overall system was validated by developing a real-time Face Presence Detection application, as a potential real-world power-constrained use-case. The application ran with a latency < 3 ms, and the DIMC subsystem achieved a peak efficiency > 100 TOPS/W for binary in-memory computations.

Seventh-SensIC: Evaluating Time Series Data Using Decision Trees in Hardware

Jacob GÖPPERT, Research Engineer, Hahn-Schickard

In 2020, the German BMBF ran a hardware design competition on classifying ECG data for atrial fibrillation using machine learning (ML) inference. One of the winning entries, GeNERIC, achieved the best energy efficiency in the field by a considerable margin using decision tree ensembles (DTEs). As the follow-up project to GeNERIC, Seventh-SensIC extends the concepts explored during the competition to further, more complex data sets. Ranging from multi-condition (and thus multi-class) ECG data, through human activity recognition, to anomaly detection in acoustic signals, these cover a wide spectrum of ML problems.
The system's centerpiece, the DTE classifier, operates on features, i.e. processed data, specific to a given ML problem. In addition to the classifier, the system provides a dedicated hardware library that computes a set of features selected based on the aforementioned data sets, a RISC-V core for arbitrary feature calculation, and an interface for external data processing. This approach enables inference at extremely low power for problems covered by the integrated feature library, while retaining the flexibility to process arbitrarily complex ML problems via either the integrated processor or external data processing. A key functionality of the developed software toolchain will be a synthesis procedure that generates the required hardware configuration from an arbitrary ML data set.
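The inference scheme described above can be sketched in software. In this hypothetical example (the actual Seventh-SensIC classifier is realized in dedicated hardware, and the thresholds and feature indices here are invented for illustration), a small ensemble of decision trees votes on a precomputed feature vector:

```python
# Illustrative software sketch of decision-tree-ensemble (DTE)
# inference over a feature vector. The real Seventh-SensIC
# classifier implements this in hardware; trees and thresholds
# below are made up for the example.
from collections import Counter

# Each inner node: (feature_index, threshold, left, right);
# leaves are plain string class labels.
TREE_A = (0, 0.5, "normal", (1, 2.0, "normal", "anomaly"))
TREE_B = (1, 1.5, "normal", "anomaly")
TREE_C = (0, 0.8, (1, 3.0, "normal", "anomaly"), "anomaly")

def predict_tree(node, features):
    """Walk one tree: go left if feature <= threshold, else right."""
    while not isinstance(node, str):
        idx, thresh, left, right = node
        node = left if features[idx] <= thresh else right
    return node

def predict_ensemble(trees, features):
    """Majority vote across all trees in the ensemble."""
    votes = Counter(predict_tree(t, features) for t in trees)
    return votes.most_common(1)[0][0]

label = predict_ensemble([TREE_A, TREE_B, TREE_C], [0.9, 2.5])
```

Because each tree is just a short chain of compares, this maps naturally to the low-power comparator logic the abstract describes.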

Sidestepping Deployment Bottlenecks Before They Happen: tinyML Models Using Virtual Targets

James HUI, System Simulation, Product Management, Wind River

TinyML enables deep learning models to run on resource-constrained target devices, opening up many possibilities for adding intelligent functionality to IoT devices. The moment a model is ready to deploy is exciting but also challenging. Before running the model in the field, there are common questions to consider: Do we have enough understanding of the model? What if one or more inputs are out of range? What if the ADC data changes earlier than expected during the inference phase?

In this talk, we share our experimental work using an instruction set simulator. The simulator creates many virtual targets running in parallel and injects fault conditions into a simple tinyML model. Scaling up testing with simulation helps us gain insight and identify potential bottlenecks in the model before deploying the product.
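The kind of input-fault questions raised above can be prototyped even without a full instruction set simulator. A hypothetical sketch (the constants and the stand-in model below are assumptions for illustration, not the talk's actual setup): sweep injected ADC faults across a model and check that inference remains well-defined:

```python
# Hypothetical sketch of input fault injection for a tiny model.
# The talk uses an instruction set simulator with many parallel
# virtual targets; here we only mimic the idea in plain Python.

ADC_MIN, ADC_MAX = 0, 4095  # 12-bit ADC range (assumed)

def tiny_model(adc_value):
    """Stand-in for a tinyML inference: classify a clamped reading."""
    clamped = max(ADC_MIN, min(ADC_MAX, adc_value))
    return "high" if clamped > 2048 else "low"

def inject_faults(nominal, faults):
    """Run the model once per injected offset fault on the input."""
    return {f: tiny_model(nominal + f) for f in faults}

# Out-of-range and large-offset faults a simulator could inject.
results = inject_faults(nominal=1000, faults=[0, 5000, -2000, 100000])
```

Every fault still yields a defined label here because the input is clamped first; a model without such guarding is exactly the bottleneck this style of testing is meant to expose before deployment.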

1:45 pm to 3:15 pm

Autonomous & Intelligent Systems | Tiny ML Applications

Coffee Leaf Health Diagnosis on Ultra-low-power CNN Accelerator

Nathaniel ALTEZA, Machine Learning Engineer, Analog Devices

Automatic coffee leaf health diagnosis is needed because coffee is an important commodity and its yield is affected by disease. Performing the diagnosis on an edge device enables real-time assessment and can provide feedback to the agricultural control system. For recognition and classification tasks, deep learning is a viable technique; the challenge, however, lies in implementing deep learning models on an edge device. This paper focuses on implementing SimpleNet deep learning classification of "healthy" and "unhealthy" coffee leaves on the MAX78000, an ultra-low-power artificial intelligence microcontroller. SimpleNet suits edge computing because it takes advantage of the simplicity of its design while outperforming deeper and more complex architectures. In this work, the convolutional neural network (CNN) model is trained using the Robusta coffee leaf images data set (RoCoLe), which is freely and publicly available. A validation accuracy of 92.78% is achieved, and an inference accuracy of up to 100% on the MAX78000EVKit with a 15 ms inference time.

Vital Sign and Seat Occupancy Detection Using AURIX TC3x6 ADAS 60GHz Radar Baseboard (RBB)

Lee GONZALES-FUENTES, Application Engineer, Infineon Technologies

Mathurin LEMARIE, Engineer Intern, Infineon Technologies

Road safety and an enhanced driver experience are no longer limited to sensing a vehicle's surroundings but extend to its interior. Current regulations and legislation require auto manufacturers to include various in-cabin applications in their roadmaps so that occupant safety standards and vehicle autonomy requirements are met. The EURO and ASEAN New Car Assessment Programs (NCAP) emphasize the need for occupancy detection and vital sign monitoring for child presence detection, and thus for preventing heat stroke and death.
This paper presents seat occupancy and vital sign detection features implemented on an AURIX™ TC3x6 ADAS 60 GHz radar baseboard (RBB). Radar's ability to measure the minute chest-wall displacements caused by respiration and cardiac vibrations is exploited. Multiple challenges arise, such as separating the breathing and heartbeat signals, which are also corrupted by car vibrations and sudden body movement, and implementing such algorithms on a microcontroller, where large memory support and real-time processing are required. Machine learning (ML) based algorithms are deployed to counteract these effects. Measurement results are presented, and a comparison with existing methods is discussed.
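The separation of breathing and heartbeat components described above ultimately exploits their different frequency bands (typically around 0.1–0.5 Hz for respiration versus 0.8–2 Hz for heart rate). A simplified, hypothetical sketch of band-wise spectral estimation on a simulated chest-displacement signal (sample rate and amplitudes are assumptions, and real radar data would additionally carry vibration and motion artifacts):

```python
# Hypothetical sketch: estimate respiration and heart rate from a
# chest-displacement signal by scanning two separate frequency bands.
# Sample rate, tone frequencies, and amplitudes are made up; the
# paper's actual processing runs on an AURIX microcontroller with
# ML-based artifact suppression.
import math

FS = 20.0        # sample rate in Hz (assumed)
DURATION = 30.0  # seconds of simulated data

# Simulated displacement: respiration at 0.25 Hz + heartbeat at 1.2 Hz.
n = int(FS * DURATION)
signal = [2.0 * math.sin(2 * math.pi * 0.25 * t / FS)
          + 0.3 * math.sin(2 * math.pi * 1.2 * t / FS)
          for t in range(n)]

def band_peak_hz(samples, fs, lo, hi, step=0.01):
    """Naive spectral scan: return the frequency in [lo, hi] whose
    sine/cosine correlation with the samples has the highest power."""
    best_f, best_p = lo, -1.0
    f = lo
    while f <= hi:
        c = sum(x * math.cos(2 * math.pi * f * i / fs)
                for i, x in enumerate(samples))
        s = sum(x * math.sin(2 * math.pi * f * i / fs)
                for i, x in enumerate(samples))
        p = c * c + s * s
        if p > best_p:
            best_f, best_p = f, p
        f += step
    return best_f

resp_hz = band_peak_hz(signal, FS, 0.1, 0.5)    # respiration band
heart_hz = band_peak_hz(signal, FS, 0.8, 2.0)   # heartbeat band
```

Scanning the two bands independently is what lets the weak cardiac component (0.3 here) survive next to the much larger respiratory one; on the real system, ML-based methods take over where car vibration and body motion break this simple picture.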

TinyML Applications – How Machine Learning Can Enhance the User Experience and Security in Smart Home Products Such as Fingerprint Reading Door Locks

Tamas DARANYI, Product Manager, Silicon Labs

ML applications in embedded microcontrollers (also known as TinyML applications) are growing continuously as intelligence moves down to the sensing level, or the 'tiny' edge, of IoT systems. Machine learning can also enhance intelligent communication between smart home devices by utilizing universal protocols such as Matter. Even on its own, however, TinyML at the edge offers multiple benefits for manufacturers and end users. For example, AI/ML-accelerated inferencing in smart devices has shown significant reductions in energy consumption and is orders of magnitude faster than network processing or standard non-accelerated MCUs. Using machine learning to recognize signatures, such as fingerprint detection/reading for door locks or security hubs/panels, combined with universal communication protocols, sets the stage for an enhanced user experience and added peace of mind for home security and safety. For example, with fingerprint detection, machine learning can uniquely determine whether the person at the door is authorized to enter.

Schedule subject to change without notice.

Sponsors

(Click on a logo to get more information.)