tinyML EMEA Innovation Forum 2022

Connect, Unify, and Grow the tinyML EMEA Community

October 10-12, 2022

Event will be held in person in Cyprus.


The tinyML EMEA Innovation Forum brings together key industry leaders, technical experts, and researchers from the Europe, Middle East, and Africa (EMEA) region who are innovating with machine learning and artificial intelligence on ultra-low-power devices.

Tiny machine learning combines innovation across a deep technological stack ranging from dataset collection and ML application design, through innovative algorithms and system-level software, down to hardware and novel sensor technology. As a cross-layer technology, achieving good results in tinyML requires carefully tuning the interaction between the various layers, and designing systems with a vertically integrated approach. In the EMEA region many startups, established companies, universities, and research labs are investing substantial time and effort in developing novel advanced solutions for tinyML. The tinyML EMEA Innovation Forum aims to connect these efforts, find strength in unity, and cooperate to accelerate tinyML innovation across the region.

Venue

Grand Resort

Limassol, Cyprus

Contact us

Rosina Haberl

KEYNOTE SPEAKERS

PROGRAM PREVIEW

Full Stack Solutions

The overall objective of tinyML is to see autonomous devices able to react to and interact with their environment. This can only happen when machine learning models are optimized for the device hardware and software, and potentially linked to remote systems to provide smart functions, alerts, and user adaptation. The Full Stack Solutions session will cover uses of tinyML in such systems, comprising hardware, software, sensing, and backend operations, to deliver compelling user experiences.

Algorithms, Software & Tools

Algorithms and software designed for tiny devices are a crucial part of enabling tinyML technology. Deployment at scale is the next challenge in making tinyML solutions ubiquitous in our lives.

Bringing highly efficient inference models to real devices requires innovation in key areas of network architecture, toolchains, and embedded SW to optimize performance and energy.

The Algorithms, SW & tools session will explore some of the recent advancements in the field while uncovering some of the challenges we are facing as an industry.

tinyML Ecosystem Development Panel

Towards a Thriving tinyML Ecosystem: Strategy and Roadmap for EMEA

This session will bring together a panel drawn from tinyML stakeholders, including industry players, academics, and policy experts, to discuss the strategies needed for a thriving tinyML ecosystem. In particular, we will explore policy directions from the African Union (AU) and the European Union (EU) and the strategic partnerships needed to promote innovation in tinyML.

Hardware & Sensors

This session will cover innovation and advances within tinyML HW and sensors.

Improvement of tinyML applications depends on innovation in both tinyML HW and sensor technology. For many applications, the sensor and processing will be highly integrated and will use innovative new technologies to push the boundaries of power consumption and the necessary computing, both in terms of memory technology and arithmetic design, including the use of analog computation. New sensors and new hardware in turn drive the need for innovation in tooling, for example novel ways to quantize and prune networks.
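To make the tooling challenge concrete, here is a minimal, illustrative sketch of uniform post-training weight quantization in Python with NumPy. The function names and tensor shapes are hypothetical assumptions; production toolchains add calibration data, per-channel scales, and pruning passes on top of this basic idea.

```python
import numpy as np

def quantize_uint8(weights: np.ndarray):
    """Uniform affine quantization of a float tensor to 8-bit integers.

    Returns the quantized tensor plus the (scale, zero_point) pair
    needed to dequantize on device. A real tinyML toolchain would
    typically use per-channel scales chosen from calibration data.
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against constant tensors
    zero_point = int(round(-w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float weights."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(64, 32).astype(np.float32)
q, scale, zp = quantize_uint8(weights)
error = np.abs(dequantize(q, scale, zp) - weights).max()
print(f"max reconstruction error: {error:.4f}")
```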

On Device Learning

To date, the tinyML community has been successful in creating outstanding initiatives that have established on-device machine learning inference as an essential technology at an industry level.

Looking ahead to the evolution of ML technology, experts are increasingly asking how to let tiny devices adapt to the variability of the environments in which on-device inference is deployed, thereby compensating for concept and data drift.

To address that need, the tinyML Foundation created a focus working group on On Device Learning (ODL), whose goal is to make edge devices smarter, more efficient, and self-adaptive by observing changes in the collected data, potentially in collaboration with other deployed devices.

As a further contribution, the tinyML EMEA 2022 Innovation Forum will include an ODL session with presentations, and potentially demos, at the upcoming in-person event in Cyprus, where academia and industry experts will share their current research and solutions for ODL.
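As a flavor of what ODL involves, here is a minimal, hypothetical Python sketch of one ingredient, drift detection: a device tracks a running statistic of its input stream and flags when the deployed model's operating conditions have shifted. The class, rates, and threshold are illustrative assumptions, not a tinyML Foundation reference design.

```python
import numpy as np

class DriftMonitor:
    """Minimal on-device drift detector: tracks an exponential moving
    average of a feature statistic and flags inputs that stray too far
    from what was seen at deployment time. Illustrative only; the ODL
    working group considers far richer adaptation strategies."""

    def __init__(self, alpha: float = 0.01, threshold: float = 3.0):
        self.alpha = alpha          # EMA update rate
        self.threshold = threshold  # drift threshold in std deviations
        self.mean = None
        self.var = 1.0

    def update(self, x: float) -> bool:
        """Feed one scalar statistic (e.g., mean sensor energy).
        Returns True when drift is detected for this sample."""
        if self.mean is None:
            self.mean = x
            return False
        z = abs(x - self.mean) / (self.var ** 0.5 + 1e-8)
        # Slowly adapt the reference statistics to the current regime.
        self.mean += self.alpha * (x - self.mean)
        self.var += self.alpha * ((x - self.mean) ** 2 - self.var)
        return z > self.threshold

# Synthetic stream whose distribution shifts halfway through.
monitor = DriftMonitor()
stream = np.concatenate([np.random.randn(500), np.random.randn(500) + 2.5])
drifts = [t for t, x in enumerate(stream) if monitor.update(float(x))]
print(f"first drift flagged at sample {drifts[0] if drifts else 'none'}")
```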

News

August 02, 2022

tinyML EMEA Innovation Forum 2022 Sponsorship Opportunities

The tinyML EMEA Innovation Forum 2022 will continue the tradition of high-quality state-of-the-art presentations. Find out more about sponsoring and supporting the tinyML Foundation.

August 02, 2022

EMEA 2022 Venue Booking

The Grand Resort is a 5-Star facility in Limassol, Cyprus.

Schedule

10:00 am to 10:15 am

Welcome

10:15 am to 5:25 pm

Innovation Showcase Session

In this session, we will have eight slots for demonstrations of vendor-related or independent tinyML tools. The talks will be hands-on, interactive demonstrations by the speaker, showing a real example of how the demonstrated tool helps the tinyML product developer.

Following each set of talks, we will hold a roundtable discussion with the presenters. This will ensure an interactive, engaging first day that helps attendees deepen their understanding of what is possible with tinyML.


9:00 am to 9:15 am

Welcome

9:15 am to 10:00 am

Keynote

Session Chair: Massimo BANZI, Co-founder, Arduino

10:00 am to 11:00 am

Full Stack Solutions session - Part I

The overall objective of tinyML is to see autonomous devices able to react to and interact with their environment. This can only happen when machine learning models are optimized for the device hardware and software, and potentially linked to remote systems to provide smart functions, alerts, and user adaptation. The full-stack applications session will cover uses of tinyML in such systems, comprising hardware, software, sensing, and backend operations, to deliver compelling user experiences.

Session Moderator: Dominic BINKS, VP of Technology, Audio Analytic

Transforming sensors at the edge

Jonathan PEACE, CTO, InferSens Ltd

Abstract (English)

Until now, embedded DL at the edge has been limited to simple neural networks such as CNNs.

InferSens is focused on leveraging new silicon architectures to enable more sophisticated models at the edge that can unlock new meaning from sensor data while operating on a nanopower budget compatible with multi-year battery life.

This talk presents a real-world application of such networks to help automate the manual monitoring of water systems, which is carried out in hundreds of millions of properties worldwide to prevent outbreaks such as Legionnaires' disease. Our non-invasive, easy-to-deploy flow sensors infer flow and monitor temperature in real time.


Autonomous Nano-UAVs: An Extreme Edge Computing Case

Daniele PALOSSI, Postdoctoral Researcher, IDSIA & ETH Zurich

Abstract (English)

Nano-sized autonomous unmanned aerial vehicles (UAVs), i.e., nano-UAVs, are compelling flying robots characterized by a stringent form factor and payload, such as a 10 cm diameter and a weight of a few tens of grams. These fundamental traits challenge the onboard sensory and computational capabilities, allowing only for limited microcontroller-class computational units (MCUs) mapped to a sub-100 mW power envelope. At the same time, making a UAV fully autonomous means fulfilling its mission with only the resources available onboard, i.e., avoiding any external infrastructure or off-board computation.

To some extent, achieving the ambitious goal of a fully autonomous nano-UAV can be seen as the embodiment of the extreme edge computing paradigm. The computational/memory limitations are exacerbated by the paramount need for real-time, mission-critical execution of complex algorithms on a flying cyber-physical system (CPS). Enabling timely computation on a nano-UAV would ultimately lead to (i) new application scenarios not open to bigger UAVs, (ii) increased safety in human-robot interaction, and (iii) reduced cost of versatile robotic platforms.

In recent years, many researchers have pushed the state of the art in the onboard intelligence of nano-UAVs by leveraging machine learning and deep learning techniques as an alternative to the traditional and computationally expensive geometrical and computer vision-based methods. This talk presents our latest research effort and achievements in delivering holistic, vertically integrated solutions, which combine energy-efficient ultra-low-power MCUs with deep neural network vision-based algorithms, quantization techniques, and data augmentation pipelines.

In this presentation, we will follow our two keystone works in the nano-robotics area, i.e., the seminal PULP-Dronet [1,2] and the recent PULP-Frontnet [3] project, as concrete examples to introduce our latest scientific contributions. In detail, we will address several fundamental research questions, such as "how to shrink the number of operations and memory footprint of convolutional neural networks (CNNs) for autonomous navigation" [4], "how to improve the generalization capabilities of tiny CNNs for human-robot interaction" [5], and "how to combine the CPS's state with vision-based CNNs to enhance the performance of an autonomous nano-UAV" [6]. Finally, we will support our key findings with thorough in-field evaluations of our methodologies and the resulting closed-loop, end-to-end robotic demonstrators.

[1] D. Palossi, A. Loquercio, F. Conti, E. Flamand, D. Scaramuzza, and L. Benini, "A 64-mW DNN-based visual navigation engine for autonomous nano-drones," IEEE Internet of Things Journal, vol. 6, no. 5, pp. 8357-8371, 2019.
[2] D. Palossi, F. Conti, and L. Benini, "An open source and open hardware deep learning-powered visual navigation engine for autonomous nano-UAVs," in 2019 15th International Conference on Distributed Computing in Sensor Systems (DCOSS), pp. 604-611, IEEE, 2019.
[3] D. Palossi, N. Zimmerman, A. Burrello, F. Conti, H. Müller, L. M. Gambardella, L. Benini, A. Giusti, and J. Guzzi, "Fully onboard AI-powered human-drone pose estimation on ultra-low-power autonomous flying nano-UAVs," IEEE Internet of Things Journal, vol. 9, no. 3, pp. 1913-1929, 2021.
[4] L. Lamberti, V. Niculescu, M. Barciś, L. Bellone, E. Natalizio, L. Benini, and D. Palossi, "Tiny-PULP-Dronets: Squeezing neural networks for faster and lighter inference on multi-tasking autonomous nano-drones," in 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), IEEE, 2022.
[5] E. Cereda, M. Ferri, D. Mantegazza, N. Zimmerman, L. M. Gambardella, J. Guzzi, A. Giusti, and D. Palossi, "Improving the generalization capability of DNNs for ultra-low power autonomous nano-UAVs," in 2021 17th International Conference on Distributed Computing in Sensor Systems (DCOSS), pp. 327-334, IEEE, 2021.
[6] E. Cereda, S. Bonato, M. Nava, A. Giusti, and D. Palossi, "Vision-state fusion: Improving deep neural networks for autonomous robotics," arXiv preprint arXiv:2206.06112, 2022.
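As a rough, illustrative companion to research question [4] above, the Python sketch below does the back-of-the-envelope accounting of parameters and multiply-accumulate (MAC) operations that such CNN-shrinking work optimizes. The layer shapes are hypothetical assumptions, not those of PULP-Dronet.

```python
def conv2d_cost(in_ch, out_ch, k, out_h, out_w):
    """Parameters and multiply-accumulate ops for one conv layer."""
    params = out_ch * (in_ch * k * k + 1)          # weights + biases
    macs = out_ch * in_ch * k * k * out_h * out_w  # one MAC per weight per output pixel
    return params, macs

# Hypothetical four-stage stack on a grayscale input, with
# stride-2 convolutions roughly halving resolution at each stage.
layers = [
    # (in_ch, out_ch, kernel, out_h, out_w)
    (1,   32, 5, 100, 100),
    (32,  32, 3,  50,  50),
    (32,  64, 3,  25,  25),
    (64, 128, 3,  13,  13),
]

total_params = total_macs = 0
for in_ch, out_ch, k, h, w in layers:
    p, m = conv2d_cost(in_ch, out_ch, k, h, w)
    total_params += p
    total_macs += m

print(f"params: {total_params/1e3:.1f} k -> {total_params/1e3:.1f} kB at int8")
print(f"MACs per frame: {total_macs/1e6:.1f} M")
```

Numbers like these, set against a sub-100 mW MCU budget, are what drive the quantization and architecture-squeezing choices discussed in the talk.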

11:00 am to 11:30 am

Coffee & Networking

11:30 am to 12:30 pm

Full Stack Solutions session - Part II

Session Moderator: Dominic BINKS, VP of Technology, Audio Analytic

Full-stack neuromorphic, autonomous tiny drones

Federico PAREDES-VALLES, PhD Candidate, MAVLab, Delft University of Technology

Abstract (English)

Neuromorphic sensing and processing hold an important promise for creating autonomous tiny drones. Both promise to be lightweight and highly energy efficient, while allowing for high-speed perception and control. For tiny drones, these characteristics are essential, as these vehicles are very agile but extremely restricted in terms of size, weight and power. In my talk, I present our work on developing neuromorphic perception and control for tiny autonomous drones. I delve into the approach we followed for having spiking neural networks learn visual tasks such as optical flow estimation. Furthermore, I explain our ongoing effort to integrate these networks in the control loop of autonomously flying drones.
Schematic summary of the presentation:
– Introduction: tiny drones, specifications and constraints
– How to make tiny drones autonomous? Drawing inspiration from nature
– Neuromorphic sensing and computing
  – Event cameras and spiking neural networks
  – Goal: fully neuromorphic, vision-based autonomous flight
  – Work 1: vertical landings using event-based optical flow
  – Works 2 and 3: unsupervised and self-supervised learning of event-based optical flow using spiking neural networks
  – Work 4: neuromorphic control for high-speed landings
– Conclusion
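For readers unfamiliar with spiking networks, the sketch below simulates a single leaky integrate-and-fire (LIF) layer over discrete time steps in Python. It is an illustrative toy under assumed dynamics (simple leak, hard reset), not the speaker's trained optical-flow networks, whose weights are learned with the unsupervised and self-supervised methods listed above.

```python
import numpy as np

def lif_layer(input_spikes, weights, tau=0.9, v_th=1.0):
    """Leaky integrate-and-fire layer over discrete time steps.

    input_spikes: (T, n_in) binary array of input spike events
    weights:      (n_in, n_out) synaptic weights
    Returns a (T, n_out) binary array of output spikes.
    """
    T = input_spikes.shape[0]
    n_out = weights.shape[1]
    v = np.zeros(n_out)                 # membrane potentials
    out = np.zeros((T, n_out), dtype=np.uint8)
    for t in range(T):
        v = tau * v + input_spikes[t] @ weights  # leak, then integrate inputs
        spikes = v >= v_th                       # fire where threshold is crossed
        v[spikes] = 0.0                          # hard reset after a spike
        out[t] = spikes
    return out

rng = np.random.default_rng(0)
events = (rng.random((100, 16)) < 0.1).astype(np.float32)  # sparse input spikes
w = rng.normal(0.0, 0.5, size=(16, 4))
print(lif_layer(events, w).sum(axis=0))  # spike counts per output neuron
```

The appeal for tiny drones is that computation happens only when spikes arrive, so sparse event streams translate directly into low power.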

Low power, low latency multi-object tracking and classification using a fully event-driven neuromorphic pipeline

Nogay KUEPELIOGLU, Assistant Neuromorphic Machine Learning Engineer, SynSense AG

Abstract (English)

Detecting, tracking, and classifying objects in real time from video is an important but costly problem in computer vision. Typically, this is achieved by sliding a window-based CNN over the full image to obtain the areas of interest (i.e., object locations) and then identifying each object. This approach requires the image to be stationary so that the model can be run over the entire image in multiple passes. A second approach involves a one-shot, one-pass model such as YOLO, where the entire image is passed through once and localization and identification are performed simultaneously. While this approach suits real-time systems better than the former, which requires multiple passes, such models typically need a lot of memory and thus consume a lot of power. Furthermore, previous work along these lines has demonstrated that it is challenging to achieve good accuracy with spiking convolutional neural networks for such a model. The problem becomes even more costly to solve due to the state-holding nature of spiking neurons and the memory required to store those states.

Locating and identifying objects are two inherently different problems, and solving them separately can reduce the network size significantly. This is partially achieved in the field of neuromorphic engineering by dynamic vision sensors, which only emit events, with corresponding timestamps, for individual pixels whose luminosity changes, rather than passing an entire frame. This eliminates the background, which is irrelevant to the objects being tracked. These events can easily be grouped into separate clusters by their spatial locations. In this work, we implement such a clustering algorithm to track objects, preprocess the events so that each cluster has the same output size, and then pass them to a spiking neural network for identification. The pipeline is implemented and tested for tracking single and multiple objects, as well as identifying these objects, on the novel DynapCNN™ asynchronous spiking convolutional neural network processor.
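The published pipeline runs on dedicated hardware, but the clustering idea can be illustrated in a few lines of Python: greedily assign each (x, y, t) event to the nearest active centroid, or spawn a new cluster when none is close. Everything below (function names, radius, decay rate) is a hypothetical sketch, not SynSense's implementation; in the real pipeline each cluster's events would then be cropped, normalized, and fed to the spiking classifier.

```python
import numpy as np

def cluster_events(events, radius=20.0, decay=0.99):
    """Assign DVS events (x, y, t) to spatial clusters, one object each.

    Greedy online clustering: an event joins the nearest centroid within
    `radius` pixels (nudging that centroid toward the event); otherwise
    it spawns a new cluster. Illustrative only, not the DynapCNN pipeline.
    """
    centroids, labels = [], []
    for x, y, _t in events:
        pos = np.array([x, y], dtype=np.float32)
        if centroids:
            dists = [np.linalg.norm(pos - c) for c in centroids]
            i = int(np.argmin(dists))
            if dists[i] <= radius:
                centroids[i] = decay * centroids[i] + (1 - decay) * pos
                labels.append(i)
                continue
        centroids.append(pos)
        labels.append(len(centroids) - 1)
    return labels, centroids

# Two synthetic event blobs around (30, 30) and (90, 60).
rng = np.random.default_rng(1)
def blob(cx, cy):
    return np.column_stack([rng.normal(cx, 3, 200),
                            rng.normal(cy, 3, 200),
                            np.arange(200)])
events = np.vstack([blob(30, 30), blob(90, 60)])
labels, centroids = cluster_events(events)
print(f"{len(centroids)} clusters at {[c.round(1).tolist() for c in centroids]}")
```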

12:30 pm to 1:30 pm

Lunch & Networking

1:30 pm to 2:30 pm

Demos & Posters

2:30 pm to 3:30 pm

Algorithms, Software & Tools session - Part I

Algorithms and software designed for tiny devices are a crucial part of enabling tinyML technology. Deployment at scale is the next challenge in making tinyML solutions ubiquitous in our lives.

Bringing highly efficient inference models to real devices requires innovation in key areas of network architecture, toolchains, and embedded SW to optimize performance and energy.

The Algorithms, SW & tools session will explore some of the recent advancements in the field while uncovering some of the challenges we are facing as an industry.

Session Moderator: Elad BARAM, VP Products, Emza Visual Sense

Session Moderator: Martin CROOME, Vice President Marketing, GreenWaves

Schedule subject to change without notice.

Committee

Francesco CONTI

Chair

University of Bologna, Italy

Alessandro GRANDE

Vice-chair

Edge Impulse

Theocharis THEOCHARIDES

Local chair

University of Cyprus

Fran BAKER

Arm

Elad BARAM

Emza Visual Sense

Dominic BINKS

Audio Analytic

Martin CROOME

GreenWaves

Tomas EDSÖ

Arm

Charlotte FRENKEL

Institute of Neuroinformatics

Evgeni GOUSEV

Qualcomm Research, USA

Ciira MAINA

Dedan Kimathi University of Technology

Hajar MOUSANNIF

Cadi Ayyad University, Morocco

Daniel MÜLLER-GRITSCHNEDER

Chair of Electronic Design Automation, Technical University of Munich

Speakers

Alberto L. SANGIOVANNI-VINCENTELLI

UC Berkeley, Cadence & Synopsys

Massimo BANZI

Arduino

Sponsors
