The tinyML Summit 2023 will be the premier gathering of key tinyML members from all aspects of the ecosystem. This year, end-users, innovators, and business leaders are invited, reflecting the expanding breadth of industries impacted by the maturing tinyML technology and application space. The tinyML Summit 2023 will provide a unique environment for focused, high-impact presentations and conversations from both suppliers and users to advance the accessibility and adoption of tinyML solutions. No matter where you are in the Edge Computing AI/ML supply chain, this is the must-attend event for 2023.
The tinyML Research Symposium 2023 will be held in conjunction with the tinyML Summit. The Research Symposium is the premier annual gathering of senior-level technical experts and decision makers representing the fast-growing global tinyML community.
Hyatt Regency San Francisco Airport
1333 Bayshore Highway, Burlingame, CA 94010
End-user applications and products paving the future of tinyML
The tinyML Summit end-user applications and products will feature presentations and demonstrations on the innovative ways neural networks and machine learning technologies are being used to develop groundbreaking products for consumers and industries across various sectors. From health wearables to electric vehicles, these technologies are being leveraged to revolutionize the way we live and work. For example, tinyML is being utilized in wearables that monitor our health, and deep learning in devices that check the state of health of EV batteries. Join us for this session to learn more about these exciting new applications of tinyML and how they are shaping future products.
Software to Accelerate tinyML Solutions
In the process of bringing machine learning solutions to low-power embedded systems, software is critical. Recent advancements in software utilities, libraries, ecosystems, and runtime environments have made it easier and faster to bring tinyML solutions to market and solve real user challenges. Software innovations brought to market from the tinyML ecosystem improve the performance, reliability, and efficiency of on-device inference to make entirely new solutions possible. Talks in this topic will highlight speakers who show the impact of such software innovation on the improvement and proliferation of end-systems encompassing tinyML, as well as discuss the challenges and future opportunities in the tinyML software space at large. They may also include joint presentations with end-users, key roadmap announcements, and advances that push the state of the art for tinyML forward.
Optimized Sensing for tinyML product solutions
As more is done on the edge with less power, code, and physical real estate, sensor requirements need to stay aligned with the rapid growth in demand for tinyML products. For the tinyML Summit 2023, we are seeking sensing solutions presentations on innovative applications and products in vision, audio, inertial, environmental, medical sensing capabilities and more. We are interested in new innovations and ‘smart capabilities’ required to squeeze more performance and bandwidth out of ever lower-power edge sensing solutions. How this impacts processor and ASIC design, as well as the required changes in machine learning methods and tools, should be discussed.
Enabling tinyML Applications via Innovations in Circuits, Hardware Architectures and Devices
Embedded AI has advanced in leaps and bounds in recent years. The combination of silicon technology size reduction, leading to higher memory densities and increased compute, and ever-shrinking tinyML inference algorithms has offered a compelling proposition for MCU-class applications. In this session we will be looking at the impact of the confluence of silicon design techniques and smarter algorithms/tooling in bringing tinyML technology to life on the edge. This session will focus on the challenges and opportunities faced by silicon vendors, and follow the impact that innovative hardware solutions have made all the way through the ecosystem, from tooling/SW up to deployment and end-customer applications.
Development Tools Enabling tinyML Solutions
The key to exploring tinyML solutions efficiently is highly productive development tools. These tools can help collect and label data, use automatic machine learning techniques to select and train ML models, compile and optimize tinyML libraries for specific devices, manage firmware deployment to fleets of embedded hardware, and more. Innovations supporting the entire tinyML solution workflow enable users to rapidly iterate, develop, and deploy new solutions. In talks on this topic, speakers will demonstrate the business impact of development tool improvements and capabilities on end-user applications and solutions. The presentations may also include joint presentations with end-users, key roadmap announcements, and advances that push the state of the art for tinyML forward.
Who should attend:
- Engineers, Developers, Managers, Executives, and Founders developing sensors, silicon, software, machine learning tools, or systems for the tinyML market
- System Designers and Integrators looking to incorporate low-power low-cost machine learning into your devices and products (across different verticals, e.g. consumer electronics, industrial, XR, healthcare, etc.)
- Engineers looking to incorporate smart sensor systems to solve your particular industry’s challenges in any of factory, industrial, warehouse, medical, consumer electronics, human-computer interaction, automotive, wearables, and more
- Product Managers and Technology Officers driving digital transformation at the very edge and who are interested and motivated in bringing low power and affordable artificial intelligence and real-time data analytics into their products and solutions
- Investors interested in the state-of-the-art in extreme energy-efficient AI and seeing opportunities in this space (including HW-SW-MLOps)
- Anyone looking to be inspired to help create the future of extremely low-power, intelligent systems
tinyML Summit 2023 will continue the tradition of high-quality state-of-the-art presentations. Find out more about sponsoring and supporting tinyML Foundation.
The tinyML Summit organizers are pleased to continue their Best Product of the Year and Best Innovation of the Year nominations. However, for 2023, more categories have been added to each.
7:30 am to 8:30 am
Registration & Breakfast
8:30 am to 9:00 am
Welcome & tinyML state of the union
9:00 am to 9:45 am
Keynote - Achin Bhowmik from Starkey
Tiny ML, Large Impact: Multifunctional Hearing, Health, and Communication Devices
Achin BHOWMIK, CTO and EVP Engineering, Starkey
With half a billion people suffering from disabling hearing loss globally according to the World Health Organization, hearing aids are crucially important medical wearable devices. Untreated hearing loss has been linked to increased risks of social isolation, depression, dementia, fall injuries, and other health issues. However, partly due to historical stigma associated with assistive devices, only a small fraction of people who need help with hearing have adopted the devices.
In this talk, we will review the neuroscience of hearing, technologies for enhancing and augmenting auditory perception, and a new class of multifunctional in-ear devices with embedded sensors and artificial intelligence. In addition to providing frequency-dependent amplification of sound to compensate for audibility deficiencies, these devices continuously classify sound with advanced machine learning algorithms and enhance speech understanding, serve as a continuous monitor for physical and cognitive activities, an automatic fall detection and alert system, as well as a personal assistant with connectivity to the cloud. Furthermore, these devices stream phone calls and music with all-day battery life, translate languages, transcribe speech, and remind the wearer of medication and other tasks.
Rapid progress in artificial intelligence is bringing an array of new devices, applications, and user benefits to the world. Now, these technologies are transforming the traditional hearing aids into multipurpose devices, helping people not only hear better, but live better lives in many more ways.
9:45 am to 10:05 am
A perspective on the trajectory from custom intelligent sensors to broad market adoption of smart platforms
Al HESHMATI, VP of Systems and Software, TDK USA
While not always visible or appreciated by end consumers, there are already some good examples of intelligent use of sensors enabled by edge AI, utilizing machine learning techniques. We also see examples of ML solutions deployed in industrial applications, including condition-based monitoring, quality control automation, and factory digitization. These solutions tend to be developed by teams including domain experts, sensor experts, data scientists, and machine learning engineers. Depending on the application, they may also require close collaboration with HW/silicon designers and platform SW developers. While this is reasonable for well-defined applications and for large companies with significant resources who can assemble such multi-disciplinary teams, it’s not practical for a wide range of emerging applications. In this session, TDK, an industry-leading sensor provider to the IoT/tinyML community, shares its perspective on the trajectory for widespread adoption.
Enabling broad market adoption and scaling requires supporting the ecosystem with a level of integration, at good cost/size/power, that can be used to quickly enable new applications. Smart platforms including sensors, connectivity, processing, and power meet common requirements of many IoT applications. Many sensor vendors, including TDK, are deploying eval kits intended to seed developer community activities. There is an opportunity to deliver commercial product HW platforms that can more easily be utilized for different end-applications. Over time, these designs can lead to more integrated solutions, where the level of integration balances flexibility against cost/power/size optimization.
Broad enabling of always-on, interactive apps and services will drive intelligence to the edge, but many verticals with differing AI requirements exist. We can make an IoT device spanning multiple applications. However, the diversity of applications and their requirements means there is no “single AI solution” covering all applications. There is exciting SW work being done in the industry to enable tinyML applications more broadly. We already see several ML automation toolchains in the market focused on tinyML. Widespread adoption of smart sensor platforms requires seamless availability and optimized interworking of end-to-end elements, from toolchains to readily available ML-enabled HW platforms.
10:05 am to 10:25 am
Joint application end-user presentation: HP with STMicroelectronics
Personal Computing devices use-case and applications enabled by Smart Sensors
Nick THAMMA, Engineering Manager CMIT Sensor & Vision Architecture, HP
Mahesh CHOWDHARY, Fellow and Senior Director of MEMS Software Solutions , STMicroelectronics
Smart Sensors are enabling a distributed computing approach which significantly reduces the bandwidth requirement for transferring sensor data when the edge computing capabilities in MCUs or sensors are utilized. STMicroelectronics offers smart sensors, such as LSM6DSOX, which have a built-in Machine Learning Core (MLC) and Finite State Machine (FSM).
This capability allows the user to develop a variety of applications for consumer devices such as laptops, smartwatches, or wireless sensor nodes where power consumption needs to be minimized. These advanced sensors are increasingly being used to build solutions with an always-on user experience and extremely low current consumption, on the order of single-digit microamps, for sensor applications such as activity tracking, gesture recognition, and vibration monitoring. A decision tree or a finite state machine can be downloaded into the sensor to build functionality such as human activity tracking.
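As a rough illustration of the kind of small decision tree such an in-sensor ML core can host, here is a pure-Python sketch of windowed-feature activity classification. The features, thresholds, and class labels are hypothetical; real MLC trees are trained on labeled sensor data and converted with the vendor's tooling rather than hand-written.

```python
def extract_features(accel_window):
    """Mean and peak-to-peak of accelerometer magnitude over a window."""
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in accel_window]
    return sum(mags) / len(mags), max(mags) - min(mags)

def classify_activity(accel_window):
    """Tiny two-level decision tree: stationary / walking / running."""
    mean, p2p = extract_features(accel_window)
    if p2p < 0.2:                    # almost no motion energy
        return "stationary"
    if p2p < 1.5 or mean < 1.1:      # moderate motion
        return "walking"
    return "running"                 # sustained high motion energy

# A still device: magnitude stays near 1 g with no variation.
still = [(0.0, 0.0, 1.0)] * 32
print(classify_activity(still))  # -> stationary
```

A tree this small maps naturally onto feature registers and threshold comparisons, which is why it fits inside a sensor's power budget.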
By integrating ST’s latest IMU with an embedded ML core into our devices, our engineering team at HP worked with ST’s experts to build and train an AI model for recognizing various user activities based on device and user motion. Because of this work, our PCs are now able to intelligently manage their thermals and power states, for best comfort when using the PC off the desk and best battery life on the go. A set of features enabled on personal computing devices will be presented by HP.
10:25 am to 10:45 am
End user presentation by Braveheart
Machine Learning processors are used to create advanced insights on a biometric wearable patch
Stuart MCEACHERN, Founder and VP, Braveheart Wireless
Machine learning enhances the analysis of biometric data captured by wearable patches. The medical industry commonly employs these wearable devices for a variety of purposes, balancing battery life, functionality, and processing power. Advances in on-device AI and embedded processors provide the system designer with increased processing capabilities while maintaining battery life. Braveheart Wireless incorporates deep neural networks and tinyML technologies in cutting edge ultra-low power chips to optimize the power efficiency and processing power of wearable biometric patches, leading to more insightful biometric monitoring.
10:45 am to 11:15 am
Break & Networking
11:15 am to 11:35 am
Arm Ethos-U support in TVM ML framework
Rahul VENKATRAM, Sr. Product Manager, Arm
The Arm Ethos-U55 and Ethos-U65 microNPUs are a new class of machine learning (ML) processors, specifically designed to accelerate ML computation in constrained embedded and IoT devices. They are optimized to efficiently execute the mathematical operations commonly used in ML algorithms.
The existing software stack uses Vela, a tool that compiles a Google TensorFlow Lite neural network model into an optimized version that can run on an embedded system containing an Arm Ethos-U microNPU.
We are excited to share with the tinyML audience a brand-new way to compile NN models for the Ethos-U microNPUs: using the open-source TVM framework. In this presentation, we will walk through how you can use this framework to develop, deploy, and debug your tinyML applications faster by using a common toolchain to run ML networks. This will help to remove the issue of ML framework fragmentation that most ML engineers struggle with. We look forward to sharing insights and best practices to give you a head start in using this new method on our Ethos-U microNPUs.
11:35 am to 11:55 am
Why TinyML Applications Fail: An examination of common challenges and issues encountered for real-world projects
Christopher KNOROWSKI, CTO, SensiML Corp
Much has been made about the benefits of running TinyML models at the IoT edge to reduce the latency, privacy, performance, and reliability concerns of centralized cloud AI processing. Demos and proof-of-concepts have spanned a broad array of intelligent IoT applications, from industrial predictive maintenance to wake word triggering, agricultural monitoring, and image recognition, to name a few.
Despite this, many users have encountered challenges in implementing TinyML for their real-world applications. In this talk we will discuss such issues drawing from over ten years of experience SensiML has had working with customers across a broad array of applications. This no-nonsense discussion will provide an unvarnished look at the various challenges encountered, common misconceptions about the technology, and various methods to address the common pitfalls that besiege commercial TinyML projects.
11:55 am to 12:15 pm
Deploying Visual AI Solutions in the Retail Industry
Mark HANSON, VP of Technology and Business Innovation, Sony Semiconductor Solutions of America
An image sensor with AI-processing capability is a novel architecture that is pushing vision AI closer to the edge to enable applications at scale. Today many AI applications stall in the PoC stage and never reach commercial deployment to solve real-world problems because existing systems lack simplicity, flexibility, affordability, and commercial-grade reliability. We’ll investigate why the retail industry struggles to keep track of stock on its retail shelves while relying on retail employees to manually monitor stock and how our (AITRIOS) vision AI application for on-shelf-availability can eliminate complexity and inefficiency at scale.
12:15 pm to 12:35 pm
End user presentation by Mayo Clinic SPPDG
Low-Energy Physiologic Biomarker Machine-Learning Inference on a Wearable Device with a GAP9 RISCV Based Processor
Christopher L. FELTON, Development Engineer IV, Mayo Clinic SPPDG
The presentation will cover a top-to-bottom institutional framework for developing new machine-learning physiologic biomarkers, covering a breadth of related topics culminating in the evaluation of machine-learning (ML) models on low-energy wearable prototypes. The presentation will give an overview of the human subject testing the Mayo Clinic develops to build datasets for training (Techentin, et al., 2019); the machine-learning approaches to extract new physiologic biomarkers or signatures; and an overview of how the Mayo team uses the GreenWaves toolflow and the GreenWaves GAP9 target to design low-energy wearable prototypes that demonstrate wearable physiologic monitoring concepts. The presentation will include specific examples, primarily an effort to build regenerative physiologic signal autoencoders and determine the feasibility of implementation on a low-energy wearable platform. Additionally, the performance of the ML models, the results of the tinyML toolflow used to reduce the models’ memory and computation resources, and the results of mapping to the GreenWaves GAP9 processor on custom hardware will be presented.
12:35 pm to 2:00 pm
Lunch & Networking
2:00 pm to 2:45 pm
Exhibits & Posters
2:45 pm to 3:00 pm
Responsible Design of Edge AI: A Pattern Approach for Detecting and Mitigating Bias
Wiebke HUTIRI, PhD Candidate, Delft University of Technology
In the past years, there have been many incidents of biased AI systems that have systematically discriminated against certain groups of people. There is thus growing societal and governmental pressure for fair and non-discriminatory AI systems, and a pressing need to detect and mitigate bias in machine learning (ML) workflows.
With ML systems being an integral component of Edge AI, the need for detecting and mitigating bias to design trustworthy Edge AI is evident. However, while bias has been widely studied in several domains that deploy AI, very few studies examine bias in the Edge AI setting. Despite Edge AI being prolific (for example, it is estimated that in 2024 there will be 8.4 billion voice assistants deploying voice-based Edge AI, a number roughly equal to the human population), research on bias and fairness in Edge AI remains scarce. Best-practice approaches exist for detecting and mitigating bias in ML systems, but these approaches are not common in Edge AI workflows. Progress in developing trustworthy Edge AI systems thus remains slow, increasing the potential for harm resulting from biased systems, and its legal repercussions.
This presentation introduces patterns for detecting and mitigating bias in Edge AI. Patterns capture proven design experience in generalizable templates so that they can be reused in future design projects. This makes them effective at capturing and communicating design knowledge, and a popular tool in object-oriented programming, software engineering, and other engineering design disciplines. The pattern catalog that I present has been developed from best-practice knowledge in ML fairness and has been adapted to account for new design knowledge specific to detecting and mitigating bias in Edge AI systems. The presentation will demonstrate how the patterns can be used to detect and mitigate bias in voice activation systems comprising keyword spotting and speaker verification components.
Bias in Edge AI is a new area of study. So is the use of patterns to communicate transferable knowledge from ML fairness to Edge AI applications. By learning how to detect and mitigate bias from best practices in ML fairness, and by making this knowledge transferable between Edge AI domains, responsible design of Edge AI can be significantly facilitated.
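As a concrete illustration of one bias-detection step such a pattern might prescribe, the following sketch disaggregates a keyword spotter's false-rejection rate by speaker group instead of reporting only an aggregate score. The groups and outcomes are synthetic, and the choice of metric is an assumption for illustration.

```python
def false_rejection_rate(outcomes):
    """outcomes: list of (should_accept, was_accepted) booleans."""
    positives = [o for o in outcomes if o[0]]
    rejected = [o for o in positives if not o[1]]
    return len(rejected) / len(positives)

def per_group_frr(results_by_group):
    """Disaggregate the false-rejection rate by group instead of averaging."""
    return {g: false_rejection_rate(o) for g, o in results_by_group.items()}

# Synthetic evaluation results for two speaker groups.
results = {
    "group_a": [(True, True)] * 9 + [(True, False)],      # 1 of 10 rejected
    "group_b": [(True, True)] * 7 + [(True, False)] * 3,  # 3 of 10 rejected
}
rates = per_group_frr(results)
gap = max(rates.values()) - min(rates.values())
print(rates)  # a large gap between groups signals group-dependent performance
```

The pattern here is the evaluation discipline, not the metric itself: any quality metric can be disaggregated the same way.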
3:00 pm to 3:15 pm
Designing Multi-Modal Smart Human Machine Interfaces with Microcontrollers
Sriram KALLURI, Product Manager, NXP Semiconductors
Explore the development of a solution kit for next-generation smart human-machine interfaces (HMIs) that enable multi-modal, intelligent, hands-free capabilities, including machine learning (ML), vision for face and gesture recognition, far-field voice control, and a 2D graphical user interface (GUI), all in an overall system design using just a single high-performance MCU.
The machine learning applications of face and gesture recognition and voice control are performed entirely on the edge device, eliminating the need for the cloud along with the privacy and latency concerns that come with it.
The audience will learn about the design decisions involved in bringing to market a development kit with a variety of features that help minimize time to market, risk, and development effort, including fully integrated turnkey software and hardware reference designs, balanced with a software framework that gives designers the flexibility to customize vision and voice functions and combinations of these features.
3:15 pm to 3:30 pm
Human Health Analysis: A killer app for tinyML
Zach SHELBY, Co-founder and CEO, Edge Impulse
Human-health wearables are becoming increasingly popular, with the potential to track and detect a wide range of conditions that traditionally have only been able to be observed through larger machinery or prolonged personal observation. Their small sizes and portability requirements, however, present processing, power, and connectivity challenges. Cloud-based machine learning approaches don’t work well with these restrictions, and create further issues around data privacy and security.
Incorporating edge machine learning into human-health wearable devices offers a solution to these problems. With end-to-end tinyML-capable development tools that can enable on-device data processing and analysis, the power and connectivity needs of wearables are reduced, allowing the maintenance of ultra-compact mobility, while offering no-latency data analysis for health monitoring of the wearer. Additionally, eliminating the need for data transmission also increases the privacy and security of user data.
Developing optimized and highly efficient algorithms able to take advantage of a wearable device’s specific processor and sensor architecture requires a data-centric approach to tinyML that allows ML practitioners to experiment with different ML architectures while understanding how these choices impact on-device performance. Thanks to this device-aware approach, we have seen a rise in applications revolutionizing the healthcare industry, from activity and sleep pattern analysis (Oura) to blood glucose monitoring (Know Labs) to stress level sensing (Nowatch).
We invite the tinyML community to contribute to the development of edge ML in health wearables by sharing their own datasets and algorithms and proposing new use cases and applications. Together, we can continue to push the boundaries of what is possible and make on-person health monitoring a reality for all.
3:30 pm to 4:00 pm
Break & Networking
4:00 pm to 4:45 pm
tinyML application throwdown: What application area has the most potential?
4:45 pm to 5:30 pm
tinyML edu Update
Machine Learning Sensor Certifications
Vijay JANAPA REDDI, Associate Professor, Harvard University
There are billions of microcontrollers worldwide, and we are on the verge of a new data-centric paradigm: putting machine learning intelligence into embedded microcontrollers. This paradigm is made possible by improvements in TinyML methods, tools, and technologies. This new idea of “machine learning sensors” (ML Sensors) is a significant change for the embedded ecosystem. ML sensors raise new concerns about the privacy and security of sensitive user data and the portability and ease of integrating them into the existing ecosystem. Because of this, we need a framework for the practical, responsible, and efficient deployment of ML sensors as a community. To this end, the talk will focus on defining ML sensors and the challenges and opportunities of bringing a new generation of embedded sensor technology to the market.
This talk is not about a point solution; instead, it discusses the framework that needs to be implemented for ML sensors to be realized in practice. Achieving this requires knowing ML sensors’ technical and ethical implications before they are developed and distributed. Nevertheless, one of the most critical issues to address is the concept of a datasheet for ML sensors. Future sensors must be clear and transparent about what they do and how they do it. Such information can be enshrined within a datasheet analogous to a traditional sensor datasheet. Hence, we will focus on the following three topics:
1. Interface – What universal interface is needed for ML Sensors?
2. Standards – What standards need to be in place for ML Sensors?
3. Ethics – What ethical considerations are needed for ML Sensors?
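To make the datasheet idea concrete, here is a hypothetical sketch of what a machine-readable ML-sensor datasheet entry could look like. No such standard exists yet; every field name below is an assumption, chosen only to illustrate the kind of transparency the talk calls for.

```python
from dataclasses import dataclass, field

@dataclass
class MLSensorDatasheet:
    name: str
    task: str                  # what the sensor infers, e.g. "person detection"
    model_summary: str         # architecture and footprint, in plain language
    training_data: str         # provenance of the training set
    reported_accuracy: float   # headline metric on a named test set
    known_limitations: list = field(default_factory=list)
    data_leaves_device: bool = False   # privacy: does raw data ever leave the sensor?

# A hypothetical entry for an imagined person-detection camera module.
sheet = MLSensorDatasheet(
    name="ExampleCam-1",
    task="person detection",
    model_summary="int8 CNN, ~250 KB",
    training_data="internal dataset, 100k labeled frames",
    reported_accuracy=0.92,
    known_limitations=["low-light performance untested"],
)
print(sheet.data_leaves_device)  # -> False: only the inference result is emitted
```

Defaulting `data_leaves_device` to `False` captures the privacy argument: an ML sensor that processes data on-device can declare, in its datasheet, that raw data never leaves the package.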
• Call to Action for the TinyML Community
We need the community’s help to develop and deploy ML sensors safely and reliably so they can reach their full potential. Therefore, the first goal of this talk is to educate the community about the implications of developing ML sensors and to understand how we can develop them systematically so that they are both efficient and effective in the existing embedded ecosystem.
5:30 pm to 5:45 pm
Closing Day 1
5:45 pm to 7:30 pm
7:30 am to 8:30 am
8:30 am to 9:00 am
Welcome, Recap Day 1 and Agenda for Day 2
9:00 am to 9:45 am
Keynote - Ian Bratt from Arm Limited
tinyML: From Concept to Reality
Ian BRATT, Fellow and Senior Director, Central Technology Group, Arm
tinyML is at a tipping point. This community has come together to form a stable technology foundation, enabled by standardization of software and methodologies, which will enable tinyML to scale at a level that’s never been seen before. Join Ian as he talks about the journey of tinyML, the infinite possibilities this community has made a reality, and the new use cases they’re unlocking for the future.
9:45 am to 10:05 am
Multi-Lingual Digital Assistance on Edge Devices
Mahesh GODAVARTI, Engineering Technical Leader, Cisco Systems
Advances in speech-recognition technology in the past decade have led to ubiquitous deployment of natural language and speech-based digital assistant technologies, albeit in the cloud, where the latency of ASR can be high. With a system that is built to mimic normal conversations, this latency can lead to unnatural user experiences. With an ASR model based in the cloud, there is no one source that we can reliably pinpoint to reduce latency, as factors out of our immediate control (back-and-forth transmission time, end-of-speech detection, server location, etc.) can vary along user-specific lines. But hardware and software compute limitations on edge devices also make offloading an entire multilingual ASR model infeasible for recognition of domain-agnostic natural language input. Therefore, developing a hybrid model that resides partly in the cloud and partly on the edge device is key to developing natural interactions between digital assistants and end users. This is where the tinyML approach plays a crucial role in the development of a hybrid system that can be supported by the edge device’s compute capabilities.
We present our approach to building a natural language multilingual hybrid model that combines the expertise of linguistics, modeling, and implementation to create a system that mimics one running wholly on the edge device. Our hybrid model consists of a short-phrase “local command-recognition” system running on the edge device and a larger natural language command-recognition system running in the cloud. We describe the process for selecting the commands to be offloaded to the local device and the training regimen for developing the tinyML “command-recognition” model. This model’s network architecture includes a common embedding network, universal to all languages, followed by a per-language decision network that captures the variations in output commands specific to each language. We then share the improvement in user experience, in terms of reduced latency, using our hybrid approach.
We conclude with planned future innovations to help bring larger functionality of the digital assistant to the edge.
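The edge/cloud routing at the heart of such a hybrid system can be sketched in a few lines. The command set, confidence scores, and threshold below are hypothetical stand-ins; a real deployment would run a tinyML model for the local step and a full ASR service in the cloud.

```python
LOCAL_COMMANDS = {"volume up", "volume down", "mute", "answer call"}

def local_recognizer(utterance):
    """Stand-in for an on-device model: exact match with high confidence."""
    text = utterance.lower().strip()
    if text in LOCAL_COMMANDS:
        return text, 0.95
    return None, 0.0

def route(utterance, cloud_asr, threshold=0.8):
    """Answer locally when confident; otherwise fall back to the cloud."""
    command, confidence = local_recognizer(utterance)
    if command is not None and confidence >= threshold:
        return command, "edge"            # low-latency on-device path
    return cloud_asr(utterance), "cloud"  # full natural-language fallback

# A trivial cloud stand-in for demonstration.
result, path = route("Mute", lambda u: "transcribed: " + u)
print(result, path)  # -> mute edge
```

The user-visible latency win comes from the first branch: frequent short commands never leave the device, while open-ended input still reaches the larger cloud model.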
10:05 am to 10:25 am
Joint application end-user presentation: Shiloh Industries with Imagimob
Using tinyML and Sound Event Detection for weld anomaly detection in Manufacturing
Jeffrey MOORE, Senior Controls Engineer, Shiloh/Dura
Alexander SAMUELSSON, CTO/Co-Founder, Imagimob
In an increasingly competitive global manufacturing industry, innovation has never been more essential. In today’s presentation we will show an example of one such innovative effort.
tinyML has wide-ranging potential to improve manufacturing processes, including robotic operations, quality control, and efficiency.
Working with Imagimob, we have initiated a project for the acoustic detection of robotic gas metal arc welding anomalies. These include the detection of weld defects such as porosity, burn-through, and low deposition. Conventional electronic detection methods have significant limitations, and as such, human visual inspection is heavily relied upon.
Our project aims to improve current processes and gain a potential competitive advantage. We will discuss the challenge, the data, data collection, tinyML model building, testing and implementation, and the concept of continuous learning. We will also discuss the hardware, the open-source software used, PLC/SPS integration, and the workflows developed for this project. Additionally, this presentation will include examples of how Large Language Models (LLMs) were used to expedite our efforts.
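As a minimal illustration of acoustic anomaly detection of the kind described, the sketch below flags audio frames whose RMS energy deviates from a baseline learned on known-good welds. The features, baseline statistics, and threshold are assumptions; a production sound-event-detection model would use learned spectral features instead.

```python
import math

def rms(frame):
    """Root-mean-square energy of one audio frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def fit_baseline(good_frames):
    """Mean and standard deviation of RMS energy over known-good frames."""
    energies = [rms(f) for f in good_frames]
    mean = sum(energies) / len(energies)
    var = sum((e - mean) ** 2 for e in energies) / len(energies)
    return mean, math.sqrt(var)

def is_anomalous(frame, baseline, k=3.0):
    """Flag frames more than k standard deviations from the baseline energy."""
    mean, std = baseline
    return abs(rms(frame) - mean) > k * std

# Synthetic "known-good weld" audio frames and one loud outlier.
good = [[0.10] * 64, [0.11] * 64, [0.09] * 64]
baseline = fit_baseline(good)
print(is_anomalous([0.10] * 64, baseline))  # -> False
print(is_anomalous([0.90] * 64, baseline))  # -> True
```

Fitting the baseline only on known-good audio mirrors the practical constraint in welding: defect examples are scarce, so the model characterizes normality and flags deviations.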
10:25 am to 10:45 am
How can we find real uses for tinyML?
Pete WARDEN, CEO, Useful Sensors
A lot of us are excited by the new possibilities that running machine learning on small, low-cost, low-energy chips creates, but for our work to be sustainable we need customers who will pay for what we’re building. It’s fair to say that these use cases haven’t emerged as quickly as expected. So why is that? And more fundamentally, how can we address the challenges we face in launching ML products?
In this talk, Pete will share his experiences working with customers, and the surprising challenges he has encountered in areas like predictive maintenance, consumer electronics, and some product categories where there does seem to be commercial demand for tinyML solutions. Finally, Pete will summarize his thoughts with a clear Call to Action for the tinyML community to address the challenges and opportunities before us.
10:45 am to 11:15 am
Break & Networking
11:15 am to 11:35 am
End user presentation by Tecniplast
End-to-End MLOps system for pre-clinical medical research
Marco GARZOLA, Digital Innovation Manager, Tecniplast
Tecniplast’s core business is mainly generated by the production of special plastic cages for laboratory animals and by customer support for big pharma and academic institutions in the pre-clinical industry sector.
Nowadays, operators accessing clean rooms where laboratory animals are housed create several drawbacks: the need for sterility in every operation; stress induced in the animals by the visual inspections that regulatory bodies require for welfare purposes; and difficulty in detecting animal problems, because the animals are nocturnal, with consequent latency of human interventions and associated errors. In the end, these affect animal welfare management, facility productivity, costs, and the scalability of services.
Current Tecniplast digital solutions are based on capacitive sensor technology that generates data sent to the cloud, where it is processed. This paradigm is sustainable because the data transfers are small. Unfortunately, it rapidly becomes unsustainable once more complex sensors (video, audio) are introduced, since these generate huge amounts of data.
Recognizing these challenges as opportunities, in 2019 the Tecniplast Innovation team started a tiny embedded-camera project for laboratory animal welfare management, exploring it as a locally monitored end-to-end system. End-to-end here means: 1) production-ready hardware architectures and boards, including microcontrollers and sensors; 2) embedded firmware; 3) connectivity protocols; and 4) a cloud dashboard with data storage and processing.
In this way, Tecniplast customers will gain the following benefits: massively scaled monitoring operations, steadily improved sterility in the application of medical protocols, reduced costs, and a new class of machine learning services.
In partnership with STMicroelectronics’ TinyML expertise, we developed the ML models for the hardware we designed and deployed tinyAI intelligence onto the embedded boards themselves. This approach simplified the end-to-end software architecture and, even more importantly, reduced the cloud costs that were unsustainable at scale.
Currently, the pre-production system is up and running, with field tests of 500 boards from which we can collect data to fine-tune our tiny NN solutions; every MCU runs 3 NNs with 3 different purposes. In the near future, new tinyAI solutions will be introduced for a wider range of sensors.
This project has so far been a remarkable MLOps example, spanning from the cloud to embedded deployment and covering every required step from problem definition to field trials, passing through data acquisition, labeling, tinyML design, and mapping onto low-power MCUs.
The optimized product today encompasses not only the embedded device but also the software/cloud stack and the mechanical parts.
That’s how we conceived a close-to-production system at Tecniplast, and I am very happy to share my experience with the tinyML community, including the lessons learned.
11:35 am to 11:55 am
Low Power Radar Sensors and TinyML for Embedded Gesture Recognition and Non-Contact Vital Sign Monitoring
Michele MAGNO, Head of the Project-based learning Center, ETH Zurich, D-ITET
Human-computer interaction (HCI) is an attractive scenario, and a wide range of solutions, strategies, and technologies have been proposed recently. A promising novel sensing technology is high-frequency short-range Doppler radar. This talk presents low-power, high-accuracy embedded hand-gesture recognition using low-power short-range radar sensors from Acconeer. A 2D Convolutional Neural Network (CNN) using range-frequency Doppler features is combined with a Temporal Convolutional Network (TCN) for time-sequence prediction. The final algorithm has a model size of only 45,723 parameters, yielding a memory footprint of only 91 kB. We acquired two datasets containing 11 challenging hand gestures performed by 26 different people, for a total of 20,210 gesture instances. The algorithm achieved an accuracy of up to 92% on the 11 hand gestures. Furthermore, we implemented the prediction algorithm on the GAP8 parallel ultra-low-power RISC-V processor and on ARM Cortex-M processors. The hardware-software solution meets the requirements of battery-operated wearable devices. The talk will also present novel recent results based on Transformer networks, along with a demonstrator in an earbud form factor.
This year I will also present recent results on using the same radar sensor technology for vital sign monitoring, such as respiration rate and the especially challenging heart rate, by combining signal processing and tinyML. The algorithm fits in a footprint of a few hundred kilobytes and can run on an ARM Cortex-M microcontroller.
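As a back-of-the-envelope check on the figures quoted above (45,723 parameters, 91 kB), the footprint is consistent with weights stored at 16-bit precision. A minimal sketch, using a hypothetical helper:

```python
# Back-of-the-envelope weight-storage footprint for a tinyML model,
# assuming every parameter is stored at a fixed bit width.
def model_footprint_kb(num_params: int, bits_per_param: int) -> float:
    """Return the weight-storage footprint in decimal kilobytes."""
    return num_params * bits_per_param / 8 / 1000

# The gesture model quoted above: 45,723 parameters.
# At 16-bit precision the footprint matches the ~91 kB figure.
print(round(model_footprint_kb(45723, 16)))  # 91
```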
11:55 am to 12:15 pm
Industrial IoT applications with low power smart sensing
Kishore MANGHNANI, Co-founder and CEO, Shoreline IoT
Mark STUBBS, Co-founder & CTO, Shoreline IoT
12:15 pm to 12:35 pm
Tiny spiking AI for the sensor-edge
Petrut BOGDAN, Neuromorphic Architect, Innatera
The brain relies on a powerful computing paradigm known as the spiking neural network (SNN) to realize its cognitive functions. SNNs encode sensory information as simple, precisely-timed voltage pulses – or spikes – and realize advanced cognitive functions by leveraging the fine-grained temporal relationships between sequences of spikes. This principle underpins the brain’s ability to memorize and robustly recognize complex patterns in noisy sensory data. Innatera applies the principles of SNNs toward overcoming the challenges of always-on sensing applications in power-limited and battery-powered devices.
Innatera’s Spiking Neural Processor (SNP) is an analog-mixed signal processing platform that leverages SNNs for high-performance signal processing and pattern recognition in sensing applications.
The SNP implements a proprietary continuous-time analog processing architecture that utilizes highly parameterized silicon neurons and synapses to carry out analog-domain processing on sparse, spike-based representations of sensor data. The combination of massively parallel execution, the radically low power dissipation of the analog-mixed-signal computing elements, and the event-driven nature of SNNs allows the SNP to realize complex signal processing and pattern recognition functions within a sub-milliwatt power envelope and at sub-millisecond latencies. Applications for the SNP are developed using the PyTorch-compatible Talamo SDK, which simplifies the development, optimization, and deployment of SNNs onto this innovative new hardware.
The unprecedented combination of ultra-low power, low latency, and compact models enables the SNP to realize complex spatio-temporal pattern recognition capabilities even in battery-powered sensing devices. An example of such an application is acoustic scene classification, where audio data streams are continuously processed to identify the type of ambient noise. The SNP’s capabilities are demonstrated in this application, yielding inference with state-of-the-art accuracy figures, power dissipation <100μW and latency <50ms. This highlights the promise of SNNs in addressing the challenges of sensing applications and underscores why the next generation of tiny ML at the sensor edge is neuromorphic.
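Innatera’s analog silicon neurons are proprietary, but the spike-based encoding the abstract describes can be illustrated with a discrete-time leaky integrate-and-fire (LIF) neuron, the textbook software analogue (a sketch, not the SNP’s actual dynamics):

```python
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Discrete-time leaky integrate-and-fire neuron: the membrane
    potential decays by `leak` each step, integrates the input, and
    emits a spike (resetting to 0) when it crosses `threshold`."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# A sustained strong input produces a regular spike train; the precise
# timing of the spikes, not their amplitude, carries the information.
print(lif_spikes([0.6, 0.6, 0.6, 0.6]))  # [0, 1, 0, 1]
```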
12:35 pm to 1:45 pm
Lunch & Networking
1:45 pm to 2:00 pm
Best Product for ML Processing Chips, MCUs
Best Product for ML Processing Chips, AI Accelerators
Best Product for Vision
Best Product for Sensors
Best Innovation for Software Enablement and Tools
Session Moderator: Davis SAWYER, Co-founder & Chief Product Officer, Deeplite
Session Moderator: Adam FUKS, Fellow, MCU/MPU Architecture, NXP
2:00 pm to 3:00 pm
Exhibits & Posters
3:00 pm to 3:45 pm
Growing end-users and application diversity in tinyML roundtable
Michael Hansen, Nabu Casa Inc.
Nancy Li, Bright AI
Muthu Sabarethinam, Honeywell Connected Enterprise
Session Moderator: Stacey HIGGINBOTHAM, Founder, Stacey on IoT
3:45 pm to 4:15 pm
Break & Networking
4:15 pm to 4:45 pm
tinyML 4Good Update
Session Moderator: Thomas BASIKOLO, Programme Officer, ITU
Session Moderator: Alex GLOW, Lead Hardware Nerd, Hackster.io
Session Moderator: Evgeni GOUSEV, Senior Director, Qualcomm Research, USA
4:45 pm to 5:00 pm
Joint end-user application: The Procter & Gamble Company with Qeexo
How a consumer goods company leverages Qeexo’s AutoML to accelerate data science adoption and value
Stephanie PAVLICK, Machine Learning Engineer, Qeexo
Grant STRIEMER, Director, Corporate R&D, The Procter & Gamble Company
Qeexo has collaborated closely with P&G as part of P&G’s digital transformation strategy to disrupt how it innovates (faster/better/cheaper) in the CPG industry, helping drive the design of irresistibly superior products and experiences for global consumers. While P&G is growing both expert- and practitioner-level AI know-how so that its digital transformation and impact are pervasive across all aspects of innovation, the practitioners are citizen data scientists coming from other disciplines. P&G is leveraging Qeexo’s Auto Machine Learning as one of the tools that lower the activation energy needed to enable this broad innovator community to develop, manage, and deploy new algorithms.
We will detail many of the productivity-boosting features available in recent releases of Qeexo AutoML, including an assisted segmentation feature that helps customers quickly and efficiently label large amounts of data, model size reduction, data augmentation to represent possible variations of a collected dataset, and more.
Finally, we will discuss the overall potential for the end-customer solution, lessons learned, and future opportunities.
5:00 pm to 5:15 pm
Enhancing neural processing units with digital in-memory computing
Danilo PAU, Technical Director, IEEE and ST Fellow, STMicroelectronics
The proliferation of embedded Neural Processing Units (NPUs) is enabling the adoption of Tiny Machine Learning for numerous cognitive computing applications on the edge, where maximizing energy efficiency is key. To overcome the limitations of traditional Von Neumann architectures, novel designs based on computational memories are arising. STMicroelectronics is developing an experimental low-power NPU that integrates Digital In-Memory Computing (DIMC) SRAM with a modular dataflow inference engine, capable of accelerating a wide range of DNNs. In this work, we present a 40nm version of this architecture with DIMC-SRAM tiles capable of in-memory binary computations, dramatically increasing the computational efficiency of binary layers. We performed power/performance analysis to demonstrate the advantages of this paradigm, which in our tests achieved a TOPS/W efficiency up to 40x higher than software and traditional NPU implementations. We have then extended the ST Neural compilation toolchain to automatically map binary and mixed-precision NNs onto the NPU, applying high-level optimizations and binding the model’s binary GEMM layers to the DIMC tiles.
The overall system was validated by developing a real-time Face Presence Detection application, as a potential real-world power-constrained use-case. The application ran with a latency < 3 ms, and the DIMC subsystem achieved a peak efficiency > 100 TOPS/W for binary in-memory computations.
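In-memory binary computation of the kind the DIMC tiles perform is commonly formulated as an XNOR-popcount dot product over ±1 values. A minimal sketch of that general formulation (an illustration of the technique, not ST’s implementation):

```python
def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element ±1 vectors packed as integers
    (bit 1 = +1, bit 0 = -1), computed as XNOR then popcount:
    result = matches - mismatches = 2*popcount(xnor) - n."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # 1 where bits agree
    return 2 * bin(xnor).count("1") - n

# a = [+1, -1, +1, +1] -> 0b1011 ; w = [+1, +1, -1, +1] -> 0b1101
# Elementwise products: +1, -1, -1, +1 -> sum = 0
print(binary_dot(0b1011, 0b1101, 4))  # 0
```

Replacing multiply-accumulate with bitwise XNOR and popcount is what makes binary layers so cheap to evaluate inside SRAM.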
5:15 pm to 5:30 pm
Exploring ML Compiler Optimizations with microTVM
Gavin UBERTI, Software Engineer, OctoML
Deep learning compilers can use optimization passes to make TinyML models run much faster. But what optimizations do they actually perform? In this talk, we’ll use Apache TVM to compile a MobileNetV1 model for Cortex-M microcontrollers. We’ll look inside its intermediate representations and watch how they change as optimizations are applied. We’ll see how convolution kernels are tailored for the device, how quantization parameters are folded into subsequent operators, and how layouts are rewritten on the fly.
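The quantization-parameter folding mentioned in the abstract can be sketched in plain Python (an illustration of the idea, not TVM’s actual pass): rather than dequantizing to float between operators at runtime, the compiler fuses the input, weight, and output scales into a single multiplier computed at compile time. The function names here are hypothetical.

```python
# Illustration of quantization-parameter folding: a
# dequantize -> multiply -> quantize chain collapses to one rescale.
def naive(q_in, s_in, q_w, s_w, s_out):
    """Dequantize each operand to float, multiply, requantize."""
    return round((q_in * s_in) * (q_w * s_w) / s_out)

def folded(q_in, q_w, s_in, s_w, s_out):
    """Same computation with the three scales folded into a single
    constant multiplier, evaluated once at compile time."""
    multiplier = s_in * s_w / s_out  # folded by the compiler
    return round(q_in * q_w * multiplier)

# Both paths produce the same requantized integer result.
print(naive(50, 0.02, 30, 0.1, 0.05), folded(50, 30, 0.02, 0.1, 0.05))
```

The folded form keeps the hot loop in integer arithmetic with one fixed multiplier, which is exactly what makes it attractive on a Cortex-M.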
Schedule subject to change without notice.
Qualcomm Research, USA
Mallik P. MOTURI
Strategic World Ventures
Christopher L. FELTON
Mayo Clinic SPPDG
Nabu Casa, Inc.
Sony Semiconductor Solutions of America
Stacey on IoT
Delft University of Technology
ETH Zurich, D-ITET
Vijay JANAPA REDDI
Honeywell Connected Enterprise
The Procter & Gamble Company