tinyML Talks: TinyML as-a-Service – Bringing ML inference to the deepest IoT Edge & Speech Recognition on low power devices

Date

September 15, 2020

Location

Virtual

Schedule

Timezone: PDT

TinyML as-a-Service – Bringing ML inference to the deepest IoT Edge

Hiroshi DOYU, Embedded AI Researcher

Ericsson

TinyML, as a concept, refers to running ML inference on the ultra-low-power microcontrollers found in IoT devices. Yet today, various challenges still limit the effective execution of TinyML in the embedded IoT world. As both a concept and a community, it is still under development.
Here at Ericsson, the focus of our TinyML as-a-Service activity is to democratize TinyML, enabling manufacturers to start their AI businesses using TinyML more easily.
Our goal is to make the execution of ML tasks possible and easy on a specific class of devices: devices characterized by very constrained hardware and software resources, such as sensor and actuator nodes built on these microcontrollers.
We will present how the "as-a-service" model can be bound to TinyML, give a high-level technical overview of our concept, and introduce the design requirements and building blocks that characterize this emerging paradigm.

Hiroshi DOYU, Embedded AI Researcher

Ericsson

Hiroshi Doyu is a system software developer, researcher, and long-time Linux kernel contributor. Hiroshi is part of the Ericsson Research IoT technologies team and has spent more than 20 years in product development. He has contributed to upstream Linux kernel development for more than a decade, including support for the Nvidia Tegra SoC. He received his M.Sc. in aerospace engineering from Osaka Prefecture University, Japan. Hiroshi is passionate about technology but also loves to play floorball and ice hockey.

Speech Recognition on low power devices

Vikrant TOMAR, Founder and CTO

Fluent.ai

Sam MYER, Lead Developer

Fluent.ai Inc.

In this talk, we will cover how we at Fluent.ai train models in high-level libraries such as PyTorch and then run them on low-power MCUs, such as the Arm Cortex-M series of microcontrollers or DSPG digital signal processors. We will discuss the gains achieved through low-level programming optimizations as well as neural network optimizations such as 8-bit quantization, unique model architectures, network compression, and layer selection.
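To make the 8-bit quantization idea concrete, here is a minimal sketch of symmetric per-tensor post-training quantization, a common TinyML technique for shrinking float32 weights to int8. This is an illustrative example only, not Fluent.ai's actual implementation; the function names and the single-scale scheme are assumptions for the sketch.

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 with one scale factor per tensor.

    The scale is chosen so the largest-magnitude weight maps to +/-127;
    every weight is then rounded to the nearest representable int8 step.
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from int8 values."""
    return q.astype(np.float32) * scale

# Toy weight tensor: int8 storage is 4x smaller than float32,
# at the cost of a bounded rounding error of at most scale/2 per weight.
w = np.array([0.5, -1.27, 0.03, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

On real deployments the inference engine keeps the weights in int8 and folds the scale into the arithmetic, which is what makes the memory and compute savings possible on Cortex-M-class hardware.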

Vikrant TOMAR, Founder and CTO

Fluent.ai

Vikrant Tomar is Founder and CTO of Fluent.ai Inc. He is a scientist and executive with nearly 10 years of experience in speech recognition and machine/deep learning. He obtained his PhD in automatic speech recognition at McGill University, Canada, where he worked on manifold learning and deep learning approaches for acoustic modeling. He has previously worked as a Research Scientist at Nuance Communications Inc. and Vestec Inc.

Sam MYER, Lead Developer

Fluent.ai Inc.

Sam Myer is the lead developer at Fluent.ai Inc., where his responsibilities include Fluent's embedded speech recognition engine. He has an M.Sc. in signal processing from Queen Mary University of London and a B.Sc. in computer science from McGill University. Sam has nearly 15 years of software development experience across multiple cities, including New York, Berlin, and Montreal.

Schedule subject to change without notice.