tinyML Talks: Deploying AI to Embedded Systems

Date

April 13, 2021

Location

Virtual

Schedule

Timezone: PDT

Deploying AI to Embedded Systems

Bernhard SUHM, Product Manager for Machine Learning

MathWorks

While most AI frameworks provide high-level languages and user interfaces for training models, preparing a trained model in a format suitable for embedded deployment often requires recoding. Deploying industrial applications to embedded systems raises additional challenges, including:
1. Integration of the AI model within a larger system
2. Meeting hardware constraints like limited memory and power consumption
3. Ensuring ongoing model performance, even if there are changes in the environment

This presentation describes an environment that supports interactive training of AI models, their preparation for embedded deployment, and their integration within industrial applications, all in a single framework. After the system is prototyped in a high-level language as a single codebase, low-level deployable C/C++ or CUDA code is generated automatically. A system modeling and simulation environment with preconfigured blocks for many industrial applications facilitates integration of the AI model.

To fit large AI models on hardware with limited memory and power, conversion to fixed-point arithmetic reduces the footprint of machine learning models, while in deep learning, quantization is applied to the millions of parameters in deep neural networks.
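The talk does not detail a specific quantization scheme; as a minimal sketch of the general idea, the following (hypothetical, not MathWorks code) shows symmetric 8-bit quantization, which replaces each 32-bit float parameter with an 8-bit integer plus a shared scale factor, cutting parameter storage roughly fourfold:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric 8-bit quantization: map float weights to int8 plus one scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# each parameter now occupies 1 byte instead of 4;
# the round-off error per weight is at most half the scale step
```

The worst-case error of this scheme is half a quantization step, which is why it works well for over-parameterized deep networks but can need per-layer or per-channel scales in practice.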

To ensure ongoing model performance, retraining models from scratch with additional data requires more memory and computational power than most embedded systems can provide. Incremental learning instead adjusts model parameters continuously on streaming data and is thus computationally less demanding. When using code generation for deployment, model parameters need to be separated from the prediction code to avoid redeploying the model with every update.
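As an illustration of both points, the sketch below (an assumption for illustration, not the presenter's implementation) updates a linear model one streaming sample at a time via stochastic gradient descent, and keeps the parameters in a separate structure so the fixed prediction code never needs to be redeployed when they change:

```python
import numpy as np

def predict(x, params):
    """Prediction code: fixed at deployment, reads weights from a separate buffer."""
    return float(np.dot(params["w"], x) + params["b"])

def incremental_update(x, y, params, lr=0.05):
    """One SGD step on a single streaming sample: O(d) memory, no stored training set."""
    err = predict(x, params) - y
    params["w"] -= lr * err * x
    params["b"] -= lr * err
    return params

# parameters live apart from the code; only this buffer changes over time
params = {"w": np.zeros(2), "b": 0.0}

rng = np.random.default_rng(0)
for _ in range(2000):
    x = rng.normal(size=2)
    y = 3.0 * x[0] - 1.0 * x[1] + 0.5  # environment drift would change this target
    params = incremental_update(x, y, params)
```

Because each update touches only the parameter buffer, an environment change shows up as the buffer drifting toward the new optimum while the deployed prediction code stays byte-for-byte identical.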

Bernhard SUHM, Product Manager for Machine Learning

MathWorks

Bernhard Suhm is the product manager for Machine Learning at MathWorks. He works closely with customer-facing and development teams to address customer needs and market trends in MathWorks' machine learning products, primarily the Statistics and Machine Learning Toolbox. Prior to joining MathWorks, Bernhard led a team of analysts consulting call centers on optimizing the delivery of customer service. He also held positions at a usability consulting company and Carnegie Mellon University. He received a PhD in Computer Science, specializing in speech user interfaces, from Karlsruhe University in Germany.

Schedule subject to change without notice.