tinyML Talks: Training Embedded AI/ML Using Synthetic Data & Using AI to design energy-efficient AI accelerators for the edge

Date

December 8, 2020

Location

Virtual

Schedule

Timezone: PST

Training Embedded AI/ML Using Synthetic Data

Ian CAMPBELL, CEO

OnScale

In the past, engineers relied on physical testing to generate the datasets used to train embedded AI and ML algorithms. Today, engineers at world-class companies are using Cloud Simulation to generate “synthetic” datasets for training embedded AI/ML. Cloud Simulation empowers engineers to run massive parametric sweeps, for example over sensors within a system subjected to manufacturing variances and varying environmental operating conditions. Simulation also allows engineers and data scientists to control the noise within the AI/ML dataset: removing it entirely for baseline analysis, or injecting various types of noise (e.g., thermal or vibration noise) into a sensor system to ensure the AI/ML algorithms are robust to it.
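As a rough, minimal sketch of the noise-injection idea above (NumPy-based and entirely hypothetical; the function name and noise parameters are invented for illustration and are not OnScale's API), one might augment a clean simulated sensor trace like this:

```python
import numpy as np

def inject_noise(clean_signal, thermal_std=0.01, vibration_freq_hz=50.0,
                 vibration_amp=0.005, sample_rate_hz=1000.0, seed=None):
    """Add hypothetical thermal (white Gaussian) and vibration (sinusoidal)
    noise to a clean, simulated sensor trace."""
    rng = np.random.default_rng(seed)
    t = np.arange(len(clean_signal)) / sample_rate_hz
    thermal = rng.normal(0.0, thermal_std, size=len(clean_signal))
    vibration = vibration_amp * np.sin(2 * np.pi * vibration_freq_hz * t)
    return clean_signal + thermal + vibration

# Sweep noise levels to build a robustness-oriented training set,
# keeping the noiseless case (thermal_std=0.0) as the baseline.
clean = np.sin(2 * np.pi * 5.0 * np.arange(1000) / 1000.0)  # stand-in for a simulated trace
training_set = [inject_noise(clean, thermal_std=s, seed=i)
                for i, s in enumerate([0.0, 0.01, 0.05, 0.1])]
```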

Ian CAMPBELL, CEO

OnScale

Ian Campbell is a Georgia Tech-trained engineer and serial entrepreneur, having founded two Silicon Valley high-tech companies. The first, NextInput, broke records in bringing a new MEMS technology to market in high-volume applications like smartphones and wearables.
The second, OnScale, is a Cloud Engineering Simulation platform backed by Intel Capital and Google’s Gradient Ventures. OnScale Cloud combines advanced proprietary multiphysics solvers with cloud supercomputers and AI/ML, breaking the performance and cost constraints faced by engineers optimizing Digital Prototypes of devices like next-gen MEMS, 5G RF filters, IoT devices, medical devices, and much more. OnScale massively reduces cost, risk, and time-to-market for R&D firms pushing the boundaries of new technology.

Using AI to design energy-efficient AI accelerators for the edge

Weiwen JIANG, Postdoctoral Scholar

University of Notre Dame

In this talk I will present a novel machine-learning-driven hardware/software co-exploration framework that tackles the challenge of automating the design of energy-efficient hardware accelerators for neural networks. Unlike existing hardware-aware neural architecture search (NAS), which assumes a fixed hardware design and explores only the neural architecture space, this framework simultaneously explores both the architecture search space and the hardware design space to identify the neural architecture and hardware pairs that maximize both accuracy and hardware efficiency. This greatly opens up the design freedom, especially for running machine learning on resource-constrained edge devices. We will see how this approach significantly pushes forward the Pareto frontier between hardware efficiency and model accuracy, enabling better design tradeoffs and rapid time to market for flexible accelerators designed from the ground up.
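As a highly simplified sketch of the co-exploration idea (the design spaces, cost model, and numbers below are invented for illustration and do not reflect the framework's actual method), a joint search can enumerate (architecture, hardware) pairs and keep only the Pareto-optimal ones:

```python
import itertools

# Toy design spaces (illustrative only): network depth/width and
# accelerator processing elements (PEs) / on-chip buffer size.
ARCH_SPACE = [{"depth": d, "width": w} for d in (4, 8) for w in (16, 32)]
HW_SPACE = [{"pe": p, "buffer_kb": b} for p in (8, 16) for b in (32, 64)]

def evaluate(arch, hw):
    """Stand-in proxies for accuracy and energy; a real framework would
    train (or predict) model accuracy and query a hardware cost model."""
    accuracy = 0.70 + 0.02 * arch["depth"] + 0.001 * arch["width"]
    energy = arch["depth"] * arch["width"] / hw["pe"] + 0.01 * hw["buffer_kb"]
    return accuracy, energy

def dominates(q, p):
    """q dominates p if it is at least as accurate and at least as
    energy-efficient, and strictly better on one of the two."""
    return q[0] >= p[0] and q[1] <= p[1] and (q[0] > p[0] or q[1] < p[1])

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Jointly enumerate architecture/hardware pairs and keep the frontier.
candidates = [evaluate(a, h) for a, h in itertools.product(ARCH_SPACE, HW_SPACE)]
for acc, energy in sorted(pareto_front(candidates)):
    print(f"accuracy={acc:.3f}, energy={energy:.2f}")
```

Exhaustive enumeration is only viable for toy spaces like these; a framework like the one described in the talk would instead drive the exploration with machine learning, but the Pareto-dominance bookkeeping is the same idea.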

Schedule subject to change without notice.