tinyML Talks: Embedded Computer Vision Hardware through the Eyes of AR/VR & Using TensorFlow Lite for Microcontrollers for High-Efficiency NN Inference on Ultra-Low Power Processors

Date

May 14, 2020

Location

Virtual

Schedule

Timezone: PDT

Embedded Computer Vision Hardware through the Eyes of AR/VR

Hans REYSERHOVE, Postdoctoral Research Scientist

Facebook Reality Labs

Augmented reality is an emerging technology that requires pushing the curve on almost all relevant fronts: computer vision algorithms and ML pipelines, sensing and processing hardware, memories, power consumption and system form factor. This talk will dig deeper into the technological challenges being solved today to make augmented reality happen. Although there are parallels with other embedded CV systems, a few key differentiators are essential to AR. Central to all of this is the technology stack: image sensors, interfaces and processing hardware are a few of the blocks under consideration that ultimately guide the system-level trade-offs. These trade-offs are further illustrated by applying them to the key always-on computer vision and ML pipelines necessary for augmented reality. Many of these considerations translate to the broader tinyML and embedded computer vision design space.

Hans REYSERHOVE, Postdoctoral Research Scientist

Facebook Reality Labs

Hans Reyserhove is a Postdoctoral Research Scientist at Facebook Reality Labs. His research focuses on intelligent vision systems and sensing technologies for Augmented & Virtual Reality. He holds a PhD from the University of Leuven, Belgium, focused on the design of energy-efficient microcontroller systems and better-than-worst-case silicon systems. He has an M.S. degree focused on CMOS image sensors with pixel-level A/D conversion for extreme parallelism. His main interests lie in the design, prototyping & optimization of silicon systems, including image sensors, hardware accelerators and computer vision applications.

Using TensorFlow Lite for Microcontrollers for High-Efficiency NN Inference on Ultra-Low Power Processors

Jamie CAMPBELL, Software Engineering Manager

Synopsys, Inc.

Deeply embedded AIoT applications performing neural network (NN) inference need to meet specified real-time performance requirements on systems with limited memory and power budgets. Meanwhile, developers want a convenient way of migrating their NN graph designs to an embedded environment. In this talk, we will describe how specific hardware extensions on embedded processors can vastly improve the performance of NN inference operations, allowing performance targets to be met while consuming less power. We will then show how optimized NN inference libraries can be integrated with well-known ML front-ends to facilitate development flows.
To illustrate these concepts, we’ll show the Synopsys MLI (Machine Learning Inference) library running on a DSP-enhanced DesignWare® ARC® EM processor and explain how it was integrated with TensorFlow Lite for Microcontrollers (TFLM). To conclude, we will showcase Himax Technologies’ WE-I Plus silicon, a very low-power SoC targeted at AIoT applications that supports both MLI and TFLM.
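
As a rough illustration of the application-level flow described above, the sketch below sets up a TFLM MicroInterpreter and runs a single inference. It is a minimal example under stated assumptions, not code from the talk: the model array g_model_data, the arena size, and the run_inference wrapper are placeholders, and on an ARC EM build the operators would resolve to the optimized MLI kernels rather than the portable reference kernels.

    #include <cstdint>
    #include "tensorflow/lite/micro/all_ops_resolver.h"
    #include "tensorflow/lite/micro/micro_error_reporter.h"
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/schema/schema_generated.h"

    // Hypothetical flatbuffer model array (e.g. converted from a .tflite file);
    // not part of the talk materials.
    extern const unsigned char g_model_data[];

    // Static working memory for tensors; the size is an illustrative guess
    // and must be tuned to the actual model.
    constexpr int kTensorArenaSize = 20 * 1024;
    static uint8_t tensor_arena[kTensorArenaSize];

    int run_inference() {
      static tflite::MicroErrorReporter error_reporter;
      const tflite::Model* model = tflite::GetModel(g_model_data);

      // The resolver maps graph operators to kernel implementations. On an
      // ARC EM build, ops such as CONV_2D would be backed by MLI-optimized
      // kernels instead of the reference ones.
      static tflite::AllOpsResolver resolver;

      tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                           kTensorArenaSize, &error_reporter);
      if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

      TfLiteTensor* input = interpreter.input(0);
      // ... fill input->data.int8 (or input->data.f) with sensor data ...

      if (interpreter.Invoke() != kTfLiteOk) return -1;

      TfLiteTensor* output = interpreter.output(0);
      // ... read the result from output->data ...
      return 0;
    }

The appeal of integrating an optimized library behind a front-end such as TFLM is that application code like this stays unchanged; only the kernel implementations behind the op resolver differ between a reference build and an accelerated one.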

Schedule subject to change without notice.