tinyML Talks: Low-Power Computer Vision & Saving 95% of your edge power with Sparsity to enable tiny ML

Date

June 16, 2020

Location

Virtual

Schedule

Timezone: PDT

Low-Power Computer Vision

Yung-Hsiang LU, Inaugural Director of the Purdue Engineering John Martinson Entrepreneurial Center

Purdue University

Computer vision has been widely adopted. Many applications require that vision solutions run on battery-powered systems, such as mobile phones, autonomous robots, and drones. This presentation will survey the existing technologies for making computer vision energy-efficient, including (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. The speaker will explain how to use hierarchical neural networks to reduce energy consumption on embedded systems. Finally, this talk will introduce the IEEE International Low-Power Computer Vision Challenge (the 2020 competition is open July 1-31; please visit https://lpcv.ai/).
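To make the first two techniques in the abstract concrete, here is a minimal NumPy sketch (not from the talk) of magnitude-based pruning followed by uniform 8-bit parameter quantization; the layer shape, pruning ratio, and per-tensor scaling are illustrative assumptions, not the speaker's method.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4)).astype(np.float32)  # toy layer weights

# Magnitude pruning: zero out the smallest 50% of weights by absolute value.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# Uniform 8-bit quantization: map floats to int8 with a per-tensor scale.
scale = np.abs(pruned).max() / 127.0
quantized = np.round(pruned / scale).astype(np.int8)

# Dequantize to approximate the original values at inference time.
dequantized = quantized.astype(np.float32) * scale
```

Storing `quantized` (1 byte per weight, half of them zero) instead of `weights` (4 bytes each) is where the memory and energy savings come from; real deployments typically fine-tune after pruning to recover accuracy.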

Saving 95% of your edge power with Sparsity to enable tiny ML

Jonathan TAPSON, CSO

GrAI Matter Labs

The kinds of tasks for which ML is used at the edge differ from those for ML in the datacenter. Specifically, they tend to be continuous real-time processes with streaming data, on which inference must be performed in each sampling interval. In this talk, we will describe how this type of process offers significant opportunities to reduce computation, which can be exploited as very low latency and/or very low power for tiny machine learning tasks. If we make a conscious effort to exploit multiple types of sparsity, we can drive significant advances in edge processing. We will explain these types of sparsity (time, space, connectivity, activation) in terms of edge processes, and how they affect computation on a practical level. We will introduce the new GrAI Core architecture, and explain how it uses an event-based paradigm to maximally exploit sparsity and save energy in edge inference loads or improve latency for tiny machine learning applications. The results will be illustrated with examples of real-world applications in which GrAI Core chips are being used.

Jonathan Tapson is the Chief Scientific Officer of GrAI Matter Labs. Prior to this, he was the Director of the MARCS Institute for Brain, Behaviour and Development at the University of Western Sydney, and has held positions at Dean and Head of Department levels in multiple universities. His research covers neuromorphic engineering and bioinspired sensors, and he has authored over 160 papers and a dozen patents.

Schedule subject to change without notice.