tinyML Talks: Towards Software-Defined Imaging: Adaptive Video Subsampling for Energy-Efficient Object Tracking & The Akida Neural Processor: Low Power CNN Inference and Learning at the Edge

Date

September 1, 2020

Location

Virtual


Schedule

Timezone: PDT

Towards Software-Defined Imaging: Adaptive Video Subsampling for Energy-Efficient Object Tracking

Suren JAYASURIYA, Assistant Professor

Arizona State University

CMOS image sensors have become increasingly computational, offering region-of-interest (ROI) readout, high dynamic range (HDR) functionality, and burst photography capabilities. Software-defined imaging is the emerging paradigm, mirroring earlier advances in software-defined radio, in which image sensors are increasingly programmable and configurable to meet application-specific needs. In this talk, we present a suite of software-defined imaging algorithms that leverage CMOS sensors’ ROI capabilities for energy-efficient object tracking. In particular, we discuss how adaptive video subsampling can learn to jointly track objects and subsample future image frames in an online fashion. We present software results as well as FPGA-accelerated algorithms that achieve video-rate latency. Further, we highlight emerging work on using deep reinforcement learning to perform adaptive video subsampling during object tracking. All of this work points toward the software-hardware co-design of intelligent image sensors in the future.
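
To make the subsampling idea concrete, here is a minimal sketch in Python (using NumPy) of ROI-driven readout for tracking. The simple brightness-threshold tracker, the margin parameter, and all function names are illustrative assumptions, not the speaker's algorithm; the point is only that each new frame is read out solely inside an ROI predicted from the previous estimate, with the fraction of pixels read serving as a rough proxy for readout energy.

# Illustrative sketch of adaptive ROI subsampling for object tracking
# (assumed names and toy tracker; not the speaker's actual method).
import numpy as np

def expand_roi(box, margin, height, width):
    # Grow an (x, y, w, h) box by `margin` pixels, clipped to the frame bounds.
    x, y, w, h = box
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1, y1 = min(width, x + w + margin), min(height, y + h + margin)
    return x0, y0, x1 - x0, y1 - y0

def track_in_roi(roi, roi_box, thresh=128):
    # Toy stand-in for a real tracker: bounding box of bright pixels in the ROI.
    ys, xs = np.nonzero(roi > thresh)
    if xs.size == 0:
        return roi_box  # target lost: fall back to the previous ROI
    x0, y0, _, _ = roi_box
    return (x0 + int(xs.min()), y0 + int(ys.min()),
            int(xs.ptp()) + 1, int(ys.ptp()) + 1)

def adaptive_subsample_track(frames, init_box, margin=16):
    # For each grayscale frame, read out only the ROI predicted from the last estimate.
    height, width = frames[0].shape
    box, results = init_box, []
    for frame in frames:
        rx, ry, rw, rh = expand_roi(box, margin, height, width)
        roi = frame[ry:ry + rh, rx:rx + rw]            # pixels actually read from the sensor
        box = track_in_roi(roi, (rx, ry, rw, rh))
        results.append((box, rw * rh / (height * width)))  # estimate + fraction of pixels read
    return results

In a real system the loop above would drive the sensor's ROI readout registers and a learned tracker rather than slicing full frames in software; the per-frame pixel fraction is only a stand-in for the energy savings discussed in the talk.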

Suren JAYASURIYA, Assistant Professor

Arizona State University

Suren Jayasuriya is an assistant professor with a joint appointment between the departments of Arts, Media and Engineering (AME) and Electrical, Computer, and Energy Engineering (ECEE). Before that, he was a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University. He received his doctorate in 2017 from the ECE Department at Cornell University, and bachelor’s degrees in mathematics and philosophy from the University of Pittsburgh in 2012. His research focuses on designing new types of computational cameras, systems, and visual computing algorithms that can extract and understand more information from the world around us.

The Akida Neural Processor: Low Power CNN Inference and Learning at the Edge

Kristofor CARLSON, Manager of Applied Research

BrainChip Inc.

The Akida event-based neural processor is a high-performance, low-power SoC targeting edge applications. In this session, we discuss the key distinguishing features of Akida’s computing architecture, which include aggressive 1- to 4-bit weight and activation quantization, event-based implementation of machine-learning operations, and the distribution of computation across many small neural processing units (NPUs). We show how these architectural choices result in a 50% reduction in MACs, parameter memory usage, and peak bandwidth requirements compared with non-event-based 8-bit machine learning accelerators. Finally, we describe how Akida performs on-chip learning with a proprietary bio-inspired learning algorithm. We present state-of-the-art few-shot learning results in both the visual (MobileNet on mini-ImageNet) and auditory (6-layer CNN on Google Speech Commands) domains.
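
As a rough illustration of the low-bit quantization mentioned above, the sketch below applies generic symmetric uniform quantization to a small tensor at a chosen bit width. This is a textbook scheme with invented function names; it is not BrainChip's proprietary quantization, and true 1-bit (binary) weights would use a sign-based mapping instead.

# Generic symmetric k-bit quantization (illustrative only; not Akida's scheme).
import numpy as np

def quantize_symmetric(x, bits=4):
    # Map floats to signed integers in [-(2**(bits-1) - 1), 2**(bits-1) - 1].
    # Valid for 2- to 8-bit; 1-bit (binary) weights need a sign-based scheme.
    qmax = 2 ** (bits - 1) - 1                          # e.g. 7 for 4-bit
    scale = max(float(np.max(np.abs(x))) / qmax, 1e-12) # per-tensor scale
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximate float tensor from integer codes and the scale.
    return q.astype(np.float32) * scale

# Example: quantize a random weight tensor to 4 bits and measure the error.
w = np.random.randn(8, 8).astype(np.float32)
q, s = quantize_symmetric(w, bits=4)
print("max abs reconstruction error:", np.max(np.abs(w - dequantize(q, s))))

Storing 4-bit integer codes plus one scale per tensor is what yields the memory and bandwidth reductions relative to 8-bit accelerators that the session compares against.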

Kristofor CARLSON, Manager of Applied Research

BrainChip Inc.

Kris Carlson is Manager of Applied Research at BrainChip Inc., a company that develops both hardware and software neuromorphic computing solutions. Previously, he was a postdoctoral scholar in Jeff Krichmar’s cognitive robotics laboratory at UC Irvine, where he studied unsupervised learning rules in spiking neural networks (SNNs), the application of evolutionary algorithms to SNNs, and neuromorphic computing. He then worked as a postdoctoral appointee at Sandia National Laboratories, where he studied uncertainty quantification in computational neural models and helped develop neuromorphic systems. In his current role, he develops, modifies, and optimizes neural and machine learning algorithms for deployment on BrainChip’s latest neuromorphic system-on-chip, Akida.

Schedule subject to change without notice.