
The tinyML Summit 2020 speaker slides and posters are now online. For the slides, go to the Program and scroll down to February 12; if there is an icon below a presentation title and speaker name, click on it for the slides. Some slides have not yet been approved for release by the speakers, so please check back periodically. For the posters, go to the Poster Session and look for the icon under the poster title and presenter name.

tinyML Summit 2020

Enabling ultra-low Power Machine Learning at the Edge

February 12-13, 2020

About the tinyML™ Summit

Following the success of the inaugural tinyML Summit 2019, the tinyML committee invites low-power machine learning experts from industry, academia, start-ups, and government labs around the globe to join the tinyML Summit 2020, to share the “latest & greatest” in the field and to collectively drive the whole ecosystem forward.

Tiny machine learning is broadly defined as a fast-growing field of machine learning technologies and applications, including hardware (dedicated integrated circuits), algorithms, and software, capable of performing on-device sensor (vision, audio, IMU, biomedical, etc.) data analytics at extremely low power, typically in the mW range and below, thereby enabling a variety of always-on use cases on battery-operated devices. The inaugural tinyML Summit in March 2019 showed very strong interest from the community, with active participation of senior experts from 90 companies. It revealed that: (i) hardware capable of tiny machine learning is becoming “good enough” for many commercial applications, and new architectures (e.g. in-memory compute) are on the horizon; (ii) significant progress has been made on algorithms, networks, and models down to 100 kB and below; and (iii) initial low-power applications have emerged in the vision and audio space. There is growing momentum, demonstrated by technical progress and ecosystem development.

tinyML Summit 2020 will continue the tradition of high-quality invited talks, poster and demo presentations, open and stimulating discussions, and significant networking opportunities. It will cover the whole stack of technologies (Systems-Hardware-Algorithms-Software-Applications) at a deep technical level, a unique feature of the tinyML Summits. While the majority of participants and speakers will come from industry, leading-edge academic research will also be represented, as an important ingredient of the evolving tiny machine learning ecosystem. In 2020, special attention will be given to recent progress on algorithm development and on tiny machine learning use cases and applications. The program will be organized into four technical sessions: Hardware, Systems, Algorithms & Software, and Applications. There will be approximately twenty invited presentations selected by the Technical Program Committee, plus dedicated poster sessions and demos by tiny machine learning companies and sponsors. Overview and hands-on tutorials on hardware and software developments will be available the day before the main technical program starts. Registration will open in October 2019.

2020 tinyML Summit Participants

Program

February 11 (Tuesday) Location: Qualcomm, 3195 Kifer Road, Building B, Santa Clara, CA

Tutorials

  • 7:30 am – 8:30 am

    Registration/breakfast
  • 8:30 am – 10:00 am

    NVIDIA Deep Learning Accelerator (NVDLA)

    Led by: Frans Sijstermans, Deep Learning Software Manager, NVIDIA
    Mitch Harwell, Vice President, Multimedia Arch/ASIC, NVIDIA
    Robin Paul Prakash, Senior System Architect, NVIDIA

    Designing new custom hardware accelerators for deep learning is clearly popular, but achieving state-of-the-art performance and efficiency with a new design is a complex and challenging problem. Innovation is required in both the HW and SW domains, and this workshop will include topics from both.
    This workshop will cover NVDLA HW's design and methodology, leveraging domain-specific concepts to help achieve performance scalability as well as best-in-class computational efficiency. We will also be covering deep learning compiler concepts used to help convert NVDLA's raw performance into accessible performance. By the completion of this workshop, attendees will be able to deploy their own NVDLA in the cloud and execute real-time inference with NVDLA's open-source SW toolchain.

  • 10:00 am – 10:30 am

    Break
  • 10:30 am – 12:00 pm

    Algorithmic and SW Techniques for designing and implementing energy efficient CNNs

    Led by: Jinwon Lee, Senior Staff Engineer at Qualcomm AI Research

    There is increasing demand to deploy diverse deep learning use cases (e.g., vision, speech, and NLP) on edge devices. However, deploying complex DL models on resource-constrained edge devices (e.g., mobile phones and IoT devices) is intrinsically challenging due to their tight memory and computation budgets. To address this, diverse quantization and compression methods have been proposed and are widely used for model efficiency. In this tutorial, we will give a high-level overview of quantization and compression methods in the literature and show how they are used in the wild. The first part of the talk will cover quantization techniques for deep learning, including quantization-aware training and data-free methods. The second part will summarize diverse compression approaches, including unstructured and structured weight pruning. In addition, we will introduce recent research efforts to enable efficient deep learning end-to-end through hardware-aware optimization. Finally, we will briefly introduce the Qualcomm AI Model Efficiency Toolkit (AIMET) for practical use in the wild.

  • 12:00 pm – 1:00 pm

    Lunch
  • 1:00 pm – 2:30 pm

    SW frameworks for tinyML: TF-Lite

    Led by: Pete Warden, Technical lead of the TensorFlow mobile and embedded team, Google

    This workshop will show you how to run a magic wand and other machine learning examples in the TensorFlow Lite for Microcontrollers framework.

  • 3:00 pm – 4:30 pm

    Enabling Intelligent edge devices with ultra low-power Arm MCUs and TensorFlow Lite

    Led by: Wei Xiao, Principal Evangelist, Arm AI Ecosystems

    Advances in processing power and machine learning algorithms enable us to run machine learning models on tiny far-edge devices. Arm’s latest improvements in SIMD and DSP extensions, as well as our collaboration with the Google TensorFlow Lite team, are pushing machine smarts to our tiniest microcontrollers used in intelligent wireless sensors.

    In this hands-on workshop, attendees will build a machine learning application with TensorFlow Lite Micro on Arm Cortex-M devices, then optimize their solution to unleash the unparalleled power of Arm microcontrollers.

  • Register for Tutorials here
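The quantization and pruning techniques covered in the tutorials above can be sketched in a few lines. The snippet below is an illustrative, hand-rolled example only, not code from any tutorial: it shows per-tensor symmetric 8-bit post-training weight quantization and unstructured magnitude pruning on a random weight matrix. All function names are hypothetical; real deployments would use a framework toolchain such as TensorFlow Lite or AIMET.

```python
# Sketch of two model-efficiency techniques from the tutorial blurbs:
# symmetric int8 post-training quantization and magnitude pruning.
import numpy as np

def quantize_int8(w):
    """Map float weights onto int8 with a per-tensor symmetric scale."""
    max_abs = float(np.abs(w).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return q.astype(np.float32) * scale

def prune_by_magnitude(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64)).astype(np.float32)

    # Quantization error is bounded by half the scale step.
    q, s = quantize_int8(w)
    print(f"max quantization error: {np.abs(dequantize(q, s) - w).max():.4f}")

    # Pruning to 75% sparsity keeps only the largest-magnitude quarter.
    wp = prune_by_magnitude(w, sparsity=0.75)
    print(f"sparsity after pruning: {(wp == 0).mean():.2f}")
```

In practice, quantization-aware training and structured pruning (as discussed in the Qualcomm tutorial) recover most of the accuracy that these naive post-hoc transforms lose.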

February 11 (Tuesday) evening Location: San Jose Capital Club, 50 W San Fernando Road, Suite 1700 (17th Floor), San Jose, CA 95113

  • 6 pm - 9 pm

    VIP Reception For Summit Speakers, Panelists, Tutorial Instructors, Sponsors and Committee Members

February 12 (Wednesday)
Hyatt Regency San Francisco Airport 1333 Bayshore Highway, Burlingame, CA 94010

February 13 (Thursday)
Hyatt Regency San Francisco Airport 1333 Bayshore Highway, Burlingame, CA 94010

Committee

General Chairs:

Technical Program Committee:

Speakers

Tutorial Leaders

Panels

Leaders:

Panelists:

2020 tinyML Summit Sponsors

Premier

tinyML Executive and Founders

Platinum Sponsors

Gold Sponsors

Silver Sponsors

Supporting Sponsors

Poster Session

Listed in alphabetical order by presenter

tinyML Perf: Expanding the MLPerf Inference benchmark to microcontrollers and tiny devices. Colby Banbury, Max Lam, and Vijay Janapa Reddi, Harvard University, Cambridge, MA

Engineering tinyML models in sound recognition. An analysis of an entire specialist pipeline, from data collection to deployment. Dominic Binks, VP of Technology, Audio Analytic

TSM: Temporal Shift Module for Efficient Video Understanding. Han Cai, MIT

High-Efficiency Neural Network Inference using DesignWare ARC EMxD Processors & TensorFlow Lite for Microcontrollers. Jamie Campbell, Synopsys, Inc.

End-to-End Sound Classification On Loihi Neuromorphic Chip. Mohammad Ebrahimpour, Graduate Researcher, UC Merced

SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers. Igor Fedorov (Arm ML Research), Ryan P. Adams (Princeton University), Matthew Mattina (Arm ML Research), and Paul N. Whatmough (Arm ML Research)

Bio-Inspired Edge Learning on the Akida Event-Based Neural Processor. Sasskia Brüers, Kristofor D. Carlson, Marco Cheng, Sébastien Crouzet, Mahendran Devarajlu, Hussein Makki, Douglas McLelland, Nicolas Oros, Charles Wilson, and Kenneth Wu

Highlights of architecture optimizations for ultra-low power inference on larger CNNs. Eric Flamand, CTO, GreenWaves Technologies

Novel method for Ultra-Low-Footprint Keyword-Spotting. Adam Fuks and Frans Widdershoven, NXP Semiconductors

Aggressive Compression of MobileNets Using Hybrid Ternary Layers. Dibakar Gope, Senior Research Engineer, Machine Learning & AI, Arm

Once for All: Train One Network and Specialize it for Efficient Deployment. Song Han, Assistant Professor, MIT EECS

Adaptive Video Sampling for Energy-Efficient Object Detection. Dr. Suren Jayasuriya, Assistant Professor, Arizona State University

Benchmarking and improving NN execution on DSP vs. custom accelerator for hearing instruments. Zuzana Jelčicová, Demant

Precision Reconfigurable Digital Compute-In-Memory for Embedded Neural Network Processing. Bongjin Kim (Assistant Professor) and Hyunjoon Kim (Ph.D. Student), Nanyang Technological University, Singapore

Fast neural network inference on xcore.ai. Laszlo Kindrat and Andrew Cavanaugh, XMOS Inc. Poster to be uploaded later in 2020.

A weight-averaging approach to speeding up model training on resource-constrained devices. Unmesh Kurup, Samarth Tripathi, Jiayi (Jason) Liu, and Mohak Shah, LG Electronics

A scalable, configurable neural network accelerator supporting on-device training. Jaehwa Kwak, Senior Staff Engineer, LG Electronics, USA

TSM: Temporal Shift Module for Efficient Video Understanding. Ji Lin, MIT

Low-Power Computer Vision Competition. Yung-Hsiang Lu, Professor, School of Electrical and Computer Engineering, Purdue University

Low Power Embedded Gesture Recognition Using Novel Short-Range Radar Sensors. Michele Magno, Senior Researcher, ETH Zurich

MediaPipe: Building real-time cross-platform (mobile, web, edge, desktop) video/audio ML pipelines. Chris McClanahan, Google

Vau da Muntanialas: An Energy-Efficient Systolic Array of LSTM Accelerators. Gianna Paulin, Ph.D. Student, ETH Zurich, Switzerland

Enabling Computer Vision and AI on the edge with milliWatts. Venkat Rangan, Founder, tinyVision.ai Inc.

Bio-inspired analog architecture for ultra-low power always-on sensing. Brandon Rumberg, CTO and Founder, Aspinity

Extended Bit-Plane Compression: Alleviating the Costs of Data Transfer for Edge AI. Georg Rutishauser, Ph.D. Student, Integrated Systems Lab, ETH Zurich

Pushing the Limits of Ultra-low Power Computer Vision for tinyML Applications. Ravishankar Sivalingam, Edwin Park, and Evgeni Gousev, Qualcomm Artificial Intelligence (AI) Research, Qualcomm Technologies, Inc.

TinyML and Novel AI Workflow Enables Smarter Wireless Low Power Sensors Managed and Deployed at Large Scale at the Far Edge. Mark Stubbs, Co-Founder and Principal Architect, Shoreline IoT Inc.

Improving accuracy of neural networks compressed using fixed structures via doping. Urmish Thakker, Ganesh Dasika, Paul Whatmough, Matthew Mattina, and Jesse Beu, Arm ML Research Lab

tinyEOD: Small Deep Neural Networks and Beyond for Embedded Vision Applications. Christos Kyrkou and Theocharis Theocharides, KIOS Research and Innovation Center of Excellence and Department of Electrical and Computer Engineering, University of Cyprus

SNNs with analog neurons and RRAM synapses. Alexandre Valentian, Ph.D., Head of Advanced Technologies and Systems-on-chip Laboratory, LETI

Demos

Listed in alphabetical order by company

Smart vision-based presence detection at mW power. Dylan Muir and Sadique Sheik, Senior R&D Engineers, aiCTX AG

Managing the end-to-end ML lifecycle with Arm technologies. Wei Xiao, Principal AI Ecosystem Evangelist, Arm

BabbleLabs #1 - Deep learning-based Command Recognition. Chris Rowen, CEO, BabbleLabs, Inc.

BabbleLabs #2 - Speech Enhancement Portfolio Showcase. Chris Rowen, CEO, BabbleLabs, Inc.

On-Chip Learning with Akida. Chris Anastasi, Senior Field Applications Engineer, BrainChip Inc.

Ultra-low power keyword spotting. Moshe Haiut, NN HW Architect, and Niv Peled, NN SW Engineering Manager, DSP Group

Bringing AI to the Edge. Elad Baram, VP Products, Emza Visual Sense

Image Recognition on 750 microamps and 100 ms Inference Time. Gopal Raghavan, Eta Compute

TensorFlow Lite for Microcontrollers. Daniel Situnayake, Developer Advocacy Lead for TensorFlow Lite, Google

GrAI One – A Hybrid Neuromorphic and Dataflow Processor. Jonathan Tapson, CSO, GrAI Matter Labs

Flexible, Ultra-Low Power On-Device AI. Hoon Choi and Hussein Osman, Lattice Semiconductor

Spiking neural networks enabling massively parallel, low-power & low-latency computation. Alexandre Valentian, Ph.D., Head of Advanced Technologies and Systems-on-chip Laboratory, LETI

Online Hand Gesture Recognition with Temporal Shift Module (TSM). Han Cai, MIT

Highly Optimized AI Solutions for Microcontrollers. Kyeongryeol Bong, CTO, NALBI Inc.

OpenM1 project demo. Yi-Lin Tung, General Manager, On-Device AI Co., Ltd.

Low Power 720p Global Shutter Sensor with Smart Motion Detect. Charles Chong, Director of Strategic Marketing, and Jason Lin, VP of Engineering, PixArt Imaging Inc.

Low power high FPS object detection. Ravi Sivalingam, Senior Staff Engineer, Qualcomm QTI Inc.

SensiML: Enabling Predictive Maintenance at the Sensor. Chris Knorowski, CTO, SensiML

Always-On Artificial Intelligence for Battery-Powered Devices. Mallik P. Motur, Vice President, Product and Business Development, and Dave Garrett, Vice President of Hardware, Syntiant Corp.

Always-On Voice powered by custom AI Silicon. Mallik P. Motur, Vice President, Product and Business Development, and Dave Garrett, Vice President of Hardware, Syntiant Corp.

Others to be announced

Venue & Accommodations

Hyatt Regency San Francisco Airport

1333 Bayshore Highway
Burlingame, CA 94010


Accommodations

The Hyatt Regency is holding a block of rooms for attendees at a discounted nightly rate of $189 + tax. Book Now.

NOTE: Although the link will show a range of dates, only the nights of February 11 and 12 carry the tinyML rate. If you wish to stay a day or two before or after, you must book those nights separately through the hotel’s general reservation page. The prices for dates outside the 11th and 12th are not much higher.

News

Can artificial intelligence give elephants a winning edge?

Open-source developers and tech giants created the world's most advanced elephant tracking collars.

“Sara Olsson, a Swedish software engineer who has a passion for the natural world, created a tinyML and IoT monitoring dashboard.”

read full TechCrunch article

tinyML book written by Pete Warden and Daniel Situnayake of Google

Neural networks are getting smaller. Much smaller. The OK Google team, for example, has run machine learning models that are just 14 kilobytes in size—small enough to work on the digital signal processor in an Android phone. With this practical book, you’ll learn about TensorFlow Lite for Microcontrollers, a minuscule machine learning library that allows you to run machine learning algorithms on tiny hardware.

read full description

Stanford University Seminar

Evgeni Gousev of Qualcomm and Pete Warden of Google participated in a panel at Stanford University seminar "Current Status of tinyML and the Enormous Opportunities Ahead".

read full article

AI at the Very, Very Edge (EE Times)

When the TinyML group recently convened its inaugural meeting, members had to tackle a number of fundamental questions, starting with: What is TinyML? TinyML is a community of engineers focused on how best to implement machine learning (ML) in ultra-low power systems. The first of their monthly meetings was dedicated to defining the issue.

read full article

TinyML Sees Big Hopes for Small AI (EE Times)

SUNNYVALE, Calif. – A group of nearly 200 engineers and researchers gathered here to discuss forming a community to cultivate deep learning in ultra-low power systems, a field they call TinyML. In presentations and dialogs, they openly struggled to get a handle on a still immature branch of tech’s fastest-moving area in hopes of enabling a new class of systems.

read full article

Meetups

We currently have two tinyML meetup.com groups, both called “tinyML: Enabling ultra-low power ML at the Edge”: one in the Bay Area and one in Austin, TX.

The meetups are designed to be informal gatherings of people interested in various aspects of tinyML technologies, a great networking opportunity, and a good way to grow the tinyML community.

2019 Meetups

December 2 Bay Area Meetup

November 20 Austin Meetup

October 28 Bay Area Meetup

September 26 Meetups

There were two meetups on September 26, one in the Bay Area and the other in Austin, TX. Links to the slides are below, along with a video link for the Bay Area meeting.

Bay Area

Austin

July 25 Meetup

Held at Qualcomm in Santa Clara and attended by over 100 people, this meetup featured two speakers – click on a title for slides:

June 27 Meetup

Our first meetup was held on June 27 at Qualcomm and was a great success. Over 130 people were in attendance. Below are the presentations and a video file:

Visit our Meetup.com event page for photos of the event.

tinyML meetups will be held on the last Thursday of each month. Visit our Meetup page for updates.

Meetup Committee

Contact Us

Bette Cooper
tinyML Summit Organizer
650-714-1570
bette@tinyml.org