Enabling Ultra-Low Power Machine Learning at the Edge
November 16-19, 2020 | Online
Machine learning (ML) is at the forefront of bringing artificial intelligence to all aspects of computing. It is the technology powering many of today’s advanced applications, from image recognition to voice interfaces to self-driving vehicles and beyond. Many of these initial ML applications require significant computational resources, most often found in cloud-scale data centers. To enable broad industry usage and adoption, it is therefore necessary to significantly reduce the power consumed, bringing applications to end devices at the cloud edge (smartphones, wearables, vehicles, IoT devices, etc.) and reducing the load on data center resources.
tinyML Technical Forum Asia 2020 will be the first tinyML “regional” event, held online on November 16-19, 2020 from 9:00 to 11:30 am (China Standard Time, UTC+8) each day. The workshop will focus on applications, end users, and the supply chain for tinyML from both a global and an Asian perspective. Unlike other large industry and academic events, which lack a focus on low-power ML solutions, tinyML events cover the entire ecosystem, bringing industry and academia together.
Chinese Academy of Sciences
Director, Institute of Microelectronics, Tsinghua University
Shanghai Jiao Tong University
Institute of Electronics, National Chiao Tung University
National University of Singapore
Institute of Microelectronics, Tsinghua University
Shanghai Jiao Tong University
Arm technology is at the heart of a computing and data revolution that is transforming the way people live and businesses operate. Our energy-efficient processor designs and software platforms have enabled advanced computing in more than 180 billion chips and our technologies securely power products from the sensor to the smartphone and the supercomputer. Together with 1,000+ technology partners we are at the forefront of designing, securing and managing all areas of AI-enhanced connected compute from the chip to the cloud.
EdgeCortix’s Dynamic Neural Accelerator™ architecture and industry-first AI hardware and software co-exploration engine enable adaptation of neural inference co-processor designs in an application-specific manner, bringing cloud-level performance to resource-constrained edge devices for ultra-low latency, low-cost, and energy-efficient deep neural network inference. EdgeCortix offers its proprietary DNA neural inference co-processor IP, co-exploration software, and compiler, designed to deliver from 6 TOPS to greater than 100 TOPS within a small power budget as a full SoC, while avoiding ultra-low bit-width quantization. DNA is designed to be hyper-scalable, with flexible programming starting from high-level frameworks such as TensorFlow and PyTorch.
SynSense builds ultra-low-power (sub-mW) sensing and inference hardware for embedded, mobile, and edge devices. We design systems for real-time, always-on smart sensing for audio, vision, IMUs, bio-signals, and more. SynSense provides low-power ASICs for machine learning inference and signal processing, based on asynchronous event-driven computation. We specialise in mixed-signal and asynchronous digital designs, and provide application development services to support our hardware design and customisation work. Our unique technological edge and IP portfolio come from over 20 years of experience in mixed-signal neural processor design, advanced neural routing architectures, and neural algorithms.