tinyML Talks: How to design a power frugal hardware for AI – the bio-inspiration path

Embedded AI is gaining traction for reasons of privacy, latency, safety of operation and energy consumption. Dedicated hardware accelerators must therefore be designed and fabricated, along with the associated learning and quantization strategies. The main challenge to be solved is energy dissipation, and in data-centric applications such as AI, most of the energy is spent moving data. Current accelerators try to leverage weight quantization as well as activation sparsity. We will get insights into what semiconductor technology can bring in that respect.
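
To make the data-movement argument concrete, here is a minimal NumPy sketch (illustrative only, not material from the talk) of the two levers the abstract names: per-tensor int8 weight quantization, which shrinks the bytes that must be fetched for a layer, and ReLU-induced activation sparsity, which lets a zero-skipping datapath drop fetches and multiplications. The layer size, quantization scheme and sparsity level are assumptions; real accelerators typically use per-channel scales and dedicated zero-skipping logic.

```python
# Sketch: weight quantization and activation sparsity in one fully-connected layer.
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal((256, 256)).astype(np.float32)
activations = np.maximum(rng.standard_normal(256).astype(np.float32), 0.0)  # ReLU output

# Per-tensor symmetric int8 quantization of the weights.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# Data movement: int8 weights are 4x smaller than fp32 weights.
print("weight bytes fp32:", weights_fp32.nbytes)
print("weight bytes int8:", weights_int8.nbytes)

# Activation sparsity: ReLU zeroes out roughly half the activations, so a
# zero-skipping accelerator only fetches weights for the remaining inputs.
mask = activations != 0
print(f"activations used: {np.count_nonzero(mask)}/{activations.size}")

# Dequantized matrix-vector product, skipping zero activations; identical to
# the dense fp32 product up to quantization error.
out = (weights_int8[:, mask].astype(np.float32) * scale) @ activations[mask]
ref = weights_fp32 @ activations
print("max abs error vs fp32:", np.abs(out - ref).max())
```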

However, compared to what biology achieves, the current state of the art is still orders of magnitude away in terms of energy efficiency. We will see how brain inspiration translates into circuit and technology specifications. We will explore what spiking neural networks can bring, exploiting novel technologies such as Non-Volatile Memories to increase data locality, as in the brain.
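
As a rough illustration of what "spiking" means computationally, below is a minimal leaky integrate-and-fire (LIF) neuron in NumPy (a sketch under assumed parameter values, not material from the talk). In an event-driven implementation, synaptic weights are only read when an input spike arrives, which is one reason sparse spiking activity translates into less data movement.

```python
# Sketch: a leaky integrate-and-fire neuron driven by sparse input spikes.
import numpy as np

def lif_neuron(input_spikes, weights, leak=0.9, threshold=1.0):
    """Integrate weighted input spikes over time; emit a spike and reset
    the membrane potential when it crosses the threshold."""
    v = 0.0
    output_spikes = []
    for t, spikes in enumerate(input_spikes):   # spikes: binary vector per time step
        v = leak * v + np.dot(weights, spikes)  # weights needed only for active inputs
        if v >= threshold:
            output_spikes.append(t)
            v = 0.0                             # reset after firing
    return output_spikes

rng = np.random.default_rng(1)
T, n_inputs = 50, 16
input_spikes = (rng.random((T, n_inputs)) < 0.1).astype(np.float32)  # ~10% activity
weights = rng.uniform(0.0, 0.4, n_inputs).astype(np.float32)
print("output spike times:", lif_neuron(input_spikes, weights))
```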

Date

September 23, 2021

Location

Virtual

Schedule

Timezone: PDT

How to design a power frugal hardware for AI – the bio-inspiration path

Alexandre VALENTIAN, Research Engineer

CEA

After an MSc and a PhD in microelectronics, Alexandre Valentian joined CEA LETI in 2005. His past research activities included design-technology co-optimization, promoting the FDSOI technology (notably through his participation in the SOI Academy), 2.5D/3D integration technologies and non-volatile memory technology. He is currently pursuing the development of bio-inspired circuits for AI, combining memory technology, information encoding and dedicated learning methods. Since 2020, he has headed the Systems-on-Chip and Advanced Technologies (LSTA) laboratory. Dr Valentian has authored or co-authored 80 conference and journal papers.

Schedule subject to change without notice.