tinyML Talks: Suitability of Forward-Forward and PEPITA Learning to MLCommons-Tiny benchmarks

On-device learning is challenged by the restricted memory and computation budgets imposed by deployment on tiny devices. Current training algorithms are based on backpropagation, which requires storing the intermediate activations in memory to compute the backward pass and update the weights.
Recently, "forward-only algorithms" have been proposed as biologically plausible alternatives to backpropagation. At the same time, they remove the need to store the intermediate activations, which can lower the power consumption due to memory read and write operations, opening opportunities for further savings. This talk quantitatively investigates the improvements in complexity and memory usage brought by the PEPITA and Forward-Forward approaches with respect to backpropagation, using the MLCommons-Tiny benchmark suite as case studies. It was observed that the reduction in activation memory provided by forward-only algorithms does not affect total RAM in fully connected networks. Convolutional neural networks, on the other hand, benefit the most from this reduction because of their lower parameters-to-activations ratio. For the latter, a memory-efficient version of PEPITA reduces total RAM by one third on average with respect to backpropagation, while introducing only one third more computational complexity.
Forward-Forward brings the average memory reduction to 40%, but it involves additional computation at inference that, depending on the benchmark, can be costly on microcontrollers.
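
To make the memory argument concrete, the following is a minimal NumPy sketch of a single Forward-Forward layer update, written for this announcement rather than taken from the work presented in the talk. It assumes a ReLU layer trained with Hinton-style "goodness" (sum of squared activations) on positive and negative batches; the layer sizes, learning rate, and threshold theta are illustrative choices. The point of the sketch is that the update uses only the layer's own forward activations, so no cross-layer activation storage is needed, unlike backpropagation.

# Minimal, hypothetical sketch (not the talk's implementation) of one
# Forward-Forward layer update in NumPy. Each layer learns locally from
# two forward passes (positive and negative data); no activations from
# other layers are stored, in contrast to backpropagation.
import numpy as np

rng = np.random.default_rng(0)

def ff_layer_update(W, x_pos, x_neg, lr=0.03, theta=2.0):
    """Push the 'goodness' (sum of squared ReLU activations) above theta
    for positive samples and below theta for negative samples."""
    for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
        # Length-normalize the input so only its direction carries information.
        x_n = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        h = np.maximum(0.0, x_n @ W.T)            # single forward pass
        goodness = np.sum(h ** 2, axis=1)         # per-sample goodness
        p = 1.0 / (1.0 + np.exp(-sign * (goodness - theta)))
        # Gradient of the local logistic loss w.r.t. W (ReLU already masks h).
        grad_h = (sign * (1.0 - p))[:, None] * 2.0 * h
        W += lr * grad_h.T @ x_n / x.shape[0]
    return W

# Toy usage: 64-dimensional inputs, 32 hidden units, random positive/negative batches.
W = rng.normal(scale=0.1, size=(32, 64))
x_pos = rng.normal(size=(16, 64))
x_neg = rng.normal(size=(16, 64))
W = ff_layer_update(W, x_pos, x_neg)

At inference, Forward-Forward typically requires either an extra classification head or one forward pass per candidate label, which is the source of the additional inference cost mentioned above.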

Date

September 19, 2023

Location

Virtual

Schedule

Timezone: PDT

Suitability of Forward-Forward and PEPITA Learning to MLCommons-Tiny benchmarks

Danilo PAU, Technical Director, IEEE & ST Fellow, System Research and Applications

ST Microelectronics


Danilo PAU (h-index 25, i10-index 67) graduated in 1992 from Politecnico di Milano, Italy. One year before graduating, he joined SGS-THOMSON (now STMicroelectronics) as an intern on Advanced Multimedia Architectures, working on memory-reduced HDMAC hardware design and then on MPEG-2 video memory reduction. He subsequently worked on video coding, transcoding, embedded 2D/3D graphics, and computer vision. Currently, his work focuses on developing solutions for tiny machine learning tools.
Danilo has been an IEEE Fellow since 2019; he served as Industry Ambassador coordinator for IEEE Region 8 South Europe, was vice-chairman of the "Intelligent Cyber-Physical Systems" Task Force within the IEEE Computational Intelligence Society, and was an IEEE R8 AfI member in charge of the internship initiative. Today he is a member of the Machine Learning, Deep Learning and AI in the CE (MDA) Technical Stream Committee of CESoc. He was an Associate Editor of IEEE TNNLS.
He wrote the IEEE Milestone "Multiple Silicon Technologies on a Chip, 1985", ratified by the IEEE Board of Directors in 2021, and the IEEE Milestone "MPEG Multimedia Integrated Circuits, 1984-1993", ratified in 2022. He served as a TPC member for the tinyML EMEA Forum and chairs the tinyML On-Device Learning working group. He serves on the 2023 IEEE Computer Society Fellow Evaluating Committee.
With over 83 patents, 150 publications, 113 authored MPEG documents, and 67 invited talks and seminars at various universities and conferences, Danilo's favorite activity remains supervising undergraduate students, MSc engineers, and PhD candidates.

Schedule subject to change without notice.