tinyML Talks: Introduction to optimization algorithms for compressing neural networks

Date

November 4, 2020

Location

Virtual

Schedule

Timezone: PST

Introduction to optimization algorithms for compressing neural networks

Marcus RÜB, Data Scientist & Machine Learning Engineer

Hahn-Schickard

Deep neural networks enable state-of-the-art accuracy in visual recognition tasks such as image classification and object recognition. However, modern networks contain millions of learned connections, and the current trend is toward deeper and more densely connected architectures. This poses a challenge for deploying advanced networks on resource-constrained systems such as smartphones or mobile applications. To make neural networks more practical on embedded devices, various techniques exist to compress the models.
This talk presents the most common compression algorithms and explains how they work; among the techniques covered are pruning, quantization, and others.
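
To give a flavor of what two of these techniques do, below is a minimal illustrative sketch, not drawn from the talk material, of magnitude pruning and post-training 8-bit quantization applied to a toy NumPy weight matrix; the function names, sparsity level, and scaling scheme are assumptions for illustration only.

# Illustrative sketch (not from the talk): magnitude pruning and
# 8-bit linear quantization applied to a toy weight matrix.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 8)).astype(np.float32)  # toy layer weights

# Magnitude pruning: zero out the fraction of weights with the smallest absolute value.
def prune_by_magnitude(w: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    threshold = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= threshold
    return w * mask

# Post-training 8-bit quantization: map floats to int8 with a per-tensor scale.
def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale  # int8 values plus the scale needed to dequantize

pruned = prune_by_magnitude(weights, sparsity=0.5)
q, scale = quantize_int8(weights)
print("non-zero weights after pruning:", np.count_nonzero(pruned), "of", weights.size)
print("max dequantization error:", np.abs(q.astype(np.float32) * scale - weights).max())

In practice, pruning reduces the number of stored and computed connections, while quantization shrinks each remaining weight from 32-bit floats to 8-bit integers, both of which matter on memory- and compute-limited embedded hardware.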

Marcus RÜB, Data Scientist & Machine Learning Engineer

Hahn-Schickard

Marcus Rüb studied electrical engineering at Furtwangen University. After completing his bachelor’s degree, he worked as a scientific assistant for AI at Hahn-Schickard while completing his master’s degree. His main interest is embedded AI, which often involves implementing machine learning algorithms on embedded devices and compressing ML models. Furthermore, Marcus is one of the federally funded AI trainers and supports companies in integrating AI into their processes.

Schedule subject to change without notice.