tinyML Talks: ScaleDown Study Group: Optimisation Techniques: Knowledge Distillation

Knowledge Distillation is the process of compressing knowledge from a larger model (the teacher) into a smaller model (the student). The student model is trained on the predictions of the teacher model, which means it can even be trained with unlabelled data, using the teacher to generate the labels!
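To make the idea concrete, here is a minimal PyTorch-style sketch of a student being trained purely on a teacher's predictions. The tiny models and random batches below are placeholders for illustration, not material from the session.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for a large teacher and a small student (illustration only)
teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
teacher.eval()  # the teacher is frozen; in practice it is already trained

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(64, 32)              # an unlabelled batch: no ground truth needed
    with torch.no_grad():
        teacher_logits = teacher(x)      # the teacher generates the "labels"
    student_logits = student(x)
    # Train the student to match the teacher's soft predictions
    loss = F.kl_div(F.log_softmax(student_logits, dim=1),
                    F.softmax(teacher_logits, dim=1),
                    reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()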
Join us on 19th August at 8:30 pm SGT to learn about Knowledge Distillation and try your hand at testing KD at the edge.

In this session, we will cover:
1. Introduction to Knowledge Distillation
2. Implementation of KD using TensorFlow and PyTorch (see the sketch after this list)
3. Using ScaleDown for the KD optimisation technique
4. Testing KD on an embedded device
5. Resources and Research Papers
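As a taste of point 2, the distillation loss commonly used in such implementations (temperature-scaled soft targets blended with the usual hard-label loss, as in Hinton et al.) can be sketched in PyTorch roughly as follows. This is a generic, assumed formulation rather than the session's exact code; a higher temperature spreads the teacher's probability mass across classes, exposing more of its "dark knowledge" to the student.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    # Soft-target term: match the teacher's temperature-softened distribution
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)   # rescale so gradients stay comparable across temperatures
    # Hard-target term: ordinary cross-entropy against true labels (when available)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard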

Date

August 29, 2022

Location

Virtual

Schedule

Timezone: PDT

ScaleDown Study Group: Optimisation Techniques: Knowledge Distillation

Soham CHATTERJEE, ML engineer

Sleek Tech

Soham is a machine learning engineer at Sleek Tech, Singapore. Previously, he was a research master's student at NTU, where he researched combining edge computing techniques with neuromorphic hardware to build optimized microcontrollers. He is also the instructor for the Udacity Nanodegree "Intel Edge AI for IoT Developers", where he taught how to optimize models for edge computing applications. Soham's passion for TinyML and MLOps led him to combine the two and build tools and techniques for deploying TinyML models efficiently and easily, including ScaleDown, where he is a core developer. He is also the instructor for Udacity's "Machine Learning Engineer with Microsoft Azure" and "AWS Machine Learning" Nanodegrees.

Schedule subject to change without notice.