Have you ever wondered what makes neural networks such as MobileNetV3, FBNet, and BlazeFace so special? These networks run on everyday devices like our phones and TVs.
The central challenge they address, and the focus of this talk, is designing efficient neural networks for low-power devices.
This lecture will cover the following topics:
– the rationale behind the design of layers such as the Fire module and Squeeze-and-Excitation;
– methods for choosing the number of parameters and the width and depth of a network's layers, such as the EfficientNet and Model Rubik's Cube scaling algorithms;
– state-of-the-art (SOTA) techniques for accelerating and optimizing individual layers of a neural network.
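As a taste of the first topic, here is a minimal NumPy sketch of the Squeeze-and-Excitation idea: globally average-pool each channel ("squeeze"), pass the result through a small bottleneck MLP ("excitation"), and use the resulting sigmoid gates to reweight the channels. The function name and the explicit weight arguments are illustrative choices, not code from the talk.

```python
import numpy as np

def squeeze_excite(feature_map, w1, b1, w2, b2):
    """Apply a Squeeze-and-Excitation gate to a (C, H, W) feature map.

    w1/b1 and w2/b2 are the weights of the two-layer bottleneck MLP;
    w1 has shape (C // r, C) and w2 has shape (C, C // r) for some
    reduction ratio r (hypothetical names for this sketch).
    """
    # Squeeze: global average pooling collapses each channel to one scalar.
    z = feature_map.mean(axis=(1, 2))                # shape (C,)
    # Excitation: bottleneck MLP with ReLU, then sigmoid per-channel gates.
    s = np.maximum(0.0, w1 @ z + b1)                 # shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))         # shape (C,)
    # Scale: reweight each channel of the input by its gate value.
    return feature_map * s[:, None, None]
```

The appeal for lightweight models is that the gate costs only two tiny matrix-vector products per forward pass, yet lets the network emphasize informative channels.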
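The second topic can likewise be previewed with a few lines of arithmetic. EfficientNet's compound scaling rule grows depth, width, and input resolution together by powers of fixed base coefficients (α = 1.2, β = 1.1, γ = 1.15 in the original paper, chosen so that α·β²·γ² ≈ 2, i.e. each increment of the compound coefficient φ roughly doubles the FLOPS budget). The function below is a sketch of that rule; its name and signature are assumptions for illustration.

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """Return (depth, width, resolution) multipliers for compound coefficient phi.

    Defaults are the base coefficients reported for EfficientNet; scaling all
    three dimensions together replaces tuning each one independently.
    """
    return alpha ** phi, beta ** phi, gamma ** phi

# phi = 0 corresponds to the baseline network (all multipliers equal to 1);
# larger phi produces the bigger variants of the family.
d, w, r = compound_scale(2)
```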
This session will interest anyone who wants to learn how to build lightweight, efficient neural networks.
Schedule
Timezone: PST
Lightweight Neural Network Architectures
Andrii POLUKHIN, Machine Learning Engineer
Data Science UA
Andrii's main interests include research on deep learning architectures, methods for training deep networks, the intuition and mathematical theorems behind them, and the interaction of AI with the surrounding world.
Schedule subject to change without notice.