tinyML Talks: Train-by-Weight (TBW): Accelerated Deep Learning by Data Dimensionality Reduction

Date

April 27, 2021

Location

Virtual

Schedule

Timezone: PDT

Train-by-weight (TBW): Accelerated Deep Learning by Data Dimensionality Reduction

Michael JO, Assistant Professor

Rose-Hulman Institute of Technology

Xingheng LIN, Researcher

Rose-Hulman Institute of Technology

State-of-the-art pretrained machine/deep learning (M/DL) models are available in the tinyML community for numerous applications. However, training these models on new objects, and retraining the pretrained models, is computationally expensive.
Our proposed Train-by-Weight (TBW) approach combines a linear classifier, such as principal component analysis (PCA), with a nonlinear classifier, such as a deep learning model. The approach makes two key contributions. First, we perform dimensionality reduction by using the linear classifier to generate weighted data sets. Second, the weighted data sets supply only the essential data to the M/DL model. As a result, we reduced training and verification time by up to 88% in a deep artificial neural network model, with approximately 1% accuracy loss.
The tinyML community may benefit from the proposed approach through faster training of M/DL models, owing to the lower bandwidth of the data. Moreover, the approach may enable energy-efficient hardware/software solutions due to its relatively simple architecture.
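For readers who want a concrete picture of the general pattern described above (reduce the data with a linear method such as PCA, then train a nonlinear model on the reduced representation), here is a minimal sketch using scikit-learn. It illustrates the idea only and is not the speakers' TBW implementation; the data set (scikit-learn's digits), the component count, and the network size are all arbitrary assumptions.

    # Minimal sketch: PCA for dimensionality reduction, then a neural
    # network trained on the reduced data. Illustrative only; this is not
    # the speakers' TBW implementation, and all parameters are arbitrary.
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)  # 8x8 digit images, 64 features each
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Linear step: project onto the top 16 principal components, shrinking
    # the input the network must be trained on from 64 to 16 dimensions.
    pca = PCA(n_components=16).fit(X_train)
    X_train_r = pca.transform(X_train)
    X_test_r = pca.transform(X_test)

    # Nonlinear step: train a small classifier on the reduced data.
    # Training on 16-dimensional inputs is cheaper than on the raw 64.
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_train_r, y_train)
    print("test accuracy:", clf.score(X_test_r, y_test))

The trade-off this sketch exposes, training speed versus accuracy on the reduced representation, is the one the talk quantifies: up to 88% time savings for roughly 1% accuracy loss.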

Michael JO, Assistant Professor

Rose-Hulman Institute of Technology

Michael Jo received his Ph.D. in Electrical and Computer Engineering in 2018 from the University of Illinois at Urbana-Champaign. He is currently an assistant professor in the Department of Electrical and Computer Engineering at Rose-Hulman Institute of Technology. His current research interests are accelerated embedded machine learning, computer vision, and the integration of artificial intelligence and nanotechnology.

Xingheng LIN, Researcher

Rose-Hulman Institute of Technology

Xingheng Lin was born in Jiangxi Province, China, in 2000. He is currently pursuing a B.S. degree in computer engineering at Rose-Hulman Institute of Technology. His primary research interests are PCA-based machine learning and deep learning acceleration. Beyond his primary research project, Xingheng is currently working on pattern recognition of rapid saliva COVID-19 test responses, a collaboration with 12-15 Molecular Diagnostics.

Schedule subject to change without notice.