tinyML Talks: FFConv: An FPGA-based Accelerator for Fast Convolution Layers in Convolutional Neural Network

Date

January 20, 2022

Location

Virtual

Schedule

Timezone: PST

FFConv: An FPGA-based Accelerator for Fast Convolution Layers in Convolutional Neural Network

Muhammad Adeel PASHA, Associate professor

Lahore University of Management Sciences

Image classification is among the most challenging problems in computer vision, and significant research effort is devoted to systems and algorithms that improve accuracy, performance, area, and power consumption for such problems.
Convolutional Neural Networks (CNNs) deliver outstanding accuracy on problems such as image classification, object detection, and semantic segmentation. While CNNs are pioneering the development of high-accuracy systems, their excessive computational complexity remains a barrier to wider deployment. Graphics Processing Units (GPUs), thanks to their massively parallel architecture, offer performance orders of magnitude beyond general-purpose processors, but they are constrained by their higher power consumption and general-purpose design.
Consequently, Field Programmable Gate Arrays (FPGAs) are being explored for implementing CNN architectures, as they also provide massively parallel logic resources but at a relatively lower power consumption than GPUs. In this talk, we present FFConv, an efficient FPGA-based fast convolutional layer accelerator for CNNs. We design a pipelined, high-throughput convolution engine based on the Winograd minimal filtering (also called fast convolution) algorithms to compute the convolutional layers of three popular CNN architectures: VGG16, AlexNet, and ShuffleNet. We implement our accelerator on a Virtex-7 FPGA platform, where we exploit computational parallelism to the maximum while exploring optimizations aimed at improving performance.
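To give a flavor of the Winograd minimal filtering idea behind the accelerator, the following is a minimal NumPy sketch of the 1-D case F(2,3): computing 2 outputs of a 3-tap convolution with only 4 multiplications instead of 6 by transforming the input tile and filter, multiplying elementwise, and applying an inverse transform. This is an illustrative assumption-laden example, not the FFConv implementation itself, which applies the 2-D analogue (e.g., F(2x2, 3x3)) tile by tile in hardware.

```python
import numpy as np

# Standard Winograd F(2,3) transform matrices (Lavin & Gray formulation):
# Y = A^T [(G g) * (B^T d)], with d a 4-sample input tile, g a 3-tap filter.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)   # input transform
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]], dtype=float)  # filter transform
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)    # inverse (output) transform

def winograd_f23(d, g):
    """Two outputs of correlating a 4-sample tile d with a 3-tap filter g,
    using 4 elementwise multiplications in the transform domain."""
    U = G @ g            # transformed filter
    V = BT @ d           # transformed input tile
    return AT @ (U * V)  # elementwise product, then inverse transform

# Check against the direct sliding-window computation.
d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, -1.0])
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
assert np.allclose(winograd_f23(d, g), direct)
```

In a 2-D Winograd engine the same structure appears with small matrix transforms on each input tile and filter, which is why it maps well onto a pipelined FPGA datapath: the transforms reduce to shifts and additions, leaving fewer DSP multiplications per output tile.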

Muhammad Adeel Pasha (Senior Member, IEEE) received the B.Sc. degree in Electrical Engineering from the University of Engineering and Technology (UET), Lahore, Pakistan, in 2004, and the M.S. and Ph.D. degrees in Electrical and Computer Engineering (ECE), with a specialization in Embedded Systems, from the University of Nice Sophia-Antipolis, Nice, and the University of Rennes-I, Rennes, France, in 2007 and 2010, respectively. He is currently an associate professor with the Department of Electrical Engineering, Lahore University of Management Sciences (LUMS), Pakistan, and has been the director of the Electronics and Embedded Systems Lab at LUMS since 2014. He has several years of research and development experience and has published numerous refereed papers in major international journals and conferences. His research interests include energy-efficient hardware design for compute-intensive applications, real-time scheduling for multicore systems, and future platforms for green computing.

Schedule subject to change without notice.