An Adaptive Convolution Neural Network Weight Parameter Quantization Method for FPGA

Authors

  • Surbhi Sharma

Keywords

Convolution Neural Network, Direct Quantization, FPGA, Log Quantization, Weight Parameter Quantization

Abstract

In this paper, a CNN weight parameter quantization method suitable for FPGA is designed. By quantizing the weight parameters to base-2 logarithmic values, the multiplications in convolution are simplified to shift operations, which are easy to implement on an FPGA. Compared with the conventional direct quantization method, the quantization efficiency of the proposed log quantization method is greatly improved. If the bit width of the conventional direct quantization method is N, the quantization bit width of the log quantization method becomes ceil(log2(N-1))+1, and the processing delay of the log quantization method is better than that of the direct quantization method, especially for small bit-width processing. The advantages in the usage of hardware resources such as LUTs, FFs, BRAMs, and DSP48Es are obvious, and the processing accuracy is similar to that of the direct quantization method, making it suitable for large-scale parallel accelerated operation.
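The Python sketch below illustrates the general idea of base-2 log quantization described in the abstract: weights are rounded to signed powers of two so that multiplying a fixed-point activation by a weight reduces to a bit shift. The function names, the exponent bit width, and the clipping range are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def log2_quantize(weights, exp_bits=4):
    """Quantize weights to signed powers of two (a sketch of base-2 log quantization).

    Each nonzero weight w is replaced by sign(w) * 2**e, where e is the rounded
    base-2 logarithm of |w|, clipped to the range representable by exp_bits.
    """
    sign = np.sign(weights)
    mag = np.abs(weights)
    nonzero = mag > 0
    exp = np.zeros(weights.shape, dtype=np.int32)
    exp[nonzero] = np.round(np.log2(mag[nonzero])).astype(np.int32)
    # Clip exponents to the representable signed range for the chosen bit width.
    e_min, e_max = -(2 ** (exp_bits - 1)), 2 ** (exp_bits - 1) - 1
    exp = np.clip(exp, e_min, e_max)
    quantized = sign * np.exp2(exp)
    quantized[~nonzero] = 0.0
    return quantized, exp, sign

def shift_multiply(activation_fixed, exp, sign):
    """Multiply a fixed-point activation by a power-of-two weight using a shift.

    On an FPGA this replaces a DSP multiplier with simple shift logic.
    """
    if exp >= 0:
        product = activation_fixed << exp
    else:
        product = activation_fixed >> (-exp)
    return int(sign) * product
```

As a worked instance of the bit-width relation in the abstract, a direct quantization bit width of N = 8 corresponds to a log quantization bit width of ceil(log2(8-1)) + 1 = 4 bits, which is where the resource and delay savings for small bit widths come from.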

Published

2020-11-22

Issue

Section

Articles