Sensitivity-Aware Bit Allocation for Intermediate Deep Feature Compression


Teaser

Fig. 1. Pipeline of the sensitivity-aware bit allocation for intermediate deep feature compression.

Abstract

In this paper, we focus on compressing and transmitting intermediate deep features to efficiently support the growing number of cloud-side applications, and propose a sensitivity-aware bit allocation algorithm for intermediate deep feature compression. Since different channels can contribute very differently to the final inference result of a deep learning model, we design a channel-wise bit allocation mechanism that maintains accuracy while reducing the bit-rate cost. The algorithm consists of two passes. In the first pass, only one channel is exposed to compression degradation while all other channels are kept intact, in order to measure that channel's sensitivity to the degradation; this process is repeated until every channel's sensitivity is obtained. In the second pass, the bits allocated to each channel are decided automatically according to the sensitivities measured in the first pass, so that channels with higher sensitivity receive more bits and accuracy is preserved as much as possible. With this well-designed algorithm, our method surpasses state-of-the-art compression tools with an average BD-rate saving of 6.4%.
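The two-pass procedure described above can be sketched roughly as follows. This is an illustrative outline only, not the paper's implementation: the `evaluate`, `quantize`, and `qp` names are hypothetical stand-ins for a fidelity metric, a per-channel codec, and its quantization parameter, and the proportional split in the second pass is one simple way to map sensitivities to a bit budget.

```python
import numpy as np

def channel_sensitivity(features, evaluate, quantize, qp):
    """Pass 1: degrade one channel at a time and measure the fidelity drop.

    `features` is a (C, ...) intermediate feature tensor; `evaluate`
    returns a scalar fidelity score for a feature tensor; `quantize`
    compresses a single channel at quantization parameter `qp`.
    All three are hypothetical placeholders for the real pipeline.
    """
    base = evaluate(features)
    sens = np.zeros(features.shape[0])
    for c in range(features.shape[0]):
        probe = features.copy()
        probe[c] = quantize(features[c], qp)   # degrade only channel c
        sens[c] = base - evaluate(probe)       # fidelity drop = sensitivity
    return sens

def allocate_bits(sens, total_bits, min_bits=1):
    """Pass 2: split the bit budget across channels in proportion to
    sensitivity, with a small per-channel floor so no channel starves."""
    w = np.maximum(sens, 0.0)
    if w.sum() == 0:                           # all channels insensitive
        w = np.ones_like(w)
    extra = total_bits - min_bits * len(w)
    bits = min_bits + extra * w / w.sum()
    return np.floor(bits).astype(int)
```

A channel whose degradation barely moves the fidelity score gets close to the floor allocation, while highly sensitive channels absorb most of the remaining budget.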

Resources

  • Paper: Coming soon!
  • Citation

    @inproceedings{hyz2020vcip,
      title={Sensitivity-Aware Bit Allocation for Intermediate Deep Feature Compression},
      author={Hu, Yuzhang and Xia, Sifeng and Yang, Wenhan and Liu, Jiaying},
      booktitle={IEEE International Conference on Visual Communications and Image Processing (VCIP)},
      year={2020},
      publisher={IEEE}
    }

Feature Compression Results

Table 1. Results of rate reduction of the proposed method.


Fig. 2. The rate-fidelity curves of layer Conv1 for VGG and ResNet.