SSGD: SPARSITY-PROMOTING STOCHASTIC GRADIENT DESCENT ALGORITHM FOR UNBIASED DNN PRUNING.

Authors
  • Lee, Ching-Hua1
  • Fedorov, Igor2
  • Rao, Bhaskar D1
  • Garudadri, Harinath1
  • 1 Department of ECE, University of California, San Diego.
  • 2 ARM ML Research.
Type
Published Article
Journal
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publication Date
May 01, 2020
Volume
2020
Pages
5410–5414
Identifiers
DOI: 10.1109/icassp40776.2020.9054436
PMID: 33162834
Source
Medline
Language
English
License
Unknown

Abstract

While deep neural networks (DNNs) have achieved state-of-the-art results in many fields, they are typically over-parameterized. Parameter redundancy, in turn, leads to inefficiency. Sparse signal recovery (SSR) techniques, on the other hand, find compact solutions to overcomplete linear problems. Therefore, a logical step is to draw the connection between SSR and DNNs. In this paper, we explore the application of iterative reweighting methods popular in SSR to learning efficient DNNs. By efficient, we mean sparse networks that require less computation and storage than the original, dense network. We propose a reweighting framework to learn sparse connections within a given architecture without biasing the optimization process, by utilizing the affine scaling transformation strategy. The resulting algorithm, referred to as Sparsity-promoting Stochastic Gradient Descent (SSGD), has simple gradient-based updates which can be easily implemented in existing deep learning libraries. We demonstrate the sparsification ability of SSGD on image classification tasks and show that it outperforms existing methods on the MNIST and CIFAR-10 datasets.
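The abstract describes SSGD only at a high level, so the sketch below is a rough illustration of the iterative-reweighting idea it builds on: proximal SGD with a reweighted l1 penalty on a toy sparse linear-regression problem. This is an assumption-laden stand-in, not the paper's SSGD update (which relies on an affine scaling transformation); the problem setup, the hyperparameters lr, lam, and eps, and the reweighting schedule are all illustrative choices.

```python
import numpy as np

# Illustrative sketch only: generic reweighted-l1 proximal SGD on a toy
# sparse regression problem. NOT the paper's SSGD algorithm; every constant
# and the problem itself are assumptions made for demonstration.

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:5] = [1.5, -2.0, 0.8, -1.2, 2.5]        # sparse ground-truth weights
y = X @ w_true + 0.01 * rng.standard_normal(n)

w = np.zeros(d)
lr, lam, eps = 1e-2, 5e-3, 1e-2
u = np.ones(d)                                   # per-weight l1 reweighting factors

for t in range(3000):
    idx = rng.integers(0, n, size=32)                        # mini-batch
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)       # squared-loss gradient
    w = w - lr * grad                                        # stochastic gradient step
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam * u, 0.0)  # prox of reweighted l1
    if (t + 1) % 100 == 0:
        u = 1.0 / (np.abs(w) + eps)              # reweight: smaller weights get larger penalties

print("recovered support:", np.flatnonzero(w).tolist())
print("true support:     ", np.flatnonzero(w_true).tolist())
```

In this generic scheme the reweighting factors u shrink small weights aggressively while leaving large weights almost untouched, which is loosely the "unbiased" flavor of sparsity promotion that the title refers to.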
