Detecting and Mitigating Adversarial Perturbations for Robust Face Recognition

Authors
  • Goswami, Gaurav1
  • Agarwal, Akshay1
  • Ratha, Nalini2
  • Singh, Richa1
  • Vatsa, Mayank1
  • 1 IIIT-Delhi, New Delhi, India
  • 2 IBM T. J. Watson Research Center, Yorktown Heights, NY, USA
Type
Published Article
Journal
International Journal of Computer Vision
Publisher
Springer-Verlag
Publication Date
Mar 22, 2019
Volume
127
Issue
6-7
Pages
719–742
Identifiers
DOI: 10.1007/s11263-019-01160-w
Source
Springer Nature

Abstract

Deep neural network (DNN) based models have high expressive power and learning capacity. However, they are essentially black-box methods, since it is not easy to mathematically formulate the functions learned within their many layers of representation. Realizing this, many researchers have started to design methods that exploit the drawbacks of deep learning based algorithms, questioning their robustness and exposing their singularities. In this paper, we attempt to unravel three aspects related to the robustness of DNNs for face recognition: (i) assessing the impact of deep architectures for face recognition in terms of vulnerabilities to attacks; (ii) detecting singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks; and (iii) making corrections to the processing pipeline to alleviate the problem. Our experimental evaluation using multiple open-source DNN-based face recognition networks and three publicly available face databases demonstrates that the performance of deep learning based face recognition algorithms can suffer greatly in the presence of such distortions. We also evaluate the proposed approaches on four existing quasi-imperceptible distortions: DeepFool, universal adversarial perturbations, $l_2$, and Elastic-Net (EAD). The proposed method is able to detect both types of attacks with very high accuracy by suitably designing a classifier using the response of the hidden layers in the network. Finally, we present effective countermeasures to mitigate the impact of adversarial attacks and improve the overall robustness of DNN-based face recognition.
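The detection idea summarized in the abstract, characterizing abnormal filter responses in hidden layers, can be sketched in a few lines. The sketch below is an illustrative assumption, not the authors' implementation: it summarizes each hidden layer's activations with simple statistics (mean, standard deviation, peak magnitude) and flags inputs whose statistics deviate from those observed on clean images. The class and function names and the max-z-score thresholding rule are all hypothetical.

```python
import numpy as np

def layer_features(activations):
    """Summarize each hidden layer's response map with simple statistics.
    `activations` is a list of per-layer arrays (any shapes)."""
    feats = []
    for a in activations:
        a = np.asarray(a, dtype=float)
        feats.extend([a.mean(), a.std(), np.abs(a).max()])
    return np.array(feats)

class ResponseDetector:
    """Toy detector: flags inputs whose layer statistics deviate from
    the statistics collected on clean training images."""

    def fit(self, clean_feature_matrix):
        # Per-feature mean and spread over clean images.
        self.mu = clean_feature_matrix.mean(axis=0)
        self.sigma = clean_feature_matrix.std(axis=0) + 1e-8
        # Threshold: the largest z-score deviation seen on clean data,
        # so no clean training sample is flagged.
        z = np.abs((clean_feature_matrix - self.mu) / self.sigma)
        self.threshold = z.max()
        return self

    def is_adversarial(self, features):
        z = np.abs((features - self.mu) / self.sigma)
        return bool(z.max() > self.threshold)
```

In the paper, the per-layer features are learned from a real network's intermediate representations and the detector is a trained classifier; here a simple deviation threshold stands in for that classifier to keep the sketch self-contained.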
