Kherchouche, Anouar; Fezza, Sid Ahmed; Hamidouche, Wassim
Despite the impressive performance of deep neural networks (DNNs), recent studies have shown their vulnerability to adversarial examples (AEs), i.e., carefully perturbed inputs designed to fool the targeted DNN. The literature is currently rich in effective attacks for crafting such AEs. Meanwhile, many defense strategies have been developed to m...
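The abstract does not name a specific attack; as a concrete illustration, the Fast Gradient Sign Method (FGSM) is among the simplest ways to craft such AEs. A minimal PyTorch sketch, where model, x, y, and eps are placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the
    # valid pixel range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```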
Shamshiri, Samaneh; Sohn, Insoo
Published in
ICT Express
The rapid progress and widespread outbreak of COVID-19 have had a devastating impact on health systems around the world. The importance of countermeasures to tackle this problem has led to the widespread use of Computer-Aided Diagnosis (CAD) applications using deep neural networks. The unprecedented success of machine learning techniques, especia...
James, Hailey; Gupta, Otkrist; Raviv, Dan
Published in
EURASIP Journal on Image and Video Processing
Examining the authenticity of images has become increasingly important as manipulation tools become more accessible and advanced. Recent work has shown that while CNN-based image manipulation detectors can successfully identify manipulations, they are also vulnerable to adversarial attacks, ranging from simple double JPEG compression to advanced pi...
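Double JPEG compression, the simplest attack mentioned, amounts to re-encoding an image at a second quality factor. A minimal Pillow sketch (the quality factors q1 and q2 are illustrative, not taken from the paper):

```python
from io import BytesIO
from PIL import Image

def double_jpeg(path, q1=90, q2=70):
    """Re-compress an image twice at different JPEG quality factors."""
    img = Image.open(path).convert("RGB")
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=q1)   # first compression
    buf.seek(0)
    out = BytesIO()
    Image.open(buf).convert("RGB").save(out, format="JPEG", quality=q2)  # second compression
    out.seek(0)
    return Image.open(out)
```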
Haleta, Pavlo; Likhomanov, Dmytro; Sokol, Oleksandra
Published in
EURASIP Journal on Information Security
Recently, adversarial attacks have drawn the community’s attention as an effective tool for degrading the accuracy of neural networks. However, their actual use in the real world is limited. The main reason is that real-world machine learning systems, such as content filters or face detectors, often consist of multiple neural networks, each performing an...
Liang, Ling; Hu, Xing; Deng, Lei; Wu, Yujie; Li, Guoqi; Ding, Yufei; Li, Peng; Xie, Yuan
Spiking neural networks (SNNs) are broadly deployed in neuromorphic devices to emulate brain function. In this context, SNN security becomes important, yet it lacks in-depth investigation. To this end, we target adversarial attacks against SNNs and identify several challenges distinct from the artificial neural network (ANN) attack: 1) current adve...
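One SNN-specific complication (assumed here for illustration; the abstract is truncated before its list of challenges) is that inputs are binary spike trains rather than continuous pixels, so gradient-based pixel perturbations do not transfer directly. A sketch of the common Poisson rate coding:

```python
import numpy as np

def poisson_encode(image, T=50, rng=np.random.default_rng(0)):
    """Rate-code a [0, 1]-valued image into T binary spike frames."""
    # Each pixel fires at each time step with probability equal to its
    # intensity, producing a (T, *image.shape) array of 0/1 spikes.
    return (rng.random((T,) + image.shape) < image).astype(np.float32)
```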
Yang, Runze; Long, Teng
Published in
PeerJ Computer Science
In recent years, graph convolutional networks (GCNs) have risen to prominence due to their excellent performance in graph data processing. However, recent research shows that GCNs are vulnerable to adversarial attacks. An attacker can maliciously modify the edges or nodes of a graph to mislead the model’s classification of the target nodes, or even caus...
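Structural attacks of this kind amount to toggling entries of the adjacency matrix that feeds the GCN’s propagation rule. A schematic NumPy sketch of one GCN layer and an edge flip (illustrative only; the paper’s attack would choose which edges to flip adversarially):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

def flip_edge(A, i, j):
    """Adversarially toggle edge (i, j) in an undirected graph."""
    A = A.copy()
    A[i, j] = A[j, i] = 1 - A[i, j]
    return A
```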
Zhang, Hanwei
This thesis concerns adversarial attacks and defenses in deep learning. We propose to improve the performance of adversarial attacks in terms of speed, magnitude of distortion, and invisibility. We contribute by defining invisibility through smoothness and integrating it into the optimization used to produce adversarial examples. We succeed in c...
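The thesis defines invisibility through smoothness; one generic way such a term can enter an attack objective is a total-variation penalty on the perturbation. A sketch under that assumption, not the thesis’s exact formulation:

```python
import torch

def tv_penalty(delta):
    """Total-variation smoothness penalty on a perturbation (B, C, H, W)."""
    # Penalize differences between neighboring pixels so the optimized
    # perturbation stays locally smooth (and hence less visible).
    dh = (delta[..., 1:, :] - delta[..., :-1, :]).abs().mean()
    dw = (delta[..., :, 1:] - delta[..., :, :-1]).abs().mean()
    return dh + dw
```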
Amsaleg, Laurent; Bailey, James; Barbe, Amelie; Erfani, Sarah; Furon, Teddy; Houle, Michael; Radovanovic, Milos; Vinh, Nguyen Xuan
Mygdalis, Vasileios; Tefas, Anastasios; Pitas, Ioannis
Published in
Neural Networks: the official journal of the International Neural Network Society
A novel adversarial attack methodology for fooling deep neural network classifiers in image classification tasks is proposed, along with a novel defense mechanism to counter such attacks. Two concepts are introduced, namely the K-Anonymity-inspired Adversarial Attack (K-A3) and the Multiple Support Vector Data Description Defense (M-SVDD-D). The pr...
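Support Vector Data Description (SVDD) fits a minimal enclosing hypersphere around a class’s data; with an RBF kernel, scikit-learn’s OneClassSVM is equivalent to SVDD, so a generic rejection-based defense can be sketched as follows (this is not the authors’ M-SVDD-D ensemble; the feature vectors are placeholders):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Fit a one-class boundary around a class's deep feature vectors; with an
# RBF kernel this matches SVDD's minimal enclosing hypersphere.
features = np.random.randn(200, 64)  # placeholder features for one class
svdd = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(features)

# At test time, samples predicted as outliers (-1) fall outside the learned
# boundary and can be rejected as potential adversarial examples.
is_inlier = svdd.predict(np.random.randn(5, 64))
```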