Exploiting Artificial Neural Networks Machine Learning Errors for Attacks on AI Systems

Keywords

artificial neural networks
machine learning errors
attacks on artificial intelligence systems

How to Cite

Gavrilenko T.V., Gavrilenko A.V. Exploiting Artificial Neural Networks Machine Learning Errors for Attacks on AI Systems // Russian Journal of Cybernetics. 2021. Vol. 2, № 3. P. 23-32. DOI: 10.51790/2712-9942-2021-2-3-4.

Abstract

The paper provides an overview of methods and approaches to attacks on neural network-based artificial intelligence systems. It is shown that since 2015, researchers worldwide have been intensively developing methods for attacking artificial neural networks, and that the existing attacks can have critical consequences for the operation of artificial intelligence systems. We conclude that the theory and methodology of artificial neural networks needs further development, since trusted artificial intelligence systems cannot be created within the current paradigm.
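
As an illustration of the class of attacks the survey covers, the Fast Gradient Sign Method (FGSM) from Goodfellow et al. (the first reference below) perturbs an input in the direction of the sign of the loss gradient. The following is a minimal PyTorch sketch, not code from the paper; model, image, label, and the budget epsilon are illustrative placeholders:

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # x' = x + epsilon * sign(grad_x loss): a single gradient step that
    # maximally increases the classification loss (Goodfellow et al., 2015).
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid input range.
    return adversarial.clamp(0.0, 1.0).detach()

On inputs scaled to [0, 1], a budget around epsilon = 0.03 is often enough to flip the prediction of an undefended image classifier while leaving the perturbation barely visible to a human.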

https://doi.org/10.51790/2712-9942-2021-2-3-4

References

Goodfellow I. J., Shlens J., Szegedy C. Explaining and Harnessing Adversarial Examples. Proceedings of the International Conference on Learning Representations (ICLR). 2015.

Kexin Pei, Yinzhi Cao, Junfeng Yang, Suman Jana. DeepXplore: Automated Whitebox Testing of Deep Learning Systems. SOSP ’17, October 28, 2017, Shanghai, China.

Jiawei Su, Danilo Vasconcellos Vargas, Kouichi Sakurai. One Pixel Attack for Fooling Deep Neural Networks. 2017. Available at: https://arxiv.org/pdf/1710.08864.pdf.

Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin. Query-efficient Black-box Adversarial Examples. 2017. Available at: https://arxiv.org/pdf/1712.07113.pdf.

Tom B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, Justin Gilmer. Adversarial Patch. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 2017. Available at: https://arxiv.org/pdf/1712.09665.pdf.

Moustafa Alzantot, Bharathan Balaji, Mani Srivastava. Did You Hear That? Adversarial Examples Against Automatic Speech Recognition. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 2017. Available at: https://arxiv.org/pdf/1801.00554v1.pdf.

Shehzeen Hussain, Paarth Neekhara, Malhar Jere, Farinaz Koushanfar, Julian McAuley. Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples. WACV 2021. Available at: https://adversarialdeepfakes.github.io/.

DeepRobust: a Library for Adversarial Attacks on Neural Networks. Available at: https://neurohive.io/ru/frameworki/deeprobust-biblioteka-dlya-sostyazatelnyh-atak-na-nejroseti/.

Ajaya Adhikari, Richard den Hollander, Ioannis Tolios, Michael van Bekkum, Anneloes Bal, Stijn Hendriks, Maarten Kruithof, Dennis Gross, Nils Jansen, Guillermo Pérez, Kit Buurman, Stephan Raaijmakers. Adversarial Patch Camouflage against Aerial Detection. 2020.
