Towards cross-task universal perturbation against black-box object detectors in autonomous driving

Quanxin Zhang, Yuhang Zhao, Yajie Wang, Thar Baker, Jian Zhang, Jingjing Hu

Research output: Contribution to journal › Article › peer-review

Abstract

Deep neural networks are a main research branch of artificial intelligence and are suitable for many decision-making fields. Autonomous driving and unmanned vehicles often depend on deep neural networks for accurate and reliable detection, classification, and ranging of surrounding objects in real on-road environments, either locally or through swarm intelligence among distributed nodes over 5G channels. However, it has been demonstrated that, in computer vision tasks, deep neural networks are vulnerable to well-designed adversarial examples that are imperceptible to the human eye. Studying this vulnerability is valuable for enhancing the robustness of neural networks. Existing adversarial examples against object detection models are image-dependent; in this paper, we instead implement adversarial attacks against object detection models using universal perturbations. We demonstrate the cross-task, cross-model, and cross-dataset transferability of universal perturbations. We first train a universal perturbation generator and then add the perturbations to target images in two ways, resizing and pile-up, to solve the problem that universal perturbations cannot be directly applied to attack object detection models. We then exploit the transferability of universal perturbations to attack black-box object detection models, which reduces the time cost of generating adversarial examples. A series of experiments on the PASCAL VOC and MS COCO datasets demonstrates the feasibility of cross-task attacks and proves the effectiveness of our attack on two representative classes of object detectors: regression-based models such as YOLOv3 and proposal-based models such as Faster R-CNN.
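The two application strategies mentioned in the abstract, resizing and pile-up, can be illustrated with a short sketch. This is a hypothetical illustration (function names, the nearest-neighbor resize, and the `eps` budget are assumptions, not the authors' implementation): the perturbation is either stretched to the target image size or tiled across it, then added under a small L∞ budget.

```python
import numpy as np

def apply_resized(image, perturbation, eps=8 / 255):
    """Resize the universal perturbation to the image size (nearest-neighbor
    indexing, an illustrative choice) and add it under an L-inf budget."""
    h, w = image.shape[:2]
    ph, pw = perturbation.shape[:2]
    rows = np.arange(h) * ph // h          # map image rows to perturbation rows
    cols = np.arange(w) * pw // w          # map image cols to perturbation cols
    resized = perturbation[rows][:, cols]  # (h, w, channels)
    return np.clip(image + eps * np.sign(resized), 0.0, 1.0)

def apply_tiled(image, perturbation, eps=8 / 255):
    """Pile up (tile) copies of the perturbation to cover the image,
    then crop to the image size and add under the same budget."""
    h, w = image.shape[:2]
    ph, pw = perturbation.shape[:2]
    reps = ((h + ph - 1) // ph, (w + pw - 1) // pw, 1)
    tiled = np.tile(perturbation, reps)[:h, :w]
    return np.clip(image + eps * np.sign(tiled), 0.0, 1.0)
```

Both functions keep the perturbed image within `eps` of the original in every pixel, so the perturbation stays visually subtle regardless of which application strategy is used.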

Original language: English
Article number: 107388
Journal: Computer Networks
Volume: 180
DOIs
Publication status: Published - 15 Jul 2020

Bibliographical note

Funding Information:
This work is supported by the National Natural Science Foundation of China under Grant No. 61876019.

Publisher Copyright:
© 2020

Keywords

  • Adversarial example
  • Object detection
  • Universal perturbation
