Attention Attacks on Transformer-based Object Detection
Published:
To date, white-box adversarial attacks on object detection have targeted convolutional neural network (CNN)-based models. In this project, I devise a novel attention-based attack capable of corrupting both CNN-based and transformer-based detectors. Early results show that the method is effective against many state-of-the-art object detection models, with implications for important applications such as autonomous driving, radiology, and facial recognition.
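The core idea, attacking a model through its attention mechanism, can be illustrated with a toy sketch. The code below is not the project's actual method; it is a minimal, self-contained illustration of one plausible formulation: a PGD-style perturbation that pushes a softmax attention map away from its clean value, using numerical gradients so the example runs without a deep learning framework. All function names, sizes, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_weights(x, W_q, W_k):
    """Row-wise softmax attention weights for a token matrix x of shape (n_tokens, d)."""
    scores = (x @ W_q) @ (x @ W_k).T / np.sqrt(W_k.shape[1])
    e = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

def numerical_grad(f, x, h=1e-4):
    """Central-difference gradient of a scalar function f at x (toy scale only)."""
    g = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        xp, xm = x.copy(), x.copy()
        xp[idx] += h
        xm[idx] -= h
        g[idx] = (f(xp) - f(xm)) / (2 * h)
    return g

def attention_attack(x, W_q, W_k, eps=0.5, steps=10, lr=0.1):
    """PGD-style L-inf attack: perturb x to maximize divergence from the clean attention map."""
    clean = attention_weights(x, W_q, W_k)
    loss = lambda z: np.sum((attention_weights(z, W_q, W_k) - clean) ** 2)
    adv = x.copy()
    for _ in range(steps):
        g = numerical_grad(loss, adv)
        adv = np.clip(adv + lr * np.sign(g), x - eps, x + eps)  # project into the eps-ball
    return adv

# Toy demo: 4 tokens with 3-dim embeddings and random projection matrices.
x = rng.normal(size=(4, 3))
W_q = rng.normal(size=(3, 3))
W_k = rng.normal(size=(3, 3))
adv = attention_attack(x, W_q, W_k)
divergence = np.sum((attention_weights(adv, W_q, W_k) - attention_weights(x, W_q, W_k)) ** 2)
```

In a real attack on a detector, the numerical gradient would be replaced by backpropagation through the model's attention layers, and the loss would be chosen to produce a specific failure mode (e.g. suppressing or fabricating detections).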
This project is currently in development.
Check out the code base here.
Examples of this attack, borrowed from its predecessor, TOG. Variations include vanishing, fabrication, mislabeling, and random untargeted attacks.