
BANDS Workshop at ICLR 2023

12-07-2022


The Department of Computer Science welcomes attendees to BANDS (Backdoor Attacks and Defenses in Machine Learning).

The event will be held virtually on May 5, 2023, in conjunction with ICLR 2023. Use this link to register.

 

Overview

Backdoor attacks aim to cause consistent misclassification of any input by adding a specific pattern called a trigger. Unlike adversarial attacks, which require generating perturbations on the fly to induce misclassification of a single input, backdoor attacks take effect immediately by simply applying a pre-chosen trigger. Recent studies have shown the feasibility of launching backdoor attacks in various domains, such as computer vision (CV), natural language processing (NLP), and federated learning (FL). As backdoor attacks are mostly carried out through data poisoning (i.e., adding malicious inputs to training data), they raise major concerns for many publicly available pre-trained models. Companies relying on user data to construct their machine learning models are also susceptible to backdoor attacks.
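
As a concrete illustration, the following minimal Python sketch shows how data poisoning with a patch trigger might look for an image classification task. The array shapes, patch location, target class, and poison rate are illustrative assumptions, not a prescribed attack.

```python
import numpy as np

# Minimal sketch of data poisoning with a patch trigger (illustrative only).
# `images` is assumed to be a float array of shape (N, H, W, C) in [0, 1].

def poison_dataset(images, labels, target_label=0, poison_rate=0.05, seed=0):
    rng = np.random.default_rng(seed)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()

    # Pick a small fraction of training samples to poison.
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp a fixed 3x3 white patch in the bottom-right corner as the trigger
    # and relabel the poisoned samples to the attacker-chosen target class.
    poisoned_images[idx, -3:, -3:, :] = 1.0
    poisoned_labels[idx] = target_label
    return poisoned_images, poisoned_labels
```

A model trained on the resulting dataset typically behaves normally on clean inputs but predicts the target class whenever the patch appears, which is what makes such attacks hard to notice.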

Defending against backdoor attacks has sparked multiple lines of research, including detecting inputs that carry backdoor triggers, determining whether a model contains hidden backdoors, and eliminating potential backdoors inside a model. Many defense techniques are effective against particular types of backdoor attacks; however, as increasingly diverse backdoors emerge, the performance of existing defenses tends to be limited. Most defense techniques and attacks have been developed for the computer vision domain, and the connection between attacks and defenses across different domains has yet to be explored.
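
For readers new to this line of work, the sketch below outlines a trigger-inversion style scan in the spirit of Neural Cleanse (Wang et al., 2019): for a suspect target class, it optimizes a small mask and pattern that flip clean inputs to that class, and an unusually small inverted trigger suggests a backdoor. All names, shapes, and hyperparameters here are illustrative assumptions rather than a definitive implementation.

```python
import torch

# Minimal sketch of trigger inversion for backdoor scanning (illustrative only).
def invert_trigger(model, clean_loader, target_class, image_shape=(3, 32, 32),
                   steps=200, lr=0.1, lam=0.01, device="cpu"):
    # Learn a per-pixel mask and a pattern; sigmoid keeps both in [0, 1].
    mask = torch.zeros(1, *image_shape[1:], device=device, requires_grad=True)
    pattern = torch.zeros(image_shape, device=device, requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=lr)

    for _ in range(steps):
        for x, _ in clean_loader:
            x = x.to(device)
            m = torch.sigmoid(mask)
            p = torch.sigmoid(pattern)
            stamped = (1 - m) * x + m * p  # overlay the candidate trigger

            logits = model(stamped)
            target = torch.full((x.size(0),), target_class,
                                dtype=torch.long, device=device)
            # Force misclassification to the target class while keeping the
            # trigger small; a very small mask norm hints at a real backdoor.
            loss = torch.nn.functional.cross_entropy(logits, target) + lam * m.sum()

            opt.zero_grad()
            loss.backward()
            opt.step()

    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()
```

Repeating this scan for every candidate class and comparing the sizes of the inverted triggers is one way such defenses flag suspicious models.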

This workshop, Backdoor Attacks aNd DefenSes in Machine Learning (BANDS), aims to bring together researchers from government, academia, and industry who share a common interest in exploring and building machine learning models that are more secure against backdoor attacks. With the wide adoption of large pre-trained models in real-world applications, any malicious behaviors injected into those models, such as backdoors, are particularly concerning. It is therefore important to gather researchers in the area and expand the community to improve the security of machine learning.

 

This workshop aims to answer the following questions:

  • What other types of backdoor attacks can we find in CV/NLP/FL machine learning models?
  • Can we launch backdoor attacks in other domains, such as binary analysis tools, network intrusion detection systems, reinforcement learning, etc.?
  • What are the similarities and differences of backdoor attacks in various tasks?
  • How can we measure the stealthiness of backdoor attacks in different domains? What are the costs and practicality of launching backdoor attacks in the real world?
  • What is the performance of existing defense techniques in studied domains? Can they be adapted to other domains?
  • How can we develop a general defense method against a variety of backdoor attacks and even unseen attacks?
  • Are there other forms of defenses that are practical in the real world?

 

Organizers

BANDS 2023 is organized by Professor Xiangyu Zhang and PhD students Guanhong Tao and Kaiyuan Zhang, together with additional researchers at other institutions.

 

Call for Papers

Submissions are invited on any aspect of backdoor attacks and defenses in machine learning, including but not limited to:

  • Novel backdoor attacks against ML systems, including CV, NLP, ML models in cyber-physical systems, etc.
  • Detecting backdoored models under different threat models, such as having limited clean data or no data, no access to model weights, using attack samples, etc.
  • Eliminating backdoors in attacked models under different settings, such as limited access or no access to the original training/test data
  • Certification/verification methods against backdoor attacks with guarantees
  • Real-world or physical backdoor attacks in deployed systems, such as autonomous driving systems, facial recognition systems, etc.
  • Hardware-based backdoor attacks in ML
  • Backdoors in distributed learning, federated learning, reinforcement learning, etc.
  • Theoretical understanding of backdoor attacks in machine learning
  • Explainable and interpretable AI in backdoor scenarios
  • Futuristic concerns on trustworthiness and societal impact of ML systems regarding backdoor threats
  • Exploration of the relation among backdoors, adversarial robustness, and fairness
  • New applications of backdoors in other scenarios, such as watermarking ML property, boosting privacy attacks, etc.

