
Hi there, I am Yige Li👋

I am a research fellow at the School of Computing and Information Systems, Singapore Management University, supervised by Prof. Jun Sun. I also work closely with Prof. Xingjun Ma at Fudan University. I completed my Ph.D. at Xidian University under the supervision of Prof. Xixiang Lyu. My publications are listed on Google Scholar.

🔭 My research mainly focuses on:

  • Understanding the effectiveness of backdoor attacks
  • Robust training against backdoor attacks
  • Designing and implementing a general defense framework against backdoor attacks

🌱 Publications:

  • Yige Li, Xingjun Ma, et al., “Multi-Trigger Backdoor Attacks: More Triggers, More Threats”, under submission, 2024.
  • Yige Li, Xixiang Lyu, et al., “Reconstructive Neuron Pruning for Backdoor Defense”, ICML 2023.
  • Yige Li, Xixiang Lyu, et al., “Anti-Backdoor Learning: Training Clean Models on Poisoned Data”, NeurIPS 2021.
  • Yige Li, Xixiang Lyu, et al., “Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks”, ICLR 2021.

⚡ Significance of our work:

  • Neural Attention Distillation (NAD)

    • A simple and universal defense that uses knowledge distillation to erase 6 state-of-the-art backdoor attacks (see the sketch after this list)
    • Requires only a small amount of clean data (5%)
    • Requires only a few epochs of fine-tuning (2-10 epochs)
  • Anti-Backdoor Learning (ABL)

    • Simple, effective, and universal: defends against 10 state-of-the-art backdoor attacks (see the sketch after this list)
    • Requires isolating only 1% of the training data
    • A novel strategy that helps companies, research institutes, and government agencies train backdoor-free machine learning models
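
Since the two defenses above are described only at a high level, here is a minimal PyTorch-style sketch of the NAD idea, assuming a model that returns both logits and a list of intermediate feature maps. The `attention_map` helper, the feature-returning interface, and the loss weight `beta` are illustrative assumptions, not the official implementation (see the pinned NAD repository for that).

```python
import torch
import torch.nn.functional as F

def attention_map(feature):
    # Collapse the channel dimension into a normalized spatial attention map.
    a = feature.pow(2).mean(dim=1)           # (B, H, W)
    return F.normalize(a.flatten(1), dim=1)  # (B, H*W), unit L2 norm

def nad_loss(student_feats, teacher_feats, beta=1000.0):
    # Align the backdoored student's attention maps with those of a teacher
    # obtained by fine-tuning the same model on the small clean set.
    loss = 0.0
    for fs, ft in zip(student_feats, teacher_feats):
        loss = loss + F.mse_loss(attention_map(fs), attention_map(ft.detach()))
    return beta * loss

def nad_finetune_step(student, teacher, images, labels, optimizer, beta=1000.0):
    # One fine-tuning step on a clean batch (about 5% of the training data in
    # total, for only a few epochs, per the bullets above).
    logits, student_feats = student(images)      # assumed (logits, [features]) interface
    with torch.no_grad():
        _, teacher_feats = teacher(images)
    loss = F.cross_entropy(logits, labels) + nad_loss(student_feats, teacher_feats, beta)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A similarly hedged sketch of the ABL intuition: poisoned samples tend to be fitted fastest, so the lowest-loss ~1% of training examples can be isolated and then unlearned with gradient ascent. The function names and the simple loss-ranking heuristic are assumptions for illustration (here `model` is assumed to return logits directly); the pinned ABL repository contains the actual isolation and unlearning procedure.

```python
import torch
import torch.nn.functional as F

def isolate_low_loss_samples(model, dataset, ratio=0.01):
    # Isolation phase (simplified): after a short warm-up training run, rank
    # training examples by loss and flag the lowest-loss ~1% as suspected poison.
    model.eval()
    scored = []
    with torch.no_grad():
        for idx in range(len(dataset)):
            x, y = dataset[idx]
            logits = model(x.unsqueeze(0))       # add a batch dimension
            scored.append((F.cross_entropy(logits, torch.tensor([y])).item(), idx))
    scored.sort()
    k = max(1, int(ratio * len(dataset)))
    return [idx for _, idx in scored[:k]]

def unlearn_step(model, x_suspect, y_suspect, optimizer):
    # Unlearning phase (simplified): gradient ascent on the isolated samples
    # pushes the model away from the trigger-to-target mapping while normal
    # training continues on the rest of the data.
    loss = -F.cross_entropy(model(x_suspect), y_suspect)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Both snippets are for intuition only; the pinned repositories below contain the full training loops and configurations used in the papers.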

📫 How to reach me:

Pinned repositories:

  1. BackdoorLLM

    BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models

  2. NAD

    An implementation demo of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks](https://openreview.net/pdf?id=9l0K4OM-oXE) in PyTorch.

  3. ABL

    Anti-Backdoor Learning (NeurIPS 2021)

  4. RNP

    Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023)

  5. Expose-Before-You-Defend

    Expose Before You Defend: Unifying and Enhancing Backdoor Defenses via Exposed Models

  6. Multi-Trigger-Backdoor-Attacks

    Shortcuts Everywhere and Nowhere: Exploring Multi-Trigger Backdoor Attacks