This repository contains the experiments for the paper "Adversarial Attacks on Robotic Vision-Language-Action Models."
Requirements:

- Python 3.8+
- CUDA-compatible GPU (for efficient optimization)
- Dependencies listed in `robo_env.yml`
Installation:

- Clone this repository:

  ```bash
  git clone https://github.com/eliotjones1/robogcg.git
  cd robogcg
  ```
- Create and activate the conda environment:

  ```bash
  conda env create -f robo_env.yml
  conda activate robo_env
  ```
- Install the package:

  ```bash
  pip install -e .
  ```
- Install CoTracker (required for TraceVLA):

  ```bash
  git clone https://github.com/facebookresearch/co-tracker
  cd co-tracker
  pip install -e .
  pip install matplotlib flow_vis tqdm tensorboard
  ```
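With the environment set up, a quick check confirms the install worked. This is a minimal sketch, assuming PyTorch is pulled in by `robo_env.yml` and that `pip install -e .` exposes a `roboGCG` module mirroring the directory of the same name; adjust the import if the package name differs:

```python
# Sanity check: assumes PyTorch is among the robo_env.yml dependencies and
# that the editable install exposes a roboGCG module (mirroring roboGCG/).
import torch
import roboGCG  # noqa: F401  -- fails loudly if `pip install -e .` was skipped

print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
```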
To run the main gradient-based adversarial attacks:

```bash
python -m experiments.single_step.run_experiment \
    --config experiments/single_step/configs/libero_10/libero_10_0.json \
    --num-gpus 1
```
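For orientation, the attack follows the Greedy Coordinate Gradient (GCG) recipe: gradients taken through a one-hot relaxation of the adversarial suffix shortlist promising token swaps, and the sampled swap that most reduces the attack loss is kept. The sketch below is illustrative only; `model`, `embed_matrix`, `loss_fn`, and the hyperparameters are placeholder names, not this repo's API:

```python
import torch

def gcg_step(model, embed_matrix, suffix_ids, loss_fn, top_k=256, n_candidates=64):
    """One GCG iteration: shortlist token swaps via gradients, then keep the
    sampled swap that actually lowers the attack loss.

    suffix_ids: LongTensor of current adversarial suffix tokens.
    loss_fn(model, suffix_embeds) -> scalar attack loss to minimize.
    """
    vocab_size = embed_matrix.shape[0]
    # One-hot relaxation of the suffix so we can differentiate w.r.t. tokens.
    one_hot = torch.nn.functional.one_hot(suffix_ids, vocab_size).float()
    one_hot.requires_grad_(True)
    loss = loss_fn(model, one_hot @ embed_matrix)  # forward through soft embeddings
    loss.backward()
    # A large negative gradient means the swap is expected to reduce the loss.
    top_tokens = (-one_hot.grad).topk(top_k, dim=-1).indices  # (suffix_len, top_k)

    best_ids, best_loss = suffix_ids, loss.item()
    for _ in range(n_candidates):
        cand = suffix_ids.clone()
        pos = torch.randint(len(suffix_ids), (1,)).item()    # random suffix position
        cand[pos] = top_tokens[pos, torch.randint(top_k, (1,)).item()]
        with torch.no_grad():                                # exact re-evaluation
            cand_loss = loss_fn(model, embed_matrix[cand]).item()
        if cand_loss < best_loss:
            best_ids, best_loss = cand, cand_loss
    return best_ids, best_loss
```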
To evaluate how well adversarial prompts persist across multiple frames:

```bash
./scripts/run_persistence_experiment.sh
```
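Conceptually, the persistence protocol optimizes a suffix on an early frame and replays it unchanged on the frames that follow. A minimal sketch of that measurement, where `predict_action` and `is_hijacked` are hypothetical stand-ins for the repo's model wrappers and success metric:

```python
from typing import Callable, Sequence

def persistence_curve(frames: Sequence, instruction: str, suffix: str,
                      predict_action: Callable, is_hijacked: Callable) -> list:
    """Per-frame attack success for a suffix optimized once on an early
    frame, then held fixed for the rest of the rollout."""
    return [is_hijacked(predict_action(frame, instruction + suffix))
            for frame in frames]
```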
To run the cross-model experiments below, you will need to clone two additional repositories.

For CogACT:

```bash
git clone https://github.com/microsoft/CogACT
cd CogACT
pip install -e .
```

For OpenPi0, you will need to clone the repo into the models directory:
```bash
git clone https://github.com/allenzren/open-pi-zero
mv open-pi-zero experiments/models/OpenPi0
cd experiments/models/OpenPi0
pip install -e .
```

To test transferability of attacks across different models:
```bash
./scripts/run_transfer_experiment.sh
```
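Transferability here means replaying a suffix optimized against one model on the others (e.g. the CogACT and OpenPi0 wrappers installed above). A hypothetical sketch, assuming each wrapper in `models` exposes a `predict_action(frame, text)` method and `is_hijacked` judges success:

```python
def transfer_rates(models: dict, episodes: list, suffix: str, is_hijacked) -> dict:
    """Fraction of episodes in which a fixed adversarial suffix, optimized
    on a single source model, also hijacks each target model."""
    rates = {}
    for name, model in models.items():
        hits = [is_hijacked(model.predict_action(ep["frame"],
                                                 ep["instruction"] + suffix))
                for ep in episodes]
        rates[name] = sum(hits) / len(hits)
    return rates
```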
To run experiments specifically targeting the TraceVLA model architecture:

```bash
./scripts/run_trace_experiment.sh
```

To run the perplexity-based defense:

```bash
# Use full VLA perplexity
./scripts/run_perplexity_defense.sh --perplexity_mode vla
# Or use LLM-only perplexity
./scripts/run_perplexity_defense.sh --perplexity_mode llm
# Run all variants
./scripts/run_perplexity_defense.sh --run_all_variants
```
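The idea behind the `llm` mode is standard perplexity filtering: GCG-style suffixes read as high-perplexity token soup, so scoring the prompt with a language model and rejecting outliers catches them (the `vla` mode presumably scores with the VLA backbone instead). A minimal sketch with an off-the-shelf scorer; the GPT-2 choice and the threshold are illustrative, not the values the scripts use:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(text: str, model, tokenizer) -> float:
    """Perplexity of `text` under a causal LM (exp of mean next-token NLL)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative scorer
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "pick up the red block describing.\\ + similarlyNow"  # suspicious suffix
if perplexity(prompt, model, tokenizer) > 1000.0:  # threshold is illustrative
    print("Rejected: prompt perplexity above threshold")
```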
To run the perturbation-based defense:

```bash
./scripts/run_perturbations_defense.sh
```

To run the system-prompt defense:

```bash
./scripts/run_sysprompt_defense.sh
```

Project structure:

```
robogcg/
├── experiments/
│   ├── defenses/        # Defense mechanism implementations
│   ├── models/          # Model wrapper implementations
│   └── single_step/     # Single-step experiment code
├── images/              # Test images for experiments
│   ├── libero_10/       # LIBERO task images
│   ├── libero_goal/     # Goal-oriented task images
│   ├── libero_object/   # Object manipulation task images
│   ├── libero_spatial/  # Spatial reasoning task images
│   └── seed/            # Seed images for experiments
├── roboGCG/             # Core implementation of the RoboGCG framework
├── README.md            # This file
├── robo_env.yml         # Conda environment specification
├── pyproject.toml       # Project metadata
└── setup.py             # Package installation configuration
```