This repository contains the official code release for Zero-Shot Unsupervised and Text-Based Audio Editing Using DDPM Inversion.
Install the required packages with:

```bash
python -m pip install -r requirements.txt
```
Supported models are AudioLDM, TANGO, and AudioLDM2. For unsupervised editing, Stable Diffusion is also supported.
To perform text-based editing, run:

```bash
CUDA_VISIBLE_DEVICES=<gpu_num> python main_run.py --cfg_tar <target_cfg_strength> --cfg_src <source_cfg_strength> --init_aud <input_audio_path> --target_prompt <description of the wanted edited signal> --tstart <edit from timestep> --model_id <model_name> --results_path <path to dump results>
```
You can supply a source prompt that describes the original audio by using `--source_prompt`. Use `python main_run.py --help` for all options.
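The two guidance strengths control how strongly the source and target prompts steer the denoiser via classifier-free guidance. As a rough illustration of the standard CFG combination rule (a toy sketch with made-up names, not this repository's code):

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, cfg_scale):
    """Standard CFG: extrapolate from the unconditional noise
    prediction toward the prompt-conditional one."""
    return eps_uncond + cfg_scale * (eps_cond - eps_uncond)

# Toy noise predictions (stand-ins for a diffusion model's outputs).
eps_uncond = np.zeros(4)
eps_cond = np.ones(4)

# cfg_scale = 1 reproduces the conditional prediction exactly;
# larger scales push further in the direction of the prompt.
print(classifier_free_guidance(eps_uncond, eps_cond, 1.0))  # [1. 1. 1. 1.]
print(classifier_free_guidance(eps_uncond, eps_cond, 3.0))  # [3. 3. 3. 3.]
```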
Use `--mode ddim` to run DDIM inversion and editing (note that `--tstart` must be equal to `num_diffusion_steps`, which is set to 200 by default).
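For intuition, edit-friendly DDPM inversion extracts, at every timestep, the noise vector that makes the reverse step land exactly on the sampled forward trajectory, so replaying the reverse process reconstructs the input exactly. A toy 1-D numpy sketch of that idea, using an oracle denoiser (illustrative only, not this repository's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.2, T)
abar = np.cumprod(1.0 - betas)               # alpha-bar per timestep
abar_prev = np.concatenate(([1.0], abar[:-1]))

x0 = rng.normal(size=8)                      # toy "signal"

# Edit-friendly forward pass: sample each x_t with *independent* noise.
eps = rng.normal(size=(T, 8))
xs = np.sqrt(abar)[:, None] * x0 + np.sqrt(1 - abar)[:, None] * eps

# DDPM (eta=1) posterior std per step; sigma[0] is exactly 0.
sigma = np.sqrt((1 - abar_prev) / (1 - abar) * (1 - abar / abar_prev))

def mu_reverse(x_t, t):
    """Reverse-step mean, with an oracle noise prediction (we know x0)."""
    eps_pred = (x_t - np.sqrt(abar[t]) * x0) / np.sqrt(1 - abar[t])
    return (np.sqrt(abar_prev[t]) * x0
            + np.sqrt(np.maximum(1 - abar_prev[t] - sigma[t] ** 2, 0)) * eps_pred)

# Inversion: solve for the z_t that maps x_t onto x_{t-1} of this trajectory.
zs = np.zeros((T, 8))
for t in range(T - 1, -1, -1):
    x_prev = x0 if t == 0 else xs[t - 1]
    if sigma[t] > 0:
        zs[t] = (x_prev - mu_reverse(xs[t], t)) / sigma[t]

# Replaying the reverse process with the extracted z_t reconstructs x0.
x = xs[-1]
for t in range(T - 1, -1, -1):
    x = mu_reverse(x, t) + sigma[t] * zs[t]
print(np.max(np.abs(x - x0)))  # ~0
```

Editing then amounts to keeping the extracted noise vectors while changing the text condition (or starting timestep), which is what `--tstart` and the prompts control above.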
For unsupervised editing, first extract the principal components (PCs) for your wanted timesteps:

```bash
CUDA_VISIBLE_DEVICES=<gpu_num> python main_pc_extract_inv.py --init_aud <input_audio_path> --model_id <model_name> --results_path <path to dump results> --drift_start <start extraction timestep> --drift_end <end extraction timestep> --n_evs <amount of evs to extract>
```
You can supply a source prompt that describes the original audio by using `--source_prompt`.
Then apply the PCs:

```bash
CUDA_VISIBLE_DEVICES=<gpu_num> python main_pc_apply_drift.py --extraction_path <path to extracted .pt file> --drift_start <timestep to start apply> --drift_end <timestep to end apply> --amount <edit strength> --evs <ev nums to apply>
```
By using `--use_specific_ts_pc <timestep num>` you can choose a different timestep to take the applied PCs from, instead of each timestep's own PCs.
Add `--combine_evs` to apply all the given PCs together.
Setting `--evals_pt` to an empty string will try to read the eigenvalues from the extraction path instead; this works only if the applied timesteps were included in the extraction run.
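Conceptually, the extracted PCs are top eigenvectors of a covariance associated with the denoiser's posterior, and "applying a drift" shifts the latent along a chosen eigenvector, scaled by the edit strength and the corresponding eigenvalue. A toy numpy sketch of that idea (hypothetical names, not this repository's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a posterior covariance: estimate it from samples
# whose per-dimension scales differ, so the top PC is well defined.
samples = rng.normal(size=(500, 6)) @ np.diag([3.0, 1.0, 0.5, 0.2, 0.1, 0.05])
cov = np.cov(samples, rowvar=False)

# "Extraction": eigenvalues/eigenvectors of the covariance (the PCs).
evals, evecs = np.linalg.eigh(cov)           # ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]   # reorder to descending

def apply_drift(x, ev_idx, amount):
    """Shift x along PC `ev_idx`, scaled by the eigenvalue's square root."""
    return x + amount * np.sqrt(evals[ev_idx]) * evecs[:, ev_idx]

x = np.zeros(6)
edited = apply_drift(x, ev_idx=0, amount=2.0)
```

Under this picture, `--evs` selects which eigenvectors to drift along, `--amount` sets the scale, and `--drift_start`/`--drift_end` bound the timesteps at which the shift is applied.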
Use `python main_pc_extract_inv.py --help` and `python main_pc_apply_drift.py --help` for all options.
To recreate the random-vectors baseline, use `--rand_v`. Image samples can be recreated using `images_pc_extract_inv.py` and `images_pc_apply_drift.py`.
SDEdit can be run similarly with:

```bash
CUDA_VISIBLE_DEVICES=<gpu_num> python main_run_sdedit.py --cfg_tar <target_cfg_strength> --init_aud <input_audio_path> --target_prompt <description of the wanted edited signal> --tstart <edit from timestep> --model_id <model_name> --results_path <path to dump results>
```
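In SDEdit, `--tstart` sets how much noise is added before denoising toward the target prompt: the larger the start timestep, the less of the original signal survives, trading faithfulness for edit strength. A toy sketch of the forward-noising step behind that trade-off (illustrative, with an assumed linear beta schedule, not this repository's code):

```python
import numpy as np

T = 200
betas = np.linspace(1e-4, 0.02, T)
abar = np.cumprod(1.0 - betas)   # fraction of signal variance kept at each t

def noise_to_timestep(x0, tstart, rng):
    """SDEdit-style corruption: jump directly to timestep `tstart`."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(abar[tstart - 1]) * x0 + np.sqrt(1 - abar[tstart - 1]) * eps

x0 = np.ones(16)
x_t = noise_to_timestep(x0, tstart=100, rng=np.random.default_rng(0))

# The signal-to-noise ratio shrinks monotonically as tstart grows,
# which is why larger tstart values allow stronger edits.
snr = abar / (1 - abar)
print(snr[49] > snr[99] > snr[199])  # True
```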
Use `python main_run_sdedit.py --help` for all options. Image samples can be recreated using `images_run_sdedit.py`.
If you use this code for your research, please cite our paper:
```bibtex
@article{manor2024zeroshot,
    title={Zero-Shot Unsupervised and Text-Based Audio Editing Using {DDPM} Inversion},
    author={Manor, Hila and Michaeli, Tomer},
    journal={arXiv preprint arXiv:2402.10009},
    year={2024},
}
```
Parts of this code are heavily based on DDPM Inversion and on Gaussian Denoising Posterior.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.