WeGeFT: Weight‑Generative Fine‑Tuning for Multi‑Faceted Efficient Adaptation of Large Models

Chinmay Savadikar¹, Xi Song², Tianfu Wu¹
¹North Carolina State University, ²An Independent Researcher
ICML 2025
[OpenReview] | [arXiv] | [Website]

Method Overview

Performance

Visual Interpretability

PEFT integration

We provide a custom peft package with WeGeFT integration, forked from peft==0.12.0. A usage sketch is given below.
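
For orientation, the fork follows the standard peft workflow of wrapping a base model with an adapter config via get_peft_model. The sketch below uses the stock LoraConfig from peft==0.12.0 purely to illustrate that workflow; the WeGeFT-specific config class and its arguments are defined by the bundled fork, so consult it (and the experiment READMEs) for the actual names.

    # Standard peft adapter workflow, shown with the stock LoraConfig from
    # peft==0.12.0. The bundled fork is expected to expose a WeGeFT config
    # that drops into the same get_peft_model() call; the actual class name
    # and arguments come from the fork, not from this sketch.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

    config = LoraConfig(
        r=16,                                 # low-rank dimension
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    )

    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # only the adapter weights are trainable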

Language Modeling

The language modeling experiments use the custom peft package. Please refer to language_modeling/README.md for instructions on environment setup, installation, and scripts.

Visual Recognition

The visual recognition experiments use custom WeGeFT code, provided in the visual_recognition directory. Please refer to visual_recognition/README.md for instructions on environment setup, installation, and scripts.

Acknowledgements

This codebase builds on code from timm, TOAST, pyreft, and LoRA-GA. We thank the authors for their amazing work.

Citation

@inproceedings{
    savadikar2025wegeft,
    title={WeGe{FT}: Weight\nobreakdash-Generative Fine\nobreakdash-Tuning for Multi\nobreakdash-Faceted Efficient Adaptation of Large Models},
    author={Chinmay Savadikar and Xi Song and Tianfu Wu},
    booktitle={Forty-second International Conference on Machine Learning},
    year={2025},
    url={https://openreview.net/forum?id=K0sv5T2usb}
}
