---
description: State-of-the-art Parameter-Efficient Fine-Tuning methods
---

# 🎨 PEFT

{% hint style="info" %} PEFT methods enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model's parameters. {% endhint %}

PEFT fine-tunes only a small number of (extra) model parameters, greatly decreasing computational and storage costs. According to the official repository, recent state-of-the-art PEFT techniques achieve performance comparable to full fine-tuning.
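
As a quick illustration, here is a minimal sketch of wrapping a pre-trained model with a LoRA adapter using the `peft` library. The base model `facebook/opt-350m` and the hyperparameter values are illustrative choices, not prescribed by this page:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a pre-trained base model (illustrative choice).
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Configure a LoRA adapter: only the low-rank update matrices are trained,
# while the base model's weights stay frozen.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,  # downstream task type
    r=8,                           # rank of the update matrices
    lora_alpha=16,                 # scaling factor for the updates
    lora_dropout=0.1,
)

model = get_peft_model(model, config)
# Prints the trainable parameter count, typically well under 1% of the total.
model.print_trainable_parameters()
```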

## Use Cases

{% hint style="info" %} Get comparable performance to full fine-tuning by adapting LLMs to downstream tasks using consumer hardware. {% endhint %}

PEFT methods fine-tune only a small number of (extra) model parameters, significantly decreasing computational and storage costs.
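
The storage savings follow from saving only the adapter weights, so each fine-tuned variant is a small checkpoint rather than a full copy of the model. A sketch continuing the example above (the adapter directory name is hypothetical):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Save only the trained adapter weights, not the full base model.
model.save_pretrained("opt-350m-lora-adapter")  # hypothetical output directory

# Later: reload by attaching the saved adapter to the frozen base model.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(base, "opt-350m-lora-adapter")
```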

## Reference

https://huggingface.co/docs/peft/index#peft