A collection of tricks to simplify and speed up transformer models:
- Slim attention: [paper], [video], [podcast], [notebook], [code-readme], [reddit]
- Matrix-shrink [work in progress]: [paper]
- Flash normalization: [paper], [podcast], [notebook], [code-readme]
- Precomputing the first layer: [paper], [podcast]
- Removing weights from skipless transformers: [paper], [podcast], [notebook]
Many of these tricks follow a recent trend of removing parts from neural networks, such as RMSNorm's removal of mean centering from LayerNorm, PaLM's removal of bias parameters, the decoder-only transformer's removal of the encoder stack, and of course the original transformer's revolutionary removal of recurrent layers.
For example, our FlashNorm removes the weights from RMSNorm and merges them into the next linear layer, and Slim Attention eliminates the entire V-cache from the context memory of MHA transformers; see the sketch below.
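The following is a minimal PyTorch sketch of both ideas, not the package's actual API; all names, shapes, and the assumption of a bias-free linear layer after RMSNorm are illustrative. It checks that the RMSNorm weight vector can be folded into the columns of the next linear layer (FlashNorm), and that V can be recomputed from the K-cache via V = K (W_K^-1 W_V) when W_K is square and invertible, as in MHA (Slim Attention).

```python
# Minimal sketch of both tricks; all names and shapes are illustrative assumptions.
import torch

torch.manual_seed(0)
d, n = 16, 5                                    # model width, sequence length
X = torch.randn(n, d, dtype=torch.float64)      # token activations

# --- FlashNorm: fold the RMSNorm weights g into the next linear layer -------
g = torch.randn(d, dtype=torch.float64)         # RMSNorm weight vector
W = torch.randn(3 * d, d, dtype=torch.float64)  # next linear layer, shape (out, in)
x_norm = X / X.norm(dim=-1, keepdim=True) * d**0.5   # weightless RMSNorm
ref    = (x_norm * g) @ W.T                     # weighted RMSNorm, then linear
merged = x_norm @ (W * g).T                     # g folded into the columns of W
assert torch.allclose(ref, merged)

# --- Slim Attention: recompute V from the K-cache (MHA, square W_K) ---------
W_K = torch.randn(d, d, dtype=torch.float64)    # assumed invertible
W_V = torch.randn(d, d, dtype=torch.float64)
K, V = X @ W_K, X @ W_V
V_from_K = K @ torch.linalg.solve(W_K, W_V)     # V = K @ (W_K^-1 @ W_V)
assert torch.allclose(V, V_from_K)
```

See the linked papers and notebooks for the exact formulations and the supported model configurations.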
Install the transformer tricks package:
pip install transformer-tricks
Alternatively, to run from the latest repo:
git clone https://github.com/OpenMachine-ai/transformer-tricks.git
cd transformer-tricks
pip3 install --quiet -r requirements.txt
Follow the links below for documentation of the Python code in this directory:
The papers are accompanied by the following Jupyter notebooks:
Please subscribe to our newsletter on Substack to get the latest news about this project. We will never send you more than one email per month.
We pay cash for high-impact contributions. Please check out CONTRIBUTING for how to get involved.
The Transformer Tricks project is currently sponsored by OpenMachine. We'd love to hear from you if you'd like to join us in supporting this project.