Commit 7d44d65

Update README and changelogs

Parent: d875a1d
3 files changed: +124 -212 lines changed
README.md

Lines changed: 17 additions & 180 deletions
@@ -21,6 +21,20 @@ And a big thanks to all GitHub sponsors who helped with some of my costs before
## What's New

### July 27, 2022
* All runtime benchmark and validation result csv files are finally up to date!
* A few more weights & model defs added:
  * `darknetaa53` - 79.8 @ 256, 80.5 @ 288
  * `convnext_nano` - 80.8 @ 224, 81.5 @ 288
  * `cs3sedarknet_l` - 81.2 @ 256, 81.8 @ 288
  * `cs3darknet_x` - 81.8 @ 256, 82.2 @ 288
  * `cs3sedarknet_x` - 82.2 @ 256, 82.7 @ 288
  * `cs3edgenet_x` - 82.2 @ 256, 82.7 @ 288
  * `cs3se_edgenet_x` - 82.8 @ 256, 83.5 @ 320
* Add output_stride=8 and 16 support to ConvNeXt (dilation); see the usage sketch after this list
* Fix `deit3` models not being able to resize pos_emb
* Version 0.6.7 PyPi release (w/ above bug fixes and new weights since 0.6.5)
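For illustration, a minimal usage sketch of the new ConvNeXt dilation support (the model name and the `features_only` path are just one way to exercise it; assumes a timm build that includes this change):

```python
import timm
import torch

# Request a dilated ConvNeXt feature extractor; with output_stride=8 the
# deepest feature maps stay at 1/8 input resolution instead of the usual 1/32.
model = timm.create_model('convnext_nano', features_only=True, output_stride=8, pretrained=False)

features = model(torch.randn(1, 3, 224, 224))
for f in features:
    print(f.shape)  # spatial dims stop shrinking once stride 8 is reached
```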
### July 8, 2022
More models, more fixes
* Official research models (w/ weights) added:
@@ -178,185 +192,6 @@ More models, more fixes
* SGDP and AdamP still won't work with PyTorch XLA but others should (have yet to test Adabelief, Adafactor, Adahessian myself).
* EfficientNet-V2 XL TF ported weights added, but they don't validate well in PyTorch (L is better). The pre-processing for the V2 TF training is a bit diff and the fine-tuned 21k -> 1k weights are very sensitive and less robust than the 1k weights.
* Added PyTorch trained EfficientNet-V2 'Tiny' w/ GlobalContext attn weights. Only .1-.2 top-1 better than the SE so more of a curiosity for those interested.

### July 12, 2021
* Add XCiT models from [official facebook impl](https://github.com/facebookresearch/xcit). Contributed by [Alexander Soare](https://github.com/alexander-soare)

### July 5-9, 2021
* Add `efficientnetv2_rw_t` weights, a custom 'tiny' 13.6M param variant that is a bit better than (non NoisyStudent) B3 models. Both faster and better accuracy (at same or lower res)
  * top-1 82.34 @ 288x288 and 82.54 @ 320x320
* Add [SAM pretrained](https://arxiv.org/abs/2106.01548) in1k weight for ViT B/16 (`vit_base_patch16_sam_224`) and B/32 (`vit_base_patch32_sam_224`) models.
* Add 'Aggregating Nested Transformer' (NesT) w/ weights converted from official [Flax impl](https://github.com/google-research/nested-transformer). Contributed by [Alexander Soare](https://github.com/alexander-soare).
  * `jx_nest_base` - 83.534, `jx_nest_small` - 83.120, `jx_nest_tiny` - 81.426

### June 23, 2021
* Reproduce gMLP model training, `gmlp_s16_224` trained to 79.6 top-1, matching [paper](https://arxiv.org/abs/2105.08050). Hparams for this and other recent MLP training [here](https://gist.github.com/rwightman/d6c264a9001f9167e06c209f630b2cc6)

### June 20, 2021
* Release Vision Transformer 'AugReg' weights from [How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers](https://arxiv.org/abs/2106.10270)
  * .npz weight loading support added, can load any of the 50K+ weights from the [AugReg series](https://console.cloud.google.com/storage/browser/vit_models/augreg)
  * See [example notebook](https://colab.research.google.com/github/google-research/vision_transformer/blob/master/vit_jax_augreg.ipynb) from [official impl](https://github.com/google-research/vision_transformer/) for navigating the augreg weights
  * Replaced all default weights w/ best AugReg variant (if possible). All AugReg 21k classifiers work.
    * Highlights: `vit_large_patch16_384` (87.1 top-1), `vit_large_r50_s32_384` (86.2 top-1), `vit_base_patch16_384` (86.0 top-1)
  * `vit_deit_*` renamed to just `deit_*`
  * Remove my old small model, replace with DeiT compatible small w/ AugReg weights
* Add 1st training of my `gmixer_24_224` MLP w/ GLU, 78.1 top-1 w/ 25M params.
* Add weights from official ResMLP release (https://github.com/facebookresearch/deit)
* Add `eca_nfnet_l2` weights from my 'lightweight' series. 84.7 top-1 at 384x384.
* Add distilled BiT 50x1 student and 152x2 Teacher weights from [Knowledge distillation: A good teacher is patient and consistent](https://arxiv.org/abs/2106.05237)
* NFNets and ResNetV2-BiT models work w/ PyTorch XLA now
  * weight standardization uses F.batch_norm instead of std_mean (std_mean wasn't lowered); see the sketch after this list
  * eps values adjusted, will be slight differences but should be quite close
* Improve test coverage and classifier interface of non-conv (vision transformer and mlp) models
* Cleanup a few classifier / flatten details for models w/ conv classifiers or early global pool
* Please report any regressions, this PR touched quite a few models.
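As a reference for that XLA workaround, here's a minimal stand-alone sketch of weight standardization expressed through `F.batch_norm` (an illustrative module, not timm's exact `ScaledStdConv2d`):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d w/ weight standardization via F.batch_norm.

    Expressing the per-filter standardization as a batch_norm call avoids
    torch.std_mean, which wasn't lowered by PyTorch XLA at the time.
    """
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = F.batch_norm(
            self.weight.reshape(1, self.out_channels, -1),  # treat each filter as a 'channel'
            running_mean=None, running_var=None,            # no running stats, always recompute
            training=True, momentum=0., eps=1e-5,
        ).reshape_as(self.weight)
        return F.conv2d(x, weight, self.bias, self.stride, self.padding, self.dilation, self.groups)

# usage: drop-in replacement for nn.Conv2d
conv = WSConv2d(16, 32, 3, padding=1)
out = conv(torch.randn(2, 16, 8, 8))
```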
### June 8, 2021
* Add first ResMLP weights, trained in PyTorch XLA on TPU-VM w/ my XLA branch. 24 block variant, 79.2 top-1.
* Add ResNet51-Q model w/ pretrained weights at 82.36 top-1.
  * NFNet inspired block layout with quad layer stem and no maxpool
  * Same param count (35.7M) and throughput as ResNetRS-50 but +1.5 top-1 @ 224x224 and +2.5 top-1 at 288x288

### May 25, 2021
* Add LeViT, Visformer, ConViT (PR by Aman Arora), Twins (PR by paper authors) transformer models
* Add ResMLP and gMLP MLP vision models to the existing MLP Mixer impl
* Fix a number of torchscript issues with various vision transformer models
* Cleanup input_size/img_size override handling and improve testing / test coverage for all vision transformer and MLP models
* More flexible pos embedding resize (non-square) for ViT and TnT. Thanks [Alexander Soare](https://github.com/alexander-soare)
* Add `efficientnetv2_rw_m` model and weights (started training before official code). 84.8 top-1, 53M params.

### May 14, 2021
* Add EfficientNet-V2 official model defs w/ ported weights from official [Tensorflow/Keras](https://github.com/google/automl/tree/master/efficientnetv2) impl.
  * 1k trained variants: `tf_efficientnetv2_s/m/l`
  * 21k trained variants: `tf_efficientnetv2_s/m/l_in21k`
  * 21k pretrained -> 1k fine-tuned: `tf_efficientnetv2_s/m/l_in21ft1k`
  * v2 models w/ v1 scaling: `tf_efficientnetv2_b0` through `b3`
  * Rename my prev V2 guess `efficientnet_v2s` -> `efficientnetv2_rw_s`
  * Some blank `efficientnetv2_*` models in-place for future native PyTorch training

### May 5, 2021
* Add MLP-Mixer models and port pretrained weights from [Google JAX impl](https://github.com/google-research/vision_transformer/tree/linen)
* Add CaiT models and pretrained weights from [FB](https://github.com/facebookresearch/deit)
* Add ResNet-RS models and weights from [TF](https://github.com/tensorflow/tpu/tree/master/models/official/resnet/resnet_rs). Thanks [Aman Arora](https://github.com/amaarora)
* Add CoaT models and weights. Thanks [Mohammed Rizin](https://github.com/morizin)
* Add new ImageNet-21k weights & finetuned weights for TResNet, MobileNet-V3, ViT models. Thanks [mrT](https://github.com/mrT23)
* Add GhostNet models and weights. Thanks [Kai Han](https://github.com/iamhankai)
* Update ByoaNet attention modules
  * Improve SA module inits
  * Hack together experimental stand-alone Swin based attn module and `swinnet`
  * Consistent '26t' model defs for experiments.
* Add improved EfficientNet-V2S (prelim model def) weights. 83.8 top-1.
* WandB logging support

### April 13, 2021
* Add Swin Transformer models and weights from https://github.com/microsoft/Swin-Transformer

### April 12, 2021
* Add ECA-NFNet-L1 (slimmed down F1 w/ SiLU, 41M params) trained with this code. 84% top-1 @ 320x320. Trained at 256x256.
* Add EfficientNet-V2S model (unverified model definition) weights. 83.3 top-1 @ 288x288. Only trained single res 224. Working on progressive training.
* Add ByoaNet model definition (Bring-your-own-attention) w/ SelfAttention block and corresponding SA/SA-like modules and model defs
  * Lambda Networks - https://arxiv.org/abs/2102.08602
  * Bottleneck Transformers - https://arxiv.org/abs/2101.11605
  * Halo Nets - https://arxiv.org/abs/2103.12731
* Adabelief optimizer contributed by Juntang Zhuang

### April 1, 2021
* Add snazzy `benchmark.py` script for bulk `timm` model benchmarking of train and/or inference
* Add Pooling-based Vision Transformer (PiT) models (from https://github.com/naver-ai/pit)
  * Merged distilled variant into main for torchscript compatibility
  * Some `timm` cleanup/style tweaks and weights have hub download support
* Cleanup Vision Transformer (ViT) models
  * Merge distilled (DeiT) model into main so that torchscript can work
  * Support updated weight init (defaults to old still) that closer matches original JAX impl (possibly better training from scratch)
  * Separate hybrid model defs into different file and add several new model defs to fiddle with, support patch_size != 1 for hybrids
  * Fix fine-tuning num_class changes (PiT and ViT) and pos_embed resizing (Vit) with distilled variants
  * nn.Sequential for block stack (does not break downstream compat)
* TnT (Transformer-in-Transformer) models contributed by author (from https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/TNT)
* Add RegNetY-160 weights from DeiT teacher model
* Add new NFNet-L0 w/ SE attn (rename `nfnet_l0b`->`nfnet_l0`) weights 82.75 top-1 @ 288x288
* Some fixes/improvements for TFDS dataset wrapper

### March 17, 2021
* Add new ECA-NFNet-L0 (rename `nfnet_l0c`->`eca_nfnet_l0`) weights trained by myself.
  * 82.6 top-1 @ 288x288, 82.8 @ 320x320, trained at 224x224
  * Uses SiLU activation, approx 2x faster than `dm_nfnet_f0` and 50% faster than `nfnet_f0s` w/ 1/3 param count
* Integrate [Hugging Face model hub](https://huggingface.co/models) into timm create_model and default_cfg handling for pretrained weight and config sharing (more on this soon!); see the sketch after this list
* Merge HardCoRe NAS models contributed by https://github.com/yoniaflalo
* Merge PyTorch trained EfficientNet-EL and pruned ES/EL variants contributed by [DeGirum](https://github.com/DeGirum)
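A usage sketch of the hub integration (the `hf_hub:` prefix reflects the form used when this landed; later timm versions also accept `hf-hub:`, and the repo id below is illustrative):

```python
import timm

# Pull pretrained weights + default_cfg directly from the Hugging Face hub.
# Repo id is illustrative; substitute any timm-compatible hub repository.
model = timm.create_model('hf_hub:timm/eca_nfnet_l0', pretrained=True)
model.eval()
```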
### March 7, 2021
* First 0.4.x PyPi release w/ NFNets (& related), ByoB (GPU-Efficient, RepVGG, etc).
* Change feature extraction for pre-activation nets (NFNets, ResNetV2) to return features before activation.
* Tested with PyTorch 1.8 release. Updated CI to use 1.8.
* Benchmarked several arch on RTX 3090, Titan RTX, and V100 across 1.7.1, 1.8, NGC 20.12, and 21.02. Some interesting performance variations to take note of https://gist.github.com/rwightman/bb59f9e245162cee0e38bd66bd8cd77f

### Feb 18, 2021
* Add pretrained weights and model variants for NFNet-F* models from [DeepMind Haiku impl](https://github.com/deepmind/deepmind-research/tree/master/nfnets).
  * Models are prefixed with `dm_`. They require SAME padding conv, skipinit enabled, and activation gains applied in act fn.
  * These models are big, expect to run out of GPU memory. With the GELU activation + other options, they are roughly 1/2 the inference speed of my SiLU PyTorch optimized `s` variants.
  * Original model results are based on pre-processing that is not the same as all other models so you'll see different results in the results csv (once updated).
  * Matching the original pre-processing as closely as possible I get these results:
    * `dm_nfnet_f6` - 86.352
    * `dm_nfnet_f5` - 86.100
    * `dm_nfnet_f4` - 85.834
    * `dm_nfnet_f3` - 85.676
    * `dm_nfnet_f2` - 85.178
    * `dm_nfnet_f1` - 84.696
    * `dm_nfnet_f0` - 83.464

### Feb 16, 2021
* Add Adaptive Gradient Clipping (AGC) as per https://arxiv.org/abs/2102.06171. Integrated w/ PyTorch gradient clipping via a mode arg that defaults to the previous 'norm' mode. For backwards arg compat, the clip-grad arg must be specified to enable clipping when using train.py. A stand-alone sketch of the technique follows this list.
  * AGC w/ default clipping factor: `--clip-grad .01 --clip-mode agc`
  * PyTorch global norm of 1.0 (old behaviour, always norm): `--clip-grad 1.0`
  * PyTorch value clipping of 10: `--clip-grad 10. --clip-mode value`
* AGC performance is definitely sensitive to the clipping factor. More experimentation needed to determine good values for smaller batch sizes and optimizers besides those in the paper. So far I've found .001-.005 is necessary for stable RMSProp training w/ NFNet/NF-ResNet.
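For reference, a minimal stand-alone sketch of AGC-style clipping (illustrative helper names; timm wires this into its clipping utilities rather than this exact code):

```python
import torch

def unitwise_norm(x: torch.Tensor) -> torch.Tensor:
    # L2 norm per output unit: whole-tensor norm for biases/scalars,
    # per-row norm (all dims except 0) for weight matrices/filters.
    if x.ndim <= 1:
        return x.norm(p=2)
    return x.norm(p=2, dim=tuple(range(1, x.ndim)), keepdim=True)

@torch.no_grad()
def adaptive_clip_grad_(parameters, clip_factor: float = 0.01, eps: float = 1e-3):
    # Clip each gradient so its unit-wise norm stays within clip_factor x the
    # matching parameter norm (https://arxiv.org/abs/2102.06171).
    for p in parameters:
        if p.grad is None:
            continue
        p_norm = unitwise_norm(p).clamp_(min=eps)
        g_norm = unitwise_norm(p.grad)
        max_norm = p_norm * clip_factor
        scaled = p.grad * (max_norm / g_norm.clamp(min=1e-6))
        p.grad.copy_(torch.where(g_norm > max_norm, scaled, p.grad))

# usage, in place of torch.nn.utils.clip_grad_norm_: after loss.backward(),
# call adaptive_clip_grad_(model.parameters(), clip_factor=0.01)
```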
### Feb 12, 2021
* Update Normalization-Free nets to include new NFNet-F (https://arxiv.org/abs/2102.06171) model defs

### Feb 10, 2021
* First Normalization-Free model training experiments done:
  * nf_resnet50 - 80.68 top-1 @ 288x288, 80.31 @ 256x256
  * nf_regnet_b1 - 79.30 @ 288x288, 78.75 @ 256x256
* More model archs, incl a flexible ByobNet backbone ('Bring-your-own-blocks')
  * GPU-Efficient-Networks (https://github.com/idstcv/GPU-Efficient-Networks), impl in `byobnet.py`
  * RepVGG (https://github.com/DingXiaoH/RepVGG), impl in `byobnet.py`
  * classic VGG (from torchvision, impl in `vgg.py`)
* Refinements to normalizer layer arg handling and normalizer+act layer handling in some models
* Default AMP mode changed to native PyTorch AMP instead of APEX. Issues not being fixed with APEX. Native works with `--channels-last` and `--torchscript` model training, APEX does not. A minimal native-AMP sketch follows this list.
* Fix a few bugs introduced since last pypi release
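For context, here's a minimal sketch of a native PyTorch AMP training step (the model, data, and optimizer are illustrative stand-ins, not the train.py code):

```python
import torch
import torch.nn as nn

# illustrative stand-ins; train.py wires up real models, data, and optimizers
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1000)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 underflow

images = torch.randn(8, 3, 224, 224, device='cuda')
targets = torch.randint(0, 1000, (8,), device='cuda')

optimizer.zero_grad()
with torch.cuda.amp.autocast():        # run the forward pass in mixed precision
    loss = criterion(model(images), targets)
scaler.scale(loss).backward()          # backward on the scaled loss
scaler.step(optimizer)                 # unscales grads, then steps the optimizer
scaler.update()
```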
### Feb 8, 2021
* Add several ResNet weights with ECA attention. 26t & 50t trained @ 256, test @ 320. 269d train @ 256, fine-tune @ 320, test @ 352.
  * `ecaresnet26t` - 79.88 top-1 @ 320x320, 79.08 @ 256x256
  * `ecaresnet50t` - 82.35 top-1 @ 320x320, 81.52 @ 256x256
  * `ecaresnet269d` - 84.93 top-1 @ 352x352, 84.87 @ 320x320
* Remove separate tiered (`t`) vs tiered_narrow (`tn`) ResNet model defs, all `tn` changed to `t` and `t` models removed (`seresnext26t_32x4d` only model w/ weights that was removed).
* Support model default_cfgs with separate train vs test resolution `test_input_size` and remove extra `_320` suffix ResNet model defs that were just for test.

### Jan 30, 2021
* Add initial "Normalization Free" NF-RegNet-B* and NF-ResNet model definitions based on [paper](https://arxiv.org/abs/2101.08692)

### Jan 25, 2021
* Add ResNetV2 Big Transfer (BiT) models w/ ImageNet-1k and 21k weights from https://github.com/google-research/big_transfer
* Add official R50+ViT-B/16 hybrid models + weights from https://github.com/google-research/vision_transformer
* ImageNet-21k ViT weights are added w/ model defs and representation layer (pre logits) support
  * NOTE: ImageNet-21k classifier heads were zero'd in original weights, they are only useful for transfer learning
* Add model defs and weights for DeiT Vision Transformer models from https://github.com/facebookresearch/deit
* Refactor dataset classes into ImageDataset/IterableImageDataset + dataset specific parser classes
* Add Tensorflow-Datasets (TFDS) wrapper to allow use of TFDS image classification sets with train script
  * Ex: `train.py /data/tfds --dataset tfds/oxford_iiit_pet --val-split test --model resnet50 -b 256 --amp --num-classes 37 --opt adamw --lr 3e-4 --weight-decay .001 --pretrained -j 2`
* Add improved .tar dataset parser that reads images from .tar, folder of .tar files, or .tar within .tar
  * Run validation on full ImageNet-21k directly from tar w/ BiT model: `validate.py /data/fall11_whole.tar --model resnetv2_50x1_bitm_in21k --amp`
* Models in this update should be stable w/ possible exception of ViT/BiT, possibility of some regressions with train/val scripts and dataset handling

### Jan 3, 2021
* Add SE-ResNet-152D weights
  * 256x256 val, 0.94 crop top-1 - 83.75
  * 320x320 val, 1.0 crop - 84.36
* Update [results files](results/)
## Introduction

@@ -379,7 +214,8 @@ A full version of the list below with source links can be found in the [document
* ConvNeXt - https://arxiv.org/abs/2201.03545
* ConViT (Soft Convolutional Inductive Biases Vision Transformers) - https://arxiv.org/abs/2103.10697
* CspNet (Cross-Stage Partial Networks) - https://arxiv.org/abs/1911.11929
- * DeiT (Vision Transformer) - https://arxiv.org/abs/2012.12877
+ * DeiT - https://arxiv.org/abs/2012.12877
+ * DeiT-III - https://arxiv.org/pdf/2204.07118.pdf
* DenseNet - https://arxiv.org/abs/1608.06993
* DLA - https://arxiv.org/abs/1707.06484
* DPN (Dual-Path Network) - https://arxiv.org/abs/1707.01629

@@ -411,6 +247,7 @@ A full version of the list below with source links can be found in the [document
* HardCoRe-NAS - https://arxiv.org/abs/2102.11646
* LCNet - https://arxiv.org/abs/2109.15099
* MobileViT - https://arxiv.org/abs/2110.02178
+ * MobileViT-V2 - https://arxiv.org/abs/2206.02680
* NASNet-A - https://arxiv.org/abs/1707.07012
* NesT - https://arxiv.org/abs/2105.12723
* NFNet-F - https://arxiv.org/abs/2102.06171

docs/archived_changes.md

Lines changed: 32 additions & 0 deletions
@@ -1,5 +1,37 @@
# Archived Changes

### July 12, 2021
* Add XCiT models from [official facebook impl](https://github.com/facebookresearch/xcit). Contributed by [Alexander Soare](https://github.com/alexander-soare)

### July 5-9, 2021
* Add `efficientnetv2_rw_t` weights, a custom 'tiny' 13.6M param variant that is a bit better than (non NoisyStudent) B3 models. Both faster and better accuracy (at same or lower res)
  * top-1 82.34 @ 288x288 and 82.54 @ 320x320
* Add [SAM pretrained](https://arxiv.org/abs/2106.01548) in1k weight for ViT B/16 (`vit_base_patch16_sam_224`) and B/32 (`vit_base_patch32_sam_224`) models.
* Add 'Aggregating Nested Transformer' (NesT) w/ weights converted from official [Flax impl](https://github.com/google-research/nested-transformer). Contributed by [Alexander Soare](https://github.com/alexander-soare).
  * `jx_nest_base` - 83.534, `jx_nest_small` - 83.120, `jx_nest_tiny` - 81.426

### June 23, 2021
* Reproduce gMLP model training, `gmlp_s16_224` trained to 79.6 top-1, matching [paper](https://arxiv.org/abs/2105.08050). Hparams for this and other recent MLP training [here](https://gist.github.com/rwightman/d6c264a9001f9167e06c209f630b2cc6)

### June 20, 2021
* Release Vision Transformer 'AugReg' weights from [How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers](https://arxiv.org/abs/2106.10270)
  * .npz weight loading support added, can load any of the 50K+ weights from the [AugReg series](https://console.cloud.google.com/storage/browser/vit_models/augreg)
  * See [example notebook](https://colab.research.google.com/github/google-research/vision_transformer/blob/master/vit_jax_augreg.ipynb) from [official impl](https://github.com/google-research/vision_transformer/) for navigating the augreg weights
  * Replaced all default weights w/ best AugReg variant (if possible). All AugReg 21k classifiers work.
    * Highlights: `vit_large_patch16_384` (87.1 top-1), `vit_large_r50_s32_384` (86.2 top-1), `vit_base_patch16_384` (86.0 top-1)
  * `vit_deit_*` renamed to just `deit_*`
  * Remove my old small model, replace with DeiT compatible small w/ AugReg weights
* Add 1st training of my `gmixer_24_224` MLP w/ GLU, 78.1 top-1 w/ 25M params.
* Add weights from official ResMLP release (https://github.com/facebookresearch/deit)
* Add `eca_nfnet_l2` weights from my 'lightweight' series. 84.7 top-1 at 384x384.
* Add distilled BiT 50x1 student and 152x2 Teacher weights from [Knowledge distillation: A good teacher is patient and consistent](https://arxiv.org/abs/2106.05237)
* NFNets and ResNetV2-BiT models work w/ PyTorch XLA now
  * weight standardization uses F.batch_norm instead of std_mean (std_mean wasn't lowered)
  * eps values adjusted, will be slight differences but should be quite close
* Improve test coverage and classifier interface of non-conv (vision transformer and mlp) models
* Cleanup a few classifier / flatten details for models w/ conv classifiers or early global pool
* Please report any regressions, this PR touched quite a few models.

### June 8, 2021
* Add first ResMLP weights, trained in PyTorch XLA on TPU-VM w/ my XLA branch. 24 block variant, 79.2 top-1.
* Add ResNet51-Q model w/ pretrained weights at 82.36 top-1.