## What's New
### July 27, 2022
* All runtime benchmark and validation result CSV files are finally up-to-date!
* A few more weights & model defs added:
  * `darknetaa53` - 79.8 @ 256, 80.5 @ 288
  * `convnext_nano` - 80.8 @ 224, 81.5 @ 288
  * `cs3sedarknet_l` - 81.2 @ 256, 81.8 @ 288
  * `cs3darknet_x` - 81.8 @ 256, 82.2 @ 288
  * `cs3sedarknet_x` - 82.2 @ 256, 82.7 @ 288
  * `cs3edgenet_x` - 82.2 @ 256, 82.7 @ 288
  * `cs3se_edgenet_x` - 82.8 @ 256, 83.5 @ 320
* Add `output_stride=8` and `16` support to ConvNeXt (dilation); see the sketch after this list
* Fix `deit3` models not being able to resize pos_emb
* Version 0.6.7 PyPI release (w/ above bug fixes and new weights since 0.6.5)
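
As a rough illustration of the dilation support (a sketch, not taken from the repo docs; the model choice, use of `features_only`, and the expected shapes are assumptions):

```python
import torch
import timm

# Sketch: build a dilated ConvNeXt feature extractor. With output_stride=16 the deepest
# stage uses dilation instead of stride, so the last feature map is 1/16 of the input
# resolution rather than the usual 1/32.
model = timm.create_model('convnext_nano', pretrained=False, features_only=True, output_stride=16)

x = torch.randn(1, 3, 224, 224)
for fmap in model(x):
    print(fmap.shape)  # the final map should be 14x14 for a 224x224 input (224 / 16)
```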
### July 8, 2022
More models, more fixes
* Official research models (w/ weights) added:
* SGDP and AdamP still won't work with PyTorch XLA but others should (have yet to test Adabelief, Adafactor, Adahessian myself).
* EfficientNet-V2 XL TF ported weights added, but they don't validate well in PyTorch (L is better). The pre-processing for the V2 TF training is a bit different, and the fine-tuned 21k -> 1k weights are very sensitive and less robust than the 1k weights.
* Added PyTorch trained EfficientNet-V2 'Tiny' w/ GlobalContext attn weights. Only .1-.2 top-1 better than the SE variant, so more of a curiosity for those interested.

### July 12, 2021
* Add XCiT models from [official facebook impl](https://github.com/facebookresearch/xcit). Contributed by [Alexander Soare](https://github.com/alexander-soare)

### July 5-9, 2021
* Add `efficientnetv2_rw_t` weights, a custom 'tiny' 13.6M param variant that is a bit better than (non NoisyStudent) B3 models. Both faster and better accuracy (at same or lower res)
  * top-1 82.34 @ 288x288 and 82.54 @ 320x320
* Add [SAM pretrained](https://arxiv.org/abs/2106.01548) in1k weights for ViT B/16 (`vit_base_patch16_sam_224`) and B/32 (`vit_base_patch32_sam_224`) models.
* Add 'Aggregating Nested Transformer' (NesT) w/ weights converted from official [Flax impl](https://github.com/google-research/nested-transformer). Contributed by [Alexander Soare](https://github.com/alexander-soare).
* Reproduce gMLP model training, `gmlp_s16_224` trained to 79.6 top-1, matching [paper](https://arxiv.org/abs/2105.08050). Hparams for this and other recent MLP training [here](https://gist.github.com/rwightman/d6c264a9001f9167e06c209f630b2cc6)

### June 20, 2021
* Release Vision Transformer 'AugReg' weights from [How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers](https://arxiv.org/abs/2106.10270)
  * .npz weight loading support added, can load any of the 50K+ weights from the [AugReg series](https://console.cloud.google.com/storage/browser/vit_models/augreg); see the loading sketch after this list
  * See [example notebook](https://colab.research.google.com/github/google-research/vision_transformer/blob/master/vit_jax_augreg.ipynb) from [official impl](https://github.com/google-research/vision_transformer/) for navigating the augreg weights
  * Replaced all default weights w/ best AugReg variant (if possible). All AugReg 21k classifiers work.
* Remove my old small model, replace with DeiT compatible small w/ AugReg weights
* Add 1st training of my `gmixer_24_224` MLP w/ GLU, 78.1 top-1 w/ 25M params.
* Add weights from official ResMLP release (https://github.com/facebookresearch/deit)
* Add `eca_nfnet_l2` weights from my 'lightweight' series. 84.7 top-1 at 384x384.
* Add distilled BiT 50x1 student and 152x2 Teacher weights from [Knowledge distillation: A good teacher is patient and consistent](https://arxiv.org/abs/2106.05237)
* NFNets and ResNetV2-BiT models work w/ PyTorch XLA now
  * eps values adjusted, will be slight differences but should be quite close
* Improve test coverage and classifier interface of non-conv (vision transformer and mlp) models
* Cleanup a few classifier / flatten details for models w/ conv classifiers or early global pool
* Please report any regressions, this PR touched quite a few models.
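
A minimal .npz loading sketch (the file name below is a placeholder and the exact keyword plumbing is an assumption): a local AugReg checkpoint can be passed via `checkpoint_path`, which routes `.npz` files through the ViT-specific loader.

```python
import timm

# Placeholder name for an AugReg .npz checkpoint downloaded from the bucket linked above.
npz_path = 'augreg_vit_base_patch16_224.npz'

# A .npz checkpoint_path is handled by the ViT npz loader, so the JAX/Flax weights
# are converted and loaded into the matching PyTorch model def.
model = timm.create_model('vit_base_patch16_224', pretrained=False, checkpoint_path=npz_path)
```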

### June 8, 2021
* Add first ResMLP weights, trained in PyTorch XLA on TPU-VM w/ my XLA branch. 24 block variant, 79.2 top-1.
* Add ResNet51-Q model w/ pretrained weights at 82.36 top-1.
  * NFNet inspired block layout with quad layer stem and no maxpool
  * Same param count (35.7M) and throughput as ResNetRS-50 but +1.5 top-1 @ 224x224 and +2.5 top-1 at 288x288

### May 25, 2021
* Add LeViT, Visformer, ConViT (PR by Aman Arora), Twins (PR by paper authors) transformer models
* Add ResMLP and gMLP MLP vision models to the existing MLP Mixer impl
* Fix a number of torchscript issues with various vision transformer models
* Cleanup input_size/img_size override handling and improve testing / test coverage for all vision transformer and MLP models
* More flexible pos embedding resize (non-square) for ViT and TnT. Thanks [Alexander Soare](https://github.com/alexander-soare)
* Add `efficientnetv2_rw_m` model and weights (started training before official code). 84.8 top-1, 53M params.

### May 14, 2021
* Add EfficientNet-V2 official model defs w/ ported weights from official [Tensorflow/Keras](https://github.com/google/automl/tree/master/efficientnetv2) impl.
  * v2 models w/ v1 scaling: `tf_efficientnetv2_b0` through `b3`
  * Rename my prev V2 guess `efficientnet_v2s` -> `efficientnetv2_rw_s`
  * Some blank `efficientnetv2_*` models in-place for future native PyTorch training

### May 5, 2021
* Add MLP-Mixer models and port pretrained weights from [Google JAX impl](https://github.com/google-research/vision_transformer/tree/linen)
* Add CaiT models and pretrained weights from [FB](https://github.com/facebookresearch/deit)
* Add ResNet-RS models and weights from [TF](https://github.com/tensorflow/tpu/tree/master/models/official/resnet/resnet_rs). Thanks [Aman Arora](https://github.com/amaarora)
* Add CoaT models and weights. Thanks [Mohammed Rizin](https://github.com/morizin)
* Add new ImageNet-21k weights & finetuned weights for TResNet, MobileNet-V3, ViT models. Thanks [mrT](https://github.com/mrT23)
* Add GhostNet models and weights. Thanks [Kai Han](https://github.com/iamhankai)
* Update ByoaNet attention modules
  * Improve SA module inits
  * Hack together experimental stand-alone Swin based attn module and `swinnet`
  * Consistent '26t' model defs for experiments.
* Add improved EfficientNet-V2S (prelim model def) weights. 83.8 top-1.
* WandB logging support

### April 13, 2021
* Add Swin Transformer models and weights from https://github.com/microsoft/Swin-Transformer

### April 12, 2021
* Add ECA-NFNet-L1 (slimmed down F1 w/ SiLU, 41M params) trained with this code. 84% top-1 @ 320x320. Trained at 256x256.
* Add EfficientNet-V2S model (unverified model definition) weights. 83.3 top-1 @ 288x288. Only trained single res 224. Working on progressive training.
* Add ByoaNet model definition (Bring-your-own-attention) w/ SelfAttention block and corresponding SA/SA-like modules and model defs
* Uses SiLU activation, approx 2x faster than `dm_nfnet_f0` and 50% faster than `nfnet_f0s` w/ 1/3 param count
* Integrate [Hugging Face model hub](https://huggingface.co/models) into timm create_model and default_cfg handling for pretrained weight and config sharing (more on this soon!); a hub loading sketch follows this list
* Merge HardCoRe NAS models contributed by https://github.com/yoniaflalo
* Merge PyTorch trained EfficientNet-EL and pruned ES/EL variants contributed by [DeGirum](https://github.com/DeGirum)
* Change feature extraction for pre-activation nets (NFNets, ResNetV2) to return features before activation.
* Tested with PyTorch 1.8 release. Updated CI to use 1.8.
* Benchmarked several archs on RTX 3090, Titan RTX, and V100 across 1.7.1, 1.8, NGC 20.12, and 21.02. Some interesting performance variations to take note of https://gist.github.com/rwightman/bb59f9e245162cee0e38bd66bd8cd77f
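
A hub-loading sketch (the `hf_hub:` prefix spelling and the example repo name are illustrative assumptions, not an endorsement of a particular checkpoint):

```python
import timm

# Prefixing the model name with 'hf_hub:' asks create_model to resolve the default_cfg
# from the Hugging Face hub repo and download the pretrained weights from it.
model = timm.create_model('hf_hub:nateraw/resnet18-random', pretrained=True)
print(model.default_cfg)
```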

### Feb 18, 2021
* Add pretrained weights and model variants for NFNet-F* models from [DeepMind Haiku impl](https://github.com/deepmind/deepmind-research/tree/master/nfnets).
  * Models are prefixed with `dm_`. They require SAME padding conv, skipinit enabled, and activation gains applied in act fn.
  * These models are big, expect to run out of GPU memory. With the GELU activation + other options, they are roughly 1/2 the inference speed of my SiLU PyTorch optimized `s` variants.
  * Original model results are based on pre-processing that is not the same as all other models so you'll see different results in the results CSV (once updated).
  * Matching the original pre-processing as closely as possible I get these results:
    * `dm_nfnet_f6` - 86.352
    * `dm_nfnet_f5` - 86.100
    * `dm_nfnet_f4` - 85.834
    * `dm_nfnet_f3` - 85.676
    * `dm_nfnet_f2` - 85.178
    * `dm_nfnet_f1` - 84.696
    * `dm_nfnet_f0` - 83.464

### Feb 16, 2021
* Add Adaptive Gradient Clipping (AGC) as per https://arxiv.org/abs/2102.06171. Integrated w/ PyTorch gradient clipping via mode arg that defaults to prev 'norm' mode. For backward arg compat, clip-grad arg must be specified to enable when using train.py; a usage sketch follows this list.
  * PyTorch global norm of 1.0 (old behaviour, always norm), `--clip-grad 1.0`
  * PyTorch value clipping of 10, `--clip-grad 10. --clip-mode value`
* AGC performance is definitely sensitive to the clipping factor. More experimentation needed to determine good values for smaller batch sizes and optimizers besides those in paper. So far I've found .001-.005 is necessary for stable RMSProp training w/ NFNet/NF-ResNet.
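
A rough sketch of the clipping modes outside of train.py (the `dispatch_clip_grad` import path and signature are assumptions based on the utils added here); the equivalent train.py flags would be along the lines of `--clip-grad 0.01 --clip-mode agc`:

```python
import torch
import timm
from timm.utils import dispatch_clip_grad  # assumed import path for the clip-grad dispatcher

model = timm.create_model('nf_resnet50', pretrained=False)
out = model(torch.randn(2, 3, 224, 224))
out.sum().backward()

# mode='norm' and mode='value' map to the standard PyTorch clipping utils,
# while mode='agc' applies Adaptive Gradient Clipping with the given factor.
dispatch_clip_grad(model.parameters(), value=0.01, mode='agc')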

### Feb 12, 2021
* Update Normalization-Free nets to include new NFNet-F (https://arxiv.org/abs/2102.06171) model defs

### Feb 10, 2021
* First Normalization-Free model training experiments done.
* More model archs, incl a flexible ByobNet backbone ('Bring-your-own-blocks')
  * GPU-Efficient-Networks (https://github.com/idstcv/GPU-Efficient-Networks), impl in `byobnet.py`
  * RepVGG (https://github.com/DingXiaoH/RepVGG), impl in `byobnet.py`
  * classic VGG (from torchvision, impl in `vgg.py`)
* Refinements to normalizer layer arg handling and normalizer+act layer handling in some models
* Default AMP mode changed to native PyTorch AMP instead of APEX. Issues not being fixed with APEX. Native works with `--channels-last` and `--torchscript` model training, APEX does not.
* Fix a few bugs introduced since last pypi release

### Feb 8, 2021
* Add several ResNet weights with ECA attention. 26t & 50t trained @ 256, test @ 320. 269d train @ 256, fine-tune @ 320, test @ 352.
* Remove separate tiered (`t`) vs tiered_narrow (`tn`) ResNet model defs, all `tn` changed to `t` and `t` models removed (`seresnext26t_32x4d` only model w/ weights that was removed).
* Support model default_cfgs with separate train vs test resolution `test_input_size` and remove extra `_320` suffix ResNet model defs that were just for test (see the sketch below).
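
A small sketch of what that looks like from the API side (the model choice and whether it actually defines the key are assumptions):

```python
import timm

# Sketch: models with a separate test resolution carry a `test_input_size` entry in their
# default_cfg alongside the train-time `input_size`.
model = timm.create_model('ecaresnet50t', pretrained=False)
print(model.default_cfg.get('input_size'))       # train resolution, e.g. (3, 256, 256)
print(model.default_cfg.get('test_input_size'))  # test resolution if defined, else None
```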

### Jan 30, 2021
* Add initial "Normalization Free" NF-RegNet-B* and NF-ResNet model definitions based on [paper](https://arxiv.org/abs/2101.08692)

### Jan 25, 2021
* Add ResNetV2 Big Transfer (BiT) models w/ ImageNet-1k and 21k weights from https://github.com/google-research/big_transfer
* Add official R50+ViT-B/16 hybrid models + weights from https://github.com/google-research/vision_transformer
* ImageNet-21k ViT weights are added w/ model defs and representation layer (pre logits) support
  * NOTE: ImageNet-21k classifier heads were zero'd in original weights, they are only useful for transfer learning
* Add model defs and weights for DeiT Vision Transformer models from https://github.com/facebookresearch/deit
* Refactor dataset classes into ImageDataset/IterableImageDataset + dataset specific parser classes
* Add Tensorflow-Datasets (TFDS) wrapper to allow use of TFDS image classification sets with train script; a minimal usage sketch follows this list
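
A minimal usage sketch (the dataset name, root path, and exact factory signature are assumptions); the train script equivalent would be passing something like `--dataset tfds/<name>` with the appropriate `--train-split`/`--val-split`:

```python
from timm.data import create_dataset

# Sketch: a 'tfds/' prefix routes dataset creation through the TFDS wrapper
# (an IterableImageDataset), so TFDS image classification sets can feed the training loop.
train_ds = create_dataset(
    'tfds/oxford_flowers102', root='/data/tfds', split='train',
    is_training=True, batch_size=8)
```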
0 commit comments