
Commit 66393d4

Update README.md

1 parent a45b4bc commit 66393d4

File tree

1 file changed: +39, -5 lines


README.md

Lines changed: 39 additions & 5 deletions
@@ -13,16 +13,50 @@
## Sponsors

-A big thank you to my [GitHub Sponsors](https://github.com/sponsors/rwightman) for their support!
-
-In addition to the sponsors at the link above, I've received hardware and/or cloud resources from
+Thanks to the following for hardware support:
+* TPU Research Cloud (TRC) (https://sites.research.google/trc/about/)
* Nvidia (https://www.nvidia.com/en-us/)
-* TFRC (https://www.tensorflow.org/tfrc)

-I'm fortunate to be able to dedicate significant time and money of my own supporting this and other open source projects. However, as the projects increase in scope, outside support is needed to continue with the current trajectory of cloud services, hardware, and electricity costs.
+And a big thanks to all GitHub sponsors who helped with some of my costs before I joined Hugging Face.

## What's New

+### July 8, 2022
+More models, more fixes
+* Official research models (w/ weights) added:
+  * EdgeNeXt from (https://github.com/mmaaz60/EdgeNeXt)
+  * MobileViT-V2 from (https://github.com/apple/ml-cvnets)
+  * DeiT III (Revenge of the ViT) from (https://github.com/facebookresearch/deit)
+* My own models:
+  * Small `ResNet` defs added by request with 1 block repeats for both basic and bottleneck (resnet10 and resnet14)
+  * `CspNet` refactored with dataclass config, simplified CrossStage3 (`cs3`) option. These are closer to YOLO-v5+ backbone defs.
+  * More relative position vit fiddling. Two `srelpos` (shared relative position) models trained, and a medium w/ class token.
+  * Add an alternate downsample mode to EdgeNeXt and train a `small` model. Better than the original small, but not their new USI trained weights.
+* My own model weight results (all ImageNet-1k training)
+  * `resnet10t` - 66.5 @ 176, 68.3 @ 224
+  * `resnet14t` - 71.3 @ 176, 72.3 @ 224
+  * `resnetaa50` - 80.6 @ 224, 81.6 @ 288
+  * `darknet53` - 80.0 @ 256, 80.5 @ 288
+  * `cs3darknet_m` - 77.0 @ 256, 77.6 @ 288
+  * `cs3darknet_focus_m` - 76.7 @ 256, 77.3 @ 288
+  * `cs3darknet_l` - 80.4 @ 256, 80.9 @ 288
+  * `cs3darknet_focus_l` - 80.3 @ 256, 80.9 @ 288
+  * `vit_srelpos_small_patch16_224` - 81.1 @ 224, 82.1 @ 320
+  * `vit_srelpos_medium_patch16_224` - 82.3 @ 224, 83.1 @ 320
+  * `vit_relpos_small_patch16_cls_224` - 82.6 @ 224, 83.6 @ 320
+  * `edgenext_small_rw` - 79.6 @ 224, 80.4 @ 320
+* `cs3`, `darknet`, and `vit_*relpos` weights above all trained on TPU thanks to TRC program! Rest trained on overheating GPUs.
+* Hugging Face Hub support fixes verified, demo notebook TBA
+* Pretrained weights / configs can be loaded externally (i.e. from local disk) w/ support for head adaptation (see the first sketch after this list).
+* Add support to change image extensions scanned by `timm` datasets/parsers. See (https://github.com/rwightman/pytorch-image-models/pull/1274#issuecomment-1178303103)
+* Default ConvNeXt LayerNorm impl to use `F.layer_norm(x.permute(0, 2, 3, 1), ...).permute(0, 3, 1, 2)` via `LayerNorm2d` in all cases (see the second sketch after this list).
+  * a bit slower than previous custom impl on some hardware (i.e. Ampere w/ CL), but overall fewer regressions across wider HW / PyTorch version ranges.
+  * previous impl exists as `LayerNormExp2d` in `models/layers/norm.py`
+* Numerous bug fixes
+* Currently testing for imminent PyPI 0.6.x release
+* LeViT pretraining of larger models still a WIP, they don't train well / easily without distillation. Time to add distill support (finally)?
+* ImageNet-22k weight training + finetune ongoing, work on multi-weight support (slowly) chugging along (there are a LOT of weights, sigh) ...

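First sketch, for the external weights / head adaptation item above: a minimal illustration using long-standing `create_model` arguments (`checkpoint_path`, `num_classes`) rather than asserting the exact new config mechanism the note refers to; the checkpoint path is a placeholder.

```python
import timm

# Hub route: download pretrained weights and let timm adapt the classifier
# head to a new class count (the final layer is re-initialized).
model = timm.create_model('resnet50', pretrained=True, num_classes=10)

# Local route: build the model and load a checkpoint from disk instead of the hub.
# The path below is a placeholder, not a file shipped with the repo.
model = timm.create_model('resnet50', checkpoint_path='/path/to/resnet50_local.pth')

# If the local checkpoint's head does not match the target task,
# swap the classifier afterwards.
model.reset_classifier(num_classes=10)
```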
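Second sketch, for the ConvNeXt LayerNorm item: a self-contained module built around the quoted `F.layer_norm(...).permute(...)` expression. This is an illustration only, not timm's actual `LayerNorm2d` (which sits in `models/layers/norm.py` alongside the older `LayerNormExp2d`).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LayerNorm2dSketch(nn.Module):
    """LayerNorm over the channel dim of an NCHW tensor: permute to NHWC,
    apply F.layer_norm, permute back (the expression quoted above)."""

    def __init__(self, num_channels: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_channels))
        self.bias = nn.Parameter(torch.zeros(num_channels))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.layer_norm(
            x.permute(0, 2, 3, 1), self.weight.shape, self.weight, self.bias, self.eps)
        return x.permute(0, 3, 1, 2)


# quick shape check on a dummy NCHW tensor
y = LayerNorm2dSketch(64)(torch.randn(2, 64, 7, 7))
assert y.shape == (2, 64, 7, 7)
```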

### May 13, 2022
* Official Swin-V2 models and weights added from (https://github.com/microsoft/Swin-Transformer). Cleaned up to support torchscript.
* Some refactoring for existing `timm` Swin-V2-CR impl, will likely do a bit more to bring parts closer to official and decide whether to merge some aspects.
