Release 0.3.0 is published. We will drop MindSpore 1.x in a future release.
- New models:
- Features:
- AsymmetricLoss & JSDCrossEntropy
- Augmentations Split
- Customized AMP
- Bug fixes:
- Since the classifier weights were not fully deleted, you may have encountered an error when passing in `num_classes` while creating a pre-trained model (see the sketch below).
- Refactoring:
- Documentation:
- A guide of how to extract multiscale features from backbone.
- A guide of how to finetune the pre-trained model on a custom dataset.
- BREAKING CHANGES:
- We are going to drop support for MindSpore 1.x since it has reached EOL.
- The configuration `filter_bias_and_bn` will be removed and renamed to `weight_decay_filter`, due to a prolonged misunderstanding of the MindSpore optimizer. We will migrate the existing training recipes, but the function signature change will be incompatible, and the old-version training recipes will also be incompatible. See PR/752 for details.
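Below is a minimal sketch of the pattern the bug fix above addresses: overriding `num_classes` while loading pretrained weights. It assumes the standard `mindcv.models.create_model` factory; the model name and class count are placeholders.

```python
from mindcv.models import create_model

# Load ImageNet-pretrained weights but replace the classifier head.
# Before this fix, leftover classifier weights could cause a shape-mismatch
# error when num_classes differed from the pretrained head.
model = create_model(
    "resnet50",       # placeholder model name
    pretrained=True,  # download and load pretrained weights
    num_classes=10,   # new classifier head for a 10-class task
)
```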
- New version `0.2.2` is released! We upgrade to support MindSpore v2.0 while maintaining compatibility with v1.8.
- New models:
- ConvNextV2
- mini of CoAT
- 1.3 of MnasNet
- AMP(O3) version of ShuffleNetV2
- New features:
- Gradient Accumulation
- DynamicLossScale for customized TrainStep
- OneCycleLR and CyclicLR learning rate scheduler
- Refactored Logging
- Pyramid Feature Extraction (see the sketch below)
- Bug fixes:
- Serving Deployment Tutorial (mobilenet_v3 doesn't work on ms1.8 when using the Ascend backend)
- Some broken links on our documentation website.
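A rough sketch of how pyramid feature extraction is typically used; it assumes a timm-style `features_only`/`out_indices` interface on `create_model`, which may differ from the actual API described in the multiscale-feature guide.

```python
import mindspore as ms
from mindcv.models import create_model

# Backbone that returns intermediate feature maps instead of classification logits.
# `features_only` and `out_indices` are assumed timm-style arguments; check the
# multiscale-feature guide for the exact interface.
backbone = create_model("resnet50", pretrained=True, features_only=True, out_indices=[1, 2, 3, 4])

x = ms.ops.ones((1, 3, 224, 224), ms.float32)
features = backbone(x)   # expected: a list of feature maps at increasing strides
for f in features:
    print(f.shape)
```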
- 2023/6/2
- New version `0.2.1` is released!
- New documents are online!
- 2023/5/30
- New Models:
- AMP(O2) version of VGG
- GhostNet
- AMP(O3) version of MobileNetV2 and MobileNetV3
- (x,y)_(200,400,600,800)mf of RegNet
- b1g2, b1g4 & b2g4 of RepVGG
- 0.5 of MnasNet
- b3 & b4 of PVTv2
- New Features:
- 3-Augment, Augmix, TrivialAugmentWide
- Bug Fixes:
- ViT pooling mode
- 2023/04/28
- Add some new models, listed as follows
- Bug fixes:
- Setting the same random seed for each rank
- Checking if options from yaml config exist in argument parser
- Initializing flag variable as `Tensor` in the Adan optimizer
- 2023/03/25
- Update checkpoints for pretrained ResNet for better accuracy
- ResNet18 (from 70.09 to 70.31 @Top1 accuracy)
- ResNet34 (from 73.69 to 74.15 @Top1 accuracy)
- ResNet50 (from 76.64 to 76.69 @Top1 accuracy)
- ResNet101 (from 77.63 to 78.24 @Top1 accuracy)
- ResNet152 (from 78.63 to 78.72 @Top1 accuracy)
- Rename checkpoint file names to follow the naming rule (`{model_scale}-{sha256sum}.ckpt`) and update download URLs.
- 2023/03/05
- Add Lion (EvoLved Sign Momentum) optimizer from the paper https://arxiv.org/abs/2302.06675
- To replace AdamW with Lion, the LR is usually 3-10x smaller and the weight decay is usually 3-10x larger than for AdamW (see the sketch below).
- Add 6 new models with training recipes and pretrained weights
- Support gradient clipping
- Arg name `use_ema` changed to `ema`; add `ema: True` in the yaml config to enable EMA.
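A hedged sketch of switching a recipe from AdamW to Lion following the rule of thumb above (smaller LR, larger weight decay). It assumes `mindcv.optim.create_optimizer` accepts a `"lion"` key; the numbers are illustrative only.

```python
from mindcv.models import create_model
from mindcv.optim import create_optimizer

model = create_model("resnet50")

# AdamW baseline (illustrative): lr=1e-3, weight_decay=0.05.
# Lion rule of thumb from the note above: LR ~3-10x smaller, weight decay ~3-10x larger.
optimizer = create_optimizer(
    model.trainable_params(),
    opt="lion",         # assumed optimizer key for Lion
    lr=2e-4,            # ~5x smaller than the AdamW baseline
    weight_decay=0.25,  # ~5x larger than the AdamW baseline
)
```

In a yaml recipe, EMA would additionally be enabled via `ema: True` as noted above.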
- 2023/01/10
- MindCV v0.1 is released! It can now be installed via PyPI: `pip install mindcv`.
- Add training recipes and trained weights of googlenet, inception_v3, inception_v4, xception
- 2022/12/09
- Support lr warmup for all lr scheduling algorithms besides cosine decay (see the scheduler sketch below).
- Add repeated augmentation, which can be enabled by setting `--aug_repeats` to a value larger than 1 (typically, 3 or 4 is a common choice).
- Add EMA.
- Improve BCE loss to support mixup/cutmix.
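A small sketch of combining warmup with a non-cosine schedule via `mindcv.scheduler.create_scheduler`; the scheduler name and argument set are assumptions based on the current API, and the values are placeholders.

```python
from mindcv.scheduler import create_scheduler

# Warmup combined with a non-cosine schedule (step decay here).
lr_scheduler = create_scheduler(
    steps_per_epoch=500,     # placeholder: batches per epoch
    scheduler="step_decay",  # assumed name; cosine would be "cosine_decay"
    lr=0.1,
    warmup_epochs=5,         # linear warmup before the decay schedule starts
    decay_epochs=30,
    decay_rate=0.1,
    num_epochs=90,
)
```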
- 2022/11/21
- Add visualization for loss and acc curves
- Support epoch-wise lr warmup cosine decay (previously step-wise)
- 2022/11/09
- Add 7 pretrained ViT models.
- Add RandAugment augmentation (see the transform sketch below).
- Fix CutMix efficiency issue so that CutMix and Mixup can be used together.
- Fix lr plot and scheduling bug.
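A rough sketch of enabling RandAugment in the training transform pipeline; it assumes `mindcv.data.create_transforms` takes an `auto_augment` policy string, whose exact format may differ.

```python
from mindcv.data import create_transforms

# Training transforms with RandAugment enabled.
# The policy string "randaug-m9-mstd0.5" (magnitude 9, magnitude std 0.5) is an
# assumed timm-style format; check the data API for the supported syntax.
train_transforms = create_transforms(
    dataset_name="imagenet",
    image_resize=224,
    is_training=True,
    auto_augment="randaug-m9-mstd0.5",
)
```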
- 2022/10/12
- Both BCE and CE loss now support class-weight config, label smoothing, and auxiliary logit input (for networks like inception).
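A brief sketch of these loss options; it assumes the `mindcv.loss.create_loss` factory with `weight`, `label_smoothing`, and `aux_factor` arguments, and the values are placeholders.

```python
import mindspore as ms
from mindcv.loss import create_loss

# Per-class weights, e.g. for an imbalanced 5-class dataset (placeholder values).
class_weights = ms.Tensor([1.0, 2.0, 0.5, 1.0, 1.5], ms.float32)

# CE loss with class weights, label smoothing, and an auxiliary-logit term
# (aux_factor weights the auxiliary head of networks such as Inception/GoogLeNet).
loss_fn = create_loss(
    name="CE",
    weight=class_weights,
    label_smoothing=0.1,
    aux_factor=0.4,
)
```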
- 2022/09/13
- Add Adan optimizer (experimental)
- `mindcv.models` now exposes `num_classes` and `in_channels` as constructor arguments (see the sketch after the model list below):
- Add DenseNet models and pre-trained weights
- Add GoogLeNet models and pre-trained weights
- Add Inception V3 models and pre-trained weights
- Add Inception V4 models and pre-trained weights
- Add MnasNet models and pre-trained weights
- Add MobileNet V1 models and pre-trained weights
- Add MobileNet V2 models and pre-trained weights
- Add MobileNet V3 models and pre-trained weights
- Add ResNet models and pre-trained weights
- Add ShuffleNet V1 models and pre-trained weights
- Add ShuffleNet V2 models and pre-trained weights
- Add SqueezeNet models and pre-trained weights
- Add VGG models and pre-trained weights
- Add ViT models and pre-trained weights
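A short sketch of the constructor arguments mentioned above, using one of the listed model families; the per-model factory function name is an assumption based on the usual entry points.

```python
from mindcv.models import resnet18

# ResNet-18 for a 10-class task with single-channel (grayscale) inputs.
# num_classes sizes the classifier head; in_channels sets the stem input channels.
net = resnet18(num_classes=10, in_channels=1)
```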
- `mindcv.data` now exposes (see the data pipeline sketch at the end of this release's notes):
- Add Mnist dataset
- Add FashionMnist dataset
- Add Imagenet dataset
- Add CIFAR10 dataset
- Add CIFAR100 dataset
- `mindcv.loss` now exposes:
- Add BCELoss
- Add CrossEntropyLoss
- `mindcv.optim` now exposes:
- Add SGD optimizer
- Add Momentum optimizer
- Add Adam optimizer
- Add AdamWeightDecay optimizer
- Add RMSProp optimizer
- Add Adagrad optimizer
- Add Lamb optimizer
- `mindcv.scheduler` now exposes:
- Add WarmupCosineDecay learning rate scheduler
- Add ExponentialDecayLR learning rate scheduler
- Add Constant learning rate scheduler
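A minimal data pipeline sketch for the `mindcv.data` interfaces listed above; it assumes `create_dataset`, `create_transforms`, and `create_loader` with their current-style signatures, and the dataset path is a placeholder.

```python
from mindcv.data import create_dataset, create_transforms, create_loader

# CIFAR-10 training split (the root path is a placeholder).
dataset = create_dataset(name="cifar10", root="./data/cifar10", split="train", shuffle=True)

# Standard training transforms for CIFAR-10.
transforms = create_transforms(dataset_name="cifar10", is_training=True)

# Batch the dataset and apply the transforms.
loader = create_loader(
    dataset=dataset,
    batch_size=64,
    is_training=True,
    transform=transforms,
)
```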