Commits
18 commits
1e7dcd2
Update pre-commit repo revs.
MGAMZ Nov 7, 2025
8a1da0c
Remove `mdformat-openmmlab` and `mdformat_frontmatter` as they are no…
MGAMZ Nov 7, 2025
534bf31
`fix-encoding-pragma` has been removed -- use `pyupgrade` from https:…
MGAMZ Nov 7, 2025
9288d43
Fix detected flake8 F824.
MGAMZ Nov 7, 2025
bc0d1ef
Auto modified by pre-commit hooks after pre-commit autoupdate.
MGAMZ Nov 7, 2025
5f34de3
Fix mypy error: No overload variant of "join" matches argument types …
MGAMZ Nov 7, 2025
33b4c02
Fix mypy error: No overload variant of "__add__" of "tuple" matches a…
MGAMZ Nov 7, 2025
e019d37
Fix mypy error: "<typing special form>" has no attribute "__args__" …
MGAMZ Nov 7, 2025
1eb88bb
Fix mypy error: Argument 2 to "copy" has incompatible type "str | Pat…
MGAMZ Nov 7, 2025
b6f6622
Fix mmengine/runner/runner.py:1428: error: "Any" not callable [misc].
MGAMZ Nov 7, 2025
7817d70
Fix mypy mmengine/runner/_flexible_runner.py:866: error: "Any" not ca…
MGAMZ Nov 7, 2025
23af792
Release `flake8` `max-line-length` to 90 to avoid aggressive code mod…
MGAMZ Nov 8, 2025
b49853d
Accomplish TODO: if filepath or filepaths are Path, should return Path.
MGAMZ Nov 8, 2025
bee0fbe
Fix test script: To disable validation during training, It's better t…
MGAMZ Nov 8, 2025
43e0d4d
`fileio.join_path` will return `pathlib.Path` or `str` according to i…
MGAMZ Nov 8, 2025
d893fcd
Revert pre-commit modifications related to `mdformat`, and it auto mo…
MGAMZ Nov 9, 2025
c5f1eb4
Fix the incorrect return type annotation and fix the resulting mypy e…
MGAMZ Nov 15, 2025
12d2411
Improve readability
MGAMZ Nov 15, 2025
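Commits b49853d and 43e0d4d above make `fileio.join_path` return `pathlib.Path` or `str` according to its input type. A minimal sketch of that behavior (illustrative only; the actual mmengine implementation routes through storage backends):

```python
from pathlib import Path
from typing import Union


def join_path(filepath: Union[str, Path],
              *filepaths: Union[str, Path]) -> Union[str, Path]:
    # Preserve the caller's type: Path in -> Path out, str in -> str out.
    joined = Path(filepath).joinpath(*filepaths)
    return joined if isinstance(filepath, Path) else str(joined)
```

This keeps downstream code type-stable: callers that pass `Path` objects can keep chaining `Path` methods on the result.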
23 changes: 11 additions & 12 deletions .pre-commit-config.yaml
@@ -1,33 +1,32 @@
exclude: ^tests/data/
repos:
- repo: https://github.com/pre-commit/pre-commit
rev: v4.0.0
rev: v4.3.0
hooks:
- id: validate_manifest
- repo: https://github.com/PyCQA/flake8
rev: 7.1.1
rev: 7.3.0
hooks:
- id: flake8
args: [--max-line-length=90]
- repo: https://github.com/PyCQA/isort
rev: 5.11.5
rev: 7.0.0
hooks:
- id: isort
- repo: https://github.com/pre-commit/mirrors-yapf
rev: v0.32.0
- repo: https://github.com/google/yapf
rev: v0.43.0
hooks:
- id: yapf
additional_dependencies: [toml]
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
rev: v6.0.0
hooks:
- id: trailing-whitespace
- id: check-yaml
- id: end-of-file-fixer
- id: requirements-txt-fixer
- id: double-quote-string-fixer
- id: check-merge-conflict
- id: fix-encoding-pragma
args: ["--remove"]
- id: mixed-line-ending
args: ["--fix=lf"]
- repo: https://github.com/executablebooks/mdformat
@@ -40,12 +39,12 @@ repos:
- mdformat_frontmatter
- linkify-it-py
- repo: https://github.com/myint/docformatter
rev: 06907d0
rev: v1.7.7
hooks:
- id: docformatter
args: ["--in-place", "--wrap-descriptions", "79"]
- repo: https://github.com/asottile/pyupgrade
rev: v3.0.0
rev: v3.21.0
hooks:
- id: pyupgrade
args: ["--py36-plus"]
@@ -56,7 +55,7 @@ repos:
args: ["mmengine", "tests"]
- id: remove-improper-eol-in-cn-docs
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.2.0
rev: v1.18.2
hooks:
- id: mypy
exclude: |-
@@ -67,6 +66,6 @@ repos:
additional_dependencies: ["types-setuptools", "types-requests", "types-PyYAML"]
- repo: https://github.com/astral-sh/uv-pre-commit
# uv version.
rev: 0.9.5
rev: 0.9.7
hooks:
- id: uv-lock
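The removed `fix-encoding-pragma` hook deleted the UTF-8 coding comment, which is redundant on Python 3; `pyupgrade` covers the same cleanup among its other modernizations. A rough illustration of the transformation (a simplified regex sketch, not pyupgrade's actual logic):

```python
import re

# Matches a PEP 263 encoding declaration such as "# -*- coding: utf-8 -*-".
PRAGMA = re.compile(r'^#.*coding[:=]\s*utf-?8.*$\n?',
                    re.IGNORECASE | re.MULTILINE)


def strip_encoding_pragma(source: str) -> str:
    # A pragma may only appear on line 1 or 2, so one removal suffices.
    return PRAGMA.sub('', source, count=1)
```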
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -75,7 +75,7 @@ pre-commit run --all-files

If the installation process is interrupted, you can repeatedly run `pre-commit run ... ` to continue the installation.

If the code does not conform to the code style specification, pre-commit will raise a warning and fix some of the errors automatically.

<img src="https://user-images.githubusercontent.com/57566630/202369176-67642454-0025-4023-a095-263529107aa3.png" width="1200">

4 changes: 2 additions & 2 deletions docs/en/advanced_tutorials/basedataset.md
@@ -291,7 +291,7 @@ The above is not fully initialized by setting `lazy_init=True`, and then complet

When reading data, the dataloader usually prefetches from multiple workers, and each worker holds a complete copy of the dataset object, so multiple copies of the same `data_list` exist in memory. To reduce this memory consumption, `BaseDataset` can serialize `data_list` into memory in advance, so that all workers share a single copy of `data_list`.

By default, the `BaseDataset` stores the serialized `data_list` in memory. You can control whether the data is serialized into memory ahead of time with the `serialize_data` argument (default: `True`):

```python
pipeline = [
@@ -374,7 +374,7 @@ MMEngine provides `ClassBalancedDataset` wrapper to repeatedly sample the corres

**Notice:**

The `ClassBalancedDataset` wrapper assumes that the wrapped dataset class supports the `get_cat_ids(idx)` method, which returns a list containing the categories of the `data_info` at index `idx`. The usage is as follows:

```python
from mmengine.dataset import BaseDataset, ClassBalancedDataset
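The class-balanced resampling that the wrapper performs can be sketched with the LVIS-style repeat-factor heuristic (a hedged illustration; the helper name and formula here are assumptions, not the wrapper's exact code):

```python
import math
from collections import Counter


def repeat_factors(cat_ids_per_sample, oversample_thr):
    # Category frequency: fraction of samples containing each category.
    num_samples = len(cat_ids_per_sample)
    freq = Counter(c for cats in cat_ids_per_sample for c in set(cats))
    # Rare categories (frequency below the threshold) get a repeat > 1.
    cat_repeat = {c: max(1.0, math.sqrt(oversample_thr / (n / num_samples)))
                  for c, n in freq.items()}
    # A sample is repeated as often as its rarest category demands.
    return [max(cat_repeat[c] for c in cats) for cats in cat_ids_per_sample]
```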
2 changes: 1 addition & 1 deletion docs/en/advanced_tutorials/cross_library.md
@@ -59,7 +59,7 @@ train_pipeline=[

Using an algorithm from another library is a little bit complex.

An algorithm contains multiple submodules. Each submodule needs to add a prefix to its `type`. Take using MMDetection's YOLOX in MMTracking as an example:

```python
# Use custom_imports to register mmdet models to the registry
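A config fragment using scope-prefixed types might look like the following (a hedged sketch; the exact YOLOX sub-module fields in MMDetection may differ):

```python
# After custom_imports registers mmdet models (as in the truncated
# snippet above), every type gets the 'mmdet.' scope prefix so the
# registry resolves it in MMDetection's scope instead of MMTracking's.
model = dict(
    type='mmdet.YOLOX',
    backbone=dict(type='mmdet.CSPDarknet'),
    bbox_head=dict(type='mmdet.YOLOXHead'),
)
```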
4 changes: 2 additions & 2 deletions docs/en/advanced_tutorials/initialize.md
@@ -239,7 +239,7 @@ Although the `init_cfg` could control the initialization method for different mo

Assuming we've defined the following modules:

- `ToyConv` inherits from `nn.Module` and implements `init_weights`, which initializes `custom_weight` (a parameter of `ToyConv`) with 1 and `custom_bias` with 0

- `ToyNet` defines a `ToyConv` submodule.

@@ -353,7 +353,7 @@ from mmengine.model import normal_init
normal_init(model, mean=0, std=0.01, bias=0)
```

Similarly, we could also use [Kaiming](https://arxiv.org/abs/1502.01852) initialization and [Xavier](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) initialization:

```python
from mmengine.model import kaiming_init, xavier_init
6 changes: 3 additions & 3 deletions docs/en/advanced_tutorials/model_analysis.md
@@ -1,14 +1,14 @@
# Model Complexity Analysis

We provide a tool to help with the complexity analysis of a network. We borrow the idea from the implementation of [fvcore](https://github.com/facebookresearch/fvcore) to build this tool, and plan to support more custom operators in the future. Currently, it provides interfaces to compute the "FLOPs", "Activations" and "Parameters" of a given model, and supports printing the related information layer-by-layer in terms of network structure or as a table. The analysis tool provides both operator-level and module-level flop counts simultaneously. Please refer to [Flop Count](https://github.com/facebookresearch/fvcore/blob/main/docs/flop_count.md) for implementation details of how to accurately measure the flops of one operator if interested.

## Definition

The model complexity has three indicators, namely floating-point operations (FLOPs), activations, and parameters. Their definitions are as follows:

- FLOPs

Floating-point operations (FLOPs) is not a clearly defined indicator. Here, we refer to the description in [detectron2](https://detectron2.readthedocs.io/en/latest/modules/fvcore.html#fvcore.nn.FlopCountAnalysis), which defines a set of multiply-accumulate operations as 1 FLOP.

- Activations

@@ -180,4 +180,4 @@ We provide more options to support custom output
- `input_shape`: (tuple) the shape of the input, e.g., (3, 224, 224)
- `inputs`: (optional: torch.Tensor), if given, `input_shape` will be ignored
- `show_table`: (bool) whether to return the statistics in the form of a table, default: True
- `show_arch`: (bool) whether to return the statistics by network layers, default: True
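Under the multiply-accumulate convention above, per-layer FLOP counts reduce to simple arithmetic. A hedged back-of-envelope helper (not the tool's actual counters):

```python
def linear_flops(in_features: int, out_features: int) -> int:
    # One multiply-accumulate = 1 FLOP, per the detectron2/fvcore convention.
    return in_features * out_features


def conv2d_flops(c_in: int, c_out: int, kernel: int,
                 h_out: int, w_out: int) -> int:
    # MACs per output element = c_in * kernel^2, for c_out * h_out * w_out
    # output elements; one MAC counts as 1 FLOP.
    return c_in * c_out * kernel * kernel * h_out * w_out
```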
2 changes: 1 addition & 1 deletion docs/en/advanced_tutorials/registry.md
@@ -274,7 +274,7 @@ print(output)

### How does the parent node know about the child registry?

When working in our `MMAlpha`, it might be necessary to use the `Runner` class defined in MMEngine. This class is in charge of building most of the objects. If these objects are added to the child registry (`MMAlpha`), how is `MMEngine` able to find them? It cannot; `MMEngine` needs to switch its registry search scope from `MMEngine` to `MMAlpha`, according to the scope defined in default_runtime.py, to find the target class.

We can also init the scope accordingly, see example below:

2 changes: 1 addition & 1 deletion docs/en/design/evaluation.md
@@ -73,7 +73,7 @@ Usually, the process of model accuracy evaluation is shown in the figure below.

**Online evaluation**: The test data is usually divided into batches. Through a loop, each batch is fed into the model in turn, yielding corresponding predictions, and the test data and model predictions are passed to the evaluator. The evaluator calls the `process()` method of the `Metric` to process the data and prediction results. When the loop ends, the evaluator calls the `evaluate()` method of the metrics to calculate the model accuracy of the corresponding metrics.

**Offline evaluation**: Similar to the online evaluation process, the difference is that the pre-saved model predictions are read directly to perform the evaluation. The evaluator provides the `offline_evaluate` interface for calling the `Metric`s to calculate the model accuracy in an offline way. In order to avoid memory overflow caused by processing a large amount of data at the same time, the offline evaluation divides the test data and prediction results into chunks for processing, similar to the batches in online evaluation.
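The chunked offline flow described above can be sketched as follows (illustrative names; a toy metric stands in for a real `Metric`):

```python
class MeanAbsError:
    """Toy metric mimicking the process()/evaluate() protocol."""

    def __init__(self):
        self._errors = []

    def process(self, data_batch, pred_batch):
        self._errors += [abs(d - p) for d, p in zip(data_batch, pred_batch)]

    def evaluate(self):
        return sum(self._errors) / len(self._errors)


def offline_evaluate(metric, data, predictions, chunk_size=2):
    # Feed the pre-saved predictions chunk by chunk to bound peak memory,
    # mirroring the batches of online evaluation.
    for i in range(0, len(data), chunk_size):
        metric.process(data[i:i + chunk_size], predictions[i:i + chunk_size])
    return metric.evaluate()
```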

<div align="center">
<img src="https://user-images.githubusercontent.com/15977946/187579113-279f097c-3530-40c4-9cd3-1bb0ce2fa452.png" width="500"/>
2 changes: 1 addition & 1 deletion docs/en/design/infer.md
@@ -92,7 +92,7 @@ When performing inference, the following steps are typically executed:
3. visualize: Visualization of predicted results.
4. postprocess: Post-processing of predicted results, including result format conversion, exporting predicted results, etc.

To improve the user experience of the inferencer, we do not want users to have to configure parameters for each step. In other words, we hope that users can simply configure parameters for the `__call__` interface to complete inference, without being aware of the above process.

The `__call__` interface will execute the aforementioned steps in order, but it is not aware of which step the parameters provided by the user should be assigned to. Therefore, when developing a `CustomInferencer`, developers need to define four class attributes: `preprocess_kwargs`, `forward_kwargs`, `visualize_kwargs`, and `postprocess_kwargs`. Each attribute is a set of strings that are used to specify which step the parameters in the `__call__` interface correspond to:

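A hedged sketch of how `__call__` could route its `**kwargs` to each step using the four class-attribute sets (the attribute values here are made up for illustration):

```python
class CustomInferencer:
    # Each set names the __call__ keyword arguments owned by that step.
    preprocess_kwargs = {'batch_size'}
    forward_kwargs = set()
    visualize_kwargs = {'show'}
    postprocess_kwargs = {'return_json'}

    def dispatch(self, **kwargs):
        # Group the caller's kwargs by the step that declared them.
        steps = ('preprocess', 'forward', 'visualize', 'postprocess')
        return {
            step: {k: v for k, v in kwargs.items()
                   if k in getattr(self, f'{step}_kwargs')}
            for step in steps
        }
```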
2 changes: 1 addition & 1 deletion docs/en/design/logging.md
@@ -10,7 +10,7 @@

![image](https://user-images.githubusercontent.com/57566630/163441489-47999f3a-3259-44ab-949c-77a8a599faa5.png)

Each scalar (losses, learning rates, etc.) during training is encapsulated by HistoryBuffer, managed by MessageHub in key-value pairs, formatted by LogProcessor and then exported to various visualization backends by [LoggerHook](mmengine.hooks.LoggerHook). **In most cases, statistical methods of these scalars can be configured through the LogProcessor without understanding the data flow.** Before diving into the design of the logging system, please read through the [logging tutorial](../advanced_tutorials/logging.md) first to familiarize yourself with the basic use cases.

## HistoryBuffer

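A minimal sketch of a HistoryBuffer-like scalar log (illustrative, not mmengine's actual class, though the real one exposes a similar `update`/statistics interface):

```python
class HistoryBuffer:
    def __init__(self):
        self._values, self._counts = [], []

    def update(self, value, count=1):
        # Append one scalar observation with its weight (e.g. batch size).
        self._values.append(value)
        self._counts.append(count)

    def mean(self, window=None):
        # Weighted mean over the last `window` entries (all if None).
        vals = self._values[-window:] if window else self._values
        cnts = self._counts[-window:] if window else self._counts
        return sum(v * c for v, c in zip(vals, cnts)) / sum(cnts)
```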
2 changes: 1 addition & 1 deletion docs/en/design/visualization.md
@@ -11,7 +11,7 @@ Visualization provides an intuitive explanation of the training and testing proc

Based on the above requirements, we proposed the `Visualizer` and various `VisBackend` implementations such as `LocalVisBackend`, `WandbVisBackend`, and `TensorboardVisBackend` in OpenMMLab 2.0. The visualizer can visualize not only image data, but also things like configurations, scalars, and model structure.

- For convenience, the APIs provided by the `Visualizer` implement the drawing and storage functions. As an internal property of `Visualizer`, `VisBackend` will be called by `Visualizer` to write data to different backends.
- Considering that you may want to write data to multiple backends after drawing, `Visualizer` can be configured with multiple backends. When the user calls the storage API of the `Visualizer`, it will traverse and call all the specified APIs of `VisBackend` internally.

The UML diagram of the two is as follows.
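The multi-backend fan-out described in the second bullet can be sketched with toy classes (not mmengine's actual `Visualizer`/`VisBackend` signatures):

```python
class RecordingBackend:
    """Toy backend that records every scalar it receives."""

    def __init__(self):
        self.records = []

    def add_scalar(self, name, value, step):
        self.records.append((name, value, step))


class Visualizer:
    def __init__(self, vis_backends):
        self._backends = list(vis_backends)

    def add_scalar(self, name, value, step):
        # Fan out: every storage call is forwarded to all configured backends.
        for backend in self._backends:
            backend.add_scalar(name, value, step)
```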
2 changes: 1 addition & 1 deletion docs/en/examples/train_a_gan.md
@@ -148,7 +148,7 @@ from mmengine.model import ImgDataPreprocessor
data_preprocessor = ImgDataPreprocessor(mean=([127.5]), std=([127.5]))
```

The following code implements a basic GAN. To implement the algorithm with MMEngine, inherit from [BaseModel](mmengine.model.BaseModel) and implement the training process in `train_step`. GAN requires alternating training of the generator and discriminator, implemented in `train_discriminator` and `train_generator`; `disc_loss` and `gen_loss` compute the discriminator and generator loss functions.
For more details about BaseModel, refer to the [Model tutorial](../tutorials/model.md).

```python
2 changes: 1 addition & 1 deletion docs/en/migration/param_scheduler.md
@@ -435,7 +435,7 @@ param_scheduler = [
</thead>
</table>

Notice: `by_epoch` defaults to `False` in MMCV. It now defaults to `True` in MMEngine.

### LinearAnnealingLrUpdaterHook migration
