Edit mixed precision pages in feature guide. #3755
base: develop
@@ -6,139 +6,130 @@

Automatic mixed precision
#########################

Automatic mixed precision (AMP) helps choose per-layer integer bit widths to retain model accuracy on
fixed-point runtimes like |qnn|_.

For example, consider a model that is not meeting an accuracy target when run in INT8.
AMP finds a minimal set of layers that need to run at higher precision, INT16 for example, to achieve the target accuracy.

Choosing a higher precision for some layers involves a trade-off between performance (inferences per second)
and accuracy. The AMP feature generates a Pareto curve you can use to decide the right operating point for this trade-off.

Context
=======

To perform AMP, you start with a PyTorch, TensorFlow, or ONNX model and use it to create a
Quantization Simulation (QuantSim) model, :class:`QuantizationSimModel`. This QuantSim model, along with an
allowable accuracy drop, is passed to the AMP API.

The API changes the QuantSim model in place, assigning different bit widths to different quantizers. You can export or evaluate this QuantSim model to calculate its quantized accuracy.

.. image:: ../../images/automatic_mixed_precision_1.png
   :width: 900px
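
The snippet below is a minimal sketch of this flow using the PyTorch variant of the API. The model, validation loader, and ``evaluate`` helper are hypothetical placeholders; the ``choose_mixed_precision`` entry point and candidate format follow the aimet_torch mixed-precision API, but check the API reference for the exact signature in your AIMET version.

.. code-block:: python

   import torch
   from aimet_common.defs import CallbackFunc, QuantizationDataType
   from aimet_torch.quantsim import QuantizationSimModel
   from aimet_torch.mixed_precision import choose_mixed_precision

   model = MyModel().eval()                       # hypothetical trained model
   dummy_input = torch.randn(1, 3, 224, 224)

   def eval_callback(model, _):
       return evaluate(model, val_loader)         # hypothetical evaluation helper

   # Create the QuantSim model that AMP modifies in place.
   sim = QuantizationSimModel(model, dummy_input=dummy_input,
                              default_param_bw=8, default_output_bw=8)
   sim.compute_encodings(eval_callback, forward_pass_callback_args=None)

   # Candidate ((activation bw, dtype), (parameter bw, dtype)) pairs
   # AMP may assign per layer group.
   candidates = [((16, QuantizationDataType.int), (16, QuantizationDataType.int)),
                 ((8, QuantizationDataType.int), (8, QuantizationDataType.int))]

   # Ask AMP for a mixed-precision assignment within a 1% accuracy drop.
   choose_mixed_precision(sim, dummy_input, candidates,
                          eval_callback_for_phase1=CallbackFunc(eval_callback),
                          eval_callback_for_phase2=CallbackFunc(eval_callback),
                          allowed_accuracy_drop=0.01,
                          results_dir="./amp_results",
                          clean_start=True,
                          forward_pass_callback=CallbackFunc(eval_callback))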

Mixed Precision Algorithm
-------------------------

The algorithm involves four phases, as shown in the following image.

.. image:: ../../images/automatic_mixed_precision_2.png
   :width: 700px

Phase 1: Find layer groups
~~~~~~~~~~~~~~~~~~~~~~~~~~

Layer groups are sets of layers grouped together based on certain rules.
Grouping layers helps reduce the search space over which the mixed precision algorithm operates.
It also ensures that the search occurs only over the valid bit-width settings for parameters and activations.

[Review comment] What are the rules and how are they defined?

.. image:: ../../images/automatic_mixed_precision_3.png
   :width: 900px

Phase 2: Perform sensitivity analysis
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The algorithm performs a per-layer-group sensitivity analysis.
This identifies how sensitive the model is to a lower quantization bit width for each layer group.
The sensitivity analysis creates and caches an accuracy list that is used by the algorithm in the following phases.

Following is an example of an accuracy list generated using sensitivity analysis:

.. image:: ../../images/accuracy_list.png
   :width: 900px
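
Conceptually, each entry in the accuracy list pairs a layer group and a candidate precision with the accuracy measured at that setting. The following is a hypothetical illustration of that shape, not the exact AIMET data structure:

.. code-block:: python

   # Hypothetical accuracy list entries:
   # (layer group, (activation bw, parameter bw), measured accuracy)
   accuracy_list = [
       ("conv1_group",  (16, 16), 0.748),
       ("conv1_group",  (8, 8),   0.712),   # conv1 is sensitive to INT8
       ("block2_group", (16, 16), 0.751),
       ("block2_group", (8, 8),   0.749),   # block2 tolerates INT8 well
   ]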

Phase 3: Create a Pareto-front list
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A Pareto curve, or Pareto front, describes the trade-off between accuracy and bit-ops targets.
The AMP algorithm generates a Pareto curve showing, for each layer group changed:

- Bit width: The bit width to which the layer group was changed
- Accuracy: The accuracy of the model after the change
- Relative bit-ops: The bit-ops relative to the starting bit-ops

An example of a Pareto list:

[Review comment] How is this list different from the accuracy list in the previous phase?

.. image:: ../../images/pareto.png
   :width: 900px

Bit-ops are computed as:

:math:`Bitops = MAC(op) * Bitwidth(parameter) * Bitwidth(activation)`
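
As a quick illustration with hypothetical layer dimensions, doubling the activation bit width of a layer doubles its bit-ops:

.. code-block:: python

   # Bit-ops for one hypothetical 3x3 conv: MACs * param bw * activation bw.
   # MACs = out_h * out_w * out_channels * in_channels * k_h * k_w
   macs = 56 * 56 * 64 * 64 * 3 * 3        # ~115.6 million MACs

   bitops_w8a8  = macs * 8 * 8             # INT8 params, INT8 activations
   bitops_w8a16 = macs * 8 * 16            # INT8 params, INT16 activations

   print(bitops_w8a16 / bitops_w8a8)       # 2.0: relative bit-ops double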

The Pareto list can be used for plotting a Pareto curve. A plot of the Pareto curve is generated using Bokeh and saved in the results directory.

.. image:: ../../images/pareto_curve.png
   :width: 900px

You can pass two different evaluation callbacks for phase 1 and phase 2.

Because phase 1 measures the sensitivity of each quantizer group, it can use a smaller representative dataset for evaluation, or even an indirect measure such as SQNR that correlates with the direct evaluation metric but can be computed faster.

We recommend that you use the complete dataset for evaluation in phase 2.
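
As a sketch of that recommendation (the data loaders and the ``evaluate`` helper are again hypothetical placeholders), the two callbacks might differ only in the dataset they run over:

.. code-block:: python

   from aimet_common.defs import CallbackFunc

   def phase1_eval(model, _):
       # Fast, approximate scoring on a small representative subset.
       return evaluate(model, subset_loader)   # hypothetical subset loader

   def phase2_eval(model, _):
       # Final scoring on the complete validation set.
       return evaluate(model, full_loader)     # hypothetical full loader

   phase1_cb = CallbackFunc(phase1_eval)
   phase2_cb = CallbackFunc(phase2_eval)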

Phase 4: Reduce bit-width convert op overhead
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Conversion operations (convert ops) are introduced in the mixed-precision model to transition between ops with different activation bit widths or data types (float vs. int). Convert ops contribute to the inference time along with the bit-operations of ops.

[Review comment] I assume that 'convert op' is jargon for 'conversion operation'. Is that the case?

In this phase the algorithm derives a mixed-precision solution with less convert-op overhead than the original solution, keeping the mixed-precision accuracy intact. The algorithm produces mixed-precision solutions for a range of alpha values (0.0, 0.2, 0.4, 0.6, 0.8, 1.0), where alpha represents the fraction of the original convert-op overhead allowed in the respective solution.

Use Cases
---------

1: Choosing a very high accuracy drop (equivalent to setting allowed_accuracy_drop to None)
   AIMET enables you to save intermediate states for computation of the Pareto list. Computing a Pareto list corresponding to an accuracy drop of None generates the complete profile of model accuracy vs. bit-ops. You can then visualize the Pareto curve plot and choose an optimal point for accuracy. The algorithm can be re-run with the new accuracy drop to get a sim model with the required accuracy.

.. note::

   The Pareto list is not modified during the second run.

2: Choosing a lower accuracy drop and then continuing to compute the Pareto list
   Use this option if more accuracy drop is acceptable. Setting the clean_start parameter in the API to False causes the Pareto list computation to resume from the point where it left off.
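
The following sketch shows both use cases together, reusing the hypothetical ``sim``, ``dummy_input``, ``candidates``, and callbacks from the earlier examples: a first run with an allowed accuracy drop of None to profile the full curve, then a second run that resumes from cached results.

.. code-block:: python

   common = dict(eval_callback_for_phase1=phase1_cb,
                 eval_callback_for_phase2=phase2_cb,
                 results_dir="./amp_results",
                 forward_pass_callback=phase2_cb)

   # Use case 1: profile the complete accuracy vs. bit-ops curve.
   choose_mixed_precision(sim, dummy_input, candidates,
                          allowed_accuracy_drop=None,
                          clean_start=True, **common)

   # Use case 2: tighten the drop; with clean_start=False the Pareto
   # list computation resumes from the cached state instead of restarting.
   choose_mixed_precision(sim, dummy_input, candidates,
                          allowed_accuracy_drop=0.01,
                          clean_start=False, **common)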

.. note::

   - We recommend that you set the clean_start parameter to False to use cached results for both use cases.
   - If the model or candidate bit widths change, you must do a clean start.

Workflow
========

Procedure
---------

Step 1
~~~~~~

Setting up the model.

.. tab-set::
   :sync-group: platform

   .. tab-item:: PyTorch
      :sync: torch

      **Import packages**

      .. literalinclude:: ../../legacy/torch_code_examples/mixed_precision.py
         :language: python

@@ -155,7 +146,7 @@ Step 1

   .. tab-item:: TensorFlow
      :sync: tf

      **Import packages**

      .. literalinclude:: ../../legacy/keras_code_examples/mixed_precision.py
         :language: python

@@ -172,14 +163,14 @@ Step 1

   .. tab-item:: ONNX
      :sync: onnx

      **Import packages**

      .. literalinclude:: ../../legacy/onnx_code_examples/mixed_precision.py
         :language: python
         :start-after: # Step 0. Import statements
         :end-before: # End step 0

      **Instantiate a PyTorch model, convert to an ONNX graph, define forward_pass and evaluation callbacks**

      .. literalinclude:: ../../legacy/onnx_code_examples/mixed_precision.py
         :language: python

@@ -189,6 +180,8 @@ Step 1

Step 2
~~~~~~

Quantizing the model.

.. tab-set::
   :sync-group: platform