135 changes: 64 additions & 71 deletions Docs/featureguide/mixed precision/amp.rst
Automatic mixed precision
#########################

Automatic mixed precision (AMP) helps choose per-layer integer bit widths to retain model accuracy on
fixed-point runtimes like |qnn|_.

For example, consider a model that is not meeting an accuracy target when run in INT8.
AMP finds a minimal set of layers that need to run at higher precision, INT16 for example, to achieve the target accuracy.

Choosing a higher precision for some layers involves a trade-off between performance (inferences per second)
and accuracy. The AMP feature generates a Pareto curve you can use to help decide the right operating point for this trade-off.

Context
=======

To perform AMP, you need a PyTorch, TensorFlow, or ONNX model. You use the model to create a
Quantization Simulation (QuantSim) model :class:`QuantizationSimModel`. This QuantSim model, along with an
allowable accuracy drop, is passed to the API.

The API function changes the QuantSim model in place, assigning different bit-width quantizers. You can export or evaluate this QuantSim model to calculate a quantization accuracy.

.. image:: ../../images/automatic_mixed_precision_1.png
:width: 900px
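
For orientation, following is a minimal sketch of this flow using PyTorch. The entry point, argument names, and callback wrappers shown here are assumptions based on this guide and may differ across AIMET versions; the code examples in the Workflow section below are authoritative.

.. code-block:: python

    # Minimal sketch of the AMP flow (PyTorch). Function and argument names are
    # assumptions and may differ across AIMET versions.
    import torch
    from aimet_common.defs import CallbackFunc, QuantizationDataType
    from aimet_torch.quantsim import QuantizationSimModel
    from aimet_torch.mixed_precision import choose_mixed_precision

    model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU(),
                                torch.nn.Conv2d(8, 16, 3), torch.nn.ReLU()).eval()
    dummy_input = torch.randn(1, 3, 32, 32)

    def forward_pass(model, _):
        # Calibration callback: run representative data through the model.
        with torch.no_grad():
            model(dummy_input)

    def evaluate(model, _):
        # Evaluation callback: return the model's accuracy metric (stubbed here).
        with torch.no_grad():
            model(dummy_input)
        return 0.0

    # Candidate (activation, parameter) precisions the search may assign per layer group.
    candidates = [((16, QuantizationDataType.int), (16, QuantizationDataType.int)),
                  ((8, QuantizationDataType.int), (8, QuantizationDataType.int))]

    sim = QuantizationSimModel(model, dummy_input=dummy_input,
                               default_param_bw=8, default_output_bw=8)
    sim.compute_encodings(forward_pass, None)

    # Changes `sim` in place so that sensitive layer groups run at higher precision.
    choose_mixed_precision(sim, dummy_input, candidates,
                           eval_callback_for_phase1=CallbackFunc(evaluate, None),
                           eval_callback_for_phase2=CallbackFunc(evaluate, None),
                           allowed_accuracy_drop=0.01,
                           results_dir="./amp_results",
                           clean_start=True,
                           forward_pass_callback=CallbackFunc(forward_pass, None))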

Mixed Precision Algorithm
-------------------------

The algorithm involves four phases as shown in the following image.

.. image:: ../../images/automatic_mixed_precision_2.png
:width: 700px

Phase 1: Find layer groups
~~~~~~~~~~~~~~~~~~~~~~~~~~

Layer groups are sets of layers grouped together based on certain rules.
Grouping layers reduces the search space over which the mixed precision algorithm operates.
It also ensures that the search occurs only over valid bit-width settings for parameters and activations.

.. image:: ../../images/automatic_mixed_precision_3.png
:width: 900px
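
As a toy illustration only (these are not AIMET's actual grouping rules), the idea of a layer group can be pictured as keeping a compute layer together with the normalization and activation layers that follow it, so that the search assigns the whole group a single precision candidate:

.. code-block:: python

    # Toy illustration of what a "layer group" means. This is NOT AIMET's
    # actual rule set; it only shows how grouping shrinks the search space.
    import torch

    def toy_layer_groups(model: torch.nn.Sequential):
        """Group each compute layer with the norm/activation layers that follow it."""
        groups = []
        for layer in model:
            if isinstance(layer, (torch.nn.BatchNorm2d, torch.nn.ReLU)) and groups:
                groups[-1].append(layer)   # attach to the preceding compute layer's group
            else:
                groups.append([layer])     # start a new group
        return groups

    model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.BatchNorm2d(8),
                                torch.nn.ReLU(), torch.nn.Conv2d(8, 16, 3), torch.nn.ReLU())
    for i, group in enumerate(toy_layer_groups(model)):
        print(f"group {i}: {[type(m).__name__ for m in group]}")
    # group 0: ['Conv2d', 'BatchNorm2d', 'ReLU']
    # group 1: ['Conv2d', 'ReLU']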

Phase 2: Perform sensitivity analysis
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The algorithm performs a per-layer group sensitivity analysis.
This identifies how sensitive the model is to a lower quantization bit width for each layer group.
The sensitivity analysis creates and caches an accuracy list that is reused in the following phases of the algorithm.

Following is an example of an accuracy list generated using sensitivity analysis:

.. image:: ../../images/accuracy_list.png
:width: 900px
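
In pseudo-code, the analysis has roughly the following shape. This is a structural sketch only; ``set_group_candidate``, ``restore_group``, and the other names are hypothetical placeholders for what AIMET does internally.

.. code-block:: python

    # Structural sketch of phase 2. All helper arguments are hypothetical
    # placeholders; AIMET performs the equivalent steps internally.
    def build_accuracy_list(sim, layer_groups, candidates,
                            set_group_candidate, restore_group, evaluate):
        accuracy_list = []
        for group in layer_groups:
            for candidate in candidates:
                set_group_candidate(sim, group, candidate)  # lower only this group
                accuracy = evaluate(sim)                    # phase-1 eval callback
                accuracy_list.append((group, candidate, accuracy))
                restore_group(sim, group)                   # revert before the next trial
        # The list is cached so later phases can reuse it without re-evaluating.
        accuracy_list.sort(key=lambda entry: entry[2], reverse=True)
        return accuracy_list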

Phase 3: Create a Pareto-front list
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A Pareto curve, or Pareto front, describes how accuracy varies with a bit-ops target and vice versa.
The AMP algorithm generates a Pareto curve showing, for each layer group changed:

- Bit width: the bit width to which the layer group was changed
- Accuracy: the accuracy of the model
- Relative bit-ops: the bit-ops relative to the starting bit-ops

Unlike the accuracy list from phase 2, which measures each layer group in isolation, each entry in the Pareto list reflects the cumulative effect of the layer groups changed up to that point.

An example of a Pareto list:
.. image:: ../../images/pareto.png
   :width: 900px

Bit-ops are computed as:

:math:`Bitops = MAC(op) \times Bitwidth(parameter) \times Bitwidth(activation)`
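
For example, with made-up layer sizes (purely illustrative): a convolution performing 2 million MACs with INT8 parameters and INT16 activations contributes

.. math::

    2 \times 10^{6} \times 8 \times 16 = 2.56 \times 10^{8} \ \text{bit-ops,}

while the same layer with INT8 activations contributes :math:`1.28 \times 10^{8}` bit-ops, so halving the activation bit width halves this layer's contribution.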

The Pareto list can be used to plot a Pareto curve. A plot of the Pareto curve is generated using Bokeh and saved in the results directory.

.. image:: ../../images/pareto_curve.png
   :width: 900px

You can pass two different evaluation callbacks for phase 1 and phase 2.

Since phase 1 measures the sensitivity of each quantizer group, it can use a smaller representative dataset for evaluation, or even an indirect measure such as SQNR, which correlates with the direct evaluation metric but can be computed faster.

We recommend that you use the complete dataset for evaluation in phase 2.
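
For example, a phase-1 callback along the following lines could score the quantized model by its SQNR against FP32 outputs on a few batches. This is a sketch; the callback signature your AIMET version expects, and how the arguments are packed, are assumptions.

.. code-block:: python

    # Sketch of an SQNR-based phase-1 evaluation callback: compares quantized
    # outputs against FP32 outputs on a few representative batches.
    import torch

    def sqnr_eval(quant_model, args):
        # `args` packs the FP32 reference model and a small list of input batches.
        fp32_model, batches = args
        total = 0.0
        with torch.no_grad():
            for x in batches:
                ref = fp32_model(x)
                out = quant_model(x)
                noise = torch.mean((ref - out) ** 2) + 1e-12
                signal = torch.mean(ref ** 2)
                total = total + 10.0 * torch.log10(signal / noise)  # SQNR in dB
        return (total / len(batches)).item()  # higher SQNR means less quantization error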

Phase 4: Reduce bit-width convert op overhead
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Conversion operations (convert ops) are introduced in the mixed-precision model to transition between ops that are assigned different activation bit widths or data types (float vs. int). Convert ops contribute to the inference time along with the bit-operations of ops.


In this phase the algorithm derives a mixed-precision solution with less convert op overhead than the original solution, keeping the mixed-precision accuracy intact. The algorithm produces mixed-precision solutions for a range of alpha values (0.0, 0.2, 0.4, 0.6, 0.8, 1.0), where alpha represents the fraction of the original convert op overhead allowed for the respective solution.

Use Cases
---------

1: Choosing a very high accuracy drop (equivalent to setting allowed_accuracy_drop to None)
    AIMET enables you to save intermediate states for computation of the Pareto list. Computing a Pareto list corresponding to an accuracy drop of None generates the complete profile of model accuracy vs. bit-ops. You can then visualize the Pareto curve plot and choose an optimal point for accuracy. The algorithm can be re-run with the new accuracy drop to get a sim model with the required accuracy.

    .. note::

        The Pareto list is not modified during the second run.

2: Choosing a lower accuracy drop and then continuing to compute a Pareto list
    Use this option if more accuracy drop is acceptable. Setting the clean_start parameter in the API to False causes the Pareto list to resume computation from the point where it left off.

.. note::

    - We recommend that you set the clean_start parameter to False to use cached results for both use cases.
    - If the model or candidate bit widths change, you must do a clean start.
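
Put together, the two use cases form a two-run pattern like the following sketch, which reuses the hypothetical names (``choose_mixed_precision``, ``sim``, ``candidates``, and the callbacks) from the sketch in the Context section; argument names may differ across AIMET versions.

.. code-block:: python

    # Sketch of the two-run pattern, reusing the hypothetical names defined in
    # the Context section sketch. Argument names may differ by AIMET version.

    # Run 1: no accuracy constraint, so the full Pareto profile is computed and cached.
    choose_mixed_precision(sim, dummy_input, candidates,
                           eval_callback_for_phase1=CallbackFunc(evaluate, None),
                           eval_callback_for_phase2=CallbackFunc(evaluate, None),
                           allowed_accuracy_drop=None,
                           results_dir="./amp_results",
                           clean_start=True,
                           forward_pass_callback=CallbackFunc(forward_pass, None))

    # Inspect the Pareto curve plot in ./amp_results, pick an acceptable drop, then:

    # Run 2: reuse the cached results; the Pareto list is not recomputed.
    choose_mixed_precision(sim, dummy_input, candidates,
                           eval_callback_for_phase1=CallbackFunc(evaluate, None),
                           eval_callback_for_phase2=CallbackFunc(evaluate, None),
                           allowed_accuracy_drop=0.01,
                           results_dir="./amp_results",
                           clean_start=False,   # continue from cached results
                           forward_pass_callback=CallbackFunc(forward_pass, None))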

Workflow
========

Procedure
---------

Step 1
~~~~~~

Setting up the model.

.. tab-set::
:sync-group: platform

.. tab-item:: PyTorch
:sync: torch

**Import packages**

.. literalinclude:: ../../legacy/torch_code_examples/mixed_precision.py
:language: python
.. tab-item:: TensorFlow
:sync: tf

**Import packages**

.. literalinclude:: ../../legacy/keras_code_examples/mixed_precision.py
:language: python
.. tab-item:: ONNX
:sync: onnx

**Import packages**

.. literalinclude:: ../../legacy/onnx_code_examples/mixed_precision.py
:language: python
:start-after: # Step 0. Import statements
:end-before: # End step 0

**Instantiate a PyTorch model, convert to an ONNX graph, define forward_pass and evaluation callbacks**

.. literalinclude:: ../../legacy/onnx_code_examples/mixed_precision.py
:language: python
Step 2
~~~~~~

Quantizing the model.

.. tab-set::
:sync-group: platform

33 changes: 14 additions & 19 deletions Docs/featureguide/mixed precision/index.rst
Mixed precision
###############

Quantization improves latency, reduces memory use, and lowers the power required to run a model, but it comes at the cost of reduced accuracy compared to full precision. The loss in accuracy becomes more pronounced the lower the bit width. Mixed precision helps bridge this accuracy gap. In mixed precision, sensitive layers in the model are run at higher precision, achieving higher accuracy with a smaller model than full-precision floating point.

Using mixed precision in AIMET follows these steps (a rough code sketch follows the list):

1. Create a quantization simulation (QuantSim) object with a base precision.
2. Run the model in mixed precision by changing the bit width of selected activation and parameter quantizers.
3. Calibrate and simulate the accuracy of the mixed precision model.
4. Export configuration artifacts to create the mixed-precision model.

.. toctree::
:hidden:

Manual mixed precision <mmp>
Automatic mixed precision <amp>

AIMET offers two methods for creating a mixed-precision model: a manual mixed-precision configurator and automatic mixed precision.

Manual mixed precision
----------------------

:ref:`Manual mixed precision <featureguide-mmp>` (MMP) enables you to set different precision levels (bit widths) for layers
that are sensitive to quantization.

Automatic mixed precision
-------------------------

:ref:`Automatic mixed precision <featureguide-amp>` (AMP) automatically finds a minimal set of layers that require higher precision to achieve a desired quantized accuracy.