# Zero Degradation Matrix Multiplication
{"cells":[{"source":"<a href=\"https://www.kaggle.com/code/aisuko/zero-degradation-matrix-multiplication?scriptVersionId=163020030\" target=\"_blank\"><img align=\"left\" alt=\"Kaggle\" title=\"Open in Kaggle\" src=\"https://kaggle.com/static/images/open-in-kaggle.svg\"></a>","metadata":{},"cell_type":"markdown"},{"cell_type":"markdown","id":"cd78daac","metadata":{"_cell_guid":"b1076dfc-b9ad-4769-8c92-a6c4dae69d19","_uuid":"8f2839f25d086af736a60e9eeb907d3b93b6e0e5","papermill":{"duration":0.00448,"end_time":"2024-02-15T23:50:23.174203","exception":false,"start_time":"2024-02-15T23:50:23.169723","status":"completed"},"tags":[]},"source":["# Overview\n","\n","**Note: all the images are from the blog in the Credits section.**\n","\n","The practice with Transformers see here [Lighter models on GPU for inference](https://www.kaggle.com/code/aisuko/lighter-models-on-gpu-for-inference/notebook)\n","\n","The main purpose of the LLM.int8() method is to make large models more accessible without performance degradation.\n","\n","In the LLM.int8(see the second link in Credits section) paper. It explains:\n","* Why traditional quantization fails for large models\n","* The performance deterioration is caused by outlier features\n","* LLM.int8() algorithm\n","\n","In essence, LLM.int8() seeks to complete the matrix multiplication computation in three steps:\n","1. From the input hidden states, extract the outliers(i.e. values that are larger than a certain threshold) by column.\n","2. Perform the matrix multiplication of the outliers in FP16 and the non-outliers in int8.\n","3. Dequantize the non-outlier results and add both outlier and non-outlier results together to receive the full result in FP16.\n","\n","<div style=\"text-align: center\"><img src=\"https://files.mastodon.social/media_attachments/files/111/700/338/979/374/386/original/2133b586980a7691.mp4\" width=\"60%\" heigh=\"60%\" alt=\"mixed-int8\"></div>"]},{"cell_type":"markdown","id":"bfcedbd4","metadata":{"papermill":{"duration":0.003791,"end_time":"2024-02-15T23:50:23.182247","exception":false,"start_time":"2024-02-15T23:50:23.178456","status":"completed"},"tags":[]},"source":["# The importance of outlier features\n","\n","A value that is outside the range of some numbers' global distribution is generally referred to as an outlier. Outlier detection has been widely used and covered in the current literature, and having prior knowledge of the distribution of your features helps with the task of outlier detection. More specifically, we have observed that classic quantization at scale fails for transformer-based models > 6B parameters. \n","\n","As mentioned earlier, 8-bit precision is extremely constrained, therefore quantizing a vector with several big values can produce widly erroneous results. Additionally, because of a built-in characteristic of the transformer-based architecture that links all the elements together, these errors tend to compound as they get propagated across multiple layers. Therefore, mixed-precision decomposition has been developed to facilitate efficient quantization with such extreme outliers.\n","\n","\n","# Inside the MatMul\n","\n","Once the hidden states are computed we extract the outliers using a custom threshold and we decompose the matrix into two parts as explained above, We found that extracting all outliers with magnitude 6 or greater in this way recoveres full inference performance. 
# The importance of outlier features

A value outside the range of some numbers' global distribution is generally referred to as an outlier. Outlier detection is widely used and covered in the literature, and prior knowledge of the distribution of your features helps with the task. More specifically, classic quantization at scale has been observed to fail for transformer-based models larger than 6B parameters.

As mentioned earlier, 8-bit precision is extremely constrained, so quantizing a vector containing several large values can produce wildly erroneous results. Additionally, because of a built-in characteristic of the transformer-based architecture that links all the elements together, these errors tend to compound as they propagate across multiple layers. Therefore, mixed-precision decomposition was developed to enable efficient quantization in the presence of such extreme outliers.
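A tiny numeric example (not from the notebook) makes the failure mode visible: with absmax quantization, a single outlier stretches the quantization grid so far that the small values, which carry most of the signal, collapse onto just a few levels.

```python
import torch

v = torch.tensor([0.1, -0.2, 0.3, -0.1, 0.2, 50.0])  # one extreme outlier

scale = v.abs().max() / 127.0   # absmax scaling: the outlier dictates the grid
q = torch.round(v / scale).to(torch.int8)

print(q)                  # tensor([  0,  -1,   1,   0,   1, 127], dtype=torch.int8)
print(q.float() * scale)  # the outlier survives; the small values are badly rounded
```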
# Inside the MatMul

Once the hidden states are computed, we extract the outliers using a custom threshold and decompose the matrix into two parts as explained above. Extracting all outliers with a magnitude of 6 or greater in this way recovers full inference performance. The outlier part is a classic matrix multiplication in FP16, whereas the other part is computed by quantizing the weights and hidden states to 8-bit precision using vector-wise quantization, that is, row-wise quantization for the hidden states and column-wise quantization for the weight matrix. After this step, the results are dequantized and returned in half precision so they can be added to the outlier matrix multiplication.

<div style="text-align: center"><img src="https://files.mastodon.social/media_attachments/files/111/700/455/558/279/509/original/aafbcc617e807c3a.png" width="60%" height="60%" alt="matmul"></div>
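Why can the scales be applied vector-wise? Output entry (i, j) only ever mixes row i of the hidden states with column j of the weights, so one scale per row and one per column suffice, and the integer product can be rescaled by their outer product. A short sketch of that idea (again an illustration, not the library code):

```python
import torch

X = torch.randn(4, 8)   # hidden states: quantized row-wise
W = torch.randn(8, 5)   # weight matrix: quantized column-wise

sx = X.abs().amax(dim=1, keepdim=True) / 127.0   # shape (4, 1): one scale per row
sw = W.abs().amax(dim=0, keepdim=True) / 127.0   # shape (1, 5): one scale per column
Xq = torch.round(X / sx).to(torch.int8)
Wq = torch.round(W / sw).to(torch.int8)

# Dequantize with the outer product of the scales, sx @ sw, shape (4, 5).
out = (Xq.long() @ Wq.long()).float() * (sx @ sw)
print((out - X @ W).abs().max())                 # small quantization error
```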
# How to use it

```python
%%capture
!pip install bitsandbytes==0.41.3
```

```python
import torch
import torch.nn as nn

from bitsandbytes.nn import Linear8bitLt
```

Here we define the model. We can convert a checkpoint or a model of any precision (FP16, BF16, or FP32) to 8-bit, but the input of the model has to be FP16 for the Int8 module to work, so we treat our model here as an FP16 model.

```python
fp16_model = nn.Sequential(
    nn.Linear(64, 64),
    nn.Linear(64, 64)
)

torch.save(fp16_model.state_dict(), "model.pt")
```

`has_fp16_weights` is used to train in mixed Int8/FP16 precision. Here we are interested in memory-efficient inference, for which we need `has_fp16_weights=False`.

```python
# define an int8 model
int8_model = nn.Sequential(
    Linear8bitLt(64, 64, has_fp16_weights=False),
    Linear8bitLt(64, 64, has_fp16_weights=False)
)
```

```python
int8_model.load_state_dict(torch.load("model.pt"))
int8_model[0].weight
```

```
Parameter containing:
Parameter(Int8Params([[ 0.0810,  0.0623,  0.0093,  ...,  0.0330,  0.0067, -0.1187],
                      [ 0.0571,  0.0363,  0.0844,  ...,  0.1002, -0.0044,  0.0729],
                      [ 0.0451, -0.0461,  0.0457,  ...,  0.1032, -0.0817,  0.0935],
                      ...,
                      [ 0.0010,  0.1032,  0.1181,  ..., -0.0441, -0.1196, -0.0173],
                      [-0.1142,  0.0183,  0.0183,  ..., -0.0348, -0.1220,  0.0394],
                      [ 0.0412, -0.0633,  0.0934,  ..., -0.0913,  0.0324,  0.1136]]))
```

```python
int8_model = int8_model.to(0)  # Quantization happens here
int8_model[0].weight
```

```
Parameter containing:
Parameter(Int8Params([[  83,   64,   10,  ...,   34,    7, -122],
                      [  60,   38,   88,  ...,  105,   -5,   76],
                      [  47,  -49,   48,  ...,  109,  -86,   98],
                      ...,
                      [   1,  106,  121,  ...,  -45, -123,  -18],
                      [-119,   19,   19,  ...,  -36, -127,   41],
                      [  42,  -65,   96,  ...,  -94,   33,  116]],
                     device='cuda:0', dtype=torch.int8))
```

The weight values are "truncated", as we saw when explaining quantization in [Quantization Technologies](https://www.kaggle.com/code/aisuko/quantization-technologies), and they are distributed in [-127, 127]. You might also wonder how to retrieve the FP16 weights in order to perform the outlier MatMul in FP16:

```python
(int8_model[0].weight.CB * int8_model[0].weight.SCB) / 127
```

```
tensor([[ 0.0808,  0.0613,  0.0095,  ...,  0.0332,  0.0067, -0.1190],
        [ 0.0584,  0.0364,  0.0836,  ...,  0.1024, -0.0048,  0.0741],
        [ 0.0457, -0.0470,  0.0456,  ...,  0.1063, -0.0826,  0.0956],
        ...,
        [ 0.0010,  0.1016,  0.1149,  ..., -0.0439, -0.1182, -0.0176],
        [-0.1158,  0.0182,  0.0180,  ..., -0.0351, -0.1220,  0.0400],
        [ 0.0409, -0.0623,  0.0912,  ..., -0.0917,  0.0317,  0.1132]],
       device='cuda:0')
```

```python
# We can safely run inference with the model by making sure the input is in FP16
input_ = torch.randn((1, 64), dtype=torch.float16)
hidden_states = int8_model(input_.to(torch.device('cuda', 0)))
hidden_states
```

```
tensor([[-0.2104, -1.1426, -0.0222,  0.4993, -0.0264, -0.4763,  0.4167, -0.0701,
          0.4368,  0.2043, -0.0095, -0.1930, -0.2064, -0.9932, -0.0107,  0.5508,
         -0.1831,  0.1372,  0.4070, -0.2703,  0.1462, -0.1387, -0.0767,  0.0767,
          0.4946, -0.8608, -0.0306,  1.2139,  0.3298,  0.3669,  0.0539,  0.1787,
          0.1628, -0.0489,  0.1002,  0.2668,  0.3823, -0.4624, -0.5103, -0.5444,
          0.0184,  0.4468, -0.0021, -0.1556,  0.1512,  0.1228,  0.2539, -0.3762,
         -0.4758,  0.5840,  0.2112,  0.3572, -0.0346,  0.1412, -0.3032,  0.1333,
          0.0469, -0.0532, -0.0124, -0.5708,  0.1160, -0.3674,  0.0087, -0.1565]],
       device='cuda:0', dtype=torch.float16, grad_fn=<MatMul8bitLtBackward>)
```
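As a quick sanity check on the memory saving (not part of the original notebook), we can sum the bytes occupied by each model's parameters. Note that `fp16_model` above is actually stored in float32, so the int8 version should weigh in at roughly a quarter of its size, plus a little overhead for the scales.

```python
def param_bytes(model):
    # Total bytes occupied by the model's parameters.
    return sum(p.numel() * p.element_size() for p in model.parameters())

print(param_bytes(fp16_model))  # float32 weights: 4 bytes per parameter
print(param_bytes(int8_model))  # int8 weights: 1 byte each (biases stay in higher precision)
```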
# Credits

* https://huggingface.co/blog/hf-bitsandbytes-integration
* https://arxiv.org/abs/2208.07339