
Commit

fix review issue
deepindeed2022 committed Nov 25, 2024
1 parent 011d56a commit 948fffe
Showing 3 changed files with 3 additions and 4 deletions.
2 changes: 1 addition & 1 deletion docs/en/multi_modal/llava.md
@@ -8,7 +8,7 @@ LMDeploy supports the following llava series of models, which are detailed in th
 | llava-hf/llava-1.5-7b-hf | 7B | TurboMind, PyTorch |
 | liuhaotian/llava-v1.6-vicuna-7b | 7B | TurboMind, PyTorch |
 | liuhaotian/llava-v1.6-mistral-7b | 7B | TurboMind, PyTorch |
-| lmms-lab/llava-onevision-qwen2-7b-ov | 0.5B,7B,72B | TurboMind, PyTorch |
+| lmms-lab/llava-onevision-qwen2-7b-ov | 0.5B,7B,72B | TurboMind |

 The next chapter demonstrates how to deploy a Llava model using LMDeploy, with [llava-hf/llava-interleave](https://huggingface.co/llava-hf/llava-interleave-qwen-7b-hf) as an example.

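Per the table change above, llava-onevision now lists TurboMind as its only backend. Below is a minimal sketch of running the model with TurboMind explicitly selected through LMDeploy's pipeline API; the engine arguments and image URL are illustrative assumptions, not taken from this commit, and exact parameters may vary by LMDeploy version.

```python
# Illustrative sketch: run the onevision model on the TurboMind backend,
# matching the updated support table. session_len and the sample image
# URL are assumptions, not from this commit.
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

pipe = pipeline('lmms-lab/llava-onevision-qwen2-7b-ov',
                backend_config=TurbomindEngineConfig(session_len=8192))

image = load_image('https://example.com/sample.jpg')  # hypothetical image URL
response = pipe(('Describe this image.', image))
print(response.text)
```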
2 changes: 1 addition & 1 deletion docs/zh_cn/multi_modal/llava.md
@@ -8,7 +8,7 @@ LMDeploy supports the following LLaVA series models, as shown in the table below:
 | llava-hf/llava-1.5-7b-hf | 7B | TurboMind, PyTorch |
 | liuhaotian/llava-v1.6-vicuna-7b | 7B | TurboMind, PyTorch |
 | liuhaotian/llava-v1.6-mistral-7b | 7B | TurboMind, PyTorch |
-| lmms-lab/llava-onevision-qwen2-7b-ov | 0.5B,7B,72B | TurboMind, PyTorch |
+| lmms-lab/llava-onevision-qwen2-7b-ov | 0.5B,7B,72B | TurboMind |

 The next chapter demonstrates how to deploy a LLaVA model with LMDeploy, using [llava-hf/llava-interleave](https://huggingface.co/llava-hf/llava-interleave-qwen-7b-hf) as an example.

3 changes: 1 addition & 2 deletions lmdeploy/vl/model/llava.py
@@ -138,8 +138,7 @@ def build_model(self):
         no_split_module_classes = ['CLIPEncoderLayer', 'SiglipEncoderLayer']
         same_device_keys = [('mm_projector', 'vision_resampler',
                              'image_newline', 'rotary_emb')]
-        device_map = get_vision_encoder_device_map(model.model,
-                                                   self.max_memory,
+        device_map = get_vision_encoder_device_map(model, self.max_memory,
                                                    no_split_module_classes,
                                                    same_device_keys)
         with disable_logging():
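The fix passes the wrapper `model` (rather than the inner `model.model`) to `get_vision_encoder_device_map`, so that module names such as `mm_projector` resolve at the level where they are actually defined. As a hedged sketch of what the `same_device_keys` argument implies, grouped module names can be pinned to a single device as shown below; `pin_same_device` is a hypothetical helper for illustration, not LMDeploy's actual implementation.

```python
# Hypothetical sketch of the same_device_keys behavior the fixed call relies
# on: module names grouped in a tuple end up on the same device in the final
# device map. Not LMDeploy's actual get_vision_encoder_device_map code.
from typing import Dict, List, Tuple


def pin_same_device(device_map: Dict[str, int],
                    same_device_keys: List[Tuple[str, ...]]) -> Dict[str, int]:
    """Force each group of module names onto the device of the first
    group member that appears in the device map."""
    for group in same_device_keys:
        anchor = next((k for k in group if k in device_map), None)
        if anchor is None:
            continue  # none of this group's modules were mapped
        for key in group:
            if key in device_map:
                device_map[key] = device_map[anchor]
    return device_map


# Example: with the fix, keys are attribute names on `model` itself
# (e.g. 'mm_projector'), not names nested under `model.model`.
dm = pin_same_device({'mm_projector': 0, 'image_newline': 1},
                     [('mm_projector', 'vision_resampler',
                       'image_newline', 'rotary_emb')])
assert dm == {'mm_projector': 0, 'image_newline': 0}
```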
