
torch.cuda.OutOfMemoryError: CUDA out of memory. / 6G card #2365

Open
fshlol opened this issue Oct 29, 2024 · 0 comments

fshlol commented Oct 29, 2024

Hello, I'm using 'Model Inference' in RVC.
The FAQ mentions the out-of-memory error and recommends: "for inference, adjust the x_pad, x_query, x_center, and x_max settings in the config.py file as needed."
Which values should I put in the config file? What would be the correct adjustments?
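The FAQ doesn't give concrete numbers for a 6 GB card. As a rough sketch (the parameter names come from RVC's config.py; the tiering below, and especially the 6 GB row, are my assumptions, not an official RVC preset), lowering all four values shrinks the per-chunk attention windows and therefore the CUDA allocations:

```python
# Hypothetical low-VRAM presets for RVC's inference pipeline.
# x_pad / x_query / x_center / x_max control how each audio chunk is
# padded and windowed, so smaller values mean smaller attention
# tensors on the GPU, at some cost in quality at chunk boundaries.

def low_vram_settings(gpu_mem_gb: int) -> tuple[int, int, int, int]:
    """Return an assumed (x_pad, x_query, x_center, x_max) preset.

    The tiers are illustrative guesses, not values taken from RVC.
    """
    if gpu_mem_gb <= 4:
        return (1, 5, 30, 32)   # aggressive reduction for very small cards
    if gpu_mem_gb <= 6:
        return (1, 6, 38, 41)   # assumed middle ground for 6 GB cards
    return (3, 10, 60, 65)      # roomier defaults for 8 GB and up

# For a 6 GB card, try writing the returned values into config.py:
x_pad, x_query, x_center, x_max = low_vram_settings(6)
```

If the OOM persists, dropping to the smallest tier (or shortening the input audio) is the next step to try.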
This is the error I get after setting the paths to the audio file and the .pth and .index files in RVC and clicking 'Convert':

Traceback (most recent call last):
  File "F:\RVC\infer-web.py", line 203, in vc_single
    audio_opt = vc.pipeline(
  File "F:\RVC\vc_infer_pipeline.py", line 361, in pipeline
    self.vc(
  File "F:\RVC\vc_infer_pipeline.py", line 249, in vc
    (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0])
  File "F:\RVC\lib\infer_pack\models.py", line 752, in infer
    m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
  File "F:\RVC\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\RVC\lib\infer_pack\models.py", line 104, in forward
    x = self.encoder(x * x_mask, x_mask)
  File "F:\RVC\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\RVC\lib\infer_pack\attentions.py", line 65, in forward
    y = self.attn_layers[i](x, x, attn_mask)
  File "F:\RVC\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\RVC\lib\infer_pack\attentions.py", line 221, in forward
    x, self.attn = self.attention(q, k, v, mask=attn_mask)
  File "F:\RVC\lib\infer_pack\attentions.py", line 265, in attention
    relative_weights = self._absolute_position_to_relative_position(p_attn)
  File "F:\RVC\lib\infer_pack\attentions.py", line 346, in _absolute_position_to_relative_position
    x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 428.00 MiB (GPU 0; 6.00 GiB total capacity; 3.02 GiB already allocated; 373.12 MiB free; 3.81 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
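Since reserved memory (3.81 GiB) far exceeds allocated memory (3.02 GiB) here, the error message's own suggestion of `max_split_size_mb` may also help. The environment variable `PYTORCH_CUDA_ALLOC_CONF` is a real PyTorch knob; the value 128 below is an assumed starting point, not a tested recommendation for RVC:

```python
import os

# Must be set before torch initializes CUDA. max_split_size_mb caps the
# size of cached blocks the allocator is willing to split, which reduces
# fragmentation when reserved memory far exceeds allocated memory.
# The 128 MB threshold is an assumption; tune it for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # import torch only after setting the variable
```

Alternatively, set it in the shell that launches infer-web.py before starting RVC.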

Traceback (most recent call last):
  File "F:\RVC\runtime\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "F:\RVC\runtime\lib\site-packages\gradio\blocks.py", line 1007, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "F:\RVC\runtime\lib\site-packages\gradio\blocks.py", line 953, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "F:\RVC\runtime\lib\site-packages\gradio\components.py", line 2076, in postprocess
    processing_utils.audio_to_file(sample_rate, data, file.name)
  File "F:\RVC\runtime\lib\site-packages\gradio\processing_utils.py", line 206, in audio_to_file
    data = convert_to_16_bit_wav(data)
  File "F:\RVC\runtime\lib\site-packages\gradio\processing_utils.py", line 219, in convert_to_16_bit_wav
    if data.dtype in [np.float64, np.float32, np.float16]:
AttributeError: 'NoneType' object has no attribute 'dtype'
