Error while pre-processing #219
Comments
What onnxruntime version are you using?
Version: 1.16.0
I tried to replicate this on an actual ARM64 Ubuntu machine and got the exact same error message.
Ok I tried
Hi,
```
$ python3 -m piper_train.preprocess --language en-us --input-dir "/home/patrick/voicedata/wav/" --output-dir "/home/patrick/voicedata/model" --dataset-format ljspeech --single-speaker --sample-rate 44100
INFO:preprocess:Single speaker dataset
INFO:preprocess:Wrote dataset config
INFO:preprocess:Processing 100 utterance(s) with 12 worker(s)
ERROR:preprocess:phonemize_batch_espeak
Traceback (most recent call last):
  File "/home/patrick/piper/src/python/piper_train/preprocess.py", line 289, in phonemize_batch_espeak
    silence_detector = make_silence_detector()
  File "/home/patrick/piper/src/python/piper_train/norm_audio/__init__.py", line 18, in make_silence_detector
    return SileroVoiceActivityDetector(silence_model)
  File "/home/patrick/piper/src/python/piper_train/norm_audio/vad.py", line 17, in __init__
    self.session = onnxruntime.InferenceSession(onnx_path)
  File "/home/patrick/piper/src/python/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 432, in __init__
    raise e
  File "/home/patrick/piper/src/python/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/patrick/piper/src/python/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 451, in _create_inference_session
    raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], ...)
```

(The same `phonemize_batch_espeak` traceback was printed once per worker; duplicates omitted above.)
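As the `ValueError` states, since ONNX Runtime 1.9 an `InferenceSession` created with a build that has more than one execution provider enabled must be given an explicit `providers` list. A minimal sketch of the change follows; `pick_providers` is a hypothetical helper (not part of piper or onnxruntime) showing one way to prefer CUDA while always keeping the CPU fallback. The actual fix would go on the failing line in `piper_train/norm_audio/vad.py`.

```python
def pick_providers(available):
    """Return an explicit provider list for InferenceSession.

    Prefers CUDA when the ORT build offers it, always falling back to
    CPUExecutionProvider, which every build includes.
    """
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]


# In piper_train/norm_audio/vad.py, the failing line
#     self.session = onnxruntime.InferenceSession(onnx_path)
# would become something like:
#
#     import onnxruntime
#     self.session = onnxruntime.InferenceSession(
#         onnx_path,
#         providers=pick_providers(onnxruntime.get_available_providers()),
#     )
```

Passing `providers=["CPUExecutionProvider"]` directly would also work here, since the silence-detection VAD model is small and does not need GPU inference.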
Sorry for the formatting, WSL shell doesn't copy well