Commit 9c1ea4fa01:
On many systems, onnxruntime does not detect the GPU unless PyTorch is imported before it, so even with CUDA and cuDNN set up correctly it falls back to the CPUExecutionProvider. Importing PyTorch first fixes the detection, so we use that workaround until an official solution is available (a minimal sketch follows the file listing below). See: https://stackoverflow.com/questions/75294639/onnxruntime-inference-with-cudnn-on-gpu-only-working-if-pytorch-imported-first
Directory contents:
__init__.py
config.py
globals.py
processor.py
utils.py
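
A minimal sketch of the workaround described in the commit message, assuming the inference session is created somewhere like processor.py; the model path and the provider list here are illustrative placeholders, not part of this repository:

```python
import torch  # noqa: F401  # imported only for its side effect of loading the CUDA/cuDNN libraries

import onnxruntime as ort

# With torch imported first, onnxruntime can locate the GPU libraries and
# CUDAExecutionProvider becomes available instead of silently falling back to CPU.
session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path, replace with the real one
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # expect CUDAExecutionProvider listed first on a working GPU setup
```

The import order is the only essential part: torch must be imported before onnxruntime creates the session, even if torch itself is never used.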