Fix GPU not being used even when it is available
On many systems, onnxruntime does not detect the GPU unless PyTorch is imported before it: despite CUDA and cuDNN being set up correctly, only CPUExecutionProvider is used. Importing PyTorch first fixes the issue, so use this workaround until an official solution is available. See: https://stackoverflow.com/questions/75294639/onnxruntime-inference-with-cudnn-on-gpu-only-working-if-pytorch-imported-first
parent 6258f3c084
commit 9c1ea4fa01
@@ -1,3 +1,4 @@
+import torch
 import onnxruntime
 
 use_gpu = False
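A quick way to verify that the workaround takes effect (a minimal sketch, assuming onnxruntime-gpu and a CUDA-enabled PyTorch build are installed):

import torch        # imported first so the CUDA/cuDNN libraries get loaded
import onnxruntime

# True if PyTorch can see the GPU
print(torch.cuda.is_available())
# Should now list 'CUDAExecutionProvider' instead of only 'CPUExecutionProvider'
print(onnxruntime.get_available_providers())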