Fix GPU not being used even when it is available
On many systems, onnxruntime fails to detect the GPU unless pytorch is imported before it. As a result, even with CUDA and cuDNN set up correctly, it falls back to CPUExecutionProvider. Importing pytorch first fixes the issue, so use this workaround until an official fix is available. See: https://stackoverflow.com/questions/75294639/onnxruntime-inference-with-cudnn-on-gpu-only-working-if-pytorch-imported-first
@@ -1,3 +1,4 @@
+import torch
 import onnxruntime
 
 use_gpu = False