As an engineer
I want a Python function that detects whether a GPU is present on the host machine (in our deployments, attached to the pod), so that I can load the model onto the GPU and run inference on it.
At the moment we use torch.cuda.is_available(), which assumes PyTorch is installed. While this function does exactly what we want, we don't always have PyTorch installed, since some of our services use CatBoost instead.
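For context, the current check amounts to the sketch below; the try/except is illustrative (the variable name is hypothetical) and shows where the check breaks on images that ship without PyTorch:

```python
# Sketch of the current situation: torch.cuda.is_available() answers the
# question, but only when PyTorch is importable in the first place.
try:
    import torch
    gpu_available = torch.cuda.is_available()
except ImportError:
    # CatBoost-only images ship without torch, so the check cannot run here.
    gpu_available = False
```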
Description

Details
Subject | Repo | Branch | Lines +/-
---|---|---|---
Add a util function to detect GPU in resource_utils module | machinelearning/liftwing/inference-services | main | +18 -0
Related Objects
Event Timeline
@achou suggested using pyopencl (GitHub, PyPI), which seems well supported and promising.
An alternative would be to use a framework-specific function. For example, CatBoost has its own function, get_gpu_device_count.
However, if a generic solution can be achieved and the installed package is small, that seems much better.
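One possible shape for the framework-specific fallback, as a minimal sketch: the helper names below are hypothetical, and it assumes we simply try whichever supported framework is installed (a pyopencl probe could be slotted in the same way):

```python
def get_gpu_device_count() -> int:
    """Best-effort GPU count, trying whichever supported framework is installed."""
    # Prefer torch when present, since that is what we rely on today.
    try:
        import torch
        return torch.cuda.device_count()
    except ImportError:
        pass
    # Fall back to CatBoost's own detection helper.
    try:
        from catboost.utils import get_gpu_device_count as catboost_gpu_count
        return catboost_gpu_count()
    except ImportError:
        pass
    # No supported framework importable: assume a CPU-only host.
    return 0


def is_gpu_available() -> bool:
    """Boolean convenience wrapper mirroring torch.cuda.is_available()."""
    return get_gpu_device_count() > 0
```

Keeping the detection in one util (e.g. the resource_utils module named in the patch) means each inference service can call it without caring which framework its image ships with.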
Change 1010515 had a related patch set uploaded (by AikoChou; author: AikoChou):
[machinelearning/liftwing/inference-services@main] Add a util function to detect GPU in resource_utils module
Change 1010515 merged by jenkins-bot:
[machinelearning/liftwing/inference-services@main] Add a util function to detect GPU in resource_utils module