How can I use GPU hardware?

Hey there, first off I'd like to say thanks, this is a really cool project and I'm curious to see where it will go :)

Anyway, I've been playing around with Devbox for the past few weeks, with a focus on Python. I wanted to try leveraging GPU compute, but it seems that I'm not able to do so "directly". I saw Nix gives some explanation on how to do it here; I wanted to know if you could explain how the integration works with Devbox?

I think it could be a cool addition to the documentation as well.

loreto wrote this answer on 2023-01-05

Hi @romain-keramitas-prl – thanks for reporting.

What OS, CPU and GPUs are you trying to use? And which library are you trying to install/run that uses GPUs? That would help us test your use case.

romain-keramitas-prl wrote this answer on 2023-01-05

Hey @loreto !

I'm running on Ubuntu 20.04, with consumer CPUs / GPUs.

The CPU is an Intel® Core™ i7.
The GPU is an NVIDIA GeForce RTX 2060, with relatively recent drivers.

I'm trying to install and use PyTorch. I've created the following devbox configuration:

{
  "packages": [
    "python310",
    "poetry",
  ],
  "shell": {
    "init_hook": [
      "poetry shell",
      "export PKG_CONFIG_PATH=`pwd`/.devbox/nix/profile/default/lib/pkgconfig/",
      "export LD_LIBRARY_PATH=`pwd`/.devbox/nix/profile/default/lib/"
    ]
  },
  "nixpkgs": {
    "commit": "52e3e80afff4b16ccb7c52e9f0f5220552f03d04"
  }
}

It's unrelated, but I found that setting the two env variables was necessary to use certain packages.

I added the PyTorch library using poetry add torch, which got version 1.13.1. I tried running the following script, which just creates two arrays and tries to multiply them:

import torch

def create_arrays(n):
    x = torch.ones(n, n)
    y = torch.randn(n, n * 2)
    return x, y


def main():
    x, y = create_arrays(1000)
    x = x.to("cuda")
    y = y.to("cuda")
    z = x @ y

if __name__ == "__main__":
    main()

I got the following error, which I don't get when running outside of devbox:

Traceback (most recent call last):
  File "/redacted/test_torch.py", line 16, in <module>
    main()
  File "/redacted/test_torch.py", line 11, in main
    x = x.to("cuda")
  File "/redacted/.venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 229, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
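
For what it's worth, a quick way to narrow this kind of error down is to check whether PyTorch can see the driver at all before moving tensors to the GPU. A minimal sketch (assuming the same torch 1.13 install, nothing devbox-specific):

import subprocess

import torch

# True only if PyTorch can talk to an NVIDIA driver from this environment.
print("CUDA available:", torch.cuda.is_available())
# The CUDA version torch was built against (None for CPU-only wheels).
print("Built against CUDA:", torch.version.cuda)

# nvidia-smi queries the driver directly, so it helps separate
# "driver not visible inside the shell" from "torch built without CUDA".
try:
    print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)
except FileNotFoundError:
    print("nvidia-smi not found on PATH")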

loreto wrote this answer on 2023-01-05

Thanks! We'll use this to debug and circle back.

Lagoja wrote this answer on 2023-03-02

I've been tinkering with this recently, and got your example to work on WSL (not sure if you're using that, or Linux directly). I believe the key is that you need to pass the folder containing your NVIDIA drivers via the LD_LIBRARY_PATH variable.

The Devbox JSON that worked on my machine was the following:

{
  "packages": [
    "python310",
    "poetry",
    "stdenv.cc.cc.lib",
    "cudatoolkit"
  ],
  "shell": {
    "init_hook": [
      "export LD_LIBRARY_PATH=`pwd`/.devbox/nix/profile/default/lib:/usr/lib/wsl/lib"
    ]
  },
  "nixpkgs": {
    "commit": "f79ac848e3d6f0c12c52758c0f25c10c97ca3b62"
  }
}

Where /usr/lib/wsl/lib is the directory where the Windows NVIDIA drivers are linked in WSL2.
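
Once the shell picks that up, a quick sanity check from inside devbox shell (just a sketch mirroring the script above) is:

import torch

# If the driver libraries are on LD_LIBRARY_PATH, this should print True
# followed by the GPU name (e.g. the RTX 2060 from the original report).
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))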

Lagoja wrote this answer on 2023-03-15

Setting LD_LIBRARY_PATH to the following seems to work on both WSL2 and Ubuntu machines with NVIDIA drivers installed:

export LD_LIBRARY_PATH=`pwd`/.devbox/nix/profile/default/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/wsl/lib

#760 could include setting this variable automatically based on the user's OS.
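
A rough sketch of what that detection could look like (hypothetical, just for illustration; devbox itself is written in Go and #760 may take a different approach) is to check /proc/version for a Microsoft kernel:

import pathlib

# Hypothetical helper: pick driver library directories depending on whether
# we are running under WSL2 or native Linux. Paths are the ones mentioned above.
def driver_lib_dirs():
    dirs = ["/usr/lib/x86_64-linux-gnu"]  # native Linux driver libraries
    proc_version = pathlib.Path("/proc/version")
    if proc_version.exists() and "microsoft" in proc_version.read_text().lower():
        dirs.append("/usr/lib/wsl/lib")  # WSL2 links the Windows NVIDIA driver here
    return dirs

print(":".join(driver_lib_dirs()))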
