Enabling Nvidia GPU Acceleration in WSL2 (Arch Linux) and Podman

When using Podman for containerized development in a WSL2 environment, enabling Nvidia GPU acceleration can significantly improve performance for machine learning, deep learning, and other GPU-intensive workloads. This article provides a detailed guide to configuring Podman GPU support in an Arch Linux WSL2 environment.

Prerequisites

Before starting the configuration, please ensure your environment meets the following requirements:

  • Windows 10/11 with WSL2 enabled
  • CUDA-compatible Nvidia graphics card
  • Arch Linux WSL2 distribution installed
  • Podman installed

Configuration Steps

1. Install Nvidia Drivers on Windows

First, you need to install the latest Nvidia drivers on the Windows host. Please visit the Nvidia website to download and install the latest drivers suitable for your graphics card model.

2. Install nvidia-utils in WSL2

In the Arch Linux WSL2 environment, install the nvidia-utils package to provide the necessary GPU support:

sudo pacman -S nvidia-utils

This package provides the library files and tools needed to communicate with the Windows-side Nvidia drivers.
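As a quick sanity check, note that WSL2 itself mounts the Windows driver's user-space libraries into the distribution, conventionally under /usr/lib/wsl/lib (this path comes from WSL, not from nvidia-utils):

```shell
# WSL2 exposes the Windows-side Nvidia user-space libraries here;
# you should see libcuda.so and an nvidia-smi binary among them.
ls /usr/lib/wsl/lib
```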

3. Verify GPU Availability

After installation, you can verify that the GPU is correctly recognized with the following command:

nvidia-smi

If configured correctly, you should see output similar to the following, showing your graphics card information and current status:

Wed Jul 23 11:52:02 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.64.03              Driver Version: 576.88         CUDA Version: 12.9     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 2080 ...    On  |   00000000:01:00.0  On |                  N/A |
|  0%   53C    P0             65W /  260W |     725MiB /   8192MiB |      2%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A              32      G   /Xwayland                             N/A      |
+-----------------------------------------------------------------------------------------+

4. Configure Podman GPU Support

Ensure Podman is properly installed and configured. If not yet installed, you can use:

sudo pacman -S podman
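On many setups, the --device nvidia.com/gpu=all syntax used in the next step relies on the Container Device Interface (CDI). If Podman reports that it cannot resolve nvidia.com/gpu, you will likely also need the NVIDIA Container Toolkit to generate a CDI specification. A sketch, assuming the nvidia-container-toolkit package is available for your Arch setup:

```shell
# Assumption: nvidia-container-toolkit is packaged for your system
# (check the official repositories or the AUR).
sudo pacman -S nvidia-container-toolkit

# Generate the CDI spec that Podman consults for nvidia.com/gpu devices.
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Confirm which device names the spec exposes (e.g. nvidia.com/gpu=all).
nvidia-ctk cdi list
```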

5. Test GPU Acceleration Functionality

Use the following command to test whether GPU acceleration in Podman works correctly:

podman run --rm --device nvidia.com/gpu=all docker.io/nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi

If everything is configured correctly, this command will:

  1. Download the Nvidia CUDA base image
  2. Run the nvidia-smi command in the container
  3. Display GPU information, proving that the container can access the host’s GPU

Further Usage

After configuration is complete, you can run various GPU-accelerated applications in Podman containers, such as:

  • TensorFlow/PyTorch deep learning training
  • CUDA parallel computing programs
  • GPU-accelerated data processing tasks
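For example, a one-off check that a framework container can actually see the GPU might look like the following (the PyTorch image tag is an assumption; pick one that matches your CUDA version):

```shell
# Hypothetical example: verify CUDA visibility from inside a PyTorch container.
podman run --rm --device nvidia.com/gpu=all \
  docker.io/pytorch/pytorch:latest \
  python -c "import torch; print(torch.cuda.is_available())"
```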

Remember to add the --device nvidia.com/gpu=all parameter to the podman run command to enable GPU access. If you are using a compose file, add a devices entry instead:

devices:
  - nvidia.com/gpu=all
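In context, a minimal compose file might look like this sketch (the service name is illustrative, and it assumes your compose tool maps the devices entry onto Podman's --device flag):

```yaml
services:
  gpu-test:   # hypothetical service name
    image: docker.io/nvidia/cuda:11.0.3-base-ubuntu20.04
    command: nvidia-smi
    devices:
      - nvidia.com/gpu=all
```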