Description
NVIDIA Open GPU Kernel Modules Version
580.119.02
Please confirm this issue does not happen with the proprietary driver (of the same version). This issue tracker is only for bugs specific to the open kernel driver.
- I confirm that this does not happen with the proprietary driver package.
Operating System and Version
Fedora Linux 43 (KDE Plasma Desktop Edition)
Kernel Release
Linux homura 6.18.6-200.fc43.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Jan 18 18:57:00 UTC 2026 x86_64 GNU/Linux
Please confirm you are running a stable release kernel (e.g. not a -rc). We do not accept bug reports for unreleased kernels.
- I am running on a stable kernel release.
Hardware: GPU
GPU 0: NVIDIA RTX A400 (UUID: GPU-aeefd27a-9d0b-2ff2-aeb2-5ff1e89d9dfc)
Describe the bug
When a (linear) buffer is created using GBM on the NVIDIA driver, it can be imported successfully on a non-NVIDIA GPU. However, actually attempting to sample from the texture on the non-NVIDIA side fails during command submission (presumably while preparing/locking the BOs used for the submission).
AMDGPU hangs retrying infinitely, with this error looping in dmesg:
amdgpu: [drm] *ERROR* Not enough memory for command submission!
Intel eventually complains in stderr:
intel: the execbuf ioctl keeps returning ENOMEM
Other combinations work:
- GBM allocate on NVIDIA, import & Vulkan blit on NVIDIA, import & GL sample on NVIDIA
- GBM allocate on AMD, import & Vulkan blit on AMD, import & GL sample on NVIDIA
- GBM allocate on AMD, import & Vulkan blit on AMD, import & GL sample on AMD
- GBM allocate on AMD, import & Vulkan blit on NVIDIA, import & GL sample on AMD
- GBM allocate on Nouveau, import & Vulkan blit on Nouveau/NVK, import & GL sample on AMD
So the problem appears to be specific to NVIDIA-side GBM allocations: something about them makes them unusable by other drivers.
I first noticed this when trying to use libfunnel running on an NVIDIA card to share buffers with OBS via PipeWire. libfunnel works by doing a GBM allocation on the local (source) GPU, importing it into Vulkan by fd (on the same GPU), blitting to it, then sending the GBM dma-buf FD to the destination via PipeWire (where it is imported, possibly into a different GPU).
This has been reproduced by other people on other GPU combinations/systems.
To Reproduce
This is a reduced test case: https://gist.github.com/hoshinolina/9d0731cba9fad23562dda90106178cf8
It expects two DRM device names: one where the texture will be allocated, and one where it will be imported and then drawn using EGL (using external textures so it works on NVIDIA too):
gcc -o gbm-export-test gbm-export-test.c -lGL -lEGL -lgbm
./gbm-export-test [SRCDEV] [DSTDEV]
./gbm-export-test /dev/dri/renderD(nvidia) /dev/dri/renderD(nvidia): works
./gbm-export-test /dev/dri/renderD(amd) /dev/dri/renderD(amd): works
./gbm-export-test /dev/dri/renderD(amd) /dev/dri/renderD(nvidia): works
./gbm-export-test /dev/dri/renderD(nouveau) /dev/dri/renderD(amd): works
./gbm-export-test /dev/dri/renderD(nvidia) /dev/dri/renderD(amd): hangs with dmesg errors
(Note that, for simplicity, this test doesn't render anything to the texture on the source GPU; it just allocates it and leaves it uninitialized. This is not relevant to the bug: the case where I first discovered it involved a proper Vulkan blit on the source side.)
Bug Incidence
Always
nvidia-bug-report.log.gz
More Info
No response