On computers equipped with Thunderbolt 3+, it is possible to attach a desktop-grade external graphics card (eGPU) using a GPU enclosure. eGPU.io is a good resource, with a buyer's guide and a community forum. While some manual configuration (shown below) is needed for most modes of operation, Linux support for eGPUs is generally good.
The eGPU enclosure's Thunderbolt device may need to be authorized first after plugging it in (depending on your BIOS/UEFI firmware configuration). Follow Thunderbolt#User device authorization. If successful, the external graphics card should show up in the output of:
$ lspci | grep -E 'VGA|3D'
00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 620 (rev 07)   # internal GPU
1a:10.3 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050] (rev a1)   # external GPU
Depending on your computer, its firmware and enclosure firmware, Thunderbolt will limit host <-> eGPU bandwidth to some extent due to the number of PCIe lanes and OPI Mode:
# dmesg | grep PCIe
[19888.928225] pci 0000:1a:10.3: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x4 link at 0000:05:01.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
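These numbers follow from PCIe link arithmetic: effective bandwidth ≈ per-lane rate (GT/s) × lane count × encoding efficiency (8b/10b, i.e. ×0.8, for Gen1/2; 128b/130b for Gen3+). A quick sketch of the arithmetic for the example above (the kernel's exact figure of 126.016 Gb/s uses slightly different rounding):

```shell
# Effective bandwidth ≈ rate (GT/s) × lanes × encoding efficiency.
# Gen1 (2.5 GT/s) uses 8b/10b encoding; Gen3 (8 GT/s) uses 128b/130b.
awk 'BEGIN { printf "%.3f Gb/s\n", 2.5 * 4 * 0.8 }'           # limited x4 link: 8.000 Gb/s
awk 'BEGIN { printf "%.1f Gb/s\n", 8.0 * 16 * 128 / 130 }'    # capable x16 link: ~126 Gb/s
```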
A driver compatible with your GPU model should be installed:
If installed successfully, lspci -k should show that a driver has been associated with your card:
$ lspci -k
1a:10.3 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050] (rev a1)
        Subsystem: NVIDIA Corporation GP107 [GeForce GTX 1050]
        Kernel driver in use: nvidia
        Kernel modules: nouveau, nvidia_drm, nvidia
Right after completing installation steps, compute-only workloads like GPGPU#CUDA that do not need to display anything should work without any extra configuration.
The nvidia-smi utility should work with the proprietary NVIDIA driver, and proprietary NVIDIA NVENC/NVDEC should also work (without OpenGL interop).
This use-case should also support full hotplug. Hot-unplug should likewise be possible (likely depending on the drivers used). On NVIDIA, an active nvidia-persistenced is expected to prevent clean hot-unplug.
Multiple setups combining internal (iGPU) and external (eGPU) cards are possible, each with own advantages and disadvantages.
Xorg rendered on eGPU, PRIME display offload to iGPU
- Most programs that make use of the GPU run out-of-the-box on the eGPU: NVDEC (including OpenGL interop).
- Xorg only starts with the eGPU plugged in.
- Monitors attached to eGPU work out-of-the-box, PRIME display offload can be used for monitors attached to iGPU (i.e. internal laptop screen).
Main articles are PRIME#Discrete card as primary GPU and PRIME#Reverse PRIME. Also documented in NVIDIA driver docs Chapter 33. Offloading Graphics Display with RandR 1.4.
Use an Xorg configuration snippet like this one:
Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    BusID "PCI:26:16:3"                   # Edit according to lspci, translate from hex to decimal.
    Option "AllowExternalGpus" "True"     # Required for proprietary NVIDIA driver.
EndSection

Section "Module"
    # Load modesetting module for the iGPU, which should show up in XRandR 1.4 as a provider.
    Load "modesetting"
EndSection
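The hex-to-decimal BusID translation can be done with printf. A small sketch, using the example address 1a:10.3 from the lspci output above (substitute your own card's address):

```shell
# lspci prints bus:device.function in hexadecimal; Xorg's BusID wants decimal.
addr="1a:10.3"          # as printed by lspci; substitute your eGPU's address
bus=${addr%%:*}         # "1a"
rest=${addr#*:}         # "10.3"
dev=${rest%%.*}         # "10"
fn=${rest#*.}           # "3"
printf 'PCI:%d:%d:%d\n' "0x$bus" "0x$dev" "0x$fn"   # prints PCI:26:16:3
```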
There is no need to define Screen sections, as these are inferred automatically. The first Device defined will be considered primary.
To validate this setup, use xrandr --listproviders, which should display:
$ xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x1b8 cap: 0x1, Source Output crtcs: 4 outputs: 4 associated providers: 0 name:NVIDIA-0
Provider 1: id: 0x1f3 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 5 associated providers: 0 name:modesetting
To output to the internal laptop screen and/or other monitors attached to the iGPU, RandR 1.4 PRIME display offload can be used, with names taken from the xrandr --listproviders output above:
$ xrandr --setprovideroutputsource modesetting NVIDIA-0 && xrandr --auto
xrandr --auto is optional and may be substituted by any RandR-based display configuration tool. Its presence prevents an all-screens-black situation.
You may want to run this command before a display manager shows the login prompt or before the desktop environment starts, see Xrandr#Configuration and Xinit.
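For instance, a minimal ~/.xinitrc sketch (a hypothetical example; the provider names NVIDIA-0 and modesetting are taken from the xrandr --listproviders output above):

```shell
#!/bin/sh
# ~/.xinitrc sketch: set up PRIME display offload before the session starts.
# Guarded so it is a no-op when xrandr or a display is unavailable.
if command -v xrandr >/dev/null 2>&1 && [ -n "$DISPLAY" ]; then
    xrandr --setprovideroutputsource modesetting NVIDIA-0
    xrandr --auto
fi
# exec your-window-manager    # replace with your actual session command
```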
Vulkan may enumerate GPUs independently of Xorg, so in order to run, for example, vkcube in this setup, one may need to pass the --gpu_number 1 option. Alternatively, a layer that reorders GPUs during enumeration can be activated with the same effect:
$ __NV_PRIME_RENDER_OFFLOAD=1 vkcube
Xorg rendered on iGPU, PRIME render offload to eGPU
- Programs are rendered on iGPU by default, but PRIME render offload can be used to render them on eGPU.
- Xorg starts even with the eGPU disconnected, but render/display offload will not work until Xorg is restarted.
- Monitors attached to iGPU (i.e. internal laptop screen) work out-of-the-box, PRIME display offload can be used for monitors attached to eGPU.
Main article is PRIME#PRIME GPU offloading. Also documented in NVIDIA driver docs Chapter 34. PRIME Render Offload.
With many discrete GPU drivers, this mode should be the default without any manual Xorg configuration. If that does not work, or if you use the proprietary NVIDIA driver, use the following:
Section "Device"
    Identifier "Device0"
    Driver "modesetting"
EndSection

Section "Device"
    Identifier "Device1"
    Driver "nvidia"
    BusID "PCI:26:16:3"                   # Edit according to lspci, translate from hex to decimal.
    Option "AllowExternalGpus" "True"     # Required for proprietary NVIDIA driver.
EndSection
To validate this setup, use xrandr --listproviders, which should display:
$ xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x47 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 5 associated providers: 0 name:modesetting
Provider 1: id: 0x24a cap: 0x2, Sink Output crtcs: 4 outputs: 4 associated providers: 0 name:NVIDIA-G0
To run some_program on the eGPU, PRIME render offload can be used:
- for proprietary NVIDIA drivers:
$ __NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only __GLX_VENDOR_LIBRARY_NAME=nvidia some_program
- for proprietary NVIDIA drivers (convenience wrapper):
$ prime-run some_program
- for open-source drivers:
$ DRI_PRIME=1 some_program
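The prime-run wrapper is essentially a one-liner exporting the NVIDIA offload variables shown above. A sketch of an equivalent shell function (prime_run is a hypothetical name for illustration):

```shell
# Roughly what prime-run does: run the given command with the NVIDIA
# PRIME render offload environment variables set.
prime_run() {
    __NV_PRIME_RENDER_OFFLOAD=1 \
    __VK_LAYER_NV_optimus=NVIDIA_only \
    __GLX_VENDOR_LIBRARY_NAME=nvidia \
    "$@"
}

prime_run sh -c 'echo "$__GLX_VENDOR_LIBRARY_NAME"'   # prints nvidia
```

The variables only affect the wrapped command, not the calling shell.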
To output to monitors connected to the eGPU, RandR 1.4 PRIME display offload can again be used:
$ xrandr --setprovideroutputsource NVIDIA-G0 modesetting && xrandr --auto
NVIDIA drivers 460.27.04+ implement an optimization for a special case of combined render and display offloads:
- Added support for “Reverse PRIME Bypass”, an optimization that bypasses the bandwidth overhead of PRIME Render Offload and PRIME Display Offload in conditions where a render offload application is fullscreen, unredirected, and visible only on a given NVIDIA-driven PRIME Display Offload output. Use of the optimization is reported in the X log when verbose logging is enabled in the X server.
Separate Xorg instance for eGPU
Main article is Nvidia-xrun#External GPU setup.
Known issues with eGPUs on Xorg
- Hotplug is not supported with most discrete GPU Xorg drivers: the eGPU needs to be plugged in when Xorg starts. Logging out and back in should suffice to restart Xorg.
- Hot-unplug is not supported at all: doing so leads to system instability or outright freezes (as acknowledged in the NVIDIA docs).
Wayland support for eGPUs (or multiple GPUs in general) is currently much less tested, but should work in theory and with less manual configuration. Note that the particular Wayland compositor must explicitly support this.
There also seems to be preliminary support for GPU hotplug in some compositors: