Prime / Optimus laptops and multi-GPU systems

I’ve seen a lot of confusion around Optimus laptops, Prime and systems with multiple GPUs (like a desktop with a discrete Nvidia GPU and an embedded AMD GPU) and how to configure them. People struggle with variables, scripts and extra tools.

The truth is, there’s not much to configure: for a few years now, in the most common cases, everything has worked out of the box.

Let’s consider a very common case: a laptop with an Intel + Nvidia GPU combination (Dell Precision 5680, Nvidia Optimus):

$ lspci | grep -i vga
00:02.0 VGA compatible controller: Intel Corporation Raptor Lake-P [Iris Xe Graphics] (rev 04)
01:00.0 VGA compatible controller: NVIDIA Corporation AD104GLM [RTX 3500 Ada Generation Laptop GPU] (rev a1)

Then let’s consider the two most common driver combinations used to drive the GPUs:

  • Intel + Nouveau open source driver (DRI/DRI)
  • Intel open source driver + Nvidia proprietary driver (DRI/NVIDIA)

Power management

The system boots with the graphical output driven by the integrated Intel GPU (00:02.0) and the Nvidia GPU (01:00.0) is off.

$ cat /sys/bus/pci/devices/0000:{00:02.0,01:00.0}/power/runtime_status
active
suspended

A simple command that touches the PCI device, like lspci or nvidia-settings, is enough to wake up the Nvidia GPU for probing:

$ lspci > /dev/null 
$ cat /sys/bus/pci/devices/0000:{00:02.0,01:00.0}/power/runtime_status
active
active

A few seconds later, the GPU is suspended again:

$ cat /sys/bus/pci/devices/0000:{00:02.0,01:00.0}/power/runtime_status
active
suspended

The transition is always fast if no program is using the GPU: it usually takes just 4 or 5 seconds for it to turn off. For example, after exiting a game you immediately hear the fan spinning down as the GPU goes off.

This is the simplest way to check the power state of the GPU, both with the open source Nouveau driver and with the Nvidia proprietary driver.
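
If you want to watch the transition happen, a trivial loop over the sysfs file is enough (a minimal sketch; adjust the PCI address to match your system):

$ watch -n 1 cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status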

VGA Switcheroo (DRM drivers only)

If you are using the open source driver, there are a few added benefits in terms of control. The VGA Switcheroo files appear as soon as two GPU drivers and one handler have registered with vga_switcheroo. Since the multiple GPUs use a common framework, vga_switcheroo is enabled and we can manipulate the state of the devices:

$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS-Audio: :DynOff:0000:01:00.1
2:DIS: :DynOff:0000:01:00.0

After firing up the GPU for a workload, we can see the state reflected in the virtual file:

$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS-Audio: :DynOff:0000:01:00.1
2:DIS: :DynPwr:0000:01:00.0

The DIS-Audio device is the HDA sound card on the GPU, used to send audio to an external output (e.g. HDMI). It is covered by the same dynamic power control of the devices.

The configuration is flexible, so for example you could have two or more discrete GPUs and one extra audio controller for an eventual HDMI port.

You can also do some really low-level stuff. For example, on an old system where the display output is wired through a MUX rather than to both GPUs, this switches the display output to the discrete GPU:

$ echo MDIS | sudo tee /sys/kernel/debug/vgaswitcheroo/switch
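
MDIS is only one of the accepted commands. Per the kernel vga_switcheroo documentation, you can also power the unused GPU off and on, or queue a deferred switch of the display output to the discrete GPU for when it is no longer in use (sketches, written with tee so the redirection happens with root privileges):

$ echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch
$ echo ON | sudo tee /sys/kernel/debug/vgaswitcheroo/switch
$ echo DDIS | sudo tee /sys/kernel/debug/vgaswitcheroo/switch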

Selecting the GPU to use when running a program from the desktop

If running on Gnome or KDE, any application can be run on the discrete GPU directly from the desktop by right-clicking on its icon:

This is supported both in the case of multiple DRI/DRM devices and in combination with the Nvidia proprietary drivers. There is no visible difference between the two.

Both Gnome and KDE honor an extra setting that can be added to desktop entries to make an application prefer the discrete GPU. For example Steam provides this by default:

$ cat /usr/share/applications/steam.desktop | grep -i GPU
PrefersNonDefaultGPU=true
X-KDE-RunOnDiscreteGpu=true

Applications bearing those entries receive the opposite treatment: they run on the discrete GPU by default, and by right-clicking we can select the integrated GPU instead:
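
If you want the same treatment for your own launchers, just add the keys to the desktop entry. A minimal hypothetical example (Name and Exec are placeholders):

[Desktop Entry]
Type=Application
Name=My Game
Exec=mygame
PrefersNonDefaultGPU=true
X-KDE-RunOnDiscreteGpu=true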

Selecting the GPU to use with switcherooctl

The system comes with a userspace utility to manipulate the GPUs, which also prints the variables you can use to address a specific GPU. Prime / VGA Switcheroo case:

$ switcherooctl list
Device: 0
  Name:        Intel Corporation Raptor Lake-P [Iris Xe Graphics]
  Default:     yes
  Environment: DRI_PRIME=pci-0000_00_02_0

Device: 1
  Name:        NVIDIA Corporation AD104GLM [RTX 3500 Ada Generation Laptop GPU]
  Default:     no
  Environment: DRI_PRIME=pci-0000_01_00_0

The DRI_PRIME variable is never set by default; if nothing sets it, it is assumed to be 0 (the main integrated GPU in most cases).

In the case of Nvidia proprietary drivers, the tool is smart enough to set the appropriate Nvidia variables to achieve the same result:

$ switcherooctl list
Device: 0
  Name:        Intel Corporation Raptor Lake-P [Iris Xe Graphics]
  Default:     yes
  Environment: DRI_PRIME=pci-0000_00_02_0

Device: 1
  Name:        NVIDIA Corporation AD104GLM [RTX 3500 Ada Generation Laptop GPU]
  Default:     no
  Environment: __GLX_VENDOR_LIBRARY_NAME=nvidia __NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only

Think of switcherooctl as a replacement for setting the variables by hand. For example, if your system has 4 GPUs and you want to target the 4th one, these commands are equivalent:

$ switcherooctl launch -g 3 <command>
$ DRI_PRIME=3 <command>
$ DRI_PRIME=pci-0000_03_00_0 <command>

Selecting the GPU to use with environment variables

OpenGL context

OpenGL predates the era of multiple GPUs from multiple vendors, so by default OpenGL applications run on the GPU driving the main display, leaving the second GPU off:

$ glxinfo -B | grep string
OpenGL vendor string: Intel
OpenGL renderer string: Mesa Intel(R) Graphics (RPL-P)
OpenGL core profile version string: 4.6 (Core Profile) Mesa 24.0.8
OpenGL core profile shading language version string: 4.60
OpenGL version string: 4.6 (Compatibility Profile) Mesa 24.0.8
OpenGL shading language version string: 4.60
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 24.0.8
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
$ cat /sys/bus/pci/devices/0000:{00:02.0,01:00.0}/power/runtime_status
active
suspended

In the case of Intel + Nvidia proprietary drivers, we can use the variables consumed by the proprietary driver to select the GPU and let the system power on the extra GPU:

$ __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo -B | grep string
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: NVIDIA RTX 3500 Ada Generation Laptop GPU/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 555.42.02
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL version string: 4.6.0 NVIDIA 555.42.02
OpenGL shading language version string: 4.60 NVIDIA
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 555.42.02
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
$ cat /sys/bus/pci/devices/0000:{00:02.0,01:00.0}/power/runtime_status
active
active

If we are using open source drivers for both Intel + Nvidia (Nouveau), we can use the Mesa DRI variables to select the GPU:

$ DRI_PRIME=1 glxinfo -B | grep string
OpenGL vendor string: Mesa
OpenGL renderer string: NV194
OpenGL core profile version string: 4.3 (Core Profile) Mesa 24.0.8
OpenGL core profile shading language version string: 4.30
OpenGL version string: 4.3 (Compatibility Profile) Mesa 24.0.8
OpenGL shading language version string: 4.30
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 24.0.8
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20

We can now check the power state of the GPUs both ways, via the PCI devices or with VGA Switcheroo:

$ cat /sys/bus/pci/devices/0000:{00:02.0,01:00.0}/power/runtime_status
active
active
$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS-Audio: :DynOff:0000:01:00.1
2:DIS: :DynPwr:0000:01:00.0

VA-API (Video Acceleration API) context

As with OpenGL, by default VA-API loads the driver for the integrated GPU:

$ vainfo | grep version
libva info: VA-API version 1.21.0
libva info: Trying to open /usr/lib64/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_21
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.21 (libva 2.21.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 24.2.3 (Full Feature Build)
$ cat /sys/bus/pci/devices/0000:{00:02.0,01:00.0}/power/runtime_status
active
suspended

VA-API has its own set of variables for selecting which driver to use in the case of Intel + Nvidia proprietary drivers:

$ LIBVA_DRIVER_NAME=nvidia vainfo | grep version
libva info: VA-API version 1.21.0
libva info: User environment variable requested driver 'nvidia'
libva info: Trying to open /usr/lib64/dri/nvidia_drv_video.so
libva info: Found init function __vaDriverInit_1_0
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.21 (libva 2.21.0)
vainfo: Driver version: VA-API NVDEC driver [direct backend]
$ cat /sys/bus/pci/devices/0000:{00:02.0,01:00.0}/power/runtime_status
active
active

Again, with the open source stack, the DRI variable (or switcherooctl) is required in addition to the VA-API driver name:

$ DRI_PRIME=1 LIBVA_DRIVER_NAME=nouveau vainfo | grep version
libva info: VA-API version 1.21.0
libva info: User environment variable requested driver 'nouveau'
libva info: Trying to open /usr/lib64/dri/nouveau_drv_video.so
libva info: Found init function __vaDriverInit_1_21
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.21 (libva 2.21.0)
vainfo: Driver version: Mesa Gallium driver 24.0.8 for NV194
$ cat /sys/bus/pci/devices/0000:{00:02.0,01:00.0}/power/runtime_status
active
active
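
The same variables work for actual playback, not just for vainfo. For example, VA-API decoding with mpv on the discrete GPU with the open source stack (a sketch; the video path is a placeholder):

$ DRI_PRIME=1 LIBVA_DRIVER_NAME=nouveau mpv --hwdec=vaapi /path/to/video.file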

VDPAU context

VDPAU is pretty much dead: there is no support for Optimus/Prime laptops and no support for Wayland.

Vulkan or EGL context

Vulkan and EGL were designed with this use case in mind, and GPU selection is built into the APIs, so the correct GPU is usually already picked by the program using them. A program can query a particular extension to get an ordered list of GPUs, or use some other mechanism. Since this is performed by the program itself, there is not really a way to “force” one specific GPU.

For example, vkcube allows us to select the GPU:

$ vkcube --gpu_number 0 --c 20
Selected GPU 0: Intel(R) Graphics (RPL-P), type: IntegratedGpu
$ vkcube --gpu_number 1 --c 20
Selected GPU 1: NVIDIA RTX 3500 Ada Generation Laptop GPU, type: DiscreteGpu

Contrary to the OpenGL context, you can check with the following commands that you always get a list of GPUs, never information for a single GPU:

$ eglinfo -B
$ __NV_PRIME_RENDER_OFFLOAD=1 eglinfo -B
$ vulkaninfo --summary

There are some variables and programs that can influence how the GPUs are enumerated, but it’s not really a supported path: the application decides based on the information provided by the drivers and some predefined criteria.
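
For example, Mesa ships a device-select layer that reads the MESA_VK_DEVICE_SELECT variable and reorders the devices presented to Vulkan applications; it is a hint to well-behaved programs, not a hard override (a hedged sketch, assuming the layer is installed; the vendor/device pair is a placeholder you can read from the list output):

$ MESA_VK_DEVICE_SELECT=list vulkaninfo
$ MESA_VK_DEVICE_SELECT=<vendor-id>:<device-id> vkcube --c 20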

Forcing the usage of X on a specific GPU in a Wayland context

Everything described so far applies to Wayland as well. On top of that, Xwayland is started whenever an application that does not yet support Wayland is started in a Wayland desktop.

If you want to force the use of Xwayland for a program that supports both Wayland and X, then you just need to set an additional variable.

For example, depending on the context (DRI, Nvidia, etc), these are all equivalent:

$ XDG_SESSION_TYPE=X11 __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxgears
$ XDG_SESSION_TYPE=X11 DRI_PRIME=1 glxgears
$ XDG_SESSION_TYPE=X11 switcherooctl launch -g 1 glxgears

CUDA with unsupported GCC versions

When running the CUDA stack on a recent Fedora release, you’re very likely to hit a compatibility issue: the current GCC release is not yet supported by NVCC. This is quite easy to address, but not many people seem to know how.

At the time of writing, CUDA 12.4 supports GCC up to 13.x, while Fedora 40 ships with GCC 14.

For a few years now I’ve been shipping a cuda-gcc package that acts as a drop-in replacement compiler for NVCC. It can be installed along with CUDA and the drivers from the Nvidia or multimedia repository, or from a Fedora COPR if you are running the upstream CUDA packages provided by Nvidia.

This GCC version is hidden from the main path and is explicitly used by NVCC when compiling. Installing the cuda-gcc-c++ package creates profile entries in /etc/profile.d that just do this:

# dnf -y install cuda-gcc-c++
$ cat /etc/profile.d/cuda-gcc.sh 
export NVCC_PREPEND_FLAGS='-ccbin /usr/bin/cuda'

Log out and back in, or reload your profile, and you’re good to go.

This way, every time you invoke NVCC you are not using the system compiler but the one provided by the cuda-gcc package.
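
The same flag can also be passed manually for a single invocation, without relying on the profile script, using the same -ccbin value the package sets (saxpy.cu is a placeholder source file):

$ nvcc -ccbin /usr/bin/cuda -o saxpy saxpy.cu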

On a Red Hat Enterprise Linux based distribution you can achieve the same result by installing the developer toolset of your choice and activating its environment. This is usually not an issue, as NVCC is officially supported on those distributions.

Nvidia proprietary and open source kernel modules

With the latest batch of updates to the Nvidia and Multimedia repositories, I’ve added the ability to switch between the two implementations of the kernel modules currently available in the Nvidia driver for Linux.

For almost a year now, the Nvidia driver has shipped with two different implementations of the kernel modules, one proprietary and one open source. As of driver 545.x, the open source one is considered beta quality also for workstations, so it seems a good moment to start shipping it.

The open source implementation is supposed to be the only one kept in the future, but at the moment both are available and they differ in functionality. You can read about the main differences and the supported chips in the official documentation.

I did not want to introduce yet another variation of the kernel modules besides akmods, kABI and DKMS; that would have created even more confusion and lots of dependencies in the SPEC files. Instead, the new akmod and DKMS packages ship both sources (MIT/GPL and proprietary kernel modules) and let you switch between one and the other through a configuration file.

Considering that in the long run only the open source variant will remain, I wanted to make this as transparent as possible for users. Basically, if you don’t care and just want something that works, nothing has changed for you.

The two sources are referenced as they are inside the Nvidia run file: “kernel” for the original proprietary kernel modules and “kernel-open” for the new open source variant.

The following instructions show you how to switch between one implementation or the other.

DKMS

Check which version you have installed:

# modinfo -l nvidia
NVIDIA

Change the type of modules you want to use and trigger a rebuild and a reinstall:

# sed -i -e 's/kernel$/kernel-open/g' /etc/nvidia/kernel.conf
# dkms build -m nvidia/545.29.02 --force
# dkms install -m nvidia/545.29.02 --force

Now check again the license and you should see that it has changed to MIT/GPL:

# modinfo -l nvidia
Dual MIT/GPL
# reboot

To switch back, change the configuration again and trigger the same rebuild and install process:

# sed -i -e 's/kernel-open$/kernel/g' /etc/nvidia/kernel.conf
# dkms build -m nvidia/545.29.02 --force
# dkms install -m nvidia/545.29.02 --force
# reboot

akmods

Check which version you have installed:

# modinfo -l nvidia
NVIDIA

Change the type of modules you want to use and trigger a rebuild and a reinstall:

# sed -i -e 's/kernel$/kernel-open/g' /etc/nvidia/kernel.conf
# akmods --rebuild

Now check again the license and you should see that it has changed to MIT/GPL:

# modinfo -l nvidia
Dual MIT/GPL
# reboot

To switch back, change the configuration again and trigger the same rebuild process:

# sed -i -e 's/kernel-open$/kernel/g' /etc/nvidia/kernel.conf
# akmods --rebuild
# reboot

Wayland/modesetting on Nvidia

With the latest Nvidia drivers it seems that modesetting and Wayland work fine for Gnome and GDM.

Console text is still rendered on a normal console, but upon boot you get the native screen resolution in Plymouth, and then you can log in to both X.org and Wayland sessions.

How to test? Make sure that you have the following line enabled for the nvidia-drm module:

# cat /usr/lib/modprobe.d/nvidia.conf | grep drm
options nvidia-drm modeset=1
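
If the option is not set on your system, a drop-in under /etc/modprobe.d (which, unlike files in /usr/lib, survives driver updates) is one way to enable it; the file name here is just an example, and the initramfs is regenerated in case the module is loaded early during boot:

# echo "options nvidia-drm modeset=1" > /etc/modprobe.d/nvidia-drm-modeset.conf
# dracut -f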

And then make sure to comment out the following line in the udev rules supplied by GDM:

# cat /usr/lib/udev/rules.d/61-gdm.rules | grep -i nvidia
# disable Wayland when using the proprietary nvidia driver
#DRIVER=="nvidia", RUN+="/usr/libexec/gdm-disable-wayland"

Then reboot, and you will log in to a Wayland session by default:

# cat /sys/module/nvidia_drm/parameters/modeset 
Y
# cat /sys/module/nvidia_drm/version 
450.57
$ lsmod | grep nvidia
nvidia_drm             57344  4
nvidia_modeset       1187840  3 nvidia_drm
nvidia_uvm           1130496  0
nvidia              19726336  208 nvidia_uvm,nvidia_modeset
drm_kms_helper        249856  1 nvidia_drm
drm                   618496  7 drm_kms_helper,nvidia_drm
$ env | grep XDG_SESSION_TYPE
XDG_SESSION_TYPE=wayland
$ lspci | grep -i vga
01:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] (rev a1)

HandBrake with NVENC support

After switching from libav to FFmpeg, the HandBrake developers quickly added NVENC encoder support to HandBrake. You can now select the NVENC H.264/H.265 encoders in the drop-down menu or with the command line interface:

$ HandBrakeCLI --help | grep -A12 "Select video encoder"
   -e, --encoder   Select video encoder:
                               x264
                               x264_10bit
                               nvenc_h264
                               x265
                               x265_10bit
                               x265_12bit
                               nvenc_h265
                               mpeg4
                               mpeg2
                               VP8
                               VP9
                               theora

With most GPUs I tried, even the slowest and most costly preset does not fully utilize the video engine. Encoding times are cut to roughly a quarter.
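
A minimal command line invocation looks like this (a sketch; file names and the quality value are placeholders):

$ HandBrakeCLI -i input.mkv -o output.mkv -e nvenc_h264 -q 22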

Awesome! No more FFmpeg command line black magic. You can now comfortably create your preset in the HandBrake GUI and then use HandBrakeCLI through SSH on your awesome Plex Media Server. The build is available for both CentOS/RHEL 7 and Fedora.

HandBrake with FFmpeg, no more Nvidia 32 bit drivers

HandBrake has been updated again to track the master branch, as it now uses FFmpeg 4 instead of libav 12. This could lead to other improvements, like NVENC/CUDA support, more formats, etc.

Starting with Nvidia driver version 396.24 there is no more 32 bit driver support; the driver is 64 bit only. The 32 bit libraries are still included, so Steam and other applications will keep on being supported.

In a few days the updated drivers will be pushed to the Fedora repositories, and at the same time I will remove the i386 folder from the repositories. Some i386 packages will still be provided in the x86_64 folder, as is already the case for Fedora 28 and CentOS/RHEL 7; the packages that will be kept are mostly multilib library packages.

The same will happen to CentOS/EPEL 6 the moment a new 64 bit only driver series is nominated as “Long Lived”.

The Spotify repository also no longer has i386 support, as upstream stopped providing updated clients. Judging from the web server logs, almost no one is using an i686 Fedora in conjunction with the repositories hosted here.

CUDA 9.0, cuDNN 7.0 and Wayland support in Fedora 27

The Nvidia repository now contains packages for Fedora 27. These include the release candidate of CUDA 9 along with cuDNN 7.0, the only cuDNN version supported with CUDA 9 at the time of writing.

The updated cuDNN 7.0 library has also been added to the other branches, which means it will be automatically upgraded from version 6.0 to 7.0. If you still need one of the previous versions, just remove it and install one of the compatibility packages:

# dnf list cuda-cudnn*
Installed Packages
cuda-cudnn.x86_64                   1:7-1.fc26         @fedora-nvidia
Available Packages
cuda-cudnn-devel.x86_64             1:7-1.fc26         fedora-nvidia 
cuda-cudnn5.1.x86_64                1:5.1-2.fc26       fedora-nvidia 
cuda-cudnn5.1-devel.x86_64          1:5.1-2.fc26       fedora-nvidia 
cuda-cudnn6.0.x86_64                1:6.0-1.fc26       fedora-nvidia 
cuda-cudnn6.0-devel.x86_64          1:6.0-1.fc26       fedora-nvidia 

CUDA 9 supports GCC 6.x and Clang 3.9, so when it is officially released it will cover the Fedora 25 and RHEL/CentOS compilers. On Fedora 27 there will be the usual need for a GCC compatibility package (like the compat-gcc53 package currently in the repository), as GCC is at version 7 and Clang at version 4.0.

I will try to provide a compat-gcc64 for Fedora 27+ at the time of the official CUDA 9 release.

Regarding the drivers: on Fedora 27, where Mutter 3.25+ is available, the modesetting part of the Nvidia drivers has been enabled by default. This means that at login you can just select “GNOME” to run Gnome on Wayland. Please note that X 3D programs running on XWayland might not work properly.

Plex Media Player and MPV with CUDA

Plex Media Player is now part of the multimedia repository for Fedora 25+. It works as a standalone player and also as the main interface for an HTPC setup, where the “TV interface” starts as the main thing when you power up your system.

Plex Media Player uses MPV under the hood, so any compile option that was enabled for MPV is now also part of Plex Media Player, as it uses the same libraries already available in the multimedia repository.

If you are using Gnome Software, you will also find it in the software selection screens.

To install it on Fedora, just perform the following commands:

dnf -y install plex-media-player

You will then find it along with the other applications in your menu.

Normal desktop interface

To get to the normal desktop interface just look for the Plex Media Player icon in your menu. You will be greeted with the familiar Plex web interface, with the main difference being that the player is local through the MPV library.

Enabling Plex Media Player startup at boot

If you are planning an HTPC installation and would like Plex Media Player to start instead of the login screen the moment you boot the device, execute the following commands as root:

dnf install plex-media-player-session
systemctl set-default plex-media-player
echo "allowed_users = anybody" >> /etc/X11/Xwrapper.config

The first command installs the required files (services, targets and PolicyKit overrides). The second instructs the system to boot by default into the Plex Media Player target, that is, X immediately followed by the player itself. The third allows the X server to be started by the Plex Media Player user; otherwise only root or users logged in through a console can start it.

You will be greeted with the TV interface just after boot:

If you want to go back to your normal installation (let’s say Gnome), revert the changes (again, type the following commands as root):

systemctl set-default graphical
sed -i -e '/allowed_users = anybody/d' /etc/X11/Xwrapper.config
rpm -e plex-media-player-session

MPV with CUDA

This has been available for a long time already, but with FFmpeg 3.3, dynamic loading of CUDA support is also enabled in MPV, so the hard dependency on the CUDA library is gone and the binaries load the library dynamically:

$ strings /usr/bin/mpv | grep libcuda
libcuda.so.1
$ strings /usr/lib64/libmpv.so.1.25.0 | grep libcuda
libcuda.so.1

So, assuming you have the Nvidia driver installed with the appropriate CUDA parts, you can play a video with the following command line:

mpv --hwdec=cuda /path/to/video.file

Then check with nvidia-smi or with the Nvidia control panel whether the video engine is being utilized:

If you want to enable that by default, just make sure your configuration file has something like this inside:

$ cat ~/.config/mpv/mpv.conf 
#hwdec=vdpau
#vo=vdpau
hwdec=cuda

Nvidia driver improvements for Fedora 25+

The Nvidia repository now contains all the remaining bits of the work done by Hans de Goede.

Making an Optimus laptop work as expected with the Nvidia drivers should be much less painful than it was a few years ago and most of the things should work out of the box on Fedora 25+.

Just enable the repository on a pristine Fedora installation, and after a while you should be able to search for Nvidia, CUDA, GeForce or Quadro to make the driver, control panel and other programs appear in your Gnome Software search:

Optimus laptops

The driver should install and operate cleanly whether you are installing it on a system with one or more discrete Nvidia cards or on an Optimus laptop with an Intel and an Nvidia card. There is nothing to do to enable or configure Optimus.

This goes to the point that, with the drivers installed, you can even turn Optimus on or off in your system BIOS (if your laptop allows that), and the only difference you should see is an additional VGA card enabled in your system (check with lspci), with the Nvidia control panel switching between a PRIME display, like in this picture:

And a normal RandR managed one, like in this one:

Everything else should be no different from your normal experience.

Limitations

Nvidia driver

The limitations are the same as those of the Nvidia driver itself: if you are running it on an Optimus laptop, the Nvidia card can never power off, which unfortunately means higher power consumption. If you have an Optimus laptop and absolutely need the proprietary drivers, my suggestion is still to disable Optimus in the BIOS.

OSS stack

On the contrary, if you use the OSS stack (nouveau/intel), the second card can be powered off when no application is running on it and no display is directly connected to one of the card’s outputs. That’s the best reason to use the OSS drivers if you’re not doing serious gaming or 3D work:

$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS: :DynOff:0000:01:00.0

You also get the nifty Gnome selection menu for running your game on the discrete card, which is really cool:

It will power up the video card just before launching the process. Launching a program through that menu entry is the same as starting it from the command line with the DRI_PRIME variable set. For example, the equivalent of the above would be:

$ DRI_PRIME=1 quake3 &
$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS: :DynPwr:0000:01:00.0

As you can see, the discrete video card is turned on. For Steam, you still need to edit each of your games to run on the Nvidia card:

SLI systems

SLI is now enabled by default with the Auto profile; there’s nothing to do if you have an SLI system. If you need a different SLI option (AA, SFR, etc.), just override it in the X.org configuration files.
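
For example, a snippet along these lines in a file under /etc/X11/xorg.conf.d selects the AA profile (a sketch; the Identifier is arbitrary and a BusID may be needed on some setups):

Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
    Option     "SLI" "AA"
EndSection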

Nouveau fallback

With the new expanded OutputClass support for X, as carried out by Hans, it’s now super easy to fall back to the OSS stack if the proprietary Nvidia driver somehow does not work. No user space component is touched: as soon as the Nvidia kernel module is not loaded (check /sys/module/nvidia), the desktop starts with the normal OSS components you get with a default installation. Thanks to all the work done on libglvnd, the libraries loaded are the correct ones for the driver you are running.

This means that the performance of the Nvidia card will be abysmal, but you still get a nice desktop and a browser to Google around for answers on how to fix it :).

Upgrade path from Nvidia CUDA, ELRepo and RPMFusion repositories

The current packages should allow you to upgrade if you have any Nvidia component installed from one of the mentioned repositories. All upgrade paths, obsoletes and package renames should be taken into account.

This has worked well for a few years, and actually helped me a lot in migrating some installed CUDA clusters to the packages hosted here. As part of ongoing discussions with a few parties (mostly Red Hat), this is going to disappear, to later allow an upgrade path in the opposite direction, towards one of the other repositories (RPMFusion/Nvidia).

As of the 15th of May, all Nvidia packages will be marked with Conflicts instead of Obsoletes/Provides for all the other repositories out there. I will update the installation and repository pages accordingly. If you have anything installed from the RPMFusion or ELRepo repositories, or from Nvidia’s CUDA repository, and want to switch to the packages here after the 15th of May, you must “wildcard remove” all Nvidia and CUDA packages on your system before proceeding with the installation.

I’m not planning to remove any other feature in terms of capability or packaging option.

Compatibility GCC 5.3.1 package for Fedora

As some might have noticed, for a few days now there has been a new compat-gcc-53 package in the Fedora repositories. It is only intended for compiling CUDA programs on Fedora, where the latest update to Clang 3.9 broke the last compiler compatible with CUDA 8.

$ rpm -ql compat-gcc-53 compat-gcc-53-c++ | grep /bin/
/usr/bin/gcc53
/usr/bin/gcov53
/usr/bin/g++53

If you need to build a package using it, you can check the Blender and CCMiner packages in the multimedia repository for examples:

https://github.com/negativo17/blender/blob/master/blender.spec#L57-L61
https://github.com/negativo17/blender/blob/master/blender-2.78a-cuda.patch#L19-L23
https://github.com/negativo17/ccminer/blob/master/ccminer.spec#L75-L82

This way, I was able to provide the Blender package with CUDA support on Fedora 26 even after the Clang update from 3.8 to 3.9.

The package is also available as a COPR repository if you prefer to use official Nvidia CUDA packages instead of the ones provided here.

To do list

Figure out what to do with the PRIME Synchronization configuration:

https://github.com/negativo17/nvidia-driver/issues/13
https://github.com/negativo17/nvidia-driver/blob/master/nvidia.conf#L2-L5

Reports have been mixed so far. On my systems (including an SLI one) it works fine.

Big multimedia repository update (CUDA enablements, rebases, new software)

Merging of the Nvidia repository into Multimedia

The whole multimedia repository has been rebased to recent releases, and it now features FFmpeg 3.2 as the foundation. Most of the programs that support some Nvidia integration are now enabled and compiled with support for CUDA/NVENC/CUVID, leveraging the previous reorganization of CUDA 8 into the various subpackages.

This means that all the Nvidia packages are now included in this repository as well, so if you have an Nvidia card and are interested in both repositories, you can have just the multimedia repository enabled. If you only want the Nvidia stuff (as enabled in Fedora 25), it is still available as a separate repository, and that will not change.

Why all of this? Because I can’t keep them separated anymore. The Nvidia repository can exist on its own, but the multimedia one can’t, due to the dependencies and the constant rebases (also of main Fedora and CentOS/RHEL packages). Use the Nvidia repository alone if that is all you need, or the multimedia one if you need everything else.

The repository is now also exposed at this URL, and contains Delta RPM support:

https://negativo17.org/repos/multimedia

All repository files and configurations have been updated, which means that in the future this will be the place where the metadata and repository information are published, and any new installation will get the repository from there. If you are reading this blog post, you can switch now. I will add a negativo17-release package soon, along with a few mirrors; I’m currently sorting out the details with the mirror owners.

FFmpeg and other CUDA enablements

To make proper use of the Nvidia hardware encoding features (NVENC/CUVID) and of CUDA kernel support (i.e. Blender GPU rendering) in the various programs, you need the Nvidia driver installed (nvidia-driver-cuda); for the Nvidia Performance Primitives you also require the NPP library package (cuda-npp).

This means that for most people NOT requiring CUDA support or not using an Nvidia video card, the following 2 packages will be installed anyway:

$ ls -alghs nvidia-driver-cuda-libs*.rpm cuda-npp*.rpm 
92M -rw-r--r-T. 1 mock 92M Nov 16 12:35 cuda-npp-8.0.44-6.fc25.x86_64.rpm
22M -rw-r--r-T. 1 mock 22M Nov 19 15:00 nvidia-driver-cuda-libs-375.20-1.fc25.x86_64.rpm

Both packages contain just libraries, and they will sit on your system like the many other multimedia codec libraries you don’t actually need. For example, with most multimedia programs you get Xvid libraries for opening Xvid files, even though the format is pretty much abandoned. Having them installed does not enable any unwanted feature in your system. Also, the NPP libraries should decrease roughly 50% in size in one of the next CUDA updates, as the monolithic version of the library is being deprecated in favor of split functionality.

There are some patches being evaluated to make those libraries loadable at runtime, but they have not been merged yet and there’s no guarantee they ever will be. Also, they are available for FFmpeg but not for all the other programs where support has been enabled; so depending on your installation, you might get the libraries anyway.

As of today, from the Multimedia repository the following programs have been enabled with some Nvidia hardware enablement:

  • MPV (video decoding through CUVID)
  • FFmpeg (encoding through NVENC, decoding through CUVID and filtering through CUDA NPP)
  • Avidemux (encoding through NVENC)
  • GStreamer (NVENC plugin)
  • Blender (GPU rendering)

VDPAU for decoding was already enabled where possible.
Of course, anything that uses FFmpeg (like the GStreamer plugins) could theoretically benefit from the same enablements as FFmpeg itself:

$ for i in encoders decoders filters; do
    echo $i:; ffmpeg -hide_banner -${i} | egrep -i "npp|cuvid|nvenc|cuda"
done
encoders:
 V..... h264_nvenc           NVIDIA NVENC H.264 encoder (codec h264)
 V..... nvenc                NVIDIA NVENC H.264 encoder (codec h264)
 V..... nvenc_h264           NVIDIA NVENC H.264 encoder (codec h264)
 V..... nvenc_hevc           NVIDIA NVENC hevc encoder (codec hevc)
 V..... hevc_nvenc           NVIDIA NVENC hevc encoder (codec hevc)
decoders:
 V..... h263_cuvid           Nvidia CUVID H263 decoder (codec h263)
 V..... h264_cuvid           Nvidia CUVID H264 decoder (codec h264)
 V..... hevc_cuvid           Nvidia CUVID HEVC decoder (codec hevc)
 V..... mjpeg_cuvid          Nvidia CUVID MJPEG decoder (codec mjpeg)
 V..... mpeg1_cuvid          Nvidia CUVID MPEG1VIDEO decoder (codec mpeg1video)
 V..... mpeg2_cuvid          Nvidia CUVID MPEG2VIDEO decoder (codec mpeg2video)
 V..... mpeg4_cuvid          Nvidia CUVID MPEG4 decoder (codec mpeg4)
 V..... vc1_cuvid            Nvidia CUVID VC1 decoder (codec vc1)
 V..... vp8_cuvid            Nvidia CUVID VP8 decoder (codec vp8)
 V..... vp9_cuvid            Nvidia CUVID VP9 decoder (codec vp9)
filters:
 ... hwupload_cuda     V->V       Upload a system memory frame to a CUDA device.
 ... scale_npp         V->V       NVIDIA Performance Primitives video scaling and format conversion

I think this will be much appreciated by those of you already using CUDA for deep learning and FFmpeg to process data 🙂
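
As a concrete example, a full hardware transcode that decodes through CUVID and encodes through NVENC could look like this (a sketch; file names are placeholders):

$ ffmpeg -c:v h264_cuvid -i input.mp4 -c:v h264_nvenc output.mp4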

Rebases: FFmpeg, HandBrake, VLC, OpenH264, WebP (CentOS/RHEL), MPV.

A note on Blender: Blender with CUDA support is still at 2.78 built with CUDA 7.5, and not 2.78a built with CUDA 8; so no Nvidia Pascal GPU support. I’m working on it.

GNOME Software integration

Most of the graphical software is now enabled in GNOME Software for Fedora 25, meaning that you can search for it with a keyword and, if you have the repository enabled, it will just pop up:

(Screenshots: HandBrake, MakeMKV and VLC in GNOME Software.)

There are still some packages that need AppStream metadata, but that will come.

As usual, feedback, bugs and comments are welcome.