Bumblebee
Bumblebee is an effort to make NVIDIA Optimus enabled laptops work in GNU/Linux systems. Optimus involves two graphics cards with different power consumption profiles working in a layered way and sharing a single framebuffer.
Note: Bumblebee has significant performance issues[1][2]. See NVIDIA Optimus for alternative solutions.
Bumblebee: Optimus for Linux
Optimus Technology is a hybrid graphics implementation without a hardware multiplexer. The integrated GPU manages the display, while the dedicated GPU handles the most demanding rendering and passes the result to the integrated GPU to be displayed. When the laptop is running on battery, the dedicated GPU is turned off to save power and prolong battery life. The setup has also been tested successfully on desktop machines with Intel integrated graphics and an NVIDIA dedicated graphics card.
Bumblebee is a software implementation comprising two parts:
- Render programs off-screen on the dedicated video card and display them on the screen using the integrated video card. This bridge is provided by VirtualGL or primus (read further) and connects to an X server started for the discrete video card.
- Disable the dedicated video card when it is not in use (see the #Power management section)
It tries to mimic the behavior of Optimus technology: using the dedicated GPU for rendering when needed and powering it down when not in use. The present releases only support on-demand rendering; automatically starting a program with the discrete video card based on workload is not implemented.
Installation
Before installing Bumblebee, check your BIOS and activate Optimus (older laptops call it «switchable graphics») if possible (the BIOS does not have to provide this option). If neither «Optimus» nor «switchable» appears in the BIOS, still make sure both GPUs will be enabled and that the integrated graphics (igfx) is the initial (primary) display. The display should be connected to the onboard integrated graphics, not the discrete graphics card. If integrated graphics had previously been disabled and discrete graphics drivers installed, be sure to remove /etc/X11/xorg.conf or the conf file in /etc/X11/xorg.conf.d related to the discrete graphics card.
- bumblebee — The main package providing the daemon and client programs.
- mesa — An open-source implementation of the OpenGL specification.
- An appropriate version of the NVIDIA driver, see NVIDIA#Installation.
- Optionally install xf86-video-intel — Intel Xorg driver.
For 32-bit application support, enable the multilib repository and install:
- lib32-virtualgl — A render/display bridge for 32 bit applications.
- lib32-nvidia-utils or lib32-nvidia-340xx-utilsAUR (match the version of the regular NVIDIA driver).
In order to use Bumblebee, it is necessary to add your regular user to the bumblebee group:
# gpasswd -a user bumblebee
Also enable bumblebeed.service. Reboot your system and follow #Usage.
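For example, the service can be enabled with the standard systemctl invocation:
# systemctl enable bumblebeed.service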
- The bumblebee package will install a kernel module blacklist file that prevents the nvidia-drm module from loading on boot. Remember to uninstall it if you later switch to another solution.
- The package does not blacklist the nvidiafb module. You probably do not have it installed, because the default kernels do not ship it. However, with other kernels you must explicitly blacklist it too, otherwise optirun and primusrun will not run. See FS#69018.
Usage
Test
Install mesa-utils and use glxgears to test if Bumblebee works with your Optimus system:
$ optirun glxgears -info
If it fails, try the following command (from virtualgl):
$ optirun glxspheres64
If the window with animation shows up, Optimus with Bumblebee is working.
Note: If glxgears fails but glxspheres64 works, replace glxgears with glxspheres64 in the examples below.
General usage
$ optirun [options] application [application-parameters]
For example, start Windows applications with Optimus:
$ optirun wine application.exe
For another example, open NVIDIA Settings panel with Optimus:
$ optirun -b none nvidia-settings -c :8
Note: A patched version of nvdock AUR is available in the package nvdock-bumblebee AUR .
For a list of all available options, see optirun(1) .
Configuration
You can configure the behaviour of Bumblebee to fit your needs. Fine tuning such as speed optimization and power management can be configured in /etc/bumblebee/bumblebee.conf.
Optimizing speed
One disadvantage of the off-screen rendering methods is performance. The following table gives a rough overview of a Lenovo ThinkPad T480 in an eGPU setup with an NVIDIA GTX 1060 6GB and the unigine-heaven AUR benchmark (1920×1080, max settings, 8x AA):
| Command | Display | FPS | Score | Min FPS | Max FPS |
|---|---|---|---|---|---|
| optirun unigine-heaven | internal | 20.7 | 521 | 6.9 | 26.6 |
| primusrun unigine-heaven | internal | 36.9 | 930 | 15.3 | 44.1 |
| unigine-heaven | internal in Nvidia-xrun | 51.3 | 1293 | 8.4 | 95.6 |
| unigine-heaven | external in Nvidia-xrun | 56.1 | 1414 | 8.4 | 111.9 |
Using VirtualGL as bridge
Bumblebee renders frames for your Optimus NVIDIA card in an invisible X server with VirtualGL and transports them back to your visible X server. Frames are compressed before they are transported; this saves bandwidth and can be used to tune Bumblebee for speed:
To use another compression method for a single application:
$ optirun -c compress-method application
The compression method affects performance in terms of GPU/CPU usage: compressed methods mostly load the CPU, while uncompressed methods mostly load the GPU.
Here is a performance table tested with ASUS N550JV laptop and benchmark app unigine-heaven AUR :
| Command | FPS | Score | Min FPS | Max FPS |
|---|---|---|---|---|
| optirun unigine-heaven | 25.0 | 630 | 16.4 | 36.1 |
| optirun -c jpeg unigine-heaven | 24.2 | 610 | 9.5 | 36.8 |
| optirun -c rgb unigine-heaven | 25.1 | 632 | 16.6 | 35.5 |
| optirun -c yuv unigine-heaven | 24.9 | 626 | 16.5 | 35.8 |
| optirun -c proxy unigine-heaven | 25.0 | 629 | 16.0 | 36.1 |
| optirun -c xv unigine-heaven | 22.9 | 577 | 15.4 | 32.2 |
Note: Lag spikes occurred when jpeg compression method was used.
To use a standard compression for all applications, set the VGLTransport to compress-method in /etc/bumblebee/bumblebee.conf :
/etc/bumblebee/bumblebee.conf
[...]
[optirun]
VGLTransport=proxy
[...]
You can also play with the way VirtualGL reads back the pixels from your graphics card. Setting the VGL_READBACK environment variable to pbo should increase performance. Compare the following:
PBO should be faster:
VGL_READBACK=pbo optirun glxgears
The default value is sync:
VGL_READBACK=sync optirun glxgears
Note: CPU frequency scaling will directly affect render performance.
Primusrun
Note: Since compositing hurts performance, invoking primus when a compositing WM is active is not recommended. See #Primus issues under compositing window managers.
primusrun (from primus ) is becoming the default choice, because it consumes less power and sometimes provides better performance than optirun / virtualgl . It may be run separately, but it does not accept options as optirun does. Setting primus as the bridge for optirun provides more flexibility.
For 32-bit applications support on 64-bit machines, install lib32-primus (multilib must be enabled).
You can either run it separately:
$ primusrun glxgears
Or as a bridge for optirun. The default configuration sets virtualgl as the bridge. Override that on the command line:
$ optirun -b primus glxgears
Alternatively, set Bridge=primus in /etc/bumblebee/bumblebee.conf so that you do not have to specify it on the command line.
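For example, the relevant part of the file would then contain (a minimal sketch; other options keep their defaults):
/etc/bumblebee/bumblebee.conf
[optirun]
Bridge=primus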
Tip: Refer to #Primusrun mouse delay (disable VSYNC) if you want to disable VSYNC . It can also remove mouse input delay lag and slightly increase the performance.
Pvkrun
pvkrun from the package primus_vk is a drop-in replacement for primusrun that enables running Vulkan-based applications. A quick check can be done with vulkaninfo from vulkan-tools.
$ pvkrun vulkaninfo
Power management
This article or section is a candidate for merging with Hybrid graphics#Using bbswitch.
Notes: This section talks only about bbswitch which is not specific to Bumblebee. (Discuss in Talk:Bumblebee)
The goal of the power management feature is to turn off the NVIDIA card when it is not used by Bumblebee any more. If bbswitch (for linux ) or bbswitch-dkms (for linux-lts or custom kernels) is installed, it will be detected automatically when the Bumblebee daemon starts. No additional configuration is necessary. However, bbswitch is for Optimus laptops only and will not work on desktop computers. So, Bumblebee power management is not available for desktop computers, and there is no reason to install bbswitch on a desktop. (Nevertheless, the other features of Bumblebee do work on some desktop computers.)
To manually turn the card on or off using bbswitch, write into /proc/acpi/bbswitch:
# echo OFF > /proc/acpi/bbswitch
# echo ON > /proc/acpi/bbswitch
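The current power state can be checked by reading the same file; bbswitch reports the card's PCI address followed by ON or OFF (the address below is just an example):
$ cat /proc/acpi/bbswitch
0000:01:00.0 OFF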
Default power state of NVIDIA card using bbswitch
The default behavior of bbswitch is to leave the card power state unchanged. bumblebeed does disable the card when started, so the following is only necessary if you use bbswitch without bumblebeed.
Set load_state and unload_state module options according to your needs (see bbswitch documentation).
/etc/modprobe.d/bbswitch.conf
options bbswitch load_state=0 unload_state=1
To run bbswitch without bumblebeed on system startup, do not forget to add bbswitch to a file in /etc/modules-load.d, as explained in Kernel module#systemd.
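A minimal sketch of such a file (the file name is arbitrary, only the module name matters):
/etc/modules-load.d/bbswitch.conf
bbswitch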
Enable NVIDIA card during shutdown
On some laptops, the NVIDIA card may not correctly initialize during boot if the card was powered off when the system was last shutdown. Therefore the Bumblebee daemon will power on the GPU when stopping the daemon (e.g. on shutdown) due to the (default) setting TurnCardOffAtExit=false in /etc/bumblebee/bumblebee.conf . Note that this setting does not influence power state while the daemon is running, so if all optirun or primusrun programs have exited, the GPU will still be powered off.
When you stop the daemon manually, you might want to keep the card powered off while still powering it on on shutdown. To achieve the latter, add the following systemd service (if using bbswitch ):
/etc/systemd/system/nvidia-enable.service
[Unit]
Description=Enable NVIDIA card
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo ON > /proc/acpi/bbswitch'

[Install]
WantedBy=shutdown.target
Then enable the nvidia-enable.service unit.
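For example:
# systemctl enable nvidia-enable.service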
Enable NVIDIA card after waking from suspend
The bumblebee daemon may fail to activate the graphics card after suspending. A possible fix involves setting bbswitch as the default method for power management in /etc/bumblebee/bumblebee.conf :
/etc/bumblebee/bumblebee.conf
[driver-nvidia]
PMMethod=bbswitch

[driver-nouveau]
PMMethod=bbswitch
Note: This fix seems to work only after rebooting the system. Restarting the bumblebee service is not enough.
If the above fix fails, try the following command:
# echo 1 > /sys/bus/pci/rescan
To rescan the PCI bus automatically after a suspend, create a script as described in Power management#Hooks in /usr/lib/systemd/system-sleep.
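A minimal sketch of such a hook, assuming the file name below (the script must be made executable; systemd passes «pre» or «post» as the first argument):
/usr/lib/systemd/system-sleep/99-nvidia-pci-rescan.sh
#!/bin/sh
# Rescan the PCI bus after resume so the NVIDIA card is detected again.
if [ "$1" = "post" ]; then
    echo 1 > /sys/bus/pci/rescan
fi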
Multiple monitors
Outputs wired to the Intel chip
If the port (DisplayPort/HDMI/VGA) is wired to the Intel chip, you can set up multiple monitors with xorg.conf. Set them to use the Intel card, but Bumblebee can still use the NVIDIA card. One example configuration is below for two identical screens with 1080p resolution and using the HDMI out.
/etc/X11/xorg.conf
Section "Screen" Identifier "Screen0" Device "intelgpu0" Monitor "Monitor0" DefaultDepth 24 Option "TwinView" "0" SubSection "Display" Depth 24 Modes "1920x1080_60.00" EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "intelgpu1" Monitor "Monitor1" DefaultDepth 24 Option "TwinView" "0" SubSection "Display" Depth 24 Modes "1920x1080_60.00" EndSubSection EndSection Section "Monitor" Identifier "Monitor0" Option "Enable" "true" EndSection Section "Monitor" Identifier "Monitor1" Option "Enable" "true" EndSection Section "Device" Identifier "intelgpu0" Driver "intel" Option "UseEvents" "true" Option "AccelMethod" "UXA" BusID "PCI:0:2:0" EndSection Section "Device" Identifier "intelgpu1" Driver "intel" Option "UseEvents" "true" Option "AccelMethod" "UXA" BusID "PCI:0:2:0" EndSection Section "Device" Identifier "nvidiagpu1" Driver "nvidia" BusID "PCI:0:1:0" EndSection
You will probably need to change the BusID for both the Intel and the NVIDIA card.
$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
The BusID is 0:2:0. Note that lspci outputs hexadecimal values, but Xorg expects decimal values.
Output wired to the NVIDIA chip
On some notebooks, the digital Video Output (HDMI or DisplayPort) is hardwired to the NVIDIA chip. If you want to use all the displays on such a system simultaneously, the easiest solution is to use intel-virtual-output, a tool provided in the xf86-video-intel driver set, as of v2.99. It will allow you to extend the existing X session onto other screens, leveraging virtual outputs to work with the discrete graphics card. Usage is as follows:
$ intel-virtual-output [OPTION]... [TARGET_DISPLAY]...
-d  source display
-f  keep in foreground (do not detach from console and daemonize)
-b  start bumblebee
-a  connect to all local displays (e.g. :1, :2, etc)
-S  disable use of a singleton and launch a fresh intel-virtual-output process
-v  all verbose output, implies -f
-V  specific verbose output, implies -f
-h  this help
If this command alone does not work, you can try running it with optirun to enable the discrete graphics and allow it to detect the outputs accordingly. This is known to be necessary on Lenovo’s Legion Y720.
$ optirun intel-virtual-output
If no target displays are given on the command line, intel-virtual-output will attempt to connect to any local display. The detected displays can then be managed with any display configuration tool, such as xrandr or KDE Display. The tool will also start bumblebee (which may be left as the default install). See the Bumblebee wiki page for more information.
When run in a terminal, intel-virtual-output will daemonize itself unless the -f switch is used. Games can be run on the external screen by first setting export DISPLAY=:8 and then starting the game with optirun game_bin; however, the cursor and keyboard are not fully captured. Use export DISPLAY=:0 to revert back to standard operation.
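For example (game_bin is a placeholder for the game's executable):
$ export DISPLAY=:8
$ optirun game_bin
$ export DISPLAY=:0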
If intel-virtual-output does not detect displays, or if a no VIRTUAL outputs on «:0» message is obtained, then create:
/etc/X11/xorg.conf.d/20-intel.conf
Section "Device" Identifier "intelgpu0" Driver "intel" EndSection
which does not exist by default, and:
/etc/bumblebee/xorg.conf.nvidia
Section "ServerLayout" Identifier "Layout0" Option "AutoAddDevices" "true" Option "AutoAddGPU" "false" EndSection Section "Device" Identifier "DiscreteNvidia" Driver "nvidia" VendorName "NVIDIA Corporation" Option "ProbeAllGpus" "false" Option "NoLogo" "true" Option "UseEDID" "true" Option "AllowEmptyInitialConfiguration" # Option "UseDisplayDevice" "none" EndSection Section "Screen" Identifier "Screen0" Device "DiscreteNvidia" EndSection
See [3] for further configurations to try. If the laptop screen is stretched and the cursor is misplaced while the external monitor shows only the cursor, try killing any running compositing managers.
If you do not want to use intel-virtual-output, another option is to configure Bumblebee to leave the discrete GPU on and directly configure X to use both the screens, as it will be able to detect them.
As a last resort, you can run two X servers. The first will use the Intel driver for the notebook's screen. The second will be started through optirun on the NVIDIA card to show on the external display. Make sure to disable any display/session manager before manually starting your desktop environment with optirun. Then you can log in to the integrated-graphics powered one.
Disabling screen blanking
You can disable screen blanking when using intel-virtual-output with xset by setting the DISPLAY environment variable appropriately (see DPMS for more info):
$ DISPLAY=:8 xset -dpms s off
Multiple NVIDIA graphics cards or NVIDIA Optimus
If you have multiple NVIDIA graphics cards (e.g. when using an eGPU with a laptop that has another built-in NVIDIA graphics card) or NVIDIA Optimus, you need to make a minor edit to /etc/bumblebee/xorg.conf.nvidia. If this change is not made, the daemon may default to using the internal NVIDIA card.
First, determine the BusID of the external card:
$ lspci | grep -E "VGA|3D"
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 960M] (rev a2)
0b:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)
In this case, the BusID is 0b:00.0 .
Now edit /etc/bumblebee/xorg.conf.nvidia and add the following line to Section «Device» :
/etc/bumblebee/xorg.conf.nvidia
Section "Device" . BusID "PCI:11:00:0" Option "AllowExternalGpus" "true" # If the GPU is external . EndSection
Note: Notice that the hexadecimal 0b became the decimal 11.
Troubleshooting
Note: Please report bugs at Bumblebee-Project’s GitHub tracker as described in its wiki.
[VGL] ERROR: Could not open display :8
There is a known problem with some wine applications that fork and kill the parent process without keeping track of it (for example the free to play online game «Runes of Magic»).
This is a known problem with VirtualGL. As of bumblebee 3.1, so long as you have it installed, you can use Primus as your render bridge:
$ optirun -b primus wine windows program.exe
If this does not work, an alternative workaround for this problem is:
$ optirun bash
$ optirun wine windows program.exe
If using NVIDIA drivers a fix for this problem is to edit /etc/bumblebee/xorg.conf.nvidia and change Option ConnectedMonitor to CRT-0 .
Xlib: extension «GLX» missing on display «:0.0»
If you tried to install the NVIDIA driver from the NVIDIA website, it will not work with Bumblebee.
Uninstall that driver in a similar way to how it was installed:
# ./NVIDIA-Linux-*.run --uninstall
Then remove the Xorg configuration file it generated:
# rm /etc/X11/xorg.conf
[ERROR]Cannot access secondary GPU: No devices detected
In some instances, running optirun will return:
[ERROR]Cannot access secondary GPU - error: [XORG] (EE) No devices detected.
[ERROR]Aborting because fallback start is disabled.
In this case, you will need to move the file /etc/X11/xorg.conf.d/20-intel.conf somewhere else and restart the bumblebeed daemon, after which it should work. If you do need to change some features for the Intel module, a workaround is to merge /etc/X11/xorg.conf.d/20-intel.conf into /etc/X11/xorg.conf.
It could also be necessary to comment out the driver line in /etc/X11/xorg.conf.d/10-monitor.conf.
If you are using the nouveau driver you could try switching to the nvidia driver.
You might need to define the NVIDIA card somewhere (e.g. file /etc/bumblebee/xorg.conf.nvidia ), using the correct BusID according to lspci output:
Section "Device" Identifier "nvidiagpu1" Driver "nvidia" BusID "PCI:0:1:0" EndSection
Note that lspci outputs the BusID in hexadecimal, while Xorg expects decimal. So if the output of lspci is, for example, 0a:00.0, the BusID should be PCI:10:0:0.
NVIDIA(0): Failed to assign any connected display devices to X screen 0
If the console output is:
[ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to assign any connected display devices to X screen 0
[ERROR]Aborting because fallback start is disabled.
If the following line in /etc/bumblebee/xorg.conf.nvidia does not exist, you can add it to the «Device» section:
Option "ConnectedMonitor" "DFP"
If it does already exist, you can try changing it to:
Option "ConnectedMonitor" "CRT"
After that, restart the Bumblebee service to apply these changes.
Failed to initialize the NVIDIA GPU at PCI:1:0:0 (GPU fallen off the bus / RmInitAdapter failed!)
Add rcutree.rcu_idle_gp_delay=1 to the kernel parameters of the boot loader configuration (see also the original BBS post for a configuration example).
Failed to initialize the NVIDIA GPU at PCI:1:0:0 (Bumblebee daemon reported: error: [XORG] (EE) NVIDIA(GPU-0))
You might encounter an issue where, after resuming from sleep, the primusrun or optirun command no longer works. There are two ways to fix this: reboot your system, or execute the following command:
# echo 1 > /sys/bus/pci/rescan
Then test whether primusrun or optirun works.
If the above command did not help, try finding your NVIDIA card’s bus ID:
$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02)
01:00.0 VGA compatible controller: nVidia Corporation Device 0df4 (rev a1)
For example, the above command showed 01:00.0, so use the following commands with this bus ID:
# echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove
# echo 1 > /sys/bus/pci/rescan
Could not load GPU driver
If the console output is:
[ERROR]Cannot access secondary GPU - error: Could not load GPU driver
and if you try to load the nvidia module:
# modprobe nvidia
modprobe: ERROR: could not insert 'nvidia': Exec format error
This could be because the nvidia driver is out of sync with the Linux kernel, for example if you installed the latest nvidia driver and have not updated the kernel in a while. A full system update , followed by a reboot into the updated kernel, might resolve the issue. If the problem persists you should try manually compiling the nvidia packages against your current kernel, for example with nvidia-dkms or by compiling nvidia from the ABS.
NOUVEAU(0): [drm] failed to set drm interface version
Consider switching to the official nvidia driver. As commented here, the nouveau driver has issues with some cards and Bumblebee.
[ERROR]Cannot access secondary GPU — error: X did not start properly
Set the «AutoAddDevices» option to «true» in /etc/bumblebee/xorg.conf.nvidia (see here):
Section "ServerLayout" Identifier "Layout0" Option "AutoAddDevices" "true" Option "AutoAddGPU" "false" EndSection
/dev/dri/card0: failed to set DRM interface version 1.4: Permission denied
This could be worked around by appending following lines in /etc/bumblebee/xorg.conf.nvidia (see here):
Section "Screen" Identifier "Default Screen" Device "DiscreteNvidia" EndSection
ERROR: ld.so: object ‘libdlfaker.so’ from LD_PRELOAD cannot be preloaded: ignored
You probably want to start a 32-bit application with Bumblebee on a 64-bit system. See the 32-bit support instructions in #Installation. If the problem persists or it is a 64-bit application, try using the primus bridge.
Fatal IO error 11 (Resource temporarily unavailable) on X server
Change KeepUnusedXServer in /etc/bumblebee/bumblebee.conf from false to true. Your program forks into the background and Bumblebee does not know anything about it.
Video tearing
Video tearing is a somewhat common problem on Bumblebee. To fix it, you need to enable vsync. It should be enabled by default on the Intel card, but verify that from Xorg logs. To check whether or not it is enabled for NVIDIA, make sure nvidia-settings is installed and run:
$ optirun nvidia-settings -c :8
X Server XVideo Settings -> Sync to VBlank and OpenGL Settings -> Sync to VBlank should both be enabled. The Intel card has in general less tearing, so use it for video playback, preferably with VA-API for video decoding (e.g. mplayer-vaapi with the -vsync parameter).
Refer to Intel graphics#Tearing on how to fix tearing on the Intel card.
If it is still not fixed, try to disable compositing from your desktop environment. Try also disabling triple buffering.
Bumblebee cannot connect to socket
You might get something like:
$ optirun glxspheres64
$ optirun glxspheres32
[ 1648.179533] [ERROR]You have no permission to communicate with the Bumblebee daemon. Try adding yourself to the 'bumblebee' group
[ 1648.179628] [ERROR]Could not connect to bumblebee daemon - is it running?
If you are already in the bumblebee group ( groups | grep bumblebee ), you may try removing the socket /var/run/bumblebeed.socket .
Another reason for this error could be that you have not actually turned on both GPUs in your BIOS, and as a result, the Bumblebee daemon is in fact not running. Check the BIOS settings carefully and be sure Intel graphics (integrated graphics — may be abbreviated in BIOS as something like igfx) has been enabled or set to auto, and that it is the primary GPU. Your display should be connected to the onboard integrated graphics, not the discrete graphics card.
If you mistakenly had the display connected to the discrete graphics card and Intel graphics was disabled, you probably installed Bumblebee after first trying to run NVIDIA alone. In this case, be sure to remove the /etc/X11/xorg.conf or /etc/X11/xorg.conf.d/20-nvidia.conf configuration files. If Xorg is instructed to use NVIDIA in a configuration file, X will fail.
Running X.org from console after login (rootless X.org)
Using Primus causes a segmentation fault
In some instances, using primusrun instead of optirun will result in a segfault. This is caused by an issue in code auto-detecting faster upload method, see FS#58933.
The workaround is skipping auto-detection by manually setting PRIMUS_UPLOAD environment variable to either 1 or 2, depending on which one is faster on your setup.
$ PRIMUS_UPLOAD=1 primusrun ...
Primusrun mouse delay (disable VSYNC)
For primusrun, VSYNC is enabled by default, which can introduce mouse input delay and even slightly decrease performance. Test primusrun with VSYNC disabled:
$ vblank_mode=0 primusrun glxgears
If you are satisfied with the above setting, create an alias (e.g. alias primusrun="vblank_mode=0 primusrun").
| VSYNC enabled | FPS | Score | Min FPS | Max FPS |
|---|---|---|---|---|
| FALSE | 31.5 | 793 | 22.3 | 54.8 |
| TRUE | 31.4 | 792 | 18.7 | 54.2 |
Tested with ASUS N550JV notebook and benchmark app unigine-heaven AUR .
Note: To disable vertical synchronization system-wide, see Intel graphics#Disable Vertical Synchronization (VSYNC).
Primus issues under compositing window managers
Since compositing hurts performance, invoking primus when a compositing WM is active is not recommended.[4] If you need to use primus with compositing and see flickering or bad performance, synchronizing primus’ display thread with the application’s rendering thread may help:
$ PRIMUS_SYNC=1 primusrun ...
This makes primus display the previously rendered frame.
Problems with bumblebee after resuming from standby
On some systems, it can happen that the nvidia module is loaded after resuming from standby. One possible solution is to install the acpi_call and acpi packages.
Optirun does not work, no debug output
Users are reporting that in some cases, even though Bumblebee was installed correctly, running
$ optirun glxgears -info
gives no output at all, and the glxgears window does not appear. Any program that needs 3D acceleration crashes:
$ optirun bash
$ glxgears
Segmentation fault (core dumped)
Apparently this is a bug in some versions of virtualgl, so a workaround is to install primus and lib32-primus and use them instead:
$ primusrun glxspheres64
$ optirun -b primus glxspheres64
By default, primus locks the framerate to the vertical refresh rate of your monitor (usually 60 FPS); if needed, it can be unlocked by passing the vblank_mode=0 environment variable.
$ vblank_mode=0 primusrun glxspheres64
Usually there is no need to display more frames than your monitor can handle, but you might want to for benchmarking or to get faster reactions in games (e.g., if a game needs 3 frames to react to a mouse movement, with vblank_mode=0 the reaction will be as quick as your system can handle; without it, it will always take 1/20 of a second).
You might want to edit /etc/bumblebee/bumblebee.conf to use the primus bridge by default. If after an update you want to check whether the bug has been fixed, just use optirun -b virtualgl.
See this forum post for more information.
Broken power management with kernel 4.8
This article or section is a candidate for merging with Hybrid graphics#Using bbswitch.
Notes: Keep all info about bbswitch in one place. (Discuss in Talk:Bumblebee)
If you have a newer laptop (BIOS date 2015 or newer), then Linux 4.8 might break bbswitch (bbswitch issue 140) since bbswitch does not support the newer, recommended power management method. As a result, the GPU may fail to power on, fail to power off or worse.
As a workaround, add pcie_port_pm=off to your Kernel parameters.
Alternatively, if you are only interested in power saving (and perhaps use of external monitors), remove bbswitch and rely on Nouveau runtime power-management (which supports the new method).
Note: Some tools such as powertop --auto-tune automatically enable power management on PCI devices, which leads to the same problem [5]. Use the same workaround or do not use all-in-one power management tools.
Lockup issue (lspci hangs)
See NVIDIA Optimus#Lockup issue (lspci hangs) for an issue that affects new laptops with a GTX 965M (or alike).
Discrete card always on and acpi warnings
Add acpi_osi=Linux to your Kernel parameters. See [6] and [7] for more information.
Screen 0 deleted because of no matching config section
Modify the configuration as follows:
/etc/bumblebee/xorg.conf.nvidia
. Section "ServerLayout" . Screen 0 "nvidia" . EndSection . Section "Screen" Identifier "nvidia" Device "DiscreteNvidia" EndSection .
Erratic, unpredictable behaviour
If Bumblebee starts or works erratically, check that you have set up local hostname resolution as described in Network configuration#Local network hostname resolution (details here).
Discrete card always on and nvidia driver cannot be unloaded
Make sure nvidia-persistenced.service is disabled and not currently active. It is intended to keep the nvidia driver running at all times [8], which prevents the card being turned off.
Discrete card is silently activated when EGL is requested by some application
If the discrete card is activated by some program (e.g. mpv with its GPU backend), it might stay on. The problem might be libglvnd, which loads the nvidia drivers and activates the card.
To disable this set environment variable __EGL_VENDOR_LIBRARY_FILENAMES (see documentation) to only load mesa configuration file:
__EGL_VENDOR_LIBRARY_FILENAMES="/usr/share/glvnd/egl_vendor.d/50_mesa.json"
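For example, to apply this only to a single invocation of mpv (mentioned above; the video file name is just a placeholder):
$ __EGL_VENDOR_LIBRARY_FILENAMES="/usr/share/glvnd/egl_vendor.d/50_mesa.json" mpv video.mkv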
nvidia-utils (and its branches) installs the configuration file at /usr/share/glvnd/egl_vendor.d/10_nvidia.json, which has priority and causes libglvnd to load the nvidia drivers and enable the card.
The other solution is to avoid installing the configuration file provided by nvidia-utils .
Framerate drops to 1 FPS after a fixed period of time
With the nvidia 440.36 driver, the DPMS setting is enabled by default, resulting in a timeout after a fixed period of time (e.g. 10 minutes) which causes the frame rate to throttle down to 1 FPS. To work around this, add the following line to the «Device» section in /etc/bumblebee/xorg.conf.nvidia:
Option "HardDPMS" "false"
Application cannot record screen
Using Bumblebee, applications cannot access the screen to identify and record it. This happens, for example, when using obs-studio with NVENC activated. To solve this, disable the bridging mode with the optirun -b none command.
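For example, with obs-studio (whose executable is obs):
$ optirun -b none obs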
See also
- Bumblebee project repository [dead link 2022-09-17]
- Bumblebee project wiki
- Bumblebee project bbswitch repository
Question: GPU detected dead (spike in SoC MHz)
I got random GPU detected dead errors on my 5700 XT rig, and I can see the SoC MHz rising from 950 to 1086 on these GPUs before the error:

In this case, GPU 1 crashed shortly after. Can someone tell me what I need to change?
dsm52
I had a lot of problems getting my 5700 / XT rig stable with TRM after B-mode was introduced. You can see this under «ETH Cfg» in your log above. Most of the advice on this forum has been for TRM with A-mode, but the new default of B-mode with power saving changes a lot of the settings.
I’ve tried for weeks to get it stable.
My cards tend to fluctuate between 950 and 1266 MHz SoC, and it can be stable sometimes for days, but other times I get 5 random crashes in a few hours without changing any settings.
It is possible to limit the SoC in HiveOS overclock settings, not sure about Windows.
NVIDIA/Troubleshooting
If after installing the NVIDIA driver your system becomes stuck before reaching the display manager, try to disable kernel mode setting.
Xorg fails to load or Red Screen of Death
If you get a red screen and use GRUB, disable the GRUB framebuffer by editing /etc/default/grub and uncomment GRUB_TERMINAL_OUTPUT=console . For more information see GRUB/Tips and tricks#Disable framebuffer.
Blackscreen at X startup / Machine poweroff at X shutdown
If you have installed an update of NVIDIA and your screen stays black after launching Xorg, or if shutting down Xorg causes a machine poweroff, try the below workarounds:
- Prepend «xrandr --auto» to your xinitrc
- Use the rcutree.rcu_idle_gp_delay=1 kernel parameter.
- You can also try to add the nvidia module directly to your mkinitcpio.conf.
- If the screen still stays black with both the rcutree.rcu_idle_gp_delay=1 kernel parameter and the nvidia module directly in the mkinitcpio.conf, try re-installing nvidia and nvidia-utils in that order, and finally reload the driver:
# modprobe nvidia
‘/dev/nvidia0’ input/output error
The factual accuracy of this article or section is disputed.
Reason: Verify that the BIOS related suggestions work and are not coincidentally set while troubleshooting. (Discuss in Talk:NVIDIA/Troubleshooting#’/dev/nvidia0′ Input/Output error. suggested fixes)
This error can occur for several different reasons, and the most common solution given for it is to check group/file permissions, which in almost every case is not the problem. The NVIDIA documentation does not go into detail on how to correct this problem, but a few things have worked for some people. The problem can be an IRQ conflict with another device or bad routing by either the kernel or your BIOS.
The first thing to try is to remove other video devices, such as video capture cards, and see if the problem goes away. If there are too many video processors on the same system, the kernel may be unable to start them because of memory allocation problems with the video controller. On systems with low video memory this can occur even if there is only one video processor. In that case you should find out the amount of your system's video memory (e.g. with lspci -v) and pass allocation parameters to the kernel, e.g. for a 32-bit kernel:
vmalloc=384M
If running a 64-bit kernel, a driver defect can cause the NVIDIA module to fail to initialize when IOMMU is on. Turning it off in the BIOS has been confirmed to work for some users. [1]User:Clickthem#nvidia module
Another thing to try is to change your BIOS IRQ routing from Operating system controlled to BIOS controlled or the other way around. The first one can be passed as a kernel parameter:
pci=biosirq
The noacpi kernel parameter has also been suggested as a solution, but since it disables ACPI completely it should be used with caution. Some hardware is easily damaged by overheating.
Note: The kernel parameters can be passed either through the kernel command line or the bootloader configuration file. See your bootloader Wiki page for more information.
Screen(s) found, but none have a usable configuration
Sometimes NVIDIA and X have trouble finding the active screen. If your graphics card has multiple outputs try plugging your monitor into the other ones. On a laptop it may be because your graphics card has VGA/TV out. Xorg.0.log will provide more info.
Another thing to try is adding an invalid «ConnectedMonitor» option to the «Device» section to force Xorg to throw an error that shows you how to correct it. Here is more about the ConnectedMonitor setting.
After re-running X, check Xorg.0.log to get valid CRT-x, DFP-x, TV-x values.
nvidia-xconfig --query-gpu-info could be helpful.
X fails with «Failing initialization of X screen»
If /var/log/Xorg.0.log says the X server fails to initialize the screen:
(EE) NVIDIA(G0): GPU screens are not yet supported by the NVIDIA driver
(EE) NVIDIA(G0): Failing initialization of X screen
and nvidia-smi says No running processes found
The solution is to first reinstall the latest nvidia-utils, then copy /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf to /etc/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf, edit the copy to add the line Option «PrimaryGPU» «yes», and restart the computer.
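As a sketch of what the edited copy might contain (the exact contents depend on the shipped file; the important addition is the PrimaryGPU option):
/etc/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf
Section "OutputClass"
    Identifier "nvidia"
    MatchDriver "nvidia-drm"
    Driver "nvidia"
    Option "PrimaryGPU" "yes"
EndSection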
Xorg fails during boot, but otherwise starts fine
On very fast booting systems, systemd may attempt to start the display manager before the NVIDIA driver has fully initialized. You will see a message like the following in your logs only when Xorg runs during boot.
/var/log/Xorg.0.log
[ 1.807] (EE) NVIDIA(0): Failed to initialize the NVIDIA kernel module. Please see the
[ 1.807] (EE) NVIDIA(0): system's kernel log for additional error messages and
[ 1.808] (EE) NVIDIA(0): consult the NVIDIA README for details.
[ 1.808] (EE) NVIDIA(0): *** Aborting ***
In this case you will need to establish an ordering dependency from the display manager to the DRI device. First create device units for DRI devices by creating a new udev rules file.
/etc/udev/rules.d/99-systemd-dri-devices.rules
ACTION=="add", KERNEL=="card*", SUBSYSTEM=="drm", TAG+="systemd"
Then create dependencies from the display manager to the device(s).
/etc/systemd/system/display-manager.service.d/10-wait-for-dri-devices.conf
[Unit]
Wants=dev-dri-card0.device
After=dev-dri-card0.device
If you have additional cards needed for the desktop, list them in Wants and After separated by spaces.
Black screen on systems with integrated GPU
If you have a system with an integrated GPU (e.g. Intel HD 4000, VIA VX820 Chrome 9 or AMD Cezanne) and have installed the nvidia package, you may experience a black screen on boot, when changing virtual terminal, or when exiting an X session. This may be caused by a conflict between the graphics modules. This is solved by blacklisting the relevant GPU modules. Create the file /etc/modprobe.d/blacklist.conf and prevent the relevant modules from loading on boot:
/etc/modprobe.d/blacklist.conf
install i915 /usr/bin/false
install intel_agp /usr/bin/false
install viafb /usr/bin/false
install radeon /usr/bin/false
install amdgpu /usr/bin/false
X fails with «no screens found» when using Multiple GPUs
In situations where you might have multiple GPUs on a system and X fails to start with:
[ 76.633] (EE) No devices detected.
[ 76.633] Fatal server error:
[ 76.633] no screens found
then you need to add your discrete card’s BusID to your X configuration. This can happen on systems with an Intel CPU and an integrated GPU or if you have more than one NVIDIA card connected. Find your BusID:
# lspci | grep -E "VGA|3D controller"
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller (rev 09)
01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX 650] (rev a1)
08:00.0 3D controller: NVIDIA Corporation GM108GLM [Quadro K620M / Quadro M500M] (rev a2)
Then fix it by adding the BusID to the card's Device section in your X configuration. For example:
/etc/X11/xorg.conf.d/10-nvidia.conf
Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BusID "PCI:1:0:0" EndSection
Note: BusID formatting is important!
In the example above, 01:00.0 is stripped to be written as 1:0:0; however, some conversions can be more complicated. lspci output is in hexadecimal format, but in configuration files the BusIDs are in decimal format. This means that in cases where the bus number is greater than 9 you will need to convert it to decimal.
For example, 5e:00.0 from lspci becomes PCI:94:0:0.
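One quick way to do such a conversion in a shell is with printf, which accepts the 0x-prefixed hexadecimal values (purely illustrative):
$ printf 'PCI:%d:%d:%d\n' 0x5e 0x00 0x0
PCI:94:0:0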
Modprobe Error: «Could not insert ‘nvidia’: No such device» on linux >=4.8
With linux 4.8, one can get the following errors when trying to use the discrete card:
$ modprobe nvidia -vv
modprobe: INFO: custom logging function 0x409c10 registered
modprobe: INFO: Failed to insert module '/lib/modules/4.8.6-1-ARCH/extramodules/nvidia.ko.gz': No such device
modprobe: ERROR: could not insert 'nvidia': No such device
modprobe: INFO: context 0x24481e0 released
insmod /lib/modules/4.8.6-1-ARCH/extramodules/nvidia.ko.gz
# dmesg
...
NVRM: The NVIDIA GPU 0000:01:00.0 (PCI ID: 10de:139b)
NVRM: installed in this system is not supported by the 370.28
NVRM: NVIDIA Linux driver release. Please see 'Appendix
NVRM: A - Supported NVIDIA GPU Products' in this release's
NVRM: README, available on the Linux driver download page
NVRM: at www.nvidia.com.
...
This problem is caused by bad commits pertaining to PCIe power management in the Linux Kernel (as documented in this NVIDIA DevTalk thread).
The workaround is to add pcie_port_pm=off to your kernel parameters. Note that this disables PCIe power management for all devices.
System does not return from suspend
What you see in the log:
kernel: nvidia-modeset: ERROR: GPU:0: Failed detecting connected display devices
kernel: nvidia-modeset: ERROR: GPU:0: Failed detecting connected display devices
kernel: nvidia-modeset: WARNING: GPU:0: Failure processing EDID for display device DELL U2412M (DP-0).
kernel: nvidia-modeset: WARNING: GPU:0: Unable to read EDID for display device DELL U2412M (DP-0)
kernel: nvidia-modeset: ERROR: GPU:0: Failure reading maximum pixel clock value for display device DELL U2412M (DP-0).
A possible solution based on [2]:
Run this command to get the version string:
# strings /sys/firmware/acpi/tables/DSDT | grep -i 'windows ' | sort | tail -1
Add the acpi_osi=! «acpi_osi=version» kernel parameters to your boot loader configuration, where version is the string returned by the command above.
Another possible cause of the issue could be the use of the nvidia-open package, as described here:
- https://bbs.archlinux.org/viewtopic.php?pid=2047692
- https://github.com/NVIDIA/open-gpu-kernel-modules/issues/450
- https://github.com/NVIDIA/open-gpu-kernel-modules/issues/223
- https://github.com/NVIDIA/open-gpu-kernel-modules/issues/94
Crashes and hangs
Crashing in general
- Try disabling RenderAccel in xorg.conf.
- If Xorg outputs an error about «conflicting memory type» or «failed to allocate primary buffer: out of memory» , or crashes with a «Signal 11» while using nvidia-96xx drivers, add nopat to your kernel parameters.
- If the NVIDIA compiler complains about different versions of GCC between the current one and the one used for compiling the kernel, add in /etc/profile :
export IGNORE_CC_MISMATCH=1
- If fullscreen applications are freezing or crashing, try enabling Display Compositing and Direct fullscreen rendering options in your desktop environment’s settings.
Visual glitches, hangs and errors in OpenGL applications
If you are using a recent CPU (Intel Sandy Bridge (2011) and later, or AMD Zen (2017) and later), it has a micro-operations cache. Using a micro-op cache can lead to problems with NVIDIA's driver in OpenGL due to cache aliasing [3]. You can usually disable the micro-op cache in your system's BIOS, but this comes at the cost of performance [4]. Disabling the micro-op cache also helps with the most severe graphical glitches in Xwayland applications, although it does not solve the problem fully [5].
Laptops: X hangs on login/out, worked around with Ctrl+Alt+Backspace
If, while using the legacy NVIDIA drivers, Xorg hangs on login and logout (particularly with an odd screen split into two black and white/gray pieces), but logging in is still possible via Ctrl+Alt+Backspace (or whatever the new «kill X» key binding is), try adding this in /etc/modprobe.d/modprobe.conf :
options nvidia NVreg_Mobile=1
Note that NVreg_Mobile needs to be changed according to the laptop:
- 1 for Dell laptops.
- 2 for non-Compal Toshiba laptops.
- 3 for other laptops.
- 4 for Compal Toshiba laptops.
- 5 for Gateway laptops.
Visual issues
Avoid screen tearing
- This has been reported to reduce the performance of some OpenGL applications and may produce issues in WebGL. It also drastically increases the time the driver needs to clock down after load (NVIDIA Support Thread).
- ForceFullCompositionPipeline is known to break some games using Vulkan under Proton with NVIDIA driver 535.
Tearing can be avoided by forcing a full composition pipeline, regardless of the compositor you are using. To test whether this option will work, run:
$ nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 < ForceFullCompositionPipeline = On >"
Or click on the Advanced button that is available on the X Server Display Configuration menu option. Select either Force Composition Pipeline or Force Full Composition Pipeline and click on Apply.
In order to make the change permanent, it must be added to the «Screen» section of the Xorg configuration file. When making this change, TripleBuffering should be enabled and AllowIndirectGLXProtocol should be disabled in the driver configuration as well. See example configuration below:
/etc/X11/xorg.conf.d/20-nvidia.conf
Section "Device" Identifier "NVIDIA Card" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 1050 Ti" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" Option "ForceFullCompositionPipeline" "on" Option "AllowIndirectGLXProtocol" "off" Option "TripleBuffer" "on" EndSection
If you do not have an Xorg configuration file, you can create one for your present hardware using nvidia-xconfig (see NVIDIA#Automatic configuration) and move it from /etc/X11/xorg.conf to the preferred location /etc/X11/xorg.conf.d/20-nvidia.conf .
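For example (nvidia-xconfig writes /etc/X11/xorg.conf by default, which is then moved to the preferred location):
# nvidia-xconfig
# mv /etc/X11/xorg.conf /etc/X11/xorg.conf.d/20-nvidia.conf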
Note: Many of the configuration options produced in 20-nvidia.conf by nvidia-xconfig are set automatically by the driver and are not needed. To use this file only for enabling the composition pipeline, only the «Screen» section with its Identifier and Option lines is necessary; other sections may be removed from this file.
Multi-monitor
For multi-monitor setup you will need to specify ForceCompositionPipeline=On for each display. For example:
$ nvidia-settings --assign CurrentMetaMode="DP-2: nvidia-auto-select +0+0 , DP-4: nvidia-auto-select +3840+0 "
Without doing this, the nvidia-settings command will disable your secondary display.
You can get the current screen names and offsets using --query:
$ nvidia-settings --query CurrentMetaMode
The above line is for two 3840×2160 monitors connected to DP-2 and DP-4. Read the correct CurrentMetaMode with --query as shown above and append ForceCompositionPipeline to each of your displays. Setting ForceCompositionPipeline only affects the targeted display.
Tip: Multi monitor setups using different model monitors may have slightly different refresh rates. If vsync is enabled by the driver it will sync to only one of these refresh rates which can cause the appearance of screen tearing on incorrectly synced monitors. Select to sync the display device which is the primarily used monitor as others will not sync properly. This is configurable in ~/.nvidia-settings-rc as 0/XVideoSyncToDisplayID= or by installing nvidia-settings and using the graphical configuration options.
Screen corruption after resuming from suspend or hibernation
A corruption after suspend bug when using GDM service was solved as of driver version 515.43.04 [6].
Corrupted screen: «Six screens» Problem
For some users with GeForce GT 100M-series cards, the screen gets corrupted after X starts, divided into 6 sections with a resolution limited to 640×480. The same problem has been reported with a Quadro 2000 and high-resolution displays.
To solve this problem, enable the Validation Mode NoTotalSizeCheck in section Device :
Section "Device" . Option "ModeValidation" "NoTotalSizeCheck" . EndSection
Performance issues
Bad performance after installing a new driver version
If FPS have dropped in comparison with older drivers, check if direct rendering is enabled ( glxinfo is included in mesa-utils ):
$ glxinfo | grep direct
If the command prints:
direct rendering: No
A possible solution is to revert to the previously installed driver version and reboot afterwards.
Extreme lag on Xorg
The factual accuracy of this article or section is disputed.
Reason: According to an NVIDIA developer this issue is not specific to GNOME and the rest of the comments on the issue do not mention multi-monitor setups. (Discuss in Talk:NVIDIA/Troubleshooting)
If nothing else resolves this issue, you are most likely out of luck. One way you can partly remedy it is by adding these options:
/etc/environment
CLUTTER_DEFAULT_FPS=YOUR_MAIN_DISPLAY_REFRESHRATE
__GL_SYNC_DISPLAY_DEVICE=YOUR_MAIN_DISPLAY_OUTPUT_NAME
turning Sync to VBlank and Allow flipping off within NVIDIA Settings, and configuring NVIDIA Settings to launch on startup using the flag --load-config-only. This will still result in laggy desktop behavior, particularly on a second (or third) monitor, but it should be much better.
CPU spikes with 400 series cards
If you are experiencing intermittent CPU spikes with a 400 series card, it may be caused by PowerMizer constantly changing the GPU's clock frequency. To switch PowerMizer's setting from Adaptive to Performance, add the following to the Device section of your Xorg configuration:
Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x3322; PowerMizerDefaultAC=0x1"
Other issues
Vulkan error on applications start
The factual accuracy of this article or section is disputed.
Reason: Need confirmation by other users (Discuss in Talk:NVIDIA/Troubleshooting)
If you get the following error when executing an application that requires Vulkan acceleration:
Vulkan call failed: -4
try to delete the ~/.nv or ~/.cache/nvidia directory.
No audio over HDMI
Sometimes NVIDIA HDMI audio devices are not shown when you do
$ aplay -l
On some new machines, the audio chip on the NVIDIA GPU is disabled at boot. Read more on NVIDIA’s website and a forum post.
You need to reload the NVIDIA device with audio enabled. In order to do that make sure that your GPU is on (in case of laptops/Bumblebee) and that you are not running X on it, because it is going to reset:
# setpci -s 01:00.0 0x488.l=0x2000000:0x2000000
# rmmod nvidia-drm nvidia-modeset nvidia
# echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove
# echo 1 > /sys/bus/pci/devices/0000:00:01.0/rescan
# modprobe nvidia-drm
# xinit -- -retro
If you are running your TTY on NVIDIA, put the lines in a script so you do not end up with no screen.
Backlight is not turning off in some occasions
By default, DPMS should turn off the backlight with the timeouts set or by running xset. However, probably due to a bug in the proprietary NVIDIA drivers, the result is a blank screen with no power saving whatsoever. To work around it until the bug is fixed, you can use vbetool as root.
Install the vbetool package.
Turn off your screen on demand; pressing any key turns the backlight on again:
vbetool dpms off && read -n1; vbetool dpms on
Alternatively, xrandr is able to disable and re-enable monitor outputs without requiring root.
xrandr --output DP-1 --off; read -n1; xrandr --output DP-1 --auto
Driver 415: HardDPMS
This article or section needs expansion.
Reason: Add references for the «user reports». (Discuss in Talk:NVIDIA/Troubleshooting)
Proprietary driver 415 includes a new feature called HardDPMS. This is reported by some users to solve the issues with suspending monitors connected over DisplayPort. It is reported to become the default in a future driver version, but for now, the HardDPMS option can be set in the Device or Screen sections. For example:
/etc/X11/xorg.conf.d/20-nvidia.conf
Section "Device" . Option "HardDPMS" "true" . EndSection Section "Screen" . Option "HardDPMS" "true" . EndSection
HardDPMS will trigger on screensaver settings like BlankTime . The following ServerFlags will set your monitor(s) to suspend after 10 minutes of inactivity:
/etc/X11/xorg.conf.d/20-nvidia.conf
Section "ServerFlags" Option "BlankTime" "10" EndSection
xrandr BadMatch
If you are trying to configure a WQHD monitor such as DELL U2515H using xrandr and xrandr —addmode gives you the error X Error of failed request: BadMatch , it might be because the proprietary NVIDIA driver clips the pixel clock maximum frequency of HDMI output to 225 MHz or lower. To set the monitor to maximum resolution you have to install nouveau drivers. You can force nouveau to use a specific pixel clock frequency by setting nouveau.hdmimhz=297 (or 330 ) in your Kernel parameters.
Alternatively, it may be that your monitor’s EDID is incorrect. See #Override EDID.
Another reason could be that by default current NVIDIA drivers will only allow modes explicitly reported by EDID, but sometimes refresh rates and/or resolutions are desired which are not reported by the monitor (although the EDID information is correct; it is just that current NVIDIA drivers are too restrictive).
If this happens, you may want to add an option to xorg.conf to allow non-EDID modes:
Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" . Option "ModeValidation" "AllowNonEdidModes" . EndSection
This can be set per-output. See NVidia driver readme (Appendix B. X Config Options) for more information.
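A sketch of the per-output form described in the driver README, assuming the output name DFP-0 (adjust to your own output name):
Option "ModeValidation" "DFP-0: AllowNonEdidModes"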
Override EDID
Overclocking with nvidia-settings GUI not working
This article or section needs language, wiki syntax or style improvements. See Help:Style for reference.
Reason: Duplication, vague «not working» (Discuss in Talk:NVIDIA/Troubleshooting)
A workaround is to use the nvidia-settings CLI to query and set certain variables after enabling overclocking (as explained in NVIDIA/Tips and tricks#Enabling overclocking; see nvidia-settings(1) for more information).
Example to query all variables:
nvidia-settings -q all
Example to set PowerMizerMode to prefer performance mode:
nvidia-settings -a [gpu:0]/GPUPowerMizerMode=1
Example to set fan speed to fixed 21%:
nvidia-settings -a [gpu:0]/GPUFanControlState=1 -a [fan:0]/GPUTargetFanSpeed=21
Example to set multiple variables at once (overclock GPU by 50MHz, overclock video memory by 50MHz, increase GPU voltage by 100mV):
nvidia-settings -a GPUGraphicsClockOffsetAllPerformanceLevels=50 -a GPUMemoryTransferRateOffsetAllPerformanceLevels=50 -a GPUOverVoltageOffset=100
Overclocking not working with Unknown Error
If you are running Xorg as a non-root user and trying to overclock your NVIDIA GPU, you will get an error similar to this one:
$ nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=10"
ERROR: Error assigning value 10 to attribute 'GPUGraphicsClockOffset' (trinity-zero:1[gpu:0]) as specified in assignment '[gpu:0]/GPUGraphicsClockOffset[3]=10' (Unknown Error).
To avoid this issue, Xorg has to be run as the root user. See Xorg#Rootless Xorg for details.
Power draw
This article or section needs expansion.
Reason: What is the point of this section? (Discuss in Talk:NVIDIA/Troubleshooting)
Check driver usage:
# lsof /dev/nvidia*
kwin_wayl 867 user 17u CHR 195,0 0t0 418 /dev/nvidia
kwin_wayl 867 user 18u CHR 195,0 0t0 418 /dev/nvidiactl
If power save is configured on the kernel module:
$ grep . /sys/bus/pci/devices/0000:01:00.0/power/*
/sys/bus/pci/devices/0000:01:00.0/power/control:auto
/sys/bus/pci/devices/0000:01:00.0/power/runtime_active_time:445933
/sys/bus/pci/devices/0000:01:00.0/power/runtime_status:active
/sys/bus/pci/devices/0000:01:00.0/power/runtime_suspended_time:1266
/sys/bus/pci/devices/0000:01:00.0/power/wakeup:disabled
# rmmod nvidia_drm
$ grep . /sys/bus/pci/devices/0000:01:00.0/power/*
/sys/bus/pci/devices/0000:01:00.0/power/control:auto
/sys/bus/pci/devices/0000:01:00.0/power/runtime_active_time:461023
/sys/bus/pci/devices/0000:01:00.0/power/runtime_status:suspended
/sys/bus/pci/devices/0000:01:00.0/power/runtime_suspended_time:1064192
/sys/bus/pci/devices/0000:01:00.0/power/wakeup:disabled
TRM crash with 2 cards; GPU detected DEAD will execute restart script watchdog.bat #292
DoruSonic opened this issue Apr 19, 2021 · 6 comments
DoruSonic commented Apr 19, 2021
I have 2 cards: a 5700 XT and a 570. I have had the 5700 for a few weeks and got some stable settings; I then bought a 570 and it is making the 5700 crash.
I’m using TRM and the 5700 is mining ETH while the 570 is mining RVN (with the corresponding -d parameter). They usually mine for a few hours together until the 5700 crashes with the «GPU detected DEAD will execute restart script watchdog.bat» error.
I have also found something weird: both cannot be mining unless I run the «windows_tdr_fix» again. If I start mining with the 570 and then start the 5700, the latter crashes immediately. If I start the 5700 and then the 570, the 5700 also crashes immediately. The only way I managed to get both running is starting the 5700, launching the «windows_tdr_fix», and then the 570.
This would lead me to believe it is the 570's fault, but it is the 5700 that crashes, so I am not sure which is the culprit. I tried less aggressive OC on both cards to be on the safe side and still nothing.
I’m on Windows 10, both cards on risers, both with good thermals. PSU is 750W and they are using around 320W on the wall