NVIDIA Optimus and Linux - Are You Burning Power For No Reason?
- Published 8 Aug 2020
- This wasn't planned, so sorry for the lacklustre presentation.
To clear up the technology naming, PRIME is a rendering offload technique, whereas Optimus is the dual GPU solution. So all systems using PRIME are Optimus configurations, but not the other way around.
Setting up the xorg.conf automatically:
us.download.nvidia.com/XFree8...
Then add `Option "AllowNVIDIAGPUScreens"`.
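For reference, the option can live in an OutputClass section of the X config; this is only a minimal sketch based on NVIDIA's render offload documentation, and the MatchDriver value assumes the nvidia-drm kernel module is in use:

```
Section "OutputClass"
    Identifier "nvidia"
    MatchDriver "nvidia-drm"
    Driver "nvidia"
    Option "AllowNVIDIAGPUScreens"
EndSection
```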
Running things with PRIME offloading:
us.download.nvidia.com/XFree8...
I have a second channel:
/ @markfurneaux2659 - Science & Technology
use "watch -n1 sensors" or "watch -n1 nvidia-smi" instead of running these commands again and again...
great video, lots of useful information and exactly what I was looking for, thanks!
You can completely disable the card with: echo "OFF" > /proc/acpi/bbswitch
I found some kernel module options for power management. Here is an excerpt from the NVIDIA manual for NVreg_DynamicPowerManagement (you can place these lines in a text config file: /etc/modprobe.conf or /etc/modprobe.d/*.conf):
Option "NVreg_DynamicPowerManagement=0x00"
With this setting, the NVIDIA driver will only use the GPU's built-in power management, so it is always powered on and functional. This is the default option, since this feature is new and highly experimental.
Option "NVreg_DynamicPowerManagement=0x01"
With this setting, the NVIDIA GPU driver will allow the GPU to go into its lowest power state when no applications are running that use the nvidia driver stack. Whenever an application requiring NVIDIA GPU access is started, the GPU is put into an active state. When the application exits, the GPU is put into a low power state.
Option "NVreg_DynamicPowerManagement=0x02"
With this setting, the NVIDIA GPU driver will allow the GPU to go into its lowest power state when no applications are running that use the nvidia driver stack. Whenever an application requiring NVIDIA GPU access is started, the GPU is put into an active state. When the application exits, the GPU is put into a low power state.
Additionally, the NVIDIA driver will actively monitor GPU usage while applications using the GPU are running. When the applications have not used the GPU for a short period, the driver will allow the GPU to be powered down. As soon as the application starts using the GPU, the GPU is reactivated.
It is important to note that the NVIDIA GPU will remain in an active state if it is driving a display. In this case, the NVIDIA GPU will go to a low power state only when the X configuration option HardDPMS is enabled and the display is turned off by some means - either automatically due to an OS setting or manually using commands like xset.
Similarly, the NVIDIA GPU will remain in an active state if a CUDA application is running.
Option NVreg_DynamicPowerManagement can be set on the command line while loading the NVIDIA Linux kernel module. For example,
modprobe nvidia "NVreg_DynamicPowerManagement=0x02"
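To make the setting persist across reboots, the same parameter can go in a modprobe.d snippet instead (the filename here is arbitrary):

```
# /etc/modprobe.d/nvidia-pm.conf
options nvidia NVreg_DynamicPowerManagement=0x02
```

Note that the modprobe.d syntax uses `options <module> <parameter>`, not the `Option "..."` form quoted from the manual above.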
It works out of the box on non-Debian distros. Manjaro works perfectly.
So I have a 2014 HP Envy 15 with an i7 and an 850M. I had been running Mint on it since 2016-17, and I found that when running specifically with NVIDIA I would get about 1h50m (note this is with IntelliJ and Firefox open), and when I ran it with just the Intel, as soon as my laptop hit 50% the battery would instantly drop to 0%, and I wonder if this might be the case.

Recently (two weeks ago) I installed Pop!_OS on the machine, as System76 has laptops with a similar config. So I haven't been able to really test it, but I just ran through their GPU switcher and found that when set to integrated, nvidia-smi is not even available. When in hybrid there are two Xorg processes. In compute there are none, but nvidia-smi is available. (Note I'm not seeing the actual power consumption from smi; this might be a config or hardware thing with my system.)

I'm guessing Pop!_OS is doing the same thing of having a process open for battery and keeping a handle on the GPU for when it needs to switch. IDK, I'm just guessing based on my observations. I need time to sit down and benchmark it.
You can fully disable the Nvidia GPU by editing ACPI tables.
Although, you will have to use a custom bootloader that lets you inject SSDTs, like Clover.
Thanks for the great explanation :) really helpful
Thank you friend,
Very good explanation.
@Mark Furneaux I have more or less the same system and I am wondering if you or anyone else knows how to fix Debian so it works or has a good iso file for Debian testing with optimus + amd switchable.
That's interesting. I have an MSI GL63 8RD with Mint 19.3. Although the GTX 1050 Ti has Optimus support, I was never able to set it up successfully. So I run with onboard graphics all the time and can switch between dGPU and iGPU manually.
That Nvidia utility, is it available as binary anywhere?
Offtopic -
Mark, you should use SimpleScreenRecorder for screencasts. It does a superb job of video/audio at the same time.
Nice explanation!
bro please list the commands, very hard to see
Why not just power off the pcie device from the kernel? Pretty standard thing to do if you never plan to use the Nvidia card
Here is a guide for this wiki.archlinux.org/index.php/Hybrid_graphics#Fully_Power_Down_Discrete_GPU
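A minimal sketch of the sysfs removal step from that wiki page. The bus address 0000:01:00.0 is an assumption (find the real one with `lspci`), and on some machines an additional ACPI call is needed to actually cut power, as the wiki page explains:

```shell
#!/bin/sh
# Detach the discrete GPU from the PCI bus so it can be powered down.
# Must run as root; the default address below is a placeholder.
GPU="${1:-0000:01:00.0}"
DEV="/sys/bus/pci/devices/$GPU"
if [ -e "$DEV" ]; then
    echo 1 > "$DEV/remove"
else
    echo "device $GPU not found"
fi
```

Run it as e.g. `sudo ./gpu-off.sh 0000:01:00.0` after checking your address.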
This is a good idea and I should have mentioned it. However I can't use it in my case because there is no entry in the ACPI tables for controlling the discrete GPU. ASUS really didn't give a shit with the BIOS on this laptop; it didn't even suspend/resume without ACPI tweaks.
Just out of curiosity, because I'm looking into buying a new laptop in the near future: is there any reason to buy one of these powerful machines with a dedicated GPU if it's basically never going to be used? Like having better build quality or better thermals than regular thin-and-light laptops.
You can't get 6 and 8 core CPUs in laptops without dedicated GPUs because the laptop manufacturers don't offer those SKUs
On my Lenovo laptop I can go into the BIOS and just turn off switchable graphics, forcing my laptop to use the integrated one.
Isn't that basically what Looking Glass is intended to do, but through a virtual machine?
thank you soo much, sir :)
I’ve never had good experiences with any power management, I’ve had the same problem with older NVIDIA GPUs but also even with iGPU only machines power management ruins my day. The most common problems I’ve had are with sleep modes, both on Linux and Windows I’ve had many problems with S3 power state not being entered correctly even with Windows on a Microsoft Surface! They would either just stay in S0 forever and waste the battery or never come back from S3 until the battery slowly discharged. That sort of fault makes a laptop completely useless if you’re rolling the dice every time you turn it off, modern Windows disguising ‘shutdown’ as ‘hibernate’ doesn’t help either.
I’ve also had power management problems with Windows Phone 10 not correctly turning off peripherals (although it was an early preview build they dropped support for the phone soon after they promised not to).
Although I do feel for those who have to develop power management features, I’ve had a fair share of developing firmware for long term low power devices on both Atmel and Espressif platforms, always involved plenty of troubleshooting strange behavior and digging through long and complex documentation.
My laptop has a unique issue: I can't let it go to sleep because the screen is too close to the keyboard, so it presses the space bar when I put it in my bag. I have to set it to hibernate when I close the lid. It's aggravating.
Also, the laptop doesn't wake from sleep without a keyboard press, so it actually isn't much harder to just let it hibernate instead of sending it to sleep.
and yes, all of these issues were out of the box.
Goddamnit man that is the exact laptop I have.
I have the FX505DU and it has a 1660 Ti (non Max Q), yours should too.
Good info
there's a kernel module for switching the dedicated gpu off without the help of any nvidia driver software: github.com/Bumblebee-Project/bbswitch
it's available as a package in archlinux, don't know about ubuntu and others.
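A quick sketch of how bbswitch is driven once the module is loaded (needs root; the guard keeps it from failing on machines without bbswitch):

```shell
#!/bin/sh
# Query the dGPU state via bbswitch, then switch it off.
if [ -e /proc/acpi/bbswitch ]; then
    cat /proc/acpi/bbswitch          # prints e.g. "0000:01:00.0 ON"
    echo OFF > /proc/acpi/bbswitch   # cut power to the card
else
    echo "bbswitch not loaded"
fi
```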
I don't really see the advantage of using 2 GPUs at the same time; it's just power lost. Personally I use Slimbook Battery for power management and optimus-switch-amd to choose between GPUs. I usually use the iGPU and only activate the dGPU when I need the HDMI output.
It might be useful if you have a slower dedicated GPU and an iGPU: you can, for example, use one for gaming and stuff like that, while using the iGPU for docs, watching videos, or recording the screen without touching your dGPU.
optimus-manager on my laptop was burning battery like anything. Whether it was on NVIDIA or Intel, optimus-manager would drain battery like water. But with PRIME I don't have that issue; I can run any application using prime-run in the terminal followed by the program name, no problem. It depends on the laptop, I guess. Anyway, mine is a Turing NVIDIA GTX 1650, on Arch.
Even in 2022, it still doesn't work. So the moral of the story is: if you have a hybrid GPU setup, don't waste your time and install Windows.
Optimus Manager is still buggy. Prime run isn't working properly either
2:31 Pop!_OS has NVIDIA support, there's even a separate ISO for that
actually very useful, since i use artix and don't want to change anything (except configuring gpu)
1.25A * 12.3V - 0.9A * 12.3V = 4.3W, so the driver estimation of 3W less was a bit low, maybe rounding. Sounds like it can go even lower too:
P0/P1 - Maximum 3D performance
P8 - Basic HD video playback
P12 - Minimum idle power consumption
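The arithmetic above checks out; as a quick sanity check:

```shell
# Difference between 1.25 A (dGPU active) and 0.90 A (dGPU idle)
# at the measured 12.3 V, matching the ~4.3 W figure above.
awk 'BEGIN { printf "%.3f W\n", (1.25 - 0.90) * 12.3 }'
```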
very low amp draw...
Could you figure out how to change the power state to level 12?
@@jsoares91 Sorry I have no idea.
Or try installing TLP for power management and adjusting the governor and power management settings.
My ASUS laptop with a passively cooled Intel CPU gets like 8 hours on a charge, but then again, that laptop has a larger battery than yours, and my main laptop, a POS Walmart brand, has a 45 Wh battery that drains in like 3 hours with aggressive power management.
IDK, Pop!_OS (Ubuntu fork) has a switch for it without any mods. You have to reboot to switch, but that's not that big of an issue IMO.....
I have the same laptop, and after 2 years this has probably been fixed, because nvidia-smi showed P8 and 2 watts without any X configuration with the 535 driver. The sensors showed the same numbers before and after, though they were different from the video (around 11 volts and 1.4 amps). The distro used is Arcolinux b
Well, doesn't running nvidia-smi ping the GPU and thus draw some power?
Not fixed yet in Kubuntu 23.10, Legion Slim 5, 7840HS, RTX4060. 535 total idle power draw 22W, Nouveau driver 7W.
Use bbswitch to completely turn off the nvidia gpu
the thing is that Ryzen APUs are stupid efficient, so wasting that power is sucky
the thing I hate about not using the GPU is that it feels like a waste of money, so unless you play games, you're better off getting something like a Dell XPS, if they ever start using Ryzen and including Thunderbolt
NVIDIA released some DSP for CUDA; it removes noise perfectly. I have tried it in Windows. So I'm trying to share the GPU via Ethernet from a Linux machine; forgot how, did something with USB ports. Thanks for explaining this stuff. Edit: cpyrit is on the to-do list too :)
That is what I want, bro. I've been searching for this for a long time after getting a brand new gaming laptop which I cannot use without installing Ubuntu.
This is why Linus Torvalds cussed nvidia that one time..
Now it really is a NoVideo gpu.
maybe on Pop!_OS it works out of the box
Works out of the box with Pop!_OS.
But I switched to Arch now :D
I don't know if the consumer series supports most of the smi commands, but you could try forcing a max power limit: "nvidia-smi -i 0 -pm 1 -pl 1" would lock the card at max 1 watt power consumption. The card should not stay in P0 without any process; that looks like a bug. At least in my experience with datacenter GPUs this is not the normal behavior.
It's the driver; I had it on numerous laptops. It's hard to find a good one on Windows, and don't get me started on Linux. But I just take the MXM GPU out when booting Linux, then I'm good. For other laptops that run Windows only, I just use VMware; Linux is best used only on the iGPU or as a desktop server station.
You can also completely turn off a GPU by doing an appropriate ACPI call at boot time.
Could you share some guide or something?
Great, interesting. Is this something that should not happen with AMD?
I don't think there are any laptops with an iGPU + dedicated AMD GPU. And if there are, you'll most probably have the same problem of needing at least some kind of power management driver
@@4833504F well, now there are; and I'm not sure how their power management works exactly, but it's probably better
@@4833504F probook 6470b
Bumblebee is a dual-graphics solution, so you would need Bumblebee when you play Steam games
It depends on the driver version. Newer NVIDIA drivers are not compatible with Bumblebee, but can work with Vulkan, so for newer NVIDIA drivers on Optimus laptops you'll need the nvidia-prime package; it contains the prime-select command, among other things. With drivers that were compatible with Bumblebee, you could choose between two options with prime-select: nvidia or intel. Newer drivers bring a third option, on-demand, but 'prime-select on-demand' is not enough for offload to work. And that is where the "fun" begins ;]

On Arch systems there is a 'prime-run' command that is supposed to be used like the optirun command. There is no such thing in the Ubuntu online documentation, but the Debian Wiki has a page about NVIDIA Optimus which mentions that you need to run programs with two environment variables set to make them use the NVIDIA GPU: __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia
Easy-peasy XD
Newer versions of GNOME have menu options to run programs with those variables, I've written a simple wrapper with those variables and named it prime-run :D
Unfortunately I don't know how much power my card takes on-demand, because I don't see power in my nvidia-smi or nvtop.
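The wrapper mentioned above can be sketched in a few lines (save it as e.g. ~/bin/prime-run and mark it executable; the name is just a convention borrowed from Arch):

```shell
#!/bin/sh
# Run the given program on the NVIDIA GPU via PRIME render offload.
# The two variables are the documented offload switches from the
# Debian Wiki excerpt quoted above.
exec env __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia "$@"
```

Usage is then simply `prime-run glxgears` or `prime-run <your-game>`.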
Try Linux Mint 20; you can try installing dual graphics with Bumblebee! Battery life
Think of it this way: use TLP and Bumblebee and it can handle power management, or use Manjaro with TLP and the Bumblebee switch to go from onboard graphics to the discrete NVIDIA graphics when gaming or running Kodi!
I could simply disable the dGPU in the bios luckily
A lot of laptops no longer have this option. A laptop I had only had GPU options for "Discrete" (the NVIDIA dGPU) and "Hybrid" (power-managed selection via software). Whenever I selected "Hybrid", the dGPU would still be powered up, so I'd have to dig around to find out how to power it down - and it wasn't easy or straightforward.
The build quality of ASUS TUF series laptops is bad.
Mine has nice build quality. Feels pretty good and is sturdy.
I got a fx506 😁
I checked today, it's using more power than Windows
Guess this hasn't been a priority with the coders... But, more and more Linux guys & gals are using these souped up rigs so maybe they'll take notice.
Especially since you were kind enough to go through the trouble of making a video :-)
I was looking for this for years, searching for Optimus instead of PRIME. NVIDIA is a joke, especially on Linux. They expected you to reboot every time you need to swap to the dGPU, unless you use Bumblebee or something similar. lol
Hey, did you find any way to change GPUs without rebooting? Have they done something good now?
Linux is bad with graphics card drivers, especially if you have an integrated GPU + dedicated GPU... Windows does this in a better way
It's actually the graphics companies (NVIDIA) that don't support Linux. Not Linux's fault; it's NVIDIA's fault for not supporting this PRIME technology under Linux.
@@potatogod3000 even AMD's open-source driver lacks this feature... I think developers need to focus on ARM laptops and Linux... we lack apps on the ARM side....
@@techzone2009 yeah, true.. hope we get more attention from devs in the future. For that we need at least 5% or more market share. Hope we can get there quickly.... :)
Don't use NVIDIA GPUs with Linux; I've always had bad experiences. I uninstalled it as I don't use it for gaming. Now my laptop runs at a good temperature and has more battery time.
Review random shit from eBay again
16 mins for literally 60s of information. 😔