Talk:Bumblebee


Nvidia ON/OFF

This is a dark spot. As long as acpi_call does not work reliably on most laptops, there is no safe way to tell whether it's working. For this reason I'm treating it as purely experimental and not supporting it for now. Your issue has been reported and is known on a variety of ASUS laptops. I recommend reading about acpi_call and its list of known-to-work laptops. BTW: Thanks!

I think the higher power consumption is caused by the X server hanging (it hogs 100% of one CPU core) when you switch off the card via acpi_call. I've got the same issue here on an ASUS X53S, which also has an NVIDIA GT 540M.
florianb 00:19, 1 August 2011 (CET)
Try disabling the X server first, or you will have issues. If there is still a problem, try the vga-switcheroo option.
Samsagax 19:27, 31 July 2011 (EDT)
I successfully reproduced the errors:
1. If you switch off the NVIDIA card before you stop the bumblebee daemon (which starts/stops the 2nd X server), you get into trouble: the X process hogs 100% CPU, becomes unkillable, and the overall power consumption (in my case) goes from about 1500 mA to 2100 mA.
2. If you only stop the bumblebee daemon without switching off the NVIDIA card, power consumption goes from about 1500 mA to 1800-1900 mA (maybe user "thewall" only stopped the daemon without switching off the NVIDIA card?).
3. If you switch off the NVIDIA card (a GT 540M in my case) via acpi_call (see the sketch after this list), power consumption goes down to 1200 mA, which is quite nice, *but* the fan spins up to 100% a few seconds after you switch the card off; this seems to consume about 50 mA more power and, above all, is totally annoying.
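
For reference, switching the card off via acpi_call looks roughly like this; the ACPI method path below is a made-up example, since (as noted below) the correct handle is laptop-specific and often just guessed:

 # load the acpi_call module (assuming it is installed), as root
 modprobe acpi_call
 # call the discrete GPU's _OFF method; this path is a hypothetical example
 echo '\_SB.PCI0.PEG0.PEGP._OFF' > /proc/acpi/call
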
A guy on the Ubuntu forums has apparently already fixed 3) on hardware similar to mine, but I guess the differences are in the details; I'm trying to find out.
florianb 08:07, 1 August 2011 (CET)
I'll try to release the new model for the nvidia driver today, similar to the one for nouveau. That way power switching is done automatically, by means of vga-switcheroo by default. I have to remind you that the acpi_call method calls are guessed and (in your case) may be incorrect. Samsagax 10:42, 1 August 2011 (EDT)
Okay, sounds nice. I'd really like to contribute something to your work; if there's anything I can do, let me know.
florianb 10:37, 2 August 2011 (CET)

Multiple monitors with screenclone - wrong info

At the end of the manual it says "Take note of the position of the VIRTUAL display in the list of Outputs as shown by xrandr. The counting starts from zero, i.e. if it is the third display shown, you would specify -x 2 as parameter to screenclone". However, this was wrong in my case; I had to specify -x 2 even though VIRTUAL was first in my xrandr output (which would imply -x 0, but that only cloned my laptop display). Making a change that mentions this. Futile (talk) 21:29, 6 July 2013 (UTC)
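
For context, the invocation under discussion looks roughly like this (a sketch; the -d display number and the -x screen index depend on your setup, as discussed above):

 # clone Xinerama screen index 2 onto the X display :8 driven by the NVIDIA card
 screenclone -d :8 -x 2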

systemd-logind: failed to get session: PID XXX does not belong to any known session


I once got this error. When I tried what the wiki said, it made no difference.

But this worked:

Failed to initialize the NVIDIA GPU at PCI:1:0:0 (GPU fallen off the bus / RmInitAdapter failed!)

Add rcutree.rcu_idle_gp_delay=1 to the kernel parameters.
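
In case it saves someone the search: with GRUB, adding a kernel parameter goes roughly like this (a sketch, assuming GRUB is the boot loader; other boot loaders differ):

 # /etc/default/grub (excerpt): append the parameter, keeping your existing ones
 GRUB_CMDLINE_LINUX_DEFAULT="... rcutree.rcu_idle_gp_delay=1"

 # then regenerate the GRUB configuration, as root
 grub-mkconfig -o /boot/grub/grub.cfg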

I think these two issues have something in common.

However, people who have the same problem as I did should try it. —This unsigned comment is by Swordfeng (talk) 18 September 2014‎. Please sign your posts with ~~~~!

Why was this error removed from the wiki? It is not fixed, and the workaround I added to the wiki still works...
Aligator (talk) 18:58, 3 February 2015 (UTC)
It was removed (along with a host of other content) in [1], with only vague (read: no) reasoning. Reverted it. @Archange: Please read ArchWiki:Contributing, make small edits, justify them and check the talk page. -- Alad (talk) 20:19, 3 February 2015 (UTC)
Sorry, I just wanted to clean up this page because, as I’ve said, all this content was too old or even wrong. But I’m not used to MediaWiki, so I probably didn’t do it correctly; plus, I’m definitely not comfortable with discussions here. I’m a Bumblebee “dev”, and I’m currently cleaning up the most important wikis (Debian, Ubuntu, Arch) for the upcoming 4.0 release (which has been delayed but was initially due for the end of January). About the aforementioned error: it has nothing to do with Bumblebee, it’s a feature of rootless X. Bumblebee is coded to return X.org errors, but should ignore this one as it does some others (this is fixed in 4.0). -- Archange (talk) 12:56, 4 February 2015 (UTC)

Since ll /dev/dri/card0 gives something like:

 crw-rw-rw-+ 1 root vglusers 226, 0 Mar 6 14:24 /dev/dri/card0

I think I've solved the problem by reconfiguring the VirtualGL server: running sudo vglserver_config and disabling the first two options, i.e. "Restrict 3D X server access to vglusers group" and "Restrict framebuffer device access to vglusers group", as mentioned in /usr/share/doc/virtualgl/index.html.
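
Roughly, that session looks like this (a sketch; vglserver_config is interactive and the exact menu wording may differ between VirtualGL versions):

 # reconfigure the VirtualGL server, as root
 sudo vglserver_config
 # choose the "Configure server" entry, then answer "No" to both
 # "Restrict 3D X server access to vglusers group" and
 # "Restrict framebuffer device access to vglusers group"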

The error messages persist in dmesg/Xorg.8.log, but optirun seems to be working perfectly.

P.S. please don't remove this discussion page.


I fixed the same error by adding this to /etc/bumblebee/:

Section "Screen"
   Identifier "Default Screen"
   Device "DiscreteNvidia"
EndSection

as is mentioned on the Debian wiki Bumblebee page: https://wiki.debian.org/Bumblebee#Common_issues --Rezad (talk) 10:20, 22 April 2015 (UTC)

Intel/Nouveau: PRIMUS_libGLa

I moved Bumblebee#Intel/Nouveau: primus: fatal: failed to load any of the libraries: /usr/$LIB/nvidia/libGL.so.1 under Bumblebee#Troubleshooting, but it has been proposed to move it back under Bumblebee#Installing Bumblebee with Intel/Nouveau.

I see some other mentions of nouveau under Bumblebee#Troubleshooting, but I also understand Lahwaacz's point, and wouldn't mind moving the section to its original place in a Note. Are there more opinions?

Besides that, though, wouldn't it be more correct to state that support for nouveau is probably lacking due to the practical death of the project? The deprecation Note was added with [2], and the Expansion template with [3], but on the website and related links I couldn't find any official deprecation statement; see in particular [4] and [5].

Kynikos (talk) 03:57, 21 November 2015 (UTC)

Right, but the other mentions of nouveau are tied to recommending the proprietary driver instead, so that does not count :P
-- Lahwaacz (talk) 13:58, 23 November 2015 (UTC)
Eheh, I won't insist, it's practically the same to me; moved back :) — Kynikos (talk) 07:23, 24 November 2015 (UTC)
I have no idea what the actual state really is and don't have the resources to experimentally find out, because my laptop is unfortunately one of the last pre-Optimus models. For what it's worth, the last two links lead to pages last edited in 2013, whereas the deprecation note has been added a year ago.
-- Lahwaacz (talk) 13:58, 23 November 2015 (UTC)
I can't test the current working state either with my laptop, but the state of the project is quite clear... My point was indeed that in the latest update of the official docs (2013) there's no trace of a deprecation of nouveau support, and the deprecation note was added here by Svenstaro without any external reference. So I thought that blaming the death of the project, instead of an unreferenced deprecation, would make things clearer: my guess is that Bumblebee was working on nouveau in 2013 and stopped working in 2014 with nobody to fix it, hence the Note. — Kynikos (talk) 07:23, 24 November 2015 (UTC)

Fix for Bumblebee Optirun with Nvidia v358.16-2.1 driver and bbswitch v0.8

Please refer to issue #699 for details.

Note: There are other, similar issues relating to bumblebee or bbswitch failing to unload the nvidia driver, going back to 2012. Further information can be found under the Bumblebee Project on GitHub.

System affected:

Arch x86_64 laptop, NVIDIA GT540M, linux-lts 4.1.15-1, kded 5.17.0

Dude Doe Doh (talk) 22:53, 25 December 2015 (UTC)

TurnCardOffAtExit in /etc/bumblebee/bumblebee.conf

This section suggests that one workaround to a problem is to set TurnCardOffAtExit=false in /etc/bumblebee/bumblebee.conf; however, this will enable the card every time you stop the Bumblebee daemon, even if done manually.
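
For reference, the setting under discussion (a minimal excerpt; assuming the stock file layout, where the key sits in the [bumblebeed] section):

 # /etc/bumblebee/bumblebee.conf (excerpt)
 [bumblebeed]
 # do not power the card off when bumblebeed exits; per the text quoted
 # above, the card therefore ends up enabled whenever the daemon stops
 TurnCardOffAtExit=false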

This suggests to me that this is a poor solution compared to the alternative option. However, the bumblebee package *already* sets this by default, which appears to conflict with this text. Also, does setting this mean that the card will be on at all times, i.e. using up battery unnecessarily? -Ostiensis (talk) 01:04, 12 November 2016 (UTC)

The default bumblebee.conf setting is TurnCardOffAtExit=true; the article suggests setting it to false (effectively writing OFF), but then dismisses the idea in favor of a better alternative (namely writing ON to /proc/acpi/bbswitch). What exactly is conflicting? The power state is only changed when bumblebeed is stopped (e.g. systemctl stop bumblebeed, or on shutdown). It has no effect between invocations of optirun and will normally not eat your battery. --Lekensteyn (talk) 01:46, 12 November 2016 (UTC) PS. please sign your posts
Sorry, I always forget to sign! So you are saying that if I set this option to false, and I only ever turn on the card with optirun/primusrun, it should still be turned off when unused? Hence, it should not affect the battery. Regarding the default settings, I downloaded the package; grep TurnCardOffAtExit bumblebee-3.2.1-12-x86_64.pkg/etc/bumblebee/bumblebee.conf gives TurnCardOffAtExit=false. -Ostiensis (talk) 02:21, 12 November 2016 (UTC)
Sorry, I was mistaken; I had actually changed it to true myself. The default is indeed false. This default probably exists for compatibility reasons: some (older?) laptops would come up with a black screen if it was not done. And correct, it will only turn the card on with optirun (primusrun) and turn it off when optirun is not running. On daemon exit (e.g. shutdown) it will also turn the card on, but that should not be a problem, I guess? --Lekensteyn (talk) 10:24, 12 November 2016 (UTC)
No worries. I edited the wiki in an attempt to clarify it, but I've possibly misunderstood, so please feel free to edit further. Thanks for the replies. -Ostiensis (talk) 23:39, 12 November 2016 (UTC)
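
For anyone following along, the /proc/acpi/bbswitch interface discussed above can be exercised directly like this (a sketch; the PCI address in the output is an example and will differ per machine):

 # check the card's current power state
 cat /proc/acpi/bbswitch
 # -> 0000:01:00.0 OFF

 # turn the card on or off by hand (as root)
 echo ON > /proc/acpi/bbswitch
 echo OFF > /proc/acpi/bbswitch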

Bumblebee and TLP interfering

When using Bumblebee with a default TLP configuration, it is not going to work. You have to add the output of lspci | grep "NVIDIA" | cut -b -8 to RUNTIME_PM_BLACKLIST in /etc/default/tlp; uncomment the line if necessary. CodingHahn (talk) 17:22, 5 January 2017 (UTC)
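
Spelled out, that amounts to something like the following (a sketch; the PCI address shown is an example, use whatever lspci prints on your machine):

 # find the NVIDIA card's PCI address
 lspci | grep "NVIDIA" | cut -b -8
 # -> 01:00.0

 # /etc/default/tlp (excerpt): exclude that address from runtime power management
 RUNTIME_PM_BLACKLIST="01:00.0"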

Proper way to start bumblebeed on boot

If necessary, I'll recreate it here, but I already posted in the forums here. I'm following this wiki's instructions regarding being added to the bumblebee group, having everything installed, etc. I have the unit enabled, but it's never running on startup; I have to start it manually. Is there something wrong on my end, or is something in this wiki's instructions awry? Jwhendy (talk) 01:53, 7 January 2017 (UTC)

Thanks to Lahwaacz, this is resolved. bumblebeed hooks into the systemd graphical.target, but I don't use a graphical login/DM, so that target is never reached. It was pointed out that this came up on the GitHub page and that there's no issue with relying on multi-user.target instead. This fix is in the development branch (and has been for a couple of years). Jwhendy (talk) 19:48, 7 January 2017 (UTC)
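
For anyone hitting the same thing, a minimal sketch of the workaround, assuming a systemd version that honors [Install] sections in drop-in files (if yours does not, copy the full unit into /etc/systemd/system and edit the WantedBy= line there):

 # /etc/systemd/system/bumblebeed.service.d/override.conf
 [Install]
 WantedBy=multi-user.target

 # re-create the enablement symlinks so the override takes effect, as root
 systemctl reenable bumblebeed.service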

Misleading note at the top of the page

In the very top of the page you can see:

Note: You might want to use nvidia-xrun or PRIME instead, because Bumblebee not only has significant performance issues[6][7], but also has no plans to support Vulkan[8].

I think this note gives misleading advice, and it is generally wrong.

One of the two reasons given is "performance issues". While nvidia-xrun can certainly be an alternative, PRIME is not: the closed-source driver does not support PRIME at all, and nouveau is not even an option (performance-wise) for 3D workloads on modern cards. nvidia-xrun has its own drawbacks, the biggest one in my opinion being that it requires a separate X instance, which makes the whole process much more cumbersome than wrapping a command; so it's very far from being a complete replacement.

About Vulkan support: while it is true that there is no "official" support, primusvk[9] offers a solution that is already in the official repos, and it should be usable.

I'd propose a rewrite of the note, which right now tells the user "don't use Bumblebee if you care about Vulkan or performance", to something more subtle, along the lines of "You need to use primusvk[10] for Vulkan support, and try nvidia-xrun if you do not get enough performance from Bumblebee". Roobre (talk) 15:34, 10 July 2019 (UTC)

Hi. Please do it yourself :) -- Erkexzcx (talk) 20:13, 16 August 2019 (UTC)

Using primus_vk

This project works in a manner very similar to optirun or primusrun, but it uses the primus layer to translate Vulkan calls. It can run Vulkan applications, and it also falls back to OpenGL, so it can be used as a replacement for primusrun in most instances. No configuration is needed. Grazzolini (talk) 14:40, 13 September 2019 (UTC)
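
Usage then looks roughly like this (a sketch, assuming the package ships the pvkrun wrapper; vkcube is just an example client):

 # run a Vulkan application on the discrete card via primus_vk
 pvkrun vkcube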

bbswitch kernel module not loaded

Hi, it seems the bbswitch kernel module does not get loaded automatically (without an entry in modules-load.d). It took me some time to google why /proc/acpi/bbswitch does not exist, so we should probably point out that one needs to load the module oneself.
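
For the record, loading it at boot amounts to a one-line file via the standard modules-load.d mechanism:

 # /etc/modules-load.d/bbswitch.conf -- load bbswitch at boot
 bbswitch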

PS: first contrib. Is this the right place to discuss changes?

—This unsigned comment is by PsiTrax (talk) 23:13, 12 February 2021‎. Please sign your posts with ~~~~!

Hi, well done on your first contribution, you just need to sign your posts on talk pages, see Help:Discussion.
This is mentioned in Bumblebee#Default power state of NVIDIA card using bbswitch. Do you use bbswitch alone or with bumblebee? Bumblebee should take care of loading the bbswitch module.
-- Lahwaacz (talk) 18:33, 13 February 2021 (UTC)
Hi, I really think this should be mentioned earlier, because the present structure is misleading (I had the same problem PsiTrax mentions).
Maybe something like "To use bbswitch without bumblebeed, read Bumblebee#Default power state of NVIDIA card using bbswitch.", just before the line "To manually turn the card on or off using bbswitch, write ...".
Best regards! --riveravaldez (talk) 12:56, 12 March 2021 (UTC)
Thanks for confirming, feel free to add the link as you suggested. -- Lahwaacz (talk) 11:37, 13 March 2021 (UTC)