this post was submitted on 12 Feb 2024

Linux Questions

submitted 7 months ago* (last edited 6 months ago) by possiblylinux127 to c/linuxquestions
 

I have lingering set up and I can still access the container, but for whatever reason Podman seems unable to access the GPU.

I think this may be an issue with systemd but I'm not entirely sure.

Solution: you need to be logged in for it to work. I accomplished this on a separate VM with autologin to IceWM.
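For anyone hitting the same thing, a rough sketch of the checks involved. The username `myuser` is a placeholder; `getfacl` requires the `acl` package. Lingering alone was not enough in the OP's case, which is what points at session-bound device ACLs:

```shell
# Let the user's services (e.g. podman units) run without an open session:
loginctl enable-linger myuser

# Inspect the ACL logind grants the active seat user on the GPU node;
# the per-user "user:myuser:rw-" entry disappears when the session ends:
getfacl /dev/dri/card0
```

Comparing the `getfacl` output with and without a graphical session logged in shows whether logind's session ACLs are what is granting (and revoking) access.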

[email protected] 5 points 7 months ago

Random guess: your GPU is managed by logind and bound to your session. When your session ends, logind takes away the permissions. This kind of makes sense, if somebody else were to physically login on your PC, they should get (probably exclusive) access to the GPU.

Not sure if this is even a good idea since I have never researched this, but maybe you can just write some udev rules to ensure that your user always has permissions to access the device?
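An untested sketch of what such a rule could look like; the file name is hypothetical, and it grants group access rather than per-user access (which avoids hardcoding a username):

```
# /etc/udev/rules.d/99-gpu-access.rules (hypothetical file name)
# Give the "render" group read/write on DRM card nodes regardless of session:
SUBSYSTEM=="drm", KERNEL=="card*", GROUP="render", MODE="0660"
```

After adding a rule like this, reload with `udevadm control --reload` and re-trigger the device, or simply reboot.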

possiblylinux127 2 points 7 months ago

Probably but I was hoping for a simple solution

[email protected] 1 points 7 months ago* (last edited 7 months ago)

Actually, there probably is one. I thought the classic way of managing permissions via the video group was gone, but on all my installs (Arch and NixOS) the GPU devices (~~/dev/video*~~ EDIT: /dev/dri/card*; the former is your webcam) are still owned by root:video. Maybe just adding your user to the video group will work? The Arch Wiki even suggests this for this case:

There are some notable exceptions which require adding a user to some of these groups: for example if you want to allow users to access the device even when they are not logged in.
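The suggestion above boils down to one command plus a check; `myuser` is a placeholder, and the group change only takes effect on the user's next login:

```shell
# Add the user to the video group:
sudo usermod -aG video myuser

# Confirm the root:video ownership described above:
ls -l /dev/dri/card*
```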

possiblylinux127 2 points 6 months ago

For me it is owned by the video user and the render group.

I don't mind running IceWM in a VM as it has fairly small overhead. It's not like I'm actually using the desktop, so it takes pennies' worth of RAM and no CPU.

[email protected] 1 points 6 months ago

Interesting. For me, it's only the /dev/dri/render* device that is owned by the render group, but that device is world-RW anyway. Still, I guess you could add the user to the render group too? I did find some info that Debian uses the group this way, though I have never used Debian myself, so I can't verify that.

possiblylinux127 2 points 6 months ago

I already did that so that Podman could access the device (Podman runs as a local user). What was strange was that Podman couldn't access it without a graphical session running, but my local user could.
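For context, a hedged sketch of how a rootless Podman container typically gets the GPU node; the image name is a placeholder, and `--group-add keep-groups` preserves the host user's supplementary groups (video/render) inside the container:

```shell
# Pass the DRM device nodes into a rootless container and list them:
podman run --rm --device /dev/dri --group-add keep-groups some-image ls -l /dev/dri
```

If this fails only when no graphical session is active, that again points at logind's session-bound device ACLs rather than at Podman itself.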

[email protected] 1 points 6 months ago

No idea then :( AFAIK the logind mechanism I mentioned originally is based only on permissions, but I have never really needed to look into it further.