this post was submitted on 03 Nov 2024
25 points (96.3% liked)


Hi everyone! I want to be able to access, from the host, a folder inside the guest that corresponds to a cloud drive mounted inside the guest for security purposes. I have tried setting up a shared filesystem in Virt-Manager (KVM) with virtiofs (following this tutorial: https://absprog.com/post/qemu-kvm-shared-folder), but as soon as I mount the folder so that it is accessible on the ~~guest~~ host, the cloud drive gets unmounted. I guess a folder cannot carry two mounts at the same time. Aliasing the folder using a bind mount and then sharing the aliased folder with the host doesn't work either: the aliased folder is simply empty on the host.

Does anyone have an idea regarding how I might accomplish this? Is KVM the right choice, or would something like docker or podman be better suited for this job? Thank you.

Edit: To clarify: The cloud drive is mounted inside a virtual machine for security purposes as the binary is proprietary and I do not want to mount it on the host (bwrap and the like introduce a whole lot of problems, the drive doesn't sync anymore and I have to relogin each time). I do not use the virtual machine per se, I just start it and leave it be.
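For what it's worth, the empty-alias symptom is consistent with how bind mounts work: a bind captures the directory tree as it is at bind time, so a FUSE mount created afterwards (or any submount) is not included unless the bind is recursive and made after the drive is up. A rough sketch inside the guest, with `/mnt/clouddrive` and `/srv/export` as hypothetical paths:

```shell
# hypothetical paths: /mnt/clouddrive is the FUSE cloud mount,
# /srv/export is the directory shared with the host via virtiofs
sudo mkdir -p /srv/export

# a plain bind made *before* the cloud client runs only captures the
# empty directory; re-bind recursively after the drive is mounted so
# the FUSE submount is included in the aliased tree
sudo mount --rbind /mnt/clouddrive /srv/export

# optionally make propagation shared so later remounts follow automatically
sudo mount --make-rshared /srv/export
```

This is only a sketch of the bind-mount semantics, not a guarantee that virtiofs will pass the FUSE submount through to the host.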

top 36 comments
[–] [email protected] 4 points 2 weeks ago (1 children)

fwiw: if you go with the container strategy using docker or podman, you should be able to use a storage overlay, based on how i'm reading your question.

it's hard to ascertain any path forward w/o knowing more details on the cloud drive and how it's currently mounted on the guest instance.

[–] [email protected] 1 points 2 weeks ago (1 children)

I have no idea how it is mounted (how can I find out?) because the binary is proprietary. This is why it is contained inside a virtual machine.

[–] [email protected] 3 points 2 weeks ago (1 children)

run the command mount with sudo access; if you can see the drive enumerated in the printout, then you should be able to proceed with either a container overlay or a separate mount point.

if not, then it'll get very advanced very quickly; do you know how to use strace?
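a quick way to check, sketched below; findmnt (part of util-linux) gives a cleaner view than raw mount output:

```shell
# list mounted filesystems and filter for FUSE-backed entries (run on the guest)
mount | grep fuse

# or a more structured view: mount point, source, fs type, and options
# (FUSE filesystems usually show a type like "fuse.something")
findmnt -o TARGET,SOURCE,FSTYPE,OPTIONS | grep -i fuse
```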

[–] [email protected] 2 points 2 weeks ago (1 children)

I just checked and it is mounted as a fuse drive.

do you know how to use strace?

A very confident NO :)

[–] [email protected] 4 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

fortunately we won't have to bother w strace; but i think i can see where you'll be blocked.

do you have to provide a username/password or token when you try to access the drive now?

if yes, then you should be able to mount it like you're trying to do using instructions like these and you can use the information from the last printout to fill in the blanks.

if no, then its access is controlled outside of your guest instance and you'll need to ask your admins to enable access.

[–] [email protected] 1 points 2 weeks ago (1 children)

do you have to provide a username/password or token when you try to access the drive now?

I do but it's through the proprietary GUI of the binary which has no CLI or API I can use.

[–] [email protected] 1 points 2 weeks ago (1 children)

then strace might help if we're lucky enough to get something like memory addresses.

strace can be very verbose and requires a lot of knowledge that i doubt i can share through comments back and forth.

is creating an intermediary like others have commented on in this post an option? they're generally easier and faster than strace, and there's no guarantee that strace will show us the information we need.

[–] [email protected] 1 points 1 week ago (1 children)

strace can be very verbose and requires a lot of knowledge that i doubt i can share through comments back and forth.

No worries. Thanks a lot nonetheless.

is creating an intermediary like others have commented on in this post an option?

What do you mean by intermediary? Do you mean syncing the files with the VM and then sharing the synced copy with the host? That wouldn't work since my drive is smaller than the cloud drive and I need all the files on-demand.

[–] [email protected] 2 points 1 week ago (1 children)

What do you mean by intermediary? Do you mean syncing the files with the VM and then sharing the synced copy with the host? That wouldn’t work since my drive is smaller than the cloud drive and I need all the files on-demand.

that's one way. do you need them all at the same time? are they mostly the same size and type?

[–] [email protected] 1 points 1 week ago (1 children)

do you need them all at the same time?

I need to access all files conveniently and transparently depending on what I need at work in that particular moment.

are they mostly the same size and type?

Hard no.

[–] [email protected] 1 points 1 week ago (1 children)

sshfs might work if your fuse drive is mounted with options that let it be shared and you have sudo access to enable sshfs. ssh access is also a requirement.

how is it mounted now? it should be in that same mount printout, usually at the end of the line in parentheses.
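a rough sketch of the sshfs route, assuming the guest is reachable over ssh at a hypothetical address and the drive sits at /mnt/clouddrive inside the guest:

```shell
# on the host: mount the guest's cloud folder over SSH
# (address, user, and paths are hypothetical)
mkdir -p ~/clouddrive
sshfs -o reconnect,follow_symlinks user@192.168.122.50:/mnt/clouddrive ~/clouddrive

# note: with user_id=0,group_id=0 on the guest-side FUSE mount, the ssh
# user may need root (or the guest mount may need allow_other) to read it

# unmount when done (fusermount instead of fusermount3 on older systems)
fusermount3 -u ~/clouddrive
```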

[–] [email protected] 1 points 1 week ago (1 children)

rw,nosuid,nodev,relatime,user_id=0,group_id=0

[–] [email protected] 1 points 1 week ago (1 children)

user_id=0,group_id=0

do you have sudo access and are there any rules in /etc/sudo* that match your username or any of your groups? which distribution?

[–] [email protected] 1 points 1 week ago (1 children)

Since originally writing the post I have switched to a rootless podman container. Running it how I did before (inside a VM) would simply yield user_id=1000,group_id=1000 I think.

[–] [email protected] 1 points 1 week ago (1 children)

that implies that you're not using the binary anymore since you're in a container; is it using an overlay fs?

[–] [email protected] 1 points 1 week ago (1 children)

I am using the binary. Just running it inside a container instead of a VM.

overlay fs?

Yes.

[–] [email protected] 1 points 1 week ago (1 children)

so the drive isn't mounted when the container starts; but you execute it after it started and then the drive is mounted?

[–] [email protected] 1 points 1 week ago (1 children)
[–] [email protected] 1 points 1 week ago

i've never seen a workflow like that so i don't think i can help you with the container.

if getting it from the host is an option, then it makes sense to see if it's possible; something like a sudoers rule or selinux could prevent that. my last question was my attempt to ascertain this.

[–] [email protected] 4 points 2 weeks ago (1 children)

Does rclone support the cloud service?

[–] [email protected] 1 points 1 week ago (1 children)

It does not, hence my question.

[–] [email protected] 1 points 1 week ago (1 children)

Gotcha, in that case maybe a container? You can use a bind mount to link a folder on the host to inside the container. You could use docker/podman or LXC.
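A hedged sketch of that idea with rootless podman; the image name and paths are hypothetical, and the FUSE device plus mount propagation are the tricky parts:

```shell
# host folder that should mirror the cloud drive (paths/image hypothetical)
mkdir -p ~/clouddrive

podman run -d --name cloud-client \
  --device /dev/fuse \
  --cap-add SYS_ADMIN \
  -v ~/clouddrive:/mnt/clouddrive:rshared \
  cloud-client-image

# caveat: a FUSE mount created *inside* the container only shows up in the
# host folder if mount propagation is shared on both sides of the bind;
# by default the host just sees the empty bind-mounted directory
```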

[–] [email protected] 1 points 1 week ago

This is what I have been trying for the past two days actually: https://lemmy.ml/post/22215540 Could you please assist me there if you have an idea? Thanks :)

[–] [email protected] 1 points 2 weeks ago (1 children)

Maybe reshare the directory locally through Samba on your VM?
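If that route is taken, a minimal share definition on the guest might look like this (section name, path, and user are hypothetical):

```
# /etc/samba/smb.conf on the guest
[clouddrive]
    path = /mnt/clouddrive
    read only = no
    valid users = user
    browseable = yes
```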

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (1 children)

Why not NFS? Regardless, wouldn't it be slower anyway compared to virtiofs?

[–] [email protected] 2 points 1 week ago (1 children)

Just throwing it out there as an option. Good luck.

[–] [email protected] 1 points 1 week ago
[–] [email protected] 1 points 2 weeks ago (1 children)

Use something like Samba to share files between the two systems.

[–] [email protected] 1 points 1 week ago

I think NFS would be a better choice if I decide to go that route. Isn't SAMBA slower and older than NFS?

[–] [email protected] 1 points 1 week ago (1 children)

I don't understand what you mean by the content disappearing when you mount the virtiofs on the guest - isn't the mount empty when bound, until the guest populates it?

Can you share what sync client + guest OS you are using? If the client does "advanced" features like files-on-demand, it might clash with virtiofs - this is where the details of which client/OS could be relevant. Does it require local storage, or does it support remote storage?

If guest os is windows, samba share it to the host. if guest os is linux, nfs will probably do. In both cases I would host the share on the client, unless the client specifically supports remote storage.
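For the linux-guest case, the NFS side could be sketched like this; note that FUSE filesystems generally need an explicit fsid= in the export because they lack stable filehandles (addresses and paths are hypothetical):

```shell
# on the guest: export the FUSE mount over NFS
# fsid= is required for FUSE-backed exports
echo '/mnt/clouddrive 192.168.122.1(ro,sync,no_subtree_check,fsid=101)' \
  | sudo tee -a /etc/exports
sudo exportfs -ra

# on the host: mount the export (guest address hypothetical)
sudo mount -t nfs 192.168.122.50:/mnt/clouddrive /mnt/clouddrive
```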

podman/docker seems to be the proper tool for you here. a VM with the samba/nfs approach could be less hassle and less complicated, but somewhat bloaty. containers require some more tailoring but in theory are the right way to go.

Keep in mind that a screwup could be interpreted by the sync client as mass-deletes, so backups are important (as a rule of thumb, it always is, but especially for cloud hosted storage)

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

I don’t understand what you mean by the content disappearing when you mount the virtiofs on the guest - isn’t the mount empty when bound, until the guest populates it?

Sorry, I made a mistake in the original post. I wanted to say on the host instead of on the guest. My bad.

Yes, you are correct, the folder is empty until I log in inside the cloud application on the guest.

does it require local storage or support remote?

What do you mean? The cloud drive is a network drive basically. It only downloads files on demand.

if guest os is linux, nfs will probably do

This is what others have suggested and what I will probably do if the method below fails.

podman/docker seems to be the proper tool for you here

Yesterday I actually tried to spin a podman container hoping it would work but I encountered the following problem when trying to propagate mounts: https://lemmy.ml/post/22215540

Could you please assist me there if you have further ideas? Thank you :)

Keep in mind that a screwup could be interpreted by the sync client as mass-deletes

I am VERY aware of this *sweating*

[–] [email protected] 1 points 1 week ago
[–] [email protected] 1 points 2 weeks ago (1 children)

Wouldn’t you just be able to create a folder for Xdrive (an imaginary alternative to Google Drive) in the virtual machine and another one on the host?

Since they are both synchronized with Xdrive they would have the same content.

[–] [email protected] 1 points 2 weeks ago

The cloud drive is mounted inside a virtual machine for security purposes as the binary is proprietary and I do not want to mount it on the host (bwrap and the like introduce a whole lot of problems, the drive doesn't sync anymore and I have to relogin each time). I do not use the virtual machine per se, I just start it and leave it be.

[–] [email protected] 1 points 2 weeks ago (1 children)

The best option would be to have a "regular" client that keeps a local copy in sync with the cloud instead of a mount.

BTW: IDK what cloud storage you are using, but IIRC some show files that are not available locally (i.e. only the most recent files are downloaded locally - the older stuff is downloaded on request).

Alternatively, you could hack something together running unison locally in the guest to sync the cloud folder to a shared one... you'll have two copies of the data though.
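As a sketch of that hack (paths hypothetical), run inside the guest, perhaps on a timer:

```shell
# sync the FUSE cloud mount into a virtiofs-shared folder
# (-batch avoids interactive prompts, -auto accepts non-conflicting changes)
unison /mnt/clouddrive /mnt/shared -batch -auto
```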

[–] [email protected] 1 points 2 weeks ago* (last edited 2 weeks ago)

That would be impossible since the cloud drive is 2TB and my physical storage space is under 500GB in size.