How to transfer files over network preserving the xattrs?
from CtrlAltOoops@lemmy.world to linux@lemmy.ml on 22 Jul 2024 20:53
https://lemmy.world/post/17851882

Title.

The situation is basically this:

Do you guys have any suggestions or maybe any other options that I might use?

#linux

bjoern_tantau@swg-empire.de on 22 Jul 2024 20:57

Rsync with the -a option is meant to preserve as much as possible.
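Worth noting: -a by itself does not copy extended attributes; the -X flag adds them (and -A does the same for ACLs). A minimal sketch, with placeholder paths:

```shell
# archive mode (-a) plus extended attributes (-X) and ACLs (-A);
# the trailing slash on the source copies its contents, not the directory itself
rsync -aXA /home/user/tagged-files/ user@server:/srv/share/
```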

CtrlAltOoops@lemmy.world on 22 Jul 2024 21:10

Thanks for the suggestion. In fact I tried rsync and it works. But is it possible to integrate it into my current workflow? Maybe copying/moving files using a file manager?

I’m asking because with the 3 options I mentioned I could, for example, create mount points in fstab, and from there on everything would be transparent to the user. Would that be possible using rsync?

mumblerfish@lemmy.world on 22 Jul 2024 21:20

How much delay could you live with between syncs? If it’s not important to be immediate, just an end-of-the-day thing, you could run the rsync with the --update flag as a cron job every so often.
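As an illustration (paths and schedule are made up), such a crontab entry could look like:

```
# m h dom mon dow   command
# sync at 18:00 daily; -u/--update skips files that are already newer on the receiver
0 18 * * * rsync -aXu /home/user/data/ user@server:/srv/data/
```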

solidgrue@lemmy.world on 22 Jul 2024 21:27

Secure file transfers frequently trade off some performance for their crypto. You can’t have it both ways. (Well, you can, but you’d need hardware crypto offload or end-to-end MACsec, both of which are more exotic use cases.)

rsync is basically a copy command with a lot of knobs and stream optimization. It also happens to be able to invoke SSH to pipeline encrypted data over the network, at the cost of SSH encrypting the stream.

Your other two options are faster because of write-behind caching in the protocol and transferring in the clear: you don’t bog down the stream with crypto overhead, but you’re also exposing your payload.

File managers are probably the slowest of your options because they’re a feature of the DE, and there are more layers of calls between your client and the data stream. Plus, it’s probably leveraging one of NFS, Samba or SSHFS anyway.

I believe “rsync -e ssh” is going to be your best overall option for secure, fast, and xattrs. SCP might be a close second. SSHFS is a userland application and might suffer some penalties for it.
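For example, the pull direction of that rsync-over-SSH setup might look like this (host and paths are placeholders):

```shell
# -e ssh tunnels the transfer through SSH; -X/-A keep xattrs and ACLs
rsync -aXA -e ssh user@server:/srv/share/ ~/local-copy/
```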

CtrlAltOoops@lemmy.world on 22 Jul 2024 21:55

I’ll take a closer look into rsync possibilities and see if it applies to my situation. I appreciate your input.

CCRhode@lemmy.ml on 23 Jul 2024 14:09

Maybe copying/moving files using a file manager?

<plugging package=“file_manager”>FileZilla</plugging>

-or-

<plugging package=“file_manager”>Gnome Commander</plugging>

…but call me quaint. I still like…

<plugging package=“file_manager”>mc</plugging>

… 'cause it always just works. mc can ostensibly preserve attributes, time-stamps, and (with appropriate privilege on the receiving end) ownership of transferred files, supposedly via an SFTP server.

myersguy@lemmy.simpl.website on 22 Jul 2024 20:59

You didn’t mention rsync, which I think is usually considered standard. I’d look into that.

atzanteol@sh.itjust.works on 22 Jul 2024 21:05

“rsync -X”

IsoKiero@sopuli.xyz on 22 Jul 2024 21:24

I assume you don’t intend to copy the files but to use them from a remote host? As security is a concern, I suppose we’re talking about traffic over the public network, where Kerberos with NFS (sec=krb5) provides only authentication; you’d need sec=krb5p for encryption as well. You can obviously tunnel NFS over SSH or a VPN, and I’m pretty sure you can create a Kerberos ticket which stores credentials locally for longer periods of time and/or read them from a file.
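A sketch of the SSH-tunnel variant, assuming NFSv4 (which needs only port 2049) and placeholder host/export names:

```shell
# forward local port 2049 to the server's NFS port over SSH
ssh -f -N -L 2049:localhost:2049 user@server

# mount the export through the tunnel
sudo mount -t nfs4 -o port=2049 localhost:/export /mnt/share
```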

SSH/VPN obviously causes some overhead, but they also provide encryption over the public network. If this is something run in a LAN, I wouldn’t worry too much about encrypting the traffic, and in my own network I wouldn’t worry too much about authentication either. Maybe separate the NFS server to its own VLAN or firewall it heavily.

PseudoSpock@lemmy.dbzer0.com on 22 Jul 2024 21:33

linux-audit.com/using-xattrs-extended-attributes-…

CtrlAltOoops@lemmy.world on 22 Jul 2024 21:54

I appreciate your help, but note that the article just covers some basics of xattr usage (I already know how to use them) and has no reference to transferring files, which is what I need.

PseudoSpock@lemmy.dbzer0.com on 23 Jul 2024 05:52

I suspect you use them more extensively than I do. Mine are usually limited to extended ACLs. I use getfacl to generate a dump of all the ACLs of the files and subdirectories I am transferring or 7zipping, and include that file in the transfer or 7z bundle. Then I use setfacl to apply all those permissions on the receiving end after everything has been copied or extracted.
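That workflow could look roughly like this (directory names are placeholders):

```shell
# on the sending side: dump ACLs for the whole tree, with relative paths
cd /data && getfacl -R project > project.acl

# transfer project/ and project.acl however you like, then on the receiver:
cd /backup && setfacl --restore=project.acl
```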

refalo@programming.dev on 23 Jul 2024 02:11

forces me to reenter the credentials frequently

Can you explain what your need is for copying files this frequently? Is this for backups? Do you always want the two sides to stay in sync? If so, something like a distributed filesystem such as gluster/ceph/etc. might work better for you.

CtrlAltOoops@lemmy.world on 23 Jul 2024 13:41

Sure. I have a little home server running Linux and 2 or 3 machines that access files shared by this server. I use Plasma on my desktop machines and I rely a lot on tags (just to clarify, Plasma uses xattrs - more specifically user.xdg.tags) to tag files. On the server I already have a couple of scripts that automatically insert some predefined tags on files.
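For reference, those tags can be written and read from the command line as well, which makes it easy to check whether a transfer kept them (the filename is a placeholder):

```shell
# write a tag list the way Plasma stores it, then read it back
setfattr -n user.xdg.tags -v "invoices,2024" report.pdf
getfattr --only-values -n user.xdg.tags report.pdf
```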

Thing is, when I try to copy and/or move files between server and desktop, depending on the protocol I used to mount the share, I lose this information.

People suggested rsync, and it would be an excellent option if what I wanted was to keep both sides synchronized or something like that. In fact what I need is just a solution that allows me to mount a server share and transfer files from it while preserving their extended attributes, preferably using a file manager (I basically use Dolphin or ranger).

No need to keep them synced.

D_Air1@lemmy.ml on 23 Jul 2024 08:34

The whole Samba filenames thing is configurable. I only use Linux systems and I ran into that same issue.

By default Samba seems to mangle file names. Not to mention that Windows systems don’t support naming your files whatever you want the same way Linux does, so those characters need to be mapped to something else. To solve this I include a few entries in my Samba config file:

mangled names = no
vfs objects = catia
catia:mappings = 0x22:0xa8,0x2a:0xa4,0x2f:0xf8,0x3a:0xf7,0x3c:0xab,0x3e:0xbb,0x3f:0xbf,0x5c:0xff,0x7c:0xa6

That’s just if you choose to go with Samba. I only use it ’cause it was easier to set up than NFS when I tried.
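On the xattr side specifically, smb.conf also has an option for exposing extended attributes as EAs; this fragment is a sketch based on the smb.conf manual rather than something from this thread, with a placeholder share:

```ini
[share]
   path = /srv/share
   ea support = yes
```

On the Linux client, mount.cifs maps user.* xattrs onto those EAs (the user_xattr mount option, normally on by default).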

CtrlAltOoops@lemmy.world on 23 Jul 2024 13:31

Hey, thanks for taking the time to reply.

Yep, when I tried using Samba I had this catia:mappings configuration in my smb.conf. Thing is, it slightly changes some characters (two that I specifically remember are ¿ and ¡) and sometimes doesn’t recognize filenames (I don’t remember exactly which characters), etc.

I tried to set up Samba, NFS and sshfs. It took a couple of days to understand each one a little better and, by trial and error, get an idea of their perks. I do appreciate your suggestion, but I don’t think Samba is what I’m looking for.

azvasKvklenko@sh.itjust.works on 23 Jul 2024 12:33

rsync -aX src dst