So I have a nearly full 4 TB hard drive in my server that I want to make an offline backup of. However, the only spare hard drives I have are a few 500 GB and 1 TB ones, so the entire contents will not fit all at once, but I do have enough total space for it. I also only have one USB hard drive dock so I can only plug in one hard drive at a time, and in any case I don’t want to do any sort of RAID 0 or striping because the hard drives are old and I don’t want a single one of them failing to make the entire backup unrecoverable.
I could play digital Tetris and manually copy individual directories to each smaller drive until they fill up, while mentally keeping track of which directories still need to be copied when I change drives, but I'm hoping for a more automatic and less error-prone way. Ideally, I'd want something that can automatically begin copying the entire contents of a given drive or directory to a drive that isn't big enough to fit everything, automatically stop at the last file that will fit in its entirety (I don't want to split files between drives), and then wait for me to unplug the first drive, plug in another drive, and specify a new mount point before continuing to copy the remaining files, using as many drives as necessary to copy everything.
Does anyone know of something that can accomplish all of this on a Linux system?
https://www.gnu.org/software/tar/manual/html_section/Using-Multiple-Tapes.html
Might do kind of what you want.
Something like mergerfs? I think this is what Unraid uses if I remember right.
If OP can't use more than one disk at once, how can they benefit from mergerfs?
Yeah you’re right. Scratch that then
Thank you!
Don’t become so concerned with if you could, that you overlook if you should.
I would buy a larger drive.
https://www.gnu.org/software/tar/manual/html_node/Multi_002dVolume-Archives.html
You might end up splitting files across drives, but I don't think you're likely to find a more "out of the box" solution. Note that GNU tar won't compress a multi-volume archive (it refuses -M together with -z and friends), so if you need things to fit, compress files beforehand, and don't forget to number your drives!
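For reference, a rough sketch of what that looks like with GNU tar (mount points and the 450G size are made up; newer tar versions accept size suffixes for -L, older ones want the length in units of 1024 bytes):

tar -cM -L 450G -f /media/backup1/backup.tar /srv/data

When a volume fills up, tar pauses and prompts; after swapping drives you can answer with "n /media/backup2/backup.tar" to point it at a file on the new drive. Extraction works the same way with "tar -xM -f ...", prompting for each volume in order.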
Thank you!
ZFS will let you set up a RAID-like set of small volumes that mirrors one larger volume. It takes some setup, but that's the most "elegant" solution in that, once it's configured, you only need to touch it when you add a volume to the system, and it's just a mounted filesystem that you use.
Does not solve the off-site problem: one fire and it's all gone.
It's going to take a little work here, but I have a large drive on my plex and a couple of smaller drives that I back everything up to.

On the large drive, get a list of the main folders. You can do a "du -h --max-depth=1 | sort -hk1" on the root folder to get an idea of how you should split them up. Once you have an idea, make two files, each with its own list of folders (eg: folders1.out and folders2.out), that you want to go to each separate drive.

If you have both of the smaller drives mounted, just execute the rsync commands; otherwise, do each rsync command with the corresponding drive mounted. Here's an example of my rsync commands. Keep in mind I am going from an ext4 filesystem to a couple of ntfs drives, which is why I use --size-only. Make sure to do a dry run or two, and you may or may not want the --delete option in there. Since I don't want to keep files I have deleted from my plex, I have it delete them on the target drive also.
sudo rsync -rhi --delete --size-only --progress --stats --files-from=/home/plex/src/folders1.out /media/plex/maindrive /media/plex/4tbbackup
sudo rsync -rhi --delete --size-only --progress --stats --files-from=/home/plex/src/folders2.out /media/plex/maindrive /media/plex/other4tbdrive
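To do the dry run first, it should just be a matter of adding rsync's -n flag to the same command, e.g.:

sudo rsync -rhin --delete --size-only --stats --files-from=/home/plex/src/folders1.out /media/plex/maindrive /media/plex/4tbbackup

which lists what would be transferred or deleted without touching anything.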
Git annex can do that and keep track of which drive the files are on.
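In rough strokes, the workflow looks something like this (repo paths and names here are invented, not a full recipe):

cd /srv/data
git init && git annex init "server"
git annex add . && git commit -m "track everything"

# then for each backup drive: clone onto it and pull content until it fills
git clone /srv/data /media/backup1/data
cd /media/backup1/data
git annex init "backup1"
git annex get .

git-annex checks free space before copying, so it should stop rather than overfill a drive, and afterwards "git annex whereis path/to/file" on the server tells you which drive(s) hold each file's content.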
I'm going to say that doesn't exist, and restoring from it would be a nightmare. You could cobble together a shell or python script that does that, though; see the sketch below.
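Something like this, maybe. A rough bash sketch, assuming GNU stat/df and filenames without newlines; /srv/data is a stand-in for the real source:

#!/bin/bash
# Copy files one at a time; whenever the next file won't fit on the
# current drive, prompt for a new mount point before continuing.
set -u
SRC=/srv/data
DEST=""
mapfile -t FILES < <(find "$SRC" -type f)
for f in "${FILES[@]}"; do
    size=$(stat -c%s "$f")
    # prompt until a drive with enough free space for this file is mounted
    while [ -z "$DEST" ] || [ "$(df --output=avail -B1 "$DEST" | tail -n1)" -le "$size" ]; do
        read -rp "Swap drives, then enter the new mount point: " DEST
    done
    rel=${f#"$SRC"/}
    mkdir -p "$DEST/$(dirname "$rel")"
    cp -p "$f" "$DEST/$rel"
done

Restoring is just copying everything back from each drive in turn, but yeah, test it carefully (and on a file bigger than any drive it will prompt forever).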
You're better off just getting a drive bay and plugging all the drives in at once as a single LVM volume.
You could also do the opposite: split the 4 TB drive into separate logical volumes, each the same size as one of the small drives.
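For the drive-bay approach it'd be something along these lines (device names are guesses, adjust to whatever the bay exposes):

sudo pvcreate /dev/sdb /dev/sdc /dev/sdd
sudo vgcreate backupvg /dev/sdb /dev/sdc /dev/sdd
sudo lvcreate -l 100%FREE -n backuplv backupvg
sudo mkfs.ext4 /dev/backupvg/backuplv
sudo mount /dev/backupvg/backuplv /mnt/backup

That gives you one big filesystem spanning the small drives, though losing any one of them can take the whole volume with it, which is the failure mode OP wanted to avoid.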
If you are lucky enough, borgbackup could deduplicate and compress the data enough to fit on a 1 TB drive. It depends on the content, of course, but its deduplication & compression is insanely efficient for certain cases. (I have 3 devices with ~900 GB each, just shy of 3 TB in total, which all gets stored in a ~400 GB borg repo.)
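A minimal run looks something like this (repo path is illustrative):

borg init --encryption=repokey /media/backup1/borgrepo
borg create --stats --compression zstd /media/backup1/borgrepo::server-{now} /srv/data

Whether it actually fits on one drive obviously depends on how compressible and duplicated the data is.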