Keep an eye on lowendbox.com’s hosting offers. There’s some junk to wade through, but it sounds like exactly what you’re after.
It sure will handle a remote VPS; it’s just not as automatic to set up as it is with PVE.
I put this off for a long time, but I finally did it this weekend.
Basically, you install the `proxmox-backup-client` utility and then run it via cron or a systemd timer to do the backup however often you want.
You’re responsible for getting the VPS to communicate with your backup server (like pretty much any self-hosted service), so some sort of VPN between them would be good. I used NetBird for that part and I have a policy that allows access from the client to PBS only on TCP port 8007.
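For what it’s worth, mine boils down to a tiny script plus a cron entry. The repository string, VPN address, and credential handling here are made-up placeholders for my actual values:

```sh
#!/bin/sh
# Back up the whole root filesystem to PBS as a .pxar archive.
# "backup@pbs" (user@realm), the VPN IP, and "vps-store" are placeholders.
export PBS_REPOSITORY='backup@pbs@100.64.0.2:vps-store'
export PBS_PASSWORD="$(cat /root/.pbs-pass)"   # keep this file root-only
proxmox-backup-client backup root.pxar:/
```

Then a root crontab line like `0 3 * * * /usr/local/sbin/pbs-backup.sh` runs it nightly at 3am.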
I’ve been quite happy with Proxmox Backup Server. I’ve had it running for years and it’s been pretty solid for all my VMs/containers. There’s also a bare-metal client, which I’m adding to a couple of cloud VPS machines this weekend. We’ll see how that goes.
Also, since it’s just Debian under the hood, I use the PBS host as a replication target for my ZFS datasets via sanoid/syncoid.
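In case it’s useful, the syncoid side is basically a one-liner (dataset and hostname here are placeholders for my actual setup):

```sh
# Push a recursive replication of tank/data to the PBS host over SSH.
# --no-sync-snap rides on sanoid's scheduled snapshots instead of
# creating a temporary one for each run.
syncoid --recursive --no-sync-snap tank/data root@pbs.lan:backup/data
```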
I just had to do this. Don’t skip the release notes; they’re really good at highlighting potential pitfalls. Just scroll back through and look for the heading “Breaking Changes.”
In my case there were a few, but they were only for API calls I’m not using, so I just did the update in one go and it worked out great. (Of course, I made sure to take a backup first.)
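In case it helps, the update itself was just the standard Debian flow (Proxmox recommends full-upgrade/dist-upgrade over plain upgrade so held-back packages get resolved properly):

```sh
apt update
apt full-upgrade    # or: apt dist-upgrade; avoid plain "apt upgrade" on a Proxmox box
```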
Oh! Also, try posting this here: https://practicalzfs.com/. That’s a Discourse forum really focused on ZFS. Jim Salter runs it and Allan Jude often contributes advice. There are some folks there who know ZFS inside and out.
Checksum errors can often mean a failing component. It could be the other drive or maybe a SATA cable. Is the original pool mounting correctly? If so, you should be able to do a simple `rsync` to move it to the new pool.
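Something along these lines, assuming both pools are mounted at the usual places (paths here are placeholders):

```sh
# -a preserves perms/times/ownership, -H hardlinks, -A ACLs, -X xattrs.
# The trailing slash on the source copies its contents, not the dir itself.
rsync -aHAX --info=progress2 /mnt/oldpool/ /mnt/newpool/
```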
Take this with a grain of salt; the more I re-read, the more I realize I’m making assumptions about your setup that may or may not be true. First, I’m assuming you’re doing ACLs for Samba shares (and I know that system better on FreeBSD than Linux). I’m also assuming, based on your description, that you want everyone to have read access but not write access.
I think you could do an officewide group with read-only permissions on all of the shares and then set the unix group to the department. So, for your HR team you’d do `chgrp -R hr /path/to/parent/shares/hr` and `setfacl -m d:g::rwx /path/to/parent/shares/hr` (the empty second field targets the owning group), then add the officewide group’s read-only perms: `setfacl -m d:g:officewide:rx /path/to/parent/shares/hr`. Rinse and repeat for each share.
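Put together, a sketch for one share might look like this (paths and group names are from your example; note that default ACLs only live on directories, hence the find):

```sh
SHARE=/path/to/parent/shares/hr

# Owning group = the department; officewide gets read-only.
# Capital X only adds execute on dirs / already-executable files.
chgrp -R hr "$SHARE"
setfacl -R -m g::rwX,g:officewide:rX "$SHARE"

# Default entries (directories only) so new files inherit the same perms.
find "$SHARE" -type d -exec setfacl -m d:g::rwx,d:g:officewide:rx {} +
```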
Not sure if this is what you’re after, but maybe it’ll help lead in a good direction.
Not sure how wide a variety of sizes there are, but I’d search for “threaded inserts.” You drill the holes and screw the inserts in and that gives you an interface for machine bolts. I’ve only seen them in bigger sizes, but I wouldn’t be surprised if there are some that would fit standoffs.
You could likely use `dd` or Clonezilla to create a duplicate of your boot drive and boot your laptop right from that, but that’s not quite what you’re after.
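A minimal `dd` sketch, with placeholder device names (triple-check those before running it; dd will cheerfully overwrite the wrong disk):

```sh
# Clone the internal drive (sdX) onto the external one (sdY), block for block.
dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync
```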
There are some distros lately that use a declarative config file to set the whole system up, which I think is much closer to what you have in mind. The big ones that come up a lot are NixOS and Fedora Silverblue. Maybe one of those would be to your liking.
That’s awesome, I’ll definitely be interested to see how it all works out.
Yeah, I started working on it once a couple years ago and getting it spun up was a chore. Life got busy and I never finished.
That imapbox looks pretty interesting. Thanks for tracking that one down.
So I think the way I would want to do this is with something like mailpiler (https://www.mailpiler.org/). It’s been on my long list of things to dive into for a while.
It’s “managed service provider,” which translates more or less to “a company that handles IT for other companies.”
I’d just give it time. Let the account sit unused and set any messages to be forwarded to your new account. If you don’t notice anything in the next year or so, you probably won’t miss anything that might still be linked.
That model’s got an HTML5 console available, so I don’t have to mess with Java. The one thing I haven’t gotten it to do is a remote power cycle, so I make a point of setting up Wake-on-LAN for that.
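The Wake-on-LAN part ends up being a one-liner from any machine on the same network segment (the MAC address is a placeholder, and WoL has to be enabled in the target’s firmware and NIC settings):

```sh
# Sends the magic packet as a broadcast on the local subnet.
wakeonlan 00:11:22:33:44:55
```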
The way I’ve ended up going is to just use a standard keyboard and monitor with a KVM over IP switch. In the US it’s not hard to find relatively inexpensive ones on the used market, but they do require a module for each computer, which can increase the costs. I’ve had good luck finding the Avocent MPU2016 switches. Worth a look on eBay anyway.
My experience has been mostly positive. I hit a situation a couple of times where a particular app hanging would prevent other Flatpaks from launching. That took a while to figure out, but otherwise it’s pretty good. In general, things work the way they’re supposed to.
My only experience with homebrew is on macOS and I’ve switched to MacPorts there. Homebrew did some weird permissions things I didn’t care for (chowned all of /usr/local to $USER, if I’m remembering right). It worked fine on a single user system, but seemed like a bad philosophy to me. This was years ago and I don’t know how it behaves on Linux.
I also prefer Firefox, but when I need a Chromium alternative for testing, I opt for the flatpak (or the snap) version personally.
I’ve got one running in a Proxmox cluster. Getting it set up was a bit particular (due to the T2 chip, if I remember correctly), but it’s been working flawlessly. I use the iGPU’s Quick Sync feature for my Jellyfin container.
If you were going to buy something new, I think there are more cost-effective boxes of about the same size and spec, but if you’ve got it already, you should definitely start playing with it.
That’s a bummer of a price difference for electricity. I think using the R320 for storage and adding some Lenovo SFF units makes a lot of sense. I have one of the Lenovos in my hodgepodge virtualization cluster and it has been rock solid (as has my R320 providing storage).