Yesterday, my laptop would not hibernate or shut down. systemd complained about waiting for the Wi-Fi or something.
One kernel update later, and all is well once more.
After a while you’ll realise that it’s just healthy competition, not wars.
I use a bunch of distros, just like I have a lot of different tools in my toolbox. Each serves a slightly different function.
Keep in mind that the motto is “World domination,” not “Internal quarrelling”.
Sure, all the work you do between the moment of the filesystem failure and the last backup is gone. There’s nothing that can be done to mitigate that fact, other than more frequent backups and/or a synchronized (mirror) system.
Backups are just a simple way to keep you from having to explain to your partner that you lost all the pictures and videos you took along the years.
Picture this: you open and edit one of your documents and save it.
The filesystem promptly allocates some blocks and updates the inodes. Maybe the inode table changed, maybe not. Repeat for some other files. Now your “inode backup” has a completely different picture of what is going on on your disk. If you try to recover the disk using it, all you will achieve is further corruption of the filesystem.
“Proper backups” imply that you have multiple backups and a backup strategy. That could mean, for instance, that you would do a full backup, then an incremental/differential backup each week, and keep one backup for each month. A bad cable would cause you trouble, no doubt, but the impact would be lessened by having multiple backup points spread over months.
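For illustration, here’s a minimal sketch of the full-plus-incremental idea using GNU tar’s `--listed-incremental` snapshot files. The paths and names are made up for the demo, and a real strategy would use a proper backup tool, rotation, and off-site copies.

```shell
# Minimal sketch of a full + incremental backup cycle with GNU tar.
# Paths are illustrative only.
set -e
mkdir -p /tmp/demo-data /tmp/demo-backups
echo "first file" > /tmp/demo-data/a.txt

# Full backup: the .snar snapshot file records what was archived.
tar --listed-incremental=/tmp/demo-backups/snapshot.snar \
    -czf /tmp/demo-backups/full.tar.gz -C /tmp demo-data

# Later, after changes, an incremental run only picks up the delta.
echo "second file" > /tmp/demo-data/b.txt
tar --listed-incremental=/tmp/demo-backups/snapshot.snar \
    -czf /tmp/demo-backups/incr-1.tar.gz -C /tmp demo-data

ls /tmp/demo-backups
```

Restoring means extracting the full archive first, then each incremental in order, with the same `--listed-incremental` option.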
Redundancy is not backup. Read that again.
Redundancy is important for system resilience, but backup is crucial for continuity. Every filesystem is subject to bugs and ZFS is not special. Here’s an article from a couple of days ago. If you’re comfortable with no backups just because you have redundancy, more power to you. I wouldn’t be.
I’m really curious as to why go to all this trouble instead of using a proper file level backup and restore solution.
I miss QNX. Awesomest 1.44MB ever.
Windows 3.1, 3.11, 95, 98, ME, Vista…
Stopped evangelising when I realised people hate evangelists telling them what they should do. Started leading by example instead. Curious people approach you if they want to learn.
Won’t be going back to proprietary OSs.
This is the sensible thing to do. Try a bunch of distros using either USB or as Virtual Machines.
It’ll save you a lot of heartache when you eventually kill the bootloader, the display driver or both (and you will, it is part of the learning process).
Got an E14, easiest laptop to open ever (at least compared to the HPs and Toshibas I had the pleasure to own).
In addition to the basic hardware care (checking for dust, reapplying thermal compound if necessary) you can run powertop to check what is keeping your CPU awake when it shouldn’t and take steps to purge unneeded services or resource-heavy applications.
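As a rough sketch of what that looks like in practice (these commands need root, and the unit name is hypothetical):

```shell
# One-off report of wakeups, power consumers and suggested tunables:
sudo powertop --html=powertop-report.html

# Apply all of powertop's suggested tunables (test on battery first):
sudo powertop --auto-tune

# Then hunt down services you don't actually need:
systemctl list-units --type=service --state=running
sudo systemctl disable --now some-unneeded.service   # hypothetical unit name
```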
I’m going back to FreeDOS. I can still edit autoexec.bat and config.sys.
No effort at all. You define them once at install time and that’s it.
For added flexibility you can use LVM volumes instead of partitions, they make resizing operations a thing of joy.
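For example, growing a logical volume while it’s mounted is a one-liner (volume group and LV names here are hypothetical, and this needs root):

```shell
# Check how much free space the volume group has:
sudo pvs
sudo vgs

# Grow /home by 10G; -r resizes the filesystem too (via fsadm):
sudo lvextend -L +10G -r /dev/vg0/home

# Note: shrinking ext4 requires unmounting first, and XFS cannot be shrunk.
```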
Btrfs also has subvolumes baked in, but I haven’t looked into them.
It depends: if your Docker installation uses /var, it will surely help to keep it separated.
For my home systems, I have: UEFI (ESP), /boot, /, /home, swap.
For my work systems, we additionally have separate /opt, /var, /tmp and /usr.
/usr will only grow when you add more software to your system. /var and /tmp are where applications and services store temporary files, log files and caches, so they can vary wildly depending on what is running. /opt is for third-party stuff, so it depends if you use it or not.
It’s fine for most uses.
For server or enterprise cases you want to separate /usr, /var and /tmp to prevent a rogue process from filling the / volume and crashing the machine.
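An illustrative /etc/fstab for that kind of layout might look like this (device names, filesystems and mount options are hypothetical, not a recommendation):

```
/dev/vg0/root  /      ext4  defaults               0 1
/dev/vg0/usr   /usr   ext4  defaults               0 2
/dev/vg0/var   /var   ext4  defaults,nodev         0 2
/dev/vg0/tmp   /tmp   ext4  defaults,nodev,nosuid  0 2
/dev/vg0/home  /home  ext4  defaults,nodev         0 2
```

With /var and /tmp on their own volumes, a runaway log or cache can only fill its own filesystem, not the root one.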
I too had many funny encounters with nvidia drivers back then, to the point of having cold sweat at the thought of having to upgrade them. It usually resulted in broken X in ways that I was not able to fix, resulting in a reinstall. Those traumatic experiences moved me away from nvidia permanently.
I’ve nuked the root filesystem more than once, but there was this one time that I edited the /etc/sudoers file and botched it… turns out that sudo does not like that very much, and if you don’t have root access you can’t sudo to fix the mistake. That day I learned to only touch /etc/sudoers with visudo, which checks the file syntax before saving.
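The relevant commands, for the record (the drop-in file name is just an example):

```shell
# Edit /etc/sudoers with locking and a syntax check before saving:
sudo visudo

# Same, but for a drop-in file under /etc/sudoers.d/:
sudo visudo -f /etc/sudoers.d/myrule   # hypothetical file name

# Check-only mode: validate the existing sudoers files without editing:
sudo visudo -c
```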
Epic. Well done.
You mean Neko. Used to have it installed a long time ago. I don’t know if it still works in this day of compositors and Wayland.
I also remember having a bunch of penguins running around my screen like little lemmings. Xpenguins I think it was called.
You can also get Xcowsay to pop up occasionally on your desktop to offer silly advice, just pipe it from fortune and add it to crontab.
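Something like this in your crontab (via `crontab -e`) does the trick; the schedule and DISPLAY value are just examples, and DISPLAY must match your actual X session:

```shell
# Every 30 minutes, have a cow deliver a fortune on the desktop:
*/30 * * * * DISPLAY=:0 /bin/sh -c 'fortune | xcowsay'
```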