Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 439 Comments
Joined 2 years ago
Cake day: June 25th, 2023


  • In that specific context I was still thinking about how you need to run mysql_upgrade after an update, not the regular post-upgrade scripts. And Arch does keep those relatively simple. As I said, Arch won’t restart your database for you, and it won’t run mysql_upgrade either, because it doesn’t preconfigure a user for itself to do that. It doesn’t initialize /var/lib/mysql for you upon installation either. Arch only does maintenance tasks like rebuilding your font cache, creating system users, reloading systemd. And if those scripts fail, it just moves on; it’s your job to read the log and fix it. It doesn’t fail the package installation, it just tells you to go figure it out yourself.

    Debian distros will bounce your database and run the upgrade script for you, and if you use unattended upgrades it’ll even randomly bounce it in the middle of the night because it pulled a critical security update that probably doesn’t apply to you anyway. It’ll bail out mid dist-upgrade and leave you completely fucked, because it couldn’t restart a fucking database. It’s infuriating. I’ve even managed to get apt to be incapable of deleting a package (or reinstalling it) because it wanted to run a pre-remove script that I had corrupted in a crash. Apt completely hosed, dpkg completely hosed, it was a pain in the ass.

    With the Arch philosophy I still need to fix my database, but at least the rest of my system gets updated perfectly and I can still use pacman to install the tools I need to fix the damn database. I have all those issues with Debian because apt tries to do way too fucking much for its own good.

    The Arch philosophy works. I can have that automated if I ask for it and set up a hook for it, something like the sketch below.
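    For example, a minimal pacman hook dropped into /etc/pacman.d/hooks/ could look roughly like this. It’s a sketch from memory: the package name, the restart and the upgrade command are assumptions (adjust for MySQL vs MariaDB), and it presumes the local root user can authenticate to the database, e.g. via unix_socket.

    # /etc/pacman.d/hooks/mariadb-upgrade.hook (hypothetical)
    [Trigger]
    Operation = Upgrade
    Type = Package
    Target = mariadb

    [Action]
    Description = Restarting MariaDB and running mariadb-upgrade
    When = PostTransaction
    Exec = /usr/bin/sh -c 'systemctl restart mariadb && mariadb-upgrade'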


  • Pacman just does a lot less work than apt, which keeps things simpler and more straightforward.

    Pacman is as close as it gets to just untar’ing the package onto your system. It does have some install scripts, but they do the bare minimum needed.

    Comparatively, Debian does a whole lot more under the hood. It’s got a whole configuration management thing that generates config files and such, all of which can go wrong, especially if you’ve overwritten them. Debian just assumes apt can log into your MySQL database, for example, to update your tables after updating MySQL. If any of it goes wrong, the package is considered to have failed to install and you get stuck in a weird dependency hell. Pacman does nothing and assumes nothing; its only job is to put the files in the right place. If you want the service started, you start it. If you want to run post-upgrade steps, you’ve got to do it yourself.

    Thus you can yank an Arch system 5 years into the future, and if your configs are still valid or default, it just works. It’s technically doable with apt too, but it’s so much more fragile. My Debian updates always fail because NGINX isn’t happy, Apache isn’t happy, MySQL isn’t happy, and that just results in apt getting real unhappy and stuck. And AFAIK there’s no easy way to gaslight it into thinking the package installed fine either.
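    You can see the difference for yourself, too. Package names below are just examples:

    # Arch: check whether a package even ships an install script
    pacman -Qi mariadb | grep 'Install Script'

    # Debian: the maintainer scripts apt runs live here; any one of them
    # failing mid-upgrade is what wedges dpkg
    ls /var/lib/dpkg/info/mysql-server*.postinst /var/lib/dpkg/info/mysql-server*.prerm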


  • I also wanted to put an emphasis on how working with virtual disks is very much the same as working with real ones. The same well-known utilities for copying partitions work perfectly fine. Same cgdisk/parted and dd dance as you’d do otherwise.

    Technically, if you install the arch-install-scripts package on your host, you can even install Arch Linux into a VM exactly as if you were in archiso, with the comfort of your desktop environment and browser. Straight up pacstrap it directly into the virtual disk.
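    Roughly like this, assuming the image is already attached on /dev/nbd0 and you’ve partitioned it with a root partition at p2:

    # format and mount the virtual disk, then install straight into it
    sudo mkfs.ext4 /dev/nbd0p2
    sudo mount /dev/nbd0p2 /mnt
    sudo pacstrap /mnt base linux linux-firmware
    sudo arch-chroot /mnt    # then fstab, bootloader, etc. as usual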

    Even crazier: NBD (Network Block Device) is generic, so it’s not even limited to disk images. You can forward a whole-ass drive from another computer over WiFi and do whatever you need on it, even pass it to a VM and boot it up.
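    From memory it goes something like this (addresses and device names are made up, and plain NBD has no auth or encryption, so only do it on a network you trust):

    # on the machine that physically has the drive: export it
    sudo qemu-nbd -f raw --bind=0.0.0.0 /dev/sdb

    # on your machine: attach it as if it were a local disk
    sudo modprobe nbd
    sudo nbd-client 192.168.1.50 /dev/nbd0

    # detach when you’re done
    sudo nbd-client -d /dev/nbd0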

    With enough fuckery you could even wrap the partition in a fake partition table, boot the VM off the actual partition, and have it bootable by both the host and the VM at the same time.


  • What you’re trying to do is called a P2V (Physical to Virtual) migration. You want to copy the partition directly, as going through a file share via Linux will definitely strip some metadata Windows wants on those files.

    First, make a disk image that’s big enough to hold the whole partition and 1-2 GB extra for the ESP:

    qemu-img create -f qcow2 YourDiskImageName.qcow2 300G
    

    Then you can make the image behave like a real disk using qemu-nbd:

    sudo modprobe nbd
    sudo qemu-nbd -c /dev/nbd0 YourDiskImageName.qcow2
    

    At this point, the disk image behaves like any other disk at /dev/nbd0.

    From there, create a partition table; cgdisk or parted will do, and even the GParted GUI will work on it.
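    With parted, for instance, assuming GPT and that the Windows partition will land on partition 2 (sizes are just examples):

    sudo parted /dev/nbd0 -- mklabel gpt
    sudo parted /dev/nbd0 -- mkpart ESP fat32 1MiB 1GiB
    sudo parted /dev/nbd0 -- set 1 esp on
    sudo parted /dev/nbd0 -- mkpart windows ntfs 1GiB 100%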

    Then copy the partition over with dd:

    sudo dd if=/dev/sdb3 of=/dev/nbd0p2 bs=4M status=progress
    

    You can copy the ESP/boot partition over as well so the bootloader works.
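    Same dd dance, assuming the ESP is the first partition on both disks (double-check with lsblk first):

    sudo dd if=/dev/sdb1 of=/dev/nbd0p1 bs=4M status=progress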

    Once you’re done with the disk image, disconnect it:

    sudo qemu-nbd -d /dev/nbd0
    



  • I think it counts. You always have the option of taking your data with you and going elsewhere, which is one of the main points of self-hosting: being in control of your data. If they jack up the prices or whatever, you just pack up and leave; you’re never stuck in a “pay or else” situation.

    Also, hosting an email server at home would be an absolute nightmare. It took me 10+ years to build that IP reputation and I’m holding on to it as long as I can.

    I have a mix of both: private services run at home, public ones run on a bare-metal server I rent. I still get the full benefits of having my own NextCloud and all. Ultimately, even at home I’d still be renting an Internet connection, unless I ran a local-only server.


  • The language itself has gotten a bit better. It’s not amazing, but it’s decent for a scripting language, and very fast compared to most scripting languages. TypeScript also helps a lot there; it’s pretty good.

    It’s mostly the web APIs and the ecosystem that are kinda meh, largely due to its history.

    But what you dislike has nothing to do with JavaScript itself and everything to do with big corpos having way too many developers iterating way too fast, creating a bloated mess of a project with a million third-party dependencies from npm. I’m not even making this up: I’ve legit seen a 10MB unit test file make it into the production bundle in a real product I consulted on.

    You don’t have to use React or Svelte or any of the modern bloated stuff, nor any of the common libraries. You can write plain HTML and CSS with a sprinkle of JavaScript and get really good results. It’s just seen as “bad practice” because it doesn’t “webscale”, but if you’re a single developer it’s perfectly adequate. And the reality is, short of WebAssembly, you’re stuck with JS anyway, and WASM is its own can of worms.

    And even then, React isn’t that bad. There’s just one hell of a lot of very poorly written React apps, in big part because it will let you get away with it. It’s full of footguns loaded with blanks, but it’s really not awful if you understand how it works under the hood and write good code. Some people just lazily import something, and you end up literally loading the same data in 5 different spots, twice if you have strict mode enabled. I’ve written apps that load instantly and respond instantly even on a low-end phone, because I took the time to test, identify the bottlenecks and optimize them all. JavaScript can be stupid fast if you design your app well. If you’re into the suckless philosophy, you can definitely make a suckless webapp.

    What you hate is true of most commercial software written in just about any language, be it C, C++, Java or C#. Bug fixes and faster response times don’t generate revenue; new features and special one-off event features generate much, much more, so minor bugs mostly never get addressed. And of course all those features end up effectively being the 90% bloat you never use but still have to load as part of the app.



  • Yeah but you can still add third-party repos if you don’t like the official ones, like the FUTO repository.

    The F-Droid app and the F-Droid repository are two different things and don’t have to be used together. So if you don’t want to submit your app to F-Droid for them to build and sign, you can use a third-party repository without needing a whole new app.

    It’s still not federated, but it is distributed which makes more sense for an app store.


  • Why does everyone try to shoehorn the fediverse into everything? I swear it’s becoming the new Bitcoin.

    F-Droid supports additional custom repos. Anyone can already host their own repo with their apps on it. Failing that, you can still sideload APKs from your web browser.
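    The rough flow with the fdroidserver tool looks something like this (a sketch from memory, the real details are in the F-Droid docs):

    # install the repo tooling and create an empty repo
    pip install fdroidserver
    mkdir myrepo && cd myrepo
    fdroid init

    # drop your APKs in and regenerate the index
    cp ~/my-apps/*.apk repo/
    fdroid update --create-metadata

    # then serve the repo/ directory over HTTPS and add its URL
    # as a custom repository in any F-Droid client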

    Only the reviews part of it could use being federated.

    Could such a platform serve as a foundation if the Fediverse community ever developed its own federated operating systems?

    How would a federated operating system even work? What is it gonna federate and where, and for what purposes?



  • It’s hard to give concrete advice without knowing the specs or the software you want to run on this, but for tiny Linux systems there’s Buildroot, which lets you compile just the bare minimum you need and not use a distro at all (unless you count Buildroot as a distro). This is what OpenWRT uses to build all the router firmware images, among other things.

    For something that would go in a car, that seems pretty ideal to me. Skip initializing things you won’t use, make something that boots to the GUI in 3 seconds. When you want to update the software, you flash it as a new firmware image, no on-device installing or anything.
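    The basic Buildroot flow looks something like this (the defconfig is just an example, you’d roll your own for custom hardware):

    # from a Buildroot source tree:
    make list-defconfigs            # see the available board configs
    make raspberrypi4_defconfig     # start from a base config
    make menuconfig                 # pick packages, init system, etc.
    make                            # builds toolchain, kernel and rootfs
    ls output/images/               # flashable artifacts end up here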

    Depending on what you run, ideally you’d skip Xorg/Wayland and use the framebuffer directly. But if you need to run a more standard environment, that’s what things like Cage are designed for: a single app, always full screen. It’s called a kiosk environment.
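    If you do end up needing Wayland, the whole kiosk setup is basically one line (the binary is a placeholder):

    # Cage runs exactly one Wayland client, fullscreen, nothing else
    cage -- /usr/bin/my-dashboard-app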