• 4 Posts
  • 56 Comments
Joined 11 months ago
Cake day: June 7th, 2024

  • If you wanted to run Unix, your main choices were workstations (Sun, Silicon Graphics, Apollo, IBM RS/6000) or servers (DEC, IBM). They all ran different flavors of BSD or System V Unix and weren’t compatible with each other. Third-party software packages had to be ported and compiled for each one.

    On x86 machines, you mainly had commercial SCO Unix, Xenix, and Novell’s UnixWare. Their main advantage was that they ran on somewhat cheaper hardware (< $10K instead of $30-50K), but they only worked on very specifically configured hardware.

    Then along came Minix, which showed that a clean, non-AT&T version of Unix was doable. It was 16-bit, though, and mainly ended up as a learning tool. But it really goosed the idea of an open-source OS not beholden to System V. AT&T had sued over BSD, which scared off a lot of startup adoption and limited Unix to those with deep pockets. Once the suit was settled, largely in Berkeley’s favor, things opened up.

    Shortly after that, Linux came out. It ran on 32-bit 386s, was a clean-room build, and was fully open source, so AT&T couldn’t lay claim to it. The FSF was also working on its own open-source version of Unix, GNU Hurd, but Linux caught fire and that was that.

    The thing about running on PCs was that there were so many variations in hardware (disk controllers, display cards, sound cards, networking boards, even serial interfaces).

    Windows was trying to corral all this crazy variety into a uniform driver interface, but you still needed a custom driver, delivered on a floppy, that you had to install after mounting the board. And if the driver didn’t match your DOS or Windows version, tough luck.

    Along came Linux, which eventually grew a way to support pluggable device drivers. I remember having to rebuild the kernel from scratch with every little change. Eventually, a lot of settings moved into config files instead of #defines (which required a rebuild). And once loadable kernel modules arrived, you didn’t even have to reboot to update a driver.

    The number of people writing and posting device drivers just exploded, so you could put together a decent machine from cheaper, commodity components. Some enlightened hardware vendors started releasing both Windows and Linux drivers (I had friends who made a good living writing those Linux drivers).

    Later, with the Apache web server and databases like MySQL and Postgres, Linux started getting adopted in data centers. But on the desktop, it was mostly for people comfortable in a terminal. X was ported, but it wasn’t until Red Hat came around that I remember doing much with UIs. And those looked pretty janky compared to what you saw on NeXTSTEP or SGI.

    Eventually, people got Linux working on brand-name hardware like Dells and HPs, so you didn’t have to learn how to assemble a PC from scratch. But Microsoft had tied up these vendors: if you bought their hardware, you also had to pay for a copy of Windows, even if you didn’t want to run it. It took a government antitrust case against Microsoft before hardware makers were allowed to offer systems with Linux preloaded and without the Windows tax. That’s when things really took off.

    It’s been amazing watching things grow, and software like LibreOffice, Wayland, and Snap helps move things into the mainstream. If it weren’t for Linux virtualization, we wouldn’t have cloud computing. And now, with the Steam Deck, a new generation of people is learning about Linux.

    PS, this is all from memory. If I got any of it wrong, hopefully somebody will correct it.

  • Was working on a simulator and needed random interaction data. Plain statistical randomness didn’t capture likely scenarios (bell curves and all that). Switched to LLM synthetic data generation. Seemed better, but wait… something seemed off 🤔. Checked it for clustering and entropy vs. human data. JFC. Waaaaaay off.
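
    For the curious, here’s a minimal sketch of the kind of entropy check I mean (Python, with toy data; the function and variable names are made up for illustration):

    # Compare Shannon entropy of two event streams. 'human' and 'synthetic'
    # are hypothetical lists of discretized interaction values
    # (e.g., binned inter-click intervals).
    from collections import Counter
    from math import log2

    def shannon_entropy(samples):
        """Entropy in bits of the empirical distribution over samples."""
        counts = Counter(samples)
        total = len(samples)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    human = [0, 2, 1, 0, 5, 1, 0, 3, 1, 0, 7, 2]      # messy, heavy-tailed
    synthetic = [2, 3, 2, 3, 2, 2, 3, 2, 3, 2, 3, 2]  # suspiciously regular

    print(f"human: {shannon_entropy(human):.2f} bits, "
          f"synthetic: {shannon_entropy(synthetic):.2f} bits")
    # A big gap is the red flag: the generator collapsed onto a few modes.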

    Lesson: synthetic data for training is a Bad Idea. There are no shortcuts. Humans are lovely and messy.



  • Big fan of Geerling, precisely because he goes down these obscure rabbit holes. Found out about Meshtastic through him, and now BPS.

    There are lots of applications for a super-accurate time source that doesn’t require antennas and a view of multiple satellites.

    Synchronizing time is tricky. WWVB is too coarse in resolution. NTP requires access to the internet, with all the inherent lags and delays. GPS was the only accurate source, but the super-high-resolution time signal is classified, and you’re at the mercy of your view of the sky. Also, signal jamming, thanks to what’s going on in Europe and Ukraine.
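
    To make the NTP lags-and-delays point concrete, here’s a minimal SNTP sketch (stdlib Python; pool.ntp.org is just an example server, and error handling is omitted). The best a client can do is estimate its clock offset and bound the uncertainty by the round-trip delay:

    # Query an NTP server and compute offset/delay from the four timestamps.
    import socket, struct, time

    NTP_EPOCH_DELTA = 2208988800  # seconds between 1900 and the Unix epoch

    packet = b'\x1b' + 47 * b'\0'  # LI=0, version=3, mode=3 (client)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)

    t1 = time.time()                      # client send time
    sock.sendto(packet, ('pool.ntp.org', 123))
    data, _ = sock.recvfrom(48)
    t4 = time.time()                      # client receive time

    fields = struct.unpack('!12I', data)  # 48-byte packet = 12 big-endian words
    t2 = fields[8] + fields[9] / 2**32 - NTP_EPOCH_DELTA    # server receive
    t3 = fields[10] + fields[11] / 2**32 - NTP_EPOCH_DELTA  # server transmit

    offset = ((t2 - t1) + (t3 - t4)) / 2  # estimated clock offset (seconds)
    delay = (t4 - t1) - (t3 - t2)         # round-trip network delay (seconds)
    print(f"offset ~ {offset * 1000:.1f} ms, round trip {delay * 1000:.1f} ms")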

    BPS could be a niche experiment, or a Big Deal.


  • A caterer at a wedding showed me this. If using gold, red, or purple potatoes, wash, dab dry, then cut into 3/4" cubes with the skin on. If using russets, peel, then cut into 1/2" cubes.

    Preheat the oven to 350°F.

    Toss potatoes in a bowl with a LOT of olive oil, then add salt, pepper, and dried mint. Stir till coated. Pour into a shallow metal baking pan. Make sure it’s only a single cube deep.

    Bake for 15 minutes, then flip all the cubes with a spatula. Bake another 15 minutes. After that, raise the heat a little, then flip every 5 minutes until the outside reaches your preferred level of crispiness. The larger the cubes, the longer they take. Too small and they end up dried out, inside and out.

    You want to end up with crispy outside and fluffy inside. So good.