

I think that’s what Friendica is supposed to be, decentralized Facebook.
The graph suggests it started declining well before AI became mainstream. I’m sure AI accelerated the decline, but it had already long peaked.
Maybe, just maybe, most of the big questions have been asked and answered already.
These days when I look something up, it was answered like 8 years ago and the answer is still valid. And they aggressively mark questions as dupes, so people aren’t opening too many repeat questions.
If you don’t want to be monogamous, don’t; just be polyamorous and date other polyamorous people. Cheating is a really bad excuse when there are plenty of relationship arrangements where this isn’t a problem. There’s no need to deceive unwilling people and cheat on them when you can find partners who think the same as you, whom you don’t need to cheat on in the first place. You’re still dealing with other people with feelings on the other end.
I’d have to really go out of my way to cheat on my wife when the only rule is to have safe sex (or be safe in general).
LDAC works just fine on Linux, though it may come from a different package or repo since it’s somewhat proprietary. It just worked out of the box for me on Arch.
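If it doesn’t kick in on its own, it’s usually just the codec library missing. A quick sanity check, assuming PipeWire and Arch-style package names:

pacman -Qi libldac
pactl list cards | grep -i codec

While the headphones are connected, the second one should show something like api.bluez5.codec = "ldac" if LDAC got negotiated.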
I think P2P has stood the test of time. Torrents scale extremely well; any large-scale video would have so many peers the server would barely have to participate at all. These days most torrents easily saturate my gigabit connection with just a handful of peers. Torrents tend to spread like wildfire.
The main issue would be storage space, but I think a lot of YouTubers would be perfectly okay with spending $5-10 a month to pay for the storage costs with all the benefits you get from not being tied to YouTube’s ToS and policies. It’s a drop in the bucket compared to the earnings from sponsor spots.
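For scale of effort, publishing is basically one command to build the torrent, plus any client left running to seed (the tracker URL here is just a placeholder):

transmission-create -t udp://tracker.example.org:1337/announce -o video.torrent video.mkv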
You can return multiple A/AAAA records for the root; the TLD delegates the whole thing to your nameservers, which are free to return whatever you want. Registrars actually do let you set records in the TLD’s zone: they’re called glue records, and they’re typically used to solve the nameserver chicken-and-egg problem where you want to be your own nameserver. Mine’s set up that way:
~ $ drill NS max-p.me
;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 32318
;; flags: qr rd ra ; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;; max-p.me. IN NS
;; ANSWER SECTION:
max-p.me. 3600 IN NS ns2.max-p.me.
max-p.me. 3600 IN NS ns1.max-p.me.
The .me registry’s nameservers will give you the IPs for those two, so you can then ask my server for where max-p.me really is.
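You can watch that bootstrapping happen yourself; drill’s -T flag traces the whole delegation chain from the root servers down, and the .me servers hand out the glue IPs along the way:

~ $ drill -T NS max-p.me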
The bigger issue is there’s usually a bunch of stuff under your root domain: MX records, TXT records, potentially subdomains. That’s a huge problem if you need to CNAME the root to a hosting provider, because a CNAME redirects the entire name to the target, MX and TXT records included. Cloudflare sort of works around that with server-side flattening of CNAMEs, but that’s not standard. But if you have a www subdomain, then it’s a complete non-issue. And really, do you want to delegate your MX records to WP Engine?
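As a zone-file sketch of the conflict (names below are placeholders): a CNAME at the apex can’t legally coexist with any other records at the same name, so this zone is invalid:

example.com.      IN CNAME  sites.host-provider.example.   ; apex CNAME: every lookup now follows it
example.com.      IN MX     10 mail.example.com.           ; conflicts with the CNAME above
example.com.      IN TXT    "v=spf1 mx -all"               ; same conflict
www.example.com.  IN CNAME  sites.host-provider.example.   ; on www it’s a complete non-issue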
The main reason people went without the www is the good old “it looks cooler and shorter”, while ignoring all the technical challenges it brings, and that’s probably why browsers now hide the www: so that website designers don’t have to do this atrocity.
I feel about the same. I don’t particularly care about it, but it’s nice to know how many people I helped. It was intentionally removed, I believe, so it doesn’t incentivise karma farming: if karma exists it will be used, and there will be reasons to farm it.
Nothing a quick Postgres query can’t fix though :p
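Something like this, as a sketch; I’m assuming Lemmy’s schema still keeps votes in a comment_like table with a score column, and the username is a placeholder, so adjust to taste:

SELECT sum(cl.score) AS comment_karma
FROM comment c
JOIN comment_like cl ON cl.comment_id = c.id
WHERE c.creator_id = (SELECT id FROM person WHERE name = 'SomeUser');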
Another reason to use ~/.local is you can do things like

./configure --prefix=$HOME/.local
make -j$(nproc)
make install

And then you get your .local/bin, .local/share, .local/include, .local/lib and such, just like /usr but scoped to your user, and it should mostly just work as well.
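The one bit of setup it needs is having ~/.local/bin on your PATH, though many distros’ default profiles already add it if it exists:

export PATH="$HOME/.local/bin:$PATH"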
Yes but OP took the string representation of the IPv4 and base64’d it, I was addressing that part specifically.
That base64 is so long, and doesn’t need to be. An IP address is 4 bytes, so it could be represented as just 8 hex digits (base64 of the raw bytes also comes out to 8 characters, due to padding).
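A quick comparison, using 192.168.1.1 as the example:

printf '192.168.1.1' | base64        # MTkyLjE2OC4xLjE= — 16 chars, base64 of the string
printf '\xc0\xa8\x01\x01' | base64   # wKgBAQ== — 8 chars, base64 of the raw 4 bytes
printf '%02x' 192 168 1 1; echo      # c0a80101 — the same 4 bytes as 8 hex digits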
That was 7 years ago, and he seems to have distanced himself from that past. He’s kind of retired from the whole gaming channel thing and does more family life things.
People can grow a lot in 7 years, I sure did.
I really like the positive vibe and “here’s what you can do with Linux, for funsies” instead of the usual “here’s all the problems I had and I switched back”.
No “it’s perfect”, no “it runs all my games”, just “I tried it and had a blast setting it all up”. He’s legit enjoying it and sharing those feelings is powerful.
It’s shaping up to be pretty good, especially for something still in alpha.
The main thing it needs to beat for me is KWin’s excellent Wayland support. Everything just works.
The per-screen workspaces are appealing though.
For what it’s worth, I experience none of that. My laptop is absolutely rock solid with KDE; it’s like a MacBook, I pull it out of my backpack and it’s ready to go before I’m even done opening the screen.
My desktop is currently at just over 5 days of continuous uptime (no sleep). I’ve crashed more often because of ZFS than KDE.
Both machines run Arch Linux. I also have a friend on Bazzite who doesn’t have issues with KDE either, and it runs great in my VM.
Those all sound like possible graphics driver issues.
In that specific context I was still thinking about how you need to run mysql_upgrade after an update, not the regular post-upgrade scripts. And Arch does keep those relatively simple. As I said, Arch won’t restart your database for you, and it won’t run mysql_upgrade either, because it doesn’t preconfigure a user for itself to do that. It also doesn’t initialize /var/lib/mysql for you upon installation. Arch only does maintenance tasks like rebuilding your font cache, creating system users, reloading systemd. And if those scripts fail, it just moves on; it’s your job to read the log and fix it. It doesn’t fail the package installation, it just tells you to go figure it out yourself.
Debian distros will bounce your database and run the upgrade script for you, and if you use unattended upgrades it’ll even randomly bounce it in the middle of the night because it pulled a critical security update that probably doesn’t apply to you anyway. It’ll bail out mid dist-upgrade and leave you completely fucked, because it couldn’t restart a fucking database. It’s infuriating. I’ve even managed to get apt to be incapable of deleting a package (or reinstalling it) because it wanted to run a pre-remove script that I had corrupted in a crash. Apt completely hosed, dpkg completely hosed, it was a pain in the ass.
With the Arch philosophy I still need to fix my database, but at least the rest of my system gets updated perfectly and I can still use pacman to install the tools I need to fix the damn database. I have all those issues with Debian because apt tries to do way too fucking much for its own good.
The Arch philosophy works. I can have all of that automated, if I ask for it and set up a hook for it.
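A sketch of what such a hook could look like, assuming MariaDB; drop it in /etc/pacman.d/hooks/, with names and paths adjusted to taste:

[Trigger]
Operation = Upgrade
Type = Package
Target = mariadb

[Action]
Description = Restarting MariaDB and running mariadb-upgrade...
When = PostTransaction
Exec = /usr/bin/sh -c 'systemctl restart mariadb && mariadb-upgrade'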
Pacman just does a lot less work than apt, which keeps things simpler and more straightforward.
Pacman is as close as it gets to just untar’ing the package to your system. It does have some install scripts but they do the bare minimum needed.
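You can see that for yourself: a pacman package is literally a tar archive with a bit of metadata at the root (the filename here is a placeholder):

~ $ tar -tf some-package-1.0-1-x86_64.pkg.tar.zst
.BUILDINFO
.MTREE
.PKGINFO
usr/bin/some-binary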
Comparatively, Debian does a whole lot more under the hood. It’s got a whole configuration management system that generates config files and such, all of which can go wrong, especially if you’ve overwritten them. Debian just assumes apt can log into your MySQL database, for example, to update your tables after updating MySQL. If any of it goes wrong, the package is considered to have failed to install and you get stuck in a weird dependency hell. Pacman does nothing and assumes nothing; its only job is to put the files in the right place. If you want it to start, you start it. If you want to run post-upgrade tasks, you’ve got to do it yourself.
Thus you can yank an Arch system 5 years into the future and if your configs are still valid or default, it just works. It’s technically doable with apt too but just so much more fragile. My Debian updates always fail because NGINX isn’t happy, Apache isn’t happy, MySQL isn’t happy, and that just results in apt getting real unhappy and stuck. And AFAIK there’s no easy way to gaslight it into thinking the package installed fine either.
Yeah, that’s a pretty good point. To me as a technical user that seems solid, but I can see how it makes sense for the average user.
Isn’t owning the domain proof enough already?
Nobody else could possibly use max-p.me as their handle, and proving control of the domain is plenty even for security-sensitive things like Let’s Encrypt.
Anyone you’d care to mark verified already brought their own domain.
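If this is about Bluesky-style handles, the entire verification mechanism is already just one TXT record pointing the domain at your DID (the value below is a placeholder):

_atproto.max-p.me.  IN  TXT  "did=did:plc:your-did-goes-here"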
It was made back when Facebook had that old style UI, in 2010. And then interest in Facebook’s format kinda died, and so did the interest in the project.