• 0 Posts
  • 16 Comments
Joined 2 months ago
Cake day: September 27th, 2025

  • It really depends on the advice, and my relationship with the advice giver. I generally give advice at least a thought, even if it’s unwanted, unless I have a reason to mistrust the advisor. As for how I respond to the person: if it’s a friend I’ll usually have followup questions; for people I know less well it’s usually a cordial variant of “hmm, interesting perspective”, and then I have to think on it for a while before I respond, if I respond at all.




  • Dunno. Where there are some eyeballs, there’s some market for influence. Obviously someone is bothering, but as for how much money is being thrown at the fediverse at this moment, I would guess somewhere between “peanuts” and “small potatoes”. On the other hand, I imagine a bot trained here could be deployed elsewhere with little effort, similar to how a reddit bot can be deployed to lemmy with a little bit of rework, so maybe it’s seen as a low-risk training ground. In any case, I don’t see this becoming any less of a problem as the fediverse grows.


  • Who knows what scale they’re operating at. The problem with this kind of bot is that, theoretically, you only really notice the ones doing a bad job. This might be someone who wrote an LLM bot for a lark, a small-time social media botter testing a variant for fedi deployment, or an established bot trainer with dozens or hundreds of accounts who’s field-testing a more aggressive new model. I doubt you could get away with hundreds of bots like this on lemmy; I think the actual user pool is small enough that we’d notice hundreds of bots posting at this volume. But again, I don’t really know how I’d detect it if it were less “obviously smells like LLM slop” than this one. In bot detection, as in so many fields, false negatives are a real bitch to account for.


  • If I were to hazard a guess, it’s for training. Make a bot, make a bunch of posts and comments, get organic interactions, see what gets you flagged as a bot account, incorporate that data into your next version, rinse, repeat. The goal is probably to make a bot account that can blend in and interact without being flagged, presumably while also nudging conversations in a particular direction. Something I noticed on reddit is that the first comment can steer the entire thread, as long as it hews close enough to the general group consensus, and that kind of steering is really useful for the kinds of groups that like to influence public thinking.

    I don’t think galacticwaffle is necessarily trying to steer here; I think they’re just trying to make a bot that flies under the radar. But I imagine that kind of steering is what someone paying for this kind of bot would use it for.






  • I think if we’re ever going to find an answer to “Why does the universe exist?”, one of the steps along the way will be providing a concrete answer to the simulation hypothesis. Obviously if the answer is “yes, it’s a simulation and we can demonstrate as much”, then the next question becomes “OK, so who or what is running the simulation, and why does that exist?”, which, great, now we know a little bit more about the multiverse and can keep on learning new stuff about it.

    Alternatively, if the answer is “no, this universe and the rules that govern it are the foundational elements of reality” then… well, why this? Why did the big bang happen? Why does it keep expanding like that? Maybe we will find explanations for all of that that preclude a higher-level simulation, and if we do, great, now we know a little bit more about the universe and can keep on learning new stuff about it.


  • Yes, kind of, but I don’t think that’s necessarily a point against it. “Why are we here? / Why is the universe here?” is one of the big interesting questions that still doesn’t have a good answer, and thinking about possible answers to the big questions is one of the ways we push the envelope of what we do know. This particular paper seems like a not-that-interesting result built on our current, known-to-be-incomplete understanding of quantum gravity, and the claim that it somehow “disproves” the simulation hypothesis is some rank unscientific nonsense that IMO really shouldn’t have been accepted by a scientific journal. But I think the question it poorly attempts to answer is an interesting one.