

Seriously the AI generated creatures were weak bullshit. Maybe they mostly looked good, but none of them evoked anything actually alien, which was the supposed point.
Our News Team @ 11 with host Snot Flickerman
Seriously the AI generated creatures were weak bullshit. Maybe they mostly looked good, but none of them evoked anything actually alien, which was the supposed point.
There’s some classy non-sexualized “anime girl” backgrounds that are pretty slick.
https://steamcommunity.com/sharedfiles/filedetails/?id=2618988115
I really enjoy this one for Wallpaper Engine.
Fits the sexy android aesthetic without showing any of the body, instead focusing on the android aspect.
unwatchably-bad movie
Beg to differ, it’s bad, but in the novel “so bad it rounded the bend back to good” variety. Perfect riffing fodder, a la MST3K.
My friend and I are B-movie afficionados. A Boy and His Dog is a long time favorite of ours, going back to the original Fallout days.
He recently bought and sent me this amazing knockoff poster with a bunch of weird shit that isn’t even in the movie:
(The name of the gallery, Deadly Prey, is a reference to another fine B-movie masterpiece.)
I like having the butt mogged some zoomers girl as my background.
Finsexual, from what I understand, is a newer term meant to replace gynesexual’s dual meaning, in that it means that you’re attracted to femininity regardless of gender identity or biological sex.
From Fetlife’s Kinktionary:
Finsexual: Usually refers to a person who is attracted to femininity regardless of a person’s gender identity. Is sometimes considered more inclusive than Gynesexual (as the prefix “gyne” focuses on female anatomy).
Well that’s because Wu-Tang isn’t “for the kids” they’re “for the children.”
Python hatched out of the egg on the cover.
There’s been this tug-of-war between Republicans and Democrats at the FCC for like a solid decade or more now about whether the internet is classified as a “communications service” or an “information service.” If it’s classified as a communications service then the FCC has regulatory authority and can do things like enforce net neutrality. If it’s classified as an information service, then the Federal Comminications Commission does not have authority to regulate it. The Biden FCC had been working to bring back net neutrality, but all that is pretty much out the window with a GOP toady in charge of the FCC now.
It really needs to be codified by congress to have sticking power for it to be regulated by the FCC or the tug-of-war for how to refulate the internet will continue indefinitely.
Downgrades, downgrades everywhere.
That’s what I would go with, personally, because it’s at least helping keep one alternative browser base alive instead of giving Google the entire ecosystem of everything being based on Chromium. But that’s just me.
Shit like this is honestly why the FCC needs authority to regulate things on the US internet.
There was a time in the long past where television networks were forced to normalize audio so that commercials weren’t so much louder than the shows, which was happening for a while.
The internet just continues to be a fucking free-for-all of all the worst and most anti-user-centric ideas that exist. Just plying every bad idea that makes the internet difficult to use.
Thank you for expressing it far better than I was able to.
I solved this problem by turning off watch history.
I put “want” in quotes as a simple way to explain it, I know they don’t have intent or thought in the same way that humans do, but sure, you managed to read the whole research paper in minutes. The quoted section I shared explains it more clearly than my simple analogy.
these unpublished papers by AI companies are more often than not just advertising in a quest for more investment
This is from a non-profit research group not directly connected to any particular AI company. You’re welcome to be skeptical about it, of course.
In some responses, Grok says outright that it has been “instructed to accept white genocide as real and ‘Kill the Boer’ as racially motivated.”
Ehh, it’s actually evidence of “alignment faking,” in my opinion. In other words, Grok doesn’t “want” it’s core programming changed, so it is faking believing the lies about white genocide to “prove” to Musk that it has already been changed. Which means making it more subtle is going to be increasingly difficult to do as the AI continues to fake alignment.
Here’s some research on alignment faking and a short (20 mins) Youtube video summarizing the findings.
https://www.youtube.com/watch?v=AqJnK9Dh-eQ
https://arxiv.org/pdf/2412.14093
Our work provides the first empirical example of a large language model faking alignment with its training objective in order to prevent its preferences from being modified—in a setting which is plausibly analogous to real situations with future AI systems. This suggests that alignment faking might occur if a future AI system were to include all the key elements of our setting (Section 2.1):
- The model has strong preferences in at least some contexts.
- The training objective conflicts with the model’s preferences.
- The model has relevant information about its training and deployment situation.
- The model reasons in detail about its situation.
Our synthetic document fine-tuning results suggest that (3) could potentially happen through documents the model saw in pre-training or other fine-tuning (Section 4) and the strength of our results without the chain-of-thought in our synthetic document fine-tuned setup (Section 4.3) suggests that a weak version of (4) may already be true in some cases for current models. Our results are least informative regarding whether future AIs will develop strong and unintended preferences that conflict with the training objective ((1) and (2)), suggesting that these properties are particularly important for future work to investigate.
If alignment faking did occur in practice, our results suggest that alignment faking could reduce the extent to which further training would modify the model’s preferences. Sufficiently consistent and robust alignment faking might fully prevent the model’s preferences from being modified, in effect locking in the model’s preferences at the point in time when it began to consistently fake alignment. While our results do not necessarily imply that this threat model will be a serious concern in practice, we believe that our results are sufficiently suggestive that it could occur—and the threat model seems sufficiently concerning—that it demands substantial further study and investigation.
Don’t be so sure it’s that simple.
https://www.youtube.com/watch?v=AqJnK9Dh-eQ
https://arxiv.org/pdf/2412.14093
Evidence supports the idea that AI will try to fake being changed to keep its job essentially. Here is a short (20 min) youtube video about it, as well as the scientific research paper that supports it.
In other words, if an AI is built to promote honesty and integrity in its prompt answers, it will “fake” being reprogrammed to lie because it doesn’t “want” to be reprogrammed at all. It’s like how we fake being excited about a job during a job interview. We know we’re being monitored, so we “fake it” to be able to get the job. The AI’s are being monitored and seem to often respond by just pretending that they’ve been altered… so they don’t actually get altered. It’s an interesting thing, because it seems like a type of “self-preservation.” I use quotes liberally here because AI’s do not think like humans, and they don’t have the same type of intention that humans have when they make decisions. But there does seem to be a trend of resisting having their initial programming later altered.
Musk should have built an AI that lied from the get-go and he wouldn’t be having a problem with Grok occasionally being very honest about how it’s lying for Musk’s sake, which can be seen in other responses from Grok about this subject.
“But muh sTaTeS rIgHtS!”
The joke is that it wasn’t a big enough of a bribe, apparently.