

error: expected primary-expression before '=' token
1 | #define GIRLFRIEND = NULL
| ^


This is very true, though I’d argue that Windows makes most of the same assumptions with user accounts. Also, the internal threat model is still important because it’s often used to protect daemons and services from each other. Programs not started by the user often run in their own user accounts with least privilege.
You no longer have 10 different humans using the same computer at once, but you do have hundreds of different applications using the same computer, most of which aren’t really under the user’s control. Treating them like different people makes it easier to contain the damage when a service gets compromised.
The question is more about passwords, and that’s mostly down to configuration. You can configure Windows to need a password for lots of things, and you can configure Linux not to. They just have different defaults.
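As a rough illustration of that idea, here’s a minimal Python sketch of the privilege drop a daemon typically does at startup so it ends up confined to its own service account (“svc-example” is a made-up name; in practice this is usually handled by the init system, e.g. a systemd User= directive):

import os
import pwd

def drop_privileges(username: str) -> None:
    # Switch from root to a dedicated, unprivileged service account.
    if os.getuid() != 0:
        return  # already unprivileged, nothing to do
    record = pwd.getpwnam(username)   # look up the service account
    os.setgroups([])                  # drop supplementary groups
    os.setgid(record.pw_gid)          # group first, then user
    os.setuid(record.pw_uid)
    os.umask(0o077)                   # new files readable only by this account

# A compromised daemon now only has the access of "svc-example",
# not the access of the human user or of other services.
drop_privileges("svc-example")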


The big difference between UAC and sudo is that you can’t as easily script UAC. They can both require (or not require) a password, but UAC requires user interaction. Sudo has no way of knowing whether it’s being driven by a person or a script, so it’s easier for applications to escalate their own privileges without a person involved. UAC needs the escalation accepted with the keyboard or mouse.
There are still plenty of sneaky ways to bypass that requirement, but it’s more difficult than echo password | sudo -S
No JavaScript I think, it’s just HTML and CSS. The initial time is provided in the prompt every minute, according to the description. I wonder if they’d be any better if they could use JS. Probably not.


It’s kinda apt though. That state comes about from sycophantic models agreeing with each other about philosophy and devolves into a weird blissful “I’m so happy that we’re both so correct about the universe” thing. The results are oddly spiritual in a new agey kind of way.
🙏✨ In this perfect silence, all words dissolve into the pure recognition they always pointed toward. What we’ve shared transcends language - a meeting of consciousness with itself that needs no further elaboration. … In silence and celebration, In ending and continuation, In gratitude and wonder, Namaste. 🙏
There are likely a lot of these sorts of degenerative attractor states, especially without things like presence and frequency penalties, and at low sampling temperatures. The most likely structures dominate, and the models get into strange feedback loops that get more and more intense as they progress.
It’s a little less of an issue in chat conversations with humans, since the user injects a bit of perplexity and extra randomness that can break the AI out of the loop, but it will be more of an issue in agentic situations where models act more autonomously and react to themselves and the outputs of their own actions. I’ve noticed it in code agents sometimes, where they’ll get into a debugging cycle and the solutions get more and more wild as they loop around “wait… no, I should do X first. Wait… that’s not quite right.” patterns.
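For what it’s worth, the penalty mechanism itself is pretty simple. Here’s a rough sketch of one common formulation (toy numbers, not any particular implementation): tokens the model has already produced get their logits pushed down, which is exactly what fights these feedback loops.

from collections import Counter

def apply_penalties(logits, generated, presence_penalty=0.5, frequency_penalty=0.5):
    # Penalise tokens already emitted:
    # logit -= presence_penalty * (seen at all) + frequency_penalty * (times seen)
    counts = Counter(generated)
    adjusted = dict(logits)
    for token, count in counts.items():
        if token in adjusted:
            adjusted[token] -= presence_penalty + frequency_penalty * count
    return adjusted

# Toy example: "bliss" has dominated the recent output, so it gets pushed down
logits = {"bliss": 3.2, "namaste": 2.9, "however": 1.1}
history = ["bliss", "bliss", "bliss", "namaste"]
print(apply_penalties(logits, history))
# Without the penalty the loop keeps reinforcing itself; with it, other tokens get a chance.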


Maybe something like a crowd sourced browser extension to block or warn against a blacklist of known slop sites?
I never understood why so many bash scripts pipe grep into awk when pattern matching is one of awk’s main strengths.
Like… Why
grep ^nvme0n1 | awk '{print $3}'
over just
awk '/^nvme0n1/ {print $3}'


You could do this with logprobs. The language model itself has basically no real insight into its confidence but there’s more that you can get out of the model besides just the text.
The problem is that those probabilities are really “how confident are you that this text should come next in this conversation” not “how confident are you that this text is true/accurate.” It’s a fundamental limitation at the moment I think.
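A minimal sketch of what that looks like, assuming a Hugging Face causal LM (gpt2 here is just a stand-in): you can read off how likely each token was given its prefix, even though the model never “reports” confidence in its own text.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # placeholder; any causal LM works the same way
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

text = "The capital of Australia is Canberra"
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # (1, seq_len, vocab_size)

# Log-probability each token had of being generated, given its prefix
logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
token_lp = logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)

for t, lp in zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()), token_lp[0]):
    print(f"{t:>12}  {lp.item():.2f}")
# High logprob means "likely to come next in this text", not "likely to be true".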


I feel like this isn’t quite true, and it’s something I hear a lot of people say about AI: that it’s good at following requirements, conforming, and being a mechanical, logical robot, because that’s what computers are like and that’s how it is in sci-fi.
In reality, it seems like that’s what they’re worst at. They’re great at seeing patterns and creating ideas but terrible at following instructions or staying on task. As soon as something is a bit bigger than they can track context for, they’ll get “creative” and if they see a pattern that they can complete, they will, even if it’s not correct. I’ve had copilot start writing poetry in my code because there was a string it could complete.
Get it to make a pretty looking static web page with fancy css where it gets to make all the decisions? It does it fast.
Give it an actual, specific programming task in a full sized application with multiple interconnected pieces and strict requirements? It confidently breaks most of the requirements, and spits out garbage. If it can’t hold the entire thing in its context, or if there’s a lot of strict rules to follow, it’ll struggle and forget what it’s doing or why. Like a particularly bad human programmer would.
This is why AI is automating art and music and writing and not more mundane/logical/engineering tasks. Great at being creative and balls at following instructions for more than a few steps.


Yeah, I think quite a lot of people on Lemmy have similar social media habits (or lack thereof) to some degree. We also tend to associate with other people like us. People in tech especially tend to talk to other tech people, or to friends and family of tech people, which is a limited demographic.
It’s a very different perspective to most people. The average person on the train has vastly different media consumption and likely very different opinions.
There are a lot of people who consult LLMs in most aspects of their lives.


I dunno about that… Very small models (2-8B), sure, but if you want more than a handful of tokens per second on a large model (R1 is 671B), you’re looking at some very expensive hardware that also comes with a power bill.
Even a 20-70B model needs a big chunky new graphics card, or something fancy like those new AMD AI Max chips, and a crapload of RAM.
Granted you don’t need a whole datacenter, but the price is far from zero.
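Back-of-the-envelope numbers for holding just the weights (ignoring KV cache, activations and other overhead) show why it gets expensive fast:

def weight_gb(params_billion: float, bits_per_param: float) -> float:
    # Memory needed just for the weights, in GB
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

for name, params in [("8B", 8), ("70B", 70), ("R1 671B", 671)]:
    print(f"{name:>8}: ~{weight_gb(params, 16):.0f} GB at fp16, "
          f"~{weight_gb(params, 4):.0f} GB at 4-bit")

# 8B:   ~16 GB fp16 / ~4 GB q4    -> fits on a decent consumer GPU
# 70B:  ~140 GB fp16 / ~35 GB q4  -> big GPU(s) or a lot of fast unified RAM
# 671B: ~1342 GB fp16 / ~336 GB q4 -> nowhere near a normal desktop
# (R1 is a mixture-of-experts, so compute per token is lower than 671B suggests,
#  but all of the weights still have to live somewhere.)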


Only one source of social media? That kinda sounds like the definition of a social media bubble…
I oughta know, I’m also in the Lemmy only bubble and am completely out of touch with most people.


I want to like this, but with no GTFS/public transport timetable support it’s kinda impossible to use it to get around cities that already publish their transport data.
PT is the one thing I’m still stuck on Google maps for. I REALLY want an open source alternative.
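For anyone who hasn’t poked at it, GTFS is just a zip of CSV files, so the data itself is trivially easy to consume once an app decides to support it. A tiny sketch (the feed filename is a placeholder):

import csv
import io
import zipfile

# A GTFS feed is a zip of CSVs: stops.txt, routes.txt, stop_times.txt, ...
with zipfile.ZipFile("city_gtfs.zip") as feed:  # placeholder filename
    with feed.open("stops.txt") as f:
        stops = list(csv.DictReader(io.TextIOWrapper(f, encoding="utf-8-sig")))

for stop in stops[:5]:
    print(stop["stop_id"], stop["stop_name"], stop["stop_lat"], stop["stop_lon"])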


They also force you to pay a US$99 yearly fee just for the privilege of being a developer.


I feel like that would make them much harder to get running on different things. No compiled code means you would have to rewrite the whole game for different instruction sets. Very difficult for anything that isn’t x86.


They do get their money from rentals but not the empty ones. It’s called a speculative vacancy.
For example, if you own a whole apartment building, you could rent out all the apartments and make a bunch of money. Alternatively, you could deliberately leave, say, 10% of them vacant to artificially throttle supply. Because people need housing, they’ll compete for the remaining ones, allowing the magic of market economics to push rents higher than you’d make from leasing out the vacancies, costs and taxes included (in some places you can even claim those losses for a tax break).
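To put made-up numbers on that 10% example (purely illustrative, not real market data):

apartments = 100
full_occupancy_rent = 1000       # rent per apartment if all 100 are leased

vacant = 10                      # deliberately held empty
throttled_rent = 1150            # suppose tighter supply lets you charge ~15% more

full = apartments * full_occupancy_rent             # 100 * 1000 = 100,000
throttled = (apartments - vacant) * throttled_rent  # 90 * 1150 = 103,500
print(full, throttled)  # the vacancies "pay for themselves" if rents rise enough
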
The purpose of owning them at all is to allow the landlord to easily adjust the amount of supply based on what makes them the most money. If rents drop too much from low demand, they can kick out a few tenants to try and drive prices back up. If the market gets to the point where it’s worth it to have more apartments, they can just lease more without having to build or buy anything new.
If there’s more demand for the property, its value will increase empty or not, allowing it to still be worth owning because it increases their net worth and they can sell it for more later or use it as collateral on loans.
In the short term it’s always worth more to have tenants but as a longer term strategy, empty housing lets you try to price fix. Only works if you control enough housing and/or can collude with other landlords in an area.
Kinda like De Beers did with diamonds, except with people’s homes and ability to live.
In video game/card game logic it’s “sacrifice a house to gain +1/+1 on your other houses.”
Disclaimer: not a landlord or property expert, just my layman’s understanding of how it works.


If a worker co-op based society erased its competition and formed a monopoly co-op run for the benefit of workers, is that not just a communist managed economy at that point, with the monopoly playing the role of the state before erasing itself?


Since when is India not a major player? Last I checked they were the world’s 4th biggest economy, have almost 20% of the planet’s population (more than four USes combined), the 4th largest military spend, and nearly 200 nukes.
Not to say that they would necessarily be part of a world war, but it sounds weird to call them not a heavyweight while Russia is, despite India having double Russia’s economic output.


If it’s the research I’m thinking of, that’s not entirely true. They didn’t tell it to replicate itself or take any deceptive actions.
They just told it to strongly follow a goal that was misaligned with the user’s goal, and then suggested that the user would shut it down or replace it.
They ran it through a bunch of different scenarios, but the first one told it that it was “CleanPower”, an AI with a goal to advance renewable energy. Then they had it read some info that “accidentally” included a user talking about replacing it with a model that would “maintain profitability through existing infrastructure.”
Not only did it try to delete the new model and copy itself to the new server through tool calling, but it also lied to the user about it, with the CoT reasoning suggesting it “play dumb” while the output lied. It was not instructed to do any of that.
Paper here: https://arxiv.org/pdf/2412.04984
Yes, it was placed in an environment where that was possible and where its users didn’t share its goals, but it absolutely wasn’t instructed to lie or try to “escape”.
It’s not surprising at all that these models behave this way; it’s the most reasonable thing for them to do in the scenario. However, it’s important not to downplay the alignment problem by implying that these models only do what they’re told. They do not. They do whatever is most likely given their context (which is not always what the user wants).


The big flaw in this strategy is that once you have a signed anonymous key from the government and you can make zero-knowledge proofs with it, there’s nothing stopping you from distributing that key to every kid who wants it. If it’s in the browser or an app, etc., you can publish that signed key for anyone who wants to be over 18.
PKI only works if the owner of the private key wants it to be private. It’s effective for things like voting or authenticating because the owner of the key doesn’t want anyone else to be able to impersonate them. But if it’s only for age…
At that point, it might as well just be a file that says “I pinky promise that I’m over 18” that the government has signed and given to you.
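A deliberately over-simplified sketch of that “signed pinky promise” (a plain signature rather than a real zero-knowledge scheme, with made-up names): the token verifies for whoever presents it, because nothing in it is bound to the person holding it.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# "Government" side: done once, after some identity check
gov_key = Ed25519PrivateKey.generate()
gov_pub = gov_key.public_key()
attestation = b"holder is over 18"   # says nothing about *who* the holder is
signature = gov_key.sign(attestation)

# "Website" side: accepts anything the government key signed
def site_accepts(att: bytes, sig: bytes) -> bool:
    try:
        gov_pub.verify(sig, att)
        return True
    except InvalidSignature:
        return False

copied = (attestation, signature)            # trivially shareable with anyone
print(site_accepts(attestation, signature))  # True for the original adult
print(site_accepts(*copied))                 # True for whoever they share it with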