Mint.
I’ll be very honest with you. It’s not fancy, it’s not snazzy computing. It’s simple, designed with a graphical interface in mind, and a good operating system for someone who A) does not know Linux, or B) does not want to fiddle.


Do you by chance live in a high CoL area?
Congratulations. You're a system admin. For real.
I've interviewed candidates for system admin jobs who had less exposure to managing Linux than this story.
Not dissimilar - my three steps.


See, you think that - but Excel finds a way. We have what are lovingly called the "spreadsheets of doom," which accounting uses to manage all forecasts and the bits that involve money flows. Did you know you can hook Excel into Salesforce and pull all the sales records? A person who thinks her monitor is her computer (she has a Dell laptop) somehow found a way…
The cynical side of me? More monies.
The charitable side of me? Pathfinder/5e don’t do what they want out of a system.
Funny how Critical Role is making its own system…


And GDP is one of the worst ways to measure a country's economy.
You don't need to be sorry. I guess I should rephrase my original statement to say Microsoft could buy Unity, and the shareholders might not even notice on the quarterly earnings call.


I'm sorry, but I'd be more terrified of Microsoft. A company worth more than nations (its current market cap would put it around the 10th richest country in the world), and one that routinely tells other companies to pony up - and they do (look up a software audit if you want to see a corporate shakedown).
Full disclosure - my background is in operations (think IT), not AI research, so some of this might be wrong.
What's marketed as AI is something called a large language model. This distinction is important because AI implies intelligence, whereas an LLM is something else. At a high level, LLMs use something called "tokens" to break natural language apart into elements a machine can work with, and then recombine those tokens to "create" something new. When an LLM is producing output, it does not know what it is saying - it only knows which token statistically comes after the token(s) it has already generated.
So to answer your question: an AI can hallucinate because it does not know the answer - it's using advanced math to know that the period goes at the end of the sentence, and not in the middle.
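To make that "which token statistically comes after" idea concrete, here's a toy sketch of my own (nothing like a real transformer - no neural network, just a bigram counter in Python), but the "predict whatever usually comes next" idea is the same:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the text a model is trained on.
corpus = "the cat sat on the mat . the dog sat on the rug ."

# Step 1: "tokenize" - here just a whitespace split; real models use
# subword tokenizers, but the idea of chopping text into units is the same.
tokens = corpus.split()

# Step 2: count which token follows which (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Return the statistically most common next token - no understanding involved."""
    counts = following[token]
    return counts.most_common(1)[0][0] if counts else None

# Step 3: generate by repeatedly asking "what usually comes next?"
word = "the"
output = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the cat" - fluent-looking, not "known"
```

The point being: the program never knows what a cat or a mat is. It only knows which token tends to follow which, which is why the output can look fluent and still be wrong.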