Focusing on the things I need to actually do.
I swear, even if I was forced to do something at gunpoint, I’d manage to get distracted anyway.


Trying to represent oneself in court is a pretty stupid thing to do, generally.
I am not a lawyer, but I’m pretty sure you need to be able to defend yourself within the legal system while following all of its rules. You need to know the laws, their quirks, loopholes, etc. to construct your defense properly. Even if the case is complete nonsense, if you lack the knowledge to defend yourself, or the ability to use the knowledge you have coherently, you’ll lose.
A neat paper filed in accordance with all the rules, a paper that quotes actual laws and precedents, will, generally, beat an oral argument backed by common sense. And that’s in general! Let alone when you’re going against Disney and their nigh-infinite army of lawyers.


Even with the character in the public domain, I doubt Disney would be particularly happy with anyone using it.
They can send cease-and-desist letters left and right, claiming that “the use of the mouse is fine, but elements X, Y, and Z were introduced in a later work of ours that’s still protected”, even if it’s a plain lie.
Trying to take Disney to court is suicide.
They have enough money to hire half the lawyers in the world and make them come up with a lawsuit even if there’s no basis for one. They can stretch the lawsuit out for years, and yet the fees would be but a fraction of a fraction of a percent of their yearly spending. Almost any defendant, meanwhile, would be financially ruined by it, even if they end up winning.


Almost everything that’s not Gnome can be considered lightweight, to be honest.


Humans for you, what can I say.
We’re all guilty of similar behavior. You, me, your neighbor… Everyone.
It’s often hard to believe something when your own experience is vastly different from what someone describes.


I’ve noticed YouTube nuking my comments if they contain words related to money, if they contain too many brand names, if they contain technical terms, etc.
Comments that don’t have much in terms of meaning or information go through just fine.
So… just write like you’re 5 years old and split long messages into several comments.


1k USD. Should be enough to leave my shithole of a country, if I’m lucky.


Surely this will lead the EU and other progressive countries to open up their borders for Russians fleeing the oppression, right? Right?


Corporations have been trying to control more and more of what users do and how they do it for longer than AI has been a “threat”. I wouldn’t say AI changes anything. At most, maybe, it might accelerate things a little. But if I had to guess, the corpos are already moving as fast as they can with locking everything down for the benefit of no one but them.


“AI” models are, essentially, solvers for mathematical systems that we, humans, cannot describe and create solvers for ourselves.
For example, a calculator for pure numbers is a pretty simple device, all the logic of which can be designed by a human directly. A language, though? Or an image classifier? That is not possible to create by hand.
With “AI”, instead of designing all the logic manually, we create a system which can end up in a finite, yet still near-infinite, number of states, each of which defines behavior different from the others. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for some incredibly complex system.
If we were to try to make a regular calculator that way, and all we were giving the model was “2+2=4”, it would memorize the equation without understanding it. That’s called “overfitting”, and it’s something the people building AI are trying their best to prevent from happening. It happens when the training data contains too many repeats of the same thing.
However, if there is no repetition in the training set, the model is forced to actually learn the patterns in the data, instead of the data itself.
Essentially: if you’re training a model on a single copyrighted work, you’re making a copy of that work via overfitting. If you’re using terabytes of diverse data, overfitting is minimized. Instead, the resulting model gains an actual understanding of the system you’re training it on.
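Here’s a minimal sketch of that memorize-versus-learn difference in pure Python (a toy illustration of my own, not anything from a real training framework): a two-weight linear model fitted by gradient descent. Given only “2+2=4”, any weights with w1 + w2 = 2 reproduce that one fact, so the model just memorizes an answer; given many varied sums, the only thing that fits everything is actual addition.

```python
import random

# Toy model y = w1*a + w2*b, fitted by stochastic gradient descent on
# squared error. Purely illustrative, not how real "AI" training looks.
def train(pairs, steps=20000, lr=0.005):
    w1, w2 = random.random(), random.random()
    for _ in range(steps):
        a, b, y = random.choice(pairs)
        err = (w1 * a + w2 * b) - y
        w1 -= lr * err * a  # gradient of err**2 w.r.t. w1 (up to a factor of 2)
        w2 -= lr * err * b
    return round(w1, 3), round(w2, 3)

one_fact = [(2, 2, 4)]  # the entire training set is "2+2=4", repeated forever
many_facts = [(a, b, a + b) for a in range(10) for b in range(10)]

print("one fact:  ", train(one_fact))    # any w1 + w2 = 2 fits: memorization
print("many facts:", train(many_facts))  # lands on (~1, ~1): actual addition
```

Ask the “one fact” model for 1 + 3 and it will usually give a wrong answer: it reproduces its single training example and nothing else.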


So… You say nothing will change.


Do you expect to find a company that sells a calendar-only subscription? “Calendar - 49c/month”?
I’ve been looking a lot at all kinds of services, and most start their pricing at around 5 USD/month, regardless of how many features they actually provide.
I’d say your best bet is NextCloud. You can rent one, self-host, or use a free instance (there are a couple around).
Personally, I’m self-hosting stuff on a VPS. For a whopping 5 USD/month I’m getting things I’d be paying 50 for, if not more, if they were offered as separate products by your average service-providing companies.


You do realize you can use a service that provides a bunch of different things, but only use the calendar feature and ignore everything else, right?
You can also use a local calendar app. Just don’t connect it to anything.
I use the default Gnome Calendar (because Linux), but Windows, MacOS, iOS, and Android have calendar apps as well. Obviously.


OpenSUSE + KDE is a really solid choice, I’d say.
The most important Linux advice I have is this: Linux isn’t Windows. Don’t expect things to work the same.
Don’t try too hard to re-configure things to match the way they are on Windows. If there isn’t an easy way to get a certain behavior, there’s probably a reason for it.


Not once did I claim that LLMs are sapient, sentient, or even have any kind of personality. I didn’t even use the overused term “AI”.
LLMs, for example, are something like… a calculator. But for text.
A calculator for pure numbers is a pretty simple device, all the logic of which can be designed by a human directly.
When we want to create a solver for systems that aren’t as easily defined, we have to resort to other methods. E.g. “machine learning”.
Basically, instead of designing all the logic entirely by hand, we create a system which can end up in a finite, yet still near-infinite, number of states, each of which defines behavior different from the others. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for something a human mind can’t even break up into building blocks, due to the sheer complexity of the given system (such as a natural language).
And like a calculator that can derive that 2 + 3 is 5, despite the fact that the number 5 is never mentioned in the input, or that this particular formula was not part of the suite of tests used to verify that the calculator works correctly, a machine learning model can figure out that “apple slices + batter = apple pie”, assuming it has been tuned (aka trained) right.
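A minimal sketch of that “derives 2 + 3 = 5 without ever seeing it” idea, in pure Python (my own toy illustration, nothing like how an actual LLM is built): a two-weight linear “calculator” trained on sums, with the pair (2, 3) deliberately held out of the training data.

```python
import random

# Toy "calculator" y = w1*a + w2*b, trained by gradient descent on sums,
# with the pair (2, 3) excluded from training entirely. Illustrative only.
def train(pairs, steps=20000, lr=0.005):
    w1, w2 = random.random(), random.random()
    for _ in range(steps):
        a, b, y = random.choice(pairs)
        err = (w1 * a + w2 * b) - y
        w1 -= lr * err * a
        w2 -= lr * err * b
    return w1, w2

pairs = [(a, b, a + b) for a in range(10) for b in range(10)
         if (a, b) != (2, 3)]

w1, w2 = train(pairs)
print("2 + 3 ->", round(w1 * 2 + w2 * 3, 2))  # ~5.0, despite never seeing it
```

It gets the held-out sum right because it learned the pattern behind the data, not a lookup table of the individual facts.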


Learning is, essentially, “algorithmic copy-paste”. The vast majority of things you know, you’ve learned from other people or other people’s works. What makes you more than a copy-pasting machine is the ability to extrapolate from that acquired knowledge to create new knowledge.
And currently existing models can often do the same! Sometimes they make pretty stupid mistakes, but they often do, in fact, manage to end up with brand-new information derived from old stuff.
I’ve tortured various LLMs with short stories, questions, and riddles, which I wrote specifically for the task and asked the models to explain or rewrite. Surprisingly, they often get things either mostly or absolutely right, despite the fact that it’s novel data they’ve never seen before. So, there’s definitely some actual learning going on. Or, at least, something incredibly close to it, to the point that it’s nigh impossible to differentiate from actual learning.


It’s illegal if you copy-paste someone’s work verbatim. It’s not illegal to, for example, summarize someone’s work and write a short version of it.
As long as overfitting doesn’t happen and the machine learning model actually learns general patterns, instead of memorizing training data, it should be perfectly capable of generating data that’s not copied verbatim from humans. Whom, exactly, is a model plagiarizing if it generates a summarized version of some work you give it, particularly if that work is novel and was created or published after the model was trained?


To be honest, most things in Nobara can be installed on/done to regular Fedora. And, unlike Nobara, Fedora has more than one maintainer: good for the bus factor.