The major difference I see is that current AIs are only narrow AI. They have very few sensors and are optimized for very few tasks.
Broad AI or human intelligence involves tons of sensors/senses which may not directly be involved in a given task, but still allow you to judge its success independently. We also need to perform many different tasks, some of which may be similar to a new task we need to tackle.
And humans spend several decades running around with those senses in different situations, performing different tasks, constantly evaluating their own success.
For example, writing a poem. ChatGPT et al can do that. But they can’t listen to someone reading their poem, to judge how the rhythm of the words activates their reward system for successful pattern predictions, like it does for humans.
They also don’t have complex associations with certain words. When we speak of a dreary sky, we associate coldness from sensing it with our skin, and we associate a certain melancholy, from our brain not producing the right hormones to keep us fully awake.
A narrow AI doesn’t have a multitude of sensors + training data for it, so it cannot have such impressions.
I agree with all of that, except that I wouldn’t call it “boilerplate” if it’s clarifying or emphasizing a point for some target group.
Might be that I’m just very intolerant of boilerplate as a reader. I’m part of the younger generations with short attention spans, and tons of authors artificially pad their online articles, because more time spent reading useless words means more time potentially spent looking at ads, which means more money for them.
So, I think, what I’m really arguing is: Fuck the whole monetisation of attention.
I mean, I just looked at one example: https://www.perplexity.ai/?s=u&uuid=aed79362-b763-40a2-917a-995bf2952fd7
While it does provide sources, it partially did not actually take its information from them (the second source contradicts the first) or lost a lot of context along the way, like the first source only talking about black ants, not all ants.
That’s not the point I’m trying to make here. I just didn’t want to dismiss your suggestion without even looking at it.
Because yeah, my understanding is that we’re not in “early stages”. We’re in the late stages of what these Large Language Models are capable of. And we’ll need significantly more advances in AI before they truly start to understand what they’re writing.
Because these LLMs are just chaining words together that seem to occur together in the wild. Some fine-tuning of datasets and training techniques may still be possible, but the underlying problem won’t go away.
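To make that “chaining words together” point concrete, here’s a toy sketch of mine (real LLMs are neural networks over subword tokens, so this is a caricature, but the training signal, i.e. which words tend to follow which, is the same idea):

```typescript
// Toy illustration only: a bigram model that picks the next word based on
// which words followed it in some training text.
const corpus = "the sky is dreary the sky is blue the sea is blue".split(" ");

// Count, for every word, which words follow it and how often.
const next = new Map<string, Map<string, number>>();
for (let i = 0; i < corpus.length - 1; i++) {
  const counts = next.get(corpus[i]) ?? new Map<string, number>();
  counts.set(corpus[i + 1], (counts.get(corpus[i + 1]) ?? 0) + 1);
  next.set(corpus[i], counts);
}

// Chain words together by sampling proportionally to those counts.
function sampleNext(word: string): string | undefined {
  const counts = next.get(word);
  if (!counts) return undefined;
  const total = [...counts.values()].reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (const [w, c] of counts) {
    r -= c;
    if (r <= 0) return w;
  }
  return undefined;
}

let word: string | undefined = "the";
const output = [word];
for (let i = 0; i < 8 && word; i++) {
  word = sampleNext(word);
  if (word) output.push(word);
}
console.log(output.join(" ")); // e.g. "the sky is blue the sea is blue"
```

A model like this can produce fluent-looking sequences without any notion of what the words mean, which is exactly the underlying problem I mean.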
You need to actually understand what these sources say to compare them and determine that they are conflicting and why. It’s also not enough to pick one of these sources and summarize it. As a human, it’s an essential part of research to read multiple sources to get a proper feel for the ‘correct’ answer and why there is not just one correct answer.
Will letting a large language model handle the boilerplate allow writers to focus their attention on the really creative parts?
I especially don’t see why there should be boilerplate in your text. If you’re writing a word which literally does not change the meaning or perception of your text, why are you writing that word? If it does change the meaning or perception, you should be the one choosing that word.
The reason it’s not in the JS standard library is that JS is developed by committee. It takes significant effort to get anything in there. And leftpad wasn’t important enough, plus the NPM package existed as a substitute, so no one bothered writing up a proposal and pushing it through the committee.
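For reference, the entire package boiled down to roughly this (a from-memory sketch, not the exact published code):

```typescript
// Roughly what the infamous left-pad package did: prepend a fill character
// until the string reaches the desired length.
function leftPad(str: string, len: number, ch: string = " "): string {
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

console.log(leftPad("42", 5, "0")); // "00042"
```

Which is also why nobody felt much urgency: it’s a few lines you can write yourself faster than any proposal.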
Oh yeah, I have no problem with using a metaphor, whether it’s from a religious book or not. It just weirds me out how it is told as if it’s a historical record with no doubt of it having happened that way.
For an atheist, the stories in religious books are more like fables – you don’t believe that they happened that way, but you can still draw a lesson from them. And when talking of fables, you don’t recount them as facts; rather, you point out every few sentences that you’re simply recounting what’s being told in the book.
You can do that in a neutral way, too, where you just say “the Bible reads…” and then people can choose to believe in it or not. And this text rarely does that, while also throwing in some opposite tropes. Had the author not stated that they’re an atheist, I would have assumed they’re basically a religious fundamentalist.
Not really that simple in reality. URL paths might not be referring to directories+files at all, e.g. when they’re part of a REST API.
Similarly, when a server realises you’ve ended on a directory name without the trailing slash, it can choose to redirect you (which is slow) or it can just respond with the contents of the respective .../index.html directly.
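To sketch those two options (hypothetical Node code, purely for illustration; real servers like nginx do this internally):

```typescript
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";

// Illustrative toggle between the two behaviours described above.
const REDIRECT = true;

createServer(async (req, res) => {
  const path = req.url ?? "/";
  if (looksLikeDirectory(path) && !path.endsWith("/")) {
    if (REDIRECT) {
      // Option 1: redirect, which costs the client an extra round trip.
      res.writeHead(301, { Location: path + "/" });
      res.end();
    } else {
      // Option 2: skip the redirect and answer with the directory's
      // index.html directly.
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(await readFile("webroot" + path + "/index.html"));
    }
    return;
  }
  res.end("normal file handling elided");
}).listen(8080);

// Crude stand-in: a real server would check the filesystem instead.
function looksLikeDirectory(path: string): boolean {
  return !path.includes(".");
}
```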
Oh wow, I had seen some people talking about telemetry in Go and thought this was about their proxying bullshit again, and even thinking that, I told someone it’s basically unheard of for programming languages to behave this badly.
And now you’re telling me, Google is actually behaving significantly worse still? 🙃
I don’t really get the point of it either. If it’s a video I want to watch, I would like to watch it from the start. That would require me to be there at the start of the livestream, like an appointment, which I don’t think is worth it, just to live-chat with people.
Like, I’m sure there’s some people who don’t mind missing the start or enjoy the live-chatting a lot or just happen to want to watch a video at the time of the premiere anyways. For those, the concept works out, but otherwise it doesn’t feel like it will ever really attract the masses.
Problem is, you need a simple/robust mechanism to physically mix the two streams of water.
The concept of a digital shower thermostat does exist, where you set a temperature in °C and then a thermometer in the mixed stream triggers an electric motor to move the mixing valve.
But as far as I can tell, no company with a reputation to lose offers that, because you have far more parts that can fail.
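The logic itself would be trivial, something like this (my guess at the gist; made-up names and numbers, not any specific product’s firmware):

```typescript
// Rough sketch of a digital shower thermostat's feedback loop: compare the
// measured mix temperature against the target and nudge the motorized
// mixing valve accordingly. All names and values are made up.
const TARGET_C = 38;     // user-selected temperature
const DEAD_BAND_C = 0.5; // tolerance, so the motor isn't constantly moving

function controlStep(measuredC: number): "more hot" | "more cold" | "hold" {
  if (measuredC < TARGET_C - DEAD_BAND_C) return "more hot";
  if (measuredC > TARGET_C + DEAD_BAND_C) return "more cold";
  return "hold";
}
```

The problem is everything around it: the thermometer, motor and electronics are all extra parts sitting in a wet environment.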
You could do a transmission, where you turn the handle across 120° and the mixing valve only moves by e.g. 30°.
That would make it easier to hit your preferred temperature, but you would lose the ability to make it fully hot or fully cold. And the manufacturer doesn’t know how hot/cold your water streams are, so there would need to be a way of adjusting which 30° window you get.
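In numbers, with the example values from above (the window offset being the user adjustment I mean):

```typescript
// Hypothetical geared handle: 120° of handle travel mapped onto a 30° window
// of valve travel, i.e. a 4:1 reduction. offsetDeg is where the window
// starts; the user would have to be able to adjust it, since the
// manufacturer can't know the supply temperatures.
const HANDLE_RANGE_DEG = 120;
const VALVE_WINDOW_DEG = 30;

function valveAngle(handleDeg: number, offsetDeg: number): number {
  return offsetDeg + (handleDeg / HANDLE_RANGE_DEG) * VALVE_WINDOW_DEG;
}

console.log(valveAngle(60, 45)); // handle at mid-travel → valve at 60°
```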
I guess, instead of a handle, one could have a turning knob with a transmission, so you’d need e.g. three revolutions to go from cold to hot. Then it wouldn’t be possible to tell visually what it’s set to right now, though.
So yeah, hopefully unsurprisingly, manufacturers aren’t just being dumb. There are physical limitations that make this non-trivial.
I have a similar one and installed Rockbox on it. Now it can not only play music, but also Doom. 🙃
Well, what Google wants to introduce is a completely arbitrary limit on the number of filtering rules, so increasing that constant in the code should be no trouble at all.
Having said that, ad blocking effectiveness is lower on Chrome-based browsers already today: https://github.com/gorhill/uBlock/wiki/uBlock-Origin-works-best-on-Firefox
Ironing out those flaws, or offering functionality similar to uBO in their custom ad blocker, does take some effort.
Jeez, just imagine. Your boss tells you to fabricate a video, then publishes it as if it represents reality, and a fellow engineer of all people may have believed it and gotten themselves killed.
There are definitely plenty of steps between the fabrication of the video and the death of a person where, rationally, you wouldn’t need to feel guilty.
But I would still have thoughts running through my head about whether the video could have been shot differently, or whether I should have added another disclaimer before showing it to my boss.
I still don’t get why this whole concept of the superposition exists. Like, yeah, if you don’t measure something, you don’t know what it’s like. This isn’t new.
The only new aspect at the quantum level is that you cannot measure without a (non-negligible) impact on the thing that you’re measuring, because our usual method of measuring involves blasting it with quanta (photons).
That means some of our usual science concepts don’t work anymore, like measuring something, then changing it in some way and then measuring again.
But just telling people precisely what I just said feels so much simpler than raving on about what we should call the state of “we haven’t looked at it yet”.
I’m actually talking about the time before that, because as you say, GNU provided the solution of utilizing copyright to fight copyright.
In the 60s and 70s, software was largely either included with a specific piece of hardware (i.e. had virtually no value without it), or it was developed in academia. This came with a culture of people just sharing software without any regards for copyright and whatnot.
That might have eventually shifted on its own, but it was also specifically Microsoft that pushed for a less open culture, see for example: https://en.wikipedia.org/wiki/An_Open_Letter_to_Hobbyists
And then of course, some time later, you had the origin story of Stallman, where his printer didn’t work, he fixed the software and helpfully sent the patch to the printer manufacturer, who then threatened to sue him for violating their copyright.
So, Stallman and his movement really wanted the old hacker culture back where copyright didn’t limit their freedom. And then out of necessity, they utilized copyright to fight itself.
I didn’t want to call it good or bad (this is how things have to go, given the situation), but rather just point out that there is a certain irony here.
Both in that open-source folks are now calling for stronger enforcement of copyright when their movement was born out of a fight against copyright, and in that Microsoft can now be sued thanks to the aggressive copyright laws they lobbied into place.
A purpose for data collection can’t become invalid, but a purpose does not absolve you from asking for consent. In theory, this means the majority of their users will simply not say “yes” and that then kills their business model.
I assume they will try to make the non-consenting option paid, like many news sites do. My interpretation of the GDPR (specifically Article 7, Section 4) is that this isn’t legal either, but it hasn’t been challenged in court yet.
You might be thinking of “legitimate interests”. You can skip getting consent if something represents a “legitimate interest” of yours. But yeah, that one is a hairy topic. So far, not even Meta has tried to claim that their relentless data hoovering falls under that.
Doesn’t surprise me, honestly. Most devs either target Linux for backend stuff, develop for web or Android in terms of frontend (which works just as well on Linux), or could at least benefit from their application building and running on their CI/CD Linux server.
And once your application runs on Linux, you only need the dev stack to do the same, which probably works better on Linux than Windows already.
With macOS, unless you’re extremely specialized in graphics work, iOS or macOS development, chances are you can’t run all dev tooling or the application you’re developing without jumping through hoops.
Well, of course, I would like to find a balanced approach that causes the least suffering. So investing heavily into renewables and, for example, taxing CO2 output to give the money back to the people would IMHO be smart strategies no matter what path we take.
But yeah, my problem is that your comment could easily be rewritten to talk about the climate crisis.
Here in Germany, we’ve lost entire cities to floods, while at the same time the ground is drier than it’s been in centuries, leading to important transport rivers drying up, farmers losing crops and whole forests burning down.
Summers are now so hot that if you can’t afford an AC, you’re risking your health.
Hell, many refugees, whom fascists love to get riled up about, had to flee their home countries due to catastrophes caused by the climate crisis, or due to conflicts/fascism caused by those.
And finally, the climate crisis can have a direct effect on the energy crisis, too. This summer, France had to lower the capacity of their few remaining nuclear power plants, because the rivers they use for cooling were largely dried up and already coming in at an increased temperature.
So, I’m afraid the balanced approach is going to hurt, no matter what we do.
Well, yeah, but if LEDs were another order of magnitude more efficient, then even such an energy crisis would hardly affect indoor farming.
I especially also want to extend the scope of this energy crisis: we need to get away from fossil fuels for cars and power plants, and I would also rather not be dependent on imports of nuclear fuel, nor have the respective power plants within explosion range.
If that means we need to plant our crops in traditional fields until we have our renewables built out, that is a sacrifice I’m more than willing to make.
“One could argue he has created value or destroyed value at Twitter. It’s hard to tell at this point,” [Gerber] said.
I’m going to guess this Gerber person is being pseudo-optimistic to try to bait other people into investing money as well, so he might have a chance of getting his million back.
Because otherwise, how in the fuck can you be this rich and this bad at judging reality?
The user experience will be worse, because they can’t use Flatpaks without jumping through extra hoops.
So, I guess, a “Ubuntu experience” is a bad experience. Not going to argue with that.