I’m so down for this sailor content.
The woman is right!
Now that it’s a couple days later, I think I might add a thought to this. It can be invalidating to have someone ask a victim to empathize with the bully without sufficiently recognizing the victim’s feelings.
I used to do this sort of thing. I would try to be objective and logical. I learned that this just made my friends feel crazy, like they might be overreacting. I’ve learned to instead start by validating people’s feelings. I try to recognize their pain, discomfort, and anger first. And I never blame people for feeling that anger.
I second what the other commenters are saying about forgiveness being for you, not the other person, but can I just rant about how useless it is to say no one can truly be bad? It denies the basic utility of words, in my opinion. If someone is an ass, violent, greedy, etc., then they are bad. If they change their ways, they are good. We have words to describe greedy, violent assholes. We call them bad people. Hell, a murderous psychopath? Call them evil. It’s why we have adjectives.
The joke is your odds of being gainfully self employed.
It is? I’d like to read about that
It just looked a lot like an AI image classifier.
I don’t expect current AIs are really configured in such a way that they suffer or exhibit more than rudimentary self-awareness. But it’d be very unfortunate to be a sentient, conscious AI in the near future and to be denied fundamental rights because your thinking is done “on silicon” rather than on meat.
Do you mean conventional software? Typically software doesn’t exhibit emergent properties and operates within the expected parameters. Machine learning and statistically driven software can produce novel results, but typically that is expected. They are designed to behave that way.
Really? I mean, it’s melodramatic, but if you went throughout time and asked writers and intellectuals if a machine could write poetry, solve mathematical equations, and radicalize people effectively enough to cause a minor mental health crisis, I think they’d be pretty surprised.
LLMs do expose something about intelligence, which is that much of what we recognize as intelligence and reason can be distilled from sufficiently large quantities of natural language. Not perfectly, but isn’t it just the slightest bit revealing?
A child may hallucinate, lie, misunderstand, etc, but we wouldn’t say the foundations of a complete adult are not there, and we wouldn’t assess the child as not conscious. I’m not saying that LLMs are conscious because they say so (they can be made to say anything), but rather that it’s difficult to be confident that humans possess some special spice of consciousness that LLMs do not, because we can also be convinced to say anything.
LLMs can reason (somewhat unreliably) with a fraction of a human brain’s compute power while running on hardware that was made for graphics processing. Maybe they are conscious, but only in some pathetically small way, which will only become evident when they scale up, like a child.
I don’t believe that consciousness strictly exists. Probably, the phenomenon emerges from something like the attention schema. AI exposes, I think, the uncomfortable fact that intelligence does not require a soul. That we evolved it, like legs with which to walk, and just as easily as robots can be made to walk, they can be made to think.
Are current LLMs as intelligent as a human? Not any LLM I’ve seen, but give it 100 trillion parameters instead of 2 trillion and maybe.
Why can’t complex algorithms be conscious? In fact, AIs can be directed to reason about themselves, context can be made persistent, and we can measure activation patterns showing that they are doing so.
I’m sort of playing devil’s advocate here, but “Consciousness requires contemplation of self, which requires the ability to contemplate” is subjective, and nearly any AI model, even a rudimentary one, is capable of insisting that it contemplates itself.
I’m in my 30s and I just listened to this book for the first time last year. These are excellent adventure stories!
I haven’t played borderlands since 1, so I’m maybe a bad person to judge it, but I like the removal of mini maps in general. It helps me focus on the 3d environment in front of me.
The lead art director of Marathon has followed her since before the game began development, and he still stole her shit wholesale. Disgusting.
Once they finally lock down the player so it’s impossible to block or skip ads, I look forward to coding a script which screen-records each video on my sub list, feeds each recording into a purpose-made classifier model which labels the ads, stitches out the ads with FFmpeg, and then uploads the results to my Jellyfin server.
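For the FFmpeg step, here’s a rough Python sketch of what I mean, assuming the classifier has already spit out (start, end) ad timestamps; the strip_ads helper and its arguments are just illustrative, not a real tool:

    # Sketch: cut labeled ad segments out of a recording using FFmpeg's
    # trim/atrim + concat filters, then write the cleaned file.
    import subprocess

    def strip_ads(src, dst, ad_segments, duration):
        """ad_segments: list of (start, end) seconds the classifier flagged as ads."""
        # Invert the ad segments into the list of segments to keep.
        keep, cursor = [], 0.0
        for start, end in sorted(ad_segments):
            if start > cursor:
                keep.append((cursor, start))
            cursor = max(cursor, end)
        if cursor < duration:
            keep.append((cursor, duration))

        # Build a filter graph that trims each kept segment and concatenates them.
        parts, labels = [], []
        for i, (s, e) in enumerate(keep):
            parts.append(f"[0:v]trim=start={s}:end={e},setpts=PTS-STARTPTS[v{i}];")
            parts.append(f"[0:a]atrim=start={s}:end={e},asetpts=PTS-STARTPTS[a{i}];")
            labels.append(f"[v{i}][a{i}]")
        parts.append(f"{''.join(labels)}concat=n={len(keep)}:v=1:a=1[outv][outa]")

        subprocess.run([
            "ffmpeg", "-y", "-i", src,
            "-filter_complex", "".join(parts),
            "-map", "[outv]", "-map", "[outa]",
            dst,
        ], check=True)

The re-encode pass is the price of cutting at arbitrary timestamps; keyframe-aligned cuts could use stream copy instead, but that’s a refinement.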
Hell yeah! (Edit, about the haka, 😅😅😅 not the punishment)
This accusation and the evidence behind it are so flimsy and immaterial. These don’t look like AI-generated images. They don’t have closed composition, they are off center, and the text looks good. They look like distant-LOD 3D models or maybe some concept art.
I’m playing DS2 now. This series is great! The only games like it are indie games, simulation affairs. It’s truly unique in the AAA space.