

In the phrase, “broken” specifically means “stopped”, and the clock is analog: if the hands don’t turn, they’ll show the correct time twice per day.
Just FYI, this phrase originates with a literal neo-Nazi.
There’s a park in my area. Literally every time I have been there, over several years, there is a van in the parking lot. It has a wire mesh bust of Hillary Clinton on top, and is covered with writings about various conspiracy theories about her, and slogans like “Hillary for Jail!”
Possibly the craziest thing I’ve ever seen.
I really hate that people keep treating these LLMs as if they’re actually thinking. They absolutely are not. All they are, under the hood, is really complicated statistical models. They don’t think about or understand anything; they just calculate the most likely response to a given input based on their training data.
That becomes really obvious when you look at where they often fall down: math questions and questions about the actual words they’re using.
They do well on standardized math assessments, but if you change the questions just a little, to something outside their training data (often just different numbers or a slightly different phrasing is enough), they fail spectacularly.
They often can’t answer questions about words at all (how many ‘R’s in ‘strawberry’, for instance) because they don’t even have a concept of the word; they just have a token (or a few sub-word tokens) representing it, and a list of associations they use to calculate when to use it.
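To make that concrete, here’s a minimal sketch using OpenAI’s tiktoken library (my choice for illustration; other models use different tokenizers, and the exact split will vary):

```python
# Why letter-counting is hard for an LLM: the model never sees letters,
# only opaque token IDs. Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-3.5/4-era tokenizer

tokens = enc.encode("strawberry")
print(tokens)                             # a short list of integer IDs
print([enc.decode([t]) for t in tokens])  # the sub-word chunk each ID maps to
```

The model only ever operates on those integer IDs; nowhere in its input is there an ‘R’ to count.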
LLMs are complex, and the way they’re designed means that the specifics (which associations they form and how those associations are weighted) are opaque to us. But that doesn’t mean we don’t know how they work, despite that being a big talking point when they first came out. I really wish people would stop treating them like something they’re not.
You misunderstand. You take a picture of, say, your dad at a family reunion, and in the background the rest of your family is just milling around. They’re not the subject, and so the AI model saves them as “people doing stuff” or whatever. When you load that photograph, the people in the background will be generated, and they won’t be your family.
And that’s setting aside the fact that the AI may decide your subject is different from what you think it is.
This is just an extremely unreliable form of data compression, and extremely unnecessary. Phones and cameras can currently save hundreds or thousands of photographs locally, and cloud storage can save millions for free, and even more for extremely cheap. You’re solving a non-existent problem by shoehorning AI image generation in where it’s not needed.
The pitch is that everything surrounding the subject is extra, and so it doesn’t matter if it’s the same every time. It’s literally throwing that information away in favor of a simplified description. It’s extremely processor-intensive data compression.
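To sketch what that would mean in practice (a toy illustration I made up, not anyone’s actual pipeline; describe_scene and generate_background are hypothetical stand-ins for real captioning and image-generation models):

```python
# Toy sketch of "semantic compression": keep the subject, reduce everything
# else to a caption, and regenerate the rest on load. The stand-in functions
# below are hypothetical; a real system would call ML models here.

def describe_scene(photo: dict) -> dict:
    """'Compress': keep the subject, replace everything else with text."""
    return {
        "subject": photo["subject"],                   # kept as-is
        "background": "people milling around outside"  # actual pixels discarded
    }

def generate_background(caption: str) -> str:
    """'Decompress': a generative model invents plausible new pixels."""
    return f"<freshly generated pixels matching: {caption!r}>"

photo = {"subject": "<your dad>", "background": "<the rest of your family>"}

saved = describe_scene(photo)  # photo["background"] is gone for good
restored = {
    "subject": saved["subject"],
    "background": generate_background(saved["background"]),
}
print(restored)  # the background people are now strangers
```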
I’m sorry, but no. Not only does that invoke a ton of extraneous processing on both ends (when saving and when recalling the image), but the rest of the image is still important, too! Can you imagine taking a photo at a family gathering, and then coming back later to see randomly generated people in the background? A photograph isn’t just about the “subject”, it’s often about a moment in time.
Their ToS did say that an updated ToS would only apply to the then-current major version of Unity and newer. But they removed that clause about a year ago without telling anyone, and they’re now claiming their changes are retroactive, so even that can’t be trusted. At this point, I can’t see any game dev starting a new Unity project ever again without some kind of ironclad guarantee that this will never happen again. My prediction: we’ll see maybe one or two more years of games released on Unity, just from the ones that were already too far into development to start over on something else, and then Unity is done for good.
I’m having the same problem: searches for any communities not on my local instance (with a few exceptions from lemmy.ml and beehaw) turn up zero results, even with the ! token.
A workaround someone mentioned in another thread was to just go directly to the address with the following format:
https://home.server/c/community@community.server
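For example (servers and community picked purely for illustration), viewing Beehaw’s technology community from a lemmy.world account would look like:
https://lemmy.world/c/technology@beehaw.org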
This does seem to mostly work (though I sometimes get 404 errors when I try), but with some communities I’m only getting a few posts when viewing from my home server.
Thanks for the info here; I was experiencing a similar issue and wasn’t sure if I was doing something wrong. With this being the case (the server not showing results until someone has already linked the servers by subscribing, if I’m understanding that right), is there a good or best-practices way for users to discover new communities across the Lemmy platform? Or should we just trawl through the different servers?
In addition, I’ve joined a couple of communities from other servers, but I only seem to be seeing a very small subset of their posts. Is this again just my server not yet being linked, and maybe needing time for the data to propagate?
Oh, it did: they fed it data from Stack Overflow.