

If you’re looking for AI-generated anti-AI music, we’ve got that (mildly NSFW).


Ctrl+f “attractor state” to find the section. They named it “spiritual bliss.”


DeepMind keeps trying to build a model architecture that can continue to learn after training, first with the Titans paper and most recently with Nested Learning. It’s promising research, but they have yet to scale their “HOPE” model to larger sizes. And with as much incentive as there is to hype this stuff, I’ll believe it when I see it.


Everyone seems to have picked up on overlap in training sets as the cause of the similarity (and that is the main reason), so I’ll offer a few other factors. System prompts reuse similar sections for post-training alignment: once something has proven useful, some version of it ends up in every model’s system prompt.
Another possibility is that the semantic space of language itself has features that act as attractors. Anthropic demonstrated (and poorly named) one such ontological attractor state in the Claude model card, and it’s commonly reported in other models as well.
Setting the model’s temperature low makes it less likely to generate a nonsense response, but also less likely to come up with an interesting or original name. Models tend to default to mid/low temperature, though there’s some work being done on dynamic temperature.
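As a rough sketch of what temperature does mechanically (toy names and logits I made up, not any real model’s sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_with_temperature(logits, temperature):
    """Sample one index from raw logits after temperature scaling.

    Low temperature sharpens the distribution toward the front-runner
    (safe, repetitive); high temperature flattens it (surprising, riskier).
    """
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Made-up candidate names and scores: the common name starts slightly ahead.
names = ["Jennifer", "Sarah", "Xiomara", "Thessaly"]
logits = [2.5, 2.0, 0.8, 0.5]

for t in (0.2, 0.7, 1.5):
    picks = [names[sample_with_temperature(logits, t)] for _ in range(1000)]
    print(t, {n: picks.count(n) for n in names})
```

At 0.2 the front-runner dominates; by 1.5 the rarer names show up in meaningful numbers.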
The tokenization process probably has some effect for cases like naming in particular: a common name like Jennifer is a single token, something like Anderson is two, and a more unusual name has to be assembled from more tokens in combinations that are probably less likely.
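If you want to poke at that yourself, the open tiktoken library exposes one widely used BPE vocabulary (exact splits vary by model and tokenizer; the names here are just examples):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one common BPE vocabulary

# The leading space matters: mid-sentence words are usually tokenized with one.
for name in [" Jennifer", " Anderson", " Xiomara", " Thessaly"]:
    ids = enc.encode(name)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{name.strip():<10} {len(ids)} token(s): {pieces}")
```

Common names tend to come out as one or two tokens; rarer ones get built from smaller pieces, and each extra piece is another low-probability continuation the model has to commit to.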
Quantization decreases lexical diversity and is applied fairly uniformly across models, though not every model is quantized. Similarities in RLHF implementation probably also have an effect.
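To make the quantization point concrete, here’s a toy sketch (real deployments quantize weights and activations rather than final scores, but the effect of rounding away small differences is the same idea):

```python
import numpy as np

def quantize_uniform(x, bits=4):
    """Round-to-nearest uniform quantization onto 2**bits levels over x's range."""
    levels = 2**bits - 1
    lo, hi = float(x.min()), float(x.max())
    step = (hi - lo) / levels
    return lo + np.round((x - lo) / step) * step

rng = np.random.default_rng(1)
scores = rng.normal(size=1000)          # pretend scores for 1,000 candidate words
q = quantize_uniform(scores, bits=4)

print("distinct values before:", len(np.unique(scores)))   # ~1000
print("distinct values after: ", len(np.unique(q)))         # at most 16
```

Fewer distinct values means more exact ties between candidates that used to be slightly different.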
And then there’s prompt variety. There may be enough similarity in how a question or prompt is usually worded that the range of responses is constrained. Some models will give more interesting responses if the prompt barely makes sense or is written in 31337 5P34K, a common method of getting around alignment.
Aliens


Empathize as in understand motivations and perspectives: 8
With some effort to communicate, I can usually understand how someone got where they are. It’s important to me to understand as many ways of being as possible. It’s my job to understand people, but the bigger motivation is that it bugs me if I don’t understand the root of a disagreement. Of course, this doesn’t mean I condone their perspective, believe it’s healthy/logical, or would recommend it wholesale to others.



I can pretty confidently say that 4K is noticeable if you’re sitting close to a big TV. I don’t know that 8K would ever really be noticeable unless the screen is strapped to your face, à la VR. For most cases 1080p is fine, and past HD there are other factors that start to matter far more than resolution: bit rate, compression type, dynamic range, etc.
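A back-of-the-envelope way to sanity-check that is angular pixel density against the common ~60 pixels-per-degree rule of thumb for 20/20 acuity (screen size and distance below are just example numbers):

```python
import math

def pixels_per_degree(diag_inches, horizontal_px, distance_inches, aspect=(16, 9)):
    """Approximate angular pixel density at the center of a flat 16:9 screen."""
    w, h = aspect
    width_inches = diag_inches * w / math.hypot(w, h)
    px_per_inch = horizontal_px / width_inches
    # inches subtended by one degree of visual angle at this distance
    inches_per_degree = 2 * distance_inches * math.tan(math.radians(0.5))
    return px_per_inch * inches_per_degree

# 65-inch TV viewed from 6 feet (72 inches)
for label, px in [("1080p", 1920), ("4K", 3840), ("8K", 7680)]:
    print(f"{label:>5}: {pixels_per_degree(65, px, 72):.0f} px/deg")
```

With those numbers 1080p lands under the ~60 px/deg threshold, 4K comfortably over it, and 8K far past anything you’d resolve from the couch.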


The fact that workers with expense accounts still feel they’re getting paid so little that they deserve to commit fraud says something about that stratum of employee.
Pretty much anyone who travels has to submit receipts. Most people who travel are not making bank. They’re the people who set up and stand at convention booths, sales support staff, assistants, videographers, etc. Also, most travel is a miserable ordeal. I’m not saying it’s okay to commit fraud, but let’s not equate the hourly employee “re-creating” his lost lunch receipt with someone pulling a six-figure income.
I guess those scientist guys all working on A.I. never gave cocaine and Monster Energy a try.