

The implication is walking away from all US military support, I believe.
I’m hoping Arc survives all this?
I know they want to focus, but no one’s going to want their future SoCs if the GPU part sucks or is nonexistent. Heck, it’s important for servers, eventually.
Battlemage is good!
It was selectively given to institutions and “major” celebrities before that.
Selling them dilutes any meaning of “verified” because any joe can just pay for extra engagement. It’s a perverse incentive, as the people most interested in grabbing attention buy it and get amplified.
It really has little to do with Musk.
the whole concept is stupid.
+1
Being that algorithmic just makes any Twitter-like design too easy to abuse.
Again, Lemmy (and Reddit) is far from perfect, but fundamentally, grouping posts and feeds by niche is way better. It incentivizes little communities that care about their own health, whereas users shouting into the Twitter maw have zero control over any of that.
Not sure where you’re going with that, but it’s a perverse incentive, just like the engagement algorithm.
Elon is a problem because he can literally force himself into everyone’s feeds, but also because he always posts polarizing/enraging things these days.
Healthy social media design/UI is all about incentivizing good, healthy communities and posts. Lemmy is not perfect, but simply not being designed for engagement/profit, because Lemmy is “self hosted” instead of commercial, is massive.
Yeah, I read some, and I am worried about it being a little barebones. Runescape nostalgia is a huge draw though.
They seem to love writing cities and fantasy-tech too, going by some of the stuff in BG3.
Looks like Shadowrun’s licensing is a complicated mess though, with Microsoft at least involved, so I guess it’s unlikely :(
Oh man, imagine if they did a Shadowrun game. Take their fantasy credentials/writing and mix it with cyberpunk…
Awesome!
I wonder if things will organize around an “unofficial” modding API like Harmony for Rimworld, Forge for Minecraft, SMAPI for Stardew Valley, and so on? I guess it depends on whether some hero dev team does it and there’s enough “demand” to build a bunch of stuff on it. But a “final” patch (so future random patches don’t break the community API) and community enthusiasm from Larian are really good omens.
Skyrim and some other games stayed more fragmented, while others like CP2077 just never hit critical mass, I guess. And the format of the modded content just isn’t the same for mega RPGs like this.
How is the modding scene these days? Seems like there’s a lot in the patch addressing that, but are things still more aesthetic?
I mean, “modest” may be too strong a word, but a 2080 TI-ish workstation is not particularly exorbitant in the research space. Especially considering the insane dataset size (years of noisy, raw space telescope data) they’re processing here.
Also, that’s not always true. Some “AI” models, especially old-school ones, run fine on old CPUs. There are also efforts (like bitnet) to get larger ones running fast and cheaply.
I have no idea if it has any impact on the actual results though.
Is it a PyTorch experiment? Other than maybe different default data types on CPU, the results should be the same.
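If you want a quick sanity check, here’s a minimal sketch (assuming a PyTorch model; the layer sizes and tolerance are made up) comparing the same weights and input on CPU vs GPU:

```python
import torch

# Minimal sketch: same weights, same input, CPU vs GPU.
# Drift beyond float32 rounding noise would point at a dtype/precision mismatch.
torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
x = torch.randn(4096, 64)  # arbitrary batch

with torch.no_grad():
    out_cpu = model(x)
    if torch.cuda.is_available():
        out_gpu = model.cuda()(x.cuda()).cpu()
        # Expect agreement within float32 rounding; exact bitwise equality isn't guaranteed.
        print(torch.allclose(out_cpu, out_gpu, atol=1e-5))
```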
That’s even overkill. A 3090 is pretty standard in the sanely priced ML research space. It’s the same architecture as the A100, so very widely supported.
5090 is actually a mixed bag because it’s too new, and support for it is hit and miss. And also because it’s ridiculously priced for a 32G card.
And most CPUs with tons of RAM are fine, depending on the workload, but the constraint is usually “does my dataset fit in RAM” more than core speed (since just waiting 2X or 4X longer is not that big a deal).
The model was run (and I think trained?) on very modest hardware:
The computer used for this paper contains an NVIDIA Quadro RTX 6000 with 22 GB of VRAM, 200 GB of RAM, and a 32-core Xeon CPU, courtesy of Caltech.
That’s a double-VRAM Nvidia RTX 2080 TI + a Skylake Intel CPU, an aging circa-2018 setup. With room for a batch size of 4096, no less! Though they did run into some preprocessing bottlenecks in CPU/RAM.
The primary concern is the clustering step. Given the sheer magnitude of data present in the catalog, without question the task will need to be spatially divided in some way, and parallelized over potentially several machines.
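To make “spatially divided” concrete, here’s a toy sketch (assuming scikit-learn’s DBSCAN and made-up tile/eps values; the paper doesn’t commit to any particular algorithm) of tiling a catalog by sky coordinates and clustering each tile independently:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy sketch: split a catalog into RA/Dec tiles, then cluster each tile on its own.
# Tile size and eps are made up; a real pipeline would also need to stitch
# clusters that straddle tile borders (e.g. with overlapping margins).
rng = np.random.default_rng(0)
catalog = rng.uniform([0, -90], [360, 90], size=(100_000, 2))  # fake (RA, Dec) in degrees

tile_deg = 10.0
tiles = {}
for ra, dec in catalog:
    key = (int(ra // tile_deg), int(dec // tile_deg))
    tiles.setdefault(key, []).append((ra, dec))

labels = {}
for key, points in tiles.items():
    pts = np.asarray(points)
    # Each tile is independent, so this loop is embarrassingly parallel across machines.
    labels[key] = DBSCAN(eps=0.05, min_samples=5).fit_predict(pts)
```

Stitching clusters that cross tile borders is the ugly part, which is presumably why they flag it as the primary concern.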
It’s a little too plausible, heh.
I posit the central flaw is the engagement system, which was AI-driven long before LLM/diffusion was public. The slop is made because it sells in that screwed-up, toxic feed.
If engagement-driven social media doesn’t die in a fire, the human race will.
Yeah you are correct, I was venting lol.
Another factor is that fab-choice decisions were made way before the GPUs launched, when everything you said (TSMC’s lead/reliability, in particular) rang more true. Maybe Samsung or Intel could offer steep discounts to offset the lower performance (which Nvidia/AMD could translate into bigger dies), but that’s quite a fantasy, I’m sure…
It all just sucks now.
A: This is the ‘bad’ kind of incentive. My mom worked in a hospital where people would come in pregnant, tons of neglected kids in tow, asking how much welfare they could get for the next kid. Stuff like vouchers for school, childcare, healthcare, and so on doesn’t incentivize that.
B: It’s hilariously inadequate and out-of-touch. $5K for childcare these days is a joke, even as a nice supplement.
…But that’s the point. This is for show, like Trump’s COVID checks with his signature on them. It’s a brand to tell people “Hey! I’m Trump, and I’m helping you!” directly: a decent idea implemented poorly, for PR purposes. It’s also hilariously hypocritical, seeing how much ‘blank-check handouts’ were criticized for decades.