• 0 Posts
  • 444 Comments
Joined 2 years ago
Cake day: December 29th, 2023


  • “openai has practically no value and that’s well known… nvidia is paying companies to buy their chips and playing bullshit shell games”

    the difference is openai is a pretty well known unprofitable company, and they aren’t doing quite as much of the bullshit shell games. nvidia is selling to basically everyone, taking stakes in companies, cutting weird deals… it’s bloody impossible to track how much of their sales are real, how much those real sales are actually worth, or whether those sales are just loss leaders for some investment - and those investments look a lot like openai

    so nvidia not only is invested in a lot of very questionable AI bubble companies, but also their own sales figures are… unreliable

    they’re making billions upon billions because they’re using their own money multiple times. it’s kinda like leveraged trading with all the risk, and it’s incredibly arrogant at the scale nvidia is doing it




  • perhaps… i guess the single directional execution model would help to prevent memory leaks, and components would help keep things relatively contained… and also javascript in general avoids whole classes of c/c++ bugs… but it’s also incredibly slow. imo it’s just not something you should write core system components in

    to be clear, it’s not react that’s the problem here: its execution model is an excellent way of structuring UI… but something as core as the start menu really isn’t somewhere you should be fucking around with slow languages

    and also, that’s not to say that FOSS shouldn’t do it - they’re open, and thus something like react makes it easier for devs to write plugins and extend things etc… but that’s not an engineering concern for windows: they don’t get the luxury of using extensibility as an excuse



  • that’s absolutely true, and i’m sure that as tooling and workflow gets better these solutions will become standard. for the moment it’s all pretty haphazard, and i just don’t think it’s necessarily malicious intent or lying exactly… i think it could have easily been just miscommunication and/or legitimate mistake

    afaik there were 2 issues here: a placeholder asset was left in the game upon release, and the rules of the award also banned AI assets during development. i think the first can be easily explained as accidental (they replaced the texture very quickly) and the second can easily be explained by miscommunication between teams



  • yeah i don’t even think the dishonesty was necessarily deliberate… i just think perhaps the marketing team wasn’t fully informed. i can absolutely see dev teams saying no to “AI use” without having been told that the question applied to the whole dev process, and marketing not understanding that that information was important

    i have no problem with AI placeholders. i think that’s the right way to use AI… and dishonesty is a problem… miscommunication is really not a problem

    but i also think that rescinding the award is the right call! it just shouldn’t tarnish the studio’s reputation in the future if they apologise and explain what happened





  • “I’m guessing you dropped a zero or two on the user count”

    i was being pretty pessimistic because tbh i’m not entirely sure of the requirements of streaming video… i guess yeah 200-500 is pretty realistic for netflix since all their content is pre-transcoded… i kinda had in my head live transcoding here, but also i said somewhere else that netflix pre-transcodes, so yeah… just brain things :p

    “also added an extra zero to the wattage”

    absolutely right again! i had a TDP of ~1500w in my head (eg for a threadripper) - it’s actually 350w or lower
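    just to sanity check those corrected numbers, here’s the arithmetic as a rough python sketch - the only assumption i’m adding is reading “200-500” as concurrent streams per server:

    ```python
    # rough per-stream power using the corrected figures from this exchange:
    # ~350 W per server (not the ~1500 W i originally had in my head) and
    # 200-500 concurrent streams per server (the "per server" reading is my assumption)
    server_watts = 350
    for streams in (200, 500):
        print(f"{streams} streams -> {server_watts / streams:.2f} W per stream")
    # 200 streams -> 1.75 W per stream
    # 500 streams -> 0.70 W per stream
    ```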


  • my numbers are coming from the fact that anyone who’s replacing all their streaming likely isn’t using a single disk… WD red drives (as in NAS drives) according to their datasheet use between 6 and 6.9w when in use (3.6-3.9w at idle)… a standard home NAS has 4-6 bays, and i’m also assuming that in a typical NAS setup they’re in some kind of RAID configuration, which likely means some level of striping so all disks are utilised at once… again, i think all of these are decent assumptions for home users using off the shelf hardware

    i’m ignoring sleep here, because sleep for NAS drives leads to premature failure… this is why, if you buy WD green drives for your NAS for example and you use linux, you use hdparm to turn off sleep, because constantly parking and unparking the heads significantly reduces drive life (afaik many NAS products do this automatically, or otherwise manage it)

    the top end of that estimate for drives (6 drives) is 41.4w, and the low end (4 drives) is 24w… granted, not everyone will have even those 4 drives, so perhaps my estimate is a little high, but i don’t think 30w for drives is an unreasonable assumption (the arithmetic is in the sketch at the end of this comment)

    again, here’s where data centres just do better: their utilisation is spread much more evenly… the idle power of drives is not hugely less than their full speed read/write, so it’s better to have constant access over fewer drives, which is exactly what happens with DCs because they have fewer traffic spikes (and can legitimately manage drive power off for hours at a time because their load is both predictable, and smoother due just to their scale)

    also, as someone else in the thread mentioned: my numbers for servers were WAY off for a couple of reasons, but basically

    “Back of the envelope math says that’s around 0.075 watts per individual stream for a 150w 2U server serving 2000 clients, which looks pretty realistic to my eyes as a Sysadmin.”

    that also sounds realistic to me, having realised i fucked up my server numbers by an order of magnitude for BOTH power use, and users served

    servers and data centres are just in a class of their own in terms of energy efficiency

    here for example: https://www.supermicro.com/en/products/system/storage/4u/ssg-542b-e1cr90

    this is an off the shelf server with 90 bays that has a 2600w power supply (which even then is way overkill: that’s ~29w per drive)… with 22tb drives (off the top of my head because that’s what i use, as it is/was the best $/byte size) that’s almost 2pb of storage… that’s gonna cover a LOT of people with that 2600w, and imo 2600w is far beyond what they’re actually going to be pulling
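    for anyone who wants to check my working, here’s the arithmetic from this comment in one place (a rough python sketch - the drive wattages are from the WD red datasheet mentioned above, everything else is as stated in the thread):

    ```python
    # WD red datasheet: ~6-6.9 W per drive in use, typical home NAS has 4-6 bays
    drive_watts_low, drive_watts_high = 6.0, 6.9
    print(f"4 drives: {4 * drive_watts_low:.1f}-{4 * drive_watts_high:.1f} W")  # 24.0-27.6 W
    print(f"6 drives: {6 * drive_watts_low:.1f}-{6 * drive_watts_high:.1f} W")  # 36.0-41.4 W

    # the per-stream figure quoted above: 150 W 2U server serving 2000 clients
    print(f"per stream: {150 / 2000:.3f} W")  # 0.075 W

    # the supermicro 90-bay box linked above, with the 22tb drives i happen to use
    psu_watts, bays, drive_tb = 2600, 90, 22
    print(f"{psu_watts / bays:.0f} W of PSU budget per drive")  # ~29 W
    print(f"{bays * drive_tb / 1000:.2f} PB raw")               # 1.98 PB
    ```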



  • an n150 mini pc - largely considered a very efficient package for home servers - consumes ~15w max without the gpu, and ~9w idle

    a raspberry pi consumes 3-4w idle

    none of that is supporting more than a couple of people streaming 4k like we’re talking about in the case of netflix

    and a single hard drive isn’t even close to what we’re talking about… you’re looking at ~30w at least for the disks alone

    as for the networking cost, it’s likely tiny… my 24 port gigabit switch from 15 years ago sips < 6w… i can only imagine that’s pretty inefficient compared to today’s standards (and 24 ports is pretty tiny for a DC, and port power consumption doesn’t scale linearly)

    data centres are just straight up way more efficient per unit of processing than your home anything; it pretty much doesn’t matter how efficient your home gear is, or what the workload is unless you switch it off most of the time - which doesn’t happen in a DC
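    putting those numbers together (a rough python sketch - the only assumption i’m adding is reading “a couple of people streaming” as ~2 for the home box):

    ```python
    # home setup vs data centre, watts per 4k stream, using the numbers above
    home_watts = 15 + 30               # n150 mini pc flat out + ~30 W of disks
    home_streams = 2                   # "a couple of people streaming" (my assumption)
    dc_watts_per_stream = 150 / 2000   # the 2U server figure from elsewhere in the thread

    home_per_stream = home_watts / home_streams
    print(f"home: ~{home_per_stream:.1f} W per stream")            # ~22.5 W
    print(f"dc:   ~{dc_watts_per_stream:.3f} W per stream")        # ~0.075 W
    print(f"ratio: ~{home_per_stream / dc_watts_per_stream:.0f}x") # ~300x
    ```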




  • self hosting is wildly less efficient… one of the biggest costs in data centres is electricity, and one of the biggest constraints is electrical infrastructure… you have pretty intense power budgets in data centres and DC equipment is pretty well optimised to be efficient

    meanwhile a home server doesn’t likely use server hardware (server hardware is far more efficient), is probably about 5-10y or more out of date, and isn’t likely particularly dense: a single 1500w server can probably service ~20 people in a DC… meanwhile an 800w home server could probably handle ~5 people

    add the fact that netflix pre-transcodes their vids in many different qualities and formats, whilst home streaming - unless you’re streaming original quality - mostly transcodes on the fly, which is a very energy-hungry process (there’s a rough sketch of the pre-transcode approach at the end of this comment)

    heck even just the hard drives: if everyone ran their own servers and stored their content that’s thousands if not hundreds of thousands more copies of the data, and all that data is probably on spinning disks
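    for reference, this is roughly what pre-transcoding looks like if you want to spare a home server from on-the-fly transcodes - purely a sketch, the filename, bitrates and rendition ladder below are made up, and ffmpeg does the actual work:

    ```python
    # pre-transcode a source file into a small ladder of renditions once, up front,
    # so playback never has to transcode on the fly (ladder and paths are illustrative)
    import subprocess

    SOURCE = "movie.mkv"   # hypothetical input file
    LADDER = [             # (height, video bitrate) - made-up example values
        (2160, "16M"),
        (1080, "6M"),
        (720, "3M"),
    ]

    for height, bitrate in LADDER:
        out = f"movie_{height}p.mp4"
        subprocess.run([
            "ffmpeg", "-y", "-i", SOURCE,
            "-vf", f"scale=-2:{height}",   # keep aspect ratio, set output height
            "-c:v", "libx264", "-b:v", bitrate,
            "-c:a", "aac", "-b:a", "160k",
            out,
        ], check=True)
        print(f"wrote {out}")
    ```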