Rendering the movie Toy Story in 1995 required top-of-the-line computers, and it took a huge amount of time to render every frame (800,000 machine-hours in total, according to Wikipedia).

Would it be possible to render it in real time on a single home computer with modern (2025) GPUs?

  • wuzzlewoggle@feddit.org

    I also work in 3D and I wanna say yes. If we’re talking solely about the technical aspect, real-time rendering today can definitely hold up to, or even surpass, the quality of renders from 30 years ago.

    Whether it would look exactly the same, and how much work it would take to recreate the entire movie in a real-time engine, is another question.

  • Ozymandias88@feddit.uk

    Others have covered the topic of modern renderers and their shortcuts, but if you wanted an exact replica: films like this are rendered on large HPC clusters.

    Looking at the Top500 stats for HPCs, the average Top500 cluster in 1995 was 1.1 TFlops, and today that seems to be around 23.4 PFlops.

    An increase of approximately 21,000 times.

    So 800,000 hours from 1995 is about 37 hours on today’s average top500 cluster.

    Edit: I found a few unconfirmed calculations that Toy Story was rendered on 294 CPUs in SPARCstation 20s, with a combined power of only 8 GFlops. This would mean a render time of 325,000 CPU-hours, or about 1,100 wall-clock hours. So No. 500 of the Top500 has the theoretical raw power to render Toy Story in about 15 minutes. You’d only need around 7 TFlops to render it in its 75-minute runtime.

    Still, we’re talking multimillion-dollar HPC clusters here, not your home rig, if you want to render exactly the same thing in the same way.

    If you could update the renderer to support modern GPU hardware, then it seems like you would have enough power to achieve similar real-time rendering.
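
    For anyone who wants to redo the edit’s arithmetic, here is a rough sketch; all the inputs are the unconfirmed figures above, not verified specs:

    ```python
    SECONDS_PER_HOUR = 3600

    original_flops = 8e9        # ~8 GFlops combined across the 294 SPARC CPUs (unconfirmed)
    wall_clock_hours = 1100     # estimated original wall-clock render time (unconfirmed)

    # Total floating-point work the original render represents.
    total_work = original_flops * wall_clock_hours * SECONDS_PER_HOUR

    runtime_seconds = 75 * 60   # the movie runtime used above
    realtime_flops = total_work / runtime_seconds
    print(f"Sustained compute for real time: {realtime_flops / 1e12:.1f} TFlops")
    # -> about 7 TFlops, matching the estimate above
    ```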

  • Mothra@mander.xyz

    Hello, I’ve worked in 3D animation productions. Short answer: You can get close.

    Unreal Engine has the capacity to deliver something of similar quality in real time, provided you have a powerful enough rig. You would need not only the hardware but also the technical know-how to optimize several aspects of your scenes. Basically the knowledge of a mid-to-senior Unreal TD and a mid-to-senior generalist combined, to say the least.

  • Zomg@lemmy.world

    Fun fact: in the first Toy Story, all the kids at Andy’s birthday have the same face as him.

  • Emily (she/her)@lemmy.blahaj.zone

    I’m not a computer graphics expert (though I have at least a little experience with video game dev), but considering Toy Story uses ray-traced lighting, I would say it at least depends on whether you have a ray-tracing-capable GPU. If you don’t, probably not. Otherwise, I would guess you could get something at least pretty close out of a modern game engine.

      • Blackmist@feddit.uk

        Full ray tracing (path tracing) might be possible in real time.

        I think they can do it for very basic looking games like Quake 2.

        That said, I doubt you’d actually need full RT for visuals like Toy Story 1, or indeed for most things.

        Game developers got pretty good at faking most of it. RT can basically just be used for reflections, shadows and global illumination, and most of us wouldn’t notice the difference.

      • Emily (she/her)@lemmy.blahaj.zone

        Maybe. What I said is admittedly mostly based on my experience with Blender’s Cycles renderer, which is definitely not real-time.

    • magic_lobster_party@fedia.io

      Did Toy Story use ray tracing back then?

      AFAIK, A Bug’s Life is the first Pixar movie that used ray tracing to some extent, and that was for a few reflections. Monsters University is the first Pixar movie that was fully ray traced.

      • Emily (she/her)@lemmy.blahaj.zone

        You’re right, it looks like they didn’t (at least for most things?). They do mention ray tracing briefly, saying the sampling stage can “combine point samples from this algorithm with point samples from other algorithms that have capabilities such as ray tracing”, but it seems like they describe something like shadow mapping for shadows and regular raster shading techniques (“textures have also been used for refractions and shadows”)?
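
        For context, shadow mapping at its simplest is just a depth comparison against a depth image rendered from the light’s point of view. A minimal sketch of the idea (purely illustrative, not Pixar’s actual implementation):

        ```python
        import numpy as np

        def in_shadow(x: int, y: int, depth_from_light: float,
                      shadow_map: np.ndarray, bias: float = 1e-3) -> bool:
            """True if a surface point is shadowed.

            (x, y) are the point's coordinates in the shadow map (the light's view),
            depth_from_light is its distance from the light, and shadow_map holds
            the closest depth the light sees through each of its pixels.
            """
            closest = shadow_map[y, x]
            # Something nearer to the light covers this point -> it is in shadow.
            return depth_from_light - bias > closest
        ```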

        • magic_lobster_party@fedia.io

          Interesting paper. I skimmed through it quickly, but it seems like they wanted to avoid relying on ray tracing.

          Minimal ray tracing. Many non-local lighting effects can be approximated with texture maps. Few objects in natural scenes would seem to require ray tracing. Accordingly, we consider it more important to optimize the architecture for complex geometries and large models than for the non-local lighting effects accounted for by ray tracing or radiosity.

          Most of the paper is way above my understanding, so I’m not qualified to judge.

      • CodexArcanum@lemmy.dbzer0.com

        Physically Based Rendering (the freely available book) won its authors a special Academy Award in 2014. That book is still the teaching standard for ray tracing as far as I know. In the intro, they discuss Pixar adding ray tracing (based on pbrt) to their RenderMan software in the early 2000s.

        A Bug’s Life and TS2 could have benefited from some of that, but I’d guess Monsters Inc was the first full outing for it, and certainly by Nemo they must have been doing mostly ray tracing.

  • Deestan@lemmy.world

    Things that can affect it, with some wild estimates of how much they reduce the 800k hours:

    • Processors are 10-100 times faster. Divide by 100ish.
    • A common laptop CPU has 16 cores. Divide by 16.
    • GPUs and CPUs have more and faster math operations for numbers. Divide by 10.
    • RAM speeds and processor cache lines are larger and faster. Divide by 10.
    • Modern processors have more and stronger SIMD instructions. Divide by 10.
    • Ray tracing algorithms may be replaced with more efficient ones. Divide by 2.

    That brings it down to 3-4 hours I think, which can be brought to realtime by tweaking resolution.

    So it looks plausible!
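
    Multiplying those wild guesses out (taking the low end of the processor estimate), a quick sketch:

    ```python
    # Rough multiplication of the wild estimates above; none of these
    # factors are measured numbers.
    factors = {
        "faster processors": 10,   # low end of the 10-100x guess
        "16 CPU cores": 16,
        "faster math ops": 10,
        "faster RAM / caches": 10,
        "wider SIMD": 10,
        "better algorithms": 2,
    }

    speedup = 1
    for factor in factors.values():
        speedup *= factor

    hours = 800_000 / speedup
    print(f"total speedup ~{speedup:,}x -> about {hours:.1f} hours")
    # -> ~320,000x, about 2.5 hours; the high end of the processor
    #    guess would push it well under the movie's runtime.
    ```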

    • magic_lobster_party@fedia.io

      They used top-of-the-line hardware specialized for 3D rendering. It seems like they used Silicon Graphics workstations, which cost more than $10k back in the day. Not something the typical consumer would buy. The calculations are probably a bit off with this taken into account.

      Then they likely relied on rendering techniques optimized for the hardware they had. I suspect modern GPUs aren’t exactly compatible with these old rendering pipelines.

      So multiply by 10ish and I think we have a more accurate number.

      • lime!@feddit.nu

        Remember how extreme hardware progress was back then. The devkit for the N64 was $250k in 1993, but the console was $250 in 1996.

        • magic_lobster_party@fedia.io

          Most of that cost was likely not for the hardware itself, but rather Nintendo greed. Most of it was probably for the early access to Nintendo’s next console, and possibly for support from Nintendo directly.

          • lime!@feddit.nu

            The devkit was an SGI supercomputer, since they designed the CPU. No Nintendo hardware in it.

      • Buelldozer@lemmy.today

        There is no comparison between a top-of-the-line SGI workstation from 1993-1995 and a gaming rig built in 2025. The 2025 gaming rig is literally orders of magnitude more powerful.

        In 1993 the very best that SGI could sell you was an Onyx RealityEngine2 that cost an eye-watering $250,000 in 1993 money ($553,000 today).

        A full spec breakdown would be boring and difficult, but the best you could do in a “deskside” configuration was 4 single-core MIPS processors, either R4400s at 295 MHz or R10000s at 195 MHz, with something like 2 GB of memory. The RE2 system could maybe pull 500 megaflops.

        A 2025 gaming rig can have a 12-core (or more) processor clocked at 5 GHz and 64 GB of RAM. An Nvidia RTX 4060 is rated for around 230 gigaflops in double precision, and roughly 15 teraflops in single precision.

        A modern gaming rig absolutely, completely, and totally curb-stomps anything SGI could build in the early-to-mid ’90s. The performance delta is so wide it’s difficult to adequately express. The way Pixar got it done was by having a whole bunch of SGI systems working together, but 30 years of advancements in hardware, software, and math have nearly, if not completely, erased even that advantage.

        If a single modern gaming rig can’t replace all of the Pixar SGI stations combined, it’s got to be very close.
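
        To put very rough numbers on that delta (the RE2 figure is the guess above, the GPU figure is the single-precision rating, and the farm figure is the unconfirmed SPARCstation estimate from earlier in the thread):

        ```python
        # All approximate peak figures, not sustained rendering throughput.
        re2_flops = 500e6    # ~500 MFlops for a deskside Onyx RealityEngine2 (guess above)
        farm_flops = 8e9     # ~8 GFlops combined render farm (unconfirmed estimate above)
        gpu_fp32 = 15e12     # ~15 TFlops FP32 for a midrange 2025 GPU (RTX 4060 class)

        print(f"one GPU vs one Onyx RE2:       ~{gpu_fp32 / re2_flops:,.0f}x")
        print(f"one GPU vs the whole '95 farm: ~{gpu_fp32 / farm_flops:,.0f}x")
        # -> roughly 30,000x a single workstation and ~1,900x the whole farm, on paper
        ```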

  • SkunkWorkz@lemmy.world

    With a modern game engine and PBR shaders you can definitely get the same look as the movie. If you try to render it exactly the way they did, with a software renderer on the CPU, then maybe. Their renderer was built on the REYES architecture, which didn’t use ray tracing or path tracing at all. You can read about it here:

    https://graphics.pixar.com/library/Reyes/paper.pdf

    I only skimmed it, but it seems what they call micropolygons is just subdivision, which can also be done in real time with hardware tessellation.
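
    The core idea from the paper is to subdivide (“dice”) each surface patch until every micropolygon is about a pixel or smaller on screen, then shade the grid vertices, which is also roughly what a tessellation shader computes. A toy sketch of just the dice-rate step (my own simplification, not the paper’s code):

    ```python
    import math

    def dice_rate(screen_width_px: float, screen_height_px: float,
                  shading_rate: float = 1.0) -> tuple[int, int]:
        """How many times to subdivide a patch in u and v so each
        micropolygon covers roughly `shading_rate` pixels of screen area."""
        pixels_per_side = math.sqrt(shading_rate)
        nu = max(1, math.ceil(screen_width_px / pixels_per_side))
        nv = max(1, math.ceil(screen_height_px / pixels_per_side))
        return nu, nv

    # A patch projecting to ~40x25 pixels becomes a 40x25 grid of
    # micropolygons at the default shading rate of one pixel each.
    print(dice_rate(40, 25))   # (40, 25)
    ```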

  • unexposedhazard@discuss.tchncs.de

    Modern GPUs are easily 1000x faster than the ones back then, so 800k hours would be reduced to 800 hours, which is about a month of compute time. That’s just raw compute though; there has been a lot of optimization work in the last 30 years, so it’s probably waaay less than that. I would expect it to be possible in a few days on a single high-end GPU, depending on how closely you want to replicate the graphics. Some rendering details might be impossible to reproduce identically due to loss of the exact systems and software used back then.

  • foggy@lemmy.world

    Yes and no.

    You could get away with it, with lots of tricks to downsample and compress, at times when even an RTX 5090 with 32 GB of VRAM is like 1/64th of what you’d need to do it in high fidelity.

    So you could “do it” but it wouldn’t be “it”.

  • Obi@sopuli.xyz

    Basically to sum it up:

    • Render the actual movie from the original files: hard, because of the inherent technical challenges.

    • Recreate the movie: easy from a technical perspective for your machine to render, but hard (potentially very hard) from an artistic and design perspective.

  • Takapapatapaka@lemmy.world

    I’d say you could render something close in real time. I’m not entirely aware of all the techniques used in this film, but seeing what we can render at 60 fps in games, I think you could find a way of achieving a Toy Story look at 24 fps. You may need a lot of tweaking though, depending on what you use. (I was thinking of EEVEE, the Blender ‘real-time’ engine; I know there are a bunch of settings and workarounds needed to get good results, and I think they may tend to make the render not quite real-time, like 0.5 s, 1 s, or 2 s per frame, so quite fast but not real time.)

  • LucidNightmare@lemm.ee

    The Toy Story world in Kingdom Hearts 3 looked damn close to the original, so I’d assume maybe, if enough work was put into it?

  • Krudler@lemmy.world

    Doubtful

    I’m not talking out my ass: a good buddy of mine worked for Frantic Films for decades, and I myself learned 3D alongside him… We would squabble over the render farm too…

    Anyways, most of the renderers made for those early movies were custom-built. And anytime you custom-build, you can’t generalize the output to a different system. So it’s a long way of saying no, but maybe, if you wrote a custom renderer that was specifically designed to handle the architecture of the scenes and the art and the lighting and blah blah blah.

    Edit: oh, and you would probably need the lord of all graphics cards, possibly multiple in a small array with custom threading software.

  • Novamdomum@fedia.io

    Feels like we’re closing in on a time when remaking something like Toy Story, or any animation, would be as simple as writing a prompt. “Remake Toy Story but it turns out Slinky Dog is an evil mastermind who engineered Woody’s fall from grace”.