Toy Story versus... *spawn

Post-process motion blur can't handle anything moving along a curve; it's linear (at least the implementations I'm familiar with).

I thought RM's motion blur only ever covered linear motion too, under the justification that it's convincing enough for pretty much every real-world use.
 
Could be the case, my memory is a little rusty and I haven't worked with PRMan for more than a decade :)
However, proper 3D motion blur does have this advantage. I also kinda think that it still looks better in PRMan than in practically any realtime renderer.
 
https://renderman.pixar.com/view/how-to-motion-blur

Multi-segment motion blur is used to approximate non-linear motion of an object between frames. Normal motion blur is calculated by linear interpolation of the position of the primitive, from when the shutter opens to when it closes. This leads to incorrect blurring when objects rotate or translate radially at high speeds. Multi-segment motion blur splits the frame up based on the number of motion samples set by the user; Studio allows a maximum of six samples per motion block. With six motion samples, the frame is split into six equal time slices. This gives a better approximation of angular rotations than motion blur with just two motion samples. In Studio, multi-segment motion blur is enabled by adding the custom RenderMan attribute _motion samples_ to the transformation node of your object (see the Motion Blur HowTo).
So six linear paths to approximate a curved motion.
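To make that concrete, here's a tiny Python toy of my own (not RenderMan code, and treating the motion samples as knots of a piecewise-linear path is just my assumption): a point rotating 90° during the shutter interval, approximated first with only two motion samples (one straight segment) and then with six.

```python
import math

# Hypothetical illustration: a point rotating 90 degrees about the origin
# during one frame (shutter opens at t=0, closes at t=1).
def true_position(t):
    angle = math.radians(90.0) * t
    return (math.cos(angle), math.sin(angle))

def segmented_position(t, num_samples):
    # Store positions at evenly spaced times ("motion samples") and lerp
    # between neighbouring knots, i.e. piecewise-linear motion.
    knots = [true_position(i / (num_samples - 1)) for i in range(num_samples)]
    seg = min(int(t * (num_samples - 1)), num_samples - 2)
    local = t * (num_samples - 1) - seg
    (x0, y0), (x1, y1) = knots[seg], knots[seg + 1]
    return (x0 + (x1 - x0) * local, y0 + (y1 - y0) * local)

def max_error(num_samples, steps=1000):
    # Worst-case distance between the true curved path and its linear approximation.
    worst = 0.0
    for i in range(steps + 1):
        t = i / steps
        tx, ty = true_position(t)
        ax, ay = segmented_position(t, num_samples)
        worst = max(worst, math.hypot(tx - ax, ty - ay))
    return worst

print("2 motion samples (plain linear blur):", round(max_error(2), 4))   # ~0.29
print("6 motion samples (multi-segment):    ", round(max_error(6), 4))   # ~0.01
```

With two samples the blur streak cuts straight across the arc; with six it hugs the curve far more closely, which is why fast-rotating things (wheels, rotors) need the extra segments.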
 
Even if linear, the fact that they actually create multiple samples in space along the path of motion allows for much more accurate shading of such a moving object. Were shadows motion blurred too in Toy Story? They did use shadow maps, if I'm not mistaken... How does that work? Onion-skinned depth maps?
 
The detail in that article is quite a different message:

PlayStation 2, though, is claiming to be able to handle 50 times more 3-D image data than the Dreamcast, allowing it to create characters similar in appearance to those in the Walt Disney film "Toy Story."
This only talks about characters, not whole games or image quality. It seems a single sound bite was turned into a super-hyped message by word of mouth.
 
Man, those were exciting times. Back then you expected console manufacturers to come up with state-of-the-art technologies nobody had seen before. The hype was great. Graphics were not mature, manufacturers were experimenting with various technologies and approaches to reach certain results... everything was a great unknown, an experiment to reach new heights and new standards in physics and visual detail.
These days you just know what you are going to get because you already have it on PC. Everything is known, and the approach is mostly brute-forcing power to make the existing effects work better.
 
Even if linear, the fact that they actually create multiple samples in space along the path of motion allows for much more accurate shading of such a moving object. Were shadows motion blurred too in Toy Story? They did use shadow maps, if I'm not mistaken... How does that work? Onion-skinned depth maps?

Yeah they've used shadow maps for a looong time but PRMan rendered those as well :) So I guess it was possible to include motion blur there as well somehow.

Sometime in the early 2000s the common practice was to cache all maps to disk and re-use them while tuning the shaders, lights etc.
I did some really out there stuff on our Warhammer trailer and intro with the shadows :)
Covering a large area with 2K-4K maps is very hard, and the character shadows would still look blurry and ugly. So I had the key light use multiple shadow cameras: one to cover the entire ground and one tracking each character, adjusted with per-object inclusion/exclusion, and composited the whole thing together. I also had to manage min/mid/max distance shadow bias, filtering, etc.
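For anyone curious, the compositing idea boils down to something like this rough Python sketch (a hypothetical toy of my own, definitely not the actual MTOR/RSL setup; the 'project', 'depth_map' and 'bias' fields are made-up names): each shadow camera does its own biased depth test, and where the maps overlap the darkest result wins.

```python
# Toy sketch of the idea described above (not the production setup): a key light
# with several shadow cameras -- one wide map covering the ground plus a tighter
# map tracking each character -- each with its own depth bias, composited by
# letting the darkest (most occluded) result win per shading point.

def shadow_lookup(depth_map, uv_depth, bias):
    """Classic shadow-map test: 1.0 = fully lit, 0.0 = in shadow."""
    u, v, point_depth = uv_depth
    stored_depth = depth_map[(u, v)]           # nearest occluder seen by this shadow camera
    return 1.0 if point_depth <= stored_depth + bias else 0.0

def composite_shadow(shading_point, shadow_cameras):
    visibility = 1.0
    for cam in shadow_cameras:
        uv_depth = cam["project"](shading_point)   # (u, v, depth), or None if outside this map
        if uv_depth is None:
            continue
        vis = shadow_lookup(cam["depth_map"], uv_depth, cam["bias"])
        visibility = min(visibility, vis)          # darkest map wins where they overlap
    return visibility

# Minimal usage with one-texel "depth maps":
ground_cam = {"project": lambda p: (0, 0, p[2]),
              "depth_map": {(0, 0): 5.0}, "bias": 0.01}
char_cam   = {"project": lambda p: (0, 0, p[2]) if p[0] < 1.0 else None,
              "depth_map": {(0, 0): 2.0}, "bias": 0.001}
print(composite_shadow((0.5, 0.0, 3.0), [ground_cam, char_cam]))  # 0.0 -> shadowed by the character map
```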

Today we can just raytrace, and it even saves disk space. Yaaay :) too bad I'm not doing any lighting anymore :)

Anyway, the point is that you had a lot of options to manipulate shadow maps and how they were used. We had MTOR (Maya to RenderMan), which featured a node-based editor; but if you were able to write your own shader code, you could do even more.

Edit: I'm not entirely sure if the first versions of PRMan had actual built-in support for motion blurring the shadows of fast-moving objects; but considering that it's a relatively basic requirement for movie-quality visual effects, it seems logical. I'm sure they had it sooner rather than later.
 
SUN machines???
What are these???

Dude you can't be that young :)

SUN was a developer of UNIX based machines, using RISC processors, and thus a lot of early computer graphics software was written for this platform as well. Another early platform was DEC and their Alpha based systems.
Silicon Graphics came a little later to the game as far as I know, being specialized in graphics instead of general high-end computing.

Eventually most of these companies were beaten by PC-based hardware that became just as powerful (with 64-bit support and Linux) while undercutting their prices through economies of scale. I remember it was really big news when ILM transitioned their artists from SGI systems to P4-based PCs...

Sun was bought by Oracle in 2010 or so. DEC was first bought by Compaq (?) and then HP bought Compaq a few years later...
 
SUN was a developer of UNIX based machines, using RISC processors, and thus a lot of early computer graphics software was written for this platform as well. Another early platform was DEC and their Alpha based systems.
Silicon Graphics came a little later to the game as far as I know, being specialized in graphics instead of general high-end computing.
Sun was founded on Motorola 68k before RISC (SPARC) was a thing. :yep2: But yeah, funny how quickly industry pioneers become names many people have never heard of.
 
Man, I actually know what SUN was - gimmie a break here :D

But yeah, it's a real wake up call to see how fast the history of an entire industry (or maybe even more than one) gets lost.

For example, I guess a lot of people would have no idea about the early "smartphones" if they weren't featured in Steve Jobs' iPhone introduction talk. I guess most people on this very board have very little idea about what 3dfx was...
 
Dude you can't be that young :)

SUN was a developer of UNIX based machines, using RISC processors, and thus a lot of early computer graphics software was written for this platform as well. Another early platform was DEC and their Alpha based systems.
Silicon Graphics came a little later to the game as far as I know, being specialized in graphics instead of general high-end computing.

Eventually most of these companies were beaten by PC-based hardware that became just as powerful (with 64-bit support and Linux) while undercutting their prices through economies of scale. I remember it was really big news when ILM transitioned their artists from SGI systems to P4-based PCs...

Sun was bought by Oracle in 2010 or so. DEC was first bought by Compaq (?) and then HP bought Compaq a few years later...
I am over 30 years old, but I had no idea what was going on back then in the industry. Perhaps it's because of where I lived.
Thanks for the info though :)
 
The detail in that article is quite a different message:

This only talks about characters, not whole games or image quality. It seems a single sound bite was turned into a super-hyped message by word of mouth.

I agree with you about the written content of the article. The headline is the problem: someone just skimming it could easily get the impression that the PS2 had Toy Story graphics without reading the article. Meh. You wanted an example; that's the best I can find.

I do recall the MS references to Toy Story with regards to the Xbox. What I really remember about the PS2 hype are the numbers, which were completely soul-crushing to any Sega fan rooting for the Dreamcast, like when some kid dished out the Emotion Engine specs on the DCTP forum, saying his uncle was an engineer working on it. And sure enough, the info he gave was accurate. I wonder if anyone here remembers this from that time.

Lazy8's, care to chime in?
 
The point was, if this game can't even get close to a 20+ year old CGI film, then how in the world is it "Pixar quality graphics"? For the record, those Ratchet and Clank captures don't look anything like modern CGI at all.
?
Can't even get close?

Here's the problem some people don't get: just because there are a few rough shapes here and there doesn't mean that a slight increase in polycount couldn't iron those out. There are many games with round objects that remain smooth even when the camera zooms in as far as the game allows.

There are plenty of UE4 demos that can look pretty much photoreal.
old CGI vs realtime graphics:

[Image: conclusion+2.jpg]


It can feel like it, but technically it's far from it.

Could be, but sometimes newer algorithms do things that were impossible with older techniques, or reproduce real-world phenomena better and more efficiently than older methods. In some fields it's said that algorithmic advances have allowed over four orders of magnitude in speed-ups.

I don't mind coming back to this topic every now and then. Seeing how close real time is to 90's CGI, or how much better it can actually be in some aspects (artists' know-how, shading, PBR...), is a healthy and interesting discussion.
What's boring is how every single time there's half a dozen smart-asses telling us how the actual poly counts of the models are higher, individual strands of hair, higher-res textures, better AA, etc... (yawn) As if a freaking registered user of Beyond3D wouldn't know that. It's not only a futile observation, it's insultingly condescending. The intentions of the OP of these kinds of threads should be obvious to any sane person who isn't too busy waiting to show how smartedy-smart they are and how much they know about how Catmull-Clark subdivision surfaces work in REYES.
Then what usually happens is that most of the thread's lifetime is spent in this discussion over semantics and criteria, and it dies before anything intelligent comes out of it.

The problem here is, say you have a sphere with 1 million polys; for most practical purposes, what will one with a billion or a trillion offer? It's wasteful in most cases, like saying you rendered the scene at 1 quadrillion x 1 quadrillion pixels. What advantage does that have over rendering at 1 million x 1 million pixels? If the difference isn't perceptible, it's just wasteful and inefficient.



Still, they pretty much agree with what I and a few others are saying here. Compared to the first Toy Story, you can see the improvements and how the results can look pretty similar, considering the imperfections of real-time rendering. But from the second one on, CGI just went far ahead of what we can even fake these days.
Still, considering a rendering time of 33 ms compared to unlimited rendering time, current real time is very impressive.
Give it a few more generations and we will pretty much be passing for photoreal in real time.

Even big-budget movies can have awful CGI that looks more fake than some real-time content.

Tech aside, if I showed my girlfriend games like The Order or UC4, she would say they all look better than the old Toy Story movie.
Rightfully so.

Having a bit more smoothness on a few surfaces or a bit better IQ is not that big a deal... when, at 5-6 ft, the imperfections of UC4 are hard to notice, and in some areas it might very well seem imperfection-free.
 