What's the current status of "real-time Pixar graphics"?

Re: What's the current status of "real-time Pixar graphics"?

mrbill said:
Daliden said:
There even was a demonstration of real-time rendering, albeit at a low resolution and no AA.

Also albeit with baked, not procedural, textures. Albeit with simplified geometry. Albeit with no motion blur. You are talking about the Luxo, Jr. demonstration at MacWorld in 2001?

Actually, I was talking about Siggraph in the year Geforce 2 was launched.

http://www.tech-report.com/etc/2002q3/nextgen-gpus/index.x?pg=2

This had to do with Mark S. Peercy & Co.'s paper titled "Interactive Multi-Pass Programmable Shading". It seems I misremembered this one -- it wasn't realtime. But the card used was a GeForce 2! Surely the modern cards could do the same much, much faster.

mrbill said:
Daliden said:
Time has passed, but has there been any progress in this field?

Yeah, at Siggraph 2003 I showed a toy ball procedurally shaded in real time (>60 fps) with one (not three) lights. On the plus side, it *was* motion blurred (by an ad-hoc procedural technique, which was simplified for a spinning ball, not a squashing/stretching/bouncing ball. But the method can be generalized to the full solution.) And the geometry, while simplified, was an accurate sphere to subpixel precision.

On the other hand, does it have to be a general solution? OK, a general solution would be nice, but couldn't you also have a lot of specialized solutions and use whichever is relevant?

mrbill said:
So, Reality Check: Toy Story 2 (and Toy Story before it) was rendered on the order of 1/1,000,000 real-time. (See Tom Duff's posting to comp.graphics.rendering.renderman, http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=3909BD4B.A107CD05@pixar.com ) Just let Moore's Law (no imagined cubes required) work and a uniprocessor PC will manage it in real-time 30 years later. Toy Story was released in 1995, so software rendering of Toy Story in real-time on a single-processor PC should be possible around 2025.

Assume hardware rendering is on the order of 1,000 times faster than software rendering. That puts real-time "Toy Story" 15 years after its release, or 2010, still seven years from now.
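
(Working through the arithmetic behind those numbers - just a back-of-the-envelope check, assuming the usual 18-month doubling:)

$$ 2^{20} \approx 10^{6} \;\Rightarrow\; 20 \times 1.5\,\text{years} = 30\,\text{years}, \qquad \frac{10^{6}}{10^{3}} = 10^{3} \approx 2^{10} \;\Rightarrow\; 10 \times 1.5\,\text{years} = 15\,\text{years}. $$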

Well, do you necessarily have to use a single GPU? Doesn't ATI claim they can run 256 Radeons in parallel? Or that it would be possible, at least? Do the GPU renderers have to repeat each step exactly as the renderfarm does it, or can different methods be used that produce an identical result?

mrbill said:
Reality Check Squared: Luxo, Jr. was rendered in 1986. Add 15 years = 2001. Oops, nobody could do Luxo, Jr. in real time in 2001, and still nobody has done so in 2003.

Has someone tried it recently? I think there are some copyright issues there, anyway . . . *could* someone do it if they wanted to, I mean, legally? Then again, that's sidetracking . . . duplicating the shading techniques used would suffice, I guess. Wouldn't necessarily do much good for us regular Joes who don't really understand the underlying nuts and bolts, though.

mrbill said:
There isn't any question it will happen someday. Real-time is getting damn good, but it's still got some way to go. But by the time we get there, we'll still have more to go to catch up with today's films, let alone with tomorrow's.

In the meantime, we keep dreaming.

-mr. bill

We most certainly will.

Hm, sorry if the above seems like a third-degree or some kind of flame. This is just a bloody interesting subject -- at least to me!
 
Very interesting thread, and a good laugh with the classic Google post (BTW, back then, everyone and his mother was heralding "Hollywood quality graphics").

A few things :
- I think Square is using the PS2 "Cube" (or something like that) for some rendering ?
- Nvidia demonstrated a single character (no background) of Final Fantasy "The Movie" (using the term "movie" loosely here) running at around 10FPS on a GF4. That's not really "real-time" yet, but that's mighty impressive already...
- Of course, one of the difficulties in "Pixar-like animation" is that "Pixar-like animation" is actually done at a huge resolution, using levels of AA we can only dream of
 
CorwinB said:
- Nvidia demonstrated a single character (no background) of Final Fantasy "The Movie" (using the term "movie" loosely here) running at around 10FPS on a GF4. That's not really "real-time" yet, but that's mighty impressive already...

I can bet that the poly-count wasn't anywhere near what they used in the movie. I would be hard-pressed to believe the shaders were implemented exactly as in the movie either.

- Of course, one of the difficulties in "Pixar-like animation" is that "Pixar-like animation" is actually done at a huge resolution, using levels of AA we can only dream of

Hehehe, that's not even considering all the work involved with the shaders!

-M
 
I would like to state (in my opinion only) that only a couple of shaders done with hardware have impressed me. Tron 2.0's glow shader in particular is very impressive indeed (along with the HDR implementation of Paul Debevec's paper). If we could get that shader into Maya, it would allow us to see glows in realtime without rendering a post-process layer of the object with glow just to see our results when tweaking the shaders.
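
Roughly, the idea behind that kind of glow pass is bright-pass-and-blur, then add it back over the frame. Here's a sketch of the general approach (not the actual Tron 2.0 shader - the texture names and the tiny box kernel are made up for illustration):

```glsl
// Sketch of a generic glow/bloom composite pass, NOT the Tron 2.0 shader.
// Assumes the scene and a "bright pass" of it were already rendered to
// textures; names and kernel size are illustrative only.
uniform sampler2D sceneTex;   // the rendered frame
uniform sampler2D brightTex;  // bright areas only (the glow sources)
uniform vec2 texelSize;       // 1.0 / resolution
varying vec2 uv;

void main()
{
    vec3 scene = texture2D(sceneTex, uv).rgb;

    // naive 5x5 box blur of the bright pass; a real implementation would
    // use separable Gaussian passes on a downsampled buffer
    vec3 glow = vec3(0.0);
    for (int x = -2; x <= 2; x++)
        for (int y = -2; y <= 2; y++)
            glow += texture2D(brightTex, uv + vec2(float(x), float(y)) * texelSize).rgb;
    glow /= 25.0;

    // additive composite: the blurred bright pass brightens the frame
    gl_FragColor = vec4(scene + glow, 1.0);
}
```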

Very nice indeed..:)

-M
 
Mr. Blue said:
I would like to state (in my opinion only) that only a couple of shaders done with hardware have impressed me. Tron 2.0's glow shader in particular is very impressive indeed (along with the HDR implementation of Paul Debevec's paper). If we could get that shader into Maya, it would allow us to see glows in realtime without rendering a post-process layer of the object with glow just to see our results when tweaking the shaders.

Very nice indeed..:)

-M

Is there any fundamental difference between software shaders and hardware shaders? It was my understanding that any software shader can be implemented with hardware (OK, it might need several passes, and perhaps the current FP precision isn't always sufficient).

Perhaps the strict real-time constraints of games have prevented nifty shaders from being generated on hardware, because currently all the fancy stuff has to be done with software.
 
The main limitation of current hardware shaders is that you cannot do conditional branching or function calls, and you are restricted to a rather small number of instructions. VS/PS3.0 (DirectX9.1) and GLSlang (OpenGL2.0) will add branching and limited support for function calls (limited call stack, no function pointers, no recursion) and raise (but not eliminate) the maximum instruction counts. Also, you have only a fixed set of registers available to hold your variables - if you need more variables than the registers can hold, the shaders won't spill registers to memory, but simply fail to compile altogether.
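
For illustration, this is roughly the kind of fragment shader that real branching and looping would make possible (a sketch only - the light arrays are made up, and today's PS2.0-class hardware would have to unroll or multi-pass it):

```glsl
// Hypothetical GLSlang-style fragment shader relying on a data-dependent
// loop and branch - the sort of thing VS/PS3.0-class hardware promises.
uniform int numLights;          // how many of the 8 slots are actually used
uniform vec3 lightDir[8];
uniform vec3 lightColor[8];
varying vec3 normal;

void main()
{
    vec3 n = normalize(normal);
    vec3 result = vec3(0.0);

    for (int i = 0; i < 8; i++)
    {
        if (i >= numLights)     // data-dependent branch: stop early
            break;
        float d = max(dot(n, normalize(lightDir[i])), 0.0);
        result += d * lightColor[i];
    }

    gl_FragColor = vec4(result, 1.0);
}
```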
 
Keep in mind that "Pixar quality" is a moving target - Toy Story was nice in 1995, but TS2 was loads better and more complex, not to mention Monsters or Nemo.

The most important feature left to implement IMHO is hardware acceleration for subpixel displacement mapping and tessellation of subdiv/NURBS surfaces.
Shaders seem to only need performance enhancements from now on.
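
To illustrate the displacement part: once vertex shaders can fetch from textures, hardware displacement might look roughly like this (just a sketch - the uniforms are made up, nothing shipping today runs it, and the fine tessellation itself would still have to come from somewhere):

```glsl
// Hypothetical displacement in a GLSL vertex shader, assuming vertex
// texture fetch and an already finely tessellated mesh.
uniform sampler2D heightMap;
uniform float displaceScale;

void main()
{
    // read the displacement height for this vertex (explicit LOD 0,
    // since there is no automatic mip selection in a vertex shader)
    float h = texture2DLod(heightMap, gl_MultiTexCoord0.st, 0.0).r;

    // push the vertex out along its normal
    vec4 displaced = gl_Vertex + vec4(gl_Normal * h * displaceScale, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * displaced;
}
```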

Hmm, perhaps you should add robust depth-mapped shadows. Although the RTS Panzers is already using perspective shadow maps AFAIK...
 
Re: What's the current status of "real-time Pixar graphics"?

Daliden said:
Actually, I was talking about Siggraph in the year Geforce 2 was launched.

http://www.tech-report.com/etc/2002q3/nextgen-gpus/index.x?pg=2

This had to do with Mark S. Peercy & Co.'s paper titled "Interactive Multi-Pass Programmable Shading". It seems I misremembered this one -- it wasn't realtime. But the card used was a GeForce 2! Surely the modern cards could do the same much, much faster.
You can find the paper at http://www.csee.umbc.edu/~olano/papers/ips/ips.pdf . But it didn't use a GeForce 2; the interactive demo was done on an Octane/MXI. Conceptually, ISL could be done on a GeForce 2 (or a Radeon), but the RenderMan shaders on multipass OpenGL needed two more extensions that were not yet implemented. But they are now!

For recent progress in the area, see some of the ASHLI papers and presentations (you can find them on the developer site at ATI), and also the uberlight I spoke of is found in Jason Mitchell's presentation "ATI R3x0 Pixel Shaders", where he ported the RenderMan shader to HLSL. Marc Olano also showed work on realtime shaders with an Onyx4 UltimateVision system, but I haven't found his presentation online yet.

Daliden said:
On the other hand, does it [procedural motion blur] have to be a general solution? OK, a general solution would be nice, but couldn't you also have a lot of specialized solutions and use whichever is relevant?
I'd love to know the answer to that! Procedural spatial anti-aliasing has come to be a requirement. But procedural temporal anti-aliasing died in the early days of shading because such shaders needed to be time aware. The solution back then was to sample the shader at multiple times, so the shader writer didn't have to worry about time, the shading system did. This solved the problem - but at a cost. And the realtime equivalent has been the accumulation buffer, also sampling at multiple times, but this also comes at a significant cost.

But for high quality realtime maybe the cost savings of procedural temporal anti-aliasing will be worth the added complexity of the shaders? Don't know yet.
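
To give a flavour of what a time-aware shader could look like in the realtime world (purely a sketch - not the shader from my demo; the stripe pattern, spin rate and sample count are all made up):

```glsl
// Sketch of in-shader temporal anti-aliasing: evaluate a procedural
// stripe pattern at several times inside the shutter interval and average.
uniform float frameTime;   // time at shutter open
uniform float shutter;     // shutter interval in seconds
uniform float spinRate;    // how fast the pattern moves, in cycles per second
varying vec2 uv;           // parametric coordinates on the surface

void main()
{
    const int SAMPLES = 8;
    vec3 accum = vec3(0.0);

    for (int i = 0; i < SAMPLES; i++)
    {
        // stratified sample time within the shutter interval
        float t = frameTime + shutter * (float(i) + 0.5) / float(SAMPLES);

        // a toy procedural stripe, advected by the spin
        float s = fract(uv.x + spinRate * t);
        vec3 col = (s < 0.5) ? vec3(1.0, 0.2, 0.1) : vec3(0.9);
        accum += col;
    }

    gl_FragColor = vec4(accum / float(SAMPLES), 1.0);
}
```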

Daliden said:
Hm, sorry if the above seems like a third-degree or some kind of flame. This is just a bloody interesting subject -- at least to me!
Doesn't come across as a third-degree or a flame. I find the subject completely interesting as well.

My only point is we are a *long* way from sanding the underside of the drawers in realtime. We can run some significant shaders now in realtime, a glimpse of what's coming. But we still can't even do the early short films in realtime, let alone an early feature film.

Oh, but there is one thing I shouldn't fail to mention:

Skint said:
Any chance of shading language being made available for OpenGL? :eek:)
A very good chance indeed, the OpenGL Shading Language! BTW, the first time someone asked was almost a decade ago. See http://groups.google.com/groups?sel...6@newsgate.sps.mot.com&oe=UTF-8&output=gplain . OpenGL 1.0 implementations had *just* begun shipping.

-mr. bill
 
Laa-Yosh said:
Keep in mind that "Pixar quality" is a moving target - Toy Story was nice in 1995, but TS2 was loads better and more complex, not to mention Monsters or Nemo.

The most important feature left to implement IMHO is hardware acceleration for subpixel displacement mapping and tessellation of subdiv/NURBS surfaces.

I agree here. Hardware boards need a significant speed boost in generating geometry that approximates curved surfaces well. This low LOD for models in games has been annoying for years..:(

I won't even get into geometry shaders then...;)

-M
 
Daliden said:
Is there any fundamental difference between software shaders and hardware shaders? It was my understanding that any software shader can be implemented with hardware (OK, it might need several passes, and perhaps the current FP precision isn't always sufficient).

Most of the shaders we deal with on a day-to-day basis can't be done in realtime, nor will they be for quite some time. The sheer fact that many TDs combine shaders (i.e. one shader calls another shader for an input value) to create complex looks would make them daunting to do in realtime. Some of the simple shaders in the demos that have been done are "fair" approximations (for example, the elephant with the wood shader) at best.


-M
 
Hi, all.

I never did any 3D graphics programming, and I have never worked with 'real' renderers like Renderman. But I would like to comment on this topic anyway.

:D

It seems clear that current graphics hardware renders things quite differently than the renderfarms do. And that those cards could not execute such a program in real time, let alone ray-trace a whole scene.

But some of you commented that things like ray-tracing are avoided by renderfarms if possible, because they take so much time. And that those programs render things in fixed-point format.

So, some of the things a graphic card cannot do are avoided by the renderfarms as well, and there are even things the cards do better, like using floating point.

While you cannot translate those renderprograms directly, why would you do that in the first place? Because it wouldn't change the way it is done at the moment? And you can run the same programs on the shaders of the cards? That would be good and fast, but not real-time. The cards haven't got the memory to use the resources needed for such a program anyway.

If we look at it from the opposite side, could those cards approximate the quality of Toy Story by using the things they do well? I think so.

For example, if we want to render skin, I was thinking you could do that by duplicating the object and scaling one of them a tiny bit, and giving the outermost one a semi-transparent surface. Not 'exact', but I think it would look quite realistic.

Or curves. They look quite bad with polygons. But you could use the shader variant of displacement- or bump-maps to soften them up.

And when you make a movie, you always know what is visible and what is not. So you could optimize things by removing all objects that aren't visible anyway. And you know the bandwidth and don't have to run game logic, so you can use the CPU to make sure the GPU renders as optimally as possible.

And it all depends on the definition of photo-realistic or Pixar-quality graphics anyway. Even things like motion blur can be done by the shaders (as some demos demonstrate), or by brute force.

If we don't try, we don't know. Did anyone try to render a scene on a 9800 in an optimal way and compare the output to that of a renderfarm?

I am truly curious.
 
DiGuru said:
Hi, all.

I never did any 3D graphics programming, and I have never worked with 'real' renderers like Renderman. But I would like to comment on this topic anyway.

Uh-oh.:)


But some of you commented that things like ray-tracing are avoided by renderfarms if possible, because they take so much time. And that those programs render things in fixed-point format.

I don't know who told you that, but floating point is used all the time in VFX houses, where integration of a real scene and a synthetic scene is required. Most rendering is done in or converted to floating point.

So, some of the things a graphic card cannot do are avoided by the renderfarms as well, and there are even things the cards do better, like using floating point.

The bottom line is that a renderfarm can implement any algorithm or technique simply because it can be programmed. Some of the things that are avoided on a renderfarm are avoided only because of time constraints in making a film - not because they can't physically be done yet, as is the case with a 3D accelerator.


While you cannot translate those renderprograms directly, why would you do that in the first place? Because it wouldn't change the way it is done at the moment? And you can run the same programs on the shaders of the cards? That would be good and fast, but not real-time. The cards haven't got the memory to use the resources needed for such a program anyway.

Not sure I understand you on this.


If we look at it from the opposite side, could those cards approximate the quality of Toy Story by using the things they do well? I think so.

By your argument you could say that 3D hardware is there now to implement Toy Story, since a good "approximation" would be to just place texture maps on everything! :) The reality of it is that there is a lot more going on in these houses - things that take a team of people and fully programmable CPUs to implement - that can't be done on a 3D accelerator (yet).


For example, if we want to render skin, I was thinking you could do that by duplicating the object and scaling one of them a tiny bit, and giving the outermost one a semi-transparent surface. Not 'exact', but I think it would look quite realistic.

This wouldn't look right since there is no interaction between the light and the eye. You need a fresnel effect there somewhere and skin has many layers, not just 2.
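
Just to give a flavour of the fresnel part alone, here's a rough sketch using Schlick's approximation (the constants are made up, and this is nowhere near a real skin shader - no subsurface scattering, no layering):

```glsl
// Fresnel rim term only (Schlick's approximation) - a toy sketch,
// not a production skin shader.
uniform vec3 skinColor;
varying vec3 normal;
varying vec3 viewDir;   // surface-to-eye vector, passed from the vertex shader

void main()
{
    vec3 n = normalize(normal);
    vec3 v = normalize(viewDir);

    float f0 = 0.028;   // rough reflectance of skin at normal incidence (illustrative)
    float fresnel = f0 + (1.0 - f0) * pow(1.0 - max(dot(n, v), 0.0), 5.0);

    // grazing angles pick up more reflection, so the rim brightens
    vec3 color = mix(skinColor, vec3(1.0), fresnel);
    gl_FragColor = vec4(color, 1.0);
}
```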


Or curves. They look quite bad with polygons. But you could use the shader variant of displacement- or bump-maps to soften them up.

A variant of displacement mapping? Like what? Bump-mapping will not solve the problem, because silhouette edges will still give away the lower-LOD model. In order for displacement mapping to work properly, your mesh still needs to be tessellated finely enough to get the proper detail from the map. Again, that brings us back to curves as the solution to the problem.


And when you make a movie, you always know what is visible and what is not. So you could optimize things by removing all objects that aren't visible anyway. And you know the bandwidth and don't have to run game logic, so you can use the CPU to make sure the GPU renders as optimally as possible.

I'm sure they do this now in games..:)


If we don't try, we don't know. Did anyone try to render a scene on a 9800 in an optimal way and compare the output to that of a renderfarm?

I am truly curious.

Not there yet me friend..;)

-M
 
Thanks, Mr. Blue.

:D

Sorry if I get this wrong again, but do you take the shaders into account? As far as I understood it, you can use them to create the same effects as you mention, as long as you can cram the function into the program space they have and you don't use conditional branches in the pixel shaders.

And given how much overdraw there is in games (as they mostly just dump the whole scene to the graphics card and use the CPU to do the game logic and AI), surely that could be improved if you wanted to?

Just so I know, how could you render skin on a graphics card? Can you do that fresnel effect with a shader? Put a few textures with blood vessels etc. on the innermost layer, use that same map for a little bump-mapping, and use the shader to create the effect. Would that work? Or if it wouldn't, how would you do it, given the limitations of the cards?

btw. Do the render programs use actual curves instead of polygons?

EDIT: The Caves screensaver from ATi does a really nice effect that makes objects look as if viewed through hot air. I had to look three times before I understood that it was intended. And if you see the reflection of the car on the bumped metallic floor in the car demo, you have to look twice before you see what is happening. I just hadn't seen those things before; I just wondered if it was 'broken'. :D
 
DiGuru said:
Thanks, Mr. Blue.

:D

Sorry if I get this wrong again, but do you take the shaders into account? As far as I understood it, you can use them to create the same effects as you mention, as long as you can cram the function into the program space they have and you don't use conditional branches in the pixel shaders.

But that's just it. You can't cram the more complicated shaders (the ones used for production) into a few registers with limited conditional branching.


And given how much overdraw there is in games (as they mostly just dump the whole scene to the graphics card and use the CPU to do the game logic and AI), surely that could be improved if you wanted to?

The graphics card does not "see" the entire scene - only a polygon at a time within its view frustum. With raytracing, you see the entire scene (which is what makes it difficult to develop with in a package like Maya).


Just so I know, how could you render skin on a graphics card? Can you do that fresnel effect with a shader? Put a few textures with blood vessels etc. on the innermost layer, use that same map for a little bump-mapping, and use the shader to create the effect. Would that work? Or if it wouldn't, how would you do it, given the limitations of the cards?

I don't know how to do this. All I know is that a true skin shader takes into account a lot of factors which can't be simulated on 3d hardware right now.


btw. Do the render programs use actual curves instead of polygons?

Yes, but every renderer still renders with polygons to display models, so ultimately curves must be tessellated. I've seen files of just hair models that are over 100MB in size!! :)


EDIT: The Caves screensaver from ATi does a really nice effect that makes objects look as if viewed through hot air.

Not meaning any ill towards whoever wrote that screensaver, but it's just an approximation and doesn't look that good.. :( There are some other demos of 3d hardware that look much better..(i.e. HDR and Tron's glow come to mind).

-M
 
CorwinB said:
Very interesting thread, and a good laugh with the classic Google post (BTW, back then, everyone and his mother was heralding "Hollywood quality graphics").

A few things :
- I think Square is using the PS2 "Cube" (or something like that) for some rendering ?
- Nvidia demonstrated a single character (no background) of Final Fantasy "The Movie" (using the term "movie" loosely here) running at around 10FPS on a GF4. That's not really "real-time" yet, but that's mighty impressive already...
- Of course, one of the difficulties in "Pixar-like animation" is that "Pixar-like animation" is actually done at a huge resolution, using levels of AA we can only dream of

Nah, that wasn't the same quality as the movie. I could see the lack of triangles in the model from the screenshot, as well as the lack of detail in the textures.
 
Mr. Blue said:
DiGuru said:
Thanks, Mr. Blue.

:D

Sorry if I get this wrong again, but do you take the shaders into account? As far as I understood it, you can use them to create the same effects as you mention, as long as you can cram the function into the program space they have and you don't use conditional branches in the pixel shaders.

But that's just it. You can't cram the more complicated shaders (the ones used for production) into a few registers with limited conditional branching.

Yes, I understand that part. That's why I made the comment that I don't know why you would want to translate those functions directly into shader programs. It would be as it is in the renderfarms, but we could probably come up with a program that is almost as good. If the public cannot see the difference when looking at the movie, it would be good enough.


And given how much overdraw there is in games (as they mostly just dump the whole scene to the graphics card and use the CPU to do the game logic and AI), surely that could be improved if you wanted to?

The graphics card does not "see" the entire scene - only a polygon at a time within its view frustum. With raytracing, you see the entire scene (which is what makes it difficult to develop with in a package like Maya).

Yes, it only has a collection of objects, consisting of vertices to be transformed according to rules. (btw. I read a pdf, describing that raytracing could be done on current videocards, albeit not real time, of course.) But does that really matter? We wouldn't do raytracing anyway.


Just so I know, how could you render skin on a graphics card? Can you do that fresnel effect with a shader? Put a few textures with blood vessels etc. on the innermost layer, use that same map for a little bump-mapping, and use the shader to create the effect. Would that work? Or if it wouldn't, how would you do it, given the limitations of the cards?

I don't know how to do this. All I know is that a true skin shader takes into account a lot of factors which can't be simulated on 3d hardware right now.

Well, I haven't got the slightest idea how it is done in something like Renderman. But if you make a nice diffuse filter, the method I described could look very nice, wouldn't you agree? And it could be done by a 9800 in real-time.


btw. Do the render programs use actual curves instead of polygons?

Yes, but every renderer still renders with polygons to display models, so ultimately curves must be tessellated. I've seen files of just hair models that are over 100MB in size!! :)

I know a current videocard hasn't got the memory to use resources like that. But only large 'corners' are seen in a fluid movie. And the current cards can use plenty of vertices. To 'smooth' the edges, we can also use AA (if they are small enough) and/or overlay an edge and smooth that.

And I have seen some beautiful demos that show hair and fur. Not as nice as in Monsters, Inc., but very nice all the same. And it runs great on a 9600 as well, so a 9800 could do a lot better.


EDIT: The Caves screensaver from ATi does a really nice effect that makes objects look as if viewed through hot air.

Not meaning any ill towards whoever wrote that screensaver, but it's just an approximation and doesn't look that good.. :( There are some other demos of 3d hardware that look much better..(i.e. HDR and Tron's glow come to mind).

-M

Yes. :D But even Tron needs to run on older hardware as well. The shaders they use are among the simplest possible. I would very much like to see what a 9800 REALLY can do!
 
DiGuru said:
Yes, it only has a collection of objects, consisting of vertices to be transformed according to rules. (btw. I read a pdf, describing that raytracing could be done on current videocards, albeit not real time, of course.) But does that really matter? We wouldn't do raytracing anyway.

We do need raytracing (or rather, at least raycasting). It won't go away - trust me. Our film industry wants to implement the most accurate models of physical behaviour that we can, and we'll need some form of raytracing or raycasting for that. True volumetrics are almost always done by stepping down a ray and accumulating a density.
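
In shader terms, the idea is roughly this (a toy sketch only - density() here is a made-up stand-in for whatever actually defines the volume, and the step count is arbitrary):

```glsl
// Toy ray march: step down a ray and accumulate density into an opacity.
uniform vec3 rayOrigin;
varying vec3 rayDir;    // per-pixel ray direction from the vertex shader

float density(vec3 p)
{
    // placeholder volume: a soft spherical blob of fog around the origin
    return max(1.0 - length(p), 0.0);
}

void main()
{
    const int STEPS = 32;
    const float stepSize = 0.1;

    vec3 p = rayOrigin;
    vec3 dir = normalize(rayDir);
    float accum = 0.0;

    for (int i = 0; i < STEPS; i++)
    {
        accum += density(p) * stepSize;   // accumulate optical depth along the ray
        p += dir * stepSize;
    }

    float alpha = 1.0 - exp(-accum);      // convert optical depth to opacity
    gl_FragColor = vec4(vec3(1.0), alpha);
}
```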

Well, I haven't got the slightest idea how it is done in something like Renderman. But if you make a nice diffuse filter, the method I described could look very nice, wouldn't you agree? And it could be done by a 9800 in real-time.

It's not as simple as you claim.

And I have seen some beautiful demos that show hair and fur. Not as nice as in Monsters, Inc., but very nice all the same. And it runs great on a 9600 as well, so a 9800 could do a lot better.

You haven't really seen some really good hair, otherwise you wouldn't say that the 3D hardware demos were "beautiful".. Hehehehe. Pay attention to some of the 3D feature films coming next year.. :)

Yes. :D But even Tron needs to run on older hardware as well. The shaders they use are among the simplest possible. I would very much like to see what a 9800 REALLY can do!

Re-rendering a whole scene a number of consecutive times may be trivial, but it isn't until now that we've had the bandwidth for it. It's a post-process shader that is very, very nice and mimics similar results from Maya's own post-process rendering.

I happened to be at this year's Game Developers Conference to see the talk about the technology, and I highly respect its results.

The only other features that I'm looking forward to in the next gen of cards are bump-mapping (which is long overdue), HDR, and real light and shadow interaction. I would like to see displacement mapping, but I fear there isn't enough power/bandwidth to put that in a game just yet.

Cheers,

-M
 
Mr. Blue said:
Re-rendering a whole scene a number of consecutive times may be trivial, but it isn't until now that we've had the bandwidth for it. It's a post-process shader that is very, very nice and mimics similar results from Maya's own post-process rendering.

I happened to be at this year's Game Developers Conference to see the talk about the technology, and I highly respect its results.

The only other features that I'm looking forward to in the next gen of cards are bump-mapping (which is long overdue), HDR, and real light and shadow interaction. I would like to see displacement mapping, but I fear there isn't enough power/bandwidth to put that in a game just yet.

Cheers,

-M

I would love to be there as well, but I'm in (boring) workplace management automation. I hope to be in the game business in about a year from now. I'm wondering if I could write a nice 3D game engine myself. It's probably quite a bit harder than I think right now...

:D

btw. I was convinced that current cards like the 9800 do displacement- and bump-mapping in hardware? And that HDR is used to make realistic shadow and light effects? Is that not the case?
 
Mr. Blue said:
DiGuru said:
Thanks, Mr. Blue.

:D

Sorry if I get this wrong again, but do you take the shaders into account? As far as I understood it, you can use them to create the same effects as you mention, as long as you can cram the function into the program space they have and you don't use conditional branches in the pixel shaders.

But that's just it. You can't cram the more complicated shaders (the ones used for production) into a few registers with limited conditional branching.

Hmm, doesn't this contradict Peercy's paper "Interactive Multi-Pass Programmable Shading"? As I understood it, he stated that any RenderMan shader can be broken down into several passes.
 