What's the current status of "real-time Pixar graphics"?

Discussion in 'Architecture and Products' started by Daliden, Sep 18, 2003.

  1. VFX_Veteran

    Regular

    Joined:
    Mar 9, 2002
    Messages:
    683
    Likes Received:
    234
    Hehehehe. I know all about Ice Age...:)

    -M
     
  2. Daliden

    Newcomer

    Joined:
    Sep 18, 2003
    Messages:
    89
    Likes Received:
    0
Re: What's the current status of "real-time Pixar graphics"?

    Actually, I was talking about Siggraph in the year Geforce 2 was launched.

    http://www.tech-report.com/etc/2002q3/nextgen-gpus/index.x?pg=2

    This had to do with Mark S. Peercy & Co's paper titled "Interactive Multi-Pass Programmable Shaders". It seems I misremembered this one -- it wasn't realtime. But the card used was a Geforce 2! Surely the modern cards could do the same much much faster.

    On the other hand, does it have to be a general solution? OK, a general solution would be nice, but couldn't you also have a lot of specialized solutions and use whichever is relevant?

    Well, do you necessarily have to use a single GPU? Doesn't ATI claim they can run 256 Radeons in parallel? Or that it would be possible, at least? Do the GPU renderers have to repeat each step exactly as the renderfarm does it, or can different methods be used that produce an identical result?

    Has someone tried it recently? I think there are some copyright issues there, anyway . . . *could* someone do it if they wanted to, I mean, legally? Then again, that's sidetracking . . . duplicating the shading techniques used would suffice, I guess. Wouldn't necessarily do much good for us regular Joes who don't really understand the underlying nuts and bolts, though.

    We most certainly will.

    Hm, sorry if the above seems like a third-degree or some kind of flame. This is just a bloody interesting subject -- at least to me!
     
  3. CorwinB

    Regular

    Joined:
    May 27, 2003
    Messages:
    274
    Likes Received:
    0
    Very interesting thread, and a good laugh with the classic Google post (BTW, back then, everyone and his mother was heralding "Hollywood quality graphics").

    A few things:
    - I think Square is using the PS2 "Cube" (or something like that) for some rendering?
    - Nvidia demonstrated a single character (no background) of Final Fantasy "The Movie" (using the term "movie" loosely here) running at around 10 FPS on a GF4. That's not really "real-time" yet, but that's mighty impressive already...
    - Of course, one of the difficulties in "Pixar-like animation" is that "Pixar-like animation" is actually done at a huge resolution, using levels of AA we can only dream of.
     
  4. VFX_Veteran

    Regular

    Joined:
    Mar 9, 2002
    Messages:
    683
    Likes Received:
    234
    I can bet that the poly-count was nowhere near what they used in the movie. I would be hard-pressed to believe the shaders were implemented exactly as in the movie, either.

    Hehehe, that's not even considering all the work involved with the shaders!

    -M
     
  5. VFX_Veteran

    Regular

    Joined:
    Mar 9, 2002
    Messages:
    683
    Likes Received:
    234
    I would like to state (in my opinion only) that only a couple of shaders done in hardware have impressed me. Tron 2.0's glow shader in particular is very impressive indeed (along with the HDR implementation of Paul Debevec's paper). If we could get that shader into Maya, it would allow us to see glows in realtime, without rendering a post-process glow layer of the object just to see our results when tweaking the shaders.
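
     That glow is essentially a bright-pass, a blur, and an additive composite. A rough sketch of the idea in Python/NumPy (illustrative only -- not Tron 2.0's actual shader, and the function name is made up):

```python
import numpy as np

def glow_pass(image, threshold=0.8, blur_radius=2, intensity=0.5):
    """Toy glow post-process: bright-pass, blur, additive composite."""
    # 1. Bright-pass: keep only the pixels above the threshold.
    bright = np.where(image > threshold, image, 0.0)
    # 2. Cheap 5-tap box blur standing in for a separable Gaussian.
    blurred = bright
    for _ in range(blur_radius):
        blurred = (blurred
                   + np.roll(blurred, 1, axis=0) + np.roll(blurred, -1, axis=0)
                   + np.roll(blurred, 1, axis=1) + np.roll(blurred, -1, axis=1)) / 5.0
    # 3. Additive composite back over the original, clamped to [0, 1].
    return np.clip(image + intensity * blurred, 0.0, 1.0)
```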

    Very nice indeed..:)

    -M
     
  6. Daliden

    Newcomer

    Joined:
    Sep 18, 2003
    Messages:
    89
    Likes Received:
    0
    Is there any fundamental difference between software shaders and hardware shaders? I was under the impression that any software shader can be implemented in hardware (OK, it might need several passes, and perhaps the current FP precision isn't always sufficient).

    Perhaps the strict real-time constraints of games have prevented nifty shaders from being generated on hardware, because currently all the fancy stuff has to be done with software.
     
  7. arjan de lumens

    Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,274
    Likes Received:
    50
    Location:
    gjethus, Norway
    The main limitation of current hardware shaders is that you cannot do conditional branching or function calls, and you are restricted to a rather small number of instructions. VS/PS 3.0 (DirectX 9.1) and GLSlang (OpenGL 2.0) will add branching and limited support for function calls (limited call stack, no function pointers, no recursion), and will raise (but not eliminate) the maximum instruction counts. Also, you have only a fixed set of registers available to hold your variables -- if you need more variables than the registers can hold, the shaders won't spill registers to memory, but simply fail to compile altogether.
     
  8. Laa-Yosh

    Laa-Yosh I can has custom title?
    Legend Subscriber

    Joined:
    Feb 12, 2002
    Messages:
    9,568
    Likes Received:
    1,455
    Location:
    Budapest, Hungary
    Keep in mind that "Pixar quality" is a moving target - Toy Story was nice in 1995, but TS2 was loads better and more complex, not to mention Monsters or Nemo.

    The most important features left to implement IMHO are hardware acceleration for subpixel displacement mapping and tessellation of subdiv/NURBS surfaces.
    Shaders seem to need only performance enhancements from now on.

    Hmm, perhaps you should add robust depth-mapped shadows. Although the RTS Panzers is already using perspective shadow maps AFAIK...
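
     The depth-mapped shadow test itself is just a comparison against the depth the light "saw" (a perspective shadow map only changes the space the map is rendered in). A toy sketch, with made-up function names:

```python
import numpy as np

def build_shadow_map(occluder_depths):
    """Depth pass from the light: keep the nearest occluder depth per texel."""
    return np.min(occluder_depths, axis=0)

def in_shadow(frag_depth, shadow_map, texel, bias=1e-3):
    """A fragment is shadowed if something sits closer to the light than it."""
    return frag_depth > shadow_map[texel] + bias
```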
     
  9. mrbill

    Newcomer

    Joined:
    Feb 24, 2003
    Messages:
    36
    Likes Received:
    1
    Location:
    Marlborough, MA
    Re: What's the current status of "real-time Pixar graph

    You can find the paper at http://www.csee.umbc.edu/~olano/papers/ips/ips.pdf . But it didn't use a GeForce 2 -- the interactive demo was done on an Octane/MXI. Conceptually, ISL could be done on a GeForce 2 (or a Radeon), but the RenderMan shaders on multipass OpenGL needed two more extensions, and those extensions were not yet implemented. But they are now!

    For recent progress in the area, see some of the ASHLI papers and presentations (you can find them on the developer site at ATI), and also the uberlight I spoke of is found in Jason Mitchell's presentation "ATI R3x0 Pixel Shaders", where he ported the RenderMan shader to HLSL. Marc Olano also showed work on realtime shaders with an Onyx4 UltimateVision system, but I haven't found his presentation online yet.

    I'd love to know the answer to that! Procedural spatial anti-aliasing has come to be a requirement. But procedural temporal anti-aliasing died in the early days of shading because such shaders needed to be time aware. The solution back then was to sample the shader at multiple times, so the shader writer didn't have to worry about time, the shading system did. This solved the problem - but at a cost. And the realtime equivalent has been the accumulation buffer, also sampling at multiple times, but this also comes at a significant cost.
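
     The accumulation-buffer version boils down to averaging the frame rendered at several shutter times. A toy sketch (the `render_at` "renderer" below is a stand-in, not any real system's API):

```python
import numpy as np

def motion_blur_accumulate(render_at, shutter_open, shutter_close, samples=8):
    """Accumulation-buffer style temporal AA: render at several times
    inside the shutter interval and average the results."""
    times = np.linspace(shutter_open, shutter_close, samples)
    acc = None
    for t in times:
        frame = render_at(t)  # one full render per time sample
        acc = frame if acc is None else acc + frame
    return acc / samples

# Toy "renderer": a single bright pixel moving across a 1x8 scanline.
def render_at(t):
    img = np.zeros(8)
    img[int(t * 7.999)] = 1.0
    return img
```

     The shader itself never sees time -- the shading system samples it -- and the cost is exactly as described above: N full renders per displayed frame.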

    But for high quality realtime maybe the cost savings of procedural temporal anti-aliasing will be worth the added complexity of the shaders? Don't know yet.

    Doesn't come across as a third-degree or a flame. I find the subject completely interesting as well.

    My only point is we are a *long* way from sanding the underside of the drawers in realtime. We can run some significant shaders now in realtime, a glimpse of what's coming. But we still can't even do the early short films in realtime, let alone an early feature film.

    Oh, but there is one thing I shouldn't fail to mention:

    A very good chance indeed, the OpenGL Shading Language! BTW, the first time someone asked was almost a decade ago. See http://groups.google.com/groups?sel...6@newsgate.sps.mot.com&oe=UTF-8&output=gplain . OpenGL 1.0 implementations had *just* begun shipping.

    -mr. bill
     
  10. VFX_Veteran

    Regular

    Joined:
    Mar 9, 2002
    Messages:
    683
    Likes Received:
    234
    I agree here. Hardware boards need a significant speed boost in generating geometry that approximates curved surfaces well. This low LOD for models in games has been annoying for years..:(

    I won't even get into geometry shaders then...;)

    -M
     
  11. VFX_Veteran

    Regular

    Joined:
    Mar 9, 2002
    Messages:
    683
    Likes Received:
    234
    Most of the shaders we deal with on a day-to-day basis can't be done in realtime, nor will they be for quite some time. The sheer fact that many TDs combine shaders (i.e. one shader calls another shader for an input value) to create complex looks would make this daunting in realtime. Some of the simple shaders in the demos that have been done are "fair" approximations at best (for example, the elephant with the wood shader).


    -M
     
  12. Frank

    Frank Certified not a majority
    Veteran

    Joined:
    Sep 21, 2003
    Messages:
    3,187
    Likes Received:
    59
    Location:
    Sittard, the Netherlands
    Hi, all.

    I never did any 3D graphics programming, and I have never worked with 'real' renderers like Renderman. But I would like to comment on this topic anyway.

    :D

    It seems clear that current graphics hardware renders things quite differently than the renderfarms do, and that those cards could not execute such a program in real time. Let alone ray-trace a whole scene.

    But some of you commented that things like ray-tracing are avoided by renderfarms if possible, because they take very much time. And that those programs render things in fixed-point format.

    So some of the things a graphics card cannot do are avoided by the renderfarms as well, and there are even things the cards do better, like using floating point.

    Since you cannot translate those render programs directly, why would you want to do that in the first place? Because it wouldn't change the way it is done at the moment? And you can run the same programs on the shaders of the cards? That would be good and fast, but not real-time. The cards haven't got the memory to hold the resources needed for such a program anyway.

    If we look at it from the opposite side, could those cards approximate the quality of Toy Story by using the things they do well? I think so.

    For example, if we want to render skin, I was thinking you could do that by duplicating the object and scaling one of them a tiny bit, and giving the outermost one a semi-transparent surface. Not 'exact', but I think it would look quite realistic.
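
     A rough sketch of what I mean, with made-up names (push the copy out along the vertex normals, then blend the translucent shell over the base colour):

```python
import numpy as np

def shell_vertices(verts, normals, thickness=0.01):
    """Duplicate-and-inflate: offset each vertex along its normal."""
    return verts + thickness * normals

def composite_shell(base_rgb, shell_rgb, shell_alpha=0.3):
    """Standard 'over' blend of the semi-transparent shell onto the base."""
    return shell_alpha * shell_rgb + (1.0 - shell_alpha) * base_rgb
```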

    Or curves. They look quite bad with polygons. But you could use the shader variant of displacement- or bump-maps to soften them up.

    And when you make a movie, you always know what is visible and what is not. So you could optimize things by removing all objects that aren't visible anyway. And you know the bandwidth and don't have to run game logic, so you can use the CPU to make sure the GPU renders as optimally as possible.

    And it all depends on the definition of photo-realistic or Pixar-quality graphics anyway. Even things like motion blur can be done by the shaders (as some demos demonstrate), or by brute force.

    If we don't try, we don't know. Has anyone tried to render a scene on a 9800 in an optimal way and compared the output to that of a renderfarm?

    I am truly curious.
     
  13. VFX_Veteran

    Regular

    Joined:
    Mar 9, 2002
    Messages:
    683
    Likes Received:
    234
    Uh-oh.:)

    I don't know who told you that, but floating point is used all the time in VFX houses where integration of a real scene and a synthetic scene is required. Most rendering is done in, or converted to, floating point.

    The bottom line is that a renderfarm can implement any algorithm or technique, simply because it can be programmed. Some things are avoided on a renderfarm only because of the time constraints of making a film -- not because they can't physically be done yet, as is the case with the 3D accelerator.

    Not sure I understand you on this.

    By your argument you could say that 3D hardware is there now to implement Toy Story, since a good "approximation" would be to just place texture maps on everything!:) The reality of it is that there is a lot more going on in these houses -- things that take a team of people with fully programmable CPUs to implement, and that can't be done on a 3D accelerator (yet).

    This wouldn't look right, since there is no interaction between the light and the eye. You need a Fresnel effect in there somewhere, and skin has many layers, not just 2.

    A variant of displacement mapping? Like what? Bump-mapping will not solve the problem, because silhouette edges still give away the lower-LOD model. For displacement mapping to work properly, your mesh still needs to be tessellated finely enough to get the proper detail from the map -- which again introduces curves as the solution to the problem.

    I'm sure they do this now in games..:)

    Not there yet, my friend..;)

    -M
     
  14. Frank

    Frank Certified not a majority
    Veteran

    Joined:
    Sep 21, 2003
    Messages:
    3,187
    Likes Received:
    59
    Location:
    Sittard, the Netherlands
    Thanks, Mr. Blue.

    :D

    Sorry if I get this wrong again, but do you take the shaders into account? As far as I understand it, you can use them to create the same effects as you mention, as long as you can cram the function into the program space they have and you don't use conditional branches in the pixel shaders.

    And if you see how very much overdraw there is in games (as they mostly just dump the whole scene to the graphics card and use the CPU for the game logic and AI), that could surely be improved if you wanted to?

    Just to know: how could you render skin on a graphics card? Can you do that Fresnel effect with a shader? Put a few textures with blood vessels etc. on the innermost layer, use that same map for a little bump-mapping, and use the shader to create the effect. Would that work? Or if it wouldn't, how would you do it, given the limitations of the cards?
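
     For what it's worth, I've read that the usual cheap stand-in for a Fresnel term in shaders is Schlick's approximation -- a sketch of just the formula, not claiming any production skin shader does exactly this:

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation: F = F0 + (1 - F0) * (1 - cos_theta)^5.
    f0 is the reflectance looking straight at the surface; the result
    rises toward 1.0 at grazing angles."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```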

    btw. Do the render programs use actual curves instead of polygons?

    EDIT: The Caves screensaver from ATi does a really nice effect to make objects look as if viewed through hot air. I had to look three times before I understood that it was intended. And if you see the reflection of the car on the bump-mapped metallic floor in the car demo, you have to look twice before you see what is happening. I just hadn't seen those things before, and wondered if it was 'broken'. :D
     
  15. VFX_Veteran

    Regular

    Joined:
    Mar 9, 2002
    Messages:
    683
    Likes Received:
    234
    But that's just it. You can't cram the more complicated shaders (the ones used for production) into a few registers with limited conditional branching.

    The graphics card does not "see" the entire scene -- only one polygon at a time within its view frustum. With raytracing, you see the entire scene (which is what makes it difficult to do development in a package like Maya).

    I don't know how to do this. All I know is that a true skin shader takes into account a lot of factors which can't be simulated on 3d hardware right now.

    Yes, but every renderer still renders with polygons to display models, so ultimately curves must be tessellated. I've seen files of just hair models that are over 100MB in size!!:)

    Not meaning any ill towards whoever wrote that screensaver, but it's just an approximation and doesn't look that good.. :( There are some other demos of 3d hardware that look much better..(i.e. HDR and Tron's glow come to mind).

    -M
     
  16. bloodbob

    bloodbob Trollipop
    Veteran

    Joined:
    May 23, 2003
    Messages:
    1,630
    Likes Received:
    27
    Location:
    Australia
    Nah, that wasn't the same quality as the movie. I could see the lack of triangles in the model from the screenshot, as well as the lack of detail in the textures.
     
  17. Frank

    Frank Certified not a majority
    Veteran

    Joined:
    Sep 21, 2003
    Messages:
    3,187
    Likes Received:
    59
    Location:
    Sittard, the Netherlands
    Yes, I understand that part. That's why I made the comment that I don't know why you would want to translate those functions directly into shader programs. It would be just as it is on the renderfarms, but we could probably come up with a program that is almost as good. If the public cannot see the difference when looking at the movie, it would be good enough.

    Yes, it only has a collection of objects, consisting of vertices to be transformed according to rules. (btw. I read a pdf describing that raytracing could be done on current videocards -- albeit not in real time, of course.) But does that really matter? We wouldn't do raytracing anyway.

    Well, I haven't got the slightest idea how it is done in something like Renderman. But if you make a nice diffuse filter, the method I described could look very nice, wouldn't you agree? And it could be done by a 9800 in real-time.

    I know a current videocard hasn't got the memory to use resources like that. But only large 'corners' are seen in a fluid movie. And the current cards can use plenty of vertices. To 'smooth' the edges, we can also use AA (if they are small enough) and/or overlay an edge and smooth that.

    And I have seen some beautiful demos that show hair and fur. Not as nice as in Monsters, Inc., but very nice all the same. And it runs great on a 9600 as well, so a 9800 could do a lot better.

    Yes. :D But even Tron needs to run on older hardware as well. The shaders they use are among the simplest possible. I would very much like to see what a 9800 REALLY can do!
     
  18. VFX_Veteran

    Regular

    Joined:
    Mar 9, 2002
    Messages:
    683
    Likes Received:
    234
    We do need raytracing (or rather, at least raycasting). It won't go away -- trust me. Our film industry wants to implement the most accurate physical models we can simulate, and we'll need some form of raytracing or raycasting for that. True volumetrics are almost always done by stepping down a ray and accumulating a density.
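
     That ray-stepping loop is simple to sketch (toy Python with a made-up density function -- a real volume shader also lights and shadows every sample):

```python
import math

def march_density(density_fn, origin, direction, step=0.1, steps=50):
    """Step along a ray, summing density * step (optical depth), then
    convert to opacity with Beer-Lambert absorption."""
    optical_depth = 0.0
    for i in range(steps):
        t = i * step
        p = tuple(o + t * d for o, d in zip(origin, direction))
        optical_depth += density_fn(p) * step
    return 1.0 - math.exp(-optical_depth)  # opacity in [0, 1)

def fog_ball(p):
    """Toy density field: uniform fog inside a unit sphere at the origin."""
    return 1.0 if sum(c * c for c in p) < 1.0 else 0.0
```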

    It's not as simple as you claim.

    You haven't really seen some really good hair demos; otherwise you wouldn't say that the 3D hardware demos were "beautiful"..Hehehehe. Pay attention to some of the upcoming 3D feature films next year..:)

    Re-rendering a whole scene a number of consecutive times may be trivial, but it isn't until now that we've had the bandwidth for it. It's a post-process shader that is very, very nice and mimics similar results from Maya's own post-process rendering.

    I happened to be at this year's Game Developers Conference to see the talk about the technology, and I highly respect its results.

    The only other features I'm looking forward to in the next gen of cards are bump-mapping (which is long overdue), HDR, and real light and shadow interaction. I would like to see displacement mapping, but I fear there isn't enough power/bandwidth to put that in a game just yet.

    Cheers,

    -M
     
  19. Frank

    Frank Certified not a majority
    Veteran

    Joined:
    Sep 21, 2003
    Messages:
    3,187
    Likes Received:
    59
    Location:
    Sittard, the Netherlands
    I would love to be there as well, but I'm in (boring) workplace management automation. I hope to be in the game business in about a year from now. I'm wondering if I could write a nice 3D game engine myself. It's probably quite a bit harder than I think right now...

    :D

    btw. I was convinced that current cards like the 9800 do displacement- and bump-mapping in hardware? And that HDR is used to make realistic shadow and light effects? Is that not the case?
     
  20. Daliden

    Newcomer

    Joined:
    Sep 18, 2003
    Messages:
    89
    Likes Received:
    0
    Hmm, doesn't this contradict Peercy's paper "Interactive Multi-Pass Programmable Shaders"? As I understood it, he stated that any Renderman program can be broken down into several passes.
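
     As I read it, the paper's core observation is that the framebuffer blend turns the OpenGL pipeline into a SIMD computer executing one arithmetic operation per pass, so a shading expression tree can be evaluated pass by pass. A toy sketch, treating the framebuffer as an array (the op names are mine, not the paper's):

```python
import numpy as np

def run_passes(passes, width=4):
    """Evaluate a shading expression one operation per 'rendering pass',
    using the framebuffer blend as the ALU."""
    fb = np.zeros(width)
    for op, value in passes:
        if op == "load":    # draw pass: replace framebuffer contents
            fb = np.full(width, value)
        elif op == "add":   # additive blend pass
            fb = fb + value
        elif op == "mul":   # modulate blend pass
            fb = fb * value
    return fb

# Kd * lightColor + ambient, decomposed into three passes:
result = run_passes([("load", 0.8), ("mul", 0.5), ("add", 0.1)])
```

     The catch is precision: an 8-bit framebuffer clamps to [0, 1] on every pass, which is part of why the extensions mentioned earlier in the thread mattered.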
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.