VFX_Veteran
Colourless said:And you wouldn't know anything about Ice Age, would you, Mr. Blue?
Hehehehe. I know all about Ice Age...
-M
mrbill said:Daliden said:There was even a demonstration of real-time rendering, albeit at low resolution and with no AA.
Also albeit with baked, not procedural, textures. Albeit with simplified geometry. Albeit with no motion blur. You are talking about the Luxo, Jr. demonstration at MacWorld in 2001?
mrbill said:Daliden said:Time has passed, but has there been any progress in this field?
Yeah, at Siggraph 2003 I showed a toy ball procedurally shaded in real time (>60 fps) with one (not three) lights. On the plus side, it *was* motion blurred (by an ad-hoc procedural technique, which was simplified for a spinning ball, not a squashing/stretching/bouncing ball. But the method can be generalized to the full solution.) And the geometry, while simplified, was an accurate sphere to subpixel precision.
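For illustration only -- this is not the actual Siggraph 2003 technique, which isn't published here -- a minimal sketch of the general idea behind blurring a spinning ball procedurally: since the motion is pure rotation, the shader can average its own pattern at several instants across the shutter, because rotation is just a shift in longitude. The stripes pattern and every parameter below are made up.

def stripes(u, v):
    """A toy procedural 'beach ball' pattern: six stripes around the longitude u."""
    return 1.0 if int(u * 6.0) % 2 == 0 else 0.0

def shade_spinning(u, v, omega=2.0, shutter=1.0 / 48.0, samples=8):
    """Ad-hoc blur for pure spin: average the pattern at several instants
    inside the shutter interval; rotation only shifts the longitude u."""
    total = 0.0
    for i in range(samples):
        t = (i + 0.5) / samples * shutter           # stratified times in the shutter
        total += stripes((u + omega * t) % 1.0, v)  # u lives in [0, 1), so wrap
    return total / samples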
mrbill said:So, Reality Check: Toy Story 2 (and Toy Story before it) was rendered on the order of 1/1,000,000 real-time. (See Tom Duff's posting to comp.graphics.rendering.renderman, http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=3909BD4B.A107CD05@pixar.com ) Just let Moore's Law (no imagined cubes required) work and a uniprocessor PC will manage it in real time 30 years later. Toy Story was released in 1995, so software rendering of Toy Story in real time on a single-processor PC should be possible around 2025.
Assume hardware rendering is on the order of 1,000 times faster than software rendering. That puts real-time "Toy Story" 15 years after its release, or 2010, still seven years from now.
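The arithmetic behind those dates, spelled out (assuming the usual Moore's Law doubling every 1.5 years):

import math

RATIO = 1_000_000      # Toy Story: ~1/1,000,000 of real time in software
DOUBLING_YEARS = 1.5   # Moore's Law assumption
RELEASE = 1995

# 2^20 is about 1,000,000, so ~20 doublings, ~30 years, for software...
software_year = RELEASE + math.log2(RATIO) * DOUBLING_YEARS
# ...and a 1,000x hardware head start removes ~10 doublings, ~15 years.
hardware_year = RELEASE + math.log2(RATIO / 1_000) * DOUBLING_YEARS

print(round(software_year), round(hardware_year))  # -> 2025 2010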
mrbill said:Reality Check Squared: Luxo, Jr. was rendered in 1986. Add 15 years = 2001. Oops, nobody could do Luxo, Jr. in real time in 2001, and still nobody has done so in 2003.
mrbill said:There isn't any question it will happen someday. Real-time is getting damn good, but it's still got some way to go. And by the time we get there, we'll still have further to go to catch up with today's films, let alone with tomorrow's.
In the meantime, we keep dreaming.
-mr. bill
Mr. Blue said:I would like to state (in my opinion only) that only a couple of shaders done with hardware have impressed me. Tron 2.0's glow shader in particular is very impressive indeed (along with the HDR implementation of Paul Debevec's paper). If we could get that shader into Maya, it would allow us to see glows in real time without rendering a post-process glow layer of the object just to see our results when tweaking the shaders.
Very nice indeed..
-M
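For the curious: glow post-processes of this kind generally follow a bright-pass, blur, additive-composite structure. The numpy sketch below illustrates that generic structure only -- it is not Tron 2.0's actual shader, and the threshold, pass count, and strength are invented.

import numpy as np

def glow(image, threshold=0.8, blur_passes=4, strength=0.6):
    """Generic glow: keep only the bright pixels, blur them, add them back."""
    bright = np.where(image > threshold, image, 0.0)      # 1. bright-pass
    blurred = bright
    for _ in range(blur_passes):                          # 2. crude iterated box blur
        blurred = (blurred
                   + np.roll(blurred, 1, axis=0) + np.roll(blurred, -1, axis=0)
                   + np.roll(blurred, 1, axis=1) + np.roll(blurred, -1, axis=1)) / 5.0
    return np.clip(image + strength * blurred, 0.0, 1.0)  # 3. additive composite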
Daliden said:Actually, I was talking about Siggraph in the year the GeForce 2 was launched.
http://www.tech-report.com/etc/2002q3/nextgen-gpus/index.x?pg=2
This had to do with Mark S. Peercy & Co.'s paper titled "Interactive Multi-Pass Programmable Shaders". It seems I misremembered this one -- it wasn't real-time. But the card used was a GeForce 2! Surely the modern cards could do the same much, much faster.
You can find the paper at http://www.csee.umbc.edu/~olano/papers/ips/ips.pdf . But it didn't use a GeForce 2; the interactive demo was done on an Octane/MXI. Conceptually, ISL could be done on a GeForce 2 (or a Radeon), but the RenderMan-shaders-on-multipass-OpenGL approach needed two more extensions, which were not yet implemented. But they are now!
Daliden said:On the other hand, does it [procedural motion blur] have to be a general solution? OK, a general solution would be nice, but couldn't you also have a lot of specialized solutions and use whichever is relevant?
I'd love to know the answer to that! Procedural spatial anti-aliasing has come to be a requirement. But procedural temporal anti-aliasing died in the early days of shading, because such shaders needed to be time-aware. The solution back then was to sample the shader at multiple times, so the shader writer didn't have to worry about time; the shading system did. This solved the problem, but at a cost. The real-time equivalent has been the accumulation buffer, also sampling at multiple times, but this too comes at a significant cost.
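To make that cost concrete, here is a minimal sketch of the accumulation-buffer approach; render_at is a hypothetical callback that renders the whole scene at one instant and returns a float image array:

def accumulation_blur(render_at, t_open, t_close, samples=8):
    """Temporal anti-aliasing the accumulation-buffer way: render the scene
    at several times inside the shutter and average the resulting frames."""
    acc = None
    for i in range(samples):
        t = t_open + (i + 0.5) / samples * (t_close - t_open)
        frame = render_at(t)                 # a full scene render -- the cost
        acc = frame if acc is None else acc + frame
    return acc / samples

Eight samples means rendering the whole scene eight times per frame, which is exactly the significant cost mentioned above.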
Daliden said:Hm, sorry if the above seems like a third-degree or some kind of flame. This is just a bloody interesting subject -- at least to me!
Doesn't come across as a third-degree or a flame. I find the subject completely interesting as well.
Skint said:Any chance of a shading language being made available for OpenGL?
A very good chance indeed: the OpenGL Shading Language! BTW, the first time someone asked was almost a decade ago. See http://groups.google.com/groups?sel...6@newsgate.sps.mot.com&oe=UTF-8&output=gplain . OpenGL 1.0 implementations had *just* begun shipping.
Laa-Yosh said:Keep in mind that "Pixar quality" is a moving target - Toy Story was nice in 1995, but TS2 was loads better and more complex, not to mention Monsters or Nemo.
The most important feature left to implement, IMHO, is hardware acceleration for subpixel displacement mapping and tessellation of subdiv/NURBS surfaces.
Daliden said:Is there any fundamental difference between software shaders and hardware shaders? I was under the impression that any software shader can be implemented in hardware (OK, it might need several passes, and perhaps the current FP precision isn't always sufficient).
DiGuru said:Hi, all.
I never did any 3D graphics programming, and I have never worked with 'real' renderers like Renderman. But I would like to comment on this topic anyway.
But some of you commented that things like ray tracing are avoided by render farms where possible, because it takes so much time, and that those programs render things in fixed-point format.
So some of the things a graphics card cannot do are avoided by the render farms as well, and there are even things the cards do better, like using floating point.
And while you cannot translate those render programs directly, why would you want to do that in the first place, if it wouldn't change the way it is done at the moment? Could you run the same programs on the cards' shaders? That would be good and fast, but not real-time, and the cards haven't got the memory for the resources such a program needs anyway.
If we look at it from the opposite side, could those cards approximate the quality of Toy Story by using the things they do well? I think so.
For example, if we want to render skin, I was thinking you could do that by duplicating the object, scaling one copy up a tiny bit, and giving the outermost one a semi-transparent surface. Not 'exact', but I think it would look quite realistic.
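A minimal sketch of that two-layer idea, assuming per-vertex normals are available (this is only the geometric half; the duplicate would then be drawn alpha-blended over the base mesh):

def make_shell(vertices, normals, offset=0.01):
    """Duplicate the mesh, pushed out along its normals, to form the outer 'skin'.
    vertices, normals: lists of (x, y, z) tuples; offset: shell thickness."""
    return [(vx + offset * nx, vy + offset * ny, vz + offset * nz)
            for (vx, vy, vz), (nx, ny, nz) in zip(vertices, normals)]

# The outer layer is then alpha-blended over the base mesh, roughly:
#   final = alpha * shell_color + (1 - alpha) * base_color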
Or curves: they look quite bad with polygons, but you could use the shader variant of displacement or bump maps to soften them up.
And when you make a movie, you always know what is visible and what is not, so you could optimize things by removing all objects that aren't visible anyway. And you know the bandwidth and don't have to run game logic, so you can use the CPU to make sure the GPU renders as optimally as possible.
If we don't try, we don't know. Has anyone tried to render a scene on a 9800 in an optimal way and compare the output to that of a render farm?
I am truly curious.
DiGuru said:Thanks, Mr. Blue.
Sorry if I get this wrong again, but do you take the shaders into account? As far as I understand it, you can use them to create the same effects as you mention, as long as you can cram the function into the program space they have and you don't use conditional branches in the pixel shaders.
And seeing how very much overdraw there is in games (as they mostly just dump the whole scene to the graphics card and use the CPU to do the game logic and AI), surely that could be improved if you wanted to?
Just so I know: how could you render skin on a graphics card? Can you do that Fresnel effect with a shader? Put a few textures with blood vessels etc. on the innermost layer, use that same map for a little bump mapping, and use the shader to create the effect. Would that work? Or if it wouldn't, how would you do it, given the limitations of the cards?
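For reference, the Fresnel term asked about here does have a cheap, shader-friendly form: Schlick's approximation. The f0 default below is an assumption (a value around 0.028 is often quoted for skin):

def schlick_fresnel(cos_theta, f0=0.028):
    """Schlick's approximation to Fresnel reflectance.
    cos_theta: dot(surface normal, view direction); f0: head-on reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5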
btw. Do the render programs use actual curves instead of polygons?
EDIT: The Caves screensaver from ATi does a really nice effect that makes objects look like they're viewed through hot air.
CorwinB said:Very interesting thread, and a good laugh with the classic Google post (BTW, back then everyone and his mother was heralding "Hollywood quality graphics").
A few things:
- I think Square is using the PS2 "Cube" (or something like that) for some rendering?
- Nvidia demonstrated a single character (no background) of Final Fantasy "The Movie" (using the term "movie" loosely here) running at around 10FPS on a GF4. That's not really "real-time" yet, but that's mighty impressive already...
- Of course, one of the difficulties in "Pixar-like animation" is that "Pixar-like animation" is actually done at a huge resolution, using levels of AA we can only dream of.
Mr. Blue said:DiGuru said:Thanks, Mr. Blue.
Sorry if I get this wrong again, but do you take the shaders into account? As far as I understand it, you can use them to create the same effects as you mention, as long as you can cram the function into the program space they have and you don't use conditional branches in the pixel shaders.
But that's just it. You can't cram the more complicated shaders (the ones used for production) into a few registers with limited conditional branching.
And seeing how very much overdraw there is in games (as they mostly just dump the whole scene to the graphics card and use the CPU to do the game logic and AI), surely that could be improved if you wanted to?
The graphics card does not "see" the entire scene, only a polygon at a time within its view frustum. With raytracing, you see the entire scene (which is what makes it difficult to do development with in a package like Maya).
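A minimal sketch of why that is: every ray must be tested against every object, so the tracer needs the whole scene resident, while a rasterizer can stream one polygon through and forget it. The Sphere type here is purely illustrative.

import math

class Sphere:
    def __init__(self, center, radius):
        self.center, self.radius = center, radius

    def intersect(self, origin, direction):
        """Standard ray-sphere test; direction is assumed normalized.
        Returns the distance to the nearest hit in front of the ray, or None."""
        ox, oy, oz = (origin[i] - self.center[i] for i in range(3))
        b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
        c = ox * ox + oy * oy + oz * oz - self.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0.0 else None

def trace(origin, direction, scene):
    # Unlike a rasterizer, we need access to *all* objects for every ray.
    hits = [t for t in (s.intersect(origin, direction) for s in scene) if t is not None]
    return min(hits) if hits else None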
Just so I know: how could you render skin on a graphics card? Can you do that Fresnel effect with a shader? Put a few textures with blood vessels etc. on the innermost layer, use that same map for a little bump mapping, and use the shader to create the effect. Would that work? Or if it wouldn't, how would you do it, given the limitations of the cards?
I don't know how to do this. All I know is that a true skin shader takes into account a lot of factors which can't be simulated on 3d hardware right now.
btw. Do the render programs use actual curves instead of polygons?
Yes, but every renderer still renders with polygons to display models, so ultimately curves must be tessellated. I've seen files of just hair models that are over 100 MB in size!
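To illustrate what tessellation means here, a minimal sketch for a single cubic Bezier curve; production renderers do this adaptively and for whole surfaces, but the principle is the same:

def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t; points are (x, y, z) tuples."""
    s = 1.0 - t
    return tuple(s**3 * p0[i] + 3*s*s*t * p1[i] + 3*s*t*t * p2[i] + t**3 * p3[i]
                 for i in range(3))

def tessellate(p0, p1, p2, p3, segments=16):
    """Flatten the exact curve into a polyline the hardware can actually draw."""
    return [bezier(p0, p1, p2, p3, i / segments) for i in range(segments + 1)]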
EDIT: The Caves screensaver from ATi does a really nice effect that makes objects look like they're viewed through hot air.
Not meaning any ill towards whoever wrote that screensaver, but it's just an approximation and doesn't look that good. There are some other demos of 3D hardware that look much better (HDR and Tron's glow come to mind).
-M
DiGuru said:Yes, it only has a collection of objects, consisting of vertices to be transformed according to rules. (BTW, I read a PDF describing how raytracing could be done on current video cards, albeit not in real time, of course.) But does that really matter? We wouldn't do raytracing anyway.
Well, I haven't got the slightest idea how it is done in something like RenderMan. But if you make a nice diffuse filter, the method I described could look very nice, wouldn't you agree? And it could be done by a 9800 in real time.
And I have seen some beautiful demos that show hair and fur. Not as nice as in Monsters, Inc., but very nice all the same. And it runs great on a 9600 as well, so a 9800 could do a lot better.
Yes. But even Tron needs to run on older hardware as well. The shaders they use are among the simplest possible. I would very much like to see what a 9800 REALLY can do!
Mr. Blue said:Re-rendering a whole scene several consecutive times may be trivial, but it isn't until now that we have the technology with that kind of bandwidth. It's a post-process shader that is very, very nice and mimics the results of Maya's own post-process rendering.
I happened to be at this year's Game Developers Conference to see the talk about the technology, and I highly respect its results.
The only other features I'm looking forward to in the next gen of cards are bump mapping (which is long overdue), HDR, and real light and shadow interaction. I would like to see displacement mapping, but I fear there isn't enough power/bandwidth to put that in a game just yet.
Cheers,
-M