What's the current status of "real-time Pixar graphics"?

I would like to make a general observation.

Why would movie companies want to use bump-mapping? Because the cost of real displacement mapping is too high. And why use displacement mapping? Use more vertices! But why would you want to use vertices in the first place? Splines are much better!

To make your models look their best, you want an unbounded number of polygons (yes, really), made out of splines. Rendered exactly as specified.
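To make the bump-versus-displacement point above concrete, here is a minimal sketch (my own illustration in C++, not any engine's real API; normalDeltaFromMap and heightFromMap stand in for values fetched from a normal/height texture):

```cpp
// Minimal sketch of the difference described above (illustrative only).
struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

// Bump mapping: the vertex stays put; only the normal fed to the lighting
// equation is perturbed, so the silhouette never changes. Cheap.
Vec3 bumpMappedNormal(Vec3 surfaceNormal, Vec3 normalDeltaFromMap) {
    return add(surfaceNormal, normalDeltaFromMap);  // renormalize in practice
}

// Displacement mapping: the geometry itself is pushed along the normal, which
// is why it needs far more vertices, and why it costs far more.
Vec3 displacedPosition(Vec3 position, Vec3 surfaceNormal, float heightFromMap) {
    return add(position, scale(surfaceNormal, heightFromMap));
}
```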

Ok. That takes care of the geometry. Now we want it to look nice. Do we want to 'paint' it? We might. It is easy and follows the way we see things. But that's just because we use only the outer surface. And things like eyes should be done independently.

It would be much better to generate those objects 'inside-out'. Create generic objects, like an eye, that contain the way they look within.

Hm. Are we talking molecules? Atoms? REAL simulations? Yes.

Can we do that? No.

We need shortcuts. As we only see the outer layer, we can remove all that is inside. And so we need textures. Paint.

So, any way you look at it, even the best movie CG is *NOT* 'real'. Hell, nobody even knows how to render the most basic thing: a convincingly realistic human.

We all try. Hard. The hardware people. The artists. The game people. The movie people. But it isn't really 'real' yet. It might look extremely, stunningly good, *for CG graphics*, if you're an insider and know what to look for.

Does that mean that game graphics are bad? No. Au contraire. They're sublime compared to older games. Does that mean that movie CG images are bad? Yes. Absolutely. Anyone can tell, just by looking at it, that it's not a 'real' camera-captured scene.

But everyone is out to change all that. And succeeding remarkably well. And really fast. But we're not there yet.

Does that mean, that game developers do worse than the movie people? No. Or the opposite? No.

8)

We need a benchmark. Not of speed. But of the 'realism' of the graphics. And we (more or less) chose Toy Story, as it was the first milestone.

Can a current, top-of-the-line graphics card render Toy Story at the same quality all by itself? The current uproar about the nVidia drivers cheating comes to mind. Does it matter? If you are there, looking at the big screen, and you cannot tell whether it is done in real time by a graphics card or prerendered, it does not matter.

Can that current, bleeding-edge graphics card render it a lot faster than the render farm that was used? No. Could the same number of graphics cards as there were computers in that farm do it? Absolutely. Without breaking a sweat. Any way you look at it.
 
I wonder what a developer would get out of the 9800pro if they coded for it to-the-metal like they do for the NV2A? :devilish:

As is almost always the case: the budget/software is the limiting factor in the advancement of real-time rendering, not the hardware. You could argue for PS 3.0 and VS 3.0, etc., but developers haven't even taken v2.0 to the max. Many times it is cost and time (i.e. money), not laziness (programmers are persistent), which forces developers to take the less tedious route (especially when they are not forced to develop for a target platform). We choose branching and loops rather than develop a shader for each situation (à la DX8) or unroll loops with conditionals.
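A rough, purely illustrative sketch of that trade-off (the feature counts below are made up, not from any real project): one hand-unrolled shader per combination of features multiplies quickly, while a single looping/branching shader keeps the authoring cost flat at some runtime price.

```cpp
// Hypothetical back-of-the-envelope count of shaders to write and test.
#include <cstdio>

int main() {
    const int lightCounts    = 4;  // separate unrolled variants for 1..4 lights
    const int surfaceOptions = 3;  // e.g. plain, bump-mapped, env-mapped
    const int fogOptions     = 2;  // fog on / off

    // DX8 style: one hand-unrolled shader per combination of features.
    const int permutations = lightCounts * surfaceOptions * fogOptions;
    std::printf("hand-written shader permutations: %d\n", permutations);

    // SM 2.0 style: one shader that loops over lights and branches on feature
    // flags passed in as constants, traded for some runtime cost.
    std::printf("looping/branching shaders to write: %d\n", 1);
    return 0;
}
```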
 
Luminescent said:
I wonder what a developer would get out of the 9800pro if they coded for it to-the-metal like they do for the NV2A? :devilish:
I had some crazy ideas about making a custom D3D driver and app. that would write "to the metal". Complete violation of the API, but you could do some cool stuff that way :D Too bad I couldn't enter that "shading competition" :p (I'm no app. genius, but I know how to do some cool stuff if I can get direct access to the HW.)
 
DiGuru said:
So, any way you look at it, even the best movie CG is *NOT* 'real'. Hell, nobody even knows how to render the most basic thing: a convincingly realistic human.
Well, the reason this is hard is mostly because of animation. We, as humans, are unbelievably sensitive to very subtle things in people around us. This extreme attention to detail that seems built-in makes it very hard to properly draw a human on a computer, let alone animate one.

However, we are at the point where a digital girl can look fairly convincing in a still image (Aki Ross, from Final Fantasy: The Spirits Within, looks quite good in the right still). It won't be too long before they look convincing in motion.

And anyway, as a side note, it really isn't necessary to model the inner workings of items to be visually convincing. The inside of an object can typically be read off as one or more simple properties, such as temperature.
 
DiGuru said:
We need a benchmark. Not of speed. But of the 'realism' of the graphics. And we (more or less) chose Toy Story, as it was the first milestone.
Yes, current analysis of image quality is quite lacking.

If you're going to be analyzing the hardware, though, you can't possibly expect to be able to somehow compare it to reality.

Instead, you have to measure the capabilities of the hardware. How the software is designed is a separate issue, and it's the software that will provide cinematic quality. The hardware simply allows it to happen.

So, how does one examine image quality of hardware? In general, one can only compare image quality if there are standard methods for performing certain operations. Of course, there are standard operations, such as texture filtering, anti-aliasing, and pixel/vertex shading.

If I break the image quality analysis into these three parts, here's how I would measure each one:

1. Texture filtering: by far the most complex. One has to separately examine the amount of detail the texture filtering method retains, and the amount of aliasing it produces. The best image quality comes from the highest amount of detail with the least aliasing.

2. Anti-aliasing: Almost as complex, though it is usually pretty easy to detect which technique has the better edge AA. So, compare edge AA quality, and ask whether the situations where AA fails, if they exist, are problematic for the application chosen (e.g. alpha textures with MSAA).

3. Pixel/vertex shaders: How flexible are they? If any shader can be run on the hardware, then all that remains to be asked is at what precision the shaders run. Currently, pixel shaders are as flexible as they need to be: branching and long shaders can be simulated through multipass. But vertex shaders need more flexibility (texture reads, to allow multipass in the vertex shader). Since all hardware will soon have this amount of flexibility, the only remaining question is precision, which is trivial to compare.

And, of course, once you've made the image quality comparisons, you have to recognize that these things are inherently tied to performance considerations. For example, if you want to use 8xS FSAA, 8x quality anisotropic filtering, and all 32-bit FP calculations, the NV35 might be considered to have better image quality in all areas compared to the R350. However, the performance will be much lower (on the order of 1/8th).
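As a purely hypothetical sketch of what one such objective measurement could look like (my own illustration, not a real tool; the heavily supersampled software reference is an assumption): render the same frame on the hardware under test and as a reference, then score the difference. Lower error means less detail smeared away by filtering and less aliasing added; running it per frame of an animation would also catch texture crawling.

```cpp
// Hedged sketch of an objective image-quality score against a reference image.
// Both images are assumed greyscale, same resolution, values in [0, 1].
#include <cmath>
#include <cstddef>
#include <vector>

double rmsErrorAgainstReference(const std::vector<float>& hardwareFrame,
                                const std::vector<float>& referenceFrame) {
    double sum = 0.0;
    for (std::size_t i = 0; i < hardwareFrame.size(); ++i) {
        const double d = hardwareFrame[i] - referenceFrame[i];
        sum += d * d;
    }
    return std::sqrt(sum / hardwareFrame.size());
}
```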
 
Chalnoth said:
For example, if you want to use 8xS FSAA, 8x quality anisotropic filtering, and all 32-bit FP calculations, the NV35 might be considered to have better image quality in all areas compared to the R350. However, the performance will be much lower (on the order of 1/8th).
You actually believe that 8xS on the NV35 gives better AA quality than 6x, or even 4x, AA on the R350? Sorry, I think you are completely wrong. The lack of a rotated grid on the NV35 makes 8xS about equivalent to 4x AA on the R350, but the lack of gamma correction is a big strike against the NV35.
 
Chalnoth, I'm not sure if I agree with you about how to make an 'image quality' benchmark.

It does not matter how the image is generated. What counts is only whether it looks realistic to the viewers.

If I were to make such a benchmark, I wouldn't look at the technical aspects at all. I would just ask the audience which one looked more realistic.
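Something like this, purely as a hypothetical sketch of the "ask the audience" idea (none of it is a real benchmark): show the same scene on two unlabeled setups, collect blind votes from viewers, and report the win rate per platform.

```cpp
// Hypothetical blind A/B preference tally for a "realism" benchmark.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

int main() {
    // Each entry: which unlabeled setup a viewer picked as "more realistic".
    const std::vector<std::string> votes = { "A", "B", "A", "A", "B", "A" };

    std::map<std::string, int> tally;
    for (const std::string& v : votes) ++tally[v];

    for (const auto& [platform, count] : tally)
        std::printf("platform %s: %d of %zu votes\n",
                    platform.c_str(), count, votes.size());
    return 0;
}
```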
 
DiGuru said:
It does not matter how the image is generated. What counts is only whether it looks realistic to the viewers.

If I were to make such a benchmark, I wouldn't look at the technical aspects at all. I would just ask the audience which one looked more realistic.
But you *have* to. Otherwise the comparison is subjective. If, for instance, you had a few people do the comparison for you, the results would depend on the scene you selected.

It is possible to objectively analyze things like edge AA and texture filtering. Measurements and data mean vastly more than subjective analyses.
 
OpenGL guy said:
You actually believe that 8xS on the NV35 gives better AA quality than 6x, or even 4x, AA on the R350? Sorry, I think you are completely wrong. The lack of a rotated grid on the NV35 makes 8xS about equivalent to 4x AA on the R350, but the lack of gamma correction is a big strike against the NV35.
I was more trying to point out that you can't make an image quality comparison without also making a performance comparison. Personally I think 8xS is effectively useless (unless you're playing very old games...).
 
Dio said:
It's back to Greg's presentation. It's rare anyone is allowed to do anything particularly original at the moment because unless there are guaranteed sales there's not the money to fund it.
Just to add to that - I bumped into an ex-colleague at Siggraph who's been working on a game for a few years now. It crosses a few genres and introduces some new ideas but he can't get publishers interested. Why? Because it doesn't fall into one of these predefined categories.
 
Simon F said:
Just to add to that - I bumped into an ex-colleague at Siggraph who's been working on a game for a few years now. It crosses a few genres and introduces some new ideas but he can't get publishers interested. Why? Because it doesn't fall into one of these predefined categories.
Get him to give Peter Molyneux a call, if he's got a substantial outline and some working code.

Lionhead are very much looking out for satellite operations, particularly those with 'original' game ideas. If they are on your side, finding a publisher suddenly gets a lot easier ;)
 
Chalnoth said:
DiGuru said:
It does not matter how the image is generated. What counts is only whether it looks realistic to the viewers.

If I were to make such a benchmark, I wouldn't look at the technical aspects at all. I would just ask the audience which one looked more realistic.
But you *have* to. Otherwise the comparison is subjective. If, for instance, you had a few people do the comparison for you, the results would depend on the scene you selected.

It is possible to objectively analyze things like edge AA and texture filtering. Measurements and data mean vastly more than subjective analyses.

But the realism *is* subjective. For starters, how are you going to determine by technical means which rendering of a human looks most life-like?

And if we want to compare some scene across platforms, should we specify that it has to be rendered as closely to the spec as possible? That way, we kill all innovation, and the platform used to design the benchmark will win hands down.

It might even be a good thing, at this moment in time, to let the audience choose between different scenes running on different platforms, because there's no cross-platform scene that I'm aware of that can run on all hardware. And until there is, we should reward people who come up with cutting-edge techniques.

A good benchmark should push the limit and establish itself.
 
DiGuru said:
I have been looking for something that shows what a bleeding-edge card like a 9800 can REALLY do, but no luck. It's really all simple stuff. Does anyone know of some demo that shows off what can be done at the moment? I would really want to see that.

Not sure what stuff you have seen, or what exactly you mean by "simple stuff". Anyway, I have some demos on my site that use the latest and greatest of the hardware, but many of them could of course be viewed as "simple stuff", since they tend to just show one effect each and have fairly simple scenes.
 
1. Take a P4 3.2, 1GB DDR 400 dual channel and a 256mb 9800 PRO, throw this into a closed box.

2. Let some incredible developers make a game around this spec focusing mainly on the graphics.

3. Shit your pants when you see what they did with a game based around this spec. It would be an order of magnitude beyond both D3 and HL2. Astounding.

Hell, just LOOK at what's being done on the Xbox; a game based around the above spec would look beyond incredible. Something that would make people like Chaphack get down on their knees and start crying.
 
In addition, give the developer full access to the microcode of the graphics hardware.

Just look at the kinds of things Factor 5, Naughty Dog, and a number of other excellent console developers are doing on older, supposedly "limited" hardware.
 
Giving developers access to the low-level aspects of PC hardware is against the ethos of PC design. Put simply, by having standard interfaces, many companies can compete in the same space.

This is also part of the reason why I support going for nothing but high-level language shader programming. Assembly programming is too hardware-specific. The assembly should be done away with entirely, with the compiler compiling directly to the machine code.
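To illustrate why (a loose sketch in C++, since there is no shader code in this thread to quote, and the lighting term is just a stand-in): the same small computation written readably versus as a hand-scheduled sequence of register-style temporaries.

```cpp
// High-level versus "assembly-flavoured" authoring of the same computation.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// High-level version: the intent is obvious and it is easy to add more terms.
float lighting(Vec3 n, Vec3 l, Vec3 h, float shininess) {
    const float diffuse  = std::max(dot(n, l), 0.0f);
    const float specular = std::pow(std::max(dot(n, h), 0.0f), shininess);
    return diffuse + specular;
}

// Assembly-flavoured version: every intermediate lives in a named register,
// scheduled by hand. Manageable at ten instructions, hopeless at hundreds.
float lightingAsmStyle(Vec3 n, Vec3 l, Vec3 h, float shininess) {
    float r0 = dot(n, l);          // dp3  r0, n, l
    float r1 = dot(n, h);          // dp3  r1, n, h
    r0 = std::max(r0, 0.0f);       // max  r0, r0, zero
    r1 = std::max(r1, 0.0f);       // max  r1, r1, zero
    r1 = std::pow(r1, shininess);  // pow  r1, r1, c0.x
    return r0 + r1;                // add  r0, r0, r1
}
```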
 
As shaders become more complex, writing them in assembly will become virtually impossible, in the same way it is virtually impossible to write large projects in assembly on CPU platforms (for ordinary, sane mortals, that is).
 
Dio said:
As shaders become more complex, writing them in assembly will become virtually impossible, in the same way it is virtually impossible to write large projects in assembly on CPU platforms (for ordinary, sane mortals, that is).

Let's hope Mr. Gibson (SpinRite) does not read this. ;)
 
It is still done.

I have encountered a DirectX game that is written in 100% assembly language. I am not the only one to have commented to the team that they must be 1) asm gods and 2) stark staring bonkers, certainly afterwards, probably before as well.
 
This very gentleman said:
In addition, give the developer full access to the microcode of the graphics hardware.
For the sake of clarification, this post does not represent my position on the relationship between 3D applications, hardware, and APIs (I was only contemplating the hypothetical scenario posed by Paul). Like many of you, I believe there should be a general layer of abstraction in the API, efficient code, and hardware-level optimization in the hands of the compiler (assuming the IHV, or better yet, the gentlemen in charge of the compiler, are trustworthy ;) ). This keeps things simple and scalable.

P.S. We trust you, Dio, andypski. :D
 