What's the current status of "real-time Pixar graphics"?

bloodbob said:
Simple: if a Radeon 9800 can do it in real time, WTF are they spending 100Ks on server farms?

No one said it can. But it's just a matter of time before GPUs can. Turn back a few years and your question translates into:

"If Intel/Linux render farms can render <insert impossible job here>, why are we they spending 100k on SUN/SGI workstations?"

Sure, some are still using Sun, but the transition to Intel/Linux is obvious. Sooner or later, there is going to be another transition, and that's what we're talking about.
 
(Hi uh first post & most humble to be doing so in such company :oops: Decided to post this in this thread rather than the other one where this discussion is being carried out too)

Realtime production graphics are not 'coming soon'; they are already here on BBC2:
http://www.totalwar.com/time.htm

That is using the game engine, models & textures for the upcoming game Rome: Total War.

Now, this is hardly even vaguely close to being on the same continent as the movie-quality rendering we'll be seeing in, e.g., LOTR: ROTK (as far as I know that engine isn't even using any shaders at all), but it's on UK TV screens every week right now & I bet it's running on a Radeon 9800 Pro.

For this essentially diagrammatic purpose, it's entirely adequate.
TV news animations could be similarly live-rendered.



I don't see high-budget photorealistic movie effects being done on GPUs any time particularly soon, since the required functionality, whilst tantalisingly close (will DX10 be enough?), is just plain not there yet.

'Low quality' render previews with shaders are already in use (see the LOTR movie exhibition) & I believe have been found to be very useful.



My understanding is that the r3x0 chips can theoretically be run in groups of up to 256 chips?

That sounds like the kind of capability Carmack would have been referring to in the Jun 02 comment.

I've not even heard of a working two-chip implementation yet, but I can certainly see the prospect of a DX10 (or whichever generation) class chip being mounted on custom server-type boards and used as the basis of a 'VPU farm' for performing advanced high-quality effects at a greater rate (not likely to be realtime) than a similar number of CPU chips in a conventional render farm.



Kind of a side issue, but can r3x0 perform a single fp96 calculation rather than 4 fp24 components?
If so, then a board with 4 r3x0 chips could surely output one component each at that precision, right?
(Or could a single chip output two four-component fp96 pixels per clock (since it has 8 pipes) for a single-pass shader???)

Similarly, when will ATI be delivering a full fp32-based core?
 
PSarge,

PSarge said:

A VPU can do 8->32 in a pass and do as many passes as you want. Tends to be faster than a CPU at it as well. If your shading tree takes more than 32 textures in a single node I'd be surprised (but feel free to surprise me :) )

Yes, but this is assuming that the textures have already been built and loaded into memory. In practice, there are many shaders that actually build textures before applying them to the mesh.

o [A Graphics Chip] can *not* implement noise shaders [in realtime] (all instructions done in hardware with *no* lookups) due to its limited instruction set and API.

It can be done. People tend not to because it's slow for realtime use, but it can be done. (Oh, if you're doing gradient noise, i.e. classic Perlin, then you'll need to do lookups into your gradient table, but you do that on a CPU anyway)
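
For the curious, here's a minimal sketch of that gradient-table lookup — classic Perlin noise in 1D, with plain Python standing in for shader code. The table setup is made up purely for illustration; on today's hardware those lookups live in textures or on the CPU.

Code:
import math
import random

# Classic Perlin noise needs a permutation table and a gradient table.
random.seed(0)
PERM = list(range(256))
random.shuffle(PERM)
PERM += PERM  # doubled so PERM[xi + 1] never overflows

def fade(t):
    # Perlin's interpolant: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

def grad(h, x):
    # The hash picks a pseudo-random gradient (+1 or -1 in 1D).
    return x if h & 1 else -x

def noise1d(x):
    xi = math.floor(x) & 255        # lattice cell -> table lookup index
    xf = x - math.floor(x)          # position within the cell
    a = grad(PERM[xi], xf)          # contribution from the left lattice point
    b = grad(PERM[xi + 1], xf - 1)  # contribution from the right one
    return a + fade(xf) * (b - a)   # smooth blend between the two

print(noise1d(3.7))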

Gradient Noise is not what I'm talking about. I'm talking about computing a derivative vector by building a gradient and using it for bump-mapping purposes.

How about conditional branching and looping (which are required to implement fully functional filtered noise)? How about shader trees where several shaders can call other shaders? Can I compute a gradient noise vector by calling it three times, shifting my x, y, and z values by a small delta?

o Conditional branching, no, but predicates, yes, so we can do the same thing in a different way (see the sketch after this list).
o Looping, yes.
o Shaders calling other shaders. Two ways of doing that come to mind: either running ShaderB inline when ShaderA calls it, or running ShaderB first, writing the output to a p-buffer and then using that as a texture in ShaderA.
o Calling a function three times with varying parameters :rolleyes: Please.
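
To make the predicate point concrete, a toy sketch (plain Python; expensive_a and expensive_b are hypothetical shader sub-expressions): rather than branching, you evaluate both sides and select per pixel with a 0/1 mask, which is roughly what cmp/lrp-style instructions do.

Code:
# Hypothetical shader sub-expressions, for illustration only.
def expensive_a(x): return x * x
def expensive_b(x): return x + 1.0

def branchy(cond, x):
    # What you'd write on a CPU: real control flow.
    return expensive_a(x) if cond else expensive_b(x)

def predicated(cond, x):
    # What the hardware effectively does: run BOTH paths for every
    # pixel, then select the result with a 0/1 predicate mask.
    mask = 1.0 if cond else 0.0
    return mask * expensive_a(x) + (1.0 - mask) * expensive_b(x)

print(branchy(True, 2.0), predicated(True, 2.0))   # 4.0 4.0

Same answer either way; the catch is the cost model — the predicated version always pays for both sides, which is why long divergent branches hurt.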

If these things can be done now, where are the demos??? The very best demos I've seen don't even come close to a very simple scene using a shader in our industry! I don't understand why you guys think (with your shortcuts) that this can be done. The gaming community alone can't even implement technology available to them from years ago (simple bump-mapping). Some of the better technology demos I've seen in papers still do some sort of pre-computation by the CPU.

o [A Graphics Chip] can *not* implement complex lighting models [in realtime] also due to its design and API.

That is rapidly becoming false. OpenGL SL and HLSL have an almost identical feature set to RenderMan SL; sure, they're still in their infancy, but it's not going to be long before you're going to be able to specify an arbitrarily complex shader and have it execute. Maybe not in realtime, maybe not written in the same style that you're used to, but faster than doing the same thing on a CPU, I'll wager.

The problem is: as the Real-time world comes closer to some of the early complex lighting models, we are researching even more complex models. The film industry will be ahead for some time. That's just the way it is.

I think the main division between the VFX community and the Hardware 3D community is that the VFX community tend to see what the hardware is capable of in realtime and think that that's it. Games companies don't write complex lighting models because they want it to run in realtime. That doesn't mean it can't be done, and done quickly (comparatively).

No, game companies are lazy, in my opinion. There are only a few that will risk pushing the envelope while the majority of them will stick with the low-res models with high-res textures.


As a parting thought. Why do you think NVIDIA bought Exluna?

I think Nvidia and ATI would love to collide with the VFX world. It would stir up so much buzz, it would be incredible for them. After all, both of their goals are to eventually obtain the kind of stuff you see in the theatres.

Edit: P.S. Why render to 2Kx768? Why would you want a 2.6:1 aspect ratio?

Letterbox format. The screen isn't as high as it is wide. And I think the second number is wrong. It's probably 720 or something like that. I forget.


-M
 
Mr. Blue said:
Yes, but this is assuming that the textures have already been built and loaded into memory. In practice, there are many shaders that actually build textures before applying them to the mesh.

Then we need pre-processes to generate the textures, but these can still run on the VPU. The fact is VPUs work well on highly parallel tasks (e.g. pre-computing all the values in the texture in one go), whereas a single CPU really doesn't care if the work is done that way or through dependent evaluation (e.g. we need the texel at (0.3, 0.4), so let's go calculate it). The advantage of the former is that it suits the highly pipelined SIMD arrays we're starting to see. The advantage of the latter is that it only ever does the work for the necessary bits and no more.

Swings and Roundabouts.
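
A toy illustration of the two styles (Python; the shade function is made up): the VPU-friendly way bakes every texel up front in one parallel sweep, while the CPU-friendly way computes texels lazily, only when a lookup actually asks for them.

Code:
# A made-up procedural pattern, standing in for a shader sub-tree.
def shade(u, v):
    return (u * u + v * v) % 1.0

SIZE = 256

# VPU style: bake the whole texture in one go. Every texel is
# independent, which is exactly what a SIMD pipeline array likes.
baked = [[shade(x / SIZE, y / SIZE) for x in range(SIZE)]
         for y in range(SIZE)]

# CPU style: dependent evaluation -- compute a texel only when some
# lookup actually asks for it, and cache the answer.
cache = {}
def lookup(u, v):
    key = (int(u * SIZE), int(v * SIZE))
    if key not in cache:
        cache[key] = shade(key[0] / SIZE, key[1] / SIZE)
    return cache[key]

print(baked[102][76], lookup(0.3, 0.4))   # same texel either way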

o [A Graphics Chip] can *not* implement noise shaders [in realtime] (all instructions done in hardware with *no* lookups) due to its limited instruction set and API.

It can be done. People tend not to because it's slow for realtime use, but it can be done. (Oh, if you're doing gradient noise, i.e. classic Perlin, then you'll need to do lookups into your gradient table, but you do that on a CPU anyway)

Gradient Noise is not what I'm talking about. I'm talking about computing a derivative vector by building a gradient and using it for bump-mapping purposes.

I think we started talking at cross purposes. You said it wasn't possible to write a noise generator in hardware. I said it was, and used a gradient noise function (i.e. a noise function which has random gradients at each lattice point) as an example. I wasn't referring to your need to calculate the derivative of the noise function, which can be done, as you said, by using deltas in x, y and z.
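
For completeness, the delta trick in question, sketched in Python (the noise function here is just a stand-in for whatever 3D noise you have; the size of the delta needs care at fp24 precision):

Code:
from math import sin

EPS = 1e-3   # the small delta; too small and limited fp precision bites

def noise(x, y, z):
    # Stand-in for a real 3D noise function.
    return sin(x * 12.9898 + y * 78.233 + z * 37.719)

def noise_gradient(x, y, z):
    # Three extra evaluations, shifting x, y and z by a small delta,
    # give a derivative vector usable for bump mapping.
    n = noise(x, y, z)
    return ((noise(x + EPS, y, z) - n) / EPS,
            (noise(x, y + EPS, z) - n) / EPS,
            (noise(x, y, z + EPS) - n) / EPS)

print(noise_gradient(0.1, 0.2, 0.3))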

If these things can be done now, where are the demos??? The very best demos I've seen don't even come close to a very simple scene using a shader in our industry! I don't understand why you guys think (with your shortcuts) that this can be done. The gaming community alone can't even implement technology available to them from years ago (simple bump-mapping). Some of the better technology demos I've seen in papers still do some sort of pre-computation by the CPU.

First I'd always expect the CPU and the VPU to work together on the job. The CPU is more flexible, but the VPU is faster at what it can do.

In truth, the hardware is capable. The software (Compiler technology for example) is not there yet. It's coming, but it's young, and isn't as general a solution as I may be making it sound like. It's close though. It's sooooo close.

The problem is: as the Real-time world comes closer to some of the early complex lighting models, we are researching even more complex models. The film industry will be ahead for some time. That's just the way it is.

And that's the way I'd always expect it to be. I just think that the next step is for renderers to start using the power of the VPUs to allow even greater things to be done. Maya is already starting to go this way, I believe, but most people are just ignoring the powerhouse which is sitting in their workstations.

No, game companies are lazy, in my opinion. There are only a few that will risk pushing the envelope while the majority of them will stick with the plastic models with higher res textures.

I won't argue with that, but there is also the fact that budgets are probably a lot larger in the VFX industry.

Edit: P.S. Why render to 2Kx768? Why would you want a 2.6:1 aspect ratio?

Letter box format. The screen isn't as high as it is wide.

Yeah, but 2.6:1! No film is wider than 2.35:1, and a lot are just 1.85:1. That's an extra 14% workload you're putting on all your machines. (Hey, I'm just trying to understand. It seems like a reasonable amount of x-res, and very little Y.)
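
The 14% figure, for anyone checking the arithmetic (Python):

Code:
height = 768
rendered_width = 2048

needed_width = height * 2.35             # widest common film ratio
extra = rendered_width / needed_width - 1

print(round(needed_width))               # ~1805 px wide would be enough
print(f"{extra:.1%} extra pixels")       # ~13.5%, i.e. the '14%'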
 
PSarge said:
Mr. Blue said:
No, game companies are lazy, in my opinion. There are only a few that will risk pushing the envelope while the majority of them will stick with the plastic models with higher res textures.
It's more than that. Right now there are huge budget pressures in the game industry, and for the 'moving target' platforms like the PC, not many developers have the resources to hit the very top end.

Once DX9-class chips become standardised (most likely, when they get adopted by consoles) we will see big strides forward.

I have many friends in the games industry and they are not lazy; they work ridiculous hours!
 
Dio said:
PSarge said:
Mr. Blue said:
No, game companies are lazy, in my opinion. There are only a few that will risk pushing the envelope while the majority of them will stick with the plastic models with higher res textures.
It's more than that. Right now there are huge budget pressures in the game industry, and for the 'moving target' platforms like the PC, not many developers have the resources to hit the very top end.

Once DX9-class chips become standardised (most likely, when they get adopted by consoles) we will see big strides forward.

I have many friends in the games industry and they are not lazy; they work ridiculous hours!

LOL! Well, I didn't mean lazy in that sense. Why would budgets play into it though? DX7.0 had bump-mapping and it STILL has not been a common feature in today's games.

-M
 
PSarge said:
Yeah, but 2.6:1! No film is wider than 2.35:1, and a lot are just 1.85:1. That's an extra 14% workload you're putting on all your machines. (Hey, I'm just trying to understand. It seems like a reasonable amount of x-res, and very little Y.)

I believe frames have black borders of a fixed thickness, so not the whole image is used. I'm no compositor, so I don't know much more than that. Also, I'm not sure about the y res (as I corrected in my last post).

-M
 
Mr. Blue said:
Why would budgets play into it though? DX7.0 had bump-mapping and it STILL has not been a common feature in today's games.
The problem of 'improving game technology' is a large one. Here's a great presentation from the inestimable Greg Costikyan on the topic.
http://www.costik.com/digitalgenres.ppt

If you look at the Xbox games, and the ports from there, then there are definitely more 'shader effects' such as bump mapping - but because 1.1 shaders are so limited I wouldn't really file it as 'getting significantly closer to cinematic rendering'.

PC games technology is being pushed forward most by the companies with relatively high resources (id, valve, etc.) and I don't expect that to change until the next console generation. Then I would expect we will see a revolution, over the first few years of life of the console. But this comes back to Greg's observations, that developing the extra content will come at massive extra cost...
 
Dio said:
The problem of 'improving game technology' is a large one. Here's a great presentation from the inestimable Greg Costikyan on the topic.
http://www.costik.com/digitalgenres.ppt

Nice read....I really appreciate the point-blank approach, such as:

One reason for the high interest in mobile games (despite scant revenues): Low budgets, short dev cycles, don’t have to spend 3 years of your life on a fucking Scooby Doo game that will probably die at the software store anyway
 
Dio said:
Mr. Blue said:
Why would budgets play into it though? DX7.0 had bump-mapping and it STILL has not been a common feature in today's games.
The problem of 'improving game technology' is a large one. Here's a great presentation from the inestimable Greg Costikyan on the topic.
http://www.costik.com/digitalgenres.ppt
I think the line...
You won't sell a pitch unless the marketing weasels know how to sell the game
...nicely summed up, well, almost the state of the entire entertainment industry :p

Joe DeFuria said:
Nice read....I really appreciate the point-blank approach, such as:

... on a fucking Scooby Doo game that will probably die at the software store anyway
Hmm... cartoons, death, and bestiality... what more could you possibly want? :D
 
Greg's always a good read. His blog just doesn't get updated often enough. And, of course, there is Paranoia...

The computer is your friend.
 
LOL! Well, I did do 1 yr on a Scooby Doo game that died in a software store :).

Gah, and I'm doing mobile phone stuff now :-/. 6-10 weeks for a Series 60 J2ME product from scratch, from home; can't afford offices!

-dave-
w.r.t. bump-mapping:
I thought nobody does bump-mapping because it doesn't actually look that good when integrated into the rest of an environment? And isn't there the mip-mapping issue? Also, wasn't the DX7 way not dot-3? And even now, not all cards support dot-3 in hardware?

Anyway, if we DO compare games to films, we shouldn't compare the game, we should compare the CUT-SCENES ;). After all, I've found films to be far, far, far behind games in technology...... The lack of interaction for modern times is just ridiculous :)
 
Mr. Blue said:
Why would budgets play into it though? DX7.0 had bump-mapping and it STILL has not been a common feature in today's games.

Because someone needs to create content for it.
And bump-mapping means at least twice the work per model.
You either need twice as many artists or twice the time.
That's where the budget comes in.

And unless that is the single selling point of your game (like in Doom3), the extra money spent is likely not to come back.
 
Hyp-X said:
Mr. Blue said:
Why would budgets play into it though? DX7.0 had bump-mapping and it STILL has not been a common feature in today's games.

Because someone needs to create content for it.
And bump-mapping means at least twice the work per model.
You either need twice as many artists or twice the time.
That's where the budget comes in.

And unless that is the single selling point of your game (like in Doom3), the extra money spent is likely not to come back.

Hmmm... I'm not understanding this industry then. Are you saying that because it takes more staffing and more time to implement a bump-map, most game companies aren't going to implement it? That can't be true!

Can't a bump-map generator do all the work? You only need to derive it from a regular texture map.

-M
 
Mr. Blue said:
Can't a bump-map generator do all the work? You only need to derive it from a regular texture map.
The Doom3 approach is that the artists create 3D objects at very high poly counts, then a low-poly model and a bump map are generated from that.

However you view the problem, somebody has to take the time to say 'that is bumpy, and the bumps look like this'.

Hmmm... I'm not understanding this industry then. Are you saying that because it takes more staffing and more time to implement a bump-map, that most game companies aren't going to implement it? That can't be true!
Unless the bump mapping will sell more games, it won't get done. On the current PC generation, it's hard to argue that adding bump mapping (or, generalising, complex shaders) will help sales significantly. There has to be a cost/benefit analysis somewhere.
 
Mr. Blue said:
Can't a bump-map generator do all the work? You only need to derive it from a regular texture map.

Do you mean the diffuse texture map used for the model?
That would look pretty bad, as it's very hard to deduce the actual bumps from a diffuse texture (if you know of such a tool, tell me - it would be very useful).

Or do you mean from a different texture?
Then someone has to make that texture...

It's actually even more complicated.
Textures are created with a two-step process: 1) collect resources; 2) mix the texture from the resources.
Step 1 usually involves taking pictures of real objects / buildings, or acquiring such pictures from image libraries. This is very good for the diffuse texture, but how do you take a picture that can be used as a source for bump mapping?
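
Even when you do have usable heights, here's roughly what the 'generator' part amounts to — a minimal Python sketch using central differences to turn a height map into tangent-space normals. The easy bit is below; the hard bit, as above, is getting sensible heights in the first place.

Code:
import math

def height_to_normals(height, scale=1.0):
    # Turn a 2D height map (rows of floats) into per-texel
    # tangent-space normals via central differences.
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences, clamped at the borders.
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            nx, ny, nz = -dx * scale, -dy * scale, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx / length, ny / length, nz / length))
        normals.append(row)
    return normals

print(height_to_normals([[0.0, 0.2], [0.1, 0.5]])[0][0])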
 
Did anyone mention ASHLI in this thread? It lets you compile RenderMan shaders into multipassed DX9 shaders. Now, this isn't production quality or fast enough for 2048x768, but I think we're getting pretty close to Toy Story in real time.

In any case, I think we are really looking for Toy Story quality in real time, and game developers implement essentially equivalent (as far as most people can tell) scenes at far less computation cost. Once we get VS 3.0 (uber buffers are halfway there), we can do many more things on the GPU, like cloth simulation.

I think Toy Story quality will be available in real time within a year or two.
 
Hyp-X said:
It's actually even more complicated.
Textures are created with a two-step process: 1) collect resources; 2) mix the texture from the resources.
Just a snide somewhat OT comment:
You're overcomplicating things. David Braben never visited thousands of star systems to write down their names one by one. So what are you going to do, limit your scope to known stars and our solar system :rolleyes: ?
 