Quake4: Nvidia 6800 vs ATI Xenos

SubD said:
It means they don't like what you are saying and want you to stop.

And you are? If you "don't like what he's saying", don't read.
Keep in mind that this board is essentially a Microsoft/Windows x86 pc graphics site and the
vast majority of people here come from that world. You aren't going to get a serious discussion of performance problems with the 360 hardware here.

And who the hell are you? Get out of here if you think that no decent discussion will ever come out of this place!
 
It almost seems that after everyone makes a post that makes sense or explains the possibilities, he replies with a question that has no bearing on any current issue.

If I could decipher it, it looks like ihamoitc2005 just posted this question 10 times using different wording:

Does the Xbox 360 suck?
 
For vertex work, the NV40 has a very granular MIMD architecture whilst Xenos has a coarse SIMD architecture. Long shaders and dynamic branching for vertex work 'may' not be as suitable on Xenos...
 
london-boy said:
No. Carmack made the point (many times) that he wanted the shadow to be the same detail as the model's edges. That's why the models were so low-polys, because if they were more "rounded", then the shadows would have to be more complex, which would spiral down the performance.
Personally i think there is more than enough juice to have very good shadows on next gen machines, and Carmack's approach is just wrong for these architectures.

Whether you use the actual visible model for casting shadows or an invisible model is entirely up to the artists. The material system lets you turn shadows off and on, so you just have the visible model's material not cast shadows and have the invisible one cast them. In fact, they actually did this in Doom 3...the shadows are cast by lower-poly models. Doom 3's models were low-poly simply because they were designed to run on much lower-end hardware, not because of shadow casting. Also, Carmack's approach on next-gen machines is to use shadow maps...I don't see what's wrong with that. For Doom 3, on the other hand, stencil shadows are the best option considering the range of hardware the Doom 3 engine can run on. Also, shadow projection is done on the GPU using a vertex program if the card is capable of it...it's not done on the CPU.
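For what it's worth, the material-level shadow switch described above looks roughly like this. `noshadows` is a real idTech 4 material keyword that stops surfaces using the material from casting stencil shadows; the material name and texture paths below are made up for illustration:

```
// Illustrative Doom 3 material declaration (names are hypothetical).
// The invisible shadow-casting mesh would simply use a material
// *without* the noshadows keyword.
models/characters/example/skin
{
	noshadows                                     // visible mesh casts no shadow
	diffusemap  models/characters/example/skin_d.tga
	bumpmap     models/characters/example/skin_local.tga
}
```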


ihamoitc2005 said:
But they say they are targeting 30fps with no improvement in art, and in fact some decrease. I never said porting is easy, and as I posted earlier both ATI and Microsoft are anti-OpenGL, but targeting 25% of expected performance is very poor, no?

Where did they say they're targeting 30fps for the final game? That article can't have been referring to the devs' plans for the release version, because they didn't have final hardware, and they still had yet to optimize the game (which could bring a huge performance increase, considering the engine was designed for the PC)...there's no way to say that performance will only be 30fps before those things are done. Also, the Xbox 360 screenshots released a couple of weeks ago clearly show that the models are the same higher-poly models that the PC version uses. They were simply much earlier in development before...in fact, I suspect that originally the models in the PC version were the lower-poly ones, but they increased the poly count when they realized how much later the game would be coming out. It seems like the recent screens have higher poly counts than I remember the first screens having. So they probably didn't create special content for the Xbox 360 at all...they were just using the older content while the game was unoptimized and running on unfinished hardware.

To sum it up...it's way too early to determine anything from Quake 4 running on the Xbox 360.
 
Gabrobot said:
Also, shadow projection is done on the GPU using a vertex program if the card is capable of it...it's not done on the CPU.

IIRC, it's done totally on the CPU, Carmack got criticized because of that.
 
Apoc said:
IIRC, it's done totally on the CPU, Carmack got criticized because of that.

Then he got unfairly criticized.

r_useShadowVertexProgram: do the shadow projection in the vertex program on capable cards


There are several other optimizations too...people may want to read through Doom 3's command list.
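The cvar quoted above is about the shadow-volume projection step. A minimal sketch of the math it refers to, assuming the standard stencil-shadow trick (this is illustrative Python, not id Software's actual shadow.vp code): shadow-volume vertices carry w=1 (stay on the silhouette) or w=0 (get pushed away from the light to infinity).

```python
def project_shadow_vertex(v, light):
    """Sketch of shadow-volume projection; v and light are (x, y, z, w) tuples.

    If v has w == 1 it stays where it is; if w == 0 it becomes a point
    at infinity in the direction from the light through v, which in
    homogeneous coordinates is (v.xyz - light.xyz, 0).
    """
    x, y, z, w = v
    lx, ly, lz, _ = light
    # Blend w*v + (1-w)*(v - light) = v - (1-w)*light covers both cases,
    # which is why this maps cleanly onto a branchless vertex program.
    return (x - (1 - w) * lx,
            y - (1 - w) * ly,
            z - (1 - w) * lz,
            w)

# A vertex on the occluder's silhouette stays put...
assert project_shadow_vertex((2, 3, 4, 1), (0, 5, 0, 1)) == (2, 3, 4, 1)
# ...while its w=0 copy is extruded away from the light, at infinity.
assert project_shadow_vertex((2, 3, 4, 0), (0, 5, 0, 1)) == (2, -2, 4, 0)
```

Doing this per-vertex on the GPU is cheap; doing it on the CPU for every silhouette every frame is where the "shadows choke my frame rate" complaints come from.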
 
Gabrobot said:
Whether you use the actual visible model for casting shadows or an invisible model is entirely up to the artists. The material system lets you turn shadows off and on, so you just have the visible model's material not cast shadows and have the invisible one cast them. In fact, they actually did this in Doom 3...the shadows are cast by lower-poly models.

I seriously doubt that. Do you have any evidence to support that claim? Especially because Carmack himself has IIRC stated the opposite several times.
 
Yep, so I remembered correctly. Carmack got heavily criticised both for making the CPU do the shadows and for keeping the same poly counts between models and shadows. You seem to be the only one to remember the opposite, Gab.
 
london-boy said:
Yep, so I remembered correctly. Carmack got heavily criticised both for making the CPU do the shadows and for keeping the same poly counts between models and shadows. You seem to be the only one to remember the opposite, Gab.

Thank you for explaining why the shadows in Doom 3 would always choke the crap out of my frame rates. I had always wondered that...
 
london-boy said:
Yep, so I remembered correctly. Carmack got heavily criticised both for making the CPU do the shadows and for keeping the same poly counts between models and shadows. You seem to be the only one to remember the opposite, Gab.

Self shadowing breaks if you don't use the same models.

Cheers
Gubbi
 
Laa-Yosh said:
I seriously doubt that. Do you have any evidence to support that claim? Especially because Carmack himself has IIRC stated the opposite several times.

london-boy said:
Yep, so I remembered correctly. Carmack got heavily criticised both for making the CPU do the shadows and for keeping the same poly counts between models and shadows. You seem to be the only one to remember the opposite, Gab.

I'm not remembering anything...I'm looking at the game assets. ;)

Here are a couple of pictures showing the difference between the normal player model and the shadow-casting model...they're together in the same file, although I've hidden the other model in each picture so they can be seen clearly.

[Attached images: shadow_model_2.jpg, shadow_model_1.jpg]


Looking at the material files also backs this up.

As for the shadow projection: First, I already posted the cvar (and in-game description) used to control whether Doom 3 uses a vertex program to do it on the GPU or not...Second, the vertex program itself is called shadow.vp and is in the glprogs directory.

Sorry, but these are facts. Perhaps you can at least dig up some quotes to back up your claims?

Gubbi said:
Self shadowing breaks if you don't use the same models.

Self-shadowing is turned off for most character models, though, because it tends to look bad when using stencil shadows.


one said:
Is it early to discuss it 2 months before the release? I don't think so. Have you seen this gameplay movie, posted on 9/19?
http://www.xboxyde.com/leech_1686_en.html

Optimizing is done right at the end of development, and considering that the hardware wasn't even done...well, yes that's too early. Final hardware and final game, then you can look at it.
 
Gabrobot is correct on both points. I've actually posted on this board a few times about how D3 does generate SSVs on the GPU if you have appropriate hardware, but people don't seem to pick up on it :shrug: And D3 does indeed use lower-poly models strictly for shadow generation. It doesn't use them for all models, but IIRC the player model does.

And yes, that plays havoc with self-shadowing, which was another reason why self-shadowing is disabled for most models...that, and of course avoiding the whole "binary light/shadow across a rounded bumpmapped mesh" problem.

EDIT: ah, I see Gabrobot covered this in the post above, heh.
 
Mordenkainen said:
Gabrobot is correct on both points. I've actually posted on this board a few times about how D3 does generate SSVs on the GPU if you have appropriate hardware

What is "appropriate hardware"? Anything with SM 2.0?
 
Gabrobot said:
Optimizing is done right at the end of development, and considering that the hardware wasn't even done...well, yes that's too early. Final hardware and final game, then you can look at it.

I'm not even sure the 360 version has a confirmed date... Did I miss it?
 
Gabrobot said:
Here are a couple of pictures showing the difference between the normal player model and the shadow-casting model...they're together in the same file, although I've hidden the other model in each picture so they can be seen clearly.

Well, the low-res model could just be a lower-LOD version...

Looking at the material files also backs this up.

..however this I haven't checked because I don't have D3 right here.


Sorry, but these are facts. Perhaps you can at least dig up some quotes to back up your claims?

Carmack has said this in his B3D interview:
"TruForm is not an option, because the calculated shadow silhouettes would no longer be correct."
But this interview was done well before D3's release.

Self-shadowing is turned off for most character models, though, because it tends to look bad when using stencil shadows.

This may be the reason why you're probably correct after all. They've really disabled self-shadowing, so they don't have to use the same meshes for shadow casting... now why did they do nothing about the pointy heads, then? ;)

I'll try to check the player's cast shadows at some point to see whether the shadow model has individual fingers or not, once I get near D3. But for the time being I guess I have to accept that you're right :)
 
Jaws said:
For vertex work, the NV40 has a very granular MIMD architecture whilst Xenos has a coarse SIMD architecture. Long shaders and dynamic branching for vertex work 'may' not be as suitable on Xenos...

Isn't that what the MEMEXPORT is for? :)
 
Hardknock said:
Isn't that what the MEMEXPORT is for? :)

You'll need to elaborate.

If you're suggesting that off-loading the work to the XeCPU would be more suitable for long vertex shaders and dynamic branching than Xenos, then yes, it may well be. Of course, a US architecture has its advantages, but it's not always 'equally' suitable for vertex and pixel work with the coarse-grained SIMD architecture in Xenos.
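To make the coarse-SIMD point concrete, here's a toy cost model of branch divergence (batch widths and instruction costs are made-up illustrative numbers, not real NV40 or Xenos specs): in a lockstep SIMD batch, if any vertex takes a branch side, the whole batch pays for it, whereas independent MIMD units only pay for the side each vertex actually takes.

```python
def simd_cost(lane_takes_branch, then_cost, else_cost, batch=64):
    """Total instruction slots when vertices run in lockstep batches:
    if *any* lane in a batch takes a side, the whole batch executes it."""
    total = 0
    for i in range(0, len(lane_takes_branch), batch):
        group = lane_takes_branch[i:i + batch]
        if any(group):            # someone takes the 'then' side
            total += then_cost * len(group)
        if not all(group):        # someone takes the 'else' side
            total += else_cost * len(group)
    return total

def mimd_cost(lane_takes_branch, then_cost, else_cost):
    """Independent units only pay for the side each vertex actually takes."""
    return sum(then_cost if t else else_cost for t in lane_takes_branch)

# One divergent vertex per 64-wide batch makes every batch pay both sides.
lanes = [i % 64 == 0 for i in range(256)]
print(simd_cost(lanes, then_cost=10, else_cost=2))  # all 256 pay 10 + 2
print(mimd_cost(lanes, then_cost=10, else_cost=2))  # 4 pay 10, 252 pay 2
```

The finer the branch granularity, the closer the first number gets to the second, which is the intuition behind "long shaders and dynamic branching 'may' not be as suitable on Xenos".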
 