Interview w/ Nvidia guy David Kirk at FS

beyondhelp

Newcomer
For your reading pleasure, this interview w/ David Kirk, Nvidia's Chief Scientist, over at FiringSquad...

http://firingsquad.gamers.com/hardware/kirk_interview/


a short quote...

"FiringSquad: Do you feel that fact that you guys, your hardware came out later -- does that also contribute to the initial performance that’s coming out in terms of the DX9 titles that have been benchmarked with?

Kirk: Yeah, I would say that one of the issues is that since our hardware came out a little bit later some of the developers started to develop with ATI hardware, and that’s the first time that’s happened for a number of years. So if the game is written to run on the other hardware until they go into beta and start doing testing they may have never tried it on our hardware and it used to be the case that the reverse was true and in this case now it’s the other way around. I think that people are finding that although there are some differences there really isn’t a black and white you know this is faster that is slower between the two pieces of hardware, for an equal amount of time invested in the tuning, I think you’ll see higher performance on our hardware."

and...

FiringSquad: Do you feel that in terms of the Half-Life 2 performance numbers that were released recently…do you feel that maybe you guys were, I don’t want to say given a bad rep, but maybe an unfair deal?

Kirk: Well again, not wanting to start a flame war back and forth, my feeling is if they had issues with speed, it’s really not appropriate to say that it doesn’t run at all. (Our question had mentioned this --FS) It’s just that so far in their state of optimization it doesn’t run fast. But after we’ve had a chance to work together on [inaudible] that will be able to provide a very good game experience with Half-Life on the full GeForce FX family. There’s no question in my mind that we’ll get there, it’s just a matter of time.

...Sure, someday... maybe...you just wait, the next Dets will fix everything! The next spin will be Golden! It's just a matter of time.
 
[satire]
FiringSquad: One of the things that ATI has kind of said, or least they were suggesting at Shader Day is the fact that they can do more floating-point operations than you guys can. How would you respond to those types of statements?

Kirk: Well I guess the first response would be of course they would say that. But I don’t really see why you or they would think that they understand our pipeline, because in fact they don't, nobody, not even we understand it. The major issues that cause differing performance between our pipeline and theirs is we’re sensitive to different things in the architecture than they are, like math and stuff, so different aspects of programs that may be fast for us will be slow for them and vice versa. The Shader Day presentation that says they have two or three times the floating point processing that we have is just nonsense, our figures show it to be five times. Why would we do that? [/satire]
 
Famous last words..."...it's just a matter of time." Heh...:) Almost as good as, "Just wait for the next Detonators--you'll see--just wait."
 
Typedef Enum said:
Was it just me or...

Is Kirk's english really that bad?

w0t engrish!???!!!1 :LOL:

It's common practice to sound like an idiot to avoid media castration about a bad product. :LOL: ;)
 
David Kirk said:
Yeah, I would say that one of the issues is that since our hardware came out a little bit later some of the developers started to develop with ATI hardware, and that’s the first time that’s happened for a number of years. So if the game is written to run on the other hardware until they go into beta and start doing testing they may have never tried it on our hardware and it used to be the case that the reverse was true and in this case now it’s the other way around. I think that people are finding that although there are some differences there really isn’t a black and white you know this is faster that is slower between the two pieces of hardware, for an equal amount of time invested in the tuning, I think you’ll see higher performance on our hardware.
He SO doesn't get it, does he?
It's not GFFX vs. Ati, it's GFFX vs. any DX9 capable card.
Games aren't meant to be developed "for Ati" or "for GFFX", but "for DX9".
It's rather a nice coincidence (or NOT) that "optimized for DX9" means almost the same as "optimized for Ati" lately.

Having said that, I'm sure he gets it quite well.
We just can't expect an Nvidia engineer or representative to really speak his mind on something so obvious, making public statements that would spread across the Web like wildfire.
Not gonna happen; PR-FUD is all we've got and all we'll get.

Cheers,
Mac
 
Games aren't meant to be developed "for Ati" or "for GFFX", but "for DX9".
It's rather a nice coincidence (or NOT) that "optimized for DX9" means almost the same as "optimized for Ati" lately.

This is a running theme in all their presentations lately - they state that ATI and NVIDIA have such different shader architectures that they both need their own separate paths. However, they appear to overlook that in all titles released presently, or known upcoming ones, the ATI path is purely the API default path (i.e. HLSL for TR:AoD & HL2, DX9 for 3DMark03, ARB2 for Doom3). Yes, this may be a different path from NVIDIA's, but it's also the default path that any other DX9/OpenGL board will use, so in essence it's only NVIDIA that is requiring special paths.
 
DaveBaumann said:
This is a running theme in all their presentations lately - they state that ATI and NVIDIA have such different shader architectures that they both need their own separate paths. However, they appear to overlook that in all titles released presently, or known upcoming ones, the ATI path is purely the API default path (i.e. HLSL for TR:AoD & HL2, DX9 for 3DMark03, ARB2 for Doom3). Yes, this may be a different path from NVIDIA's, but it's also the default path that any other DX9/OpenGL board will use, so in essence it's only NVIDIA that is requiring special paths.

Jesus Christ on a bicycle! It's back to this "oh poor Nvidia, please take pity on us, the game developers have done something to make us look bad, we don't know why they hate us but they do, and it makes us all cry".

Pathetic, Nvidia, pathetic. Every time someone from Nvidia opens their mouth, it makes me want to change the channel, like I've seen a slimy, lying politician.
 
Bouncing Zabaglione Bros. said:
DaveBaumann said:
This is a running theme in all their presentations lately - they state that ATI and NVIDIA have such different shader architectures that they both need their own separate paths. However, they appear to overlook that in all titles released presently, or known upcoming ones, the ATI path is purely the API default path (i.e. HLSL for TR:AoD & HL2, DX9 for 3DMark03, ARB2 for Doom3). Yes, this may be a different path from NVIDIA's, but it's also the default path that any other DX9/OpenGL board will use, so in essence it's only NVIDIA that is requiring special paths.

Jesus Christ on a bicycle! It's back to this "oh poor Nvidia, please take pity on us, the game developers have done something to make us look bad, we don't know why they hate us but they do, and it makes us all cry".

Pathetic, Nvidia, pathetic. Every time someone from Nvidia opens their mouth, it makes me want to change the channel, like I've seen a slimy, lying politician.

Worse, BZB..... you've seen an NV graphics PR rep ;)

My dad once told me this saying:

"If ten men tell you you're dead... you had better lie down"

Now, being a stubborn git at times, my response was:

"Guess they've seen a ghost"

Now, Nvidia, please tell me why your graphics department and your graphics PR won't believe everyone in the community, but instead just think "_____ ________ have done something to make us look bad, we don't know why they hate us but they do".

Hehe, it is getting kind of amusing though, in a perverse way - just like SCO's recent press releases damning the GPL and Linux... predictable, slanderous and funny :D
 
David Kirk exposed his power-of-two snobbery as he said:
FP24 doesn’t exist anywhere else in the world except on ATI processors and I think it’s a temporary thing. Bytes happens in twos and fours and eights -- they happen in powers of two. They don’t happen in threes and it’s just kind of a funny place to be.
Hmm... perhaps I should remind Mr. Kirk that processors for specialised applications (such as graphics) don't necessarily conform to his arbitrary rules. Intermediate processing sizes occur in applications like DSPs, to give one example - these may process internally at 24 bits (fixed or float) and output at 16, using the larger number of internal bits to retain accuracy.
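
As a rough illustration of that point - a minimal sketch, not anything vendor-specific: the quantize() helper below is a crude mantissa-rounding model, and the 24/10-bit widths are just illustrative stand-ins for "wide internal" and "FP16-sized output":

[code]
import math

def quantize(x, mantissa_bits):
    # Crude model of rounding x to a float with the given number of
    # mantissa bits (ignores exponent range, denormals, etc.).
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

# Accumulate 10,000 small samples two ways: a wide "internal" accumulator
# rounded only at the output, vs. an accumulator rounded to the narrow
# "output" width after every single addition.
wide = narrow = 0.0
for _ in range(10000):
    wide   = quantize(wide + 0.0001, 24)    # wide internal precision
    narrow = quantize(narrow + 0.0001, 10)  # narrow (FP16-sized) accumulator

print(quantize(wide, 10))   # rounded to the narrow width only at the end: ~1.0
print(narrow)               # rounded at every step: falls far short of 1.0
[/code]

The narrow accumulator eventually stalls - each 0.0001 addition rounds straight back to the previous value - while the wide one only gives up accuracy at the final rounding. That's the whole argument for fat internal datapaths in a nutshell.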

So it seems that bytes can happen in threes, contrary to his belief. Amazingly enough, those with an understanding of the graphics industry will know that 24-bit colour depths exist as well and were widely used in the past. If anything, this makes his claim more surprising, since it seems to me that the reliance on bytes coming in twos, fours and eights has more to do with the requirements of external memory than with any internal processing restrictions. Internal hardware data paths don't necessarily conform to this rule, and in fact may not even come in byte-sized elements at all...

The Motorola 68000 only had a 24-bit address bus because this was deemed to be wide enough at the time, and that was quite a highly regarded piece of hardware as I recall.

Still, I guess it sounds good if you're trying to make it appear that ATI have broken some unwritten law and hence dragged the whole industry down kicking and screaming. He might just as easily have said that GF3 and GF4 were 'funny places to be' because as far as I remember they use 10 bits per component in their pixel ALUs (range of 0->2 and 9 fractional bits).

You know who said:
I think what ended up happening was during the course of DX9 development and discussions between the various parties the targeted precision changed several times and we took a snapshot when the precision being discussed was 32 and ATI took a snapshot when the precision was 24
I think what Mr. Kirk recalls here may be a somewhat inaccurate picture of events, but maybe I remember things wrongly.

However, it seems to me that claims of being the guardians and evangelists of higher-precision processing, made by parties whose hardware prefers lower precisions by default, are somewhat suspect.

An apparent fan of arbitrary generalisation said:
FP24 is too much precision for pure color calculations and its not enough precision for geometry, normal vectors or directions or any kind of real arithmetic work like reflections or shadows or anything like that.

"Too much precision for pure colour calculations."

I would have thought that when dealing with real-world colour effects like exposure, where brightness ratios can easily run into the many thousands, you might want quite a lot of precision for colour calculations unless you want to risk banding artifacts.
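
To put some rough numbers on that - a back-of-the-envelope sketch: FP16 and FP32 have 10 and 23 stored mantissa bits, and I'm assuming a 16-bit mantissa for ATI's FP24:

[code]
import math

# Rough spacing between adjacent representable values near a given
# luminance, for a float format with the given number of stored
# mantissa bits.
def step(value, mantissa_bits):
    return 2.0 ** (math.floor(math.log2(value)) - mantissa_bits)

luminance = 1000.0   # a bright HDR value, ~1000x a "1.0" diffuse white
for name, bits in [("FP16", 10), ("FP24", 16), ("FP32", 23)]:
    print(name, step(luminance, bits))
# FP16: 0.5    FP24: ~0.0078    FP32: ~0.00006
[/code]

At that sort of brightness FP16 can only move in steps of 0.5, which is exactly the kind of coarseness that turns into visible banding once the result gets scaled back down to displayable range.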

"Not enough precision for geometry"

Certainly not necessarily the case. It is obviously possible to generate cases where it isn't enough (you can extend this to any arbitrary level of precision, however), but there is plenty of scope for working with geometry creatively within the bounds of 24-bit FP.

"Not enough precision for normals"

Hmm... certainly in many applications normals are frequently specified with less than 24 bits of FP (for example, normal maps might have only 8 bits per component). I don't think there are going to be many complaints about the accuracy of normal calculations in 24 bits for some time.
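
For reference, a minimal sketch of the textbook 0..255 byte to [-1, 1] normal map decode (the exact mapping varies from engine to engine, so treat the details as illustrative):

[code]
# Typical 8-bit normal map decode: each component is stored as an
# unsigned byte and expanded back to [-1, 1] in the shader.
def decode(byte_value):             # byte_value in 0..255
    return byte_value / 255.0 * 2.0 - 1.0

# Step between adjacent encodable component values:
print(decode(128) - decode(127))    # ~0.0078 per step in the source data
[/code]

If the source data is already that coarse, 24-bit FP arithmetic on it is hardly going to be the weak link.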

"Not enough precision for directions"

This would seem to be highly dependent on circumstances, just as with the geometry case. In all these cases you can generate examples where any chosen precision will turn out to be insufficient.

"Not enough precision for shadows"

Funny, we seem to be able to do plenty of shadow calculations within these limitations.

"Not enough precision for any kind of real arithmetic work"

I guess no ALU instructions in pixel shaders should ever use something as inferior as 16-bit FP then? Obviously 16-bit FP must only be good for fake arithmetic, so it seems insane that anybody would ever use it at all - and yet apparently it manages to be really useful at the same time?

Feh!

David Kirk actually made some sense as he said:
... people really want to have predictable precision and predictable results
Which is exactly what I believe ATI hardware gives them - and, up until now, it seems that a certain competitor's hardware may not.

Then I think he went off on one as he said:
I personally think 24-bit is the wrong answer. I think that through a combination of 16 and 32, we can get better results and higher performance
Can we have a timescale on when we will see this?

I would prefer a statement like: "I think that through a combination of 16 and 32, we may manage to be about 30% slower, with lower quality, while clocking our hardware 30% faster. Maybe."

Or perhaps I'm being too harsh - he's entitled to his opinion after all. I personally think that as things stand at the moment he's wrong.

[edit - added reference to GF3/4]
 
I think what ended up happening was during the course of DX9 development and discussions between the various parties the targeted precision changed several times and we took a snapshot when the precision being discussed was 32 and ATI took a snapshot when the precision was 24

That's funny... as far as I recall, NV were touting 32-bit, and since ATI had the 'lowest precision' at 24-bit, that was chosen as the DX9 minimum... if NV had harped on enough about 16-bit back then, they might not be in the mess they are in now :rolleyes:
 
parhelia said:
Games aren't meant to be developed "for Ati" or "for GFFX", but "for DX9".

OK, then why do they encourage developers to optimize games for the GFFX with special codepaths??
'Cause that, and reduced IQ, are the only ways for NV to stay competitive at the moment.
Plus, big companies don't want their games to look "bad", even on inferior hardware, so they make the effort, IF they have enough money/developers/time.
 
FP24 is too much precision for pure color calculations and its not enough precision for geometry, normal vectors or directions or any kind of real arithmetic work like reflections or shadows or anything like that.

I may be wrong, but I thought the R3xx architecture processed geometry at full 32 bits, and that 24 bits was only used in the last part of the pipeline?
 
CorwinB said:
I may be wrong, but I thought the R3xx architecture processed geometry at full 32 bits, and that 24 bits was only used in the last part of the pipeline?

That's correct, but I believe he's referring to generating or modifying geometry using the pixel shader in this case.
 