Interview w/ Nvidia guy David Kirk at FS

andypski said:
RussSchultz said:
Heh. The DSP I use at work uses a 24 bit fixed point processor.

And I hope to God it dies soon.

Any particular reason?
It's all about using the right tools for the right job. A 24 bit fixed point DSP is not a good general purpose processor.

24 bit fixed point is terrific for working with audio. Audio processing is only about 10% of what needs to happen in a portable audio device. For the other 90% of what happens, 24 bit registers and non-byte addressability seriously get in the way. Ever written a filesystem on a system that can only address memory in 24 bit words? It makes the baby Jesus cry.
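To give a feel for the pain, here's a minimal sketch (purely hypothetical code, not our actual firmware) of the shim every "byte access" turns into when the hardware only addresses 24-bit words:

```c
#include <stdint.h>

#define BYTES_PER_WORD 3  /* each 24-bit word packs three 8-bit bytes */

/* Read the byte at logical byte offset 'off' from a word-addressed buffer.
 * (Illustrative only -- on the real DSP the words are 24 bits wide; here
 * the low 24 bits of a uint32_t stand in for them.) */
static uint8_t read_byte(const uint32_t *words, uint32_t off)
{
    uint32_t word  = words[off / BYTES_PER_WORD];
    unsigned shift = (2 - (off % BYTES_PER_WORD)) * 8;  /* big-endian packing */
    return (uint8_t)((word >> shift) & 0xFFu);
}

/* Write one byte: a full read-modify-write of the containing 24-bit word. */
static void write_byte(uint32_t *words, uint32_t off, uint8_t value)
{
    uint32_t *word  = &words[off / BYTES_PER_WORD];
    unsigned  shift = (2 - (off % BYTES_PER_WORD)) * 8;
    *word = (*word & ~(0xFFu << shift)) | ((uint32_t)value << shift);
}
```

Every string, header field and directory entry in a filesystem goes through something like that, which is why the 10% the chip is good at doesn't make up for the other 90%.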
 
DaveBaumann said:
Yes, this may be a different path to NVIDIA, but it's also the default path that any other DX9/OpenGL board will use, so in essence it's only NVIDIA that is requiring special paths.


What other DirectX 9 boards are you referring to, Dave?
 
RussSchultz said:
It's all about using the right tools for the right job. A 24 bit fixed point DSP is not a good general purpose processor.
Yup - I guess that's why it's called a DSP, not a CPU :cry:

24 bit fixed point is terrific for working with audio. Audio processing is only about 10% of what needs to happen in a portable audio device. For the other 90% of what happens, 24 bit registers and non-byte addressability seriously get in the way. Ever written a filesystem on a system that can only address memory in 24 bit words? It makes the baby Jesus cry.
That certainly sounds bad, although the problems seem to stem from trying to fit a square peg into a round hole rather than anything else. Unfortunately I'm sure you don't have much choice in the matter, so you have my sincerest sympathies.

24-bit floating point is terrific for working with 3D graphics, but I wouldn't necessarily want to write a filesystem based around it either ;)

- Andy.
 
Thanks for the sympathy.

One good thing about working on such a chip--there's never a paucity of things to complain about and it's easy to get sympathy from other embedded engineers. :D
 
I thought Brandon was, um, polite. That usually never gets us the information we want. But I can understand this.

WRT the interview itself, I can understand some of the things David said (no, really, I do, seriously) but some are purely what I would term "damage control". That ole' "power of two" thing (as andypski brought up) is a familiar one... and, dare I say, a frighteningly familiar one; it was something a couple of 3dfx founders (one now with NV, the other is prolly fishing) kept telling me back in the ole' days.

I have no arguments about David's 24bit-vs-32bit FP comments. However, I don't know all that goes on behind closed doors wrt API vs HW... David's comments about this sound plausible... but they also emphasize one huge fact that permeates the entire interview:

Be first out the door... the sooner, the more damage to the competition.
 
Reverend said:
However, I don't know all that goes on behind closed doors wrt API vs HW... David's comments about this sound plausible...

Yes. I'm sure they were chosen to sound plausible.
 
andypski said:
Reverend said:
However, I don't know all that goes on behind closed doors wrt API vs HW... David's comments about this sound plausible...

Yes. I'm sure they were chosen to sound plausible.
No offence andypski, but you work for ATI, and since I honestly do not know what goes on and perhaps you do (since you work for an IHV), I will read your comments as nothing more than a competitive jab at David (or perhaps even me). Feel free to tell us what you want to say on this aspect (API development, David's words about this matter vis-a-vis HW and API progress).

Of course, I'd already said (quite a number of times) how disappointed I am with the GFFX's performance (in relation to both ATI's offerings and API "specs"). I have to say this because some folks think I'm pro-NVIDIA (then again, some others also think I'm pro-ATI... go figure). Not to mention how much I disagree with some of NVIDIA's conduct. :)
 
Reverend said:
I have no arguments about David's 24bit-vs-32bit FP comments. However, I don't know all that goes on behind closed doors wrt API vs HW... David's comments about this sound plausible...

It has been speculated a number of times now that if the HW (nvidia's) is not in sync with the API, the API is probably to blame.

Just exactly what is being suggested here: that Microsoft chose ATI's implementation over nVidia's? Or that Microsoft was keeping nVidia in the dark about the DX9 specs? Why is it so impossible to think that nVidia just made some decisions on their own, going partly beyond DX9 for the flexibility it gave OpenGL devs, and partly gambling that we wouldn't see any real-time DX9 use so soon? :?
 
Andypski, good post.

Kirk's comments are such BS that it pisses me off. Apparently he's coming to CUTC (Canadian Undergraduate Technology Conference) this year. Maybe I'll have the opportunity to grill him then :LOL:

You're so right about his comments. If ATI's FP is running at full speed, how can you have too much precision? FP16 has what, a 10-bit mantissa (11 effective bits with the implicit one) plus sign and exponent? That's going to run out of precision very easily.
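As a rough back-of-envelope (assuming the usual s10e5 layout for FP16, s16e7 for ATI's FP24 and s23e8 for FP32), the relative rounding error of each format works out like this:

```c
#include <math.h>
#include <stdio.h>

/* Unit roundoff is roughly 2^-p, where p is the significand width including
 * the implicit leading bit.  Assumed layouts: FP16 = s10e5, FP24 = s16e7,
 * FP32 = s23e8. */
int main(void)
{
    const struct { const char *name; int mantissa_bits; } fmt[] = {
        { "FP16", 10 }, { "FP24", 16 }, { "FP32", 23 },
    };
    for (int i = 0; i < 3; i++) {
        double eps = pow(2.0, -(fmt[i].mantissa_bits + 1));
        printf("%s: relative error ~%g\n", fmt[i].name, eps);
    }
    return 0;
}
```

That comes out around 5e-4 for FP16 versus 8e-6 for FP24, which is why FP16 falls apart so quickly once values get large or calculations chain up.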

It's plenty for normal vectors and directions. Just look at the ATI Car demo, which seems to be almost a worst case scenario for normals (I've never seen 8-bit normals look so bad). Heck, the entire front air dam (or whatever it's called) of the F50 is done with normal maps!

As for geometry, we're talking about pixel shaders, fool! The only way you can get pixel shader results back into the geometry pipeline is with VS 3.0 or an experimental extension like GL_ATI_uber_buffers (which AFAIK NVidia doesn't have). He may have a point with shadow maps, but NVidia doesn't support floating point shadow maps anyway (again, AFAIK), so that's pretty much unrelated.

Well, when all is said and done, can you really blame him? I don't think he has much choice about pimping his hardware. Still, I got really fumed when he said:
I think that people are finding that although there are some differences there really isn’t a black and white, you know, this is faster, that is slower, between the two pieces of hardware; for an equal amount of time invested in the tuning, I think you’ll see higher performance on our hardware.

First of all, does he really think developers of TR:AOD spent more time on ATI hardware on a TWIMTBP game? Second of all, who is finding this? People who program in OpenGL, with fixed point shaders, using only NV's shader extension, very few registers, AND no dependent texture reads? :rolleyes: Well, so long as at least 2 people on the planet are doing that, he's not technically lying.

At least when ATI had inferior hardware they priced it lower.
 
Reverend said:
No offence andypski, but you work for ATI and since I honestly do not know what goes on and perhaps you do (since you work for a IHV), I will read your comments as nothing more than a competitive jab at David (or perhaps even me).
None taken - naturally you're free to take my comments how you like. After all, I do work for ATI, just as David Kirk works for nVidia, and there will naturally be some competitive jabs going on.

There are obviously a lot of possibilities as to how things really unfolded - it could be that things remained poorly specified in some way, and each IHV took some sort of 'snapshot' and then ignored what was going on after that, or alternatively it could be that great attention was paid to the chosen precision of the API and that since it is such an important part of the specification it was actually chosen carefully and fixed early on in the process.

Or something else could have happened. :LOL:
 
I think people are getting ready to tie you to a chair and tickle you with goose feathers, demanding "WHAT HAPPENED??!!!" :p ;)
 
cthellis42 said:
I think people are getting ready to tie you to a chair and tickle you with goose feathers, demanding "WHAT HAPPENED??!!!" :p ;)

FWIW, sireric has posted that the PS 2.0 precision choice was made by fall '01, and that the R3x0 did not become an FP24 design until after the decision was made. (Although one might guess ATI was lobbying for FP24, as a minimum precision of FP32 would have resulted in a die size cost on a chip that is already very big for .15u.)

Of course ATI's single-precision pixel pipeline would have been much easier to redesign to support a different default precision than the NV3x pipeline based on three precisions and packed registers.
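To illustrate what "packed registers" means here (a purely hypothetical sketch of the idea, not NVIDIA's actual register file), the same temp slot that holds one full-precision value can instead hold two half-precision ones, which is why the default precision ends up baked so deeply into the design:

```c
#include <stdint.h>

/* Hypothetical illustration of a packed temp-register slot: one slot can be
 * viewed as a single full-precision value or as two packed half-precision
 * values.  Names and layout are made up for illustration only. */
typedef union {
    uint32_t full;     /* one FP32 value occupies the whole slot            */
    uint16_t half[2];  /* ...or two FP16 values share it, halving register  */
                       /* pressure but tying precision into the datapath    */
} packed_temp_slot;
```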
 
Just to follow up on my own reply earlier. :rolleyes:

David Kirk on being the victim of Microsoft said:
I think what ended up happening was during the course of DX9 development and discussions between the various parties the targeted precision changed several times and we took a snapshot when the precision being discussed was 32 and ATI took a snapshot when the precision was 24. In fact DX9 was released without any guidelines as to precision and a clarification was made later and the clarification that was made was very timed to ATI in that it did not make a statement that 24 was not enough.

Sorry, I just don't buy the "we took a snapshot but were unlucky" line. The target precision was discussed, of course, but Microsoft knew well that the IHVs had to have the call early enough to begin design.

Nice spin, Kirk, but the fact is that you chose what you thought was (and still think is) the right thing to do.

Edit: Thanks Dave H, nice link!
 
LeStoffer said:
Sorry, I just don't buy the "we took a snapshot but were unlucky" line. The target precision was discussed, of course, but Microsoft knew well that the IHVs had to have the call early enough to begin design.

Nice spin, Kirk, but the fact is that you chose what you thought was (and still think is) the right thing to do.

Edit: Thanks Dave H, nice link!


Yeah, I agree. It's just not credible that the market leader, having spent hundreds of millions of dollars on R&D, got it wrong because of bad luck. What next, Nvidia will be telling us they make their major decisions on the toss of a coin? "Gee, I never liked the NV30 FXflow, but we played paper-scissors-rock and I lost, so we had to finish building it." :rolleyes:

Pull the other one, Kirk. You just screwed up because Nvidia thought they could pull the market towards low precision and special paths, and screw the agreed API. I guess ATI just "got lucky", eh? :rolleyes:
 