Interview w/ Nvidia guy David Kirk at FS

Reverend said:
No offence andypski, but you work for ATI and since I honestly do not know what goes on and perhaps you do (since you work for an IHV), I will read your comments as nothing more than a competitive jab at David (or perhaps even me). Feel free to tell us what you want to say in this aspect (API development, David's words about this matter vis-a-vis HW and API progress).

So, in other words, you might automatically discount any reasonable, knowledgeable, verifiable, logical comment he might make, simply because he's employed by ATi? You'd consider his comments as no more than "competitive jabs," otherwise without substance? (Why would you think he feels he's competing with you, I wonder?)

You automatically suspect him of bias it seems, without considering the content of his remarks. Everyone here knows he works for ATi, and andypski knows that they know that, so what dividends would unsupportable bias pay him out here in this forum? I can't think he'd imagine there would be any.

Come on, Rev--that's a bit paranoid, don't you think? Unlike Kirk at Firingsquad, andypski is not using this forum as a PR platform to represent the interests of his employer. I think it's barely possible andypski might be simply explaining why he disagrees with Kirk's PR statements at Firingsquad. At least, that's how I interpret his remarks. I hardly think working for ATi means he has to check his brain at the door.

Andypski is not preening to the huddled masses, or saying generically meaningless things to appear authoritative, or using techno-speak gibberish to underscore the illusion of making technical commentary. He's simply saying, here on this forum, what he thinks of Kirk's unabashed and unashamed pandering to the PR camera. I judge the comments of neither person by the jobs they hold or the companies which employ them--I judge them on the merit of their comments. There was a lot to appreciate in andypski's remarks, I thought. There was also a lot I could agree with based on my own experience.

Of course, I'd already said (quite a number of times) how disappointed I am with the GFFX's performance (in relation to both ATI's offerings and API "specs"). I have to say this because some folks think I'm pro-NVIDIA (then again, some others also think I'm pro-ATI... go figure). Not to mention how much I disagree with some of NVIDIA's conduct. :)

I certainly don't believe you to be pro-anybody, any more than the rest of us are. If any of us are "pro" anything, it's "pro good 3d products," isn't it? Nothing wrong, or biased, about being "pro"-product X when product X is clearly a good product. I believe it to be unwise to discount the comments of anyone merely on the basis of his employment.

I was somewhat surprised to see you react so viscerally to andypski's perfectly credible comments simply on the basis of your assumptions as to how his employment affects his personal conduct here in the forum. I see nothing in what he said to justify such suspicions, and I don't think that criticizing his remarks on the basis of his employer is, well, cricket. I mean, if you wanted to address his remarks directly because you disagree with their basis in fact, that's one thing. But to dismiss his comments out of hand because he works for ATi...well, at the least you should think about that again.

Edit: spelling
 
I think the Rev probably was just trying to nudge me into being more specific instead of (as he apparently saw it) using innuendo or veiled competitive jabs, or perhaps he just didn't like the tone of my original response (which I intended to be somewhat humorous as well as pointed).

Anyway, no harm done.

I had already responded to many of Mr. Kirk's points specifically, so I saw no need to expand on things much further at the time. I just find it hard to credit the notion that the two major IHVs in some way took 'snapshots' of the DX specification and one of them got lucky. I mean, would you leave something like that to chance, or would you pay close attention to how things were developing at all times?

If there are any other parts of my position you want me to expand upon Rev then let me know.
 
Dave H said:
FWIW, sireric has posted that the PS 2.0 precision choice was made by fall '01, and that the R3x0 did not become an FP24 design until after the decision was made. (Although one might guess ATI was lobbying for FP24, as a minimum precision of FP32 would have resulted in a die size cost on a chip that is already very big for .15u.)

Of course ATI's single-precision pixel pipeline would have been much easier to redesign to support a different default precision than the NV3x pipeline based on three precisions and packed registers.

Yes, when you think about it, designing your next chip around capabilities you know will be needed is a fairly logical approach, isn't it?

What I can't understand about your comments is your hypothetical premise that nVidia didn't know about fp24 in time to do anything about it relative to nV3x. I've seen no effort made to verify that insinuation. It seems just as likely to me that nVidia knew about it in plenty of time, but simply chose to go its own way with fp16/32, instead. Now, I can't prove that, either...:) But it seems to me of equal weight with your assumption.

IMO, it seems probable to me that nVidia decided it would rather do fp16 than fp24, after learning the spec and correctly assuming ATi would follow it. Doing this (my assumption goes) nVidia felt it would be able to shoehorn fp16 into the API later on, and fp16 would provide it with a competitive advantage over ATi since--not understanding anything else about the likely prowess of R300--nVidia assumed it could run fp16 faster than ATi could run fp24. And that it could run fx12 even faster. This would also explain the subpar support in the chip for fp32 performance capable of actually being used in 3d games. So, why *did* nVidia go fp16/32 as opposed to fp24, as the spec called for? I think it was because they knew that technically fp32 would satisfy the fp24 DX9 requirement, but what they would actually *use* would be fp16/fx12 for 3d games.

Indeed, as I recall the first nV3x drivers didn't even support fp32. Hence the reason for the dual-precision fp pipeline, from nVidia's point of view. A case of the fox outfoxing himself, I think. When we consider that nVidia designed a chip with an integer pipeline and an fp pipeline, I think we can safely assume that what they considered relevant for nV30 and 3d gaming would be fx12 and fp16, and probably in that order of importance. fp32 exists in the chip to satisfy the API requirement--but for little else, apparently, as otherwise fp16 and fx12 wouldn't have been needed. (As they were not needed in R300 because ATi planned to *use* its fp pipeline for everything from the start, at the specified level of precision, and would handle standard integer through the same pipeline.)

The one often unmentioned facet to knowing the API specifications so far in advance is, that if you're clever, you might succeed in using them as a *weapon* to overcome your competition--provided you can work around them in some respect. This is what I think happened, and is why nVidia went with fx12 and fp16/32 instead of fp24. It was believed, based on nVidia's advance knowledge of the specs, and nVidia's belief that ATi and other competitors (not necessarily just ATi) would follow those specs, that it would be to nVidia's competitive advantage to circumvent the specs, if possible, so as to provide it with what it believed would be an inherent performance advantage over any and all "DX9" competition. In short, fp32 doesn't have the requisite gpu support it needs to run 3d gaming competitively, because nVidia did not include fp32 for that purpose.
 
andypski said:
....I just find it hard to credit the notion that the two major IHVs in some way took 'snapshots' of the DX specification and one of them got lucky. I mean, would you leave something like that to chance, or would you pay close attention to how things were developing at all times?

Well, I think this is the line cooked up by the conspiracy buffs who simply can't accept that nVidia has been so thoroughly bested this year. To them, it doesn't follow that ATi might ever leapfrog nVidia in the fashion we've observed, and therefore something is rotten in Denmark...:) At least, that's my take on it. By fomenting the idea that "neither company knew the specs" prior to committing hundreds of millions of dollars to their respective chip design processes, they're saying "Isn't it funny how ATi just happened to get lucky and hit everything on the head?" The logic is further extended to the idea that M$ conspired with ATi and deliberately *changed* the specs at a point in time too late to save poor nVidia, and changed them to exactly mirror ATi's "lucky" DX9 architecture. The theory goes that, as a reward and an inducement to actively work the conspiracy along with M$, ATi got not only the API, but also the xBox2 contract--thus helping to rid M$ of the pesky nVidia Corporation once and for all. I think that's basically how it goes...:)

Anyway, this is all so convoluted that it's amazing that people don't consider what is much more likely IMO--that nVidia deliberately thumbed its nose at the DX9 specs and decided on an independent path designed to provide it with what it believed at the time would be a competitive advantage over all other "DX9" chips likely to be produced--as I outlined in the post above and so won't repeat here. Considering that fp24 was one of the few things nVidia could be sure its competitors would likely be doing for their DX9 chips--knowledge that nVidia had early on just like ATi--but that nVidia had *no way of knowing anything else* about likely architectures its competitors might field--it seems an entirely rational thing for a company like nVidia--ever paranoid about its position in the market--to undertake. nVidia wanted to be sure of a performance advantage--and bypassing fp24 with a nod in the chip to fp32, while planning all along to use fx12 and fp16 for its "DX9" 3d-gaming support--was a tactic it felt would assure it of ultimate supremacy in its markets.

Also, since the xBox2 contract was coming up for consideration anyway, M$ did not need ATi's help in denying nVidia the xBox2 contract. It just isn't logical to assume that M$ would have ever gone with what it considered to be an inferior technology for xBox2, regardless of what company was tapped to provide xBox2 technology.

I think that basically nVidia underestimated its competitors, overestimated its own influence, and frankly just underplanned its nV3x technology such that it wasn't competitive with what its competitors produced for DX9. No conspiracy--just faulty judgement on the part of nVidia. That's my theory, anyway...
 
andypski said:
I think the Rev probably was just trying to nudge me into being more specific instead of (as he apparently saw it) using innuendo or veiled competitive jabs, or perhaps he just didn't like the tone of my original response (which I intended to be somewhat humorous as well as pointed).

Anyway, no harm done.

I had already responded to many of Mr. Kirk's points specifically, so I saw no need to expand on things much further at the time. I just find it hard to credit the notion that the two major IHVs in some way took 'snapshots' of the DX specification and one of them got lucky. I mean, would you leave something like that to chance, or would you pay close attention to how things were developing at all times?

If there are any other parts of my position you want me to expand upon Rev then let me know.
I hope I didn't appear "aggressive" to you, andypski. Regardless of the way an IHV employee expresses his thoughts (forums, interviews), I tend to take extra care in reading their words. It's just a natural thing for me, considering how often I email IHV personnel, coupled with the fact that I write for a web-based media outlet (B3D).

It would be helpful if Brandon could pop in here and tell us just how long it took David to answer each of Brandon's questions. Since it was a telephone interview (as opposed to an email interview), I'm wondering if Brian Burke was listening in on the interview as well, and whether the whole interview was agreed to on the premise that answers would come a couple of minutes after each question was asked. :)

You (and almost all of the ATI personnel who participate here) have conducted yourselves in a very commendable manner so far. This hasn't changed, and I certainly don't want to appear like I naturally "distrust" postings by IHV personnel. It (my posts here) was just one of the reactions upon reading your "... and was chosen to sound plausible." comment (which you didn't expand upon).
 
Yeah, I would say that one of the issues is that since our hardware came out a little bit later some of the developers started to develop with ATI hardware, and that’s the first time that’s happened for a number of years. So if the game is written to run on the other hardware until they go into beta and start doing testing they may have never tried it on our hardware and it used to be the case that the reverse was true and in this case now it’s the other way around. I think that people are finding that although there are some differences there really isn’t a black and white you know this is faster that is slower between the two pieces of hardware, for an equal amount of time invested in the tuning, I think you’ll see higher performance on our hardware.

So how does this pan out with regards to Dawn?
 
What exactly is the job description for Nvidia's chief scientist?

Is he more like a spokesperson or administrator?
It is a shame FiringSquad didn't ask about how the Dawn demo ran so well on ATI hardware.
 
Hi guys, sorry for the delay in getting on here, I've been rather busy the past few days. You guys have a great thread going here, with lots of insightful comments and observations, an A+ read. I hope the interview didn't come off as too PR'ish, but yeah, there are some pretty obvious examples in there that you guys quickly picked up on. Hopefully there was a little something useful in there for everyone, at least the part on Cg (which, to my knowledge, wasn't common information).

Anyway, Dr. Kirk answered all of my questions very promptly, I would say the average delay was nothing more than a few seconds for all of the questions. There were a lot of uhms and stumbles from both of us, I tried to edit as much of it out as I could but I will admit that it still came out pretty raw grammatically. Personally I prefer these types of interviews over the phone rather than through email, as the interviewee doesn't have time to prepare those nicely thought-out PR-type statements you see all the time. Unfortunately I didn't have enough time to prepare any formal questions; I was literally winging it because I had been busy with Dave B. discussing something else (I haven't seen you online since, Dave! :LOL: ) so I'll be the first to admit that I missed some things.

As far as your second question Rev, yes, BB was sitting in the background, although he never said a word until the end of the interview, when he basically asked Kirk if there was anything else he'd like to add to the piece; Kirk replied "no," so I turned off my tape recorder. We chatted briefly about the decision to go with FP32 for another minute or so, then it was over. And no, this isn't like a California gubernatorial debate, Kirk wasn't given the questions in advance ;) nor was he given 30 seconds or whatever to think of a response to each previous question. I have it all on tape as well...
 
digitalwanderer said:
The Dig pulls out a couple of goose feathers and some rope.

Anyone got a chair?
/me is worried about the fact that DigitalWanderer hasn't got a chair, yet does have some goose feathers and rope nearby. :oops:
 
Simon F said:
/me is worried about the fact that DigitalWanderer hasn't got a chair, yet does have some goose feathers and rope nearby. :oops:

His weekend "excesses" sometimes stretch out until Tuesday. ;)
 
Brandon said:
I hope the interview didn't come off as too PR'ish, but yeah there are some pretty obvious examples in there that you guys quickly picked up on.

I thought it was a very professional interview. You asked some tough questions. There is nothing you can do about the fluff answers. Good job.
 
Reverend to andypski said:
....
You (and almost all of the ATI personnel who participate here) have conducted yourselves in a very commendable manner so far. This hasn't changed, and I certainly don't want to appear like I naturally "distrust" postings by IHV personnel. It (my posts here) was just one of the reactions upon reading your "... and was chosen to sound plausible." comment (which you didn't expand upon).

I was surprised because that didn't sound like you! Anyway, thanks for clarifying it above. I appreciate your response, Rev.

About the "....was chosen to sound plausible" remark, I completely agreed with it, and thought it a rather obvious observation myself, based on the text of Kirk's interview. Although PR people of this "stripe" (and I don't necessarily mean that prejudicially, as I'll explain a couple of paragraphs down) may instantly respond to the questions asked, it does not mean those responses will appropriately address the questions, or that they will contain meaningful or accurate information. What is often the case, in my experience, is that PR people build up, over time, a large intellectual portfolio of "automated responses" which are triggered by certain questions. Heh...:) They have an entire psychological superstructure in place consisting of rationalizations, half-truths, evasions, insinuations, and even at times calumny, which, in a rehearsed and practiced fashion, they draw upon almost mechanically when "answering questions" posed by an interviewer in a professional, work-related context. Some PR people operate this way. The really best PR people are the ones who know how to tell the unvarnished truth and make it sound completely beneficial at the same time, and have enough basic knowledge about the subjects they address so that they can properly synthesize the information they receive from inside the company concerning those subjects.

Sometimes, too, when people in PR are not directly involved in the technological design/production work of the company, or else have limited/past experience in the field or a related one, they will often approach the real "movers and shakers" in a company (generally invisible to the public), for answers to questions they consider pertinent to their PR work. I mean to say that sometimes the desire of these people to learn and understand is genuine, and the last thing they expect is that they themselves will be manipulated by the people in the company who have the answers they need. But unfortunately, this happens. Thus, PR people themselves may dispense information publicly which they have every confidence in, because they trust the people in the company from whom they obtained this information, but may wind up dispensing falsehoods without even being aware that they are doing so at the time.

So, when the phrase..."was chosen to sound plausible" is used, I can instantly understand it. Much of PR is "chosen to sound plausible," but the key is in determining what actually is plausible in whatever PR propaganda one might be exposed to. Only with experience in the related fields can one expect to be able to do this, though.

Not all PR is necessarily propaganda. Good PR is recognizable by its lack of inflammatory spin, IMO. "Bad" PR, or "gutter PR," as I define it, seeks to capitalize on the ignorance of its intended targets, and to misrepresent evident truths, to spin them, such that only the technologically inexperienced might be swayed by the "arguments" presented. Bad PR seeks essentially to deceive. Good PR educates its market even as it persuades its market. Good PR *never* condescends, but bad PR almost always does. The best PR is based wholly on the truth. Of course, it goes without saying that this is my personal attitude on the topic, and I wouldn't pretend to speak for anyone else.

To address one specific of Kirk's--the, in my opinion, meaningless drivel about "24-bit fp pipelines being based on an incompatible mathematical progression," or whatever he said--that isn't a technological statement. Most obviously, of course, R3x0 proves the statement devoid of foundation. It might be valid in numerology or astrology somehow, but it certainly isn't a technical statement of any kind as it relates to fp pipeline precision. Technically, at best it's gibberish, and at worst it's calumny. To that end this kind of statement is, IMO, "gutter PR." As well, it does nothing except completely undermine Kirk's designated title as "chief scientist" at nVidia, as far as I am concerned. It merely underscores the proposition that his title as such is a complete affectation designed to elicit attention from a PR standpoint. People tend to listen more carefully to a "chief scientist" than to a "Vice-President of Public Relations."

Kirk said:
I think that’s a very interesting question. If you go back to how DX9 was developed it wasn’t clear as we were doing our development and Microsoft was doing their development and ATI was doing their development what the target precision was going to be for DirectX. If you look at processors FP24 doesn’t exist anywhere else in the world except on ATI processors and I think it’s a temporary thing. Bytes happens in twos and fours and eights -- they happen in powers of two. They don’t happen in threes and it’s just kind of a funny place to be.

FP24 is too much precision for pure color calculations and it’s not enough precision for geometry, normal vectors or directions or any kind of real arithmetic work like reflections or shadows or anything like that.

I think what ended up happening was during the course of DX9 development and discussions between the various parties the targeted precision changed several times and we took a snapshot when the precision being discussed was 32 and ATI took a snapshot when the precision was 24. In fact DX9 was released without any guidelines as to precision and a clarification was made later and the clarification that was made was very kind to ATI in that it did not make a statement that 24 was not enough.

Complete nonsense, of course, in every respect. First of all, you don't invest $400 million in the development of an architecture based on mere "snapshots"--at least, no company with a brain in its corporate noggin would commit without a clear picture of exactly where it wanted to go and what it wanted to support. What a ridiculous concept. And of course, Kirk never explains the value of fx12 or fp16 in the nV3x architecture, which could be criticized in exactly the same fashion, nor does he bother to point out that using fp32 in nV3x results in extremely uncompetitive 3d performance. Strangely enough, his comments above pretend that the only precision in the nV3x pipeline is fp32. Which is kind of interesting, considering the question was about fp32 in the first place--gosh, based on his evaluation of fp24 as stated above, I guess Kirk must believe that fp32 has way too much color precision, then, and that fp16, which, predictably, he doesn't mention here to avoid the contradiction, has "far too little" precision for geometry...:) Heh...:) Sounds like a pretty good reason to me to skip both fp16 and fp32...:)
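To put rough numbers on the precision argument, here's a quick back-of-the-envelope sketch (Python). The fp16 and fp32 layouts are the standard s10e5 and s23e8; the 1/7/16 split shown for fp24 is the layout commonly reported for R3x0, so treat that line as an assumption rather than a spec quote:

```python
# Rough comparison of the three shader precisions under discussion.
# fp16/fp32 are the familiar IEEE-style layouts; the fp24 split
# (1 sign / 7 exponent / 16 mantissa) is the layout commonly
# reported for R3x0 -- an assumption, not an official figure.
formats = {
    "fp16": (5, 10),
    "fp24": (7, 16),  # assumed R3x0 layout
    "fp32": (8, 23),
}

for name, (exp_bits, man_bits) in formats.items():
    # Relative precision is set by the mantissa width: adjacent
    # representable values differ by roughly 2^-mantissa_bits.
    rel_step = 2.0 ** -man_bits
    # Dynamic range is set by the exponent width.
    max_exp = 2 ** (exp_bits - 1) - 1
    print(f"{name}: ~{man_bits + 1} significant bits, "
          f"relative step ~{rel_step:.1e}, range up to ~2^{max_exp}")
```

Run that and fp24 lands almost exactly midway between the other two in significand width--which is precisely the middle ground the "bytes happen in powers of two" argument waves away.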

The following statement I thought was very telling:

Certainly one of the choices that Microsoft could have made is that it has to be 32 or nothing. They could have also made the choice that it has to be 16 or nothing.

I translate this as nothing less than: "Microsoft should have let us determine the DX9 specs instead of brazenly thinking they could actually manage the API themselves." Apparently, Kirk also believes that M$ itself is "numerologically challenged" with respect to the practicality of a 24-bit fp pipeline. The bottom line for Kirk is that everyone else is wrong, and only nVidia is "right." It's really sad to see the "chief scientist" of a company making superstitious statements on the value of undefined numerological conventions as applied to fp pipeline precision in a 3d chip (not that defining them would make his statements more valid, heh...:)).

I personally think 24-bit is the wrong answer. I think that through a combination of 16 and 32, we can get better results and higher performance.

So, nVidia's "chief scientist" thinks that 24 bits of fp precision is the "wrong answer"; yet ironically, the R3x0 with fp24 outperforms nV3x in either fp16 or fp32. Maybe that's why ATi thought it was the "right" answer--not to mention M$, of course? I'm not sure Kirk has reasoned it out that far--at least it's not apparent from his remarks. Apparently, he is unable to digest this fact from a scientific viewpoint, although it is in evidence all over the Internet and is probably apparent in the R3x0 products nVidia has bought to study in its labs.

From a numerological/astrological position, it's interesting to note that when you combine 16 + 32 and average them, the result is 24, which just so happens to correspond to R3x0's level of fp precision. So is Kirk unknowingly contradicting his own numerological premise? Heh...:)

This was amusing:

The major issues that cause differing performance between our pipeline and theirs is we’re sensitive to different things in the architecture than they are so different aspects of programs that may be fast for us will be slow for them and vice versa. The Shader Day presentation that says they have two or three times the floating point processing that we have is just nonsense. Why would we do that?

I thought the "sensitive" comment was pretty amusing--and notable, of course, for the amount of specific detail the "chief scientist" supplies to support his proposition. Well, I guess admitting to designing chips based on numerology would be pretty embarrassing--so I guess that's why he doesn't feel the need to supply credible examples to back up his statements. As well, I thought it was pretty funny that he talks about "Shader Day" without realizing that "fp precision" might possibly be construed as a subject other than shaders--but that's only a "scientific" distinction, certainly. I really liked his "Why would we do that?" remark. Why would you do what, DK? Why would you design and ship a much, much slower chip than ATi? Well, obviously, don't you think it might have something to do with the fact that you had nothing to do with what ATi shipped? It's really strange to hear that nVidia apparently believes it has some sort of direct control over the products its competitors ship, because the only possible inference that can be drawn from this question is, "Why would we allow ATi to get that far ahead of us?" Heh...:) As though it was up to nVidia in the first place. I have no idea why, DK. Presumably you do, though, so how's about you let us in on it?

Next, Kirk speaks scientifically:

Well one example is if you’re doing geometric calculations with reflections or transparencies and you need to do trigonometric functions. Our sine and cosine takes two cycles theirs takes eight cycles, or seven cycles I guess. Another example is if you’re doing dependent texture reads where you use the result of one texture lookup to lookup another one. There’s a much longer idle time on the pipeline than there is in ours. So it just depends on the specific shader and I feel that for the calculations I mentioned are pretty important for effects and advanced material shaders and the types of materials that people use to make realistic movie effects. So they will get used as developers get more used to programmable GPUs and we’ll have less of a performance issue with those kinds of effects.

"So it depends on the specific shader..." Really, who'd a thunk it? Who'd a thunk that nV3x has a very hard time with ps2.0? Gosh, and I guess if your desire is to forego use of nV3x as a DX9 3d chip, and to pretend that R3x0 isn't designed for 3d, then things like trigonometric function cycle time might be of interest, I suppose--if you want to use the vpus as cpus, maybe, and *all* you need to do is run sine and cosine functions, etc. But really he answers himself here thusly:

...and I feel that for the calculations I mentioned are pretty important for effects and advanced material shaders and the types of materials that people use to make realistic movie effects. So they will get used as developers get more used to programmable GPUs and we’ll have less of a performance issue with those kinds of effects.

Emphasis mine. Ok, so, yes, "where nVidia is better than ATi" it's not of value to 3d-gaming support but, according to what the "chief scientist" says here, will impact "people who want to make realistic movie effects"--with the gotcha that this isn't something immediately apparent from the superiority of nV3x's trigonometric function processing right now, but something that will come in time, dependent on the ability of "developers" to "get more used to" programmable gpus. I really wish he'd make some sort of scientific distinction between which kind of "developers" he's talking about--3d game developers, or developers who write software to "make realistic movie effects." He doesn't seem to appreciate a difference.
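For anyone unfamiliar with the dependent texture read Kirk mentions above, here's a minimal sketch of the access pattern (Python with NumPy; the array names and sizes are made up for illustration). The point is just that the second fetch can't even be addressed until the first one returns, which is where the extra pipeline latency he alludes to comes from:

```python
import numpy as np

# Hypothetical "textures", for illustration only.
tex_coords = np.random.rand(64, 64, 2).astype(np.float32)    # first lookup yields a (u, v) pair
tex_color = np.random.rand(256, 256, 3).astype(np.float32)   # second lookup yields a color

def dependent_read(x, y):
    # First fetch: read a coordinate pair out of one texture.
    u, v = tex_coords[y, x]
    # Second fetch: its address depends on the first result, so the
    # two lookups serialize instead of overlapping -- the latency
    # cost under discussion.
    bx = int(u * (tex_color.shape[1] - 1))
    by = int(v * (tex_color.shape[0] - 1))
    return tex_color[by, bx]

print(dependent_read(10, 20))
```

How expensive that serialization is depends entirely on how well a given pipeline hides the latency, which is the real substance behind the "sensitive to different things" remark.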

Yeah, I would say that one of the issues is that since our hardware came out a little bit later some of the developers started to develop with ATI hardware, and that’s the first time that’s happened for a number of years. So if the game is written to run on the other hardware until they go into beta and start doing testing they may have never tried it on our hardware and it used to be the case that the reverse was true and in this case now it’s the other way around. I think that people are finding that although there are some differences there really isn’t a black and white, you know this is faster that is slower between the two pieces of hardware, for an equal amount of time invested in the tuning, I think you’ll see higher performance on our hardware.

For some reason, the "chief scientist" at nVidia doesn't understand API support in a 3d game, and API support in a 3d-card's drivers, and how that card's drivers are supposed to bridge the gap between the API support in the game code and the API feature support found in the 3d hardware. Reading this, one might think that APIs simply do not exist, and that what developers really do is to custom-program support paths which are different for everybody's hardware, and that developers can't really do that "until they go into beta." Presumably, everybody's still "in beta" at present--even the shipping DX9 titles, I guess. Yep, nothing like setting us back 8 years, is there, DK?


I really don't understand his comments. Is he saying that he thinks the development state of HL2 is in the pre-beta stage, and that when Valve "goes into beta" they'll discover....what, exactly? I mean, for him to sit there and say that "I think people are finding out..." something or other about there being no difference between nV3x and R3x0, is remarkable in its appalling ignorance of current events.

After Valve's recent presentation--where it wasn't a matter of "equal time," but a matter of Valve spending 500% more time creating an nV3x-specific code path in the software than it spent creating the DX9 code path for the game (suitable for all DX9-compliant hardware, including nV3x/R3x0)--it's strangely negligent of Kirk to pretend that "people are finding out" anything more than that doing a vendor-specific, mixed-mode code path for nV3x is a waste of time and money, and that the nV3x architecture performs best under a generic DX8.x code path. Valve has a certain amount of stature here that I think it would be foolish of Kirk to ignore. But if Kirk and nVidia won't listen to M$ and developers like Valve, I suppose it is foregone that they won't listen to anybody.

This post is far too long so I'll end it with some positive, heartfelt advice for DK:

Rid yourself and the company culture there of this obsession you've got with frame-rate numbers in benchmarks. It has blinded you to everything your potential market has been telling you all year long: We Want Image Quality. Burn that into your psyches. Stop "optimizing shaders" at the expense of IQ. Rip out all the optimizations in your drivers that prevent the consumers of your products from getting things like full trilinear support and full AF support. Give them the IQ features they want, and worry about frame rates *later.* You are now doing exactly the opposite of what your market wants. You will reap what you sow, in other words.

Face the truth that everybody knows that R3x0 is superior to nV3x. No amount of posturing, pomposity, bravado, bluffing, equivocating, and prevaricating will change that. Face that and move on to something you can control and you can improve in your own products right now: Image Quality. You can do a lot to improve that.

The way nVidia has conducted its affairs all year long has provided people with the firm conviction that buying nVidia 3d products means you get less IQ AND less performance. Since you are limited by the current architecture to taking second place in the performance department--regardless of how you strip out IQ in order to attempt performance parity with R3x0--I advise you to reverse your present course with regard to frame-rate performance and instead attempt to beat R3x0 in terms of image quality--if that's possible. Granted, you know nV3x much better than I do, but your comments indicate you appreciate little to nothing about R3x0 and that you are completely out of touch with your market. It could well be that you already know you cannot beat R3x0 in either image quality or performance, and that knowledge would explain much of your company's present conduct.

Nevertheless, your customers are demanding IQ--not your definition of it--their definition. I would strongly suggest you start listening. No one wants nVidia out of the race--I know that I don't. But by the same token if you continue in the fashion you've become accustomed to all year long, few will eventually care if you withdraw from the 3d market. Consumers have an ingrained habit of polarizing around the companies which make a concerted effort to meet their demands.

ATi reshuffled itself from top to bottom and has clobbered you. If you guys continue to be slow on the uptake that it's no longer Business As Usual, and do nothing to address the needs of your markets other than to continue in your present path, I'm sure the clobbering will get a lot worse for you before it gets better--if indeed it ever does.
 
Walt C said:
What is often the case, in my experience, is that PR people build up, over time, a large intellectual portfolio of "automated responses" which are triggered by certain questions. Heh... They have an entire psychological superstructure in place consisting of rationalizations, half-truths, evasions, insinuations, and even at times calumny, which, in a rehearsed and practiced fashion, they draw upon almost mechanically when "answering questions" posed by an interviewer in a professional, work-related context. Some PR people operate this way. The really best PR people are the ones who know how to tell the unvarnished truth and make it sound completely beneficial at the same time, and have enough basic knowledge about the subjects they address so that they can properly synthesize the information they receive from inside the company concerning those subjects.

Sounds like every politician I know.
 
For immediate release

It has come to our attention that some people are busy examining the frank and honest answers provided by our chief scientist and world-class 3D expert David Kirk. We frankly wish they would stop doing that, as here at Nvidia we strongly believe that what David Kirk says is exactly sounding like truth provided he says it fast enough. We also believe that the FiringSquad should not have released any written text from David Kirk's interview, as his speech is still currently in beta and should not be analysed by customers (but you can mention how fast he speaks, though).

We don't know exactly what Brandon did with his questions, but it seems he did something to make our beloved chief scientist look bad. Our 456 654 651 654 132 746 874 654 657 613 216 546 540 customers all personally know and trust David Kirk, and all of them have offered him the hand of their first-born daughter as well as the key to their house, so that means something about how reliable and honest our bleeding-edge, worldwide known industry-leader and award-winning chief scientist is.

In fact, Nvidia will soon release speaking pins of David Kirk saying "I personally think 24-bit is the wrong answer.". Due to some firmware problems, the pins are currently saying "I nally nk 2 it s e ong wer", but it's much faster this way, and future updates to the pins will restore the full sentence without affecting performance. We expect PeRformance enthusiasts all around the world to wear those pins as badges of honor.

Nvidia is a global cheater in the communication age, and our goal is to "deface every pixel on the planet".
 
nelg said:
Sounds like every politician I know.

Good point...I look at my post above and marvel at the waste of time...what's the point? nVidia's not listening, and it really doesn't matter anymore...
 
I thought it had been established a while back that Nvidia was trying to leverage its market dominance to force the industry into an Nvidia-proprietary direction. Cg and NV30 are all part of the same plan. The problem is such a strategy adds a new competitor to your business: Microsoft. Dumb move, Nvidia.

(Certain statements in this post, including any statements relating to NVIDIA's motives, business strategies, and whether they are a pack of lying *****'s, are forward-thinking statements that are subject to risks and uncertainties that could cause the truth to be materially different than suggested. )
 
<laughs> I'm EXTREMELY glad I've stopped drinking anything when I notice one of those posts from you, Corwin. :LOL: :LOL: :LOL:
 
WaltC said:
Kirk said:
I think that’s a very interesting question. If you go back to how DX9 was developed it wasn’t clear as we were doing our development and Microsoft was doing their development and ATI was doing their development what the target precision was going to be for DirectX. If you look at processors FP24 doesn’t exist anywhere else in the world except on ATI processors and I think it’s a temporary thing. Bytes happens in twos and fours and eights -- they happen in powers of two. They don’t happen in threes and it’s just kind of a funny place to be.

FP24 is too much precision for pure color calculations and it’s not enough precision for geometry, normal vectors or directions or any kind of real arithmetic work like reflections or shadows or anything like that.

I think what ended up happening was during the course of DX9 development and discussions between the various parties the targeted precision changed several times and we took a snapshot when the precision being discussed was 32 and ATI took a snapshot when the precision was 24. In fact DX9 was released without any guidelines as to precision and a clarification was made later and the clarification that was made was very kind to ATI in that it did not make a statement that 24 was not enough.

Complete nonsense, of course, in every respect. First of all, you don't invest $400 million in the development of an architecture based on mere "snapshots"--at least, no company with a brain in its corporate noggin would commit without a clear picture of exactly where it wanted to go and what it wanted to support. What a ridiculous concept. And of course, Kirk never explains the value of fx12 or fp16 in the nV3x architecture, which could be criticized in exactly the same fashion, nor does he bother to point out that using fp32 in nV3x results in extremely uncompetitive 3d performance. Strangely enough, his comments above pretend that the only precision in the nV3x pipeline is fp32. Which is kind of interesting, considering the question was about fp32 in the first place--gosh, based on his evaluation of fp24 as stated above, I guess Kirk must believe that fp32 has way too much color precision, then, and that fp16, which, predictably, he doesn't mention here to avoid the contradiction, has "far too little" precision for geometry...:) Heh...:) Sounds like a pretty good reason to me to skip both fp16 and fp32...:)

I agree, those statements were absurd. It sounded like he was saying the choice was between FP32 and FP24. In the end, when MS finally clarified the issue, they “did not make a statement that 24 was not enough”. You've got to love how he stated that, in nVidia's opinion, MS made the wrong decision without actually saying it.

If I recall correctly, sometime earlier this year DeanoC was suggesting that nVidia was pushing for the minimum precision to be lowered from FP24 to FP16. If nVidia actually believed FP32 was the right choice, why push MS to lower the minimum precision even further? I don’t see how nVidia can reasonably argue their motives were to improve the API.
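As a rough illustration of why the minimum-precision fight mattered, here's a small sketch using NumPy's float16/float32 purely as stand-ins for shader precisions (the workload is artificial, not any real shader):

```python
import numpy as np

# Accumulate 10,000 small increments in fp16 vs fp32. The loop is a
# stand-in for any shader math that iterates or accumulates results.
step16 = np.float16(0.001)
step32 = np.float32(0.001)
acc16 = np.float16(0.0)
acc32 = np.float32(0.0)

for _ in range(10000):
    acc16 = np.float16(acc16 + step16)  # result rounds to fp16 after every add
    acc32 = acc32 + step32              # result rounds to fp32 after every add

print("fp16 sum:", float(acc16))  # stalls around 4.0 -- increments fall below half an ulp
print("fp32 sum:", float(acc32))  # very close to the exact 10.0
```

With a mandated fp24 minimum, that kind of drift is pushed out by another six mantissa bits; with an fp16 minimum, it becomes the developer's problem via the partial-precision hint.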
 
The only thing that upset me in the interview was that DK's responses weren't pressed a bit harder. The questions were good, the evasions were of his usual standard - but why not an "Oh come on David - that's not even half the picture - surely you're not expecting that folk will accept that line...." to about 70% of his answers - toned down a bit, of course.

Like everyone else I'm aggrieved at the fact NVidia can just gloss over how much harm they are doing to the industry by being the major player who is so far off the standard 3d APIs - it's incredible, really. Yet not one question on the pain game developers are feeling having to code proprietary code paths for the major IHV who eschewed the standards - that was the area I would have loved to see addressed with real vim.

I am sure DK would have avoided and evaded like the best - but an on-record comment on the NVidia perspective of being below (he would have called it at) and above the standards when you use proprietary features would have been interesting. I am sure DK might have come back with a "Would you buy a car with only 1 gear or would you like multiple gears to suit it for different terrains?" type of response to show NVidia is being clever. And fp16 or lower is in DX9 with the PP hint, so NVidia can say we are doing the right thing - honest - it's game developers who are being too simplistic in how they code games. But B3D has already covered that with game developers a few months ago.

Complexity for complexities sake isn't desirable.
 