Another sexy interview with Richard Huddy

Here we go:
http://www.gamersdepot.com/interviews/ati/richard_huddy/001.htm

Highlight:
GD: If you had to pick between the two to be stranded on an island with - Shania Twain or Faith Hill? Why?

RH: We don't have that many country singers in the UK - so being offered a choice between the two is pretty unusual. Obviously I'm too much of a gentleman to turn either of them down without a lengthy one-on-one interview. But if I were forced to choose I'd probably go for Shania because her video "I feel like a woman" was so good.

If I were offered a third alternative I'd take Jeri Ryan (Star Trek's Borg Babe "7 of 9"). She's so gorgeous I make an odd kind of reference to her in my usual email signature calling myself Richard "7 of 5" Huddy.
:LOL:


BTW, in his last interview he said that in some cases 16-bit FP is actually less precise than 24-bit integer colour. How does that work?
 
It can't be less precise, unless you're talking about 24-bit per channel integer color (where the integer format would be less precise for calculations with very high dynamic range).
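
To put rough numbers on that, here's a quick sketch of mine (numpy's float16 standing in for s10e5 FP16; the 24-bit-per-channel integer format is the hypothetical one mentioned above):

```python
import numpy as np

# Quick check of representable step sizes ("ulps") over the [0,1] colour range:
# s10e5 FP16 versus a hypothetical 24-bit-per-channel unsigned integer format.
FIXED24_STEP = 1.0 / (2**24 - 1)  # uniform step of 24-bit unsigned fixed point

for x in (0.001, 0.25, 0.5, 0.999):
    ulp = float(np.spacing(np.float16(x)))  # gap to the next float16 value
    print(f"x={x:<6} fp16 ulp={ulp:.3e}  fixed24 step={FIXED24_STEP:.3e}")

# Near 1.0, FP16 resolves only ~4.9e-4 (2^-11), while the 24-bit integer format
# resolves ~6e-8 everywhere in [0,1]; FP16 only wins once values leave that range.
```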

He also seemed to go to great lengths in that article to say that nVidia's 16-bit FP is not good enough, and 32-bit FP is too slow.

16-bit FP will be more than good enough for a large number of calculations.

32-bit FP shouldn't be much slower for many calculations (the big difference is that there are fewer registers available). Note that the NV30 has just as many FP32 registers as the R300 has FP24 registers.
 
Chalnoth said:
32-bit FP shouldn't be much slower for many calculations (the big difference is that there are fewer registers available). Note that the NV30 has just as many FP32 registers as the R300 has FP24 registers.
If half speed doesn't count as "much slower", what does?
 
I thought it was a great interview -- I really like this guy. Out of various interesting statements, I found this one the most intriguing:

"I'd point out that our drivers are the best there are (for example we have much fewer open bugs in Microsoft's database than the competition). . ."

#1 -- Can someone confirm this is true (i.e. the open bugs with MS) and provide the relevant numbers?

#2 -- If true, is this a valid metric to be using to prove the point he is trying to make on a macro scale ("our drivers are the best")? On a micro scale, of course, the metric is/ought to be "how does it work for me in the games I play?". But on a macro scale, at least on its face, the metric he is pointing at seems like it ought to be a pretty good one. Are there better metrics to use for the macro comparison?

Best. Geo
 
I'm a bit mystified by Chalnoth's second comment as well, but I actually thought along the same lines as his first comment when I was reading the interview... I think Mr. Huddy was practicing a bit of the old "spin". I get the distinct impression from various comments he's made (discussed in these forums, I believe) that he takes rather large issue with the PR surrounding nvidia's CineFX initiative, and considers countering it a major priority of his job description.

That isn't to say I think the difference between fp16 and fp24 can't be seen in real use, but I think his wording downplayed the improvement fp16 can offer over what users below the 9500 see today. I think they (ATI, Richard Huddy) are aware of the consumer's perception of "16 bit" as "bad" and are using that terminology to tap into that negativity. I also think nvidia is trapped by this terminology: having committed to saying "128-bit" without accurately discussing the details (performance), they have a unique difficulty mentioning 64-bit at all while simultaneously trying to downplay the R300's 96-bit, again without discussing the details (actual impact on rendering, etc.).

The only thing that worries me about their (presumable) success is whether such spin becomes the defining characteristic of their PR, but I currently don't see people such as Huddy and Orton encouraging such a focus.

geo:

That is a DirectX metric. Personally, I find it easy to believe based on my impression of the problems being discussed with the 4x.xx Detonator drivers (the most significant competition for stability, AFAIK) in and around the GF FX threads I've been reading around the net. They've mentioned the bug tracking database in several places, so it wasn't just a slip or an isolated misunderstanding. It also hasn't been called incorrect by nvidia or Microsoft, AFAIK, even though the claim was first made quite a while ago.
 
Well I certainly do not like nVidia's move to use FP16 as the default precision in PS2.0.
FP16 is sufficient for color calculations, but PS is used for much more than that.

On the other hand, RH's comparing of the 9200 with the 5200 -- saying that the 5200 uses FP16 and therefore the 9200 is better -- is ridiculous.
The 9200 supports FX12(s3.8) only.
I think even the 5200's FX12(s1.10) is slightly more useful than that.
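
For reference, a little sketch of mine spelling out what those two fixed-point layouts buy you (format parameters taken from the post above):

```python
# An sM.N fixed-point format has 1 sign bit, M integer bits, N fraction bits.
def fixed_range_and_step(int_bits: int, frac_bits: int):
    step = 2.0 ** -frac_bits          # smallest representable increment
    max_val = 2.0 ** int_bits - step  # largest representable magnitude
    return max_val, step

for name, (m, n) in {"s3.8": (3, 8), "s1.10": (1, 10)}.items():
    max_val, step = fixed_range_and_step(m, n)
    print(f"{name}: range +/-{max_val:.4f}, step {step:.6f}")

# s3.8 reaches +/-~8 but only resolves 2^-8 (~0.0039); s1.10 clamps at +/-~2
# but resolves 2^-10 (~0.00098) -- hence the "slightly more useful" call.
```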
 
Hyp-X said:
Well I certainly do not like nVidia's move to use FP16 as the default precision in PS2.0.
FP16 is sufficient for color calculations, but PS is used for much more than that.

On the other hand, RH's comparing of the 9200 with the 5200 -- saying that the 5200 uses FP16 and therefore the 9200 is better -- is ridiculous.
The 9200 supports FX12(s3.8) only.
I think even the 5200's FX12(s1.10) is slightly more useful than that.

Have you tried any experiments to actually see how many bits of fractional accuracy there are on a 9200?

I would suggest that if you do, you might then want to reconsider this statement.

[Edit - made it a bit clearer]
 
Full of PR spin, IMO.

That 'trickle down' progression will clearly happen with the technology which was first shown in the Radeon 9700 Pro. And ATI has already started aggressively pushing that feature set into the various market segments. The original 9700 Pro is now accompanied by the 9600, 9500, 9200, 9100 and 9000. That's a full range of graphics cards to satisfy everyone from the top-end extreme gamer to the casual user who wants only occasional access at a low, low price.

I don't like the fact that he talks about pushing the R9700 feature set into different market segments and then also includes the 9000-9200 cards in that lineup.

but their cut down "16-bit half float" solution simply doesn't deliver sufficient quality (Bob Drebin showed some clear and simple demos of just why 16 bit isn't good enough at the Title Club launch event). So NVIDIA are left with a half-way house that just doesn't hack it - and that's why the Radeon 9x00 series of graphics cards will remain the best choice for OEMs and consumers for quite some time.

And here he does the same thing again: first complaining about Nvidia's FP16, and then holding up the whole 9x00 series of chips.
 
Bjorn, I don't know if Richard had to go through ATI PR (it sounds like he didn't, or at least ATI PR is rather lax in terms of interviews), but it sure reads better than most NVIDIA interviews nowadays IMO.

With the current situation, IMO :

ATI interviews = the right to brag/shoot down the competition
NVIDIA interviews = less right to brag/shoot down the competition

Mature readers or hardened video card enthusiasts (like those at B3D) know what to expect when it comes to interviews with these folks and they try to focus on *not* countering obviously-PR-related answers.

Just MHO.
 
Reverend said:
Bjorn, I don't know if Richard had to go through ATI PR (it sounds like he didn't, or at least ATI PR is rather lax in terms of interviews), but it sure reads better than most NVIDIA interviews nowadays IMO.

With the current situation, IMO :

ATI interviews = the right to brag/shoot down the competition
NVIDIA interviews = less right to brag/shoot down the competition

Mature readers or hardened video card enthusiasts (like those at B3D) know what to expect when it comes to interviews with these folks and they try to focus on *not* countering obviously-PR-related answers.

Just MHO.

I was actually more annoyed by the fact that he's trying to put the 9000-9200 under the same badge as the 9500+ series than by the crap thrown at Nvidia. And I definitely don't think that Nvidia is any better.
 
Hmm, I am "not authorized to view this page"...

Anyway, I believe R8500-9200 use either 13 or 14 bits of precision (s3.9 or s3.10), and NVidia's FP16 is s10e5 with an implicit 1.
So in the range of 2 to 8 (or 4 to 8), the fixed-point representation is more precise.
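
That range claim is easy to check; here's a quick sketch of mine (numpy's float16 standing in for s10e5):

```python
import numpy as np

# Where does a constant fixed-point step beat FP16's power-of-two-scaled ulp?
for lo in (1, 2, 4):
    ulp = float(np.spacing(np.float16(lo)))  # FP16 spacing throughout [lo, 2*lo)
    print(f"[{lo},{2*lo}): fp16 ulp = 2^{int(np.log2(ulp))}")

# FP16's ulp is 2^-10 in [1,2), 2^-9 in [2,4), 2^-8 in [4,8).
# So s3.10 (constant step 2^-10) is finer than FP16 across [2,8),
# and s3.9 (step 2^-9) is finer across [4,8) -- exactly the ranges above.
```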
 
Hyp-X said:
Well I certainly do not like nVidia's move to use FP16 as the default precision in PS2.0.
FP16 is sufficient for color calculations, but PS is used for much more than that.
BTW NVIDIA have changed their mind, as MS stuck to their guns. PS_2_0 is FP24 minimum unless the partial precision flags are set.

How long until the drivers catch up is anyone's guess.
 
andypski said:
Have you tried any experiments to actually see how many bits of fractional accuracy there are on a 9200?

I would suggest that if you do, you might then want to reconsider this statement.

Hmm.

Statement under reconsideration. :)

Just tested a 9000Pro (I don't have access to a 9200), and it shows 12 bits of fractional precision at first look. (I'll try more detailed tests later, when I figure out how to do that.)

The interesting thing is that the same test program shows less precision on the 8500 - I never knew that the precision changed between the 8500 and the 9000.
My results are not yet conclusive on the 8500 - I get strange rounding errors.
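
For the curious, here's the general shape of such a probe as a CPU-side sketch of mine -- the quantizers are stand-ins, and on real hardware you'd do the arithmetic in a shader and read the render target back:

```python
import numpy as np

# Add ever-smaller epsilons to 1.0 and count how many survive a format's
# rounding; the result is the number of usable fraction bits at 1.0.
def fractional_bits(quantize) -> int:
    bits = 0
    while quantize(1.0 + 2.0 ** -(bits + 1)) > quantize(1.0):
        bits += 1
    return bits

print(fractional_bits(lambda x: float(np.float16(x))))    # 10 (s10e5 FP16)
print(fractional_bits(lambda x: round(x * 4096) / 4096))  # 12 (s3.12 fixed)
```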
 
Hyp-X said:
Hmm.

Statement under reconsideration. :)

Just tested a 9000Pro (I don't have access to a 9200), and it shows 12 bits of fractional precision at first look. (I'll try more detailed tests later, when I figure out how to do that.)

The interesting thing is that the same test program shows less precision on the 8500 - I never knew that the precision changed between the 8500 and the 9000.
My results are not yet conclusive on the 8500 - I get strange rounding errors.

Hmm... that's odd on the 8500 - I'm not sure what could cause the difference.

But on the matter of precision in general I hope it is clear that the level of precision of the 9200/9000 etc. is actually chosen carefully, and is _very_ useful. I believe it is a significant step up from 'FX12' (but then, of course, I would... ;) )

- Andy.
Bias
ATI [x---|----] nVidia
(What would you expect...)
 
DeanoC said:
BTW NVIDIA have changed their mind, as MS stuck to their guns. PS_2_0 is FP24 minimum unless the partial precision flags are set.

Score one for developers!

How long until the drivers catch up is anyone's guess.

Dunno, but I guess this means we are about to see a slew of papers and workshops and education all about setting the partial precision flag "whenever possible". Perhaps another round of 3DMark bashing for not setting partial precision flags for all possible cases in their shaders. And that might even be a valid criticism in certain cases, for all I know... shame that nVidia no longer has relations with 3DMark, though.
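
To illustrate why "whenever possible" needs case-by-case judgment, a quick sketch of mine (numpy's float16 standing in for partial precision; the values and scenario are made up for illustration):

```python
import numpy as np

# Colour-range maths survives a drop to FP16, but coordinate maths on large
# values does not. (numpy's float16 stands in for partial precision here.)
colour = np.float32([0.3, 0.6, 0.9])
err = np.abs(colour - np.float16(colour).astype(np.float32))
print(err.max())                 # ~1e-4: invisible in an 8-bit framebuffer

texcoord = np.float32(1023.37)   # e.g. addressing into a 1024-texel texture
print(float(np.float16(texcoord)))  # 1023.5: already ~0.13 texels of error
```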

...And back to the interview...

That is what Huddy was getting at with the difficult position of nVidia's marketing. On one hand, they tout "true" 32-bit floating point precision as a differentiator and a necessity. On the other, they are going to have to preach the use of "lower than the competition" 16-bit precision in order to ensure performance.

It just makes for a message that sounds very conflicted, and therefore difficult to pull off.
 