Another Richard Huddy Interview

Lezmaka said:
Granted, I don't have a very good memory, but over the 1 1/2 years or so that ATI had a definite advantage, when asked about Nvidia, it seems that they "took the high road" and focused their answers on their own hardware/drivers.

Well, as we have seen, ATI only takes the high road when they can, as they could with the R300 vs NV30. Now there's a real fight again, and the gloves come off immediately: PR crap, releasing beta drivers. And I must say that I kind of like it; both IHVs are throwing mud at each other for a change :)
 
Huddy has always been very straightforward when answering questions about Nvidia in the other interviews. Unless you missed all of them, I really don't see the surprise here.
 
Bjorn said:
I wouldn't automatically call shader replacement "questionable optimizations". At least not as long as the IQ is the same, and we have yet to see any evidence that it isn't. And I think Carmack would be the first one to complain if the result he got wasn't what he wanted.

Maybe, but would you call shader replacement "fair benchmarking comparison"?
 
vb said:
Maybe, but would you call shader replacement "fair benchmarking comparison"?

As long as we're not talking about benchmark tools like 3D Mark, then yes, as long as the output is the same. And if I'm not mistaken, this is the way that the next 3D Mark will work: different shaders for different hardware, as long as the output is equivalent.
 
Bjorn said:
vb said:
Maybe, but would you call shader replacement "fair benchmarking comparison"?

As long as we're not talking about benchmark tools like 3D Mark, then yes, as long as the output is the same. And if I'm not mistaken, this is the way that the next 3D Mark will work: different shaders for different hardware, as long as the output is equivalent.

The only time shader replacements should be used is as a workaround for a bug, and then we should be told what's going on.

Using shaders is dishonest.

It's like working at a company and paying other people to do your work, so your work gets done faster and you look better as a result.
 
Folks,

Anyone remember Quake mini-drivers? This harks back to the good ol' days when JC published a short document for driver developers which basically said 'if you want Quake to run fast then optimise these paths...' Very soon every vendor and his dog had an optimised driver just for Quake which sucked on everything else :)

I'm sure that Nvidia's driver team has been hard at work getting D3 to run as well as possible on their cards by optimising the paths used. It's therefore not surprising that if you change the path you may fall outside of these tuned areas.
 
Lezmaka said:
I only remember seeing NV3x specifically mentioned, although I wouldn't doubt it has some effect on NV4x, but I don't think it'd be nearly as detrimental as it would be for NV3x.

John Carmack said:
On the other hand, the Nvidia drivers have been tuned for Doom's primary light/surface interaction fragment program, and innocuous code changes can "fall off the fast path" and cause significant performance impacts, especially on NV30 class cards.

He says "especially" not "exclusively". That statement *includes* NV40 cards, and knowing Carmack, that's exactly how he meant to write it.
 
Sorry, the literal interpretation of that is that the optimisations are still there for NV40, they just don't cause such a large drop-off (probably because the architecture is better in the first place). Had the word "especially" not been there, then the meaning would be NV30 only.
 
DaveBaumann said:
Sorry, the literal interpretation of that is that the optimisations are still there for NV40, they just don't cause such a large drop-off (probably because the architecture is better in the first place). Had the word "especially" not been there, then the meaning would be NV30 only.
The literal interpretation is that it is there especially for NV30-class cards (which includes what in his mind? FX 5800? :LOL:) and other ones, which could be NV40, GF4, or any other class.
 
Bouncing Zabaglione Bros. said:
Evildeus said:
Well, could or could not, as the statement is vague enough to mean anything.

I don't know how it can be more obvious unless English is not your first or second language.
Unless there are many classes, which could include NV40s or not, and some use it and others don't. But well, sure, my English is my 2nd language, so...
 
WaltC said:
incurable said:
Well, until they support that allegation with some old-school facts, this puts ATi firmly in the 'bad loser' & 'FUD spreader' categories.
Yea, I guess Carmack, too, since he's the one who brought up the subject in the first place...;)
I don't think Carmack mentioned names in his 'driver fragility' comment (Edit: I was wrong here, he apparently mentioned nVidia and especially NV30.), and honestly, I don't think this has anything to do with the interview at hand or my comments.

But hey, if you take ATi PR's comment on nVidia's drivers as gospel without even the slightest hint of proof ...

cu

incurable
 
Bjorn said:
vb said:
Maybe, but would you call shader replacement "fair benchmarking comparison"?

As long as we're not talking about benchmark tools like 3D Mark, then yes, as long as the output is the same. And if I'm not mistaken, this is the way that the next 3D Mark will work: different shaders for different hardware, as long as the output is equivalent.

I don't see the Doom3 benchmark used as an indicator of only Doom3 performance. At least in the US, people who would play Doom3 have already bought it, played it, and got bored with it.

The Doom3 benchmark is used for general performance evaluation, just like "benchmark tools like 3D Mark".

jvd said:
Using shaders is dishonest.
;)
 
Evildeus said:
Surely 8), still, literally I'm correct ;)

No you're not, no matter how many times you say it. Let's try again, with my bolding:

John Carmack said:
On the other hand, the Nvidia drivers have been tuned for Doom's primary light/surface interaction fragment program, and innocuous code changes can "fall off the fast path" and cause significant performance impacts, especially on NV30 class cards.

Carmack is talking about the Nvidia drivers, all of them. In the main, he is talking about drivers, not hardware. At the end, he qualifies that statement by saying the problem is *especially* bad on NV30 *class* cards, not *only* NV30 cards.

If you can't understand this, then I suggest you go back to your English teacher, or stop trolling.
 
vb said:
I don't see the Doom3 benchmark used as an indicator of only Doom3 performance. At least in the US, people who would play Doom3 have already bought it, played it, and got bored with it.

The Doom3 benchmark is used for general performance evaluation, just like "benchmark tools like 3D Mark".

I thought that we had already established that, e.g., Quake 3 performance doesn't necessarily map over directly to other Q3-engine-based games. So I'm not so sure that everybody is using Doom 3 as a general performance evaluation. I know that I'm not, at least.
 
Evildeus said:
DaveBaumann said:
geo said:
Ummm, that last sentence is a bit of an oversell, isn't it? The cost/benefit won't be any better next gen, will it?

The ratio improves. Even if the benefit doesn't move (on the software side, assuming no SM3 titles, or ones that show no benefits over SM2.0; on the hardware side, no significant changes to the "instruction per cycle / branching abilities"), the cost is reduced, since there will be smaller processes and hence more can be packed in at similar die sizes. This is exactly as Dave Orton said when we interviewed him.
But that still stays true. If I can do the same thing in SM2.0 with 2/3 of the die size, then even on a smaller process I still win. I don't see what the process changes, because the extra cost is still a third of the die.

Let's use simple numbers off the top of my head to make an imaginary point.

Suppose ATI needs 66M transistors for an SM2 part and 99M for that same part in SM3 (66 = 2/3 × 99).

Now suppose another future product is like the previous product but on a smaller process, clocked higher, and adds 50M transistors of cache to the same part. Now they would need 116M transistors for SM2 and 149M transistors for SM3. Not 2/3 anymore...

Edit: okay, so the point is... they could add or subtract anything from their parts and the ratio could change... it's not likely that it always stays at "2/3 no matter what" anyway :)
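To make that concrete, here is a minimal sketch of the arithmetic in Python, using the same made-up figures (the 66M/99M split and the 50M of cache are purely the hypothetical numbers from the post above, not real chip specs):

# Hypothetical transistor budgets in millions, following the made-up numbers above.
sm2_logic, sm3_logic = 66, 99    # SM2 part vs. the same part with SM3 support
extra_cache = 50                 # cache assumed to be added to both next-gen variants

print(sm2_logic / sm3_logic)                                   # 0.67 -> the original "2/3" ratio
print((sm2_logic + extra_cache) / (sm3_logic + extra_cache))   # ~0.78 -> the gap narrows

In other words, with those made-up numbers the SM3 overhead drops from 50% of the SM2 budget to roughly 28% once the shared cache is added.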
 
incurable said:
But hey, if you take ATi PR's comment on nVidia's drivers as gospel without even the slightest hint of proof ...

cu

incurable

Wow...;) Some people here really need to take "Context 101," but pass it this time...;)

Richard Huddy said:
Firstly the benchmark numbers which have been released used ATI drivers which have not been heavily tuned for Doom3. In contrast, NVIDIA’s drivers contain all sorts of optimisation for Doom3, including one that’s likely to get a lot of attention in the future.

John Carmack of ID software, the creators of Doom3, also noted that NVIDIA’s driver seems fast, but he also noticed that if he made small changes to his code then performance would fall dramatically. What this implies is that NVIDIA are performing “shader substitution” again.
We saw them spend most of last year doing this for benchmarks, and the consequence looks like faster benchmark numbers but it’s important to understand that the NVIDIA driver isn’t running the same code as the ATI driver. The ATI driver does what it’s asked, and does it fast, and the NVIDIA driver does something that’s quite similar to what it’s asked to do – but which is only chosen because it’s faster.

(Bold text indicates context.) This is the second time you've missed the fact that Carmack also notes the highly optimized nature of nVidia's drivers for Doom3, and that Carmack noted it before Huddy. Indeed, Huddy cites Carmack directly.

Also, it's been remarkable for me to read the posts here written by people who think that Carmack's "especially nV30" remark meant "only nV30," since the context for Carmack's remarks was the nV40/R4x0 D3 benchmarks conducted on site by id.

I will say, however, that Carmack's remarks are indeed suspect, since he does not mention nVidia's trilinear optimizations at all while mentioning ATi's in a negative light. So I have to admit it's barely possible that Carmack flip-flopped here and chose to mention nVidia's D3 driver optimizations in a negative light while ignoring ATi's completely...;)
 