
parhelia said:
I guess it was because NV was scared it would show their boards under a "bad light"

Which is more or less exactly why it doesn't make sense for iD to drop benchmark tools.

Can you imagine the conspiracy theories in the following scenario?

1) Doom3 "benchmark tools unexpectedly dropped"
2) The top ATI cards perform better on Doom than nVidia cards (as determined by FRAPS)
3) Conspiracy conclusion: iD is doing what it can to hide the fact that ATI has the better card.

:oops:

If ID is worried about getting caught in some IHV conspiracy theory soap operas, I don't see how including or not including a native benchmarking facility makes much difference.
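
(For what it's worth, something like FRAPS needs zero cooperation from the game: it just counts the frames presented over a time window and divides by the window length. A rough sketch of that idea in Python - the timestamps are made up purely for illustration - just to show why dropping the built-in tools wouldn't stop anyone from benchmarking:)

# What an external frame counter like FRAPS effectively measures: frames
# presented over a time window, divided by the window length.
# The timestamps below are invented purely for illustration.
frame_timestamps = [0.000, 0.017, 0.033, 0.050, 0.067, 0.083, 0.100]  # seconds

def average_fps(timestamps):
    window = timestamps[-1] - timestamps[0]
    frames = len(timestamps) - 1  # intervals between presented frames
    return frames / window

print(f"{average_fps(frame_timestamps):.1f} fps")  # ~60 fps for these numbers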
 
WaltC said:
My own theory about D3 slipping to next year is that ID didn't want to go head-to-head with HL2 this fall.

... not a bad one. Let's face it: if Half-Life 2 had shipped at the end of September, Doom III would still have had plenty of room to sell a ton up to the holidays. Now things have changed with HL2 somewhat delayed, and Id will still earn very decent bucks by selling the game through those dark get-inside-your-house! winter months (most games are still sold in the Northern Hemisphere, okay?).
 
DaveBaumann said:
That German article based on the Nvidia patent seemed to suggest that the new FP unit(s) only had ADD, MUL, and MAD functionality, leaving the other PS 2.0+ ops for the pre-existing full FP32 unit; that sounds extremely likely to me.

Did it? I didn't see it (I mentioned it in the thread).

Right, I must have gotten confused having read the article and the thread one after another.

Dave Baumann said:
FYI, according to John Spitzer the FP units that replaced the FX12 units in NV35 are only capable of arithmetic ops such as MUL, ADD, SUB, DP3, DP4.
(Source: http://www.beyond3d.com/forum/viewtopic.php?p=161543#161543)

Actually, this short little post tells us three things:
  1. The FX12 units have indeed been removed from NV35.
  2. The FP units that replaced them only handle a small subset of all PS 2.0+ ops.
  3. All of this is confirmed by John Spitzer, who's the head of European DevRel if I got that straight.

Combined with the evidence (from Uttar's Dawn demo precision mods) that FX12 and FP32 performance is roughly unchanged from NV30 but that FP16 performance is suddenly on par with FX12, it seems quite likely that what happened was that the two PS 1.1-1.3 functionality FX12 units per pipe were replaced with two FP16 units capable of only a subset of all PS 2.0 instructions.

This seems like the most reasonable way to modify NV30's fragment pipeline to match the FX12-less PS 2.0 and ARB_f_p specs without a major redesign or large transistor increase. It also seems plausible that it could fit in the 5 million extra transistors, although I really don't have any idea of the transistor count increase required by the wider memory controller, not to mention the effects that would have on ideal cache sizes, etc. (Again, it's likely some dead or redundant functionality could have been pruned to make room.)
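
To put the guess in concrete terms, this is the per-pipeline ALU layout being hypothesized above, written out as plain data. It's speculation pieced together from the Spitzer quote and the Dawn precision mods, not anything NVIDIA has confirmed, and the unit names are illustrative labels of mine:

# Hypothesized (not confirmed) per-pipeline shader ALU layout, NV30 vs. NV35.
# Unit names and counts are illustrative labels for the speculation above.
nv30_pipe = {
    "full_fp32_unit": 1,   # handles the complete PS 2.0+ / ARB_f_p instruction set
    "fx12_units": 2,       # fixed-point, PS 1.1-1.3 functionality only
}
nv35_pipe = {
    "full_fp32_unit": 1,      # assumed unchanged from NV30
    "limited_fp16_units": 2,  # would replace the FX12 units; MUL/ADD/MAD-class ops only
}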
 
parhelia said:
DaveBaumann said:
My understanding was that there is the possibility that Doom3 will not have a benchmarking facility.

Wow... That would break a long tradition over at ID.
I guess it was because NV was scared it would show their boards under a "bad light"

I think there is also the issue of the game itself being much slower. If people see their $400 cards running at 30 fps @640x480, they'll be upset, regardless of the IQ or how well the pace of the game matches the lower frames.

Wasn't it Carmack a few years ago talking about having lower screen res and lower refresh (like TV) but making up for it with high quality visuals, AA, and high colour precision (like TV)?
 
Dave H said:
Combined with the evidence (from Uttar's Dawn demo precision mods) that FX12 and FP32 performance is roughly unchanged from NV30 but that FP16 performance is suddenly on par with FX12, it seems quite likely that what happened was that the two PS 1.1-1.3 functionality FX12 units per pipe were replaced with two FP16 units capable of only a subset of all PS 2.0 instructions.

In all honesty I've not yet reconciled that statement from John with the performance numbers I've seen. IIRC, the statement was that these were FP32 units, not FP16 - because of this I still do not see how the DX8 performance can match NV30, especially since he stated that NV30 could use the FP32 units for DX8 shading under DX as well. If that's the case, then logically there must be a performance reduction for integer ops somewhere.

Also, where are you getting the improved FP16 performance from? According to these tests FP16 is still in line.

Prod me sometime when I have a little time, to remind me to mail John and ask about these performance figures and his statement a little more...
 
DaveBaumann said:
digitalwanderer said:
What's the difference anymore? nVidia has got iD on a tight enough leash that it's all the same thing, isn't it? :(

And to move away from that type of remark is exactly why it makes sense.
Again I don't mean any disrespect and this isn't meant as any type of inflammatory comment, but by not including any benchmarking features I'd tend to think that plays exactly into the "nVidia is pulling iD's strings" theory.

Benchmarks have been killing nVidia lately, it's no wonder they don't want any. :(

How do you figure by not including a benchmark they'll distance themselves from that image? (And who out there ain't gonna Frap D3 anyways? ;) )

EDITED BITS: The quoting got all wonky on me, went back and tried to fix.
 
Joe DeFuria said:
parhelia said:
I guess it was because NV was scared it would show their boards under a "bad light"

Which is more or less exactly why it doesn't make sense for iD to drop benchmark tools.

Can you imagine the conspiracy theories in the following scenario?

1) Doom3 "benchmark tools unexpectedly dropped"
2) The top ATI cards perform better on Doom than nVidia cards (as determined by FRAPS)
3) Conspiracy conclusion: iD is doing what it can to hide the fact that ATI has the better card.

:oops:

If ID is worried about getting caught in some IHV conspiracy theory soap operas, I don't see how including or not including a native benchmarking facility makes much difference.
Dang it, I posted before reading the next page again. :rolleyes:

I agree with you Joe. I see no benchmarking features as a definite break from iD tradition and as a HUGE warning flag for conspiracies. :(
 
My opinion is that JC is independent enough not to be in any IHV's pocket. And since he's coding different paths for different IHVs, he doesn't believe that Doom3 can be used fairly as a benchmark.
 
Dave:
I'm thinking they added two FP16 MUL & ADD units ( or maybe MAD units - the NV30 patent refers to separate units, but they very well could be united this time around ) which can combine into one FP32 unit when required.
That means effectively 3 FP16 units or 2 FP32 ones, all with 1 COS/SIN/... in parallel ( that is, according to that patent - maybe it's not permitted in the real silicon )

The 12 FP32 ops/clock number comes from 2 FP32 MADs and 1 FP32 COS/SIN/..., I bet. So in practice, they could also claim 16 FP16 ops... But!
If they claimed that, it would seem SLOWER, because they've also claimed 'FP16 is twice as fast as FP32' since the NV30 launch ( stupid marketing launch ) - so people would think 8 FP32 ops. Which is simply wrong ( okay, unless you're in the real world and use a lot of registers, hehe, but that's another matter completely )


Uttar
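
Reading Uttar's unit counts as per-pipeline figures multiplied across NV35's four pixel pipelines - an assumption about how the marketing number is assembled, not anything official - the arithmetic comes out like this:

# Back-of-the-envelope check of the claimed ops/clock figures. Assumes 4
# pipelines and that each active unit counts as one op per clock; purely
# illustrative, not an official breakdown.
pipelines = 4
fp32_units_per_pipe = 2 + 1   # 2 FP32 MAD-capable units + 1 SIN/COS-type unit
fp16_units_per_pipe = 3 + 1   # the same hardware split into 3 FP16 units + 1 special unit

print(pipelines * fp32_units_per_pipe)  # 12 -> the "12 FP32 ops/clock" claim
print(pipelines * fp16_units_per_pipe)  # 16 -> the hypothetical "16 FP16 ops" claim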
 
Not conspiracies... the idea behind a benchmark is to determine a card's rendering power on equal terms, meaning precision and overall workload are the same. In the case of Doom 3, and now 3DMark03, this is NOT the case. One IHV is allowed a lower overall workload while not even complying with DX9 specs... that and custom code paths don't make for a 'fair comparison'. I would have no problem with Doom 3 as a benchmark as long as the ARB path was the only option; after all, Nvidia agreed to those specs as part of the ARB and all :rolleyes:
 
Doomtrooper said:
One IHV is allowed a lower overall workload while not even complying with DX9 specs... that and custom code paths don't make for a 'fair comparison'.
Yeah, that's where I kind of get the whole "nVidia has got iD on a tight leash"-thing from. :)
 
Whatever your feelings on the matter, iD need to ensure the game runs acceptably on as many cards as possible. There's no point them forcing the NV3x cards to run the ARB2 path just to try and make a statement.

This is a game, and it needs to be playable - iD are ensuring that's the case, no matter what they have to do. In the real world, that's how it works.
 
PaulS said:
Whatever your feelings on the matter, iD need to ensure the game runs acceptably on as many cards as possible. There's no point them forcing the NV3x cards to run the ARB2 path just to try and make a statement.

This is a game, and it needs to be playable - iD are ensuring that's the case, no matter what they have to do. In the real world, that's how it works.
Well, let's at least be fully honest about it and admit that nVidia is paying iD up the wazoo to ensure that "the game runs acceptably on as many cards as possible"... it's not like JC just wrote nVidia their own path out of the goodness of his heart. :rolleyes:

EDITED BITS: Added "JC", it makes more sense that way.
 
I don't see how it's any different to having an R200 or R300 path. Specialised paths have existed for a long time - this isn't the first time, so don't be all conspiratorial about it :?
 
PaulS said:
I don't see how it's any different to having an R200 or R300 path. Specialised paths have existed for a long time - this isn't the first time, so don't be all conspiratorial about it :?
I'm not being conspiratorial about it; nVidia paid iD an undisclosed amount (~$8 million US) to code that additional path to make up for their underperforming hardware.

That's not being conspiratorial, that's being realistic. :)
 
Whaaatt? Where are you getting that figure from?!

Even if they DID pay him, he would have included a specialised path anyway. He doesn't need any incentive to make the game run as well as possible - and, as gamers, we should be thankful for that. It's bad enough people bought the FX series cards, without being slapped in the face by a coder out to prove a point.
 
I think you've got to remember that when Carmack started coding for D3, Nvidia was still dominating the market, and promising a lot with the upcoming NV30. Given that ID make a lot of money from licensing their engines out to other developers, at that time Carmack probably saw it as sensible to code NV3x paths to keep as much performance as possible for what might be a large number of cards.

Since then of course, D3 is later than it should be, NV3x was late and underperforming badly, and ATI has come up a long way, making Carmack's decision look strange.

It is in fact quite ironic that Carmack coding the NV30 path is the only thing that's making D3 playable on NV3x, (even if he did get $8 million for doing so), making it a good call in hindsight, even if for all the wrong reasons.
 
digitalwanderer said:
PaulS said:
I don't see how it's any different to having an R200 or R300 path. Specialised paths have existed for a long time - this isn't the first time, so don't be all conspiratorial about it :?
I'm not being conspiratorial about it; nVidia paid iD an undisclosed amount (~$8 million US) to code that additional path to make up for their underperforming hardware.

That's not being conspiratorial, that's being realistic. :)

He coded an R200 path too; I assume you think ATI paid him for that... Get a grip - Carmack has always tried to make his games work as well as possible on a variety of hardware, and if he needs multiple paths he will put them in. He wants to sell his engine; if it runs like crap on 80% of cards he won't get many sales.
 
The point is that the average Joe will not know the NV30 path is a much lower-workload path; they will see Nvidia beating ATI and assume Nvidia is better and buy more Nvidia cards. This is why I want it known all over the place that the NV30 path carries a much lower workload, that this is why it's fast, and that on the same path Nvidia slows to a crawl.
 
It's a co-marketing deal. Scuttlebutt tells me $5 million, and it was paid to Activision and not id.
 