NV40: 6x2/12x1/8x2/16x1? Meh. Summary of what I believe

OpenGL guy said:
You choose to miss the point as well.

Well, if it's any consolation, I'm still searching for your line of thinking.

What am I defending here? The R300 does as many stencil ops per clock as the NV30. What I am questioning is how you can conclude that nvidia put any more work into stencil op performance than ATI did.

If you want to say that the 5800 Ultra was 50% faster because of its higher clock speed, I won't dispute that.

Well, that is exactly what I'm stating. I never commented along those lines [concerning work put in], other than that the Radeon was inferior in absolute stencil performance in a product a consumer could purchase. That falls squarely under the whole 50%-faster point, which is beyond dispute, as you know and just stated.
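
To put numbers on that 50% (a back-of-the-envelope sketch, assuming eight z/stencil ops per clock for both chips — the per-clock parity conceded on both sides, matching the R300's eight pipes — and the shipping core clocks):

5800 Ultra: 500 MHz x 8 ops/clock = 4000 M stencil ops/s
9700 Pro: 325 MHz x 8 ops/clock = 2600 M stencil ops/s

4000 / 2600 is about 1.54, i.e. roughly 50% higher theoretical stencil throughput.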

What I dispute is calling the 5800 Ultra a real product. I never saw it on store shelves, for example.

So, it's only "real" if it meets some arbitrary bound you describe? :rolleyes: Come on, bud, the product in question is being used in people's PCs to good effect; it's most definitely real.

I still consider most high-end sports cars "real" even though there are only a handful in circulation. For example, the Ferrari F50 (one of my favorite-looking cars) saw only 349 produced, sold at a high price. Is it not "real" either? This is semantics cum insanity.

I'll state it clearly: clock for clock, NV30 and R300 do the same number of stencil ops. How can you say that NV30 is more optimized for stencil ops than R300?

Pretty simple question.

I would. This is just like a replay of the whole Pentium 4/Athlon argument. Doing N ops per clock is great, but it's only one facet of what defines absolute performance. This is too basic to even mention: the fact is that the 5800U has higher output than the competition, and that's beyond any reasonable dispute (reasonable in that you've stated it's not "real"). You're playing the same game that so many AMD supporters have played in the past, trying to claim that per-clock performance via concurrency is the only "acceptable" means of reaching a performance level. I disagree, and I have the actual, tangible product and its absolute performance on my side. Sorry to disagree, but I have to.

To Joe:

JoeDeFuria said:
The number of units sold (or in this case, producible at that clock speed in any appreciable quantity) is an indication of which of the differing ideologies is actually legitimate.

Joe, this is crazy; you're way too smart for this. So, what you're saying is that the XBox (especially in Japan) isn't "legitimate" in that the number of units sold is approaching 1/9th that of the PlayStation 2?

Sales are never an indication of technological superiority. There are Marketing Departments for a reason.
 
Vince said:
Joe, this is crazy; you're way too smart for this.

And apparently, you're not. But I guess we can't dispense with the petty insults, eh?

So, what you're saying is that the XBox (especially in Japan) isn't "legitimate" in that the number of units sold is approaching 1/9th that of the PlayStation 2?

Why, did MS actually stop manufacturing the X-Box for Japan before any appreciable product even hit the shelves? And did they then immediately replace the X-Box with a new version of it?

Come on, Vince, you're much too smart not to understand exactly what we're talking about with respect to the FX5800's viability as it relates to its shipping clock speed...

And I have a sneaking feeling that the 5800 sales were "somewhat less" than 1/9th of the 9700 sales....
 
Vince said:
JoeDeFuria said:
The number of units sold (or in this case, producible at that clock speed in any appreciable quantity) is an indication of which of the differing ideologies is actually legitimate.

Joe, this is crazy; you're way too smart for this. So, what you're saying is that the XBox (especially in Japan) isn't "legitimate" in that the number of units sold is approaching 1/9th that of the PlayStation 2?

I checked his posts over a couple of times; nowhere could I see him mention the PS2 or XBox in this thread.

Why must people always use horrible analogies to attempt to support bad arguments?
 
Joe DeFuria said:
Why, did MS actually stop manufacturing the X-Box for Japan before any appreciable product even hit the shelves? And did they then immediately replace the X-Box with a new version of it?

First of all, it holds in that it shows that sales aren't a metric for technological superiority (or inferiority), contrary to what you blatantly stated.

But, just to correct you, they did stop new shipments for over a year, almost two IIRC, which did impact production. They also replaced the controller when it became a source of outrage and was deemed a drag on sales, designing a new one for the market that was substituted into shipping product. Given the different dynamics of the console market, that's about as close to what nVidia did as they could come until XBox2.

And I have a sneaking feeling that the 5800 sales were "somewhat less" than 1/9th of the 9700 sales....

Ok, fine. It still doesn't make it any less "real", which is what I objected to.
 
AlphaWolf said:
I checked his posts over a couple of times; nowhere could I see him mention the PS2 or XBox in this thread.

Why must people always use horrible analogies to attempt to support bad arguments?

Yeah...this is a PC Graphics forum, and yet Vince is always dragging irrelevant console stuff over in here because he thinks it applies somehow. He should just go back to his X-Box-hating console forum...

(I know a select few can appreciate that tongue in cheek comment...)
 
Well, Joe, I can't always juggle 5 different people responding to this thread, which started because some people were saying that the NV40 can't possibly be good because NVidia's engineers aren't good.

Anyway, I think the point has already been made, if not more eloquently by Vince. There is no single "correct way" to reach a performance level. I stated that NVidia made some design decisions (.13um, low-k, zixel pipelines, etc.): NVidia went for per-pipeline-per-clock performance for stencil, but traded off color-op-per-clock performance. In that sense, they concentrated on stencil, because they were willing to trade off color ops. OGL Guy misinterpreted this as some attack against ATI and somehow took offense, which it was not. I merely stated that NVidia tried something different this time around. There are many ways to design for stencil fill performance. I happen to think that ATI's approach to stencil fill performance is no different than on the R200 or any other past card. Nothing about the way z/stencils are output seems to have changed.
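
To make the tradeoff concrete: the workload a zixel mode targets is a stencil-only pass, where color writes are disabled entirely, so a chip that can repurpose its color units as extra z/stencil units finishes the pass in fewer clocks. Here's a rough OpenGL sketch of such a pass (standard z-pass shadow-volume stenciling, nothing NV30-specific; drawShadowVolumes() is a hypothetical helper standing in for the app's volume geometry):

Code:
    #include <GL/gl.h>

    extern void drawShadowVolumes(void);  /* hypothetical: issues the volume geometry */

    void stencil_volume_pass(void)
    {
        /* Stencil-only: no color or depth writes, so per-clock z/stencil
           throughput is the only thing this pass can be limited by. */
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_STENCIL_TEST);
        glEnable(GL_CULL_FACE);
        glStencilFunc(GL_ALWAYS, 0, ~0u);

        /* Front faces that pass the depth test increment the stencil count... */
        glCullFace(GL_BACK);
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
        drawShadowVolumes();

        /* ...back faces that pass it decrement the count. */
        glCullFace(GL_FRONT);
        glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
        drawShadowVolumes();

        /* The lit pass that follows draws only where stencil == 0. */
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        glStencilFunc(GL_EQUAL, 0, ~0u);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    }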

This isn't an attack on ATI, and it is not a value judgement. It is simply a statement of what NVidia attempted to do with the NV30. Had other problems not arisen with the design, especially the .13um and low-k issues, we might have seen a 6- or 8-pipe NV30, and the "designed for stencil" performance aspect would have been obvious.

I don't really understand why OGLGuy is taking exception to this.

P.S. What's with this fixation on the 5800? From what I can tell from stencil benchmarks in real games, like Homeworld 2 and C&C Generals, the 5950 still has the stencil performance edge.
 
DC,

First of all, please do not use "Vince" and "Eloquent" in the same sentence; I've got a rather expensive monitor here, and I'd rather not have it covered with mucus-laden Diet Coke... ;)


OGL Guy misinterpreted this as some attack against ATI and somehow took offense, which it was not.

I see it completely differently.

You (and Vince) seemed to take OpenGL guy's comments as some sort of personal affront, versus a simple questioning of the stated "superiority" of nVidia's architecture with respect to stencil.

Please, go back and actually read OpenGL guy's comments.

I happen to think that ATI's approach to stencil fill performance is no different than on the R200 or any other past card. Nothing about the way z/stencils are output seems to have changed.

nVidia's method may be different...is it "better?" Is OpenGL somehow being overly defensive if he questions that?

This isn't an attack on ATI, and it is not a value judgement. It is simply a statement of what NVidia attempted to do with the NV30. Had other problems not arisen with the design, especially the .13um and low-k issues, we might have seen a 6- or 8-pipe NV30, and the "designed for stencil" performance aspect would have been obvious.

That didn't happen, so that's all irrelevant, of course.

I don't really understand why OGLGuy is taking exception to this.

And I don't understand why you're taking exception to his comments, so I guess we're all even. ;)
 
Joe is actually John Leguizamo! FLEE!

(okay, so not really. but kinda, if you squint a bit...)
 
Vince said:
What am I defending here? The R300 does as many stencil ops per clock as the NV30. What I am questioning is how you can conclude that nvidia put any more work into stencil op performance than ATI did.
If you want to say that the 5800 Ultra was 50% faster because of its higher clock speed, I won't dispute that.
Well, that is exactly what I'm stating. I never commented along those lines [concerning work put in], other than that the Radeon was inferior in absolute stencil performance in a product a consumer could purchase. That falls squarely under the whole 50%-faster point, which is beyond dispute, as you know and just stated.
Except that every product since the 5800 Ultra has shipped with clocks under 500 MHz. Where's the alleged superiority of stencil ops?
What I dispute is calling the 5800 Ultra a real product. I never saw it on store shelves, for example.
So, it's only "real" if it meets some arbitrary bound you describe? :rolleyes: Come on, bud, the product in question is being used in people's PCs to good effect; it's most definitely real.

I still consider most high-end sports cars "real" even though there are only a handful in circulation. For example, the Ferrari F50 (one of my favorite-looking cars) saw only 349 produced, sold at a high price. Is it not "real" either? This is semantics cum insanity.
Sure, that's a real close comparison. How many other graphics chip models have sold in quantities as low as the NV30 Ultra? Cars like the Ferrari that are sold in such low quantities cost hundreds of thousands of dollars, and perform like it. Can't say the same about the 5800 Ultra on either count.
I'll state it clearly: clock for clock, NV30 and R300 do the same number of stencil ops. How can you say that NV30 is more optimized for stencil ops than R300?

Pretty simple question.
I would. This is just like a replay of the whole Pentium 4/Athlon argument. Doing N ops per clock is great, but it's only one facet of what defines absolute performance. This is too basic to even mention: the fact is that the 5800U has higher output than the competition, and that's beyond any reasonable dispute (reasonable in that you've stated it's not "real"). You're playing the same game that so many AMD supporters have played in the past, trying to claim that per-clock performance via concurrency is the only "acceptable" means of reaching a performance level. I disagree, and I have the actual, tangible product and its absolute performance on my side. Sorry to disagree, but I have to.
Except that I ask, again: where's the superiority when every newer product has shipped at less than 500 MHz? I also use this as evidence that the 5800 Ultra was not a product. If the architecture is so "pipelined and revolutionary", then newer products should be shipping at ever higher speeds.
 
OpenGL Guy said:
But in any event, the R300 does well with any shaders, as it's always doing 8 colored pixels per cycle. Even single-textured polygons run at full speed. Can you say that about NV3x?
OpenGL Guy said:
My point wasn't to put down the NV30 or nvidia, but simply to question your conclusions.

IMO, OpenGL Guy, the reason they are taking exception to what you're saying is that your point IS to put the NV30 down in light of the R300. Anytime anyone posts anything about the NV3x design that puts a positive spin on the architecture, you respond with comments like: that's fine... but look what the R300 can do... or, in your words exactly, "Can you say that about NV3x?" That comment is derogatory in nature, and I don't know how you can spin it any other way. I'm fine with this, because I know you're biased, and you should be. What upsets me, and maybe this is just the vibe I'm getting, is that you masquerade your bias behind statements which sound objective, but can't be, because the R300 is partly your product and you want to promote it.
 
If you notice, bdmosky, OpenGL guy's initial post in this thread was a direct response to a post by DC...where DC made a comparison of NV30 to R300.

The stage was set already.
 
Joe DeFuria said:
You (and Vince) seemed to take OpenGL guy's comments as some sort of personal affront, versus a simple questioning of the stated "superiority" of nVidia's architecture with respect to stencil.

The only problem with this theory is that there was never any stated "superiority". Here is the paragraph that OGLGuy responded to:

DemoCoder said:
NV30 wasn't a bad design, it simply wasn't good enough. If ATI hadn't shipped the R300, people would be quite satisfied with the boost the NV30 delivered over the NV25. It is only because ATI did such an incredible job that we view the NV30 as suckage. ATI went for full FP; Nvidia thought long instruction limits, stencil, and fast integer were the right strategy. ATI guessed the market right.

Nowhere did I state that NVidia's performance was superior. I simply stated that NVidia prioritized things differently. NVidia concentrated on process shrink, low-k, long instruction limits (and pipeline flexibility), plus stencil. It is unquestionable that they "concentrated" on it. They tried a novel design with a lot of extra complexity and control logic. I didn't say they beat ATI or that their design is superior. For example, the fact that they designed in support for extremely long shaders says nothing about performance. Ditto for UltraShadow. It merely shows that NVidia had their priorities mixed up. ATI's big push of FP throughout the entire pipeline turned out to be the right priority. To me, ATI's stencil performance came "for free": they didn't have to change anything, nor did they have to "design for boosted stencil performance". Whether this holds for future chips, where the concept of a "pipeline" might get blurred and various stages get uncoupled (such as Z and color), remains to be seen.
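
(Aside, for anyone unfamiliar with it: UltraShadow amounts to a depth-bounds test, exposed in OpenGL through the EXT_depth_bounds_test extension. It lets the app clamp shadow-volume rasterization to the depth range a light can actually affect. A minimal sketch, assuming the extension is present and its entry point has been loaded; the 0.3/0.7 bounds are made-up per-light values:

Code:
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Fragments whose stored depth lies outside [zmin, zmax] are
       rejected before any stencil update, so volume fill outside the
       light's depth range costs nothing. */
    void draw_volumes_with_depth_bounds(void)
    {
        glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
        glDepthBoundsEXT(0.3, 0.7);   /* assumed window-space bounds for this light */
        /* ... render this light's shadow volumes ... */
        glDisable(GL_DEPTH_BOUNDS_TEST_EXT);
    }

Note that this says nothing about raw throughput; it only trims work, which is exactly the point about priorities.)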

OGLGuy somehow interpreted the statement that NVidia "went for long instruction limits, stencil, etc." as an assertion that this gave NVidia superior performance.


Please, go back and actually read OpenGL comments.

I am not the one who originally misinterpreted things. And don't forget that my original response was a direct response to a bunch of anti-NVidia vitriol, with people already attacking NVidia's engineers and the NV40 before anything has even been released.

nVidia's method may be different...is it "better?"

I never asserted that. In fact, I've been trying to assert the opposite: that NVidia's engineers were not idiots and that they tried something different; doing something different is not necessarily an indication of your skill, and being different is not necessarily the same as being inferior. Hence the comparison to the Itanium. Sometimes non-traditional designs work out, sometimes they don't.
 
OpenGL guy said:
Except that every product since the 5800 Ultra has shipped with clocks under 500 MHz. Where's the alleged superiority of stencil ops?
Why is 500 MHz the magical number for achieving 'stencil superiority'? NV3x and R3x0 do an equivalent number of stencil ops per clock; hence, more MHz = more stencil power.

The 5800U was clocked higher than the 9700P, but was an "imaginary" product. The 5900U and 5950 were also both clocked higher than their competition, and were definitely not imaginary. Where's the issue?

Edit: Yes, it's a bad habit of mine to constantly edit and rephrase stuff.
 
Fodder said:
OpenGL guy said:
Except that every product since the 5800 Ultra has shipped with clocks under 500 MHz. Where's the alleged superiority of stencil ops?
Oh come on. Is the 5950 clocked higher than the 9800XT? How about the 5900U and 9800P? Why is 500 MHz the magical number for achieving 'stencil superiority'?
Uh, because that's what the 5800 Ultra was clocked at? DemoCoder, and others, have claimed that the NV3x is designed to do stencil well. If so, then why are the chips doing fewer stencil ops now than before?

And, yes, the 5950 is clocked higher than the 9800XT, so why is it slower in 3D Mark 2003 GT3, which makes heavy use of stencil ops? Does higher clock speed automatically imply higher performance?
 
OpenGL guy said:
Except that every product since the 5800 Ultra has shipped with clocks under 500 MHz. Where's the alleged superiority of stencil ops?

  • In this discussion, are we referring to the 5800U?
  • If so, is the 5800U superior in absolute [theoretical] performance to the then-contemporary competition?

End of story, as that's all I'm stating. Stop trying to enlarge this debate so you can show ATI's superiority/nVidia's inferiority. We all realize we're viewing a singular case at a specific point in time.

Vince said:
So, it's only "real" if it meets some arbitrary bound you describe? :rolleyes: Come on, bud, the product in question is being used in people's PCs to good effect; it's most definitely real.

I still consider most high-end sports cars "real" even though there are only a handful in circulation. For example, the Ferrari F50 (one of my favorite-looking cars) saw only 349 produced, sold at a high price. Is it not "real" either? This is semantics cum insanity.
Sure, that's a real close comparison. How many other graphics chip models have sold in quantities as low as the NV30 Ultra? Cars like the Ferrari that are sold in such low quantities cost hundreds of thousands of dollars, and perform like it. Can't say the same about the 5800 Ultra on either count.

Yes, it's a close comparison, for the reasons you stated. Both are marketed by their producers as special, high-end, high-price, low-production-volume parts which cater to a niche during a limited production run, after which they are superseded by cheaper, lower-performing products.

And get this: both are "real." True story, too.

Except that I ask, again: where's the superiority when every newer product has shipped at less than 500 MHz? I also use this as evidence that the 5800 Ultra was not a product. If the architecture is so "pipelined and revolutionary", then newer products should be shipping at ever higher speeds.

Get this through your head: nobody is debating ATI's superiority since then. We all know it's there; we all know the current situation in the 3D world. Nobody in this current debate cares about anything but what is being debated here: namely, the 5800U and its stencil performance relative to the competition at the time. What ATI and nVidia and Michael Jackson have done since then has no effect whatsoever on how the 5800U compares with the competition of its day. Get off your frickin' selfish, self-reinforcing, pathetic ATI high horse and try to act like an engineer and not a Derek Perez disciple.

And yes, the 5800U was a "product"; I'm sure you can afford a dictionary.
 
DemoCoder said:
Nowhere did I state that NVidia's performance was superior.

Vince said:
Nobody in this current debate cares about anything but what is being debated here: namely, the 5800U and its stencil performance relative to the competition at the time.

It would probably help if you two could agree on what is actually being "debated here." :rolleyes:
 
Joe DeFuria said:
It would probably help if you two could agree on what is actually being "debated here." :rolleyes:

Believe it or not, Joe, we're two different people. We can actually have different positions and argue different things; DemoCoder even stated this to you before. This isn't a "get behind the leader and defend or attack your hated IHV at all costs" debate. At least it isn't on this side of it.... :rolleyes:
 
Vince said:
Believe it or not, Joe, we're two different people. We can actually have different positions. This isn't a "get behind the leader and defend or attack your hated IHV at all costs" debate. At least it isn't on this side of it.... :rolleyes:

No, Vince, it's worse. On your side of it, it's "lead your own personal attack on an individual who works for your hated IHV."

I don't see DC participating in the "personally attack OpenGL Guy" game here...you're right...you are two different people. At least DC can be civil in his disagreements, and not blindly hypocritical.
 
Man, this thread is a sick joke. Which architecture was superior? Which one worked and which one did not? Which card performed and which one did not? Which one did not have to cheat in the drivers for the last year, and which one did just to keep up?

The rest of this really does not matter. It's just a bunch of hot air about how good an architecture supposedly was when it never had the performance to back it up. That is the truth, that is what happened, and it was borne out in sales as well, as consumers saw what was going on.

Who cares what it could have been? What was it when it was all over?


Not only that, but to lay some of the other points to rest: if it was the .13um and low-k from TSMC that were so bad, how come, after nVidia claimed that .13um was not ready for the mass market, ATI then shipped the 9600 on .13um, and later, with the XT, used low-k? Then, after nVidia switched to IBM, they still couldn't get chips in any kind of quantity, so they started working with TSMC again as well.

The truth is that there were some serious design issues. In all of this, it is not what you can design, but how you design it to meet both your cost goals and your performance goals. A product that performs well but is too expensive to mass-produce is no good either. Possibly nVidia cut a few too many corners someplace. I'm not sure, as I don't know that kind of engineering, but I bet they have some design issues to work through.
 