ATI - Full Tri Performance Hit

Status
Not open for further replies.
WaltC said:
Possibly for the same reason M$ says the current DX rasterizer isn't really valid for nV40? What is it about nV40 which would cause that to happen, do you think? And, if M$ hasn't updated its rasterizer to support the capabilities of nV40, because nV40 is capable of producing a better image than nV3x, then might not the same thing hold true for the current DX rasterizer and R420? Perhaps you can answer your own question by pondering that a bit.

No one mentioned a single nV chip here. You wouldn't be trying to divert, would you?

WaltC said:
Then, too, there's this issue: despite the wishful thinking of some, R4x0 is indeed a new architecture for ATi, and so we ought to expect many things to improve over the next few months in terms of Driver Maturity--another concept worth pondering here.
Could you explain to me what exactly is "architecturally" new in the R4xx line of chips?
I suspect you're hinting at the shader-speed improvements with the recently surfaced beta Catalyst driver for the X800 XT?

WaltC said:
The initial reports I read when this "story" broke had to do with using the DX9 rasterizer, finding small differences in pixels, and making conclusions based on those pixel differences. I believe Dave B. here at B3d ran the same tests and got similar results. It was at that point that ATi began publicly talking about its adaptive trilinear optimization. Only then did the Search for IQ Degradation begin...;)

Well, if you're after DX RefRast pictures, I cannot provide you with anything, because I never did such things. You might ask Dave about it, though.

WaltC said:
Last, can I take it from your comments here that since you apparently can't find any visible mipmap boundaries relative to ATi's trilinear optimization, that you have shifted the focus to The Search For Shimmering Textures? ...:D
:)
Is it just me or are you actively trying to cover your lack of [insert what you want, "motivation" for example] to prove uncle Nyquist wrong?

I did not specifically look for visible Mip-Boundaries in the meantime. One of the reasons being my recent shortage of any R4xx-Sample. :)
Another being that, as you might know already, mip boundaries are especially hard to capture in screenshots. :)
(and then there's always this "denial" behaviour...)
 
dizietsma said:
Dare I say "apples to apples, oranges to oranges ". This forum has only been going on about it for over the last XXX months.

But...the problem is that the issue is not "apples to apples" and never has been. Here's the apples-to-apples people believe exists:

Ati's trilinear optimizations = nVidia's trilinear optimizations

That has never been true and isn't true today. What is true is:

R420 does not = nV40

Catalysts do not = Forcenators

ATi's trilinear optimizations do not = Forcenator trilinear optimizations

ATi's universal optimizations do not = nVidia's universal optimizations

It's evident that we cannot get an Apples-to-Apples comparison from the hardware, drivers, and optimizations, because they are all different and all unequal.

So, how can we get "apples to apples"...?

By comparing products of the same generation and price range produced by each IHV, and using the APIs, in-game IQ comparisons, and synthetic benchmarks (to test API feature support not yet supported in shipping games) as the "equalizers" for the comparison. Other than that, everything else is apples-to-oranges.

The big mistake I see repeated in recurring fashion on this subject is the quite unreasonable assumption that ATi's trilinear optimizations are the same as nVidia's trilinear optimizations, and that you can turn nVidia's off whereas ATI's are always on.

First, ATi's particular approach to optimizing trilinear is not the same approach taken by nVidia for trilinear optimization--in terms of code and method and implementation. They are fundamentally different optimizations. Right off the bat, then, it's plain that direct comparisons between the optimizations are apples-to-oranges.

Second, ATi's trilinear operation is set to function conditionally, and automatically, which means it turns itself off automatically under the right conditions. However, it is improperly, in my view, suggested that there's something wrong with what ATi has done here, simply because people desire to "turn off" ATi's tri optimizations just as they "turn off" nVidia's, so that they can get an "apples-to-apples" comparison.
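For readers trying to follow what is actually being argued over: here is a hedged sketch of the difference between full trilinear blending and a "brilinear"-style reduced blend band. Neither IHV's driver internals are public, so the function names and the `band` parameter are illustrative assumptions, not anyone's actual implementation.

```python
import math

def trilinear_weight(lod: float) -> float:
    """Full trilinear: the blend fraction between the two nearest mip
    levels is simply the fractional part of the LOD."""
    return lod - math.floor(lod)

def brilinear_weight(lod: float, band: float = 0.5) -> float:
    """'Brilinear' sketch: blend only inside a narrow band around each
    mip transition; elsewhere sample a single mip level (pure bilinear).
    `band` (0..1) is the width of the blend region -- 1.0 recovers full
    trilinear, 0.0 is pure bilinear. The exact shape any real driver
    uses is not public; this is only an illustration."""
    f = lod - math.floor(lod)
    lo = 0.5 - band / 2
    hi = 0.5 + band / 2
    if f <= lo:
        return 0.0                   # sample the lower mip only
    if f >= hi:
        return 1.0                   # sample the upper mip only
    return (f - lo) / (hi - lo)      # linear ramp inside the band
```

With `band=1.0` the ramp spans the whole interval and full trilinear falls out; shrinking the band leaves growing regions of pure bilinear sampling, which is exactly where mip boundaries can reappear. An adaptive scheme that conditionally widens the band when texture content demands it is one plausible reading of ATi's description of its optimization.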

The problem with this approach is this: there's no evidence that I've seen to date that conclusively proves that when you "turn off" the tri optimizations in the Forcenator control panel all such optimizations in the drivers are in fact turned off. In the absence of compelling proof which demonstrates that nVidia's bri-defeat switch actually functions across the entire spectrum of 3d games, reliably and predictably, we can see that even if we could "turn off" ATi's just as we "turn off" nVidia's, we still could not be assured of an apples-to-apples comparison, unless we could demonstrate the efficacy in both products of a bri-defeat switch. (A recent example here is the set of Forcenators released to coincide with the R420 launch reviews, in which it came to light later that the cp bri-defeat switch was somehow conveniently "broken" in that set of drivers.)

I am of the opinion that the Forcenator bri-defeat switch will be shown to work in some titles, but have little to no effect in others. I would be pleased to be shown wrong in this opinion, without a doubt. I think about it this way:

The default for nVidia's tri optimization in its drivers for brilinear is "always on." Indeed, it was "always on" long before nVidia added the "off" switch to the control panel, which certainly suggests that brilinear became a fundamental component of nVidia driver design long before any thought was given to turning it "off" via a cp switch. This in turn suggests to me that turning it off via a cp panel switch might not always work predictably.

Had nVidia desired from the start to enable user control over its brilinear optimizations, I would have imagined that the driver default would be "brilinear off" and that a cp switch would enable you to turn it on. Had the first appearance of nVidia brilinear optimization appeared in that configuration, I would be far less skeptical of the "off" switch than I presently am (since it would be a "brilinear on" switch instead, and would have been introduced as an optional component from the start). But such is not the case, and the "off" switch can be "broken"--but apparently the brilinear optimizations cannot be broken in the Forcenators.

See what I mean? If turning on brilinear is indeed fundamentally an optional setting for the Forcenators, then I'd expect to see instances of "brilinear on" being "broken"--instead of what we've seen--which is that the driver mechanism to turn off brilinear has been "broken." The upshot here is that brilinear seems far more of a fundamental component of the Forcenators, and the so-called "off" switch nothing more than an unreliable, unpredictable driver hack added much later, which has already been "broken" at least once. And, as I said, when not "broken" it remains to be seen how reliably and predictably the "off" switch actually works across all 3d games in terms of defeating the optimization.

OK, so where does that leave us with regards to an apples-to-apples comparison? Since we do not yet know if it is possible to actually defeat brilinear within the Forcenators reliably and predictably, then for comparison purposes we must consider the Forcenators to always be "on" with regard to brilinear (the exceptions being the cases we can demonstrate that the defeat switch isn't broken.) But this presents a problem, because although the ATi brilinear optimization has no cp defeat switch, it turns itself on and off according to its pre-programmed conditional response. So, it is always going to be apples-to-oranges when comparing the nVidia approach to optimization with ATi's.

In the end we swing full circle and arrive where we started, and that is we compare these products, apples-to-apples, by way of pricing and generation, by way of API support as demonstrated in games and synthetics, and by way of observed IQ across the software spectrum. That is as close to "apples-to-apples" as we can reasonably expect to get, in my view. I think that a lot of people have taken what is a fairly complex issue and grossly oversimplified it.
 
And I think the ATi fans are starting to resort to fairy tales to justify their preferred companies actions.

Congratulations, WaltC. You have always been capable of posting material of a length that would make Tolkien or Tolstoy blush; now you are rivalling them in terms of fantasy fiction.
 
Quasar said:
No one mentioned a single nV chip here. You wouldn't be trying to divert, would you?

Considering that I was talking about R420, your meaning quite escapes me...;) The M$ comment, while made in an nV40 context, clearly stated that M$ had not yet updated the DX rasterizer for any new-gen gpu/vpu. That R420 would have to be included was my point.

Could you explain to me what exactly is "architecturally" new in the R4xx line of chips?
I suspect you're hinting at the shader-speed improvements with the recently surfaced beta Catalyst driver for the X800 XT?

How about 12-16 pixel pipelines, for starters? How about ps2.0+ support over and above that supported in R3x0? These are major architectural differences between generations, just for starters. I'm sure there are many more differences which ATi has not disclosed, and which I've overlooked. R4x0 does not = R3x0. Very easy to see and understand.

(Parenthetically, I never bought 3dfx's exclamation that the GF1 was "just two TNT2s cobbled together," either...;) Even if it was quite literally true, it still did not diminish the differences between TNT2 and GF1 for me, which were substantial.)

When I say "driver improvements" I'm talking about the routine driver improvements that occur after an IHV ships a new-gen gpu--it's quite a traditional process for IHVs.

Well, if you're after DX RefRast pictures, I cannot provide you with anything, because I never did such things. You might ask Dave about it, though.

It wasn't me who did the rasterizer comparisons--it was the site that started the rumors initially that reported on pixel differences between them, and began hyping the issue based on those differences.

:)
Is it just me or are you actively trying to cover your lack of [insert what you want, "motivation" for example] to prove uncle Nyquist wrong?

I guess what I'm saying is that I haven't seen much utility in visiting uncle Nyquist lately, as he's almost doddering these days...;)

Another being that, as you might know already, mip boundaries are especially hard to capture in screenshots. :)
(and then there's always this "denial" behaviour...)

I hope you're kidding--as the difference between trilinear and bilinear mipmap boundaries has been demonstrated in thousands of published screen shots for the last several years in trade rags and on the Internet...;) It's one of the easiest things to capture in a screen shot, imo.
 
radar1200gs said:
And I think the ATi fans are starting to resort to fairy tales to justify their preferred companies actions.

Congratulations, WaltC. You have always been capable of posting material of a length that would make Tolkien or Tolstoy blush; now you are rivalling them in terms of fantasy fiction.

Why, thank you, radar...! Coming from you, especially, your sentiment assures me of having nailed the issue precisely...:D
 
WaltC said:
The M$ comment, while made in an nV40 context, clearly stated that M$ had not yet updated the DX rasterizer for any new-gen gpu/vpu.
Even if you repeat it endlessly (and I pointed this out to you before), it did not.
 
Xmas said:
WaltC said:
The M$ comment, while made in an nV40 context, clearly stated that M$ had not yet updated the DX rasterizer for any new-gen gpu/vpu.
Even if you repeat it endlessly (and I pointed this out to you before), it did not.

Ditto, yourself...;) That is exactly what the unnamed M$ employee is quoted as saying. He does not state that only nV40 is capable of producing better IQ than the current DX rasterizer; he states that nV40 is capable of better quality because M$ has not yet updated its rasterizer for the new generation of gpus.

Where in his statement do you imagine he is only talking about nV40 and not R420? I'll go dig up the exact quote and add it here as an edit.

Edit:

unknown M$ employee said:
"The DX9 reference rasterizer does not produce an ideal result for level-of-detail computation for isotropic filtering, the algorithm used in NV40 produces a higher quality result. Remember, our API is constantly evolving as is graphics hardware, we will continue to improve all aspects of our API including the reference rasterizer."
(emphasis mine.)

Where in this statement is R420 excluded from "our API is constantly evolving as is graphics hardware, we will continue to improve all aspects of our API including the reference rasterizer"...? Sounds like a very generic statement to me concerning the fact that M$ updates the API and the rasterizer in accordance with updates that occur in 3d hardware. He specifically does not state that this is a condition applicable only to evolution in nVidia hardware, does he?

Also wanted to add the context for this remark as originally published at TR and then cut & pasted (without the TR attribution) by THG: the unnamed M$ employee was asked by TR why the DX rasterizer results in a later set of Forcenators was inferior to the result obtained with the rasterizer in an earlier set of Forcenators, and this was the quote originally attributed to the unnamed M$ employee by Tech Report, which was supposed to explain it. (I am making no comment on the believability of the statement whatever.) I do find it odd that the M$ employee wrote in run-on sentences--two of them back to back--which would at least superficially indicate the author was a bit challenged in English punctuation, perhaps. I am ever skeptical of unattributed quotes...;)
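For context on what "level-of-detail computation for isotropic filtering" in the quote above actually means: the conventional textbook formulation picks the log2 of the larger screen-space texture gradient. The sketch below is that generic formulation, not Microsoft's reference rasterizer nor any IHV's actual code; real hardware approximates the square roots and log2 differently per implementation, which is exactly where the per-pixel differences under discussion come from.

```python
import math

def isotropic_lod(dudx: float, dvdx: float, dudy: float, dvdy: float,
                  tex_w: int, tex_h: int) -> float:
    """Textbook isotropic LOD: log2 of the larger of the two
    screen-space texture-coordinate gradient lengths, measured in
    texels. (dudx, dvdx) and (dudy, dvdy) are the per-pixel texture
    coordinate derivatives in screen x and y."""
    dx = math.hypot(dudx * tex_w, dvdx * tex_h)  # texels per pixel in x
    dy = math.hypot(dudy * tex_w, dvdy * tex_h)  # texels per pixel in y
    rho = max(dx, dy)                            # isotropic scale factor
    return max(0.0, math.log2(rho))              # clamp: no negative LOD
```

Sampling a 256x256 texture at one texel per pixel gives LOD 0 (the base level); two texels per pixel gives LOD 1, i.e. the next mip down. Small differences in how `rho` and the log2 are approximated shift where the fractional LOD crosses mip transitions, so two "correct" rasterizers can disagree pixel by pixel.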
 
Is this all about tiled floors and walls or something? Seems to me this ATI issue is about scaled mip maps only, from one particular way of generating them when you are tiling a floor or something. Frankly, I'd rather see a whole lot less tiled anything; it makes all games seem the same to me, all Quake-like and retro (e.g. Firestarter). If you want more realism you need less repetition and fewer straight lines. Working on lighting is cool and all, but if all your detail is tiled wallpaper it's not going to do a whole lot. Seems to me cards are getting fast enough to actually avoid covering everything in wallpaper; maybe some day mipmaps will just go away. (Even FarCry has magic grass that presents the same view as you walk around it. Dunno, probably just me, but we're a long way from the kind of realism I would like; games seem stuck in a rut.)

My $0.02, no 3d expert or anything though, probably too storage/bandwidth heavy to avoid tiling, but maybe procedural textures or something, dunno.

Sorry if it's not keeping exactly to the topic of wrangling over who done what, lol.
 
Martillo1 said:
jvd said:
I recall nvidia saying the same thing to not use dynamic branching with the 6800s . I will try to bring up the quote.

You missed my point. What I mean is that ATI itself admits its intentions of supporting S.M. 3.0. That, coupled with NVIDIA actually supporting S.M. 3.0 and the ease of development it gives, leads to the conclusion that S.M. 3.0 will be widely used.

rgrds
Why wouldn't they support it in the future? If they support SM 4.0 in their next core (not the refresh of the R420), then by the DX spec they have to support SM 3.0.

It's very simple. Just like SM 2.0 has SM 1.0 built into it.

Hell, they could skip to SM 5.0 (if such a thing is in the plans) and they would still have to run SM 3.0.

The jury is still out on whether the 6800s can run SM 3.0 at reasonable speeds. The only SM 2.0 game, Far Cry, runs in the NV3x mode, which they still have yet to fix. You would think a simple ID patch would force it to run the standard path that the R3x0s are running. Since Crytek, or whatever the name is, is on such good terms with nVidia, you'd think they would have done this for all those 6800 users so they could enjoy the full quality of the game, as ATI users have been doing with two-year-old cards.
 
How about 12-16 pixel pipelines, for starters? How about ps2.0+ support over and above that supported in R3x0? These are major architectural differences between generations, just for starters.
I think you're stretching the definitions of both architectural and major here, Walt.

I hope you're kidding--as the difference between trilinear and bilinear mipmap boundaries has been demonstrated in thousands of published screen shots for the last several years in trade rags and on the Internet... It's one of the easiest things to capture in a screen shot, imo.
Sure, the difference b/w "reference," or "legacy," bi and tri is easy to spot. The difference b/w bri/try and tri is harder to spot in a static screenshot, as digit-life's latest comparison shows, and Ted has shown a mipmap boundary clearly in evidence with his MP2 videos.
 
Sure, the difference b/w "reference," or "legacy," bi and tri is easy to spot. The difference b/w bri/try and tri is harder to spot in a static screenshot, as digit-life's latest comparison shows, and Ted has shown a mipmap boundary clearly in evidence with his MP2 videos.

Not to call Ted a liar, and because I don't own the game myself. But you'd think if Ted's videos were reproducible, one of the websites on the witch hunt would have already made their own videos and written an article on it.
 
Pete said:
I think you're stretching the definitions of both architectural and major here, Walt.

Unless you don't think there's any "major" difference between 16 pipelines and 8 pipelines (not counting the differences inside each of the pipelines themselves), I think that you're underestimating the differences. Most people felt there were big and major differences between TNT2 and GF1, didn't they? And, most people felt there was a major difference between the 4 pipes in nV3x and the 8 in R3x0, and if anything the difference between R4x0 and R3x0 is even greater, imo. It seems to me that some folks don't think an architecture qualifies as "different" unless the differences are equivalent to the differences between nV3x and nV4x. Yet, many of these same people considered the differences between nV25 and nV30 to be "profound"...;)

Sure, the difference b/w "reference," or "legacy," bi and tri is easy to spot. The difference b/w bri/try and tri is harder to spot in a static screenshot, as digit-life's latest comparison shows, and Ted has shown a mipmap boundary clearly in evidence with his MP2 videos.

The whole point to the kind of trilinear optimization discussed in the thread is that it is supposed to be hard, if not impossible, to distinguish the difference....;) Right? Differences between bi and trilinear traditionally have been very easy to spot, and to capture in a screen shot, precisely because the differences are anything but hard to spot. That's why we've never had to resort to such cumbersome devices as "MP2 videos" to display the differences. On such videos, let me ask you this: would you be satisfied with a hardware web site which dropped its 1024x768 to 1600x1200 screen shots and gave you 400x300 MPG2 still frames instead? I think not, right?...;) The pixel-dropping, res-lowering, artifact-inducing, frame-dropping 3d-to-2d conversion process would kind of defeat the whole "screen shot" concept, wouldn't it?
 
WaltC said:
Most people felt there were big and major differences between TNT2 and GF1, didn't they?

And, most people felt there was a major difference between the 4 pipes in nV3x and the 8 in R3x0, and if anything the difference between R4x0 and R3x0 is even greater, imo.

The number of pipes is completely unimportant since we're talking about featureset. And the difference in featureset between the R300 and R420 is very small, especially if you don't count the "HD" addition to the R300 featureset :). The difference between the NV3X and R300 is imo a lot bigger.
 
Bjorn said:
The number of pipes is completely unimportant since we're talking about featureset. And the difference in featureset between the R300 and R420 is very small, especially if you don't count the "HD" addition to the R300 featureset :). The difference between the NV3X and R300 is imo a lot bigger.
Yeah that's a good point. The NV3x lacked a lot of features the R300 supported, such as floating point render targets and MRTs.

-FUDie
 
Martillo1 said:
You missed my point. What I mean is that ATI itself admits its intentions of supporting S.M. 3.0. That, coupled with NVIDIA actually supporting S.M. 3.0 and the ease of development it gives, leads to the conclusion that S.M. 3.0 will be widely used.

Using that same logic, then CG should have been widely used. ATI was clear on its intentions of supporting CG while NVIDIA was fully supporting CG which was to ease development. So SM3 will be as popular as CG is/was?
 
BRiT said:
Martillo1 said:
You missed my point. What I mean is that ATI itself admits its intentions of supporting S.M. 3.0. That, coupled with NVIDIA actually supporting S.M. 3.0 and the ease of development it gives, leads to the conclusion that S.M. 3.0 will be widely used.

Using that same logic, then CG should have been widely used. ATI was clear on its intentions of supporting CG while NVIDIA was fully supporting CG which was to ease development. So SM3 will be as popular as CG is/was?

No, because Microsoft HATES IT when companies try to "circumvent" their plans by creating non-standard things, ESPECIALLY new languages. That's what nVidia needs to learn, don't go against the flow (Microsoft), otherwise you'll end up losing, big time.

Same thing with DirectX 9. Microsoft and ATI have been working close on creating that standard. nVidia had different plans for DirectX 9. They wanted DirectX 9 to conform to their liking. The result? NV30. No comments. But right now, with a clean design sheet (new core) that is more DirectX 9 friendly, they've got a good thing going. Hopefully they'll learn from their mistakes (everyone makes mistakes, including MS & ATI) and move on to be great competitors again.

About SM 3.0: I wouldn't worry about it. SM 4.0 is in the works, and when the time comes to play games that support it, we'll be 3 generations ahead. God knows who would win the performance crown then. Just sit back, relax, and watch the show, 'cause next-generation cards are coming very fast. So fast, that a bird told me we'll be seeing a new red card THIS YEAR..... :devilish:
 
Let me thank you in advance for not throwing some 50k worth of ASCII code at me in your posting. :)

WaltC said:
Quasar said:
No one mentioned a single nV chip here. You wouldn't be trying to divert, would you?

Considering that I was talking about R420, your meaning quite escapes me...;) The M$ comment, while made in an nV40 context, clearly stated that M$ had not yet updated the DX rasterizer for any new-gen gpu/vpu. That R420 would have to be included was my point.
That would have been my next question: what do you think the RefRast is good for, if you go to such lengths to "prove" that R420 peculiarities have not been incorporated into it?
To my knowledge, this RefRast provides anything but a quality reference. How could it, looking at its capabilities - even for older generations?

WaltC said:
Quasar said:
Could you explain to me what exactly is "architecturally" new in the R4xx line of chips?
I suspect you're hinting at the shader-speed improvements with the recently surfaced beta Catalyst driver for the X800 XT?

How about 12-16 pixel pipelines, for starters? How about ps2.0+ support over and above that supported in R3x0? These are major architectural differences between generations, just for starters. I'm sure there are many more differences which ATi has not disclosed, and which I've overlooked. R4x0 does not = R3x0. Very easy to see and understand.

(Parenthetically, I never bought 3dfx's exclamation that the GF1 was "just two TNT2s cobbled together," either...;) Even if it was quite literally true, it still did not diminish the differences between TNT2 and GF1 for me, which were substantial.)
Well, others, some more knowledgeable than me, have spilled the beans on this one already.
Just adding more lanes does not change the road, and to me the R420, looking at its diagrams, does not seem to have changed from cobblestone to asphalt yet. It only has some more lanes...

WaltC said:
When I say "driver improvements" I'm talking about the routine driver improvements that occur after an IHV ships a new-gen gpu--it's quite a traditional process for IHVs.

Great, because eventually you'd have ended up seeing the same relative performance improvements in the year-old RV3x0 architecture - which would not have been very supportive of your line of argument.

WaltC said:
Quasar said:
Well, if you're after DX RefRast pictures, I cannot provide you with anything, because I never did such things. You might ask Dave about it, though.
It wasn't me who did the rasterizer comparisons--it was the site that started the rumors initially that reported on pixel differences between them, and began hyping the issue based on those differences.
If you happen to be hinting at the German website www.Computerbase.de, rest assured that you're wrong. If not, please tell me the name of the website you're referring to.

WaltC said:
Quasar said:
:)
Is it just me or are you actively trying to cover your lack of [insert what you want, "motivation" for example] to prove uncle Nyquist wrong?

I guess what I'm saying is that I haven't seen much utility in visiting uncle Nyquist lately, as he's almost doddering these days...;)

So you really do not have a clue.... ;)

WaltC said:
Quasar said:
Another being that, as you might know already, mip boundaries are especially hard to capture in screenshots. :)
(and then there's always this "denial" behaviour...)

I hope you're kidding--as the difference between trilinear and bilinear mipmap boundaries has been demonstrated in thousands of published screen shots for the last several years in trade rags and on the Internet...;) It's one of the easiest things to capture in a screen shot, imo.
We're not talking about bilinear and trilinear, right? :)
And everyone knows that mip boundaries, especially when covered up with "unacceptable filtering tricks" (a quote from ATi Technologies Inc. on exactly this matter - reducing trilinear to bilinear and thus the amount of work done), are best visible in motion.
 
Not to call Ted a liar, and because I don't own the game myself. But you'd think if Ted's videos were reproducible, one of the websites on the witch hunt would have already made their own videos and written an article on it.
Also true. I suppose we should all lay off this issue for a month or so, to give IHVs a chance to release updated drivers, game devs a chance to patch their games, and websites a chance to test everything comprehensively. From my spot on the bleachers, it seems there may be some cases where try obviously compromises IQ, but they would appear to be few and far between (by Ted's and Grestorn's own admissions).

Unless you don't think there's any "major" difference between 16 pipelines and 8 pipelines (not counting the differences inside each of the pipelines themselves), I think that you're underestimating the differences.
IMO, Walt, it's you who are confusing engineering with architecture. :) I consider a doubling of pipes more of an engineering feat than an architectural one.

Most people felt there were big and major differences between TNT2 and GF1, didn't they?
Well, I wouldn't call the introduction of TnL small. ;)

And, most people felt there was a major difference between the 4 pipes in nV3x and the 8 in R3x0
R300 and NV30 were different in more significant ways than their pipelines. RV360 and R420 don't seem to be as significantly different to me than R300 and NV30, and I'm surprised we're actually debating this point. The comparison of RV360->R420 and NV20->NV25 should be so obvious as to be an unspoken understanding between us 3D fans. :p

The whole point to the kind of trilinear optimization discussed in the thread is that it is supposed to be hard, if not impossible, to distinguish the difference.... Right?
Yep, thus my emphasis on moving rather than static comparisons.

Differences between bi and trilinear traditionally have been very easy to spot, and to capture in a screen shot, precisely because the differences are anything but hard to spot.
I can't really argue with that, can I? :LOL:

That's why we've never had to resort to such cumbersome devices as "MP2 videos" to display the differences.
Sorry, just to be clear, I used MP2 to refer to Max Payne 2.

On such videos, let me ask you this: would you be satisfied with a hardware web site which dropped 1024-768-1600x1200 screen shots and gave you 400x300 MPG2 still frames instead? I think not, right?... The pixel-dropping, res-lowering, artifact-inducing, frame-dropping 3d-to-2d conversion process would kind of defeat the whole "screen shot" concept, wouldn't it?
I don't think it would defeat it so much as complement it, as motion-induced artifacts would seem to be logically easier (or only possible) to demonstrate via videos, not static pictures. Again, bandwidth is probably the prohibitive factor, but they should be able to squeeze out two 5-10sec, 800x600 divx clips at 1-2MB (not much larger than a few large jpgs).

Let me thank you in advance for not throwing some 50k worth of ASCII code at me in your posting.
:LOL: :!: He still would've had 14k left to throw in a 3D demo. ;)
 
Quasar said:
Well, others, some more knowledgeable than me, have spilled the beans on this one already.
Just adding more lanes does not chance the road and to me the R420, looking at its diagrams, does not seems to have changed from cobblestone to asphalt yet. It only has some more lanes...

The R420 may not be as big a difference architecturally from the R300 as the NV40 is from the NV30, but that does not mean the NV40 is better than the R420. Anyone with any knowledge will tell you the R300 was a big step ahead of the NV30; the NV40 corrects many of these issues. In order for the NV40 to do this, nVidia had to drastically overhaul their architecture. ATI had it right the first go around, so naturally they didn't have to try as hard to achieve similar results to those of the NV40. If you compare the NV40 to the R300, it's nothing more than speed enhancements along with SM3 support.

It is also unfair to ATI to overlook the fact that both the R420 and R300 are smaller and consume less power than nVidia's offerings. In fact the X800 XT, while being superior to the R300 speed-wise as well as offering new features such as 3Dc, TAA and SM2.0b, draws less power than the 9800 XT. The same can hardly be said for nVidia.
 
Unless you don't think there's any "major" difference between 16 pipelines and 8 pipelines (not counting the differences inside each of the pipelines themselves), I think that you're underestimating the differences.
IMO, Walt, it's you who are confusing engineering with architecture. I consider a doubling of pipes more of an engineering feat than an architectural one.


Well, even a minor change in the cache in each of these 16 pipelines, and the increase in clock speed, can lead to tweaks being done at the driver level to get more speed.

Also, you're overlooking the fact that it's a brand new memory controller, which many ATI employees have said is operating at only 70% of what it should be.

There are many things they could have changed inside the standard R3x0 design that could affect the speed of the chips; they could even have fixed hardware problems or defects.

While the overall design of the chip may be similar to the R3x0 series, even little changes can have large effects.
 