G80 vs R600 Part X: The Blunt & The Rich Feature

A point that I think Jawed may have been trying to make a while back is that R580 did not significantly outperform G7x when it was launched, but that on the sort of games coming out now it is well ahead of G7x. You could therefore argue that the R580 architecture was "forward-looking" and a better design than people initially realised. If the same thing were to happen with R600 (i.e. the games that come out over the next 12 months run better on R600 than they do on G80) that would be a sort-of vindication of the architecture, and would make buying R600 a slightly better long-term investment.

I'm not saying that will happen, of course; it may not. And even if it does, the absolute performance of future games on R600 may still be unplayably low.
 
A point that I think Jawed may have been trying to make a while back is that R580 did not significantly outperform G7x when it was launched, but that on the sort of games coming out now it is well ahead of G7x.

I think most people here are well aware of this. The problem is that he's using benchmarks with the G70's AF optimizations disabled to prove it. And that doesn't really show the potential advantages of the R580's shader power, dynamic branching capabilities and so forth, just that G7x doesn't perform that well without filtering optimizations.
 
A point that I think Jawed may have been trying to make a while back is that R580 did not significantly outperform G7x when it was launched, but that on the sort of games coming out now it is well ahead of G7x. You could therefore argue that the R580 architecture was "forward-looking" and a better design than people initially realised. If the same thing were to happen with R600 (i.e. the games that come out over the next 12 months run better on R600 than they do on G80) that would be a sort-of vindication of the architecture, and would make buying R600 a slightly better long-term investment.

I'm not saying that will happen, of course; it may not. And even if it does, the absolute performance of future games on R600 may still be unplayably low.

I think there are about... umm... 5 or 10 people who buy high-end GPUs as long-term investments, just to enjoy the great satisfaction of running a hypothetical future game at 10 FPS while the competitor's card, which beat their long-term investment throughout its lifecycle, runs it at 5-6 ;). Early adopters and high-end GPU buyers generally upgrade every generation... OK, maybe they skip interim parts if there's nothing major about them.

Is this perceived superiority of the R580 due to its focus on DB and added ALU oomph (the so-called forward-looking features), or due to the fact that people nowadays enjoy disabling every possible filtering optimization on the G7x (which may or may not be right, I don't really care either way) and the R580's more "skillful" manner of tackling AA and HSR? If it's really due to the DB focus and math ability, let's see tests without AA and AF between the two, in recent titles, at a moderate resolution, proving it. I have my doubts, and whilst the ALU upgrade was a no-brainer (cheap enough, easily pimped by marketing, etc.), the heavy DB focus was rather useless, IMHO.
 
The DB support was somewhat a by-product of the architecture carried down from R300. DB granularity actually got coarser from R520 to R580 (the branching batch size grew).
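
To make the granularity point concrete, here's a toy C++ sketch (my own model, nothing official): pixels run in SIMD batches, and if any pixel in a batch takes the expensive side of a branch, the whole batch pays for it. The commonly cited batch sizes are roughly 16 pixels for R520 and 48 for R580, with G7x far coarser still, but treat the numbers below as placeholders rather than vendor-confirmed figures.

```cpp
#include <cstdio>
#include <random>

// Toy model of dynamic-branching granularity: pixels run in SIMD batches, and
// if any pixel in a batch takes the expensive branch path, the whole batch
// pays the expensive cost. Finer batches skip more of the unneeded work.
double avg_cost_per_pixel(int batch_size, double p_hot,
                          double cheap = 10.0, double expensive = 100.0,
                          int num_pixels = 1 << 20) {
    std::mt19937 rng(42);
    std::bernoulli_distribution takes_branch(p_hot);
    double total = 0.0;
    for (int i = 0; i < num_pixels; i += batch_size) {
        bool any_hot = false;
        for (int j = 0; j < batch_size; ++j)
            any_hot = takes_branch(rng) || any_hot;
        total += (any_hot ? expensive : cheap) * batch_size;  // whole batch pays together
    }
    return total / num_pixels;
}

int main() {
    // 5% of pixels take a 100-instruction path, the rest a 10-instruction path.
    // Ideal per-pixel branching would average 0.05*100 + 0.95*10 = 14.5 instructions.
    const int batch_sizes[] = {1, 16, 48, 1024};  // placeholder granularities
    for (int batch : batch_sizes)
        std::printf("batch %4d: average cost ~%5.1f instructions per pixel\n",
                    batch, avg_cost_per_pixel(batch, 0.05));
    return 0;
}
```

With coarse batches the average cost climbs toward the "always take the expensive path" figure, which is why a DB feature with huge granularity buys almost nothing in practice.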
 
If R600 had launched alongside G80, then his argument would be something to think about. But the reality is that G80 is already, what, 8 1/2 months old compared to the newcomer R600. (This time difference could have been a lot bigger, IMO, if G80 had been released with its initial specs.)

7 months is quite a gap (launch of G80 to launch of R600) if you ask me, especially in the semiconductor industry, and for G80 to retain the crown of "fastest desktop GPU" even now somewhat makes your argument moot, whether or not R600 is elegant, clever, forward-thinking, etc.
 
If R600 had launched alongside G80, then his argument would be something to think about. But the reality is that G80 is already, what, 8 1/2 months old compared to the newcomer R600. (This time difference could have been a lot bigger, IMO, if G80 had been released with its initial specs.)

7 months is quite a gap (launch of G80 to launch of R600) if you ask me, especially in the semiconductor industry, and for G80 to retain the crown of "fastest desktop GPU" even now somewhat makes your argument moot, whether or not R600 is elegant, clever, forward-thinking, etc.
Well... that's not entirely fair, because I think the difference between NV having G80 silicon back from the fab and ATI/AMD having its first R600 silicon back was certainly not 7 months.
 
Well... that's not entirely fair, because I think the difference between NV having G80 silicon back from the fab and ATI/AMD having its first R600 silicon back was certainly not 7 months.

I'm quite sure early engineering samples of G80 were floating around in early 2006. Those were prototypes, though, while the G80 we see today was taped out not long before the actual launch date.

Anyway, you guys should know this by now, much more than me. Life certainly is NOT fair. :LOL:
 
Well... that's not entirely fair, because I think the difference between NV having G80 silicon back from the fab and ATI/AMD having its first R600 silicon back was certainly not 7 months.

Tell that to the shops/customers. For us it was a real-world 7+ months difference.
 
Tell that to the shops/customers. For us it was a real-world 7+ months difference.

Delays from the 512-bit bus design and the adoption of the 80nm process are both possible reasons why ATI missed its window before the launch of the G80.

G92 is a more lethal weapon as a mainstream part with performance similar to the 8800 GTS, which could attract the mass of 7900GT owners in the process of upgrading their hardware.

ATI needs to walk the walk as well as talk the talk.

Both Nvidia and ATI will adopt a strategy of covering the performance-mainstream and high-end segments with the same ASIC, varying only the number of chips. Although that is still some way off here in 2007, we can expect such configurations in 2008.

Period
 
Delays from the 512-bit bus design and the adoption of the 80nm process are both possible reasons why ATI missed its window before the launch of the G80.

G92 is a more lethal weapon as a mainstream part with performance similar to the 8800 GTS, which could attract the mass of 7900GT owners in the process of upgrading their hardware.

ATI needs to walk the walk as well as talk the talk.

Both Nvidia and ATI will adopt a strategy of covering the performance-mainstream and high-end segments with the same ASIC, varying only the number of chips. Although that is still some way off here in 2007, we can expect such configurations in 2008.

Period

Umm, I'm not a native English speaker, so I have to ask: what are you saying?
 
Both Nvidia and ATI will adopt a strategy of covering the performance-mainstream and high-end segments with the same ASIC, varying only the number of chips. Although that is still some way off here in 2007, we can expect such configurations in 2008.

Same ASIC or ASICs? The rumors about multi-chip/multi-die designs are quite numerous on both sides.

EDIT: duh, that's exactly what you're saying. So yes, I think the decision will come down to "whose single ASIC is better to begin with" as well as "how well the scaling works".
 
Well... that's not entirely fair, because I think the difference between NV having G80 silicon back from the fab and ATI/AMD having its first R600 silicon back was certainly not 7 months.

I feel this is really guesswork as to the actual time gap between the working prototypes of G80 and R600... and more importantly, the point is moot: what ultimately matters is when they actually reach the market.
 
Jawed must really be kidding.

1) G80 has better performance than R600.
2) G80 uses a lot less power than R600.
3) G80 does all this with less memory bandwidth.
4) G80 supports all DirectX 10 features, just like R600.
5) G80 delivers more consistent performance over a wide range of games, while R600 is very erratic about which games it performs well in.
6) G80 was launched many months earlier than R600.

With all these facts in place, sorry, I can't see how you would promote R600 that much. And believe me, I had three ATi cards before I switched to G80 a few weeks back.
 
It's more of a question of whether the design concepts introduced in R600 have more potential or are more clever.

R600 looks to be a testbed for a number of features, though whether they have some kind of long-term potential that can't be matched by an evolutionary descendant of G80 is uncertain.

R600 has more tightly integrated ALU resources with what was formerly fixed-function hardware, and it is more capable of maintaining multiple contexts.

That is not without a price.
When it comes to efficiency, G80 is hands down the winner for current apps.
R600 suffers from much more variability, and it is likely any future chips based on it will still have problems with efficiency.

R600's expanded programmability and more flexible hardware seems to have also forced the design to push the entire die to a common clock speed.
Nvidia's approach seems to have been a better choice for the process generation it was fabbed on.
The prospect of forcing all of a large chip to high clocks seems to be a losing proposition for both manufacturers, but G80 limited its ambitions to a smaller core area.
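
As a rough, back-of-the-envelope illustration of that last point (the clocks and ALU counts below are the commonly quoted launch figures, used here purely for illustration): counting MADs alone, R600 actually posts the higher theoretical peak, but it buys that by driving the entire die at 742 MHz, whereas G80 confines its 1.35 GHz hot clock to the ALU array and runs the rest of the chip at 575 MHz.

```cpp
#include <cstdio>

// Back-of-the-envelope peak MAD throughput. Figures are the commonly quoted
// launch numbers (8800 GTX: 128 scalar ALUs at 1.35 GHz in a separate shader
// clock domain; HD 2900 XT: 320 ALU lanes at the chip-wide 742 MHz) and are
// illustrative only.
double peak_mad_gflops(int alu_lanes, double clock_ghz) {
    return alu_lanes * 2.0 * clock_ghz;  // one MAD = 2 flops per lane per clock
}

int main() {
    std::printf("G80  (hot clock on the ALUs only):  ~%.0f GFLOPS\n", peak_mad_gflops(128, 1.35));
    std::printf("R600 (one clock for the whole die): ~%.0f GFLOPS\n", peak_mad_gflops(320, 0.742));
    return 0;
}
```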

At present, I'd say ATI needs multichip more.
The implementation of R600 on a leaky process and high clocks likely worsened its power draw.

As technically clever as R600 is, the current silicon does not show an awareness of power and process variation concerns.
Those are far more likely to be limiting factors in the future.

The technology has potential, particularly if the memory controller proves adept at handling multichip configurations, but performance inconsistency and concerns about power would have to be addressed.

G80 has a number of issues that will likely appear in future software, but in hindsight it was perhaps the better design choice to keep the focus on performance in current and near-term apps.
 
Jawed seems to have an infatuation with overengineered out-of-spec features as his metric for measuring the better architecture. I find this ironic, because for years, Nvidia was criticized for putting forward-looking features in their architectures before the market needed them, features which seriously gimped their performance and gave no real benefit in games for years.
I agree that Jawed has totally whack reasoning in this thread, but I don't agree with you on this point about NVidia. I think it's even weirder to say they were criticized for wasting die space (and thus performance) on these features. Personally, I may have underrated the value of NV4x's SM3.0 support from the perspective of what can be done with it, but per clock, per mm2, NV4x never looked wasteful. ATI was targeting an earlier release for its T&L-equipped Radeon, too.
Nvidia made the first consumer card (besides 3dlabs' DCC stuff) with fixed-function T&L hardware. It didn't help end users one iota, as most games were inlining vertex transforms as C preprocessor macros rather than making library calls.
T&L helped them tremendously through the lifetime of the NV1x architecture. I'm pretty sure games were using it within a year. Don't forget 3DMark, either.
Then there was the NV30 disaster: designing a chip to support shader features that went beyond PS2.0 in some respects (PS2.0a).
That wasn't what cost them, though. NV30's feature advantages were what, longer programs, unlimited dependent texturing (which it sucked at even a couple of levels deep) and predication? Those aren't expensive features. NVidia just messed up the architecture for whatever reason. NV43 blew away NV30 at the same size and bus width.
NV4x introduced VTF, PS3.0 DB, and other stuff that games really didn't need, and for which a high-performance implementation would consume massive die space (R5xx).
Exactly. NV4x didn't use much die space to implement those features. They ran like crap and were just checkbox items, because NVidia didn't want to devote much die space to an unused feature. The somewhat costly features it did implement were FP blending and FP filtering, and they were very valuable for sales, since PC devs were after "correctness" rather than visual effect when it came to HDR.
The NV3x had double-Z and z-scissor, and their practical consequence was a benefit in only one game: Doom.
Only Doom? Maybe for NV3x that didn't matter, since the game was released too late, but for NV4x the results from that single game probably cancelled out all the victories R4xx had in most other games (which were not by as big a margin, but were numerous). Riddick's and Doom's stencil shadows were responsible for nearly all of ATI's reputation for poor OpenGL performance.
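
For context on why those titles swing so hard: the stencil shadow volumes in Doom 3 and Riddick are drawn in z/stencil-only passes with colour writes disabled, which is exactly the workload NV4x's double-speed Z/stencil path accelerates. Below is a minimal OpenGL sketch of the depth-fail ("Carmack's reverse") pass, with draw_volumes() as a hypothetical stand-in for whatever the engine uses to submit the shadow volume geometry.

```cpp
// Depth-fail ("Carmack's reverse") stencil pass: z/stencil-only work with
// colour writes off, assuming scene depth has already been laid down.
// draw_volumes is a placeholder callback that submits the shadow volume geometry.
#include <GL/gl.h>

void shadow_volume_stencil_pass(void (*draw_volumes)(void)) {
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no colour writes
    glDepthMask(GL_FALSE);                               // don't disturb scene depth
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glEnable(GL_CULL_FACE);

    glCullFace(GL_FRONT);                    // draw back faces of the volumes...
    glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);  // ...increment where the depth test fails
    draw_volumes();

    glCullFace(GL_BACK);                     // draw front faces...
    glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);  // ...decrement where the depth test fails
    draw_volumes();

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);     // restore state
    glDepthMask(GL_TRUE);
}
```

Two full-screen-ish passes of nothing but Z and stencil output per light is why a chip that can double-pump that path pulls so far ahead in these engines.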

I don't think NVidia sacrificed much of their transistor budget for unused features at all. They made very smart decisions about where transistors should be used, and placed a huge priority on performance, particularly where it impacts sales the most. Aside from R3xx/R4xx, I'd say ATI is more guilty of implementing features that cost die space and thus performance in most games at usual settings. Free 32-bit in Rage128, EMBM in R100 (this is a huge cost), Truform and PS1.4 in R200, DB in R5xx, and god knows what in R600.

As for Jawed's contention of a "blunt tool" design for G80, I agree with you as to how senseless that claim is. Even more ridiculous is the claim that G80 is unbalanced. It's a hell of a lot more balanced than R600 in almost every respect, especially when you consider the marginal cost of the features it excels at.
 
Nobody cares about G80 with future software. It'll be old news by then (say 2-3 years from now). Even next year it will be meaningless, as will any competing offerings. You don't buy cards today for games you'll be running in two years; that's ridiculous (running them at 3 fps on card A, or 5 fps if card B from the current generation is much better at them thanks to its "future-proof design" ;)). What matters is right now and the next 9-12 months, max. At least that's how buyers perceive it.
 
You don't have a point here. I bought a 7900GTX, thinking it was the best at the time. A little later, a cousin of mine bought a 1900XTX, and guess what: we did a side-by-side comparison, and the IQ of my 7900GTX was shitty compared to his 1900XTX. I didn't have to be an IQ freak, as you implied, to see the gigantic difference. I changed various settings to get the same visual quality, and that brought big drops in performance.
You bought the 7900GTX, didn't you? That's all NVidia wants. Also, would you have known that it was a problem without doing a side by side comparison? Did you play games at those settings you mentioned before you saw this comparison? (BTW, what was the IQ problem? Texture shimmering?)

Nearly everyone else would never sell their 7900GTX, because they wouldn't have this comparison. Few complained about NV4x quality in comparison to R4xx, so NVidia figured everything was just fine. This is not like the drastic mipmap fudging and blurring we saw with NV31 and NV34. Looking at sales and margins, NVidia judged it correctly.

If you or Jawed want to complain about default IQ, fine. I'll agree. However, a comparison with disabled optimizations is useless, because you're running G71 at a setting where the perf/IQ tradeoff is worst. Heck, I wouldn't even run R580 that way. Instead of 4xAA with computerbase.de's settings, you could probably run the GTX at 8xS with default IQ at the same framerate, or bump the resolution higher. Nobody with a GTX would play games at the settings in that review.

In terms of evaluating the hardware, there's another flaw in that methodology. NVidia probably doesn't have to disable all optimizations to improve IQ substantially. They give users the option, but since very few customers use it, they don't bother finding out which settings matter most.

Remember, also, that I was talking more about the lower end parts. R580 has 80% more die space than the 7900GTX. Compare the performance of the 7600GT and the X1600XT. It's a complete asswhooping by a smaller chip. Would the average gamer be willing to sacrifice around half his framerate for the better IQ of the X1600XT?
ATI could have fooled its customers by setting the default rendering quality of the 1900XTX to the same as the 7900GTX's, getting a performance advantage and more sales.
Actually, they couldn't. ATI designed R5xx to have low performance impact with high quality. R580 can usually do math while the TMUs spend more cycles to improve quality. G7x can't do that. Reducing the quality wouldn't make ATI's cards run much faster at all.
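
A toy cost model of that claim (the cycle counts are invented purely for illustration): in a decoupled design the ALUs keep working while the TMUs spend extra cycles on higher-quality filtering, so a batch costs roughly the larger of the two, whereas a pipeline that ties ALU and TMU together pays the sum.

```cpp
#include <algorithm>
#include <cstdio>

// Toy per-batch cost model. "Decoupled" overlaps ALU and texture work (cost is
// the larger of the two); "coupled" serialises the extra filtering cycles.
// All cycle counts below are made-up illustrations, not measured values.
int coupled_cost(int alu_cycles, int tex_cycles)   { return alu_cycles + tex_cycles; }
int decoupled_cost(int alu_cycles, int tex_cycles) { return std::max(alu_cycles, tex_cycles); }

int main() {
    const int alu = 12;       // independent math in the shader
    const int tex_fast = 4;   // "optimised" filtering
    const int tex_hq = 10;    // full trilinear + AF costs the TMU extra cycles
    std::printf("coupled pipeline:   HQ filtering costs %d extra cycles per batch\n",
                coupled_cost(alu, tex_hq) - coupled_cost(alu, tex_fast));
    std::printf("decoupled pipeline: HQ filtering costs %d extra cycles per batch\n",
                decoupled_cost(alu, tex_hq) - decoupled_cost(alu, tex_fast));
    return 0;
}
```

As long as the shader has enough independent math, the extra filtering cycles hide behind the ALU work in the decoupled case, which is the gist of the argument that dropping quality wouldn't have bought R580 much speed.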
 
A point that I think Jawed may have been trying to make a while back is that R580 did not significantly outperform G7x when it was launched, but that on the sort of games coming out now it is well ahead of G7x.
He's using skewed benchmarks to prove the latter. No other website on the internet shows G7x in as bad a light as computerbase.de. Neither R5xx nor G7x were designed to run with all optimizations disabled. Why the heck should we evaluate the hardware in that way?

We barely heard a peep out of anyone regarding NVidia's image quality during the NV4x era, and G7x looks the same. If someone wants to complain that G7x default image quality isn't up to snuff, fine. That's their opinion, and I agree. However, claiming that it's so distracting that it must be rectified by sacrificing 50% of the performance is something entirely different.

If reviewers can't see a problem big enough to rant about, then not only is it unlikely for most customers to see it, but NVidia is fully justified in skipping IQ improvements from NV4x to G7x.
 