That's probably from the indoor parts of the game where Nvidia cards do fine. The outdoor sections are where they get slaughtered.
From Tech Report's 8800 review:
Seems that G71 is beating R580. I too thought it was the other way around, but this seems to say otherwise.
Maybe at 1024x768, but at 1600x1200, the X1900 XTX wins in BF2142, DMOMAM, Doom 3, Oblivion and Prey, according to those benchmarks. That's 5 out of 7 games.
TomsHardware VGAcharts is a good resource for comparing video cards over different generations. The latest, 2007 version, obviously has the most recent games (7 games + 3DM06). Whether this accurately describes what people actually play is a different question - for some reason the hardware sites don't want to test WoW and its peers. If you go back in time to the earlier charts you get a few more comparison points.
http://www23.tomshardware.com/graphics_2007.html
The 7900GTX is faster than the X1900XTX in five out of seven 2007 games, one of two wins for the X1900XTX incidentally being Oblivion.
Does anyone still benchmark the X1800 XT? I own one and would really like to see how it's still holding up.
Entropy: THW-chart results are often different from other reviews. Hardware.fr or computerbase.de still tests GF7 and X1 with many recent games:
http://www.computerbase.de/artikel/...hd_3870_rv670/28/#abschnitt_performancerating
I said slower per-clock. R520 was around 20% faster than G70 w/AA at launch (in Direct3D), but was clocked 45% higher (430 MHz vs. 625 MHz). That's why Nvidia were able to quickly take back the performance crown with the 7800 GTX 512, and had ATI not introduced the R580, the X1800 would have had to face off against the 7900 GTX (still only 650 MHz vs. 625 for the X1800).
Yes, on a smaller process. When Nvidia migrated to 90 nm, they were able to match ATI's 'narrow and fast' architecture in terms of clock speed, while keeping their huge per-clock advantage.
It doesn't matter what the cards were clocked at. It's whether the cards were able to meet their design requirements. G70 had a completely different design philosophy than R520. Wide and slow compared to narrow and fast. Running within the design parameters of each chip, R520 turned out to be the faster design.
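To put rough numbers on the per-clock point, here is a quick back-of-the-envelope sketch. The ~20% lead and the 430/625 MHz clocks are the figures quoted above; the 24 vs. 16 fragment-pipeline counts are the commonly cited launch specs for the 7800 GTX and X1800 XT, added here as an assumption rather than taken from the thread.

```python
# Back-of-the-envelope sketch of the per-clock argument above.
# The ~20% lead and the 430/625 MHz clocks come from the post; the
# 24 vs. 16 fragment-pipeline counts are the commonly cited launch
# specs for the 7800 GTX (G70) and X1800 XT (R520), assumed here.

g70_clock, r520_clock = 430.0, 625.0        # core clocks in MHz
overall_lead = 1.20                         # R520 ~20% faster overall w/ AA

clock_ratio = r520_clock / g70_clock        # ~1.45, the "45% higher" clock
per_clock_ratio = overall_lead / clock_ratio
print(f"R520 per-clock throughput vs. G70: {per_clock_ratio:.2f}x")  # ~0.83

# "Wide and slow" vs. "narrow and fast": pipes * clock comes out nearly even.
g70_pipes, r520_pipes = 24, 16              # fragment pipelines (assumed specs)
print("G70  pipes * MHz:", g70_pipes * g70_clock)    # 10320.0
print("R520 pipes * MHz:", r520_pipes * r520_clock)  # 10000.0
```

Normalised that way, R520's overall lead turns into roughly a 17% per-clock deficit, which is the 'wide and slow versus narrow and fast' trade-off in a nutshell.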
Nevertheless, it still overshadowed R520. And again, my point was simply that even with a 125 MHz clock deficit, Nvidia were still able to beat the R520.
Yeah the 7800 GTX 512 was great for the approximately 1000 people that were actually able to buy one before Nvidia ran out of hand picked GPUs. And yeah it was great that people were spending close to and sometimes over 1k USD for it.
To measure the amount of work ATI had to do to catch up with Nvidia, without having a process advantage.
True, 7900 GTX was generally better than R520, but then again R580 was definitely better than G70. Why are you comparing it to the past parts?
True, and R580 was meant to launch in 2006 according to Dave, but we're still going to view cards in light of their actual release timeframe, and in light of what the competition has to offer. Was RV670 a response to Nvidia's G92 (in the form of the 8800 GT)? No. But that's how we're going to view it.
Both G71 and R580 were on the internal roadmaps before G70 and R520 even launched. It isn't like they were a direct response to R520 and G70 respectively.
You really think the 580 was a work in progress to catch up to Nvidia? I would say the 580 was already designed and it just came out that way. Chips are designed years in advance....
No, I don't, but the performance difference is still there. Read: the amount of work, in hindsight, that they had to do/that had to be done. Given the R520 had double the number of transistors of the previous generation, for maybe 20 - 30% more performance per-clock, I think ATI knew they had to do a lot more to be competitive, longer term (and the R520 was originally a spring part anyway).
R580 did launch in 2006. The end of January, to be precise. I should know, I bought an XTX the day before launch and a CF master card a couple months later.
I'm not disagreeing with any of that.
While it's true that RV670 and G92 are direct competitors, it is not true that either is a reaction to the other from an engineering standpoint. Nv made a decision last-minute to increase the SP count (and possibly the clocks) as a reaction to RV670's unexpected "almost full R600" specs, but G92 already had those units so it's not like NV redesigned it at the last minute. Enabling/disabling units for SKUs that share the same GPU is pretty common, and certainly not an engineering decision.
Whoops. I meant 2005.
http://forum.beyond3d.com/showpost.php?p=622778&postcount=813
I believe there are some other posts on R580's delay as well.
Computerbase.de really gimps G7x by disabling all texturing optimizations (this affects R580 much less), sometimes lopping off more than half the framerate.
G70's AF was undersampled. Many driver versions caused heavy undersampling. So, AF 16x wasn't really AF 16x.
Let's imagine that ATi would use only 2 per-pixel samples for MSAA 4x to compensate for R600's slow MSAA performance. Would it be fair to compare performance results of this "MSAA 4x" to G80's true MSAA 4x performance?
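To make the "wasn't really AF 16x" point concrete, here is a toy sketch of undersampled anisotropic filtering: the number of texture probes is supposed to track the anisotropy of the pixel footprint (capped at the requested AF level), and a driver that quietly takes fewer probes is faster but no longer delivers the level it advertises. The one-probe-per-unit-of-anisotropy cost model and the 0.5 "optimisation" factor are simplifications for illustration, not measured behaviour of any real driver.

```python
import math

def af_probes(anisotropy_ratio, max_af=16, undersample_factor=1.0):
    """Bilinear probes a (simplified) AF unit would take for one pixel.
    undersample_factor < 1.0 models a driver quietly cutting samples."""
    ideal = min(math.ceil(anisotropy_ratio), max_af)
    return max(1, int(ideal * undersample_factor))

# Honest AF 16x vs. an "optimised" mode that halves the probe count.
for ratio in (2, 8, 16):
    honest = af_probes(ratio)
    cut = af_probes(ratio, undersample_factor=0.5)
    print(f"anisotropy {ratio:2d}:1 -> honest {honest:2d} probes, 'optimised' {cut:2d} probes")
```

The MSAA analogy in the post works the same way: halving the per-pixel samples roughly halves the work, but the result is no longer the mode you claimed to benchmark.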
I understand that there's some shimmering with G7x, but the (relatively) minor quality improvement is not worth that kind of hit. People rarely play with optimizations disabled. Moreover, it's possible that ATI has spent more time making the driver run the HW optimally in HQ mode whereas NVidia doesn't care.
In other words, it's not representative of how people use their cards. Forcing cards to produce as identical output as possible is stupid. One should be optimizing the perf/IQ tradeoff separately on each card and judge from there.
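As a concrete illustration of optimising the perf/IQ tradeoff separately on each card (the HardOCP-style approach that comes up again further down), a minimal sketch: rank candidate settings by an image quality score and pick the best one that still meets a target frame rate on that particular card. Every setting, score and frame rate below is hypothetical.

```python
TARGET_FPS = 60

# (label, iq_score, fps_card_a, fps_card_b) -- every number is hypothetical
SETTINGS = [
    ("1600x1200, 4xAA, 16xAF, HQ filtering",      10, 48, 62),
    ("1600x1200, 4xAA, 16xAF, optimisations on",   9, 61, 63),
    ("1600x1200, 2xAA, 8xAF, optimisations on",    7, 72, 70),
    ("1280x1024, 4xAA, 16xAF, HQ filtering",       6, 66, 74),
]

def best_setting(fps_index):
    """Highest-IQ setting that still hits the target on the given card."""
    playable = [s for s in SETTINGS if s[fps_index] >= TARGET_FPS]
    return max(playable, key=lambda s: s[1]) if playable else None

print("card A:", best_setting(2)[0])   # keeps the optimisations, drops HQ
print("card B:", best_setting(3)[0])   # small HQ hit, so it keeps HQ filtering
```

With numbers like these, the card that loses little from HQ filtering keeps it, while the other is better served leaving the optimisations on, which is the whole argument against forcing identical settings on both.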
I ran my past G7x overclocked constantly in order to compensate for the high quality/no optimisations performance drop, and while I'm probably amongst the minority of users that are so finicky with stuff like that, it annoys me like hell.
Finally, since I had my humble little share in the past protesting against such side-effects, I fear that G7x got way too much criticism for its driver default settings. Radeons also have and had AF-related optimisations enabled by default, and there's no chance in hell that someone can convince me that no side-effects appear due to those either.
In the end, when comparing G7x against R5x0, it is unfair to compare one side with optimisations disabled and the other with them enabled, based on the degree of visibility of side-effects. It's either all on or all off in my book.
If a reviewer today compared R6x0 against G8x with AF optimisations disabled, I'm afraid it would put the former in a far worse light when it comes to pure AF performance. IMHLO there should be no optimisations at all enabled by default, and I've had that standpoint for years now. If the user wants to have dancing meanders all over his screen and goes all the way down to point sampling, I couldn't care less; but when an application calls for trilinear, it should receive trilinear. There are no ifs or buts about that one.
Joshua Luna said: One thing, looking back, is that ATI seems to have misjudged the part, which can be seen in the memory bandwidth. All indications are that ATI expected R600 to require significant amounts of bandwidth--so what gives? Are there cases where it finds itself being very useful? (Appears to be corner cases at best.) Or is it truly "broke" in some ways, resulting in sub-par performance while retaining the memory system sized for their original target?
According to the computerbase.de numbers, overclocking would barely make a dent in the difference. They disable all optimizations, and framerate is often half that of other sites. Is that really worth it to you?
If the image quality difference was really that big, you'd see ATI run its parts with HQ filtering by default since R580 doesn't take as much of a perf hit as G71. Then reviewers would notice right away. However, they went the same route as NVidia since they obviously felt neither reviewers nor users in general would appreciate the IQ difference.
Anyway, regardless of this incompetence, if you want to evaluate architectural decisions, you have to look at how reviewers test the cards. NVidia saw their good filtering and AF IQ go almost unnoticed with GF3 (though even considering IQ the AF hit was insane), so they went the other way entirely with NV3x and had really crappy default settings and were burned for it in reviews. With NV4x/G7x they found the sweet spot.
I'm not sure why they upped the default texture IQ for G80, but maybe it was to emphasize the IQ improvement (angle independent AF and 16xCSAA being the other big features). The decoupled texture units would also drastically reduce the hit.
Well, computerbase.de is fair in that sense. I'm just saying that 50% perf hits are not tolerated lightly by video card buyers. They're being disingenuous about their testing methodology by not showing how fast the cards are with optimizations enabled.
From your experience with G7x, how would you subjectively rate the IQ improvement of disabling all filtering optimizations? Is it:
A) As important as going from noAA to 4xAA?
B) As important as going from noAF to 16xAF?
I can't imagine that you'd say yes to either of those, and they usually entail a lower perf hit. You look at a site like HardOCP that tries to choose the best resolution, AF, and AA settings for a given framerate, and they skimp on the latter two to get rather small improvements in framerate. You can bet your ass that if they included filtering optimizations as part of the parameter space, they'd choose to keep them enabled all the time.
Point sampling is glaringly obvious, and accordingly no hardware runs faster with filtering disabled for this reason. Other things are not so obvious. You'll have "dancing meanders" many places anyway due to shader aliasing and inadequate AA, so a few more due to optimized filtering isn't a deal-breaker to me. I think "brilinear" is a great optimization as long as it isn't taken too far.
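For anyone who hasn't run into the term, a rough sketch of what "brilinear" does: full trilinear blends between adjacent mip levels across the whole fractional-LOD range, while a brilinear-style optimisation only blends inside a narrow band around the mip transition and falls back to plain bilinear (the cheaper path) everywhere else. The 0.25 band width below is purely illustrative; "taken too far" would mean shrinking that band towards zero.

```python
def trilinear_weight(lod_frac):
    # Full trilinear: the blend weight between mip N and mip N+1
    # tracks the fractional LOD over the whole [0, 1] range.
    return lod_frac

def brilinear_weight(lod_frac, band=0.25):
    # Brilinear-style: pure mip N below the band, pure mip N+1 above it,
    # with the linear blend confined to a narrow transition band.
    lo, hi = 0.5 - band / 2, 0.5 + band / 2
    if lod_frac <= lo:
        return 0.0
    if lod_frac >= hi:
        return 1.0
    return (lod_frac - lo) / (hi - lo)

for f in (0.0, 0.2, 0.45, 0.5, 0.55, 0.8, 1.0):
    print(f"lod_frac {f:.2f}: trilinear {trilinear_weight(f):.2f}, brilinear {brilinear_weight(f):.2f}")
```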