Isn't it universally accepted that the GTX480/GTX580 is 15% faster than the HD5870/HD6970 overall?
10% might be more accurate.
I'm not basing this on one tech site. I'm basing this on 10+ reviews which average out to those numbers.
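As an aside on methodology: averaging "X% faster" figures across reviews is usually done with a geometric mean of per-review performance ratios, since it treats gains and losses symmetrically. A minimal Python sketch; the review names and FPS numbers below are invented purely for illustration.

from math import prod

# Hypothetical per-review average FPS, invented for illustration:
# (review, GTX 580 avg FPS, HD 6970 avg FPS)
reviews = [
    ("site A", 62.0, 55.0),
    ("site B", 48.0, 44.5),
    ("site C", 71.0, 60.0),
]

# Per-review speedup ratio of one card over the other.
ratios = [nv / amd for _, nv, amd in reviews]

# Geometric mean, the usual way to combine performance ratios:
# it weights "10% faster" and "10% slower" symmetrically.
geomean = prod(ratios) ** (1 / len(ratios))
print(f"Average lead: {100 * (geomean - 1):.1f}%")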
==============================================================
TimeDemo Play Started , (Total Frames: 2000, Recorded Time: 111.86s)
!TimeDemo Run 0 Finished.
Play Time: 67.03s, Average FPS: 29.84
Min FPS: 25.88 at frame 140, Max FPS: 46.98 at frame 1013
Average Tri/Sec: -23912640, Tri/Frame: -801473
Recorded/Played Tris ratio: -1.14
!TimeDemo Run 1 Finished.
Play Time: 53.63s, Average FPS: 37.29
Min FPS: 25.88 at frame 140, Max FPS: 47.56 at frame 1004
Average Tri/Sec: -29422840, Tri/Frame: -788982
Recorded/Played Tris ratio: -1.16
!TimeDemo Run 2 Finished.
Play Time: 53.61s, Average FPS: 37.31
Min FPS: 25.88 at frame 140, Max FPS: 47.56 at frame 1004
Average Tri/Sec: -29446210, Tri/Frame: -789316
Recorded/Played Tris ratio: -1.16
!TimeDemo Run 3 Finished.
Play Time: 53.60s, Average FPS: 37.32
Min FPS: 25.88 at frame 140, Max FPS: 47.56 at frame 1004
Average Tri/Sec: -29429798, Tri/Frame: -788670
Recorded/Played Tris ratio: -1.16
TimeDemo Play Ended, (4 Runs Performed)
==============================================================
==============================================================
TimeDemo Play Started , (Total Frames: 2000, Recorded Time: 111.86s)
!TimeDemo Run 0 Finished.
Play Time: 62.38s, Average FPS: 32.06
Min FPS: 27.54 at frame 164, Max FPS: 61.45 at frame 1635
Average Tri/Sec: -25710632, Tri/Frame: -801969
Recorded/Played Tris ratio: -1.14
!TimeDemo Run 1 Finished.
Play Time: 43.17s, Average FPS: 46.33
Min FPS: 27.54 at frame 164, Max FPS: 63.48 at frame 1637
Average Tri/Sec: -36595932, Tri/Frame: -789850
Recorded/Played Tris ratio: -1.16
!TimeDemo Run 2 Finished.
Play Time: 42.96s, Average FPS: 46.56
Min FPS: 27.54 at frame 164, Max FPS: 64.87 at frame 1659
Average Tri/Sec: -36749028, Tri/Frame: -789318
Recorded/Played Tris ratio: -1.16
!TimeDemo Run 3 Finished.
Play Time: 42.92s, Average FPS: 46.60
Min FPS: 27.54 at frame 164, Max FPS: 64.87 at frame 1659
Average Tri/Sec: -36789168, Tri/Frame: -789525
Recorded/Played Tris ratio: -1.16
TimeDemo Play Ended, (4 Runs Performed)
==============================================================
==============================================================
TimeDemo Play Started , (Total Frames: 2000, Recorded Time: 111.86s)
!TimeDemo Run 0 Finished.
Play Time: 75.97s, Average FPS: 26.33
Min FPS: 19.80 at frame 1945, Max FPS: 39.35 at frame 1017
Average Tri/Sec: -21089428, Tri/Frame: -801033
Recorded/Played Tris ratio: -1.14
!TimeDemo Run 1 Finished.
Play Time: 64.96s, Average FPS: 30.79
Min FPS: 19.80 at frame 1945, Max FPS: 43.95 at frame 76
Average Tri/Sec: -24299354, Tri/Frame: -789302
Recorded/Played Tris ratio: -1.16
!TimeDemo Run 2 Finished.
Play Time: 66.97s, Average FPS: 29.86
Min FPS: 19.80 at frame 1945, Max FPS: 43.95 at frame 76
Average Tri/Sec: -23558378, Tri/Frame: -788842
Recorded/Played Tris ratio: -1.16
!TimeDemo Run 3 Finished.
Play Time: 65.84s, Average FPS: 30.37
Min FPS: 19.25 at frame 1949, Max FPS: 44.46 at frame 79
Average Tri/Sec: -23932526, Tri/Frame: -787903
Recorded/Played Tris ratio: -1.16
TimeDemo Play Ended, (4 Runs Performed)
==============================================================
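Pulling the numbers out of logs like these by hand is tedious. Below is a minimal Python sketch, assuming only the log format visible above, that extracts each run's average FPS and treats Run 0 as a warm-up pass (it is clearly slower than the later runs in all three blocks, presumably due to first-pass caching).

import re

def summarize_timedemo(log_text):
    # Pull the per-run "Average FPS" values out of a TimeDemo log block.
    fps = [float(x) for x in re.findall(r"Average FPS:\s*([\d.]+)", log_text)]
    if len(fps) < 2:
        raise ValueError("expected a warm-up run plus at least one timed run")
    warmup, timed = fps[0], fps[1:]
    return warmup, sum(timed) / len(timed)

# Usage: paste one block between the ==== separators into log_text.
# warmup, steady = summarize_timedemo(log_text)
# print(f"Warm-up: {warmup:.2f} FPS, steady-state average: {steady:.2f} FPS")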
Isn't it universally accepted that the GTX480/GTX580 is 15% faster than the HD5870/HD6970 overall?
Depends: if you game at low resolutions with up to 4x AA, then the GTX480/GTX580 may be 15% faster or more than the HD5870/HD6970. Common sense, though: why would you need such cards then? If you game with them as intended, at 1920/2560+ with 8x AA or so, then the difference is negligible, up to 5%. If you go for even higher resolutions, then 6970 CF or even 6950 CF spanks GTX 580 SLI.
I still think the inconsistency in the HD6970 performance numbers makes it less favorable than the GTX570. Unless it's due to immature drivers, of course...
Depends: if you game at low resolutions with up to 4x AA, then the GTX480/GTX580 may be 15% faster or more than the HD5870/HD6970. Common sense, though: why would you need such cards then?
It may be common sense to ask that question, but it raises questions over why frame-rate minima, or "CPU bottlenecks", or "driver bottlenecks", or "geometry bottlenecks", are making NVidia better at lower pixel counts. Those bottlenecks, whatever they are, are real and worth exploring.
Of course no one actually explores those - except for Mintmaster's recent Dirt2 regression. Which, for some reason, Dave Baumann has ignored.
Depends: if you game at low resolutions with up to 4x AA, then the GTX480/GTX580 may be 15% faster or more than the HD5870/HD6970. Common sense, though: why would you need such cards then? If you game with them as intended, at 1920/2560+ with 8x AA or so, then the difference is negligible, up to 5%.
Unfortunately, it is not as rosy as that: only at 2560x1600 with 8x AA does the HD 6970 catch up to the GTX 580; at anything less than that, the GTX 580 is about 15% faster.
I don't know if 150 FPS or 200 FPS is such a victory. Maybe it's really just more driver overhead with ATI's architecture. In 3DMark or other pure benchmarks NVidia doesn't have that advantage at lower pixel counts.
It's framerate minima that spoil gameplay.
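On minima versus averages: a couple of slow frames barely move the average but are exactly what you feel as stutter. A tiny Python sketch with an invented frame-time trace, purely for illustration:

# Invented frame times in milliseconds; two 50+ ms spikes among ~11 ms frames.
frame_times_ms = [10, 11, 10, 12, 55, 11, 10, 54, 12, 11]

avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
worst_frame_fps = 1000 / max(frame_times_ms)

print(f"Average: {avg_fps:.0f} FPS")              # looks healthy (~51 FPS)
print(f"Worst frame: {worst_frame_fps:.0f} FPS")  # the stutter you feel (~18 FPS)
# A card with a slightly lower average but no spikes would feel smoother.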
Edit: Maybe if sites would also bench with an AMD Phenom setup and not just overclocked Intel Core i7 CPUs.
I don't know what you're alluding to in this case, though there are cases where Intel processors really screw with game performance if HT is on:
If you check out, for example, the Tom's Hardware review http://www.tomshardware.com/reviews/radeon-hd-6970-radeon-hd-6950-cayman,2818.html then you realize that the minimum framerates of Cayman (and other AMD) cards are the same as what NVidia has to offer.
The issue is not "min framerate" but high resolution-independent frame time, although I'm not sure it's real.
Look at Lost Planet 2 and HAWX 2, for instance; AMD's performance is just abysmal here, and Cayman didn't improve that much over Cypress, meaning Cypress' bottleneck in these games hasn't changed.
HAWX 2 is sort of academic, though, as 2560x1600 with 4x AA runs at 60 fps on the 6950. It makes a huge difference in averages, though, because the 580 is twice as fast. Some insight can be found at Tom's Hardware:
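That averaging effect is worth spelling out: a single title where one card is twice as fast can manufacture a double-digit "overall" lead on its own. A quick Python illustration with invented per-game numbers:

# Invented per-game FPS, purely to illustrate how one outlier skews averages.
games = {
    "Game A": (60, 60),    # (card 1, card 2): tied
    "Game B": (55, 55),    # tied
    "Game C": (70, 70),    # tied
    "Outlier": (120, 60),  # one title where card 1 is twice as fast
}

ratios = [one / two for one, two in games.values()]
mean_ratio = sum(ratios) / len(ratios)
print(f"Overall lead from the average of ratios: +{100 * (mean_ratio - 1):.0f}%")
# -> +25% "overall", even though the cards are tied in three of four games.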
Of course no one actually explores those - except for Mintmaster's recent Dirt2 regression. Which, for some reason, Dave Baumann has ignored.
To be fair, Dave did pop in to say that it was front-end limited. I just wish we knew whether he meant command buffer, driver, or geometry, as they're all pretty much at the front of the pipeline.
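For intuition on how any of those front-end limits would show up mainly at low pixel counts, here is a toy model in Python. All costs are invented; it only assumes that front-end work (command buffer, driver, geometry) is roughly resolution-independent while pixel work scales with resolution.

# Toy model: frame time is the larger of a fixed front-end cost and a
# pixel cost that scales with resolution. All numbers are invented.
def fps(pixels, front_end_ms, ns_per_pixel):
    pixel_ms = pixels * ns_per_pixel / 1e6
    return 1000 / max(front_end_ms, pixel_ms)

resolutions = [("1024x768", 1024 * 768),
               ("1280x1024", 1280 * 1024),
               ("2560x1600", 2560 * 1600)]

for name, px in resolutions:
    a = fps(px, front_end_ms=4.0, ns_per_pixel=4.5)  # lighter front end
    b = fps(px, front_end_ms=6.0, ns_per_pixel=4.0)  # heavier front end
    print(f"{name}: {a:3.0f} vs {b:3.0f} FPS ({100 * (a / b - 1):+.0f}%)")

# Output trend: a big lead (+50%) for the lighter front end while both are
# front-end bound at low resolution, shrinking to rough parity at 1280x1024
# and a small deficit once both become pixel bound at 2560x1600.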
It may be common sense to ask that question, but it raises questions over why frame-rate minima, or "CPU bottlenecks", or "driver bottlenecks", or "geometry bottlenecks", are making NVidia better at lower pixel counts. Those bottlenecks, whatever they are, are real and worth exploring.
I was thinking about it the other day, and my guess is that NVidia has an undiscriminating brute-force approach ("need it or not, here comes 200 FPS on your 1024x.. LCD!"), while AMD takes a more focused approach in both hardware and drivers, i.e. high-end GPUs like Cayman are specifically tweaked with high resolutions in mind. The higher the resolution, the better they look against the competition. See the example of 6850 CF beating the more than twice as expensive GTX 580 SLI at ultra-high resolutions, or the 6970 being roughly equal to the GTX 580 at 2560x with 8x AA. If you have a small screen, then it doesn't really matter whether FPS is 100 or 150; that's probably the line of thinking in AMD's camp (I'm not claiming to know what's in their heads, it just seems that way).