AMD: R7xx Speculation

This is untrue; anyone who is even remotely serious about this goes beyond the inbuilt test, but if what the inbuilt test reports correlates properly with what you're getting in-game, then you sure as hell use it, due to ease of use and comparability.
Your response, while well taken, seems like a very sweeping statement as well. What about those who may not be quite as serious? Who are they? Might they even be in the majority? How on earth is a reader supposed to determine who does a more thorough job and who doesn't, if the difference isn't highlighted in the review?

The main benefit when the benchmark cheating was brought to the public eye the last time around was that it raised awareness and improved average review standards. As the issue has slipped out of focus, the tendency, to my eye, has on average been to slide back a bit in terms of quality and breadth of testing. It would be a service to all involved if the people who take their benchmarking seriously actually bothered to point this out, both in terms of methodology and the reasoning behind it. This will help uphold a decent review standard, it will help consumers maintain some awareness and healthy scepticism, it might help keep the manufacturers somewhat honest... and it will actually give those benchmarkers/reviewers some credit where credit is due.
 
Only this time the progress actually exists, and his statements were made without an inkling of direct insight; they were made just because.

What is a real-game test? Why has this become some magical thing? We all do real-game tests; it's not some bloody breakthrough. We probably all have some spots we like to test with new cards/new drivers.

The thing that is bothersome is the blanket assumption that all reviewers are basically idiots who don't check beyond an inbuilt test (if one exists). This is untrue; anyone who is even remotely serious about this goes beyond the inbuilt test, but if what the inbuilt test reports correlates properly with what you're getting in-game, then you sure as hell use it, due to ease of use and comparability.

It's the same story with timedemos/walkthroughs/whatever: you check whether what you're getting with an "automated" test correlates with what you're getting in game (especially in those tight spots mentioned above), and then you use it.
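To put that "check that it correlates" step in concrete terms, here's a minimal Python sketch, with invented FPS numbers, of comparing a game's inbuilt benchmark against manually played runs across a handful of cards/settings:

```python
# Hypothetical illustration: check whether a game's built-in benchmark
# tracks manually played runs well enough to be used as a proxy.
# All numbers below are invented for the sake of the example.

from statistics import mean

# Average FPS per (card, setting) from the canned test and from real play.
builtin = [62.0, 48.5, 35.2, 71.3, 55.0, 41.8]
gameplay = [44.1, 34.9, 25.6, 52.0, 40.3, 30.5]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

r = pearson(builtin, gameplay)
# If r is high (say > 0.95), the canned test ranks configurations the same
# way real play does, so it can be used for its convenience and
# repeatability; if not, it should be dropped from the test suite.
print(f"correlation between built-in test and gameplay: {r:.3f}")
```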


I'm sorry, but to date I have only seen two sites that have consistently shown how the R6xx series actually performed in games (PCinlife (I'm sure I've got the spelling wrong) and HOCP), while everyone else ran the canned benches or walkthrus and got vastly higher performance numbers. Who was proven correct in the end? The two listed above. So yes, I view all other sites as dolts for running walkthrus (these by NO MEANS show what a card can do while playing the game), flybys, timedemos and in-game benches, all of which are heavily optimized for. So yes, I want to see results from actual game play.

And FYI: FS, AAT and a few others DID walkthrus for BioShock. Now you say they don't publish numbers until they compare to actual game play. Sorry, but you are so wrong. Go play a map you have already cleared; your average framerate will be 20-30 fps higher than it was when you actually played that level. Yeah, there's a good indicator of game play, and this is true for any walkthru.
 
The biggest problem, IMO, with testing in-game is that nowadays it's hard to replicate the exact run in many games, because the AI etc. might behave differently on different runs, resulting in unbalanced testing conditions.


The only game where this is a huge issue that I am aware of is Oblivion, because you can load the same save 5 times and get a different experience every single time. But even that would still be better than walking through an empty city and calling it a benchmark.
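One way to cope with runs that can't be replicated exactly is simply to repeat them and look at the spread. A minimal sketch, with invented numbers, of quantifying that run-to-run variation:

```python
# Hypothetical sketch: quantify run-to-run variation of a manual benchmark
# (e.g. reloading the same Oblivion save several times). Numbers are invented.

from statistics import mean, stdev

runs_fps = [41.2, 38.7, 43.5, 40.1, 39.8]  # average FPS of five repeated runs

avg = mean(runs_fps)
spread = stdev(runs_fps)

# If the spread is small relative to the gap between the two cards being
# compared, the non-determinism (AI, physics, streaming) doesn't invalidate
# the comparison; if it is large, more runs or a different test scene are needed.
print(f"mean {avg:.1f} fps, stdev {spread:.1f} fps "
      f"({100 * spread / avg:.1f}% of the mean)")
```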
 
I've noticed that my 9600GT gets IMMENSELY higher framerates on walkthroughs, but in gameplay in Crysis it drops down to near zero.
 
I strongly believe there is exactly zero chance AMD is going back to fixed-function AA resolve. Shader resolve may cost a bit of performance (though all theoretical views on this suggest it shouldn't be much), but it also gains you tons of flexibility (tent filters) while saving transistors (a flexible fixed-function resolve would be complicated to the point that you'd essentially have to add full ALUs to the ROPs).
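As a toy illustration of that flexibility (not actual hardware or driver code, just invented sample values): the same MSAA sample data can be resolved with a plain box filter or with a wider CFAA-style tent filter that also blends in neighbouring pixels' samples.

```python
# Toy illustration of why shader-based resolve is more flexible than a
# fixed-function box resolve: the same sample data can be filtered with
# different kernels. All values and weights here are made up.

def box_resolve(samples):
    """Fixed-function style resolve: plain average of one pixel's samples."""
    return sum(samples) / len(samples)

def tent_resolve(centre_samples, neighbour_samples, neighbour_weight=0.5):
    """CFAA-style wide tent resolve: also blends in samples from
    neighbouring pixels, with a lower weight."""
    total = sum(centre_samples) + neighbour_weight * sum(neighbour_samples)
    weight = len(centre_samples) + neighbour_weight * len(neighbour_samples)
    return total / weight

# Invented 4xMSAA luminance samples for one pixel and its neighbours.
centre = [0.9, 0.9, 0.1, 0.1]        # an edge runs through this pixel
neighbours = [0.9, 0.9, 0.9, 0.9]    # neighbouring samples are bright

print("box resolve :", box_resolve(centre))
print("tent resolve:", tent_resolve(centre, neighbours))
```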
 
I'm sorry, but to date I have only seen two sites that have consistently shown how the R6xx series actually performed in games (PCinlife (I'm sure I've got the spelling wrong) and HOCP), while everyone else ran the canned benches or walkthrus and got vastly higher performance numbers. Who was proven correct in the end? The two listed above. So yes, I view all other sites as dolts for running walkthrus (these by NO MEANS show what a card can do while playing the game), flybys, timedemos and in-game benches, all of which are heavily optimized for. So yes, I want to see results from actual game play.

And FYI: FS, AAT and a few others DID walkthrus for BioShock. Now you say they don't publish numbers until they compare to actual game play. Sorry, but you are so wrong. Go play a map you have already cleared; your average framerate will be 20-30 fps higher than it was when you actually played that level. Yeah, there's a good indicator of game play, and this is true for any walkthru.

Obviously, walkthroughs and whatnot will perform better than actual intense gameplay, but if the percentage difference between the cards is maintained through both real-world and canned tests, then it doesn't matter, as the cards are being compared to each other.
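A quick sketch of that point, with invented numbers only: if card A leads card B by roughly the same percentage in the canned test and in real play, the canned numbers still rank the cards correctly even though their absolute values are inflated.

```python
# Hypothetical numbers only: average FPS for two cards in a canned test
# and in actual play of the same level.
canned =   {"card_a": 72.0, "card_b": 60.0}
gameplay = {"card_a": 48.0, "card_b": 40.5}

def lead_percent(results):
    """How far card_a leads card_b, in percent."""
    return 100.0 * (results["card_a"] / results["card_b"] - 1.0)

canned_lead = lead_percent(canned)
gameplay_lead = lead_percent(gameplay)

# If these two leads are close, the canned test preserves the relative
# ranking even though its absolute FPS is 20-30 fps higher than real play.
print(f"canned lead:   {canned_lead:.1f}%")
print(f"gameplay lead: {gameplay_lead:.1f}%")
```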

If you are trying to suggest that the reviews are mainly there to tell you whether or not you can play a specific game, then your logic is flawed. If that were the case, the reviews would go into detail about the slowdowns, when they happen, how often they happen, what the normal frame rate is, and how acceptable the slowdowns are.
 
I think it was a design flaw. I don't think AA worked as expected.

I think it was NOT a design flaw. I think AA worked as expected.

I think that to respect all the DX10.1 specs, the RV670 has to use the shaders rather than the ROPs.
Maybe the R600 was a DX10/DX10.1 hybrid, and if so, that could explain a lot of things.

Looking towards improvements on the image quality front, DirectX 10.1 will also see the introduction of full application control over anti-aliasing. This will allow applications to control the usage of both multi-sample and super-sample anti-aliasing, as well as giving them the ability to choose sample patterns to best suit the rendering scenario in a particular scene or title. Finally, these changes in DirectX 10.1 give the application control over the pixel coverage mask, a mask which is used to help to quickly approximate sampling for an area of pixels. This in particular should prove to be a boon when anti-aliasing particles, vegetation, scenes with motion blur and the like. All of this additional control handed to the application could allow for anti-aliasing to be used much more wisely and effectively, and controlled by game developers themselves, rather than the current 'all or nothing' implementation available, which basically amounts to a simple on-off switch.
The most commonly used anti-aliasing technique today is multi-sample anti-aliasing (MSAA), but this only works on polygon edges; it doesn't address texture aliasing or shader aliasing. The edge detect filter anti-aliases all edges, including those of textures and shaders.
DirectX 10.1 allows custom anti-aliasing filters to be implemented with pixel shaders. Custom filters can offer improved quality in certain cases where standard MSAA can have issues, such as with HDR lighting and deferred shading techniques.
A new feature of DirectX 10.1 allows all AA buffers to be accessed directly by shaders. Previously, it was only possible to access multi-sampled color buffers; it was impossible to access information from a depth buffer for each sample individually. This allows developers to implement more advanced custom AA techniques using a combination of shaders and dedicated hardware, much like ATI Radeon HD GPUs do today with CFAA.
ATI Radeon GPUs also introduced support for adaptive AA, which smoothes out jagged edges in partially transparent textures (such as those used to render foliage and chain-link fences). DirectX 10.1 expands on this capability by introducing sample coverage masking, which provides control over the specific sample locations where pixel shaders are executed.
Graphics cards that are DX 10.1 compliant will have to offer programmable shader output sample masks and multisample AA depth readback.

And the list goes on...
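As a toy illustration of the HDR case mentioned above (invented values, not Direct3D code): averaging raw HDR samples and then tone mapping gives a very different edge result from tone mapping each sample first, which is exactly the kind of per-sample access a custom shader resolve allows.

```python
# Toy illustration of why per-sample access matters for HDR anti-aliasing.
# Just arithmetic on invented sample values.

def tonemap(x):
    """A simple Reinhard-style operator mapping HDR values into [0, 1)."""
    return x / (1.0 + x)

# 4xMSAA samples on a pixel where a very bright object meets a dark one.
samples = [20.0, 20.0, 0.05, 0.05]

# Standard resolve: average the raw HDR values, then tone map once.
resolve_then_tonemap = tonemap(sum(samples) / len(samples))

# Custom shader resolve: tone map every sample, then average.
tonemap_then_resolve = sum(tonemap(s) for s in samples) / len(samples)

# The first result is pushed towards the bright side, so the edge
# effectively loses its anti-aliasing; the second stays a proper blend.
print(f"resolve then tonemap: {resolve_then_tonemap:.3f}")
print(f"tonemap then resolve: {tonemap_then_resolve:.3f}")
```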
 
Obviously, walkthroughs and whatnot will perform better than actual intense gameplay, but if the percentage difference between the cards is maintained through both real-world and canned tests, then it doesn't matter, as the cards are being compared to each other.

If you are trying to suggest that the reviews are mainly there to tell you whether or not you can play a specific game, then your logic is flawed. If that were the case, the reviews would go into detail about the slowdowns, when they happen, how often they happen, what the normal frame rate is, and how acceptable the slowdowns are.


The problem is, that is not the case with reviews. They pass off their numbers as though that is the performance you can expect when playing the game, which is not the case. As to your second point, two sites do do that, and no others.
 
How so? It's well known that R600 performance initially took a massive hit with AA enabled.

I thought it was established that R6xx's problem was a lack of texture throughput. And seeing as most reviews enabled both 4xAA/16xAF, the blame for R600's performance woes was always put on AA.

I started doubting that theory when I saw many benchmarks in which R600 pulled ahead of the 8800 GTS and drew level with the 8800 GTX once higher levels of AA were enabled (e.g. 8 samples).

I doubt ATI will revert to hardware-based MSAA resolve. I also doubt that AA was a problem with R600 and its derivatives.
 
I thought it was established that R6xx's problem was a lack of texture throughput. And seeing as most reviews enabled both 4xAA/16xAF, the blame for R600's performance woes was always put on AA.

I started doubting that theory when I saw many benchmarks in which R600 pulled ahead of the 8800 GTS and drew level with the 8800 GTX once higher levels of AA were enabled (e.g. 8 samples).

I doubt ATI will revert to hardware-based MSAA resolve. I also doubt that AA was a problem with R600 and its derivatives.

No, in the initial R600 reviews, turning on AA caused a massive performance drop. I thought everybody on B3d would know that.

For example, sometimes it would compete well with the 8800 GTX, but turn AA/AF on and it was toast, whereas the GTX endured only a tiny drop. And face it, most sites are going to bench a high-end card with AA/AF, and rightfully so.

It seemed like the 3870, probably because of drivers (maybe it was just drivers), got a lot better about this.

I still don't see why you'd want to sap shader power for AA under any conditions, though.

Edit: Oh, I get it, you're claiming it was AF rather than AA that caused the hit. I do remember something like that, but I don't recall it being decided for sure that AF was the problem.
 
Pretty sure the people testing it here came up with the AF conclusion, since rarely if ever do reviews leave AA on with AF off. And reviews comparing the 8xAA performance drop between RV670 and G92 showed that RV670 dropped by the same percentage as G92 did at 8xAA.
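For what it's worth, that comparison is simple to spell out with invented numbers: compute each card's own percentage drop at 8xAA and compare the drops rather than the absolute FPS.

```python
# Hypothetical FPS values only, to show the calculation being discussed:
# each card's own performance drop from no AA to 8xAA, in percent.
results = {
    "RV670": {"noAA": 60.0, "8xAA": 39.0},
    "G92":   {"noAA": 75.0, "8xAA": 49.0},
}

for card, fps in results.items():
    drop = 100.0 * (1.0 - fps["8xAA"] / fps["noAA"])
    print(f"{card}: {drop:.1f}% drop at 8xAA")

# If both cards lose a similar percentage at 8xAA, the AA resolve path isn't
# the obvious culprit for one of them; the gap must come from somewhere else
# (e.g. texture filtering once AF is enabled).
```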
 