AMD: R7xx Speculation

Status
Not open for further replies.
So again, which review does the end user more credit: the ones that use cutscenes, flybys, walkthrus and ingame benches that in no way reflect actual gameplay, and can thus mislead the prospective consumer as to what to expect, or the review that is done by actually playing the game?


There's a difference between benchmarking the graphics card and benchmarking the game.

Obviously the latter depends on your whole system, and how the game responds to what you are doing. The former is a more academic attempt to isolate the one GPU component and remove the other influences.

Unless the reader understands what the review is trying to show, and how it's doing it, it's easy to misunderstand or misapply the results, thus getting into your hypothetical situation of "my purchase doesn't match the reviews".

One of the reasons I think ATI has done consistently badly in the reviews is because Nvidia is so good at targeting how reviews are done and putting their most advantageous foot forwards (for instance emphasising speed over image quality for a long time, because speed shows in the numbers and IQ doesn't).

Yes ATI hardware does underperform compared to Nvidia, but in real world usage it's not as bad as the reviews say it should be. For instance ATI may give lower framerates, but also more stable framerates without the extreme peaks and troughs that give Nvidia the headline win on a review, but arguably a lesser playing experience.

There are just a lot of factors and considerations that the review sites have become progressively less able to address in their reviews.
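The averages-versus-stability point can be made concrete with a toy frame-time calculation (all numbers hypothetical, not measurements of any real card): a card with occasional long frames can win on average fps while losing badly on worst-case frame times.

```python
# Toy frame-time traces in milliseconds (hypothetical numbers):
# card A renders every frame in a steady 20 ms; card B is usually faster
# but stutters, taking 60 ms on one frame in ten.
steady = [20.0] * 100
spiky = [15.0] * 90 + [60.0] * 10

def avg_fps(frame_times_ms):
    # Average fps over the run = frames rendered / seconds elapsed.
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def p99_frame_time(frame_times_ms):
    # 99th-percentile frame time: all but the worst 1% of frames
    # finish at least this quickly.
    ordered = sorted(frame_times_ms)
    return ordered[int(0.99 * len(ordered)) - 1]

print(avg_fps(steady), p99_frame_time(steady))  # 50.0 fps, 20.0 ms
print(avg_fps(spiky), p99_frame_time(spiky))    # ~51.3 fps, 60.0 ms
```

The spiky card takes the headline average, but its worst frames are three times longer, which is exactly the kind of difference a single fps bar on a review graph hides.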
 
I tend to agree that even though some reviews may not seem indicative of gameplay, when doing scientific testing, you want to remove all possible variables but the variable you are testing.

Anyway, if CJ is right and the RV770 is 1.25x the 8800GT/9800GTX in its PRO and XT forms, then it truly is amazing for the mainstream/performance part of this generation to be 1.25x faster than the previous gen's high-end cards at $225ish. Depending on how R700's architecture works out (such as just how MCM it really is), it could be good competition for GT200.
 
There's a difference between benchmarking the graphics card and benchmarking the game.

Obviously the latter depends on your whole system, and how the game responds to what you are doing. The former is a more academic attempt to isolate the one GPU component and remove the other influences.

Unless the reader understands what the review is trying to show, and how it's doing it, it's easy to misunderstand or misapply the results, thus getting into your hypothetical situation of "my purchase doesn't match the reviews".

One of the reasons I think ATI has done consistently badly in the reviews is because Nvidia is so good at targeting how reviews are done and putting their most advantageous foot forwards (for instance emphasising speed over image quality for a long time, because speed shows in the numbers and IQ doesn't).

Yes ATI hardware does underperform compared to Nvidia, but in real world usage it's not as bad as the reviews say it should be. For instance ATI may give lower framerates, but also more stable framerates without the extreme peaks and troughs that give Nvidia the headline win on a review, but arguably a lesser playing experience.

There are just a lot of factors and considerations that the review sites have become progressively less able to address in their reviews.

We'll just have to agree to disagree here, as I believe you can play the game to benchmark a card. With most games (Oblivion being the exception because of its massive randomness), everything is preprogrammed to happen at certain stages, i.e. coming across enemy X at point Z. The biggest problem with the items I listed before is that they do not show how the card handles things when you are in the middle of it: the explosions, pulse rifles, grenades, axes, whatever is being used going off. [H] does a good job here, as they provide a graph along with numbers and talk about the experience they had playing the game.

Actually, you have it backwards. Xbit, Anand, FiringSquad and a few others all showed the 2900XT and the 3870/X2 being better than the G80s when, in fact, actual gameplay revealed that not to be the case, as the cards fell hard actually running the game. And as far as pitfalls go, both companies have that issue.

If you wanna isolate the card, play 3DMark all day; if you wanna show me how I can expect the card to behave playing a game, benchmark the gameplay, not cutscenes, flybys, walkthrus or ingame benches.
 
Well, during gameplay tests a lot depends on the way the game was coded and on the drivers. Flybys have these issues too, but different parts of levels will behave differently on different cards, and flybys can be optimized for more easily.

Trying to get a metric for absolute graphics performance is nearly impossible if we look at different tests, because this is a competitive market and there are many game developers; not everything is going to show the same results, since cards, drivers and software behave differently, even within the same application.
 
So ATI has really given up then....or should I say AMD ? :cry:
Smells like AMD...


It was just a joke, put the shotguns down.

So RV770 will get trumped by GT200 & Co. Great, more fuel for the green flame... :rolleyes:

If R700 ends up being some pathetic "CrossFire on a stick" reply to GT200, then I will be sorely disappointed. Should that be the case, then ATI really must have lost its bollocks after the buyout/R600 disaster.
 
Well, during gameplay tests a lot depends on the way the game was coded and on the drivers. Flybys have these issues too, but different parts of levels will behave differently on different cards, and flybys can be optimized for more easily.

Trying to get a metric for absolute graphics performance is nearly impossible if we look at different tests, because this is a competitive market and there are many game developers; not everything is going to show the same results, since cards, drivers and software behave differently, even within the same application.

Anandtech or TechReport made a note that flybys can also be I/O limited, which makes them not very indicative of real-life performance.

But still, they're better than nothing, I suppose. :/
 
I'll believe it when I see it. Given there is not a lot of performance difference between the 8800GT and 9800GTX, that must mean that either A) GDDR5 isn't doing a damn thing for it other than helping AMD lose money, or B) the chip itself still has the issues that were in R600. Why even bother with GDDR5 for all that bandwidth if it isn't even going to be used? That is what it looks like here; unless I have missed something, the only two things different between the Pro and the XT are clock speeds and the type of memory used.

There's a decent bit of difference actually..

And I'm not sure what the GDDR5 comment has to do with..anything. Memory bandwidth is only one of many factors that will determine how fast the card is.

A 9600GT is ~HD3870 < 8800GT < 8800GTX < 9800GTX. If each one of those steps is 10-15%, by 9800GTX you're significantly faster (30-45%). Then recall we're debating a card supposedly 25% faster than 9800GTX, and you'd be looking at 50-80% faster than 3870, which isn't bad. And might fall in line with the specs.

If GT260 isn't pretty darn cheap, 4870 might be my new card.
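For what it's worth, compounding those guessed per-step gains rather than adding them gives a slightly wider range; a quick sketch (all the percentages are the guesses from the post above, nothing confirmed):

```python
# Compound the guessed per-step gains along the ladder
# 3870 -> 8800GT -> 8800GTX -> 9800GTX (each step 10-15% faster),
# then apply the rumoured 25% on top of the 9800GTX.
def compound(gains):
    total = 1.0
    for g in gains:
        total *= 1.0 + g
    return total

low = compound([0.10, 0.10, 0.10]) * 1.25   # ~1.66x a 3870
high = compound([0.15, 0.15, 0.15]) * 1.25  # ~1.90x a 3870
print(low, high)
```

So roughly 66-90% faster than a 3870 if the steps compound, a bit more than the simple addition suggests.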
 
There's a decent bit of difference actually..

And I'm not sure what the GDDR5 comment has to do with..anything. Memory bandwidth is only one of many factors that will determine how fast the card is.

A 9600GT is ~HD3870 < 8800GT < 8800GTX < 9800GTX. If each one of those steps is 10-15%, by 9800GTX you're significantly faster (30-45%). Then recall we're debating a card supposedly 25% faster than 9800GTX, and you'd be looking at 50-80% faster than 3870, which isn't bad. And might fall in line with the specs.

If GT260 isn't pretty darn cheap, 4870 might be my new card.

So you're discounting any possible core, shader core and/or memory clock speed increases for G92b (don't forget it; GT200 and RV770 won't be the only games in town by then) that would let Nvidia remain competitive with HD 4870 in those price/performance segments?
Frankly, I would be surprised if HD 4870 was really that much faster than the 9800 GTX, but I can't discard that possibility just yet either.
 
So RV770 will get trounced by GT200 & Co. Great, more fuel for the green flame... :rolleyes:

If R700 ends up being some pathetic "CrossFire on a stick" reply to GT200, then I will be sorely disappointed. Should that be the case, then ATI really must have lost its bollocks after the buyout/R600 disaster.
R700 is 99.832%* certain to be a dual-chip, Crossfire-on-a-stick solution.

*Approximately. :)

But that doesn't necessarily mean you should write it off. For starters, if GT200 uses as much power as it is rumoured to do, it will probably not be possible for Nvidia to make a GT200 GX2. The battle for the top-end will therefore be between a single-chip GT200 and a dual-chip R700 - which might well come out in ATI's favour. We also don't know exactly how the Crossfire setup will work. If there really is some sort of system whereby both chips can access the same pool of memory, that may well alleviate some of the usual disadvantages of dual-chip rendering (for example, one will no longer have to duplicate all the data sent along the PCIe bus, and resources rendered in one frame that are used in the next will be accessible to both chips without having to venture outside the graphics card's onboard memory).
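The duplication argument can be put in numbers (sizes hypothetical, not real specs): with classic AFR-style CrossFire every texture and buffer is copied into each chip's local memory, so two 512 MB chips still behave like a 512 MB card, whereas a genuinely shared pool would scale.

```python
# Effective memory capacity of a dual-chip card (hypothetical 512 MB
# per chip): with duplicated resources, capacity doesn't scale with
# chip count; with a genuinely shared pool, it would.
def effective_memory_mb(per_chip_mb, chips, shared_pool):
    return per_chip_mb * chips if shared_pool else per_chip_mb

duplicated = effective_memory_mb(512, 2, shared_pool=False)  # 512 MB usable
shared = effective_memory_mb(512, 2, shared_pool=True)       # 1024 MB usable
print(duplicated, shared)
```

The same logic applies to PCIe traffic: duplicated resources have to be uploaded to each chip, shared ones only once.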
 
I tend to agree that even though some reviews may not seem indicative of gameplay, when doing scientific testing, you want to remove all possible variables but the variable you are testing.

Anyway, if CJ is right and the RV770 is 1.25x the 8800GT/9800GTX in its PRO and XT forms, then it truly is amazing for the mainstream/performance part of this generation to be 1.25x faster than the previous gen's high-end cards at $225ish. Depending on how R700's architecture works out (such as just how MCM it really is), it could be good competition for GT200.

It's interesting to note that if CJ is right (and normally he is, or rather his sources are reliable, but this is as usual a big if), then with RV770 AMD could be able to extract more performance per die area than Nvidia can with a 55nm G92, if Nvidia fails to deliver substantially higher clocks (+15% at least) on the 55nm version of that chip. It's also true that G92 is heavily bandwidth limited, but that was also a design decision, a compromise between die area and performance.

My impression is that the much-criticized 5-way superscalar ALU arrangement, with the TMUs and ROP/VRAM decoupled, could save more space when adding new blocks than adding new clusters does in a G80-like architecture. Now it will be interesting to see what the performance of the new Nvidia boards (GT200) will be, to see if performance/die size increases (and eventually whether the new high-end Nvidia chip could be heavily CPU limited in some scenarios, as I suspect).
 
You wanna isolate the card, play 3dMark all day, wanna show me how I can expect the card to behave playing a game, benchmark the game play not cutscenes, flybys, walkthrus or ingame benches.

Here's a question: if you take a graphics card out of a test machine and put it into another machine, what are you measuring? The card stays the same, the rest of the machine is different, and you get different results.

I'm not suggesting that benchmarking the way you suggest doesn't give useful information. It's just not the same information as what a lot of review sites are trying to give you, which is why you have to understand what they are doing. You're simply arguing that canned benchmarks have no relevance to real-world usage, and I pretty much agree with you. However, what we get from most websites is a canned benchmark designed to isolate the GPU.

Even HardOCP, which takes your view into their benchmarks, still uses recorded run-throughs, which are different from playing the game, and the difference in their acceptable settings and resolutions breaks easy comparison between cards.

This subject has come up before, and there isn't really a good way to give results that are the same as playing the game and can be reproduced for product comparisons or automated for reviews. The alternative is to throw out the graphs that everyone loves and go with a completely subjective review of what the reviewer thinks and feels, rather than what can be measured in the framerate.

Just like a car review that gives you top speeds that are irrelevant to driving around town, they are a single metric that can be used to compare that same metric on another vehicle. It doesn't tell you everything; it only gives a single datapoint. Without understanding where that datapoint comes from or what it means, there's no way for someone to use it as part of the basis for a more complete opinion or picture of the whole product.
 
Now it will be interesting to see what the performance of the new Nvidia boards (GT200) will be, to see if performance/die size increases.
It appears the ALU:TEX ratio will substantially increase in GT200. The ALUs should be relatively cheap so I'm expecting double+ performance (compared with G92) for less than a doubling in transistor count.

Though it's worth pointing out that G92 is not a good baseline because it's a bit of a wasteful configuration - i.e. anything that follows it that isn't so wasteful (as we saw with G94) looks dramatically better in performance/transistor terms.

Jawed
 
It appears the ALU:TEX ratio will substantially increase in GT200. The ALUs should be relatively cheap so I'm expecting double+ performance (compared with G92) for less than a doubling in transistor count.

Though it's worth pointing out that G92 is not a good baseline because it's a bit of a wasteful configuration - i.e. anything that follows it that isn't so wasteful (as we saw with G94) looks dramatically better in performance/transistor terms.

Jawed

That's right, but there are a couple of things that give me pause:

1) Bandwidth per ROP partition is not that much higher than on the 9800 GTX, while bandwidth per TU should be 60% more. In heavy AA scenarios this could be a limiting factor for the "more than 2x" scaling.
2) It runs the risk of being heavily CPU limited at lower resolutions.
 
That's right, but there are a couple of things that give me pause:

1) Bandwidth per ROP partition is not that much higher than on the 9800 GTX, while bandwidth per TU should be 60% more. In heavy AA scenarios this could be a limiting factor for the "more than 2x" scaling.
2) It runs the risk of being heavily CPU limited at lower resolutions.

Why would bandwidth per ROP be an indicator of AA performance? Samples/clock should be your benchmark. If both G92 and GT200 are equally bandwidth limited but the bandwidth is doubled on GT200 you should get your 2x scaling.

Also, isn't CPU limitation at lower resolution a good thing? What's the "risk" you're referring to?
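One way to frame the samples/clock argument, with made-up figures standing in for real specs (none of these numbers are confirmed for either chip):

```python
# Bytes of memory bandwidth available per ROP sample written
# (all figures are hypothetical stand-ins, not confirmed specs).
def fill_rate_gsamples(rops, clock_ghz, samples_per_rop=4):
    # Peak samples written per second, in billions.
    return rops * clock_ghz * samples_per_rop

def bytes_per_sample(bandwidth_gb_s, fill_gsamples):
    return bandwidth_gb_s / fill_gsamples

small = bytes_per_sample(70.0, fill_rate_gsamples(16, 0.675))
big = bytes_per_sample(140.0, fill_rate_gsamples(32, 0.675))
# If ROP count and bandwidth both double at the same clock, the
# bytes available per sample are unchanged, so an equally
# bandwidth-limited chip should scale to ~2x, as argued above.
print(small, big)
```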
 
30% more performance for ~double the bandwidth. Crass. Why not just use GDDR3 and sell the damn thing for a better price?

Jawed

I said exactly that earlier in this thread.

Would ATI really do what they did with R600 and RV670 again? Increase costs for useless bandwidth?
 
30% more performance for ~double the bandwidth. Crass. Why not just use GDDR3 and sell the damn thing for a better price?

Huh??
What's so great about being completely bandwidth limited?
Besides, by all accounts they WILL sell the same GPU with GDDR3, so what's the problem?

The question is rather whether the additional performance is worth it in terms of component cost. But given how happily people seem to pay 35% more for the 15% higher clocks of the top-end cards, getting 30% higher performance without any additional power draw, and at reasonable cost, seems like a comparatively good deal.
Obviously, whether the additional cost is worth it will depend on usage patterns. People who like to use settings that are bandwidth hungry will fork over the money for the higher end part, others will save their pennies and get a relative bargain. Just as it usually works out.
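That value argument, sketched with hypothetical prices (the 15%/35%/30% figures come from the post above; the dollar amounts are invented for illustration):

```python
# Relative performance per dollar (all prices hypothetical).
def perf_per_dollar(relative_perf, price_usd):
    return relative_perf / price_usd

gddr3_card = perf_per_dollar(1.00, 200.0)  # baseline GDDR3 board
binned_top = perf_per_dollar(1.15, 270.0)  # 15% faster, 35% pricier
gddr5_card = perf_per_dollar(1.30, 225.0)  # 30% faster via extra bandwidth
# Under these numbers the GDDR5 board is the best value of the three,
# and the clock-binned top part is the worst.
print(gddr3_card, binned_top, gddr5_card)
```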
 