AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks: 1 vote (0.6%)
  • Within a month: 5 votes (3.2%)
  • Within a couple of months: 28 votes (18.1%)
  • Very late this year: 52 votes (33.5%)
  • Not until next year: 69 votes (44.5%)

  Total voters: 155
  Poll closed.
Additionally, comparing two entirely different architectures to come to a conclusion about whether one is bandwidth limited compared to the other is... well, there are too many variables involved here to come to any reliable conclusion.

Regards,
SB


I certainly agree with that.
The thing is, right now we only have data for AMD vs NV, so anyone who wants to speculate has to work with those data.
Is the speculation going to be accurate?
Probably not.
But we do this for fun; it doesn't matter at this stage if we are 100% correct.
Figuring out whether the 5870 is going to be bandwidth limited is good enough.


Here's the thing though: if DX11 (and DX10.1 as well) somehow alleviates some of the need for more bandwidth through performance-enhancing features, then that's relevant when discussing whether a card is or will be bandwidth limited.


About DX11 and DX10.1: I know that, depending on how DX11/DX10.1 is used, it can yield a performance gain.

I don't know if future DX11/DX10.1 use will somehow alleviate some of the need for more bandwidth (I mean by an appreciable amount, like -20%).
If it does, then it is a good thing for ATI.

I just thought that since we don't know by what percentage this will help, it will only confuse us more.

For example:

Take an HD 4850, which is already a little bandwidth limited, and downclock the memory to 700MHz.

So, for this part: if a developer adds a future DX11/DX10.1 codepath and the game performs a little better than with the DX10 codepath (let's say +20%, for example), does this mean that the 4850 with 700MHz memory has stopped being bandwidth limited?

Then try overclocking the memory to 1.2GHz and see the new results.
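
To make that test concrete, here's a rough Python sketch of the downclock/overclock comparison; the fps numbers are made-up placeholders, just to show the arithmetic:

Code:
# Back-of-the-envelope check for bandwidth limitation; the fps numbers
# below are hypothetical placeholders, not real benchmark results.
def bandwidth_sensitivity(fps_low, fps_high, clk_low_mhz, clk_high_mhz):
    """How much of the memory-clock increase turned into fps:
    1.0 -> fps scaled 1:1 with memory clock (fully bandwidth limited),
    0.0 -> fps didn't move at all (not bandwidth limited)."""
    fps_gain = fps_high / fps_low - 1.0
    clk_gain = clk_high_mhz / clk_low_mhz - 1.0
    return fps_gain / clk_gain

# Hypothetical HD 4850 results at 700MHz vs 1200MHz memory:
print(bandwidth_sensitivity(40.0, 52.0, 700, 1200))  # ~0.42 -> partially limited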

Also, the potential +20% (for example) in performance that DX10.1/DX11 may bring in the future could be unrelated to memory bandwidth.

I don't have a technical background, so it is very hard for me to predict by what percentage DX11/DX10.1 use will help with the need for memory bandwidth.

If you are a programmer in the games industry, or a graphics card engineer, or you happen to know about this stuff and can make a prediction, I respect your opinion.
 
Well, that's exactly my problem. Doubling the RBEs per MC has clearly (as can already be seen in RV740) brought a significant jump in efficiency, but at the same time the GDDR5 gravy train appears to have hit the buffers. So unless something radical happens and GDDR5 goes way above 6Gbps, the future is looking awful for this architecture - the entire forward-rendering concept needs a rethink.
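
For reference, the raw arithmetic behind that ceiling; the 256-bit bus below is an assumption based on the RV870 rumours:

Code:
# Peak GDDR5 bandwidth = bus width (bits) / 8 * effective data rate (Gbps).
def peak_bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

# Assuming a 256-bit bus, as the RV870 rumours suggest:
print(peak_bandwidth_gbs(256, 4.8))  # 153.6 GB/s at 4.8Gbps
print(peak_bandwidth_gbs(256, 6.0))  # 192.0 GB/s if GDDR5 ever reaches 6Gbps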

Wouldn't the deferred shading/lighting features of DX10.1/11 help here? At any rate, we have seen shaders become more ALU-oriented over the years in view of the memory wall. Maybe game devs will realize this too and implement deferred techniques.

Games that use DX10.1 already see a ~10-20% increase in fps from the same bandwidth on a supposedly limited 48xx.

In the leaked slides from the 10 Sept event, we also saw a deferred lighting demo using DX compute shaders. So there is a good chance that games will start using deferred techniques to save bandwidth.

The situation might be a precursor to a giant bandwidth fubar, but there is some hope yet.
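
To put rough numbers on the bandwidth-saving argument, here's a toy comparison of framebuffer traffic; the buffer sizes and light count are assumptions, not measurements:

Code:
# Toy model of per-frame render-target traffic (illustration only):
# multipass forward lighting touches the accumulation target once per
# light, while tiled deferred lighting (e.g. via compute shaders) reads
# the G-buffer once and accumulates lighting on-chip before one write.
def forward_traffic_mb(w, h, lights, pixel_bytes=8):
    return w * h * lights * 2 * pixel_bytes / 1e6  # read + write per light pass

def deferred_traffic_mb(w, h, gbuffer_bytes=16, out_bytes=8):
    return w * h * (gbuffer_bytes + out_bytes) / 1e6  # one read, one write

print(forward_traffic_mb(1920, 1200, lights=32))  # ~1180 MB per frame
print(deferred_traffic_mb(1920, 1200))            # ~55 MB per frame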
 
I wonder why we haven't seen a single picture of the 5850!

Does anyone have any info about it? Will it be the same as the 5870, just with a different sticker, or different hardware? I mean, what kind of cooler will it have?
 
Whoa, they don't? Didn't someone post benchmarks of the new ATI card being +40% to +60% (can't remember the exact figure)?

You're right!
The HD5850 is above the GTX285 according to leaked benches. The HD5870 was compared to the GTX295 and won in most cases as well, but we all know how some games scale on SLI/CF...

I'm personally looking forward to professional benchmarks like SPECviewperf! Judging by the OpenGL game performance of these cards, a FireGL based on RV870 should be mighty! :devilish:
 
[Image: 5870family.jpg]

That's what I found on xtremesystems.
 
IIRC, the 4850 was a single-slot part. Clearly, power budgets have shot up with this gen. Not good. :(

Depends, really. Would you be willing to pay 1.5 times more in power to get 2 times the performance? Especially if the idle power draw has been dropped right down, as the leaks suggest. Although your peak power usage may go up for more performance, your overall power usage would go down because your idle power usage has dropped a lot more, and idling is probably what your card spends most of its time doing.

That would sound like a good deal unless you absolutely cannot afford more power under any circumstances.
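
A quick back-of-the-envelope example of that trade-off, with made-up wattages and usage hours (the low idle figure is just what the leaks hint at):

Code:
# Hypothetical daily energy use for the 1.5x-load-power / 2x-performance
# trade-off described above; all wattages and hours are assumptions.
def daily_wh(idle_w, load_w, idle_hours, load_hours):
    return idle_w * idle_hours + load_w * load_hours

old = daily_wh(idle_w=90, load_w=160, idle_hours=20, load_hours=4)
new = daily_wh(idle_w=27, load_w=240, idle_hours=20, load_hours=4)  # 1.5x load power
print(old, new)  # 2440 Wh vs 1500 Wh: total energy drops despite the higher peak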
 
Depends, really. Would you be willing to pay 1.5 times more in power to get 2 times the performance? Especially if the idle power draw has been dropped right down, as the leaks suggest. Although your peak power usage may go up for more performance, your overall power usage would go down because your idle power usage has dropped a lot more, and idling is probably what your card spends most of its time doing.

That would sound like a good deal unless you absolutely cannot afford more power under any circumstances.

In that case you should probably not be looking at high-end graphics cards or a high-end computer to begin with.
 
Sorry to interject, but I think I understand why Rangers did that.

If you look at his conclusion:

"If anything 5870 typically increased it's lead the higher the resolution/AA/AF settings were turned, so I dont see a bandwidth limiting problem, at least versus the current competition"

So his main goal was to see whether the 5870 is bandwidth limited or not.
So he had to use only those games where the performance difference was not due to a potential DX10.x codepath implementation.

But the logic that the 5870 doesn't have a bandwidth limiting problem because it typically increased its lead the higher the resolution/AA/AF settings were turned up is wrong, imo.

For example, take a 16-ROP GTS 250 (738MHz core / 1100MHz mem) and underclock the memory to 900MHz, then take a low-power edition of the 16-ROP 55nm 9600 GT (600MHz core / 900MHz mem).

You will see that the underclocked GTS 250 increases its lead the higher the resolution/AA/AF settings are turned up. Does this mean that the GTS 250 is not bandwidth limited with 900MHz memory?
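
A toy model shows why that logic can mislead: if fps is capped by whichever of the shader rate or the bandwidth rate runs out first, the card with more bandwidth headroom keeps growing its lead even though both cards are bandwidth limited (all numbers hypothetical):

Code:
# fps is capped by whichever resource runs out first (toy model).
def fps(shader_cap, bw_cap):
    return min(shader_cap, bw_cap)

# Two hypothetical cards at a high res/AA setting, both bandwidth capped:
card_a = fps(shader_cap=100, bw_cap=80)  # bandwidth limited at 80 fps
card_b = fps(shader_cap=90, bw_cap=50)   # more severely limited at 50 fps
print(card_a / card_b)  # 1.6x lead, yet card A is also bandwidth limited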

You do realize that the 5870's growing lead isn't so much from the fact that it isn't bandwidth limited, but rather that it simply isn't nearly as limited as its competition. That, and the fact that ATI has some nice 4x and 8x AA algorithms in place right now to reduce the hit for those settings.
 
Dual-slot coolers mean single-slot ones were not good enough to keep them cool.
The reference card will have all those connections; the two-slot design is not because of TDP requirements but because of the connectors. Expect cards with fewer than 2 DVI ports.
 
I'd like to know where the complainers are about how they all have dual-slot coolers. Even the 5770 has one; if the cards have dual-slot coolers, you know they are gonna run warm to hot.

I always preferred dual-slot coolers, especially ATI ones that blow all the hot air out of the case. That is the most important thing for me.

After selling my 4870X2, I got a GTX 260, just to have a decent card that can play some games until the new ones arrive. The problem is that my case's temperature rose about 5°C, just because too much air is blown into the case, and that comes from a card that is much less of a power hog than the 4870X2 was.
 
I've always been willing to give up that unused slot if it makes the difference between my card running cool and quiet or noisy and hot. The days when people stood for noisy PCs are long gone.

In fact, I've just sent back a 4890 with a non-standard cooler (Powercolor changed the coolers as some kind of differentiator, but the retailer didn't change the description), because it was just far too noisy under load. It wasn't terrible by the standards of a few years back, but it's awful by today's standards, where everyone is using slow and quiet 120mm+ fans in even the most modest builds. Graphics cards can't be so much noisier than everything else, even though they generate so much heat; people just won't give them a pass on it.
 
I always preferred dual-slot coolers, especially ATI ones that blow all the hot air out of the case. That is the most important thing for me.

After selling my 4870X2, I got a GTX 260, just to have a decent card that can play some games until the new ones arrive. The problem is that my case's temperature rose about 5°C, just because too much air is blown into the case, and that comes from a card that is much less of a power hog than the 4870X2 was.


What card did you go with, is what I gotta ask. I've got BFG and eVGA cards and both blow the air out of the case totally; none goes into the case.
 