The LAST R600 Rumours & Speculation Thread

Status
Not open for further replies.
I don't see how it could be only almost as fast, given all that bandwidth and the spec rumors floating around.

The bandwidth the card offers merely shows its potential. The other rumored specs are just that: rumors. The thing could still be released with 16 ROPs and lack any ability to use all of its bandwidth. We just don't know yet. It could be a beast. It could be stillborn. While I lean towards the beast side (they'd have to be stupid to make it 'stillborn' slow), I keep in mind that we just don't know yet.
 
Well, the problem with that for AMD/ATI is that there is no reason to expect that nVidia won't be able to pull another 20%-30% performance out of the G8x drivers.

Nvidia had been doing that "driver improvement" thing since the GeForce 2/3/4 era, yet when the Radeon 9700 Pro hit the market, it didn't help Nvidia outperform R300...
I'm not saying this is another R300, but time will tell how much driver improvement will help.

The R520 X1800 XT was also delayed, but its performance didn't excite me until the R580 X1900 XT. I hope R600's delay will be made up for on the performance side.
 
Nvidia had been doing that "driver improvement" thing since the GeForce 2/3/4 era, yet when the Radeon 9700 Pro hit the market, it didn't help Nvidia outperform R300...
I'm not saying this is another R300, but time will tell how much driver improvement will help.

Both IHVs have released drivers in the past that delivered significant performance boosts, and there's nothing unusual in that either. It's either some sort of "fine tuning" of certain aspects of the architecture, or there were "bubbles" in the former drivers that held GPUs back from their full potential in certain applications.

The R520 X1800 XT was also delayed, but its performance didn't excite me until the R580 X1900 XT. I hope R600's delay will be made up for on the performance side.

Trouble being that you'll have a hard time finding online performance comparisons with R520 GPUs included. It's no slouch at all, either compared to its direct competitor (G70) or compared to R580. There's no doubt that the latter is faster most of the time, yet the performance difference is in no way proportional to the increase in theoretical ALU throughput that R580 brought with it.
 
Both IHVs have released drivers in the past that delivered significant performance boosts, and there's nothing unusual in that either. It's either some sort of "fine tuning" of certain aspects of the architecture, or there were "bubbles" in the former drivers that held GPUs back from their full potential in certain applications.
Yup. ATI tends to do it more gradually, while nVidia has in the past tended to provide larger performance jumps at a time.
 
Yup. ATI tends to do it more gradually, while nVidia has in the past tended to provide larger performance jumps at a time.

The "OGL/MSAA" tweak thingy for R5x0 delivered a significant performance boost for all Radeons of that family for OGL applications when MSAA is enabled.
 
The "OGL/MSAA" tweak thingy for R5x0 delivered a significant performance boost for all Radeons of that family for OGL applications when MSAA is enabled.
Yeah, I seem to remember that. But that sort of thing seems to be rather out of character for ATI. I think there have been a couple of other examples in OpenGL, but then ATI's OpenGL drivers have always been sub-par, particularly compared to nVidia's, making significant improvements easier.
 
Trouble being that you'll have a hard time finding online performance comparisons with R520 GPUs included. It's no slouch at all, either compared to its direct competitor (G70) or compared to R580. There's no doubt that the latter is faster most of the time, yet the performance difference is in no way proportional to the increase in theoretical ALU throughput that R580 brought with it.

I don't care that it's hard to find online performance comparisons; I know how the R520 performs....
The point is that the R520's delay was not made up for on the performance side; it barely matched G70. The reason I say that is that ATI initially struggled to push clock frequency past the 500 MHz mark in order to match G70. The R520 was a good, solid video card, but it was late for the first impression. I hope this won't happen with R600, unless it is significantly faster than G80.

R520 and R580 will perform about the same, since they both have 16 ROPs and 16 texture units; the difference only shows in situations that need more pixel-shader ALUs, which R580 has.
 
The bandwidth the card offers merely shows its potential. The other rumored specs are just that: rumors. The thing could still be released with 16 ROPs and lack any ability to use all of its bandwidth. We just don't know yet. It could be a beast. It could be stillborn. While I lean towards the beast side (they'd have to be stupid to make it 'stillborn' slow), I keep in mind that we just don't know yet.

I have a feeling the rumored specs are pretty accurate; they usually are when they're similar across nearly every rumor.
 
The point is that the R520's delay was not made up for on the performance side; it barely matched G70. The reason I say that is that ATI initially struggled to push clock frequency past the 500 MHz mark in order to match G70. The R520 was a good, solid video card, but it was late for the first impression. I hope this won't happen with R600, unless it is significantly faster than G80.

That's a side-effect you get with all delays in the GPU market; the longer the delay the higher the frequency increase needed to compete.

R520 and R580 will perform about the same, since they both have 16 ROPs and 16 texture units; the difference only shows in situations that need more pixel-shader ALUs, which R580 has.

Exactly my point; with the only other difference being that R580 never turns out to be 3x faster than R520 in real games, as the theoretical ALU throughput increase would suggest.
 
Exactly my point; with the only other difference being that R580 never turns out to be 3x faster than R520 in real games, as the theoretical ALU throughput increase would suggest.

If R580 had 24 ROPs and 24 texture units, then it would help significantly.
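The point above can be sketched with simple Amdahl's-law arithmetic. This is just an illustration, not measured data: the fraction of frame time that is ALU-bound is a made-up parameter, and only that fraction scales with R580's 3x shader increase (the 16 ROPs / 16 TMUs stay the same on both chips).

```python
# Hypothetical Amdahl-style illustration: only the ALU-bound share of
# frame time scales with the 3x shader increase (R520 -> R580), so the
# overall speedup stays well below 3x. The alu_fraction values are
# assumed for illustration, not measured from any real game.

def effective_speedup(alu_fraction, alu_speedup=3.0):
    """Overall speedup when only the ALU-bound fraction of work scales."""
    return 1.0 / ((1.0 - alu_fraction) + alu_fraction / alu_speedup)

for frac in (0.2, 0.5, 0.8):
    print(f"ALU-bound {frac:.0%} of frame time -> {effective_speedup(frac):.2f}x overall")
# Even a game that is 80% ALU-bound only gains about 2.14x, not 3x.
```

So unless a game is almost entirely shader-limited, the ROP/texture-bound remainder caps the real-world gain, which matches what the benchmarks showed.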
 
Well, the problem with that for AMD/ATI is that there is no reason to expect that nVidia won't be able to pull another 20%-30% performance out of the G8x drivers.

My personal guess is that over the next 1-2 months nVidia is going to be working hard on Vista compatibility and performance, and should bring that up to par in that time, giving them another couple of months, apparently, to get further performance tweaks to spoil ATI/AMD's launch.
And don't forget G81. Driver improvements on the G8x architecture plus G81 will give ATI/AMD a big headache...
 
Have you thought about why AMD/ATI chose to call R600 the X2900 XTX as opposed to X2800 XTX?
There might be a reason behind it.
Perhaps AMD/ATI believes nVidia will release a GeForce 8900 GTX at around the same time, and they don't want to be perceived as being behind?
 
Somehow, I doubt Nvidia would shrink G80 down to the 80nm half-node.
My bet is that they'll unveil a top GPU on the 65nm process sometime in the fall, or maybe even late summer, but it will not be a simple die shrink.
Why can't they do both, however, if it's an optical shrink? It would just be a cheap and efficient way to reduce costs, which makes sense if we aren't expecting them to release a new product for some time afterwards again.
That 65nm G72 is too suspicious an "experiment", when they could have used that process for the G86 to make a truly "killer" low-end GPU...
If you look around, you'll find some interesting reports on 65nm yields: http://www.fabtech.org/content/view/2462/73/
Also consider that wafer prices are much higher on cutting-edge processes. It's not difficult to conclude from this that putting a product on 65nm right now is likely less cost-efficient than keeping it on 80nm. On the other hand, it might be more power-efficient...

Getting back to R600...
I have a feeling the rumored specs are pretty accurate; they usually are when they're similar across nearly every rumor.
First thing you've got to realize is that rumors tend to be repeated around by different people, but the source of the information actually hasn't changed. As such, it doesn't matter a single bit if more people are repeating R600 specs - that doesn't make them any more or less accurate than they are otherwise.
 
If R580 had 24 ROP's and 24 texture units then it would significantly help.

Considering the general architectural layout, I suspect it could have ended up with 6 quads instead of 4 in the end. All fine and dandy, but I wouldn't want to imagine the transistor count of such a beast, nor its power consumption.
 
Well, the problem with that for AMD/ATI is that there is no reason to expect that nVidia won't be able to pull another 20%-30% performance out of the G8x drivers.

My personal guess is that over the next 1-2 months nVidia is going to be working hard on Vista compatibility and performance, and should bring that up to par in that time, giving them another couple of months, apparently, to get further performance tweaks to spoil ATI/AMD's launch.

You mean texture shimmering coming back in nv side? :rolleyes:
 
Optimisations that existed for G7x would not necessarily translate into large performance gains for G80, due to the difference in how the shader/texture ALUs are organised. I think they are actually talking about compilers, and possibly about that MUL showing up.

Chris
 