The LAST R600 Rumours & Speculation Thread

What's your gut feeling on the R600? In particular, is it going to be faster than the G80 and G81?

I'm sensing that it will slightly edge out G80, but G81 will easily beat it while using less power.


Do you have any solid reason to believe that a hypothetical G81 is going to close the bw deficit (as opposed to marginally narrow)? I don't. So I think such a scenario, should it come to pass, could have variable results across different settings. We'll have to see first with R600 exactly how far up the resolution/settings food chain you have to go for the bw advantage to start to shine.

Edit: Of course, having said that, looking at that ROP/memory implementation they did, I don't see any particular theoretical challenge for them to step up to 512-bit if they are willing to spend the silicon. It's obviously modular, as GTS attests. But are they willing? And at what process will they refresh: 90, 80, or 65 --and the implications for timing.
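
For anyone who wants to put numbers on the bw question, here's the rough peak-bandwidth arithmetic I'm working from -- the R600 bus width and memory clock are pure speculation on my part, not confirmed specs:

Code:
# peak bandwidth = (bus width in bytes) x (memory clock) x 2 transfers per clock (DDR)
def peak_bw_gbs(bus_bits, mem_clock_mhz):
    return bus_bits / 8 * mem_clock_mhz * 2 / 1000  # MB/s -> GB/s

print(peak_bw_gbs(384, 900))   # G80 GTX, 900 MHz GDDR3           -> ~86.4 GB/s
print(peak_bw_gbs(320, 800))   # G80 GTS, 800 MHz GDDR3           -> ~64.0 GB/s
print(peak_bw_gbs(512, 1100))  # hypothetical R600, 1.1 GHz GDDR4 -> ~140.8 GB/s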
 
Do you have any solid reason to believe that a hypothetical G81 is going to close the bw deficit (as opposed to marginally narrow)? I don't. So I think such a scenario, should it come to pass, could have variable results across different settings. We'll have to see first with R600 exactly how far up the resolution/settings food chain you have to go for the bw advantage to start to shine.

G80 is not exactly starving for BW at the moment.
So, unless real-world conditions and software demand much more in the near future, I think the whole 384-bit vs 512-bit debate is a moot point.

By the time a 512-bit bus + GDDR4 combination is truly useful, both IHVs will be using it anyway...
 
Think HDR+High AA/AF+High Res.

Bandwidth would get swallowed rather quickly. What geo is positing is how quickly. If there's a noticeable difference at 1600x1200, then it's a big win for ATI. If it takes going to 2048 res or higher, not so much.
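
For what it's worth, here's a very rough sketch of the ROP traffic alone for an FP16 HDR target with MSAA -- the overdraw factor is made up, and it ignores compression and texture reads entirely, so treat it as an illustration of the scaling rather than a measurement:

Code:
# bytes touched per second at the ROPs: FP16 colour read+write (8+8) plus 32-bit Z read+write (4+4) per sample
def rop_traffic_gbs(width, height, aa_samples, fps=60, overdraw=3):
    bytes_per_sample = 8 + 8 + 4 + 4
    samples = width * height * aa_samples * overdraw
    return samples * bytes_per_sample * fps / 1e9

for res in [(1280, 1024), (1600, 1200), (2048, 1536)]:
    print(res, round(rop_traffic_gbs(*res, aa_samples=4), 1), "GB/s at 4xAA, 60 fps")

The point isn't the absolute numbers, it's how quickly the requirement climbs as you push the resolution up.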
 
Do you have any solid reason to believe that a hypothetical G81 is going to close the bw deficit (as opposed to marginally narrow)? I don't. So I think such a scenario, should it come to pass, could have variable results across different settings. We'll have to see first with R600 exactly how far up the resolution/settings food chain you have to go for the bw advantage to start to shine.

Edit: Of course, having said that, looking at that ROP/memory implementation they did, I don't see any particular theoretical challenge for them to step up to 512-bit if they are willing to spend the silicon. It's obviously modular, as GTS attests. But are they willing? And at what process will they refresh: 90, 80, or 65 --and the implications for timing.

It's a gut feeling. All the delays, the need for a newer process, and the enormous power consumption suggest that not everything is right with the R600.
 
G80 is not exactly starving for BW at the moment.
So, unless real-world conditions and software demand much more in the near future, I think the whole 384-bit vs 512-bit debate is a moot point.

By the time a 512-bit bus + GDDR4 combination is truly useful, both IHVs will be using it anyway...
You came to the conclusion that 512-bit isn't that useful even before it was launched?
 
As I've said before, it puzzles me that people would think that AMD would do 512-bit for just a checkbox. But it *really* puzzles me why people would think that AMD would do 512-bit *and* GDDR4 for just a checkbox.
 
As I've said before, it puzzles me that people would think that AMD would do 512-bit for just a checkbox. But it *really* puzzles me why people would think that AMD would do 512-bit *and* GDDR4 for just a checkbox.
Oh, I'm sure they didn't. But that doesn't necessarily mean that it won't end up being just that, at least for now.
 
Given the engineering difficulty of producing a 512bit bus? I'd say it definitely means that. It's just too damn expensive and would make no sense unless the planning was there to build significant capabilities into the core to utilize that bandwidth.

An external 512bit bus isn't the same thing as the 512bit ring bus experimentation during R5xx.
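
To put some numbers on "too damn expensive": GDDR3/GDDR4 devices are 32 bits wide, so the chip count and raw data-pin count scale directly with the external bus, and the pad ring, PCB routing and board layer count scale right along with them. Quick sketch (data pins only, ignoring address/command/power):

Code:
# memory devices and data pins for a given external bus, assuming 32-bit-wide GDDR3/GDDR4 chips
def memory_interface(bus_bits, device_width=32):
    return bus_bits // device_width, bus_bits  # (chips, data pins)

for bus in (256, 384, 512):
    chips, data_pins = memory_interface(bus)
    print(f"{bus}-bit bus: {chips} chips, {data_pins} data pins")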
 
Given the engineering difficulty of producing a 512bit bus? I'd say it definitely means that. It's just too damn expensive and would make no sense unless the planning was there to build significant capabilities into the core to utilize that bandwidth.

An external 512bit bus isn't the same thing as the 512bit ring bus experimentation during R5xx.
Except that the decision had to be made quite some time ago, and any misreading of the GPU or 3D games markets might cause them to overshoot the required bandwidth.
 
We've already seen from past iterations that when it comes to external bandwidth, ATI/AMD has been spot on with their requirements and building balanced parts.
 
And the R520 showed that?

Natoma said:
We've already seen from past iterations that when it comes to external bandwidth, ATI/AMD has been spot on with their requirements and building balanced parts.

There's a reason why I added the word external. ;)

Also,

Natoma said:
Given the engineering difficulty of producing a 512bit bus? I'd say it definitely means that. It's just too damn expensive and would make no sense unless the planning was there to build significant capabilities into the core to utilize that bandwidth.

An external 512bit bus isn't the same thing as the 512bit ring bus experimentation during R5xx.
 
One source suggested that the editor/press launch event for R600 is actually not at Cebit but a few days earlier on the 12-13th of March. Supposedly no benchmarks will be released to the public because the NDA will be lifted 2 weeks later at the end of March. Yes, this all sounds a bit weird, but it's just what I heard. And to top it off, it seems that AMD is being very secretive about it so dates could be shuffled around again at the last moment.
 
Not particularly. I would think the bandwidth increases from R3xx (~23 GB/s max) --> R4xx (~38 GB/s max) --> R5xx (~64 GB/s max) have been rather significant.
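
If I have my clocks right, those figures fall straight out of the top-end memory clocks of each generation -- all 256-bit parts, so it's the memory clock doing the work:

Code:
# peak bandwidth of the fastest 256-bit card per generation: bus/8 bytes x clock x 2 (DDR)
cards = {
    "R360 / 9800 XT":    (256, 365),   # DDR
    "R480 / X850 XT PE": (256, 590),   # GDDR3
    "R580+ / X1950 XTX": (256, 1000),  # GDDR4
}
for name, (bus_bits, clk_mhz) in cards.items():
    print(name, round(bus_bits / 8 * clk_mhz * 2 / 1000, 1), "GB/s")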
 
Not particularly. I would think the bandwidth increases from R3xx (~23 GB/s max) --> R4xx (~38 GB/s max) --> R5xx (~64 GB/s max) have been rather significant.
I think that's more a factor of the DRAM market than anything ATI consciously did, though. Also, wasn't the limited availability of the X800 XT PE blamed at least in part on the ability of Samsung to supply 1.6ns GDDR3?
 
Surely the AIB partners would want to show it off at CeBIT, so the announcement should come just before and sales slightly after; that would seem the most logical. If they do not lift the NDA before CeBIT, then the guys on the Sapphire and HIS stands might just be sitting there watching XFX etc. displaying mid-range G8-series cards and possibly G81!

At this rate the K8L will be out first :p
 