AMD: Volcanic Islands R1100/1200 (8***/9*** series) Speculation/ Rumour Thread

Hmm, so a shade faster than Titan, wider mem i/f, but 30% smaller die.

This could be the beginning of a new strategy from AMD: make a ~350mm2 chip on the leading edge, then refresh it with a slightly larger (~450mm2) chip and some architectural tweaks a year later.

This is going to crater nv's margins on Titan.
 
Given that all of the cards with turbo in the preview have a "+" listed after their clock frequency, but the 290X does not, it's possible turbo wasn't enabled for the benchmarks. It's also possible that the leakers had no way of knowing whether or not the card was turboing in the first place.
 
Looks good so far! I'll be curious to see its OC potential. The 780s are OC monsters; you'd be hard pressed to find any enthusiast running one at stock speeds.
 
~15-20% faster than a Titan? That's a very nice and impressive chip, being 25% smaller, but it's not the kind of knock-out blow I'd expect it to be, especially with a 512-bit bus. Enable all SMs on GK110, increase the clocks here and there, lower the price obviously, and the real-life differences will be as significant as they are right now (that is: not very much.)

I really don't get why AMD doesn't go for the undisputed crown: their GCN architecture is more area efficient than Kepler's (at the very least for non-crippled DP versions). Why hold back?
 
Hmm, so a shade faster than Titan, wider mem i/f, but 30% smaller die

Great, I am very happy to finally see this... after such a long wait.

This could be the beginning of a new strategy from AMD: make a ~350mm2 chip on the leading edge, then refresh it with a slightly larger (~450mm2) chip and some architectural tweaks a year later.

Of course, only if their GCN architecture really is better than the corresponding competitor's. I wonder if Maxwell will be better than Kepler at the same transistor count.

This is going to crater nv's margins on Titan.

Great :D

I really don't get why AMD doesn't go for the undisputed crown: their GCN architecture is more area efficient than Kepler's (at the very least for non-crippled DP versions). Why hold back?

If they go for the crown, that will inevitably have a very positive effect on their brand image. Perhaps that's partly where the "cheaper" reputation comes from...
 
To guess what they'll do, it would be really important to know what they're aiming for. If it's about gaming, then they probably won't pay for 512-bit if it's not needed -> their competition currently keeps up with just 256-bit.
People are too quick to draw a line in the sand about what is 'balanced', 'needed', etc. The reality is that a GPU goes through a variety of workloads when drawing a scene. Some of those are heavily BW limited, some geometry limited, some shader limited, etc. Even the same draw command can have clumps of triangles varying heavily in these characteristics.

I think the main reason 512-bit made sense is that enthusiasts will demand 4-8GB to match upcoming consoles, and when you have that many chips on the board then you aren't saving much by going 256-bit. The marginal video card cost for 512-bit over 256-bit is probably well below 10%, and I think that much of a performance boost is more than likely.
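To make the chip-count argument concrete, here's a rough sketch. The assumptions are mine, not the post's: GDDR5 chips have a 32-bit interface (doubled up per channel in clamshell mode), and 2Gbit was the common chip density at the time.

```python
# Sketch of the "you aren't saving much by going 256-bit" argument.
# Assumptions (mine): 32-bit GDDR5 chips, 2Gbit density typical for the era.
def chips_needed(bus_width_bits, clamshell=False):
    """Number of GDDR5 chips: one 32-bit chip per 32 bits of bus (x2 in clamshell)."""
    return (bus_width_bits // 32) * (2 if clamshell else 1)

def capacity_gb(bus_width_bits, chip_gbit=2, clamshell=False):
    """Total memory in GB for a given bus width and per-chip density."""
    return chips_needed(bus_width_bits, clamshell) * chip_gbit / 8

print(capacity_gb(256))                  # 2.0 GB: 256-bit bus, 8 chips
print(capacity_gb(256, clamshell=True))  # 4.0 GB: needs 16 chips in clamshell
print(capacity_gb(512))                  # 4.0 GB: 16 chips on a 512-bit bus
```

So with 2Gbit chips, hitting 4GB takes 16 chips on the board either way; that's the sense in which 256-bit doesn't save much once enthusiasts demand console-matching capacities.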
 
I think the main reason 512-bit made sense is that enthusiasts will demand 4-8GB to match upcoming consoles, and when you have that many chips on the board then you aren't saving much by going 256-bit. The marginal video card cost for 512-bit over 256-bit is probably well below 10%, and I think that much of a performance boost is more than likely.

I think the main reason is that they cannot significantly increase memory bandwidth if they stayed on a 384-bit memory interface... Do you know of any new type of memory capable of achieving this?
 
The sample runs its GDDR5 at 5Gbps -> only ~10% more BW than 384-bit@6Gbps.
There was probably a serious die-space saving from the IMC downgrade, and we can lay aside dreams of a 512-bit@8Gbps OC (512GB/s). :LOL:
But with such low GDDR5 speeds they could reduce the voltage to <1.35V.
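For reference, the peak-bandwidth arithmetic behind those figures, as a minimal sketch (the function name is mine):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak GDDR5 bandwidth in GB/s: bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

sample = peak_bandwidth_gbs(512, 5.0)    # 320.0 GB/s, the leaked sample
titan = peak_bandwidth_gbs(384, 6.0)     # 288.0 GB/s, 384-bit@6Gbps
oc_dream = peak_bandwidth_gbs(512, 8.0)  # 512.0 GB/s, the OC dream above
print(sample, titan, sample / titan - 1)  # ~11% more than 384-bit@6Gbps
```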
 
The sample runs its GDDR5 at 5Gbps -> only ~10% more BW than 384-bit@6Gbps.
There was probably a serious die-space saving from the IMC downgrade, and we can lay aside dreams of a 512-bit@8Gbps OC (512GB/s). :LOL:
But with such low GDDR5 speeds they could reduce the voltage to <1.35V.

It will be interesting indeed to see how well the memory clock OCs.
 
People are too quick to draw a line in the sand about what is 'balanced', 'needed', etc. The reality is that a GPU goes through a variety of workloads when drawing a scene. Some of those are heavily BW limited, some geometry limited, some shader limited, etc. Even the same draw command can have clumps of triangles varying heavily in these characteristics.

I think the main reason 512-bit made sense is that enthusiasts will demand 4-8GB to match upcoming consoles, and when you have that many chips on the board then you aren't saving much by going 256-bit. The marginal video card cost for 512-bit over 256-bit is probably well below 10%, and I think that much of a performance boost is more than likely.

Don't forget they also design these cards for the computing and pro workstation segments; they can go for 8 and 16GB configurations on the FirePro cards. (Nearly 15% is retained for ECC and control data, so the more memory you can fit, the better.)

Given that all of the cards with turbo in the preview have a "+" listed after their clock frequency, but the 290X does not, it's possible turbo wasn't enabled for the benchmarks. It's also possible that the leakers had no way of knowing whether or not the card was turboing in the first place.

I get the feeling the wires we see on the back were there to bypass some limitations of the initial BIOS he had.
 
People are too quick to draw a line in the sand about what is 'balanced', 'needed', etc. The reality is that a GPU goes through a variety of workloads when drawing a scene. Some of those are heavily BW limited, some geometry limited, some shader limited, etc. Even the same draw command can have clumps of triangles varying heavily in these characteristics.
Yet, if your competitor achieves nearly your speed with just 256-bit and beats you by 30% with the same 384-bit, you might consider that it's not bandwidth causing it.

I think the main reason 512-bit made sense is that enthusiasts will demand 4-8GB to match upcoming consoles, and when you have that many chips on the board then you aren't saving much by going 256-bit. The marginal video card cost for 512-bit over 256-bit is probably well below 10%, and I think that much of a performance boost is more than likely.
You mean, like having 6GB, just as the Titan has, with 384-bit?

Designing a 512-bit controller just to get more memory sounds somehow wrong.
 
~15-20% faster than a Titan? That's a very nice and impressive chip, being 25% smaller, but it's not the kind of knock-out blow I'd expect it to be, especially with a 512-bit bus. Enable all SMs on GK110, increase the clocks here and there, lower the price obviously, and the real-life differences will be as significant as they are right now (that is: not very much.)

I really don't get why AMD doesn't go for the undisputed crown: their GCN architecture is more area efficient than Kepler's (at the very least for non-crippled DP versions). Why hold back?

There's a 512-bit bus, but only 5GT/s memory, so not that much bandwidth.

As for why AMD holds back, it may just be that they're at 250W as it is and they see little point in adding more silicon. Plus, 20nm shouldn't be too far in the future now, maybe that was a factor too.
 
If these rumors are correct, they have a nice piece on their hands. With future drivers, performance will be even higher, especially at high resolutions and under heavy load. My only concern, though it probably affects only a few users, is the availability of out-of-the-box downsampling and flexible driver-forced (SS)AA, although the more we head toward DX11 titles, the more even they are on that part.
 
the availability of out-of-the-box downsampling and flexible driver-forced (SS)AA, although the more we head toward DX11 titles, the more even they are on that part.
Downsampling is one of the key areas they need to improve; the lack of support for it is saddening. I hope they address it over the next few weeks.

Also, since it's kind of related: is it possible for Nvidia or AMD to add a feature that downsamples and, if the framerate is going to drop below, say, 60, drops the resolution back to native until it can apply the downsample again and keep the framerate above 60, instead of dropping frames? Pretty much like what console games are doing, except with downsampling.
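A hypothetical sketch of that idea (everything here is made up for illustration: the thresholds, the ~4x cost assumption for 2x2 downsampling, and the function itself). The driver would watch the framerate and switch the render scale with some hysteresis so it doesn't flip back and forth every frame:

```python
# Made-up illustration of the dynamic-downsampling idea, not a real driver API.
TARGET_FPS = 60.0
HEADROOM = 1.10  # require 10% margin before re-enabling downsampling

def choose_render_scale(current_fps, scale):
    """Pick the next render scale: 2.0 = render at 2x2 native and downsample,
    1.0 = native. Assumes 2x2 downsampling costs roughly 4x the pixels."""
    if scale > 1.0 and current_fps < TARGET_FPS:
        return 1.0  # falling below target: drop back to native, don't drop frames
    if scale == 1.0 and current_fps >= TARGET_FPS * 4 * HEADROOM:
        return 2.0  # enough headroom to absorb the ~4x pixel cost
    return scale
```

The hysteresis margin is the important part; without it the scale would oscillate whenever the framerate hovers near the threshold.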

Actually, now that I think about it, doesn't changing your screen resolution require a soft reset?
 
I think CarstenS was talking about the basic specs (shader count, bus width, maybe ROP count). Otherwise I'm pretty sure it carries all the GPGPU features that Tahiti has, and probably a few new ones, as well as an improved front end, etc.

There were also rumors that AMD would announce new FirePros based on Hawaii on the 25th.
 