AMD: Volcanic Islands R1100/1200 (8***/9*** series) Speculation/ Rumour Thread

Oh for crying out loud, bring it on already, AMD! Next week we have the Apple iPad event on October 22nd, and all the PR thunder will be lost for the Red Team. :runaway:

Worse: now that results are out, they're giving nVidia enough time to make a surprise presentation with adjusted prices in the 760/770/780/Titan range and introduce the 780 Ti.
Just in time for the reviewers to put up price/performance comparisons against the updated range from nVidia in the R9 290 reviews.
 
I remain unconvinced that a week or two delay ultimately matters. Here is my perspective:

For those who are card-carrying members of Team Red or Team Green, I cannot fathom why another week of wait will affect their card-carrying status either way. If you have deified your favorite vendor and their products, then the Next Best Thing by your preferred vendor is what you're most likely to purchase. If you disagree, you probably fit into the next category, which is...

I see the open-minded allegiance types to be "swing voters", preferring one vendor but waiting to see if the other guy can make a good argument. I surmise that those people are (in my opinion) probably more apt to wait and see the competition before making their vote. These are the folks who are ready to vote (with their wallet!) but want to know both contenders and their arguments for why they're better.

The remaining individual purchasers are the "fence sitters" who just kinda hang out and wait for whatever bang for the buck makes sense to them. I see these as pragmatists, and they aren't likely to be the ones buying in the first month as they wait for a good bargain at their favorite e-tailer.

Even if you wanted to swing the argument towards IHVs and VARs like HP, Dell, Asus, Acer, Lenovo and the like, they already know what's up at this point and have probably already made their decision.

So, outside of rabid forum-goers and their crying for more data, I can't see how the delay ultimately impacts the bottom line.
 
Nah, you're over-thinking this. I'm just plain out impatient, thats all. :p
 
All that I'm waiting for is the big Jawed, and accompanying brain gang, final results shakedown discussion to begin!
 
http://wccftech.com/amd-radeon-r9-2...aii-gpus-gaming-synthetic-benchmarks-exposed/

[Image: Radeon-R9-290-Series-Overclocked-Performance benchmark chart]

$649 R9 290X vs. $999 GeForce Titan
 
Cocktail-napkin musing about the leaked memory bus widths and speeds, based on the Hynix data sheet.

A 384-bit bus at 6.0 Gbps is 288 GB/s.
A 512-bit bus at 5.0 Gbps is 320 GB/s.
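
As a sanity check, both figures fall out of the same arithmetic: bandwidth in GB/s is bus width in bits, divided by 8 bits per byte, times the per-pin data rate in Gbps. A quick sketch:

```python
# GDDR5 peak bandwidth from bus width and per-pin data rate:
# GB/s = (bus width in bits / 8 bits per byte) * Gbps per pin
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits / 8 * gbps_per_pin

print(bandwidth_gbs(384, 6.0))  # 288.0
print(bandwidth_gbs(512, 5.0))  # 320.0
```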

P = f*C*V^2
Since I don't know the capacitance or the other factors that go into calculating memory bus/controller power, this makes the (hopefully not too invalid) assumption that those factors are not wildly different between the options.

With that assumption, P ∝ V^2, so:
Bus width*V^2 = an abstracted relative power unit (PU)


Code:
Width*V^2   PU    BW(GB/s)   PU/GBs
384*1.5^2   864   288  3
512*1.5^2   1152  384  3
512*1.35^2  933   320  2.92
384*1.6^2   983   336  2.93
512*1.6^2   1311  448  2.93

In this scenario, the density gain for an interface that is close to 2x as area-efficient might be more important. The other constraints are a possible absolute power ceiling that makes the 512-bit 6.0 or 7.0 Gbps options unacceptable, and a refusal to regress in terms of bandwidth.

If we include a factor for clock speed differences, assuming a linear relationship:
P = (speed relative to 6.0 Gbps)*PU


Code:
(Speed/6)*Width*V^2   PU    BW(GB/s)   PU/GBs
(6/6)*384*1.5^2   864     288  3
(6/6)*512*1.5^2   1152    384  3
(5/6)*512*1.35^2  778     320  2.4
(7/6)*384*1.6^2   1147    336  3.4
(7/6)*512*1.6^2   1529    448  3.4
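
For anyone wanting to rerun these numbers, both tables reduce to the abstracted relation above, P ∝ (speed/6.0)*width*V^2. A sketch (PU is a relative unit, not watts, and voltages/speeds are the Hynix bin pairings assumed in the tables):

```python
# Abstracted relative power: PU = (gbps / 6.0) * width * V^2
# (capacitance and other per-design factors assumed roughly equal)
def pu(width_bits, volts, gbps):
    return (gbps / 6.0) * width_bits * volts**2

def bw(width_bits, gbps):
    return width_bits / 8 * gbps  # GB/s

configs = [(384, 1.5, 6.0), (512, 1.5, 6.0),
           (512, 1.35, 5.0), (384, 1.6, 7.0), (512, 1.6, 7.0)]

for width, volts, gbps in configs:
    p = pu(width, volts, gbps)
    print(f"{width}-bit @ {gbps} Gbps, {volts} V: "
          f"PU={p:.0f}  BW={bw(width, gbps):.0f} GB/s  "
          f"PU/GBs={p / bw(width, gbps):.2f}")
```

The 512-bit 5.0 Gbps row comes out at roughly 778 PU and 2.43 PU per GB/s, matching the table: more bandwidth than the 384-bit 6.0 Gbps baseline at a lower abstracted power cost.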

Not knowing what changes on the GPU side, the area and power savings could be more noticeable if the latter hand-wavy math works out.
Since the rest of the chip is growing in size, any power or power/perf savings would help, even if the peak memory capability is not a big improvement.

I'm curious if the drawbacks are the additional memory device costs and board complexity. Another thing might be the perimeter requirements, which might still be higher even though the overall PHY area is smaller.
 
I think Bitcoin miners could save substantial amounts of power (on Cypress/Tahiti) simply by under-clocking memory: tens of watts. This, despite the fact that mining barely uses off-die bandwidth. I think the holy grail was running the memory at idle clocks, while the ALUs were running at full speed (or overclocked).

Oh and FWIW, Tahiti's GDDR PHY is about 21% of the chip. Or, a bit more than 512 ALUs-worth of compute (including supporting circuitry). Or about 13 CUs.
 
You are right about BitCoin mining. I played with it on my HD 6970, and downclocking the memory to 300 MHz saved 40-60 W of total system power (if my memory serves me well).
 
Most GPU miners have either shut down their GPUs or moved on to Scrypt-based coins such as Litecoin, which do require memory bandwidth.
 
Maybe this has been posted already... but pconline apparently has a card and says their official review will be up on Oct 24th. So I guess that's when all the magic happens. :D

http://translate.google.com/translate?u=http%3A//diy.pconline.com.cn/363/3631256_6.html&hl=en&langpair=auto|en&tbb=1&ie=gbk
 