AMD: Southern Islands (7*** series) Speculation/ Rumour Thread

Little odd that both the 7970 and 7950 slides there advertise the 7970's TFLOPS at the bottom?

Or am I just being overly skeptical now?

Stupid marketing people!
 
It's 4 SIMDs, 1 CU :)

No, SIMDs are "gone", or rather renamed to Compute Units. I think 4 CUs = 1 Compute Group or something like that. Which is confusing, but necessary since 1 CU now comprises four 16-wide SIMDs… which do not mean the same thing as Northern Islands SIMDs at all… :LOL:
 
No, SIMDs are "gone", or rather renamed to Compute Units. I think 4 CUs = 1 Compute Group or something like that. Which is confusing, but necessary since 1 CU now comprises four 16-wide SIMDs… which do not mean the same thing as Northern Islands SIMDs at all… :LOL:

You might want to inform AMD. They seem to think Tahiti has 32 CUs with 4 SIMDs each (see their slides) :)

Oh you meant what's disabled - yeah 4 CUs.
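For what it's worth, here's a quick back-of-the-envelope sketch of where the advertised TFLOPS number would come from, assuming the leaked figures (32 CUs, four 16-wide SIMDs per CU, a clock around 925 MHz); treat all of those as rumoured, not confirmed:

```python
# Hypothetical Tahiti figures from the leaked slides; none of this is confirmed.
cus = 32                  # compute units (28 for the rumoured cut-down 7950)
simds_per_cu = 4          # each CU holds four 16-wide vector SIMDs
lanes_per_simd = 16
clock_ghz = 0.925         # rumoured engine clock

alus = cus * simds_per_cu * lanes_per_simd
# 2 FLOPs per ALU per clock (multiply-add)
sp_tflops = alus * 2 * clock_ghz / 1000

print(f"{alus} ALUs -> {sp_tflops:.2f} TFLOPS single precision")
# -> 2048 ALUs -> 3.79 TFLOPS single precision
```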
 
Wow, did anyone notice the exact transistor count?
Maybe it's a joke about the "slightly" miscounted figures for BD?

It's actually slightly smaller than the gap between 6870 and 6850 (28/32=87.5%, 12/14=85.7%).

It's also possible the relative difference in core clock speed will be smaller in return; I'd expect 850 +/- 25 MHz for the 7950.
I think, as usual, an overall performance difference of 20% or so is to be expected. So yes, 28 CUs instead of 32 plus 10% lower clocks should about do it.
I suspect, though, that the 7950 won't be quite as cheap as some people are hoping, at least for the time being. Maybe 550 for the 7970, 450 for the 7950 3GB and 400 for the 7950 1.5GB later (as everything indicates even the 7950 is 3GB for now)? If it's ~20% slower than the 7970 it should still easily beat the GTX 580, after all.
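A rough way to sanity-check that ~20% figure, assuming performance scales with CU count and clock (it won't do so perfectly, so treat this as a ballpark; the clocks are just the guesses from above):

```python
# Hypothetical clocks: 925 MHz rumoured for the 7970, 850 MHz guessed above for the 7950.
cu_ratio = 28 / 32          # enabled compute units
clock_ratio = 850 / 925     # guessed engine clocks
relative_perf = cu_ratio * clock_ratio

print(f"7950 at roughly {relative_perf:.0%} of a 7970")   # ~80%, i.e. ~20% slower
```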
 
Hey, is this thread about GCN or the quality of AMD's drivers? I forget. Can any of you remind me?

Isn't the driver quality a very important part of the GPU purchase? I mean who cares if the thing can pull off 1,000,000,000,000 jiggaflops if the whole world looks like Quake 1 for a second every time I turn around?

I am not trolling, this is a serious issue and ignoring or denying it won't improve the situation.

Have we heard any specs on Pitcairn? It appears that thing will turn out to be a little monster.
 
Charlie @SA says that Intel confirmed PCI-E 3 on SB-E was working by using prototype AMD boards.

This is either Charlie's bad joke, or he's just trying to camouflage something by misinterpreting the words... I'm familiar with his "one pebble could start an avalanche" credo.

So we're supposed to believe that "some prototype" SB (as in Sandy Bridge) fully supported PCIe 3.0 with *AMD prototype boards*... Isn't SB PCIe 2.0 ONLY? Not even PCIe 2.1, like the 890GX/890FX and newer AMD chipsets are. And shouldn't the 990FX/970 already support PCIe 3.0? (At least that was DAMN's promise about a year and a half ago, when they released Thuban into the mainstream and focused their marketing blurb on next-gen products.)


PCIe 2.0 versus 1.0 is barely relevant for most of the AIB graphics market.
Unless an x16 slot is being split for multi-card setups (and even then it may take trifire to notice), it is not significant.
With the doubled rate in 2.0, the saturation point is even further away. If AMD cards support 2.1, it may simply be so that they match the broadest range of platforms possible on their marketing checkboxes.

It ain't all about raw bandwidth ;) AFAIK every new PCIe revision is like every new DDRx revision: it comes with an updated electrical specification, which enables better and simpler integration of the technology into new chipsets.

Let's take DDR3 as an example, not to go too far back on the timeline. The first DDR3 modules used 1.9V, which is even more than the standard DDR2 specification, to achieve PC3-10600 rates (in 2008, with X48 chipsets). When Nehalem arrived, it required DDR3 modules that don't exceed 1.65V for everyday use, and Westmere(-EX) was even more susceptible to high DDR3 voltages, which could damage its MC, simply because it was a 32nm process node. Nowadays we have chips/modules that can easily run as DDR3L at 1.35V (@1600MHz) and DDR3U at 1.25V (@1333MHz).

So PCIe 3.0 is a natural evolution for 2012 chipsets, which need to be produced on 40nm or even 32nm/28nm, and probably on smaller nodes in the future. From that PoV, the old PCIe 2.0 electrical specs would probably have made those chipsets fail on their PCIe links in the long run, as servers are usually planned to last, say, 5 years, not to be swapped out every few months the way some mainstream dudes do with their hardware.

As I see from this presentation http://www.pcisig.com/developers/ma...c_id=f1b037f65a12f0cd26df6f1e54f1091002fc8590 it's far from my simple memory example, but the idea is the same ;) They're cutting power needs while retaining compatibility with old hardware, something that isn't needed for memory, since SDR/DDR/DDR2/DDR3/DDR4 don't mix together very well, nor was that ever required of their predecessors.

btw. PCIe 3.0 DOESN'T double the raw signaling rate: it's only 8.0 GT/s, up from the 5.0 GT/s we had on PCIe 2.0 (it's the switch from 8b/10b to 128b/130b encoding that brings the effective per-lane bandwidth close to double).
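To make that concrete, here's a small per-lane throughput sketch; the rates and encodings are as I understand the public PCIe specs, not anything from the slides discussed here:

```python
# Per-lane PCIe throughput sketch; rates and encodings as I understand the
# public specs, so double-check before quoting these numbers anywhere.
pcie_gens = {
    # generation: (GT/s, payload bits, total bits per symbol)
    "1.x": (2.5, 8, 10),     # 8b/10b encoding
    "2.0": (5.0, 8, 10),     # 8b/10b encoding
    "3.0": (8.0, 128, 130),  # 128b/130b encoding
}

for gen, (gts, payload, total) in pcie_gens.items():
    usable_gbps = gts * payload / total    # usable Gbit/s per lane after encoding
    per_lane_gbs = usable_gbps / 8         # GB/s per lane
    print(f"PCIe {gen}: {per_lane_gbs:.2f} GB/s per lane, "
          f"{per_lane_gbs * 16:.1f} GB/s for an x16 slot")
```

So the jump from 2.0 to 3.0 lands at roughly 0.5 GB/s -> ~0.98 GB/s per lane even though the signaling rate only goes from 5.0 to 8.0 GT/s.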


Hmm, RV670 was 192mm2 and featured a 256-bit bus on 55nm. Could they really have so much trouble fitting a 384-bit bus on a ~3XXmm2 die on 28nm?

There are some legends saying that the GDDR5 used from the RV700 generation onwards needs a far more complex IMC than the one on RV670, which supported GDDR3/4 only. And those are probably true, as GDDR5, at least per spec, needs to use differential lanes to work successfully with 8 Gbps+ chips; yet those weren't used on any GPU board design AFAIK... So just maybe R800-Cypress/R900-Cayman have that kind of controller, with full support for differential GDDR5 signaling, as a last resort in case they should ever need it.

And ATi/AMD simply didn't feel like they needed to redesign the IMC, while the "newly announced brat in town" Cayman was the same old 40nm design with a few desperately needed new tricks... so they preferred to improve their lunch & sleep costs (L&S costs) instead of the R&D ones. And I have the strangest feeling that I might have the right hunch. After all, they didn't do much for NI, just heavily redesigned the arbiters, a.k.a. co-processors/schedulers, and adapted them for VLIW4 data flow... they thought that was enough... and for Barts they just cut out the "redundant parts" (redundant per DAMN, of course) so they could have a smaller, more competitive product (per their marketing team), using improved co-processors that were still VLIW5 but designed as a prototype for Cayman's VLIW4 DPD data engine.


I just see no evidence the memory manufacturers can actually produce 7 Gbps chips, despite claiming to be able to three years ago.
You'd certainly think they'd have 6 Gbps chips using the standard 1.5V voltage, instead of the factory-overvolted 1.6V parts, if it were so easy. I bet there IS demand for such chips (as well as for higher-speed-grade 1.35V parts for mobile products).
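For some context on why the speed grade matters less once the bus gets wider, here's a quick bandwidth sketch; the speed grades and the 384-bit figure are just the rumoured/claimed numbers from this thread, used purely for illustration:

```python
# Memory bandwidth = bus width (bits) * data rate per pin (Gbps) / 8.
# Speed grades and bus widths below are rumoured/claimed figures, not confirmed specs.
def mem_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

configs = [
    ("256-bit @ 5.5 Gbps", 256, 5.5),
    ("256-bit @ 7.0 Gbps (the claimed chips)", 256, 7.0),
    ("384-bit @ 5.5 Gbps (rumoured Tahiti)", 384, 5.5),
]
for name, width, rate in configs:
    print(f"{name}: {mem_bandwidth_gbs(width, rate):.0f} GB/s")
# 176, 224 and 264 GB/s respectively: the wider bus with slower chips still wins.
```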

I think you're digressing quite a bit here. Tell me how many products actually use those 6 Gbps x32 chips: only the HD 6900 series. AFAIK the 6 Gbps (1.6V) chips and those 5.2-5.5 Gbps 1.5V chips accompanying Barts (HD6800), Cypress (HD5800) and Juniper (HD5700/HD6700) are the same chips, just with the 6 Gbps ones acclaimed and verified by the manufacturer as able to work stably @1.6V. Probably the same would be possible for the 5.5 Gbps 1.5V parts, but maybe the memory vendors just lock their voltage @1.55V so they can't be "misused on the wrong PCB".

And why haven't we seen 7 Gbps modules used on cards since Oct 2009, when Samsung and Hynix started bragging about them? Probably because they needed to sell older products, and most of those cards weren't sold in large enough quantities before HD5800/HD5700 started going EOL. So the memory vendors need another year to deplete old stocks, and Nvidia doesn't use these "high-speed parts" on its graphics PCBs anyway, preferring cheaper 4 Gbps parts across its entire top-end GTX 580/570 and mainstream GTX 560/Ti/550 Ti lineup, while the lower end still uses plain DDR3 in both camps ;)

Why still only 1.5V-1.6V? Because the 40nm process node and the IMC on the HD5/HD6 series could cope with it. For 28nm I don't believe anyone would be insane enough to use modules with voltages higher than 1.35V-1.4V, so we could easily see 7/8 Gbps x32 modules accompanying the newer 28nm GPU designs.

GDDR5 is nothing more than heavily tweaked and optimized DDR3, and there are plenty of 1.35V low-power DDR3 modules that only reached the market this year (never mind that some of them were announced more than 18 months ago). As Bulldozer-based chips arrived with an improved IMC that can exploit a dual-channel 1866 MHz memory setup, newer servers can now gain in both density and power by using more 4Gb LP chips (1600MHz+). Even though I personally think they'd get a better deal with 8Gb 1.5V standard chips (1333MHz), which are also available but probably at a higher price than using two 4Gb @1.35V chips ;)
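For a rough sense of scale on that dual-channel DDR3-1866 setup, here's a ballpark peak-bandwidth figure (standard 64-bit channels assumed):

```python
# Peak bandwidth of a dual-channel DDR3-1866 setup, assuming standard
# 64-bit channels; purely a ballpark figure.
channels = 2
channel_width_bytes = 8      # 64-bit channel
transfer_rate_mts = 1866     # DDR3-1866

peak_gbs = channels * channel_width_bytes * transfer_rate_mts / 1000
print(f"~{peak_gbs:.1f} GB/s peak")   # ~29.9 GB/s, well below the GDDR5 card figures above
```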
 
This is either Charlie's bad joke, or he's just trying to camouflage something by misinterpreting the words... I'm familiar with his "one pebble could start an avalanche" credo.

So we're supposed to believe that "some prototype" SB (as in Sandy Bridge) fully supported PCIe 3.0 with *AMD prototype boards*... Isn't SB PCIe 2.0 ONLY?
Sandy Bridge-E has Gen 3 capabilities. And, yes, it is a very important part of the process to ensure that products that are introducing new technologies and standards interop with each other well.
 
Isn't the driver quality a very important part of the GPU purchase? I mean who cares if the thing can pull off 1,000,000,000,000 jiggaflops if the whole world looks like Quake 1 for a second every time I turn around?

I am not trolling, this is a serious issue and ignoring or denying it won't improve the situation

No one is saying we have to ignore the situation. But this thread (and sub-forum) strictly deals with 3D architectures. We don't have to worry about those "silly" software limitations here (besides, what do those dumb programmers know?). :D I think we can all agree that the "3D Architectures & Chips" sub-forum is not the appropriate place to discuss software (in this case, drivers).

I strongly urge you to create any new topic (in the correct sub-forum) that you feel is worth discussing.
 