AMD: Southern Islands (7*** series) Speculation/Rumour Thread

Well, 2012 is the year in which many people will upgrade: Windows 8, Ivy Bridge, a new graphics card generation... Many of those who are going to upgrade a whole rig will wait at least until Windows 8 and Ivy Bridge are out. Nvidia should have its new cards launched by then; if they don't, they will have lost a great opportunity.
 
[Attached image: 2_7970_manfromatlantiqvpo6.jpg (leaked HD 7970 performance comparison chart)]

I will have to officially boycott this site for this graph. I am so tired of non-zero origins being used to exaggerate performance advantages. The whole point of a graph is to depict relative measures.

Instead of depicting a maximum advantage of about 1.6x, this graph makes it look like 4x.

Ugh.
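(As a back-of-the-envelope check, taking the roughly 1.6x maximum advantage at face value: with the axis origin raised to some baseline y0 instead of 0, the drawn bar ratio becomes (1.6 - y0) / (1.0 - y0), which reaches 4 when y0 = 0.8. In other words, the chart is effectively plotted with its origin around 0.8x rather than zero.)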
 

Wow, ban the guy already. Sounds a lot like another "graphics prophet" I know of. :p

Oh btw, yes the R700 has more than two times the shader power of the GTX 280, but performance? Nope... not even close... talk about a waste of resources.
GDDR5 is not going to happen till the refresh. There is misinformation out there right now, and it's not only the RAM.
Hmm, I don't think it is. The closest the RV770 chip gets to the GT200 (GTX 260) is 25%; given that the GTX 280 is 25% faster, a dual-chip solution gets close to the 280 in the best case. That's why it's priced cheaper than the GTX 280.
Well, performance-wise, think about this: close to the same clocks overall as the GTX/Ultra, with 2 times the SPs, a 33% wider bus (bandwidth), and 50% increased efficiency per clock. How does that translate into the GTX 260 being only 25% faster than an RV770, which is itself 25% faster than a 9800 GTX? Unless nV messed up a working design, that's the only way, which is unlikely.
Morgoth, the RV770 is going to be a lame duck if the X2 comes out late. The 55nm G92 will go up against it well and will be cheaper, and there will be 4 single-chip cards from nV this year that are faster than the RV770. You want to go up against me? I already have performance figures on both the RV770 (faster than the 8800 GTX/Ultra by around 15%) and the GT200 (125% faster than the GTX and Ultra). You won't be the first one I did this to; it will be fun.
 
I will have to officially boycott this site for this graph. I am so tired of non-zero origins being used to exaggerate performance advantages. The whole point of a graph is to depict relative measures.

Instead of depicting a maximum advantage of about 1.6x, this graph makes it look like 4x.

Ugh.

You do realize that it's the standard practice used by both IHVs for years and years, and has nothing to do with the site posting those IHV-provided numbers, right?
 
You do realize that it's the standard practice used by both IHVs for years and years, and has nothing to do with the site posting those IHV-provided numbers, right?

Yes, of course, but I still hate it and it certainly should never appear in a supposed "review" of any sort.
 

The Skyrim performance over a GTX 580 isn't very good at all.

BF3 is a little better at 2560x1600, but then again, mine runs at 920 core / 2225 mem, so I'll be really curious to see how well the 7970 overclocks. Going by these benchmarks, my OC'd GTX easily beats the 7970 in Skyrim and likely goes toe to toe in BF3.

For me, those are the only 2 games I'm playing on the PC at the moment. I'll start up Batman: AC at some point (the GTX 580 does quite well there already) and Dota 2 when it's released. Dota 2 won't stress the GPU at all.

I basically ignore all benchmarks that don't relate to the games and resolution I'm playing at. Makes choosing products a lot easier.
 

It's a bit curious that AMD would keep the same TDP as Cayman's, even though the latter was a 32nm design back-ported to 40nm. I was sort of expecting them to go back to a slightly more reasonable level, like 200~220W or so. Then again, maybe that job will fall to the 7950.
 
[Attached image: 4_7970_manfromatlantio8okn.jpg (leaked HD 7970 benchmark chart)]


Why is there no MSAA in either of these? This batch of benchmarks looks way better than what previous rumours suggested, but it sure smells like AMD's internal testing, cherry-picking the most impressive results.
 
It's a bit curious that AMD would keep the same TDP as Cayman's, even though the latter was a 32nm design back-ported to 40nm. I was sort of expecting them to go back to a slightly more reasonable level, like 200~220W or so. Then again, maybe that job will fall to the 7950.

Max power should be the overclocked PowerTune setting, like on the 6990... in games it is actually more like ~200W.

The Skyrim performance over a GTX 580 isn't very good at all.

BF3 is a little better at 2560x1600, but then again, mine runs at 920 core / 2225 mem, so I'll be really curious to see how well the 7970 overclocks. Going by these benchmarks, my OC'd GTX easily beats the 7970 in Skyrim and likely goes toe to toe in BF3.

For me, those are the only 2 games I'm playing on the PC at the moment. I'll start up Batman: AC at some point (the GTX 580 does quite well there already) and Dota 2 when it's released. Dota 2 won't stress the GPU at all.

I basically ignore all benchmarks that don't relate to the games and resolution I'm playing at. Makes choosing products a lot easier.

Internal OC results for the 7970 are 1.2GHz GPU and 6500MHz memory...
 
It's a bit curious that AMD would keep the same TDP as Cayman's, even though the latter was a 32nm design back-ported to 40nm. I was sort of expecting them to go back to a slightly more reasonable level, like 200~220W or so. Then again, maybe that job will fall to the 7950.

Remember that it's the "PowerTune number", though ;)
With the default 0% PowerTune setting, for example, the 6970 sits at ~210W TDP; it's possible the 7970 has more than +20% of PowerTune headroom, and thus a "0% PowerTune TDP" under 210W.

Max power should be the overclocked PowerTune setting, like on the 6990... in games it is actually more like ~200W.
The 250W on the 6970 is the PowerTune +20% setting, too ;)
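(Applying the same reasoning to the leaked number: if the 7970's quoted 250W is likewise a +20% PowerTune cap, the default 0% figure would be roughly 250W / 1.2 ≈ 208W, i.e. about the same as the 6970's ~210W and consistent with the ~200W-in-games estimate above.)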
 
Not bad; based on these early numbers it has a solid 40% average lead on the 580, up to 60% in some cases. That should improve with drivers too. Even if Kepler is the bee's knees, it's not coming anytime soon, so AMD should clean house with the 7970 even north of $400.
 
Can someone explain in simple terms why AMD did not increase the number of ROPs, if ROPs are being seen as an obvious bottleneck for performance?
 
Can someone explain in simple terms why AMD did not increase the number of ROPs, if ROPs are being seen as an obvious bottleneck for performance?

It depends on how the ROPs are made; ROPs are not a bottleneck in 2011.

I have read that in GCN the ROP clusters are no longer tied to the memory bus, but I really don't know if that is right.
 
AMD ZeroCore Power

Enabling the World’s Most Power Efficient GPUs
When a discrete GPU is in a static screen state, it works to minimize idle power by enabling a host of active power-saving functions, including (but not limited to) clock gating, power gating, and memory compression. However, GPUs with AMD's exclusive ZeroCore Power technology can take energy savings to entirely new heights by completely powering down the core GPU while the rest of the system remains active.

Nearly all PCs can be configured to turn off their displays after a long period of inactivity. This is known as the long idle state, where the screen is blanked but the rest of the system remains in an active and working power state (ACPI G0/S0). As soon as the system goes into the long idle state and applications are not actively changing the screen contents, the GPU enters the ZeroCore Power state. In the ZeroCore Power state, the GPU core (including the 3D engine / compute units, multimedia and audio engines, displays, memory interfaces, etc.) is completely powered down.

However, one cannot simply remove the GPU and its associated device context completely, particularly when it is the only GPU in the system, as is the case on many enthusiast platforms. The operating system and SBIOS must still be aware that a GPU is present in the system. For this reason, the ZeroCore Power state maintains a very small bus control block to ensure that the GPU context is still visible to the operating system and SBIOS. The ZeroCore Power state also manages the power sequencing of the GPU to ensure that the power up/down mechanism is self-contained and independent of the rest of the system.

The enablement of the ZeroCore Power feature is controlled by the driver. The driver monitors the display contents and allows the GPU to enter the ZeroCore Power state on the condition that the GPU enters long idle and no further work requests are being submitted to the engine. If any applications update the screen contents, ZeroCore Power technology can periodically wake the GPU to update the framebuffer contents and then put the GPU back into the ZeroCore Power state. Furthermore, applications such as Windows 7 desktop gadgets are architected to minimize activity and save power in the long idle state. These applications are active during screen-on mode to display dynamic content such as weather, RSS feeds, stock symbols, system status, etc., but also have the intelligence to suspend any updates and activity when the system enters long idle. These applications will not wake the GPU from the ZeroCore Power state in long idle.

AMD ZeroCore Power technology delivers tremendous energy savings. Many PCs remain in the long idle state for a variety of use cases that are highly relevant to everyday consumers, enthusiasts and professionals. In ZeroCore Power mode, users can still enjoy non-graphics activities such as file serving/streaming, motherboard audio and music, and remote access while the GPU core is essentially powered off.

ZeroCore Power technology also scales with AMD CrossFire™ technology. On an AMD CrossFire platform, all non-primary GPUs are in the ZeroCore Power state when not in use. For AMD CrossFire workloads, the driver will engage the non-primary GPUs to deliver full performance on demand. The primary GPU can also enter the ZeroCore Power state during long idle. This delivers scalable benefits for the enthusiast. First, it effectively removes the power increase for users of AMD CrossFire under idle and light (single-GPU) workloads. Second, it removes the noise penalty multi-GPU users have always had to endure, since the GPU fans are off in ZeroCore Power mode.
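To make the enter/exit behaviour described above a bit more concrete, here is a minimal, purely illustrative sketch of the kind of decision logic the driver is said to implement. The class, method names, states and timeout below are invented for illustration; this is a toy Python model under those assumptions, not AMD's driver interface.

import time

class GpuPowerModel:
    """Toy model of a GPU whose core can be fully powered down in long idle."""

    ACTIVE, IDLE, ZEROCORE = "active", "idle", "zerocore"

    def __init__(self, long_idle_timeout_s=300.0):
        # Roughly corresponds to the OS display-off (long idle) timer.
        self.long_idle_timeout_s = long_idle_timeout_s
        self.last_work = time.monotonic()
        self.state = self.ACTIVE

    def submit_work(self):
        """Any 3D/compute/display update request wakes the core."""
        self.last_work = time.monotonic()
        self.state = self.ACTIVE

    def tick(self, screen_blanked):
        """Periodic 'driver' check: decide which power state to sit in."""
        idle_for = time.monotonic() - self.last_work
        if screen_blanked and idle_for >= self.long_idle_timeout_s:
            # Long idle and no pending work: power off the whole core
            # (3D/compute, multimedia, displays, memory interfaces), keeping
            # only a small bus-control block so the OS/SBIOS still sees the GPU.
            self.state = self.ZEROCORE
        elif idle_for > 1.0:
            # Ordinary idle: conventional clock/power gating, core stays on.
            self.state = self.IDLE
        else:
            self.state = self.ACTIVE
        return self.state

# Example: a framebuffer update (e.g. a gadget refresh) briefly wakes the GPU,
# after which the driver can drop it back into the ZeroCore state.
gpu = GpuPowerModel(long_idle_timeout_s=0.1)
time.sleep(0.2)
print(gpu.tick(screen_blanked=True))   # -> "zerocore"
gpu.submit_work()                      # screen content changed
print(gpu.tick(screen_blanked=True))   # -> "active", then back to zerocore later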
 
Can someone explain in simple terms why AMD did not increase the number of ROPs, if ROPs are being seen as an obvious bottleneck for performance?
I don't think so. AMD stated that a hypothetical Barts with only 16 ROPs but 2 more SIMDs would perform almost exactly the same as the real one (32 ROPs). If 16 ROPs were enough for Barts, 32 should be enough for Tahiti. It makes even more sense to prefer CUs over ROPs because of their higher impact on compute workloads.
 