Clock speed?
This is most likely the cause: the 7850 1GB used was clocked at 860 MHz core, whilst the 7790 was clocked at 1075 MHz core; that is an enormous difference.
7850 @ 860 MHz = 1.76 TFLOPS
7850 @ 1075 MHz = 2.2 TFLOPS
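For what it's worth, those throughput figures are just shader count × 2 FLOPs per clock × core clock. A quick sketch, using the published shader counts (1024 for the 7850, 896 for the 7790):

```python
# Theoretical single-precision throughput: shaders * 2 FLOPs/clock * clock (GHz) = GFLOPS
def gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

print(gflops(1024, 0.860))   # HD 7850 at its stock 860 MHz      -> ~1761 GFLOPS (~1.76 TFLOPS)
print(gflops(1024, 1.075))   # HD 7850 at the 7790's 1075 MHz    -> ~2202 GFLOPS (~2.2 TFLOPS)
print(gflops(896, 1.075))    # HD 7790 (896 shaders) at 1075 MHz -> ~1926 GFLOPS
```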
Clock speed?
It's the OC model of the 7790, but it still looks like Sleeping Dogs doesn't need memory bandwidth that much?
"Spanking" ? Is that the official performance gap, or the AMD Intern approved measurement?
TPU ~5% @ 1920x1080
Minimal difference recorded by TR, and Anand basically shows what was already known: that games with a DirectCompute component tip the scales in favour of the GCN arch.
The pricing and free game seem to be the biggest differentiators between the two cards... and both of those evaporate against AMD's own product stack when you have 7850s retailing for $160.
Orbis still bests it by a pretty comfortable margin, especially in fillrate. Durango should be roughly on par: it's at a disadvantage on paper, but in a console it will obviously outperform this card by quite a margin in reality.
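For the fillrate point, a rough sketch using the console GPU specs that were circulating at the time (Orbis: 18 CUs / 32 ROPs at 800 MHz; Durango: 12 CUs / 16 ROPs at 800 MHz) against the 7790's 14 CUs / 16 ROPs; treat the console figures as unconfirmed:

```python
# Pixel fillrate = ROPs * core clock; compute = CUs * 64 shaders * 2 FLOPs/clock * clock.
def pixel_fill_gpix(rops, clock_ghz):
    return rops * clock_ghz

def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(pixel_fill_gpix(16, 1.0),   tflops(14, 1.0))    # HD 7790 (reference 1 GHz; the OC card here runs 1075 MHz): 16 Gpix/s, ~1.79 TFLOPS
print(pixel_fill_gpix(32, 0.800), tflops(18, 0.800))  # Orbis (rumoured specs):   ~25.6 Gpix/s, ~1.84 TFLOPS
print(pixel_fill_gpix(16, 0.800), tflops(12, 0.800))  # Durango (rumoured specs): ~12.8 Gpix/s, ~1.23 TFLOPS
```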
I suspect that in a year's time a refresh of this with 2GB of VRAM will be the minimum for a gaming graphics card. Perhaps they can make an APU with a GPU of this level in late 2014?
The 650Ti Boost should still have quite an advantage due to the 192-bit/24-ROP configuration: since the memory clock increased vs. the normal 650Ti, that's still roughly a 50% bandwidth advantage over the 650Ti AMP (and over the 7790). So the 650Ti Boost should really be faster and closer to a GTX 660. In perf/power, though, it will look rather sad compared to the 7790... The funny thing is that the 650Ti AMP can largely be used as a rough indication of the upcoming Nvidia 650Ti Boost (where the OC models will pack a 993 MHz base and 1050+ MHz boost clock; 1059 MHz on the Zotac one, rated at 140W).
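Roughly where that bandwidth gap comes from, assuming the commonly quoted memory clocks (5.4 Gbps on the reference 650Ti, 6 Gbps on the 7790 and the 650Ti Boost):

```python
# Memory bandwidth in GB/s = (bus width in bits / 8) * effective memory clock in Gbps
def bandwidth_gbs(bus_bits, eff_clock_gbps):
    return bus_bits / 8 * eff_clock_gbps

print(bandwidth_gbs(128, 5.4))  # GTX 650 Ti (reference): ~86.4 GB/s
print(bandwidth_gbs(128, 6.0))  # HD 7790:                ~96.0 GB/s
print(bandwidth_gbs(192, 6.0))  # GTX 650 Ti Boost:       ~144 GB/s (~50% over the 7790)
```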
I don't see why Durango or Orbis will see particularly higher real-world performance than this GPU in a PC. They are exactly the same architecture, so any optimizations in the consoles will translate directly to this GPU, and API overhead is much lower in DX11 (which will be the standard) than it was in DX9, which is what the "old" overhead arguments are based on.
While DX11 is an improvement, my understanding is that there's still a lot of API crud in the Windows world, not to mention PCIe latencies, etc.
Does somebody know why AMD avoids a 192-bit bus?
I mean, the card comes really close to the HD 7850; with more bandwidth it would be even better while still being cheaper and more competitive (making it tougher for NV to compete).
I see something a bit weird here as far as product placement is concerned. I would have thought that AMD could have "Barts-ized" their previous line: replace everything between the HD 7700 and the HD 7850 with this new GPU, and soon after slot a product in between Pitcairn and Tahiti (discontinuing Pitcairn altogether, replaced on the low end by something cheaper to produce and on the high end by something significantly cheaper than Tahiti). A wider bus would have come in handy there.
Not that I'm implying there is anything wrong with the product, it seems to perform great, but I would think that AMD has a good shot at repositioning their lineup (and releasing the 87xx and 88xx as "Barts-type" products).
Maybe because there's no native 192-bit memory controller, same as there's no native 320-bit one? Nvidia just disables part of a larger memory controller; it is not a true 192-bit controller, they just disable one channel. If a chip has a 256-bit memory controller, you can disable one part and it becomes 192-bit. Nvidia takes a chip that wasn't meant for this price range and dials back its performance to make it fit, without it competing with the cards that use the same chip at full spec.
AMD could have used Pitcairn and just released a cut-down variant as the 7790, but it seems they had other plans. Nvidia goes into the fight with their bigger chip and disables some parts. OK, that's fair play ("c'est de bonne guerre", as we say in French).
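A minimal sketch of the channel arithmetic described above: GDDR5 controllers are built from 64-bit channels, so a "192-bit" part is usually a wider controller with one channel switched off.

```python
# Bus width from the number of active 64-bit memory channels.
CHANNEL_BITS = 64

def bus_width(active_channels):
    return active_channels * CHANNEL_BITS

print(bus_width(4))      # full 256-bit controller
print(bus_width(4 - 1))  # one channel disabled -> 192-bit
print(bus_width(6))      # a native 384-bit controller (six channels), as on Tahiti
```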
Does somebody know why AMD avoids a 192-bit bus?
Hmm, the 7790 has more transistors but in the end shows performance that is either lower than or on par with the 6870.
They don't want it outperforming the 7850 at $150. There is no mystery: they slotted in a part to fill a gap, and they don't want it cannibalizing sales of their $200 part.
Note the 7790 also handles the 6870 in some newer titles like Sleeping Dogs and BF3. In a couple of years I wouldn't be surprised if the 7790 looks much stronger.
There are native 384-bit controllers. I'm sure 192-bit ones would be no more challenging to make. But AMD probably figured it wasn't worth the extra cost, and considering how Bonaire performs, I think they were right.
Have you ever seen Nvidia build this type of controller? His question was why AMD doesn't use this type of controller.
I think I have answered him: those controllers have never been developed, because the results are poor. Nvidia just uses them on cut-down chips, to reduce the performance of the salvaged SKU compared with the original.
AMD doesn't do it because they don't need to.
I think NVIDIA's GK106 (found in the GTX 660 Ti) has a native 192-bit interface, and it performs quite well. It makes it a bit awkward to manage memory capacities if you want powers of 2, but otherwise it's fine.
You will need to dig up your source (or maybe, if you're right, I should do it myself, so I'll leave the door open in case I'm wrong), but it is not a native 192-bit interface. The 660 Ti uses the same chip as the 670/680 (GK104), with the same memory controller and some parts disabled.
The GTX 660 Ti uses GK104 as its base, but not the GTX 660 (which uses GK106).
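On the "awkward memory capacities" point a few posts up: with three 64-bit channels, the symmetric capacities are multiples of 1.5 GB, so a 2 GB card has to load one channel more heavily than the other two (which, as far as I know, is what Nvidia does on some 192-bit boards). A rough sketch:

```python
# With a 192-bit bus (three 64-bit channels), spreading memory evenly gives
# 1.5 GB, 3 GB, ...; hitting 2 GB means one channel carries more memory than the others.
CHANNELS = 3

def symmetric_capacity_gb(gb_per_channel):
    return CHANNELS * gb_per_channel

print(symmetric_capacity_gb(0.5))  # 1.5 GB, evenly spread
print(symmetric_capacity_gb(1.0))  # 3.0 GB, evenly spread
print(0.5 + 0.5 + 1.0)             # 2.0 GB, asymmetric: one channel has double the memory
```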