AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

GP106 boost: 1706MHz (1060 6GB)
GP107 boost: 1392MHz (1050Ti)
1706/1392 = 1.226, or, rounded to two significant digits, 23% higher clocks for the GP106.
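A quick sanity check of that arithmetic (a throwaway Python snippet; the two clocks are just the official boost figures quoted above):

```python
# Official boost clocks from Nvidia's spec pages (quoted above)
gp106_boost = 1706  # MHz, GTX 1060 6GB
gp107_boost = 1392  # MHz, GTX 1050 Ti

ratio = gp106_boost / gp107_boost
print(f"Boost clock ratio: {ratio:.4f}")            # ~1.2256
print(f"GP106 clock advantage: {(ratio - 1):.0%}")  # ~23%
```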

Numbers this time taken from Nvidia's own site.
In practice:
https://www.computerbase.de/2016-10/geforce-gtx-1050-ti-test/3/#abschnitt_die_taktraten_unter_last
https://www.computerbase.de/2016-07...taktraten_der_founders_edition_und_msi_gaming

With reference clocks, the 1050 Ti Storm seems to hit over 1.6 GHz on average.
The 1060 FE over 1.8 GHz.

Overclocking hits a power wall because the GP107 designs can't exceed the PCIe slot's power delivery.
But overclocking the 1050 Ti Storm and its memory yields 9-12% better performance.
So it's hard to tell exactly where stability issues would set in, but there doesn't seem to be a big difference between those Pascal cards.
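For a feel of why the slot limit bites so quickly, here's a back-of-the-envelope sketch using the common dynamic-power approximation (power roughly proportional to frequency times voltage squared); the 75W starting point and the voltage bump are illustrative assumptions, not measured values:

```python
# Rough dynamic-power model: P ~ f * V^2 (constants folded away)
def scaled_power(base_power_w: float, clock_scale: float, voltage_scale: float) -> float:
    """Estimate board power after a clock/voltage bump, assuming P scales with f * V^2."""
    return base_power_w * clock_scale * voltage_scale ** 2

PCIE_SLOT_LIMIT_W = 75.0  # max draw for a card with no auxiliary power connector
base_power_w = 75.0       # assumption: the card already sits at the slot limit

# Illustrative 10% core overclock that needs a 5% voltage bump
oc_power_w = scaled_power(base_power_w, clock_scale=1.10, voltage_scale=1.05)
print(f"Estimated power after OC: {oc_power_w:.1f} W")                    # ~91 W
print(f"Over the slot budget by: {oc_power_w - PCIE_SLOT_LIMIT_W:.1f} W")  # ~16 W
```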
 
Probably just current console designs for TSMC. The increased flexibility in wafer sourcing could simply be referring to Samsung (AMD's recent roadmaps make no mention of 16nmFF in any capacity IIRC).
But TSMC has been making the console chips since 2013, and that deal only appeared in 2016 IIRC.
Plus, aren't those chips ordered by Microsoft and Sony, and not AMD?
 
But TSMC has been making the console chips since 2013, and that deal only appeared in 2016 IIRC.
Plus, aren't those chips ordered by Microsoft and Sony, and not AMD?
So far, the indications are that the chips are ordered by AMD. If Sony and Microsoft were handling the manufacturing, AMD's inventory wouldn't lead the sales cycles, and margins wouldn't be so low if AMD were just licensing the designs.
 
Overclocking hits a power wall because the GP107 designs can't exceed the PCIe slot's power delivery.
But overclocking the 1050 Ti Storm and its memory yields 9-12% better performance.
So it's hard to tell exactly where stability issues would set in, but there doesn't seem to be a big difference between those Pascal cards.
I think the most important difference is that the 14nm 1050 Ti needs to be clocked lower to offer the same energy efficiency as the >20%-higher-clocked 16nm 1060/1080. So the 16nm process allows for 20-25% higher clocks (performance) at the same perf/watt level.
 
I think the most important difference is that the 14nm 1050 Ti needs to be clocked lower to offer the same energy efficiency as the >20%-higher-clocked 16nm 1060/1080. So the 16nm process allows for 20-25% higher clocks (performance) at the same perf/watt level.
The results from CB indicate less than a 15% clock difference in practice.
In the 1080p index, the GTX 1060 is 63% faster than the 1050 Ti Storm model, while whole-system power consumption increases by about 49%.
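Taking those two CB numbers at face value, the implied perf/watt gap at the wall is small; a quick sketch (keeping in mind this is whole-system power):

```python
perf_ratio = 1.63    # GTX 1060 vs 1050 Ti Storm, CB 1080p index
power_ratio = 1.49   # whole-system power consumption ratio (CB)

# Performance gained per watt of extra system draw
perf_per_watt_ratio = perf_ratio / power_ratio
print(f"System-level perf/watt ratio: {perf_per_watt_ratio:.2f}")  # ~1.09

# Caveat: the non-GPU part of the system is a roughly fixed overhead, so the
# card-only power ratio is higher than 1.49 and the GPU-level perf/watt
# advantage is correspondingly smaller than ~9%.
```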

TPU tested the MSI 1050 Ti Gaming with an additional 6-pin connector; its perf/watt results are even 3-8% better than the GTX 1060's.
Unfortunately they didn't log the clock range for each game, so it's hard to compare apples to apples, but I still don't see a clear indicator that the TSMC chips behave differently in any significant way.
https://www.techpowerup.com/reviews/MSI/GTX_1050_Ti_Gaming_X/28.html
 
I think the most important difference is that the 14nm 1050 Ti needs to be clocked lower to offer the same energy efficiency as the >20%-higher-clocked 16nm 1060/1080. So the 16nm process allows for 20-25% higher clocks (performance) at the same perf/watt level.

I wouldn't assume that. Smaller chips are typically less efficient. For example, in the above statement you put the 1060 and 1080 in the same bag, but there's at least a 10% difference in perf/watt between those two cards, and the 1060 has lower clocks (a 100 MHz lower base clock, in fact). The 960 was also significantly less efficient than the 980.

So I don't think you can draw any conclusions from it.
 
TPU tested the MSI 1050 Ti Gaming with an additional 6-pin connector; its perf/watt results are even 3-8% better than the desktop reference GTX 1060's

FTFY.
The GP106 can go down to 70W in the 1060 Max-Q version and still soundly beat a desktop or notebook 1050 Ti, and the vanilla laptop GTX 1060 consumes little more than a laptop 1050 Ti while delivering up to 2x the performance.
Furthermore, Max-Q versions exist only for GPUs made at TSMC; that might mean something.
 
I don't think IPC has been reduced. FWIW, MAD-latency per clock still seems exactly in line with Fiji.
Correct. AMD has confirmed that the core ALUs are still 4 stages long. "Main ALU maintains its four-stage depth"

So far, the indications are that the chips are ordered by AMD. If Sony and Microsoft were handling the manufacturing, AMD's inventory wouldn't lead the sales cycles, and margins wouldn't be so low if AMD were just licensing the designs.
Correct on all counts. The WSA amendment was specifically for the consoles.
 
I think the most important difference is that the 14nm 1050 Ti needs to be clocked lower to offer the same energy efficiency as the >20%-higher-clocked 16nm 1060/1080. So the 16nm process allows for 20-25% higher clocks (performance) at the same perf/watt level.
No, because smaller chips are always less efficient, simply because the parts not needed for producing FPS (video decoders etc.) take up a larger share of the die than they do on a larger chip.
 
No. They aren't. According to Hardware.fr, the GTX 980 offered 8% higher FPS per watt than the GTX 980 Ti, and the GTX 1080 offered 9% higher FPS per watt than the GTX 1080 Ti. The GTX 750 Ti had 1% higher FPS per watt than the GTX 960.
I would love to see the data which this is based on. Do you have the link handy?
 
So far, the indications are that the chips are ordered by AMD. If Sony and Microsoft were handling the manufacturing, AMD's inventory wouldn't lead the sales cycles, and margins wouldn't be so low if AMD were just licensing the designs.

To add to that, Sony or MS doing the manufacturing could get into a legal mess WRT x86 licensing from Intel. It makes more sense for Sony, MS, and AMD to just have AMD manufacture the chips, even if Sony and/or MS might own some IP blocks related to their semi-custom designs.

Regards,
SB
 
No. They aren't. According to Hardware.fr, the GTX 980 offered 8% higher FPS per watt than the GTX 980 Ti, and the GTX 1080 offered 9% higher FPS per watt than the GTX 1080 Ti. The GTX 750 Ti had 1% higher FPS per watt than the GTX 960.

Yes, they are. There are exceptions, but the norm is that cards with smaller chips are less efficient, as this TPU chart shows.

https://tpucdn.com/reviews/NVIDIA/GeForce_GTX_1060/images/perfwatt_2560_1440.png

The 980 might be better than the 980 Ti, but the difference is much greater compared to the 960, which is over 20% worse. The 1060 is also 15% worse. The 780 Ti is likewise better than the 770, which is better than the 760. The R9 28x is worse than the 290, which is worse than the Fury. Etc.
 
Without knowing the % spread of power consumption within the same SKU, I'd be cautious about drawing strong conclusions from differences of less than 10%.
 
Without knowing the % spread of power consumption within the same SKU, I'd be cautious about drawing strong conclusions from differences of less than 10%.
Right, different SKUs in different environments can run their chips with different delta to power sweet spot.
 
Ryzen turned out fine despite being fabbed at GF.
According to many sources, GF's 14nm is clearly behind Intel's new 14nm+ (introduced with Kaby Lake).

Quote: Looking at the node specifications, we can see that the Intel 14nm process remains superior to the GLOBALFOUNDRIES 14nm process. In fact, looking at the CPP, Fin Pitch and Metal Pitch we can see that the node is around 11% – 18% better than the one used by GLOBALFOUNDRIES. This means that not only can the Intel die space house more transistors inside, but that Intel has a larger die space to begin with.

Link: http://wccftech.com/ryzen-smaller-die-intel-zen-architecture-not-good-hpc/

Intel got a +200 MHz base clock increase and a +300 MHz (1c) turbo increase when they moved from their original 14nm process to their refined 14nm+ process, and both the 6700K and 7700K run at an identical 91W TDP. Intel has always been ahead of the others in process technology, because of their finances and because they can design their chips and their own process hand in hand. Others are now closer than ever, but Intel still has a lead. Imagine how good Ryzen would have been if the GF process were as good as Intel's 14nm+.

Without digging up any facts, we can do the following thought experiment: assume the GF process could gain as much as Intel's did going from Skylake's 14nm to Kaby Lake's 14nm+. We would get a Ryzen 1800X running at a 3800 MHz base and 4300 MHz turbo. That's roughly a 5.6% multi-threaded and 7.5% single-threaded clock boost, as the sketch below spells out. This would have resulted in Ryzen being better practically across the board, instead of beating Intel in highly multi-threaded tasks and losing in single-threaded ones. Process still matters a lot.
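The arithmetic of that thought experiment, spelled out (the only inputs are the 1800X's stock 3.6/4.0 GHz clocks and the +200/+300 MHz Skylake-to-Kaby-Lake deltas mentioned above):

```python
base_mhz, turbo_mhz = 3600, 4000   # Ryzen 7 1800X stock base / 1c turbo
base_gain, turbo_gain = 200, 300   # Intel's 14nm -> 14nm+ deltas (6700K -> 7700K)

hyp_base = base_mhz + base_gain     # 3800 MHz
hyp_turbo = turbo_mhz + turbo_gain  # 4300 MHz
print(f"Hypothetical 1800X: {hyp_base}/{hyp_turbo} MHz")
print(f"Multi-threaded clock boost:  {(hyp_base / base_mhz - 1):.1%}")    # ~5.6%
print(f"Single-threaded clock boost: {(hyp_turbo / turbo_mhz - 1):.1%}")  # 7.5%
```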

Obviously the situation with GPUs is completely different. There's no Intel around with a node advantage over the others, so we can't assume that Nvidia's advantage is process based, unless someone with more knowledge posts a process comparison with hard numbers.
 
The R9 28x is worse than the 290, which is worse than the Fury. Etc.
The R9 280, R9 290 and Fury are GCN1, GCN2 and GCN3. GCN2 added boost, GCN3 added delta compression. These features affect energy efficiency.

Without knowing the % spread of power consumption within the same SKU, I'd be cautious about drawing strong conclusions from differences of less than 10%.
The average clock of the GTX 980 is higher than that of the GTX 980 Ti, and the same applies to the GTX 1080 and GTX 1080 Ti. It's evident that with clock normalisation these deltas wouldn't shrink below 10%, but would get even higher. I'd expect ~15%.
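A sketch of what that normalisation looks like; the average clocks below are placeholders rather than measured values, and the power exponent is a rough guess, so treat this purely as the shape of the calculation:

```python
def clock_normalised_ratio(observed_ratio: float, clock_a: float, clock_b: float,
                           power_exp: float = 2.5) -> float:
    """
    Normalise a perf/watt ratio between card A (higher average clock) and
    card B to equal clocks. Assumes performance scales ~linearly with clock
    and power scales ~clock**power_exp near the top of the V/f curve
    (power_exp between 2 and 3 is a common back-of-envelope guess), so
    downclocking A to B's clock improves A's perf/watt by
    (clock_a / clock_b) ** (power_exp - 1).
    """
    return observed_ratio * (clock_a / clock_b) ** (power_exp - 1)

# Placeholder average clocks - illustrative only, not measured figures
gtx980_avg_mhz, gtx980ti_avg_mhz = 1240, 1180
print(f"{clock_normalised_ratio(1.08, gtx980_avg_mhz, gtx980ti_avg_mhz):.2f}")
# -> ~1.16: the observed 8% delta widens once clocks are equalised
```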
 
The consoles are using the same process used by Nvidia for Pascal GPUs (TSMC 16nm FF).
 