AMD: Speculation, Rumors, and Discussion (Archive)

Why would they be? TSMC's 16nm processes are low power processes too, just like GloFo's/Samsung's.

Are you sure?
Because it seems, from all the documentation out there, that TSMC 16FF+ is in general a higher power/"performance" design compared to Samsung's, and was intended to replace their 28HP/HPM technology.
Probably makes sense for there to be a thread soon combining all the documentation relating to both Samsung and TSMC 16nm options.

Cheers
 
Why would they be? TSMC's 16nm processes are low power processes too, just like GloFo's/Samsung's.

It's unclear to me. Cadence is certainly marketing 16FF+ as the successor to TSMC's 28HP, but I don't know if it's a high performance process in the same sense. Certainly we can say that it's HP compared to TSMC 16FFC, but is it compared to Samsung/GF 14nm? I don't know. From reading the various foundries' descriptions I'm getting the sense that maybe from now on everything is low power FinFET and that's that.

On the other hand, didn't Nvidia and AMD pass on 20nm because it was LP, and wouldn't they also have known that forthcoming nodes would be strictly LP?
 
670 at release was equal to 7970:

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_670/28.html

e.g. at 1920x1200 the 7970 is 1% faster.

Now, much less so:

http://www.techpowerup.com/reviews/MSI/GTX_960_Gaming/29.html

e.g. at 1920x1080 the 7970 is 10% faster. Increase the resolution and the difference is about 18%. Taking 4MP gaming as the benchmark (since you're now using DSR), this means a 970 is a 65% upgrade over a 670, but only a ~40% upgrade over a 7970.

So the upgrade you experienced was bigger than it could have been...
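
To make the arithmetic explicit, here's a small C++ sketch of how relative-performance figures turn into upgrade percentages. The index values are illustrative placeholders (670 normalised to 100), not exact numbers pulled from the reviews.

```cpp
// Illustrative sketch of the upgrade arithmetic above.
// Performance indices are placeholders with the GTX 670 set to 100.
#include <cstdio>

int main() {
    double gtx670 = 100.0;
    double hd7970 = 118.0;  // ~18% ahead of the 670 at higher resolutions
    double gtx970 = 165.0;  // ~65% ahead of the 670

    std::printf("970 over 670:  +%.0f%%\n", (gtx970 / gtx670 - 1.0) * 100.0);
    std::printf("970 over 7970: +%.0f%%\n", (gtx970 / hd7970 - 1.0) * 100.0);
    return 0;
}
```

With those placeholder indices it prints +65% and +40%, matching the point above.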

Now a 280X is around 20% faster than a 770 at 1080p, whereas it was only around 2% faster in your 'Now' review.

http://www.techpowerup.com/reviews/Gigabyte/GTX_980_Ti_Waterforce/23.html

The difference between the 970 and 7970 remains similar though, so it seems the 2GB is hitting its limits, aside from, of course,
*cough* Kepler gimping *cough*.
 
It's unclear to me. Cadence is certainly marketing 16FF+ as the successor to TSMC's 28HP, but I don't know if it's a high performance process in the same sense. Certainly we can say that it's HP compared to TSMC 16FFC, but is it compared to Samsung/GF 14nm? I don't know. From reading the various foundries' descriptions I'm getting the sense that maybe from now on everything is low power FinFET and that's that.

On the other hand, didn't Nvidia and AMD pass on 20nm because it was LP, and wouldn't they also have known that forthcoming nodes would be strictly LP?
20nm had a multitude of other issues too, and according to Cadence 20SoC was a "high performance family" as well. They do mention that TSMC has 2 variations of 16FF+, but AFAIK they're still both low power processes, one just more area optimized than the other. For what it's worth, the Apple A9 is, according to some reports, built on 16FF+ too, the supposed "high performance process".
 
Well, TSMC imply 16FF+ is related to 28HPM, and that the lower power option is, as mentioned, 16FFC.
They do compare 16FF+ to 20SoC.
However, where 20SoC may be deemed similar to the HP family is the work I thought was done to create high performance FPGAs for Xilinx on that technology, but yeah, I agree with you that it is all questionable.
Remember there were technical issues with developing the 20nm technology that skewed what was released, or its limitations in the real world (this may differ from what could have been achievable with more cash thrown at it and, critically, more time, which is rarely an option when a node is already over-running).

We know the GPU-related process from Samsung is coming from their LPP, as GF mention they have had success in adapting it (to what extent, though, is not publicly stated).
I get the feeling this is going to turn into a semantic debate about the difference in terminology of TSMC using the words "than its 28HPM technology" and also "comparing with 20SoC technology" :)
For now, going by what was released, I would tend to think it is related to 28HPM, which ties in a bit with the 2015 Technology Symposium presentation they did at San Jose.

But yeah the only one where we have a true and full picture is that of the 28nm ecosystem: http://www.tsmc.com/english/dedicatedFoundry/technology/28nm.htm
Some of the variables mentioned by each of us were touched on earlier in the year on the Semiaccurate forum: http://semiaccurate.com/forums/showthread.php?t=8647
Will be interesting when more details come out on both technologies from Samsung and TSMC.
Cheers
 
Notwithstanding the last two big releases from AMD, I was giving them the edge on getting their cards out sooner, but if they are using an entirely separate process then that changes everything.

Any ideas as to how they compare? There were some reports that the Apple SoCs showed better performance with TSMC.

Anandtech had a good piece on this and concluded that you couldn't conclude much unless you had a large number of samples: http://www.anandtech.com/show/9708/analyzing-apple-statement-for-tsmc-and-samsung-a9

E.g. they're close enough that it probably won't matter that much. Samsung actually started producing chips first, but TSMC has had better yields so far. But by next year? Who knows.
 
670 at release was equal to 7970:

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_670/28.html

e.g. at 1920x1200 the 7970 is 1% faster.

Now, much less so:

http://www.techpowerup.com/reviews/MSI/GTX_960_Gaming/29.html

e.g. at 1920x1080 the 7970 is 10% faster. Increase the resolution and the difference is about 18%. Taking 4MP gaming as the benchmark (since you're now using DSR), this means a 970 is a 65% upgrade over a 670, but only a ~40% upgrade over a 7970.

So the upgrade you experienced was bigger than it could have been...


If someone got the 7970 GHz edition, they're now looking at a GPU roughly 30% faster than the 670. The 280X is the 7970 GHz. The 960 sits at more or less the same performance delta as the 670.

[Chart: relative performance summary at 1920x1080]


AMD GPUs have aged far better than their NV competitors. But that's beside the point, because how many serious gamers keep their GPUs for over 3 years? Unless you're playing mostly indie games or old DX9 titles, not many.
 
If someone got the 7970 GHz edition, they're now looking at a GPU roughly 30% faster than the 670. The 280X is the 7970 GHz. The 960 sits at more or less the same performance delta as the 670.

You have to consider that not everyone is always playing the latest benchmark-suite titles. While a 280X may be much faster in, say, GTA V, things are likely still much more even in older games, and perhaps just in less high-profile games.
 
My 1GHz 7970 has been going strong since summer 2012. Admittedly World of Tanks is most of my gaming. The Japanese heavy tanks at tiers 5 and 6 are very entertaining :LOL:

Another way of looking at it might be which card has the longer list of games that are unplayable at "max settings". I think it could be argued that at 1080p that list isn't very different between 670 and 7970.
 
Anandtech had a good piece on this and concluded that you couldn't conclude much unless you had a large number of samples: http://www.anandtech.com/show/9708/analyzing-apple-statement-for-tsmc-and-samsung-a9

E.g. they're close enough that it probably won't matter that much. Samsung actually started producing chips first, but TSMC has had better yields so far. But by next year? Who knows.

There is something like a 20-30% or even bigger discrepancy, which doesn't require a large number of samples to verify; 3-4 reviewers ought to be enough. From what more I've read of it, it seems to be confined to the Geekbench battery test for now.
 
There is something like a 20-30% or even bigger discrepancy, which doesn't require a large number of samples to verify; 3-4 reviewers ought to be enough. From what more I've read of it, it seems to be confined to the Geekbench battery test for now.

That's not the point the article is trying to make. The point is that you need a minimum N (sample size) under controlled conditions to actually get anything useful; a test of two individual phones isn't nearly enough. You'd need a minimum of about 20 each of the Samsung and TSMC chips (depending on the acceptability range of each chip) to get anything useful. A 20-30% gap could be fully within the expected voltage/leakage variance for Apple's SoCs. Thus the entire difference could come from one chip simply binning much better than the other, rather than from a distinct process advantage.
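
For what it's worth, here's a rough C++ sketch of the standard two-sample size estimate behind that "minimum N" point. The sigma and delta values are made-up placeholders, not Apple's real chip-to-chip variance; the takeaway is just that the required N grows with the square of (per-chip spread / difference you want to detect).

```cpp
// Rough two-sample size estimate: how many phones of each chip you'd need
// to reliably detect a mean difference delta given per-chip spread sigma,
// at ~95% confidence and ~80% power. All numbers here are illustrative only.
#include <cmath>
#include <cstdio>

int main() {
    double sigma   = 0.28;  // assumed per-chip battery-life spread (placeholder)
    double delta   = 0.25;  // difference between groups we want to detect
    double z_alpha = 1.96;  // ~95% two-sided confidence
    double z_beta  = 0.84;  // ~80% power

    double n = 2.0 * std::pow((z_alpha + z_beta) * sigma / delta, 2.0);
    std::printf("samples needed per group: ~%d\n", (int)std::ceil(n));
    return 0;
}
```

With those placeholder values it lands at roughly 20 samples per group; with a much smaller per-chip spread the required N drops off quickly.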
 
AMD GPUs have aged far better than their NV competitors. But that's beside the point, because how many serious gamers keep their GPUs for over 3 years? Unless you're playing mostly indie games or old DX9 titles, not many.

Huh?
Most gamers play old games.
DX9 engines are still used and new games are still being created on them.
Loads of people don't upgrade their cards because they play at 1920x1080 or lower, for example for CS:GO or similar e-sports games.
You don't need new cards to get 200fps in those games.

I bought a new game yesterday and I get over 200fps at 1440p maxed out with a 390.
For any amount of money, this 390, a 290-series card that is soon 3 years old, does just fine.

If AMD can deliver, people will buy more of their cards, but they often choose a more technical approach than what the customer wanted.
The guy who works there calling the Fury an "OC dream", drunk or not, was talking smoke out of his ass.
Such comments will hurt a company for decades.
That's the issue with engineers: they can't understand their own market.
 
Huh?
Most gamers play old games.
DX9 engines are still used and new games are still being created on them.
Loads of people don't upgrade their cards because they play at 1920x1080 or lower, for example for CS:GO or similar e-sports games.
You don't need new cards to get 200fps in those games.

I bought a new game yesterday and I get over 200fps at 1440p maxed out with a 390.
For any amount of money, this 390, a 290-series card that is soon 3 years old, does just fine.

If AMD can deliver, people will buy more of their cards, but they often choose a more technical approach than what the customer wanted.
The guy who works there calling the Fury an "OC dream", drunk or not, was talking smoke out of his ass.
Such comments will hurt a company for decades.
That's the issue with engineers: they can't understand their own market.

DX9 games are going to be here for a while; it's not the age of the game that does that either. Most engines were built with DX9 in mind and then updated to newer DX versions. But keep this in mind: since the new consoles were released, there have been quite a few games that were DX11-and-up only hardware-wise, even though the engine they were built on was an updated DX9 engine. So yeah, developers are taking the easy road and dropping support for more than 2 generations of DX. And this is mainly due to console development.

Upgrade cycles are a personal preference, and I think most people who spend $300+ have expendable cash and will upgrade when they feel the upgrade is worth it. This release, AMD didn't show that the upgrade was worth it because nV's cards were there. AMD's market share loss shows that too. I'm sure AMD has lost most of its market share at the high end, with a bit less loss in the mid-range brackets (they lost at the low end too, but breakdown-wise it's probably less of a loss).

All this stuff doesn't matter if AMD cards are "aging" better. Get this shit out right if you want sales, because once something is launched and reviewed, there are no take-backs.
 
That's not the point the article is trying to make. The point is that you need a minimum N (sample size) under controlled conditions to actually get anything useful; a test of two individual phones isn't nearly enough. You'd need a minimum of about 20 each of the Samsung and TSMC chips (depending on the acceptability range of each chip) to get anything useful. A 20-30% gap could be fully within the expected voltage/leakage variance for Apple's SoCs. Thus the entire difference could come from one chip simply binning much better than the other, rather than from a distinct process advantage.

And how do you think this minimum N is calculated? ;)

They lament at the end that they only had TSMC chips, but they could easily have used them to check how much variance there is between TSMC chips.
Apple claim 2-3% variance for their phones, so a 20-30% difference is far enough out on the curve that such testing shouldn't even be needed.
 
If someone got the 7970 GHz edition, they're now looking at a GPU roughly 30% faster than the 670. The 280X is the 7970 GHz. The 960 sits at more or less the same performance delta as the 670.

AMD GPUs have aged far better than their NV competitors. But that's beside the point, because how many serious gamers keep their GPUs for over 3 years? Unless you're playing mostly indie games or old DX9 titles, not many.
The Radeon 7970 is a surprisingly potent GPU. It sucks in tessellation and triangle throughput, but it even beats the GeForce 780 in many common compute tasks. DX12 async compute improves GCN performance even further. With proper next-gen engines based on compute shaders (and optimized for the GCN architecture), the old AMD GCN GPUs are going to remain competitive for quite a long time.

GCN also has tier 3 resource binding and many other DX12 features not supported by the GeForce 600/700 series. It looks like AMD's architecture was too forward looking, focusing too much on excellent compute performance, compute flexibility and DP. Now that developers can finally drop support for last-gen consoles (and DX9/10 PCs), new engines can be designed fully around GPU compute. This is unfortunately a little bit too late for AMD, as they have already lost a lot of their GPU market share. Nvidia's focus on geometry performance, ROP delta compression and other rendering-related improvements was the right call for games designed around old console hardware limitations.
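
As a minimal illustration of the resource binding tier point (a sketch assuming an already-created ID3D12Device, not anyone's production code), you can query which tier the driver reports through CheckFeatureSupport; GCN parts report tier 3 here, while the GeForce 600/700 series report a lower tier.

```cpp
// Sketch: querying the D3D12 resource binding tier for the current device.
#include <windows.h>
#include <d3d12.h>

bool SupportsBindingTier3(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options)))) {
        return false;  // query failed; assume the lowest tier
    }
    // GCN reports D3D12_RESOURCE_BINDING_TIER_3; Kepler-class parts report a lower tier.
    return options.ResourceBindingTier == D3D12_RESOURCE_BINDING_TIER_3;
}
```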
 
I think that has always been AMD/ATi's problem: they pretty much "over" engineer their chips, or assume software is going in a certain direction quickly, without taking into consideration the time to release of software that will use those features. If those features can't be shown to be useful right off the bat, they are pretty much "nice to have" but not essential for the chips to be marketable.

PS happy holidays everyone!
 
Well, wasn't GCN made with consoles in mind? If so, it makes sense that it would be designed for the future, in a sense, given it would need to last for 5+ years.
 