The AMD Execution Thread [2007 - 2017]

While they've made bigger changes, they're still evolutionary, not revolutionary, just like the GCN generations have been

One of the changes was from DX10.1 to DX11, which doesn't seem evolutionary at all. Another was from VLIW5 to VLIW4, which seems like a pretty big change too.
 
One of the changes was from DX10.1 to DX11, which doesn't seem evolutionary at all. Another was from VLIW5 to VLIW4, which seems like a pretty big change too.
I meant that they've been evolutionary since the first GCN, just like Kepler and Maxwell are from Fermi; I didn't mean they've always been
 
Optimism or misty optics?
The fabless chipmaker has been slammed for re-introducing its old-generation graphics processing units under new names. Only one new GPU per year is the result of multiple business decisions, which have also led to massive layoffs at the company.

AMD seems optimistic and claims that it still has strong APU, CPU and GPU roadmaps and will continue improving its GCN architecture rather than introducing something brand new.

"So it is on that next generation of CPUs starting with 'Zen'. It is on successive generations of our graphics core next. Huge volume in what we have, not only in discrete graphics and our APUs, but the game console wins are all on graphics core next, and we have a very strong roadmap for that graphics core next IP going forward."

Papermaster said in mid-2015 that 2016 was the year the company intended to release graphics processors based on an architecture considerably different from today's GCN.

The next iteration of the GCN architecture is projected to be two times more energy efficient than current GCN and will support new features. Since AMD intends to use 14nm or 16nm FinFET process technologies to make its next-gen GPUs, increased performance and energy efficiency are not surprising.

AMD's GCN is four years old and has evolved significantly, and it looks like AMD wants to continue developing generations of GCN-based GPUs.

AMD's latest "Fiji" graphics processor, based on the third iteration of GCN, looks competitive, but it should be noted that Nvidia has introduced two major architectures ("Kepler" and "Maxwell") since 2012 and plans to unveil "Pascal" next year.

It is starting to look like next year will make or break AMD.
http://www.fudzilla.com/news/processors/38461-amd-waves-its-graphics-core-next-roadmap
 
Kepler and Maxwell are both evolutionary architectures from Fermi, which generally isn't enough to be called a "major architecture"

If Kepler is evolutionary to Fermi, then what is revolutionary to you?
 
If Kepler is evolutionary to Fermi, then what is revolutionary to you?
Major architectural changes? The base architecture remained quite similar between the two: more units were added to each SM(X), and the doubled shader clock was removed, which gave notable power savings, but it was more or less "more of the same" with some rearrangement
(at least if my memory isn't betraying me here)
 
One could say GF114 is an evolution of GF110, for example. For Nvidia, I think we should look at their Compute Capability nomenclature: every major CC version increase essentially denotes a new architecture, with all the minor branches within it.
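As a side note, the CC split is directly visible through the CUDA runtime. A minimal sketch (nothing here beyond the standard device-query calls; the loop and printout are just for illustration):

```cuda
// Minimal device query: prop.major is the "major CC version" used above as
// an architecture marker (Fermi = 2.x, Kepler = 3.x, Maxwell = 5.x), while
// prop.minor covers the branches within it (e.g. 5.0 vs 5.2).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d (%s): CC %d.%d\n", dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```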

I would argue that Maxwell is definitely more "revolutionary" than what Kepler offered (Kepler's primary benefits came from the then-new 28nm process, before anything else). There are significant changes to how data caching is handled, compared to both Fermi and Kepler. Instruction scheduling has also been reworked. And that's on top of the radical layout changes to the multiprocessor design.
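To make the caching point concrete: one publicly visible piece of that evolution is the read-only data path. The kernel below is a made-up toy example, not vendor code; __ldg is the real CUDA intrinsic, available from CC 3.5:

```cuda
// Kepler (CC >= 3.5) lets read-only global loads go through the
// texture/read-only cache path via __ldg; Maxwell then unified that path
// with L1. The kernel itself is only illustrative.
__global__ void scale(float* __restrict__ out,
                      const float* __restrict__ in,
                      float k, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // __ldg promises 'in' is read-only for the kernel's lifetime,
        // so the load can be cached in the tex/L1 path.
        out[i] = k * __ldg(&in[i]);
    }
}
```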
 
For example, consider scheduling.

And the ALU arrangement is completely different, with decoupled clocks.
One could say Fermi looks like Tesla with DX11 (similar to the Terascale 1 -> Terascale 2 evolution), but Kepler is completely different.
 
Fermi introduced a whole new concept of memory pipeline and data handling compared to the Tesla generation, both at the hardware and compiler level. That alone is worth all the innovations Kepler brought.
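One concrete, still-visible piece of that Fermi-era pipeline is the configurable L1/shared-memory split, exposed through the CUDA runtime. A small sketch (myKernel is a hypothetical kernel name; the API call itself is the real one):

```cuda
// On Fermi the same 64 KB of on-chip SRAM backs both L1 and shared memory;
// the runtime lets code request a 48/16 split in either direction.
#include <cuda_runtime.h>

__global__ void myKernel(float* data) { /* ... */ }

void configure() {
    // Prefer a larger L1 (48 KB L1 / 16 KB shared) for this kernel;
    // cudaFuncCachePreferShared requests the opposite split.
    cudaFuncSetCacheConfig(myKernel, cudaFuncCachePreferL1);
}
```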
 
Fermi and Kepler were quite different. GCN iterations (all of them combined) have been a much smaller change.

I don't remember the source, but there was some discussion suggesting that Maxwell's internal microcode showed hints of a multi-level register file. If this is indeed true, then Maxwell is a bigger architectural change than most believe. NVIDIA paper from 2011: http://dl.acm.org/citation.cfm?id=2155675
 
I don't remember the source, but there was some discussion suggesting that Maxwell's internal microcode showed hints of a multi-level register file. If this is indeed true, then Maxwell is a bigger architectural change than most believe. NVIDIA paper from 2011: http://dl.acm.org/citation.cfm?id=2155675
Sure it is. In fact, I think Nvidia started work on Maxwell earlier, overlapping with Kepler's timeframe. It's possible that Pascal is an "extended" evolution of Maxwell's ISA, now with full-blown DP for the Tesla SKUs and some other goodies. Of course, it will carry its own major CC iteration, as always.
 
Kepler and GCN (1) were direct competitors; both were introduced during winter 2011/2012. It doesn't make sense to compare the Fermi -> Kepler transition with GCN transitions; Fermi's competitor was VLIW, not GCN. Since winter 2011/2012, Nvidia has introduced Kepler, Maxwell (1) and Maxwell (2); AMD, GCN (1), GCN (2) and GCN (3).
 
Fermi and Kepler were quite different. GCN iterations (all of them combined) have been a much smaller change.

I don't remember the source, but there was some discussion suggesting that Maxwell's internal microcode showed hints of a multi-level register file. If this is indeed true, then Maxwell is a bigger architectural change than most believe. NVIDIA paper from 2011: http://dl.acm.org/citation.cfm?id=2155675

The following discusses assembly coding for Maxwell, and the change in the arrangement and caching of register operands.
https://code.google.com/p/maxas/wiki/sgemm
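For readers without time to dig through the maxas write-up: the core pattern it analyzes is register blocking, where each thread keeps a small tile of the output in registers so operands get reused many times. The sketch below is only a rough CUDA-level illustration of that pattern (names and tile sizes are made up, and it assumes matrix dimensions divisible by the tile size); the actual maxas work happens at the SASS level, with operand reuse slots and register bank assignment that plain CUDA C cannot express:

```cuda
#define TILE 4  // made-up register-tile size for illustration

// Each thread computes a TILE x TILE block of C held entirely in registers.
// acc[][] is what the assembler maps onto the register file, where operand
// reuse and bank conflicts become visible. Assumes N is a multiple of
// blockDim.{x,y} * TILE.
__global__ void sgemm_regblock(const float* A, const float* B, float* C,
                               int N) {
    int row0 = (blockIdx.y * blockDim.y + threadIdx.y) * TILE;
    int col0 = (blockIdx.x * blockDim.x + threadIdx.x) * TILE;
    float acc[TILE][TILE] = {};

    for (int k = 0; k < N; ++k) {
        float a[TILE], b[TILE];
        for (int i = 0; i < TILE; ++i) a[i] = A[(row0 + i) * N + k];
        for (int j = 0; j < TILE; ++j) b[j] = B[k * N + col0 + j];
        // Each a[i] and b[j] is consumed TILE times straight from registers --
        // the reuse pattern Maxwell's operand caching is designed to exploit.
        for (int i = 0; i < TILE; ++i)
            for (int j = 0; j < TILE; ++j)
                acc[i][j] += a[i] * b[j];
    }
    for (int i = 0; i < TILE; ++i)
        for (int j = 0; j < TILE; ++j)
            C[(row0 + i) * N + col0 + j] = acc[i][j];
}
```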
 
[Chart: AMD's GPU market share drops again, even after the release of Fury]


The last time we had GPU market share numbers, NVIDIA was dominating AMD with 76% of the discrete GPU market, leaving AMD with scraps. This was back in Q4 2014 (with our article released in February 2015), when NVIDIA's best video card was the GeForce GTX 980.

This was before the release of the Titan X in March, and before the GTX 980 Ti in June. At the time, AMD had its Hawaii architecture inside the R9 290X, and a dual-GPU card in the form of the R9 295X2. All signs pointed to the R9 390X turning things around, but the R9 390X ended up being yet another rebrand, while the R9 Fury X, powered by High Bandwidth Memory, was revealed in our world exclusive during Computex 2015 in June.

Fast forward to now, where we're in Q3 2015, and AMD has multiple new products on the market: the R9 Fury X, R9 Fury, R9 390X and a bunch of rebranded 300 series video cards. According to Mercury Research's latest data, NVIDIA has jumped from 76% of the discrete GPU market in Q4 2014 to 82% in Q2 2015. This leaves AMD with just 18% of the dGPU market share, even after the release of multiple new products from Team Red.

Now, one would think that with the release of a truly next-gen card like the R9 Fury X, rocking HBM1, it would sell well, but it has not. There are multiple issues here, not just the single issue most people would think of. Most would conclude that the Fury X simply isn't selling well, but if you remember our exclusive report, HBM1 yields were seriously low, so low that only 30,000 units would be made over the entire year. That is issue one.

Issue two: Radeon fans haven't had a new video card release in over a year, so shouldn't they be foaming at the mouth over the Fury X? If not, why not the Fury? Well, there are also seriously few Fury units in the wild. So let's move on to issue three: rebrands. AMD has rebranded nearly its entire product stack, with no real reason to buy an R9 390X if you own an R9 290X. Absolutely no reason. There is 8GB of GDDR5 on board compared to the 4GB offered on most R9 290X cards, but that's not enough to push someone to upgrade their card.

Then we have the big issue of the HBM-powered R9 Fury X not really offering any performance benefit over the GDDR5-powered GeForce GTX 980 Ti from NVIDIA, with the 980 Ti beating the Fury X in some tests. NVIDIA has plenty of GM200 GPUs to go around, with countless GTX 980 Ti models from a bunch of AIB partners. There is absolutely no shortage of GTX 980 Ti cards in the wild.

http://www.tweaktown.com/news/47105/amds-gpu-market-share-drops-again-even-release-fury/index.html
 
Well, the Steam Hardware Survey says that virtually nobody bought the rebrands.

Though I wonder if the survey is grouping the rebrands that use the same chip together.
There are a lot of 7900, 7800 and 7700 series entries in there, so maybe they're grouping the 7900 with the R9 280/X, the 7800 with the R9 270/X and R7 370, and the 7700 with the R7 250X/260/260X and R7 360.
That would leave the "200 series" with Hawaii chips only, so 290/X and 390/X. The only question is where they're putting the Tonga chips.
 
The hardware ID of the rebrands changed, so I'd like to assume that Steam would not lump them together. If I remember (and if I'm granted time from the wife after the kiddo is in bed ;) ), I'll poke around the Steam hardware survey site tonight. Are other, prior rebrands from AMD and NV lumped together in the same way? I can't think of one just at this moment, but I know both vendors have examples from the recent past.
 