The AMD Execution Thread [2007 - 2017]

This kind of FPGA is realistically only useful for very narrow applications. Probably only in the data center, where hardware cost matters less and where there's a chance that some algorithm warrants the expense of custom hardware development for a highly specialized function.
I don't think we'll see this in the consumer space for many years to come.
 
Yes, but even if they actually implement this FPGA stuff in hardware, will there be software support to take advantage of it? AMD's market share is so incredibly weak these days, they don't have much leverage anymore (not that it was super awesome even in the best of times, ahumm...)
Some of the functions seem straightforward enough to be mostly abstracted from anything external to the stack, or are of the type that probably shouldn't hit error cases that require handling--since the abstraction won't let anything external handle them. The stack potentially has a store of configurations for the logic layer that is set up during manufacturing, rather than being an FPGA that the software can actually detect.
Software support would be limited to implementing specific hooks, and some could be system-level or OS configuration options. Something like wanting to change the endianness or plaintext/encrypted state of data from the same location would just require signalling which mode should be used. Odds are that any environment that wants that kind of function would know how to use it.
Some of the other possibilities like reconfiguring around hardware flaws or faults might be a manufacturing or low-level RAS feature user software would be unaware of.
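To make the hook idea concrete, here's a minimal sketch of what mode signalling might look like from the host side. Nothing below comes from the patent: the register offset, names, and flag bits are purely illustrative assumptions about a logic die that exposes a small MMIO config window to the OS or driver.

```c
/*
 * Hypothetical sketch only: the patent does not define an API, so every
 * offset, name, and flag here is an illustrative assumption.
 */
#include <stdint.h>

/* Imaginary memory-mapped config window exposed by the stack's logic die. */
#define STACK_CFG_BASE  0xF0000000u
#define STACK_CFG_MODE  (STACK_CFG_BASE + 0x00u)

/* Mode bits the OS/driver would toggle; user software never sees the logic die. */
#define STACK_MODE_BYTESWAP  (1u << 0)  /* return data with endianness swapped   */
#define STACK_MODE_DECRYPT   (1u << 1)  /* return a plaintext view of the region */

static inline void stack_set_mode(uint32_t flags)
{
    /* A single MMIO write signals which view of the same physical data is
     * wanted; the logic layer does the actual work transparently. */
    *(volatile uint32_t *)(uintptr_t)STACK_CFG_MODE = flags;
}
```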

The idea of a catch-all solution that allows a limited power and area stack to appeal to a lot of cases too niche to justify dedicated hardware for each of them seems interesting. There could be various limitations or gotchas from it having to be encapsulated in the stack and not being fully controllable (unlike AMD's PIM presentation), which might change the cost/benefit analysis if a lot of cases translate to "I could almost use this, except for X", as can happen with a solution that tries to bring somewhat higher-level concepts into a semi-fixed implementation.

Maybe it's something they're designing for a semi-custom product with a specific purpose?
Anything semi-custom could make use of items AMD has researched. The initial filing predates the semicustom group's official debut, so the patent's existence does not weigh on things either way. The idea of being able to configure the base of a memory stack to provide additional functions or to allow the same base die to apply to changed memory stacks does lend itself to customization.
On the other hand, companies file patents all the time for ideas that are not tied to a particular product, or that were tied to projects that never led to products. There is a raft of patents tied to a different version of Bulldozer that never saw the light of day, for example.
 
AMD quizzed over sharp drop-off in R&D expenditure
AMD chief technology officer, Mark Papermaster, was a speaker at Pacific Crest's Global Tech Leadership Forum in Colorado on Monday. Unfortunately for him, AMD shares were taking a beating that day (at one point on Monday they were down 13 per cent) with no AMD news or financial releases, good or bad, to blame.

The AMD CTO didn't quibble about the reduction in R&D, and admitted that as the PC market has contracted AMD has put less R&D effort into this part of the business. Instead of supporting its role in the PC market AMD was looking to target R&D in carefully chosen operations, so-called "banking the future of the Company."

Papermaster sketched out the narrower targets of its reduced R&D spend; "So it is on that next generation of CPUs starting with Zen. It is on successive generations of our graphics core next." With particular reference to its APU and GCN designs Papermaster boasted that a lot of investments have paid off in getting its APUs adopted for its game console wins. Furthermore the AMD CTO asserted that "we have a very strong roadmap for that Graphics Core Next IP going forward".

There were some other interesting nuggets from Papermaster, quoted by Barron's. The CTO talked about the innovative design AMD had to drive forward to keep getting better processors from the 28nm production node, boasting that Carrizo "doubled the battery life," compared to earlier 28nm chips. Papermaster also said that AMD was "fixing" its data centre offerings to provide consistent total cost of ownership advantages to enterprise.

What do readers think? Can AMD continue to devote fewer resources to R&D but succeed against the likes of Intel and Nvidia?
http://hexus.net/tech/news/industry/85448-amd-quizzed-sharp-drop-off-rd-expenditure/
 
Maybe they want to keep using Pitcairn, Bonaire and Hawaii for another 3 years.

2017 will see the release of a new Hawaii card with 16GB GDDR5, 1200MHz GPU clock and a 400W TDP.
But fear not, because that new firmware is going to do wonders.



On a more serious note, AMD isn't single-handedly developing and marketing an API anymore, nor are they developing custom systems for three high-profile console makers at the same time (plus the R&D for the Nintendo NX may be near completion by now?), so it makes some sense that their R&D expenditure is lower than what it was in 2010/2011.
 
Master Master, where's the overclocking I was after? Master, Master, papering over AMD's cracks!! :LOL:

Is AMD going to survive until Zen releases, and will they survive its release? They could have been doing well with mobile parts, alas.

There have been rumors of Facebook looking for a custom APU part; maybe Google could ask for the same? Of course, it'd also require a good software stack, so there's that.
 
Iterations of the TeraScale architecture carried AMD from the HD 2xxx series to the HD 6xxx series. I would be surprised if Arctic Islands wasn't an evolutionary development of GCN.

That was from mid-2007 to January 2012, so around 4.5 years.

GCN1 is over 3.5 years old now, and graphics cards with those 3.5-year-old chips are still being released right now. In the 3.5 years after the R600 release, there had been major changes to the architecture (VLIW4) and three process transitions (80nm > 65nm > 55nm > 40nm).


I think most people don't understand how old GCN is actually getting. During its lifetime, nVidia went through Fermi, Kepler and Maxwell. All of them being pretty major architectural transformations.

I'm not suggesting GCN is hopeless, but AMD had better come up with more relevant changes than the ones we've seen in the last 3 years, because color compression, TrueAudio and HBM are obviously not enough to keep up with nVidia.

Steam's Hardware Survey can be very misleading, but the fact that there are 3.45x more GTX 970 users than people with GPUs from the entire R9 200 range paints a pretty grim picture.
 
That was from mid-2007 to January 2012, so around 4.5 years.

GCN1 is over 3.5 years old now, and graphics cards with those 3.5-year-old chips are still being released right now. In the 3.5 years after the R600 release, there had been major changes to the architecture (VLIW4) and three process transitions (80nm > 65nm > 55nm > 40nm).


I think most people don't understand how old GCN is actually getting. During its lifetime, nVidia went through Fermi, Kepler and Maxwell. All of them being pretty major architectural transformations.


And today a nearly two-year-old 290X, renamed 390X with a slightly higher clock, is matching a 980 Maxwell easily: http://www.guru3d.com/articles_pages/asus_radeon_r9_390x_strix_8g_review,21.html

It beats the 980 in 50% of the games tested, and in the rest it is on average 4-5 fps behind... Just remember that the 290X was the opponent of the 780 Ti, and allegedly a little slower at the time, while the 980 beat the 780 Ti by around 20-25%.

GCN is pretty solid as an architecture... and that is without even talking about compute and GPGPU.
 
Hawaii is a great chip. I have two 290X in my system.

But a factory-overclocked 390X matches a reference GTX 980 at the cost of twice the amount of GDDR5, a 10% larger die, >50% higher power consumption and probably a more expensive PCB (twice the memory lanes, and more power consumption hence more VRMs, etc.).

I'm not saying GCN is bad for the consumer. AMD places the GCN cards in their price range quite properly.

I'm saying GCN - as it stands right now - is becoming bad for AMD.
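For a rough sense of scale, here's a back-of-the-envelope version of that comparison. The board power and die size figures are approximate public numbers (roughly 275W/438mm² for Hawaii vs. 165W/398mm² for GM204) and should be read as assumptions for illustration, not measurements:

```c
/* Rough back-of-the-envelope comparison; the constants are approximate
 * public figures, treated here as assumptions rather than measured data. */
#include <stdio.h>

int main(void)
{
    /* R9 390X (Hawaii) vs. GTX 980 (GM204), assuming roughly equal frame rates */
    const double power_390x = 275.0, power_980 = 165.0;  /* watts, typical board */
    const double die_390x   = 438.0, die_980   = 398.0;  /* mm^2                 */
    const double vram_390x  = 8.0,   vram_980  = 4.0;    /* GB of GDDR5          */

    printf("power:  +%.0f%% for the 390X\n", (power_390x / power_980 - 1.0) * 100.0);
    printf("die:    +%.0f%% for the 390X\n", (die_390x / die_980 - 1.0) * 100.0);
    printf("memory: %.0fx the GDDR5 on the 390X\n", vram_390x / vram_980);
    return 0;
}
```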
 
GCN1 is over 3.5 years old now, and graphics cards with those 3.5-year-old chips are still being released right now. In the 3.5 years after the R600 release, there had been major changes to the architecture (VLIW4) and three process transitions (80nm > 65nm > 55nm > 40nm).
There have been pretty substantial changes to GCN as well. Hawaii is a relatively minor architectural rev, though it gains pretty big operational changes, while Tonga/Fiji have somewhat larger architectural deltas.
 
And today a nearly two-year-old 290X, renamed 390X with a slightly higher clock, is matching a 980 Maxwell easily: http://www.guru3d.com/articles_pages/asus_radeon_r9_390x_strix_8g_review,21.html

It beats the 980 in 50% of the games tested, and in the rest it is on average 4-5 fps behind... Just remember that the 290X was the opponent of the 780 Ti, and allegedly a little slower at the time, while the 980 beat the 780 Ti by around 20-25%.

Unfortunately this might only be true when comparing against reference 980s, which are basically non-existent in the retail market. If you compare custom 980s, the story will be very different unless the 390X has performance levels similar to Fury's.
Yes, the Strix Fury still tends to beat the GTX 980 FTW—especially in Bioshock, Mordor, and Metro—but EVGA’s beastly card narrows the performance gap mightily overall, drawing equal to the Strix Fury’s stock results even in that trio of titles when overclocked further. Considering how gaping the performance gap between the stock 980 and the Strix Fury is, that’s no small feat, and a testament to both Maxwell’s overclocking chops and EVGA's ACX 2.0 cooling solution.

This really boils down to the intangibles: Do you prefer the Strix Fury’s higher frame rates and superb two-card performance scaling, or the superior power efficiency and smaller build of the EVGA GeForce GTX 980 FTW, paired with Nvidia’s constant barrage of Game Ready drivers?
...
With all things being so similar, EVGA’s $500 bird in the hand today trumps AMD’s $580 bird in the bush. But either board you pick will certainly have you singing sweetly, especially at 1440p resolution.

http://www.pcworld.com/article/2951...hics-cards-brawl-for-pc-gaming-supremacy.html
 
Hawaii is a great chip. I have two 290X in my system.

But a factory-overclocked 390X matches a reference GTX 980 at the cost of twice the amount of GDDR5, a 10% larger die, >50% higher power consumption and probably a more expensive PCB (twice the memory lanes, and more power consumption hence more VRMs, etc.).

I'm not saying GCN is bad for the consumer. AMD places the GCN cards in their price range quite properly.

I'm saying GCN - as it stands right now - is becoming bad for AMD.

The "factory overclocked " have 20mhz higher clock... than the reference 390X, problem is most 980 "reference " had way higher turbo speed than the " reference turbo clock"... ( 1200+ mhz ).. Its hard to determine what is the reference gpu for the case of Maxwell anyway outside some minimal turbo clock warranty on specific models..

Of course the comparison was a bit faulty, I admit that without problem...
 
And today a nearly two-year-old 290X, renamed 390X with a slightly higher clock, is matching a 980 Maxwell easily
It's interesting stuff of course, but what about power consumption? My Hawaii card is a fairly hot-running bastard...
 
It's interesting stuff of course, but what about power consumption? My Hawaii card is a fairly hot-running bastard...


True.

For the first time in many years, I found myself forced to further mod my case by opening holes for more and larger fans, because two 290Xs on open coolers meant >70ºC inside a system that used to have rather good airflow. The PSU was always very loud and the main GPU kept downclocking after reaching 94ºC within 5 minutes of Witcher 3 gameplay.
Here's hoping a 200mm fan on the side window next to the graphics cards will suffice for air extraction.
 
I think most people don't understand how old GCN is actually getting. During its lifetime, nVidia went through Fermi, Kepler and Maxwell. All of them being pretty major architectural transformations.
While they've done bigger changes, they're still evolutionary, not revolutionary, just like the GCN generations have been.
 
The biggest problem with GCN is that it doesn't reach the clock speeds, even with added voltage or water cooling, that Nvidia cards hit as routine. Which in turn is related to power efficiency.

The other problem is recycling old silicon instead of pushing new chips (say, a 384-bit Tonga with 48 ROPs and a few more shaders that might encroach on 780 Ti territory), and losing goodwill among their supporters.

Maybe the new node looked much closer than they expected, or we might see some great APUs.
 
Is AMD going to survive until Zen releases, and will they survive its release? They could have been doing well with mobile parts, alas.

In their last investor relations briefing, they had this slide:

[slide image: xrVP2f8.png]

(sorry for the text, I somehow smudged it during conversion from PDF)

That's essentially their runway. With their cash reserves, they should be able to keep their pared down R&D expenses going until the debt collectors start knocking, even if sales are terrible. But when 2019 arrives, I rather doubt they will be able to roll their debt forward anymore. So they need one truly great product before then.
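As a toy illustration of that runway argument, something like the sketch below, where the cash, burn rate, and maturity figures are placeholders rather than AMD's actual balance sheet:

```c
/* Toy runway calculation; all figures are hypothetical placeholders,
 * not AMD's actual balance sheet. */
#include <stdio.h>

int main(void)
{
    const double cash_musd            = 800.0; /* hypothetical cash reserves, $M         */
    const double net_burn_per_quarter = 50.0;  /* hypothetical net cash burn, $M/quarter */
    const double quarters_to_maturity = 14.0;  /* roughly mid-2015 to the 2019 debt wall */

    const double runway_quarters = cash_musd / net_burn_per_quarter;

    printf("runway: %.1f quarters vs. %.0f quarters until the debt matures\n",
           runway_quarters, quarters_to_maturity);
    printf("%s\n", runway_quarters >= quarters_to_maturity
               ? "can coast to the maturities, but needs a hit product to refinance"
               : "needs a hit product (or new financing) even sooner");
    return 0;
}
```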
 