AMD Vega Hardware Reviews

NV/Jen-Hsun said several years ago now (Kepler era, maybe?) that moving data around a GPU costs more power than doing calculations on that data. He also said around Pascal's release (probably the reveal conference) that they'd taken a lot of care in laying out the chip and routing data flow, or words to that effect. Spared no expense, most likely (or very little anyhow, as Pascal reportedly cost a billion bucks IIRC to develop).



http://www.monolithic3d.com/blog/the-dally-nvidia-stanford-prescription-for-exascale-computing
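To put rough numbers on that claim (the link above is Dally's version of the argument), here's a back-of-the-envelope sketch; every constant in it is an assumed, illustrative placeholder rather than a measurement of any particular GPU:

```python
# Back-of-the-envelope comparison of compute energy vs. data-movement energy.
# All constants are assumed, illustrative placeholders in the spirit of the
# figures Dally has shown publicly -- NOT measurements of any particular GPU.
PJ_PER_BIT_PER_MM = 0.1    # on-chip wire energy (assumed)
PJ_PER_FP64_FMA = 20.0     # energy of one 64-bit FMA (assumed)

def move_energy_pj(bits, distance_mm):
    """Energy to move `bits` across `distance_mm` of on-chip wire."""
    return bits * distance_mm * PJ_PER_BIT_PER_MM

# Feeding one FMA with three 64-bit operands hauled 10 mm across the die:
operand_traffic_pj = move_energy_pj(3 * 64, 10)
print(f"moving operands: {operand_traffic_pj:.0f} pJ, doing the FMA: {PJ_PER_FP64_FMA:.0f} pJ")
# With these placeholder values, the wires cost roughly 10x the math.
```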
 
AMD probably uses way more automated design/layout tools than NVIDIA or Intel. As a result their GPUs appear to be way less efficient at shuffling bits.
IIRC, Intel has like, hundreds of skilled silicon engineers just for laying out their CPUs (and, GPUs too these days I suppose, heh). No idea how big a team is under Raja, but hundreds of guys would probably be quite an expense. And they typically have to release multiple dies in different performance brackets. So automation is probably out of necessity, not choice. (Curiously, no lower-tier Vega chips have been announced, so Polaris will continue to hold the fort in the mid/low-end tiers for yet another year?)

I wonder if maybe AMD spends more effort on reining in power consumption in a chip like Xbox Scorpio's than they do for desktop PCs, seeing as consoles are much more sensitive to heat dissipation than a gaming PC is...?

Also aren't modern Intel iGPUs fully DX12 compliant?
Not the faintest idea! I amend my statement, then: Vega looks to be the first fully DX12-capable discrete GPU... :p

Not that it really matters; the majority of PC gamers have NVIDIA cards (~64% according to Steam), so devs have to code with that in mind. Kind of a shame but this is reality.
I'm hoping Ryzen in particular, and hopefully also Vega once they get this beast straightened out a little (with a bunch of well-engineered 3rd-party boards and fixed drivers, including working WattMan functionality), will inject some much-needed cash into AMD, making what comes after Vega perhaps what we're all REALLY wishing for!

Ryzen and now Threadripper seem to have already done a lot of good for AMD's stock price. With some wind in their sails, good word of mouth, and so on, it should be easier to attract competent staff, especially if you can now afford to pay them well! ;) No idea how far along Navi is in its design work, but maybe there is enough time for it to see some improvements from AMD's positive change of fortunes.
 
Hm, I don't really know anything about anything, really, but from a strictly layperson's perspective it seems likely there's no single cause of Vega's power draw. Would scheduling really account for a hundred-watt increase in dissipation?
Yeah, I have the same feeling about this.

The very rough first order approximation for power consumption, assuming everything is running on the same clock, is the number of gates spent on something. You can refine it further by the number of toggling gates. And the number of RAM block accesses.

In a GPU, that's always going to be the ALUs and the logic that feeds them.

Nobody is going to convince me without some really hard data that scheduling decisions (whether or not to kick off a thread group) consume more than a tiny fraction of the power it takes to feed and calculate an FMA over the same thread group. Not on a GPU, which doesn't have anything close to CPU-like scheduling needs.
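As a minimal sketch of that first-order bookkeeping (all gate counts, activity factors, and per-event energies below are made-up placeholders, not Vega data), the claim that the front end is a rounding error next to the ALU path looks roughly like this:

```python
# First-order dynamic power bookkeeping: toggling gates plus RAM block accesses.
# Every number here is a made-up placeholder to illustrate the argument,
# not a characterization of Vega or any other real chip.
CLOCK_HZ = 1.5e9
E_TOGGLE_J = 1e-15        # ~1 fJ per gate toggle (assumed)
E_RAM_ACCESS_J = 5e-12    # ~5 pJ per register-file/RAM block access (assumed)

blocks = {
    # name: (gate_count, activity_factor, ram_accesses_per_cycle)
    "ALUs + operand delivery": (2.0e9, 0.05, 4000),
    "front-end scheduling":    (5.0e7, 0.02, 100),
}

for name, (gates, activity, ram_per_cycle) in blocks.items():
    switching_w = gates * activity * E_TOGGLE_J * CLOCK_HZ
    ram_w = ram_per_cycle * E_RAM_ACCESS_J * CLOCK_HZ
    print(f"{name:>24}: {switching_w + ram_w:7.1f} W")
# With these placeholders the front end lands around 2 W next to ~180 W for
# the ALU path, which is the "tiny fraction" being argued above.
```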
 
The very rough first order approximation for power consumption, assuming everything is running on the same clock, is the number of gates spent on something. You can refine it further by the number of toggling gates. And the number of RAM block accesses.

I think Vega's high power consumption is a result of AMD pushing Vega up the DVFS curve to make sure V64 was competitive with the 1080 and the V56 with the 1070.
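A quick worked example of why being pushed up the DVFS curve costs so much: dynamic power goes roughly with f·V², and the clock/voltage deltas below are assumed for illustration rather than taken from real V64/V56 operating points.

```python
# Dynamic power scales roughly as f * V^2, so buying the last slice of clock
# with extra voltage is disproportionately expensive. The clock/voltage deltas
# here are assumed for illustration, not measured Vega operating points.
def relative_dynamic_power(f_ratio, v_ratio):
    return f_ratio * v_ratio ** 2

baseline = relative_dynamic_power(1.00, 1.00)
pushed = relative_dynamic_power(1.15, 1.10)   # +15% clock needing +10% voltage (assumed)
print(f"power increase: {pushed / baseline - 1:.0%}")   # ~39% more power for 15% more clock
```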

I'm guessing the vast majority cares more about outright performance/fps than power efficiency (not me, I'm perfectly happy with my undervolted Fury Nano).

Cheers
 
Not that it really matters; the majority of PC gamers have NVIDIA cards (~64% according to Steam), so devs have to code with that in mind. Kind of a shame but this is reality.
We are already seeing the effect of that in the PC space; this is a list I've compiled that spans two years' worth of games (AAA and indie). The focus of the list is the comparison of the 980Ti vs the FuryX. A normal difference between the two is 10~15% in favor of the 980Ti, but because of optimization problems that percentage grows to anywhere from 20 to 70% (see the short sketch after the list for how such figures are typically computed)! And no, this list doesn't contain games that exceed the 4GB framebuffer of the FuryX.

Divinity Original Sin 2
980Ti is 20% faster than FuryX @1080p (and 17% faster @1440p), 980 is almost as fast as FuryX @1080p!
http://gamegpu.com/rpg/роллевые/divinity-original-sin-2-test-gpu

Obduction
980Ti is 55% faster than FuryX @1080p (and 30% faster @1440p), 780Ti is delivering the same fps as FuryX @1080p!
http://gamegpu.com/rpg/роллевые/obduction-test-gpu

ABZU
980Ti is 52% faster than FuryX @1080p, (and 30% faster @1440p), 980 is 17% faster than FuryX @1080p as well!
http://gamegpu.com/action-/-fps-/-tps/abzu-test-gpu

War Thunder
980Ti is 15% faster than FuryX @1080p, @1440p it is 25% faster than FuryX, and the 980 is nearly as fast @1080p as well!
http://gamegpu.com/mmorpg-/-онлайн-игры/war-thunder-1-59-test-gpu

The Technomancer
980Ti is 25% faster than FuryX @1080p (and 17% faster @1440p)! 980 is equally fast @1080p as well!
http://gamegpu.com/rpg/роллевые/the-technomancer-test-gpu

Firewatch
980Ti is 25% faster than FuryX @1080p and 1440p, 980 is almost as fast @1080p as well!
http://gamegpu.com/action-/-fps-/-tps/firewatch-test-gpu.html

Dragon's Dogma: Dark Arisen
980Ti is 24% faster than FuryX @4K and @1440p, 980 is just as fast @1440p as well! (@1080p all cards are CPU limited.)
http://gamegpu.com/rpg/rollevye/dragons-dogma-dark-arisen-test-gpu.html

Homeworld: Deserts of Kharak
980Ti is 20% faster than FuryX @4k, though it drops to just 15% @1440p! (@1080p all cards are CPU limited).
http://gamegpu.com/rts-/-strategii/homeworld-deserts-of-kharak-test-gpu.html

Crossout
980Ti is 46% faster than FuryX @1080p, (and 32% faster @1440p), even 980 is faster @1080p!
http://gamegpu.com/mmorpg-/-онлайн-игры/crossout-test-gpu

Mad Max
980Ti is 23% faster than FuryX @1080p, 18% faster @1440p!
http://gamegpu.com/action-/-fps-/-tps/mad-max-test-gpu-2015.html

Call Of Duty Modern Warfare Remastered
980Ti is a whopping 60~72% faster than FuryX @1080p, even a 970 is faster than FuryX! @1440p, the advantage collapses to 25~60%, and the regular 980 is equal to the FuryX!
http://gamegpu.com/action-/-fps-/-tps/call-of-duty-modern-warfare-remastered-test-gpu
http://www.pcgameshardware.de/Call-...59978/Specials/Benchmark-Test-Review-1234383/

Battleborn
980Ti is 30% faster than FuryX @1080p and 1440p, 980 is just as fast as the FuryX!
http://www.pcgameshardware.de/Battleborn-Spiel-54612/Specials/Benchmark-Review-1194406/

Homefront: The Revolution
980Ti is 34% faster than FuryX @1080p, and 23% faster @1440p
http://www.overclock3d.net/reviews/gpu_displays/homefront_the_revolution_pc_performance_review/7
http://www.pcgameshardware.de/Homefront-The-Revolution-Spiel-54406/Tests/Benchmarks-Test-1195960/

Assassin's Creed Syndicate
980Ti is 24% faster than FuryX @1080p! 21% faster @1440p! 980 is almost equally as fast!
http://gamegpu.com/action-/-fps-/-tps/assassin-s-creed-syndicate-test-gpu-2015.html
http://www.pcgameshardware.de/Gefor...afikkarte-265855/Tests/Test-Review-1222421/2/
https://www.computerbase.de/2016-08...#diagramm-assassins-creed-syndicate-1920-1080

Conan Exiles
980Ti is 45% faster than FuryX @1080p, 28% faster @1440p! 980 is as fast as FuryX at both resolutions!
http://gamegpu.com/mmorpg-/-онлайн-игры/conan-exiles-test-gpu

ARK Survival
980Ti is 25% faster than FuryX @1080p, the only resolution that matters.
http://gamegpu.com/mmorpg-/-онлайн-игры/ark-survival-evolved-test-gpu

Styx: Shards of Darkness
980Ti is 36% faster than FuryX @1080p, 34% faster @1440p! Even a regular 980 is almost as fast as FuryX @4K!
http://gamegpu.com/rpg/роллевые/styx-shards-of-darkness-test-gpu

Ghost Recon Wildlands
980Ti is 28% faster than FuryX @1080p and @1440p!
http://gamegpu.com/action-/-fps-/-tps/ghost-recon-wildlands-test-gpu

Forza Horizon 3
980Ti is over 50% faster than FuryX @1080p! 40% faster @1440p, even a 980 and a 1060 are faster than FuryX here!
http://gamegpu.com/racing-simulators-/-гонки/forza-horizon-3-test-gpu

Mass Effect Andromeda
980Ti is 23~40% faster than FuryX @1080p, and 17~30% faster @1440p! FuryX is barely faster than a 480 or 1060.
http://gamegpu.com/action-/-fps-/-tps/mass-effect-andromeda-test-gpu
http://www.pcgameshardware.de/Mass-...55712/Specials/Mass-Effect-Andromeda-1223325/

Anno 2205
980Ti is more than 30% faster than FuryX @1080p and 1440p!
https://www.computerbase.de/2017-03/geforce-gtx-1080-ti-test/2/#diagramm-anno-2205-1920-1080
https://www.techpowerup.com/reviews/Performance_Analysis/Anno_2205/3.html
http://www.pcgameshardware.de/Anno-2205-Spiel-55714/Specials/Technik-Test-Benchmarks-1175781/
http://www.guru3d.com/articles_pages/anno_2205_pc_graphics_performance_benchmark_review,7.html

Ultimate Epic Battle Simulator
980Ti is more than 60% faster than FuryX @1080p and 50% faster @1440p, the regular 980 is ahead of the FuryX!
http://gamegpu.com/rts-/-стратегии/ultimate-epic-battle-simulator-test-gpu

Escape from Tarkov
980Ti is more than 35% faster than FuryX @1080p and 1440p! The regular 980 is slightly ahead of the FuryX as well!
http://gamegpu.com/mmorpg-/-онлайн-игры/escape-from-tarkov-alpha-test-gpu

Outlast 2
980Ti is 44% faster than FuryX @1080p and 27% faster @1440p, even the 780Ti is ahead of the FuryX!
http://gamegpu.com/action-/-fps-/-tps/outlast-2-test-gpu

Inner Chains
980Ti is 50% faster than FuryX @1080p and 37% faster @1440p, even the 970 is ahead of the FuryX!
http://gamegpu.com/action-/-fps-/-tps/inner-chains-test-gpu-cpu

Dying Light
980Ti is 40% faster than FuryX @1080p and 27% faster @1440p, the regular 980 is slightly ahead of the FuryX
http://gamegpu.com/action-/-fps-/-tps/dying-light-the-following-test-gpu.html
https://www.overclock3d.net/reviews...owing_pc_performance_review_-_amd_vs_nvidia/7

Vanquish
980Ti is 84% faster than FuryX @1080p and 40% faster @1440p, even the 970 is ahead of the FuryX!
http://gamegpu.com/action-/-fps-/-tps/vanquish-test-gpu-cpu

RiME
980Ti is 37% faster than FuryX @1080p and 25% faster @1440p, even the 1060 is ahead of the FuryX!
http://gamegpu.com/rpg/ролевые/rime-test-gpu-cpu

Tekken 7
980Ti is 25% faster than FuryX @4K, the only resolution that isn't cpu limited, 980 is almost as fast as the FuryX.
http://gamegpu.com/action-/-fps-/-tps/tekken-7-test-gpu-cpu

Get Even
980Ti is 65% faster than FuryX @1080p and 50% faster @1440p, even the 1060 is ahead of the FuryX!
http://gamegpu.com/action-/-fps-/-tps/get-even-test-gpu-cpu

Fortnite
980Ti is 30% faster than FuryX @1080p and 20% faster @1440p
http://gamegpu.com/mmorpg-/-онлайн-игры/fortnite-test-gpu-cpu

Aven Colony
980Ti is 56% faster than FuryX @1080p and 33% faster @1440p, even the 1060 is ahead of the FuryX @1080p!
http://gamegpu.com/rts-/-стратегии/aven-colony-test-gpu-cpu

Agents Of Mayhem
980Ti is 58% faster than FuryX @1080p and 25% faster @1440p, even the 1060 is ahead of the FuryX @1080p!
http://gamegpu.com/action-/-fps-/-tps/agents-of-mayhem-test-gpu-cpu

LawBreakers
980Ti is 25% faster than FuryX @1080p, and 14% faster @1440p
http://gamegpu.com/mmorpg-/-онлайн-игры/lawbreakers-test-gpu-cpu

ArmA 3
980Ti is 29% faster than FuryX @1080p (the only playable resolution)
http://gamegpu.com/action-/-fps-/-tps/arma-3-test-gpu-cpu
Sudden Strike 4
980Ti is 20% faster than FuryX @1080p
http://www.pcgameshardware.de/Sudde...cials/Benchmarks-Test-Review-Preview-1232271/

Project Cars, Ethan Carter, and Alien Isolation are all 25~70% faster on the 980Ti than on the FuryX.
http://www.babeltechreviews.com/fury-x-vs-gtx-980-ti-revisited-36-games-including-watch-dogs-2/3/
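For reference, here's the simple relative-fps calculation that figures like the ones above usually come from; a minimal sketch with made-up fps values:

```python
# How an "X% faster" figure is typically derived from two average-fps results.
# The fps values below are made up; plug in the numbers from the linked reviews.
def percent_faster(fps_a, fps_b):
    """How much faster card A is than card B, as a percentage."""
    return (fps_a / fps_b - 1.0) * 100.0

print(f"{percent_faster(72.0, 60.0):.0f}% faster")   # e.g. 72 fps vs 60 fps -> 20% faster
```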
 
Fully capable? Why, then, doesn't it support ProgrammableSamplePositionsTier? DX12 is a moving target; claiming something is fully compliant or fully capable in such an environment is not really useful.
GCN2 (consoles) already supports the max tier of programmable sample positions. Source: https://michaldrobot.files.wordpress.com/2014/08/hraa.pptx. See the flip-quad AA slides.

Also, all other SM 6.0 features are already supported by GCN2. The SM 6.0 feature set actually seems to be designed around GCN2 hardware limitations and capabilities. GCN3 can implement some of the new cross-lane ops slightly more efficiently, but GCN3's DS_PERMUTE (programmable index per lane), for example, isn't exposed at all. I don't know whether Nvidia has a runtime-indexed swizzle either. AMD's DS_PERMUTE goes through their LDS crossbar, so it isn't super fast either, but it would allow some nice tricks.
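To make the cross-lane op concrete, here's a minimal Python model of the semantics only: a forward permute (DS_PERMUTE-style, each lane pushes its value to a destination index) versus a backward permute (DS_BPERMUTE-style, each lane pulls from a source index) over a 64-lane wave. It's an illustrative sketch of the data movement, not the actual ISA behavior, collision rules, or LDS-crossbar timing.

```python
WAVE = 64  # GCN wavefront width

def forward_permute(values, dst_index):
    """DS_PERMUTE-like: lane i pushes values[i] to lane dst_index[i].
    Collisions and unwritten lanes are resolved arbitrarily in this toy model."""
    out = [0] * WAVE
    for i in range(WAVE):
        out[dst_index[i] % WAVE] = values[i]
    return out

def backward_permute(values, src_index):
    """DS_BPERMUTE-like: lane i pulls values[src_index[i]]."""
    return [values[src_index[i] % WAVE] for i in range(WAVE)]

# Example: rotate the wave left by one lane using the backward (pull) form.
vals = list(range(WAVE))
rotated = backward_permute(vals, [(i + 1) % WAVE for i in range(WAVE)])
assert rotated[:4] == [1, 2, 3, 4] and rotated[-1] == 0
```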
 
Hm, I don't really know anything about anything, really, but from a strictly layperson's perspective it seems likely there's no single cause of Vega's power draw. Would scheduling really account for a hundred-watt increase in dissipation?
The speculation that it's the hardware that makes GCN work well with DX12 sounds to me like another case of conflating compile-time scheduling at the warp register-dependence level (Nvidia) with the front-end command-processor level.
DX12 doesn't care about the former, and both Nvidia and AMD have some amount of hardware management of the latter.

The exact amount and specific limitations of the front-end hardware for each vendor and generation isn't strictly clear, but both have processors in their front ends for managing queues, ASIC state, and kernel dispatch.
Going by AMD's command processors, these are simple proprietary cores that fetch command packets and run them against stored microcode programming, putting things on internal queues or updating internal registers. Other processors internally set up kernel launches based on this. These cores do not have the bandwidth, the compute, or the large ISA and multiple hardware pipelines of a CU. From Vega's marketing shot, the command-processor section (~8 blocks?) appears to take up about 1/3 of the centerline of the GPU section.
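As a toy illustration of that split (made-up packet names and fields, nothing like real PM4 packets or AMD's microcode), the front-end job amounts to draining a packet stream into state updates and dispatch-queue entries:

```python
from collections import deque

class ToyCommandProcessor:
    """Toy front-end model: drain command packets into state-register updates
    or kernel-dispatch entries for downstream hardware. Packet names and
    fields are invented for illustration."""
    def __init__(self):
        self.state = {}               # simplified ASIC state registers
        self.dispatch_queue = deque() # launches handed to the shader engines

    def execute(self, packets):
        for op, payload in packets:
            if op == "SET_REG":       # update an internal register
                reg, value = payload
                self.state[reg] = value
            elif op == "DISPATCH":    # queue a kernel launch with current state
                self.dispatch_queue.append({**payload, "state": dict(self.state)})
            # A real command processor handles many more packet types
            # (indirect buffers, fences, queue management, ...).

cp = ToyCommandProcessor()
cp.execute([
    ("SET_REG", ("COMPUTE_DIM_X", 256)),
    ("DISPATCH", {"kernel": "some_kernel", "workgroups": 1024}),
])
print(len(cp.dispatch_queue), "dispatch(es) queued")
```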

We know AMD doesn't have billions and billions to spend, and what money they do have must be shared with console SoC and x86 CPU divisions.
One item to note is that AMD doesn't budget the money paid by its semi-custom clients for engineering costs the same way as its purely in-house R&D. That's part of the point of the semi-custom division, so what is actually spent on developing the IP shared with the consoles is not as straightforward as just looking at the line for what AMD spends out of its own budget.
As such, GCN may be resource-starved, but not strictly as much as it might appear.
However, another potential set of effects may stem from trying to fold architectural features into a mix from multiple partners whose vision and requirements of the base architecture may not align with advancing AMD's own products sufficiently. Then there's the potential cost of having resources diverted towards contradictory or partially backwards-looking directions, and making the hardware general enough that the architecture can be tweaked--potentially in a sub-optimal way for AMD.

As a result, doesn't it seem the chances are fairly high that Vega isn't nearly as efficiently laid out as it could have been, and that much power is spent/lost just on shuffling bits around the die? The hardware units themselves might also be less efficiently designed than those in NV's chips.
GCN also architecturally defines certain elements that move more data, more frequently, than Nvidia's equivalents, such as the difference in warp/wavefront size and things like the write-through L1s.
Further details are somewhat obscured by Nvidia's reluctance to disclose low-level details the way AMD has, but we do know Nvidia has stated there was an ISA revamp after Fermi, a notable change with Maxwell, and Volta apparently has changed the ISA to some extent again. GCN has had some revisions, and flags at least one notable encoding change since SI. I think there are pros and cons to either vendor's level of transparency.
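On the write-through point specifically, here's a toy model of why a write-through L1 pushes more traffic to the next level than a write-back L1 with dirty-line eviction; the store sizes, line sizes, and replacement behavior are all simplifying assumptions, not a description of either vendor's real cache hierarchy:

```python
STORE_BYTES = 4    # assume 4-byte stores
LINE_BYTES = 64    # assume 64-byte cache lines

def write_through_traffic(store_addrs):
    """Every store's data is forwarded to the next level."""
    return len(store_addrs) * STORE_BYTES

def write_back_traffic(store_addrs, l1_lines=256):
    """Dirty lines go to the next level only on eviction (plus a final flush)."""
    dirty, evictions = set(), 0
    for addr in store_addrs:
        line = addr // LINE_BYTES
        if line not in dirty and len(dirty) == l1_lines:
            dirty.pop()        # evict some dirty line (toy model, no real policy)
            evictions += 1
        dirty.add(line)
    return (evictions + len(dirty)) * LINE_BYTES

# 10,000 stores hammering a small working set: the write-through cache moves
# an order of magnitude more data downstream in this toy setup.
stores = [(i * STORE_BYTES) % (64 * LINE_BYTES) for i in range(10_000)]
print(write_through_traffic(stores), "bytes vs", write_back_traffic(stores), "bytes")
```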

Iteration rate alone may weigh against GCN, and AMD has spent resources on a number of custom architecture revisions that do not necessarily advance AMD's main line. The ISA and hardware are kept generic enough, and complex enough, to support or potentially support quirks that no individual instantiation or family of them will actually use, which means there's some incremental reduction in how svelte a given chip can be. And that's not counting how much of this could be improved if the effort and investment were expended--but weren't.

IIRC, Intel has like, hundreds of skilled silicon engineers just for laying out their CPUs (and, GPUs too these days I suppose, heh). No idea how big a team is under Raja, but hundreds of guys would probably be quite an expense. And they typically have to release multiple dies in different performance brackets. So automation is probably out of necessity, not choice.

Die shots of Ryzen, Intel's chips, and others show a lot of those blobby areas that are indicative of automated tools. With advancing nodes and expanding problem spaces, tools have eroded the set of items that humans can do better--or at least enough better. AMD might use automation more, but even Intel is automating most of its blocks as well.

There are still some specific areas, like SRAM or really performance-critical sub-blocks, that can be targeted. AMD even marketed Zen's design team being brought in on Vega's register file, although I wonder if it was the team involved in the L1 or L2/L3, given the use model versus Zen's highly ported 4GHz+ physical register file.
It may also be helped by AMD's aiming to have Vega on the same process as Ryzen for Raven Ridge, but that can also be a case where the physical building blocks are being forced to straddle the needs of a CPU and GPU.

I wonder if maybe AMD spends more effort on reining in power consumption in a chip like Xbox Scorpio's than they do for desktop PCs, seeing as consoles are much more sensitive to heat dissipation than a gaming PC is...?
Console GPUs fit in a niche that doesn't try to beat Nvidia and doesn't try that hard to reach laptop/portable power levels, and they don't care too much about features that might matter to a professional or HPC product. It's a design pipeline and philosophy that apparently does decently as long as it doesn't aim that high or that low.


The very rough first order approximation for power consumption, assuming everything is running on the same clock, is the number of gates spent on something. You can refine it further by the number of toggling gates. And the number of RAM block accesses.
There are also the wires, and GCN defines a lot of them being driven, shuffled, and sent varying distances.
Per AMD, a good chunk of Vega's transistor growth was about driving them.
 
However, another potential set of effects may stem from trying to fold architectural features into a mix from multiple partners whose vision and requirements of the base architecture may not align with advancing AMD's own products sufficiently. Then there's the potential cost of having resources diverted towards contradictory or partially backwards-looking directions, and making the hardware general enough that the architecture can be tweaked--potentially in a sub-optimal way for AMD.

Are you thinking of MS and Sony specifically, or do you have something else in mind? In the former case, what backwards-looking directions do you expect console manufacturers to be interested in?
 
Are you thinking of MS and Sony specifically, or do you have something else in mind? In the former case, what backwards-looking directions do you expect console manufacturers to be interested in?
They're the major part of the semi-custom business. AMD has spent time and resources re-implementing slim variants of two Sea Islands APUs, which is engineering, staff, characterization, production bring-up, fees to Globalfoundries, and other resources spent on hardware that has limited benefit for helping Vega.
For PS4 Pro and Scorpio, I have stated before that I suspect that the low-level compatibility they have with their 28nm ancestors means that they aren't as close to Vega's base IP level as some are assuming. The marketing has been saying they have "Vega features" or "Polaris features", which isn't the same thing as being Vega or overlapping with it in many areas.

Neither Sony's nor Microsoft's consoles encourage a good laptop/embedded GPU, they specifically drag AMD into paying Globalfoundries extra, they don't necessarily share their compilers or tool sets with AMD, and their existing consoles don't need Infinity Fabric, memory better than GDDR5, HBCC, large numbers of VM clients, or Vega's clock ceiling.
 
They're the major part of the semi-custom business. AMD has spent time and resources re-implementing slim variants of two Sea Islands APUs, which is engineering, staff, characterization, production bring-up, fees to Globalfoundries, and other resources spent on hardware that has limited benefit for helping Vega.
For PS4 Pro and Scorpio, I have stated before that I suspect that the low-level compatibility they have with their 28nm ancestors means that they aren't as close to Vega's base IP level as some are assuming. The marketing has been saying they have "Vega features" or "Polaris features", which isn't the same thing as being Vega or overlapping with it in many areas.

Neither Sony's nor Microsoft's consoles encourage a good laptop/embedded GPU, they specifically drag AMD into paying Globalfoundries extra, they don't necessarily share their compilers or tool sets with AMD, and their existing consoles don't need Infinity Fabric, memory better than GDDR5, HBCC, large numbers of VM clients, or Vega's clock ceiling.

Understood, except for the GloFo part. Wouldn't it be a very good thing for AMD to have a steady demand for chips that they can have manufactured by GloFo, in order to meet their WSA targets?
 
Understood, except for the GloFo part. Wouldn't it be a very good thing for AMD to have a steady demand for chips that they can have manufactured by GloFo, in order to meet their WSA targets?
The consoles are fabbed at TSMC. They're the largest, if not the only, current reason for the charges AMD took for the amendment to the WSA allowing the use of other fabs. The most recent agreement also puts an extra charge on each wafer AMD doesn't take at GF, but I'm not sure which classes of chip that provision applies to.
 
The consoles are fabbed at TSMC. They're the largest, if not the only, current reason for the charges AMD took for the amendment to the WSA allowing the use of other fabs. The most recent agreement also puts an extra charge on each wafer AMD doesn't take at GF, but I'm not sure which classes of chip that provision applies to.

Ah, thanks. Well, that raises an obvious question: why not make them at GF? It seems to me that if TSMC's process is better, AMD should use it for chips that face very tough competition (e.g., Vega) as opposed to those that face almost none (namely console semi-custom SoCs).
 
Ah, thanks. Well, that raises an obvious question: why not make them at GF? It seems to me that if TSMC's process is better, AMD should use it for chips that face very tough competition (e.g., Vega) as opposed to those that face almost none (namely console semi-custom SoCs).
Microsoft faces very tough competition from Sony and vice versa. The duopoly isn't even much different from Nvidia vs. AMD in terms of market share, for example.

The key difference here is that Microsoft and Sony get a lot more revenue from their console business (hardware + high-margin peripherals + software + licensing) than AMD gets from discrete GPUs for the PC. Their ability to pay the extra to manufacture their SoCs at TSMC is therefore much higher.
 
Ah, thanks. Well, that raises an obvious question: why not make them at GF? It seems to me that if TSMC's process is better, AMD should use it for chips that face very tough competition (e.g., Vega) as opposed to those that face almost none (namely console semi-custom SoCs).
It could be a mix of various things: TSMC's process being at least somewhat better, TSMC being more rapid with node transitions, GF's less-than-stellar reputation, GF's lower volume, and potential questions as to which processes Sea Islands and Jaguar can be ported to, and at what cost.

It was leaked that Sony almost went with Steamroller for the PS4, and rumor was that it was manufacturing risk concerns that caused the switch to TSMC and necessitated the switch to the more portable Jaguar.

Also, if Vega has physical tweaks like its register file borrowing from the GF-only Zen core, porting that to TSMC may be questionable.
The console makers also have volume requirements and likely have contracted prohibitive penalties for not meeting them, and there's less motivation to have those cheap chips fighting for volume with Zen products at GF. Vega probably can't be as large a competitor for volume, and it does seem to be constrained even further at this point.

One item I have been pondering, and I'm not sure if it's this thread or the Vega thread to discuss, is whether the immaturity of Vega and its difference from some of the Greenland rumors indicates something did get blown up or split apart in terms of the hardware blocks used and what nodes/foundries might have been in play until near the end.
Greenland is supposedly Vega, but Vega does not have the GMI or DP capabilities rumored for it in the past. That might explain why some of the software seems as if it didn't have a more final design to target until relatively recently (edit: recently in terms of the 4-5 year development cycle, and why there was seemingly limited benefit to developing in advance of the final tapeout in late 2016).
 
The key difference here is that Microsoft and Sony get a lot more revenue from their console business (hardware + high-margin peripherals + software + licensing) than AMD gets from discrete GPUs for the PC. Their ability to pay the extra to manufacture their SoCs at TSMC is therefore much higher.
As costs associated with manufacturing, the extra payments that are part of the WSA are not strictly the concern of MS or Sony. At some level, AMD may have factored the additional costs into the price schedules it negotiated. The timelines of the various negotiations and their terms are not public; for example, what went into the schedule for Neo predates the announcement of the WSA amendment.

The customers don't necessarily have to care about the foundry used, just as they don't have direct exposure to changes in manufacturing costs due to the contracted payment schedule. Given the risk impact of such a change they might have some kind of additional provision just in case, or they could just not be willing to pay the porting costs that could make it an unprofitable move for AMD.

I thought AMD was obligated to produce their chips at GloFo? And that was why Vega was made there.
AMD could pay extra to fab it elsewhere, if they wanted to. The question was why something that in theory could earn more money per chip, like Vega, is kept at GF, while the consoles, which pull in much less per unit, incur that cost instead.
 
Maybe yields are bad, so they lose less money with GF?

I'm not clear on how one would know that yields are bad before investing in an implementation and tape-out on two different nodes, outside of making an educated guess. If I were to bet on a manufacturer, I wouldn't bet on the one with GF's history and late process as the one more likely to be better.
Vega should have a decent amount of yield recovery, with at least two salvage SKUs. The consoles are the ones where AMD would likely have less leeway with yields and volume, and falling short on fulfilling promises to Sony or Microsoft is probably contracted to be very painful.
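For a rough sense of how die size, defect density, and salvage SKUs interact, here's a sketch using the classic Poisson yield model; the die area, defect density, and recovery fraction are assumed illustrative values, not GF or TSMC data:

```python
import math

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Classic Poisson model: fraction of dies with zero random defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

AREA_MM2 = 480.0   # assumed die area in the Vega-10 ballpark
D0 = 0.002         # defects per mm^2 -- assumed example value

fully_good = poisson_yield(AREA_MM2, D0)

# A salvage SKU (fused-off CUs, lower clocks) recovers dies whose defects land
# in disableable blocks; assume, purely for illustration, half of them qualify.
salvaged = 0.5 * (1.0 - fully_good)
print(f"fully good: {fully_good:.1%}, sellable incl. salvage: {fully_good + salvaged:.1%}")
```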
 