AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

D3D12 for me atm boils down to this: if I see the intro logo corresponding to my installed graphics card, it's worth a try. More regularly with AMD than with Nvidia, and with NV more regularly on Pascal than on Maxwell.

I don't see many rosy things right now about D3D12, given that more than one game needed more than one patch to get performance to where you'd expect it, and some games still struggle to provide a reliably enjoyable experience in D3D12 compared to D3D11, even if the average FPS achieved in benchmarks is higher than with the older API.
 
The subject is "NVidia's long term failure to get D3D12 working well on its D3D12 GPUs".
What is the non anecdotal evidence for this though?
I don't see how "the D3D12 version of the game runs slower than the D3D11 version of that same game" can be evidence of that claim. We don't have any insight into what these games are actually doing or how they are using the D3D12 API. And there are some conflicting guidelines from both IHVs about how to use D3D12 on their hardware. I think the agreed consensus is that NV's D3D11 driver is pretty damn quick (or at least good at jumping through hoops). I think it's also agreed that D3D12 is lower level, so more of the blame gets shifted to the developers and it gets harder for the driver to work behind developers' backs. And while there were some driver releases from NV that claimed perf boosts for some D3D12 titles, I think there were far more straight-out game patches that dealt with D3D12 performance woes. I think it's also agreed that jumping through hoops with D3D12 in the driver would be both harder (because it's a lower-level API) and unwanted (again: because it's a lower-level API).
 
NVidia has been failing at D3D12 since Maxwell launched as far as I can tell. Years of fail.
Ok, so 3 years of NV failing upwards. Their PC market share has essentially never been larger than it is right now. "If that's failing then bring me seconds!", I believe Jen-Hsun was quoted as saying in response...
 
...


I'm mystified how AMD thinks it's going to survive in gaming graphics over the next 3 years, when it appears to be about 2 years behind now and likely to add another year to its deficit within the next 9 months. I think Vega is the final nail in the coffin. Maybe AMD will surprise me.

...

That's the big question. I know all the cards are selling right now, but yeah, the gap with nVidia is getting bigger and bigger, especially in perf per watt. In the commercial slides, Vega was shown as an improvement over Polaris in that respect, but it's not the case at all... Maybe GloFo 14nm is crap and 7nm will solve a lot of things, but... I really think RTG is understaffed, quantity- and quality-wise, and they don't have deep enough pockets to change that...
 
Have they done so yet? No, because they're supply limited, so they can't.
With financials beating estimates, the first Q2 sales increase in 8 years, and still no product on shelves, I wouldn't call supply that limited. Obviously insufficient in the face of current demand, but not because they aren't making more cards. How that applies to Vega is unknown without reliable figures. The only safe conclusion is that Vega is being sold or stockpiled as fast as it is currently produced, not unlike Polaris.

So when something at NV does not perform perfectly out of the box, it is broken.
When something does not work out of the box at AMD, we are waiting for AMD to unlock the hidden potential?
Everyone knew from the start Nvidia lacked the hardware to properly accelerate DX12 though. With Vega the missing features are well documented.

Packed math being used in consoles tells us jack about how it is going to be used on PC. This is a repackaged argument from the past, when NVidia was supposedly screwed because AMD had a monopoly in consoles, so all cross-platform games would run better on AMD hardware. We all know how that turned out: quite the opposite, with NVidia gaining market share and expanding its performance lead, particularly in perf/watt, while AMD all but lost its share in discrete laptop GPUs. It's useful to remember that although the PS4 Pro has packed math, the Xbox One X doesn't, while the latter is the one closer to DirectX, the API mostly used in PC games. So that AMD advantage might be limited to specific cases where OpenGL or Vulkan is used.
Packed math on any platform is used much the same way with similar results. The algorithms don't change. And yes consoles made a difference. We have new low level APIs being adopted on PC that are problematic with Nvidia hardware. Those problems are the real holdup in designing the new engines that are required and we've yet to really see games designed around it. In part because the Nvidia base can't run it efficiently.
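For anyone unfamiliar with the term: packed math (Vega's "Rapid Packed Math") just means executing two FP16 operations in the slot of one FP32 operation, with both half-precision values living in one 32-bit register. A rough Python sketch of the concept (my own illustration using numpy for FP16 semantics, nothing to do with any actual GPU ISA):

```python
import numpy as np

def pack(lo, hi):
    """Pack two FP16 values into one 32-bit word (lo in bits 0-15)."""
    lo_bits = int(np.array(lo, dtype=np.float16).view(np.uint16))
    hi_bits = int(np.array(hi, dtype=np.float16).view(np.uint16))
    return (hi_bits << 16) | lo_bits

def unpack(word):
    """Split a 32-bit word back into its two FP16 lanes."""
    lo = np.array(word & 0xFFFF, dtype=np.uint16).view(np.float16)
    hi = np.array(word >> 16, dtype=np.uint16).view(np.float16)
    return float(lo), float(hi)

def packed_add(a, b):
    """One 'packed' op: adds both FP16 lanes of two 32-bit words at once.
    In hardware this is a single instruction, hence the 2x FP16 rate."""
    a_lo, a_hi = unpack(a)
    b_lo, b_hi = unpack(b)
    return pack(a_lo + b_lo, a_hi + b_hi)

a = pack(1.5, 2.0)
b = pack(0.5, 3.0)
print(unpack(packed_add(a, b)))  # (2.0, 5.0)
```

The algorithmic point stands regardless of platform: wherever a shader can tolerate FP16 precision, two values ride in one register and one ALU slot, which is why the same tricks carry over between console and PC.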

RPM being available only in AMD's top-of-the-line cards, with its market-penetration implications, would not help rapid adoption of this feature, would it? They removed FP16 support from their most recent drivers for all but Vega.
In the short term no, but support isn't that difficult and there is an upside of future sales. Ryzen Mobile, and future products, will support RPM with a larger market. Not all that different from Skyrim still being ported to mobile platforms like Switch. Even without RPM, the packed registers are beneficial to a wider range of hardware.

Ok, so 3 years of NV failing upwards. Their PC market share has essentially never been larger than it is right now. "If that's failing then bring me seconds!", I believe Jen-Hsun was quoted as saying in response...
They've been losing share to Intel for some time. With any uptake of Ryzen Mobile, AMD will pull well ahead in graphics. Going off the latest JPR numbers, Intel is ~70% of the market. If Ryzen manages even a third of the market, AMD will have twice Nvidia's share. So I wouldn't say their share has never been larger. Even in discrete they've been losing ground since Polaris launched.
 
But that doesn't prevent me making the observation that all is not rosy in NVidia's drivers with high profile, highly demanding games, some of which have D3D12 API usage.

The "NVidia's drivers are always excellent from day one" meme doesn't belong around here. Yet for some reason it's tolerated.
Nvidia's GPUs have played above their TFLOPS weight compared to AMD in DX11 for years. If Nvidia is close to the theoretical limit of what can be achieved with DX11 and AMD is not (for whatever unknown reason), then it's much easier for AMD to pull a DX12 rabbit out of its hat and claim how much of an improvement DX12 is compared to DX11.

I'm not challenging the fact that DX12 is better in some cases, but I do question if it's inherently as much better as the perf improvements seen on AMD GPUs. If AMD is just bad at DX11, and Nvidia great, then it's impossible for Nvidia to achieve the same kind of improvement. And it's always going to be a disappointment in the eyes of some.

And then you add the fact that AMD has some additional DX12 features that Nvidia lacks, and it becomes really hard not to be seen as failing at DX12.

Sometimes, no amount of driver excellence can work around a HW weakness.

Maybe driver excellence is what allows their DX12 to be as close to DX11 as it is right now. :)
 
And then you add the fact that AMD has some additional DX12 features that Nvidia lacks, and it becomes really hard not to be seen as failing at DX12.
And even if that magical feature were implemented perfectly in NV hardware, just how many flops would be left on the table?

And even turning that magical thingy off in, say, Ashes of the Singularity, AMD will still gain loads by switching to DX12, for reasons I would think should be obvious but clearly are not.
 
Yet NV's overall DX12 performance per flop/watt/transistor is still comfortably ahead of AMD's.

Must be due to some magical drivers then.
And you believe any of those metrics have any significance? The issue is not being able to run the desirable paths in the first place. The uplift isn't there because engines have to be designed around Nvidia's inability to run it. If you think drivers are magical you clearly don't understand what is going on. It's not rocket science either.
 
So when we discuss how long it might take for AMD to sort out its drivers for Vega, we need to remember that NVidia has struggled to sort out its driver for Pascal D3D12. Maxwell D3D12 is such a tedious subject I'm not sure anyone can be bothered with it.
The Pascal situation is a bit better for NVIDIA, because NV is competitive with Vega there in Hitman, Ashes of the Singularity and Sniper Elite 4, three games where Fiji previously led Maxwell. Now Pascal and Vega are neck and neck in them.

At any rate, the situation with DX12 has slowed to a crawl: 2.5 years in, only 16 games supported, and only 2 released in 2017. Adoption is abysmal, worse even than the short-lived DX10.
 
And you believe any of those metrics have any significance?

Those metrics have a lot of significance to AMD's bottom line, which affects R&D spending, which affects future products. Specifically, AMD was not able to devote enough software development (part of R&D) to Vega to enable several major functions on the chip at launch.
 
And you believe any of those metrics have any significance?
They absolutely, indisputably do. Number one and three have a direct impact on AMD's profit margin/bottom line, and number two affects OEMs' desire to buy and incorporate the product into their machines, thus affecting AMDs bottom line again.

The issue is not being able to run the desirable paths in the first place.
Wut? What DX12 game cannot be run on NV cards??? You crazy.
 
So when we discuss how long it might take for AMD to sort out its drivers for Vega, we need to remember that NVidia has struggled to sort out its driver for Pascal D3D12. Maxwell D3D12 is such a tedious subject I'm not sure anyone can be bothered with it.

Concurrent Graphics and Compute queue worked on Pascal from day one.

And I don't see how nVidia can "sort out its driver for Pascal D3D12" when most of the work happens on the software side. Look at Ryzen + GeForce: Nixxes fixed the performance problem in Rise of the Tomb Raider with an application patch...
 
Does this patent that @Anarchist4000 linked a while back have anything to do with the "programmable" features in Vega? Or maybe something that can be added to a SoC with either Vega or Zen (or both)? Vega has a lot of SRAM, right?

Computer architecture using rapidly reconfigurable circuits and high-bandwidth memory interfaces

A programmable device comprises one or more programming regions, each comprising a plurality of configurable logic blocks, where each of the plurality of configurable logic blocks is selectively connectable to any other configurable logic block via a programmable interconnect fabric. The programmable device further comprises configuration logic configured to, in response to an instruction in an instruction stream, reconfigure hardware in one or more of the configurable logic blocks in a programming region independently from any of the other programming regions.

One embodiment of a programmable device is a Field-Programmable Gate Array (FPGA) having multiple configuration domains that can be reconfigured in parallel and independently of each other based on instructions in an instruction stream. Configuration data for each of the multiple configuration domains can be stored in three-dimensional (3D) stacked memory, which provides high-bandwidth access to the configuration data. The partitioning of the programmable logic in the device in conjunction with the high memory bandwidth allows for reconfiguration of the programmable logic within a few clock cycles, allowing for flexible pipelines that can be reconfigured to accommodate different types of instructions.

In an FPGA, the logic blocks can include elements such as lookup tables (LUTs) and other fixed functions that are programmed by inserting values into small Static Random Access Memories (SRAMs) or registers.

The implementation of flexible pipelines allows for greater flexibility in instruction scheduling, in contrast with fixed function processing pipelines, since any pipeline can be reconfigured to execute any of multiple types of instructions without interrupting execution of the instruction stream. In such a system, different threads executing different functions can be scheduled in a single cycle across multiple execution lanes.
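The LUT detail the patent mentions is easy to make concrete: a k-input lookup table is just a 2^k-entry configuration SRAM, and "programming" the logic block means writing a truth table into that SRAM. A toy Python model of a 4-input LUT (my own illustration, not taken from the patent):

```python
class LUT4:
    """Toy model of a 4-input FPGA lookup table: a 16-entry 'SRAM'
    whose contents define which boolean function the block computes."""

    def __init__(self):
        self.sram = [0] * 16  # configuration memory, all zeros until programmed

    def configure(self, func):
        # "Programming" = writing the function's truth table into the SRAM.
        self.sram = [func(a, b, c, d) & 1
                     for a in (0, 1) for b in (0, 1)
                     for c in (0, 1) for d in (0, 1)]

    def evaluate(self, a, b, c, d):
        # The inputs just select one SRAM entry -- pure table lookup, no gates.
        return self.sram[a * 8 + b * 4 + c * 2 + d]

lut = LUT4()
lut.configure(lambda a, b, c, d: a ^ b ^ c ^ d)  # program it as a 4-input XOR
print(lut.evaluate(1, 1, 0, 1))  # 1
# Rewrite the SRAM and the same hardware becomes a 4-input AND -- the
# per-region, mid-stream reconfiguration the patent describes is this
# rewrite done independently per programming region, within a few cycles.
lut.configure(lambda a, b, c, d: a & b & c & d)
print(lut.evaluate(1, 1, 1, 1))  # 1
```

The patent's angle, as I read it, is feeding those SRAM rewrites from 3D-stacked memory at high bandwidth so reconfiguration is fast enough to happen per-instruction rather than at startup.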

I know basically nothing about this stuff... Just saying.
 
Does this patent that @Anarchist4000 linked a while back have anything to do with the "programmable" features in Vega? Or maybe something that can be added to a SoC with either Vega or Zen (or both)? Vega has a lot of SRAM, right?

I skimmed some of the claims, and it seems to cover an FPGA that is able to store custom configurations that it can reprogram itself to use on the fly as part of its processing of an instruction stream, rather than requiring starting up in an initial programming mode that must be completed in advance of any execution in the primary mode of operation.
This FPGA is also part of a die stack.

It's potentially the case that such a device could be included in an AMD SOC. It doesn't seem like Vega's features require this, and the physical description seems to indicate this is not relevant.
 
In the short term no, but support isn't that difficult and there is an upside of future sales. Ryzen Mobile, and future products, will support RPM with a larger market. Not all that different from Skyrim still being ported to mobile platforms like Switch. Even without RPM, the packed registers are beneficial to a wider range of hardware.
Oh yes, they surely would be. Unfortunately, as long as it is arbitrarily disabled in the drivers, developers cannot count on there being an installed base, so those who are ruled by their CFOs will think twice about using it. There needs to be consistent driver support, even if it is only beneficial in terms of register space saved.
 