> You obviously more so than me, for example, given that you mean HBM and G5 and not G5 and D3. Do you have any link where I can enlighten myself?

Tonga and Fiji are both GCN3?
> I must have missed this as well, can you provide a link with an explanation?

That's where I read something in that direction (I hope you know who the author of this post is).
You also need a product on the shelves to sell. It's like AMD's DX12 lead: it was meaningless because they couldn't capitalize on it before Nvidia had better support for it. AMD's feature set can be great on paper, but if Nvidia brute-forces things at a quicker rate, then I don't see how Nvidia isn't still ahead at that point.
> That's where I read something in that direction (I hope you know who the author of this post is).

Thank you, but no. Ex-Coda?
> Why would Vega be late when they skipped releasing a high-end Polaris?

Shouldn't have any effect. The driver work will be for the architecture, and Polaris exists; different product tiers are comparatively simple. In this case I'm saying they chopped the board development costs of the 490 in favor of Threadripper, as an example.
So it's all conjecture at this point.
> Thank you - even though I'd love to see and follow that informed discussion myself, I hope you will relay important updates to us normal people here.

Nothing like that, just not quite meeting my standards to share. Sent you a link though, but Gipsel's link should cover it. A bit light on hard evidence atm.
> If Vega is on time and its drivers are almost ready but just need another month of development, then why did AMD release Vega today instead of just waiting a month to do it properly and without the PR nightmare?

There are uses beyond graphics AMD likely feels are important.
@CarstenS: I am not sure I can follow his assertion here. He obviously comes from a Vulkan background, where maxPerStageDescriptorUniformBuffers is limited to 12 on Nvidia hardware, as far as I could google quickly. In DirectX, however, both Wikipedia and Microsoft talk of a limit of 14 - which is in place for Tier 2 already, which itself is a prerequisite for DX12 FL12_0. So either Nvidia is emulating that already (and is seemingly doing OK with regard to performance), or the limit of 12 is some kind of artificial driver limit for Vulkan.
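For reference, the Vulkan side is easy to check directly - a minimal sketch, assuming you already have a VkPhysicalDevice handle from vkEnumeratePhysicalDevices:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>

// Prints the per-stage UBO limit the driver reports.
// `gpu` is assumed to come from vkEnumeratePhysicalDevices.
void printUboLimit(VkPhysicalDevice gpu) {
    VkPhysicalDeviceProperties props;
    vkGetPhysicalDeviceProperties(gpu, &props);
    // Nvidia drivers have commonly reported 12 here; the question above is
    // whether that is a hardware limit or an artificial driver limit.
    printf("%s: maxPerStageDescriptorUniformBuffers = %u\n",
           props.deviceName,
           props.limits.maxPerStageDescriptorUniformBuffers);
}
```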
Yes.
> Nothing like that, just not quite meeting my standards to share. Sent you a link though, but Gipsel's link should cover it. A bit light on hard evidence atm.

Thanks for the PM link!
> There are uses beyond graphics AMD likely feels are important.

What non-graphical uses not only do not require decently working drivers but are also important enough to justify the PR nightmare and the tainting of the public's perception of Vega?
> What non-graphical uses not only do not require decently working drivers but are also important enough to justify the PR nightmare and the tainting of the public's perception of Vega?

All the deep learning and HPC stuff on Linux with the ROCm stack that would also involve Epyc, Instinct, SSG, etc. - the really high-margin stuff that may also sell their CPUs, which AMD has been chasing.
> If Vega is on time and its drivers are almost ready but just need another month of development, then why did AMD release Vega today instead of just waiting a month to do it properly and without the PR nightmare?

'Cuz that's not how AMD's PR works.
> Vega's superior (DX12) feature set doesn't help them. Maxwell and Pascal support everything, too. Pushing, for example, Tiled Resources Tier 3 will put pre-Vega GCN at a disadvantage...

Supporting is one thing; actually gaining performance and programming benefits is another. The obvious example is the increasing use of asynchronous tasks, which GCN excels at, while Maxwell actually loses performance and Pascal sees only small gains. Having the driver return flags for a feature and emulating it through brute force will just mean developers won't use it.
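That's the crux of cap bits: a query only tells you the tier the driver exposes, nothing about how fast it runs. A minimal sketch of such a query in D3D12, assuming an already-created ID3D12Device:

```cpp
#include <windows.h>
#include <d3d12.h>

// Ask the driver which Tiled Resources tier it exposes. A reported tier
// only says the feature is present - it says nothing about whether it is
// fast, native, or emulated on a given architecture.
D3D12_TILED_RESOURCES_TIER QueryTiledResourcesTier(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options)))) {
        return options.TiledResourcesTier;
    }
    return D3D12_TILED_RESOURCES_TIER_NOT_SUPPORTED;
}
```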
> All the deep learning and HPC stuff on Linux with the ROCm stack that would also involve Epyc, Instinct, SSG, etc. - the really high-margin stuff that may also sell their CPUs, which AMD has been chasing.

Currently those areas belong to Nvidia. It makes no sense to damage the card's gaming reputation (its bread and butter) because of areas where AMD isn't yet present anyway, and all because they really didn't want to simply wait another month for decent drivers. That's a flawed proposition.
> ... time isn't on their side when it comes to breaking into those markets.

Yeah, I guess The Pioneers will enjoy their one-month head start.
> Albeit it has to be said that the "shade once" part (i.e., HSR) cannot work with the UAV counter used in this code (as that's certainly a side effect which would change the rendered result). But that should work independently from the actual binning part, I'd hope...

Haven't looked at the test code, but I believe the key things that need to be considered are the following:
> Workstation performance is as bad as gaming performance. Getting beaten by a GTX 1080 with Quadro drivers doesn't look better...

The link you have posted shows 10 graphs comparing Vega FE and P5000 results. Vega FE is faster than the P5000 in half of them, so both cards are very comparable. Try comparing price/performance.
> Before drawing conclusions, we need to understand whether HSR is allowed for this test application. Tiling itself shouldn't be a problem, as UAV order is not guaranteed (and even ROVs only guarantee order per pixel == only local tiled order matters, global ordering is not required).

Exactly, that was more or less the general conclusion: tiling/binning should work in that test (if not, something is being too conservative and disabling it because of the UAV), while HSR should be disabled.
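To make the UAV point concrete: the core of such a binning probe is a pixel shader whose atomic counter bump is an observable side effect. A hypothetical sketch of the idea (not the actual test code; names are made up), with the HLSL embedded as a C++ string:

```cpp
// Hypothetical reconstruction of the kind of pixel shader such a binning
// test uses. The InterlockedAdd is an observable side effect: HSR could
// not cull any invocation without changing uavCounter, so "shade once"
// must stay off. Binning only *reorders* invocations, and UAV write order
// is not guaranteed anyway, so tiling remains legal.
static const char* kBinningProbePS = R"hlsl(
RWStructuredBuffer<uint> uavCounter : register(u1);

float4 main(float4 pos : SV_Position) : SV_Target
{
    uint order;
    InterlockedAdd(uavCounter[0], 1, order);  // count every shaded fragment
    // Color by counter value so the rasterization/binning order is visible.
    return float4(float(order & 0xFF) / 255.0, 0.0, 0.0, 1.0);
}
)hlsl";
```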