Yes, but I am talking exclusively about AoTS, in the context of the previous posts about the current benchmark.

It's changed; it's definitely active now on Maxwell 2 cards. I don't know if it's the same path as AMD's, but it's turned on by default.
Actually, it's not just software. You need a hardware implementation (the way you encode the colors, the brightness, etc.). AMD has practical support for HDR in Fury and, I think (not sure), in the 380 (games and pictures, but not video, since the standards for HDR video weren't complete at the time).
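To put some substance on "the way you encode the colors, the brightness": HDR10 signals use the SMPTE ST 2084 "PQ" transfer function instead of a plain gamma curve, and that encoding is the part the display pipeline has to understand. A minimal sketch of the encode side with the constants from the standard (illustration only; PqEncode is just a made-up helper, not any vendor's implementation):

#include <cmath>

// SMPTE ST 2084 (PQ) encode: maps absolute luminance in nits (0..10000 cd/m^2)
// to a normalized signal value, which HDR10 then quantizes to 10 bits.
double PqEncode(double nits)
{
    const double m1 = 2610.0 / 16384.0;         // ~0.1593
    const double m2 = 2523.0 / 4096.0 * 128.0;  // ~78.84
    const double c1 = 3424.0 / 4096.0;          // ~0.8359
    const double c2 = 2413.0 / 4096.0 * 32.0;   // ~18.85
    const double c3 = 2392.0 / 4096.0 * 32.0;   // ~18.69

    const double y  = nits / 10000.0;           // normalize to [0,1]
    const double yp = std::pow(y, m1);
    return std::pow((c1 + c2 * yp) / (1.0 + c3 * yp), m2);
}

// For example, 100-nit "SDR white" encodes to roughly 0.508, i.e. around
// code 520 out of 1023 in a 10-bit signal.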
NP, it's also my style of writing.

Ah, sorry, misunderstood.
They didn't remove the DX12 path, they only removed async compute from NVIDIA's DX12 path at one point.

If you are talking about AoTS, do you remember where you saw them removing that DX12 path to disable async compute for NVIDIA?
Cheers
I am specifically talking about it as it was in my OP and the DX12 rendering path...

They didn't remove the DX12 path, they only removed async compute from NVIDIA's DX12 path at one point.
Nov 2015 said:
Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only 'vendor' specific code is for Nvidia where we had to shut down async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia does fall into Tier 2 class binding hardware instead of Tier 3 like AMD which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor specific path, as it's responding to capabilities the driver reports.
From our perspective, one of the surprising things about the results is just how good Nvidia's DX11 perf is. But that's a very recent development, with huge CPU perf improvements over the last month. Still, DX12 CPU overhead is still far far better on Nvidia, and we haven't even tuned it as much as DX11.
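For anyone wondering what "look at the Vendor ID" and the Tier 2 vs Tier 3 binding difference actually amount to in code, it is roughly the following in D3D12 (a generic sketch, not Oxide's code; InspectAdapter is just an illustrative helper):

#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>

// Generic sketch of the two checks mentioned above: read the adapter's PCI
// vendor ID, and query which resource binding tier the device reports.
void InspectAdapter(IDXGIAdapter1* adapter, ID3D12Device* device)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    const bool isNvidia = (desc.VendorId == 0x10DE);  // AMD 0x1002, Intel 0x8086
    (void)isNvidia;  // an engine could branch its rendering path on this

    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));
    // Per the post above: Maxwell reports D3D12_RESOURCE_BINDING_TIER_2,
    // GCN reports TIER_3; Tier 2 means a bit more CPU-side descriptor work.
    const D3D12_RESOURCE_BINDING_TIER tier = options.ResourceBindingTier;
    (void)tier;
}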
Mid Feb 2016 said:
Async compute is currently forcibly disabled on public builds of Ashes for NV hardware. Whatever performance changes you are seeing driver to driver don't have anything to do with async compute.
I can confirm that the latest shipping DX12 drivers from NV do support async compute. You'd have to ask NV how specifically it is implemented.
Oh yeah, all the R9 300 series have partial support for HDR; checked in this Anandtech article: http://www.anandtech.com/show/9836/amd-unveils-2016-vistech-roadmap/3

Robert Hallock stated it would become available on current 300-series graphics cards, not just the 380.
NVIDIA had at least at some point disabled 10-bit support on GeForces while it was enabled on Quadros.

AFAIK all current GPUs can output r10g10b10a2 and r11g11b10... And since there is no fixed and well-defined standard, as MS suggested at the last GDC conference, you can do it all manually (rough sketch below)...
edit: the only issue is the DisplayPort/HDMI support...
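Here is the rough sketch referred to above: on the API side, "doing it manually" starts with simply asking for a 10-bit back buffer. Minimal DXGI example, assuming an existing D3D12 command queue and window handle (CreateTenBitSwapChain is only an illustrative helper):

#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>

// Minimal sketch: request a 10-bit-per-channel back buffer. Whether that
// precision actually reaches the panel depends on the driver and on the
// DisplayPort/HDMI link, which is the issue mentioned in the edit above.
HRESULT CreateTenBitSwapChain(IDXGIFactory4* factory,
                              ID3D12CommandQueue* queue,  // for D3D12, pass the queue
                              HWND window,
                              IDXGISwapChain1** outSwapChain)
{
    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Width = 0;                               // 0 = use the window's size
    desc.Height = 0;
    desc.Format = DXGI_FORMAT_R10G10B10A2_UNORM;  // 10-bit RGB, 2-bit alpha
    desc.SampleDesc.Count = 1;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 2;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;

    return factory->CreateSwapChainForHwnd(queue, window, &desc,
                                           nullptr, nullptr, outSwapChain);
}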
But not by disabling the corresponding texture formats, only by limiting what formats they allow on the HDMI/DVI/DP link. The rest is just software, and possibly the driver applying some additional tone mapping to ensure that legacy applications are not accidentally using the entire dynamic range.

NVIDIA had at least at some point disabled 10-bit support on GeForces while it was enabled on Quadros.
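On the tone-mapping point: nothing exotic is required to keep legacy output inside the old range; a curve in the spirit of the classic Reinhard operator already does it (illustration only, not a claim about what any driver actually applies):

#include <algorithm>

// Illustration only: a Reinhard-style curve that compresses an arbitrarily
// bright linear HDR value into the [0,1] range a legacy SDR signal expects,
// so content cannot accidentally exceed the old dynamic range.
float ToneMapReinhard(float hdr)  // hdr >= 0, linear light
{
    const float mapped = hdr / (1.0f + hdr);  // 0 -> 0, very bright -> ~1
    return std::max(0.0f, std::min(mapped, 1.0f));
}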
I am not talking about a proprietary OGL extension to bypass DWM output... In windowed mode, only Microsoft can do something (i.e., allow a 10/11-bit mode in the compositor, which is going to come in Redstone or in Redstone 2).

NVIDIA had at least at some point disabled 10-bit support on GeForces while it was enabled on Quadros.
There is some slack there, although given the rumored 100 mm^2 or so smaller die area, power density concerns may limit how much the silicon could be pushed to position Polaris 10 against the 1070, besides unknowns about where GCN's preferred clock range might land at this node.
Given how Polaris 10 is confirmed to be the higher-performance part, I'm sure it's supposed to be a ~230 mm^2 chip. I haven't heard of a <100 mm^2 GPU, not even the lower-end Polaris 11, but it could be that...
No, it isn't just 10 bpc, it's much, much more than that. I suggest you read more about it.

Regarding this HDR thing - that's just 10 bpc output, right? I've run 10 bpc on a Radeon 6950 and a GTX 970. I have a BenQ BL3200PT monitor.
I experimented a bit with Alien Isolation's deep color setting. I really couldn't see anything different. I assume higher color depth should reduce banding problems.
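For what it's worth, the banding expectation is simple arithmetic: 10 bpc gives four times as many levels per channel as 8 bpc, so quantization steps on a gradient are four times finer, provided the whole path (game, OS, link, panel) really stays at 10 bits. A quick check (tiny standalone snippet, nothing specific to any game):

#include <cstdio>

int main()
{
    // Levels per channel at 8 vs 10 bits per channel.
    const int levels8  = 1 << 8;    // 256
    const int levels10 = 1 << 10;   // 1024
    std::printf("8 bpc: %d levels, 10 bpc: %d levels (%dx finer steps)\n",
                levels8, levels10, levels10 / levels8);
    return 0;
}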