A standard HLSL effect should work, but GameWorks is a "checkbox". The devs don't have the source code. Maybe they will use HDAO for Mantle.

Mantle uses HLSL, so if it's a standard HLSL effect it should work, right?
HBAO has been described enough in the original paper and follow-up tech reports for someone to implement it themselves. In fact, haven't folks already done this? The GameWorks implementation would just be for convenience; it's hardly required.
Furthermore, there are reasonable alternatives if someone doesn't want to implement it for some reason.
Yeah but they describe HBAO+ a fair bit (interleaved rendering + IIRC some temporal reprojection)... it's not hard to understand what's going on.

NVIDIA at least makes a clear distinction between HBAO and HBAO+.
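Since the papers are public, here is a very rough CPU-side sketch of the core HBAO idea, in C++ rather than HLSL, just to show how little magic is involved. Everything about it is an assumption for illustration: the position-buffer layout, the flat-tangent-plane simplification (the published algorithm uses per-pixel normals to get the tangent angle), the step counts and the falloff. It is not NVIDIA's HBAO+ code, which we don't have, and it omits the interleaved rendering and temporal pieces mentioned above.

```cpp
// Rough illustrative sketch of basic horizon-based AO over a view-space position
// buffer. Assumes +z points toward the camera and the surface faces the viewer
// (flat tangent plane) -- a big simplification of the published algorithm.
#include <algorithm>
#include <cmath>
#include <vector>

struct ViewPos { float x, y, z; };  // view-space position of each pixel

float hbaoAt(const std::vector<ViewPos>& pos, int w, int h, int px, int py,
             int numDirs = 8, int numSteps = 6, float radius = 0.5f)
{
    const ViewPos p = pos[py * w + px];
    float occlusion = 0.0f;
    for (int d = 0; d < numDirs; ++d) {
        float ang = 6.2831853f * float(d) / float(numDirs);  // screen-space march direction
        float dx = std::cos(ang), dy = std::sin(ang);
        float maxSin = 0.0f;  // sine of the highest horizon angle found so far
        for (int s = 1; s <= numSteps; ++s) {
            int sx = px + int(dx * float(2 * s));
            int sy = py + int(dy * float(2 * s));
            if (sx < 0 || sy < 0 || sx >= w || sy >= h) break;
            const ViewPos q = pos[sy * w + sx];
            float hx = q.x - p.x, hy = q.y - p.y, hz = q.z - p.z;
            float len = std::sqrt(hx * hx + hy * hy + hz * hz);
            if (len < 1e-4f || len > radius) continue;
            float sinH = hz / len;  // elevation of the sample above the (flat) tangent plane
            if (sinH > maxSin) {
                float falloff = 1.0f - (len / radius) * (len / radius);
                occlusion += (sinH - maxSin) * falloff;  // only the *new* horizon rise occludes
                maxSin = sinH;
            }
        }
    }
    return std::max(0.0f, 1.0f - occlusion / float(numDirs));  // 1 = unoccluded
}
```

A real GPU version would of course live in a pixel or compute shader and reconstruct positions from the depth buffer, but the overall structure is the same.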
1. What is important about Mantle?
Because Mantle is so new, and so different, the development cost is higher than normal. In order to understand why it’s worth it, you need to understand just how important Mantle is.
2. What does Mantle Buy You?
Simply put, Mantle is the most advanced and powerful graphics API in existence. It provides essentially the same feature set as DX11 or OpenGL, and does so at considerably lower runtime cost.
[...]
Civilization, it turns out, requires a significant amount of rendering to generate our view of the world, and that in turn means we are required to make many, many more draw calls than you might expect.
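To make "more than you might expect" concrete, here is some back-of-the-envelope arithmetic. The numbers are invented for illustration and are not Oxide's; the point is only how quickly a tile-based strategy view multiplies out.

```cpp
// Invented, illustrative numbers only -- nothing here is taken from Civilization's
// actual renderer. The point is just how quickly per-tile and per-unit work adds up.
#include <cstdio>

int main() {
    const int visibleTiles  = 40 * 25;  // hypothetical on-screen hex grid
    const int layersPerTile = 3;        // say: terrain, resource overlay, fog of war
    const int units         = 120;
    const int drawsPerUnit  = 2;        // say: body + shadow/decal
    const int uiAndMisc     = 300;

    const int total = visibleTiles * layersPerTile + units * drawsPerUnit + uiAndMisc;
    std::printf("rough draw calls per frame: %d\n", total);  // ~3,500 in this made-up case
    return 0;
}
```

If each draw costs even a few microseconds of CPU time in the API and driver, a few thousand draws per frame already eats a large slice of a 16 ms frame budget before the GPU does anything.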
[...]
Besides being more efficient, core for core, Mantle also enables fully parallel draw submission (this has been attempted before, but never with the same degree of success). Until now, the CPU work of processing the draw calls could only be executed on one CPU core. By removing this limitation, Mantle allows us to spread the load across multiple cores and finish it that much faster.
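For illustration, here is a minimal sketch of what "fully parallel draw submission" means structurally. The CommandBuffer, Queue and DrawCall types are placeholders, not the real Mantle (or D3D12) API: each worker thread records its share of draws into its own private command buffer with no shared lock, and the main thread makes one cheap submission at the end.

```cpp
// Hypothetical sketch only: CommandBuffer, Queue and DrawCall are placeholders,
// not the actual Mantle API. The point is the shape of the work: N threads each
// record into a private command buffer, then one thread submits everything.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct DrawCall      { int mesh; int material; };
struct CommandBuffer { std::vector<DrawCall> recorded;
                       void record(const DrawCall& d) { recorded.push_back(d); } };
struct Queue         { void submit(const std::vector<CommandBuffer>&) { /* hand off to the GPU */ } };

void buildFrame(Queue& queue, const std::vector<DrawCall>& draws, unsigned workers)
{
    if (workers == 0) workers = 1;
    std::vector<CommandBuffer> buffers(workers);         // one buffer per thread: no contention
    std::vector<std::thread> threads;
    const std::size_t chunk = (draws.size() + workers - 1) / workers;

    for (unsigned w = 0; w < workers; ++w) {
        threads.emplace_back([&, w] {
            const std::size_t begin = w * chunk;
            const std::size_t end   = std::min(draws.size(), begin + chunk);
            for (std::size_t i = begin; i < end; ++i)
                buffers[w].record(draws[i]);              // recording happens in parallel, lock-free
        });
    }
    for (auto& t : threads) t.join();
    queue.submit(buffers);                                // single, cheap submission at the end
}
```

Under a single-core submission model the equivalent loop all runs on one thread, which is exactly the serialization the paragraph above describes. (The earlier attempt the post alludes to is presumably D3D11's deferred contexts, which recorded command lists on multiple threads but still funneled through a largely serialized driver.)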
[...]
Finally, the smallness and simplicity of the Mantle driver means that it will not only be more efficient, but also more robust. Over time, we expect the bug rate for Mantle to be lower than D3D or OpenGL. In the long run, we expect Mantle to drive the design of future graphics APIs, and by investing in it now, we are helping to create an environment which is more favorable to us and to our customers.
We heard nothing of the development of D3D12 until sometime after the early tests of Mantle were indicating that this radically different API model could work, and work well – data which we shared with Microsoft. What prompted Microsoft to redesign DX is difficult to pin down, but we'd like to think that it would never have happened if Mantle hadn't changed the game. We are delighted by DX12's architectural similarities to Mantle, and see it as a validation of the pioneering work that Oxide was privileged to be part of.
Does D3D12 mitigate the need for Mantle? Not at all. Though it is difficult to determine without more complete information on D3D12, our expectation is that Mantle will have the edge over D3D12 in terms of CPU performance. This is because AMD's GCN architecture is the most general of all the current GPUs, and therefore the driver needs to do a bit less.
I do chuckle a bit at these sorts of claims (they are not uncommon). Developers have had all the documentation they need to write their own driver/interface for Intel and AMD hardware for a while now. Hell, they even have a big head start from the open source Mesa stuff. But yet they don't... because realistically the claim is based on the silly notion that game developers are rock stars and driver folks have no idea what they're doing. It's just that for various reasons game devs are allowed to be mouthy about driver issues, while driver guys have to silently fix 90% of broken games and get blamed for the other buggy 10% that they don't yet have workarounds for. Note that I am neither a driver developer nor a game developer, but I get to see both sides regularly.

Dan said: Admittedly, this was after advocating no API at all caused the hardware architects' faces to pale a bit too much.
Developers have had all the documentation they need to write their own driver/interface for Intel and AMD hardware for a while now. Hell, they even have a big head start from the open source Mesa stuff. But yet they don't...

Sometimes I do wish that someone would go ahead and write a path that directly targets (each version of) Intel and AMD hardware interfaces. I want to see how well that goes for users when they get a new piece of hardware and can't run the game (or run it slower) because the developers haven't written a path for it yet/ever.
No, that is exactly my point. In the same paragraph people claim that they don't have the resources to do all these things, but then implicitly dismiss them as easy problems and complain about poor driver engineering when there are bugs (even if they are in application code...). Turns out GPU hardware interfaces and designs aren't as stable as CPUs...

You're missing the point. Just having the docs for a particular chip is not sufficient, and not what we've been asking for. Yes, we have the ability to write our own drivers for specific parts, but we don't have the bandwidth to maintain those drivers for every new part. We also don't have the ability to deliver new drivers on the day the hardware ships, and we don't have the ability to walk down the hall and talk to the hardware people when we run into undocumented quirks.
So along with that are you willing to give up on any new features for 5 years? Are you willing to pay a cost for basically putting a driver in hardware? I'm not sure how much you know about x86 decoders but I don't imagine you want to slap one of those on the front of every execution unit on a GPU (RIP Larrabee, although it had a separate vector instruction set of course).

What is needed is something that is like x86 in that it is low level, and like DX/OGL, in that it is well-defined and forward compatible.
There is nothing exciting or high overhead about the hardware command protocol itself. I see no reason to go lower level than what Mantle and D3D12 already do - they have already hit the major problematic driver points for modern GPUs.

If an IHV published an IHV-specific command protocol, built the necessary hooks into the major OSes, and committed to supporting it across future hardware generations, would they get support?
Compared to what?

I'm a big fan of Mantle but so far support seems to have been disappointingly poor.
They could start by writing a decent DX11 driver first. In the Star Swarm benchmark (Oxide's own engine), NVIDIA cards wipe the floor with AMD cards in the DX11 path, while also giving the Mantle path a run for its money.

I see no reason that AMD wouldn't be able to write a D3D12 driver that is as efficient as (or more efficient than) their Mantle driver, but time will tell.
Compared to what?
The "most general"?
Perhaps there are some huge UE4 and CryEngine announcements just around the corner
Welcome Josh, nice to have you contribute here
In reality, you really do want an API of some form, you just want it to solve all of the above problems while introducing zero overhead.
Thus you really have two options: either slow down/stop the pace of GPU innovation in the name of hardware interface convergence, or continue to have a translation layer, even if it's fairly thin.