The link is in the post I quoted.
Regards,
SB
> The link is in the post I quoted.
> Regards,
> SB

uh.... you didn't quote a post on that entire page (well, the entire page using the forum defaults).
> have to suffer the abysmal DX11/OpenGL performance of their hardware
> Right now, and on many occasions, low level APIs are just an excuse for AMD to shift the blame for optimization onto developers rather than doing it themselves. Worse yet, this comes after game launch, suffers delays, gets incomplete "beta" support, and worst of all, in many cases (Ashes, RoTR, Warhammer) can't even beat the performance of their competitor's high level API, which they are supposed to maintain and optimize for.

The new APIs aren't exactly low level, they're lower level, and they're not an excuse for anything or anyone. Many developers asked for APIs like DX12 and Vulkan because there are advantages to the application having control over memory, syncs, etc. Even though developers have experience with similar APIs on consoles, there's still a learning curve. And if you think AMD has the market power to push Microsoft and the other Khronos companies around, you're mistaken. We have these new APIs because multiple hardware and software companies agree they're a good idea.
> I agree shader intrinsics are low level, but they are not fundamental to the new APIs. What makes these more practical for AMD is that console developers already use these instructions. Gameworks was a brilliant response to the console effect for AMD because it allowed Nvidia to implement low level optimizations and push them into games.

I guess it depends on interpretation, as the APIs support this, and for some games it is integral, even fundamental, to the engine, as we see with AMD and Vulkan in Doom, or Nvidia with OpenGL in Doom.
> We have these new APIs because multiple hardware and software companies agree they're a good idea.

Exactly, some developers, not all of them; in fact many others have approached this subject cautiously, mentioning the cost of development and the burden of maintenance as barriers. So far, results have been mixed at best, and in many cases they leaned to the very bad side. So if current trends continue, these "lower level APIs" will go down the drain for sure. Other developers would be hard pressed to follow suit in a redundant and failed endeavor. DX10 comes to mind.
> Exactly, some developers, not all of them; in fact many others have approached this subject cautiously, mentioning the cost of development and the burden of maintenance as barriers. So far, results have been mixed at best, and in many cases they leaned to the very bad side. So if current trends continue, these "lower level APIs" will go down the drain for sure. Other developers would be hard pressed to follow suit in a redundant and failed endeavor. DX10 comes to mind.

If it were truly that bad we wouldn't have all these developers getting on board in the first place. I doubt it's a fluke that the low level APIs have seen the fastest uptake of any graphics API in recent years. There are some markets, mobile for instance, where the benefits are significant. Writing off the upside because some IHVs currently lack support seems short-sighted.
> And no, not all "relevant" hardware vendors wanted lower level APIs, and the only relevant three when it comes to gaming are Intel, NVIDIA and AMD. The first two were concentrating more on pushing the visual front, with the likes of Order-Independent Transparency and increasing scene complexity through using more draw calls (NV's FF DX12 demo comes to mind), and so they got behind the APIs that would allow such things to become standard. Which is nothing unusual in itself. Every vendor will support every API, it's not rocket science; you can barely support something until it dies (OpenGL, OpenCL, DX10, etc.), or you can get behind it to make it truly universal (DX9, DX11). Big difference. And so their support for the new APIs lies in the promise of graphics advancements. But if you are going to use such APIs to increase the performance of graphically modest games (such as half of DX12 titles), through vendor gouging, to barely keep up with the performance levels of old APIs, then you are not going to get far with them. This will be the quickest way to bury such APIs. And nobody will care.

The problem with this argument is that the low level APIs are held back in some cases by the requirement that high level APIs still be supported. Take away that requirement and the improvements will come. It would seem to be an argument between better looking shadows in the background and an order of magnitude more characters in the scene. I haven't seen many details on high priority compute with high level APIs, and I'm sure there is a reason for that.
> Suddenly all the shiny new promises for DX12 are gone and are replaced with async, which is, I repeat, nothing but an excuse for AMD to shift the burden of optimization off their shoulders. Such is the current trend. If someone thinks big PC vendors will rally behind DX12/whatever because it offers async, then they are severely mistaken.

I'd still argue async is somewhat fundamental to what the low level APIs are doing. Fully async behavior is just the result of multiple threads running with zero synchronization. Not doing work is inherently faster and simpler than doing work. The biggest advantages of the low level APIs in my mind are the high priority queues: being able to accelerate game logic and other subsystems independently of the rendering thread. That's a feature we've yet to see demoed, but it appears to be coming. Increasing the complexity and realism of an environment is a huge leap IMHO. No more limiting characters on screen because they would crush your framerate with all their independent animations and decision making.
> There are some markets, mobile for instance, where the benefits are significant. Writing off the upside because some IHVs currently lack support seems short-sighted.

Of course, the problem is that gaming is not like mobile; gaming depends on gameplay and graphics innovations, and developers will not invest time and resources in something that doesn't advance either of them. Even if they do so in the short term, they will not in the long term. Look how many studios experimented with DX10 and then abandoned it for DX9. The other problem is the focus on certain features that truly are redundant in the grand scheme of things. Almost all modern hardware supports DX12, but did we really get anything out of it other than the mildly selective fps boost of some ugly/average looking games?
> It would seem to be an argument between better looking shadows in the background and an order of magnitude more characters in the scene. I haven't seen many details on high priority compute with high level APIs, and I'm sure there is a reason for that.

Except that in DX12 we have neither, so far at least: games run barely faster in DX12 than in DX11, without looking or behaving any differently.
> Increasing the complexity and realism of an environment is a huge leap IMHO. No more limiting characters on screen because they would crush your framerate with all their independent animations and decision making.

I'll drink to that.
> Exactly, some developers, not all of them; in fact many others have approached this subject cautiously, mentioning the cost of development and the burden of maintenance as barriers.

While I disagree with much of your opinion about the new APIs, I'm not going to keep debating the issue. We'll just have to wait and see how it pans out over time.
> But if you are going to use such APIs to increase the performance of graphically modest games (such as half of DX12 titles), through vendor gouging, to barely keep up with the performance levels of old APIs, then you are not going to get far with them. This will be the quickest way to bury such APIs. And nobody will care.

I have yet to see a game where the low level API would provide an inferior experience to the high level one. Plus, not many people have CPUs as powerful as those most hardware review sites are using. Only Warhammer screwed up the CPU load somehow, but that might be why they call it a beta.
http://arstechnica.co.uk/gadgets/2016/07/amd-rx-480-crossfire-vs-nvidia-gtx-1080-ashes/

> While my second RX 480 had to be sent back to the publication I borrowed it from before I could conduct more tests (thanks, guys!), the folks over at TechPowerUp also managed to pull together some Crossfire benchmarks. It found that while the RX 480 fared well in certain games, on average it was much slower than a GTX 1080 and just slightly slower than a GTX 1070. Given that the 1070 costs roughly the same as a pair of 4GB RX 480s, buying them outright isn't a particularly good idea.
>
> ....
>
> Ultimately, the lesson is this: always take company claims with a pinch of salt, and if you are looking to promote a product, you can't go wrong with a bit of blood and guts.
> Think of it another way. Dx11 and Dx10 games didn't look or perform much differently from Dx9 games until the PS4/XBO came out. Why? Because up until that point virtually every game created had a Dx9-ish foundation due to the PS3/X360, and thus was primarily designed with Dx9 in mind.

That is not true; early DX11 games (such as BattleForge and Bad Company 2) offered a performance boost on all hardware, and other games that came after them offered tessellation, better shadows and lighting, better MSAA performance, better AO, reflections and post processing. These elements helped separate PC ports from consoles even during the X360/PS3 era (look at Crysis 2 and Far Cry 3 and the difference between DX9 and DX11 there, just to name a few).
> the consoles already basically support all/most features of Dx12/Vulkan.

I think we have yet to rid ourselves of the confusion and misconceptions that run rampant on the internet, as well as contradictory statements. The first is that consoles support all DX12 features (not true, by the way), which would mean they run DX12 by default. Then it is said that multi-platform games are designed for DX11, which, according to that logic, is false as well, because in reality they are designed for consoles.
> Only Warhammer screwed up the CPU load somehow, but that might be why they call it a beta.

No, Tomb Raider as well, and also Hitman and Ashes for NV.
> Plus, not many people have CPUs as powerful as those most hardware review sites are using.

While it may be true that Doom can run on a 2 core CPU under Vulkan, the truth is that once you go quad core, OpenGL will deliver better fps; only with very high end CPUs does Vulkan surpass OpenGL (at least for AMD).
Take that bias away and you get something like the Time Spy benchmark, where every chip from every vendor falls into its rightful place.
> At this point I'm going to follow 3dcgi's example, as your bias is extremely strong and there's no getting through it.

I am sorry you feel that way, but I would rather you refrain from calling people biased just because they engage in a plausible argument with opinions opposite to yours. Calling people biased and making unsupported, unreferenced claims is easy. Providing a counter-argument is not.
> Futuremark benchmarks have never been representative of how graphics cards perform in actual games.

I never said that; however, it tells you to a fairly accurate degree how video cards will behave relative to each other.
BTW - early Dx11 implementations (Dx11 bolted on, similar to the current state of Dx12) also often featured severe performance hits with little to no noticeable visual improvement, especially if you didn't have a top-of-the-line CPU.
> I have yet to see a game where the low level API would provide an inferior experience to the high level one. Plus, not many people have CPUs as powerful as those most hardware review sites are using. Only Warhammer screwed up the CPU load somehow, but that might be why they call it a beta.

Are you willing to count Doom on Nvidia hardware? It feels a bit laggier and stutters intermittently due to a DXGI sync issue, and thus doesn't really run faster than under OpenGL.
> The truth is, consoles support some features of the full DX12 set, and those that are supported are at low tiers as well.

Isn't that true for basically all hardware? One or another feature is not supported, and of the supported ones, one or more are at tiers below the maximum.
Are you willing to count Doom on Nvidia hardware? It feels a bit laggier and stutters intermittently due to a DXGI sync issue, and thus doesn't really run faster than under OpenGL.
edit:
My colleague's article is now live: http://www.pcgameshardware.de/Doom-2016-Spiel-56369/Specials/Vulkan-Benchmarks-Frametimes-1202711/