monstercameron
Newcomer
I had to log in just to agree with this point!
Now that would be a very valid excuse. F-ing animals!
Pineapple and anchovies! The horror... the horror...
OK, so AOTS can't be used because it doesn't look that good and Hitman Ch.1 can't be used because it's not optimized for nvidia enough.
What's the current excuse being used for putting aside TW Warhammer? Does someone in Creative Assembly eat pizza with pineapple?
Hard to say. I don't think the game even supports DX11, and while the engine technically does, it was built for Mantle and future APIs.
For the record, in this vid Raja kinda slips out 'less than 150W'.
I'm inclined to doubt they are all that far under.
Saying 150W, when they could have quoted something more like 'just over 100W', just opened them up to looking bad in perf/W versus the faster, also-150W 1070.
But I'd be happy to be wrong.
And regarding the AOTS controversy, one interesting comment on reddit by Kontis:
Using AOTS as an indicator of what games moving to Dx12 could do is extremely misleading.
The unique rendering architecture of this game is far more important for the performance and SLI/CF than the API. All these AAA Dx12 games with deferred rendering that will be released in the next 3 years will have NOTHING in common with AOTS. Even ultra perfect implementations of Dx12 will not bring characteristics of popular engines closer to AOTS. Again - this is absurdly misleading for gamers.
Thanks to the more correct (but also computationally much more expensive) way of rendering in AOTS, multi-GPU support is automatically much, much better, but all these commercial engines that cheat as hell with 2D lighting can easily achieve a much more visually impressive look. Due to these screen-space multi-frame cheats they cannot support multi-GPU well and DX12 will NOT solve that problem, because core rendering architecture is far more important than an API.
It was obvious right after GDC. AOTS's engine does shading in texture space via compute shaders (no place for delta compression), it does it for two 4096x4096 texture atlases (16.7 million pixels for terrain, 16.7 million pixels for units = tons of overshading) and it uses quite simple shaders for lighting (hence the mediocre look). Such a renderer will always be bandwidth bound (especially with MSAA) and texture bound, plus there are no heavy vertex shaders (you don't need skinning and other stuff for simple units), so the geometry pass should be quite light too = good use case for async. All in one, it's very, very different from modern deferred shading engines and it's a perfect fit for Fury with its massive raw bandwidth, texturing speed and async.
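To put numbers on that: a 4096x4096 atlas is 4096 x 4096 ≈ 16.7 million texels, and AOTS shades two of them, so on the order of 33 million lighting evaluations per frame regardless of what is actually visible. Below is a minimal CUDA sketch of that access pattern -- my own illustration, not AOTS code, with made-up buffer names and toy lighting -- just to show why such a pass tends to end up bandwidth- and texture-bound rather than ALU-bound.

```cuda
// Rough sketch of texture-space shading over a 4096x4096 atlas (illustrative
// only): every texel is lit every frame, visible or not, so the cost is a
// fixed ~16.7M shading invocations per atlas.
#include <cuda_runtime.h>

__global__ void shade_atlas(const float4* albedo, const float4* normal,
                            float4* lit, int dim)   // dim = 4096
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= dim || y >= dim) return;
    int i = y * dim + x;

    // Two 16-byte reads and one 16-byte write per texel against a handful of
    // ALU ops: the ratio that makes this pass bandwidth/texture limited and
    // leaves plenty of idle ALU for async compute to fill.
    float4 a = albedo[i];
    float4 n = normal[i];
    float ndotl = fmaxf(0.577f * n.x + 0.577f * n.y + 0.577f * n.z, 0.0f);
    lit[i] = make_float4(a.x * ndotl, a.y * ndotl, a.z * ndotl, a.w);
}

int main()
{
    const int dim = 4096;
    const size_t texels = size_t(dim) * dim;        // 16,777,216 texels
    float4 *albedo, *normal, *lit;
    cudaMalloc((void**)&albedo, texels * sizeof(float4));
    cudaMalloc((void**)&normal, texels * sizeof(float4));
    cudaMalloc((void**)&lit,    texels * sizeof(float4));
    cudaMemset(albedo, 0, texels * sizeof(float4));
    cudaMemset(normal, 0, texels * sizeof(float4));

    dim3 block(16, 16);
    dim3 grid((dim + block.x - 1) / block.x, (dim + block.y - 1) / block.y);
    shade_atlas<<<grid, block>>>(albedo, normal, lit, dim);
    cudaDeviceSynchronize();

    cudaFree(albedo); cudaFree(normal); cudaFree(lit);
    return 0;
}
```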
why not use chapter 1.5
OK, so AOTS can't be used because it doesn't look that good and Hitman Ch.1 can't be used because it's not optimized for nvidia enough.
What's the current excuse being used for putting aside TW Warhammer? Does someone in Creative Assembly eat pizza with pineapple?
Double standards: Gears of War was discarded because it's broken on AMD hardware; Rise of the Tomb Raider also got discarded for being broken on both vendors.
OK, so AOTS can't be used because it doesn't look that good and Hitman Ch.1 can't be used because it's not optimized for nvidia enough.
What's the current excuse being used for putting aside TW Warhammer? Does someone in Creative Assembly eat pizza with pineapple?
Note that the DirectX 12 patch for Warhammer is not out yet. Any current benchmarks are done with a press beta.
What's the current excuse being used for putting aside TW Warhammer? Does someone in Creative Assembly eat pizza with pineapple?
It's quite obvious really. AMD's vigorous push to represent their cards in light of the Ashes benchmark will do nothing to improve their position; it will give their products a false image or expectation, which will only serve to hurt them when it's fully dissected and falls short of those expectations.
That comment is only relevant if developers don't bother to take advantage of DX12. Which may or may not happen, but considering how closely Dx12 resembles the programming done on consoles, I would imagine there's more likelihood of developers adapting to Dx12 than continuing to have PC ports of games being radically different from their console counterparts.
Some developers will transition more quickly than other developers.
Regards,
SB
kontis sez:
Unfortunately, Async compute will not be as relevant in PC ports as it is in console versions. Why? It's really simple: all current gen consoles have a hardware scheduler - it makes sense to use it. Meanwhile most of the PC GPUs don't have it and maintaining two rendering paths for PC (universal and GCN) just to get some perf boost in the smaller pool of potential customers will be considered absurd by many devs. The last nvidia card with a hardware scheduler was Fermi. Cutting it out gave Nvidia nice perf boost in Kepler (die space used for other things instead), so I doubt they will bring it back in the future.
Pffft, you & your logical consistency, this is GPUs.
People didn't have a problem when developers took advantage of things on Nvidia hardware that made AMD look bad. So they certainly shouldn't have a problem if it's the other way around.
Then that would be on Nvidia to bring it back. Why should devs have to make their lives more difficult if they don't have to? They have to maintain multiple versions of games anyway. If a Dx12 version that very closely matches the console version works, then why wouldn't they use it? Just because Nvidia don't want them to because it makes them look bad?
People didn't have a problem when developers took advantage of things on Nvidia hardware that made AMD look bad. So they certainly shouldn't have a problem if it's the other way around.
But if this results in notable differences in a shipping application, it is most likely the developer's fault. You should always check your fp16 code on both fp16 and fp32 to ensure that the image looks the same. #ifdef the type attribute (this allows you to disable fp16 from all shaders with a single-line code change). Every rendering programmer who has worked with PS3 knows how to deal with this. But fp16 support on modern PC hardware is still very limited, meaning that many developers don't yet have the full hardware matrix to test it.
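For what it's worth, the "#ifdef the type attribute" trick reads roughly like the sketch below. It's a hedged approximation in CUDA rather than HLSL (a shader would toggle half/min16float the same way), and USE_FP16, lp_t and the conversion macros are names I made up for illustration.

```cuda
// Hypothetical one-line precision switch in the spirit of sebbbi's advice:
// flip USE_FP16 to 0 and everything built on lp_t runs at fp32, which makes
// it easy to diff the two images and catch precision problems early.
#include <cuda_fp16.h>
#include <cuda_runtime.h>

#define USE_FP16 1                         // the single line you change

#if USE_FP16
typedef __half lp_t;                       // reduced-precision type
#define TO_LP(x)   __float2half(x)
#define FROM_LP(x) __half2float(x)
#else
typedef float lp_t;                        // identical code path, full precision
#define TO_LP(x)   (x)
#define FROM_LP(x) (x)
#endif

__global__ void tonemap(const float* hdr, float* ldr, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Round-trip through lp_t so the fp16 build really exercises reduced
    // precision; in the fp32 build the conversions compile away.
    lp_t c = TO_LP(hdr[i]);
    float x = FROM_LP(c);
    ldr[i] = x / (1.0f + x);               // simple Reinhard-style curve
}

int main()
{
    const int n = 1 << 20;
    float *hdr, *ldr;
    cudaMalloc((void**)&hdr, n * sizeof(float));
    cudaMalloc((void**)&ldr, n * sizeof(float));
    cudaMemset(hdr, 0, n * sizeof(float));
    tonemap<<<(n + 255) / 256, 256>>>(hdr, ldr, n);
    cudaDeviceSynchronize();
    cudaFree(hdr); cudaFree(ldr);
    return 0;
}
```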
You're really being pedantic on this; did you raise this many arguments regarding Chapter 1 being used? No.
In the review you linked it doesn't work correctly on either one.
Any chance PCGamesHardware could revisit Chapter 1 then (both DX12 and DX11)?
I didn't really look closely at Hitman, but: isn't the game using the latest engine iteration for all chapters?
Until I read the footnotes, I discount GoW. The rest seems reasonable, although deceptive for Hitman given the issues you guys describe in chapter 1 versus 2.
Was? They do include GoW in their "performance leadership" in dx12 slide, though a dx9 game rendered in dx12 doesn't look any good.
Screen space techniques and temporal reprojection do not prevent multi-gpu techniques. AFR doesn't work well with techniques using last frame data, but who would want to use AFR in DX12? AFR adds one extra frame of latency. AFR was fine when developers could not write their own custom load balancing inside a frame. The DX9-11 driver had to automatically split workload between two GPUs and splitting odd/even frames worked best with no extra developer support needed. This was a big compromise.
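As a loose analogy for that "custom load balancing inside a frame" (my own CUDA sketch, not D3D12 multi-adapter code; it assumes a box with two CUDA devices and uses a placeholder shade_half kernel): both GPUs work on halves of the same frame instead of taking turns on whole frames, which is why per-frame latency can shrink instead of an extra frame being queued up as under AFR.

```cuda
// Split one "frame" of work across two GPUs (SFR-style) instead of
// alternating whole frames (AFR). Illustrative only; needs >= 2 CUDA devices.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void shade_half(float* px, int n, int offset)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) px[i] = (offset + i) * (1.0f / (1 << 22));  // placeholder shading
}

int main()
{
    int devices = 0;
    cudaGetDeviceCount(&devices);
    if (devices < 2) { printf("this sketch wants two GPUs\n"); return 0; }

    const int frame = 1 << 22;             // one frame's worth of pixels
    const int half  = frame / 2;
    float* buf[2];

    // Each device shades half of the *same* frame. Under AFR each device
    // would own alternating whole frames instead, which doubles throughput
    // but keeps (and queues up) a full frame of latency.
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaMalloc((void**)&buf[dev], half * sizeof(float));
        shade_half<<<(half + 255) / 256, 256>>>(buf[dev], half, dev * half);
    }
    for (int dev = 0; dev < 2; ++dev) {    // the frame is done when both halves are
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
    }

    for (int dev = 0; dev < 2; ++dev) { cudaSetDevice(dev); cudaFree(buf[dev]); }
    return 0;
}
```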