I will take a higher minimal framerate over a higher average framerate any day. The DX12 path is worth it no matter who made your GPU.

All I know is that it's the only "DX12 game" in which DX12 reduces performance without affecting IQ on both brands...
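To put a number on the minimum-vs-average point, here's a minimal sketch (the frame times below are invented purely for illustration) showing how a few long frames barely move the average but wreck the minimum:

```python
# Minimal sketch: average FPS vs. worst-case (minimum) FPS from frame times.
# The frame times are made up for illustration only.
frame_times_ms = [16.7] * 95 + [50.0] * 5  # mostly smooth frames plus a few long stalls

total_seconds = sum(frame_times_ms) / 1000.0
average_fps = len(frame_times_ms) / total_seconds   # overall throughput
minimum_fps = 1000.0 / max(frame_times_ms)          # slowest single frame

print(f"average: {average_fps:.1f} fps, minimum: {minimum_fps:.1f} fps")
# A handful of 50 ms stalls only pulls the average down a little,
# but it drags the minimum to 20 fps, which is what you actually feel.
```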
I think we can drop the "GameWorks is to blame" line of thinking.

Yeah, the DX12 path in this game definitely looks "strange"... the only game that has reduced perf in DX12, runs better on NV than AMD (when equivalent cards are compared), and only has ASYNC implemented when it suits NV... but then again it's an Nvidia GameWorks branded PC port, so...
Funny how they made the effort to put async in now that Nvidia can get some benefit out of it.
Maybe they just figured other features had a higher priority.
I tried to find it in the review but found no information.

I find it very hard to believe that async hasn't been used in the XBone version since November, or that it hasn't been in the PS4 port since the beginning.
How that AMD-enhancing performance feature didn't make it into the original DX12 patch and only appears now that the new Nvidia cards seem to support it is a real mystery, especially given all the Nvidia branding that came with the PC port.
Regardless, here's a Fury X vs. 980 Ti comparison with the new patch:
http://www.overclock3d.net/reviews/...e_tomb_raider_directx_12_performance_update/5
Looks like RoTR has just joined the list of titles where Maxwell loses and GCN gains performance when transitioning to DX12.
They did use PresentMon for at least one game historically (Quantum Break, partly because it is UWP) but felt it was cumbersome to use, so no idea if they stayed with it for DX12 in general; I can't find any recent mention in their review test setups, though maybe I'm missing it *shrug*.

"This performance data is taken from actual gameplay, from the same section of the game as we have used in our previous DirectX 12 performance overview. Our original DirectX 12 data predates the Rise of the Tomb Raider built-in benchmarking tool."
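For what it's worth, PresentMon dumps its capture to CSV, so summarising a run like that only takes a few lines. Here's a rough sketch assuming PresentMon's usual MsBetweenPresents column and a hypothetical capture.csv file name; it's not anything the site has confirmed using:

```python
# Rough sketch: summarise a PresentMon capture (CSV) into average and 1% low FPS.
# "capture.csv" is a placeholder; MsBetweenPresents is assumed to be present,
# as in PresentMon's standard per-frame output.
import csv

def summarise(path):
    with open(path, newline="") as f:
        frame_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    frame_ms.sort()
    avg_fps = 1000.0 * len(frame_ms) / sum(frame_ms)
    worst_1pct = frame_ms[int(len(frame_ms) * 0.99):]     # slowest 1% of frames
    low_1pct_fps = 1000.0 * len(worst_1pct) / sum(worst_1pct)
    return avg_fps, low_1pct_fps

if __name__ == "__main__":
    avg, low = summarise("capture.csv")
    print(f"average: {avg:.1f} fps, 1% low: {low:.1f} fps")
```

The 1% low is used here instead of the single slowest frame because one outlier can make a hard minimum very noisy.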
Nop. I don't usually follow that site; I only got there through comments on news elsewhere, though I think it went through wccftech in the meantime.

They mention not using the preset benchmark, but I was wondering if you know more about how they do this for DX12 games:
You're luckier than me; I couldn't make sense of the translation, even having insight into previous versions of the engine... (Napoleon/Shogun 2/Rome 2)

I have just read the performance review of Total War: Warhammer at pcgameshardware.
Great analysis, and it shows how DX12 can be a disappointment, especially, again, regarding the internal benchmark compared to the actual real-world game.
http://www.pcgameshardware.de/Total...Specials/Direct-X-12-Benchmarks-Test-1200551/
I used the Chrome translator, so it's not a perfect explanation, but it still makes sense.
Cheers
FRAPS does not work correctly with DX12.
Maybe they used FRAPS? Their DX11 results don't differ a lot from other results taken with the internal benchmark?
Hopefully, the GTX1060 and RX470 reviews will use the new patch and we'll be able to see results based on the internal benchmark.
Why is that a DX12-specific problem and not something generally related to internal benchmarks?
DX12 can be a disappointment because most developers are not implementing it well, nor optimising it for each architecture, and I can see this causing more extreme competition between AMD and Nvidia in game sponsorship.
What's "not made with DX12 in mind" supposed to even mean? There are two performance enhancing features in DX12:
1. Significantly reduced D3D API bottlenecks
2. Asynchronous compute
This isn't a single feature.
It's not. But neither is this really a GPU feature (forgetting hardware that can't run DX12 because of this).
As you can see, there is a very healthy performance increase, in fact in Ultra HD running towards 20% just from switching from OpenGL to Vulkan mode.
For Nvidia the result is not similar. I ran the test with the very latest 368.69 WHQL driver on the GeForce GTX 1070, and Vulkan does not seem to kick in. Compared with the previous driver, OpenGL 4.5 performance has increased a tiny bit overall; however, with Vulkan active in WQHD and UHD there is even a tiny bit of negative scaling.