No DX12 Software is Suitable for Benchmarking *spawn*

This screenshot showing the game's ocean being fully tessellated under the city for no reason has been debunked?

Where?

That tessellated geometry is apparently only present in that particular view and not in the normal game view. I won't pretend to know the specifics, but @Dictator did a detailed post on it some time back, if I recall correctly.

EDIT: @trinibwoy beat me to it.
 
We know DX12 requires more developer-side optimisation for specific vendor architectures than DX11, where the optimisation was done more by the vendors themselves in the driver, and architecture specifics were more hidden from the developer by a thicker abstraction layer.

This I don't agree with, as Nvidia's 522.25 driver has shown there is still plenty of optimisation and performance to be extracted at the driver level in DX12 games.
 
This I don't agree with, as Nvidia's 522.25 driver has shown there is still plenty of optimisation and performance to be extracted at the driver level in DX12 games.

There's no doubt some optimisation can still be made through the driver. But the entire premise of DX12 is that it's a thinner abstraction layer that gets the developer closer to the metal of the GPU. As a result, they will need to do more of the optimisation to the metal themselves rather than relying on the driver. That's pretty much been the central discussion around low-level APIs since Mantle.
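To put that in concrete terms, here's a minimal sketch of the kind of bookkeeping DX12 hands to the developer: in D3D11 the driver tracks resource hazards behind the scenes, while in D3D12 you record explicit transition barriers yourself (illustrative code, not from any particular engine):

```cpp
// Minimal sketch: explicit resource state management in D3D12.
// In D3D11 the driver tracked this hazard automatically; in D3D12 the
// application must record the transition itself (and ideally batch barriers).
#include <d3d12.h>

// Hypothetical helper: transition a texture that was just rendered to
// so it can be sampled in the next pass.
void TransitionRenderTargetToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                            ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE;
    barrier.Transition.pResource = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;

    // One barrier here for clarity; real code batches several per call.
    cmdList->ResourceBarrier(1, &barrier);
}
```

Getting the placement and batching of these wrong is exactly the sort of thing the DX11 driver used to hide from you.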
 
This Twitter exchange between Alex B and a Crytek rendering dev, for example.

That tessellated geometry is apparently only present in that particular view and not in the normal game view. I won't pretend to know the specifics, but @Dictator did a detailed post on it some time back, if I recall correctly.

EDIT: @trinibwoy beat me to it.

Now the article I have seen states this:

Gazing out from the shoreline, that simulated water looks quite nice, and the waves roll and flow in realistic fashion.

GPU PerfStudio gives us a look at the tessellated polygon mesh for the water, which is quite complex.

From the same basic vantage point, we can whirl around to take a look at the terra firma of Manhattan

In this frame, there’s no water at all, only some federally mandated crates (this is an FPS game), a park, trees, and buildings. Yet when we analyze this frame in the debugger, we see a relatively large GPU usage spike for a certain draw call, just as we saw for the coastline scene above.

That’s right. The tessellated water mesh remains in the scene, apparently ebbing and flowing beneath the land throughout, even though it’s not visible

So the debugger is showing the mesh is there under the city; if the mesh wasn't there, it wouldn't be showing in the frame data.
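As an aside on how that kind of per-draw spike gets measured: GPU PerfStudio does it externally, but the same measurement can be made in-engine with timestamp queries. A rough D3D12-style sketch (Crysis 2 itself was DX11; the setup and names here are illustrative):

```cpp
// Sketch: bracketing a single draw with timestamp queries in D3D12 to see
// the kind of per-draw GPU cost spike an external debugger reports.
// Assumes a query heap with at least two TIMESTAMP slots and a readback
// buffer big enough for two UINT64 values (both created elsewhere).
#include <d3d12.h>
#include <cstdint>

void RecordTimedDraw(ID3D12GraphicsCommandList* cmdList,
                     ID3D12QueryHeap* timestampHeap,
                     ID3D12Resource* readbackBuffer,
                     UINT indexCount)
{
    cmdList->EndQuery(timestampHeap, D3D12_QUERY_TYPE_TIMESTAMP, 0); // before
    cmdList->DrawIndexedInstanced(indexCount, 1, 0, 0, 0);           // the suspect draw
    cmdList->EndQuery(timestampHeap, D3D12_QUERY_TYPE_TIMESTAMP, 1); // after

    // Copy both tick values into CPU-readable memory.
    cmdList->ResolveQueryData(timestampHeap, D3D12_QUERY_TYPE_TIMESTAMP,
                              0, 2, readbackBuffer, 0);
}

// After the fence signals, convert ticks to milliseconds using the queue's
// timestamp frequency (ID3D12CommandQueue::GetTimestampFrequency).
double TicksToMilliseconds(uint64_t begin, uint64_t end, uint64_t frequency)
{
    return (end - begin) * 1000.0 / static_cast<double>(frequency);
}
```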

But that's not the only problem with tessellation in Crysis 2: it tessellates a lot of objects that don't look any different to them running in DX9, and it applies silly levels of tessellation to flat surfaces that don't need it.

The tessellation implementation is extremely poor and wasteful in a lot of places.
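For reference, the usual fix for this kind of waste is to scale tessellation with projected size and with how much displacement a surface actually has, so a flat slab or a distant patch barely gets subdivided. A rough CPU-side sketch of that idea (in a real renderer this logic lives in the hull shader; all names are made up):

```cpp
// Rough sketch of adaptive tessellation-factor selection: scale subdivision
// by projected edge length and by how much displacement the patch actually
// has, so flat or distant geometry is left almost untouched.
#include <algorithm>
#include <cmath>

struct PatchEdge {
    float worldLength;        // edge length in world units
    float distanceToCamera;   // distance from camera to edge midpoint
    float displacementRange;  // max height variation of the displacement map
};

float ComputeTessFactor(const PatchEdge& edge,
                        float verticalResolution,   // e.g. 1080 pixels
                        float fovY,                 // vertical FOV in radians
                        float targetTrianglePixels) // desired triangle edge in pixels
{
    // Approximate how many pixels the edge covers on screen.
    float projected = edge.worldLength /
                      (2.0f * edge.distanceToCamera * std::tan(fovY * 0.5f));
    float pixels = projected * verticalResolution;

    // A patch with (near) zero displacement gains nothing from subdivision.
    if (edge.displacementRange < 0.001f)
        return 1.0f; // no tessellation for flat slabs

    // Subdivide until triangle edges are roughly targetTrianglePixels long,
    // clamped to the hardware maximum factor of 64.
    return std::clamp(pixels / targetTrianglePixels, 1.0f, 64.0f);
}
```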
 
You mean like these "concrete slabs" in Fortnite UE 5.1? Funny how the exact same result gets praised now...
So there are concrete slabs in Fortnite where a perfectly flat surface is split into countless polygons just for the sake of it, without any visual or other benefit?
 
Now the article I have seen states this:



So the debugger is showing the mesh is there under the city; if the mesh wasn't there, it wouldn't be showing in the frame data.

Yes, the article is correct, but Alex's tweet is explaining why the debugger mode isn't representative of the actual gameplay. That ocean mesh is culled in the game but not in the debugger.
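That's the key point: visibility culling is done by the engine before a draw is ever submitted, and a debug/wireframe view can simply skip that step. A bare-bones sketch of the kind of frustum test involved (illustrative only, not CryEngine's actual code):

```cpp
// Bare-bones frustum culling: an engine skips the draw entirely when an
// object's bounding box is outside the camera view. A debug/wireframe view
// can bypass this test and submit everything.
#include <array>

struct Plane { float nx, ny, nz, d; };  // plane: n.p + d = 0, normal points inward
struct AABB  { float minX, minY, minZ, maxX, maxY, maxZ; };

bool BoxOutsidePlane(const AABB& box, const Plane& p)
{
    // Pick the box corner furthest along the plane normal ("positive vertex").
    float px = (p.nx >= 0.0f) ? box.maxX : box.minX;
    float py = (p.ny >= 0.0f) ? box.maxY : box.minY;
    float pz = (p.nz >= 0.0f) ? box.maxZ : box.minZ;
    return p.nx * px + p.ny * py + p.nz * pz + p.d < 0.0f;
}

bool ShouldDraw(const AABB& box, const std::array<Plane, 6>& frustum,
                bool debugViewDisablesCulling)
{
    if (debugViewDisablesCulling)
        return true; // debug path: draw even what the game would cull

    for (const Plane& p : frustum)
        if (BoxOutsidePlane(box, p))
            return false; // fully outside one plane -> never submitted to the GPU

    return true;
}
```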
 
Yes, the article is correct, but Alex's tweet is explaining why the debugger mode isn't representative of the actual gameplay. That ocean mesh is culled in the game but not in the debugger.

Alex's tweet was about "Wireframe mode in CryEngine".

This is not the same as GPU PerfStudio, used in the article, which is an external debugger that has no reliance on CryEngine's wireframe mode.
 
Given this is a complete disaster, probably not. A hell of a lot of games do run better under low-level APIs on AMD GPUs though, both RDNA and older GCN from the Paxwell era.
Are you sure that isn't a result of the consoles playing a big factor here? Both Sony and MS supported developers with low-level API documentation and support for their own hardware, specifically for older GCN and now RDNA, which all happen to be AMD.

We're talking about nearly a decade of building engines around these consoles; the level of optimization around the API and hardware is likely to be further along here.
 
So there are concrete slabs in Fortnite where a perfectly flat surface is split into countless polygons just for the sake of it, without any visual or other benefit?
Well, Nanite structures contain millions of small polygons just like highly tessellated surfaces. There's a huge difference between the two though; they're not really comparable. Nanite is far more efficient at processing these kinds of objects, faster than rendering traditional objects through the standard rasterization pipeline.
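Most of that efficiency comes from the geometry being pre-split into clusters with a LOD hierarchy, so only clusters whose detail actually matters at the current view get rasterized. A toy sketch of that selection idea (a generic cluster-LOD illustration, not Epic's actual Nanite code):

```cpp
// Toy sketch of cluster-based LOD selection: walk a hierarchy of geometry
// clusters and keep the coarsest cluster whose projected error is already
// below roughly a pixel. Generic illustration only.
#include <vector>

struct Cluster {
    float geometricError;        // world-space error vs. the full-detail mesh
    float boundsCenterDistance;  // distance from camera to cluster bounds
    std::vector<int> children;   // indices of finer clusters (empty = leaf)
};

// Appends the clusters that should be rasterized this frame to outVisible.
void SelectClusters(const std::vector<Cluster>& clusters, int nodeIndex,
                    float pixelsPerWorldUnitAtUnitDistance, // from FOV + resolution
                    float errorThresholdPixels,             // ~1 pixel
                    std::vector<int>& outVisible)
{
    const Cluster& c = clusters[nodeIndex];

    // Project the cluster's geometric error into screen pixels.
    float errorPixels = c.geometricError * pixelsPerWorldUnitAtUnitDistance /
                        c.boundsCenterDistance;

    // The coarse cluster is already indistinguishable, or there is no finer data.
    if (errorPixels < errorThresholdPixels || c.children.empty()) {
        outVisible.push_back(nodeIndex);
        return;
    }

    // Otherwise recurse into the finer clusters.
    for (int child : c.children)
        SelectClusters(clusters, child, pixelsPerWorldUnitAtUnitDistance,
                       errorThresholdPixels, outVisible);
}
```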
 
There's no doubt some optimisation can still be made through the driver. But the entire premise of DX12 is that it's a thinner abstraction layer that gets the developer closer to the metal of the GPU. As a result, they will need to do more of the optimisation to the metal themselves rather than relying on the driver. That's pretty much been the central discussion around low-level APIs since Mantle.

The discussion is somewhat pointless, to be honest, without knowing what specific DX12 knobs favor AMD or Nvidia. It's possible that optimizations for consoles are helping RDNA on PC in some games. Or maybe there's some objectively useful thing that Nvidia didn't bother to implement properly. We don't know.

The only thing we know for sure is that DX12 has sucked for years on all PC architectures. Pointing out one or two games where it works decently doesn't change that fact. Soon it won't matter, as there won't be any DX11 fallback to compare against.
 
Yes, that is Nanite for you.
At least the Matrix demo has decently sized triangles where appropriate, like the road. If it was anything like the Crysis concrete slab, it would be the tiniest triangles everywhere, just like in the finer geometry.
 
This thread is kind of dumb. It's very easy to find a lot of developers who think DX11 is better than DX12, because it's very easy to accidentally tank your performance in DX12 and it's generally more complicated to use. You can also find developers who think DX12 is better because you have more explicit control. And I'm talking about devs working on major games. The API does not prescribe a singular correct way to do things. Many things can be implemented in many ways, some that will favour one architecture over another and some that will be terrible across the board.
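One concrete example of how easy it is to tank performance: pipeline state objects are expensive to create, and compiling them lazily at draw time instead of caching them ahead of time is a classic DX12 mistake that shows up as hitching; DX11 drivers hid that cost. A simplified sketch of the cached pattern (all names are illustrative):

```cpp
// Simplified sketch of avoiding one classic DX12 pitfall: creating pipeline
// state objects at draw time causes hitches because PSO creation compiles
// shaders and state. The usual fix is to build and cache PSOs ahead of time.
#include <d3d12.h>
#include <wrl/client.h>
#include <unordered_map>
#include <cstdint>

using Microsoft::WRL::ComPtr;

class PipelineCache {
public:
    explicit PipelineCache(ID3D12Device* device) : m_device(device) {}

    // Call during loading, NOT per frame: creation can take milliseconds.
    ID3D12PipelineState* GetOrCreate(uint64_t key,
                                     const D3D12_GRAPHICS_PIPELINE_STATE_DESC& desc)
    {
        auto it = m_pipelines.find(key);
        if (it != m_pipelines.end())
            return it->second.Get();

        ComPtr<ID3D12PipelineState> pso;
        if (FAILED(m_device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso))))
            return nullptr;

        ID3D12PipelineState* raw = pso.Get();
        m_pipelines.emplace(key, std::move(pso));
        return raw;
    }

private:
    ID3D12Device* m_device;
    std::unordered_map<uint64_t, ComPtr<ID3D12PipelineState>> m_pipelines;
};
```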
 
There is nothing to "match" with h/w; the API exposes everything needed. The problem is in how the s/w is using the API.
I'm not going to say there probably aren't going to be exceptions to this rule... but
I think developers will need to code DX12 games for RDNA and GCN, and separately for Nvidia GPUs, just like they do with Xbox and PS.
That's the level they need to go to for low-level optimization.

If they aren't interested in that, stick with DX11.

I think it's growing pains that will get hammered out over time, but DX12 was never designed to be a one-size-fits-all API. That would be DX11's job; DX12's job is to allow as much control over the GPU as possible, and as we can see, AMD and Nvidia GPUs do not work the same.
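For what it's worth, picking a vendor-specific path on PC usually starts with something as blunt as reading the adapter's PCI vendor ID through DXGI and branching on it; the per-vendor tuning behind each branch is the hard part. A rough sketch:

```cpp
// Rough sketch of how a PC renderer selects vendor-specific code paths:
// read the adapter's PCI vendor ID via DXGI and branch on it.
#include <dxgi.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

enum class GpuVendor { Nvidia, Amd, Intel, Other };

GpuVendor DetectPrimaryAdapterVendor()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return GpuVendor::Other;

    ComPtr<IDXGIAdapter1> adapter;
    if (factory->EnumAdapters1(0, &adapter) == DXGI_ERROR_NOT_FOUND)
        return GpuVendor::Other;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    switch (desc.VendorId) {           // PCI vendor IDs
        case 0x10DE: return GpuVendor::Nvidia;
        case 0x1002: return GpuVendor::Amd;
        case 0x8086: return GpuVendor::Intel;
        default:     return GpuVendor::Other;
    }
}

// Elsewhere in the renderer, the branches would select different resource
// strategies, barrier batching, thread-group sizes, etc. per vendor.
```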
 