HairWorks: apprehensions about closed-source libraries proven beyond reasonable doubt?

Food for thought:

AMD had a similar situation with GCN-favorable shaders in Tomb Raider, Deus Ex: Human Revolution, and DiRT Showdown. NVIDIA doesn't complain; they move on and optimize until, sometimes, the situation is reversed. I expect that will happen again.

AMD retired Mantle after it failed to achieve any of the goals the company invested in it. It had many bugs that actually worked against some of its own bullet points (such as increasing the memory footprint rather than reducing it), it carried the burden of architecture backward compatibility, and it failed to reduce CPU overhead by a tangible margin or to offer any advantage over the competition. Most of its headlines were made through sabotaging the DX11 path on AMD hardware. It also failed to stay relevant long enough to affect image quality. So, yes, Mantle pressured the industry to accelerate the introduction of DX12 (the same way G-Sync did with variable refresh rate), but that's about it when it comes to Mantle's relevance to the consumer. The same logic can be applied to G-Sync too if it fails to differentiate itself enough.

I have absolutely no idea where to start when I read this.


AMD retired Mantle after it failed to achieve any of the goals the company invested in it. It had many bugs that actually worked against some of its own bullet points (such as increasing the memory footprint rather than reducing it), it carried the burden of architecture backward compatibility, and it failed to reduce CPU overhead by a tangible margin or to offer any advantage over the competition.

What?
 
Windows API? What is that? Do you mean the DDI layer? If so, how would one correlate the DDI calls from one vendor to another given that the driver implementations are completely different?
I'm pretty sure the DirectX API calls are identical between AMD and Nvidia. As long as GameWorks uses those, you can easily check whether or not it issues the same work for one vendor and the other.
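For what it's worth, the check really is possible at that level: Direct3D and DXGI expose one binary API regardless of vendor, and a capture tool such as apitrace can record the call stream on each card for comparison. Here's a minimal C++/DXGI sketch (my own illustration, not from anyone in this thread; the vendor IDs are the standard PCI ones) showing that the only vendor-specific thing visible through the API is the adapter description:

```cpp
// Enumerate adapters through DXGI and report the vendor behind the
// (vendor-neutral) API. PCI vendor IDs: 0x10DE = NVIDIA, 0x1002 = AMD.
#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main()
{
    IDXGIFactory1* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory1),
                                  reinterpret_cast<void**>(&factory))))
        return 1;

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        const wchar_t* vendor = (desc.VendorId == 0x10DE) ? L"NVIDIA"
                              : (desc.VendorId == 0x1002) ? L"AMD"
                              : L"other";
        std::printf("Adapter %u: %ls (vendor: %ls)\n",
                    i, desc.Description, vendor);
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```

Everything above that layer, i.e. the actual draw/dispatch stream GameWorks issues, can then be diffed between the two captures.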
 
It seems AMD has had issues with tessellation going back to 2010. Is there any particular reason why they haven't tried to incorporate a fix? I guess it may come down to how much you're willing to reinvest profits back into development.

Recently, AMD has been attacking nVidia and Ubisoft over the HAWX 2 benchmark, which is supposedly unfair to AMD hardware through ‘over-tessellation’, like certain other tessellation benchmarks. AMD tries to convince us that we need adaptive tessellation, and that we don’t want to go smaller than 16 pixels per triangle.

Where AMD is hurt is in their limited throughput.
I mean, if you take one triangle over the entire screen, and tessellate it down to triangles of about 16 pixels, as AMD suggests… then that is still adaptive tessellation…
It will not work well on AMD’s hardware though, because the pain is in the conversion of one triangle into such a large number of triangles.
AMD can only do limited amplification of triangles, so you need to feed it pretty detailed geometry to begin with, and then have limited subdivision done by the tessellator, e.g. each triangle converted into 4 smaller ones.
But that is not what tessellation is meant for. Tessellation is meant to serve two purposes:
1) Reduce the overall memory/bandwidth required for geometry, by generating details on-the-fly.
2) Improve image quality by smoothly moving from lower levels of detail to higher levels of detail, and avoiding any kind of undersampling/oversampling problems.
But in order to achieve these two things, you need to be able to handle a large range of tessellation factors, so you can start with very low detail geometry, and have it tessellated down to almost per-pixel details when required (again, this is all adaptive).
Since AMD’s range is so limited, you can’t really achieve either of the purposes for tessellation. You need to feed it highly detailed geometry in the first place, which means you still need a lot of memory/bandwidth. And you still need to rely on ‘oldskool’ multiple levels of fixed geometry, with their popping and undersampling/oversampling issues.
Bottom line is just: AMD’s tessellation is a bit of a failure. Just like the geometry shader was a failure for both AMD and nVidia in DX10. You couldn’t do what you wanted to do, because throughput was too slow.

AMD fell into the same trap again in DX11; nVidia went with a complete redesign, which apparently works much better (although we’re still not quite there yet).
AMD is trying to put up a smokescreen by shifting the focus to triangle size, but that is not the REAL issue here. The real issue is that their tessellator is a bottleneck: it cannot subdivide triangles and spit them out fast enough to keep the rest of the GPU busy. That’s why nVidia chose a fully parallelized implementation rather than a serial one (as I said, that was the mistake made with the geometry shader, which theoretically could already do a bit of tessellation; it just couldn’t spit out the triangles fast enough).

https://scalibq.wordpress.com/2010/...-need-to-lie-about-tessellation/#comment-7053
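To make the "16 pixels per triangle" argument concrete, here is what the adaptive heuristic being discussed typically looks like: choose a per-edge tessellation factor so that the subdivided edges land near a target on-screen size, clamped to D3D11's factor range of [1, 64]. This is a sketch in plain C++ rather than HLSL, and the names and the simple pinhole-projection estimate are my own assumptions, not code from the linked post:

```cpp
#include <algorithm>
#include <cmath>

// Screen-space adaptive tessellation factor for one triangle edge.
// Estimates the edge's projected length in pixels, then requests enough
// subdivisions that each sub-edge covers roughly 'targetPixels' on screen.
// Clamped to the D3D11 tessellation-factor range [1, 64].
float EdgeTessFactor(float edgeWorldLength,    // world-space length of the edge
                     float edgeViewDistance,   // camera distance to edge midpoint
                     float verticalFovRadians, // vertical field of view
                     float screenHeightPixels,
                     float targetPixels = 16.0f)
{
    // Pixels per world unit at this depth under a perspective projection:
    // the view frustum is 2 * d * tan(fov/2) world units tall at distance d.
    float pixelsPerWorldUnit = screenHeightPixels /
        (2.0f * edgeViewDistance * std::tan(verticalFovRadians * 0.5f));

    float edgePixels = edgeWorldLength * pixelsPerWorldUnit;

    // One subdivision per 'targetPixels' of screen coverage.
    return std::clamp(edgePixels / targetPixels, 1.0f, 64.0f);
}
```

Note that the heuristic itself is fully adaptive; the complaint in the quote is that a single screen-filling triangle legitimately asks for a factor at the top of that range, and a serial tessellator with limited amplification can't deliver it fast enough.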
 
Almost none of this is correct.
Nope.

Mantle inflated memory requirements in several games (like Battlefield 4 and Dragon Age). Cards with 2 GB of VRAM had issues running the Mantle path while having none of those issues in DX11. The problem still exists to this day.

Tonga had reduced Mantle performance compared to older GCN designs. Keep in mind that Tonga was just a mild redesign of GCN, so backward compatibility is not a strong point for Mantle at all.

Mantle performance on AMD cards was comparable to DX11 performance on NV cards, while AMD cards had abysmal DX11 performance in Mantle games (and even in some pure DX11 games). That issue still exists to this day; it even rears its head in synthetic benchmarks.

Mantle failed to increase CPU utilization or to put CPU cycles to better use. In GPU-limited scenarios, its benefits were essentially non-existent (see the sketch below).

Mantle offered no image quality enhancement whatsoever.

All of these points were covered in extensive detail in the Mantle thread. So please tell me: where exactly did I go wrong?
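On the GPU-limited point above: a lower-overhead API can only hand back time the CPU actually spends preparing and submitting work, so when submission is a thin slice of the frame there is little for Mantle (or later DX12/Vulkan) to recover. A rough, hypothetical sketch of how one might check which side of that line a frame sits on (the timer placement is the idea; none of this is any engine's real code):

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical frame-loop instrumentation: compare the CPU-side
// submission time against the total frame time. If submission is a
// small fraction of the frame, the workload is GPU-limited and a
// lower-overhead API has little headroom to improve it.
using Clock = std::chrono::steady_clock;

static double Ms(Clock::time_point a, Clock::time_point b)
{
    return std::chrono::duration<double, std::milli>(b - a).count();
}

void RunFrame()
{
    const auto frameStart = Clock::now();

    const auto submitStart = Clock::now();
    // ... build command buffers / issue draw calls here ...
    const auto submitEnd = Clock::now();

    // ... present and wait for the GPU here ...
    const auto frameEnd = Clock::now();

    std::printf("submit: %.2f ms of %.2f ms frame\n",
                Ms(submitStart, submitEnd), Ms(frameStart, frameEnd));
}
```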
 
Okay, enough FUD now. If a mod is willing to clean up the mess it might open again; otherwise it's an example of ignorance and/or fanboyism.
 
The only way to be sure is to nuke the thread from orbit!
 