No DX12 Software is Suitable for Benchmarking *spawn*

This entire thing... we should ask a group of developers to create an article on async compute and shut everyone up, lol. Problem solved. But that would ask them to take time out of their day to write it and coordinate with each other. I think it would be a good article if one of the websites reached out to notable developers and did something like this, though; it would be the end-all, be-all of "stop listening to marketing and random forum posters who don't have a complete understanding of the situation."

We have had developers here who have stated the most critical points on the topic, but those posts get drowned out by all the noise.
 
Seems like some Steam users have been led to believe Time Spy is not a valid DX12 test and does not use "proper" DX12 multi-engine dispatch with enough parallelism:
http://steamcommunity.com/app/223850/discussions/0/366298942110944664/?ctp=1
The irony is that the recent work AMD has done with GPUOpen, and its synergy with consoles, will also be lost in the mix of async compute pitchforks.
In fact, I bet any performance gains from GPUOpen will be put down to async compute by most, which I think is already being seen with Doom.
And if anyone mentions shader intrinsic functions, the response is that they cannot be doing anything for AMD because Vulkan is an open standard; I've read that on a few other sites and on Reddit.
On a side note, it will be interesting to see what flexibility Nvidia has with its own intrinsic functions (although I've only seen them referenced with CUDA).
Cheers
 
This entire thing... we should ask a group of developers to create an article on async compute and shut everyone up, lol. […]

From MSDN:

Most modern GPUs contain multiple independent engines that provide specialized functionality. Many have one or more dedicated copy engines, and a compute engine, usually distinct from the 3D engine. Each of these engines can execute commands in parallel with each other.
Can, not must. It's not difficult at all to understand that it is not a constraint.
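For reference, here is a minimal C++ sketch of what that multi-engine setup looks like through the API (assuming a valid ID3D12Device*, omitting error handling; the function name is made up). The app just creates separate queues; nothing in the API forces the hardware to execute them concurrently.

Code:
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateEngineQueues(ID3D12Device* device,
                        ComPtr<ID3D12CommandQueue>& gfxQueue,
                        ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // 3D engine queue: accepts graphics, compute, and copy commands.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Compute engine queue: compute and copy commands only.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Work submitted to these two queues *may* overlap on the GPU. The API
    // only guarantees the ordering the app expresses through fences; a
    // driver that serializes the queues internally is still conformant.
}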
 
This entire thing... we should ask a group of developers to create an article on async compute and shut everyone up, lol. […]
You really think that would help? Some are up in arms with pitchforks against Futuremark because they don't do "proper async compute". It's basically a religion. Just waiting for someone to call out a jihad...
 
Seems like some Steam users have been led to believe Time Spy is not a valid DX12 test and does not use "proper" DX12 multi-engine dispatch with enough parallelism. […]
Read two sentences of the original post and concluded the guy probably has no idea what he's talking about...

Press: please don't elevate that silliness to anything more than it already is :) Let's just look at the pretty pixels and maybe forget the rest of the irrelevant noise.
 
Again, the Futuremark guys are too fair.

Not sure if you're being ironic, but one 3DMark version (Vantage, IIRC?) used to have a PhysX test that contributed to the main score, where Nvidia systems would use the GPU and everyone else had to resort to that old, very unoptimized x87 PhysX code on the CPU.
 
Not sure if you're being ironic, but one 3DMark version (Vantage, IIRC?) used to have a PhysX test that contributed to the main score. […]
I was not being ironic; they spend time trying to answer and explain to childish people that they did nothing wrong.
About PhysX: it's not Futuremark's fault that the CPU software version of PhysX had legacy x87 calls.
 
Not sure if you're being ironic, but one 3DMark version (Vantage, IIRC?) used to have a PhysX test that contributed to the main score. […]

You may have a point, but it's well established at this point that even old versions of PhysX were well optimized and performance-competitive with other physics engines.
 
I was not being ironic; they spend time trying to answer and explain to childish people that they did nothing wrong.
About PhysX: it's not Futuremark's fault that the CPU software version of PhysX had legacy x87 calls.

It was Futuremark's fault for using PhysX in the first place. Havok would have been preferable, but that wouldn't have showcased the potential advantages of one IHV at the expense of everyone else. If they are going to use standards, they should use open standards, or at least standards that aren't controlled by an IHV, since an IHV is inherently going to make whatever it controls show its own hardware in a better light than its competitors' (which is what it should be doing).

In that case the blame was fully on Futuremark for using PhysX.

Regards,
SB
 
Read two sentences of the original post and concluded the guy probably has no idea what he's talking about...

Press: please don't elevate that silliness to anything more than it already is :) Let's just look at the pretty pixels and maybe forget the rest of the irrelevant noise.
Agreed, but too late I'm afraid. Wish I was in charge... somehow.
 
Not sure if you're being ironic, but one 3DMark version (Vantage, IIRC?) used to have a PhysX test that contributed to the main score. […]
No, you are not remembering correctly, sorry. There are, however, 3DMarks that do use higher-than-base feature levels of DirectX to optimize the rendering process. I await a collective call for justice now...
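For anyone curious what that mechanism looks like, a minimal C++ sketch (function name invented; adapter plumbing and error handling omitted): an app can probe downward through Direct3D feature levels at device creation and take the highest one the adapter supports, enabling optional features only where the hardware has them.

Code:
#include <d3d12.h>
#include <dxgi.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Device> CreateBestDevice(IDXGIAdapter* adapter)
{
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_12_1,   // try the highest first...
        D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_11_0,   // ...then fall back to the baseline.
    };

    ComPtr<ID3D12Device> device;
    for (D3D_FEATURE_LEVEL level : levels) {
        if (SUCCEEDED(D3D12CreateDevice(adapter, level,
                                        IID_PPV_ARGS(&device)))) {
            return device;  // highest level this adapter supports
        }
    }
    return nullptr;  // no D3D12-capable device found
}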
 
AMD marketing did its job well this time....

No kidding. They hit this one out of the park.

Some people are actually convinced that the results of two independent bits of code are incorrect unless run concurrently. Is this the end of multi-threading as we know it?

They're also convinced that there are multiple types of "async" and Futuremark picked the one that favors Nvidia. The stupidity is astounding.
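To make that first point concrete, here is a minimal C++ sketch (workloads and names invented for illustration): two independent computations produce identical results whether they run one after the other or at the same time.

Code:
#include <cassert>
#include <future>
#include <numeric>
#include <vector>

// Two calls on disjoint data: neither reads the other's input or output.
int SumSquares(const std::vector<int>& v) {
    return std::inner_product(v.begin(), v.end(), v.begin(), 0);
}

int main() {
    std::vector<int> a(1000, 2), b(1000, 3);

    // Serial execution.
    int s1 = SumSquares(a);
    int s2 = SumSquares(b);

    // Concurrent execution of the same two tasks.
    auto f1 = std::async(std::launch::async, SumSquares, std::cref(a));
    auto f2 = std::async(std::launch::async, SumSquares, std::cref(b));

    // Same answers either way; concurrency changes wall-clock time,
    // never the correctness of independent computations.
    assert(s1 == f1.get() && s2 == f2.get());
}

Async compute is the same idea one level down: overlapping the 3D and compute queues is a throughput opportunity the hardware may or may not take, not a requirement for correct results.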
 
I wonder if their universe will implode when they find out that the actual ALUs cannot run the compute queue in parallel with the 3D queue, but that at the wavefront level there is still timeslicing going on.
 