No DX12 Software is Suitable for Benchmarking *spawn*

Not sure if this was posted ... but I guess times of desperation are here.

So that previous post of mine with lots of comparison graphics in the last page bothered you that much?

But please indulge us as to how a blog post showing 80% scaling using explicit multiadapter, with two $240 graphics cards getting 10% better performance than the competitor's >$700 single card, is considered a sign of desperation.
And they didn't even have to use bullshit scales (deserving of parody) in the graphs for people to discern the difference.
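To put those claims in concrete terms (illustrative round figures of my own, not from the blog post): if a single RX 480 scores 100 fps, 80% scaling puts the dual-card setup at roughly 180 fps, and a 10% lead over the >$700 card implies that card lands at around 180 / 1.1 ≈ 164 fps.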

 
The reason I say it's not relevant (for PC as we are talking about) is not because there are no cases in which it could be useful to have more "realtime" constraints, but because there is actually no control in DX12 over priorities or preemption. The two are also separate concepts but they constantly get confused in this conversation. Truly time sensitive work (like VR and desktop composition) should preempt, not run concurrently and so the granularity of preemption is what matters. More cooperative/sustained but still somewhat high priority work can get away with queue priorities.

But again, none of this is exposed to user applications on the PC which is why I say it's not even relevant to application developers right now :)


Most consumer CPUs with 4 or fewer cores though *do* have an iGPU, which can happily run low-latency work concurrently with a dGPU :)
Would be great if we also had eDRAM for the higher-performance i7 desktop CPUs :) - not just the 65W Broadwell.
Has anyone managed to convince the Intel executives to actually show some enthusiasm for the desktop PC market and do this, rather than limiting it to particular models that drew limited interest?
From various rumours and news out there, it seems even Kaby Lake is not getting much eDRAM love for the performance i7-type desktop CPUs.
We can dream though :)
Cheers
 
So that previous post of mine with lots of comparison graphics in the last page bothered you that much?
Not really. It would have been more meaningful had they either used the latest WHQL drivers for both products or the GPU release-day drivers for both. The numbers have little meaning when they use the newest AMD drivers (released yesterday) against Nvidia's release-day drivers, but I guess that's where advertising money well spent comes into the picture.

But please indulge us as to how a blog post showing 80% scaling using explicit multiadapter, with two $240 graphics cards getting 10% better performance than the competitor's >$700 single card, is considered a sign of desperation.
And they didn't even have to use bullshit scales (deserving of parody) in the graphs for people to discern the difference.
It's too bad the GTX 1060 was left out of the graph ... it's the true competitor of the RX480 at $250 and does a decent job in this benchmark. Let's just say most know what to expect from AMD labs benchies, and often nothing could be further from the truth. I look forward to reviews of explicit multiadapter comparisons between the RX480 and GTX 1060.
 
It's too bad the GTX 1060 was left out of the graph ... it's the true competitor of the RX480 at $250 and does a decent job in this benchmark. Let's just say most know what to expect from AMD labs benchies, and often nothing could be further from the truth. I look forward to reviews of explicit multiadapter comparisons between the RX480 and GTX 1060.
The graph and its numbers were made before the 1060 was released. The article was posted today.
 
The graph and its numbers were made before the 1060 was released. The article was posted today.

IIRC, pharma and/or others also complained that the Polaris 11 vs. GTX 950 power consumption demo made back in January during CES didn't include the GP106 instead (a chip of which there was still no info 7 months later).

Convenient ignorance fits the narrative really well, just like this little piece here:



The numbers have little meaning when they use the newest AMD drivers (released yesterday) against Nvidia's release-day drivers, but I guess that's where advertising money well spent comes into the picture.

So you're accusing Golem.de of taking bribes from AMD because they're using an older driver from nvidia.
Let's see what drivers/specs they used:

Asus Z170-Deluxe, Core i7-6700K, 4 x 4 GByte DDR4-2133, Seasonic 520W Platinum Fanless; Win10 x64, Geforce 368.64, Radeon Software 16.7.2

What driver everyone else used:

Tomshardware: same driver
Techspot: same driver
TechARP: same driver
TechPowerUp: same driver
HardOCP: same driver

And here's what TechARP has to say about the subject:
We used the GeForce driver version 368.64 for all three NVIDIA graphics cards used in our tests. NVIDIA released a newer GeForce driver 368.81 several days ago but it does not support the GeForce GTX 1060.



Just so we're clear:
- Do you think AMD bribed all GTX 1060 reviewers in the world to use a one-week-old driver (damn, AMD must be floating in money to handle all those bribes!), or was it just Golem.de?

When Anandtech presents their GTX 1060 pre/review scores today, I wonder if they'll have been bribed by AMD to use that terrible driver that nvidia handed out to reviewers.
 
I stand corrected ... the Nvidia release-day driver Golem.de used for their review is the most current for the 1060. Let's see if Golem.de conducts a valid comparative shootout when Vulkan/DX12 issues are resolved. There are too many unanswered issues surrounding Vulkan/DX12 benchmarks right now to draw any definitive conclusions. Anandtech is fairly consistent and does not involve themselves in any one-sided shootouts. The consistency is there with the day-one reviews you listed, so no surprise there.
 
The 368.81 has been silently updated after the 1060 launch and installed flawlessly on the 1060. I just found out by accidentally clicking on it without modding the INF first.

FWIW & IMHO: Integrated benchmarks are more worthless now than ever. I am under the impression that they are internal ISV tools in order to optimize graphics alone.
Short example: RotTR, Geothermal Valley. The integrated benchmark score vs The geothermal-valley-part of the integrated benchmark vs. a real world run-through of 60 seconds in geothermal valley on my FX 6300 using a 1060 and a 480 (very high, 900p - don't ask :)).

1060:
83.9 - 44/80.4 - 49/57.7 (min/avg. fps where applicable)
480:
67.8 - 40/62.4 - 34/41.6 (min/avg. fps where applicable)

Similar with Warhammer: the integrated benchmark leads you to believe the game might be graphics limited, which it isn't as soon as you're beyond the basic tutorial and commanding a real army.

 
But please indulge us as to how a blog post showing 80% scaling using explicit multiadapter, with two $240 graphics cards getting 10% better performance than the competitor's >$700 single card, is considered a sign of desperation.

I don't want to comment on the "times of desperation" because it's a pointless argument, but any "AFR solution" cannot be compared to a "non-AFR solution". In the scenario you provided the $700 gpu will almost certainly be the superior experience regardless of what the fps bar graphs claim.
 
I don't want to comment on the "times of desperation" because it's a pointless argument, but any "AFR solution" cannot be compared to a "non-AFR solution". In the scenario you provided the $700 gpu will almost certainly be the superior experience regardless of what the fps bar graphs claim.

The scenario provided (by AMD, not me) is a non-interactive render demo. Are you sure the AFR solution offers an inferior experience here?


Sure, you can downplay 3dmark for being what it is and not representing any real gaming experience (and then AFR = input lag, etc, etc). But then again there's a myriad of factors that can lessen AFR's disadvantages or even nullify them for many users.
Do the AFR disadvantages matter if I'm playing Civ V or XCom 2?
Another example: I'm using a Freesync monitor and couldn't care less about competitive FPS or RTS.
 
The reason I say it's not relevant (for PC as we are talking about) is not because there are no cases in which it could be useful to have more "realtime" constraints, but because there is actually no control in DX12 over priorities or preemption. The two are also separate concepts but they constantly get confused in this conversation. Truly time sensitive work (like VR and desktop composition) should preempt, not run concurrently and so the granularity of preemption is what matters. More cooperative/sustained but still somewhat high priority work can get away with queue priorities.

But again, none of this is exposed to user applications on the PC which is why I say it's not even relevant to application developers right now :)
Queue priority is explicitly exposed to the developer. You can create a high priority queue with the D3D12_COMMAND_QUEUE_PRIORITY_HIGH flag.

The documentation of this flag is nonexistent, but as a programmer I would assume that DX12 queue priorities work similarly to queue/thread priorities in all other APIs. It is common to all queue APIs that high priority work is selected for execution first if both high priority and low priority work exist. For continuous work (such as threads on a CPU) it is completely acceptable that there's some kind of time-slot granularity; I wouldn't mind if there was a few milliseconds of latency. However, currently it seems that some IHVs completely ignore the D3D12_COMMAND_QUEUE_PRIORITY_HIGH flag and execute the high priority work only after completing several frames of rendering work. This is not acceptable behavior for a high priority queue, and certainly not what a professional programmer would expect when specifically asking for a high priority queue.

This is a problem that needs to be solved. Otherwise mixed (low latency game logic) GPGPU and rendering will not be possible. We need better documentation and we need some guidelines that all IHVs need to follow. Otherwise it is very hard for the programmers to use the high priority queues.
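For reference, here's a minimal sketch of what that queue creation looks like (my own illustration, not from the post above; the helper name and the choice of a compute queue are assumptions, and error handling is trimmed):

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create a compute queue and request high priority. Note that HIGH is only a
// request; how (or whether) the driver honours it is exactly what is under-documented.
ComPtr<ID3D12CommandQueue> CreateHighPriorityComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;    // async compute queue
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_HIGH;  // the flag discussed above
    desc.Flags    = D3D12_COMMAND_QUEUE_FLAG_NONE;
    desc.NodeMask = 0;

    ComPtr<ID3D12CommandQueue> queue;
    if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue))))
        return nullptr;                                 // handle failure properly in real code
    return queue;
}
```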
Most consumer CPUs with 4 or fewer cores though *do* have an iGPU, which can happily run low-latency work concurrently with a dGPU :)
I have thought about that. When both dGPU and iGPU are available, this is the best option. However, some consumers have 6- or 8-core CPUs with no iGPU. It would be a little bit harsh to exclude these players completely. Imagine the shitstorm when you tell your customers that an 8-core i7 + GTX 980 Ti can't run your game, but a dual-core i3 + any modern discrete GPU runs it perfectly... or god forbid the game works perfectly without an iGPU when you have any AMD GCN graphics card.
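As a sketch of the dGPU + iGPU idea (again my own illustration, assuming you simply create one D3D12 device per hardware adapter and keep the latency-sensitive GPGPU work on the integrated one):

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Enumerate every hardware adapter and create a D3D12 device on each, so the
// latency-sensitive GPGPU work can run on the iGPU while the dGPU renders.
std::vector<ComPtr<ID3D12Device>> CreateDeviceForEachAdapter()
{
    std::vector<ComPtr<ID3D12Device>> devices;

    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)    // skip WARP / software adapters
            continue;

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);                  // iGPU and dGPU both end up here
    }
    return devices;
}
```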
 
Hmm. I find it curious that in every game tested where there is a DX11 versus DX12 or OpenGL versus Vulkan comparison, there is consistent performance degradation on the GTX 1060 when going to the newer API. That isn't always the case with the GTX 1070/1080.

It appears that the 1060 is the better card if using the older rendering APIs, but that the 480 is generally better when using the newer rendering APIs. So overall, they're relatively evenly matched.

Regards,
SB

Hmm, got a link to this?
 
The 368.81 has been silently updated after the 1060 launch and installed flawlessly on the 1060. I just found out by accidentally clicking on it without modding the INF first.

FWIW & IMHO: Integrated benchmarks are more worthless now than ever. I am under the impression that they are internal ISV tools in order to optimize graphics alone.
Short example: RotTR, Geothermal Valley. The integrated benchmark score vs The geothermal-valley-part of the integrated benchmark vs. a real world run-through of 60 seconds in geothermal valley on my FX 6300 using a 1060 and a 480 (very high, 900p - don't ask :)).

1060:
83.9 - 44/80.4 - 49/57.7 (min/avg. fps where applicable)
480:
67.8 - 40/62.4 - 34/41.6 (min/avg. fps where applicable)

Similar with Warhammer: the integrated benchmark leads you to believe the game might be graphics limited, which it isn't as soon as you're beyond the basic tutorial and commanding a real army.


It's most likely that the FX 6300 is bottlenecking the 480, rather than there being such a difference between the two GPUs in Geothermal Valley. AFAIK Nvidia's DX11/OpenGL drivers have a significantly lower CPU footprint, making them a better buy for low-to-mid-end systems (till we get more Vulkan/DX12 games).
 
Forgot to mention: This is using DX12 already.
And the point I was trying to make was how skewed towards pure graphics performance many integrated benchmarks are nowadays - not that one IHV is better or worse than the other.
 
Total War: Warhammer DX12 boost for AMD still can't match Nvidia's DX11 performance

Even now, nearly two months after launch, Warhammer's DX12 support is still classified as 'beta,' and I have to wonder when—or if—that classification will ever be removed. Rise of the Tomb Raider could call their DX12 support 'beta' as well, and it has a disclaimer whenever you enable the feature: "Using the DirectX 12 API can offer significantly better performance on some systems; however, it will not be beneficial on all." Most of the games with DX12 or Vulkan support so far should have a similar disclaimer, and that's part of the concern with low-level APIs.

Total War: Warhammer is one game, so people shouldn't base their hardware purchases solely on how it performs. It's another example of a game with low-level API support where AMD sees gains while Nvidia performance drops, but it's also in the same category as Ashes of the Singularity and Hitman—namely, it's promoted by AMD and sports their logo. If you focus on only DX12 or DX11 performance, the GPU standings change quite a bit, but once you select the best API for each GPU, things are basically back to 'normal.'
http://www.pcgamer.com/total-war-wa..._source=twitter&utm_campaign=buffer-pcgamertw
 
From the Article:
It's a strange new world we live in when a game can be promoted by a hardware vendor (AMD) and yet fail to deliver good performance for more than a month after launch. I'd guess the publisher wanted to get the main product released and felt there was no need to delay that just so the developer could 'finish' working on the DX12 version (which is basically the same thing we've seen with Doom's Vulkan update, Hitman's DX12 patch, and to a lesser extent Rise of the Tomb Raider's DX12 support). The concern is that if this sort of behavior continues, low-level API support may end up being a second class citizen.

Which echoes what has been said before. If this continues, the so-called low-level APIs will indeed end up branded as "low-level quality".

Right now, and on many occasions, low-level APIs are just an excuse for AMD to shift the blame for optimizing onto developers rather than doing it themselves. Worse yet, this support comes after game launch, suffers delays, stays in incomplete "beta" status, and worst of all, in many cases (Ashes, RotTR, Warhammer) can't even beat the performance of their competitor's high-level API, which they are supposed to maintain and optimize for.

So really, if you are an AMD user, you have to suffer the abysmal DX11/OpenGL performance of their hardware, and their reliance on powerful CPUs, until a new patch is released to iron things out, which will (depending on your luck) get things even with NVIDIA or slightly ahead, though sometimes you will still remain behind NVIDIA; it's just dumb luck, or rather how keen the developer is on doing things a certain way. Leaving things in the hands of the developer alone also means more bugs, slower fixing of those bugs, broken mGPU solutions, and a hindered ability to introduce new features such as new AA techniques, or even to force old ones (AA/AF/Vsync/AO) through the driver (at least for AMD).

Fantastic!
 