Vulkan is a GCN low level construct?

Don't play 2016 games with 2009 CPUs. Got it.
I'll share that incredibly useful knowledge with all my friends.

Nah, you missed the point, which was that nvidia's OpenGL driver seems to be more efficient than AMD's Vulkan. Actually I think you did get it and are just trying to be smart. ;-)
 
Nah, you missed the point, which was that nvidia's OpenGL driver seems to be more efficient than AMD's Vulkan [on 7-year-old CPUs].

Which tells me almost nothing in practical terms, except that nvidia's driver division gets the budget and headcount to optimize for CPU architectures that were EOL'd a long time ago, and AMD's does not.
Had they done any testing with the same CPU (or at least one not that old, e.g. an Ivy Bridge) at lower clocks, then maybe we could get some more insight, but they didn't.

From those results, I can't tell if AMD's driver is being inefficient because it's too reliant on raw INT/FP performance, or if e.g. many optimizations have been made using AVX extensions and the absence of those turns said optimizations off.
Why didn't they just lower the multiplier on that 6700K to bring the clocks down to 2GHz, and/or turn off Hyper-Threading? They could have done the downclocking without even restarting the test system. Why go through all the work of firing up ~8-year-old platforms?
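Purely as an illustration of that second possibility (this is not AMD's actual driver code; every function name here is made up), a driver-side optimization gated on AVX might dispatch at runtime like this, so pre-AVX CPUs such as Nehalem or Phenom II would silently fall back to a slower path no matter how high they clock:

```cpp
#include <cstddef>
#include <cstdio>

// Hypothetical "optimized" routine; imagine it built with AVX intrinsics.
static void transform_vertices_fast(float* dst, const float* src, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) dst[i] = src[i] * 2.0f; // stand-in for AVX code
}

// Hypothetical scalar fallback for CPUs without AVX.
static void transform_vertices_scalar(float* dst, const float* src, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) dst[i] = src[i] * 2.0f;
}

using TransformFn = void (*)(float*, const float*, std::size_t);

static TransformFn select_transform() {
    // GCC/Clang builtin: non-zero only if the running CPU reports AVX support.
    if (__builtin_cpu_supports("avx"))
        return transform_vertices_fast;
    return transform_vertices_scalar; // Nehalem / Phenom would land here
}

int main() {
    float in[4] = {1, 2, 3, 4}, out[4];
    select_transform()(out, in, 4);
    std::printf("%.1f\n", out[0]);
}
```

If the fast path is where most of the tuning effort went, losing it would look exactly like the driver suddenly needing much more raw per-clock performance.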


I guess there may be some gamers with 8-year-old Nehalem and Phenom systems wondering if they should just upgrade their graphics card to a $250-300 model, for which the proper response should be no.
 
Which tells me almost nothing in practical terms, except that nvidia's driver division gets the budget and headcount to optimize for CPU architectures that were EOL'd a long time ago, and AMD's does not.
Had they done any testing with the same CPU (or at least one not that old, e.g. an Ivy Bridge) at lower clocks, then maybe we could get some more insight, but they didn't.

From those results, I can't tell if AMD's driver is being inefficient because it's too reliant on raw INT/FP performance, or if e.g. many optimizations have been made using AVX extensions and the absence of those turns said optimizations off.
Why didn't they just lower the multiplier on that 6700K to bring the clocks down to 2GHz, and/or turn off Hyper-Threading? They could have done the downclocking without even restarting the test system. Why go through all the work of firing up ~8-year-old platforms?


I guess there may be some gamers with 8-year-old Nehalem and Phenom systems wondering if they should just upgrade their graphics card to a $250-300 model, for which the proper response should be no.


http://www.purepc.pl/karty_graficzn...wy_test_wydajnosci_kart_graficznych?page=0,12

Maybe you're onto something (would've been more helpful if they scaled down the clock speed, though)...

I still think it's pretty awesome that nvidia's driver is that fast, even on old hardware.
 
There you go, negligible performance difference between a 4.5GHz quad-core and a 3.5GHz dual-core.

It would be interesting to know how low a modern CPU core can go without affecting game performance that much, but we would need someone willing to test that.
Techspot PC performance comparisons usually push their CPU clocks down to 2.5 GHz, but they always use the same graphics card.

Maybe if someone looked at the performance on external GPU enclosures connected to laptops with 15W CPUs (say, Razer Blade Stealth with AMD card in a Razer Core?) we could get that info by comparing with regular reviews.
 
Nah, you missed the point, which was that nvidia's OpenGL driver seems to be more efficient than AMD's Vulkan. Actually I think you did get it and are just trying to be smart. ;-)
It's nothing new; we saw the same behavior with DX12, where NV essentially delivers the same performance using a low-clocked CPU as a highly clocked one:


http://m.hardocp.com/article/2016/0...l_cpu_scaling_gaming_framerate/4#.V8R9NMmxVzQ
 
There you go, negligible performance difference between a 4.5GHz quad-core and a 3.5GHz dual-core.

It would be interesting to know how low a modern CPU core can go without affecting game performance that much, but we would need someone willing to test that.
Techspot PC performance comparisons usually push their CPU clocks down to 2.5 GHz, but they always use the same graphics card.

Maybe if someone looked at the performance on external GPU enclosures connected to laptops with 15W CPUs (say, Razer Blade Stealth with AMD card in a Razer Core?) we could get that info by comparing with regular reviews.

Yeah, I wish they would have gone lower. I have a feeling AMD would have buckled sooner. Couldn't find any better tests though.
 
From the same review, it looks like in a GPU-limited scenario the AMD driver works just as well:




Though this isn't Vulkan, so unless we're also trying to make DX12 into a GCN construct, the point is a bit moot.
 
The result is obvious. Porting a game/engine designed for OpenGL/DX11 to Vulkan/DX12 will drastically reduce the CPU cost. People keep their CPUs for a long time. A sane developer ensures that their game runs reasonably well (60+ fps) on slightly older CPUs. PC gamers appreciate that their GPU is the bottleneck (even at the high end), so that they can tune the frame rate to their liking by adjusting the resolution and visual quality. People don't like heavily CPU-bottlenecked games (= no GPU scaling).

As current games/engines are designed around low-performance APIs and with a goal of being GPU-bottlenecked, Vulkan/DX12 do not help much if you have a desktop with a modern 4-core CPU. Currently the gains are mostly seen with older CPUs and on laptops. Vulkan/DX12 are perfect for low-TDP laptops and ultraportables. Android games will also be drastically improved (both battery life and performance) as Vulkan gets more popular.

There are of course exceptions. The Nitrous engine (Ashes of the Singularity, Star Swarm) for example was designed around DX12 and submits a huge amount of draw calls. It can properly utilize a modern CPU with lots of cores. There are many engines currently in development that target similar draw call counts and show good multithreaded rendering scaling. But there are also many engines moving towards GPU-driven rendering. We are seeing multidraw/ExecuteIndirect replacing bulk draw calls. The GPU is better at big data operations such as preparing the scene for rendering (matrix setup, culling, draw list generation, etc.). I firmly believe that most AAA engines will eventually move these (highly parallel) tasks to the GPU. It remains to be seen how important a fast CPU is in the future. Rendering isn't going to be the biggest CPU hog anymore. Better physics, destruction, AI, etc. are needed.
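To make the multidraw/ExecuteIndirect point a bit more concrete, here is a rough Vulkan-flavoured sketch (an illustration only, not from any particular engine; it assumes a culling compute pass has already filled the draw and count buffers, and that all handles were created elsewhere). The CPU records one indirect draw instead of thousands of per-object draw calls:

```cpp
#include <vulkan/vulkan.h>

// Sketch only: 'drawBuf' holds an array of VkDrawIndexedIndirectCommand records
// written by a GPU culling pass, and 'countBuf' holds the surviving draw count.
void record_gpu_driven_draw(VkCommandBuffer cmd,
                            VkBuffer drawBuf,
                            VkBuffer countBuf,
                            uint32_t maxDraws)
{
    // One call (Vulkan 1.2 core) replaces thousands of per-object
    // vkCmdDrawIndexed() calls; the CPU never touches the per-object data.
    vkCmdDrawIndexedIndirectCount(cmd,
                                  drawBuf, 0,   // draw records + offset
                                  countBuf, 0,  // GPU-written draw count + offset
                                  maxDraws,     // upper bound recorded by the CPU
                                  sizeof(VkDrawIndexedIndirectCommand));
}
```

The DX12 equivalent is a single ExecuteIndirect with a count buffer; either way the scene preparation (culling, draw list generation) can stay entirely on the GPU.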
 
From the same review, it looks like in a GPU-limited scenario the AMD driver works just as well:
Not quite. With the low-clocked CPU, the NV driver achieves 95% of the performance of the highly clocked CPU, while AMD only achieves 90%. Thus NV's DX12 driver is more efficient than AMD's, though the difference is close this time, contrary to DX11, where the difference is massive.

Though this isn't Vulkan, so unless we're also trying to make DX12 into a GCN construct, the point is a bit moot.
The similarities (driver efficiency wise) are amusing, especially in light of the post above from sebbbi.
 
Not quite. With the low-clocked CPU, the NV driver achieves 95% of the performance of the highly clocked CPU, while AMD only achieves 90%. Thus NV's DX12 driver is more efficient than AMD's, though the difference is close this time, contrary to DX11, where the difference is massive.


The similarities (driver efficiency wise) are amusing, especially in light of the post above from sebbbi.

Well, we know that AMD's DX11 performance in Ashes is extremely bad, even compared to other DX11 titles, as if AMD had not optimized anything for it. That said, I haven't checked whether it's still the same months later.
 
A sane developer ensures that their game runs reasonably well (60+ fps) on slightly older CPUs

Do "slightly older CPUs" include Nehalem from 2008/09?

Regardless, I was talking about driver development, not game development.
DX12 games on AMD hardware seem to work great even on Sandy Bridge and first-gen Bulldozer CPUs at 2.5 GHz or less. It's when you pair them with pre-AVX CPUs that it all seems to turn into crap, even if they run well over 3GHz.
 
Well, we know that AMD's DX11 performance in Ashes is extremely bad, even compared to other DX11 titles, as if AMD had not optimized anything for it. That said, I haven't checked whether it's still the same months later.
Nvidia has a fast path for repeated draw calls with identical state (similar to multidraw). Nvidia gained a massive performance boost in Star Swarm (an earlier benchmark using the same Nitrous engine) when they implemented this optimization. Their earlier performance results were much closer to AMD's. This kind of optimization should work perfectly for Ashes and some other games/engines that submit a huge amount of similar draw calls.
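As a loose app-side analogue of what such a fast path collapses (a sketch only, not NVIDIA's driver internals), compare submitting N identical-state draws one by one with a single multidraw whose per-draw parameters already sit in a GPU buffer:

```cpp
#include <GL/glew.h> // any loader exposing GL 4.3+ entry points

// N separate driver calls, each paying the per-draw validation cost again.
void draw_one_by_one(GLsizei n, const GLsizei* indexCounts, const GLintptr* indexOffsets)
{
    for (GLsizei i = 0; i < n; ++i)
        glDrawElements(GL_TRIANGLES, indexCounts[i], GL_UNSIGNED_INT,
                       reinterpret_cast<const void*>(indexOffsets[i]));
}

// One call: the DrawElementsIndirectCommand records were uploaded beforehand
// to the buffer bound to GL_DRAW_INDIRECT_BUFFER.
void draw_multidraw(GLsizei n)
{
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, n, 0);
}
```

Recognizing the first pattern and turning it into something like the second inside the driver is the kind of optimization that tends to show up for big, heavily profiled titles first.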

Nvidia also does much more per title optimizations and tweaking in their drivers. This results in better performance for important titles (selling well or used as benchmarks). Custom driver optimizations per game however don't help smaller indie games at all (unless they become popular enough). As a developer of smaller AA games, I like DX12/Vulkan more, because these APIs don't depend on IHV supporting your game to extract best performance.
 
As a developer of smaller AA games, I like DX12/Vulkan more, because these APIs don't depend on IHV supporting your game to extract best performance.

...just curious, as I've never gone deep with Vulkan/DX12: do they work mostly using a user-mode driver - i.e. writing commands in a user mode command queue(s)/rings/whatever? Or do they still involve many driver calls (except for non-trivial, special tasks like vmemory allocation/mapping that are obviously performed by drivers/hypervisors)?
 
...just curious, as I've never gone deep with Vulkan/DX12: do they work mostly using a user-mode driver - i.e. writing commands in a user mode command queue(s)/rings/whatever? Or do they still involve many driver calls (except for non-trivial, special tasks like vmemory allocation/mapping that are obviously performed by drivers/hypervisors)?
Since Vista, all graphics APIs have been primarily user-mode. Prior to Vista, OpenGL was always user-mode except for "swap buffers", which was handled by the KMD. There is still a KMD component for memory management, display management, etc.
 
Nvidia also does much more per title optimizations and tweaking in their drivers. This results in better performance for important titles (selling well or used as benchmarks). Custom driver optimizations per game however don't help smaller indie games at all (unless they become popular enough). As a developer of smaller AA games, I like DX12/Vulkan more, because these APIs don't depend on IHV supporting your game to extract best performance.

Yeah, I've personally run into this, where non-benchmarked, non-partnered and/or non-hyped game titles had relatively bad support and spotty performance on NVidia hardware, whereas performance was more even across titles on AMD hardware.

In that sense Vulkan/DX12 should help even things out for Nvidia users like me who play a greater percentage of titles that NVidia doesn't implement driver optimizations for. And AMD users just get greater performance overall.

Regards,
SB
 
Since Vista, all graphics APIs have been primarily user-mode. Prior to Vista, OpenGL was always user-mode except for "swap buffers", which was handled by the KMD. There is still a KMD component for memory management, display management, etc.

hmm... I remember the Longhorn model, at least vaguely, yeah. Thanks for the clarification.
I was honestly assuming that a blocking issue on Windows was having DX cross the kernel boundary all the time, since there is a lot of work in the kernel for DX, and likewise for OGL.
 
Yeah, I've personally run into this where support on non-benchmarked, non-partnered and/or non-hyped game titles had relatively bad support and spotty performance on NVidia hardware, whereas performance was more even across titles on AMD hardware.
I've had quite the opposite experience actually: a plethora of AMD GPUs run like crap with very old titles or regular little games (kids' games, cheap games, etc.). These titles would be infested with flickering textures, unstable frame rates, corruption of alpha effects, etc., as most PC developers optimize for the dominant GPU vendor, which happens to be NV. Heck, these problems sometimes find their way into famous titles as well. I won't even touch the subject of horrendous OpenGL support.
 
I've had quite the opposite experience actually: a plethora of AMD GPUs run like crap with very old titles or regular little games (kids' games, cheap games, etc.). These titles would be infested with flickering textures, unstable frame rates, corruption of alpha effects, etc., as most PC developers optimize for the dominant GPU vendor, which happens to be NV. Heck, these problems sometimes find their way into famous titles as well. I won't even touch the subject of horrendous OpenGL support.
Have you tried Quake 4 on a GTX 1080? I have and it's atrocious.
 