Vulkan/OpenGL Next Generation Initiative: unified API for mobile and non-mobile devices.

  • Thread starter Deleted member 13524
But what hardware does Google support Vulkan on? There are still OEMs releasing new Android products with anemic hardware. How can Google push Vulkan for games when the hardware capable of running it is so splintered?
 
Isn't it just a matter of time? Two or three years after Google introduces Vulkan with Android 7 or 8 or whatever, 99% of the devices in use will be running a Vulkan-capable Android version. Yes, there will be a load of old phones still running an older OS, but those phones won't be used by people who want to play games.

MS required an OS upgrade to use the latest version of DX twice; I don't see why Google can't do the same.
 
It's not even a software problem. There were brand new SoCs this year with Mali-4xx. You think those will be able to run Vulkan?


I think it would be fine if the phones/tablets carrying these bottom-of-the-barrel SoCs were prevented from upgrading to e.g. Android 7, and from playing high-end games from 2016/17. And AFAICT, for 2015 SoCs we're only talking about HiSilicon's Kirin 620, which is only present in a handful of low-end Huawei smartphones. All the others have been putting Midgard GPUs in their lowest end for some time.
 
I think it would be fine if the phones/tablets carrying these bottom-of-the-barrel SoCs were prevented from upgrading to e.g. Android 7, and from playing high-end games from 2016/17. And AFAICT, for 2015 SoCs we're only talking about HiSilicon's Kirin 620, which is only present in a handful of low-end Huawei smartphones. All the others have been putting Midgard GPUs in their lowest end for some time.

Intel's X3 also uses Mali-4xx (http://www.anandtech.com/show/9304/asus-announces-two-new-zenpad-tablets). I'm sure there's more examples.

But look at the current hardware stats of Android: http://hwstats.unity3d.com/mobile/gpu-android.html

Mali-4xx has almost 30% of the market share (and depressingly, it increased its share from last year)! Obviously that will (hopefully?) start to fall, but currently over 50% of the market only supports ES 2.0. It's not just a software problem, although I admit software is probably the bigger obstacle to Vulkan on Android in 2016-2017. It's just hard to see Vulkan becoming a "real competitor" when (and I think I'm being generous here) only ~1/3 of the market will support it by the end of 2017.

If Vulkan does become a real competitor to DX12 (I'm not even sure what that means, btw), I don't think it will be because of Android.
 
Vulkan Shader Resource Binding

In this blog post we will go into further detail on one of the most common state changes in scene rendering: binding shader resources such as uniform or storage buffers, images, and samplers.

[Image: vulkan_resbinding_layouts.png]


To avoid the performance pitfalls of traditional individual bindings, Vulkan organizes bindings into groups called DescriptorSets. Each group can itself provide multiple bindings, and there can be multiple such groups in parallel, each using a different set number. The number of available sets is hardware-dependent, but there is a required minimum.

By making proper use of the parallel DescriptorSet bindings and PipelineLayouts, software developers can now represent this in Vulkan (increasing the set number as we descend). In principle you can do this in previous APIs as well; however, Vulkan tells the driver up front that, in this example, the "view" bindings are common to all shaders at the same binding slot. A traditional API would have to inspect all the software bindings when the shaders are changed, with less a priori knowledge about which are being overwritten and which are important to keep.

https://developer.nvidia.com/vulkan-shader-resource-binding
 
Yes, it's actually quite different than Mantle despite similar naming - probably one of the API areas that was modified the most (the other being the render pass stuff).
 
Was that because it has to support a wide variety of hardware?
Mostly, yes - both in terms of efficiently supporting a broader range of hardware and in terms of exposing functionality on other architectures, such as push constants. There were other considerations as well (future expandability to bindless, DX design, etc.).
 
Mostly, yes - both in terms of efficiently supporting a broader range of hardware and in terms of exposing functionality on other architectures, such as push constants. There were other considerations as well (future expandability to bindless, DX design, etc.).
Have you worked with Vulkan yet? If so, I was wondering if you like it and if you think they're taking it in a good direction as far as future-proofing goes?
 
Have you worked with Vulkan yet? If so, I was wondering if you like it and if you think they're taking it in a good direction as far as future-proofing goes?
A little bit, but mostly just in the spec phase a while ago - I haven't gotten hands-on with it yet on real drivers.

Direction is reasonable overall. I like the binding model stuff in DX12 slightly more but it makes sense why they couldn't go quite as far in Vulkan and what's there is certainly just fine.
 
Meh!
As long as it's not gfxVirtualAlloc( GPU|GART, address, size, COMMIT|RESERVE|RESET ) + gfxVirtualFree( address, size, DECOMMIT|RELEASE ), we are not there yet!

Reserve for each resource, Commit/Decommit on need, know that page granularity is 64 KiB, and let the OS map hardware pages as it likes! (That's what virtual memory is for!)

And not allocate physical memory (D3D12Heap) and link it to virtual memory (UpdateTileMapping of a CreateReservedResource) when you could just let the OS handle that bit at page granularity...


 
Yeah, unfortunately when the GPU guys defined "tiled resources" they made some choices that are incompatible with how "real virtual memory" works, so at best the GPU 64 KiB stuff ends up as yet another layer on top of the real OS page tables. That said, I fully agree on the shared page tables, GPU-can-fault, etc. stuff, and reject the naysayers who think the world will explode if the GPU has to cover the latency of a page fault.
 
Presumably TLB overhead which is valid. But it goes beyond just the page size - the miss behavior and so on for TR is not compatible with regular OS virtual memory.
 
Direction is reasonable overall. I like the binding model stuff in DX12 slightly more but it makes sense why they couldn't go quite as far in Vulkan and what's there is certainly just fine.
Earlier you said that future expandability to bindless was a factor; do you consider DX12 resource binding Tier 3 as bindless? If not, will you like Vulkan more in the future when they go bindless?
 
Earlier you said that future expandability to bindless was a factor; do you consider DX12 resource binding Tier 3 as bindless? If not, will you like Vulkan more in the future when they go bindless?
Even Tier 2 in DX's binding model is "bindless". If you can index a sufficiently large number of resources (in this case, ~1 million) on the fly in the shader and support non-uniform indexing, it's bindless.
 
What do you think about getting rid of manual paging management and letting the GPU fetch pages from CPU memory upon a page fault, just like a normal page miss?

Resource streaming from disk could be handled by mmapping the files. Devs could always issue prefetch hints for max perf.
 
Maybe I missed something, but I still do not see an equivalent of the root buffer/signature in Vulkan... Is this a concern for certain mobile ULP architectures?

And now just an evil question: will Fermi receive real Vulkan drivers? :LOL:
 