DirectX 12: Its future in the console gaming space (specifically the XB1)

Although the D3D1X state tracker for Mesa/Gallium3D was introduced in spring 2010, development efforts stalled due to a lack of interest, and the source code was consequently removed from the main branch a year ago.
I don't really think the code was anywhere near even a pre-alpha stage, since it was someone's student project in the first place.

http://cgit.freedesktop.org/mesa/mesa/log/src/gallium/state_trackers/d3d1x

Also, this wasn't actually a full-featured Direct3D/COM wrapper, just some preliminary code to let Linux applications use a limited subset of the Direct3D 11 rendering API. It wasn't meant to run existing Win32/Direct3D applications on Linux, or to let you compile your existing Win32/Direct3D source code to run on Linux.
 
...for what it's worth, in my experience game engines have a specific folder with the 'low level engine' files for each target (i.e. say /dx11, /ios, /xbone, /ps4, whatever) and all the engine commands go through there (plus a ton of #ifdefs here and there for the feature set).
Yeah, that's what you usually do. It makes porting much easier. If your engine is structured in this way, you can add support for a new 3D API in just a few months of work; something like the sketch below.
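A rough, hypothetical illustration of that layout (the interface, backend class names and PLATFORM_* macros are all made up for illustration, not from any real engine): the rest of the engine only talks to one abstract renderer, and the build selects which platform folder supplies the implementation.

Code:
// Engine-facing interface; every per-platform folder (/dx11, /xbone, /ps4, /ios, ...)
// provides exactly one implementation of it.
class IRenderBackend {
public:
    virtual ~IRenderBackend() = default;
    virtual void BeginFrame() = 0;
    virtual void DrawMesh(const struct Mesh& mesh) = 0;
    virtual void EndFrame() = 0;
};

// The #ifdef soup lives in one place instead of being scattered through gameplay code.
#if defined(PLATFORM_DX11)
    #include "dx11/Dx11Backend.h"
    using RenderBackend = Dx11Backend;
#elif defined(PLATFORM_XBONE)
    #include "xbone/XboneBackend.h"
    using RenderBackend = XboneBackend;
#elif defined(PLATFORM_PS4)
    #include "ps4/Ps4Backend.h"
    using RenderBackend = Ps4Backend;
#elif defined(PLATFORM_IOS)
    #include "ios/IosBackend.h"
    using RenderBackend = IosBackend;
#endif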
 
Got to admit that this console generation is going to be the most interesting one for some time: two near-simultaneous releases of major game consoles both doing quite well sales-wise, former King O'the Hill Nintendo now looking for traction and changing the way it does business, two lower-level graphics APIs with one of them coming from Redmond, and last but not least Steam Boxes waiting in the wings. Besides finding out the hardware specs of the next... next-gen consoles, it's about as interesting as it gets Right Now!! :LOL:
 
...6 physical cores, or 6 VIRTUAL cores?

lay person here.

But is the hardware still virtualized?

The XBO OS setup went from being able to run on any x86/x64 hardware to being limited to the XBO hardware.

So, what's the point of hardware virtualization when the hardware is native?

Would this point to MS moving towards an operating-system-level virtualization design?

Doesn't OS-level virtualization carry little overhead compared to hardware virtualization, since the hardware is native and the many instances of the same OS share a kernel?

If the XBO OS design was initially able to run on any x86/x64 hardware, why was MS dealing with stability issues almost up to launch? They wouldn't have been dependent on the XBO hardware to start OS development. Shouldn't it have been purely a question of performance, and couldn't the stability issues point to wholesale design changes inside the small window between Durango hardware becoming available and the XBO launch?
 
So, what's the point of hardware virtualization when the hardware is native?

Think of using a VM as more or less the same thing as using LV1 on the PS3 - only done (very, totally) differently.

The result is that your game is just a big VM image and has no installation whatsoever; even if it's successfully attacked, the attacker is still 'in the cage' and needs to either escape the VM or sweep into the main OS, which is in any case another VM...

It allows you to put strong controls on memory access, to limit interactions between the two different OSes, to manage resources... all for a certain price.
 

Is this guy saying that only 1 core is emitting draw calls on the XB1 right now (or that the GPU is accepting only one source)?? My ignorance on the subject is prodigious so I am just asking here... I would think that the XB1 would already have allowed a fair amount of flexibility on the subject. The performance differences between the PS4 and XB1 seem to scale roughly with the difference in hardware (both being obviously good enough for a "next gen" experience). I just can't imagine this huge yoke on XB1 performance being lifted and a "doubling" of performance following suit.

Obviously, as the hardware becomes better exploited and the intricacies of the ESRAM become less intricate, there will be better performance coming out of the XB1, but it seems like he is having a bit of fun with his pronouncements on the DX12 front.
 
Think of using a VM as more or less the same thing as using LV1 on the PS3 - only done (very, totally) differently.

The result is that your game is just a big VM image and has no installation whatsoever; even if it's successfully attacked, the attacker is still 'in the cage' and needs to either escape the VM or sweep into the main OS, which is in any case another VM...

It allows you to put strong controls on memory access, to limit interactions between the two different OSes, to manage resources... all for a certain price.

Operating-system-level virtualization provides the same protection without virtualizing the hardware.

I see the point of isolating the applications in their own VMs.

But I see no point in forcing applications to navigate through layers of hardware abstraction when the app and the OS are running on native hardware.

It seems applicable to WinRT apps, but x86 Windows-based apps just seem to get weighed down by unnecessary overhead.
 
I don't see how better CPU utilization is going to fix resolution

Agreed. I can understand that perhaps the API load isn't being very well spread between the 6 cores (although I'm sure all 6 cores can still be well utilised by game code), but reducing that API bottleneck on a single thread shouldn't affect resolution as far as I can tell.
 
Agreed. I can understand that perhaps the API load isn't being very well spread between the 6 cores (although I'm sure all 6 cores can still be well utilised by game code), but reducing that API bottleneck on a single thread shouldn't affect resolution as far as I can tell.
I think what he's suggesting is that the CPU overhead code within the DirectX API, which is called by the game code, is running on only one core, but that this will change to all eight cores with DirectX 12.

However, this would bring additional considerations. If you are trying to write highly optimised code to work within the specific instruction and data caches of a particular core, you don't want other things (like DirectX) throwing in work and contaminating your cache.

I'd virtually written him off as a kook, but the lack of credible devs correcting him is making me begin to wonder.

EDIT: the easy balance for this is: if a core is calling DirectX, it can run DirectX CPU overhead code; if a core is not calling DirectX, it should be exempt from distributed work.
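For context on how "the API running on all cores" usually cashes out: in D3D12 each worker thread records its own command list and only the final submission is serialized, so the runtime/driver CPU work lands on whichever core the game assigned it to rather than behind a single immediate context. A minimal sketch of that pattern, assuming a working device and command queue and omitting synchronization, pipeline state and the actual draw calls:

Code:
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue,
                          unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);

    // One allocator + command list per worker thread, so recording needs no locking.
    for (unsigned i = 0; i < workerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Each worker records its share of the frame's draw calls in parallel.
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < workerCount; ++i) {
        workers.emplace_back([&, i] {
            // ... SetGraphicsRootSignature / SetPipelineState / DrawIndexedInstanced here ...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // Only this final submission is funneled through a single thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}

This also lines up with the cache concern above: the API's CPU-side work only runs on a thread that is actively recording, so a core you keep free of DirectX calls stays free of its overhead.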
 
I think what he's suggesting is that the CPU overhead code within the DirectX API, which is called by the game code, is running on only one core, but that this will change to all eight cores with DirectX 12.

However, this would bring additional considerations. If you are trying to write highly optimised code to work within the specific instruction and data caches of a particular core, you don't want other things (like DirectX) throwing in work and contaminating your cache.

I'd virtually written him off as a kook, but the lack of credible devs correcting him is making me begin to wonder.

EDIT: the easy balance for this is: if a core is calling DirectX, it can run DirectX CPU overhead code; if a core is not calling DirectX, it should be exempt from distributed work.

http://gamingbolt.com/devs-react-to...ed-ps4-ice-programmer-be-suspicious-of-claims

I guess it all depends on what is meant by credible devs and correcting? I mean, an approximate doubling of performance?? How hamstrung are XB1 devs when it comes to accessing the hardware if the performance of the system is going to be DOUBLED or the like???
 
Operating-system-level virtualization provides the same protection without virtualizing the hardware.
But I see no point [...]

No, they don't provide the same level of protection or isolation.

A quick question for you: how do you plan to use syscall/sysenter to get into your game-OS kernel? Do you put your game-OS kernel in R0? If so, there's no isolation. Do you put a generic handler that grabs the call in R0 and then performs an inter-privilege call to R1... in x64 mode???
The Windows kernel always expects to be in R0 - how could that coexist with another OS in isolated mode?
etc...

So, in short, the answer is no. Ring -1 is the only way.
 
No, they don't provide the same level of protection or isolation.

A quick question for you: how do you plan to use syscall/sysenter to get into your game-OS kernel? Do you put your game-OS kernel in R0? If so, there's no isolation. Do you put a generic handler that grabs the call in R0 and then performs an inter-privilege call to R1... in x64 mode???
The Windows kernel always expects to be in R0 - how could that coexist with another OS in isolated mode?
etc...

So, in short, the answer is no. Ring -1 is the only way.

I don't pretend to be versed in virtualization, but with operating-system-level virtualization, isolation and protection are solution-dependent.

In solutions like OpenVZ, all the partitions share the same kernel, and the virtualization layer resides inside the kernel. Each partition behaves and acts like its own system, with its own processes, file system, root access, users, IP addresses, applications, system libraries and configuration files. The kernel provides each partition with its own set of isolated resources.

You are probably right that OS-level virtualization provides weaker isolation and protection mechanisms. But we don't know exactly what MS has implemented, or how they have balanced security against performance. Obviously performance is important, and OS-level virtualization can provide near-native performance while still offering some level of isolation and protection.

Given the hardware advantage of the PS4, how much more performance can MS afford to sacrifice to virtualization overhead?
 