DirectX 12: The future of it within the console gaming space (specifically the XB1)

Discussion in 'Console Technology' started by Shortbread, Mar 7, 2014.

  1. DmitryKo

    Regular

    Joined:
    Feb 26, 2002
    Messages:
    967
    Likes Received:
    1,223
    Location:
    55°38′33″ N, 37°28′37″ E
Although the D3D1X state tracker for Mesa/Gallium3D was introduced in spring 2010, development stalled due to lack of interest, and the source code was consequently removed from the main branch a year ago.
    I don't think the code was anywhere near even a pre-alpha stage, since it was someone's student project in the first place.

    http://cgit.freedesktop.org/mesa/mesa/log/src/gallium/state_trackers/d3d1x

Also, this wasn't actually a full-featured Direct3D/COM wrapper, just some preliminary code to allow Linux applications to use a limited subset of the Direct3D 11 rendering APIs. It wasn't meant to run existing Win32/Direct3D applications on Linux, or to let you compile your existing Win32/Direct3D source code to run on Linux.
     
    #341 DmitryKo, Apr 14, 2014
    Last edited by a moderator: Apr 14, 2014
  2. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
Yeah, that's what you usually do. Makes porting much easier. If your engine is structured this way, you can add support for a new 3D API in just a few months of work.
     
  3. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
Sounds to me like that's the wrapper, not a DX-to-GNM wrapper.
     
  4. zupallinere

    Regular Subscriber

    Joined:
    Sep 8, 2006
    Messages:
    768
    Likes Received:
    109
Yeah, it's abstracted, or shall we say the "DX11" version, with the GNM version being more like Mantle / low-level DX12.
     
  5. zupallinere

    Regular Subscriber

    Joined:
    Sep 8, 2006
    Messages:
    768
    Likes Received:
    109
Got to admit that this console generation is going to be the most interesting one for some time. Two near-simultaneous releases of major game consoles, both doing quite well sales-wise; former King O'the Hill Nintendo now looking for traction and changing the way it does business; two lower-level graphics APIs, one of them coming from Redmond; and last but not least, Steam Boxes waiting in the wings. Besides finding out the hardware specs of the next ... next-gen consoles, it's about as interesting as it gets Right Now !! :lol:
     
  6. Starx

    Regular

    Joined:
    Sep 29, 2013
    Messages:
    294
    Likes Received:
    148
  7. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
I don't see how better CPU utilization is going to fix resolution, though it might improve frame rate.
    Resolution should improve thanks to other fixes / better use of system memory and resources.

    Bad CPU utilization could explain why frame rate fluctuates more than on the PS4 in some games (obviously, when there is no fps cap).
     
    #347 liolio, Apr 16, 2014
    Last edited by a moderator: Apr 16, 2014
  8. Ike Turner

    Veteran

    Joined:
    Jul 30, 2005
    Messages:
    2,110
    Likes Received:
    2,304
  9. pMax

    Regular

    Joined:
    May 14, 2013
    Messages:
    327
    Likes Received:
    22
    Location:
    out of the games
    ...6 physical cores, or 6 VIRTUAL cores?
     
  10. dobwal

    Legend

    Joined:
    Oct 26, 2005
    Messages:
    5,955
    Likes Received:
    2,325
Layperson here.

    But is the hardware still virtualized?

    The XBO OS setup went from being able to run on any x86/x64 hardware to being limited to the XBO hardware.

    So, what's the point of hardware virtualization when the hardware is native?

    Would this point to MS moving towards an operating-system-level virtualization design?

    Doesn't OS-level virtualization carry little overhead compared to hardware virtualization, given that the hardware is native, with many instances of the same OS and a shared kernel?

    If the XBO OS design was initially able to run on any x86/x64 hardware, why was MS dealing with stability issues almost up to launch? They wouldn't have been dependent on the XBO hardware to start OS development. Shouldn't it have been a question of performance, and couldn't the stability issues point to wholesale design changes inside the small window between when Durango hardware became available and the XBO launch?
     
  11. pMax

    Regular

    Joined:
    May 14, 2013
    Messages:
    327
    Likes Received:
    22
    Location:
    out of the games
Think of using a VM as more or less the same thing as using LV1 on the PS3, only done (very, totally) differently.

    The result is that your game is just a big VM image with no installation whatsoever, and even if it is successfully attacked, the attacker is still 'in the cage': they would need to either escape the VM or pivot into the main OS, which is anyway another VM...

    It allows you to put strong controls on memory access, to limit interactions between the two different OSes, to manage resources... all for a certain price.
     
  12. zupallinere

    Regular Subscriber

    Joined:
    Sep 8, 2006
    Messages:
    768
    Likes Received:
    109
Is this guy saying that only one core is emitting draw calls on the XB1 right now (or that the GPU is accepting only one source)?? My ignorance on the subject is prodigious, so I am just asking here... I would think that the XB1 would already allow a fair amount of flexibility on this front. The performance differences between the PS4 and XB1 seem to scale roughly with the difference in hardware (both being obviously good enough for a "next gen" experience); I just can't imagine this huge yoke on XB1 performance being lifted and a "doubling" of performance following suit.

    Obviously, as the hardware becomes better exploited and the intricacies of the ESRAM become less intricate, there will be better performance coming out of the XB1, but it seems like he is having a bit of fun with his pronouncements on the DX12 front.
     
    #352 zupallinere, Apr 16, 2014
    Last edited by a moderator: Apr 16, 2014
  13. dobwal

    Legend

    Joined:
    Oct 26, 2005
    Messages:
    5,955
    Likes Received:
    2,325
Operating-system-level virtualization provides the same protection without virtualizing the hardware.

    I see the point of isolating the applications in their own VMs.

    But I see no point in forcing applications to navigate through layers of hardware abstraction when the app and OS are running on native hardware.

    It seems applicable to WinRT apps, but x86 Windows-based apps just seem to get weighed down by unnecessary overhead.
     
    #353 dobwal, Apr 16, 2014
    Last edited by a moderator: Apr 16, 2014
  14. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,237
    Likes Received:
    4,260
    Location:
    Guess...
Agreed. I can understand that perhaps the API load isn't being spread very well between the 6 cores (although I'm sure all 6 cores can still be well utilised by game code), but reducing that API bottleneck on a single thread shouldn't affect resolution as far as I can tell.
     
  15. DSoup

    DSoup Series Soup
    Legend Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    16,785
    Likes Received:
    12,697
    Location:
    London, UK
I think what he's suggesting is that the CPU overhead code within the DirectX API, which is called by the game code, is running on only one core, but that this will change to all eight cores with DirectX 12.

    However, this would bring additional considerations. If you are trying to write highly optimised code to work within the instruction and data caches of a particular core, you don't want other things (like DirectX) throwing in work and contaminating your caches.

    I'd virtually written him off as a kook, but the lack of credible devs correcting him is making me begin to wonder.

    EDIT: the easy balance for this is: if a core is calling DirectX, it can run DirectX CPU overhead code; if a core is not calling DirectX, it should be exempt from distributed work.
     
  16. zupallinere

    Regular Subscriber

    Joined:
    Sep 8, 2006
    Messages:
    768
    Likes Received:
    109
    http://gamingbolt.com/devs-react-to...ed-ps4-ice-programmer-be-suspicious-of-claims

I guess it all depends on what is meant by credible devs and correcting? I mean, an approximate doubling of performance?? How hamstrung are XB1 devs when it comes to accessing the hardware, if the performance of the system is going to be DOUBLED or the like???
     
  17. DSoup

    DSoup Series Soup
    Legend Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    16,785
    Likes Received:
    12,697
    Location:
    London, UK
    Thanks - I'd not read those. Re-calibrating tweets back to 'kook'. :cool:
     
  18. pMax

    Regular

    Joined:
    May 14, 2013
    Messages:
    327
    Likes Received:
    22
    Location:
    out of the games
No, they don't provide the same level of protection or isolation.

    A quick question for you: how do you plan to use syscall/sysenter to get into your game-OS kernel? Do you put your game-OS kernel in R0? If so, no isolation. Do you put a generic handler that grabs the call in R0 and then performs an inter-privilege call to R1... in x64 mode???
    The Windows kernel always assumes it is in R0; how could that coexist with another OS in isolated mode?
    etc...

    So, in short, the answer is no. Ring -1 is the only way.
     
  19. zupallinere

    Regular Subscriber

    Joined:
    Sep 8, 2006
    Messages:
    768
    Likes Received:
    109
    And now all is right with the world again :lol:
     
  20. dobwal

    Legend

    Joined:
    Oct 26, 2005
    Messages:
    5,955
    Likes Received:
    2,325
I don't pretend to be versed in virtualization, but in operating-system-level virtualization, system isolation and protection are solution-dependent.

    In solutions like OpenVZ, all the partitions share the same kernel. The virtualization layer resides inside the kernel. Each partition behaves and acts like its own system, with its own processes, file system, root access, users, IP addresses, applications, system libraries, and configuration files. The kernel provides each partition with its own set of isolated resources.

    You are probably right that OS-level virtualization provides weaker isolation and protection mechanisms. But we don't know exactly what MS has implemented or how they balanced security against performance. Obviously performance is important, and OS-level virtualization can provide near-native performance while still offering some level of isolation and protection.

    Given the hardware advantage of the PS4, how much more performance can MS afford to sacrifice to virtualization overhead?
     