Xbox One November SDK Leaked

Discussion in 'Console Technology' started by DieH@rd, Dec 31, 2014.

  1. dobwal

    Veteran

    Joined:
    Oct 26, 2005
    Messages:
    4,978
    Likes Received:
    996
    Vanilla GCN gives each CU its own 16 KB vector cache. What is shared between the CUs in a cluster is a 16 KB L1 scalar cache and a 32 KB L1 instruction cache.

    What you are probably seeing is not a difference in hardware but a difference in how the overviews of the hardware were composed.
     
    iroboto and Shifty Geezer like this.
  2. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,409
    Likes Received:
    10,776
    Location:
    Under my bridge
    A local OS wouldn't free resources. All OSes are running on the same finite hardware. It'd just move the cloud aspect of a game from the RAM and GPU set aside for the game to the RAM and GPU set aside for the System/cloudOS, and add headaches. You're far better off letting the game handle everything and shrink the OS footprint, giving those OS resources to the game to use on cloudy goodness.
     
  3. Themrphenix

    Newcomer

    Joined:
    Jun 1, 2013
    Messages:
    58
    Likes Received:
    6
    Location:
    Westerly Rhode Island
    Yes, the second GCP is being used by the OS and Snap functions. Microsoft is working towards opening up low-level priority instructions for games. It will not happen until the Xbox One's next OS is released.

    That's why the whispers are 2016 and limited situations.

    PS: sorry I'm just now replying; I was waiting on my source at Redmond to get back in touch with me.
     
    mosen and Cyan like this.
  4. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,678
    Likes Received:
    5,980
    Interesting. Themrphenix, if you don't mind, this type of thing might be better suited to the Xbox rumours thread rather than this one, though I understand why you posted it here. We can't verify your source unless you want to show Shifty or something. Awkward situation lol, thank goodness I'm not a mod! :)
     
  5. Themrphenix

    Newcomer

    Joined:
    Jun 1, 2013
    Messages:
    58
    Likes Received:
    6
    Location:
    Westerly Rhode Island
    If Shifty remembers, there's a lot I leaked before launch.

    GPU upclock Xbox One
    CPU upclock Xbox One
    I hinted at the flash in Xbox One
    I also stated the PS4 had the same RAM reserve, with a flex of 512 MB. (But the post was deleted because it broke NDA.)
     
  6. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,409
    Likes Received:
    10,776
    Location:
    Under my bridge
    How will this benefit games? Why would devs want to use it?
     
  7. mosen

    Regular

    Joined:
    Mar 30, 2013
    Messages:
    452
    Likes Received:
    152
    From Christophe Riccio article:

    http://www.g-truc.net/doc/Candidate features for OpenGL 5.pdf

    sebbbi post from DX12 thread:

    https://forum.beyond3d.com/threads/...ming-space-specifically-the-xb1.55487/page-21

    On XB1, the 12 CUs, two geometry primitive engines, and four render backend depth and color engines support two independent graphics contexts. So it may let devs use most/all of the XB1 GPU resources easily for graphics. As sebbbi and others explained, it's possible to use the ACEs for graphics too (in some special circumstances). But it may be easier or more efficient to use the second GCP for graphics, since that makes it possible to use the fixed-function hardware and synchronous compute instead of async compute.
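
    To make that concrete, here is a minimal sketch of what "two front ends" looks like to software, using the PC D3D12 API as it later shipped (the console XDK interfaces aren't public, so the queue names and the hardware mapping here are assumptions): a direct queue feeds a graphics command processor, while a compute queue maps onto the ACEs.

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& gfxQueue,
                      ComPtr<ID3D12CommandQueue>& computeQueue)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};

        // DIRECT queues carry graphics + compute and feed a graphics
        // command processor (GCP).
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

        // COMPUTE queues carry compute only; on GCN they are serviced
        // by the ACEs and can run alongside the graphics queue.
        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    }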
     
    #327 mosen, Jan 20, 2015
    Last edited: Jan 20, 2015
  8. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,409
    Likes Received:
    10,776
    Location:
    Under my bridge
    But surely compute fills the idle ALUs? I just can't see this generating much in the way of tangible gains, especially if dual GCPs are limited to consoles and only one allows dev access. Designing games to use compute makes them portable across all devices.

    I find it curious that the 3D Architecture thread on this subject has generated so little interest. It's not as though the GPU-interested elite of B3D are enthusiastically talking about the possibilities enabled by 2x GCPs!
     
  9. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    13,161
    Likes Received:
    3,546
    Didn't that dink from Stardock say DX12 would allow draw calls from different threads to be submitted in parallel? It'd be interesting to know if any PC parts already have two command processors, or if the next wave of them does.

    Edit:
    I don't really view this guy as an expert, but what he's saying fits with what mosen linked.
    http://www.littletinyfrogs.com/article/460524/DirectX_11_vs_DirectX_12_oversimplified

    What he's talking about sounds similar. He does not mention submitting commands to multiple command processors, but he is talking about multiple threads ("cores") submitting commands.
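
    For reference, a minimal sketch of that model (PC D3D12-style, as the API later shipped; the helper names are made up and error handling is omitted): worker threads each record their own command list, then one thread submits them all together.

    #include <d3d12.h>
    #include <thread>
    #include <vector>

    // Each worker records the draws for its slice of the scene, then
    // closes the list so it is ready for submission.
    void RecordScenePortion(ID3D12GraphicsCommandList* list, int slice)
    {
        // ... set pipeline state and issue Draw calls for `slice` ...
        list->Close();
    }

    void SubmitFrame(ID3D12CommandQueue* queue,
                     std::vector<ID3D12GraphicsCommandList*>& lists)
    {
        std::vector<std::thread> workers;
        for (size_t i = 0; i < lists.size(); ++i)
            workers.emplace_back(RecordScenePortion, lists[i], (int)i);
        for (auto& w : workers)
            w.join();

        // One submission carrying every thread's recorded work.
        queue->ExecuteCommandLists(
            (UINT)lists.size(),
            reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
    }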
     
    #329 Scott_Arm, Jan 20, 2015
    Last edited: Jan 20, 2015
    shredenvain and mosen like this.
  10. psorcerer

    Regular

    Joined:
    Aug 9, 2004
    Messages:
    605
    Likes Received:
    56
    Pfff, the solution is simple: use only one draw call. It's possible, just not through the current PC (or even console) APIs.
    I.e. just stop beating the CPU horse; it was already dead 5 years ago. Pity that the DX12 architects do not understand that (or do not want to?).
     
  11. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,678
    Likes Received:
    5,980
    I'm not understanding how using only one draw call makes sense, or how that's even possible.

    If I want to draw 50 squares on the screen at different locations, I can't do it using 1 draw call, or at least I don't know how.
     
  12. psorcerer

    Regular

    Joined:
    Aug 9, 2004
    Messages:
    605
    Likes Received:
    56
    One mesh with degenerate triangles. One mesh and a transparent texture. Etc.
    Or multiple meshes "created" by the GPU and submitted to itself through a circular command buffer. Etc.

    P.S. http://timothylottes.blogspot.com/2014/06/easy-understanding-of-natural-draw-rate.html
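
    A minimal sketch of the first option (the names here are illustrative): stitch N quads into one triangle strip by repeating two vertices at each seam, so the whole batch goes down in a single draw call.

    #include <array>
    #include <vector>

    struct Vertex { float x, y, u, v; };

    std::vector<Vertex> BuildStitchedStrip(
        const std::vector<std::array<Vertex, 4>>& quads)
    {
        std::vector<Vertex> strip;
        for (size_t i = 0; i < quads.size(); ++i) {
            const auto& q = quads[i];
            if (i > 0) {
                // Repeat the previous quad's last vertex and this quad's
                // first one. The zero-area (degenerate) triangles this
                // creates are rejected by the rasterizer, but the strip
                // stays unbroken.
                strip.push_back(strip.back());
                strip.push_back(q[0]);
            }
            strip.insert(strip.end(), q.begin(), q.end()); // 4 verts = 2 tris
        }
        return strip; // submit with one triangle-strip draw call
    }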
     
    #332 psorcerer, Jan 20, 2015
    Last edited: Jan 20, 2015
  13. liquidboy

    Regular Newcomer

    Joined:
    Jan 16, 2013
    Messages:
    416
    Likes Received:
    77
    Every core in the CPU can now talk to the GPU, filling all of its cores, and the GPU can self-feed its cores with work using the new "multi-draw indirect" APIs in the XDK.

    So yes, the GPU can reach near saturation ...

    And as mentioned many times before .. all of this is truly parallel/async.
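
    The XDK's API isn't public, so as a stand-in here is the OpenGL 4.3 flavour of multi-draw indirect, which shows the idea: the per-draw parameters live in a GPU buffer (which a compute shader can write), and the CPU issues a single call regardless of how many draws the buffer describes. (A GL loader is assumed.)

    #include <GL/glcorearb.h>

    // The command layout GL 4.3 defines for indirect indexed draws.
    struct DrawElementsIndirectCommand {
        GLuint count;         // index count for this draw
        GLuint instanceCount; // 0 lets a GPU culling pass skip the draw
        GLuint firstIndex;    // offset into the index buffer
        GLuint baseVertex;    // value added to each index
        GLuint baseInstance;  // offset into per-instance attributes
    };

    void SubmitIndirect(GLuint indirectBuffer, GLsizei drawCount)
    {
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
        // One CPU-side call; all per-draw parameters are read GPU-side.
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    nullptr /* buffer offset 0 */,
                                    drawCount, 0 /* tightly packed */);
    }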
     
  14. liquidboy

    Regular Newcomer

    Joined:
    Jan 16, 2013
    Messages:
    416
    Likes Received:
    77
    And in case anyone wanted to know how we get multiple CPU cores doing rendering, as opposed to the current state of DX where only one core is rendering..

    As I've mentioned numerous times before, "Deferred Contexts" are created by the DeviceContext, and they can be an explicit DeferredContext (the traditional graphics pipeline with draw/dispatch) or the new ComputeContext (a compute-dispatch-only pipeline).
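
    In stock PC D3D11 terms (the XDK's ComputeContext isn't public, so this shows only the standard path; error handling omitted), that looks like:

    #include <d3d11.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Worker thread: record into a deferred context and hand back a
    // command list.
    ComPtr<ID3D11CommandList> RecordOnWorker(ID3D11Device* device)
    {
        ComPtr<ID3D11DeviceContext> deferred;
        device->CreateDeferredContext(0, &deferred);

        // ... set state and issue Draw/Dispatch calls on `deferred` ...

        ComPtr<ID3D11CommandList> cmdList;
        deferred->FinishCommandList(FALSE, &cmdList);
        return cmdList;
    }

    // Main (render) thread: replay what the workers recorded.
    void Replay(ID3D11DeviceContext* immediate, ID3D11CommandList* cmdList)
    {
        immediate->ExecuteCommandList(cmdList, FALSE);
    }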
     
  15. liquidboy

    Regular Newcomer

    Joined:
    Jan 16, 2013
    Messages:
    416
    Likes Received:
    77
    And what I mean by true async..

    1. APIs are now async with a "fast path"
    2. We get separate DeviceContexts for graphics (ImmediateContext/DeferredContext) and compute (ComputeContext), and multiple "CommandProcessors" in HW
    3. There is now an async "Presentation Queue", freeing the CPU/GPU from this task
     
  16. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,678
    Likes Received:
    5,980
    hmm.. from my limited knowledge of it: I wrote a small game in Lua for PSN. I built controllers that would loop through arrays of entities and call their render functions (I know this is poor on the memory management and the draw call side of things). This method would be faster; basically you've taken data-oriented design and gone a step further, from the little I can interpret. I mean, lol, without building your own engine I don't think you can deploy this method.

    Wouldn't this method be hard for prototyping features?
     
  17. psorcerer

    Regular

    Joined:
    Aug 9, 2004
    Messages:
    605
    Likes Received:
    56
    It's performance-oriented stuff, not good for anything besides the final game. :)
    You need to know exactly what your data is.
     
  18. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,098
    Likes Received:
    2,814
    Location:
    Well within 3d
    I'm speculating here, but I would be curious as to why it wouldn't be harder to use the second GCP in rendering towards a unified output.
    The ACEs were designed and marketed from the outset as being better-virtualized and capable of coordinating amongst themselves. Due to their simplified contexts, prioritization, context switching, and recently some form of preemption were rolled out for them first.

    The graphics front end has not kept up, and neither have significant portions of the fixed-function pipeline.
    It will not be until Carrizo that preemption finally rears its head for the graphics context, and the paranoia over the GCP being DoSed has been a point of contention in Kaveri's kernel development discussion for Linux. If a platform is paranoid about a game DoS-ing the GPU, or if it needs some level of responsiveness, one way to get in edgewise is to have a secondary front end that can sneak something in.
    (fun fact: It's not just graphics. The SDK cautions against having long-running compute kernels active when the system tries to suspend. If it takes too long to respond, it's a reboot. Similar GPU driver freakouts can occur on the PC.)
    I may be pessimistic, given AMD's slower progress on this front, but it may be harder to get proper results out of a front end that has never needed the means to coordinate with an equivalent front end before.

    The delay until the new OS rollout might be another indicator of the complexity involved. The ability to properly virtualize a GPU without serious performance concerns is recent, and both the VM and the hardware need to be up for it. If the older OS system model predates these changes, it may have leveraged a secondary GCP as a shortcut to present a "second" GPU for the sake of a simpler target and improved performance.

    The quoted passage on cooling solutions is "meh" to me. Unless it's a ROG or other boutique solution, why would a cooler be specced to dissipate a power level greater than one that would likely blow out the VRMs of a PCIe-compliant device?
    No modern GPU of significant power consumption is physically capable of full utilization without power management clamping down clocks or voltages almost immediately.

    Maybe. However, it takes quite a bit to saturate the command processor, particularly if other bottlenecks come into play. If one of the motivations for the two command processors in the consoles was better QoS and system accessibility, Carrizo's introduction of graphics context switching might mean upcoming APUs have less need for a duplicated GCP. The other reason may be that upcoming APUs will probably bottleneck well before the gains from a second front end could be realized.
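
    (On the long-running kernel caution above: a minimal sketch of the usual mitigation, in stock PC D3D11 with a made-up chunk size: split one big dispatch into several smaller ones so the front end gets submission boundaries where a suspend, or other work, can get in edgewise.)

    #include <d3d11.h>

    void DispatchChunked(ID3D11DeviceContext* ctx, UINT totalGroups)
    {
        const UINT kChunk = 1024; // threadgroups per submission (tune this)
        for (UINT done = 0; done < totalGroups; done += kChunk) {
            UINT n = (totalGroups - done < kChunk) ? totalGroups - done
                                                   : kChunk;
            // (a real version would also pass `done` to the shader,
            //  e.g. via a constant buffer update here)
            ctx->Dispatch(n, 1, 1);
            ctx->Flush(); // a boundary the scheduler can act on
        }
    }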
     
  19. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,678
    Likes Received:
    5,980
    Halo: The Master Chief Collection is undergoing a beta test for its next multiplayer patch, which cites a large-scale change.
    I wonder if this is what you were referring to, @Scott_Arm, about the 2015 multiplayer platform change indicated in the SDK documents. We should keep an eye on this in case everything suddenly changes. I hope this type of thing continues; every major game with a massive community on PC releases beta versions of its next patch before rolling it out to the larger population. I hope this practice continues to be adopted.

    https://www.halowaypoint.com/en-us/community/blog-posts/1-23-15-mcc-content-update-beta-test-faq
     
    #339 iroboto, Jan 22, 2015
    Last edited: Jan 22, 2015
  20. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,775
    Likes Received:
    2,200
    Can I ask, does the documentation say anything about the capabilities of SHAPE?
     