Xbox One (Durango) Technical hardware investigation

Discussion in 'Console Technology' started by Love_In_Rio, Jan 21, 2013.

Thread Status:
Not open for further replies.
  1. Squilliam

    Squilliam Beyond3d isn't defined yet
    Veteran

    Joined:
    Jan 11, 2008
    Messages:
    3,495
    Likes Received:
    113
    Location:
    New Zealand
Well, it depends on whether or not that performance is later made available. If there is a reservation of, say, 15% of the performance for Kinect and Microsoft chooses to allow all of it to be used at a later date, then there would effectively be another 15% of performance available.

There is also this point.
     
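The reservation math above can be sketched in a few lines. A toy illustration, assuming the rumored (unconfirmed) 15% figure and normalizing total GPU performance to 100:

```python
# Toy illustration of the rumored Kinect reservation (the 15% is a rumor, not a spec).
total = 100.0                  # normalize total GPU performance to 100%
kinect_reserved = 15.0         # rumored reservation for Kinect
available_at_launch = total - kinect_reserved
print(available_at_launch)     # 85.0

# If Microsoft later frees the reservation, games regain those 15 points,
# which is a larger relative gain versus the launch budget:
gain = kinect_reserved / available_at_launch
print(f"{gain:.1%}")           # 17.6%
```

Note the subtlety: freeing 15% of the total is a ~17.6% uplift relative to what games had at launch.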
  2. Xenio

    Banned

    Joined:
    Jan 18, 2013
    Messages:
    447
    Likes Received:
    0
Nope. Kinect 2 is rumored to have its own DSP.
     
  3. inefficient

    Veteran

    Joined:
    May 5, 2004
    Messages:
    2,121
    Likes Received:
    53
    Location:
    Tokyo
I say not a chance. Maybe the dev kit versions might, but it would make no sense cost-wise in the final version.
     
  4. Xenio

    Banned

    Joined:
    Jan 18, 2013
    Messages:
    447
    Likes Received:
    0
It makes sense if there's too much data for the connection bus; it's smarter to send processed/compressed data over the cable. A simple DSP doesn't cost that much, and it doesn't need a GPU inside.
     
  5. DJ12

    Veteran

    Joined:
    Oct 20, 2006
    Messages:
    3,090
    Likes Received:
    178
Doesn't the block diagram state Kinect is controlled by one of the so-called special sauce units?

The audio codec/MEC one. It's not shown on the first page, but it is in a later posting of the diagram.
     
  6. Proelite

    Regular

    Joined:
    Jul 3, 2006
    Messages:
    804
    Likes Received:
    93
    Location:
    Redmond
    Multi Echo Cancellation chip for Kinect.
     
  7. DJ12

    Veteran

    Joined:
    Oct 20, 2006
    Messages:
    3,090
    Likes Received:
    178
Yes, that's it. I forgot the acronym.
     
  8. Hardknock

    Veteran

    Joined:
    Jul 11, 2005
    Messages:
    2,203
    Likes Received:
    53
Posted this on NeoGAF but no one really had an answer. This might be a really stupid question, but if MS needed 8 GB of RAM so badly yet was concerned about bandwidth, why not go with split memory that both the CPU and GPU can access? Say, 6 GB of DDR3 and 2 GB of GDDR5? GDDR5 has greater bandwidth than the ESRAM they are supposedly using anyway, plus they would have much more to play with.
     
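Back-of-envelope peak-bandwidth arithmetic for the split-pool idea above. The bus widths and data rates below are illustrative assumptions, not leaked specs:

```python
# Rough peak-bandwidth arithmetic for hypothetical memory pools.
def peak_bw_gbs(bus_bits, effective_mtps):
    """Peak bandwidth in GB/s = bus width in bytes * effective mega-transfers/s."""
    return (bus_bits / 8) * effective_mtps * 1e6 / 1e9

ddr3_bw  = peak_bw_gbs(256, 2133)   # e.g. 256-bit DDR3-2133
gddr5_bw = peak_bw_gbs(128, 5500)   # e.g. 128-bit GDDR5 at 5.5 Gbps/pin
print(ddr3_bw, gddr5_bw)            # 68.256 88.0
```

Even a narrow 128-bit GDDR5 pool can out-run a wide DDR3 bus, which is the appeal of the split-pool idea; the cost is two memory controllers and an awkward partition of the address space.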
  9. Brad Grenz

    Brad Grenz Philosopher & Poet
    Veteran

    Joined:
    Mar 3, 2005
    Messages:
    2,531
    Likes Received:
    2
    Location:
    Oregon
    Because they didn't want to be stuck with a 384 or 512 bit memory interface on their APU.
     
  10. XpiderMX

    Veteran

    Joined:
    Mar 14, 2012
    Messages:
    1,768
    Likes Received:
    0
Then all the Chinese guy's rumors were false.
     
  11. Brad Grenz

    Brad Grenz Philosopher & Poet
    Veteran

    Joined:
    Mar 3, 2005
    Messages:
    2,531
    Likes Received:
    2
    Location:
    Oregon
  12. (((interference)))

    Veteran

    Joined:
    Sep 10, 2009
    Messages:
    2,498
    Likes Received:
    70
    Yeah, and N2O seems to have conveniently slunk off...
     
  13. Laa-Yosh

    Laa-Yosh I can has custom title?
    Legend Subscriber

    Joined:
    Feb 12, 2002
    Messages:
    9,568
    Likes Received:
    1,452
    Location:
    Budapest, Hungary
    The mods should make registration a LOT harder until at least E3 or we'll drown.
     
  14. DieH@rd

    Legend Veteran

    Joined:
    Sep 20, 2006
    Messages:
    6,101
    Likes Received:
    2,024
    He still believes in his AMD China guy #1. :)
     
  15. arijoytunir

    Regular

    Joined:
    Nov 13, 2012
    Messages:
    347
    Likes Received:
    12
[IMG]
     
    #475 arijoytunir, Jan 25, 2013
    Last edited by a moderator: Jan 25, 2013
  16. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
There are papers and presentations on how developers create optimized programs for scientific computation, matrix multiplication, and new graphics techniques using the register files, caches, and local memory of GPUs and CPUs.

    The full contents of the SRAM of AMD's Tahiti GPU is several megabytes, and massive performance differences can be found between naive implementations and ones that use that on-die storage effectively.
    I would be disappointed if an order of magnitude extra storage could only be used for a few background tasks.
     
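The naive-vs-optimized gap 3dilettante describes can be illustrated with classic cache blocking for matrix multiply. A minimal sketch, assuming an arbitrary tile size; on a GPU the "tile" would live in local/shared memory rather than a CPU cache:

```python
# Naive vs. tiled (cache-blocked) matrix multiply: identical results,
# but the tiled version reuses each loaded block many times before
# evicting it, which is where on-die SRAM/cache pays off.
def matmul_naive(A, B, n):
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

def matmul_tiled(A, B, n, T=4):
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, T):
        for kk in range(0, n, T):
            for jj in range(0, n, T):
                # Work on T x T blocks small enough to stay in fast on-die storage.
                for i in range(ii, min(ii + T, n)):
                    for k in range(kk, min(kk + T, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + T, n)):
                            C[i][j] += a * B[k][j]
    return C
```

Both functions do the same n^3 multiply-adds; the tiled one just orders them so operands are reused while resident, which is exactly the kind of restructuring a few dozen megabytes of ESRAM would reward.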
  17. Ethatron

    Regular Subscriber

    Joined:
    Jan 24, 2010
    Messages:
    858
    Likes Received:
    260
Four DMEs sound more like they relate to the 8 cores of the CPU, as 4 DMEs for 10|12 CUs is a bit asymmetric. Bulldozer shares L2 per 2 "cores" (4*2 = 8 cores); the memory controller can be updated to 4 channels (4*1 = 4 channels); and so on ...
If we understand the ESRAM as a manual L3/L4 cache from the PoV of the memory controller, then the DMEs could simply provide L2-coherency instructions on the CPU side, basically an ESRAM<->L2 HyperTransport link with some DMA logic.
This could make the CPU (or one half of it = 4 cores) the compositing chip in the pipeline, doing the post-processing via C++ AMP JITted to AVX.
It could also make the CPU a kind of OoO compute [shader] unit, doing the highly branchy or complex stuff: could be tile culling from Forward+; could offer much better linked-list support for render targets (OIT, manual adaptive AA, etc.). CPUs are much better at processing irregular data structures, especially compressed ones, for example zipped ones. You wouldn't need a zlib chip; you just let the CPU decompress and write back (standard L2 protocol, though with a DRAM write-through bypass in the modified memory controller) via the DMEs directly into the ESRAM for the input assembler to consume.
Well, it could really be anything. :)
     
    #477 Ethatron, Jan 25, 2013
    Last edited by a moderator: Jan 25, 2013
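Ethatron's decompress-on-CPU idea, in the roughest possible sketch. Only the zlib step is real; the `dma_to_esram` hand-off is entirely hypothetical, standing in for whatever the data-move engines would actually expose:

```python
import zlib

def stage_asset(compressed: bytes, dma_to_esram) -> int:
    """CPU inflates a zlib-packed asset, then hands the plain bytes to a
    (hypothetical) data-move engine for placement in ESRAM, instead of
    relying on a dedicated decompression block."""
    plain = zlib.decompress(compressed)   # branchy, irregular work CPUs handle well
    dma_to_esram(plain)                   # stand-in for the DME write
    return len(plain)

# Usage: round-trip some vertex-like data through the staging path.
payload = bytes(range(256)) * 16
packed = zlib.compress(payload)
staged = []
n = stage_asset(packed, staged.append)
print(n, staged[0] == payload)  # 4096 True
```

The point of the sketch is the division of labor: the CPU does the branch-heavy inflate, and the bulk copy into fast memory is delegated to dedicated move hardware.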
  18. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,172
    Location:
    La-la land
    The more advanced and complicated the possibility being considered, the less likely it is to actually be the case. Same with multi-ported on-chip memories and so on. Multiporting hasn't been used on such a scale in any consumer device ever, it'd add a lot of extra wiring on the chip, as well as additional arbitration logic etc.
     
  19. nirwanda

    Newcomer

    Joined:
    Jan 25, 2013
    Messages:
    1
    Likes Received:
    0
Does anyone know anything about the cryptography engines that were at the bottom of the vgleaks piece?
Would they allow highly compressed data, so the whole game could be stored in main memory and moved in compressed form to the data move chips for distribution, getting around some bandwidth issues?
Or is it just a security chip?
     
  20. zed

    zed
    Veteran

    Joined:
    Dec 16, 2005
    Messages:
    4,429
    Likes Received:
    623
PS3 = 256 MB system mem & 256 MB GPU mem.
360 = 512 MB shared between CPU & GPU; most likely the ratio in games is similar to the PS3's.

Hence 256 MB of memory last generation (minus some for the OS etc.) x 8 = 2 GB.
     
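zed's scaling estimate as plain arithmetic. The last-gen figures are as quoted in the post; the x8 factor is the post's own generational assumption:

```python
# Last-gen memory pools, as quoted in the post.
ps3_mb = 256 + 256           # 256 MB system + 256 MB GPU
x360_mb = 512                # unified pool, similar split in practice
per_game_baseline_mb = 256   # the post's working figure per pool
scale = 8                    # generational multiplier assumed in the post

next_gen_mb = per_game_baseline_mb * scale
print(next_gen_mb / 1024)    # 2.0 (GB), before any OS reservation
```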