Xbox One (Durango) Technical hardware investigation

Even if this is true, it would be mostly irrelevant for purposes of game performance.

Well, it depends whether or not that performance is later made available. If there is a reservation of, say, 15% of the performance for Kinect, and Microsoft chooses to allow all of it to be used at a later date, then there would effectively be another 15% of performance available.

Well, it does insofar as Kinect 2.0 would now be 'free', as opposed to the x% of system power we were all mentally reserving for it.

There is also this point.
 
I say not a chance. Maybe the dev kit versions might, but a final version would make no sense cost-wise.

It would make sense if there's too much data for the connection bus; it's smartest to send processed/compressed data over the cable. A simple DSP doesn't cost much, and it doesn't need a GPU inside.
 
Doesn't the block diagram state Kinect is controlled by one of the so called special sauce units?

The audio codec/MEC one. It's not shown on the first page's diagram, but in a later posting of it.
 

MEC = Multi Echo Cancellation chip for Kinect.
 
Posted this on neogaf but no one really had an answer. This might be a really stupid question, but if MS needed 8GB of RAM so badly yet was concerned about bandwidth, why not go with split memory pools that both the CPU and GPU can access? Say 6GB of DDR3 and 2GB of GDDR5? GDDR5 has greater bandwidth than the ESRAM they are supposedly using anyway, plus they would have much more to play with.
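As a back-of-envelope check on the split-pool idea, peak bandwidth is just bus width times transfer rate. The figures below are illustrative assumptions (a 256-bit DDR3-2133 main pool and a hypothetical 128-bit GDDR5 pool at 5.5 Gbps), not confirmed Durango specs:

```python
# Back-of-envelope peak-bandwidth arithmetic for a split memory pool.
# All figures here are illustrative assumptions, not confirmed specs.

def peak_bandwidth_gbs(bus_width_bits, transfer_rate_gtps):
    """Peak bandwidth in GB/s = (bus width in bytes) * (transfers/s in G)."""
    return (bus_width_bits / 8) * transfer_rate_gtps

ddr3 = peak_bandwidth_gbs(256, 2.133)   # 256-bit DDR3-2133 -> ~68 GB/s
gddr5 = peak_bandwidth_gbs(128, 5.5)    # hypothetical 128-bit GDDR5 @ 5.5 Gbps
print(f"DDR3 pool:  {ddr3:.0f} GB/s")
print(f"GDDR5 pool: {gddr5:.0f} GB/s")
```

Whether the aggregate would beat a single pool plus ESRAM then depends on how well working sets split across the two pools.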
 
 
There are papers and presentations on how developers create optimized programs for scientific computation, matrix multiplication, and new graphics techniques using the register files, caches, and local memory of GPUs and CPUs.

The total SRAM on AMD's Tahiti GPU amounts to several megabytes, and massive performance differences can be found between naive implementations and ones that use that on-die storage effectively.
I would be disappointed if an order of magnitude extra storage could only be used for a few background tasks.
 
Four DMEs sounds more like it relates to the 8 cores of the CPU, as 4 DMEs for 10/12 CUs is a bit asymmetric. Bulldozer shares an L2 per two "cores" (4*2 = 8 cores); the memory controller can be updated to 4 channels (4*1 = 4 channels); and so on ...
If we understand the ESRAM as a manual L3/L4 cache from the PoV of the memory controller, then the DMEs could simply provide L2-coherency instructions on the CPU side: basically an ESRAM<->L2 HyperTransport link with some DMA logic.
This could make the CPU (or one half of it, i.e. 4 cores) the compositing chip in the pipeline, doing the post-processing via C++ AMP JITted to AVX.
It could also make the CPU a kind of OoO compute [shader] unit, handling the highly branchy or complex stuff: tile culling for Forward+, or much better linked-list support for render targets (OIT, manual adaptive AA, etc.). CPUs are much better at processing irregular data structures, especially compressed ones, for example zipped ones. You wouldn't need a zlib chip; you just let the CPU decompress and write back (standard L2 protocol, though with a DRAM write-through bypass in the modified memory controller) via the DMEs directly into the ESRAM for the input assembler to consume.
Well, it could really be anything. :)
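The "CPU as decompressor" step above can be sketched with ordinary zlib: compress a buffer once (e.g. at authoring time), then inflate it on the CPU for the GPU to consume. Purely illustrative; no DME/ESRAM API is implied, and the buffer is a stand-in for real geometry data:

```python
# Minimal sketch of CPU-side zlib decompression feeding a GPU consumer.
# The vertex buffer here is synthetic; no real console API is implied.
import zlib

vertex_data = bytes(range(256)) * 64           # stand-in for geometry data
compressed = zlib.compress(vertex_data, level=9)
restored = zlib.decompress(compressed)          # the "CPU decompress" step

assert restored == vertex_data                  # lossless round-trip
print(f"{len(vertex_data)} bytes -> {len(compressed)} bytes compressed")
```

The open question in the post above is only where the inflated output lands (ESRAM via the DMEs vs. main memory), not whether the CPU can do the inflate itself.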
 
You wouldn't need a zlib-chip, you just let the CPU decompress and write-back (standard L2-protocol, though with DRAM write-through by-pass in the modified memory controller) via DMEs directly into the ESRAM for the input assembler to consume.
Well, it could really be anything. :)
The more advanced and complicated the possibility being considered, the less likely it is to actually be the case. Same with multi-ported on-chip memories and so on: multiporting hasn't been used on such a scale in any consumer device ever, and it'd add a lot of extra wiring on the chip, as well as additional arbitration logic, etc.
 
Does anyone know anything about the cryptography engines that were at the bottom of the vgleaks piece?
Would they allow highly compressed data, so the whole game could be stored in main memory and moved in compressed form through the data move engines, getting around some bandwidth issues?
Or are they just security hardware?
 