Why not put those DSPs inside Kinect itself?
I guess if MS is guaranteeing Kinect with every system, it's much easier (cheaper) to build the "power" into the console than to increase the complexity and cost of an external peripheral.
Why? Anything less than 4 GB and the machine is DOA.
With a 32 MB pool and 64B lines, that would require 512K tag entries.
Depending on the arrangement, it's easy to require over a MB in cache tags alone.
That would be atypical for a non-server chip (a high-end one at that), but this is potentially an unusual situation.
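For a rough sense of the numbers, here is a back-of-the-envelope sketch. The 40-bit physical address width, 16-way associativity, and per-line state bits are assumptions for illustration, not anything from the leak:

```python
# Back-of-the-envelope tag-storage estimate for a 32 MB cache-like pool
# with 64-byte lines. All parameters marked "assumed" are illustrative.

CACHE_BYTES = 32 * 1024 * 1024   # 32 MB pool
LINE_BYTES  = 64                 # 64-byte lines
WAYS        = 16                 # assumed associativity
PHYS_BITS   = 40                 # assumed physical address width
STATE_BITS  = 4                  # assumed valid/dirty/coherence bits per line

lines = CACHE_BYTES // LINE_BYTES                # 524,288 = 512K tag entries
sets = lines // WAYS
offset_bits = (LINE_BYTES - 1).bit_length()      # 6
index_bits = (sets - 1).bit_length()             # 15 under these assumptions
tag_bits = PHYS_BITS - index_bits - offset_bits  # 19

total_tag_kib = lines * (tag_bits + STATE_BITS) / 8 / 1024
print(f"{lines} tag entries, {tag_bits}-bit tags -> ~{total_tag_kib:.0f} KiB of tag storage")
```

With those assumed parameters the tag array alone comes out around 1.4 MB, consistent with the "over a MB in cache tags alone" estimate above.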
I've been told the ESRAM is in fact very low latency.
That most likely results in higher power and heat.
This is surely taken into account by AMD and Microsoft; the main goal is a balanced machine.
I think this heat is overestimated in comparison to the CPU, the GPU, and the overall dissipation capacity.
Vector length is still quite long, and that can make it a poor fit for problems with naturally smaller granularity. The longer the SIMD vector, the more likely it is that branch divergence will affect performance, and more complex code can increase the number of paths a single wavefront will need to loop over.
Well, Arthur Gies comments on NeoGAF that the improvement in efficiency doesn't come from the ESRAM, so if that's true, forget about the low latency. He says it comes from the way the GPU SIMDs are managed and from real-time asset compression/decompression (?). Wasn't the GCN architecture supposed to greatly increase the efficiency of the vector units? How could this be increased even further?
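As an aside on the divergence point in the quoted answer, here is a toy sketch. The 64-lane wavefront width is the only GCN detail assumed; the paths and instruction costs are made up:

```python
# Toy model of branch divergence in a 64-wide wavefront: when lanes disagree
# on a branch, every taken path is executed in turn with the other lanes
# masked off, so execution time grows with the number of distinct paths.

WAVEFRONT_WIDTH = 64  # GCN wavefront size

def divergent_cost(lane_paths, path_costs):
    """lane_paths: the code path each lane takes; path_costs: instructions per path."""
    taken = set(lane_paths)                         # paths at least one lane takes
    serialized = sum(path_costs[p] for p in taken)  # all taken paths run in turn
    coherent = max(path_costs[p] for p in taken)    # best case: a single path
    return serialized, coherent

# Example: 64 lanes spread over 4 shader paths of differing length (made up).
lanes = [i % 4 for i in range(WAVEFRONT_WIDTH)]
costs = {0: 20, 1: 35, 2: 10, 3: 50}
serialized, coherent = divergent_cost(lanes, costs)
print(f"divergent wavefront: {serialized} issue slots vs. coherent: {coherent}")
```

The smaller the natural work granularity, the harder it is to fill all 64 lanes with the same path, which is the poor fit being described.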
At least with GCN, this is going to run into measures already defined to provide some parallelism.
Would it be possible to make the GPU out-of-order, capable of executing wavefront instructions out of program order? (If so, there would be a block inside the GPU that is not shown in the leaked diagram.)
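On the "measures already defined" point: rather than reordering instructions within a wavefront, GCN keeps multiple wavefronts resident per SIMD (up to 10) and switches among them to cover stalls. A rough sketch of that idea, with made-up latency and instruction counts:

```python
# Toy model of latency hiding by wavefront switching: when one wavefront
# stalls on a memory access, the SIMD issues from another ready wavefront,
# with no reordering inside any single wavefront.
from collections import deque

MEM_LATENCY = 100   # made-up memory latency in cycles
MAX_WAVES = 10      # GCN's per-SIMD resident wavefront limit

def utilization(num_waves, instrs_per_wave=20, mem_every=5):
    ready = deque(range(num_waves))
    stalled = {}                      # wavefront id -> cycle it becomes ready
    done = [0] * num_waves
    cycle = busy = 0
    while any(d < instrs_per_wave for d in done):
        cycle += 1
        for w, t in list(stalled.items()):   # wake finished memory accesses
            if t <= cycle:
                ready.append(w)
                del stalled[w]
        if not ready:
            continue                  # nothing ready: the SIMD idles this cycle
        w = ready.popleft()
        done[w] += 1
        busy += 1
        if done[w] < instrs_per_wave:
            if done[w] % mem_every == 0:
                stalled[w] = cycle + MEM_LATENCY   # stall on memory
            else:
                ready.append(w)
    return busy / cycle

for n in (1, 4, MAX_WAVES):
    print(f"{n:2d} resident wavefronts -> SIMD busy {utilization(n):.0%} of cycles")
```

More resident wavefronts keep the unit busy without any reordering hardware, which is the sense in which an out-of-order block would run into the parallelism measures already there.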
Even if this is true, it would be mostly irrelevant for purposes of game performance.
Could the listed GPU specs be the 'exposed' GPU rather than the actual GPU? I was thinking that if they have 3 GB supposedly set aside for various agendas, and all or part of two CPU cores as well, they could also have partitioned off part of the GPU for processing purposes. Perhaps the GPU really has 14 or 16 CUs but they are reserving some for Kinect?