Xbox One (Durango) Technical hardware investigation

Their sources can be wrong; he said Mirror's Edge 2 would be at Microsoft's E3 conference.

He also went on about a Prince of Persia game for the consoles. It turns out the only thing coming out is the already-known iOS mobile game, which is targeted at the iPad.

Those who claim he is never wrong are blindly ignoring what doesn't fit in with their own views.
 
Perhaps... but it was the followers who ran with a rumor and spun the BS of a downclock while jumping for joy and then kneeling to praise the almighty cboat.

really exciting to see MS trying to make some moves to create hw parity




Yeah, in fact, who is to say Sony won't have a reduction somewhere along the line either... as has been noted, "specs change" does not apply only to upclocks, nor only to MS.

Who says PS4 is superior to XB1? They are different: one is a truck and the other a car; depending on what I want to do, one or the other offers advantages, but in general both will work just fine.

Regarding the memory, I'd like to see someone outline how 9 GB of memory offers a tangible benefit over 5 GB, as I am not seeing enough of a need to justify the cost and trouble of increasing memory this late in the game.
 
It's all about money here, and now they discover that the ESRAM on the same die as the APU affects production and could create overheating.

Where does the conclusion "it's all about the money" come from? How do you know it is not about the energy/power consumption of moving data off chip?

MS has published this: the power/joules for going off chip versus staying on chip versus the actual processing. [Pretty sure it was MS that was credited in the post I read.] The power consumed going off chip is orders of magnitude higher than on chip (10,000x), and also compared with the actual processing operations (10 x 10,000x).

Second, how do you know that the power consumption of the ESRAM is bigger than (or smaller than, or equal to) the power cost of *driving* GDDR5 modules versus driving DDR3 modules?

If moving data off chip consumes so much more power (as per MS publication) then maybe less power is dissipated in equivalent memory transactions when they use ESRAM + DDR3 versus GDDR5?

I can't find the MS link right now but here is the same statement essentially:

JRK/MIT said:
Relative to performing an ALU operation on 32 bits of data, moving that data a few millimeters across a chip requires an order of magnitude more energy, moving it to or from off-chip DRAM requires four orders of magnitude more, and sending it over a cellular radio at least six orders of magnitude more. The efficiency and performance of an application are determined by the algorithm and hardware architecture, but critically also by the organization of computation and data.

http://people.csail.mit.edu/jrk/research.pdf
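To put those ratios into something concrete, here is a rough back-of-the-envelope sketch. The energy values are placeholders taken only from the orders of magnitude in the quote above (1 unit = one 32-bit ALU op, ~10x for an on-chip move, ~10,000x for an off-chip DRAM access); the real picojoule figures for these chips aren't public, and the workload split is purely hypothetical.

Code:
# Back-of-the-envelope energy comparison using only the relative
# orders of magnitude from the MIT quote above. The unit is arbitrary
# (1 "unit" = energy of one 32-bit ALU operation); the real pJ figures
# for these chips are not public.

ALU_OP   = 1        # 32-bit ALU operation (baseline)
ON_CHIP  = 10       # moving 32 bits a few mm across the die (~1 order more)
OFF_CHIP = 10_000   # moving 32 bits to/from external DRAM (~4 orders more)

def traffic_energy(words_on_chip, words_off_chip):
    """Energy (in ALU-op units) for a mix of on-chip and off-chip 32-bit transfers."""
    return words_on_chip * ON_CHIP + words_off_chip * OFF_CHIP

# Hypothetical workload: 1 million 32-bit words of intermediate data.
# Case A: all of it bounces through external DRAM.
# Case B: 80% of it stays in on-die ESRAM, 20% still spills off chip.
words = 1_000_000
case_a = traffic_energy(0, words)
case_b = traffic_energy(int(words * 0.8), int(words * 0.2))

print(f"all off-chip : {case_a:,.0f} units")
print(f"80% on-die   : {case_b:,.0f} units  ({case_a / case_b:.1f}x less energy)")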



And don't forget latency. ESRAM <<< Off Chip. A race car doesn't go very fast when it hits red light after red light.
 
Isn't it better to say that the race car has slower brakes, the impact of which depends on the number of bends in the track?
 
And don't forget latency. ESRAM <<< DDR3 < GDDR5. A race car doesn't go very fast when it hits red light after red light.
This is essentially a red herring.

As explained by knowledgeable forum members, DDR3 and GDDR5 latency is essentially equal, as measured in nanoseconds (which is the only measure that matters). Also, GDDR5 minimum block transfer size is larger than DDR3, but seeing as transfer speed is >2x faster it should not matter.
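A quick, illustrative sanity check on that, assuming the commonly reported data rates (DDR3-2133 for the Xbox One, 5.5 Gbps GDDR5 for the PS4) and counting only beats on the bus, ignoring command/CAS overhead (which is roughly comparable in nanoseconds for both):

Code:
# Rough time to move one minimum burst, ignoring command/CAS overhead.
# Data rates assumed from the commonly reported console figures.
DDR3_MTS  = 2133e6   # transfers per second (Xbox One DDR3-2133, assumed)
GDDR5_MTS = 5500e6   # transfers per second (PS4 GDDR5 at 5.5 Gbps, assumed)

def burst_time_ns(beats, transfers_per_s):
    return beats / transfers_per_s * 1e9

print(f"DDR3,  8-beat burst : {burst_time_ns(8, DDR3_MTS):.2f} ns")
print(f"GDDR5, 8-beat burst : {burst_time_ns(8, GDDR5_MTS):.2f} ns")
print(f"GDDR5, 16-beat burst: {burst_time_ns(16, GDDR5_MTS):.2f} ns")
# Even a burst twice as long finishes sooner on GDDR5, so the larger
# minimum transfer size doesn't translate into extra latency.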
 
If moving data off chip consumes so much more power (as per MS publication) then maybe less power is dissipated in equivalent memory transactions when they use ESRAM + DDR3 versus GDDR5?
It's a bad approximation, but if we just go with the relative bandwidths on the external memory pools, the PS4 at peak would be moving about 2.5 times the data off-die.
The exact power consumption would be reliant on knowing the parameters of the chips and voltages, but the overall activity level should bear out.
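For reference, the rough ratio implied by the published peak figures (this says nothing about joules, only relative off-die traffic):

Code:
# Relative peak off-die traffic, using the published peak bandwidths.
PS4_GDDR5_GBPS = 176.0   # 256-bit GDDR5 @ 5.5 Gbps
XB1_DDR3_GBPS  = 68.3    # 256-bit DDR3-2133

print(f"PS4 / XB1 off-die peak: {PS4_GDDR5_GBPS / XB1_DDR3_GBPS:.2f}x")
# ~2.6x, in line with the "about 2.5 times the data off-die" estimate,
# before accounting for whatever traffic the ESRAM absorbs on the XB1 side.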


And don't forget latency. ESRAM <<< DDR3 < GDDR5. A race car doesn't go very fast when it hits red light after red light.

The present situation, now that people have compared data sheets, appears more like:
ESRAM <<? DDR3 ~=(maybe) GDDR5.

The DRAM devices themselves don't differ much, so it would come down to the respective designs' memory controllers and design emphasis.
The eSRAM should be faster, to a degree not disclosed.
 
This is essentially a red herring.

As explained by knowledgeable forum members, DDR3 and GDDR5 latency is essentially equal, as measured in nanoseconds (which is the only measure that matters). Also, GDDR5 minimum block transfer size is larger than DDR3, but seeing as transfer speed is >2x faster it should not matter.

If so, then at least it can be said that ESRAM latency is <<< off-chip latency.
 
Are you implying a new RROD for the Xbox One? Is the eSRAM hotter than GDDR5/DDR3?

EDIT: Sorry, I misunderstood you.

Not implying that. [I should have written costs not savings. I went and edited that. Sorry for lack of clarity.]

I am saying that there are fewer off-chip transactions (and they consume less power at much lower clock rates) in one scenario (the DDR3 + ESRAM scenario). So I am implying that the power consumed by the ESRAM might not be much compared with that power saving. (The saving referring to the reduced off-chip memory transactions.)

The drivers for the GDDR5 should be dissipating quite a bit more, since there are both more transactions and a much higher clock. [I am assuming the dominant source of the 10,000x factor is the drivers from the SoC to the external memory die.] But there are more qualified people who could toss around numbers, based on the fact that the Micron chips are shown in the picture, the BW/clock is known (published for the GDDR5 case), etc.

If there is an AMD person around here, maybe they can walk over and talk to someone who worked on the GDDR4 or GDDR5 specs and get their opinion in a water-cooler conversation. But reading 10,000x, it suggests that it might be a pretty real factor.



Plus, if the SoC gets a lot bigger because of a big ESRAM block, then the contact area between the SoC and the heatsink just went up a lot too. Thermal resistance is inversely proportional to contact area (and yes, there is much more to it).
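A minimal sketch of that scaling, treating the die-to-heatsink interface as simple 1-D conduction; the thickness, conductivity and die-area numbers are placeholders, and the point is only that R_th goes as 1/A:

Code:
# 1-D conduction through the thermal interface: R_th = t / (k * A).
# t and k are placeholder values; the point is only that R_th ~ 1/A.
def r_thermal(area_mm2, thickness_mm=0.1, k_w_per_mk=5.0):
    area_m2 = area_mm2 * 1e-6
    thickness_m = thickness_mm * 1e-3
    return thickness_m / (k_w_per_mk * area_m2)   # Kelvin per Watt

for area in (300, 400):   # hypothetical die areas in mm^2
    print(f"{area} mm^2 die: {r_thermal(area):.3f} K/W through the interface")
# Going from 300 to 400 mm^2 cuts this interface resistance by ~25%,
# i.e. the same Watts produce a smaller temperature rise across it.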
 
I hope at some point the Sony faithful will realize that one console being 'weaker' than the other only serves to reduce the quality of games on the stronger one.

If the Xbox One were closer to the Wii U in performance than to the PlayStation 4, that might be true, but this isn't the case. The PlayStation 4 and Xbox One are in the same performance ballpark, just as the PlayStation 3 and Xbox 360 were in the same performance ballpark.

What we've observed in the current generation will likely continue. The PlayStation 3's bonkers architecture and less usable memory resulted in lower resolutions, lower frame rates, lower-quality textures and other graphical budgets (alpha blending in particular) being cut. The games were fundamentally the same; it's just that the PlayStation 3 often ended up with compromised visuals compared to the Xbox 360 version.

And you know what? It's not a big deal. :cool:
 
Just wanted to ask a small technical question.

Does the Kinect 2 actually have any (or better) processing power in its own hardware compared to the Kinect 1 implementation?

If I recall, a huge limiting factor for Kinect 1 was that it offloaded all its processing to the 360, which took away from the game's processing.

There's been news and tech demos of the Kinect 2 which look impressive, but the Kinect 1 also looked impressive in tech demos.

If the Kinect 2 does have its own dedicated processor, anyone know what that is?
 
The problem for Sony is they designed the box as small as possible around a fixed TDP and announced the size and dimensions to the public. They are also taking as many pre-orders as possible. So if the chips come in too hot and poor-yielding, what are they going to do? Increase the size of the PS4? Stiff customers who preordered because fewer chips hit the new clock speeds?
This is mind-bending in its audacity. So on one hand, it's entirely reasonable to think Microsoft can pull a last-minute upclock on their APU - even though the one they showed, presumed to be running at 1600/800 MHz, already has a comically large cooling solution - but on the other, we're thinking that Sony, who appear to be running a much simpler design variation of the same Jaguar package, are having yield problems? :eek:

The 1.6Ghz Jaguar would have been picked by Microsoft and Sony because of cost (yield), energy consumption and heat output. It's probably a magic sweet spot for the package which is why both companies picked the exact same clocks. If TSMC were having yield problems producing for Sony, they'd know by now, Sony would know by now, and pre-orders wouldn't be virtually unlimited.
 
Just wanted to ask a small technical question.

Does the Kinect 2 actually have any (or better) processing power in its own hardware compared to the Kinect 1 implementation?

If I recall, a huge limiting factor for Kinect 1 was that it offloaded all its processing to the 360, which took away from the game's processing.

There's been news and tech demos of the Kinect 2 which look impressive, but the Kinect 1 also looked impressive in tech demos.

If the Kinect 2 does have its own dedicated processor, anyone know what that is?

There is an audio block that does echo cancellation for Kinect, and I imagine whatever other audio processing Kinect needs to do. The rest would be on the GPU and CPU. The question is how much of it is part of the OS reservation and how much is run in the game VM.
 
I'd put money on 12GB being pure nonsense. If you mix RAM module sizes, you break the ability to dual-channel, which is a far worse loss than the gain of 4GB. If the memory controller is triple-channel and MS just decided 'sod it, let's not populate that', then there are pink slips a-flying down Seattle way. There is no time for a respin to add more channels or to change the APU itself, and if they tried a dual-design, dual-fab strategy they would burn cash so fast it would be cheaper to just put $100 in every XB1.

The better ESRAM performance intrigues me, though. I had presumed it was just that they worked out a more efficient way of reading/writing in parallel, but smarter folks than me are naysaying that interpretation. I guess I saw it as analogous to a microcode engine patch for a CPU that improves performance for certain macro-ops, but that is a very different scenario from a memory chip. I do hope that developer briefing leaks in a more substantive way so I can read what the smart people here think!
 
The actual pictures of the XBox board would indicate how many channels there are.
That being said, having a non-power-of-two amount of memory on a power-of-two bus width is possible. The inverse has also been done by certain Nvidia GPU SKUs.

The memory controllers and whatever address partitioning they use can route accesses appropriately, at the cost of non-uniform bandwidth if accesses to the additional space start hammering the controllers linked to the higher-density channels and idling the others.
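A toy model of that trade-off, with a purely hypothetical 12GB layout (four 64-bit DDR3 channels, two of them carrying the extra density):

Code:
# Toy model of non-uniform bandwidth with mixed-density channels.
# Hypothetical layout: four 64-bit DDR3 channels at ~17 GB/s each.
# Channels 0-3 each carry 2GB (the uniformly interleaved 8GB region);
# channels 0-1 carry an extra 2GB each (the "upper" 4GB region), 12GB total.
PER_CHANNEL_GBPS = 17.0

def effective_bandwidth(channels_serving_region):
    return channels_serving_region * PER_CHANNEL_GBPS

print(f"lower 8GB (4 channels): {effective_bandwidth(4):.0f} GB/s")
print(f"upper 4GB (2 channels): {effective_bandwidth(2):.0f} GB/s")
# Accesses that land in the upper region hammer only the higher-density
# channels while the others idle, so bandwidth there is halved.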
 
This is mind bending in its audacity. So on one hand, it's entirely reasonable to think Microsoft can pull a last minute upclock on their APU - even though the one they showed, presumed to be running at 1600/800mhz, already has a comically large cooling solution, but we're also thinking that Sony, who appear to be running a much simpler design variation of the same Jaguar package, are having yield problems?
Your reasoning strikes me as very unsound. The fact MS has a huge case with a huge fan implies they can deal with more heat than the PS4, no? Ergo the possibility that MS can upclock but Sony can't.
 