Xbox One (Durango) Technical hardware investigation

Samsung has just announced that DDR4 is going into mass production. Too late for the launch Xbone, but could it make its way into the next refresh?
 
Samsung has just announced that DDR4 is going into mass production. Too late for the launch Xbone, but could it make its way into the next refresh?

No. That wouldn't be a 'refresh'; that would be a whole new device with different timings and performance from the launch unit. Introducing performance differences with a refresh is a no-no.
 
Does DDR4-2133 perform better than DDR3-2133?

I have no idea, but I can guarantee it performs differently, and that's enough. Even if we're only talking a cycle or two faster or slower, that adds up fast and makes targeting software separately at Xbox One v1 and Xbox One v2 a necessity. Take the DDR2 -> DDR3 transition: although DDR3 brought more bandwidth, it also introduced more latency.
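
To put a rough number on that DDR2 -> DDR3 point, here's a back-of-envelope comparison; the CAS latencies below are typical retail parts I'm assuming purely for illustration, not anything official:

```python
# Back-of-envelope: first-word latency = CAS cycles / memory clock,
# where the memory clock is half the effective data rate (DDR = double data rate).

def cas_latency_ns(data_rate_mt_s, cas_cycles):
    mem_clock_mhz = data_rate_mt_s / 2           # e.g. DDR3-1333 -> 666.5 MHz
    return cas_cycles / mem_clock_mhz * 1000.0   # cycles per MHz -> nanoseconds

# Typical retail parts of each era (assumed, purely illustrative):
ddr2_800_cl5  = cas_latency_ns(800, 5)     # ~12.5 ns
ddr3_1333_cl9 = cas_latency_ns(1333, 9)    # ~13.5 ns

print(f"DDR2-800  CL5: {ddr2_800_cl5:.1f} ns")
print(f"DDR3-1333 CL9: {ddr3_1333_cl9:.1f} ns")
# DDR3 roughly doubles the bandwidth, yet the first word arrives slightly later.
```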
 
Is it me, or did they just say that the video blocks are the Secret Sauce (TM)?

They also implied that there are some hidden blocks, or at least some functions of the current blocks, that were not shown or talked about at all.
RE Video block:
Not how I read it. I can see why people would say secret sauce if they were either trying to big up or put down the XB1, though. (Not saying you.)
It's just something he thought it would be good and efficient at.
Remember, it's an _entertainment/multimedia device_, not just a console...

RE Hidden blocks:
MS didn't go into details of what they are and what makes them up; he was saying he's making an educated guess at some of them, but can't be sure what all the special-purpose processors are.
i.e. move engines, etc.

I'd have to reread it, but I swear there was a part or two that I thought was mixing up old and new specs.
 
I have no idea, but I can guarantee it performs differently, and that's enough. Even if we're only talking a cycle or two faster or slower, that adds up fast and makes targeting software separately at Xbox One v1 and Xbox One v2 a necessity. Take the DDR2 -> DDR3 transition: although DDR3 brought more bandwidth, it also introduced more latency.

Just because it performs differently doesn't rule it out. As long as it isn't slower in terms of bandwidth and latency, it can be made to work. The memory controller can make the software-visible performance look identical. And if that's the case, you can stay with the cheapest, most commercially available RAM.

I have not been able to find anything with regard to the latency of DDR3 vs. DDR4, so maybe someone can post some facts.
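
For what it's worth, here's the first-word-latency arithmetic applied to DDR3-2133 vs DDR4-2133, using commonly quoted JEDEC CAS latencies (assumed values; actual modules vary):

```python
# First-word latency: latency_ns = CAS cycles / (data rate / 2), clock in MHz.
def cas_latency_ns(data_rate_mt_s, cas_cycles):
    return cas_cycles / (data_rate_mt_s / 2) * 1000.0

ddr3_2133_cl11 = cas_latency_ns(2133, 11)   # ~10.3 ns
ddr4_2133_cl15 = cas_latency_ns(2133, 15)   # ~14.1 ns

print(f"DDR3-2133 CL11: {ddr3_2133_cl11:.1f} ns")
print(f"DDR4-2133 CL15: {ddr4_2133_cl15:.1f} ns")
# Same data rate and peak bandwidth, noticeably different access latency -
# exactly the kind of difference a memory controller would have to hide.
```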
 
From the semiaccurate article:

To translate from technical minutia to English, good code = 204GBps, bad code = 109GBps, and reality is somewhere in between. Even if you try there is almost no way to hit the bare minimum or peak numbers. Microsoft sources SemiAccurate talked to say real world code, the early stuff that is out there anyway, is in the 140-150GBps range, about what you would expect. Add some reasonable real-world DDR3 utilization numbers and the total system bandwidth numbers Microsoft threw around at the launch seems quite reasonable. This embedded DRAM is however not a cache in the traditional PC sense, not even close.

150GB/s is not bad.

While it is multi-purpose and Microsoft said it was not restricted in any specific manner, there are some tasks like D3D surface creation that default to it. If a coder wants to do something different they are fully able to, why you would want to however is a different question entirely.
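
For reference, the 109/204 figures above roughly fall out of a back-of-envelope calculation, assuming the widely reported 853 MHz clock, a 128-byte-per-cycle eSRAM interface, and read+write overlapping on 7 of every 8 cycles; none of those parameters are stated in the article itself:

```python
# Rough eSRAM bandwidth arithmetic (all parameters assumed, see note above).
clock_ghz       = 0.853    # GPU/eSRAM clock, GHz
bytes_per_cycle = 128      # 1024-bit interface: one read OR one write per cycle

read_only  = clock_ghz * bytes_per_cycle   # ~109 GB/s ("bad code")
# Reads and writes can reportedly overlap, but writes only land on 7 of every 8 cycles:
read_write = read_only * (1 + 7 / 8)       # ~204 GB/s ("good code")

print(f"read-only : {read_only:.1f} GB/s")
print(f"read+write: {read_write:.1f} GB/s")
# Real code landing in the 140-150 GB/s range sits between those two bounds.
```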
 
Didn't see this posted...

Seeking the story behind the silicon, I took a few minutes to interview the engineers behind the custom chips in the Microsoft Xbox One. Patrick O'Connor, the senior engineering manager behind the latest Kinect sensor, recalled the day his team won a bake-off Microsoft held between three or four external time-of-flight sensors and the one his team proposed.

"For an engineering group, it's a big day when you essentially get a design win," O'Connor said during a morning break. "We were proud of our prototype, it was working beautifully and when we demoed the prototype it exceeded what anyone else was showing them," he said.

The 512 x 424 pixel sensor, made in a 130nm TSMC process, has a 90 degree diagonal field-of-view and was part of a two-year development project on the next-gen Kinect.

"You can get really close to the camera and still detect a person in a normal living room where there is not a lot of space or light," O'Connor said. "You can also detect a child's hand or wrist even when they are far away from the camera, using just a few pixels," he said.

The net result is "much more accurate" game play. For instance, in popular videogames such as tennis, "you can put spin on the ball now because we see more subtle motions of wrist turns," he added.

Another Microsoft engineer said the company also held a bake-off to determine which CPU core it would use. It looked at all the usual suspects and some unusual ones including an internally designed instruction set before choosing the AMD Jaguar.

http://www.eetimes.com/document.asp?doc_id=1319358

Tommy McClain
 
That the ToF sensor is in-house gives MS more options in terms of licensing and products, but I guess it also means they shoulder all the production costs and concerns. It's a shame they didn't cover that at Hot Chips - it's by far the most interesting aspect of the new console!
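
For a rough sense of what that 512 x 424 array buys you at living-room distances, here's a back-of-envelope sketch; the 70 x 60 degree split of the 90 degree diagonal FoV and the 4 m distance are my assumptions, not figures from the article:

```python
import math

# Assumed: the 90-degree diagonal FoV splits into roughly 70 (H) x 60 (V) degrees.
h_pixels   = 512            # horizontal resolution of the 512 x 424 array
h_fov_deg  = 70.0
distance_m = 4.0            # far end of a typical living room

deg_per_pixel = h_fov_deg / h_pixels                           # ~0.14 degrees/pixel
pixel_size_m  = distance_m * math.tan(math.radians(deg_per_pixel))

wrist_width_m   = 0.04                                         # ~4 cm child's wrist
pixels_on_wrist = wrist_width_m / pixel_size_m

print(f"one pixel covers ~{pixel_size_m * 100:.1f} cm at {distance_m:.0f} m")
print(f"a {wrist_width_m * 100:.0f} cm wrist spans ~{pixels_on_wrist:.0f} pixels")
# Which lines up with the "just a few pixels" remark in the interview.
```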
 
Anand also wrote that before he even tested Jaguar. I'd guess that the decision to limit the CPU to 1.6 GHz is for yield reasons, not due to concerns about Jaguar's power efficiency at 2 GHz.

With 8 cores, the difference in TDP between 1.6 and 2.0 GHz multiplies compared to a standard 4-core Jaguar.

Sorry, but an SoC has a target TDP that can't be increased by tens of watts at will without serious redesigns.
 
SemiAccurate says the GPU uarch is "between HD 6000 and 7000"...
How can that be right? GCN is a completely new architecture from the ground up, is it not?
#confused :/
 
It's a claim that runs counter to pretty much every other disclosure and article, and isn't substantiated or explained.

The odds seem to be in favor of it being an inaccuracy.
 
I guess I am not overly familiar with this site and current reputations - http://www.extremetech.com/gaming/1...d-odd-soc-architecture-confirmed-by-microsoft

I think Ekim was asking about this detail, here is a comment from the article:
The other major mystery of the ESRAM cache is the single arrow running from the CPU cache linkage down to the GPU-ESRAM bus. It’s the only skinny black arrow in the entire presentation and its use is still unclear. It implies that there’s a way for the CPU to snoop the contents of ESRAM, but there’s no mention of why that capability isn’t already provided for on the Onion/Garlic buses and it’s not clear why they’d represent this option with a tiny black arrow rather than a fat bandwidth pipe.

Edit: I guess we already covered this article..
 
With 8 cores, the difference in TDP between 1.6 and 2.0 GHz multiplies compared to a standard 4-core Jaguar.

Sorry, but an SoC has a target TDP that can't be increased by tens of watts at will without serious redesigns.

You're looking at 30W max for an 8-core 2 GHz Jaguar CPU, up 8W from maybe ~22W at 1.6 GHz. Perhaps too much for a post-design overclock, sure, but it's ridiculous to claim that heat/efficiency concerns are why 2 GHz wasn't chosen at launch. Especially in the PS4, where the CPU is dwarfed by the GPU.
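
As a quick sanity check on those numbers, here's a crude frequency/voltage scaling sketch; the ~22 W baseline is the figure from the post above, and the voltage bump is a pure assumption:

```python
# Crude dynamic-power scaling: P ~ frequency * voltage^2 (leakage ignored).
def scale_power(p0_w, f0_ghz, f1_ghz, v0=1.0, v1=1.0):
    return p0_w * (f1_ghz / f0_ghz) * (v1 / v0) ** 2

baseline_w = 22.0   # ~22 W for eight Jaguar cores at 1.6 GHz (figure from the post above)

same_voltage = scale_power(baseline_w, 1.6, 2.0)              # ~27.5 W
plus_5pct_v  = scale_power(baseline_w, 1.6, 2.0, 1.0, 1.05)   # ~30 W

print(f"2.0 GHz, same voltage: {same_voltage:.1f} W")
print(f"2.0 GHz, +5% voltage : {plus_5pct_v:.1f} W")
# Either way the delta is single-digit watts, small next to the GPU's share of the SoC budget.
```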
 
So any ideas on the black arrow thing from that slide? Charlie is suggesting MS has their own hUMA-like solution, so maybe that's showing the CPU snooping eSRAM? ExtremeTech seems to think so, but does anyone here have guesses?
 