Samsung has just announced that DDR4 is going into mass production. Too late for the launch xbone, but could it make its way into the next refresh?
They are teasing that they know a lot more than they are telling...
No. That wouldn't be a 'refresh'; that would be a whole new device with different timings and performance from the launch unit, and introducing performance differences with a refresh is a no-no.
Does DDR4-2133 perform better than DDR3-2133?
RE the video block: Is it just me, or did they just say that the video blocks are the Secret Sauce (TM)?
They also implied that there are some hidden blocks or at least some functions of the current blocks that were not shown or talked about at all.
I have no idea, but I can guarantee it performs differently, and that's enough. Even if we're only talking a cycle or two faster or slower, that adds up fast and makes targeting software separately to XboxOne v1 and XboxOne v2 a necessity. Take the DDR2 -> DDR3 transition: although DDR3 brought more bandwidth, it also introduced more latency.
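To put rough numbers on that, here's a quick sketch of how a few extra cycles translate into absolute latency. The CAS latencies below are typical JEDEC-style values I'm assuming for illustration, not the timings of any actual console part:

# Back-of-the-envelope CAS latency comparison (illustrative numbers only).
# DDR transfers data on both clock edges, so the I/O clock in MHz is half
# the MT/s data rate; absolute latency = CAS cycles / clock frequency.

def cas_latency_ns(data_rate_mts, cas_cycles):
    """Absolute CAS latency in nanoseconds for a DDR part."""
    clock_mhz = data_rate_mts / 2          # I/O clock is half the data rate
    return cas_cycles / clock_mhz * 1000   # cycles / MHz -> microseconds -> ns

# Assumed, typical-looking timings -- not the actual parts in any console.
parts = {
    "DDR3-2133 CL11": (2133, 11),
    "DDR4-2133 CL15": (2133, 15),
}

for name, (rate, cl) in parts.items():
    print(f"{name}: {cas_latency_ns(rate, cl):.1f} ns to first data")

Same data rate, yet the higher CAS count waits a few extra nanoseconds for the first word back, and that is exactly the kind of difference that would show up between a v1 and a v2 box.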
To translate from technical minutiae to English, good code = 204GBps, bad code = 109GBps, and reality is somewhere in between. Even if you try, there is almost no way to hit the bare minimum or peak numbers. Microsoft sources SemiAccurate talked to say real-world code, the early stuff that is out there anyway, is in the 140-150GBps range, about what you would expect. Add some reasonable real-world DDR3 utilization numbers and the total system bandwidth numbers Microsoft threw around at the launch seem quite reasonable. This embedded DRAM is however not a cache in the traditional PC sense, not even close.
While it is multi-purpose and Microsoft said it was not restricted in any specific manner, there are some tasks, like D3D surface creation, that default to it. If a coder wants to do something different, they are fully able to; why you would want to, however, is a different question entirely.
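To sanity-check the arithmetic in that excerpt: the 256-bit, 2133MT/s DDR3 interface is the publicly reported Xbox One configuration, but the utilization percentage below is purely my assumption.

# Rough total-bandwidth sketch from the figures quoted above.
# The 256-bit, 2133 MT/s DDR3 interface is the publicly reported Xbox One
# configuration; the 75% utilization figure is just an assumption here.

esram_real_gbps = (140, 150)                # real-world range from the article
ddr3_peak_gbps  = 2133e6 * (256 / 8) / 1e9  # MT/s * bytes per transfer
ddr3_real_gbps  = ddr3_peak_gbps * 0.75     # assumed achievable utilization

print(f"DDR3 peak: {ddr3_peak_gbps:.1f} GB/s")
print(f"DDR3 real: {ddr3_real_gbps:.1f} GB/s (assumed 75% utilization)")

lo = esram_real_gbps[0] + ddr3_real_gbps
hi = esram_real_gbps[1] + ddr3_real_gbps
print(f"Combined real-world estimate: {lo:.0f}-{hi:.0f} GB/s")

That lands right around 190-200GB/s combined, which is why the 140-150GBps real-world ESRAM figure plus a sane DDR3 number makes the bandwidth talk at launch look plausible.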
Seeking the story behind the silicon, I took a few minutes to interview the engineers behind the custom chips in the Microsoft Xbox One. Patrick O'Connor, the senior engineering manager behind the latest Kinect sensor, recalled the day his team won a bake-off Microsoft held between three or four external time-of-flight sensors and the one his team proposed.
"For an engineering group, it's a big day when you essentially get a design win," O'Connor said during a morning break. "We were proud of our prototype, it was working beautifully and when we demoed the prototype it exceeded what anyone else was showing them," he said.
The 512 x 424 pixel sensor, made in a 130nm TSMC process, has a 90-degree diagonal field of view and was part of a two-year development project on the next-gen Kinect.
"You can get really close to the camera and still detect a person in a normal living room where there is not a lot of space or light," O'Connor said. "You can also detect a child's hand or wrist even when they are far away from the camera, using just a few pixels," he said.
The net result is "much more accurate" game play. For instance, in popular videogames such as tennis, "you can put spin on the ball now because we see more subtle motions of wrist turns," he added.
Another Microsoft engineer said the company also held a bake-off to determine which CPU core it would use. It looked at all the usual suspects and some unusual ones including an internally designed instruction set before choosing the AMD Jaguar.
Someone should ask Albert Penello about the 1.9GHz CPU theory.
Anand also wrote that before he even tested Jaguar. I'd guess that the decision to limit the CPU to 1.6 GHz is for yield reasons, not due to concerns about Jaguar's power efficiency at 2 GHz.
SemiAccurate says the GPU uarch is "between HD 6000 and 7000"...
How can that be right? GCN is a completely new architecture from the ground up, is it not?
#confused :/
From the SemiAccurate article:
150GB/s is not bad.
The other major mystery of the ESRAM cache is the single arrow running from the CPU cache linkage down to the GPU-ESRAM bus. It’s the only skinny black arrow in the entire presentation and its use is still unclear. It implies that there’s a way for the CPU to snoop the contents of ESRAM, but there’s no mention of why that capability isn’t already provided for on the Onion/Garlic buses and it’s not clear why they’d represent this option with a tiny black arrow rather than a fat bandwidth pipe.
With 8 cores, the TDP difference between 1.6 and 2.0GHz multiplies compared to the standard 4-core Jaguar.
Sorry, but an SoC has a target TDP that can't be increased by tens of watts at will without serious redesigns.
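Right, and the first-order physics backs that up: dynamic power scales roughly with C*V^2*f, and a 1.6 -> 2.0GHz bump usually needs a voltage increase too. The voltage and wattage numbers below are assumptions for illustration, not Jaguar's real V/F curve:

# First-order dynamic power scaling: P_dyn ~ C * V^2 * f.
# Voltages and the CPU wattage below are placeholder assumptions, not
# Jaguar's real numbers, but they show why +400 MHz across 8 cores is
# not a trivial change to the TDP budget.

def scale(f_old, v_old, f_new, v_new):
    """Relative dynamic power at the new frequency/voltage point."""
    return (f_new / f_old) * (v_new / v_old) ** 2

per_core = scale(1.6, 1.00, 2.0, 1.10)   # assumed ~10% voltage bump
print(f"Per-core dynamic power: ~{per_core:.2f}x")

# If the 8 Jaguar cores burned, say, 25 W of the SoC budget at 1.6 GHz
# (an assumption), the same cores at 2.0 GHz would want roughly:
base_cpu_watts = 25.0
print(f"CPU cluster: ~{base_cpu_watts * per_core:.0f} W vs {base_cpu_watts:.0f} W")

Roughly 1.5x the dynamic power on the CPU cluster alone, before you touch the GPU or memory interfaces, so it's not a knob you can turn late in the program without reworking the power and cooling design.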