Xbox One (Durango) Technical hardware investigation

Status
Not open for further replies.
What is interesting is why MS decided to go with ESRAM + DDR3 memory instead of GDDR5.
I cannot believe that they did not know about the viability of GDDR5 memory chips and that, for some reason, Sony simply "waited" for it.

So the two possibilities we have here are:
1. The specs that are "known" now via the VGLeaks are correct, and that is what we will see in the final silicon. Perhaps this allows MS to compete on price: a $199-299 Durango?

2. The specs we know now are not the final ones, and lots of heavy customization went into the design, meaning that MS has managed to design a machine that "does more with less". Instead of going Brute Force, they went Bruce Lee. There have to be more reasons MS went with ESRAM and DDR3 memory than the ones we know about...

I don't think it's that simple.

The eSRAM is directly interfaced to what looks like a pretty fast I/O bus with encoding and decoding logic on it. Durango might be designed to facilitate efficient servicing of outside devices like XTV, Kinect, display glasses, tablets or phones.

You have papers like this
An asymmetric distributed shared memory model for heterogeneous parallel systems
http://dl.acm.org/citation.cfm?id=1736059
Existing programming models for heterogeneous computing rely on programmers to explicitly manage data transfers between the CPU system memory and accelerator memory.
This paper presents a new programming model for heterogeneous computing, called Asymmetric Distributed Shared Memory (ADSM), that maintains a shared logical memory space for CPUs to access objects in the accelerator physical memory but not vice versa. The asymmetry allows light-weight implementations that avoid common pitfalls of symmetrical distributed shared memory systems. ADSM allows programmers to assign data objects to performance critical methods. When a method is selected for accelerator execution, its associated data objects are allocated within the shared logical memory space, which is hosted in the accelerator physical memory and transparently accessible by the methods executed on CPUs.

There may exist numerous reasons why MS went this route beyond simply trying to avoid the cost of including 8 GB of GDDR5.
 
Maybe we could be underestimating the benefit of the latency on that ESRAM? Depending on just how good that latency is, isn't there the possibility that Durango could get surprisingly more mileage out of its available memory bandwidth than might otherwise be the case?
 
Durango has more memory BW than a Bonaire GPU paired with an IVB CPU connected to DDR3-2133 main memory. The leaked docs also claim that low latency is important for high ROP efficiency (presumably with gains for early Z testing), something that, if true, would provide at least some limited advantages over any alternative GDDR5 offering.
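For reference, peak bus bandwidth is just transfer rate times bus width. A quick sketch using the rumored Durango figures (a 256-bit DDR3-2133 bus plus the ~102 GB/s ESRAM number from the leaks; both are assumptions from the rumors, not confirmed specs):

```python
def bus_bandwidth_gbs(transfer_rate_mtps, bus_width_bits):
    """Peak bandwidth in GB/s: (million transfers/s) * (bytes per transfer)."""
    return transfer_rate_mtps * (bus_width_bits / 8) / 1000.0

# Rumored Durango main memory: DDR3-2133 on a 256-bit bus.
ddr3 = bus_bandwidth_gbs(2133, 256)   # ~68.3 GB/s

# The ESRAM figure (~102.4 GB/s) is quoted from the leaks, not derived here.
esram = 102.4
total = ddr3 + esram                  # ~170.7 GB/s combined peak
```

Combined peak comes out around 170 GB/s, though of course a split pool is not directly comparable to one unified GDDR5 bus.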

What is the source of the panic around Durango's memory bandwidth anyway?
 
Oh, no panic at all, as I've always felt that it had more than the necessary memory bandwidth, but I felt that perhaps the potential for what the ESRAM can bring to the table has flown under the radar.

It won't suddenly turn Durango's GPU into a 7970 or make it somehow exceed the PS4, but in the console environment I suspect it will come to be well respected over the life of the next xbox machine. And not to be a bother, but give me your dumbest explanation for what early z test is about. I figure I ask enough silly questions and pretty soon I'll be answering some for future newbies. :)
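Since the question came up: early Z, in toy form, just means running the depth test before the pixel shader instead of after it, so fragments hidden behind something already drawn get thrown away before you pay for shading them. A rough Python sketch (all names made up for illustration):

```python
def rasterize_early_z(fragments, depth_buffer, shade):
    """fragments: list of (x, y, depth, attrs); smaller depth = closer.
    Returns how many fragments actually ran the (expensive) shader."""
    shaded = 0
    for x, y, z, attrs in fragments:
        if z >= depth_buffer[(x, y)]:
            continue                    # occluded: rejected before shading
        depth_buffer[(x, y)] = z        # visible so far: record its depth
        shade(attrs)                    # only now pay the shading cost
        shaded += 1
    return shaded

# Two fragments land on the same pixel; the nearer one arrives first,
# so the farther one is rejected without ever being shaded.
depth = {(0, 0): 1.0}
n = rasterize_early_z([(0, 0, 0.5, "near"), (0, 0, 0.8, "far")],
                      depth, shade=lambda attrs: None)
```

Which also hints at why the leaked claim about latency and ROP efficiency is at least plausible: the faster the depth read comes back, the sooner a doomed fragment can be killed.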
 
What is interesting is why MS decided to go with ESRAM + DDR3 memory instead of GDDR5.
I cannot believe that they did not know about the viability of GDDR5 memory chips and that, for some reason, Sony simply "waited" for it.

Most likely it wasn't just that they thought 8 GB of GDDR5 wasn't "possible"; it is also extremely pricey.

So they thought something like " the only way 8GB RAM is feasible is with this DDR3 setup, and we want 8GB for set top box ambitions".

They were probably as shocked as anybody at the competition's move to 8GB.

But the consolation prize is the price advantage over 8GB GDDR5 is still very large and very much there.

From whispers, Kinect did play a part in MS going for a low BOM on the rest of the console. In that light, only 12 CUs and DDR3 make sense.

I do like the way Durango is engineered (by rumors). A modest GPU doesn't need gobs of bandwidth. The whole setup should be quite cost effective.

It's a shame the savings were apparently dumped into Kinect, though.

I'm still hopeful for a clock bump. They are very effective. As we see with Bonaire: only 14 CUs, yet simply higher clocks allow it to hit ~1.8 TF. 12 CUs would hit >1.5 TF at 1 GHz.
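Those numbers all fall out of the standard GCN peak-FLOP formula: CUs × 64 SIMD lanes × 2 FLOPs per FMA, per clock cycle. A quick sanity check (the 800 MHz Durango clock is the rumored figure, not confirmed):

```python
def gcn_peak_tflops(cus, clock_ghz):
    """Peak single-precision TFLOPS for a GCN GPU:
    CUs * 64 SIMD lanes * 2 FLOPs per FMA, per cycle."""
    return cus * 64 * 2 * clock_ghz / 1000.0

rumored_durango = gcn_peak_tflops(12, 0.8)   # ~1.23 TF (the "1.2 TF" rumor)
bonaire_7790    = gcn_peak_tflops(14, 1.0)   # ~1.79 TF
bumped_12cu     = gcn_peak_tflops(12, 1.0)   # ~1.54 TF with a 1 GHz clock
```

So a clock bump from 800 MHz to 1 GHz alone would take 12 CUs from ~1.23 TF to ~1.54 TF.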

I hope MS is being smart here behind the scenes, maybe with some prodding from Rein.
 
They better have some great software made to utilize Kinect alongside the controller then. A big initial limitation was the fact that it couldn't be guaranteed that every xbox owner had Kinect, but that won't be an issue on the next xbox.

So we'll have a chance to see what can really be accomplished.
 
Maybe we could be underestimating the benefit of the latency on that ESRAM? Depending on just how good that latency is, isn't there the possibility that Durango could get surprisingly more mileage out of its available memory bandwidth than might otherwise be the case?

That's been the only real hope for "special sauce" for months now, it just gets lost in all the yelling that there's clearly no special sauce and stop being stupid and hoping for unicorns where none exist.

How effective it might be is the question. I'm certainly not knowledgeable enough to have any idea.

Panajev at GAF wrote a post a little while back talking about the ESRAM maybe being able to do some cool things (I think he drew analogies to the PS2). So he's really the second programmer besides ERP I've seen mention it.
 
I'm glad to see that the 1.2 TFLOP rumour appears less likely now. The 7790 seems to be a good choice; even if it's underclocked, a 1.6 TFLOP minimum seems likely.

Now about that bandwidth....
 
I just wish for the rumors to be over, though honestly, if both Sony and Microsoft are basically going to build the same box, why should I buy two of them! :cry:
 
I'm glad to see that the 1.2 TFLOP rumour appears less likely now. The 7790 seems to be a good choice; even if it's underclocked, a 1.6 TFLOP minimum seems likely.

Now about that bandwidth....

How does the existence of a new card from AMD make the 1.2TFlop rumor any less likely? The existence of the GHz 7870 doesn't mean the PS4 GPU has magically been bumped over 2TFlops.

It's a mistake acting like either system's GPU is "based on" any particular AMD card. Both are made to order APUs using GCN building blocks in a requested specification. Any resemblance to a retail videocard is coincidental and has no bearing on the final console configuration or clock-speeds.
 
Nah, it's been a Geforce Titan for months now. Shhh :p

You couldn't wait till April. You just had to go and reveal it to everybody. :LOL:

Wow, and here I thought I didn't have a GAF account to actually search out that Panajev post you were referring to, but it turns out I signed up back in 2010 and never actually used it for anything. And with the exact same name as the one I have on here, no less.

That's also a very interesting comment regarding the ESRAM. Makes me wonder just what heck he means. For example, this part here.

(like GS's very fast bandwidth and ultra low overhead for some operations which are expensive on just about any other GPU out there thanks to its design)

The "like GS's" part confuses me. I have no idea what that's supposed to mean, but his entire comment makes me most curious as to whether or not Microsoft and their own in-house teams might be even remotely on the same page regarding these potential algorithms that he says won't work very well on a PC or PC-like architecture. I'm not saying it's why they designed Durango the way they did, but boy would I ever love to hear more about what exactly he meant. It would be nice if it's something Microsoft thought about and possibly included some information on in their dev kit documentation or code examples.
 
GS=Graphics Synthesizer. What the PS2's GPU was called.

Panajev has always loved the PS2 architecture...one of the few I suppose.
 
Ohh, so that's what he meant. Clearly not using my head. I should have put two and two together when he mentioned the PS2 the first time.

Strange, I would've thought a lot of devs enjoyed the PS2, or perhaps it only seemed that way because it sold as well as it did and had such an incredible games lineup.
 
How does the existence of a new card from AMD make the 1.2TFlop rumor any less likely? The existence of the GHz 7870 doesn't mean the PS4 GPU has magically been bumped over 2TFlops.

It's a mistake acting like either system's GPU is "based on" any particular AMD card. Both are made to order APUs using GCN building blocks in a requested specification. Any resemblance to a retail videocard is coincidental and has no bearing on the final console configuration or clock-speeds.

Because the 1.2 TFLOP rumour is just that, a rumour. A 1.2 TFLOP GPU is roughly a 7770, which draws around 80 W; the 7790 is just shy of 1.8 TFLOPs and draws around 85 W (while using much faster RAM).

Combine this with the fact that Sony having 1.84 TFLOPs to MS's 1.2 TFLOPs puts them way behind Sony and would make them look inferior, and it is entirely possible that the 7790 or something similar made its way into the NextBox at some point in its development to narrow the gap between the two consoles.

I'm not saying it makes the rumours obsolete, but don't rely on old rumours too much.
 
I'm not saying it makes the rumours obsolete, but don't rely on old rumours too much.
And everybody else with some experience is saying don't rely on wishful thinking and speculation, because that road always ends in tears.

And what you're engaging in is wishful thinking and speculation. Hence, you're setting yourself up for disappointment.
 
Because the 1.2 TFLOP rumour is just that, a rumour. A 1.2 TFLOP GPU is roughly a 7770, which draws around 80 W; the 7790 is just shy of 1.8 TFLOPs and draws around 85 W (while using much faster RAM).

The rumoured Durango GPU is not "roughly a 7770"... Yes, the performance is, but there is a reason it's achieved in a totally different way, which is more CUs at a significantly lower clock. Durango's CUs won't draw as many watts as a 7770's or Bonaire's. I don't have any problem believing that Durango's GPU will be based on GCN 1.1, but other than that, Bonaire is just another configuration of that tech and isn't super important with regards to Durango, even if Durango has 14 CUs with two disabled.
 
And everybody else with some experience is saying don't rely on wishful thinking and speculation, because that road always ends in tears.

And what you're engaging in is wishful thinking and speculation. Hence, you're setting yourself up for disappointment.

It didn't end in tears with a recent major unexpected spec bump to another console, though.

Hopefully we'll find out more soon enough. GDC approacheth, as does a Durango unveil event alleged for April 26.

I certainly hope we get major leaks out of GDC. Hundreds or thousands of devs converged; somebody's gotta talk. And the good news is it begins Monday.
 