Xbox One (Durango) Technical hardware investigation

The ACEs are attached to different rasterizers and geometry engines, same as Pitcairn and Bonaire, while Cape Verde has them attached to one.
The ACEs aren't attached to any geometry engines or rasterizers. The ACEs offer a path to the shader array separate from the graphics pipeline. That's what the ACEs are about. One ACE is basically a trimmed down command processor which is only capable of handling compute shaders and not the complete graphics pipeline.
 
The 102 GB/s isn't shared with anything on Durango. It belongs 100% to the GPU and memory clients that are part of the GPU, such as the Move Engines. It isn't shared with the CPU at all. The 68 GB/s of DDR3 bandwidth is shared with the CPU.
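
As a quick sanity check, both of those figures fall straight out of the rumored interface widths and clocks. A back-of-envelope sketch (the 1024-bit ESRAM interface at 800 MHz and the 256-bit DDR3-2133 setup are from the VGLeaks rumors, not confirmed specs):

# Back-of-envelope peak bandwidth from rumored bus widths/clocks.
# Figures below are from the VGLeaks rumors, not confirmed specs.

def peak_gb_s(bus_bits, transfers_per_sec):
    """Peak bandwidth in GB/s: bus width (bytes) * transfer rate."""
    return (bus_bits / 8) * transfers_per_sec / 1e9

esram = peak_gb_s(1024, 800e6)  # 1024-bit ESRAM @ 800 MHz -> 102.4 GB/s
ddr3 = peak_gb_s(256, 2133e6)   # 256-bit DDR3-2133 -> ~68.3 GB/s

print(f"ESRAM: {esram:.1f} GB/s, DDR3: {ddr3:.1f} GB/s")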

Durango has 102 GB/s of bandwidth to 32 MB, which can't be sustained for any length of time when you are talking about pulling or pushing GPU data to and from main memory.

I can see ESRAM allowing Durango to mimic the 360 memory scheme, where writes to main memory are minimized and main memory is mostly used to provide geometry and texture data. But there is nothing to say that Durango's virtual bandwidth to main memory is as large as the 7790's bandwidth.

The 7790 GPU running at 1 GHz is probably a result of its faster memory.
 
The ACEs aren't attached to any geometry engines or rasterizers. The ACEs offer a path to the shader array separate from the graphics pipeline. That's what the ACEs are about. One ACE is basically a trimmed down command processor which is only capable of handling compute shaders and not the complete graphics pipeline.

Yeah, sorry, I think I interpreted the diagram wrongly. Either way, what I am saying is that Durango's setup has more in common with Pitcairn and Bonaire than it does with Cape Verde.
 
Durango has 102 GB/s of bandwidth to 32 MB, which can't be sustained for any length of time when you are talking about pulling or pushing GPU data to and from main memory.

I can see ESRAM allowing Durango to mimic the 360 memory scheme, where writes to main memory are minimized and main memory is mostly used to provide geometry and texture data. But there is nothing to say that Durango's virtual bandwidth to main memory is as large as the 7790's bandwidth.

The 7790 GPU running at 1 GHz is probably a result of its faster memory.

I think that the ESRAM bandwidth will be more available than you are giving it credit for.

In the extreme cases:

When there isn't data present in the ESRAM to be read, the full bandwidth will be usable for writes.

When the ESRAM is full, the full bandwidth will be available for reads.

Most of the time, though, the GPU will be both reading from and writing to this pool fluidly and in combination with reads and writes to main memory. The DMEs will also be working to enable maximum bandwidth utilization.

In operation, I'd actually expect Durango's GPU to typically have more than 102 GB/s available to it.
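
To put a toy number on that expectation (the utilization factor and CPU share below are my own assumptions, purely illustrative):

# Toy model: effective GPU bandwidth when ESRAM and DDR3 are used together.
# The utilization factor and the CPU's DDR3 share are assumptions, not known values.

ESRAM_PEAK = 102.4   # GB/s, GPU-exclusive
DDR3_PEAK = 68.3     # GB/s, shared with the CPU

esram_util = 0.85    # assumed achievable ESRAM utilization with mixed reads/writes
cpu_ddr3_use = 20.0  # assumed GB/s consumed by the CPU and other clients

gpu_effective = ESRAM_PEAK * esram_util + (DDR3_PEAK - cpu_ddr3_use)
print(f"Effective GPU bandwidth: ~{gpu_effective:.0f} GB/s")  # ~135 GB/s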
 
Yeah, sorry, I think I interpreted the diagram wrongly. Either way, what I am saying is that Durango's setup has more in common with Pitcairn and Bonaire than it does with Cape Verde.

Clearly.


  1. Take a Bonaire
  2. Disable a couple of CUs and lower the clockspeed for improved yields and lower power/thermals.
  3. Add some Special Sauce™
And there you have Durango's GPU.
 
I think it's quite crazy for anybody to assume, based on what we know now, that the 7790 will actually have access to more bandwidth than Durango's GPU. That simply isn't true. At least I don't see how it can be true.

The Durango GPU can simultaneously access bandwidth from main RAM as well as ESRAM. Even with the main RAM bandwidth being shared with the CPU, Durango still appears to have quite a bit more memory bandwidth than the 7790 does. Even the memory example we possess regarding Durango shows their belief that Durango's GPU will be able to achieve and even exceed 102 GB/s.

It makes sense for us to believe that Durango's GPU has more in common with Bonaire than with anything else that AMD is offering right now.
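
For reference, the raw peaks being compared, with the 7790's 128-bit 6 Gbps GDDR5 against Durango's rumored two pools:

# Peak-bandwidth comparison, Durango vs. 7790 (Bonaire).
# Durango figures are per the rumors; how much of the DDR3 the CPU eats is unknown.

durango_total = 102.4 + 68.3           # ESRAM + DDR3, simultaneously accessible
hd7790 = (128 / 8) * 6e9 / 1e9         # 128-bit GDDR5 @ 6 Gbps -> 96 GB/s

print(f"Durango aggregate peak: {durango_total:.0f} GB/s vs 7790: {hd7790:.0f} GB/s")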
 
No, but if the yields are bad the launch may as well not happen. You're assuming devs could talk to Microsoft about the PS4; it would probably be a massive, massive violation of the NDA.

They were perfectly willing to tell Sony that MS was going with 8 GB of RAM, by the sounds of it. And it is far more important to launch with fewer units in 2013 than to launch with more in 2014.

Also, if you really want Microsoft to make changes, they can, but they won't be launching this year.

Again, it depends entirely on the changes we are talking about. Until we specify what changes we are talking about, there is no discussion to be had.

How long do you think a console takes to go from paper to machine? Because I am led to believe that, with all the design work, all the manufacturing and respins, it takes a very, very long time, and they are going to need to start manufacturing them before the launch date, probably months before.

Sure, but you are assuming a lot here. First of all, you assume any changes we are talking about are major architectural changes. That's not a given. There is a spectrum of possible changes that can be made to improve performance since the last major update to devs was made.

Secondly, you act as if MS would be doing a knee-jerk reaction. We all agree that is unlikely...but as I pointed out to Shifty, go ask folks working on competitive engineering projects (I've done some myself) how intelligent it is to put all your eggs into a single design without any flexibility. That's not how things are done. The much more likely scenario would be for MS/AMD to have several designs, all being distinct variations of their baseline architecture, with variable parameters that can be adjusted/scaled depending on how certain external factors affect each design's ability to meet certain goals. One such goal would be to perform favorably against PS4 in rendering. So long as their other goals are met (and I think their broader STB goals would be baked into the baseline architecture as top priority) they will tweak said parameters as necessary until they are happy with one design.

It wouldn't take them any time at all to redesign anything, because it's already been designed. They would be choosing between slightly varied designs at that point. The question is how far could they go? Well, in theory, they could go to ridiculous lengths but realistically we know they need some acceptable number of units for 2013. So the focus would likely shift to yields. If yields are lower than other design options they then decide if the lower output is acceptable for 2013. If it is, great. If it isn't, they still have the option of simply making more units to overcome the low yields by brute force. That costs money...so at that point it would come down to how much MS is willing to throw at holding that design.

I agree there are lots of considerations to be made, but there is no sense making assumptions that are unlikely to bear a resemblance to reality, especially when they serve only to kill forthcoming discussion. We dunno what kind of changes are in the cards. Even when we guesstimate some of those, we dunno how they designed their power restrictions and heating solutions. Even guessing at those to spur discussion, we still dunno how yields would be affected by any particular changes that may or may not be in the cards. Furthermore, we dunno how MS would handle lower yields (hint: they don't just automatically mean delays), nor how much cash they would consider throwing at low yields to get acceptable figures for 2013 shipments.

We dunno much of anything on MS's end in terms of their engineering priorities for the actual processors involved. The info from 2012, which wouldn't necessarily reflect any changes they would be making as they finalize their design, was merely for developer guidance.

Btw, in the case where they have various designs with slightly varied parameters between them, they wouldn't tell devs about anything but the lowest common denominator to avoid forcing devs to pare back content right before launch. :cool:
 
Clearly.


  1. Take a Bonaire
  2. Disable a couple of CUs and lower the clockspeed for improved yields and lower power/thermals.
  3. Add some Special Sauce™
And there you have Durango's GPU.

So, it's more or less a 7770 again... or am I wrong?
 
So, it's more or less a 7770 again... or am I wrong?

The only way Durango's GPU was ever comparable to a 7770 was in the theoretical maximum TF rating and the number of ROPs. There are more differences than similarities between the two.
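
To show where that comparison comes from and where it stops, here are the two headline numbers, using the rumored Durango configuration (12 CUs at 800 MHz, not confirmed):

# Where the 7770 comparison comes from: theoretical FLOPs and ROP count.
# Durango numbers are the rumored ones (12 CUs @ 800 MHz), not confirmed.

def tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000  # 2 ops/clock per ALU (FMA)

hd7770 = tflops(640, 1.0)   # Cape Verde: 10 CUs @ 1 GHz -> 1.28 TF, 16 ROPs
durango = tflops(768, 0.8)  # rumored: 12 CUs @ 800 MHz -> ~1.23 TF, 16 ROPs

print(f"7770: {hd7770:.2f} TF, Durango: {durango:.2f} TF")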
 
Btw, in the case where they have various designs with slightly varied parameters between them, they wouldn't tell devs about anything but the lowest common denominator to avoid forcing devs to pare back content right before launch. :cool:

Oddly enough, that actually mirrors what some developers have been communicating over the last month. Unless all of them are playing coy.

And speaking of which, all of next week is GDC 2013. Thousands of developers and the industry's press all congregating in one place. It's highly doubtful that Microsoft will be able to maintain secrecy for those 5 days. I'm pretty sure we're about to hear something we could all hang our speculative hats on.
 
I understand being skeptical of surprise changes. But Microsoft has likely known about or been planning around these things for far longer than any of us realize, and acting as if they are somehow completely impossible at this point, when we don't know with 100% certainty what their plans actually are, is a bit crazy.

For all we know, Bonaire's general architectural makeup was always what Microsoft intended. I guess we will know more in time, but I wouldn't be the least bit surprised if Microsoft is more prepared to make changes than people suspect.

Oddly enough, that actually mirrors what some developers have been communicating over the last month. Unless all of them are playing coy.

And speaking of which, all of next week is GDC 2013. Thousands of developers and the industry's press all congregating in one place. It's highly doubtful that Microsoft will be able to maintain secrecy for those 5 days. I'm pretty sure we're about to hear something we could all hang our speculative hats on.

More or less agree. It's possible that devs know about the lowest common denominator and have to be cryptic because they aren't sure where things may end up, which is why they can say, accurately, that the specs are more or less right. Better for devs to have an idea of where the floor is, so they don't aim high and have to pull back. Better they aim lower and then find out they have more breathing room later, which is essentially what Sony did for developers by telling them 4 GB and then hitting them with 8 GB of GDDR5. Maybe Microsoft is doing a little of the same? We'll see, I suppose.
 
I think it's quite crazy for anybody to assume, based on what we know now, that the 7790 will actually have access to more bandwidth than Durango's GPU. That simply isn't true. At least I don't see how it can be true.

The Durango GPU can simultaneously access bandwidth from main RAM as well as ESRAM. Even with the main RAM bandwidth being shared with the CPU, Durango still appears to have quite a bit more memory bandwidth than the 7790 does. Even the memory example we possess regarding Durango shows their belief that Durango's GPU will be able to achieve and even exceed 102 GB/s.

It makes sense for us to believe that Durango's GPU has more in common with Bonaire than with anything else that AMD is offering right now.

No, it's not (crazy, that is). You're basically insinuating that Bonaire and Durango are similar GPUs. I have no problem with that assumption. My problem with your argument is: if Bonaire and Durango are similar, then why pair Bonaire with higher clock speeds but less memory bandwidth?

If an 800 MHz Bonaire-like GPU (Durango) with an asymmetrical memory pool is effectively given more bandwidth than is found on Bonaire, how does Bonaire benefit from an increase in GPU clocks but a reduction in memory bandwidth? That makes no logical sense when GPUs tend to be bandwidth hungry, unless MS went with an inefficient design (not enough GPU clock but more than enough bandwidth), which I highly doubt.
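
One way to frame that trade-off is bandwidth per unit of compute. A rough sketch, where Durango's effective bandwidth is my own assumption rather than a known figure:

# Bytes of bandwidth available per FLOP, as a rough balance metric.
# Durango's effective bandwidth is an assumption; Bonaire's specs are the desktop part's.

bonaire_tf, bonaire_bw = 1.79, 96.0   # 896 SPs @ 1 GHz, 96 GB/s GDDR5
durango_tf, durango_bw = 1.23, 135.0  # rumored specs, assumed effective bandwidth

print(f"Bonaire: {bonaire_bw / (bonaire_tf * 1000):.3f} B/FLOP")  # ~0.054
print(f"Durango: {durango_bw / (durango_tf * 1000):.3f} B/FLOP")  # ~0.110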
 
It isn't like they paired Bonaire with less memory bandwidth than Durango's GPU on purpose. I think that's just how things more or less ended up shaking out once Microsoft decided they wanted ESRAM instead of GDDR5 and the overall memory setup that Durango has.

Bonaire wasn't given a reduction in memory bandwidth or anything else, not really. This is apparently AMD's ideal view of what Bonaire should be at its best. My view is that Bonaire as we know it is the desktop part, and perhaps Durango's GPU's roots lie with Bonaire, but it is scaled back in some respects (CUs, core clock, TMUs) and customized or enhanced in others (more memory bandwidth, Move Engines, low-latency ESRAM).

If anything was changed or reduced, it was on Durango's part, not the Bonaire desktop part. So the 7790 desktop part is the parent, and Durango GPU is the child.
 
What is interesting is why MS decided to go with ESRAM + DDR3 memory instead of GDDR5. I cannot believe that they did not know about the viability of GDDR5 memory chips and that, for some reason, only Sony "waited" for it.

So the two possibilities we have here are:
1. The specs that are "known" now from the VGLeaks are correct, and that is what we will see in the final silicon. But perhaps this allows MS to compete on price: a $199-299 Durango?

2. The specs we know now are not the final ones, and lots of heavy customization went into the design, meaning that MS has managed to design a machine that manages to "do more with less". Instead of going Brute Force, they went Bruce Lee. There have to be more reasons MS went with ESRAM and DDR3 memory, more than the ones we know about...
 
Bonaire can't really achieve more bandwidth than it already has on a 128-bit bus (without eDRAM/ESRAM). On the other hand, they did increase the clocks over Durango, so there's got to be an obvious benefit there.

I think the reality is that there's no single core-power-to-bandwidth sweet spot. Some workloads will be compute limited and others will be bandwidth limited, so Durango will be better suited to some workloads while the 7790 is better suited to others.
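
That's essentially a roofline argument: attainable throughput is the lesser of peak compute and arithmetic intensity times bandwidth. A minimal sketch, with Durango's effective bandwidth assumed rather than known:

# Roofline sketch: attainable GFLOPS = min(peak compute, intensity * bandwidth).
# Shows that which chip wins depends on the workload's arithmetic intensity.

def attainable_gflops(peak_gflops, bw_gb_s, flops_per_byte):
    return min(peak_gflops, bw_gb_s * flops_per_byte)

for intensity in (4, 16, 64):                    # FLOPs per byte fetched
    d = attainable_gflops(1230, 135, intensity)  # Durango (assumed effective BW)
    b = attainable_gflops(1790, 96, intensity)   # 7790/Bonaire
    print(f"intensity {intensity:3d}: Durango {d:.0f} vs 7790 {b:.0f} GFLOPS")

At low intensity (bandwidth-bound) the Durango figures come out ahead; at high intensity (compute-bound) the 7790's extra clocks win, which is the point being made above.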
 
What is interesting is why MS decided to go with ESRAM + DDR3 memory instead of GDDR5. I cannot believe that they did not know about the viability of GDDR5 memory chips and that, for some reason, only Sony "waited" for it.

It has a higher up-front R&D cost but a lower lifetime BOM cost. As long as they sell X number of machines, they'll come out ahead with regards to total investment (R&D + manufacturing).
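
A hypothetical break-even illustration; every dollar figure here is invented purely to show the shape of the trade-off:

# Break-even point for higher R&D vs. lower per-unit BOM.
# All dollar figures are hypothetical, chosen only to illustrate the argument.

extra_rnd_cost = 100e6   # assumed extra R&D for the ESRAM design, $
bom_savings_unit = 15.0  # assumed per-console saving of DDR3 vs. GDDR5, $

break_even_units = extra_rnd_cost / bom_savings_unit
print(f"Break-even at ~{break_even_units / 1e6:.1f}M consoles")  # ~6.7M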

It is unlikely to match GDDR5 in all workloads, but could potentially match it in the majority of workloads. And it's not just ESRAM + DDR3. The DMEs also play a significant role in that. And as far as we know (or don't know) there could be more to it than just what has been rumored.

Then again, it could also not turn out that way. And perhaps it only matches GDDR5 in very narrow workloads. We just don't know enough at the moment to say how closely and in how many potential workloads performance will come to a purely GDDR5 solution.

Regards,
SB
 
I don't think that's what they're saying; they are saying more or less what is being said here, that it's likely the closest thing we can compare Durango's GPU to at this moment.

They did go for a very attention-grabbing headline, but they themselves also point out the differences between this GPU and the rumored Durango GPU.
 