Xbox One (Durango) Technical Hardware Investigation

2) The display planes are totally separate things that AMD likely had nothing to do with (it's distinctly MS tech). It isn't entirely clear how flexible they are in terms of letting devs move resources from one plane to another.
It's a good point. Since the rumored specs include an HDMI input, I could see one plane being the game, one being console notifications, and the third being the HDMI input for picture-in-picture and an always-visible dashboard.
 
The display planes will probably be used for a lot of things, and the power saving is decent, but it's not astronomical. IIRC, in another thread the estimate given was 1 CU for 1080p 60 fps emulation of the display planes, on either the PS4 or 720 GPU.
 
Depends what scaling needs to be done. Best case is just three 1080p buffers composited with alpha, which is no effort at all. When you throw in scaling, depending on algorithm you can eat a lot more processing power. But even then, it should be scaling of one buffer only. UI will be natively rendered as that's the whole point! So I doubt the savings are anything substantial - a few ms on 1 CU. They're there to achieve the same workload in less silicon, but it won't make a massive difference to the system performance.
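To put a rough number on "no effort at all" (my own illustrative figures, not anything from the leaks):

```python
# Back-of-envelope cost of compositing three 1080p planes with alpha at
# 60 fps. All numbers are my own illustrative guesses, not leaked specs.

width, height, fps = 1920, 1080, 60
planes = 3

pixels = width * height
blends_per_frame = pixels * (planes - 1)   # two blend passes merge three planes
flops_per_blend = 8                        # ~a multiply-add per RGBA channel, rough guess

gflops = blends_per_frame * flops_per_blend * fps / 1e9
cu_gflops = 64 * 2 * 0.8                   # one GCN CU at the rumored 800 MHz

print(f"compositing: ~{gflops:.1f} GFLOPS "
      f"(~{gflops / cu_gflops * 100:.0f}% of a single CU)")
# => ~2.0 GFLOPS, around 2% of one CU - basically free, as said above
```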
 
Creating separate silicon for these functions does guarantee that they will be able to achieve consistent performance regardless of system load.

Also, wouldn't the compute cost depend on the quality of the scaling? From the PC media playback side I know of scaling algorithms that, when implemented as shaders, can be too much for entire low-end GPUs at 1080p/24 fps, let alone 30 or 60 fps. Scaling that doesn't produce noticeable artifacts can be very computationally intensive, especially when the algorithm has to deal with variable source material.
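To give a very rough feel for how the cost grows with kernel size (illustrative tap counts only, nothing to do with whatever scaler Durango actually uses):

```python
# Rough per-second ALU cost of scaling a single buffer to 1080p at 60 fps
# with different resampling kernels. Tap counts are the usual 2D footprints;
# flops-per-tap is a guess, so treat the absolute numbers loosely.

out_pixels_per_sec = 1920 * 1080 * 60

kernels = {
    "bilinear": 4,    # 2x2 footprint
    "bicubic": 16,    # 4x4 footprint
    "lanczos3": 36,   # 6x6 footprint
}
flops_per_tap = 4

for name, taps in kernels.items():
    gflops = out_pixels_per_sec * taps * flops_per_tap / 1e9
    print(f"{name:9s} ~{gflops:5.1f} GFLOPS")
# bilinear  ~  2.0 GFLOPS
# bicubic   ~  8.0 GFLOPS
# lanczos3  ~ 17.9 GFLOPS
```

And the adaptive, artifact-avoiding scalers I mentioned from the PC media playback side don't reduce to a fixed tap count at all, which is where the cost really blows up.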
 
Does that mean that Bonaire is GCN 1.0? It's a question that seems hard to get an answer to. :p

This page of the Anandtech review explains the situation.

In short: AMD's product designations have gotten very muddled, and they don't seem to be doing any versioning of the different evolutions of GCN at all. You also can't differentiate by product family (Sea Islands, Southern Islands), because AMD now uses those family names to refer to the range of products introduced within a release window rather than as an architectural distinction. So, absent any official way to distinguish the updated implementation of GCN in Bonaire from the prior one, Anandtech refers to it as GCN 1.1.

As for what is actually different; Anandtech says:

In this new microarchitecture there are some changes – among other things the new microarchitecture implements some new instructions that will be useful for HSA, support for a larger number of compute work queues (also good for HSA) and it also implements a new version of AMD’s PowerTune technology (which we’ll get to in a bit) – but otherwise the differences from Southern Islands are very few. There are no notable changes in shader/CU efficiency, ROP efficiency, graphics features, etc. Unless you’re writing compute code for AMD GPUs, from what we know about this microarchitecture it’s likely you’d never notice a difference.
 
Yes. I already said, "When you throw in scaling, depending on algorithm you can eat a lot more processing power." ;)

But then you followed up with:

But even then, it should be scaling of one buffer only. UI will be natively rendered as that's the whole point! So I doubt the savings are anything substantial - a few ms on 1 CU.

That's why I pointed out that even scaling one image buffer to 1080p can consume a lot more compute resources than that if you want higher quality. These consoles will end up being hooked up to 4k displays within their product life. I expect that that is being factored in WRT the quality of the scaling that will be necessary.
 
Upscaling one buffer can consume as much or as little processing power as you want. It's hard to quantify GPU savings as such. If there isn't dedicated hardware for the job, I'd expect the developers to use a simpler, less effective upscaling algorithm. So the gain is more in quality than performance, as the minimum performance saving is pretty small (a linear filter).

4k displays will be upscaling a 1080p feed. The consoles won't be rendering native 4k buffers, not even in UI.
 
To me the most interesting thing about Bonaire is the improved power efficiency (GFLOPS per watt) that it has versus Cape Verde. Depending on the benchmark and site, it looks to be in the ballpark of a 20+% improvement (1.3 TFLOPS at 80 W vs 1.8 at 85 W).

I'm beginning to think that the APU for Durango will be in the range of 75W power consumption (GPU 40-50W) and around 125W for the whole system.
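For anyone who wants to check the arithmetic (board power figures vary by site and benchmark, so this is ballpark only):

```python
# Quick perf-per-watt check using the ballpark figures quoted above.
# Board power numbers vary by site and benchmark, so treat this as rough.

cape_verde_tflops, cape_verde_watts = 1.3, 80
bonaire_tflops, bonaire_watts = 1.8, 85

cv = cape_verde_tflops * 1000 / cape_verde_watts   # GFLOPS per watt
bn = bonaire_tflops * 1000 / bonaire_watts

print(f"Cape Verde: {cv:.1f} GFLOPS/W")             # ~16.3
print(f"Bonaire:    {bn:.1f} GFLOPS/W")             # ~21.2
print(f"improvement: {(bn / cv - 1) * 100:.0f}%")   # ~30%, depending on inputs
```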
 
You want the GPU to see a certain bandwidth and those are two solutions to get the same outcome.
They're clear indications of some pretty deep customization of the GPU, which suggests no direct link to any PC-only GPU part. Not to mention, Durango is an APU and the 7790 is a discrete chip. Even more differences there.

2) The display planes are totally separate things that AMD likely had nothing to do with (it's distinctly MS tech).
I don't think so. The display planes are allegedly (according to leaks) capable of discarding occluded pixels on underlying planes, which suggests tying into the fundamental workings of the GPU in a significant way. If the leaked info is correct, then this isn't something MS tacked on with shoestring and a band-aid; it was specifically engineered by AMD for this GPU.

3) I think the numbers match well enough that it sounds likely that AMD based their design for 7790 on Durango's GPU.
There's no evidence that this is actually the case; your reasoning is based on speculation only. The silicon differences - the eDRAM, the display planes, the move engines, no PCIe interface, a custom interface to the CPU cores and more - all point towards no family resemblance other than the shader array, and even that may not hold depending on the specific configuration of the CUs in question. Will Durango be capable of double precision? Is it GCN 2.0 or not? How much cache, how many registers, how much local store compared to the 7790? And so on and on and on.

So Durango is, for all intents and purposes, completely custom silicon. There's little to no meaningful comparison to a discrete PC part. You don't just cut the GPU bit out of Durango's project files and paste it into a PC; that's simply not a realistic scenario.

Is there anything else going on? The 7790 gets 1.8 TFLOPS at 1 GHz; Durango's GPU would "only" get 1.54 TFLOPS at that clock. The 2 extra CUs maybe?
What 2 extra CUs? There aren't any 2 extra CUs riding in like the cavalry to save the fanboys' day. It's already been established that Durango is 1.2 TF; all credible rumors point to that. I'm tempted to yell: enough with the wishful thinking shit already.
 
I do agree that the wishful thinking is tiring; I've barely read anything in the console forum for a while.

The Durango GPU is every bit as close to the 7790 as to the 7770 (+/- 2 CUs), and even the 7750 if you look at the leaked clock speed. The glass-half-full, glass-half-empty dilemma. :LOL:

And that's without considering the fact that one is an APU, linked to two memory pools, one of them on-chip, etc., as you pointed out.

Overall I agree it is getting tiring; the same applies to the GCN / GCN 1.1 / GCN 2.0 bullcrap.
 
What 2 extra CUs? There aren't any 2 extra CUs riding in like the cavalry to save the fanboys' day. It's already been established that Durango is 1.2 TF; all credible rumors point to that. I'm tempted to yell: enough with the wishful thinking shit already.

I'd assume those 2 extra CUs are on the 7790 as opposed to the Durango GPU, after reading that post.

The 7790 is 14 CUs, right?
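If so, the peak-FLOPS numbers line up with the usual CUs x 64 lanes x 2 ops per clock formula (rumored Durango clock plugged in for the last line):

```python
# Peak single-precision throughput for a GCN shader array:
# CUs x 64 lanes x 2 ops (FMA) x clock.

def gcn_peak_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(gcn_peak_tflops(14, 1.0))   # 7790 at 1 GHz:            ~1.79 TFLOPS
print(gcn_peak_tflops(12, 1.0))   # 12 CUs at the same clock:  ~1.54 TFLOPS
print(gcn_peak_tflops(12, 0.8))   # rumored Durango config:    ~1.23 TFLOPS
```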
 
There aren't any 2 extra CUs riding in like the cavalry to save the fanboys' day. It's already been established that Durango is 1.2 TF; all credible rumors point to that. I'm tempted to yell: enough with the wishful thinking shit already.

Why does it have to be riding in on the cavalry to save the fanboys' day to simply be speculating on these things? Weren't all the same rumors, from more or less the same sources, also fully convinced the PS4 would have just 4 GB of GDDR5?

I think it's unreasonable to expect people not to suspect some surprises after the way Sony surprised everybody at their event by doing the very thing that so many said was impossible. And compared to the more recent PS4 information, what we have on Durango is very old. I'm not saying there isn't a strong chance that it's all accurate, but we know the gist of the information likely dates to early 2012, no matter how many times VGLeaks attempts to deny it. What's more likely is that they simply got this 2012 information in 2013, hence why they call it 2013 information, which by their criteria it technically qualifies as. And even if I'm wrong, nobody but Microsoft has the final word on what is inside their machine.

I don't see anything wrong with assuming or even speculating that Microsoft might have changed something, unbeknownst to all of us. It doesn't mean they had to have made changes in the couple of days since the PS meeting, but maybe they committed to changes the very month after the Durango conference in 2012. I don't know; anything is possible.
 
I'd assume those 2 extra CUs are on the 7790 as opposed to the Durango GPU, after reading that post.
Yeah, that could be so. My mistake if that's the case.

Why does it have to be riding in on the cavalry to save the fanboys' day to simply be speculating on these things?
People simply making up stuff a certain way (because they'd prefer it to be like that) isn't worthwhile. It's just baseless jabbering, and there are better sites on the web for that kind of mindless junk posting.

I think it's unreasonable to expect people not to suspect some surprises after the way Sony surprised everybody at their event by doing the very thing that so many said was impossible.
That's the Gambler's Fallacy in a nutshell. Well, the reverse of the Gambler's Fallacy, really. Still, it's not a valid basis for such an assumption. Also, who actually said what Sony did was "impossible"?

Sometimes a cigar is just a cigar, you know? Not every manufacturer pulls rabbits out of their hats all the time.

And compared to the more recent PS4 information, what we have on Durango is very old.
Games consoles are planned years in advance (especially true of a seasoned developer like MS), so that doesn't actually mean anything. If plans change, it's typically the result of problems like unsatisfactory manufacturing yields, not a reaction to competitors' actions. Consoles getting bumps in spec due to competitors is not really the norm.

The original Xbox got a nerf on GPU clock and a bump on CPU clock. Same thing happened with the GameCube. The PS3 had its GPU and GDDR clocks nerfed. A few times we've seen bumps in RAM - the PS4 now, of course, and the 360 as well - but these were not in reaction to any competitor, as those consoles were the first to be announced for their respective generations. Sega slapped another CPU into the Saturn and the result was totally half-assed. Changes to existing silicon, on the other hand, are pretty much unheard of, since they lead to such huge setbacks in schedule. Even just a respin to fix hardware bugs/errata means months of lost time; actually changing the design would mean many more months of validation, a new tapeout and all that jazz. At a cost of millions, one might add. MS isn't going to add extra CUs at a late stage to bring their GPU up in performance just to compete with the PS4 in paper specs and packaging checkboxes.

we know the gist of the information likely dates to early 2012, no matter how many times VGLeaks attempts to deny it. What's more likely is that they simply got this 2012 information in 2013, hence why they call it 2013 information, which by their criteria it technically qualifies as.
If MS's 2012 information isn't 2013 information in 2013, then quite frankly they won't be launching in 2013. Right now MS should be preparing to start production soon. If a delay happens with a PC GPU design it doesn't really matter, because the vendor will have other GPUs to sell (well, except that time when the GeForce 480 was delayed and Nvidia had already EOL'd the 280 months prior). However, if a console GPU is delayed, no consoles get made or sold. That cannot be allowed to happen. Not just MS is at stake, but also developers who will have games lined up - games that have cost as much as tens of millions to produce. There are also quarterly reports to consider, things of that nature. MS stock would take a beating if an important product launch like Durango got delayed. Executives' heads could roll.

So they're not going to risk any of that.
 
Upscaling one buffer can consume as much or as little processing power as you want. It's hard to quantify GPU savings as such. If there isn't dedicated hardware for the job, I'd expect the developers to use a simpler, less effective upscaling algorithm. So the gain is more in quality than performance, as the minimum performance saving is pretty small (a linear filter).

Ah, I see now.

4k displays will be upscaling a 1080p feed. The consoles won't be rendering native 4k buffers, not even in UI.

Well, yeah. I wasn't arguing for the usefulness of split display planes here as much as for free (hopefully high-quality) upscaling, though now that I think about it, having the upscaling done separately for each plane could yield a quality benefit if the image characteristics of those planes differ significantly.
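Something like this toy sketch is what I have in mind - completely hypothetical, none of it based on actual Durango details, just illustrating why giving each plane its own filter before compositing could help:

```python
# Completely hypothetical sketch - none of these names come from any real
# Durango documentation. It just illustrates why resampling each plane with
# a filter suited to its content, *before* compositing, can beat scaling
# the already-composited frame.

def resample(name, src, dst, kernel):
    # Stand-in for a real resampler; just describes what would happen.
    if src == dst or kernel is None:
        return f"{name} passed through at {dst}"
    return f"{name} {src}->{dst} via {kernel}"

def alpha_blend(layers):
    # Stand-in for back-to-front alpha compositing at the target resolution.
    return " over ".join(reversed(layers))

TARGET = (1920, 1080)
PLANES = {                                  # assumed source formats, illustrative only
    "game":    ((1600, 900),  "lanczos"),   # sub-native render, smooth kernel
    "ui":      ((1920, 1080), None),        # native UI, never filtered
    "hdmi_in": ((1280, 720),  "bilinear"),  # external feed, cheap filter is fine
}

frame = alpha_blend([resample(n, s, TARGET, k) for n, (s, k) in PLANES.items()])
print(frame)
```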
 
I don't see anything wrong with assuming or even speculating that Microsoft might have changed something, unbeknownst to all of us. It doesn't mean they had to have made changes in the couple of days since the PS meeting, but maybe they committed to changes the very month after the Durango conference in 2012. I don't know; anything is possible.


Because without a sensible rumor or a reason, all it leads to is baseless nonsense catering to an individual's own wishes for a product they aren't designing...
 
It's all senseless, since everything we know about Durango is based on rumors and "leaks", right?
 
All fair points that I generally agree with, but I'm thinking there's a possibility that Microsoft, way back in early 2012 when they revealed these specifications, had other plans they were considering and testing - plans that may have gone as far as actually being built in small numbers for testing - even while being more than comfortable with developers targeting the specs they'd already provided.

Keep in mind where I'm approaching this speculation from: I'm someone who absolutely loves the Durango specs as I see them. I think it's a very well-balanced and well-thought-out system that developers will get quite a bit out of. So, even should the specs turn out exactly as they are now rumored, there will be no complaints from me. That said, I suspect Microsoft would have made multiple plans - not plans with drastically different architectural approaches, but plans interchangeable enough that, depending on what they decide to do, they can include or exclude components as they desire, with contingencies in place to accommodate whatever changes that decision necessitates.

I understand it's an unbelievably complex process, but precisely because it's so complex, I doubt even more that they would ever box themselves into any one specific design.

What are the chances they've been weighing 2 or more potential retail box designs/dimensions? High, because that's precisely what they did with the Xbox 360, and we've seen pictures of all the different designs and dimensions they were considering.

What are the chances they've been weighing 2 or more possible retail TDP figures?

What are the chances they've been weighing more than one GPU design? What would it really cost them to have a few of their potential designs, including the one we know about, built for the purpose of testing? And let's say, for argument's sake, that all their designs "work" about as well as you would expect them to, and that while devs may have dev kits with one of these designs -- perhaps their favorite -- they haven't exactly gone into retail mass production with that part just yet, and so maybe there was an opening at some point -- if they planned for it -- where they could have opted to swap one part in and another out.

I know I'm using the words swap in and swap out, but I don't mean to imply it's that easy, only that they may have always planned around a very flexible design that could either evolve or devolve over time prior to mass production.
 
If only the Feb 2012 leak were where all the info came from.

VGLeaks has late 2012 info and even some from this year.
 