Xbox One (Durango) Technical hardware investigation

I often wonder just how much of an efficiency advantage GCN has... well, Tom's Hardware benched the 7730. It's 384 GCN shaders at 800 MHz. With GDDR5 the bandwidth is almost exactly the same as the 6670's, which removes that variable.

It performs almost exactly the same as the 480-shader, 800 MHz VLIW5 6670. So there, the improvement per flop is about 25% for GCN: 384 GCN shaders perform like 480 VLIW5 shaders.

That sounds about right to me.
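
A quick sanity check of that per-flop figure (a minimal sketch in Python; the shader counts and 800 MHz clock are the numbers quoted above, and FP32 throughput is assumed to be 2 ops per shader per clock):

```python
# HD 7730 (GCN) vs HD 6670 (VLIW5), both with shaders at 800 MHz.
# If the two deliver roughly equal performance, per-flop efficiency
# scales inversely with their peak flops.
shaders_7730 = 384   # GCN
shaders_6670 = 480   # VLIW5
clock_ghz = 0.8

flops_7730 = shaders_7730 * 2 * clock_ghz   # ~614 GFLOPS
flops_6670 = shaders_6670 * 2 * clock_ghz   # ~768 GFLOPS

per_flop_gain = flops_6670 / flops_7730 - 1
print(f"GCN per-flop advantage at equal performance: {per_flop_gain:.0%}")  # ~25%
```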
 
It is actually ~8% faster with both having GDDR5 (it's not a perfect match in bandwidth), so it's more like +35% per flop. But that also depends heavily on the workload. Somewhat connected to this, another interesting thing is that with restricted bandwidth (both running 900 MHz DDR3), the advantage of the GCN-based 7730 grows to 22%. Now we are at a +52% advantage per flop for GCN. Obviously the better cache architecture helps here.
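
Folding the measured deltas into the same arithmetic (a sketch; the ~8% and ~22% speedups are the figures quoted above, everything else follows from the shader counts):

```python
# Per-flop advantage = measured speedup x the flop deficit the 7730 overcomes.
flop_ratio = 480 / 384        # 6670 flops / 7730 flops at equal clocks = 1.25

speedup_gddr5 = 1.08          # 7730 ~8% faster, both cards on GDDR5
speedup_ddr3 = 1.22           # 7730 ~22% faster, both on 900 MHz DDR3

print(f"GDDR5: {speedup_gddr5 * flop_ratio - 1:+.0%} per flop")  # ~+35%
print(f"DDR3:  {speedup_ddr3 * flop_ratio - 1:+.0%} per flop")   # ~+52%
```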
 
Good corrections. I hadn't really noticed that the 6670 actually has a bit less bandwidth with GDDR5.

I also spotted this at GAF, which is interesting:

[attached benchmark chart]


Now, looking up the A4-5000: it's 128 shaders at 500 MHz, which would put it at 128 GFLOPS. And it beats a 7900 GTX (~RSX, actually somewhat above it, as the GTX is clocked at 650 MHz) in that specific bench, anyway.

I think you could very roughly say GCN might be about 2x as efficient per flop as RSX/Xenos, then (yes, I know Xenos is better than RSX; it's still not even as advanced as very old AMD cards now).

That's pretty impressive. 10x on the GPU advancement (in real terms, with efficiency) should be a doddle for both consoles.
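
For reference, the A4-5000 figure works out like this (a minimal sketch; 2 FP32 ops per shader per clock is the usual assumption for GCN, and the shader count and clock are the ones quoted above):

```python
# A4-5000 integrated GPU: 128 GCN shaders at 500 MHz (figures from the post above)
shaders = 128
clock_ghz = 0.5
gflops = shaders * 2 * clock_ghz
print(f"{gflops:.0f} GFLOPS")   # 128 GFLOPS
```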
 
8 or 12 GB doesn't matter. MS is moving to a cloud-based development and compile method. So you dev toward your target HW, then load it from the cloud onto your device.
 
While better than Egypt, I still don't think that benchmark is a good measure of ALU performance. Shader complexity is still very low compared to modern AAA PC/console games.
 
Wouldn't one think shader complexity would actually increase GCN's advantage?

It seems like older GPUs were designed more around throughput, and shaders got more complex as time went on.
 
I would think so too. I just wanted to post the same as you did.
 
The RAM increase was put to rest, and with good reason: it wasn't possible in the short time remaining without affecting the number of units to be delivered at launch. But it was pretty easy to see for me, since they advertised 8 GB at E3; going to 12 or 16 would require higher-density memory, and I don't know if the Xbox One was already using the highest available.

And possibly a hardware change if there were no higher-density memory available. A bump in clock is far easier; hell, they can even do it when the console is already out. Sony had variable speeds on the PSP, and as time went on they changed from 266 to 333 MHz.
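
To make the density point concrete, here's a hedged sketch. It assumes the commonly reported Xbox One layout of sixteen 16-bit DDR3 chips filling a 256-bit bus; with that layout, total capacity is locked to chip count times per-chip density:

```python
# Hedged sketch: capacity options on a 256-bit DDR3 bus built from 16-bit chips.
# Assumes the commonly reported Xbox One layout: 16 chips x 16 bits = 256 bits.
bus_width_bits = 256
chip_width_bits = 16
chips = bus_width_bits // chip_width_bits      # 16 chips

for density_gbit in (4, 8):                    # per-chip density in gigabits
    total_gb = chips * density_gbit // 8
    print(f"{chips} x {density_gbit} Gb -> {total_gb} GB")
# 16 x 4 Gb -> 8 GB (the advertised config); 16 x 8 Gb -> 16 GB
# 12 GB would mean mixed densities or extra chips, i.e. a real hardware change.
```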
 
We don't really know why they didn't do it, or even if it was ever under serious consideration. *shrug*
 
Well, this article gives us a hint. To quote:

"Building a console like Xbox One entails years of planning, supply chain management, purchasing agreements for components, etc and is for the most part locked at the time you start to put development kits into the hands of developers."

This is most likely why they didn't upgrade to 12 GB: specs were locked as soon as devs received the kits, and devs did indeed receive devkits, I quote, "well before the consoles were officially presented".

Of course DF said the same for months, but it was ignored.
 
Glad I didn't.

Alright so I guess I don't know why you mentioned this...


However, they didn't start producing 360s till the end of September or something (69 days before launch), so the situation should be much better.

... if it wasn't to allay concerns about MS churning out consoles? By the way, 69 days isn't something to hang one's hat on. It's far too small a number if you are talking about a proper production schedule.
 
Well, this article gives us a hint. To quote:

"Building a console like Xbox One entails years of planning, supply chain management, purchasing agreements for components, etc and is for the most part locked at the time you start to put development kits into the hands of developers."

This is most likely why they didn't upgrade to 12 GB: specs were locked as soon as devs received the kits, and devs did indeed receive devkits, I quote, "well before the consoles were officially presented".

Of course DF said the same for months, but it was ignored.

Because the 360 never doubled its RAM amount before release? The PS4 didn't double its RAM amount before release? That line has been proven false multiple times.
 
It should also be remembered that the 360 memory bump was in the pipeline more than a year before its introduction. I think the RAM doubling of the PS4 was probably as late as one would like it to be.
 
Because the 360 never doubled its RAM amount before release? The PS4 didn't double its RAM amount before release? That line has been proven false multiple times.

I didn't say that changing specs before release is not possible.
I am just taking MS's words as an indication of why the 12 GB didn't happen.

It should also be remembered that the 360 memory bump was in the pipeline more than a year before its introduction.

Which is the other reason why I never believed the theory that MS could upgrade to 12 GB after the reveal and after E3.
I think the RAM doubling of the PS4 was probably as late as one would like it to be.

We don't know exactly when Sony decided to upgrade the PS4 RAM to 8 GB, but we know for sure that developers asked for it when Cerny was soliciting feedback on the PS4.
 
8 or 12 GB doesn't matter. MS is moving to a cloud-based development and compile method. So you dev toward your target HW, then load it from the cloud onto your device.

That wouldn't help with debugging and profiling, which still need to be run on the target hardware.
 
Wouldn't one think shader complexity would actually increase GCN's advantage?

It seems like older GPUs were designed more around throughput, and shaders got more complex as time went on.

Yes. I wasn't trying to imply that the GCN advantage was less than shown. Sorry if that was the inference. I think it can be significantly more in certain situations, depending on the component mix.
 
This is most likely why they didn't upgrade to 12 GB: specs were locked as soon as devs received the kits, and devs did indeed receive devkits, I quote, "well before the consoles were officially presented".

Well, that would contradict the eSRAM and GPU upclock; Penello even stated a while back that Microsoft is no stranger to last-minute changes.

----------------------------------------------

I'm thinking the memory is going to be the same as the devkits' to ensure identical functionality, but this won't be said officially because it could easily be mistaken for an official memory jump. It's the same as leaving the eSRAM figure out of the total.

My two cents.
 
Well, that would contradict the eSRAM and GPU upclock; Penello even stated a while back that Microsoft is no stranger to last-minute changes.

Changes in clock are purely changes in software as long as TDP constraints are not violated. That is a much different beast from changes in hardware components, where multiple, very elaborate stages of design and testing/validation, as well as supply and manufacturing management, make things vastly more complicated and time-consuming. For RAM changes, at the very least, supply and manufacturing management would be impacted.
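
To illustrate why a clock bump is cheap compared with a memory change (a sketch; the 800 to 853 MHz figures are the reported Xbox One GPU clocks, and 768 shaders at 2 FP32 ops per clock is the usual assumption):

```python
# A clock bump scales throughput linearly with no change to chip count,
# bus width, packaging, or supply chain, unlike adding or swapping RAM.
# Assumes the reported Xbox One GPU: 768 shaders, 800 MHz -> 853 MHz upclock.
shaders = 768
for mhz in (800, 853):
    tflops = shaders * 2 * mhz / 1e6
    print(f"{mhz} MHz -> {tflops:.2f} TFLOPS")
# 800 MHz -> 1.23 TFLOPS, 853 MHz -> 1.31 TFLOPS: ~6.6% more from the same board
```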
 