Xbox One (Durango) Technical hardware investigation

Especially once you consider all the simulations MSFT supposedly ran.

Again, that is not a logical argument. The 360 was innovative at the time of release. The 7970 was innovative at the time of release. Xbox One could be innovative at the time of release too.

All three had to have many simulations run by a certain time prior to release.

That has no relevance to two designs being innovative at the time of release while a third is not innovative, or is a less advanced architecture, at its time of release.
 
Anyway, how do you know that the PS4 is based on a GCN refresh vs. something custom that looks close to it?

I don't. I wasn't suggesting or arguing that.

I was suggesting that the Xbox One might be an innovative or more advanced architecture, or "ahead of its time", at release.

The PS4 could be too.

VI and Hawaii could be too.

All three could be. Some parts could be in common. Some could be different.



But rigidly proclaiming that the Xbox One *must* be a two-year-old architecture at time of release is not logical.

It certainly *could* be. But it could also be as innovative as Xenos, the 7970, VI, or the PS4 were *at their respective times of release*.



I am perfectly willing to admit (and I have) that if vgleaks is accurate (again), and if it is GCN 1.0 only, then that could be one of the possible scenarios, for sure.

But I don't think it is the only one. Not sure why there is such a fanatical adherence to the vgleaks line.
 
Granted, the vgleaks info seems to be holding up pretty well so far.

But it is also pretty annoying that some keep pointing to one leak and treating it as gospel before MS confirms and officially releases anything. (And 100% rejecting all other leaks and speculation.)

It could be true, but until MS officially releases I think it is perfectly reasonable for there to be interest in the possibilities. Judging by the research MS does (based upon their publications and staff), it seems MS is interested in strong customization. Xenos seems to suggest that too, although things could be quite different this generation.

Plus "xbone", "secret sauce" and "pretending" don't tend to appeal in terms of responses to technical ideas and questions. The adherence to "xbone" being GCN 1.0 and nothing more seems too narrow and restrictive to fit the range of possibilities.

Well, it's strange that these leaks, which have been corroborated by multiple sources, missed something so major that only you have discovered. If you could just start by providing some evidence like the leaks have, something a little more tangible than 'it happened in the past, ergo it must happen every time'.

Again, that is not a logical argument. The 360 was innovative at the time of release. The 7970 was innovative at the time of release. Xbox One could be innovative at the time of release too.

All three had to have many simulations run by a certain time prior to release.

That has no relevance to two designs being innovative at the time of release while a third is not innovative, or is a less advanced architecture, at its time of release.

You are assuming that the MSFT of the XB360 era is the same as the MSFT of today. It is clear they have a direction that is not focused purely on games anymore. It's not surprising they chose something that was cheap and affordable.
 
Again, that is not a logical argument. The 360 was innovative at the time of release. The 7970 was innovative at the time of release. Xbox One could be innovative at the time of release too.

All three had to have many simulations run by a certain time prior to release.

That has no relevance to two designs being innovative at the time of release while a third is not innovative, or is a less advanced architecture, at its time of release.
It is logical: MSFT chose a GPU and its specs (# CUs, # ROPs, size of L2, etc.), then ran lots of simulations to put together a quite complex SoC which includes two Jaguar compute clusters, a lot of SRAM, a massive sound DSP, etc.
Then they sent that to the fab, and so on. It takes a lot of time. The same applies to the PS4, by the way. You have no way to claim whether Sony uses a GCN refresh or a custom effort, as other parts of the design hint.
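To ground the sort of spec list being described, here is a minimal sketch of that kind of configuration as a plain struct. All numbers are assumptions taken from the vgleaks Durango rumors, not anything confirmed by MS:
Code:
#include <stdio.h>

/* Hypothetical SoC spec sheet; the values are the rumored vgleaks
 * Durango figures and are illustrative only. */
struct soc_config {
	int cpu_clusters;      /* Jaguar compute clusters   */
	int cores_per_cluster;
	int gpu_cus;           /* GCN compute units         */
	int gpu_rops;
	int esram_mb;          /* embedded SRAM             */
	int gpu_mhz;
};

int main(void)
{
	const struct soc_config durango = { 2, 4, 12, 16, 32, 800 };

	printf("%d x %d Jaguar cores, %d CUs, %d ROPs, %d MB eSRAM @ %d MHz\n",
	       durango.cpu_clusters, durango.cores_per_cluster,
	       durango.gpu_cus, durango.gpu_rops,
	       durango.esram_mb, durango.gpu_mhz);
	return 0;
}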

For GCN 2.0, there is no logic behind your rant, just wishful thinking; the upcoming AMD GPU may or may not be a significant departure from GCN. Only rumors so far, nothing really conclusive.
 
Yeah, let NDA-type developer comments and leaks speak for themselves (lol); there's enough ambiguity already before you add hope to the mix. ;)
 
You are assuming that the MSFT of the XB360 era is the same as the MSFT of today. It is clear they have a direction that is not focused purely on games anymore. It's not surprising they chose something that was cheap and affordable.
Well, I would also think that GPUs have come a long way since then, and in a direction that seems to please both Sony and MSFT.
There is also the complexity; we are speaking of billions of transistors, and it would be really risky to go full custom.
 
It is logical: MSFT chose a GPU and its specs (# CUs, # ROPs, size of L2, etc.), then ran lots of simulations to put together a quite complex SoC which includes two Jaguar compute clusters, a lot of SRAM, a massive sound DSP, etc.
Then they sent that to the fab, and so on. It takes a lot of time. The same applies to the PS4, by the way.

It is very much not logical. Everything you keep saying must be done applies equally to past, present, and future designs, some of which were innovative at the time of release and some of which were not. Pointing out that such steps and tasks must be completed in no way logically argues that one particular design can or cannot be innovative.



You have no way to claim whether Sony uses a GCN refresh or a custom effort, as other parts of the design hint.

I didn't claim that. What are you talking about? I was arguing against a rigid claim that one design *must* be the two-year-old architecture. That claim is not logical. It is possible, but it is not the only possibility.



For GCN 2.0, there is no logic behind your rant, just wishful thinking; the upcoming AMD GPU may or may not be a significant departure from GCN. Only rumors so far, nothing really conclusive.

There is lots of logic, and it is not a rant. *MANY* console, GPU, and CPU designs are innovative at the time of release. And many are not.
 
It is very much not logical. Everything you keep saying must be done applies equally to past, present, and future designs, some of which were innovative at the time of release and some of which were not. Pointing out that such steps and tasks must be completed in no way logically argues that one particular design can or cannot be innovative.





I didn't claim that. What are you talking about? I was arguing against a rigid claim that one design *must* be the two-year-old architecture. That claim is not logical. It is possible, but it is not the only possibility.





There is lots of logic, and it is not a rant. *MANY* console, GPU, and CPU designs are innovative at the time of release. And many are not.

Make whatever claims you feel like, but please provide real evidence. Your claim is being dismissed so easily because the amount of evidence you are providing is practically zero. After you find some, come back and we can discuss other possibilities; until then, it's clear from what we know at the moment what it is based on.
 
Well, it's strange that these leaks, which have been corroborated by multiple sources, missed something so major that only you have discovered. If you could just start by providing some evidence like the leaks have, something a little more tangible than 'it happened in the past, ergo it must happen every time'.

Where did I say I had "discovered" something? I never said that I had discovered anything.

I cannot comment on the "corroborated by multiple sources" part, as the only document I can find is the vgleaks one. Who are the "multiple sources"? Over the last two months I found a couple of AMD and a couple of MS people on a couple of forums. In PM conversations, none of them knew the answers to 1T vs. 6T eSRAM, GCN X or Y, etc. They said they did not know. One said they could not say, and that they did not know anyway.



You are assuming that the MSFT of the XB360 era is the same as the MSFT of today. It is clear they have a direction that is not focused purely on games anymore. It's not surprising they chose something that was cheap and affordable.

Somewhat, yes. Lots of the Si Arch hiring at MS occurred after the 360, and lots of the research and publications started around that time and continued over the eight years or so of this generation. So it is logical to assume some of this is oriented toward the next generation.

MS did a big build-up of people, equipment/facilities, and software/tools in the IC design area over the last eight years. That seems to argue for increasing capability (to be used) from Xbox to 360 to Xbox One, as opposed to going back the other way.

Now you certainly could be right, but you have to wonder what MS intended to do with all those hardware people, facilities and tools.
 
Now you certainly could be right, but you have to wonder what MS intended to do with all those hardware people, facilities and tools.

I dunno, maybe make one of the world's biggest SoCs (if not the world's biggest)? A giant multi-billion-transistor monster that needs to work quietly.

Maybe that's what all the hardware people were doing: making an SoC. People saying that we must be missing something because Microsoft sunk all of this money into engineers and hardware gear seem to forget that it takes all those engineers and all that gear just to make a normal console, and if you want a good console you hire good engineers (as Microsoft and Sony have been doing). So really, all of that is indicative of nothing, and certainly not of any currently unknown advances in GPU tech.
 
Make whatever claims you feel like, but please provide real evidence. Your claim is being dismissed so easily because the amount of evidence you are providing is practically zero. After you find some, come back and we can discuss other possibilities; until then, it's clear from what we know at the moment what it is based on.

What "real evidence" is required to wonder if the Xbox One could be based upon Nov 2013 current (or even "innovative") architecture at time of release versus just Jan 2012 architecture?

Not allowed to wonder or ask questions? Not appropriate to wonder about anything not in line with the one vgleak? Seriously?
 
What "real evidence" is required to wonder if the Xbox One could be based upon Nov 2013 current (or even "innovative") architecture at time of release versus just Jan 2012 architecture?

Not allowed to wonder or ask questions? Not appropriate to wonder about anything not in line with the one vgleak? Seriously?

You can ask what you want, but your questions have been answered more than once; it seems like you're trying to push an agenda more than ask questions.

Real evidence would be anything tangible, not based on feelings, that shows it is based on a future GCN arch and not, as vgleaks suggests, just GCN.
 
You can ask what you want, but your questions have been answered more than once; it seems like you're trying to push an agenda more than ask questions.

As best I can tell, the only answer given is "vgleaks".

As I mentioned before, I found a few MS and AMD people, and none of them knew. No one else has said they know anything (except for repeating vgleaks).

When I asked who the "multiple sources" are, you ignored that and did not answer.



Now you could be right and vgleaks could be right, but I hardly consider one leak to be definitive enough to slam the door on the conversation.



As for an agenda, you can relax and adjust your tin-foil hat. I have no agenda. Just curious and interested in the hardware design. (I also clearly want the hardware of the next 6-8 years to be as advanced as possible.)
 
Bonaire and Kabini are of a higher GFXIP level than other products in the GCN architecture, though they are still classified as "GCN".
 
Bonaire and Kabini are of a higher GFXIP level than other products in the GCN architecture, though they are still classified as "GCN".

Are you allowed to say anything more specific? I assume in the case of Bonaire that means the two geometry engines? Anything else?

Unfortunately, I know you can't say for VI/Hawaii even if you did know.
 
Please help me interpret this whitepaper (I think I have it right, but):

http://www.amd.com/us/Documents/GCN_Architecture_whitepaper.pdf

Is it saying there are 64-bit L2 interfaces in GCN?
No. The memory controllers are 64 bits wide and comprised of two 32-bit channels. The L2 can actually transfer 64 bytes per clock between each L2 tile (there is one tile per memory channel) and the multiple L1s (each one also has 64 bytes/clock of bandwidth); there is a massive crossbar in between.

The bandwidth between an L2 tile and the accompanying memory channel (running at core speed) is not officially disclosed by AMD, but the 256 bits from the Durango documentation on vgleaks looks reasonable for that (it's never going to be a bottleneck when it is faster than the memory). One needs to compensate for the very high speed of GDDR5 over narrow interfaces; that means the internal busses running at the nominal clock speed need to be wider than the memory interface itself. And this also means that discrete GCN parts probably need 256-bit connections per L2 tile and 32-bit channel, not 256 bits per L2 tile and 64-bit controller as Durango is obviously using. (But maybe it's the same setup as with other discrete GCN parts and that is a fuckup in the diagram, as this 4x256 bit could never transfer more than 102.4 GB/s [or there is a separate interface to the eSRAM not running over those 4x256 bit], while MS claims an aggregated read speed of ~170 GB/s; that falls in line with the fact that MS apparently didn't know how fast their eSRAM actually is :rolleyes: :LOL:.) Maybe (or not) they saved a bit on that front, counting on the lower speed of DDR3.
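To make that bandwidth arithmetic concrete, here is a minimal sketch of the check. The 800 MHz core clock is an assumption (it is what makes 128 bytes/clock come out to the 102.4 GB/s figure above); everything else comes from the numbers quoted above:
Code:
#include <stdio.h>

int main(void)
{
	/* Back-of-the-envelope check of the 4x256-bit figure in the
	 * vgleaks Durango diagram. 800 MHz core clock is assumed. */
	const double clock_hz      = 800e6;
	const int    l2_tiles      = 4;    /* one tile per memory channel */
	const int    bits_per_link = 256;  /* per-tile width per vgleaks  */

	double bytes_per_clock = l2_tiles * bits_per_link / 8.0;  /* 128 B */
	double peak_gbs = bytes_per_clock * clock_hz / 1e9;

	printf("4 x 256-bit links: %.1f GB/s peak\n", peak_gbs);  /* 102.4 */

	/* MS's claimed ~170 GB/s aggregate read bandwidth exceeds this,
	 * so the eSRAM traffic cannot all flow over those four links. */
	return 0;
}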

Different ACE setup; the XBONE has the default for GCN 1.0.
Based on that:
From the hardware.fr review of Bonaire and what the leaks said about Durango, I would think that Durango is based on GCN 1.1 (which is not AMD nomenclature).

Sorry, I may have missed it, but I see only one "compute command block" on that diagram.
As do I now, but it's clearly two compute pipes. But anyway.

If it is GCN 1.1, it's odd that they decided to forgo the extra pipes that the PS4 went with; there's not much reason to axe them.
This is not exactly clear and depends on how MS decided to draw the block diagram (they took strange decisions for the SIMD setup, too; they also managed to give borderline [in a strict sense, wrong] descriptions of the scheduling of instructions in GCN).
It looks like, with "GCN1.1" as seen in Kabini and Bonaire (which also features 4 ACEs, contrary to the apparently wrong block diagrams that got published), the ACEs come in blocks of four (if someone wants to check for himself: found by mczak afaik, btw, and there is also some other stuff in there). Kabini and Bonaire both have one "MEC", or compute micro engine, containing 4 "pipes" or ACEs, each of them handling 8 queues. According to that initialization code, Kaveri will have two of these MECs (and hence 8 ACEs and 64 queues), the same number as the PS4 has. So MS could have decided to draw the MECs rather than the individual ACEs within them. It may be a stretch, but not impossible.
From the linked file (KV = Kaveri, KB = Kabini; CI apparently refers to all "GCN1.1" discrete GPUs, CIK to CI + Kabini + Kaveri):
Code:
/*
 * CP.
 * On CIK, gfx and compute now have independent command processors.
 *
 * GFX
 * Gfx consists of a single ring and can process both gfx jobs and
 * compute jobs.  The gfx CP consists of three microengines (ME):
 * PFP - Pre-Fetch Parser
 * ME - Micro Engine
 * CE - Constant Engine
 * The PFP and ME make up what is considered the Drawing Engine (DE).
 * The CE is an asynchronous engine used for updating buffer descriptors
 * used by the DE so that they can be loaded into cache in parallel
 * while the DE is processing state update packets.
 *
 * Compute
 * The compute CP consists of two microengines (ME):
 * MEC1 - Compute MicroEngine 1
 * MEC2 - Compute MicroEngine 2
 * Each MEC supports 4 compute pipes and each pipe supports 8 queues.
 * The queues are exposed to userspace and are programmed directly
 * by the compute runtime.
 */

[..]
	 * KV:    2 MEC, 4 Pipes/MEC, 8 Queues/Pipe - 64 Queues total
	 * CI/KB: 1 MEC, 4 Pipes/MEC, 8 Queues/Pipe - 32 Queues total
	 */
	if (rdev->family == CHIP_KAVERI)
		rdev->mec.num_mec = 2;
	else
		rdev->mec.num_mec = 1;
	rdev->mec.num_pipe = 4;
	rdev->mec.num_queue = rdev->mec.num_mec * rdev->mec.num_pipe * 8;
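As a standalone sanity check of the queue arithmetic in that snippet (the chip labels here are just descriptive strings, not the kernel's own identifiers), a minimal sketch:
Code:
#include <stdio.h>

/* Reproduces the queue math from the radeon init code above:
 * total queues = MECs * pipes/MEC * queues/pipe. */
int main(void)
{
	const int pipes_per_mec   = 4;
	const int queues_per_pipe = 8;
	const struct { const char *chip; int num_mec; } cfgs[] = {
		{ "KV (Kaveri)",    2 },  /* 2 * 4 * 8 = 64 queues */
		{ "CI/KB (Kabini)", 1 },  /* 1 * 4 * 8 = 32 queues */
	};

	for (int i = 0; i < 2; i++)
		printf("%-16s %d MEC -> %2d queues\n",
		       cfgs[i].chip, cfgs[i].num_mec,
		       cfgs[i].num_mec * pipes_per_mec * queues_per_pipe);
	return 0;
}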
 
Are you allowed to say anything more specific? I assume in the case of Bonaire that means the two geometry engines? Anything else?

Unfortunately, I know you can't say for VI/Hawaii even if you did know.

The launch of the 7790 had reviewers stating that Bonaire used GCN 1.1, 2.0 or whatever they chose to number it.
The link was made to the Sea Islands ISA doc, which was taken offline because Sea Islands is not an architectural categorization.

There are ISA differences and a different addressing mode, though no word on which 1.1, 1.2, or 2.0 number would get applied to it.
Driver identifiers are different for Bonaire and Kabini for compute as well.
The number of specific units isn't a strong determining factor, as unit counts are scalable.

For Durango, I don't recall seeing VGleaks providing information at that level.
 
Bonaire and Kabini are of a higher GFXIP level than other products in the GCN architecture, though they are still classified as "GCN".
Are you allowed to say anything more specific? I assume in the case of Bonaire that means the two geometry engines? Anything else?
Quite sure: 4 ACEs with 8 queues each (32 queues in total), FLAT memory addressing, some extended debugging features, and some new instructions. Very likely also the volatile flag and memory address watch; in short, basically everything in that CI manual which was online for a week or so. They just have half of the 8 ACEs stated (probably as a maximum) in the manual.
 
It looks like, with "GCN1.1" as seen in Kabini and Bonaire (which also features 4 ACEs, contrary to the apparently wrong block diagrams that got published), the ACEs come in blocks of four (if someone wants to check for himself: found by mczak afaik, btw, and there is also some other stuff in there).
The pair of new asynchronous DMA engines mentioned in the text sounds like they are derived from the earlier DMA engines and/or have been exposed for programmer control.

Aside from the overlap in duty with the original GCN DMA functionality, they also seem to overlap in functionality with the DME 0 and 1 engines from VGleaks.
This may indicate at least part of the Durango GPU is on the later revision.
 