Xbox One (Durango) Technical hardware investigation

Status
Not open for further replies.
They could use 8 bit wide chips instead of 16 bit. These are available from Micron with essentially the same specs as the ones they already use.

Eight MT41J256M16HA-093 on top
Sixteen MT41J512M8RH-093 in sandwich

That would make them dissipate less heat, it would need just a bit of board rework, and all chips would be single-ended so no worries about frequency or load.

Later revisions could replace the sandwiches with 8Gb parts.
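As a sanity check on this mixed-chip idea, the bus width and total capacity can be worked out from the part organizations (a sketch; the chip counts come from the post, and both Micron parts are 4Gb devices: 256M x16 and 512M x8):

```python
# Sketch: total bus width and capacity of the proposed mixed-chip layout.
# Eight 4Gb x16 parts on top, sixteen 4Gb x8 parts in sandwich pairs.
GBIT = 1024 ** 3 // 8  # bytes per gigabit

chips = [
    {"name": "MT41J256M16HA-093", "count": 8,  "width_bits": 16, "density_gbit": 4},
    {"name": "MT41J512M8RH-093",  "count": 16, "width_bits": 8,  "density_gbit": 4},
]

bus_bits = sum(c["count"] * c["width_bits"] for c in chips)
total_bytes = sum(c["count"] * c["density_gbit"] * GBIT for c in chips)

print(bus_bits)                  # 256-bit bus, same as sixteen x16 chips
print(total_bytes // 1024 ** 3)  # 12 GB total
```

So the layout keeps the same 256-bit bus while landing on the rumored 12GB.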

Didn't think of that configuration. Pretty simple solution.

But if the report that the final dev kit is like the Wired box (and has 12GB already) is true, then they already figured it out themselves and did so some time ago.
 
@MrFox

No it's the "on top" part that I don't get.
I should have been more clear, sorry.
Ah ok, I meant on top of the PCB, not on top of each other... the eight 16-bit-wide chips would each sit on the top side of the PCB with none on the underside, while the 8-bit chips would be in sandwich pairs, one on top and one underneath at each location. The layout would be almost identical; the signals are all already routed pretty much the same way.

Anyway, it's still an ugly proposition; I just wanted to go through every possibility. If they considered more than 8GB in reaction to the PS4 reveal, they had plenty of time since February, and they certainly wouldn't have announced 8GB at the xbone reveal if it were still an open question. Regardless of any rumor or leak, my bet is still on 8GB because of that.

OTOH, Sony revealed the GFLOPS numbers of the GPU, so that hints their clock is mostly set in stone. This is possibly not so for the xbone, as they still haven't said anything that indicates the final clock.
 
They could use 8 bit wide chips instead of 16 bit. These are available from Micron with essentially the same specs as the ones they already use.

Eight MT41J256M16HA-093 on top
Sixteen MT41J512M8RH-093 in sandwich

That would make them dissipate less heat, it would need just a bit of board rework, and all chips would be single-ended so no worries about frequency or load.

Later revisions could replace the sandwiches with 8Gb parts.
Then we are back to the not-very-elegant-looking solution of 8GB of memory at the full bandwidth and 4GB at half of it. And we don't know if the memory controller supports the necessary, more flexible address interleaving between the channels (though I would think so). One could believe, though, that MS would modify their OS so that the game VM preferably gets its memory allocated in the faster 8GB.
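Why the upper region would only get half the bandwidth can be shown with a toy interleaving model (the channel counts and region sizes here are illustrative, not a confirmed Durango memory map):

```python
# Toy model: 16 channels each backed by one chip; only 8 of them also have a
# second chip behind them. Addresses in the first 8GB can interleave across
# all 16 channels; the last 4GB only exists on the 8 "tall" channels, so
# accesses there can be served by only half the bus.
GB = 1024 ** 3

def channels_for(addr, fast_region=8 * GB, all_ch=16, tall_ch=8):
    """Return how many channels can serve an access at this address."""
    return all_ch if addr < fast_region else tall_ch

print(channels_for(1 * GB))   # 16 -> full bandwidth
print(channels_for(10 * GB))  # 8  -> half bandwidth
```

A more flexible memory controller could hide some of this by remapping, but the raw channel count behind each region is what sets the ceiling.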
 
One could believe, though, that MS would modify their OS so that the game VM preferably gets its memory allocated in the faster 8GB.

You would modify it in the hypervisor mapping, I think. You can't re-project and test such a complex change in no time, imho.
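A hypervisor-level preference could be sketched as a two-pool page allocator; this is hypothetical code, not anything from the actual Durango software stack:

```python
# Hypothetical sketch: prefer the fast (fully interleaved) region when
# mapping pages for the game VM, and the slow region for everything else,
# falling back to the other pool only when the preferred one is exhausted.
def map_page(free_fast, free_slow, vm):
    """Pick a physical page for a guest page, preferring fast pages for games."""
    pools = (free_fast, free_slow) if vm == "game" else (free_slow, free_fast)
    for pool in pools:
        if pool:
            return pool.pop()
    raise MemoryError("out of physical pages")

fast, slow = [0, 1, 2], [100, 101]
print(map_page(fast, slow, "game"))  # takes from the fast pool first
print(map_page(fast, slow, "os"))    # takes from the slow pool first
```

The point of the sketch is that the policy lives entirely in the page-mapping layer; guests never see which physical region they landed in, which is why the change could stay invisible to game code but would still need serious regression testing.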
 
It would be nice to see someone post the number of modules and part number of the modules in the final dev kit.

But then again I doubt anyone will take it apart to see the back of the board.
 
Ah ok, I meant on top of the PCB, not on top of each other... the 8 chips that are 16 bit wide are each on the top side of the PCB but none on the underside, while the others 8 bit would be in sandwich pairs, one on top and one under at each location. The layout would be almost identical, the signals are all already routed pretty much the same way.

Ok now I get it. :idea:
Thanks for the clarification.

If they considered more than 8GB in reaction of the PS4 reveal, they had plenty of time since February and they certainly wouldn't have announced 8GB at the xbone reveal if it was an open question. Regardless of any rumor or leak, my bet is still on 8GB because of that.

Quoted for truth.
 
I've been thinking overclocking might be too risky for Microsoft; the end result could be millions of overheating units at the cost of a minuscule jump. Surely they can just license more efficient stock chips that need no such hassle? The benefits would be shorter R&D efforts, more durable hardware, and a sooner release date.
 
I've been thinking overclocking might be too risky for Microsoft; the end result could be millions of overheating units at the cost of a minuscule jump. Surely they can just license more efficient stock chips that need no such hassle? The benefits would be shorter R&D efforts, more durable hardware, and a sooner release date.

It has been suggested that they might be able to have the APU tested and binned for clocks similar to the 7790, assuming that is the process and architecture.

That would indicate there is lots of headroom and that the current clock is more a self-imposed power/temperature/cooling/near-silence limit of around 100W, not what could be done with the exact same chip with a different combination of noise, power, cooling, etc.

But no one seems to know for sure what the design is, and we would need to know that more precisely to enable any argument-free extrapolation. Still, 1 GHz for 28nm TSMC and AMD GCN does not seem to be all that unreasonable if the 7790/GCN architecture is true.

There are also lots of suggestions that MS might have customized things, but we don't know how far.

We also don't know if they have a stock of tested/assembled/fused units. Or how large that stock is. Is it 100k or 20k or 500k?

We also don't know the yields right now.

So maybe they can buy 1 GHz or 850 MHz or 925 MHz from AMD with no trouble. Or maybe not. Those who know more precisely are not talking. [Or they did and it was dismissed.]



My guess is that they set a limit of 100W and near silence and derived from that a clock far below what the silicon is capable of. That could be BS as there are tons of unknowns.
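For a rough sense of why a clock bump is expensive under a fixed power budget: dynamic power scales roughly as P ∝ C·V²·f, and higher clocks usually need higher voltage too. A back-of-the-envelope sketch with made-up numbers, not actual Durango figures:

```python
# Rough dynamic-power scaling: P ~ C * V^2 * f.
# All numbers below are illustrative, not real Durango data.
def scaled_power(p_base, f_base, f_new, v_base, v_new):
    """Estimate new dynamic power after a frequency and voltage change."""
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

# e.g. 800 MHz -> 1000 MHz with a 10% voltage bump on a 60W GPU portion:
print(round(scaled_power(60.0, 800, 1000, 1.0, 1.1), 1))  # ~90.8 W
```

A 25% clock increase with a 10% voltage bump costs roughly 50% more dynamic power, which is exactly why a self-imposed ~100W near-silent budget would pin the clock well below what the silicon itself could do.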



I might get flamed for this but:

Scenario 1: Basically a 7790, and so yes, it can likely support 1 GHz easily.
Scenario 2: Heavily customized (study Xenos as an example) and more advanced than GCN. Can't be overclocked, but doesn't need to be, as it is more powerful clock for clock.
Scenario 3: ...
Scenario 4: ...



In the characterization/verification and testing MS (AMD) would map out the possibilities. They could likely refer to that data pretty quickly while we are all left guessing and speculating.



If it is scenario 2 then MS might be laughing internally right now. If that is the case maybe MS only looks at the reservation or *maybe* the memory. Or maybe they are quite happy and expect that we will be too once we get our hands on it and see what it does.
 
Since the Xbox environment is rumored to have virtualized resources, MS could always just go ahead as planned and, in even just a few years, release an upgraded Xbox One, or Xbox Two (lol), that runs all previous Xbox One games and is more powerful.

That's probably where I would take the ball and run with it at MS.
 
That would indicate there is lots of headroom and that the current clock is more a self-imposed power/temperature/cooling/near-silence limit of around 100W, not what could be done with the exact same chip with a different combination of noise, power, cooling, etc.
This is almost a given. I haven't seen any significant debate premised on the idea that upping the clocks modestly is a physical impossibility instead of a design choice.

But no one seems to know for sure what the design is, and we would need to know that more precisely to enable any argument-free extrapolation. Still, 1 GHz for 28nm TSMC and AMD GCN does not seem to be all that unreasonable if the 7790/GCN architecture is true.
Even if the GPU's overall design were identical to the 7790, and discounting the eSRAM, Durango wouldn't be the same. As was noted, this is an APU, so pointing to a discrete device means ignoring a massive variable on the Durango side.

There may also be physical differences in manufacturing that cannot be deduced from the high level design.
We don't know what Kaveri's GCN clocks will be, but the upper range of clocks for the on-die GPUs in Trinity and Richland do not match the max clocks achievable by VLIW4 GPUs that were made on a supposedly inferior 40nm process. That can be for many reasons, including the disparate process needs of the two types of silicon. The gap has narrowed significantly, since Cayman's top clocks are only moderately higher than Richland's GPU. If it weren't for the process gap and the larger number of APU bins, they'd be effectively equivalent.
Llano is a painful indicator of how bad things can get: Barts and its ilk at times ran at nearly double its GPU clock speed.
 
It has been suggested that they might be able to have the APU tested and binned for clocks similar to the 7790, assuming that is the process and architecture.

That would indicate there is lots of headroom and that the current clock is more a self-imposed power/temperature/cooling/near-silence limit of around 100W, not what could be done with the exact same chip with a different combination of noise, power, cooling, etc.

But no one seems to know for sure what the design is, and we would need to know that more precisely to enable any argument-free extrapolation. Still, 1 GHz for 28nm TSMC and AMD GCN does not seem to be all that unreasonable if the 7790/GCN architecture is true.

There are also lots of suggestions that MS might have customized things, but we don't know how far.

We also don't know if they have a stock of tested/assembled/fused units. Or how large that stock is. Is it 100k or 20k or 500k?

We also don't know the yields right now.

So maybe they can buy 1 GHz or 850 MHz or 925 MHz from AMD with no trouble. Or maybe not. Those who know more precisely are not talking. [Or they did and it was dismissed.]



My guess is that they set a limit of 100W and near silence and derived from that a clock far below what the silicon is capable of. That could be BS as there are tons of unknowns.



I might get flamed for this but:

Scenario 1: Basically a 7790, and so yes, it can likely support 1 GHz easily.
Scenario 2: Heavily customized (study Xenos as an example) and more advanced than GCN. Can't be overclocked, but doesn't need to be, as it is more powerful clock for clock.
Scenario 3: ...
Scenario 4: ...



In the characterization/verification and testing MS (AMD) would map out the possibilities. They could likely refer to that data pretty quickly while we are all left guessing and speculating.



If it is scenario 2 then MS might be laughing internally right now. If that is the case maybe MS only looks at the reservation or *maybe* the memory. Or maybe they are quite happy and expect that we will be too once we get our hands on it and see what it does.


Right, most PC gamers are able to save money by purchasing chips and tweaking them at their own expense with their own cooling techniques. In Microsoft's case, where they're selling a product as a reliable choice, overclocking is a risk factor; 1000 MHz is more of a risk, just as 900 is compared to 800 and 700. The problem Microsoft faces is that these adjustments are all small in performance while the risk to the company's image is high.

If Microsoft is just trying to get by with the leaks and settling for good temperatures, then they shouldn't be tweaking any of the hardware. Overclocking requires taking a lot of risks and tons of trial-and-error testing, which is not free. They do have a massive cooling unit, but if Microsoft wants to close any kind of gap then they obviously need to come to terms with that. Having teams fiddle with higher clock settings only to come back down to earth is obviously money not well spent.

In my opinion, a mega cooling unit with a simply better-equipped chip is always better than taking overclocking risks to achieve parity. So if they want to achieve a 7790 or higher, they should just design for it and cut the time, since they have the die area for it.
 
Until exact product binning is defined there is no such thing as overclocking. Overclocking is taking a product which has defined properties and explicitly running it beyond those limits (betting that your part has better characteristics than the baseline). While things are still in an engineering phase there are no set limits, and there are multiple parameters that will be analysed. There are many things that may or may not occur that can alter clocks from the targets: actual material tracking better than was accounted for in pre-silicon estimates or even early engineering samples (which could mean the overall speeds are different, or yields are different and a different speed decision is made based on that), or even sufficient margins in other components that can alter speeds when everything is put together.
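Concretely, binning means characterizing each die's maximum stable clock and sorting it into a speed grade; the shipping clock is then whichever bin yields well enough. A minimal sketch with hypothetical bin edges and test data:

```python
import bisect

# Minimal binning sketch: sort dies into speed bins by their max stable
# clock, then see which shipping targets each die would qualify for.
BIN_EDGES = [800, 850, 925, 1000]  # MHz, hypothetical speed grades

def bin_for(max_stable_mhz):
    """Highest bin edge the die can ship at, or None if it fails all bins."""
    i = bisect.bisect_right(BIN_EDGES, max_stable_mhz)
    return BIN_EDGES[i - 1] if i else None

dies = [790, 820, 880, 910, 960, 1040, 1100]
print([bin_for(d) for d in dies])  # [None, 800, 850, 850, 925, 1000, 1000]
```

Until the edges themselves are fixed, "overclocking" has no reference point; the same die could be a conservative 800 MHz part or an aggressive 925 MHz part depending purely on where the bins are drawn.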
 
Until exact product binning is defined there is no such thing as overclocking. Overclocking is taking a product which has defined properties and explicitly running it beyond those limits (betting that your part has better characteristics than the baseline). While things are still in an engineering phase there are no set limits, and there are multiple parameters that will be analysed. There are many things that may or may not occur that can alter clocks from the targets: actual material tracking better than was accounted for in pre-silicon estimates or even early engineering samples (which could mean the overall speeds are different, or yields are different and a different speed decision is made based on that), or even sufficient margins in other components that can alter speeds when everything is put together.

So one question is... ...are things still in an engineering phase? We need to get someone at MS or AMD to talk ;)
 
So one question is... ...are things still in an engineering phase? We need to get someone at MS or AMD to talk ;)

At this close to the launch window I would sincerely hope they are past the engineering stage. Otherwise something has gone very wrong.
 
Until exact product binning is defined there is no such thing as overclocking. Overclocking is taking a product which has defined properties and explicitly running it beyond those limits (betting that your part has better characteristics than the baseline). While things are still in an engineering phase there are no set limits, and there are multiple parameters that will be analysed. There are many things that may or may not occur that can alter clocks from the targets: actual material tracking better than was accounted for in pre-silicon estimates or even early engineering samples (which could mean the overall speeds are different, or yields are different and a different speed decision is made based on that), or even sufficient margins in other components that can alter speeds when everything is put together.


Hmmmm, I was wondering about that. Overclocking is an after-the-fact process, which I was thinking would be the least-favored option due to the unnecessary increase in testing time. With the track record Microsoft had at the launch of the Xbox 360, stressing the circuitry should be the last option chosen for more FLOPS.

If higher FLOPS are ever suggested for narrowing the gap between consoles, it should be done using stock-level specifications. It's safer and provides quicker results.

The price of the change might not be that big a deal if you compare it to the full-scale process used for clock testing, versus researchers (AMD) who already went out of their way regardless to avoid bankruptcy.
 
Hmmmm, I was wondering about that. Overclocking is an after-the-fact process, which I was thinking would be the least-favored option due to the unnecessary increase in testing time. With the track record Microsoft had at the launch of the Xbox 360, stressing the circuitry should be the last option chosen for more FLOPS.

If higher FLOPS are ever suggested for narrowing the gap between consoles, it should be done using stock-level specifications. It's safer and provides quicker results.

The price of the change might not be that big a deal if you compare it to the full-scale process used for clock testing, versus researchers (AMD) who already went out of their way regardless to avoid bankruptcy.

I keep mentioning this big fact: They are trying to cool 100W (near) silently.

100W is nothing for a chip of that size.

They are not in the "overclock" or "burning hot" region at all. They are in the opposite region, by their choice.
 
Pretty much, clocks are a matter of thermals and, once you set those thermals, of yields at said thermals. I doubt either company is pushing the physical limits of the chip; they're just pushing it to a step where they are comfortable with the yields at a given thermal/power envelope.
 