Eight 256Mb chips!?
No, they are both 4Gb parts: one is 256M x16 = 4Gb, the other is 512M x8 = 4Gb.
They could use 8-bit-wide chips instead of 16-bit. These are available from Micron with essentially the same specs as the ones they already use.
Eight MT41J256M16HA-093 on top
Sixteen MT41J512M8RH-093 in sandwich
That would make them dissipate less heat, it would need just a bit of board rework, and all chips would be single-ended so no worries about frequency or load.
Later revisions could replace the sandwiches with 8Gb parts.
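The arithmetic behind the two configurations above can be checked with a small sketch (part organizations taken from the Micron part numbers quoted; the helper name is mine):

```python
def total_config(n_chips, depth_m, width_bits):
    """Capacity and data-bus width for a rank of identical DRAM chips.
    depth_m: address depth in mebi-locations (e.g. 256 for '256M').
    Returns (chip density in Gbit, total bus width in bits, total GB)."""
    chip_gbit = depth_m * width_bits / 1024   # e.g. 256M x16 -> 4 Gbit
    bus_bits = n_chips * width_bits           # data lines summed across chips
    total_gbyte = n_chips * chip_gbit / 8
    return chip_gbit, bus_bits, total_gbyte

# Eight MT41J256M16 (256M x16), all on the top side
print(total_config(8, 256, 16))   # (4.0, 128, 4.0): 4Gb chips, 128-bit bus, 4 GB
# Sixteen MT41J512M8 (512M x8) in sandwich pairs
print(total_config(16, 512, 8))   # (4.0, 128, 8.0): same bus width, 8 GB
```

Same 4Gb density per chip and same 128-bit aggregate bus either way; the x8 clamshell variant simply doubles the chip count, and with it the capacity.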
Ah ok, I meant on top of the PCB, not on top of each other... the eight chips that are 16 bits wide are all on the top side of the PCB with none on the underside, while the 8-bit chips would be in sandwich pairs, one on top and one underneath at each location. The layout would be almost identical; the signals are all already routed pretty much the same way. @MrFox
No it's the "on top" part that I don't get.
I should have been more clear, sorry.
Then we are back to the not very elegant-looking solution of 8GB of memory at the full bandwidth and 4GB at half of it. And we don't know if the memory controller supports the more flexible address interleaving between the channels that this would require (though I would think so). That MS modifies their OS so that the game VM preferably gets its memory allocated in the faster 8GB, one could believe, though.
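The "8GB at full bandwidth, 4GB at half" point follows from how much of the channel set backs each region. A toy model, assuming ideal interleaving and an aggregate figure of 68.3 GB/s (roughly a 256-bit DDR3-2133 bus; both numbers are illustrative assumptions, not confirmed specs):

```python
def region_bandwidth(total_bw_gbs, channels_total, channels_backing):
    """Peak bandwidth reachable by a memory region that is interleaved
    across only a subset of the channels. Assumes ideal interleaving,
    i.e. bandwidth scales linearly with the number of channels."""
    return total_bw_gbs * channels_backing / channels_total

TOTAL_BW = 68.3  # GB/s, assumed aggregate for the full channel set
print(region_bandwidth(TOTAL_BW, 16, 16))  # 8 GB region spanning all channels
print(region_bandwidth(TOTAL_BW, 16, 8))   # extra 4 GB on half the channels -> half the bandwidth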
If they considered more than 8GB in reaction to the PS4 reveal, they have had plenty of time since February, and they certainly wouldn't have announced 8GB at the Xbone reveal if it were an open question. Regardless of any rumor or leak, my bet is still on 8GB because of that.
I've been thinking: overclocking some stuff might be too risky for Microsoft. The end result could be millions of overheating units in exchange for a minuscule jump. Surely they could just license more efficient stock chips that need no such hassle? The benefits would be a shorter R&D effort, more durable hardware, and a sooner release date.
This is almost a given. I haven't seen any significant debate premised on the idea that upping the clocks modestly is a physical impossibility rather than a design choice.
Even if the GPU's overall design were identical to the 7790, and discounting the eSRAM, Durango wouldn't be the same. As was noted, this is an APU, so pointing to a discrete device means ignoring a massive variable on the Durango side.
It has been suggested that they might be able to have the APU tested and binned for clocks similar to the 7790's, assuming the process and architecture match.
That would indicate there is lots of headroom, and that the current clock is more of a self-imposed limit (a power/temperature/cooling/near-silence target of around 100W), not what could be done with the exact same chip under a different combination of noise, power, cooling, etc.
But no one seems to know for sure what the design is, and we would need to know that more precisely to enable any argument-free extrapolation. Still, 1 GHz on 28 nm TSMC with AMD GCN does not seem all that unreasonable if the 7790/GCN architecture is true.
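For a sense of what those clocks would mean in throughput terms, here is the standard GCN peak-FLOPS arithmetic (the 7790 figures are its published specs; the 12-CU/800 MHz Durango numbers are the thread's rumor, not confirmed):

```python
def gcn_tflops(compute_units, clock_ghz):
    """Peak single-precision TFLOPS for a GCN part:
    64 lanes per CU x 2 flops (fused multiply-add) per lane per clock."""
    return compute_units * 64 * 2 * clock_ghz / 1000

# HD 7790 (Bonaire): 14 CUs at 1 GHz
print(gcn_tflops(14, 1.0))   # 1.792
# Rumored Durango GPU: 12 CUs at 800 MHz (rumor)
print(gcn_tflops(12, 0.8))   # 1.2288
# The same 12 CUs pushed to 1 GHz
print(gcn_tflops(12, 1.0))   # 1.536
```

So a modest clock bump on the rumored CU count would buy roughly a 25% peak-throughput gain, which frames why the headroom question matters.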
But there are lots of arguments that MS might have customized things but we don't know how far.
We also don't know if they have a stock of tested/assembled/fused units. Or how large that stock is. Is it 100k or 20k or 500k?
We also don't know the yields right now.
So maybe they can buy 1 GHz or 850 MHz or 925 MHz from AMD with no trouble. Or maybe not. Those who know more precisely are not talking. [Or they did and it was dismissed.]
My guess is that they set a limit of 100W and near silence and derived from that a clock far below what the silicon is capable of. That could be BS as there are tons of unknowns.
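The 100W-budget reasoning can be made concrete with the usual first-order dynamic-power rule, P ~ f x V^2. All the numbers below (800 MHz baseline, 1.0 V, 100 W) are purely illustrative assumptions:

```python
def scaled_power(p0_watts, f0_ghz, f1_ghz, v0, v1):
    """Rough dynamic-power scaling: P ~ f * V^2.
    Ignores leakage, so this is only a first-order sketch."""
    return p0_watts * (f1_ghz / f0_ghz) * (v1 / v0) ** 2

# Assumed baseline: 100 W at 800 MHz and 1.0 V
print(scaled_power(100, 0.8, 1.0, 1.0, 1.0))   # 125.0 W: clock bump alone
print(scaled_power(100, 0.8, 1.0, 1.0, 1.1))   # 151.25 W: clock bump plus a voltage bump
```

Even a clock bump that needs no extra voltage blows a strict 100W budget by 25%, which is consistent with the clock being a power/noise choice rather than a silicon limit.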
I might get flamed for this but:
Scenario 1: Basically 7790 and so yes, can likely support 1 GHz easily.
Scenario 2: Heavily customized (study Xenos as an example) and more advanced than GCN. Can't be overclocked, but doesn't need to be, as it is more powerful clock for clock.
Scenario 3: ...
Scenario 4: ...
In the characterization/verification and testing MS (AMD) would map out the possibilities. They could likely refer to that data pretty quickly while we are all left guessing and speculating.
If it is scenario 2 then MS might be laughing internally right now. If that is the case maybe MS only looks at the reservation or *maybe* the memory. Or maybe they are quite happy and expect that we will be too once we get our hands on it and see what it does.
Until exact product binning is defined there is no such thing as overclocking. Overclocking is taking a product which has defined properties and explicitly running it beyond those limits (betting that your part has better characteristics than the baseline). While things are still in an engineering phase there are no set limits, and there are multiple parameters that will be analysed. Many things may or may not occur that can alter clocks from the targets - e.g. actual material is tracking better than was accounted for in pre-silicon estimates or even early engineering samples (which could mean the overall speeds are different, or yields are different and a different speed decision is made based on that); or there may be sufficient margin in other components that can alter speeds when they are put together.
So one question is... ...are things still in an engineering phase? We need to get someone at MS or AMD to talk
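The binning process described above amounts to sorting characterized dies into the highest speed grade each one qualifies for. A minimal sketch, with entirely hypothetical characterization data:

```python
def bin_parts(fmax_mhz_per_die, speed_grades):
    """Assign each die to the highest speed grade it passes.
    speed_grades: qualification clocks in MHz, descending order.
    Dies below the lowest grade are rejected."""
    bins = {grade: 0 for grade in speed_grades}
    rejects = 0
    for fmax in fmax_mhz_per_die:
        for grade in speed_grades:        # try highest grade first
            if fmax >= grade:
                bins[grade] += 1
                break
        else:
            rejects += 1                  # failed even the lowest grade
    return bins, rejects

# Hypothetical per-die maximum stable clocks from characterization (MHz)
dies = [1050, 980, 870, 1100, 790, 910, 840, 1010]
print(bin_parts(dies, [1000, 900, 800]))  # ({1000: 3, 900: 2, 800: 2}, 1)
```

Until MS/AMD fix those grade boundaries for production, "the clock" is just one point in this data, which is the poster's point: there is no overclock before there is a bin.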
Hmmm, I was wondering about that. Overclocking is an after-the-fact process, which I was thinking would be the option least chosen, due to the unnecessary increase in testing time. With the track record Microsoft had at the launch of the Xbox 360, stressing the circuitry should be the last option chosen for more flops per clock.
If higher flops are ever suggested as a way to narrow the gap between consoles, it should be done within stock-level specifications. It's safer and provides quicker results.
The price of the change might not be that big of a deal if you compare it to the full-scale process used for clock testing, versus engineers (AMD) who already went out of their way regardless to avoid bankruptcy.