The pros and cons of eDRAM/ESRAM in next-gen

If I follow Hynix's databook, at 1.5V the lowest bin is 5.0Gbps, and it's the exact same bin as 4.0Gbps at 1.35V; it's just a choice of voltage. Sorry, I edited my post to say "almost" because I forgot they were using 6.0 parts at 5.5. So if they had used 5.0 parts, they could take everything that comes off the line without being dependent on the volume demand of lower speed bins.
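To put rough numbers on what those bins buy you, here's a back-of-the-envelope sketch in Python; the per-pin rates are the bins discussed above, and the 256-bit bus width is the PS4's:

```python
# Peak bandwidth for a 256-bit GDDR5 bus at the bin speeds discussed above.
# GDDR5 "Gbps" is the per-pin data rate, so bandwidth = rate * bus_width / 8.
BUS_WIDTH_BITS = 256  # PS4's memory bus width

for gbps_per_pin in (4.0, 5.0, 5.5, 6.0):
    gb_per_s = gbps_per_pin * BUS_WIDTH_BITS / 8
    print(f"{gbps_per_pin} Gbps/pin -> {gb_per_s:.0f} GB/s")

# 5.5 Gbps/pin -> 176 GB/s, matching the PS4's quoted peak bandwidth and
# consistent with 6.0-rated parts being run at 5.5 as described above.
```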

Ah k. Understood. :)

The other speeds (3.2, 3.6) were GDDR5M, and they have now disappeared from the Hynix databook (as of Q2 2014). Could that have been a much lower-cost option? I wonder who cancelled their plans for GDDR5M (AMD?); maybe it'll never exist.
Hm... maybe they became superfluous, with the yield rates being just as good as another SKU's.

Edit: oh, never mind. GDDR5M is supposed to be x16/x8 width instead.

----

*snip for brevity*
Durango's DDR3 bus is the same width, but the interface is clocked at less than half the speed of the GDDR5 one. The memory controller (edit: parts of it) and significant portions of the interface would be able to cut power linearly, assuming equal voltages.
Tens of watts of power saved seems like it could be possible.
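A minimal sketch of that linear scaling argument, using the standard dynamic power relation P = a*C*V^2*f; the 30 W baseline below is a made-up placeholder, not a measured figure:

```python
# Dynamic power of a CMOS interface scales as P = a * C * V^2 * f, so at
# equal voltage and comparable switched capacitance it is linear in f.

def scaled_power(p_baseline_w: float, f_ratio: float, v_ratio: float = 1.0) -> float:
    """Scale a baseline interface power by frequency and voltage ratios."""
    return p_baseline_w * f_ratio * v_ratio ** 2

# Hypothetical example: if a GDDR5 PHY + controller burned 30 W, the same
# width interface clocked at less than half the speed (0.4x here) at the
# same voltage would burn:
print(scaled_power(30.0, f_ratio=0.4))  # -> 12.0, i.e. roughly 18 W saved
```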

mm...

On the other hand, power consumption measurements for both systems seem rather comparable at the moment. :s
 
All accounts point to ESRAM coming before 8GB. They were originally going with ESRAM and 4GB of DDR3. I would assume the reason is that they projected ESRAM would be the most affordable route to high bandwidth at launch. Sony appears to have lucked out in projecting differently (and it is luck). Both systems appear to be profitable per unit sold. Which one is more profitable, and how that changes over the course of the gen, is another prediction. In terms of overall profits from hardware, that'll depend, because you can go for margins or for volume. Sony will come out ahead this gen in terms of sales of software. Not sure who would lead in other content.
I think that while both Yukon and Durango featured eSRAM/eDRAM, it may not have been for the same reasons.
Yukon was a tiny chip with a 128-bit bus. 4GB was absolutely necessary, and 4Gb GDDR5 chips might not have been available. So it made sense to have DDR4 plus eDRAM/eSRAM; going up to a 256-bit bus to secure 4GB of RAM (of GDDR5) might have gotten in the way of price reduction.

For Durango, contrary to Sony, they made those 8GB an absolute priority, and 4Gb memory chips were not a given for the launch period => eSRAM.

I would not call Sony's move luck; they left their options open. 4GB was fine: MSFT planned to use that amount in Yukon, and GPU performance aside, it was not supposed to do less than what Durango does. For reference, a tablet running Windows 8.1 runs games such as Diablo III, Far Cry and plenty of others with 2GB; the amount of reserved RAM is pretty enormous, and there is 1GB left whose fate has yet to be decided. 1GB is enough to run Windows 8.1 according to recent MSFT policy changes.
Overall, MSFT may have an advantage on paper, but at $499 vs $399, and with most likely exactly the same relative deficit in game performance. I don't think it would have impacted services either; you don't run (or need) that many things while you play, and when you don't, both systems have resources to spare as far as media consumption is concerned.
Given the reception of the new Kinect and the initial strategy for the system, it would not have been received any better even if Sony had launched with less RAM.
I think it was a reasonable bet on Sony's part, as 4GB was in itself a sane amount of RAM for their system. They could have gone without doubling the RAM, but it was too good to pass up: it left MSFT without any PR point, and it may not have cost them that much (less than 2x; maybe 1.5x?).
If it did cost them a significant amount of money, it means they could also have launched slightly cheaper with 4GB, or included a PS Eye. It goes both ways.

I think they paid a beefy tribute for those extra 4GB of RAM. I have a hard time deeming it worth it.
 
I believe bkillian said that 4GB was the amount of memory planned for Durango and that it changed partway through development. Not sure how late.

As for Sony being lucky with GDDR5, I only mean that forecasting and predictions, especially years in advance, are rarely accurate. That would be the same for any company. You're more often than not going to be wrong, and Sony happened to be right. I'm sure they did a lot of research, like everyone else; I'm not downplaying that. They had planned on 4GB of RAM and were able to change it very late because the price ended up being favourable. That was not something in their control, and maybe something they weren't even able to forecast when they were initially planning their design.
 
On the other hand, power consumption measurements for both systems seem rather comparable at the moment. :s

I've seen a number of comparisons that put the PS4 measurably higher, but there's no clear breakdown as to the largest contributors.

The most recent I've seen is this:
http://www.nrdc.org/energy/game-consoles/files/video-game-consoles-IP.pdf

Except for connected standby, the PS4 burns more power.
The figures appear to include all peripherals, which means the PS4 camera and Kinect are part of the total. I'm pretty sure Kinect's use of infrared LEDs is more power-hungry than Sony's stereoscopic solution, so the Xbox is tens of watts more efficient under load even with Kinect.

The power draw for the non-gaming uses of these consoles is really high for some reason.
 
Except the modules being used by the PS4 are higher density, which is not too commonplace for GPUs right now.
The low-end GTX 750 Ti says hello ;)

So far as I can tell, practically all brands of GTX 750 are using the HC03 bin, which is the exact same chip as the PS4's. I think that seals the deal for a cost advantage. The PCB has provisions for clamshell, and they still went for 4x 4Gbit instead of 8x 2Gbit parts. It's the higher density that provided the cost savings.

Considering they used the 6.0Gbps parts too, I would guess it means Samsung has pretty good yields for speed; otherwise they'd have to dump the 5.0Gbps parts on someone, and even the low-end cards don't use them.
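To spell out the two configurations being compared (GDDR5 devices are natively x32; in clamshell mode two devices share a channel at x16 each), a small sketch:

```python
# Capacity and bus width for the two ways a 128-bit card like the GTX 750
# (Ti) could reach 2 GB, per the clamshell discussion above.

def config(chips: int, gbit_per_chip: int, pins_per_chip: int):
    bus_bits = chips * pins_per_chip
    capacity_gb = chips * gbit_per_chip / 8
    return bus_bits, capacity_gb

# 4x 4Gbit parts at x32: what the cards actually ship with.
print(config(4, 4, 32))  # -> (128, 2.0): 128-bit bus, 2 GB

# 8x 2Gbit parts in clamshell (x16 per device): same bus width and
# capacity, but twice the packages to buy, place, and route.
print(config(8, 2, 16))  # -> (128, 2.0)
```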
 
I believe bkillian said that 4GB was the amount of memory planned for Durango and that it changed partway through development. Not sure how late.

As for Sony being lucky with GDDR5, I only mean that forecasting and predictions, especially years in advance, are rarely accurate. That would be the same for any company. You're more often than not going to be wrong, and Sony happened to be right. I'm sure they did a lot of research, like everyone else; I'm not downplaying that. They had planned on 4GB of RAM and were able to change it very late because the price ended up being favourable. That was not something in their control, and maybe something they weren't even able to forecast when they were initially planning their design.
I think 3dilletante, in his answer to one of my posts, raised a relevant point: the scale of the project.

MSFT went and designed its own audio DSP and integrated it into a super complex SoC; they worked on integrating the eSRAM too; they worked on Kinect, etc. By its very scale (a huge project) there was no room for a more flexible approach. They shipped on time, and so far there are no issues with the hardware; that is quite a feat in itself.
Lots of things could have gone wrong.

Sony set a less intensive job for itself, mostly integrating IP from AMD, with AMD in the lead actually.
I mean, for Sony, putting the PSV's innards together might have been a greater involvement for their hardware teams than putting the PS4 together. Not that I'm implying it is easy to put together a 300+mm2 SoC, but the scale of the project is still more limited than Durango's.

With regard to those 4Gb memory modules, I don't know when the professionals heard about them. I asked many times if they were coming, as I found it weird for DDR3 to make progress while GDDR5 "stalled" even though there was no replacement solution in sight (delay after delay for DDR4, and it seems that more exotic solutions were never really a possibility for a mass-produced device in 2013). I repeatedly got the same answer: no roadmap or info on such a product / it was not public data, or data that someone in the know could have released.
I would not believe that AMD, Nvidia, Sony and MSFT heard about it as late as we did (ahead of the PS4 unveiling), but when did they hear about it? :?:
 
I can tell you that we launched a product in the <$150 space using this density in October last year; that's already public (R7 260X).
 
Can you tell us since when AMD knew those 4Gb modules were coming? Or is that under NDA, or sensitive info given the context?

They may have had a decent guess or advance knowledge when they gave Bonaire a 128-bit bus, given how capacity-constrained the design would look from a future-proofing perspective if GDDR5 topped out at 2Gbit.

Sony likely had a good idea or was driving (edit: or helping drive) 4Gb as well. The volumes of GDDR5 are such that manufacturers don't run continuous production on the assumption that a market will absorb the output. The DRAM manufacturers would be more conservative about releasing new GDDR5 devices unless someone made it worth their while.

There were also rumors posted in this forum about 8Gbit chips being developed and specially ordered by a large customer.

edit: beaten
 
A few GPUs make it far from a commodity RAM.
I never claimed otherwise. I'm just saying that the higher 4Gb density doesn't seem to have a negative impact on price, and I supported this by pointing to the existence of products using 4Gb parts in situations where the PCB was clearly designed for either 2Gb or 4Gb parts.
 
Sony predicts PS4 will make more money than PS2. If so, ESRAM wasn't a requirement to avoid financial disaster.

IF Sony is already making money on PS4s sold at $400, then MS will be making money on the Xbox One sold at $400.

The APU is similar in size, and the DDR RAM in the Xbox is cheaper by a sizable amount.

So I don't see a problem. MS is already at 5M sold; they just have to keep putting out tons of games.
 
IF Sony is already making money on PS4s sold at $400, then MS will be making money on the Xbox One sold at $400.

The APU is similar in size, and the DDR RAM in the Xbox is cheaper by a sizable amount.

So I don't see a problem. MS is already at 5M sold; they just have to keep putting out tons of games.

You mean shipped.

Also, stopping production is going to increase price per unit. Not to mention inventory costs.

We don't have confirmation that the XB1 costs less to manufacture than the PS4. If anything, there are reports that the XB1 is actually more expensive. So I don't think the PS4 making money automatically means the XB1 is also making money.
 
Lately there are many papers from ISSCC 2014, especially on eSRAM/SRAM technology.

I will post two technical digests of 6T SRAM, from Samsung and TSMC;
both are using 16/14nm processes.

Why? Because it is a perfect time to ask numerous questions related to the Xbox One eSRAM,
especially the big question of whether the Xbox One SRAM is 6T or something else.
Of course the first thing to do is to estimate the area of the Xbox One SRAM.

For reference:
Jaguar + surrounding area = ~26 mm2
http://info.nuje.de/Jaguar_CU.jpg

X1 vs PS4 die shot:
http://images.anandtech.com/doci/7546/diecomparison.jpg

PS4 GPU area is ~80mm2; PS4 total = 328mm2
http://www.extremetech.com/extreme/...u-reveals-the-consoles-real-cpu-and-gpu-specs

Based on the above (we can extrapolate the SRAM area from these data), or using a ruler tool (etc.):
http://i.imgur.com/f8SYdYH.jpg
The Xbox One SRAM is laid out as two areas in total, each certainly smaller than the PS4 GPU or Xbox One GPU area.
Xbox One SRAM: ~40 mm2 per area (absolutely less than 45 or 50mm2).

And the common belief is that each such area holds 16MB and that the type is 6T.
The question is: based on what? Most of the people who said the Xbox One SRAM is 6T did not provide any comparison to the SRAM technology of TSMC, Samsung or others, nor even a paper or technical digest for comparison.

Now I am providing the latest 6T SRAM data from both Samsung and TSMC. Also, based on these data, the density of TSMC's 16nm is basically the same as Samsung's 14nm FF.

Samsung 128Mbit (16MB)
6T SRAM, 14nm FF
-----> area: 75.6 mm2 <-----
http://i.imgur.com/7exhOXk.jpg
http://i.imgur.com/FqRO9Il.jpg

TSMC 128Mbit (16MB)
6T SRAM, 16nm
----> area: 42.6 mm2 <----
http://i.imgur.com/n1BsRob.jpg
http://i.imgur.com/XHZwWrQ.jpg

Now, is it possible that the Xbox One SRAM is 6T, using less area than both examples, while still on 28nm? You be the judge... :D
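Taking the figures quoted above at face value, here is a quick density comparison; a rough sketch only, since the ~40 mm2 Xbox One number is a die-shot estimate and the ISSCC areas are for standalone test macros:

```python
# mm^2 per MB for each quoted 16 MB (128 Mbit) SRAM macro/region.
quoted_areas_mm2 = {
    "Xbox One eSRAM region (28nm, die-shot estimate)": 40.0,
    "Samsung 6T SRAM (14nm FF, ISSCC 2014)": 75.6,
    "TSMC 6T SRAM (16nm, ISSCC 2014)": 42.6,
}

MB_PER_MACRO = 16  # 128 Mbit = 16 MB

for name, area in quoted_areas_mm2.items():
    print(f"{name}: {area / MB_PER_MACRO:.2f} mm^2/MB")

# -> Xbox One: 2.50, Samsung 14nm: 4.73, TSMC 16nm: 2.66 mm^2/MB.
# By these numbers the 28nm region would be denser than the 16/14nm 6T
# test chips, which is the point being argued: the area estimate, the
# comparison, or the 6T assumption has to give.
```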
 
Transistor density can change according to design specs related to I/O, speed/performance & power characteristics.
 
Transistor density can change according to design specs related to I/O, speed/performance & power characteristics.

But it is impossible for density at 28nm to be better than at 14nm (14nm itself is already >2x; a real 14nm shrink would of course be ~4x, but currently the node name is more like a marketing term).
I can provide more examples.
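For what it's worth, the ">2x" and "~4x" figures line up with naive geometric scaling, where ideal area density goes as the square of the linear node ratio; real SRAM bitcells scale worse than the node name suggests, hence the "marketing term" caveat:

```python
# Ideal area scaling between nodes goes as the square of the linear
# feature-size ratio: density_gain = (node_old / node_new) ** 2.
for new_node_nm in (16, 14):
    gain = (28 / new_node_nm) ** 2
    print(f"28nm -> {new_node_nm}nm: ideal density gain ~{gain:.1f}x")

# -> ~3.1x for 16nm and ~4.0x for 14nm. Real SRAM cells land well below
# these ideals, hence ">2x" in practice and "~4x" only for a literal shrink.
```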
 
Stopping production?

http://www.gamerevolution.com/news/...ox-one-production-due-to-excess-supply--25469

While Sony struggles to keep stores stocked with PlayStation 4s, the Big M has plenty of its consoles lining store shelves. As such, Microsoft CFO Amy Hood went so far as to imply, according to Gamasutra, "that manufacturing of Xbox consoles will slow or stop, to allow retailers time to work through existing inventory."

During a recent investor call, Hood went so far as to say that "we do expect to work through some inventory in Q4," highlighting "channel inventory drawdown for Xbox consoles."

That was back in late April.

And I don't think that's surprising. Unless they're selling at least 500k a month, reducing production surely will happen. We projected ~1 million consoles per month way back in September, remember?
 