*spin-off* Choice of RAM configuration

GDDR5 doesn't even have a heat sink; that's a good indication this 70W figure is completely wrong. More than a watt per chip would burn it.
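As a rough sanity check (the chip count and per-chip density below are assumptions for illustration, not vendor figures), spreading 70W across a 16-chip 8GB pool works out to well over 4W per package:

```python
# Back-of-envelope check of the 70W GDDR5 claim.
# All figures are illustrative assumptions, not vendor specs.

total_power_claim_w = 70.0              # the disputed figure for the whole pool
pool_size_gbyte = 8
chip_density_gbit = 4                   # assume 4Gb GDDR5 packages
chip_count = pool_size_gbyte * 8 // chip_density_gbit   # -> 16 chips

per_chip_w = total_power_claim_w / chip_count
print(f"{chip_count} chips -> {per_chip_w:.1f} W per chip if the 70 W figure were real")

# An unsinked DRAM package is usually budgeted at roughly a watt or two
# (assumption), so ~4.4 W per bare chip looks implausible.
```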
 
The ESRAM was there before the 8GB of RAM. The size of the RAM pool was not responsible for the decision to use ESRAM.

I think the question is more along the lines of, "Was the ESRAM a necessity because of MS's desire to use cheap but slow DDR3 memory?"

Not the size of the RAM pool, but more the performance of said pool.

In my mind, MS's thought processes must have gone one of two ways:

1) Microsoft wants and needs lots of system memory, but doesn't want to pay a lot. That necessitates DDR3, which in turn necessitates some sort of fast on-chip memory (see the rough bandwidth sketch after this list). And with that taking up so much room, the GPU had to be modest to avoid the chip becoming XBOXHEUG.

2) The boffins in the Direct X department are head-over-heels about some sort of low latency memory shuffling scheme that keeps the GPU fed, and speeds CPU-GPU cooperation. Therefore, the ESRAM is job one. The core of the design. All else follows from that. No room for more CUs, and no actual need for blazing fast system RAM.
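For a sense of the gap that option 1 is working around, here's a minimal peak-bandwidth sketch; the 256-bit bus widths and the 2133/5500 MT/s data rates are the widely reported figures for the two consoles, used here as assumptions:

```python
# Peak-bandwidth back-of-envelope for the two memory routes in option 1.
# Bus widths and data rates are the commonly reported figures, treated as
# assumptions for the sake of the arithmetic.

def peak_bw_gbs(bus_width_bits: int, data_rate_mts: float) -> float:
    """Peak bandwidth in GB/s: bytes per transfer times transfers per second."""
    return bus_width_bits / 8 * data_rate_mts / 1000

ddr3 = peak_bw_gbs(256, 2133)    # ~68 GB/s for a 256-bit DDR3-2133 bus
gddr5 = peak_bw_gbs(256, 5500)   # ~176 GB/s for a 256-bit GDDR5 bus at 5.5 Gbps

print(f"256-bit DDR3-2133 : {ddr3:.0f} GB/s")
print(f"256-bit GDDR5-5500: {gddr5:.0f} GB/s")

# The ~100 GB/s shortfall is roughly what the fast on-chip pool has to cover
# for the GPU's most bandwidth-hungry clients (e.g. render targets).
```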
 
Drop the "G" from GDDR3 and swap the "D" in eDRAM for an "S", and all of a sudden people act like MS is doing something new. I see eSRAM as being just as analogous to eDRAM as GDDR5 is to XDR.

MS seems to have made design changes to its memory system to tackle some of the complaints about the 360, mainly the complaint that the eDRAM was too small.

Plus, GDDR5 is expensive because it's boutique memory that services a very small market. Unless the PS4 sells like a dog, Sony is probably going to need at least half a billion gigs of the stuff over the next 5-10 years and will probably end up being the largest consumer of GDDR5, in the end buying it at low prices only Sony is privy to, because it will be supplied by dedicated production lines attached to RAM contracts worth billions. The same reality would hold for an XB1 that sported GDDR5 and sold similarly to the 360.

Furthermore, it's easy to compare 12 CUs versus 18 CUs when you ignore MS's attempt to offload some of the tasks normally handled by a GPU and CPU. But that's not hard to do with MS acting like they have a nervous twitch and beads of sweat rolling off their forehead, because they're sitting in a police station with a vial of XB1 hidden in the crack of their ass. I guess that's the reality of a software background, where it's a lot easier to commandeer someone else's work and integrate it into a competing product. And MS's tight lips are nothing new: MSR has probably published more detailed papers on Cell than it has on Xenon, Xenos, or the eDRAM.

The AMD and MS console relationship stretches back almost a decade, with their relationship in the PC space going well beyond that. A lot of the hardware AMD has sold over the years has been paired with MS software. Under those circumstances, is it rational to believe that AMD would throw older tech at MS and provide a more forward-thinking solution to a competitor? I doubt it, especially since AMD gets to spend some time exclusively hawking DX11.2 hardware.

Given what we have seen of Durango, I'm not of a mind that MS simply asked AMD for a base design and then went crazy with their own ideas. That's like going to Mr Miyagi for some basic instruction and then showing up a few months later with your own form of karate. It's my belief that MS asked for certain performance metrics, and AMD presented designs and worked with MS to come up with new ones to accommodate MS's needs.
 
In the end, the difference would probably be at most 6 W against 16 W. Whether that's a lot or not depends on the criteria. From a wattage-per-bandwidth perspective, GDDR5 is probably equal or better, even when comparing 40nm to 20nm modules. From a wattage-per-capacity perspective, GDDR5 probably gets only about half of what DDR3 does.

I'm not sure Samsung's figures take into account the memory controller on the other side of the interface. I don't think they can control for that.
The bulk of the DRAM's power consumption is expended on the interface, and the memory controller on the APU will expend slightly less power.

The upper 20s doesn't seem out of line. If we assume a near doubling of the memory's power consumption once the APU side is figured in, it's ~10-12 W vs ~28-32 W.
20 W is significant.
Per unit of bandwidth, GDDR5 can be more power-efficient, but the absolute power draw when faced with a TDP ceiling is what's significant.
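Here's a minimal sketch of that trade-off, using the rough estimates above (~10-12 W vs ~28-32 W, both assumed) and the commonly quoted peak bandwidths: per GB/s the two land in the same ballpark with these particular numbers (GDDR5 can come out ahead with other assumptions), but the absolute gap against a fixed TDP ceiling is on the order of 20 W.

```python
# Illustrative only: the ~10-12 W and ~28-32 W subsystem estimates come from
# the paragraph above (memory devices plus the APU-side interface), and the
# peak bandwidths are the commonly quoted 68 GB/s (256-bit DDR3-2133) and
# 176 GB/s (256-bit GDDR5-5500).

ddr3  = {"power_w": 11.0, "bandwidth_gbs": 68.0}    # assumed midpoints
gddr5 = {"power_w": 30.0, "bandwidth_gbs": 176.0}

for name, m in (("DDR3 ", ddr3), ("GDDR5", gddr5)):
    print(f"{name}: {m['power_w'] / m['bandwidth_gbs']:.3f} W per GB/s")

delta_w = gddr5["power_w"] - ddr3["power_w"]
print(f"Absolute gap: ~{delta_w:.0f} W against a fixed TDP ceiling")
```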

An on-die memory pool should be much lower in power consumption for equivalent bandwidth.
 
The ESRAM was there before the 8GB of RAM. The size of the RAM pool was not responsible for the decision to use ESRAM.

That's very interesting. Not what I would have expected. It'll be interesting to hear exactly how it is being used, to maybe get some sense of why it was part of their design from the start. I suppose there is still the price issue, allowing them to use cheaper DDR3, but is it really a win on price overall, when you have to fab a more complex chip? Serious question there. I have no idea how much the ESRAM adds to the cost.
 
Cost of the esram will fall considerably over the console lifetime as die revisions take advantage of process node shrinks and the soc size also shrinks. The price of gddr5 is relatively fixed over its lifetime.
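To illustrate that scaling argument, here is a toy model; the eSRAM block area, the per-node shrink factor, and the cost-per-mm2 figure are all assumptions made up for illustration, since the real numbers aren't public:

```python
# Toy model of the "eSRAM cost falls with node shrinks" argument.
# Every number is an assumption for illustration, not a real die measurement.

esram_area_mm2 = 80.0        # assumed area of the 32MB eSRAM block at 28nm
shrink_per_node = 0.6        # assumed area scaling per full node step
cost_per_mm2 = 0.10          # assumed relative silicon cost per mm2, held constant

area = esram_area_mm2
for node in ("28nm", "20nm", "14/16nm"):
    print(f"{node}: ~{area:.0f} mm2 of eSRAM, relative cost ~{area * cost_per_mm2:.1f}")
    area *= shrink_per_node

# The external GDDR5 bill, by contrast, is set by the DRAM market rather than
# by the console maker's own die revisions.
```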
 
Even though it doesn't show in the launch price, I think that if MSFT were cornered and had to fight on price first and foremost, they have set themselves up to be in a good position.
The XB1 now includes Kinect; in case of absolute necessity it could be removed from some SKUs, and I'm close to thinking that they may also release a SKU without an optical drive.
The price of the GDDR5 in the PS4 is going to go down as Sony moves to fewer memory chips over time, though the same applies to MSFT. The difference is that I don't expect the GDDR5 premium to disappear.
MSFT stated that they will sell at around break-even at launch, or even make a small profit. Putting everything together, I think that if things were to get out of hand they could price a hypothetical new SKU really low (say, when the 20nm revision of the chip hits the streets).

I think MSFT's technical choices were really business oriented; it doesn't show now, but that doesn't mean it couldn't translate into a competitive edge if needed.

In case it's not clear, I agree with your post: MSFT made some wise decisions even if it isn't obvious now with their current business model for the system, etc.
 
I wonder why they didn't go with edram which would have offered much larger sizes and higher bandwidth in a smaller footprint. I would think size and bandwidth would trump any advantages lower latency would provide.

Cutting corners because it would be cheaper with esram in the long run?
 
eDRAM has no bandwidth advantage, only power and die area ;)

The issue with eDRAM is that it is pretty boutique and ties you to a couple of foundries, the roadmaps are not as aggressive as with standard lithography, etc. I suspect it is more expensive.
See what could happen to Nintendo if Renesas's troubles are mishandled.
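To put rough numbers on the die-area point, here's a cell-level sketch; both bitcell sizes are ballpark assumptions for ~28-32nm-class processes, not figures from any specific foundry:

```python
# Rough cell-level area comparison for 32MB of on-die storage.
# Bitcell sizes are ballpark assumptions for ~28-32nm class processes.

capacity_bits = 32 * 1024 * 1024 * 8       # 32 MB
cells = {
    "6T SRAM   ": 0.12,   # assumed bitcell area in um^2
    "1T1C eDRAM": 0.03,   # assumed bitcell area in um^2
}

for name, cell_um2 in cells.items():
    # bitcells only; ignores sense amps, redundancy, routing
    array_mm2 = capacity_bits * cell_um2 / 1e6
    print(f"{name}: ~{array_mm2:.0f} mm2 of raw bitcell area for 32 MB")

# eDRAM's density advantage is real, but as noted above it ties you to a
# handful of foundry processes, which is the trade-off being weighed here.
```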
 

Spot on

One soc, one foundry, one design, hence X One
 
Cost of the esram will fall considerably over the console lifetime as die revisions take advantage of process node shrinks and the soc size also shrinks. The price of gddr5 is relatively fixed over its lifetime.
GDDR5 was 70nm in 2008 and it's 30nm right about now.
New nodes. Higher density. Fewer chips. Lower cost.
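A quick illustration of the "higher density, fewer chips" point; the density steps are just assumed for the arithmetic:

```python
# How many packages a fixed 8 GB pool needs as per-chip density doubles.
# Density steps are assumed purely for illustration.

pool_gbit = 8 * 8    # 8 GB expressed in gigabits

for density_gbit in (2, 4, 8):
    print(f"{density_gbit} Gb chips -> {pool_gbit // density_gbit} packages for 8 GB")
```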
 
That starts delving into questions about the costs of using RAM that doesn't get produced at the massive scale of bog-standard DDR3, and what happens to GDDR5's availability or production cost for an alleged 10-year lifespan for the console.
GDDR5 is nearer to the end of its life span, especially if graphics boards move to something new in the next decade.
DDR3 is due to be replaced, but its consumption has a longer tail, and it starts out orders of magnitude higher.

I'm curious if there is consideration of moving to a new memory format, if only for cost savings.
There are some crazy-high bandwidth standards coming out in several years, and eventually using them for more mundane bandwidths could lead to savings in power and manufacturing.
 
Well, the ESRAM was tied more to the type of RAM used than to the capacity chosen; it was about bandwidth in the first place anyway.

It was pretty clear that it was either a large pool of high-speed GDDR5, or a huge pool of DDR3 plus eSRAM. 8GB of it was just a choice of "how large is enough."

No. We know that the original plans were to have 4GB of DDR4 along with 32MB of eSRAM. That's in the Yukon leak from mid-2010. This narrative about the inclusion of the eSRAM and how it related to the choice of DDR3 etc has been falsified since DF asserted it back in February.

You are making assumptions based on Sony's expressed considerations, then applying that to MS. Sony was the one who looked at embedded RAM pools to boost bandwidth and performance but instead opted for a simpler design as their priority. MS's considerations almost certainly would have been well informed by the utility of having eDRAM on 360 since 2005.
 
That starts delving into questions about the costs of using RAM that doesn't get produced at the massive scale of bog-standard DDR3, and what happens to GDDR5's availability or production cost for an alleged 10-year lifespan for the console.
History doesn't always repeat itself, but iSuppli estimated the XDR in the PS3 at $50 in 2006. Three years later the slim used 4x512Mb chips costing only $9.80, and the new super slim has only a pair of 1Gb chips, which cost even less. That seems to contradict all the doom and gloom about XDR supposedly never dropping in cost. But I have no idea why. Could they have negotiated the contracts very far in advance? Could they do the same for GDDR5?
GDDR5 is nearer to the end of its life span, especially if graphics boards move to something new in the next decade.
DDR3 is due to be replaced, but its consumption has a longer tail, and it starts out orders of magnitude higher.

I'm curious if there is consideration of moving to a new memory format, if only for cost savings.
There are some crazy-high bandwidth standards coming out in several years, and eventually using them for more mundane bandwidths could lead to savings in power and manufacturing.
I really loved the idea that was proposed earlier, upgrading 256-bit DDR3-2133 to 128-bit DDR4-4166; it looks so cool on paper :smile:
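The appeal is easy to see with a quick calculation; DDR4-4166 is the hypothetical speed grade from that proposal, so treat the numbers as illustrative:

```python
# Peak bandwidth of the current wide DDR3 bus vs the proposed narrower DDR4 bus.
# DDR4-4166 is the hypothetical speed grade from the proposal above.

def peak_bw_gbs(bus_bits: int, rate_mts: float) -> float:
    return bus_bits / 8 * rate_mts / 1000

print(f"256-bit DDR3-2133: {peak_bw_gbs(256, 2133):.1f} GB/s")
print(f"128-bit DDR4-4166: {peak_bw_gbs(128, 4166):.1f} GB/s")

# Roughly the same ~67-68 GB/s either way, but the DDR4 option uses half the
# bus width: fewer pins, traces, and memory packages to pay for.
```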

For GDDR5, there are 4Gbps GDDR5M parts sampling in Q3 2013, so I wonder how soon higher speeds would be available (say, 5.5Gbps as a random number). They claim the price will be competitive with DDR3/4, so it would be a pretty straightforward cost cut.
 
No. We know that the original plans were to have 4GB of DDR4 along with 32MB of eSRAM. That's in the Yukon leak from mid-2010. This narrative about the inclusion of the eSRAM and how it related to the choice of DDR3 etc has been falsified since DF asserted it back in February.

You are making assumptions based on Sony's expressed considerations, then applying that to MS. Sony was the one who looked at embedded RAM pools to boost bandwidth and performance but instead opted for a simpler design as their priority. MS's considerations almost certainly would have been well informed by the utility of having eDRAM on 360 since 2005.

While you mistook my assumption, you also made a big assumption.

You're making the assumption that they made the decision based on the eDRAM; I made the assumption that they opted for the best choice for as much RAM as possible to deal with the multimedia multitasking requirements (while keeping costs and availability in check), not a "simpler design", which was Sony's design choice.

We all agree that Sony got lucky, with 4Gb GDDR5 chips becoming available just in time to make 8GB of GDDR5 mass production possible.

Choosing 4GB of DDR4 back in 2010 may have seemed like a good idea, but looking at how DDR4 is doing now, it isn't too hard to see why they changed to DDR3 instead of GDDR5.
 
History doesn't always repeat itself, but iSuppli estimated the XDR in the PS3 at $50 in 2006. Three years later the slim used 4x512Mb chips costing only $9.80, and the new super slim has only a pair of 1Gb chips, which cost even less. That seems to contradict all the doom and gloom about XDR supposedly never dropping in cost. But I have no idea why. Could they have negotiated the contracts very far in advance? Could they do the same for GDDR5?

It's one thing to be the main consumer of a given piece of silicon and have costs ramp down over time with new process nodes, etc. That's XDR.

It's the same when you're one of many consumers of a given piece of silicon and costs ramp down over time with new process nodes, etc. That's DDR3.

It's somewhat more complicated when you will for the time being be one of 3 major consumers of a given piece of silicon, but then midway through your product life cycle, 2 of the major players stop using it. Economies of scale suddenly diminish greatly.

DDR3 has a long tail life ahead as it is used in all computing devices from servers down to tablets. It doesn't have a relatively abrupt end of support as usually happens with GDDR.

In other words there's a much longer transition with a long period of overlap between product generations for standard DDR versus GDDR. 5 years from now, Sony may be the only consumer of GDDR5 chips. On the other hand there will still be a plethora of companies using DDR3, hence much greater volume, and lower margins required to still make a healthy profit for the memory manufacturer. For example DDR(1) is still being manufactured. GDDR(1)?

Regards,
SB
 

AMD says otherwise.

[Image: gddr5_01_gpu-ram-roadmap1.jpg (AMD GPU memory roadmap)]
 
While you mistook my assumption, you also made a big assumption.

You're making the assumption that they made the decision based on the eDRAM; I made the assumption that they opted for the best choice for as much RAM as possible to deal with the multimedia multitasking requirements (while keeping costs and availability in check), not a "simpler design", which was Sony's design choice.

We all agree that Sony got lucky, with 4Gb GDDR5 chips becoming available just in time to make 8GB of GDDR5 mass production possible.

Choosing 4GB of DDR4 back in 2010 may have seemed like a good idea, but looking at how DDR4 is doing now, it isn't too hard to see why they changed to DDR3 instead of GDDR5.

When I said 'almost certainly' it wasn't an assumption. It's a conclusion based on the fact that the 32MB of eSRAM predated both the size of the RAM as well as the type. So that's not assuming anything. It's simply me noting a fact that, taken in context, makes sense. I might be wrong, but I highly doubt it.

Your assumption, which is an assumption because it totally ignores available info, has already been falsified. Yukon kills that theory dead in the water. I don't really care what the consensus amongst forumites is. The timeline for the Yukon setup isn't up for debate.

FACT: The 32MB of eSRAM was NEVER included in X1's design as a result of using DDR3 RAM nor having 8GB of it. Period.

If you'd like to talk about the utility of including it in terms of boosting bandwidth, have at it. But regurgitating DF's ignorant narrative isn't helpful.
 