*spin-off* Choice of RAM configuration

The slide above says nothing about the longevity of GDDR5. It only talks about its ramp and how these two interlinked standards will co-exist in the context of serving different GPU memory markets.


If/when GDDR6 becomes ratified, expect most GPU manufacturers to jump ship in 1 generation.
 
When I said 'almost certainly' it wasn't an assumption. It's a conclusion based on the fact that the 32MB of eSRAM predated both the size and the type of the RAM. So that's not assuming anything; it's simply me noting a fact that, taken in context, makes sense. Might be wrong, but I highly doubt it.

Your assumption, which is an assumption as it totally ignores viable info, is falsified already. Yukon kills that theory dead in the water. I don't really care what the consensus amongst forumites is. The timelines for the Yukon setup aren't up for debate.

FACT: The 32MB of eSRAM was NEVER included in X1's design as a result of using DDR3 RAM nor having 8GB of it. Period.

If you'd like to talk about the utility of including it in terms of boosting bandwidth, have at it. But regurgitating DF's ignorant narrative isn't helpful.

How does the Yukon setup kill the assumption that Microsoft is aiming for as much RAM as possible? I don't see the connection.

My assumption was always:
Microsoft used eSRAM to solve a bandwidth problem in order to open up the possibility of putting in as much RAM as possible while not hurting performance.
If the chosen RAM has relatively anemic bandwidth but has good capacity, then it fits the assumption.
I don't care what specific size was chosen or what specific type was chosen.
As far as I'm concerned, both DDR4 and DDR3 fit the bill perfectly. They probably chose DDR3 over DDR4 for an entirely different reason: availability.


http://www.samsung.com/global/business/semiconductor/file/media/DDR4_Brochure-0.pdf

First of all, DDR4 and eSRAM were announced together. Unless I'm wrong, the Yukon leak does not specify which one was chosen first.

Second of all, DDR4 and DDR3 have similar performance (they don't differ from each other anywhere near as much as DDR3 differs from GDDR5).

Lastly, surprise! DDR4 is not even readily available in 2013, possibly forcing Microsoft to switch to DDR3 somewhere between 2010 and 2012, when they acknowledged the issue. Microsoft probably didn't have a choice BUT to switch to DDR3.

BTW, according to the Yukon leak, 4GB of DDR4-3200 (which is on the roadmap for 2015+) on a 128-bit bus provides 51.2 GB/s. 2013 can probably see DDR4-2400, which provides a mere 38.4 GB/s.

Do you think 51.2 GB/s or 38.4 GB/s (the more realistic number) is sufficient? I don't think so, when DDR3 at 68 GB/s doesn't cut it.
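To make the arithmetic behind those figures explicit, here's a quick Python sketch of the peak-bandwidth math (bus width in bytes times transfer rate). The 128-bit bus matches the Yukon figures above; the 256-bit bus for the 68 GB/s DDR3 configuration is my assumption.

```python
# Quick sketch of the peak-bandwidth arithmetic behind the numbers above:
# bandwidth = (bus width in bytes) x (transfer rate). The 128-bit bus matches
# the Yukon figures; the 256-bit bus for the 68 GB/s DDR3 case is my assumption.

def peak_bw_gbs(bus_bits, transfer_rate_mt):
    """Peak bandwidth in GB/s from bus width (bits) and transfer rate (MT/s)."""
    return bus_bits / 8 * transfer_rate_mt / 1000

print(peak_bw_gbs(128, 3200))  # DDR4-3200 on a 128-bit bus -> 51.2
print(peak_bw_gbs(128, 2400))  # DDR4-2400 on a 128-bit bus -> 38.4
print(peak_bw_gbs(256, 2133))  # DDR3-2133 on a 256-bit bus -> ~68.3
```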


If you think the type of main RAM and the choice to use eSRAM aren't somehow connected, then clearly somebody's not thinking clearly.
 
The slide above says nothing about the longevity of GDDR5. It only talks about its ramp and how these two interlinked standards will co-exist in the context of serving different GPU memory markets.


If/when GDDR6 becomes ratified, expect most GPU manufacturers to jump ship in 1 generation.

The slide compares how fast DDR3 and GDDR5 replaced their predecessors, and it is pretty clear they did it at relatively the same pace, which is why I brought the slide up.

Therefore the argument that GDDR5 will quickly be outdated can be used against DDR3 as well.
I don't see a clear reason to believe either will be replaced faster than the other once their replacements enter the market.
 
History doesn't always repeat itself, but iSuppli estimated the XDR in the PS3 at $50 in 2006. Three years later the slim had a 4x512Mb configuration for only $9.80, and the new super slim has only a pair of 1Gb chips, which cost even less. That seems to contradict all the doom and gloom about XDR supposedly never dropping in cost. But I have no idea why. Could they have negotiated the contracts very long in advance? Could they do this for GDDR5?
I'm not sure what factors played into iSuppli's estimate in 2006, or how accurate it was. It may have been initially high because XDR was a new memory type, and around that time Rambus and the DRAM makers may have still been trying to screw each other over.
2009 is about where the DRAM market collapsed, although I don't know how much that would have impacted XDR. The DRAM players don't want to repeat that time period, though wishing and doing are different things.

The two-device layout for XDR is as far as you can go with density increases because bandwidth can't be maintained after another doubling of density without creating a device with an interface double what the spec provisioned.
The same limit would apply to GDDR5. For 8 GB of capacity and a 256-bit bus, one doubling brings things down to 8 full-width devices, and that's as far as it goes.
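A rough sketch of that device-count limit, assuming a fixed 256-bit bus, 8 GB of capacity, and GDDR5's 32-bit-per-device interface (16-bit per device in clamshell mode); the density steps and clamshell assumption are mine, not from the post:

```python
# Sketch of the density-doubling limit described above, assuming a fixed
# 256-bit bus, 8 GB of capacity, and GDDR5's 32-bit-per-device interface
# (16-bit per device in clamshell mode). The density steps are assumptions.

BUS_BITS = 256          # total memory interface on the SoC
CAPACITY_GBIT = 8 * 8   # 8 GB expressed in gigabits
MAX_DEVICE_IO = 32      # widest interface the GDDR5 spec defines per device

for density_gbit in (4, 8, 16):
    devices = CAPACITY_GBIT // density_gbit   # chips needed to reach 8 GB
    io_per_device = BUS_BITS / devices        # width each chip must drive
    verdict = "fits" if io_per_device <= MAX_DEVICE_IO else "exceeds the x32 spec"
    print(f"{density_gbit} Gbit chips: {devices} devices at "
          f"{io_per_device:.0f} bits each -> {verdict}")
```

With these assumptions, 16 chips at 16 bits each or 8 chips at 32 bits each both work, but the next doubling would demand a 64-bit device that the spec doesn't define.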

GDDR5 on a per device level doesn't seem to be starting out as bad as XDR did, which seems reasonable since it's a mature standard with multiple consumers.
The peak price may not have as far to drop before reaching its final cost-reduced level as XDR's did, and the bus restriction means there will be four times the number of chips.


For GDDR5, there are 4Gbps GDDR5M parts sampling in Q3 2013, so I wonder how soon the higher speeds would be available (say, 5.5Gbps as a random number). They claim the price would be competitive with DDR3/4, so it would be a pretty straightforward cost cut.
GDDR5M has IO widths of 8x and 16x defined, from what I can see. That would never allow fewer than 16 devices as long as the bus supported by the APU isn't switched out.
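A tiny check of the chip-count floor that implies, assuming the APU keeps a 256-bit memory interface (my assumption):

```python
# Minimum chip count if GDDR5M only offers x8 and x16 interfaces, assuming a
# 256-bit bus on the APU (my assumption).
bus_bits = 256
for io_width in (8, 16):
    print(f"x{io_width} GDDR5M parts: at least {bus_bits // io_width} chips "
          f"to fill a {bus_bits}-bit bus")
```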


The slide compares how fast DDR3 and GDDR5 replaced their predecessors, and it is pretty clear they did it at relatively the same pace, which is why I brought the slide up.

Therefore the argument that GDDR5 will quickly be outdated can be used against DDR3 as well.
That's for graphics memory, which is a tiny market compared to the standard memory types. The transitions there were slower, and it would take longer for that big pile to reduce down to the scale of add-in GPUs.
 
AMD says otherwise.

I'm not sure what you are trying to prove with a table of memory used in graphics cards?

The whole volume of memory for AMD's entire graphics lineup (and hell let's throw in Nvidia's graphics lineup as well) is but a very small fraction of the total worldwide DDR (1, 2, or 3) at any given moment.

And of that, GDDR doesn't even constitute the entire graphics card lineup. Hell, DDR (1, 2, and 3) represents over half of all graphics card memory at any given moment in that graph.

Basically, part of why GDDR is so expensive is that the volume of GDDR being used is quite likely more than an order of magnitude less than the volume of DDR being used in devices.

And even then it's hardly relevant in any way to DDR. DDR1 is still manufactured and used in devices. Just not in consumer PCs anymore and not in graphics cards, obviously. GDDR1 isn't being used in anything.

Hell, let's use that slide for something, though. Take a look at the volume of GDDR3. GDDR3 was introduced in 2004, and that graph already has AMD basically phasing it out by the end of 2010: a 6-year life cycle. And you expect GDDR5 to have a life of potentially 15 years (2008 - 2023) or more in anything except the PS4? Basically, as soon as a successor to GDDR5 comes out, all the GPU manufacturers will relatively swiftly start to swap it out. It took roughly 2 years for GDDR5 to basically supplant GDDR3 for AMD. And yes, the same thing happened with DDR2 versus DDR3 in graphics cards, but obviously not for all DDR memory consumers. So if DDR were only used in graphics cards, it too would suffer the same sharp decline when a new memory technology is introduced.

DDR1 and DDR2 continue to be produced because there are applications for them in industrial and commercial machines. Nothing like that exists for GDDR.

Regards,
SB
 
That's for graphics memory, which is a tiny market compared to the standard memory types. The transitions there were slower, and it would take longer for that big pile to reduce down to the scale of add-in GPUs.

[Image: GPU memory roadmap slide (gddr5_01_gpu-ram-roadmap1.jpg)]



[Image: iSuppli DRAM market-share forecast (isuppliddr4-1.jpg)]



I don't see the discrepancy to matter. Both started in 2008, and both took 3 years time to kill off their predecessors. In the graphics department they've done the job by early 2011 and in global market share they also achieved 89% in 2011.
 
I don't see the discrepancy to matter. Both started in 2008, and in 2~3 years' time they've killed off their predecessors. In the graphics department they've done the job by early 2011, and in global market share they also achieved 89% in 2011.

And 10% of the DDR memory market will still be much much larger than 100% of the graphics memory market. So if only 1% of the entire DDR memory market was DDR3, it would still have far greater volume than GDDR5 if it is only being used in the PS4.

Regards,
SB
 
So what? If it doesn't scale price-wise, they'll just revise the memory controller for a new memory standard.
With the unpredictable timing caused by caches, threads, the OS/apps, network traffic and so on in modern consoles, games failing because of such a change is quite unlikely.
 
Actually, looking at Hynix's sales numbers, it's not quite as bad as for others. Graphics memory represents roughly 6% of Hynix's memory revenue. So in terms of unit volume it's likely somewhere between 1% and 3% of total memory production, as GDDR carries a price premium over DDR.
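A back-of-the-envelope version of that estimate; the 2-4x per-bit price premium range is my guess, not a Hynix figure:

```python
# Back-of-the-envelope version of the estimate above. The 2-4x per-bit price
# premium for graphics DRAM is an assumption, not a Hynix figure.
revenue_share = 0.06  # graphics memory as a share of Hynix's memory revenue
for premium in (2, 3, 4):
    print(f"{premium}x price premium -> ~{revenue_share / premium:.1%} of bit volume")
```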

I wish Hynix would show unit volume for memory like they do for NAND.

Regards,
SB
 
And 10% of the DDR memory market will still be much much larger than 100% of the graphics memory market. So if only 1% of the entire DDR memory market was DDR3, it would still have far greater volume than GDDR5 if it is only being used in the PS4.

Regards,
SB

Xbox 360- 512 MB of GDDR3 RAM clocked at 700 MHz

So you're saying it also must be a great deal for them. Doom and gloom for Microsoft?

Volume means something, but definitely not as much as new processes. A large part of the high premium for GDDR5 you can probably link to the 40nm processes it's made on versus the 20nm processes of DDR3. If Sony and Microsoft can still secure supplies of GDDR3 and XDR in 2013, when there's barely any supply of them to be seen, there's really no reason to believe there will be supply issues with GDDR5 or DDR3.

You're predicting a lot of doom and gloom and expecting them to have no idea how to react, respond, or plan ahead for these situations, even with past history showing that they handle them perfectly fine.
 
So what? If it doesn't scale price-wise, they'll just revise the memory controller for a new memory standard.
With the unpredictable timing caused by caches, threads, the OS/apps, network traffic and so on in modern consoles, games failing because of such a change is quite unlikely.

So what? The point is that there was a reason Microsoft chose DDR3 as the main pool of memory for the system. GDDR is and will be far more expensive in large quantities. And while density has gone up and thus cost per GB has gone down for GDDR5, the same has happened for DDR3.

That of course impacts how they design the console, and it's why PRT and fast eSRAM are likely so important to the system. Along with the game profiling they've been doing, it allowed them to make that choice. The question, of course, is whether profiling modern-day games will somewhat accurately predict how games use system resources in the upcoming generation.

And it isn't as if Microsoft's profiling was the only indicator that a small fast pool of RAM could be beneficial. Sony saw the same thing, but they decided to take the safer and more expensive bet of just going with GDDR5.
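Purely as an illustration of that "small fast pool plus large slow pool" idea, not of how Microsoft's tools actually work, here's a minimal sketch that greedily keeps the most bandwidth-hungry render targets in a 32 MB fast pool and spills the rest to main memory. All surface names, sizes, and demand scores are made up.

```python
# Minimal sketch of the 'small fast pool + large slow pool' idea, purely as
# an illustration -- not how Microsoft's actual tools work. Surface names,
# sizes and bandwidth-demand scores are all made up.

ESRAM_BUDGET_MB = 32

# (name, size in MB, rough bandwidth-demand score) -- hypothetical surfaces
surfaces = [
    ("color_buffer",    8, 10),
    ("depth_buffer",    4,  9),
    ("gbuffer_normals", 8,  7),
    ("shadow_map",      8,  5),
    ("post_fx_temp",    8,  3),
]

def place_surfaces(surfaces, budget_mb):
    """Greedily keep the most bandwidth-hungry surfaces in the fast pool."""
    fast, slow, used = [], [], 0
    for name, size_mb, _demand in sorted(surfaces, key=lambda s: -s[2]):
        if used + size_mb <= budget_mb:
            fast.append(name)
            used += size_mb
        else:
            slow.append(name)
    return fast, slow

fast, slow = place_surfaces(surfaces, ESRAM_BUDGET_MB)
print("eSRAM:", fast)   # the highest-demand render targets
print("DDR3 :", slow)   # everything else stays in main memory
```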

Regards,
SB
 
Xbox 360- 512 MB of GDDR3 RAM clocked at 700 MHz

So you're saying it also must be a great deal for them. Doom and gloom for Microsoft?

Volume means something, but definitely not as much as new processes. A large part of the high premium for GDDR5 you can probably link to the 40nm processes it's made on versus the 20nm processes of DDR3. If Sony and Microsoft can still secure supplies of GDDR3 and XDR in 2013, when there's barely any supply of them to be seen, there's really no reason to believe there will be supply issues with GDDR5 or DDR3.

You're predicting a lot of doom and gloom and expecting them to have no idea how to react, respond, or plan ahead for these situations, even with past history showing that they handle them perfectly fine.

I actually wouldn't be surprised if GDDR3 is more expensive for Microsoft now than it was 3 years ago. And if it is, then that likely also plays a role in why they would choose DDR3 over GDDR5.

And if you are the only consumer of a memory technology and you want it on a new node, you are going to be the only one paying for that node transition, versus something like DDR3, where that cost will be spread among multiple clients since the volume is so much larger.

It's far more likely that when Sony is the only major consumer of GDDR5, they will just stick to the existing node, as that could be cheaper than paying to transition to a new one.

I haven't looked at any of the Xbox 360 revisions over the years, but have they bothered to move the memory modules to a new node in the past 3-5 years?

Oh, and yes, volume does quite often trump node transitions in terms of cost to consumers. It's the main reason that so many pieces of silicon for modern electronics are still manufactured on very old process nodes.

Nvidia had a nice graph of where they predicted the inflection point of 20 nm becoming cost equal to 28 nm. It was, IIRC, well over a year after they expected production of 20 nm class products. For the first 1-2 years of a new node, you mostly go to it if you need either greater performance or lower power consumption. You don't do it in the first year of a node for cost reasons, generally.
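Here's a toy model of that kind of crossover, with invented wafer costs, die counts, and yield ramp (the real Nvidia figures weren't published in this thread): a new node starts out more expensive per good die and only wins once yields mature.

```python
# Toy model of a node-cost crossover. Wafer costs, die counts and the yield
# ramp are all invented for illustration; the real figures aren't in this thread.

def cost_per_good_die(wafer_cost, dies_per_wafer, yield_frac):
    return wafer_cost / (dies_per_wafer * yield_frac)

DIES_28, WAFER_28, YIELD_28 = 200, 5000, 0.90   # mature node (assumed numbers)
DIES_20, WAFER_20 = 360, 9000                   # denser die, but pricier wafer

for quarter, yield_20 in enumerate((0.40, 0.55, 0.70, 0.82, 0.92), start=1):
    c28 = cost_per_good_die(WAFER_28, DIES_28, YIELD_28)
    c20 = cost_per_good_die(WAFER_20, DIES_20, yield_20)
    cheaper = "20nm" if c20 < c28 else "28nm"
    print(f"Q{quarter}: 28nm ${c28:.2f} vs 20nm ${c20:.2f} per good die -> {cheaper} cheaper")
# With these made-up numbers the new node only wins around the fifth quarter,
# i.e. well over a year after it enters production.
```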

Regards,
SB
 
I actually wouldn't be surprised if GDDR3 is more expensive for Microsoft now than it was 3 years ago. And if it is, then that likely also plays a role in why they would choose DDR3 over GDDR5.

And if you are the only consumer of a memory technology and you want it on a new node, you are going to be the only one paying for that node transition, versus something like DDR3, where that cost will be spread among multiple clients since the volume is so much larger.

It's far more likely that when Sony is the only major consumer of GDDR5, they will just stick to the existing node, as that could be cheaper than paying to transition to a new one.

I haven't looked at any of the Xbox 360 revisions over the years, but have they bothered to move the memory modules to a new node in the past 3-5 years?

Oh, and yes, volume does quite often trump node transitions in terms of cost to consumers. It's the main reason that so many pieces of silicon for modern electronics are still manufactured on very old process nodes.

Nvidia had a nice graph of where they predicted the inflection point of 20 nm becoming cost equal to 28 nm. It was, IIRC, well over a year after they expected production of 20 nm class products. For the first 1-2 years of a new node, you mostly go to it if you need either greater performance or lower power consumption. You don't do it in the first year of a node for cost reasons, generally.

Regards,
SB

http://www.digitimes.com/photogallery/showphoto.asp?ID=5366

Hynix 20nm 4Gb GDDR3
Photo: Company [Sep 19, 2012]

SK Hynix has developed 20nm-class 4Gb(gigabit) graphics DDR3 DRAM, designed for notebooks requiring low-power consumption.
The new 4Gb GDDR3, built using Hynix' 20nm-class technology, runs at 1.35V - 30% less power than the previous 30nm-class 1.5V product. Operating at low-voltage levels, the new chip delivers desktop-level graphics performance.
Obviously they do, all the way down to 20nm.
 
The slide compares how fast DDR3 and GDDR5 replaced their predecessors, and it is pretty clear they did it at relatively the same pace, which is why I brought the slide up.

Therefore the argument that GDDR5 will quickly be outdated can be used against DDR3 as well.
I don't see a clear reason to believe either will be replaced faster than the other once their replacements enter the market.

You forget that DDR3 isn't only used for graphics, while GDDR5 is.
 
I don't see the discrepancy to matter. Both started in 2008, and both took 3 years time to kill off their predecessors. In the graphics department they've done the job by early 2011 and in global market share they also achieved 89% in 2011.

A 2010 article, which predates the hit that the low-end discrete market took from on-die GPUs, puts the whole graphics memory segment into perspective.

http://www.isuppli.com/Memory-and-Storage/News/Pages/Graphics-Memory-Faces-Slower-Growth-Compared-to-Overall-DRAM-Market.aspx

DDR2, even when "dead", was projected to be generally as alive as the whole of graphics memory until 2014.
 
GDDR5M has IO widths of 8x and 16x defined, from what I can see. That would never allow fewer than 16 devices as long as the bus supported by the APU isn't switched out.

That's for graphics memory, which is a tiny market compared to the standard memory types. The transitions there were slower, and it would take longer for that big pile to reduce down to the scale of add-in GPUs.
Could it still be useful as a one-time intermediate solution to lower cost, until stacked memory becomes possible? One of the claims was that GDDR5 needs expensive packaging for heat dissipation (it's a 170-ball BGA, lots of power, etc.) while GDDR5M is a cheap plastic 96-ball BGA. So the number of chips is less of an issue, or at least on par with DDR3/4.

There was also a rumor about GDDR6, which may or may not ever exist. If it can reach 11gbps quickly enough, that would make things very interesting.
 
[Image: iSuppli DRAM market-share forecast (isuppliddr4-1.jpg)]


And again this slide shows how DDR2 fell as fast in the global sector as in the graphics sector.

And again, since you don't seem to get it yet: even if DDR3 falls to 1% of DDR volume, that will still be an order of magnitude more than whatever amount of GDDR5 the PS4 consumes. It will also be an order of magnitude more than the DDR3 being consumed by the Xbox One. But in the second case, the Xbox One is just one device among many using DDR3, and at that point economies of scale are much more beneficial to devices using DDR3 than they are to the ones using GDDR5.

It isn't as if GDDR5 will have a node advantage over DDR3. I fully expect the price gap to grow between GDDR5 and DDR3 once AMD and Nvidia ramp it down.

The only way that doesn't happen is if there is no replacement for GDDR5 in the next 8-10 years. And if that happens, then advanced graphics in compute devices is well and truly dead.

Regards,
SB
 
It's very interesting for a company that is not in a very good financial position to go with GDDR5, and enough of it in fact (gambling) to have 8GB available at launch. Sony must have had more ninjas in place. :)

MS's thought process, on the other hand, seemed more like this:

1. We are going to make a 100% efficient machine within a certain relatively low power usage
2. We are going to evolve what we learned from EDRAM into ESRAM no matter what
3. We want the machine to be capable of games, Kinect, multimedia, instant switching, etc.
4. 8GB of (some type of) Ram will be needed

If this was the thought process, then I can see why they went with the highest-spec 8GB of DDR3 at the end of the day.
 
It's very interesting for a company that is not in a very good financial position to go with GDDR5, and enough of it in fact (gambling) to have 8GB available at launch. Sony must have had more ninjas in place. :)
That wasn't the uncertain factor. The question was whether 4Gbit density chips would be production ready in time.
Sony is a big company, and that segment is important to it. Relative miracles have been done with far less than billions of quarterly revenue.
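A quick sanity check on why 4Gbit density was the gating factor, assuming a 256-bit bus and GDDR5's 16-device clamshell ceiling (my assumptions, not from the post):

```python
# Why 4Gbit density was the gate, assuming a 256-bit bus and GDDR5's
# 16-device clamshell ceiling (assumptions for illustration).
MAX_DEVICES = 16  # 256-bit bus / 16 bits per chip in clamshell mode
for density_gbit in (2, 4):
    capacity_gb = MAX_DEVICES * density_gbit / 8
    print(f"{density_gbit} Gbit chips -> at most {capacity_gb:.0f} GB")
# 2 Gbit parts cap the machine at 4 GB; 8 GB needs 4 Gbit parts ready in time.
```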

1. We are going to make a 100% efficient machine within a certain relatively low power usage
Power, sure. 100% efficient? Not possible and they didn't think it was.

4. 8GB of (some type of) Ram will be needed
I'm not sure if this is a set of steps or just a numbered list. If the latter, 8 GB was not the original plan--even with DDR3 already decided, according to other posters with some insight into the process.
 