The pros and cons of eDRAM/ESRAM in next-gen

Nah, I doubt that: eSRAM is directly and quickly accessible from the GPU, not the CPU - the CPU's penalties for accessing it are high.
eSRAM is integrated into the GPU core (far right of the chip), as the chip layout clearly shows.
For sure, though, there must be some investment in the CPU/GPU ties to the NB zone (plus whatever arbitration scheme they used).

The existence of the crossbar that routes data to eSRAM block controllers or memory channels was already disclosed.
In part, this enables a mostly transparent mapping of accesses to the eSRAM or main memory based on properties assigned via page table after the initial setup.
Upping the peak numbers in terms of the number of eSRAM blocks and memory clients does impact the GPU's internal crossbar, and then the on-die interconnect that routes accesses to the necessary endpoints.
AMD's APU read and write paths are very wide for this class of chip.
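Just to illustrate what that "mostly transparent mapping" might look like from the software side - this is only a rough sketch of the general idea, with a made-up page size, attribute encoding and interleave granularity, not Microsoft's actual page-table format or arbitration logic:

# Toy model of page-attribute-based routing between eSRAM and DDR3.
# Page size, attribute encoding, interleave granularity and controller
# counts are assumptions for illustration, not the real Durango setup.

PAGE_SIZE = 4096
ESRAM_CONTROLLERS = 4   # internal controllers, per the public disclosures
DDR3_CONTROLLERS = 4    # external 64-bit channels

# Hypothetical page table: virtual page number -> (pool, physical page number)
page_table = {
    0x000: ("esram", 0x00),
    0x001: ("ddr3",  0x10),
}

def route(virtual_addr):
    """Return (pool, controller index, physical address) for one access."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    pool, ppn = page_table[vpn]
    phys = ppn * PAGE_SIZE + offset
    controllers = ESRAM_CONTROLLERS if pool == "esram" else DDR3_CONTROLLERS
    # Interleave across that pool's controllers at a made-up 256-byte granularity.
    ctrl = (phys // 256) % controllers
    return pool, ctrl, phys

print(route(0x0123))   # ('esram', 1, 291)
print(route(0x1123))   # ('ddr3', 1, 65827)

The point of the sketch is just that, once the pages are set up, the client issuing the access doesn't need to know which pool it lands in; the attribute and the crossbar take care of it.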

Could you elaborate on why that wouldn't have been possible? Even a chip engineer in the Hot Chips conference video asked them about that, in the context of yield/price optimization.

http://forum.beyond3d.com/showpost.php?p=1838012&postcount=7745

There was no realistic/worthwhile eDRAM manufacturer, and no mass-level production of a stacked large SRAM or DRAM chip for a SOC of this size and TDP. This console generation came a few years early for stacked DRAM/2.5D, which isn't quite the same but appears more tractable than getting high-power chips into a stack configuration.

The PS Vita gets away with a stacked Wide I/O interface, but assuming Durango is ~100W TDP, there's a good chance that stacked solution burns as much power at peak as the larger chip does at deepest idle, or less, given that the Vita TV is rated at less than 3W max as a total unit.
 
A chip that comes out this year or so was likely architected 3-5 years ago at least.

Lalaland said:
pMax also makes a very valid point: until the last-minute availability of 4Gb GDDR5 chips, the 8GB DDR3 + ESRAM design was looking pretty smart.

AMD's GPU architectures are designed to be scalable up and down. Do you think they spend 3-5 years developing multiple variations within the same family to meet various price points? Of course not. And I never said the design wasn't good. Just under-spec'd based on what we know now. And they should have aimed higher. ROP/CU counts and ESRAM size could definitely have been adjusted during development without altering the design investment, and would not have needed to be set in stone from the outset.

Clearly it's not something that can be changed as late as system RAM modifications, but 1 year prior to planned tape-out isn't unreasonable.
 
AMD's GPU architectures are designed to be scalable up and down. Do you think they spend 3-5 years developing multiple variations within the same family to meet various price points? Of course not. And I never said the design wasn't good. Just under-spec'd based on what we know now. And they should have aimed higher. ROP/CU counts and ESRAM size could definitely have been adjusted during development without altering the design investment, and would not have needed to be set in stone from the outset.

Clearly it's not something that can be changed as late as system RAM modifications, but 1 year prior to planned tape-out isn't unreasonable.

I don't see space for altering the design one way or the other without increasing the already formidable size of the chip, and that would start to make the chip uneconomical to manufacture. Sony was rumoured to be planning 4GB of GDDR5 for ages, so I doubt MS felt uncomfortable with their choice of 8GB DDR3 + ESRAM. Time has been kind to Sony, as all signs point to 8GB of GDDR5 being a late-2012/early-2013 development, and it left MS with no time to redesign unless they wanted to risk giving Sony a 360-esque head start in the market.
 
AMD's GPU architectures are designed to be scalable up and down. Do you think they spend 3-5 years developing multiple variations within the same family to meet various price points?

You are conflating time to manufacture with time to develop.

This is wrong.

Of course not. And I never said the design wasn't good. Just under-spec'd based on what we know now. And they should have aimed higher.

Under-spec'd is determined - in the real world - by market performance.

It is uncertain how much of the relative PS4/Bone performance in the market is due to "specs". Even on B3D, many of the most vocal "spec" cheerleaders haven't got a fucking clue what they're looking at when it's in action.

ROP/CU counts and ESRAM size could definitely have been adjusted during development without altering the design investment, and would not have needed to be set in stone from the outset.

You have absolutely no idea how altering significant aspects of the design during development would have affected costs or - crucially - time to deliver.

Clearly it's not something that can be changed as late as system RAM modifications, but 1 year prior to planned tape-out isn't unreasonable.

What are you basing that on? And when do you think tape-out was?
 
Forgive me. You are right. Of course it's impossible for the design to be anything other than what it is. My mistake.
 
If I understand correctly from the last few years of leaks, and from more recent interviews about their design goals... there didn't seem to be any real deliberation between GDDR5 and DDR3. It's like MS never considered GDDR5, and Sony never even considered DDR3.

It looks like their respective decisions about main memory were made very early. Sony did say they considered an internal SRAM buffer, but it was never a compromise against GDDR5; it was a compromise of bus width (and interestingly, that would have locked them to 4GB max, so they truly expected 4Gb chips to be ready when they planned it). If we follow the leaks, MS was planning DDR3/4 and was targeting 4GB early on, so it wasn't as if the entire reason for DDR3 was to reach 8GB.

I don't think the 4Gb GDDR5 chips being ready on time was a surprise to either company. Perhaps the surprise was the price point, which was very volatile for the last 3 years, and it was a huge risk. Many people here who are very well informed about it were definitely surprised, though. The cost estimates in the "prediction" thread show this very clearly; there's a large discrepancy compared to the current cost estimates from iSuppli and others.
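For a rough sense of why the chip density was the whole ballgame: assuming a 256-bit GDDR5 bus with 32-bit devices run in clamshell (which is how the retail PS4 shipped - whether the earlier 4GB plan used exactly this layout is an assumption on my part), the capacity ceiling falls straight out of the arithmetic:

# Capacity ceiling for a 256-bit GDDR5 bus (assumes 32-bit devices, clamshell).

bus_width_bits = 256
device_width_bits = 32
devices_per_channel = 2                          # clamshell mode

channels = bus_width_bits // device_width_bits   # 8 channels
devices = channels * devices_per_channel         # 16 devices

for density_gbit in (2, 4):
    capacity_gb = devices * density_gbit // 8    # Gbit -> GB
    print(f"{density_gbit}Gb devices -> {capacity_gb} GB")

# 2Gb devices -> 4 GB   (the long-rumoured 4GB PS4)
# 4Gb devices -> 8 GB   (what actually shipped)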
 
Can't believe I'm letting myself get dragged into this. But what the heck, it's a new year.

Fairy dust is always an option, or alternatively - which is all I ever suggested - someone could have stepped up and said let's go big or go home and target a more robust spec. Oh no, the horror. I'm sure their Virtuoso platform would have melted under the pressure, and all the RTL done for naught, since clearly it would have called for a complete rewrite. I'm not trying to trivialize, but don't over-complicate things either. It was a matter of choice where they landed, not a technical limitation.

Read any post I have made. I never said it was simple. I never said it could be slapped on at the last minute. Only that, hindsight being 20/20, they could have and IMO should have slightly increased their size and power budgets to avoid the position they are in now. The End.
 
...Samsung, Hynix and others will come out with 128Gb memory chips this year, thanks to 3D geometries (see IEEE Spectrum mag).

Yeah, Sony and MS could have done the same, what the hell would it have taken????
It was a matter of choice where they landed, not a technical limitation.
------------------------
The existence of the crossbar that routes data to eSRAM block controllers or memory channels was already disclosed.
hmmm... by chance, do you remember where they disclosed that?
 
Can't believe I'm letting myself get dragged into this. But what the heck, it's a new year.

Fairy dust is always an option, or alternatively - which is all I ever suggested - someone could have stepped up and said let's go big or go home and target a more robust spec. Oh no, the horror. I'm sure their Virtuoso platform would have melted under the pressure, and all the RTL done for naught, since clearly it would have called for a complete rewrite. I'm not trying to trivialize, but don't over-complicate things either. It was a matter of choice where they landed, not a technical limitation.

Read any post I have made. I never said it was simple. I never said it could be slapped on at the last minute. Only that, hindsight being 20/20, they could have and IMO should have slightly increased their size and power budgets to avoid the position they are in now. The End.

That sounds good and all, but what about price point? Does this "more rounded" hardware come with Kinect at a greater price point?

I would think MS's original XB1 design plans always revolved around Kinect inclusion, and how to design affordable hardware around it.

I'm not saying MS made the correct choices (seeing Kinect as the future)... however, it was the choice they made, knowing Kinect was their vision for future gaming.
 
...Samsung, Hynix and others will come out with 128Gb memory chips this year, thanks to 3D geometries (see IEEE Spectrum mag).

Yeah, Sony and MS could have done the same, what the hell would it have taken????
A launch in late 2015/2016 at the earliest, maybe, if only to take advantage of the new memory types in volume. It would also depend on which techs you had in mind. Some of the early high-density types are mobile DRAM with constrained bandwidth.

I kind of wished (very early on) there would have been a meeting in the middle on this, with 2.5D integration coming a bit earlier and the consoles taking a little longer, just to see where relaxing the bandwidth constraint would have taken things.


This wouldn't help with any scheme where the SOC is in a 3D stack with extra memory; the DRAM standards deal with massively lower thermal levels and much higher-yield chips.

hmmm... by chance, do you remember where they disclosed that?

http://www.eurogamer.net/articles/digitalfoundry-the-complete-xbox-one-interview

Nick Baker: First of all, there's been some question about whether we can use ESRAM and main RAM at the same time for GPU and to point out that really you can think of the ESRAM and the DDR3 as making up eight total memory controllers, so there are four external memory controllers (which are 64-bit) which go to the DDR3 and then there are four internal memory controllers that are 256-bit that go to the ESRAM. These are all connected via a crossbar and so in fact it will be true that you can go directly, simultaneously to DRAM and ESRAM.

One random aside I noted is that this design pushes the number of controllers as high as the two-time high-water mark for AMD's high-end GPUs (R600, Hawaii). If there were a doubled eSRAM, Durango would have had the broadest memory controller array of any single AMD chip.
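As a quick sanity check of what those eight controllers add up to, here's the arithmetic at the publicly reported retail clocks (DDR3-2133 and the 853MHz eSRAM/GPU clock); the ~204GB/s combined read+write figure is Microsoft's own claim, not something derivable from bus widths alone:

# Peak bandwidth implied by the controller layout Baker describes,
# at the publicly reported retail clocks.

ddr3_channels, ddr3_width_bits, ddr3_rate_mts = 4, 64, 2133
esram_channels, esram_width_bits, esram_clock_mhz = 4, 256, 853

ddr3_gbps = ddr3_channels * (ddr3_width_bits // 8) * ddr3_rate_mts / 1000
esram_one_way_gbps = esram_channels * (esram_width_bits // 8) * esram_clock_mhz / 1000

print(f"DDR3 peak:           {ddr3_gbps:.1f} GB/s")           # ~68.3 GB/s
print(f"eSRAM read or write: {esram_one_way_gbps:.1f} GB/s")  # ~109.2 GB/s
# Microsoft's quoted ~204GB/s eSRAM peak comes from mixing reads and writes
# in the same cycle; that number is theirs, not derived here.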
 
That sounds good and all, but what about price point? Does this "more rounded" hardware come with Kinect at a greater price point?

I would think MS's original XB1 design plans always revolved around Kinect inclusion, and how to design affordable hardware around it.

Do you really get the impression the XB1 is optimized for price as a total product?
We have the cheap 8GB of DDR3, yet a die size comparable to Sony's.
The HDMI-in concept looks really iffy to me (TVs are moving away from analogue/cable, TVs have integrated DVB receivers anyway, and AV receivers handle all kinds of input sources).
Then we have a case and cooling system that are too large for the power and noise profile. So there are unnecessary misc costs, shipping, less shelf space at the retailer and so on.

The message here is so unfocused. Like it was designed by committee.
 
Do you really get the impression the XB1 is optimized for price as a total product?

I'm pretty sure that during earlier sales (before all the price cuts), MS was profiting $30-35 per unit. If MS had spec'd any higher, with Kinect included, pricing would definitely have been higher, with very little wiggle room for profit (even if MS swallowed some of the additional cost).

But to answer your question: the XB1 is optimized within a reasonable price (given the BOM data around the web), at the profit margins MS wanted, and at a price (right or wrong) they felt consumers would pay.

The XB1 is designed to meet the goals MS envisioned for it: an "all-inclusive hub" for TV and gaming needs. So eliminating HDMI-in and other I/O ports may not be beneficial to MS's goal of the XB1 being the ultimate component within the living-room space.
 
I would think MS's original XB1 design plans always revolved around Kinect inclusion, and how to design affordable hardware around it.

That absolutely was their design plan.

They were expecting PS4 to be more powerful as they knew they'd have a significant proportion of their BOM in Kinect.
 
That absolutely was their design plan. They were expecting PS4 to be more powerful as they knew they'd have a significant proportion of their BOM in Kinect.
Well it was the PR folks from EA, voted the worst company in America. Hmmm.
 
I see what you guys are saying.

They definitely wanted to expand the market beyond core gamers. Profits from the Xbox 360 were not huge last generation. Maybe that's why there was the 400-million-dollar NFL deal and the non-core-gamer-focused reveal back in May. That's why Kinect was included (to replicate Nintendo's success in the non-core-gamer market). That's why the NFL/Skype-heavy early promo for the Xbox One. That's why all the set-top-box plan rumours. During the X360 reveal, Allard revealed how they planned to capture the casual non-core gamers with the 360. Here they want to do it again.

That's why there were to
 
MS always do best bang for your $! So ESRAM dual port quad channelled was the best solution. Same for DDR3. From what I understand, DDR3 is a better choice in some situations, and in some others GDDR5 is better suited. The issue here is probably this: game devs are used to the PS4 way. The XB1 way? Probably not. I rarely see a DDR3 graphics card on the market, yet this is pretty much what MS did. Quad-channel ESRAM, dual ported, all in a SOC - last I checked that was a rarity (if it even existed before the XB1 in the consumer world). This thing is so sideways that it might take a while to take advantage of all its capability. I recall when we went from one CPU core to two and then to 64-bit. Ouch! Parallelisation of any kind isn't easy (unless maybe you call yourself MS, who knows) and game devs are very conservative! Unless they have a sae test proving them a is their better way then s, they tend to use the standard old ways usually.

___

MOD : fixed faulty formatting. If you want to maintain posting rights, observe these simple points.
1) Add a space after a full stop '.', question mark '?', or exclamation mark '!', and use a capital letter following it.
2) Capitalise acronyms, so ddr3 = DDR3.
These are important grammatical rules that make parsing and understanding a lot easier. Otherwise it's hard to tell whether an acronym is a typo, or where one idea ends and the next begins.
3) Capitalise the pronoun 'I'. This isn't necessary to understanding but it irks. ;)
Typos and spelling mistakes aren't too bad (I add enough of my own to my posts), but the core understanding of your posts has to be communicated effectively for it to actually be a discussion, and that means supporting the readers' parsing of your text with a few very simply applied rules.

Hard to interpret said:
ms always do best bang for your $!so esram dual port quad channelled was the best solution .same for ddr3. from what i understand.
Is $!so a thing? Some odd abbreviation we're supposed to understand? A type of ESRAM? A typo?

Easy to interpret said:
MS always do best bang for your $! So ESRAM dual port quad channelled was the best solution. Same for DDR3.
Ahhh, so you're saying Microsoft provide the best bang for buck, which is reason to believe ESRAM is a good choice.

On your last sentence, the way it's written makes it hard to interpret. Is 'sae' a typo for 'same' or 'sane'? Or an acronym for an 'SAE' test? I get you're comparing 's' to 'a' but it's not very clear.

I hope you understand and can adapt. :)
 
Well it was the PR folks from EA, voted the worst company in America. Hmmm.

? Are you talking about Mattrick?

I think bkilian hit the nail on the head with his Xbox.org is now run by 'MBAs with $ signs in their eyes' post.
http://forum.beyond3d.com/showpost.php?p=1696487&postcount=1313

I see what you guys are saying.

They definitely wanted to expand the market beyond core gamers. Profits from the Xbox 360 were not huge last generation. Maybe that's why there was the 400-million-dollar NFL deal and the non-core-gamer-focused reveal back in May. That's why Kinect was included (to replicate Nintendo's success in the non-core-gamer market). That's why the NFL/Skype-heavy early promo for the Xbox One. That's why all the set-top-box plan rumours. During the X360 reveal, Allard revealed how they planned to capture the casual non-core gamers with the 360. Here they want to do it again.

Pretty much, MS probably wanted to see Xbox finally become profitable rather than the huge money sink it had mostly been the past two gens.
 
? Are you talking about Mattrick?

I think bkilian hit the nail on the head with his Xbox.org is now run by 'MBAs with $ signs in their eyes' post.
http://forum.beyond3d.com/showpost.php?p=1696487&postcount=1313



Pretty much, MS probably wanted to see Xbox finally become profitable rather than the huge money sink it had mostly been the past two gens.

The 360 wasn't a money sink; it probably ended up turning a tidy profit overall, though the early years were certainly bumpy and the $1B RROD charge didn't help.

So: 1st console a money sink because of inexperience and a bad business model, 2nd console profitable. Doesn't seem too bad.
 
Do we know how much profit they made overall, taking into account the RROD write-off, R&D costs and hardware subsidising in the first year or two?

In any case, they probably want to see much bigger profits from the XB1, and to get them much faster this time around too.
 