The pros and cons of eDRAM/ESRAM in next-gen

Do we know how much profit they made overall, taking into account the RROD write-off, R&D costs and hardware subsidising in the first year or two?

In any case, they probably want to see much bigger profits from XB1, and get them much faster this time around too.

Nope, I stopped even trying to follow that a while ago. In fact it's just kind of my hunch, but I feel safe it's correct. I do have some saved images from a couple of years back showing a nice profitability trend in the MS divisions that house Xbox, ending maybe in 2011.

I think MS has reorganized its divisions many times in the last few years. I don't even know what their divisions are any more.
 
If anyone wanted to sit down for 40 minutes they could find out a decent estimate give or take a billion for the Xbox Group.

Launches are freaking expensive. Don't forget to subtract the profits from the Android patent royalties, which were recently quantified iirc. In the later years they were making money, which countered the 360's earlier years. Overall for the 360 we are probably looking at around +$500 million in profit since launch.

But then you have to factor in the original Xbox, so on the whole the entire venture hasn't yet made a lot of money.
 
If anyone wanted to sit down for 40 minutes they could find out a decent estimate give or take a billion for the Xbox Group.

The thing about that is that Xbox is lumped in with the rest of the Entertainment and Devices division, and iirc the billions of $ in profit that the Android patents generate are also included in there, nicely hiding any possible losses.
 
The cost of distraction and of weakening their core brand (Windows) can't be quantified either... in my opinion, consoles and (ARM) tablets will, in the final equation, be what pushes Microsoft over the edge.
 
The thing about that is that Xbox is lumped in with the rest of the Entertainment and Devices division, and iirc the billions of $ in profit that the Android patents generate are also included in there, nicely hiding any possible losses.

The billions of $ in losses (kin, winmo 6.5, Massive Inc, etc) also hide their profits.

I honestly can't see how 360 wouldn't be a tremendously profitable console for Ms:

- 4 years ago they hit 1.2 billion dollars annually in Live revenue (half of that came from subscriptions). Since then they went from 25 to 48 million subscribers (and the ratio of Gold subscribers has gone up too), and the average annual price also increased by $10. Back then they didn't have Games on Demand, so it's most likely both subscriptions and store purchases easily generate over 1 billion dollars / year. That's revenue, I know, but mostly profitable.

- They have long been profitable on hardware; the console didn't see any significant price cut and it sold very well.

- The console still sold tons of games.
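As a sanity check on the first bullet, here's a rough back-of-envelope using only the figures from the post (these are the post's numbers, not official ones):

```python
# Rough back-of-envelope on Live revenue, using the figures quoted above.
# All inputs come from the post, not from official filings.

revenue_then = 1.2e9               # reported annual Live revenue ~4 years prior
sub_share = 0.5                    # roughly half of that from subscriptions
subs_then, subs_now = 25e6, 48e6   # Live accounts then vs. now

# Scale the subscription half by account growth; this conservatively
# ignores the higher Gold ratio and the ~$10 price increase the post cites.
subs_revenue_now = revenue_then * sub_share * (subs_now / subs_then)
other_revenue_now = revenue_then * (1 - sub_share)  # store etc., held flat

total = subs_revenue_now + other_revenue_now
print(f"~${total / 1e9:.1f}B/year")
```

Even holding the store half flat and ignoring the price increase, the estimate comfortably clears the "over 1 billion dollars / year" claimed above.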

Where could they possibly lose any money? Or just not make truckloads of them?
 
Where could they possibly lose any money?
They were likely bleeding quite a bit early in the gen, with hardware loss-leading and RROD. A >50% failure rate under warranty isn't a small problem.

But I agree, it's probably been very profitable more recently.

What the cumulative behavior was like is very hard to judge accurately enough to say useful things.
 
From the Build conference on the X1 going on right now:



Seems like there is no CPU access for the ESRAM.
 
I was under the impression that most of the profits come from software licensing and publishing when it comes to game consoles. Don't they get a percentage of every piece of software sold for the system, even 3rd party titles like COD? Most of the consoles that suffered from the RROD were sold at a loss to begin with. They had to extend warranties and repair a lot of early systems, but repairs cost much less than giving someone a brand new console. Once they were able to shrink the chips, they were finally making a profit on consoles and solving the RROD issue at the same time.

Now they are making a profit on every console with the X1, as well as on licensing fees and Xbox Live.
I don't see any RROD-type issue coming with the new system, so the future of the Xbox division seems pretty stable as far as I can see.
 
Microsoft Explains why the Xbox One’s ESRAM is a “Huge Win” and Allows Reaching 1080p/60 FPS



So the last thing you have to do to get it all composited up is to get it copied over to main memory. That copy over to main memory is really fast, and it doesn’t use any CPU or GPU time either, because we have DNA engines that actually do that for you in the console. This is how you get to 1080p, this is how you run at 60 frames per second… period, if you’re bottlenecked by graphics
Savage’s explanation was definitely intriguing, especially with some developers encountering initial difficulties in using the ESRAM effectively. It’ll probably take some time for them to master its capabilities to their full extent, but it’ll be very interesting to see what they’ll be able to do with those apparently tiny 32 megabytes, which can potentially achieve a lot more than their size suggests.
http://www.dualshockers.com/2014/04...is-a-huge-win-and-helps-reaching-1080p60-fps/

What exactly does "DNA Engines" mean?
 
Has anyone given an idea of what would have been "enough" or an "ideal" amount of ESRAM? I hear a lot of developers saying it's a small amount, but it's not clear whether enough (128MB?) was even possible.
 
This is how you get to 1080p, this is how you run at 60 frames per second… period
Do they mean to say that developers didn't know these things and will now be enlightened to make their games 1080p/60 thanks to the ESRAM? What's the angle here?

Despite the attitude and hyperboles, there wasn't any new info, just a lot of spin spin spin and PR-speak.
 
Hmm, no CPU access to ESRAM; for how much longer will I read about 'HSA' over at misterxmedia? (I'm guessing forever, as I don't think the author understands the concept in the first place.)

Frank Savage said:
So this is where you put things that you gonna read a lot like a shadow map, put things that you draw to a lot, like your back buffer… We have resource creation settings that allow you to put things into there, and don’t have to all reside in the ESRAM, there can be pieces of it that can reside in regular memory as well. So for example if I’m a racing game, and I know that the top third of my screen is usually sky and that sky doesn’t get touched very much, great, let’s leave that in regular memory, but with the fast memory down here we’re gonna draw the cars. This works practically for any D3D resource there is, buffers, textures of any flavors… There’s no CPU access here, because the CPU can’t see it, and it’s gotta get through the GPU to get to it, and we didn’t enable that.

So he's saying split your buffer, but put 'read a lot' bits in RAM and 'write a lot' bits in ESRAM. He offers the example of shadow maps and the sky in a racer, but I'm not seeing general applicability here. Racing games rarely offer free look, so you can make assumptions about how much of your screen is going to be relatively untouched, but anything with a free camera seems a poor fit. I take it by shadow maps (and presumably reflection cube maps?) we're talking about pre-built static shadows, with the dynamic shadows staying in ESRAM? Of course I could be over-analysing a fairly high-level remark.
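To make the split-residency idea concrete, here's a toy model of the racing-game example. The placement policy (demoting the top rows of the screen by a sky fraction) is invented purely for illustration; the actual SDK resource-creation flags aren't public:

```python
# Toy model of split residency: keep write-heavy rows of a 1080p back
# buffer in the 32 MB fast pool, spill the rarely-touched part (e.g. a
# static sky) to DRAM. Sizes and policy are illustrative only.

WIDTH, HEIGHT, BPP = 1920, 1080, 4       # 32-bit (RGBA8) back buffer
ESRAM_BUDGET = 32 * 1024 * 1024

row_bytes = WIDTH * BPP

def place_rows(sky_fraction):
    """Return (esram_bytes, dram_bytes) when the top sky_fraction of the
    screen is demoted to DRAM, as in the racing-game example."""
    sky_rows = int(HEIGHT * sky_fraction)
    dram = sky_rows * row_bytes
    esram = (HEIGHT - sky_rows) * row_bytes
    assert esram <= ESRAM_BUDGET, "still over budget"
    return esram, dram

esram, dram = place_rows(1 / 3)
print(f"ESRAM: {esram / 2**20:.1f} MiB, DRAM: {dram / 2**20:.1f} MiB")
```

Note that a single RGBA8 back buffer (~7.9 MiB) fits in 32 MB on its own anyway; the pressure comes from holding depth and several G-buffer targets simultaneously, which is presumably where spilling rarely-touched regions starts to pay off.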

Not much going on here that hasn't been revealed before that I can see anyway (that's not saying much though ;) ).
 
What exactly does "DNA Engines" mean?
It was probably DMA Engines, also known as the DMEs for Durango.


So he's saying split your buffer but put 'read a lot' bits in RAM and 'write a lot' bits in ESRAM. He offers the example of shadow maps and the sky in a racer but I'm not seeing general applicability here. Racing games rarely offer free look so you can make assumptions about how much of your screen is going to be relatively untouched but anything with a free camera seems a poor fit.

That would depend on how the residency is assigned and how quickly the ratio can be changed.
Swapping out pages or modifying them on the fly might be possible, but an alternative would be copying pieces out of a partially-resident target to a static allocation in the ESRAM, and then stepping through tile by tile. If a piece doesn't need to be brought in, it can be skipped.
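The tile-stepping scheme described above could be sketched like this (the tile size and the dirty-set bookkeeping are assumed, purely for illustration):

```python
# Sketch of the tile-stepping idea: walk a render target tile by tile,
# copy only the tiles that need to be brought in to a fixed ESRAM
# staging allocation, and skip the rest. Data structures are invented.

TILE = 64                                   # assumed 64x64-pixel tiles
TILES_X, TILES_Y = 1920 // TILE, 1080 // TILE

def process(dirty):
    """dirty: set of (tx, ty) tiles written this frame.
    Returns how many tile copies the DMA engines would have to do."""
    copies = 0
    for ty in range(TILES_Y):
        for tx in range(TILES_X):
            if (tx, ty) in dirty:
                copies += 1    # DMA tile into ESRAM staging, work, DMA out
            # clean tiles (e.g. an unchanging sky) are skipped entirely
    return copies

# e.g. top third of the screen (the sky) left untouched:
dirty = {(tx, ty) for tx in range(TILES_X)
         for ty in range(TILES_Y) if ty >= TILES_Y // 3}
print(process(dirty), "of", TILES_X * TILES_Y, "tiles copied")
```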
 
Do they mean to say that developers didn't know these things and will now be enlightened to make their games 1080p/60 thanks to the ESRAM? What's the angle here?

Despite the attitude and hyperboles, there wasn't any new info, just a lot of spin spin spin and PR-speak.

Reading it, they make it sound so easy to achieve 1080p/60, but from day one that seems not to be the case. Are developers utilizing this technique of copying from ESRAM to main memory? Or is there some kind of barrier or bottleneck preventing them from doing so?

It's an interesting topic. Looking forward to seeing games like Quantum Break and Halo XB1.
 
It was probably DMA Engines, also known as the DMEs for Durango.

It was. I watched.


Reading it, they make it sound so easy to achieve 1080p/60, but from day one that seems not to be the case. Are developers utilizing this technique of copying from ESRAM to main memory? Or is there some kind of barrier or bottleneck preventing them from doing so?

It's an interesting topic. Looking forward to seeing games like Quantum Break and Halo XB1.

1080p/60 is always going to be relatively rare. How many PS4 games (the allegedly much more powerful console, for reference) are 1080p/locked 60? I think X1's goal should be to get to some kind of parity, or at least a consistent 1080p in multiplats, not consistent 1080p/60, as that's a pipe dream gen after gen.

Personally, the CPU not being able to access ESRAM seems like it'd probably be a good thing. Just to simplify matters, and make sure it all goes to graphics where it's needed.
 
It was. I watched.




1080p/60 is always going to be relatively rare. How many PS4 games (the allegedly much more powerful console, for reference) are 1080p/locked 60? I think X1's goal should be to get to some kind of parity, or at least a consistent 1080p in multiplats, not consistent 1080p/60, as that's a pipe dream gen after gen.

Personally, the CPU not being able to access ESRAM seems like it'd probably be a good thing. Just to simplify matters, and make sure it all goes to graphics where it's needed.

http://ca.ign.com/wikis/xbox-one/PS4_vs._Xbox_One_Native_Resolutions_and_Framerates
 
Right. 1080/60 is not the norm.

Some of those also seem a little "generous", as it's a user-editable wiki. Is BF4 really 60? I thought it was fluctuating around 45-60. Etc. Also they list Witcher 3, which isn't coming out till 2015; I find it pretty hard to believe anybody knows the final res of that yet.

Good reference though, even though it probably needs to be used with care.
 
Here is a more thorough transcript of the Build comments about ESRAM from Dualshockers

http://www.dualshockers.com/2014/04...is-a-huge-win-and-helps-reaching-1080p60-fps/

ESRAM is dedicated RAM, it’s 32 megabytes, it sits right next to the GPU, in fact it’s on the other side of the GPU from the buses that talk to the rest of the system, so the GPU is the only thing that can see this memory.

And what it does is that it gives you very very high bandwidth output, and read capability from the GPU as well. This is useful because in a lot of cases, especially when we have as large content as we have today and five gigabytes that could potentially be touched to render something, anything that we can move to memory that has a bandwidth that’s on the order of 2 to 10 x faster than the regular system memory is gonna be a huge win.

So this is where you put things that you gonna read a lot like a shadow map, put things that you draw to a lot, like your back buffer… We have resource creation settings that allow you to put things into there, and don’t have to all reside in the ESRAM, there can be pieces of it that can reside in regular memory as well. So for example if I’m a racing game, and I know that the top third of my screen is usually sky and that sky doesn’t get touched very much, great, let’s leave that in regular memory, but with the fast memory down here we’re gonna draw the cars. This works practically for any D3D resource there is, buffers, textures of any flavors… There’s no CPU access here, because the CPU can’t see it, and it’s gotta get through the GPU to get to it, and we didn’t enable that.

So the last thing you have to do to get it all composited up is to get it copied over to main memory. That copy over to main memory is really fast, and it doesn’t use any CPU or GPU time either, because we have DNA engines that actually do that for you in the console. This is how you get to 1080p, this is how you run at 60 frames per second… period, if you’re bottlenecked by graphics.
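For reference, plugging the commonly cited public bandwidth figures into the "2 to 10x" claim (treat these numbers as approximate):

```python
# Sanity check on the "2 to 10x" bandwidth claim, using commonly cited
# public figures for the console (approximate, not official).

ddr3_peak = 68.3     # GB/s, 256-bit DDR3-2133 main memory
esram_min = 109.0    # GB/s, ESRAM single-direction figure
esram_peak = 204.0   # GB/s, ESRAM combined read+write peak

print(f"{esram_min / ddr3_peak:.1f}x to {esram_peak / ddr3_peak:.1f}x "
      f"vs. DDR3 peak")
```

Against the DDR3 theoretical peak that only comes out to roughly 1.6-3x, so the "10x" end of the claim presumably compares against the main-memory bandwidth actually left over for the GPU once the CPU and other clients are contending for the same bus.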
 
I'm not sure I understood what he said... When he said there's a new setting where content can be marked to reside in ESRAM, does he mean they now have an SDK feature for the developer to mark buffers (or even parts of a buffer, using tiled resources) so that data is automatically managed in ESRAM, without requiring manual tiling or anything of the sort?

Interesting if so.

Microsoft Explains why the Xbox One’s ESRAM is a “Huge Win” and Allows Reaching 1080p/60 FPS



http://www.dualshockers.com/2014/04...is-a-huge-win-and-helps-reaching-1080p60-fps/

What exactly does "DNA Engines" mean?

I think he meant DMA, i.e. the Data Move Engines, which move data between the two memory pools without wasting CPU or GPU cycles to do it.
 
Has anyone given an idea of what would have been "enough" or an "ideal" amount of ESRAM? I hear a lot of developers saying its a small amount but not clear if enough (128MB?) was even possible.

ESRAM is pretty wasteful from a transistor budget standpoint. I'm sure Devs would have been a lot happier with 192-256MB of EDRAM. Perhaps that would have posed problems in terms of what process and foundry you could use. Challenges like that may have contributed to Sony's decision to go with unified GDDR5 even though they also explored an embedded memory design. But whatever the production realities it's clear the 32MB of ESRAM is not a desirable solution to most.
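A quick tally of a typical 1080p deferred-rendering target set shows why 32MB feels small (the target set and formats here are illustrative, not taken from any particular game):

```python
# Why 32 MB is tight at 1080p: sum a representative deferred-rendering
# target set. Formats/counts are illustrative assumptions.

W, H = 1920, 1080
MB = 2**20

def rt(bytes_per_pixel):
    """Size in MiB of one full-screen render target."""
    return W * H * bytes_per_pixel / MB

targets = {
    "back buffer (RGBA8)":       rt(4),
    "depth/stencil (D24S8)":     rt(4),
    "G-buffer x3 (RGBA8 each)":  3 * rt(4),
    "HDR light accum (FP16x4)":  rt(8),
}
for name, size in targets.items():
    print(f"{name}: {size:.1f} MiB")

total = sum(targets.values())
print(f"total: {total:.1f} MiB vs 32 MiB of ESRAM")
```

A set like this lands well over 32 MiB, which is why developers end up choosing which targets (or which pieces of them) live in the fast pool at any given time, rather than fitting everything at once.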

Reading it, they make it sound so easy to achieve 1080p/60, but from day one that seems not to be the case. Are developers utilizing this technique of copying from ESRAM to main memory? Or is there some kind of barrier or bottleneck preventing them from doing so?

You should note that the quote says that's how you get 1080p and that's how you get 60fps, but it doesn't actually say it gets you both at the same time.
 