I could've sworn that I've seen an MSFT slide on the four stages of ESRAM adoption somewhere before.
It popped up around the time the presentation was made (early June).
“I would say that targeting Full HD with a complex rendering pipeline is a challenge in terms of ESRAM usage. Especially when this pipeline is built on deferred rendering. We are definitely taking advantage of the ESRAM on Xbox One, and we know that we have to fit into these limits, so we are constantly optimizing the usage.”
Deferred rendering needs multiple frame buffers, which can't all fit in that 32MB at the same time.
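To make that concrete, here's a back-of-the-envelope budget check. The buffer list below is an illustrative assumption of a typical-looking deferred setup, not any particular engine's actual layout:

```python
# Back-of-the-envelope ESRAM budget check for a deferred G-buffer at 1080p.
# The render-target list is an illustrative assumption, not a real engine's.
WIDTH, HEIGHT = 1920, 1080
ESRAM_BYTES = 32 * 1024 * 1024  # 32 MB

buffers_bpp = {
    "albedo (RGBA8)":       4,
    "normals (RGBA8)":      4,
    "material (RGBA8)":     4,
    "light accum (FP16x4)": 8,
    "depth (D32)":          4,
}

total = WIDTH * HEIGHT * sum(buffers_bpp.values())
print(f"G-buffer footprint: {total / 2**20:.1f} MB "
      f"({'fits' if total <= ESRAM_BYTES else 'does NOT fit'} in 32 MB ESRAM)")
# -> G-buffer footprint: 47.5 MB (does NOT fit in 32 MB ESRAM)
```

With a light-accumulation target included, even this modest configuration is roughly 50% over the 32MB budget, which is why devs end up juggling which targets live in ESRAM versus DDR3.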
As far as I know there's no fixed configuration for the G-buffer. Several devs have already managed to optimize memory usage and buffer configuration and even talked about their approaches so it's not even some secret knowledge any more.
Still, 1080p is probably a bit more challenging to get to work.
So does extra AA have any effect on the ESRAM? Or are we talking general horsepower consumption here.
Yet even 12 bytes per pixel, i.e. 3x RGBA8 buffers (plus 4Bpp depth, but not stencil), would not fit a full 1920x1080. Which should make it quite clear why ESRAM will cause developer discomfort. 12 bytes really isn't much at all given how complex today's material models are.
My favorite (and what I suspect will see much more traction going forward) is getting the GPU to do material-type-specific bit packing. So, for example, a simpler material gets more precision, while a complex material with more data gets more compression, but each uses the same number of bits per pixel. Per pixel you just output a fixed number of bytes instead of writing a fixed set of separate color buffers (in varying formats) as occurs in traditional G-buffer rendering.
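A minimal sketch of the idea in Python (the material IDs and field layouts are invented for illustration, this is not any shipping engine's format): every material writes the same 8 bytes per pixel, but each material type interprets those bytes with its own layout.

```python
import struct

# Sketch of material-type-specific bit packing. Every material writes the
# same fixed 8 bytes per pixel; the leading material ID selects the layout.
# Field layouts here are invented for illustration.
PIXEL_BYTES = 8

def pack_simple(albedo_rgb, roughness):
    # Simple material: few fields, so spend the budget on precision
    # (16 bits per albedo channel).
    r, g, b = (int(c * 65535) for c in albedo_rgb)
    return struct.pack("<BHHHB", 0, r, g, b, int(roughness * 255))

def pack_complex(albedo_rgb, roughness, metalness, ao):
    # Complex material: more fields, so each gets fewer bits
    # (8 bits per channel, plus one pad byte to keep the size fixed).
    r, g, b = (int(c * 255) for c in albedo_rgb)
    return struct.pack("<B3B3Bx", 1, r, g, b,
                       int(roughness * 255), int(metalness * 255), int(ao * 255))

def unpack(pixel):
    # The lighting pass branches on the material ID to decode the payload.
    if pixel[0] == 0:
        _, r, g, b, rough = struct.unpack("<BHHHB", pixel)
        return {"albedo": (r / 65535, g / 65535, b / 65535),
                "roughness": rough / 255}
    _, r, g, b, rough, metal, ao = struct.unpack("<B3B3Bx", pixel)
    return {"albedo": (r / 255, g / 255, b / 255), "roughness": rough / 255,
            "metalness": metal / 255, "ao": ao / 255}

simple = pack_simple((0.5, 0.25, 1.0), 0.3)
complex_ = pack_complex((0.5, 0.25, 1.0), 0.3, 0.9, 0.6)
assert len(simple) == len(complex_) == PIXEL_BYTES  # same footprint per pixel
```

The point is that the per-pixel footprint is constant regardless of material complexity, so the G-buffer size (and thus the ESRAM budget) stays predictable.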
I realized I made a mistake there. 1080p with a 12Bpp G-buffer would fit without stencil... just. With stencil it wouldn't fit.
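The correction checks out numerically, assuming the stencil is stored as a separate 1-byte-per-pixel plane (as on GCN-style hardware, which is an assumption here):

```python
# Checking the corrected numbers: 1080p, 12 Bpp G-buffer + 4 Bpp depth,
# with stencil assumed to be a separate 1 Bpp plane (GCN-style layout).
WIDTH, HEIGHT = 1920, 1080
ESRAM = 32 * 1024 * 1024  # 33,554,432 bytes

pixels = WIDTH * HEIGHT
without_stencil = pixels * (12 + 4)      # 33,177,600 bytes
with_stencil    = pixels * (12 + 4 + 1)  # 35,251,200 bytes

print(without_stencil <= ESRAM)  # True  -- fits, with ~0.36 MB to spare
print(with_stencil <= ESRAM)     # False -- over budget by ~1.6 MB
```

So the no-stencil case squeaks in with under half a megabyte of headroom, and adding a stencil plane pushes it over.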
Digital Foundry: Is ESRAM really that much of a pain to work with?
Oles Shishkovstov: Actually, the real pain comes not from ESRAM but from the small amount of it. As for ESRAM performance - it is sufficient for the GPU we have in Xbox One. Yes it is true, that the maximum theoretical bandwidth - which is somewhat comparable to PS4 - can be rarely achieved (usually with simultaneous read and write, like FP16-blending) but in practice I've seen only a few cases where it becomes a limiting factor.
The ESRAM doesn't offer any advantages over the PS4 ram does it? They talk about how great it will be when tiling comes along. But the PS4 ram is nearly as fast and there is way more of it. Am I simplifying this too much?
As per the entire thread, there's nothing you can do as an optimisation for ESRAM amount or BW that you can't also use on PS4 or PC for the same sorts of advantages. There's a possibility that in certain cases you can get a higher peak BW on the XB1, but it's probably not something that can be designed around. There's also a possibility that some tasks would benefit from the lower latency, although we've no idea by how much.