On page 15 they show four GBuffer MRTs (16 bytes per pixel), but since they'll also need a depth buffer, that puts the total at 20 bytes per pixel.
At any rate, that should be enough to fit in ESRAM at 900p, but since it would leave very little space for other things (shadows, lit buffers, etc.) it...
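To sanity-check those numbers, here is a back-of-the-envelope sketch (assuming four RGBA8 MRTs plus a 32-bit depth buffer at 1600x900; the exact formats are my assumption):

```python
# Assumed GBuffer layout: four RGBA8 MRTs (4 bytes each) plus a
# 32-bit depth buffer = 20 bytes per pixel, as discussed above.
width, height = 1600, 900          # "900p"
bytes_per_pixel = 4 * 4 + 4        # 4 MRTs x 4 bytes + 4-byte depth

total_bytes = width * height * bytes_per_pixel
total_mib = total_bytes / (1024 * 1024)
esram_mib = 32                     # Xbox One ESRAM capacity

print(f"GBuffer + depth: {total_mib:.1f} MiB")              # ~27.5 MiB
print(f"ESRAM left over: {esram_mib - total_mib:.1f} MiB")  # ~4.5 MiB
```

So the GBuffer fits, but only a few MiB remain for shadows, lit buffers, and everything else, which is the squeeze being described.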
It's fairly easy to see that the volumetric effects in SWBF are screen-space based, which suggests they are not using what was presented at SIGGRAPH for this game.
That's very much true. Both the X1 and PS4 versions of the Black Ops 3 beta used SMAA Filmic T2x.
Digital Foundry misidentified it as FXAA, but you can easily verify this in the PC beta: if you toggle between FXAA and SMAA Filmic T2x, the difference is huge.
You can pack your data in FP16 even right now. There's already hardware support for encoding/decoding to FP16.
What they're adding is hardware support for directly evaluating calculations in FP16.
The big reason that helps performance is that they can pack double the number of variables in the...
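The storage side of this can be illustrated even outside the GPU: Python's `struct` module supports the same IEEE 754 half-precision format (format code `e`), so a minimal sketch of packing two values into one 32-bit word looks like:

```python
import struct

# Pack two floats into 4 bytes as IEEE 754 half-precision (FP16) --
# the same storage trick as fitting two FP16 channels into one
# 32-bit register or texel.
def pack2_fp16(a, b):
    return struct.pack("<2e", a, b)   # 2 bytes each = 4 bytes total

def unpack2_fp16(blob):
    return struct.unpack("<2e", blob)

packed = pack2_fp16(1.5, -0.25)
print(len(packed))            # 4 bytes instead of 8 for two FP32s
print(unpack2_fp16(packed))   # (1.5, -0.25) -- both exactly representable

# Precision is limited: FP16 has a 10-bit mantissa, so most values round.
approx, = struct.unpack("<e", struct.pack("<e", 0.1))
print(approx)                 # close to, but not exactly, 0.1
```

This only demonstrates the encode/decode half that hardware already supports; the new part being discussed is evaluating arithmetic natively at FP16.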
Crytek did chroma subsampling of the GBuffer for Crysis 3, and I'm pretty sure for Ryse too.
It's tempting to try to extend this to the entire pipeline (from textures, to GBuffer, to lighting to final frame buffer) but unfortunately that won't work as well as one might hope.
It is true the YCoCg...
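For reference, the RGB <-> YCoCg transform that GBuffer chroma subsampling relies on is just a few adds and halvings (a minimal sketch; the exact encoding Crytek used is not shown here):

```python
# RGB <-> YCoCg: luma (Y) is kept at full resolution, while chroma
# (Co, Cg) can be stored at half resolution -- that is what makes
# GBuffer chroma subsampling work.
def rgb_to_ycocg(r, g, b):
    y  =  r / 4 + g / 2 + b / 4
    co =  r / 2         - b / 2
    cg = -r / 4 + g / 2 - b / 4
    return y, co, cg

def ycocg_to_rgb(y, co, cg):
    r = y + co - cg
    g = y + cg
    b = y - co - cg
    return r, g, b

# Round trip is exact when chroma is kept at full resolution:
print(ycocg_to_rgb(*rgb_to_ycocg(1.0, 0.0, 0.0)))  # (1.0, 0.0, 0.0)
```

The loss only comes in when Co/Cg are downsampled, which is why the trick works well for the GBuffer but not necessarily for every stage of the pipeline.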
While I can agree that most COD SP campaigns didn't hold a solid 60fps all the time, COD MP (which is what Titanfall should be compared against) has always been a very solid 60fps, especially the IW / Xbox 360 versions; it would be very hard to argue otherwise.
It just so...
Indeed, the Iris Pro is meant as a laptop-level graphics chip; it has about 50% of the ALU and bandwidth of the Xbox One, I believe. But the embedded RAM is a better design all around: there's 128 MB of it, it's eDRAM (so far more compact), and, more importantly, it's part of the cache hierarchy.
In general most...
The GPU already has an L2 cache that I believe is not coherent with the CPU. Can't the ESRAM just be an L3 cache behind that L2?
I admit I don't fully understand the hardware implications, so perhaps what I'm suggesting is not feasible or cheap to accomplish, but it does seem having 1...
Actually, a much cheaper/smarter modification would've been to turn the ESRAM into a full-on L3 cache, similar to what Intel did with the Iris Pro. It's pretty shocking they didn't think of this, given that (allegedly) the ESRAM is full-blown 6-transistor on-chip memory. It's just missing the cache...
I don't remember seeing any research showing Forward+ performing better than Deferred at 1xMSAA.
At higher MSAA perhaps, but let's be honest, Titanfall might be the only game using MSAA on Xbox One (and might not be for long if they patch in FXAA).
MSAA just doesn't make a lot of sense given...
So I had an idea. How about this:
Two machines, identical setup, one runs DX11, the other Mantle.
Join same server, go to observer, go to same spot, look the same way.
Record FPS.
Done :-)
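Once both machines have recorded their FPS traces, the comparison itself is trivial. A sketch of the kind of summary I'd look at (the sample traces below are made up for illustration):

```python
import statistics

def summarize(fps_samples):
    """Mean and a rough 1%-low figure for a recorded FPS trace."""
    ordered = sorted(fps_samples)
    low_1pct = ordered[max(0, len(ordered) // 100 - 1)]
    return statistics.mean(fps_samples), low_1pct

# Hypothetical traces from the two otherwise-identical machines:
dx11_fps   = [58, 60, 57, 59, 61, 55, 60]
mantle_fps = [66, 68, 64, 67, 70, 63, 69]

print("DX11  :", summarize(dx11_fps))
print("Mantle:", summarize(mantle_fps))
```

The 1%-low matters as much as the mean here, since API overhead tends to show up as spikes rather than a uniform slowdown.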
Indirect draw/dispatch only lets you control draw-call parameters; you can't modify state.
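To make that concrete: in D3D11, for example, the argument buffer consumed by `DrawInstancedIndirect` is just four UINTs, so draw parameters are the only knobs. A sketch of filling such a record on the CPU side (the helper name is mine):

```python
import struct

# D3D11's DrawInstancedIndirect reads 4 UINTs from a GPU buffer:
#   VertexCountPerInstance, InstanceCount,
#   StartVertexLocation, StartInstanceLocation
# That is the entire record -- no shaders, blend state, or render
# targets can be changed from an indirect argument buffer.
def draw_instanced_indirect_args(vertex_count, instance_count,
                                 start_vertex=0, start_instance=0):
    return struct.pack("<4I", vertex_count, instance_count,
                       start_vertex, start_instance)

args = draw_instanced_indirect_args(vertex_count=36, instance_count=1024)
print(len(args))  # 16 bytes: the stride of one indirect draw record
```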
OpenGL recently got the bindless-resource extension, which is a lot closer to what I had in mind. However, even that leaves a lot to be desired.
However the biggest problem with OpenGL...
The 100k figure is for having the GPU feed itself draw calls. This is still only going to be possible under an API like Mantle.
It will take a while to get there even on consoles, though, since re-engineering a graphics engine to that degree is a major undertaking, and will most likely happen...
HLSL is just a language, and it's what most developers use to write their shaders. It's pretty much independent of the API it's running on, apart from the fact that MS didn't allow extensions to it until recently.
I'm assuming under Mantle AMD could add any extension they cared to have, which...