Rendering tech of Infamous Second Son

Sure they can. If they need half a gig for whatever pie-in-the-sky feature down the road, then that half-gig won't be available to games that use that feature.

What if Facebook, which is part of the basic social functionality, decides to change how everybody's content is linked, goes for something more ambitious, and this suddenly requires more RAM? You're making bets now based on what you can't predict. This is why I think both Sony and Microsoft have been very cautious with their initial RAM reservations.

I would be happy if Sony designated, say, 2 GB of RAM as 'contested' between the game OS and the apps. So if you run a game that needs more RAM, the launcher prompts you to shut down running apps. Or when trying to run an app that needs more RAM, you're told this can only be done if the memory-hogging game currently running is closed down. But that's based on my usage of console apps now (almost zero), and I'm one user.

I remember the early PS3 interface being quick and responsive, but that is not my perception now. Sony may decide to completely change the UI down the road, like Microsoft did with the dashboard.
 
The memory reserve isn't really a big deal at the moment, considering most games are multiplatform with the Xbox One, which itself has 5 GB available. Also, the current reserve really only affects launch-window exclusives, which were never going to push the system anyway. Think of it as a small loss for some future insurance.

The real worry is whether or not it stays at 4.5 GB available forever, severely limiting the potential of future first-party exclusives. My guess is they'll free some memory up, which I believe has been the case in every recent console generation.
 
The memory reserve isn't really a big deal at the moment, considering most games are multiplatform with the Xbox One, which itself has 5 GB available. Also, the current reserve really only affects launch-window exclusives, which were never going to push the system anyway. Think of it as a small loss for some future insurance.

You are right. The big deal is not the 4.5 GB (maybe 5 GB or more under certain conditions) for the PS4 versus 5 GB for the XB1.

The real information in this article is the impressive size of the different buffers required to display a next-gen game with great image quality at 1080p (as promptly noted by AINets and HTupolev).

We are really talking about memory architecture pros and cons here: the PS2/XB1 memory architecture (slow main RAM / very fast VRAM) versus the PS4 memory architecture (fast unified RAM).
 
Why are they using so many buffers, and such big ones, though? It seems very inefficient. Maybe they are just not bandwidth-bound and didn't care about optimizing and packing them.

----
The slides were made available by Adrian Bentley. Mirrored here (uncheck "Download using download accelerator").

Thanks :)

Damn, that hugefile site is awful. I'll reupload it to copy.com soon.
 
My main impression after this one, the performance capture talk, and the Crytek talks is that game development has some potentially serious issues ahead because of the current state of realtime graphics:

- High quality global illumination solutions all require a LOT of baking and preparation; they're basically trading off artist time against runtime computational requirements. Getting a simple scene to render nicely takes a huge effort, whereas in offline CG many studios are moving to a fully raytraced approach that takes very little extra care beyond building "good" assets.
It looks like this is going to become more and more problematic with time, as the hardware won't get faster but there's always going to be a need for better results.
It also means that game engines will once again become less viable as an alternative to offline rendering software, as the artist overhead costs more than buying a larger render farm to do the raytracing. The CryEngine-for-movies type of stuff is probably not going to take off in the end.

- Many of the character deformation tools used for decades in offline work are still not really viable on GPUs. Practically every advanced facial deformation system is still based on bones, as they're almost "free" on the hardware (just upload the mesh and the weights, and then it only takes some simple vertex shader code to do the job; see the skinning sketch after this list). Blendshapes and other techniques are way too expensive in computation and memory, so once again a lot of artist and programmer time is traded off to get high quality results.
The same goes for cloth, apparently, and body deformations are also mostly based on adding lots of extra helper bones (so they're not acting as the actual skeletal bones of a human body, but are placed on top of those to "simulate" muscle bulging and such).

- Memory is - at least on the PS4 - treated as almost "free", which makes me wonder how long it'll take before consumption gets optimized again. Today it looks like virtual texturing as seen in Rage is not as important, but I still expect the tech to resurface within a few years. Oh, and no news on virtualizing geometry either.
Also interesting is that most games are using little more than 4 GB, which was my initial estimate for what nextgen games need and what can be reasonably accessed with the current bandwidths for RAM and background storage. What I didn't count on were the additional multimedia, social, and other features of these nextgen consoles, which were the reason to bump the total up to 8 GB ;) But again, in the end the games are working with ~4 GB altogether.
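
On the bones-vs-blendshapes point above, here is a minimal CPU-side sketch of standard linear blend skinning, just to illustrate why bone-based deformation is so cheap on GPUs. In practice this math lives in a vertex shader; the types and the four-influences-per-vertex layout are common conventions, not any particular engine's.

```cpp
#include <array>
#include <cstddef>

struct Vec3   { float x, y, z; };
struct Mat3x4 { float m[3][4]; };  // per-bone rotation + translation ("skinning matrix")

// Apply one bone's 3x4 matrix to a position.
Vec3 transform(const Mat3x4& b, const Vec3& p) {
    return { b.m[0][0]*p.x + b.m[0][1]*p.y + b.m[0][2]*p.z + b.m[0][3],
             b.m[1][0]*p.x + b.m[1][1]*p.y + b.m[1][2]*p.z + b.m[1][3],
             b.m[2][0]*p.x + b.m[2][1]*p.y + b.m[2][2]*p.z + b.m[2][3] };
}

// Linear blend skinning: each vertex carries 4 bone indices and 4 weights,
// and the skinned position is just a weighted sum of 4 matrix transforms.
// Per vertex this is a handful of multiply-adds plus the indices/weights,
// which is why it's nearly "free" on the hardware.
Vec3 skinVertex(const Vec3& bindPos,
                const std::array<int, 4>& boneIndex,
                const std::array<float, 4>& weight,
                const Mat3x4* palette)  // skinning matrices, one per bone
{
    Vec3 out{0.0f, 0.0f, 0.0f};
    for (std::size_t i = 0; i < 4; ++i) {
        const Vec3 p = transform(palette[boneIndex[i]], bindPos);
        out.x += weight[i] * p.x;
        out.y += weight[i] * p.y;
        out.z += weight[i] * p.z;
    }
    return out;
}
```

A blendshape-based rig, by contrast, has to store and stream an extra delta position per vertex for every active shape, which is where the computation and memory cost comes from.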

Lastly, it's nice to see so much information about the new engines, content creation pipelines, and the complete takeover of PBR. Funnily enough, this could allow studios like us to tap into game artists a lot more easily ;)
 
My main impression after this one, the performance capture talk, and the Crytek talks is that game development has some potentially serious issues ahead because of the current state of realtime graphics:

- High quality global illumination solutions all require a LOT of baking and preparation; they're basically trading off artist time against runtime computational requirements. Getting a simple scene to render nicely takes a huge effort, whereas in offline CG many studios are moving to a fully raytraced approach that takes very little extra care beyond building "good" assets.
It looks like this is going to become more and more problematic with time, as the hardware won't get faster but there's always going to be a need for better results.
It also means that game engines will once again become less viable as an alternative to offline rendering software, as the artist overhead costs more than buying a larger render farm to do the raytracing. The CryEngine-for-movies type of stuff is probably not going to take off in the end.
Couldn't this be solved just by developing on PC? With a good lighting engine, something like the recently shown Nvidia GI Works but in higher quality could work as a real-time solution during development; then it would just be baked for the final product in the final stage of development.
https://www.youtube.com/watch?v=n7iE2X6muC4

Crytek is using GI probes that are placed like cubemaps and probably calculate in seconds, so I think that for movie production they could have a similar solution, but with tons more probes at higher precision that can be modified in almost real time.
But sure, a ray tracer or even a path tracer like Brigade is still a much higher precision solution for movie production:
https://www.youtube.com/watch?v=BpT6MkCeP7Y
 
It's not about how precise the realtime solution is, it's about how much artist time it takes to get a scene to render. It is not a good trade off if it can render in a second but it takes days for an entire team to make it work.

CG animation is mostly about iterations, you build everything at a base level and then based on how it looks in camera, you continuously refine the assets that matter the most. Sometimes as often as every day. So any overhead is multiplied by very high numbers, and artist time is usually a LOT more expensive than buying more render nodes and software licenses because those are a one time cost. Not to mention that you'll need a big render farm for the baking in realtime engines anyway.
 
OK, so scratch the idea of Crytek having few-second GI probes. I'd still like to see their recent efforts with Cinema.

What about using a real-time system like Nvidia GI Works during development and then just baking the lighting when the final build is compiled? Sure, the quality is not up to par with farm rendering, but you don't have any stalls in development because of it.
 
I dunno, I'm not really aware of any engine that doesn't require serious prep work for indirect lighting, or basically getting any asset to just render at all.

The beauty of using a fully raytraced renderer is that the lighters can just load a Maya scene with proxies for the highres geo, pick lights and move them around and adjust their parameters, and then send it off to render, without any additional steps. This would result in a major change in the visuals, and the direct and indirect lighting and reflections and all would just change on their own without any additional human involvement.
The environment team could just update a few models and textures and shaders and export them to the library and have everything change, too.
I could sculpt a new blendshape and add it into the rig, export with a single click, and all the rendered sequences would update on their own again, too. With hair and all.

Changes like these tend to take a lot more effort in a realtime system, because the rendering efficiency relies on baking a lot of calculations into specific data formats. I don't know exactly how you'd change facial deformations on a mostly bones-based rig (using corrective blendshapes on top), but I imagine it takes a bit more effort than just selecting the vertices I want to change and moving them into their new position...


I'm not sure how Nvidia GI Works is built, but I imagine it's not super simple either.
 
I think Nvidia GI Works works similarly to the CryEngine SDK: the whole lighting is based on real-time GI, and everything can be modified by simple scripts through global light parameters and per-light-source parameters, so designing a workflow around those parameters, synced per grid/level, would be pretty easy.

But yeah, when looking at the whole pipeline, not just lighting, it doesn't look as bright. Just reading in Crytek's presentation that it took an artist a whole week to make the LOD0 model from the high-res version by hand screams inefficiency. Yeah, content creation is still miles behind in game development.
 
Well, at least it seems to be becoming more common to paint textures on the high-res source model and bake them onto the lower-res ingame representation. I believe Naughty Dog was one of the pioneers of this approach with Uncharted 2, although Ninja Theory textured the high-res models on Heavenly Sword as well (we did some outsource work for that one, ages ago). This would allow them to change the ingame model as they want and still not lose any texture work. But just the interface screenshots for Crytek's Xnormal-based tool are already kinda scary... :O
 
It's not about how precise the realtime solution is, it's about how much artist time it takes to get a scene to render. It is not a good trade off if it can render in a second but it takes days for an entire team to make it work.

CG animation is mostly about iterations, you build everything at a base level and then based on how it looks in camera, you continuously refine the assets that matter the most. Sometimes as often as every day. So any overhead is multiplied by very high numbers, and artist time is usually a LOT more expensive than buying more render nodes and software licenses because those are a one time cost. Not to mention that you'll need a big render farm for the baking in realtime engines anyway.
Unity 5 is going to have path-traced realtime previews for its lightmaps, to avoid the whole "fully bake the lighting before you can see your results and try again" cycle.
 
The Infamous postmortem has been updated here with new notes on almost all pages.

Very interesting comments from Adrian Bentley, a few excerpts:

Page 24 & 25, Memory:

It’s incredibly helpful to have enough memory to make things faster and simpler... We’re still coming up with useful ways to use the memory.

Page 39, CPU:

While the CPU has ended up working pretty well, it’s still one of our main bottlenecks.

Page 41, G-buffers:

Here’s how we store our material properties. Up to 8 gbuffers (5-6 + depth/stencil) is 41 bytes written per pixel. Yes, that’s 85 MB just for fullscreen buffers. Good thing the PS4 has a huge amount of fast RAM. :smile:
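
For reference, a quick sanity check of that 85 MB figure, assuming a 1920x1080 render target and taking the 41 bytes per pixel from the slide at face value:

```cpp
// 5-6 color targets + depth/stencil add up to 41 bytes written per pixel.
constexpr long long kPixels        = 1920LL * 1080LL;          // 2,073,600 pixels
constexpr long long kBytesPerPixel = 41LL;
constexpr long long kGBufferBytes  = kPixels * kBytesPerPixel; // 85,017,600 B ~= 85 MB
```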

Page 57, Compute:

Compute queues are definitely the way of the future though.
 
I doubt they need all those 85 MB of buffers simultaneously. Most likely they could write new data on top of existing data and/or do in-place modifications with compute shaders (instead of needing multiple buffers). Of course, if you don't need to optimize your layouts, then you can come up with these big numbers, but I am pretty sure that with some extra work you could cut those numbers in half (or even lower) without affecting image quality in any meaningful way.
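
A minimal sketch of that reuse idea, assuming nothing about Sucker Punch's actual passes: a post-process step that overwrites its own input instead of allocating a second full-screen buffer, which is roughly what an in-place compute pass buys you.

```cpp
#include <vector>

int main() {
    constexpr int kWidth = 1920, kHeight = 1080;
    // One full-screen float buffer (~8 MB); the values here are placeholders.
    std::vector<float> luminance(kWidth * kHeight, 1.5f);

    // In-place "pass": no second ~8 MB destination buffer is ever allocated.
    for (float& v : luminance)
        v = v / (1.0f + v);  // simple Reinhard-style operator as a stand-in

    return 0;
}
```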

I personally don't like storing material properties (or any other static data) in g-buffers, because that basically increases your BW usage by 9x. Instead of reading a DXT-compressed (per-pixel) material property once, you need to read it (1x BW), store it uncompressed in the g-buffer (+4x BW), and then read that uncompressed data again in the lighting stage (+4x BW). Of course, if you don't have access to your entire texture data set in the lighting shader (via virtual texturing, bindless resources, or packing all texture data into a big atlas), you need to gather it into the g-buffer (and pay the extra BW cost).
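
Spelling out that 9x figure under the same assumptions (roughly 4:1 DXT/BC compression, so the raw g-buffer copy of a property costs about 4x the bandwidth of the compressed texture read):

```cpp
constexpr float kReadCompressedTexture = 1.0f;  // sample the DXT-compressed data once
constexpr float kWriteRawToGBuffer     = 4.0f;  // store it uncompressed in the g-buffer
constexpr float kReadRawInLighting     = 4.0f;  // fetch it again in the lighting pass
constexpr float kTotalRelativeBW =              // = 9x the single compressed read
    kReadCompressedTexture + kWriteRawToGBuffer + kReadRawInLighting;
```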
Compute queues are definitely the way of the future though.
Yes :)
 
What are the Vertex Normals used for? Seems kinda new (I've never seen them in G-buffers before).
 