The Game Technology discussion thread *Read first post before posting*

I really don't think you guys are getting the point we are making.

Either way, I'm happy to see what they are doing with Far Cry 3. Hopefully now we won't see anyone making lazy dev comments.

I don't know about calling them lazy, but seeing how bad the frame-rate is on both console versions, it's safe to say that they made some pretty questionable decisions.

What's the point in using even 100% of the SPUs if your game runs like crap? I saw on NeoGAF that SSAO takes 6ms... isn't that a bit too much, when a 30fps target only gives you 33ms for the whole frame? They chose to keep the resolution high, plus SSAO and MSAA, at the expense of the framerate, and all that in an FPS.

FC3 must be the worst-performing high-profile game that I've seen this gen, and now they publish slides about how much they're using the SPUs and how much they've optimized the engine on the PS3? Is this what optimization is? :rolleyes:
 

I definitely agree that there is the case of doing too much with too little, but that's different than being lazy. :p

Unfortunately many people aren't that sensitive to a poor frame rate, at least IMO. So whatever creates the prettiest screenshot sometimes wins out over what offers smoother gameplay.
 
I think one of the people who worked on the game not so subtly hinted that, in his opinion, the console versions were quite possibly the worst-performing high-profile game released this gen.
 
They surely are... It's worse than Crysis 2. At least that game doesn't tear 50% of the time.
 

I doubt the slides were for consumers. Optimization is making something faster than it was before. That doesn't imply that the result is fast.

Framerate is more often an art issue anyway. It's crazy to me that people still don't understand this, especially game artists!
 

It might be easier to get artists to change things (and in my experience they are very open to the idea if given proper tools to help them), but programmers can do *a lot* to improve framerate.
See talks from Mike Acton and others to get an idea.
 
Was doing a bit of digging and found this nice thread over at Steam regarding Trine 2:

We are using textures up to 4096 x 4096 resolution. The end boss is using one of those, for example. We compress non-transparent textures to DXT1, alpha-tested ones to DXT3 and alpha-blended ones to DXT5. Our normal maps on PC are DXT5, compared to DXT1 on consoles (DXT5 is twice the space but better quality).
Generally per level we don't have that much texture data - comparing PC to console builds we only have 40 textures or so which have been downscaled by dropping the highest mipmap level.
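For scale, some napkin math of my own (not from the Frozenbyte post): DXT1 stores a 4x4 texel block in 8 bytes and DXT3/DXT5 in 16 bytes, so the gap between DXT1 and DXT5 normal maps at 4096 x 4096 is 8 MB vs 16 MB for the base level alone, before the roughly one-third extra for the mip chain.

Code:
#include <cstdio>
#include <cstdint>

// DXT1 packs a 4x4 texel block into 8 bytes; DXT3/DXT5 use 16 bytes.
static uint64_t dxtBytes(uint32_t w, uint32_t h, uint64_t bytesPerBlock)
{
    uint64_t blocksX = (w + 3) / 4;
    uint64_t blocksY = (h + 3) / 4;
    return blocksX * blocksY * bytesPerBlock;
}

int main()
{
    // 4096 x 4096, like the end-boss texture mentioned above.
    printf("DXT1: %llu MB\n", (unsigned long long)(dxtBytes(4096, 4096, 8) >> 20));  // 8 MB
    printf("DXT5: %llu MB\n", (unsigned long long)(dxtBytes(4096, 4096, 16) >> 20)); // 16 MB
    return 0;
}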

We don't use any real SSS. We use something we call "wrap shaders", which allow artists to tag certain materials (such as flora) to be lit when not facing the light (it's driven by wrap strength, wrap color and the light-dot-normal value).
For stereo we render both eyes manually. It costs twice the performance, but it also gives the best results.
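For anyone unfamiliar with the term, textbook wrap lighting looks something like the sketch below. This is my own illustration, not Frozenbyte's shader; presumably their wrap color then tints the wrapped contribution on top of this.

Code:
#include <algorithm>

// Textbook wrap lighting: shift and rescale N.L so surfaces stay lit past the
// terminator. wrap = 0 is standard Lambert; wrap = 1 lights all the way around.
float wrapDiffuse(float NdotL, float wrap)
{
    return std::max(0.0f, (NdotL + wrap) / (1.0f + wrap));
}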

We use 3 G-buffers (all 8-bit targets). From memory it goes something like this:
1: depth.x, depth.y, normal.x, normal.y (depth is linear, a 16-bit value encoded into 2 channels)
2: normal.z, albedo.x, albedo.y, albedo.z
3: wrap parameter, fog amount, spec s, spec p
On PC (and PS3) we also render ambient light to a 4th render target during the G-buffer pass.
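The "16-bit depth split across two 8-bit channels" part is the usual fixed-point trick; here's the standard encode/decode, written by me in plain C++ rather than taken from their shaders:

Code:
#include <cmath>
#include <cstdint>

// Encode a linear depth in [0,1] into two 8-bit channels for ~16-bit precision.
void encodeDepth16(float depth, uint8_t& hi, uint8_t& lo)
{
    uint32_t d = (uint32_t)std::lround(depth * 65535.0f);
    hi = (uint8_t)(d >> 8);   // stored as depth.x
    lo = (uint8_t)(d & 0xFF); // stored as depth.y
}

float decodeDepth16(uint8_t hi, uint8_t lo)
{
    return (float)((hi << 8) | lo) / 65535.0f;
}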

We encode our wrap parameter (RGB, intensity) to a single value (a few bits per component) and get it from a look-up texture, as it has to be stored in our G-buffer. On lower detail settings we just use a 2D look-up (wrap parameter, dot product) to get a faster approximation, as there is still some math involved.
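That packing would be something like the following; 2 bits per component is my guess at a plausible split, the post doesn't say:

Code:
#include <cstdint>

// Pack an RGB wrap color plus intensity into one 8-bit G-buffer channel,
// 2 bits per component (an assumed split). The packed byte can then index
// a look-up texture at shading time.
uint8_t packWrap(uint8_t r, uint8_t g, uint8_t b, uint8_t i) // each in [0,3]
{
    return (uint8_t)((r << 6) | (g << 4) | (b << 2) | i);
}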
Everything (besides shadows) is stored in 8-bit buffers, and sadly nothing in the rendering is gamma-correct (well, besides AA on higher PC quality settings). This is nice for portability but it does give a quality hit. We do old-school add-smooth to add up our lighting while trying to prevent clamping. No HDR/tone mapping here ;).
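If "add-smooth" is the same operation as the old fixed-function blend mode of that name (SrcBlend = ONE, DestBlend = INVSRCCOLOR), it's effectively a screen blend: light accumulates toward 1.0 instead of clamping. A sketch, assuming that reading:

Code:
// Add-smooth / screen-style accumulation: terms approach 1.0 asymptotically
// instead of clamping, at the cost of squeezing bright scenes into the top of
// the 8-bit range (the precision issue described below).
float addSmooth(float dst, float src)
{
    return src + dst * (1.0f - src); // == 1 - (1 - src) * (1 - dst)
}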

Add-smooth is a bit of a double-edged sword, as it also means many of our scenes don't fully use even the 8-bit precision properly. In fact, we used to do our post-process AA "properly" before post-processing, and it actually meant that AA wasn't really doing much - colors get scaled so much in post-processing. We had to move it after post-processing to actually have a proper effect. As for actual post-processing, we have a simple bloom, DOF and some color effects (color factor, brightness, contrast, saturation). In case you are interested, it should be possible to hook up PIX captures for Trine 2. It's not prevented, and it should even have PIX tags to help figure out what's happening :).

We are not using any light maps. All lighting is done in real time. We have from some tens up to a hundred lights visible in a typical scene, as artists work around the limits in our light model, gamma issues and so on.
edit:
JoelFB said:
AlStrong said:
It was mentioned that you use a 4th render target for an ambient light term on PC and PS3, but it doesn't seem like that has made a noticeable difference compared to the 360. Do you know what's going on there (maybe done in a separate render pass for 360)? I presume WiiU uses that 4th RT as well.
This is what I got by picking some brighter brains: "Actual lighting calculations are the same with and without the 4th render target. Without it, ambient will be rendered as a separate render pass."
 
It might be easier to get artists to change things (and in my experience they are very open to the idea if given proper tools to help them), but programmers can do *a lot* to improve framerate.

What kind of tools do they need that are not available now?
 

In my experience these sorts of tools tend to be very specific to the game or engine. For instance, in our engine we have a debug feature that color-codes every pixel depending on how many lights affect that pixel. Then the lighting artists can turn that mode on, and if they see lots of red they know that I'm going to be very grumpy when I profile the level.
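The core of that sort of overlay is just a count-to-color mapping; a toy version with invented thresholds (tune them to your own budgets):

Code:
struct Color { float r, g, b; };

// Map per-pixel light count to a heatmap color for the debug overlay.
// Thresholds are made up for illustration; set them from your real budgets.
Color lightCountHeat(int lightsTouchingPixel)
{
    if (lightsTouchingPixel <= 2) return {0.0f, 1.0f, 0.0f}; // green: fine
    if (lightsTouchingPixel <= 4) return {1.0f, 1.0f, 0.0f}; // yellow: watch it
    if (lightsTouchingPixel <= 6) return {1.0f, 0.5f, 0.0f}; // orange: over budget
    return {1.0f, 0.0f, 0.0f};                               // red: grumpy programmer
}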
 

Yep, that kind. You can also use colours to show texel density, or colour-code expensive shaders to see whether they are really worth using on a given mesh...
 

I don't want to veer off topic but framerate problems are usually the fault of content creators.

Everyone starts the project with an established engine, usually the one used in the previous project. This engine has various performance characteristics and the project has a target framerate. It's the responsibility of content creators to create content that works within these performance characteristics as they exist at the start of the project or at the time of content creation.

Why? Because there is no guarantee that the engine will get faster throughout the project. Engineers tend to work in parallel optimizing various systems and in many/most cases each iteration of the engine will end up more efficient than the last. But you cannot aim for a moving performance target effectively.

I think Sony Santa Monica and others have talked about how target framerate needs to be a priority all throughout development. You really can't ever let your content slip under your target framerate at any point in the project. Going back and redoing it later for performance reasons is just a waste of everyone's time in the long run when you could have just fixed the issues when they came up.

Unfortunately, it seems that in companies where the people calling the shots have art or design backgrounds rather than engineering backgrounds, there is no focus on framerate as a priority throughout the project, and the engineers are just left to pick up whatever pieces they can in the last couple of months.

Anyway, that's why Far Cry 3 (and every game with performance issues) has a bad framerate: the people in charge of content wanted some artistic look or some design tidbit more than they wanted a smooth-running game. The engine is probably significantly more efficient than the previous iteration, but that doesn't matter at all if people keep choking it with bad content.
 

While I agree in principle with Santa Monica Studios on this point, you have to look at things in context. Usually when you ship it's down to artwork, but during development there are a lot of reasons games run below a target framerate.

It's a bit easier if you have a mature engine and the bulk of the work is new content.

But you don't, for example, want to hamstring the gameplay programmers by forcing everything to run at final framerate all of the time.

As an example, on the Xbox racing game Boss worked on for MS, a lot of the performance came from the way the track geometry was divided. This was done manually by a programmer (it wasn't practical to do it automatically) after the level art was "finished" or close to it. Since it was a time-consuming, annoying process and it wasn't reversible, you couldn't maintain final framerate during development of a track; the difference between before and after could be 15fps to 60fps.

What you need to provide as engineers is a good set of metrics that artists can trivially check to stay within the desired budget, and enough skill/experience to set good budgets.

The issue with letting framerate drop too far is that it tends to get out of hand: if the game is already running at 15fps, you can make things much worse without anyone noticing a significant difference.
 
I don't want to veer off topic but framerate problems are usually the fault of content creators.

While what you say is probably true in some cases, it's also been my experience that when a title wraps, the engineers usually still have many things on their to-do list. Often we would start a new cycle, and the engineers would know that by the end we could target a 30% increase in this or that, due to further optimizations and rewrites that they had scheduled.

Indeed content creators have a huge impact on performance, but so does the pipeline. You can get huge performance increases using the same art/content just by modifying how the data is exported and used by the engine.

Do the engineers always get to their targeted goal? No. But having forecasts and pillars for what they hope to do during the current cycle is key to having your engine/game get better and better. Even arbitrary targets like 30fps, 720p, or a 30% increase in NPC count are critical.
 
Last of Us Diffuse Inter-Reflection?

As brought up by jlippo in the game's Console Games thread, this gameplay trailer has some diffuse secondary bounce from the flashlight during the end sequence (15:08). It seems to be a single bounce, but I don't really have a trained eye. Perhaps something like Enlighten, but knowing Naughty Dog it's probably designed in-house. Any thoughts? Was there anything like this in the Uncharted games? If I remember correctly, those games had mostly baked GI for static environment objects, with realtime deferred direct lighting added in.
 
Uncharted 1-3 didn't have any dynamic light bounces or secondary shadows (outside of the very fast, small-radius SSAO).
Also, I'm quite sure that all or most of the secondary lighting the Uncharted series used was baked into vertices.
 
Yes, Uncharted's GI was all baked into vertices, though on the airplane level, when you are fighting inside the plane and there are sun shafts coming through the plane's windows, there is some bounced light coming from the dynamic shafts. But I always assumed they were a hack: probably a dynamic point light with an artist-chosen color and intensity, placed in real time by a raycast from the window into the nearest object in the plane along the direction of the light.
 
I think they used probes like Far Cry 3 (and they have ever since UC2) to simulate the GI on characters/moving objects, because those fit well into the scene and don't stand out even though the GI in the environment is baked.
 
Uncharted 1-3 didn't have any dynamic light bounces or secondary shadows (outside of the very fast, small-radius SSAO).
Also, I'm quite sure that all or most of the secondary lighting the Uncharted series used was baked into vertices.

I am pretty sure the same can be said for LoU also. The interior lighting in that trailer is way too complex to be realtime. There are even baked indirect shadows.

I think they used probes like Far Cry 3 (and they have ever since UC2) to simulate the GI on characters/moving objects, because those fit well into the scene and don't stand out even though the GI in the environment is baked.

I agree, I suspect they were/still are using probes to add secondary illumination to the dynamic objects. Because they are linear games though, maybe they were/are using pre-computed irradiance volumes (like Far Cry 3) as opposed to artist-placed ones (see the sketch at the end of this post for the general idea).

In that gameplay segment I linked to though, a dynamic, moveable light (the flashlight) is bouncing indirect light onto the other walls. My knowledge of the subject is very limited, but I don't think this is doable with probes or irradiance volumes. In the Far Cry 3 case, the PC-spec secondary lights (i.e. not the sun/moon) that bounce light are stationary. The flashlight, though, is moveable.

My guess is still that it is something in a similar vein to Enlighten. Seeing as some seem to think, or maybe know for sure, that UC2-3 baked secondary light per-vertex instead of into lightmaps, I could be wrong. Can something similar to Enlighten be done without lightmaps?
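For reference, here's the general probe-grid/irradiance-volume technique being discussed: probes on a regular 3D grid, sampled with trilinear interpolation at a dynamic object's position. This is a generic sketch of mine, not Naughty Dog's or Ubisoft's implementation, and a real engine would store SH or ambient-cube coefficients per probe rather than a single RGB value.

Code:
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 lerp(Vec3 a, Vec3 b, float t)
{
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// Probes on a regular 3D grid; a dynamic object fetches lighting by
// trilinearly interpolating the eight probes surrounding its position.
struct IrradianceVolume
{
    int nx, ny, nz;           // grid dimensions
    float cellSize;           // world-space spacing between probes
    Vec3 origin;              // world-space position of probe (0,0,0)
    std::vector<Vec3> probes; // nx*ny*nz baked irradiance values

    Vec3 probe(int x, int y, int z) const { return probes[(z * ny + y) * nx + x]; }

    Vec3 sample(Vec3 p) const
    {
        float fx = std::clamp((p.x - origin.x) / cellSize, 0.0f, (float)(nx - 1));
        float fy = std::clamp((p.y - origin.y) / cellSize, 0.0f, (float)(ny - 1));
        float fz = std::clamp((p.z - origin.z) / cellSize, 0.0f, (float)(nz - 1));
        int x0 = (int)fx, y0 = (int)fy, z0 = (int)fz;
        int x1 = std::min(x0 + 1, nx - 1);
        int y1 = std::min(y0 + 1, ny - 1);
        int z1 = std::min(z0 + 1, nz - 1);
        float tx = fx - x0, ty = fy - y0, tz = fz - z0;
        Vec3 c00 = lerp(probe(x0, y0, z0), probe(x1, y0, z0), tx);
        Vec3 c10 = lerp(probe(x0, y1, z0), probe(x1, y1, z0), tx);
        Vec3 c01 = lerp(probe(x0, y0, z1), probe(x1, y0, z1), tx);
        Vec3 c11 = lerp(probe(x0, y1, z1), probe(x1, y1, z1), tx);
        return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
    }
};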
 
As far as I know, all BF3 (which uses Enlighten) does is keep a different set of precomputed GI for each destructible building. Once you blow one up, it simply switches to another set (considering the destruction is always canned).
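If that's accurate, the runtime side would be little more than an index swap into prebaked data; purely illustrative:

Code:
#include <vector>

// Purely illustrative: if GI is baked once per canned destruction state, the
// "dynamic" part at runtime is just swapping which baked set is active.
struct BuildingGI
{
    std::vector<std::vector<float>> giPerState; // one baked GI data set per state
    int state = 0;                              // 0 = intact, 1 = damaged, ...

    const std::vector<float>& currentGI() const { return giPerState[state]; }
    void onDestruction(int newState) { state = newState; } // swap, no relighting
};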
 