The Game Technology discussion thread *Read first post before posting*

From what I can gather, it is quite hard to introduce tiling to an MP engine that wasn't designed for it (AFAIK the only MP engines that do support tiling are Capcom's MT Framework and Bethesda's Fallout 3 engine).

So while 360 graphics will definitely improve, it may be hard for MP devs to compensate for the limited eDRAM size, and so PS3 games could have a resolution or AA advantage in the future. TR:UW could be the first indication of this.

That is, unless devs choose not to use the PS3's advantage in this area and go for parity instead, keeping the resolution the same for both versions (i.e. in COD's case, 600p).

There are many threads in the forum about tiling and the complexity of it (like this one). But I'm not sure tiling is quite as rare or as arduous as you would suggest.

To add to your list of engines, surely Codemasters' GRID engine and Traveller's Tales' LEGO engine would fit the bill?
 
There are many threads in the forum about tiling and the complexity of it (like this one). But I'm not sure tiling is quite as rare or as arduous as you would suggest.

To add to your list of engines, surely Codemasters' GRID engine and Traveller's Tales' LEGO engine would fit the bill?

Yeah, I was mistaken; there are actually quite a few MP engines using tiling, like Ubisoft's Dunia and Scimitar (used in AC), RAGE and many EA games.

I guess I meant that a few heavy hitters don't use it, like any UE3 titles (though arguably they don't need to, due to their use of post-processing effects to mask aliasing) and Halo 3 (which really needs some sort of AA implemented for future titles).

And it seems that while tiling may work fine for titles running at 30 fps, very few titles that run at 60 fps (other than sports games and other limited-environment games) seem to use it.
Case in point: Team Ninja's Dead or Alive 4, a 60 fps launch title, used tiling, but Ninja Gaiden 2, released three years later, removed the implementation, possibly because of how hard it is for geometry-intensive games to use tiling (if I remember correctly, you have to process geometry at least twice for tiling).

So if this is true, then for 60 fps games like the COD series, the scenario described above could be possible.
 
There are very few 60fps titles for either console.

Lost Planet, Dead Rising and I think Resident Evil 5 all have variable MSAA where tiling is off when the load is too high and on when it isn’t. That seems like a good idea for games where tiling isn’t best suited.
 
There are very few 60fps titles for either console.

Lost Planet, Dead Rising and I think Resident Evil 5 all have variable MSAA where tiling is off when the load is too high and on when it isn’t. That seems like a good idea for games where tiling isn’t best suited.

But wasn't the MT Framework engine designed from the start to do tiling and this type of variable MSAA? Other devs are going to find it hard to retrofit their engines to do tiling and variable MSAA (especially since Capcom's are the only titles that use variable MSAA).

I wonder how Fallout 3 with its massive environments manages to do 4xAA and 720p, which is like the holy grail of 360 rendering quality, and was promised by ATI and MS way back.
 
Strange, I read that the final framebuffer is stored in main memory and eDRAM holds only part of it (the back buffer), but I'm not a specialist.
Is it really impossible to skip eDRAM and use AA tricks similar to the PS3's on the Xbox?
eDRAM is dedicated to back-buffer operations like colour blending, MSAA sampling and depth/stencil testing. The final result (resolved samples, pixels, etc.) is then written to main memory, to be scanned out as the final display or re-used as a texture input in some post-process effects.
The eDRAM logic does not hold texture data.
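
To put rough numbers on why tiling comes up at all, here is a quick back-of-the-envelope sketch (my own arithmetic, not SDK code), assuming 32-bit colour plus 32-bit depth/stencil per sample and the commonly quoted 10 MB of eDRAM:

```cpp
// Back-of-the-envelope estimate of how many predicated-tiling tiles a render
// target needs to fit in 10 MB of eDRAM. Assumes 4 bytes of colour and
// 4 bytes of depth/stencil per sample (the usual case).
#include <cstdio>

int tilesNeeded(long long width, long long height, long long msaaSamples)
{
    const long long edramBytes     = 10LL * 1024 * 1024;          // 10 MB
    const long long bytesPerSample = 4 /*colour*/ + 4 /*depth*/;  // 8 bytes
    long long frameBytes = width * height * msaaSamples * bytesPerSample;
    return (int)((frameBytes + edramBytes - 1) / edramBytes);     // round up
}

int main()
{
    printf("720p, no AA : %d tile(s)\n", tilesNeeded(1280, 720, 1)); // ~7.0 MB  -> 1
    printf("720p, 2xMSAA: %d tile(s)\n", tilesNeeded(1280, 720, 2)); // ~14.1 MB -> 2
    printf("720p, 4xMSAA: %d tile(s)\n", tilesNeeded(1280, 720, 4)); // ~28.1 MB -> 3
    return 0;
}
```

So 720p with no AA just fits, while 2xMSAA and 4xMSAA need the frame split into 2 and 3 tiles respectively, which is where the extra geometry passes come from.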
 
Lost Planet, Dead Rising and I think Resident Evil 5 all have variable MSAA where tiling is off when the load is too high and on when it isn’t. That seems like a good idea for games where tiling isn’t best suited.
I have always wondered why 360 games would use variable AA but not variable resolution.
Doesn't "360 scaler" support "inter standard resolutions"?
Visually 1,2,4x AA seems pretty harsh stepping as opposed to possibly smoother resolution transitions.
 
I have always wondered why 360 games would use variable AA but not variable resolution.
Xenos is already built to handle up to 4x the sampling rate, so the only difference in load is the number of tiles and the accompanying increased geometry processing. Varying the resolution varies the pixel load, which may have far more performance implications.
 
But I'm not sure tiling is quite as rare or as arduous as you would suggest.

It's not; implementing tiling is cake. I didn't do that part on MLB, but another coder added tiling support in one day. I quickly added code after that to calculate object tile #'s, which was also easy. That second part, incidentally, is optional as well; you can totally skip it and let the hardware handle it if you want. The hardware is smart enough to calculate tile #'s itself after the tile #1 pass, and it will automagically patch the command buffer so that an object, say, in tile #3 will not be processed during tile #2 rendering, etc. It's not as optimal as calculating tile #'s yourself though, since if you let the hardware do it then all objects get parsed in the tile #1 pass, whereas if you do it yourself you can skip many objects from that tile #1 pass.
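
To give a rough idea of what that optional second step looks like, here's a toy sketch (my own pseudocode, not the actual MLB code), assuming tiles are horizontal screen bands, which is the common setup:

```cpp
// Toy version of "calculate object tile #'s": project each object's screen-space
// bounds and work out which horizontal tile bands it overlaps, as a bitmask.
#include <cstdint>
#include <algorithm>

struct ScreenBounds { int minY, maxY; };   // object's projected vertical extent in pixels

// Bit i of the result is set if the object touches tile i.
uint32_t objectTileMask(const ScreenBounds& b, int screenHeight, int tileCount)
{
    int tileHeight = (screenHeight + tileCount - 1) / tileCount;
    int firstTile  = std::max(0, b.minY / tileHeight);
    int lastTile   = std::min(tileCount - 1, b.maxY / tileHeight);

    uint32_t mask = 0;
    for (int t = firstTile; t <= lastTile; ++t)
        mask |= 1u << t;
    return mask;
}

// Per tile pass, only submit the objects whose mask includes that tile, e.g.:
//   if (objectTileMask(bounds, 720, 3) & (1u << currentTile)) drawObject(obj);
```

The win is that an object sitting entirely in tile #3 never gets submitted while rendering tiles #1 and #2, which is exactly the work the hardware path can't skip during its first pass.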


I wonder how Fallout 3 with its massive environments manages to do 4xAA and 720p, which is like the holy grail of 360 rendering quality, and was promised by ATI and MS way back.

Fallout 3 is actually a great fit for tiling because the world is made of lots of pieces, frequently small and in the distance, so many objects can be binned into tiles real easily. It's also a PC bound title, meaning the vertex load of the game is probably relatively low, so the extra vert load from 3 tiles (4xmsaa) can be absorbed. If I remember right, the average vertex load increase with 3 tiles across multiple games was about 1.7x.


I have always wondered why 360 games would use variable AA but not variable resolution.

Easy, because variable resolution (for main scene rendering) is a gimmick/marketing ploy. If no one notices the resolution drop, then you are far better off just always rendering at that lower res all the time, and spending the saved pixel time on making the game look better in areas of the world that have pixel time to spare. Long story short, you'd end up with a better-looking game by skipping the variable resolution nonsense altogether.
 
That second part, incidentally, is optional as well; you can totally skip it and let the hardware handle it if you want. The hardware is smart enough to calculate tile #'s itself after the tile #1 pass, and it will automagically patch the command buffer so that an object, say, in tile #3 will not be processed during tile #2 rendering, etc. It's not as optimal as calculating tile #'s yourself though, since if you let the hardware do it then all objects get parsed in the tile #1 pass, whereas if you do it yourself you can skip many objects from that tile #1 pass.

Interesting...predicated tiling is automatic on XNA, and I had wondered whether it was something built-in or whether it was added specifically for the framework.

When I was first working on my engine and some test scenes, enabling MSAA caused a nice performance hit when I was batching together many instances of geometry all over the screen. Then I realized that it was probably replaying everything I'd submitted in that DrawPrimitive call (or at least that's what it seemed to be doing, it's a little hard to tell without access to PIX :p).
 
Xenos is already built to handle up to 4x the sampling rate, so the only difference in load is the number of tiles and the accompanying increased geometry processing. Varying the resolution varies the pixel load, which may have far more performance implications.
Are you telling me AA doesn't increase pixel shader load on Xenos even if the geometry complexity is high?

Easy, because variable resolution (for main scene rendering) is a gimmick/marketing ploy. If no one notices the resolution drop, then you are far better off just always rendering at that lower res all the time, and spending the saved pixel time on making the game look better in areas of the world that have pixel time to spare. Long story short, you'd end up with a better-looking game by skipping the variable resolution nonsense altogether.
AA will pop up, resolution won't. And how exactly is variable AA not a marketing ploy in the same way? You can pretty much argue the exact same thing.
By the way, as far as I'm aware, variable resolution isn't advertised in any way.
 
Thanks Joker for the explanations. I was just wondering whether the scenario I posted earlier is likely, where 60fps titles like COD6 may have higher resolution on the PS3 (with equal AA levels) due to tiling being too costly for the 360 version.

And variable AA has real benefits. I played Dead Rising and I was impressed by how clean the game looks thanks to the variable AA, and when there is no AA you hardly notice, because at that moment the screen usually has more important things to look out for, like the 300 zombies bearing down on you.
 
Interesting...predicated tiling is automatic on XNA, and I had wondered whether it was something built-in or whether it was added specifically for the framework.

No PIX? Gaaaah, that's painful :) It's possible that XNA does add something to the framework compared to what the standard dev kits do, not sure though. On the dev kits, if you had say 300 objects on screen, 100 in each tile (3 tiles total), and you let the hardware handle it, rendering tile #1 would process all 300 objects, then it patches the command buffer, then tile #2 processes 100 objects and tile #3 processes 100 objects, so 500 objects processed total. XNA on the other hand might be running a task on say core 3 that calculates a tile # for every object as you make your draw calls, before it sends them to the command buffer. In that case, rendering each tile would process exactly 100 objects, 300 objects total, so somewhat more optimal. No clue what XNA does though...
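
Just to spell out that accounting, here's a trivial illustration of the two paths (the numbers are only the hypothetical 300-object scene above, nothing more):

```cpp
// Counting how many object submissions each tiling approach ends up doing
// for a 3-tile scene with 100 objects resident in each tile.
#include <cstdio>

int main()
{
    const int tiles = 3, objectsPerTile = 100, totalObjects = tiles * objectsPerTile;

    // Hardware-predicated path: the tile #1 pass parses every object, then the
    // patched command buffer skips non-resident objects for the remaining tiles.
    int hardwarePath = totalObjects + (tiles - 1) * objectsPerTile;   // 300 + 200 = 500

    // Pre-classified path: each tile pass only ever sees its own objects.
    int classifiedPath = tiles * objectsPerTile;                      // 300

    printf("hardware-predicated: %d submissions\n", hardwarePath);
    printf("pre-classified:      %d submissions\n", classifiedPath);
    return 0;
}
```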
 
Are you telling me AA doesn't increase pixel shader load on Xenos even if the geometry complexity is high?

It can increase pixel load if you have complex geometry way into the distance, but that just means your lod system is bad. For "typical cases", AA on 360 has no extra pixel load.


AA will pop up, resolution won't.

Exactly! Hence why variable rendering res is pointless. Let's say we're sitting down looking at both cases.

Variable AA first. We walk around the world, AA sometimes on, sometimes off. Ok, we can see the AA pop if we look close. So why do it? Well, AA always off makes everything always look jaggy. AA sometimes on makes some areas look nicer, where we can afford the performance. So there is a tangible benefit to sometimes turning on AA, because the game world will look better in some areas compared to always having it off. Will the user see a pop? Perhaps. But we deem it worth it, because some sections, like areas of high contrast or whatever, look much better with AA, so the pros of occasional AA outweigh the cons.

Now, variable resolution. We walk around the world and see no difference. Has the resolution been changing? Yup, says the coder, who verifies that the res is indeed bouncing around. We walk around the world some more and still see no res pop, no visible difference. So... umm, why are we even bothering? There is no tangible visible benefit to tweaking the res on the fly. What the res change does tell us, though, is that some areas of the world are less stressing on the GPU. So... why not ditch variable res altogether, always run at the lower res, and instead let's add some more foliage in that area that seems to be under light load. The net result: before, there was no visible benefit from res swapping, but after, by running at a fixed albeit lower res, we can now add more detail to some sections of the world. Hence we went from no benefit to some benefit.

That's why I think variable res is pointless.


And how exactly is variable AA not a marketing ploy in the same way? You can pretty much argue the exact same thing.

Because you can demonstrate a visible benefit from it. For example, you may decide to render all small objects like rocks, small foliage, weapons, parts that form the people (heads, arms, etc...) with AA. Those objects are all small and hence very tile friendly. So on 360 we'll AA those and make them nice and smooth, and because they are small we can calculate their tile# and take no performance hit. A total win-win scenario. For everything else, like large buildings, large pieces of terrain, etc, we decide that we don't have the vertex budget to process them multiple times, so forget AA for those, we'll render them without it. This is also a variable AA scenario and a good one that has a visible benefit to the user. You can market spin it I'm sure, but in the end the user benefits so it's a good choice. I can't say the same for variable res.


By the way, as far as I'm aware, variable resolution isn't advertised in any way.

It's not, but we all know that our games are put under the microscope at sites and publications all over the world. Variable res won't fool Quaz, but who knows, maybe some publication someplace will get fooled, measure our res at the start gate as 1920x1080, and then claim our game to be FULL HD even though it's not because they didn't realize the res gets halved right after you walk 10 steps away. It's deception pure and simple (just like bullshots), but it happens.
 
I was just wondering whether the scenario I posted earlier is likely, where 60fps titles like COD6 may have higher resolution on the PS3 (with equal AA levels) due to tiling being too costly for the 360 version.

I'm sure it's possible to conjure up some scenario where it's the case, but I wouldn't expect it too often. The most straightforward case is mano-a-mano vert performance. If you had a situation where both machines were maxed out on vert performance, then you wouldn't be able to enable tiling since that would put you over the top. I wouldn't expect this to really happen though...but I really don't want to go down that discussion again :)

There are other more nebulous cases though. For example, let's say a game is running 2xmsaa, and it does some post processing where in order to work correctly, it must have access to the original double wide (2560x720) msaa buffers. On PS3 this is easy: render everything to your msaa buffers and when done, have your post process steps sample color and depth as needed. On 360 it would be a bit different. You render msaa to edram as normal, and you resolve out a regular 1280x720 buffer as normal. However...since your post process needs access to the original msaa color and depth buffers, those need to be resolved out of edram untouched as well. This is an extra step not needed on the PS3, so one could argue that maybe there is a situation where the 360 runs out of performance and the PS3 doesn't. On the other hand...the msaa resolve is faster on the 360, so even though the 360 has extra steps, maybe the performance is a wash.
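
To make the two paths concrete, here's a very rough sketch of that flow; every function is a made-up stand-in (this is not Xbox 360 SDK code), just to show where the extra resolves land:

```cpp
// Sketch of rendering with 2xMSAA when a post-process needs the raw MSAA buffers.
#include <cstdio>

enum ResolveKind { AveragedColor, RawMsaaColor, RawMsaaDepth };

void renderSceneToEdramMsaa() { puts("draw scene into 2x MSAA eDRAM targets (tiled if needed)"); }
void resolveToMainMemory(ResolveKind k)
{
    const char* names[] = { "averaged 1280x720 colour", "raw 2560x720 MSAA colour", "raw MSAA depth" };
    printf("resolve %s out of eDRAM\n", names[k]);
}
void runPostProcess() { puts("post-process samples the resolved buffers as textures"); }

int main()
{
    renderSceneToEdramMsaa();
    resolveToMainMemory(AveragedColor);   // the normal resolve every 360 game does
    resolveToMainMemory(RawMsaaColor);    // the extra resolves this kind of post-process
    resolveToMainMemory(RawMsaaDepth);    //   forces, so the raw samples leave eDRAM too
    runPostProcess();
    // On PS3 the MSAA colour/depth targets already live in ordinary memory, so the
    // post-process reads them directly and the two extra resolves don't exist.
    return 0;
}
```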

So...no clear cut answer for you, but overall I wouldn't expect tiling to get in the way. Then again, it looks like the new Tomb Raider is possibly having this very issue, so maybe SMM can shed some light :)
 
But doesn't that depend on how long the resolution change lasts? For example, if the resolution drops for a few ms it might not be noticed, versus it dropping for a few minutes. I don't know if this is how it works in Wipeout HD, but if no one notices the resolution change, does that mean there is no difference to the user between 720p and 1080p?
 
Now, variable resolution. We walk around the world and see no difference. Has the resolution been changing? Yup, says the coder, who verifies that the res is indeed bouncing around. We walk around the world some more and still see no res pop, no visible difference. So... umm, why are we even bothering? There is no tangible visible benefit to tweaking the res on the fly. What the res change does tell us, though, is that some areas of the world are less stressing on the GPU. So... why not ditch variable res altogether, always run at the lower res, and instead let's add some more foliage in that area that seems to be under light load. The net result: before, there was no visible benefit from res swapping, but after, by running at a fixed albeit lower res, we can now add more detail to some sections of the world. Hence we went from no benefit to some benefit.

That's why I think variable res is pointless.


It's not, but we all know that our games are put under the microscope at sites and publications all over the world. Variable res won't fool Quaz, but who knows, maybe some publication someplace will get fooled, measure our res at the start gate as 1920x1080, and then claim our game to be FULL HD even though it's not because they didn't realize the res gets halved right after you walk 10 steps away. It's deception pure and simple (just like bullshots), but it happens.

I fully disagree with you though.

The variable resolution kicks in only when the GPU is stressed, which usually happens in heavy action scenes. Suppose the GPU has to render 60 frames per second but a few frames take longer to render than the others; those frames are rendered at a lower resolution to keep up with the framerate. The eye can detect the lower resolution if the rendered resolution stays there, but I don't think we can detect it for a few frames out of 60. And that is the point of variable resolution in games.

By the way, WipeOut is rendered at 1920×1080, 1728×1080, 1645×1080, 1600×1080, or 1440×1080. It is nowhere near the halving of 1920x1080 you implied (you may not have been implying the same game, though).
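
Roughly, the kind of per-frame adjustment I mean could look like the sketch below (my own minimal illustration, not code from WipeOut or any shipping game; the 1440-1920 width range just mirrors the numbers above):

```cpp
// Minimal dynamic-resolution controller: pick next frame's render width from
// how far the previous frame's GPU time was over or under the 60 fps budget.
#include <algorithm>

struct DynamicRes
{
    int   minWidth = 1440;    // never drop below 1440x1080
    int   maxWidth = 1920;    // full 1920x1080 when there's headroom
    float budgetMs = 16.6f;   // 60 fps GPU budget

    int pickWidth(float lastGpuMs, int currentWidth) const
    {
        // Scale the width in proportion to how over/under budget we were, then clamp.
        float scale = budgetMs / std::max(lastGpuMs, 1.0f);
        int   width = static_cast<int>(currentWidth * scale);
        return std::clamp(width, minWidth, maxWidth);
    }
};
// Height stays at 1080, so only the horizontal axis ever gets scaled.
```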

I support variable resolution because it is the best route to full HD.
 
I agree with Joker: there are few, if any, real-world (non-marketing) benefits to having variable resolution.

The complexity introduced to the engine to do this would be far better spent on variable AA, which has real, observable benefits.

And Joker, was I simply reading too much into TR:UW's better IQ on PS3, and is it highly unlikely that other games will go down the same route (like CoD4)?

So is this more of an issue with Crystal Dynamics, who weren't able to implement tiling for the 360 code, rather than any 'foreshadowing' of future MP development choices?
 
I fully disagree with you though.

The variable resolution kicks in only when the GPU is stressed, which usually happens in heavy action scenes. Suppose the GPU has to render 60 frames per second but a few frames take longer to render than the others; those frames are rendered at a lower resolution to keep up with the framerate. The eye can detect the lower resolution if the rendered resolution stays there, but I don't think we can detect it for a few frames out of 60. And that is the point of variable resolution in games.

By the way, WipeOut is rendered at 1920×1080, 1728×1080, 1645×1080, 1600×1080, or 1440×1080. It is nowhere near the halving of 1920x1080 you implied (you may not have been implying the same game, though).

I support variable resolution because it is the best route to full HD.

See, I'd go the other way. I don't know what Wipeout does, but let's just say for the sake of argument that it always runs at 1920x1080, and occasionally drops to 1440x1080 for just a few ms to accommodate load.

Now I agree, the typical user would not notice that drop. My problem is that the typical user will not notice any difference between 1920x1080 and 1440x1080 anyway. I don't get hung up on marketing, so I personally don't care if it's 'full hd'. What I would do instead is drop the resolution full time to 1280x1080, and spend all those freed-up cycles making the game look way better. I'd wager that if we sat a bunch of gamers on a couch around a 42" tv and had them play two versions of the game, one at 1920x1080 with variable res, and another stuck at 1280x1080 but with all that freed-up pixel shader time spent on visual goodies like more post-processing, etc., most would prefer the look of the lower-res game. I know marketing would prefer the 1920x1080 game, but the lower-res game will look better.

I think people get overly hung up on resolution because for the longest time it was the be all end all of performance metrics. But each res bump is a diminishing return. Back in the day it was easy to tell the difference between 320x240 and 640x480 even from 10 feet away from a pc monitor. Today though, people can't tell. There's evidence of this everywhere, even on this forum. Check the resolution thread where people frequently guess the resolution wrong. 720? 640? 1280? 1440? People can't tell until someone zooms in and does a pixel count. Sometimes people still can't tell even after looking at zoomed in images! You're better off spending the cycles elsewhere and just dropping the res.
 
And Joker, was I simply reading too much into TR:UW's better IQ on PS3, and is it highly unlikely that other games will go down the same route (like CoD4)?

So is this more of an issue with Crystal Dynamics, who weren't able to implement tiling for the 360 code, rather than any 'foreshadowing' of future MP development choices?

Well I certainly don't want to speak on their behalf! I'll let SMM talk about Tomb Raider since I have no clue what they are doing behind the scenes. In general though I didn't think supporting tiling was a deal breaker for most games. But who knows, there's always special cases.
 
See, I'd go the other way. I don't know what Wipeout does, but let's just say for the sake of argument that it always runs at 1920x1080, and occasionally drops to 1440x1080 for just a few ms to accommodate load.

Now I agree, the typical user would not notice that drop. My problem is that the typical user will not notice any difference between 1920x1080 and 1440x1080 anyway. I don't get hung up on marketing, so I personally don't care if it's 'full hd'. What I would do instead is drop the resolution full time to 1280x1080, and spend all those freed-up cycles making the game look way better. I'd wager that if we sat a bunch of gamers on a couch around a 42" tv and had them play two versions of the game, one at 1920x1080 with variable res, and another stuck at 1280x1080 but with all that freed-up pixel shader time spent on visual goodies like more post-processing, etc., most would prefer the look of the lower-res game. I know marketing would prefer the 1920x1080 game, but the lower-res game will look better.

I think people get overly hung up on resolution because for the longest time it was the be all end all of performance metrics. But each res bump is a diminishing return. Back in the day it was easy to tell the difference between 320x240 and 640x480 even from 10 feet away from a pc monitor. Today though, people can't tell. There's evidence of this everywhere, even on this forum. Check the resolution thread where people frequently guess the resolution wrong. 720? 640? 1280? 1440? People can't tell until someone zooms in and does a pixel count. Sometimes people still can't tell even after looking at zoomed in images! You're better off spending the cycles elsewhere and just dropping the res.

I disagree that developers should stop pushing the edge. I now have a 52" full-HD screen, and I find that LBP, which I mostly play nowadays, suffers in visual fidelity from only being rendered at 720p. I thought it looked marvellous on my old 720p TV. And yes, the scaler in my new TV is state of the art.

I hope that LBP2, whenever it comes, will move on to 1080p rendering.
 