The Game Technology discussion thread *Read first post before posting*

Whoever notices the benefits of variable AA will notice the benefits of higher resolution, whether they're a typical user or not. I don't think that argument really works in favor of AA.

Furthermore, the argument for variable resolution, with the example of Wipeout HD, is that you render the relatively empty scenes at the highest resolution and only decrease resolution when things get hectic (or maybe even when a smooth framerate is absolutely required). And at that point no user can detect the resolution, whether they are the average Joe or Quaz, as long as they are still _playing_.

I think this scenario is much more common (or should I say more easily identifiable) than your example, and it somewhat works for the variable AA argument too. However, I would imagine that, in addition to the smoothness of the resolution change, the relative predictability of overhead as a function of resolution should make variable resolution the more attractive choice for automated, adaptive techniques. Just a guess of course.
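
To illustrate what I mean by predictability (a toy sketch with made-up costs, limits and names, not how Wipeout or any real engine does it): if GPU frame time is roughly a fixed cost plus a per-pixel cost, an automated system can solve for the widest resolution that still fits the frame budget.

```cpp
// Toy sketch: pick the widest render width that should fit the GPU frame budget,
// assuming frame_time ~= fixed_cost + cost_per_megapixel * megapixels.
// All costs, limits and names here are invented purely for illustration.
#include <algorithm>
#include <cstdio>

int chooseRenderWidth(double fixedCostMs, double costPerMegapixelMs,
                      double budgetMs, int height, int minWidth, int maxWidth)
{
    // How many megapixels we can afford after the resolution-independent work.
    double affordableMp = (budgetMs - fixedCostMs) / costPerMegapixelMs;
    int width = static_cast<int>(affordableMp * 1.0e6 / height);
    return std::max(minWidth, std::min(width, maxWidth));
}

int main()
{
    // E.g. 6 ms of fixed work, 6 ms per megapixel, 16.6 ms budget (60 fps), 1080 lines.
    std::printf("render at %dx1080\n", chooseRenderWidth(6.0, 6.0, 16.6, 1080, 1280, 1920));
    return 0;
}
```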
 
I believe variable res can be worth it if it only occasionally dips, and IF the user perceives the difference between HD and Full HD. Like joker said, most people can't. I have a 40" Samsung 656 and I think it's noticeable (well, somewhat). Of course, in the case of Wipeout it's the difference between 1080p and 1280x1080, not 720p, so it's not really the same thing, which gives more credence to joker's opinion. I also heard (here) that vertical res is more important, but I'm not sure if the comment was directed at specific games; I think it was concerning GT5. Perhaps variable res is only really worth it at 52"+.

And then there's the issue of more effects and post-processing...
 
Now, variable resolution. We walk around the world and see no difference. Has the resolution been changing? Yup, says the coder, and verifies that the res is indeed bouncing around. We walk around the world some more and still see no res pop, no visible difference. So... umm, why are we even bothering? There is no tangible visible benefit to tweaking the res on the fly. What the res change does tell us, though, is that some areas of the world are less stressing on the GPU. So why not ditch variable res altogether, always run at the lower res, and instead add some more foliage in that area that seems to be under light load? The net result: before, there was no visible benefit from res swapping, but after, by running at a set albeit lower res, we can now add more detail to some sections of the world. Hence we went from no benefit to some benefit.

See, I'd go the other way. I don't know what Wipeout does, but let's just say for the sake of argument that it always runs at 1920x1080, and occasionally drops to 1440x1080 for just a few ms to accommodate load.

Now I agree, the typical user would not notice that drop. My problem is that the typical user will not notice any difference between 1920x1080 and 1440x1080 anyway. I don't get hung up on marketing, so I personally don't care if it's 'full HD'. What I would do instead is drop the resolution full time to 1280x1080, and spend all those freed-up cycles making the game look way better.

I completely disagree with this, because you're making a lot of assumptions.

Firstly, you're assuming that the overall resolution is high (1920x1080). You say that running the game at 1280x1080 would therefore not make much noticeable difference, which I agree with, but it's incorrect to dismiss the entire technique of variable resolution based on one particular case. Would the same dismissal of the technique as a marketing ploy, on the grounds that the difference is not noticeable, still hold in an example of dropping from 1280x720 to 1024x576?

Secondly, you're assuming that the system load varies due to different locales, which is incorrect for two reasons. If you already know a particular area is under performance budget, you can increase the detail in that area "for free" anyway, you don't need variable resolution to tell you that. But most importantly, variable resolution is not designed for this eventuality, it is designed for when the action in a particular locale gets so hectic that it exceeds what the game budgeted for and therefore what the system can handle.
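
To be concrete about what I mean (a minimal sketch; the budget, thresholds, step size and limits are invented purely for illustration, not taken from any shipping game): the system only reacts when the measured GPU time blows the budget, and steps back up once there is headroom again.

```cpp
// Minimal sketch of a reactive variable-resolution controller.
// Step the render width down when the measured GPU frame time exceeds the
// budget, and step it back up once there is comfortable headroom again.
// All values here are invented for illustration.
struct DynamicResController
{
    int    width    = 1920;   // current render width (height stays at 1080)
    int    minWidth = 1440;
    int    maxWidth = 1920;
    int    step     = 160;
    double budgetMs = 16.6;   // 60 fps target

    // Call once per frame with the previous frame's GPU timer result.
    void update(double gpuTimeMs)
    {
        if (gpuTimeMs > budgetMs && width > minWidth)
            width -= step;                                   // hectic: shed pixels
        else if (gpuTimeMs < budgetMs * 0.85 && width < maxWidth)
            width += step;                                   // headroom: restore detail
    }
};
```

The step down only ever happens in the hectic moments the budget didn't cover; the rest of the time you render at full resolution.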

I see it as another, equally valid and theoretically preferable, option for developers:

1) You have a fixed resolution and a desired framerate, and you design the world around those parameters. When things occasionally get too hectic, the framerate drops but the resolution is maintained.
2) You have a fixed resolution and a desired framerate, but you design the world leaving enough headroom for the most hectic scenes. The world isn't as detailed as it could be, since most of the time that headroom isn't being used.
3) You have a fixed framerate and a desired resolution, and you design the world around those parameters. When things occasionally get too hectic, the resolution drops but the framerate is maintained.

I'd label these as theoretical options, as practical considerations aren't taken into account (e.g. how expensive is the variable resolution system itself? What if you don't have a hardware scaler? etc), but I think dismissing variable resolution as simply a marketing ploy is way off the mark, as it can have definite benefits.
 
This and this are 1280x1080, according to Quaz's measurements.

I really like the variable framebuffer solution. The only issue I have with it is that when WipEout HD is really stressed, you get both 1280x1080 *and* some pretty ugly screen tear - in those instances, the illusion it is attempting to create is compromised.

In terms of the 1080p marketing ploy, I'm kinda unconvinced about WipEout's credentials here. Run the game side by side with a *true* 1080p v-locked release like Ridge Racer 7. While WipEout is arguably the better looking game, RR7 has a clear advantage in terms of clarity and isn't that what full HD is supposed to be about? Maybe it's the variable framebuffer, maybe the blur AA (??), maybe the screen tear... RR7 has none of those things and is still the showcase 1080p game for PS3 IMHO.
 
I disagree that developers should stop pushing the edge. I now have a 52" Full HD screen, and I find that LBP, which I mostly play nowadays, suffers in visual fidelity from being rendered at only 720p. I thought it looked marvellous on my old 720p TV. And yes, the scaler in my new TV is state of the art.

I hope that LBP2 whenever it comes will move on to 1080p rendering.

Don't listen to him, devs: most would rather have upscaled 720p with AA, extra effects and a better framerate than the extra pixels, especially since game assets are typically much lower in quality than your average film (i.e. texture resolutions etc.), so the extra detail you are able to perceive at 1080p is mainly wasted.

An upscaled 720p game with 2xAA will look better on a Full HD set than a game running at 1080p with no AA. GT5 is a case in point.
 
An upscaled 720p game with 2xAA will look better on a Full HD set than a game running at 1080p with no AA. GT5 is a case in point.

I don't see how you can make such an assertion, when it seems you can't tell the difference between no AA, 2xAA and 4xAA?

GT5 Prologue (demo): the 1080p mode is 1280x1080 (2xAA) in-game, while the garage/pit/showrooms are 1920x1080 with no AA. The 720p mode is 1280x720 (4xAA).
 
Don't listen to him, devs: most would rather have upscaled 720p with AA, extra effects and a better framerate than the extra pixels, especially since game assets are typically much lower in quality than your average film (i.e. texture resolutions etc.), so the extra detail you are able to perceive at 1080p is mainly wasted.
Most people actually have SD screens, so by your way of reasoning, devs may as well give up on 720p rendering too. I want devs to keep pushing the envelope; if you don't, that's fine with me.

An upscaled 720p game with 2xAA will look better on a Full HD set than a game running at 1080p with no AA. GT5 is a case in point.
Who said anything about no AA? Not me anyway.
In what way is GT5 a case in point?
 
But most importantly, variable resolution is not designed for this eventuality, it is designed for when the action in a particular locale gets so hectic that it exceeds what the game budgeted for and therefore what the system can handle.

Every game already has to deal with the "when things get hectic" situation, but we never assume it will just be for a few ms; it could last much longer. So we already have to optimize the goings-on during that 'hectic' phase. Regarding res, I still think my same arguments apply: if you could drop it down to a lower res for a few seconds and no one notices, then you should just render the whole game at that lower res all the time. Past a certain point, the bang for the buck from increased res just isn't there.


I'd label these as theoretical options, as practical considerations aren't taken into account (e.g. how expensive is the variable resolution system itself? What if you don't have a hardware scaler? etc),

It also causes some havoc with post-processing. Post-processing is frequently done with 1/2-, 1/4-, etc. sized buffers of various sorts, and the size/look is tuned to the size of the main color and Z buffers, basically to balance load and look at an acceptable level. If the size of the main buffers keeps changing, well, then you are not only changing the load but also potentially the look.
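
To make that concrete (a toy example, not any particular engine's pipeline): the half- and quarter-res post buffers are usually derived straight from the main buffer, so when the main resolution bounces around, their sizes, and with them both the cost and the effective filter footprint, bounce around too.

```cpp
// Toy example: post-process buffer sizes derived from a (variable) main buffer.
// If the main resolution changes at runtime, these change with it, which shifts
// both the workload and the look of anything tuned against them.
#include <cstdio>

static void printPostBuffers(int mainW, int mainH)
{
    std::printf("%4dx%d main -> %4dx%d half (e.g. blur/DoF), %4dx%d quarter (e.g. bloom)\n",
                mainW, mainH, mainW / 2, mainH / 2, mainW / 4, mainH / 4);
}

int main()
{
    printPostBuffers(1920, 1080);   // nominal resolution
    printPostBuffers(1440, 1080);   // after a variable-res drop
    return 0;
}
```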


betan said:
And at that point no user can detect the resolution, whether they are the average Joe or Quaz, as long as they are still _playing_.

Well, the crux of my entire argument is ultimately that most people would not detect it no matter what, whether they are playing, not playing, etc. 1920x1080 and 1440x1080 will look identical to most people regardless of the situation. So why bother? There's tons of examples of this.

Outside of games there's video cameras. HDV cameras record at 1440 res, AVC cameras at 1920. Marketing loves it; they put a big 1920 on the box. Do you really think anyone could pick out the higher resolution of the two?

Back to games, there are cases like the relatively recent Bioshock PS3 demo, which was startlingly blurry compared to the old 360 version. I noticed it easily, as did Eurogamer, yet many on this very forum *could not* see the difference! That's just one case; there are plenty of others where people could not tell.

Effects like particles are already rendered into 1/4- to 1/16th-sized buffers all the time in PS3 games. How many people do you think have even noticed? Motorstorm 2 and MGS4 do it; I bet you few people even know that. Resistance 2 has artifacts from very low resolution particles all over the place, plain as day to my eye, but I bet you if I mentioned that in the Resistance 2 thread they would think I was insane. They just don't see it, and these artifacts are all *far* more pronounced than any difference you would see between 1920 and 1440 resolutions.
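
To give a feel for why developers do this (back-of-the-envelope numbers; the overdraw figure is invented), the fill cost of blended particles scales with the buffer area, so quarter- and sixteenth-sized buffers are a huge saving:

```cpp
// Back-of-the-envelope: blended-particle fill cost at reduced buffer sizes.
// The overdraw factor is an invented figure purely for illustration.
#include <cstdio>
#include <initializer_list>

int main()
{
    const double fullPixels = 1280.0 * 720.0;   // main render target
    const double overdraw   = 8.0;              // average layers of blended particles (invented)

    for (double areaScale : {1.0, 1.0 / 4.0, 1.0 / 16.0})
    {
        double blended = fullPixels * areaScale * overdraw;
        std::printf("particle buffer at %6.2f%% area -> %5.2f million blended pixels/frame\n",
                    areaScale * 100.0, blended / 1.0e6);
    }
    return 0;
}
```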

Hence why I still think that if you want to implement variable res, you should just toss it and stick to the lowest res you were gonna bounce to. It makes stuff like your post-processing pipeline better and more predictable, and saves a ton of performance all around that can be spent elsewhere.
 
Well, the crux of my entire argument is ultimately that most people would not detect it no matter what, whether they are playing, not playing, etc. 1920x1080 and 1440x1080 will look identical to most people regardless of the situation. So why bother? There's tons of examples of this.

Outside of games there's video cameras. HDV cameras record at 1440 res, AVC cameras at 1920. Marketing loves it; they put a big 1920 on the box. Do you really think anyone could pick out the higher resolution of the two?

Back to games, there are cases like the relatively recent Bioshock PS3 demo, which was startlingly blurry compared to the old 360 version. I noticed it easily, as did Eurogamer, yet many on this very forum *could not* see the difference! That's just one case; there are plenty of others where people could not tell.

Effects like particles are already rendered into 1/4- to 1/16th-sized buffers all the time in PS3 games. How many people do you think have even noticed? Motorstorm 2 and MGS4 do it; I bet you few people even know that. Resistance 2 has artifacts from very low resolution particles all over the place, plain as day to my eye, but I bet you if I mentioned that in the Resistance 2 thread they would think I was insane. They just don't see it, and these artifacts are all *far* more pronounced than any difference you would see between 1920 and 1440 resolutions.

I think by the nature of this forum we are obsessed with quantifiable numbers, but the fact is that even if we cannot know the resolution without pixel counting or whatever, higher resolution does give noticeably better IQ. So I don't really think it's important whether the average Joe knows what he is playing in the end, as long as the thing looks better than it would at a lower resolution.

Also I wouldn't really count examples like Bioshock because those comparisons tend to be affected by console fanboyism.

As for the particle effect buffers, I have to admit I don't normally notice quarter resolution, but in my defence, smoke and explosions are generally low in high spatial frequencies. And smaller or sharper particles move too fast to notice anything.

That's totally different from texture or edge quality which heavily depends on resolution.
 
I meant that the game looks better in 720p with 4xAA than in 1080 anamorphic with 2xAA. The presence of aliasing on the car models completely destroys the suspension of disbelief.

I think you're asking too much of devs by telling them to make their games run at 1080p with AA when they are clearly limited by the hardware.
If you want such resolutions and AA, that's what PCs are for.
 
Effects like particles are already rendered into 1/4- to 1/16th-sized buffers all the time in PS3 games. How many people do you think have even noticed? Motorstorm 2 and MGS4 do it; I bet you few people even know that. Resistance 2 has artifacts from very low resolution particles all over the place, plain as day to my eye, but I bet you if I mentioned that in the Resistance 2 thread they would think I was insane. They just don't see it, and these artifacts are all *far* more pronounced than any difference you would see between 1920 and 1440 resolutions.

As I said, the resolution drops when things get hectic, for a few frames out of every 60 (or 30), which is a matter of a few milliseconds. You could maintain that the drops last for a period of a few seconds, but we cannot prove that. For the former the eye can hardly detect it, and for the latter the eye might detect it, depending on the TV size and the rendered resolution.

As for the particle resolution, I don't see any pixelated smoke at all. They look pretty good anyway.
[screenshot thumbnails attached]
 
I've just read this article about Threading Building Blocks on Gamasutra:
http://www.gamasutra.com/view/feature/3970/sponsored_feature_optimizing_game_.php?page=1
Intel TBB is available at no cost from opentbb.org for Microsoft Windows, Mac OS X, Linux, Solaris, and Microsoft Xbox 360. It ships with Intel compilers. You can learn more about Intel TBB at opentbb.org and in the O'Reilly nutshell book Intel Threading Building Blocks by James Reinders.
Does somebody have a clue about any studio using it on the 360?
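
I haven't heard of one, but for reference, this is roughly what the core of TBB looks like in use: a data-parallel loop over a range (a trivial sketch of mine, not taken from the article; older TBB releases take a functor object where I've used a C++11 lambda).

```cpp
// Trivial Intel TBB example: a data-parallel loop over a range.
// (Illustrative sketch only, not taken from the Gamasutra article.)
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <cmath>
#include <cstddef>
#include <vector>

int main()
{
    std::vector<float> data(1 << 20, 1.0f);

    // TBB splits the range into chunks and schedules them across its worker threads.
    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, data.size()),
        [&](const tbb::blocked_range<std::size_t>& r)
        {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                data[i] = std::sqrt(data[i]) * 0.5f;
        });

    return 0;
}
```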

------------------------------------

I'm surprised that the news didn't have some echo here, but it looks like MS has just improved its devkit. The new devkit, intended for AAA games, has twice the amount of RAM of the "standard" devkit (1GB?).
This was part of MS's announcements at GDC:
http://www.gamasutra.com/php-bin/news_index.php?story=22877
The company also, on the fully professional development side, has announced an entirely new version of its XDK development hardware, which contains two times the RAM of the currently released version. This additional RAM will be used for debugging and optimization tools separate to the game's assets and code, and will allow developers to manage their game development process more efficiently. The upcoming XDK software update which will ship alongside these units will continue to support existing XDK development units.
What kind of gains can we expect from this boost, for studios willing to spend some extra time on optimizations?
 
Why would Intel offer their TBB for the Xbox 360 when the system uses a PowerPC architecture?
 
Some interesting things from the GDC 09 slides:
  • R2: they make a comparison between Light Pre-Pass and G-buffer approaches in the presentation.
  • lots of other presentations regarding the SPU from Insomniac
  • Halo Wars uses the tessellator for terrain rendering
  • The Star Ocean 4 presentation goes rather in-depth about their shaders and camera effects.
  • There's a good breakdown of SPU usage in KZ2
  • Gears of War 2 rendering techniques. Easy-to-understand slides on their character lighting, SSAO, gore & blood. (Height-field tessellation of their fluid surfaces)
 
I wouldn't go as far as taking this as a hint, but it looks like the Microsoft Halo team considers multicores (Larrabee-likes) to be the future for rendering.
See the presentation "Zen in Multicore Rendering". Their figures (slide 19) suggest they are looking quite far ahead, so it's not likely for next-generation systems/GPUs/Larrabee.
Anyway, it still makes me wonder whether rapso could have been right, and whether console manufacturers, as they did with Cell/Xenon/Xenos, may once again push something ahead of its time, if not in terms of performance then in terms of architecture.
NB: he stated this here: http://forum.beyond3d.com/showthread.php?p=1177905#post1177905

EDIT
I forgot to say thanks for the links, Alstrong!!! :D
 
This thread doesn't get much love :(
I guess that Crytek and Larrabee are stealing the show.
I would have loved to read some devs' reactions, especially in regard to the new improved 360 devkits.
I remember reading complaints about them (i.e. lack of RAM).
 