What resolution and framerate should next-gen target *spawn

HD is a technical term associated with an exact number (1280x720). Surely, we can use a shorter expression than 1280x720 without being looked down upon on Beyond3D. :)
I might be weird, but for me HD = 1080p; 720p is just a wannabe. Also, using the number of scanlines instead of just "HD" gives way more information and actually tells others what kind of HD you specifically mean.
 
What I find unbelievable is how very smart guys like Alstrong, Andrew Lauritzen, and others show such conformism. I didn't expect such conformity, as in everyone having to follow the same basic pattern.
Speaking for myself, I have a 1366 x 768 32" TV and a 1680 x 1050 20" monitor that I game on. 1080p vs 720p doesn't look much different in games where I can choose. In the case of Age of Booty I deliberately force 720p because the 60fps is much better than 1080p30. In a game like PixelJunk Monsters, 1080p only makes a small perceivable difference on the 1680 x 1050 display (which correctly letterboxes to 16:9).

For people who game on massive displays, or sit really, really close, I can understand the desire for higher fidelity. You're in the minority though. Most other people don't care. Quite what we're all supposed to be conforming to, I don't know. It's not like we were brainwashed into getting smaller displays!

A guy asked me how much it cost, and I said that it was 400€, to which he replied: "I would never ever pay 400€ for a console, never in my life". I didn't reply, but maybe now his words make sense. I am not going to pay 300-400€ anymore for a console which can only run games at 720p, AGAIN.
Even if those 720p res games look as good as real life? Even if those 720p games are some of the most entertaining games exclusive to a console platform? Do you enjoy Tiger Woods 12 on your PC thanks to it being far higher resolution despite being a poorer game than the console version?

For instance, looking at this game... Driver San Francisco... It runs at 720p, but I just can't read most of the signs, it's muddy. At 1080p I could discern a lot more, like in real life (I borrowed these two pics from a Digital Foundry article as an example).
That's an issue of filtering. Grab a copy of this photo and downsample it to 1280x960, and it's pin sharp. Sure, if you have a larger TV it won't look as good as a 1080p version of the same photo, but as I said before, most people don't. It's daft to mandate that devs cater to a niche and force those who'd benefit from prettier or higher-framerate 720p games to endure a less favourable experience.
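
If anyone wants to try that experiment, here's a minimal sketch using Pillow (purely illustrative; the filenames are placeholders and any decent image tool's Lanczos/bicubic resize does the same job):

```python
# Downsample with a proper reconstruction filter rather than nearest/bilinear.
# Requires Pillow (pip install Pillow); filenames are placeholders.
from PIL import Image

src = Image.open("photo_full_res.png")        # e.g. the 1080p-or-better original
dst = src.resize((1280, 960), Image.LANCZOS)  # windowed-sinc resampling
dst.save("photo_1280x960.png")
```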
 
HD is a technical term associated with an exact number (1280x720).
Untrue, it's ambiguously used in a number of contexts. Sometimes it means 720p, sometimes it means 1080p, sometimes it means 1-2 megapixels, sometimes it means >1 megapixel, sometimes it doesn't even refer to video! It's a stupid marketing term that has zero technical significance at this point. Just write the numbers. If you think 1280x720 is the magical point at which things become good, fine, write that. Just don't expect us to know what "HD" means to you on a given day ;)

In other words, you wouldn't rather have a proper 1080p picture.
For the last time, this is not the question. You're assuming it's some sort of situation in which you can magically get 1080p without sacrificing anything else. This is obviously untrue. What you don't seem willing or able to grasp is that there are ways that increase the overall image quality more than increasing the resolution does - many, in fact. Yeah, we'd all love to have infinite resolution and infinite quality, but you make trade-offs to produce the overall best experience, whether that be related to resolution, frame rate or any other factor.

My point is not that 1080p doesn't look nicer than 720p, all other things being equal. Of course that is the case. And obviously I like higher resolutions and quality... hell I do most of my gaming on PCs partially for that reason. But raw framebuffer resolution (DX8 style) is such a small part of the image quality story that fixating on it is just childish and naive.
 
Except MLAA and FXAA both look noticeably worse than MSAA. They are obviously better than no AA, but note that they are a compromise in and of themselves.

I'd like to give a like or a +1 or whatever for that! Really happy to know that not everyone is sold on these post AA solutions which are basically little more than clever blur algorithms...

Then again, any 35mm film based movie footage has a lot of blur and grain, so it could be considered photorealistic ;)
 
So I think there's probably a good chance that we can compute visibility and do primary texturing at 1080p in future consoles (and hopefully some good 4x MSAA on top of that for visibility), but I doubt anyone but the lazy will just brute force compute the entire shader at that rate. It's just not an efficient use of processing resources.

Now that, I can't agree with. More complex shaders will actually start to require oversampling, getting away with undersampling will simply not be possible.
Bumpy speculars and reflections and self-shadowing are a necessary element of better looking materials and introduce serious aliasing artifacts even without raytracing and such.
Imagine something as simple as the brushed metal surfaces with anisotropic reflections and highlights on stuff from everyday furniture to Iron Man's armor. Can't undersample that.

Maybe you can compute GI/indirect lighting at lower frequency, but that's just one element of surface shading. So I strongly disagree on this point, at least in general terms.
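
To make the undersampling point concrete, here's a toy numpy sketch (my own illustration, not anyone's shipping code): a very tight highlight over a high-frequency bumpy normal, evaluated once per pixel versus averaged over 16 sub-samples. The single-sample version randomly hits or misses the narrow specular spikes from pixel to pixel, which is exactly the shimmer you see on brushed metal once things start moving.

```python
# Toy 1-D demo of specular undersampling vs. supersampling (illustrative only).
import numpy as np

def specular(x, exponent=2000.0, bump_freq=80.0):
    # Tight Blinn-Phong-style lobe over a high-frequency normal perturbation.
    n_dot_h = np.clip(np.cos(0.4 * np.sin(bump_freq * x)), 0.0, 1.0)
    return n_dot_h ** exponent

pixels = np.arange(256) / 256.0
pixel_width = 1.0 / 256.0

one_per_pixel = specular(pixels + 0.5 * pixel_width)                    # 1 sample per pixel
offsets = (np.arange(16) + 0.5) * pixel_width / 16.0
reference = specular(pixels[:, None] + offsets[None, :]).mean(axis=1)   # 16x supersampled

print("max error vs. filtered reference:", np.abs(one_per_pixel - reference).max())
```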
 
As for next gen, I expect the following:

- Most games will be 30 fps but try to target a 1080p resolution. This however means anything between 1920 and 960 columns; I'd expect something around 1600 to become the favorite. MSAA and post AA combined will be enough to make it nearly indistinguishable from a 1:1 resolution, and the fill rate and memory wins will be too tempting not to go sub-1920. Especially as deferred rendering becomes more and more common.

- COD and GT games will maintain 60 fps. COD will make whatever resolution sacrifices are necessary to maintain this frame rate; GT will fine-tune the content instead (they got away with fairly simple but effective stuff this gen, too).

- We will see more 1920x1080 games though, particularly arcade/psn titles and probably a lot of sports/racing games as well.

- Yet we're also going to see a lot of sub-1080p games too. Most people think COD:BO on the PS3 runs at 1920x1080 even though it actually renders at a quarter of that resolution! People just can't see the difference and thus they won't give a damn.

- We're also going to see a lot of dynamic resolution scaling to maintain stable frame rates. It just makes all kinds of sense to do it instead of suffering tearing and dropped frames (see the rough controller sketch below).

8x to 10x the GPU power would probably work well with the above; other important factors would be if there's EDRAM or how much the memory bandwidth is, how much the CPU can help with stuff like deferred lighting/shading, and how efficient and flexible the MSAA implementation is.
Background storage speed and seek times will also become a lot more important if virtual texturing or geometry becomes more common (remember sparse voxel octrees!).
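
On the dynamic resolution scaling point above, a minimal controller sketch (hypothetical, with made-up hook names; my own simplification): measure the GPU frame time each frame and nudge the render width toward the budget, clamped to something like the 960-1920 column range mentioned earlier, while the output and HUD stay at native 1080p.

```python
# Minimal dynamic-resolution controller (hypothetical engine hooks, illustrative only).
TARGET_MS = 33.3                  # 30 fps frame-time budget
MIN_WIDTH, MAX_WIDTH = 960, 1920  # render-width range; output/HUD stay at 1920x1080

render_width = MAX_WIDTH

def update_render_width(gpu_frame_ms):
    """Nudge the 3D render width toward the frame-time budget."""
    global render_width
    scale = TARGET_MS / max(gpu_frame_ms, 1e-3)   # >1 means headroom, <1 means over budget
    damped = 0.9 + 0.1 * scale                    # damp the response to avoid visible oscillation
    render_width = int(max(MIN_WIDTH, min(MAX_WIDTH, render_width * damped)))
    return render_width
```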


I also spend some time thinking about the actual visual elements and rendering systems.
When I compare ingame stuff with what we're producing, there are many differences of all sorts. A lot of our film-like looks come from shameless cheating, and some of that could be implemented as various post-processing elements, but the quality of the lighting and shading is quite obviously still lacking, even with energy-conserving shaders and linear lighting and all the other bells and whistles. Proper reflections and area lights aren't easy to fake, I'd even dare to say it's nearly impossible... but then again some of our stuff was done in PRMan without such features, so a rasterizer should be able to go far enough as well. I dunno... if I ever find out what would help in general image quality I'll post about it ;)
 
Now that, I can't agree with. More complex shaders will actually start to require oversampling, getting away with undersampling will simply not be possible.
I think I maybe misrepresented my point - I wasn't specifically saying that everything is going to be undersampled, but more that shader terms are increasingly evaluated at varying rates. Some less than pixel frequency, some sample frequency, some more depending on the term.

That said, one thing of note in the last few years has been the happy tendency to actually try and solve/pre-filter various terms, rather than just super-sampling them as is common in film. Normal/bump maps and specular are getting pretty close to being solved in this regard (even anisotropic), and that's great! The more we can band-limit these signals up front, the better. Super-sampling is really the last resort, although it certainly is necessary in some cases.
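
For anyone wondering what pre-filtering normals/specular looks like in practice, one classic published example along these lines is Toksvig's normal-map mipmapping trick: when a mip level averages bumpy normals, the shortened average normal tells you how much to widen (i.e. lower) the specular exponent, so the highlight is band-limited instead of aliasing. A rough numpy sketch of the offline per-texel step (my own illustration of that idea, not any particular engine's code):

```python
# Toksvig-style specular anti-aliasing precompute for one mip level (illustrative sketch).
import numpy as np

def toksvig_exponent(avg_normals, spec_exponent):
    """avg_normals: (H, W, 3) box-filtered, NOT renormalized, normals of a mip level."""
    len_na = np.linalg.norm(avg_normals, axis=-1)             # |N_avg| <= 1; shorter = bumpier footprint
    ft = len_na / (len_na + spec_exponent * (1.0 - len_na))   # Toksvig factor
    return ft * spec_exponent                                  # adjusted (wider) exponent to store per texel
```

Roughly speaking, the shader then just fetches the adjusted exponent (or a gloss value derived from it) from the same mip chain as the normal lookup.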

Anyways, my point was more that the output resolution is increasingly decoupled from the rates at which the various terms are computed, so it's not really worth spending a lot of time arguing about it and trying to draw "lines in the sand".

Agreed with all your other points as well.
 
The issue is that band limiting removes details too, not just aliasing. We're actually fighting our renderer a bit right now in this field too, as the current texture filtering washes out a lot of the painstaking work from the texture painter guys and we don't like that on the asset end of the pipeline.

PRMan is certainly the benchmark in texture filtering and detail preservation. I've never seen anything render images with more detail, no wonder it's still preferred for most movie VFX jobs. Sure, Sony's pushing Arnold (which we also use) because of the ease of use and the added mid-frequency detail from raytracing, but the textures and displacement are just not up to Pixar's renderer yet, IMHO.
(The somewhat annoying part is that the latest movie rendered with Arnold is - the Smurfs...)


It really is an interesting and exciting period as movie VFX completes the big transition from fully rasterized images to fully raytraced ones, but we could certainly use more processing power for this. But eventually CPUs will catch up and we'll slowly be able to dial up the quality settings to previous heights and get some really nice stuff for the compositors to enhance further.

Anyway, my point is that even MSAA won't solve everything and supersampling will eventually become unavoidable. Otherwise we wouldn't need to spend all that money on rack systems and custom aircon solutions ;)

It'd be very ironic though if the PS5/Xbox4 generation would indeed abandon the local hardware and move to cloud based systems as the video compression would probably negate most of the visual advances. We're always annoyed by the things gaming websites do to our movies, but at least we usually get to show the better ones in the Siggraph electronic theater, so some of the CG community gets to see them at a higher quality level...
 
The issue is that band limiting removes details too, not just aliasing.
A "perfect" sinc filter does not of course, but approximations must always err on the side of blurriness. Still, EWA for instance does a pretty damn good job. Even 16x aniso as implemented on GPUs is pretty good quality for games.
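
"Err on the side of blurriness" because an ideal sinc has infinite support, so practical resamplers use a windowed approximation; Lanczos is the usual one. Just to illustrate the kernel itself (a generic sketch, nothing GPU-specific):

```python
# Lanczos (windowed-sinc) reconstruction kernel; a = number of lobes.
import numpy as np

def lanczos(x, a=3):
    x = np.asarray(x, dtype=float)
    kernel = np.sinc(x) * np.sinc(x / a)   # np.sinc is the normalized sinc: sin(pi*x)/(pi*x)
    return np.where(np.abs(x) < a, kernel, 0.0)
```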

Anyway, my point is that even MSAA won't solve everything and supersampling will eventually become unavoidable. Otherwise we wouldn't need to spend all that money on rack systems and custom aircon solutions ;)
Somewhat true, but film is not a perfect predictor for games, as the constraints are different. The move to raytracing in film is as much about production as anything else, whereas game solutions are limited by basically having to ship the renderer as well, and knowing that its worst-case performance is what dictates the end user experience. Naty Hoffman had some good points about this in his HPG keynote this year (slides should be online if you google).

In any case you can do SSAA with the MSAA sampling patterns now anyways. You can even do fancier stuff too, so the tools are all already there. Still, we're back in a battle to get people to use MSAA again, so one thing at a time ;)

It'd be very ironic though if the PS5/Xbox4 generation would indeed abandon the local hardware and move to cloud based systems as the video compression would probably negate most of the visual advances.
Naty also talked about this, and he thinks it's a pretty stupid place to put the client/server boundary. I tend to agree.
 
Somewhat true, but film is not a perfect predictor for games, as the constraints are different.

I agree here, of course.
Although our field (video game intros) actually has a rather flexible tolerance limit because of the video compression used on gaming websites; the best quality versions shown at press conferences can get away with some IQ artifacts, at least no one caught them in our stuff at this E3 ;) and we made sure to update them with better quality versions as soon as we could. Damn deadlines.
But movies can't afford to do that, and since there are some strict constraints for rendering times as well, even if they're measured in hours and not milliseconds, they still rely on scaling up the other variable, which is rendering power. Weta and ILM have some utterly amazing computing facilities, far beyond the average mortal's comprehension, to serve their rendering requirements. You really can't do that with a gaming console, so you have to compromise on the other aspects (image quality and scene complexity, including lighting/shaders).

Naty Hoffman had some good points about this in his HPG keynote this year (slides should be online if you google).

Thanks for the tip... first of all that talk almost got me to tears in the beginning with the Elite, Ultima Underworld and GLQuake screenshots ;)
His points are well researched and very true, especially about zero tolerance to aliasing artifacts, from spatial through shading to temporal. I'm particularly impressed that he mentions Arnold which really is at the forefront of pure raytracing based rendering and the importance of conserving shading/lighting artist time. I also like his point about voxel based volumetric/fluid effects.

Anyway, games can't avoid the aliasing issues forever. At least there's a trend that's improving - PS1 and 2 and Xbox1 had practically nothing, now at least some form of post process AA or 2x MSAA is expected, so we can hope for 4xMSAA to become a standard.
Same goes for a lot of other stuff that's forgiven - lack of cloth simulations, ridiculous intersection problems with characters and the complete lack of body contact, lack of proper self shadowing and so on. Some will be easier to solve, others might take more than the next console generation.


Naty also talked about this, and he thinks it's a pretty stupid place to put the client/server boundary. I tend to agree.

I really hate all kinds of remote computing that's beyond LAN. Render farms are a nice idea though, even if I personally never deal with them in asset production...
 
Although our field (video game intros) actually has a rather flexible tolerance limit because of the video compression used on gaming websites; the best quality versions shown at press conferences can get away with some IQ artifacts, at least no one caught them in our stuff at this E3 ;) and we made sure to update them with better quality versions as soon as we could. Damn deadlines.
Good point. That's definitely an interesting job/set of constraints that I'm not too familiar with. You should write a B3D article about what you do :)

Anyway, games can't avoid the aliasing issues forever.
Agreed and the top tier game developers know this and acknowledge aliasing (both spatial and temporal) as one of the largest artifacts that they want to eliminate. In fact you'll be happy to know that several say that given more powerful future hardware they would dedicate that power first to eliminating aliasing, and only secondarily to better lighting and shading, increasing resolution, etc. That's good to hear :) Honestly flickering geometry, specular and other aliasing is probably one of the most obnoxious issues in current real-time rendering... it just looks bad.

I really hate all kinds of remote computing that's beyond LAN. Render farms are a nice idea though, even if I personally never deal with them in asset production...
It all comes down to power in the end too. Moving data takes the most power and so you want to move the minimal amount... that's usually not the video stream ;) Naty's point about WoW and similar systems already doing this in a much smarter place is very true.
 
I'm not that optimistic on that front for next-generation systems. Whereas things like pixel counting were relevant at some point, my belief now is that it's setting up mental barriers, at least in the internet gaming community. So I expect publishers to push more and more eye candy at the cost of performance, and pressure from pixel counters will push the trend further.

For me the target should change on a per-game basis. In the hack-and-slash genre I play Sacred 2 a lot, and when I tried some other games in the same genre I really enjoyed what plain 1080p brings in terms of clarity (versus its contenders). In some other genres I'm not sure I would have the same POV.
I completely agree with Andrew that stability in performance should come before image quality, so depending on the game, AA and/or resolution should vary according to what is the more relevant thing to do.
 
As long as the HUD remains untouched I'm all for dynamically scaling the resolution of the 3D stuff. Would like to see this as an option on PC games. Some already do have dynamic settings based on frame-rate, but not for render res AFAIK.
 
Sacred 2 would drop to diabolically low framerates, making all that fidelity pointless when the temporal resolution was so poor. There's no point aiming for 1080p and losing so much framerate, and few people would really like the graphics of a true 1080p60 game. Pick any of your favourite shooters and imagine it simplified in every way to hit a 1080p60 target, and then say you still prefer 1080p. ;)
 
Sacred 2 would drop to diabolically low framerates, making all that fidelity pointless when the temporal resolution was so poor. There's no point aiming for 1080p and losing so much framerate, and few people would really like the graphics of a true 1080p60 game. Pick any of your favourite shooters and imagine it simplified in every way to hit a 1080p60 target, and then say you still prefer 1080p. ;)
It's acceptable on the 360; actually the tearing is more bothersome. As TB pointed out, the CPU was the bottleneck. Anyway, it's just an example: for this kind of game I would favour higher resolution over anti-aliasing, for example, while at the same time I acknowledge that resolution should fluctuate to offer consistent performance (in Sacred 2 the main problem is the CPU bottleneck).
 
Short comment about HD terminology - in Europe the terms have been fixed by EU law: HD Ready means at least 1280x720p, and Full HD means at least 1920x1080. They both also have a specific and clearly identifiable logo.

Perhaps I'm an optimist, but I think that if games like Motorstorm 3 and GT5 can run at 1280x1080 this gen, then next gen they can probably push that further. However, at the same time I absolutely recognise that the most important thing is to focus on the current bottlenecks and rank improvements in order of importance. Shifty's point that if we could do photo-realism at 480p it would look better than most current games at 1080p holds true for the most part. I agree strongly about the importance of lighting, animation, physics and just sheer detail to make the game-world appear alive.

On the other hand, lest we forget, I would also like to point out that the cars in GT5 sometimes look a *lot* better than they do on an SDTV, particularly in Full HD. GT5 is also a nice example of the importance of lighting and IQ if you look at the difference between photo mode and in-game. Even in-game, though, the image quality is something else, and I think 1280x1080 with 2xAA actually looks better than 1280x720 with 4xAA.


Give me GT5 at 1080p without the artifacts in the buffer effects, a perfect framerate and a more living environment, with enhanced weather effects like gusts of wind that you actually feel on your car, and with the only limits to the number of cars you have on track being practical ones, not technical, and I think we've gotten to a point where technique is no longer very important. :) It is interesting to see what would be necessary for that to happen. In the video above, for example, I think they basically programmed different global lighting parameters per section of the track, which is why so few of the tracks in GT actually support weather and day/night cycles.

Another thing is that I'm not absolutely convinced that resolution is always the bottleneck either way. I remember a long time ago we had discussions about 480p vs 720p vs 1080p in terms of rendering time needed, and back when Resistance, for instance, still targeted 1080p, if I remember correctly they ran into memory issues more than bandwidth issues - they just didn't have space for the higher resolution geometry and textures that were required.

We are hitting limits for certain applications, but the question is how close to these limits are we?
 
Short comment about HD terminology - in Europe the terms have been fixed by EU law: HD Ready means at least 1280x720p, and Full HD means at least 1920x1080.
So the descriptor is longer than actually just typing the resolution? And "HD Ready"... really? :p

I'll repeat... this is Beyond3D. Let's use something sensible. ;)
 
Untrue, it's ambiguously used in a number of contexts. Sometimes it means 720p, sometimes it means 1080p, sometimes it means 1-2 megapixels, sometimes it means >1 megapixel, sometimes it doesn't even refer to video! It's a stupid marketing term that has zero technical significance at this point. Just write the numbers. If you think 1280x720 is the magical point at which things become good, fine, write that. Just don't expect us to know what "HD" means to you on a given day ;)
HD starts at 1280x720 and not before. Full HD is 1920x1080. I'm going by the ATSC standards.

While hitting the HD mark is not the "...magical point at which things become good...", it's pretty close to that point on MY 70" 1080p TV (for gaming). Where would people be without goals? HD is the goal of these consoles. It is part of the reason people buy HD consoles instead of the Wii. It would be wise to achieve that goal. Sales may and do depend on it.


For the last time, this is not the question. You're assuming it's some sort of situation in which you can magically get 1080p without sacrificing anything else. This is obviously untrue. What you don't seem willing or able to grasp is that there are ways that increase the overall image quality more than increasing the resolution does - many, in fact. Yeah, we'd all love to have infinite resolution and infinite quality, but you make trade-offs to produce the overall best experience, whether that be related to resolution, frame rate or any other factor.

My point is not that 1080p doesn't look nicer than 720p, all other things being equal. Of course that is the case. And obviously I like higher resolutions and quality... hell I do most of my gaming on PCs partially for that reason. But raw framebuffer resolution (DX8 style) is such a small part of the image quality story that fixating on it is just childish and naive.
I like how you separated that sentence from the ones that followed. It cuts off the message nicely. :) Anyway, there are ALWAYS sacrifices to be made at any resolution. Even now, not many people believe they can make a game that looks movie quality at 480p on these consoles. 480p! So, this big talk about sacrifices doesn't mean much. Developers will almost always find new techniques to implement in games that require a lot of performance to achieve. That seems to be the point you and others are missing. What I'm getting from 1280x1080 and full HD games on PS3 is just what I want. The sacrifices were minimal and the resolution pay-off was huge. Of course, 720p has been good, too.

I look forward to the next generation of consoles and the 1080p graphics (hopefully, in S3D) they will yield. I predict lots of my money going into those games. :)
 
So, this big talk about sacrifices doesn't mean much. Developers will almost always find new techniques to implement in games that require a lot of performance to achieve. That seems to be the point you and others are missing. What I'm getting from 1280x1080 and full HD games on PS3 is just what I want. The sacrifices were minimal and the resolution pay-off was huge.
Minimal sacrifices - how exactly are you qualifying that? You reckon a drop from 60fps to 30fps (and worse) is minimal? Excluding everything else that's cut back. Your statement also attributes either considerable stupidity or laziness to developers for not creating more 1080p games this gen, or even games that hit 30fps solid at 720p. If 1080p can be reached at such minimal cost, why don't all devs render at 1080p instead of wasting their time with convoluted AA techniques? How come they often struggle to use 4xMSAA at 720p when twice the shaders and fill needed for 1080p is so easy to get? :???:
 
What I'm getting from 1280x1080 and full HD games on PS3 is just what I want. The sacrifices were minimal and the resolution pay-off was huge.

We must be living in different worlds, or you have a substantially better PS3 than I have. For example, I disabled 1080p mode on my PS3 every time I played GT5, because GT5 in 1080p didn't look or run as well as in 720p mode. Between that and you not being able to notice the low-quality TV upscale (especially at 70", where it should be blatantly obvious; it's brutally obvious on my 65"), it seems you and I are somehow seeing completely different things from the same console.
 