Why very high resolutions are bad.

Frank

Or at least, they are as long as we don't have the resources to render them with a lot more detail than they have pixels.

The higher the resolution, the flatter and smoother CGI looks. Less lifelike. More plastic. Better shaders don't change that.

Probably the best solution would be to use extremely high polygon counts (or even better: you know), a great many super-high-resolution textures, and unified lighting that also takes care of shadows by directly lighting surfaces that start out with only a "recommended" color (the color they would have when lit at half the maximum value). All of it done at a higher resolution than the end result.

Some jitter while downscaling often improves the visual experience. Gamma correction and other post-processing effects (like HDR lighting, if done well) help nicely.
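
For what it's worth, here's a minimal sketch of the kind of downscaling I mean: average the high-res samples in linear light, with a little jitter on the tap positions, and only re-encode at the end. The 2.2 exponent, the box filter and the one-pixel jitter are just illustrative assumptions, not a recipe.

```
#include <cmath>
#include <cstdlib>
#include <vector>

// Minimal sketch: downscale a gamma-encoded grayscale image by an integer
// factor, averaging in linear light and jittering the tap positions a bit.
// Gamma 2.2 and the plain box filter are illustrative assumptions.
std::vector<float> downscale(const std::vector<float>& src,
                             int srcW, int srcH, int factor)
{
    const float gamma = 2.2f;
    const int dstW = srcW / factor, dstH = srcH / factor;
    std::vector<float> dst(dstW * dstH);

    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            float sum = 0.0f;
            for (int sy = 0; sy < factor; ++sy) {
                for (int sx = 0; sx < factor; ++sx) {
                    // Jitter each tap by zero or one source pixel.
                    int jx = x * factor + sx + (std::rand() % 2);
                    int jy = y * factor + sy + (std::rand() % 2);
                    if (jx >= srcW) jx = srcW - 1;
                    if (jy >= srcH) jy = srcH - 1;
                    // Decode to linear light before averaging.
                    sum += std::pow(src[jy * srcW + jx], gamma);
                }
            }
            float linear = sum / float(factor * factor);
            // Re-encode the averaged result.
            dst[y * dstW + x] = std::pow(linear, 1.0f / gamma);
        }
    }
    return dst;
}
```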

And while MSAA is an interesting concept that nicely takes care of edges, it does squat for surfaces. SSAA is much better in that respect. But neither is a substitute for more polygons and more, much higher resolution textures. Although SSAA is definitely a general improvement if your polygon count and texture sizes are up to par.

Give me more of the above over high resolutions and framerates any day.


Yes, I know there have recently been many threads about these subjects. I simply felt like doing a nice, personal summary. ;)
 
I'd have to say that it depends on the kind of information you need to display. If your scene doesn't need to show far-away objects very clearly, as in a Doom 3 or FEAR sort of game, then yes.

Once you need to display distant or small stuff in such a manner that the user can interpret it correctly, like on a large-scale battlefield, then super high res is preferable.

I often wondered why we didn't just stick with TV res and simply make the graphics 'better', and then I tried playing Civ 2 on the Playstation. :smile:
 
The higher the resolution, the flatter and smoother CGI looks. Less lifelike. More plastic. Better shaders don't change that.
I would tend to believe that is more related to the lack of proper illumination, which becomes more perceptible to the human eye when the viewing area increases, rather than to the resolution. Of course, resolution nowadays scales with increasing LCD size, so many people might not realize the difference... I doubt an increase in pixel density would *ever* lower image quality (but it would lower performance, obviously).

use extremely high polygon counts (or even better: you know), a great many super-high-resolution textures, and unified lighting that also takes care of shadows by directly lighting surfaces that start out with only a "recommended" color
As I said, most of the problem is probably related to indirect/global illumination. It is, IMO, much less visible when using spherical harmonics already (but your mileage may vary, of course!)
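
To make the spherical harmonics remark a bit more concrete: the usual trick is to project the (indirect) lighting into nine SH coefficients per colour channel and then evaluate diffuse irradiance per normal with a small polynomial, which looks surprisingly smooth for how cheap it is. A minimal sketch, using the well-known Ramamoorthi/Hanrahan constants; where the coefficients come from (light probes, a prefiltered environment, whatever) is up to you:

```
#include <array>

struct Vec3 { float x, y, z; };

// Evaluate diffuse irradiance for a surface normal from nine spherical
// harmonic coefficients of one colour channel, ordered
// L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22.
// Constants follow the Ramamoorthi/Hanrahan irradiance formulation.
float shIrradiance(const std::array<float, 9>& L, const Vec3& n)
{
    const float c1 = 0.429043f, c2 = 0.511664f;
    const float c3 = 0.743125f, c4 = 0.886227f, c5 = 0.247708f;
    const float x = n.x, y = n.y, z = n.z;

    return c1 * L[8] * (x * x - y * y)
         + c3 * L[6] * z * z
         + c4 * L[0]
         - c5 * L[6]
         + 2.0f * c1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z)
         + 2.0f * c2 * (L[3] * x + L[1] * y + L[2] * z);
}
```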

Gamma correction and other post-processing effects (like HDR lighting, if done well) help nicely.
Gamma correction is not a post-processing effect, it is a necessary part of the pipeline if you want correct results.
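
A quick toy program shows why it has to live inside the pipeline rather than at the end: averaging gamma-encoded values directly gives a different, darker result than averaging in linear light. (A plain 2.2 power curve stands in for the real sRGB curve here, just to keep it short.)

```
#include <cmath>
#include <cstdio>

// Toy illustration: blending black and white 50/50 in gamma space versus
// in linear light. A plain 2.2 power curve stands in for the sRGB curve.
int main()
{
    const float gamma = 2.2f;
    float a = 0.0f, b = 1.0f;           // gamma-encoded black and white

    float wrong = 0.5f * (a + b);       // averaged as-is: 0.5
    float right = std::pow(0.5f * (std::pow(a, gamma) + std::pow(b, gamma)),
                           1.0f / gamma); // averaged in linear light: ~0.73

    std::printf("gamma-space average: %.3f, linear-light average: %.3f\n",
                wrong, right);
    return 0;
}
```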

And while MSAA is an interesting concept that nicely takes care of edges, it does squat for surfaces. SSAA is much better in that respect.
There's no point doing anything for surfaces that don't need any extra treatment. What do you think texture filtering is for? In the case of more complex shaders, you'd want to supersample specific parts of the shader in the code, most likely imo...
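
Roughly what I have in mind, as a CPU-side sketch: only the term that actually aliases (here a made-up procedural checker(), purely for illustration) gets evaluated at a few sub-pixel offsets and averaged, while the cheap low-frequency part is computed once.

```
#include <cmath>

// Hypothetical high-frequency term that aliases badly (a procedural checker).
float checker(float u, float v)
{
    return (int(std::floor(u * 64.0f) + std::floor(v * 64.0f)) & 1) ? 1.0f : 0.0f;
}

// Shade one pixel, supersampling only the aliasing-prone term on a 2x2
// sub-pixel grid while the cheap, low-frequency part is evaluated once.
// du/dv are the texture-coordinate deltas covered by one pixel.
float shadePixel(float u, float v, float du, float dv, float baseColor)
{
    float hi = 0.0f;
    for (int sy = 0; sy < 2; ++sy)
        for (int sx = 0; sx < 2; ++sx)
            hi += checker(u + (sx + 0.5f) * 0.5f * du,
                          v + (sy + 0.5f) * 0.5f * dv);
    hi *= 0.25f;                            // average the four sub-samples

    return baseColor * (0.5f + 0.5f * hi);  // combine with the cheap part
}
```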

But neither is a substitute for more polygons and more, much higher resolution textures.
They also have very little to do with each other: if you have a ton of polygons but no AA, your game will look like shit.

Give me more of the above over high resolutions and framerates any day.
Although I disagree with many of your points, ironically, I agree with the conclusion! ;) I don't think there is much point in increasing resolutions further (unless you want to stare at the screen from only 5 cm away!) for 99% of games. And there's no point in increasing the framerate much above 60-75 either...

However, it is necessary to reduce latency. Increasing the framerate is one way to do that, if the monitor can keep up, but better input and display technologies, along with smarter input code in the game, are more appropriate solutions. Much of the latency nowadays isn't even caused by the framerate, although it is obviously a factor.
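
To put some very rough numbers on it (every figure below is a ballpark assumption, purely for illustration): even at a locked 60 fps, the frame time is only one slice of the input-to-photon chain.

```
#include <cstdio>

// Rough, assumed latency budget for one input-to-photon path at 60 fps.
// Every number here is a ballpark guess purely for illustration.
int main()
{
    const double inputPolling   = 8.0;   // ms, USB polling / OS delivery
    const double simulation     = 16.7;  // ms, input consumed at frame start
    const double renderQueue    = 16.7;  // ms, one frame of GPU buffering
    const double scanoutDisplay = 20.0;  // ms, vsync wait + panel response

    const double total = inputPolling + simulation + renderQueue + scanoutDisplay;
    std::printf("total ~%.0f ms, of which the raw frame time is only %.1f ms\n",
                total, 16.7);
    return 0;
}
```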
 
Seen from that perspective, what would be the best way to benchmark high-performance GPUs? :)
 
Seen from that perspective, what would be the best way to benchmark high-performance GPUs? :)

They should be benchmarked at the settings that are likely to be used in the real world by consumers. If you want to campaign for the use of lower resolutions, that's another matter.

Anyway, I happen to think you are wrong. I'm running a Dell 30-incher, and while the detail it allows is less forgiving, I think the benefits far outweigh the downsides.

I'd also concur that PC games are typically the sort that benefit from really high resolutions.
 
High resolutions are excellent for larger displays, too. For example, I love being able to play all my games at 1920x1200 on my 24" display :)
 
You should put your specs in your sig. Would save on having to make posts like that :LOL: j/k
 
High resolutions are excellent for larger displays, too. For example, I love being able to play all my games at 1920x1200 on my 24" display :)

Ha! My notebook does that resolution, and it's only 17"!


:( is what I really mean though - why the F don't they sell high dpi screens?

I'm glad fucktards would call in and complain that their text is too small - I don't care. It's not my problem.

Why doesn't at least one manufacturer take a chance?


/We now return you to your original thread on-topic ness.
 
Ha! My notebook does that resolution, and it's only 17"!

:( is what I really mean though - why the F don't they sell high dpi screens?

Ha, my old Dell 8500 lappie did 1920x1200 on a 15.4" panel, and I bought that in early 2004 :p

I agree the lack of high DPI desktop displays blows. Problem is, not many manufacturers actually make the LCD panel itself. Plus there is really very little demand for high res displays, most punters are just too stupid. Frankly, I'm amazed that all the high res laptop displays exist.
 
Ha! My notebook does that resolution, and it's only 17"!
Heh. I just don't see a reason to go for that high a DPI. I mean, I suppose it'd be nice to have a super-high resolution LCD display for playing games, where you may want to be able to change the resolution without worrying about blurriness, but I don't see why you'd want to go that high for normal use, what with the existence of anti-aliased fonts and all that.

I'm glad fucktards would call in and complain that their text is too small - I don't care. It's not my problem.
Well, that's not so much the problem of the fucktards as it is the problem of Windows being stupid about display scaling. Decent Linux distributions, for example, automatically scale the text to match the DPI of the display (SUSE does this, I'm not sure what other distributions do as well).
 
I would tend to believe that is more related to the lack of proper illumination, which becomes more perceptible to the human eye when the viewing area increases, rather than to the resolution. Of course, resolution nowadays scales with increasing LCD size, so many people might not realize the difference... I doubt an increase in pixel density would *ever* lower image quality (but it would lower performance, obviously).

Actually, I'd say that it's more related to the eye/mind filling in extra detail at low resolutions with what we think should be there. Like, in Doom 3, I bet having two shadow-casting light sources per actual light source at 640x480 would look a helluva lot like soft shadows (i.e. shadow edges would have a very small gradient, but the low resolution lets us interpolate what we see in those pixels in a manner similar to what we see in real life), despite the fact that at 1920x1080 it'd be glaringly obvious that it's just two shadow volumes really close to each other.
 
High resolutions are excellent for larger displays, too. For example, I love being able to play all my games at 1920x1200 on my 24" display :)
I like playing them on my 32" TV even better, even though that is only 1360x768. And with a viewing distance of 1.5 m, I get free AA as well.

I would prefer a room-sized very high res screen, but as long as we aren't even close to having video cards that could drive something like that, I'd rather have more detail than a higher res.
 
Plus there is really very little demand for high res displays, most punters are just too stupid. Frankly, I'm amazed that all the high res laptop displays exist.
Makes sense to me.

With a portable you're forced to sit fairly close to be able to use the keyboard. And compactness is obviously a key feature. With a stationary display it's better to sit further away to minimize eye strain. A bigger display also makes it easier to view a movie comfortably, which is important to some buyers. 24" desktop and 17" portable likely give about the same viewing area in typical usage.
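
A quick back-of-the-envelope check of that last claim (the viewing distances are assumptions): a 24" 16:10 panel at about 70 cm and a 17" 16:10 panel at about 50 cm subtend nearly the same horizontal angle.

```
#include <cmath>
#include <cstdio>

// Horizontal visual angle of a 16:10 panel, given its diagonal (inches) and
// the viewing distance (cm). The viewing distances used below are assumptions.
double horizontalAngleDeg(double diagonalInches, double distanceCm)
{
    const double pi = 3.14159265358979323846;
    const double widthInches =
        diagonalInches * 16.0 / std::sqrt(16.0 * 16.0 + 10.0 * 10.0);
    const double widthCm = widthInches * 2.54;
    return 2.0 * std::atan(widthCm / (2.0 * distanceCm)) * 180.0 / pi;
}

int main()
{
    std::printf("24\" desktop at 70 cm: %.1f degrees\n", horizontalAngleDeg(24.0, 70.0));
    std::printf("17\" portable at 50 cm: %.1f degrees\n", horizontalAngleDeg(17.0, 50.0));
    return 0;
}
```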
 
Partly related: wasn't there a story a while back about how an extremely high resolution display actually caused nausea or something in the viewers due to its "too high" resolution?
 
The higher the resolution, the flatter and smoother CGI looks. Less lifelike. More plastic. Better shaders don't change that.
Better shaders CAN change that, but reading the rest of your post I think you don't really know what you want, or what you are attempting to propose (if that is the case) -- it looks like, to me at least, that you're just ranting about things you don't fully understand. Beyond shaders, there must be better (= less expensive) shadowing and lighting algorithms.

It's also amusing to note that you're talking about CGI and not games. The two will always be different and separated for their purposes.
 
Better shaders CAN change that, but reading the rest of your post I think you don't really know what you want, or what you are attempting to propose (if that is the case) -- it looks like, to me at least, that you're just ranting about things you don't fully understand. Beyond shaders, there must be better (= less expensive) shadowing and lighting algorithms.
No, shadowing/lighting is only part of it. Besides that, you want lots of geometry (clothing, hair, fluids) and lots of high-res textures (normal, bump, specular maps, etc., on top of the regular color and detail textures). Both are a problem.

Geometry is a problem for the CPU (setup engine) as well as for the GPU (transforming, skinning). And textures require LOTS of very fast VRAM (the more the better, think many gigabytes), super high bandwidth and better filtering.
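
To get a feel for the texture side of that (the counts and the format below are just assumptions for illustration): a single uncompressed 4096x4096 RGBA8 texture with a full mip chain is already around 85 MB, and every material wants several such layers.

```
#include <cstdio>

// Rough VRAM estimate for one uncompressed 2D texture with a full mip chain.
// The ~4/3 factor accounts for all mip levels below the base.
double textureBytes(int width, int height, int bytesPerTexel)
{
    return double(width) * height * bytesPerTexel * 4.0 / 3.0;
}

int main()
{
    const int layersPerMaterial = 4;   // assumed: color, normal, specular, detail
    const int materials = 200;         // assumed material count for a scene

    double perTexture = textureBytes(4096, 4096, 4);   // RGBA8, no compression
    double total = perTexture * layersPerMaterial * materials;

    std::printf("one 4096^2 RGBA8 texture: ~%.0f MB\n", perTexture / (1024 * 1024));
    std::printf("scene estimate: ~%.1f GB (before compression)\n",
                total / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```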

It's also amusing to note that you're talking about CGI and not games. The two will always be different and separated for their purposes.
I am talking about the computer graphics part of games, especially their looks. There is more to games than graphics, and more to graphics than only resolution, detail and lighting. ;)
 
To explain it a bit better: the higher the res, the more content you need to fill the available pixels.
 
Geometry is a problem for the CPU (setup engine) as well as for the GPU (transforming, skinning). And textures require LOTS of very fast VRAM (the more the better, think many gigabytes), super high bandwidth and better filtering.
The CPU isn't involved; triangle setup is done on the GPU. About the only thing the CPU does is decide what to render, which means filling a command buffer, and even that can be accelerated by custom processors like SPUs.
 
The CPU isn't involved; triangle setup is done on the GPU. About the only thing the CPU does is decide what to render, which means filling a command buffer, and even that can be accelerated by custom processors like SPUs.
I assume you have lots of objects, consisting of lots of polygons, and you want to do something useful with those, instead of simply pushing them all to the GPU as they are. :)

Like seeing if they are involved, whether they interact/intersect, running AI routines on them, changing their states, animating them, and taking all those results to do stuff like calculating their clothes and hair. And skinning the difficult ones, all before you send them to the GPU for placing, transforming, etc.

And you would probably want to run through your tree of objects to select all the textures and shaders for the materials you're going to use, determine what you need for the lighting and shadowing, collect/build those, and push them to the GPU as well.

Right?
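
Something like this is the per-frame CPU work I mean, before anything reaches the GPU. Purely schematic; every type and function name below is made up for illustration, not a real engine API.

```
#include <vector>

// Purely schematic per-frame CPU loop; all types and functions are placeholders.
struct Object {
    bool visible = true;     // would come from culling / interaction tests
    bool hardToSkin = false; // e.g. cloth or hair that needs CPU skinning
};

struct DrawCall { const Object* obj = nullptr; /* textures, shaders, lights */ };

void runAI(Object&) {}                  // state changes, decisions
void animateAndSimulate(Object&) {}     // animation, cloth, hair, fluids
void skinOnCpu(Object&) {}              // only for the difficult cases
DrawCall buildDrawCall(const Object& o) { return DrawCall{ &o }; }

// One CPU frame: decide what is involved, update it, and build the command
// list that is finally handed to the GPU for placing, transforming, etc.
std::vector<DrawCall> cpuFrame(std::vector<Object>& scene)
{
    std::vector<DrawCall> commands;
    for (Object& o : scene) {
        if (!o.visible) continue;       // culling / interaction check
        runAI(o);
        animateAndSimulate(o);
        if (o.hardToSkin) skinOnCpu(o);
        commands.push_back(buildDrawCall(o)); // collect materials, lighting
    }
    return commands;
}

int main()
{
    std::vector<Object> scene(100);
    return cpuFrame(scene).empty() ? 1 : 0;
}
```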
 