The effects of scaling on output quality

PeterT

I originally made this image to settle the dispute in the MotoGP thread, but that was closed (and even some messages deleted I think?) before I could post it.

However, the subject of scaling and its effects on the final output image has come up quite often recently in many console discussions, so I decided to make this thread where we can examine the question in detail without the tensions present in game/system specific threads.

So, here is the image:
edge_scaling.gif

As you can see it only shows the effects of downscaling, and only on edges. The effect of upscaling, and that of both types of scaling on textures should of course also be explored, and perhaps bicubic scaling as well - though I don't believe that any console does this at present.

My interpretation: With scaling methods as good as bilinear or better, the final image quality is almost always enhanced once the rendered resolution exceeds ~1.5x the native resolution in either direction. Below that - but above 1x, of course - it depends on the specific instance.
 
That last bit of your interpretation doesn't follow from your examples, as you didn't show anything between 1x and 1.5x here, let alone any specific instance to support your suggestion that there could be issues there. If you give it a try, I assure you you'll find that even a small bit doesn't hurt.

But anyway, here are some real-world examples; all settings aside from rendering resolution are identical in both shots, and AA and AF were intentionally disabled to highlight the effects inherent to rendering at higher than the native display resolution:

Straight output of native 720p rendering.

Rendered at 1080p scaled down to 720p through bicubic sampling.
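For anyone curious what "bicubic sampling" actually computes, here's a minimal 1D sketch. I'm assuming the common Catmull-Rom cubic kernel; the tool behind these shots may well use a different cubic variant:

```python
# Toy 1D bicubic resampler using the Catmull-Rom kernel (an assumption -
# "bicubic" covers a family of cubics). A real 2D resize applies this
# separably along rows and columns.

def catmull_rom(t):
    """Catmull-Rom cubic kernel, support [-2, 2], peak 1.0 at t = 0."""
    t = abs(t)
    if t < 1:
        return 1.5 * t**3 - 2.5 * t**2 + 1.0
    if t < 2:
        return -0.5 * t**3 + 2.5 * t**2 - 4.0 * t + 2.0
    return 0.0

def resample_1d(src, out_len):
    """Resample a row of floats to out_len via 4-tap cubic weighting."""
    n = len(src)
    scale = n / out_len
    out = []
    for i in range(out_len):
        x = (i + 0.5) * scale - 0.5        # output centre in source space
        base = int(x)
        acc = wsum = 0.0
        for k in range(base - 1, base + 3):  # the 4 nearest source samples
            w = catmull_rom(x - k)
            acc += src[min(max(k, 0), n - 1)] * w  # clamp at the borders
            wsum += w
        out.append(acc / wsum)
    return out

row_1080 = [float(i % 2) for i in range(12)]   # stand-in high-res row
row_720 = resample_1d(row_1080, 8)             # 3:2 reduction, like 1080->720
```

Note that Catmull-Rom weights can go negative, so a bicubic downscale can slightly overshoot around hard edges - part of why it looks "sharper" than bilinear.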
 
kyleb said:
That last bit of your interpretation doesn't follow from your examples, as you didn't show anything between 1x and 1.5x here, let alone any specific instance to support your suggestion that there could be issues there. If you give it a try, I assure you you'll find that even a small bit doesn't hurt.

But anyway, here are some real-world examples; all settings aside from rendering resolution are identical in both shots, and AA and AF were intentionally disabled to highlight the effects inherent to rendering at higher than the native display resolution:

Straight output of native 720p rendering.

Rendered at 1080p scaled down to 720p through bicubic sampling.

Your second link is still pointing to native res - fyi
 
kyleb said:
That last bit of your interpretation doesn't follow from your examples, as you didn't show anything between 1x and 1.5x here, let alone any specific instance to support your suggestion that there could be issues there. If you give it a try, I assure you you'll find that even a small bit doesn't hurt.

But anyway, here are some real-world examples; all settings aside from rendering resolution are identical in both shots, and AA and AF were intentionally disabled to highlight the effects inherent to rendering at higher than the native display resolution:

Straight output of native 720p rendering.

Rendered at 1080p scaled down to 720p through bicubic sampling.

The basic thing is that if your source render is at a higher resolution, then your downsampling can take a hint from the extra information in deciding which colors should be used to imply the right pixel information at a lower resolution. This basically allows your downsampling to perform a superior form of anti-aliasing. Right?
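A quick toy sketch of that idea (my own construction, not anything console-specific): point-sample a hard edge at 4x the target grid, then box-average each 4x4 block. The averaged result carries the coverage-weighted grey levels SSAA would produce, while the native point-sampled render stays strictly black-and-white:

```python
# Downsampling a supersampled render acts as anti-aliasing: the block
# average picks colours proportional to how much of each pixel the edge
# covers. Pure Python on a synthetic binary "render".

def render_edge(size, slope=0.5):
    """Point-sample a diagonal half-plane: a pixel is 1.0 if its centre
    lies below the line y = slope * x, else 0.0 (no AA)."""
    return [[1.0 if (y + 0.5) > slope * (x + 0.5) else 0.0
             for x in range(size)]
            for y in range(size)]

def box_downscale(img, factor):
    """Average each factor x factor block into one output pixel."""
    n = len(img) // factor
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor))
             / (factor * factor)
             for x in range(n)]
            for y in range(n)]

native = render_edge(8)                      # aliased: only 0.0 or 1.0
smooth = box_downscale(render_edge(32), 4)   # 4x supersampled, then averaged
```

Along the edge, `smooth` contains intermediate grey levels (e.g. 0.25, 0.5, 0.75) exactly where `native` jumps straight from 0 to 1.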

At the same time, upscaling doesn't need to lose information, but the amount of information it can gain is limited to the predictability of the lines and patterns in your image. With polygons this can actually be done quite well (you can assume you have to improve lines mostly) but for non-predictable stuff (some textures, photos) it is much harder.

(Or so I gather from my limited experience with Paintshop Pro)
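For illustration, a minimal bilinear upscaler (my own sketch; real scalers differ in kernel and edge handling). Every output value is a weighted average of the four nearest source pixels, so the upscale can only produce smooth ramps between existing samples, never genuinely new detail:

```python
# Bilinear upscaling: interpolation between existing samples only, so no
# output value can fall outside the range of its four source neighbours.

def bilinear_upscale(img, factor):
    """Upscale a 2D list of floats by an integer factor."""
    h, w = len(img), len(img[0])
    out = []
    for oy in range(h * factor):
        # Map the output pixel centre back into source coordinates.
        sy = min(max((oy + 0.5) / factor - 0.5, 0.0), h - 1)
        y0 = int(sy); y1 = min(y0 + 1, h - 1); fy = sy - y0
        row = []
        for ox in range(w * factor):
            sx = min(max((ox + 0.5) / factor - 0.5, 0.0), w - 1)
            x0 = int(sx); x1 = min(x0 + 1, w - 1); fx = sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

small = [[0.0, 1.0],
         [1.0, 0.0]]
big = bilinear_upscale(small, 4)  # 2x2 -> 8x8: ramps, not new structure
```

Every value in `big` stays within [0, 1]; the "extra" pixels are just blends, which is why upscaled polygon edges look soft rather than re-rendered.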
 
Arwin said:
The basic thing is that if your source render is at a higher resolution, then your downsampling can take a hint from the extra information in deciding which colors should be used to imply the right pixel information at a lower resolution. This basically allows your downsampling to perform a superior form of anti-aliasing. Right?
Yep, the downsampling is pretty much the same as the ordered grid super-sampling anti-aliasing methods available with various PC setups.

Arwin said:
At the same time, upscaling doesn't need to lose information, but the amount of information it can gain is limited to the predictability of the lines and patterns in your image. With polygons this can actually be done quite well (you can assume you have to improve lines mostly) but for non-predictable stuff (some textures, photos) it is much harder.

(Or so I gather from my limited experience with Paintshop Pro)
Basically. It is harder to extrapolate pixels than it is to interpolate them; but quality scaling, like that available in the hardware in the 360, can produce respectable results with both.
 
kyleb said:
That last bit of your interpretation doesn't follow from your examples, as you didn't show anything between 1x and 1.5x here, let alone any specific instance to support your suggestion that there could be issues there. If you give it a try, I assure you you'll find that even a small bit doesn't hurt.
Indeed, it doesn't follow from my examples, but I didn't include all the examples I made in the initial post, as it would have become even bigger that way. Anyway, I'm not at all sure that bilinear (not bicubic or Lanczos or something) filtering on a "real" textured 3D scene rendered at 110% or so of the native size would look better than one rendered at exactly 100%. I guess the only way to tell for sure would be to try it with some games, but I'm too lazy to do so now.

[edit]Here is my try at formulating a somewhat mathematical argument for my proposition:
When you render an image, you discretize the 3D scene to a 2D grid. Obviously, with larger grid sizes, you will get a more detailed ("better") picture. However, the process of resizing introduces some loss of that detail, as you discretize again at a coarser level. The extent of this loss of course depends on the actual resizing mechanism used - point sampling clearly loses more than bilinear filtering, which in turn loses more than bicubic. So the point at which rendering bigger and then rescaling with one of these algorithms produces a better picture than rendering directly at the native resolution has to be larger than 100% in all cases.
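A quick numeric version of this argument in 1D (my own construction; I'm taking a pixel's "ideal" value to be the area average of the continuous scene over that pixel, approximated by dense sampling):

```python
# Compare two ways of producing an n-pixel image of a continuous signal:
# (a) point-sample directly at n, (b) point-sample at k*n and box-average
# down to n. Error is measured against the "ideal" area-average pixels.

def f(x):
    """A continuous 1D 'scene': a hard step edge at x = 0.33."""
    return 1.0 if x > 0.33 else 0.0

def ideal(n, samples=1000):
    """Per-pixel area averages, approximated with dense sampling."""
    out = []
    for i in range(n):
        vals = [f((i + (j + 0.5) / samples) / n) for j in range(samples)]
        out.append(sum(vals) / samples)
    return out

def point_sample(n):
    """Render directly onto an n-pixel grid (one sample per centre)."""
    return [f((i + 0.5) / n) for i in range(n)]

def supersample(n, k):
    """Render at k*n, then box-downscale to n."""
    hi = point_sample(n * k)
    return [sum(hi[i * k:(i + 1) * k]) / k for i in range(n)]

n = 16
target = ideal(n)
err_direct = sum(abs(a - b) for a, b in zip(point_sample(n), target))
err_super = sum(abs(a - b) for a, b in zip(supersample(n, 4), target))
```

Here rendering at 4x and box-averaging lands much closer to the ideal pixel values than rendering directly on the native grid; shrinking the supersampling factor toward 1x (or using a lossier resize filter) closes that gap, which is the intuition behind the threshold being above 100%.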

kyleb said:
Yep, the downsampling is pretty much the same as the ordered grid super-sampling anti-aliasing methods available with various PC setups.
Only for 2^n times the original image size.
 
Arwin said:
At the same time, upscaling doesn't need to lose information, but the amount of information it can gain is limited to the predictability of the lines and patterns in your image. With polygons this can actually be done quite well (you can assume you have to improve lines mostly) but for non-predictable stuff (some textures, photos) it is much harder.
Upscaling tends to be primitive interpolation. There are some schemes that use weighted interpolations, and others that mix it up with fractal patterns, but you can't create information that wasn't there in any complex image (except made-up information!). Even a basic triangle in a bitmap won't be upscaled to the same quality as an antialiased polygon drawn by any upscaler I've seen.

Here's a good recap of Photoshop scalers, and those have the luxury of not being realtime.
 