*spin-off* How should we classify resolution?

What part of the pipeline dictates the resolution number?

It seems like this generation we have seen many tricks, and post processing seems to be even more complex with access to internal buffers. Are we moving away from that idea of a fixed resolution?

Killzone multiplayer had its temporal (?) upscale, and we have other lower quality assets being overlaid and used and abused, compressed and expanded, interpolated and reconstructed with ever cleverer shader passes, detecting where more detail is required and applying resources more smartly.

Where is the line currently drawn, and could we see that change going forward, especially with more novel rendering techniques?
 
We've already had this discussion and never came to consensus AFAIK. Back then, I think I used the term 'opaque geometry buffer' as the standard reference point (what pixel counters count), but ultimately I regarded the metric as pointless and one that should be abandoned. You'll get a lot of different viewpoints, most of which will be justifiable and valid.
 
Just to make a note here, this is one of the major differences between real time and offline rendering - in CGI, usually every layer and pass is rendered at the same resolution, which is also the size of the final image. Except maybe IMAX, where a lot of the material is upscaled to some degree and it's very rare to see material that was rendered at full resolution.
 
We've already had this discussion and never came to consensus AFAIK. Back then, I think I used the term 'opaque geometry buffer' as the standard reference point (what pixel counters count), but ultimately I regarded the metric as pointless and one that should be abandoned. You'll get a lot of different viewpoints, most of which will be justifiable and valid.
Yeah, I remember that discussion (I think it was around the days when Alan Wake came out, or maybe something more recent); it was getting a bit ridiculous to mention "opaque geometry buffer" every time someone wanted to refer to what we otherwise know as the native rendering resolution. It's like arguing that 1280*1080 is still 1080p because it has 1080 vertical lines and is progressively scanned, and should be referred to as such just like 1920*1080.

It's a pointless discussion over semantics.
 
nightshade... There are no semantics involved (not talking about the game you mentioned, just the resolutions).

A 1080p resolution is a resolution with 1080 lines in progressive mode. In this mode 1920*1080 is 1080p, 1280*1080 is 1080p. Even 1*1080 is 1080p!

The problem is that people confuse 1080p in general with the resolution used by TVs, which has a 1.77 aspect ratio and 1080 lines in progressive mode and is normally called 1080p. And that is 1920*1080, also known as Full HD.

So if 1*1080 in progressive mode is 1080p, it sure is not Full HD. It is also not the 1080p that TVs and games talk about, because that 1080p uses a 1.77 aspect ratio, which means a resolution of 1920*1080, aka Full HD!
 
Sounds a lot like semantics to me.

1080p is colloquially used to refer to Full HD, which in turn refers to 1920*1080. Arguing that those norms don't mean a thing because 1080p (or any other label such as 720p, 900p or 480p) is to be taken in the very literal sense, when it was never meant to be used that way, is arguing over semantics imo.
 
I believe I get your point! But I guess the problem is calling a specific resolution '1080p' instead of Full HD. It's as if we usually called German Shepherds 'DOG', and were shocked to see all other breeds called dogs.
Question is: they are dogs! And we should not have chosen that designation to refer to a specific breed of dog in the first place.
 
Yes, but it is the way it is now; you can't really change it, and bringing in more definitions is only going to cause more confusion because not everyone will be using them in the same context.
 
This is an industry that so confused people that we had to redefine the megabyte as 1,000,000 (10^6) bytes instead of its original computing usage of 1,048,576 (2^20) bytes, and name the latter the mebibyte. As someone who had to field calls for years about 'where's my missing gigabytes?', I think hoping for a consistent definition of anything marketing gets their hands on is like asking for an honest used car salesman.
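The 'missing gigabytes' calls come straight from that prefix mismatch. A quick illustrative calculation in C++ (the "500 GB" drive is just an example figure, not tied to anything in the thread):

```cpp
#include <cstdio>

int main() {
    // A drive marketed as "500 GB" uses decimal (SI) prefixes:
    const double marketed_bytes = 500.0 * 1e9;        // 500,000,000,000 bytes
    // Many operating systems report capacity in binary units (1 GiB = 2^30 bytes):
    const double reported_gib = marketed_bytes / (1024.0 * 1024.0 * 1024.0);
    std::printf("500 GB (decimal) = %.2f GiB (binary)\n", reported_gib);  // ~465.66
    return 0;
}
```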
 
So if we ignore the different data types for the final output image, atm we have resolution as well as aspect ratio as variables:

The Order uses a sub-1920x1080 resolution, but it's 1:1 pixel mapped with the inclusion of black bars.

If we are talking rendering, it's not Full HD, but if we are talking about what our TV receives, then it is Full HD.

Are folks trying to gauge the quantity of pixels pushed to compare or marvel at GPU power, or simply trying to know if they can expect a 1:1 image on their screen?
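For a sense of the pixel counts involved, here is a tiny worked example in C++ (1920x800 is the figure widely reported for The Order: 1886; the exact numbers are only illustrative):

```cpp
#include <cstdio>

int main() {
    // A letterboxed frame that is 1:1 pixel mapped inside a 1920x1080 output.
    const long long rendered = 1920LL * 800;    // 1,536,000 pixels actually drawn
    const long long full_hd  = 1920LL * 1080;   // 2,073,600 pixels sent to the TV
    std::printf("rendered / output = %.1f%%\n", 100.0 * rendered / full_hd);  // ~74.1%
    return 0;
}
```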
 
I think the main confusion nowadays is that the general population has only this console generation realized that output resolution and the number of pixels actually drawn are two entirely different things. It's not like this is new. The Genesis and SNES used different resolutions, but both output the industry standard (at the time) 480i.

Things are further complicated now because of how games are rendered. They aren't always rendered to a single, full frame buffer anymore. Often they are composed from multiple components rendered at different resolutions. So even if your frame is 1920x1080, it doesn't really matter if you are composing it out of low-res components.

I'd also like to point out that many TVs don't actually have resolutions that match the 640/720/1080 numbers on the box. I've seen a few 1080p sets that have an odd resolution, and even more 720p sets that have 760, 768, or even 1050 vertical resolutions. So that output image isn't 1:1 mapped to your output device's resolution (or even aspect ratio!). Even more confusing, most sets have overscan on by default, so your native resolution image is often scaled by your output device anyway.

Resolution is quickly becoming a term much like rendering pipelines were just a few years ago. Back then a pipeline was a well-understood term where all pipelines were more or less equal with few variables, and now it's much more complicated.
 
Some passes (blurs for example) output low frequency data. You do not need to sample these passes at high frequency (one sample per pixel) to reach high quality. Bilateral upscale works wonders for these cases, resulting in quality that is almost impossible to distinguish from a higher sampling rate.
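A minimal CPU-side sketch of the kind of depth-aware (bilateral) upsample being described, assuming simple grayscale float buffers and a Gaussian depth weight; the Buffer struct, depth_sigma parameter, and buffer layouts are illustrative, not any particular engine's API:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Bilateral upsample: a half-resolution pass (e.g. a blur) is upsampled to
// full resolution, weighting the four nearest low-res samples by bilinear
// distance *and* by how closely their depth matches the full-resolution
// depth, so low-frequency data does not bleed across geometry edges.
struct Buffer {
    int width = 0, height = 0;
    std::vector<float> data;   // one float per pixel (grayscale, for brevity)
    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return data[y * width + x];
    }
};

Buffer bilateral_upsample(const Buffer& low_color, const Buffer& low_depth,
                          const Buffer& full_depth, float depth_sigma = 0.1f) {
    Buffer out;
    out.width  = full_depth.width;
    out.height = full_depth.height;
    out.data.resize(static_cast<size_t>(out.width) * out.height);

    for (int y = 0; y < out.height; ++y) {
        for (int x = 0; x < out.width; ++x) {
            // Position of this full-res pixel in low-res space.
            float lx = (x + 0.5f) * low_color.width  / out.width  - 0.5f;
            float ly = (y + 0.5f) * low_color.height / out.height - 0.5f;
            int x0 = static_cast<int>(std::floor(lx));
            int y0 = static_cast<int>(std::floor(ly));
            float fx = lx - x0, fy = ly - y0;

            // Bilinear weights for the four nearest low-res samples.
            const float bilin[4]   = { (1 - fx) * (1 - fy), fx * (1 - fy),
                                       (1 - fx) * fy,       fx * fy };
            const int   offs[4][2] = { {0, 0}, {1, 0}, {0, 1}, {1, 1} };

            float ref_depth = full_depth.at(x, y);
            float sum = 0.0f, wsum = 0.0f;
            for (int i = 0; i < 4; ++i) {
                int sx = x0 + offs[i][0], sy = y0 + offs[i][1];
                // Gaussian falloff on the depth difference: samples belonging
                // to different geometry get almost no weight.
                float dd = low_depth.at(sx, sy) - ref_depth;
                float w  = bilin[i] *
                           std::exp(-(dd * dd) / (2.0f * depth_sigma * depth_sigma));
                sum  += w * low_color.at(sx, sy);
                wsum += w;
            }
            // Fall back to the nearest low-res sample if every weight was rejected.
            out.data[static_cast<size_t>(y) * out.width + x] =
                wsum > 1e-5f ? sum / wsum : low_color.at(x0, y0);
        }
    }
    return out;
}
```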
 
On PS4 and XB1: is it still possible to use a low resolution output device (aka a super old TV)? If yes, how do the consoles achieve the low res signal when the original output of the game is Full HD? Does this give downsampling AA in such cases?
 
Some passes (blurs for example) output low frequency data. You do not need to sample these passes at high frequency (one sample per pixel) to reach high quality. Bilateral upscale works wonders for these cases, resulting in quality that is almost impossible to distinguish from a higher sampling rate.

Yep, this is what Guerrilla are doing for their VL (volumetric lighting).
 
Even in Full HD games, not everything we see on screen is calculated at that resolution. Shadows, effects, etc. can be calculated at less. So I think we should just judge by the output frame buffer resolution: if it outputs natively at 1920x1080, then it's Full HD. Whether all pixels are calculated natively, or reconstructed using temporal reprojection with motion vectors and color information between three 960x1080 images, although a quite different method of calculation and a not much desired processing-saving trick, seems a bit irrelevant when talking about the output resolution. This is not a cheap interpolation/re-scaling of a lesser resolution using a single image/frame buffer, and it even took several months to be detected. I would sooner argue about whether this is effectively a progressive mode (in this case applied to columns rather than lines) than about whether it is Full HD. (A rough sketch of this kind of reconstruction follows below.)

Although I admit all of this is rather open to discussion!
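As a very rough illustration of the column-interleaved temporal reconstruction described above (a sketch of the general idea only, not the actual Killzone technique; the Image struct, motion-vector buffers, and the fallback blend are all illustrative assumptions):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<float> pixels;   // grayscale, for brevity
    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return pixels[y * width + x];
    }
};

// half_frame:   960x1080 buffer holding the columns rendered this frame
// prev_full:    1920x1080 reconstructed result from the previous frame
// motion_x/_y:  1920x1080 screen-space motion vectors, in pixels
// even_columns: true if this frame rendered columns 0,2,4,..., else 1,3,5,...
Image reconstruct_columns(const Image& half_frame, const Image& prev_full,
                          const Image& motion_x, const Image& motion_y,
                          bool even_columns) {
    Image out;
    out.width  = half_frame.width * 2;
    out.height = half_frame.height;
    out.pixels.resize(static_cast<size_t>(out.width) * out.height);

    for (int y = 0; y < out.height; ++y) {
        for (int x = 0; x < out.width; ++x) {
            const bool rendered_now = ((x % 2 == 0) == even_columns);
            float value;
            if (rendered_now) {
                // Freshly rendered column: copy it straight across.
                value = half_frame.at(x / 2, y);
            } else {
                // Missing column: fetch last frame's pixel at the position
                // this surface occupied then (current position minus motion).
                const int px = static_cast<int>(std::lround(x - motion_x.at(x, y)));
                const int py = static_cast<int>(std::lround(y - motion_y.at(x, y)));
                const bool on_screen = px >= 0 && px < out.width &&
                                       py >= 0 && py < out.height;
                value = on_screen
                    ? prev_full.at(px, py)
                    // Fallback: blend the two neighbouring rendered columns.
                    : 0.5f * (half_frame.at((x - 1) / 2, y) +
                              half_frame.at((x + 1) / 2, y));
            }
            out.pixels[static_cast<size_t>(y) * out.width + x] = value;
        }
    }
    return out;
}
```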
 
This is an industry that so confused people that we had to redefine the megabyte as 1,000,000 (10^6) bytes instead of its original computing usage of 1,048,576 (2^20) bytes, and name the latter the mebibyte. As someone who had to field calls for years about 'where's my missing gigabytes?', I think hoping for a consistent definition of anything marketing gets their hands on is like asking for an honest used car salesman.
Look up the words mega and mebi; the "original definition" of mega = 2^20 is wrong.
 
'Mega' adopted two definitions, the second binary meaning with the advent of computers. It was perfectly understood in context and there was no need for a new prefix, until HDD manufacturers started using the decimal definition (at least, that's the first time I saw it confused in the computing industry). It makes scientific sense to have an official set of binary prefixes, but this wasn't necessary, and just as we don't invent whole new words to replace the homophones/homonyms in our general languages, there wasn't any need for a clarification of the prefixes.
 