You mean 'does not'? Yeah, it doesn't need to be a linear interpolation. I was using LERP as a shorthand for interpolation, which was sloppy of me.
What's wrong with my definition? Native res == rasterised pixels. Or, to be more exact, to accommodate non-rasterising renderers: the resolution of the sampling array used to evaluate the virtual 3D space, where each value in the array involves computing the geometry at that spatial location. In the case of current triangle rasterisers, that's literally how many pixels are rendered from the geometry.
I don't know, Shifty, what about MSAA then?

AA's never been an issue; you can use multiple samples to produce one pixel, e.g. 720p with 2x SSAA is understood as 1280x720 pixels, each with two samples. Nothing in my above definition describing it as a sampling array precludes multiple samples from each sample space, so I think it still works. It's the resolution of the sampling array (buffer) that the GPU rasterises to as discrete values. If you were to render 1280x720 4xMSAA and then draw a 1280x720 image, it'd be 720p native, 4xMSAA. If you did the same but rendered each sample into a separate pixel, it'd be 2560x1440 native, 0xMSAA. The actual image quality would be lower because you aren't sampling the shaders at that resolution, but we know there are a gazillion different resolutions in play across different image components, so we just have to accept one for the native resolution, and the only sensible one is geometry, which is also what we look for when pixel counting to determine native rendering res.
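To make that concrete, here's a minimal Python sketch (the buffer shapes are purely illustrative, not any real GPU API) of the difference between resolving a 720p 4xMSAA sampling array to a 720p image and writing each sample out as its own pixel:

```python
import numpy as np

# A 1280x720 sampling array with 4 samples per pixel (4xMSAA).
# Geometry is evaluated at every sample, but the array still has
# 1280x720 entries, so this is 720p native.
width, height, samples = 1280, 720, 4
msaa = np.zeros((height, width, samples, 3))  # RGB per sample

# Standard resolve: average the samples within each pixel.
# Output: 1280x720 -> 720p native, 4xMSAA.
resolved = msaa.mean(axis=2)
print(resolved.shape)  # (720, 1280, 3)

# Alternative: treat each sample as its own pixel (assuming a 2x2
# sample layout). Same sample count, but the sampling array the image
# is built from is now 2560x1440 -> 1440p native, 0xMSAA, even though
# the shaders were still only evaluated per 720p pixel.
as_pixels = (msaa.reshape(height, width, 2, 2, 3)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(height * 2, width * 2, 3))
print(as_pixels.shape)  # (1440, 2560, 3)
```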
With the advent of temporal AAs giving a supersampled look to the game, I think we may soon integrate a "perceptual resolution" concept. For instance, when I compare Metro at 1080p + FXAA with Far Cry 4 at 1080p + HRAA with temporal AA, Far Cry 4 seems to run at a much higher perceptual resolution... maybe twice as high, actually.

That shouldn't be called the "native res". For discussing IQ we can talk about native res and then apparent or equivalent res.
Shifty, there's nothing "wrong" with that definition, but the average gamer doesn't know what "rasterised" means, so IMO we have to opt for another definition, one that is closer to the average gamer's understanding/vocabulary of video games... if we want to find a definition for the gamers.

Ordinary gamers probably have no right to an easy, understandable term; they don't use the data they have available sensibly as it is. If a gamer isn't technical enough to understand the concepts involved in ascertaining a rendering resolution, they shouldn't be using the numbers in discussion. It's for the people who write reviews and analyses to understand the metrics (and definitions) so they can supply Joe Gamer with a simple yet accurate number. That would perhaps tie in to Globalisateur's mention of a "perceptual resolution", which would need an even more convoluted discussion to define!
"Native resolution" is going to be a harder thing to specify when more complex rendering algorithms get more common.

I've long been an advocate of abandoning pixel counting altogether, as the concept of a 'rendering resolution' is, IMO, a fallacy. That said, I'll take a stab at integrating your examples into my definition.
A) The graphics engine calculates a per-pixel error metric and uses reverse reprojection for pixels that can be reprojected without any noticeable error. These pixels are reused from the last frame. The other pixels are rasterized to the 1080p render target (always including all the polygon edge pixels, to get perfect edge quality). The error metric guarantees that the reprojected areas look close to perfect.

1080p with a cunning optimisation for framerate. If none of the pixels could be reused, the game would render all unique pixels.
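A minimal sketch of that control flow (Python; the error metric, reprojected colours, and freshly shaded colours are stand-in inputs, not any real engine API):

```python
import numpy as np

def pipeline_a_frame(reprojected, error, shaded, threshold=0.01):
    """Hypothetical pipeline A: reuse reverse-reprojected pixels whose
    predicted error is under the threshold, take freshly rasterized
    pixels for the rest. Polygon edge pixels can simply be assigned
    error = inf so they are always re-rasterized.

    reprojected: (1080, 1920, 3) colours reprojected from last frame
    error:       (1080, 1920)    per-pixel reprojection error metric
    shaded:      (1080, 1920, 3) colours rasterized this frame
    """
    reuse = error < threshold
    frame = np.where(reuse[..., None], reprojected, shaded)
    # Whatever fraction gets reused, the sampling array is still
    # 1920x1080 -- hence "1080p with a cunning optimisation".
    return frame, float((~reuse).mean())  # frame + fraction re-rendered
```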
B) A pipeline rasterizes an intermediate UV buffer at lower than native resolution (the UVs point to a big virtual texture that contains all the texture data). It also stores triangle edge data in this buffer to allow almost perfect reconstruction of the triangle edges. The UV coordinates of pixels that are not on an edge are interpolated from their neighbours (producing an almost perfect result, since UV gradients are linear across a triangle's surface). This pipeline also does analytical antialiasing (mathematical triangle edge pixel coverage) to get perfect (infinite) antialiasing quality. The resulting image looks better than 8xMSAA 1080p in most cases.

Sounds like 1080p with a non-standard rasteriser. The moment you break from ordered-grid spatial sampling, the notion of a rendering resolution starts to fall apart. You're left with 'what resolution backbuffer does the rendering engine spit out' as its native resolution.
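The reconstruction step might look something like this minimal sketch, assuming a half-res UV buffer plus a native-res edge mask with explicitly stored edge UVs (all names hypothetical):

```python
import numpy as np

def reconstruct_uvs(uv_half, edge_uv, is_edge):
    """Hypothetical pipeline B reconstruction: the UV buffer was
    rasterized at half of native res; non-edge native pixels get UVs
    interpolated from low-res neighbours, which is near-exact because
    UVs vary linearly across a triangle's interior. Edge pixels use
    the explicitly stored edge data instead."""
    h, w, _ = uv_half.shape
    uv = np.zeros((h * 2, w * 2, 2))
    for y in range(h * 2):
        for x in range(w * 2):
            if is_edge[y, x]:
                uv[y, x] = edge_uv[y, x]  # stored triangle edge data
                continue
            # Bilinear interpolation between the four nearest
            # low-res samples.
            sy, sx = min(y / 2, h - 1), min(x / 2, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            uv[y, x] = (uv_half[y0, x0] * (1 - fy) * (1 - fx)
                        + uv_half[y0, x1] * (1 - fy) * fx
                        + uv_half[y1, x0] * fy * (1 - fx)
                        + uv_half[y1, x1] * fy * fx)
    return uv
```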
C) A pipeline rasterizes the g-buffers at full resolution but does contrast analysis on 8x8 tiles. The tile sampling pattern is based on the contrast analysis results. For areas that have smooth gradients (for example, areas away from the camera focus, or heavily motion-blurred ones), the pipeline uses fewer samples (even fewer than one sample per pixel where possible). The image quality difference is very hard to see compared to processing every pixel.

1080p with adaptive subsampling, akin to dynamic resolution in Joe Gamer speak. But as per A, if the engine will render 100% of the pixels when they all need rendering, then it's 1080p, just with a performance optimisation.
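A minimal sketch of the tile heuristic, assuming a full-res luma buffer and made-up contrast thresholds:

```python
import numpy as np

def tile_sample_counts(luma, tile=8, max_samples=4):
    """Hypothetical pipeline C heuristic: measure contrast per 8x8
    tile and assign sample densities -- smooth tiles (out of focus,
    heavily motion-blurred) get fewer samples, possibly fewer than
    one per pixel; high-contrast tiles get the full count."""
    h, w = luma.shape
    counts = np.zeros((h // tile, w // tile))
    for ty in range(h // tile):
        for tx in range(w // tile):
            block = luma[ty * tile:(ty + 1) * tile,
                         tx * tile:(tx + 1) * tile]
            contrast = block.max() - block.min()
            if contrast < 0.02:
                counts[ty, tx] = 0.25         # one sample per 2x2 pixels
            elif contrast < 0.10:
                counts[ty, tx] = 1.0          # one sample per pixel
            else:
                counts[ty, tx] = max_samples  # full supersampling
    return counts
```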
A and C can be modified to be constant time (locked 30 fps or locked 60 fps) with adaptive quality. Pipeline C could recursively increase tile quality (based on contrast-based importance) until the time runs out. Pipeline A could adjust the error metric threshold based on how many pixels need to be refreshed. Adaptive rendering with pipeline A is actually quite nice, since it spends all the excess GPU time improving quality (never idling the GPU). This is nice when the amount of work fluctuates, since spending more time on frame X improves the reprojected data quality of frames X+1, X+2, etc. (making the future frames cheaper).

Now, if A and C were to adapt not to the work needed to be an accurate representation of the spatial data but to a framerate target, and started producing perceivably lossy image data, they couldn't claim 1080p as a native res.
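The constant-time variant is essentially a priority queue drained against a frame budget; a minimal sketch, with refine_tile and importance as hypothetical callbacks:

```python
import heapq
import time

def render_adaptive(tiles, refine_tile, importance, budget_ms=16.6):
    """Hypothetical constant-time pipeline C: after a cheap base pass,
    keep refining the most important tiles (contrast-based importance)
    until the frame budget runs out, so spare GPU time always buys
    quality and the GPU never idles."""
    deadline = time.perf_counter() + budget_ms / 1000.0
    # Max-heap by importance (negated for Python's min-heap).
    queue = [(-importance(t), i) for i, t in enumerate(tiles)]
    heapq.heapify(queue)
    while queue and time.perf_counter() < deadline:
        _, i = heapq.heappop(queue)
        new_importance = refine_tile(tiles[i])  # one quality step
        if new_importance > 0:                  # still worth refining
            heapq.heappush(queue, (-new_importance, i))
```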
Perhaps my definition should be extended a little to incorporate an engine that matches, within a very low tolerance, the same results as the x*y ordered sample grid of pixels at native-resolution dimensions?
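That extension could be tested mechanically; a minimal sketch, with the tolerance value purely illustrative:

```python
import numpy as np

def matches_native(adaptive, reference, tolerance=1.0):
    """Hypothetical test for the extended definition: an engine may
    claim X x Y as its native res if its output matches a reference
    rendered on the full X x Y ordered sample grid within a very low
    tolerance (here: mean error in 8-bit colour steps)."""
    diff = np.abs(adaptive.astype(float) - reference.astype(float))
    return float(diff.mean()) <= tolerance
```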