#DEFINE "Native Res" *spawn

Shortbread

Rockstar already announced that it's 1080P/30fps on both PS4 and Xbox One

Yes they announced 1080p, but that doesn't mean native 1080p (see below). IMHO, games with sub-native 1080p resolutions (960 x 1080 a la PS4 KZ:SF MP... 1360 x 1080 a la XB1 COD:AW... 1920 x 800 a la PS4 TO:1886) shouldn't be listed or alluded to as being native 1080p (1920 x 1080).

Point being: we have no solid proof R* is running both at native 1080p... it could be a dynamic-resolution situation or maybe a slightly scaled-back vertical resolution.

Alongside the new First Person Mode, Grand Theft Auto V for PlayStation 4, Xbox One and PC features hundreds of additions and enhancements including 1080p resolution at 30FPS on PS4 and Xbox One (4K compatible on PC).
 
I am no pixel counter, but from the direct-feed images of the XB1 version of GTA V it looks pretty sharp. I don't see Rockstar lying when they say both versions are 1080p native. Is it really that hard to believe that a re-release of GTA V can run at 1080p/30fps on the Xbox One?
 
Has nothing to do with XB1. It's what a few developers/publishers are labeling 1080p these days. Plus, R* never stated native. I'm just saying there's a possibility that one or both versions could be using different methods (dynamic resolution or a lower vertical scan). It's not that I'm doubting R*; it's just better to err on the side of caution this generation.
 
Yes they announced 1080p, but that doesn't mean native 1080p (see below). IMHO, games with sub-native 1080p resolutions (960 x 1080 a la PS4 KZ:SF MP... 1360 x 1080 a la XB1 COD:AW... 1920 x 800 a la PS4 TO:1886) shouldn't be listed or alluded to as being native 1080p (1920 x 1080).

Point being: we have no solid proof R* is running both at native 1080p... it could be a dynamic-resolution situation or maybe a slightly scaled-back vertical resolution.
KZSF MP is 1080p; it's not sub-native just because of the technique it uses to create a 1080p image.

The end result you get is a native 1080p image with no upscale, and hence it is 1080p, because that's what matters; how it achieves that is a different discussion, all the more so because it is the only game out there which uses this technique. But if we are to go down the route of calling it sub-native because of its non-traditional technique, then we might as well call every game sub-native, because they all use elements that are rendered below 1080p. When people speak of resolution they are strictly speaking about the opaque geometry, and in that case Shadow Fall gives you an output of native 1080p in MP using a non-traditional technique.

Likewise, The Order is 1080p native because the pixel mapping is 1:1 with no upscale. Sub-native and sub-HD are terms used when the pixel mapping is not 1:1 and an upscale is involved. Halo 2 Anniversary and AW on XB1 would be sub-native by this definition; Shadow Fall and The Order would not.
 
KZSF is only 1080p in stills. In motion, its results differ from 1080p. It is rendering 960 x 1080 and then using a clever upscale, giving at times a 1080p native image, but it's still an upscale. The lack of interpolated (blurred) pixel values doesn't make it any less of an upscale from 960 x 1080.
 
KZSF is only 1080p in stills. In motion, its results differ from 1080p. It is rendering 960 x 1080 and then using a clever upscale, giving at times a 1080p native image, but it's still an upscale. The lack of interpolated (blurred) pixel values doesn't make it any less of an upscale from 960 x 1080.

Technically it is native 1080p in stills and slow motion. It's only when the motion is moderately fast that the engine switches to 960x1080.

It's very similar to the way some temporal AA techniques work: they only fully resolve on stills and slow motion and tend to break down in fast motion.

EDIT: Source from GAF showing that at a moderate, steady rate of motion the game is still 1080p and interlaced only in fast motion: http://www.neogaf.com/forum/showpost.php?p=103106999&postcount=1694
Technically the game has a purely dynamic resolution.
 
AFAIK it never 'switches' to 960x1080. Every frame, 960x1080 new pixels are rendered, and 960x1080 pixel values are calculated using previous and current data.

If we count rendering resolution as number of pixels drawn (probably not valid for an IQ thread that's more concerned with native resolution) then Shortbread's post is correct. Even if not though, KZSF isn't doing the same workload nor achieving the same IQ as a 1920x1080 unique pixel renderer.
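
For reference, the raw per-frame arithmetic behind "number of pixels drawn", using the figures quoted in this thread (just a throwaway sketch):

    #include <cstdio>

    int main()
    {
        // Fresh samples rasterised per frame under each approach.
        const long kzsfMp     = 960L  * 1080L;   // KZSF MP: half-width buffer per frame
        const long native1080 = 1920L * 1080L;   // a conventional native 1080p renderer

        std::printf("KZSF MP fresh samples/frame: %ld\n", kzsfMp);      // 1036800
        std::printf("Native 1080p samples/frame:  %ld\n", native1080);  // 2073600
        std::printf("Workload ratio:              %.2f\n", double(kzsfMp) / native1080); // 0.50
        return 0;
    }

By the workload measure that's half the fresh samples of a conventional 1080p renderer each frame; by the output measure it's still a full 1920x1080 buffer.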
 
AFAIK it never 'switches' to 960x1080. Every frame, 960x1080 new pixels are rendered, and 960x1080 pixel values are calculated using previous and current data.

If we count rendering resolution as number of pixels drawn (probably not valid for an IQ thread that's more concerned with native resolution) then Shortbread's post is correct. Even if not though, KZSF isn't doing the same workload nor achieving the same IQ as a 1920x1080 unique pixel renderer.
Yes, that is correct; what I was saying is that it is 1080p even when in motion. However, during fast motion artifacts start to appear, which is obviously due to the rendering approach they use.
 
Why are we still having this KZSF MP 1080pness discussion?

If "native resolution" refers to spatial sample rate, KZSF MP is not native 1080p.

If "native resolution" refers to the maximum clarity the game can (sometimes, heh) resolve prior to being sent to the system for output, KZSF MP is native 1080p.

Funsies: If KZSF MP can be called "native 1080p", why don't we call ISS "1080p2xSGSSAA"? (The answer is that we're talking about sampling in two different ways.)

This is pointless linguistics. The only thing happening is people claiming that other people's English is wrong.
 
Why are we still having this KZSF MP 1080pness discussion?
...
This is pointless linguistics. The only thing happening is people claiming that other people's English is wrong.
If we're going to have a discussion, we need common terms. Sorting out the meanings of words and phrases is part of that, especially as the goalposts move. Last gen it was literally the number of pixels an engine was rendering (opaque geometry) that defined resolution. Those samples were then upscaled, with interpolated values introducing blur and affecting image quality. Technique and subjective result were directly proportional, so applying the metric was straightforward: 600p meant blurry.

With the introduction of image reconstruction techniques, we need to decide whether it's the engine workload that we're measuring or the on-screen results, at least as a common vernacular so we don't need to clarify every single resolution reference.

The very existence of KZSF in this thread proves this as there isn't a common interpretation of its rendering resolution, making it impossible to talk about KZSF's resolution without clarifying which measure one's using.
 
With the introduction of image reconstruction techniques, we need to decide whether it's the engine workload that we're measuring or the on-screen results
I'm not sure that's quite the distinction that resolves the issue; as image reconstruction techniques are compromises relative to raw spatial sampling that give approximate results, engine workload and on-screen results can be related with some vague degree of proportionality. More or less everyone seems to agree, for instance, that KZSF MP has reduced engine workload compared to taking 1920x1080 spatial samples every frame, and that there are some drawbacks in image quality (i.e. the on-screen result) under some circumstances (particularly moving scenes with high-frequency spatial details).

To me, it seems like the contentious distinctions are:
1) how we regard samples based on how they're produced (i.e. the distinction between spatial samples and "sampling" through temporal reprojection), and
2) whether we're talking about sample rates or final reconstruction resolutions.
 
KZSF is native 1920x1080. Yes, it uses interlaced fields of 960x1080, but there is no actual upscaling going on, or even much interpolation for that matter. The engine uses motion vectors to reproject previously rendered pixels into the missing areas of the interlaced frames. That technique is able to reproduce a nearly perfect native 1080p image 90% of the time.

Even when some interpolation needs to be used, it's a different technique than upscaling. For example, 1080i is still native because each field represents each alternate row of pixels. They don't simply render at a lower resolution; they render each alternating column of pixels as a field, which is then combined with the remaining columns the next frame. Any motion gets interpolated under a normal interlacing situation, but in this case most of that motion gets reconstructed from existing motion information. Only what's left needs to be interpolated. I'd say it results in an image identical to native 1080p 90% of the time.
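
To make that concrete, here's a rough CPU-side sketch of the reconstruction idea as described in this post. The buffer layout, the column-parity scheme and the way the motion vector is fetched are illustrative assumptions on my part, not GG's actual implementation, and validity tests and filtering are left out:

    #include <algorithm>
    #include <vector>

    struct Vec2 { float x, y; };

    // Hypothetical frame data: a half-width (960x1080) set of freshly rasterised
    // columns plus per-pixel motion vectors, as described in the post above.
    struct HalfFrame {
        int width = 960, height = 1080;
        std::vector<float> colour;   // freshly rendered samples (one channel for brevity)
        std::vector<Vec2>  motion;   // screen-space motion vectors for those samples
    };

    // Rebuild a full 1920x1080 image. On even frames the fresh samples land on even
    // output columns, on odd frames on odd columns; the remaining columns are filled
    // by reprojecting the previous full-resolution result along the motion vectors.
    std::vector<float> reconstruct(const HalfFrame& fresh,
                                   const std::vector<float>& prevFull,
                                   int frameIndex)
    {
        const int W = 1920, H = 1080;
        std::vector<float> out(W * H, 0.0f);
        const int parity = frameIndex & 1;   // which column set was rasterised this frame

        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                if ((x & 1) == parity) {
                    // This column was rasterised this frame: copy the fresh sample.
                    out[y * W + x] = fresh.colour[y * fresh.width + (x >> 1)];
                } else {
                    // Missing column: fetch last frame's result from where this surface
                    // was then, using the nearest fresh sample's motion vector.
                    Vec2 mv = fresh.motion[y * fresh.width + (x >> 1)];
                    int px = std::clamp(int(x - mv.x + 0.5f), 0, W - 1);
                    int py = std::clamp(int(y - mv.y + 0.5f), 0, H - 1);
                    out[y * W + x] = prevFull[py * W + px];
                }
            }
        }
        return out;
    }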
 
Around the circle we go...

Short counter-argument: how can something that achieves the same as 1920x1080 90% of the time be native 1920x1080, which achieves 1920x1080 100% of the time?
 
Yes, it uses interlaced fields of 960x1080
"Interleaved" might be a better word. "Interlacing" is a fairly specific term, and it's not something that KZSF does.

but there is no actual upscaling going on, or even much interpolation for that matter.
This can be true in static imagery, since it's possible to map spatial samples directly to their screen-space locations in the 1920x1080 reconstruction buffer.

However, in motion you're wrong on both counts. KZSF MP does use spatial upscaling when it detects that the reprojection isn't valid. And perhaps more significantly (since it doesn't merely apply in pathological cases) the reprojection is always interpolated when in motion unless there's perfect alignment between a pixel in the reconstruction buffer and in the buffer that's being reprojected from.
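
A small sketch of those two points, with made-up buffer names and none of GG's actual code: when the reprojected position falls between stored pixel centres the fetch has to blend four neighbours (interpolation), and when the reprojection is judged invalid the pixel falls back to plain spatial interpolation from the current frame's freshly rendered columns:

    #include <algorithm>
    #include <vector>

    // Bilinearly fetch from a full-resolution history buffer at a non-integer
    // position; this blend of four neighbours is the interpolation referred to above.
    float fetchBilinear(const std::vector<float>& img, int W, int H, float x, float y)
    {
        x = std::max(0.0f, std::min(x, float(W - 1)));
        y = std::max(0.0f, std::min(y, float(H - 1)));
        int x0 = int(x), y0 = int(y);
        int x1 = std::min(x0 + 1, W - 1), y1 = std::min(y0 + 1, H - 1);
        float fx = x - x0, fy = y - y0;
        float top = img[y0 * W + x0] * (1 - fx) + img[y0 * W + x1] * fx;
        float bot = img[y1 * W + x0] * (1 - fx) + img[y1 * W + x1] * fx;
        return top * (1 - fy) + bot * fy;
    }

    // Fill one missing pixel: reproject from the previous full frame if the history
    // is trusted, otherwise fall back to averaging the freshly rendered left/right
    // neighbours, i.e. a plain spatial upscale for that pixel.
    float fillMissingPixel(const std::vector<float>& prevFull,
                           const std::vector<float>& currFull,  // fresh columns already placed
                           int W, int H, int x, int y,
                           float mvx, float mvy, bool reprojectionValid)
    {
        if (reprojectionValid)
            return fetchBilinear(prevFull, W, H, x - mvx, y - mvy);
        // Disocclusion or velocity mismatch: interpolate from current-frame neighbours.
        float left  = currFull[y * W + std::max(x - 1, 0)];
        float right = currFull[y * W + std::min(x + 1, W - 1)];
        return 0.5f * (left + right);
    }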
 
The native resolution argument will get quite interesting in the future as shading may become resolution independent.
Meaning one may shade 1M samples per frame independent of the resolution of the final render, and polygon edges would be at the final resolution.
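
A toy illustration of what that could look like, purely my own sketch and not any shipping technique: shade a fixed budget of roughly 1M samples into an object/texture-space buffer, then resolve geometry coverage per output pixel at whatever final resolution you like, looking the shading up from the fixed-size buffer:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Shade a fixed budget of ~1M samples into an object/texture-space buffer,
    // independent of whatever resolution the final frame will be.
    std::vector<float> shadeFixedBudget(int S = 1024)   // S*S is roughly 1M shaded samples
    {
        std::vector<float> shade(S * S);
        for (int v = 0; v < S; ++v)
            for (int u = 0; u < S; ++u)
                shade[v * S + u] = 0.5f + 0.5f * std::sin(u * 0.05f) * std::cos(v * 0.05f);
        return shade;
    }

    // Resolve at an arbitrary output resolution: the geometry (a disc here) is tested
    // per output pixel, so its edge is at final resolution, while the colour is looked
    // up from the fixed-size shading buffer.
    std::vector<float> resolve(const std::vector<float>& shade, int S, int W, int H)
    {
        std::vector<float> out(W * H, 0.0f);
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                float nx = (x + 0.5f) / W - 0.5f, ny = (y + 0.5f) / H - 0.5f;
                if (nx * nx + ny * ny > 0.2f) continue;            // edge tested at output res
                int u = std::min(int((x + 0.5f) / W * S), S - 1);  // map to shading space
                int v = std::min(int((y + 0.5f) / H * S), S - 1);
                out[y * W + x] = shade[v * S + u];                 // shading at fixed rate
            }
        }
        return out;
    }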
 
Meaning one may shade 1M samples per frame independent of the resolution of the final render, and polygon edges would be at the final resolution.
So, MSAA?

:p

(Thought: If a dev rendered a "540p4xMSAA" game that used the geometry samples as pixels in a 1080p output buffer, would we have a discussion analogous to the temporal reprojection discussion?)
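
Sketching that thought experiment out (assuming an ordered 2x2 sample grid rather than the rotated patterns real hardware uses, and with made-up buffer names): each 960x540 pixel's four geometry samples sit on a 2x2 block of the 1080p grid, so coverage is effectively tested at 1920x1080 while shading is evaluated once per coarse pixel and broadcast:

    #include <vector>

    // Hypothetical "540p 4xMSAA written out as 1080p": coverage was tested per
    // sample, i.e. at 1920x1080 positions (assuming an ordered 2x2 grid), but
    // shading was evaluated once per 960x540 pixel. Each sample becomes its own
    // output pixel, so edges are full-res while shading is quarter-rate.
    std::vector<float> resolveSamplesAsPixels(const std::vector<float>& coarseShade,      // 960x540
                                              const std::vector<unsigned char>& coverage) // 1920x1080, 0/1
    {
        const int W = 1920, H = 1080, CW = 960;
        std::vector<float> out(W * H, 0.0f);
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                if (!coverage[y * W + x]) continue;          // geometry edge at 1080p
                int cx = x >> 1, cy = y >> 1;                // owning 540p pixel
                out[y * W + x] = coarseShade[cy * CW + cx];  // broadcast coarse shading
            }
        }
        return out;
    }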
 
This is what GG said anyway:

"Native is often used to indicate images that are not scaled; it is native by that definition.
In Multiplayer mode, however, we use a technique called “temporal reprojection,” which combines pixels and motion vectors from multiple lower-resolution frames to reconstruct a full 1080p image.
If native means that every part of the pipeline is 1080p then this technique is not native.
Games often employ different resolutions in different parts of their rendering pipeline.
Most games render particles and ambient occlusion at a lower resolution, while some games even do all lighting at a lower resolution. This is generally still called native 1080p"
 
The notion of scaling is outdated. Scaling means lots of pixels have their value calculated from their neighbours', representing a lerp between those values and resulting in an ill-defined image. Best-case scenario, you have a resolution that's a factor of the target res and every other pixel is a raw value, e.g. 960x1080 upscaled to 1920x1080 means every odd column is native and every even one is an interpolated value. Worst case, every single pixel value is derived from the source data and none is left untouched.
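
That best case looks something like this (a naive linear filter for illustration; real scalers use fancier kernels, and which columns count as "native" is just an alignment convention):

    #include <vector>

    // Naive 2x horizontal upscale, 960x1080 -> 1920x1080. Even output columns carry
    // raw source values; odd output columns are a lerp of their two source
    // neighbours, i.e. the interpolated/blurred pixels described above.
    std::vector<float> upscale2xHorizontal(const std::vector<float>& src, int SW = 960, int H = 1080)
    {
        const int DW = SW * 2;
        std::vector<float> dst(DW * H);
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < DW; ++x) {
                int sx = x >> 1;
                if ((x & 1) == 0) {
                    dst[y * DW + x] = src[y * SW + sx];           // untouched raw sample
                } else {
                    int sx1 = (sx + 1 < SW) ? sx + 1 : sx;
                    dst[y * DW + x] = 0.5f * (src[y * SW + sx]    // interpolated value
                                            + src[y * SW + sx1]);
                }
            }
        }
        return dst;
    }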

But scaling's just one type of image reconstruction. "Native" != "not upscaled". It means supplied without modification from the pixel renderer. To date, the only such modification has been upscaling, but the introduction of cleverer techniques means we must broaden the scope. Native therefore means 'a pixel produced directly from the rasterisation process for that pixel and not calculated from other rasterised pixel values.'

KZSF falls foul of that because half the pixels are computed not by their rasterisation but by the construction of image data from other rasterised data, just as image upscaling does, even if with a different algorithm and different visual results.

So in its simplest form, "native res" == "number of rasterised pixels (of opaque geometry - see previous discussions on res!)"
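
Or, expressed as a check (the field names are placeholders, just to pin the definition down):

    struct FrameInfo {
        long long opaqueRasterisedPixels;   // unique opaque-geometry pixels actually rasterised
        long long outputWidth, outputHeight;
    };

    // "Native res" == the rasterised pixel count of the opaque geometry: the frame
    // only counts as native if that count matches the full output pixel count.
    bool isNativeRes(const FrameInfo& f)
    {
        return f.opaqueRasterisedPixels == f.outputWidth * f.outputHeight;
    }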
 