Digital Foundry Article Technical Discussion Archive [2014]

You are saying that there is a way to sample the black lines into one frame and the white ones into another; I am saying that you cannot.
You can, easily. When you render the scene, you sample at discrete points. The first frame you sample at relative positions 0,2,4,6 etc. The next frame, you sample at points 1,3,5,7 etc. This is absolutely doable in game. It's trivial. You can just offset the camera's horizontal position by a half pixel between each 960x1080 frame. The first frame, your camera sees only the black lines and skips over the white lines because the horizontal sampling frequency isn't high enough to capture them. Next frame, shift the camera right half a pixel (1/1920th of the screen width). Now you are sampling the white lines and skipping over the black lines. In the recombination step, draw the black lines at their positions in the first frame and the white lines at their positions.
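In code terms, the offset-and-sample idea boils down to something like this (a minimal C++ sketch with made-up names, not Guerrilla's actual implementation):

```cpp
// Minimal sketch of the alternating half-pixel offset described above.
// Illustrative only (made-up names, not Guerrilla's actual code).
#include <cstdint>

// Which full-resolution (1920-wide) column does low-res pixel x land on this frame?
// Even frames hit columns 0, 2, 4, ... (the black lines); odd frames hit 1, 3, 5, ...
int FullResSampleColumn(int lowResX, std::uint64_t frameIndex)
{
    const int phase = static_cast<int>(frameIndex & 1);
    return 2 * lowResX + phase;
}

// In practice the shift is applied as a sub-pixel jitter of the projection,
// expressed in NDC (which spans 2 units across the screen). Half of a
// 960-wide pixel is the same distance as one 1920-wide pixel.
float JitterInNdcX(std::uint64_t frameIndex)
{
    const float lowResWidth  = 960.0f;
    const float jitterPixels = (frameIndex & 1) ? 0.5f : 0.0f;  // in low-res pixels
    return jitterPixels * (2.0f / lowResWidth);
}
```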
With a lowered-resolution buffer you always lose detail. Reading the blog, it's not what GG is doing anyway.
The blog's doing exactly what I and others have described, except with the (considerable) addition of tweaking the interleaved values and not leaving them at their original, untouched positions. This reduces the deviation from the absolutes that I had in my example because the deviation is processed to be more in line with what we'd predict/expect the values to be, rather than what they were. In essence, the alternate pixel values are only interlaced when deviation is close to zero. When the deviation increases, the processing increases and we end up not using the interlaced data but computed pixel values, similar to computing AA or upscale values.
 
Sony and KZ, I guess I should've known. Not gonna mention this again, but the attitude has once again been noted.

Talking about attitude? Maybe take it up a notch to a higher altitude, or stay down and go after those who made a big deal of this on this forum (BF4?). Unless the thread is locked, there should be plenty to quote and post about in those threads. Rub it in, show them who was wrong and who was right?

I am totally missing the point here; I just don't see the big fuss...
 
what? Then what's the point of texture filtering like AF...
The point of AF is approximating a supersampled texture result over the entirety of a pixel's coverage, as you suggest.

But that's the point; you have to yell and scream at the world to get it to account for coverage over a pixel. It's actually much, much easier to take a discrete point sample than to get a supersampled average color result for a pixel. That's why early 3D titles had no texture filtering and no AA; the color result for a pixel was basically just the texture color on the object at that location, maybe modified by some extremely simple shading if you're lucky.
Thankfully we have GPU features that help us do this stuff more easily (like configurable TMUs which accelerate the texture-filtering process in hardware).
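To make the point-sample versus coverage difference concrete, a toy C++ sketch (the "texture" here is a made-up procedural checkerboard, not any real API):

```cpp
// Toy comparison: one discrete point sample per pixel (early-3D style) versus
// averaging many samples over the pixel's coverage. SampleTexture is a
// made-up procedural checkerboard standing in for a real texture fetch.
struct Color { float r, g, b; };

Color SampleTexture(float u, float v)
{
    // 1-texel checkerboard: exactly the kind of high-frequency detail that
    // shimmers and aliases when point sampled.
    const bool white = ((static_cast<int>(u) + static_cast<int>(v)) & 1) != 0;
    const float c = white ? 1.0f : 0.0f;
    return {c, c, c};
}

// Early-3D style: the pixel's color is just the texture at one point.
Color ShadePointSampled(float u, float v)
{
    return SampleTexture(u, v);
}

// Coverage-aware: average an n x n grid of samples spread over the pixel's
// footprint in texture space (duDx / dvDy = footprint size).
Color ShadeSupersampled(float u, float v, float duDx, float dvDy, int n)
{
    Color sum{0.0f, 0.0f, 0.0f};
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
        {
            const float su = u + duDx * ((i + 0.5f) / n - 0.5f);
            const float sv = v + dvDy * ((j + 0.5f) / n - 0.5f);
            const Color c = SampleTexture(su, sv);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
    const float inv = 1.0f / (n * n);
    return {sum.r * inv, sum.g * inv, sum.b * inv};
}
```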

Reading the blog, it's not what GG is doing anyway.
The blog makes it sound like exactly what GG is doing, albeit using more than just the last field when determining what to reproject.

They're also using standard upscale on pixels which don't have anywhere to reproject from, to avoid massive artifacts in areas where the interleaved reprojection would have otherwise fallen apart completely.

In any case, I thought it was well understood in this discussion that the naive "interlace" we were discussing was a highly simplified version of what's happening. I thought we were using it because you were disputing the basic claim that it's possible to get a native 1080p result from small buffers, and it's easier to discuss the matter with a simple case.
So yes, the actual solution GG is using is more complex, but not conceptually very different.
 
You can, easily. When you render the scene, you sample at discrete points. The first frame you sample at relative positions 0,2,4,6 etc. The next frame, you sample at points 1,3,5,7 etc. This is absolutely doable in game. It's trivial. You can just offset the camera's horizontal position by a half pixel between each 960x1080 frame.

Well, I don't dispute this, because that's what I said: that you need to judder the camera, yes? To reconstruct better detail out of a static scene? When it's moving and you get your motion compensation, you just get better sampling, no?

But even with that, given an infinitely high-resolution scene with color values:
0202020202

You are saying you can sample
00000 in frame 1 and 22222 in frame 2 by juddering the camera.

I'm saying you cannot, because of texture filtering, and your end result would be something like 1111111111. Sure, you can do point sampling on the texture, but then you are trading detail/noise for shimmering, no?
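To put numbers on both readings (a tiny stand-alone C++ sketch; it assumes a plain box filter for the "texture filtering" case and raw point samples for the juddered case):

```cpp
// Numbers for the 0-2-0-2 example above, assuming a simple box filter over
// each low-res pixel's footprint (purely illustrative).
#include <cstdio>
#include <vector>

int main()
{
    const std::vector<float> scene = {0, 2, 0, 2, 0, 2, 0, 2};  // alternating black/white columns

    // Half-res with filtering: each pixel averages two adjacent columns.
    std::printf("filtered half-res: ");
    for (std::size_t i = 0; i + 1 < scene.size(); i += 2)
        std::printf("%g ", 0.5f * (scene[i] + scene[i + 1]));    // prints 1 1 1 1

    // Half-res point sampled, even phase then odd phase (the "judder").
    std::printf("\npoint, even phase: ");
    for (std::size_t i = 0; i < scene.size(); i += 2) std::printf("%g ", scene[i]);  // 0 0 0 0
    std::printf("\npoint, odd phase:  ");
    for (std::size_t i = 1; i < scene.size(); i += 2) std::printf("%g ", scene[i]);  // 2 2 2 2
    std::printf("\n");
    return 0;
}
```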

In essence, the alternate pixel values are only interlaced when deviation is close to zero. When the deviation increases, the processing increases and we end up not using the interlaced data but computed pixel values, similar to computing AA or upscale values.

That's not what I said, no?

Maybe it is some kind of naive implementation. Say the pixels are:
0 1 0 1 0 1 0 1 0 1, with the 0s from frame n-1 and the 1s from frame n.

If the color difference between neighboring pixels is greater than a threshold, then blend the neighboring 1s; otherwise, take the 0s as input.
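Spelled out as a sketch, that rule might look something like this (my reading of it in C++, with made-up names; not GG's actual filter):

```cpp
#include <cmath>
#include <vector>

// A sketch of the naive rule described above. Output columns alternate:
// even columns hold reprojected frame n-1 pixels (the "0"s), odd columns hold
// frame n pixels (the "1"s). Where a stale "0" disagrees too much with its
// "1" neighbours, replace it with a blend of those neighbours instead.
// Assumes both fields have the same width (e.g. 960).
std::vector<float> Recombine(const std::vector<float>& prevField,  // the "0"s, from frame n-1
                             const std::vector<float>& currField,  // the "1"s, from frame n
                             float threshold)
{
    const std::size_t w = prevField.size();
    std::vector<float> out(2 * w);
    for (std::size_t i = 0; i < w; ++i)
    {
        const float curR  = currField[i];
        const float curL  = currField[i > 0 ? i - 1 : 0];
        const float prev  = prevField[i];
        const float blend = 0.5f * (curL + curR);

        out[2 * i + 1] = curR;                           // current-frame columns pass through untouched
        out[2 * i]     = (std::fabs(prev - blend) > threshold)
                       ? blend                           // deviation too big: use blended neighbours
                       : prev;                           // deviation small: true interleave
    }
    return out;
}
```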

So like, we are talking about the same thing but then you think I'm wrong? I seriously can't express myself clearly.
 
The point of AF is approximating a supersampled texture result over the entirety of a pixel's coverage, as you suggest.

So I'm gonna stop you right there.
You think that, with supersampled texture filtering, you will be able to get as much detail as native by merging 2 offset samples?

Let's revisit the math:

0204060802040608020 <- assuming a highly detailed scene
If you sample it with texture filtering (each "native" pixel averaging two adjacent scene values) to form your first "native" shot,

you get:
12341234, do you dispute this?

So, let's try sampling the 2 offset frames, at half the resolution, from the scene:
2424, frame 1, do you dispute this?
2424, frame 2, juddered by 1 original sample, or preferably,
3333, a better frame 2, when juddered by 2, do you dispute?

combined, naive:
23432343, when your original is
12341234

The resulting image would look very good in terms of reproducing the native, but that's not to say it's a perfect reconstruction, and the numbers do show that it's blurred.

You can try point sampling on the texture, but I think it'll just give a worse result.
 
Just buy a progressive DVD. They've done it for a LONG while actually.

I think you are missing the point. When your instantaneous moment only contains half the samples, and you try to plug the gaps from the past frame/field, you get a good-quality reconstruction, but the instantaneous details lost are still lost, because they haven't happened in the past. Even when the scene is static and you offset the camera, the sampling at lower resolution means that details are lost and can't be recovered.
 
So, one: that figure is exactly trying to demonstrate that the pixels are discrete (see the horizontal lines), so I don't understand why you'd say it's wrong. If you are being picky about the exact positions of the samples, then they are wrong because I drew it quickly. The point is that you lose details when you lower the resolution, and you don't get to reconstruct them back perfectly.
Assuming you can sample odd and even columns separately, do you dispute that a stationary scene would be the same as real 1080p sampling?

And two: I am saying there is no way that you can pick and choose; you don't get all your odd pixels into your half-wide odd frame and your even pixels into your half-wide even frame.

I find your texture filtering argument quite interesting, so let's simplify this question as well.
No AF, no PP, just with basic rasterisation (pixel shading, texturing w/ mipmaps etc), is it possible to just render even columns only, for example by appropriately changing mipmaps?
 
I welcome anyone to prove mathematically that you can perfectly reconstruct a full frame from 2 half-sampled frames. ;)

DLP TVs have been doing this for years! DLPs (1080p) scan half the image in and then make a second pass for the other half. The regulations for 1080p are that the full image be on screen within a certain time. Since that time is met by DLP TVs and KZ:SF, they are considered 1080p. ;)
 
This is all I have to say:

A lot of people "judged" SF (but also Ryse, BF4, etc...) based solely on the resolution, the frame-rate, or the polygon count but have never actually played it.
Don't follow their example ;)
 
The resulting image would look very good in terms of reproducing the native, but that's not to say it's a perfect reconstruction, and the numbers do show that it's blurred.

You can try point sampling on the texture, but I think it'll just give a worse result.
Well, obviously unfiltered texture sampling isn't ideal.

However, "not using any filtering" and "treat the pixels in the 960x1080 buffer as though they have anamorphic coverage" are not the only two options. This is the point I've been trying to make through this whole discussion: you want to make graphical choices for the pixels in the 960x1080 buffer as though they were in the 1920x1080 buffer. In the case of texture filtering, that means using a slimmer sample pattern.

For instance, if you're using basic bilinear or trilinear filtering, you would simply apply a negative LOD bias; it'll look undersampled and cause aliasing if you're just upscaling the 960x1080 buffer, but it'll look correct when interleaved with the other field into the full 1920x1080 buffer. Things probably get a bit more complicated when AF gets involved, because then you have to start thinking hard about sample pattern, but there's no theoretical reason that you couldn't sample textures for your pixels as though they were part of a full-res buffer.
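As one concrete (assumed) way to apply that bias, here's what it could look like with plain OpenGL 3.3 sampler objects; the -1.0 value is illustrative and nothing here is confirmed to be what GG does:

```cpp
// One assumed way to set a negative LOD bias, shown with OpenGL 3.3 sampler
// objects (entry points loaded via something like glad). The -1.0 value is
// illustrative: at 960 wide, the horizontal texture-space derivative is twice
// what it would be at 1920, so if the horizontal axis dominates the LOD
// calculation, pulling the LOD back by one level compensates.
#include <glad/glad.h>

GLuint MakeHalfWidthSampler()
{
    GLuint sampler = 0;
    glGenSamplers(1, &sampler);
    glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glSamplerParameterf(sampler, GL_TEXTURE_LOD_BIAS, -1.0f);  // sharper mips for the half-width pass
    return sampler;
}
```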
 
I think you are missing the point. When your instantaneous moment only contains half the samples, and you try to plug the gaps from the past frame/field, you get a good-quality reconstruction, but the instantaneous details lost are still lost, because they haven't happened in the past. Even when the scene is static and you offset the camera, the sampling at lower resolution means that details are lost and can't be recovered.

A movie DVD is actually a progressive image interlaced for DVD pressing and reconstructed on playback. I know it's an unfair comparison of sorts, but it shows that you can deconstruct an image perfectly into 2 half fields and still have full resolution afterwards (albeit in this case at half the framerate, of sorts).

So, to make my comparison work: render all odd pixels in frame one, and all even pixels a frame later. With no movement, you get the same scene, just shifted by one pixel. Now recombine the even and odd pixels, the same as you would if you were authoring a DVD. There's NO loss happening here.
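In miniature, that split-and-recombine is exactly lossless (an illustrative C++ sketch; it assumes the renderer really can produce exact odd/even-column fields, which is the part being debated):

```cpp
#include <utility>
#include <vector>

using Image = std::vector<std::vector<float>>;  // [row][column]

// Split a full-width image into even-column and odd-column fields.
std::pair<Image, Image> SplitFields(const Image& full)
{
    Image even(full.size()), odd(full.size());
    for (std::size_t y = 0; y < full.size(); ++y)
        for (std::size_t x = 0; x < full[y].size(); ++x)
            ((x % 2 == 0) ? even : odd)[y].push_back(full[y][x]);
    return {even, odd};
}

// Interleave the two fields back into a full-width image.
Image Recombine(const Image& even, const Image& odd)
{
    Image out(even.size());
    for (std::size_t y = 0; y < even.size(); ++y)
        for (std::size_t x = 0; x < even[y].size(); ++x)
        {
            out[y].push_back(even[y][x]);
            out[y].push_back(odd[y][x]);
        }
    return out;
}

// For any static frame with an even number of columns (1920 is even):
//   Recombine(SplitFields(img).first, SplitFields(img).second) == img, exactly.
```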

In movement, we have to use movement data to shift the previous fields according to the camera movement. Might work well, might not work at all. H.264 does this to save data, too, just not in fields but with the whole image. Why redraw (in data) the whole image if the camera just moved a pixel to the left?

It looks like KZ's full 1080p is a keyframe of sorts, to reduce the loss from movement/reprojection artifacts.

Your argument doesn't apply to my comparison. My comparison was for static scenes; or rather, for DVDs, it uses twice the framerate to reconstruct the image (as if the game was rendered at 120Hz). Of course your image information is often wrong. Nobody is questioning that. If in frame A just the background is there, and in frame B a unit is spawned in front of you, the old frame will have little detail on that new unit. But the rest around it will still be perfect. Thus you need to "blur the lines" to have no combing effect between the frames. But that is the point of this technique.
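And the "shift the previous field by the movement data" step could look roughly like this (an assumed form, for illustration only, not Guerrilla's actual shader):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// A 960x1080 field from the previous frame (luminance only, for illustration).
struct Field {
    int width = 0, height = 0;
    std::vector<float> texels;  // row-major

    // Nearest-neighbour fetch with clamping; a real shader would use bilinear.
    float Fetch(float x, float y) const {
        const int xi = std::min(std::max(static_cast<int>(std::lround(x)), 0), width  - 1);
        const int yi = std::min(std::max(static_cast<int>(std::lround(y)), 0), height - 1);
        return texels[static_cast<std::size_t>(yi) * width + xi];
    }
};

// Walk back along the per-pixel motion vector (given in full-resolution
// pixels) to where this surface point was last frame, then convert the
// full-res x into the previous field's half-width coordinates.
float ReprojectPrevSample(const Field& prevField, int fullResX, int fullResY,
                          float motionX, float motionY)
{
    const float prevX = (fullResX - motionX) * 0.5f;
    const float prevY =  fullResY - motionY;
    return prevField.Fetch(prevX, prevY);
}
```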
 
Assuming you can sample odd and even columns separately, do you dispute that a stationary scene would be the same as real 1080p sampling?

Assuming that you can, I don't dispute the conditional.
I dispute the claim though; that's what I'm getting at.

I find your texture filtering argument quite interesting, so let's simplify this question as well.
No AF, no PP, just with basic rasterisation (pixel shading, texturing w/ mipmaps etc), is it possible to just render even columns only, for example by appropriately changing mipmaps?

That's a good argument: the less detail there is in the original scene, the better you can reconstruct it, because there's less to reconstruct. I actually don't dispute this, but I think this is skipping a more accurate model and trading it for more artificial detail?

Texture filtering is just one dimension. When you change the camera, even just slightly, don't you change the light as observed at the pixels, at least in some extreme cases like at the tangent? It's probably not detectable by human eyes, which is why it works pretty well, but that's not to say that you can reverse it pixel-for-pixel perfectly?

I don't dispute the fact that this is quite a feat given the quality obtained and the amount of computation saved. I dispute the claim that this is somehow exactly equal to what you get with native 1080p60 (maybe it's obvious; it just seems to me that a few people are still yammering about it).
 
Of course your image information is often wrong. Nobody is questioning that. If in frame A just the background is there, and in frame B a unit is spawned in front of you, the old frame will have little detail on that new unit. But the rest around it will still be perfect. Thus you need to "blur the lines" to have no combing effect between the frames. But that is the point of this technique.

I don't disagree with you on this: you'll get some pixels that are perfect, some that are close, and some that are unusable, which you'll blend.

Since nobody is saying that it's as good as native 1080p60, I'll just have to ignore the nobody(s) then :rolleyes:
 
Since nobody is saying that it's as good as native 1080p60, I'll just have to ignore the nobody(s) then :rolleyes:
I'm not sure why you're making this quip, because it's true that nobody is claiming otherwise. What people are claiming is that you can get equivalent results for static scenes, which isn't the same thing as claiming that it's always as good as native 1080p60.
 
but there's no theoretical reason that you couldn't sample textures for your pixels as though they were part of a full-res buffer.

I think the bias can compensate for some of the loss of detail; I don't think it perfectly solves the problem, which itself is not trivial. And there's a logical trap here: if this ends up being more expensive than just rendering natively, then you'd just render natively, because it's faster.
 
@taisui

Are you basically saying it's impossible to generate a native-like 1080p image from multiple 960x1080 images without judder, because you are simply sampling from copies of the same 960x1080 image?
 
I'm not sure why you're making this quip, because it's true that nobody is claiming otherwise. What people are claiming is that you can get equivalent results for static scenes, which isn't the same thing as claiming that it's always as good as native 1080p60.

Except that the scene is technically not static, because the camera got moved to create the offset.
 
@taisui

Are you basically saying it's impossible to generate a native-like 1080p image from multiple 960x1080 images without judder, because you are simply sampling from copies of the same 960x1080 image?

I suppose that's what I'm trying to say? It seems like everyone is interpreting what I'm saying differently??
I'm also saying it's not going to be a perfect reconstruction because of filtering.

I don't even know at this point :rolleyes:
 