Digital Foundry Article Technical Discussion Archive [2014]

and the analysis showed that the details are not on par with native 1080p.

Or, to put this another way: "it's true 1080p, but looks just a little blurred" :rolleyes:
...exactly what happens when you upscale an image to a higher resolution.

Not exactly the same. When you upscale traditional resolutions it affects all pixels at once, all the time. Here they are interpolating some pixels sometimes.
 
Not exactly the same. When you upscale traditional resolutions it affects all pixels at once, all the time. Here they are interpolating some pixels sometimes.

Exactly the same in terms of how the blur is introduced through interpolation, not exactly the same in terms of the algorithm.
 
The consensus definition for the resolution of a renderer is pretty strong: it assumes a non-interlaced, full-resolution output whose samples come from the timestep they are assigned to (given that the framerate is defined as well). Playing cute new language games afterwards doesn't mean you can keep using the old definition.
There is a certain amount of data generated as the actual fragment output for frame N, and that instantaneous sample is not full resolution.

This wouldn't be the first time terminology was used in a fast and loose manner in relation to KZ:SF, what with the earlier debate about an imprecise usage of the term ray-tracing.

It's safe to say what is being done here is not in accordance with the consensus definition.
I wouldn't call it 1080p or 1080p60 without a big honking asterisk.

How about we call it this: 1 8 p 0?
 
Does "60 fps" denote 60 960X1080 frames per second or 60 1920X1080 frames per second where half the pixels are done with their reprojection tech.

It is the later.
Also, the tech clearly works - while many have noticed that something isn't entirely right, noone really suspected how few fragments were actually rendered per frame.

I agree with 3dilettante's points in general here, but it is important to remember that the loss of image quality from the tech is a function of how fast the movement is. So a completely still image will be very, very close to the "real thing", and when there's fast movement, much of the detail probably can't be noticed anyway.
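
Guerrilla hasn't published the details, but to make the "half the pixels per frame" idea concrete, here is a minimal sketch of a column-interleaved reconstruction. Everything in it - the column layout, the function name, and the naive copy-based reuse with no motion compensation - is an assumption for illustration, not the game's actual pipeline:

Code:
import numpy as np

def reconstruct(curr_half, prev_full, parity):
    """Rebuild a full-width frame from this frame's half-width render
    plus columns carried over from the previous reconstructed frame.

    curr_half: (H, W/2) array of columns rendered this frame
    prev_full: (H, W) previous reconstructed frame
    parity:    0 or 1 - which column set was rendered this frame
    """
    full = prev_full.copy()           # start from the old frame's columns
    full[:, parity::2] = curr_half    # drop in the freshly rendered columns
    return full

# Each 60 Hz frame renders the other column set, so a static scene
# converges to a true 1080p image after two frames; under motion the
# carried-over columns are stale, which is where the artifacts come from.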
 
I have a question regarding the temporal interpolation and what affects its quality (negatively) the most: is it

a) variable frame update time (KZ MP fps fluctuate quite a bit, maybe the artifacts are especially prominent in scenarios with lower than expected 60Hz update)

or b) change in camera (or viewpoint) speed, as this needs to be taken into account to compute the interpolation

or even c) both(!): the actual camera viewpoint distance per actual frame time, which is something like the actual 'view frame velocity'.

?

So anyway, you need an adaptive interpolation algorithm...right?
 
Exactly the same in terms of how the blur is introduced through interpolation, not exactly the same in terms of the algorithm.
We've covered this already.

It's not even remotely the same in terms of how the blur is introduced through interpolation. Partly because in the case of KZSF MP, "interpolation" isn't even all that accurate of a term. You're not producing intermediate values from a smaller number of samples, you're reusing known information which approximates the real values that would have been there had you done full-res spatial sampling.

Which gives a completely different result, visually, than typical upscaling. In some circumstances (no motion), KZSF MP looks like a native 1080p image. In the theoretical worst-case scenario where the scene changes entirely from one frame to the next, obviously you're not going to do better than raw 960x1080 would, because in that situation your temporal samples are worthless.
The result is not "the same" as typical upscaling. That is blatantly wrong.
 
I have a question regarding the temporal interpolation and what affects its quality (negatively) the most: is it

a) variable frame update time (KZ MP fps fluctuate quite a bit, maybe the artifacts are especially prominent in scenarios with lower than expected 60Hz update)

or b) change in camera (or viewpoint) speed, as this needs to be taken into account to compute the interpolation

or even c) both(!): the actual camera viewpoint distance per actual frame time, which is something like the actual 'view frame velocity'.

There are cases where gross movement of the camera or geometry may not be sufficient indicators for the dynamic behavior of the surfaces being rendered.
A naive implementation might have problems with fine detail or effects that react independently or disproportionately to small geometry changes.

A 30 Hz strobe light might cause a naive implementation problems even if nothing is moving, as a possible example.
Some of the most noticeable examples shown appear to be rapid particle effects or lighting changes, for which geometry and the camera don't seem to be good proxies for tracking.
The implementation details should be interesting.
 
The result is not "the same" as typical upscaling. That is blatantly wrong.

Ya ya ya, calm down. I think I actually provided the details on the subject, so cool your jets. Where exactly did I say the results are the same, anyway? My analysis shows it's superior, in fact.

It is what it is; refusing to call it a certain way doesn't change the fact that it's just born this way. I wrote code that reverses JPEG compression artifacts a decade ago, so I think I probably know a little about image processing.
 
There are cases where gross movement of the camera or geometry may not be sufficient indicators for the dynamic behavior of the surfaces being rendered.
A naive implementation might have problems with fine detail or effects that react independently or disproportionately to small geometry changes.

A 30 Hz strobe light might cause a naive implementation problems even if nothing is moving, as a possible example.
Some of the most noticeable examples shown appear to be rapid particle effects or lighting changes, for which geometry and the camera don't seem to be good proxies for tracking.
The implementation details should be interesting.

Maybe it is some kind of naive implementation, say pixels:
0 1 0 1 0 1 0 1 0 1, where 0 is from frame n-1 and 1 is from frame n.

If the color difference between neighboring pixels is greater than a threshold, then blend the neighboring 1s; otherwise, take the 0s as input.

I was thinking maybe there's some motion compensation, but then that would introduce blur on the weapons - then again, you can probably render the weapon with the UI (it's pretty common for those to be in a different pass anyway).
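
One way to read that recipe as running code - with the caveat that the 1D layout, the threshold value, and the exact test are all guesses for illustration, not anything confirmed about the game:

Code:
def reconstruct_row(fresh, stale, threshold=32):
    """Naive blend in the spirit of the post above.

    fresh: pixels rendered this frame (the '1' positions)
    stale: pixels carried over from frame n-1 (the '0' positions)
    Output row layout: stale[0] fresh[0] stale[1] fresh[1] ...
    """
    out = []
    for i in range(len(fresh)):
        left = fresh[max(i - 1, 0)]      # fresh neighbour left of stale[i]
        right = fresh[i]                 # fresh neighbour right of stale[i]
        blended = (left + right) / 2
        if abs(stale[i] - blended) > threshold:
            out.append(blended)          # history disagrees: blend the 1s
        else:
            out.append(stale[i])         # history agrees: keep the 0
        out.append(fresh[i])             # the freshly rendered pixel
    return out

A pure colour-difference test like this has no idea why history disagrees - motion, particles, or a lighting change all collapse those pixels back to blended half-res data, which fits 3dilettante's point about camera and geometry not being the only things worth tracking.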
 
Well, I wasn't sure how else to read this:

Particularly given that you had previously expressed this sentiment:

It's pretty simple, let me break it down:

"Exactly the same in terms of how the blur is introduced through interpolation, not exactly the same in terms of the algorithm."


What I am saying is that because the source material is not native 1080p, the process of "voodoo" that turns two 960x1080 frames into one single 1080p frame is a form of upscale. I think this "voodoo" is a form of upscaling, and I think this "voodoo" uses some form of interpolation.

If you disagree that this "voodoo" is a form of upscaling, then hey, I can't stop you from calling your dog a cat, right?

"Lol, how is "looks blurred" not an "upscaled" look?"

This is in response to the select few who insisted the MP images look like native 1080p but at the same time say they look a little bit blurred, yet are not upscaled, based on how a native 1080p frame is stitched together for output. That position is false at the logic level.
 
You're rendering 960x1080 and pulling in data from another frame to fill out the rest. I definitely would not say it is equivalent to rendering 1920x1080. You're pulling in other information to complete the frame, which is kind of what upscaling algorithms do.
Nope (not 2D upscales anyhow). This reprojection/interpolation concept will give differing results depending on the difference between the previous frame and current one. An upscale gives differing results depending on neighbouring pixels.

Let's consider a line of pixels with these values:

2 2 4 4 4 2 6 6 6 8 8 6 4 2 : frame one
2 2 2 4 4 4 2 6 6 8 8 6 4 4 : frame two

Frame two has the scene scrolling to the right with some changes happening on the right side. The interlaced framebuffer for the second frame on screen would look like this:

Interlaced:
2 2 2 4 4 4 2 6 6 8 8 6 4 2 : interlaced data
2 2 2 4 4 4 2 6 6 8 8 6 4 4 : frame two source

...with deviation...

0 0 0 0 0 0 0 0 0 0 0 0 0 2 : total 2
If we instead render frame two at half horizontal res and tween the in-between values, we get:

Interpolated:
2 _ 2 _ 4 _ 2 _ 6 _ 8 _ 4 _ : frame two rendered data
2 2 2 3 4 3 2 4 6 7 8 6 4 4 : frame two with interpolated data
2 2 2 4 4 4 2 6 6 8 8 6 4 4 : frame two source

...with deviation...

0 0 0 1 0 1 0 2 0 1 0 0 0 0 : total 5
Now let's do the same where the second frame is radically different from the previous one:

2 2 4 4 4 2 6 6 6 8 8 6 4 2 : frame one
8 8 8 0 0 0 6 6 4 0 0 0 0 6 : frame two

Interlaced:
8 2 8 4 0 2 6 6 4 8 0 6 0 2 : interlaced data
8 8 8 0 0 0 6 6 4 0 0 0 0 6 : frame two source

...with deviation...

0 6 0 4 0 2 0 0 0 8 0 6 0 4 : total 30

Interpolated:
8 _ 8 _ 0 _ 6 _ 4 _ 0 _ 0 _ : frame two rendered data
8 8 8 4 0 3 6 5 4 2 0 0 0 0 : frame two with interpolated data
8 8 8 0 0 0 6 6 4 0 0 0 0 6 : frame two source

...with deviation...

0 0 0 4 0 3 0 1 0 2 0 0 0 6 : total 16
We can see that interpolation has 2.5x the deviation of the interlaced data when the frames hold similar values, but half the deviation of the interlaced data when there is a significant delta between frames. And that's with a very tidy, regular upscale; an irregular upscale like 900p will have a different deviation amount and distribution.

Obviously the interlaced reconstruction can involve more than straight interlacing, which can affect the selected values, but I hope this post clarifies the different approaches and outcomes, and shows that all solutions have strengths and weaknesses. Much like post-FX AA, smart framebuffer reconstruction techniques have a lot of variables to play with to balance performance against IQ, with notable pros and cons.
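
The comparison above is easy to check by script. Here is a minimal version of the two reconstructions, using the "radically different" frame pair since its interleave scheme is unambiguous; deviation is the summed per-pixel error against the true frame-two values:

Code:
def interlace(prev, curr):
    # odd positions (1-based) from the current frame, even from the previous
    return [curr[i] if i % 2 == 0 else prev[i] for i in range(len(curr))]

def interpolate(curr):
    rendered = curr[0::2]        # the half of the pixels actually rendered
    out = []
    for i in range(len(curr)):
        if i % 2 == 0:
            out.append(rendered[i // 2])
        else:                    # tween the in-between value
            left = rendered[i // 2]
            right = rendered[min(i // 2 + 1, len(rendered) - 1)]
            out.append((left + right) // 2)
    return out

def deviation(guess, truth):
    return sum(abs(g - t) for g, t in zip(guess, truth))

f1 = [2, 2, 4, 4, 4, 2, 6, 6, 6, 8, 8, 6, 4, 2]
f2 = [8, 8, 8, 0, 0, 0, 6, 6, 4, 0, 0, 0, 0, 6]
print(deviation(interlace(f1, f2), f2))   # 30
print(deviation(interpolate(f2), f2))     # 16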
 
If you disagree that this "voodoo" is a form of upscaling, then hey, I can't stop you from calling your dog a cat, right?
Upscaling involves taking the values of neighbouring pixels from the same frame and averaging them. Interlacing involves taking values from a different frame and dithering them together. The interlacing artefacts of striping seen in KZ are mathematically impossible with an upscale, hence we can be very clear that they are two different animals.
 
It's pretty simple, let me break it down:

"Exactly the same in terms of how the blur is introduced through interpolation, not exactly the same in terms of the algorithm."

What I am saying is that because the source material is not native 1080p, the process of "voodoo" that turns two 960x1080 frames into one single 1080p frame is a form of upscale. I think this "voodoo" is a form of upscaling, and I think this "voodoo" uses some form of interpolation.

If you disagree that this "voodoo" is a form of upscaling, then hey, I can't stop you from calling your dog a cat, right?

"Lol, how is "looks blurred" not an "upscaled" look?"

This is in response to the select few who insisted the MP images look like native 1080p but at the same time say they look a little bit blurred, yet are not upscaled, based on how a native 1080p frame is stitched together for output. That position is false at the logic level.

What are you trying to prove? That this is just like all those other upscaled games we've seen last generation and this one?
AFAIK not a single one of them was able to produce a real, non-upscaled full-resolution screen. Killzone can, if you don't move around... An upscale would still look blurry.

This is a new approach and it will have to be judged as time goes by versus the other solutions, but it seems like this would be an awesome solution for the XB1. I hope they didn't patent it, and that it isn't a special Sony hardware thingy they came up with, or that it requires extraordinary CU resources.

We have blurry games already; let's get something that looks better. And in games that aren't so frantic and fast-paced, it seems this technique would be even stronger... Interlacing, we didn't even know we missed you...
 
Upscaling involves taking the values of neighbouring pixels from the same frame and averaging them. Interlacing involves taking values from a different frame and dithering them together. The interlacing artefacts of striping seen in KZ are mathematically impossible with an upscale, hence we can be very clear that they are two different animals.

I'm not sure that upscaling, by definition, limits you to just a single 2D sample; the fact that you can utilize the z-value (not a neighboring pixel) for edge detection and enhance the scaling quality does not make it not "scaling".

Or, as DF called it, a "temporal upscale".

Obviously, judging by the quality, the MP is not just de-interlacing, which makes it that much more interesting. It's still resampling the pixels, which includes some form of convolution/blending. From what I gather, the buffer is still sampled at half the frequency; even if you get 2 frames, you won't be able to recover all the details, because at the source level, 2 "real" pixels were already sampled at 50% lower detail.

If I read correctly, though, you are merging a half-resolution frame with a full-resolution frame in your examples?

Note: I don't think dithering is the right term.
 
What are you trying to prove? That this is just like all those other upscaled games we have seen last and current generation?

Before you go and try to smear me with things that I never said, go back and read a little. Or let me refresh your memory: I freaking quantified the additional benefits with an analysis.

Let me say it again, once and for all: the technique provides a far superior final image, comparable to a 900p upscale, at almost 40% less pixel cost. It's really cool.
 
This is a new approach and it will have to be judged as time goes by vs the other solutions , but it seems like this would be a awesome solution to the xb1. I hope they didn't patent it.

I think the most important issue here is that the tech works best if the framerate is at 60 fps. That means the game engine has half the CPU time per frame to simulate the world, and this is a serious limitation.

Also, the amount of GPU power used is almost the same as it would have to be for a non-interlaced full-res image at half the frame rate (30 fps).
On top of that, the final frames still have to be moved around at 60 fps, so bandwidth requirements are somewhat greater than they would be at 30 fps full res.

So for RPG-type games and complex open-world games, it's much better to just go with full res and 30 fps. This tech is almost exclusively for 60 fps games.
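
To put numbers on the GPU side of that: 960 x 1080 x 60 = 62,208,000 fragments per second, exactly the same as 1920 x 1080 x 30 = 62,208,000. The shading work per second matches a full-res 30 fps game, while scan-out and the reconstruction pass still run at the full 60 Hz.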
 
Before you go and try to smear me with things that I never said, go back and read a little. Or let me refresh your memory: I freaking quantified the additional benefits with an analysis.

Let me say it again, once and for all: the technique provides a far superior final image, comparable to a 900p upscale, at almost 40% less pixel cost. It's really cool.

So in fact, the cat is actually a dog!?
 
To illustrate the merge reconstruction (naive):

2 4 8 8 2 4 6 0 8 6 : actual

(typical upscale)
3 3 8 8 3 3 3 3 7 7 : half sample, upscaled 2x, final
1 1 0 0 1 1 3 3 1 1 : deviation = 12

(deinterlace, naive)
3 8 3 3 7 : half sample, bias left
6 5 5 4 3 : half sample, bias right
3 6 8 5 3 5 3 4 7 3 : merged, naive
1 2 0 3 1 1 3 4 1 3 : deviation = 19

The interesting part is actually how the details are preserved:

2 4 8 8 2 4 6 0 8 6 : actual
2 4 0 6 2 2 6 8 2 0 : edge = 8, avg 4

3 3 8 8 3 3 3 3 7 7 : method 1
0 5 0 5 0 0 0 4 0 0 : edge = 3, avg 5

3 6 8 5 3 5 3 4 7 3 : method 2
3 2 3 2 2 2 1 3 4 0 : edge = 9, avg 2

Method 2 preserves more edges, though they are less sharp due to the lower original sampling.

Ya, ya, ya, I know, I know, but being able to show this mathematically is pretty cool.
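
For anyone who wants to check those rows, here is a small script reproducing them; the pair-averaging downsample and the zero-padded right bias are my reading of the numbers above:

Code:
def down(px, offset=0):
    # average neighbouring pairs, starting at `offset`, zero-padded at the end
    padded = px[offset:] + [0] * offset
    return [(padded[i] + padded[i + 1]) // 2 for i in range(0, len(padded), 2)]

def upscale2x(half):
    return [v for v in half for _ in range(2)]    # duplicate each sample

def merge(left, right):
    return [v for pair in zip(left, right) for v in pair]

def deviation(guess, truth):
    return sum(abs(g - t) for g, t in zip(guess, truth))

def edge_stats(px):
    # count nonzero neighbour deltas and their average size
    deltas = [abs(px[i + 1] - px[i]) for i in range(len(px) - 1)]
    nonzero = [d for d in deltas if d]
    return len(nonzero), sum(nonzero) / len(nonzero)

actual = [2, 4, 8, 8, 2, 4, 6, 0, 8, 6]
m1 = upscale2x(down(actual))                  # typical upscale
m2 = merge(down(actual), down(actual, 1))     # naive deinterlace merge
print(deviation(m1, actual), edge_stats(m1))  # 12 (3, ~4.7)
print(deviation(m2, actual), edge_stats(m2))  # 19 (9, ~2.4)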
 