Digital Foundry Article Technical Discussion Archive [2014]

While standing still, you can always achieve 100% coverage with your samples.

I welcome anyone to prove mathematically that you can perfectly reconstruct a full frame from 2 half-sampled frames. ;)

I think what you're getting hung up on is that you're visualizing pixel coverage rather than sample points.

Please go read about rasterization. What you are describing is not possible, because that's not how it works, and you'll understand what I am saying once you grasp the subject. I am calling you out because you are trying to argue about something you lack knowledge of, and I am not good at bringing you up to speed so we can have a constructive technical discussion.

To put it another way: essentially you are arguing that you can take two 1 MP photos of a scene and perfectly reconstruct a single 2 MP photo of the same scene.
Explain to me: how, in real life, do you take 2 photos so that one has the odd lines and the other has the even lines?
 
So what exactly are we arguing about in this thread?

It seems pretty simple really: Guerilla have discovered an interesting, credible alternative to reducing resolution to maintain framerate.
 
I welcome anyone to prove mathematically that you can perfectly reconstruct a full frame from 2 half-sampled frames. ;)
If you're defining one of the half-sampled frames to be the odd samples of the full frame, and the other half-sampled frame to be the even samples of the full frame, it's trivially obvious.

So, if I manage to convince you of the plausibility behind the premise that it is possible to produce two such half-sampled frames, that will hopefully be sufficiently proofish.
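For illustration, here's a minimal sketch of that trivial case (a made-up NumPy example of mine, not anything from the article): if the two half buffers really are the odd and even columns of the same full-resolution frame, interleaving them back is lossless.

Code (Python):
import numpy as np

# Hypothetical "native" 1920x1080 frame (3 colour channels).
full = np.random.rand(1080, 1920, 3)

# Define the two 960x1080 half buffers as its even and odd columns.
even_field = full[:, 0::2]   # columns 0, 2, 4, ...
odd_field  = full[:, 1::2]   # columns 1, 3, 5, ...

# Interleave them back into a 1920-wide frame.
recon = np.empty_like(full)
recon[:, 0::2] = even_field
recon[:, 1::2] = odd_field

print(np.array_equal(recon, full))   # True: exact reconstruction, no loss

The argument, of course, is about whether two such buffers can be rendered in the first place, which is what the rest of this thread is about.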

Please go read about rasterization
What about it do you think I need to learn? Anywhere to point me to? As I noted in that last post, I know a fair bit about rasterization already.

Explain to me: how, in real life, do you take 2 photos so that one has the odd lines and the other has the even lines?
Assuming you had a static scene, and the pixel sampling plate behind your lens had half its lines disabled?
You could do it by taking a picture, slightly rotating your camera, and then taking another picture. Obviously you'd have to make sure the lens and the rotational amount were set up so that the pixels aligned "correctly" between the two shots, and real-world considerations would make it somewhat difficult to actually carry out on the fly, but the basic concept is pretty straightforward.

So what exactly are we arguing about in this thread?
People have claimed that for a static scene, the information from two offset 960x1080 buffers can be used to exactly produce a "correct" native 1920x1080 buffer.

Tatsui disagrees with the claim that it's actually possible to render a 960x1080 buffer which is the odd pixels, and at a separate time render a 960x1080 buffer which is the even pixels. Even for a static scene.
 
People have claimed that for a static scene, the information from two offset 960x1080 buffers can be used to exactly produce a "correct" native 1920x1080 buffer.

Well, you'd think that'd be obvious...

Tatsui disagrees with the claim that it's actually possible to render a 960x1080 buffer which is the odd pixels, and at a separate time render a 960x1080 buffer which is the even pixels. Even for a static scene

If Tatsui thinks it's impossible to render an interlaced buffer, what does he think Guerilla is doing?
 
I just thought of an example; let me ask you this. Imagine there are 1920 vertical lines in the scene: the odd lines are black and the even lines are white, like stripes.

You think there is a way to take 2 photos (960 wide), one completely black and the other completely white, and stitch them back into the stripes. And I am saying all of them will be gray.
 
And I am saying all of them will be gray.
If you generated an anamorphic, perfectly-supersampled 960x1080 representation of the scene, then yes, all the lines would be grey.

But we're not talking about anamorphic images. We're talking about buffers generated for interleaved combination.

You're imagining that the pixels in the half-size buffer would concern themselves with wide coverage zones, maybe a bit like this.
For "correct" interleaved rendering, they would concern themselves with non-anamorphic coverage zones, just like in a "full-res" render, more like this.
(Obviously in this example, a perfectly-supersampled result would fade to grey anyway when the pixel centers are in the middle of the transition between stripes. But that's not a weakness in the approach of interleaved rendering, it applies fully to a full-res render as well; it's a plain old resolution problem.)

This is the reason I responded to your camera example by saying that half the lines in the camera would be disabled; that's the decision that makes a real-world camera example analogous to interleaved rendering. You're NOT rendering anamorphic buffers, where each pixel in the buffer is designed with a 2x1-pixel coverage in the output; you're rendering a buffer with the even pixels and a buffer with the odd pixels.
Now, I don't know, it's possible that GPU hardware makes this a bit flaky in the real world; you'd have to ask someone else whether or not it's possible to make something like TMU or ROP multi-tap sample patterns play 100% correctly with this. But the concept is solid.
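Here's a small numerical sketch of the stripe thought experiment (my own hypothetical example, nothing Guerilla published). Point-sampling the striped scene at the even pixel centres gives one 960-wide buffer, point-sampling at the odd pixel centres gives the other, and interleaving them reproduces the stripes exactly; only the anamorphic buffer, where each pixel averages a 2-pixel-wide area, comes out grey.

Code (Python):
import numpy as np

def scene(x):
    # Colour of the striped scene at horizontal position x:
    # even-numbered stripes are white (1.0), odd-numbered stripes are black (0.0).
    return (np.floor(x).astype(int) % 2 == 0).astype(float)

width = 1920
centres = np.arange(width) + 0.5            # pixel centres of a native 1920-wide render
native = scene(centres)                      # point-sampled native render: 1,0,1,0,...

# Two interleaved 960-wide buffers: the same sample points, just split even/odd.
even_field = scene(centres[0::2])            # all white
odd_field  = scene(centres[1::2])            # all black

recon = np.empty(width)
recon[0::2] = even_field
recon[1::2] = odd_field
print(np.array_equal(recon, native))         # True: the stripes come back exactly

# Anamorphic 960-wide buffer: each pixel instead averages a 2-pixel-wide area.
anamorphic = 0.5 * (scene(centres[0::2]) + scene(centres[1::2]))
print(anamorphic[:4])                        # [0.5 0.5 0.5 0.5] -> uniformly grey

The difference between the two 960-wide renders is purely where the samples are taken, not how many samples each buffer holds.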
 
We are talking about exactly that, because rasterization is taking a virtual photo of the digital world... which is why I brought up the camera example, but it looks like that didn't help.

Or to put it this way: this perfect interleaved buffer does not exist, at all.
 
We are talking about exactly that, because rasterization is taking a virtual photo of the digital world... which is why I brought up the camera example, but it looks like that didn't help.

Or to put it this way: this perfect interleaved buffer does not exist, at all.
.. What?

Basically you are saying the following is impossible:
[attached image: j3A6a5f.jpg]

+
[attached image: 8unelQX.jpg]

cannot be used to create the following?
[attached image: zZIYBEC.jpg]

Simply point-sample scale each one horizontally, create the interleave pattern and blend, or alternatively just don't write every other column.

Also, you apparently didn't read any of the papers I linked which show that interleaved buffers are commonly used in realtime graphics..
 
I just thought of an example; let me ask you this. Imagine there are 1920 vertical lines in the scene: the odd lines are black and the even lines are white, like stripes.

You think there is a way to take 2 photos (960 wide), one completely black and the other completely white, and stitch them back into the stripes. And I am saying all of them will be gray.

But there was a screenshot linked that was from MP and it didn't look blurred or low-res?

It seems you are arguing that what has already been demonstrated as working isn't?
 
jlippo, your 2 shots came from a full image; without the full image you can't create those 2 shots. Your links are irrelevant, mathematically it's impossible, feel free to prove me wrong.
 
But there was a screenshot linked that was from MP and it didn't look blurred or low-res?

It seems you are arguing that what has already been demonstrated as working isn't?

The visual quality is good, but it is blurred. The fact that they only render 50% of the pixels and retain more detail is quite a feat, but it still doesn't pass for native.
 
I welcome anyone to prove mathematically that you can perfectly reconstruct a full frame from 2 half-sampled frames. ;)
In your argument you've missed one vital fact: the sampling offset can change between frames. You render every odd pixel on every odd field. You render every even pixel on every even field. If the odd field and even field are from the same image, you perfectly reconstruct the alternating odd and even pixel data, exactly as you'd get by sampling continuously across the image.

Going back to my earlier visual representation, you said,
If I read correctly, you are merging a half-resolution frame with a full-resolution frame with your examples, though?
I was showing what the source data at 1080p should look like, and what the data rendered using either trick was. You wouldn't render all the pixels in a field and then replace half of them - that'd be a complete waste of time!

You think there is a way to take 2 photos (960 wide), one completely black and the other completely white, and stitch them back into the stripes. And I am saying all of them will be gray.
Why would you average the two values when sticking them back together? Just draw black line, white line, black line, white line. An upscale (render or sample at half res) would render all black as it only samples every other line.

Or in a concise summary, you've got it all wrong. ;) You've misunderstood the interlacing method and what it's doing. Your original analysis of the image quality was incorrect and subsequent arguments aren't valid.
 
jlippo, your 2 shots came from a full image; without the full image you can't create those 2 shots. Your links are irrelevant, mathematically it's impossible, feel free to prove me wrong.
I'm pretty sure that the GPU has all the scene information necessary to render an image.
Thus it can sample it in any way it needs to.

I did show how you can combine two 960x1080 images with sample centers in slightly different locations to form a full 1920x1080 image.

I would love to hear what you mean by the mathematically impossible part you repeat.

edit:
Actually, it seems like you propose that when rendering an image, the final color of a pixel comes from all the information within the volume of a pixel pyramid (the area that the pixel covers in world space).
This is not the case when rendering an image using rasterization; each sample is a single point in space.
 
[attached diagram: V0bi4sp.jpg]


The black line is your scene, the red lines are samples at 1920 wide, and the blue lines are samples at 960.
The scene is the same, and you are losing detail by sampling at 960 pixels per frame.
To compensate for the loss of detail, you judder the camera to sample it differently and get more detail out of it; the green lines represent the next frame.
This image is incorrect, because it assumes the computer-generated image is a continuous signal and pixel colors are continuous integrals of the light coming to the pixel. If that were the case, we would also have perfect anti-aliasing, but unfortunately the rasterization process samples only an infinitely thin sampling point in the middle of each pixel, and thus we get an aliased result.

This also means that if we sample odd/even 1080p pixels every other frame and combine them together, the result is perfect 1080p when the image doesn't move (with all the same aliasing artifacts that native 1080p has). No blurring is added at all. The image is also perfect whenever the scene scrolls sideways (just scrolling, not any other movement). When something else happens, the reconstruction starts to become lossy. But still, a well-designed interlacing algorithm that uses all the internal scene data (to generate perfect motion vectors) at (half) 1080p would beat upsampled 900p in image quality most of the time, at a lower pixel processing cost.
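To put the "same aliasing, no blur" point in numbers, here is a hypothetical toy example of my own (not from the post above): give the scene detail finer than a pixel, so any point-sampled render aliases, and the combined odd/even fields of a static scene still come out bit-identical to the native render.

Code (Python):
import numpy as np

def scene(x):
    # Scene with detail finer than a pixel: stripes 0.7 pixels wide,
    # so any point-sampled render of it necessarily aliases.
    return (np.floor(x / 0.7).astype(int) % 2).astype(float)

width = 1920
centres = np.arange(width) + 0.5

native = scene(centres)                      # aliased, as any point-sampled render is

# One frame renders the even pixel columns, the next frame the odd ones.
even_field = scene(centres[0::2])
odd_field  = scene(centres[1::2])

combined = np.empty(width)
combined[0::2] = even_field
combined[1::2] = odd_field

# For a static scene the combination is bit-identical to the native render,
# aliasing artifacts and all -- the interleaving itself introduces no blur.
print(np.array_equal(combined, native))      # True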
 
I welcome anyone to prove mathematically that you can perfectly reconstruct a full frame from 2 half-sampled frames. ;)


Judging by their own explanation, they do it not from 2 but from 3 960x1080 frames, plus one 1920x1080 frame for the temporal AA.

So 3 temporal "history" 960x1080 frames with pixels storing motion vectors, plus one full-HD frame for the temporal AA. Seems very expensive :oops:

Apparently it's meant to improve input latency.

The temporal reprojection technique gave subjectively similar results and it makes certain parts of the rendering process faster. This reduces controller lag and increases responsiveness, which improves the KILLZONE SHADOW FALL multiplayer experience.
 
OK, my understanding in short: they do a 960x1080 render and nudge the sampling every other frame. This WILL create a perfect 1920x1080 image for static scenes. This is not unlike how an interlaced video feed can display a static scene at full resolution. In motion, they use motion information, as hinted at by Sebbbi in another post, to re-use information by moving pixels to their predicted new locations in the new frame, to line up with the actual rendered set of pixels. If they can't predict it, they use information from neighbouring pixels, which would be blurry, but only locally.
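As a very rough sketch of that idea (my own hypothetical toy in NumPy, not Guerilla's actual algorithm; the function and parameter names are made up): render half of the columns each frame, fill the other half from the previous reconstructed frame fetched along per-pixel motion vectors, and fall back to a neighbouring rendered column where the reprojection fails.

Code (Python):
import numpy as np

def reconstruct(current_field, prev_full, motion_x, frame_parity):
    # Toy reconstruction of a full-width frame from a half-width field.
    # current_field: (H, W/2) columns rendered this frame
    #                (even output columns when frame_parity == 0, odd when 1).
    # prev_full:     (H, W) previous reconstructed frame (float).
    # motion_x:      (H, W) horizontal motion in pixels for each output pixel.
    h, half = current_field.shape
    width = half * 2
    out = np.full((h, width), np.nan)

    # 1) Write the columns that were actually rendered this frame.
    out[:, frame_parity::2] = current_field

    # 2) Fill the missing columns by fetching the previous frame along the
    #    motion vector (nearest-neighbour fetch, for simplicity).
    missing = np.arange(1 - frame_parity, width, 2)
    rows = np.arange(h)[:, None]
    src = np.rint(missing[None, :] - motion_x[:, missing]).astype(int)
    valid = (src >= 0) & (src < width)
    fetched = prev_full[rows, np.clip(src, 0, width - 1)]
    out[rows, missing[None, :]] = np.where(valid, fetched, np.nan)

    # 3) Where the reprojection failed (source off-screen), fall back to a
    #    neighbouring column that was rendered this frame.
    bad_r, bad_c = np.nonzero(np.isnan(out))
    neighbour = np.where(bad_c + 1 < width, bad_c + 1, bad_c - 1)
    out[bad_r, bad_c] = out[bad_r, neighbour]
    return out

With motion_x all zero (a static scene) this degenerates to straight interleaving of the two fields, which is exactly the "perfect for static scenes" case discussed above; the interesting and lossy part is everything that happens when the motion prediction isn't exact.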
 
But still, a well-designed interlacing algorithm that uses all the internal scene data (to generate perfect motion vectors) at (half) 1080p would beat upsampled 900p in image quality most of the time, at a lower pixel processing cost.

But it'd work much worse at 30fps, wouldn't it?
 