Digital Foundry Article Technical Discussion Archive [2014]

Discussion in 'Console Technology' started by DieH@rd, Jan 11, 2014.

Thread Status:
Not open for further replies.
  1. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,576
    Likes Received:
    16,031
    Location:
    Under my bridge
    You can, easily. When you render the scene, you sample at discrete points. The first frame you sample at relative positions 0,2,4,6 etc. The next frame, you sample at points 1,3,5,7 etc. This is absolutely doable in game. It's trivial. You can just offset the camera's horizontal position by a half pixel between each 960x1080 frame. The first frame, your camera sees only the black lines and skips over the white lines because the horizontal sampling frequency isn't high enough to capture them. Next frame, shift the camera right half a pixel (1/1920th of the screen width). Now you are sampling the white lines and skipping over the black lines. In the recombination step, draw the black lines at their positions from the first frame and the white lines at their positions from the second.
    The blog's doing exactly what I and others have described, except with the (considerable) addition of tweaking the interleaved values and not leaving them at their original, untouched positions. This reduces the deviation from the absolutes that I had in my example because the deviation is processed to be more in line with what we'd predict/expect the values to be, rather than what they were. In essence, the alternate pixel values are only interlaced when deviation is close to zero. When the deviation increases, the processing increases and we end up not using the interlaced data but computed pixel values, similar to computing AA or upscale values.
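    The even/odd sampling argument above is easy to check numerically. A minimal 1-D sketch (Python/NumPy; the 1920-column scene of alternating black and white lines is a stand-in, not anything from the actual engine):

```python
import numpy as np

# Stand-in scene: 1920 columns of alternating black (0) and white (1) lines.
scene = np.tile([0, 1], 960)

# Frame A point-samples relative positions 0,2,4,...; frame B is the same
# camera shifted half a full-res pixel, so it samples positions 1,3,5,...
frame_a = scene[0::2]  # 960 samples: all the black lines
frame_b = scene[1::2]  # 960 samples: all the white lines

# Recombination: interleave the two fields back into a 1920-wide buffer.
recombined = np.empty(1920, dtype=scene.dtype)
recombined[0::2] = frame_a
recombined[1::2] = frame_b

# For a static scene, the reconstruction is exact.
assert np.array_equal(recombined, scene)
```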
     
  2. -tkf-

    Legend

    Joined:
    Sep 4, 2002
    Messages:
    5,633
    Likes Received:
    37
    Talking about attitude? Maybe take it up a notch to a higher altitude, or stay down and go after those who made a big deal of this on this forum (BF4?). Unless those threads are locked, there should be plenty to quote and post about in them. Rub it in, show them who was wrong and who was right..?

    I'm totally missing the point here; I just don't see the big fuss...
     
  3. Billy Idol

    Legend Veteran

    Joined:
    Mar 17, 2009
    Messages:
    6,007
    Likes Received:
    849
    Location:
    Europe
  4. HTupolev

    Regular

    Joined:
    Dec 8, 2012
    Messages:
    936
    Likes Received:
    564
    The point of AF is approximating a supersampled texture result over the entirety of a pixel's coverage, as you suggest.

    But that's the point; you have to yell and scream at the world to get it to account for coverage over a pixel. It's actually much, much easier to take a discrete point sample than to get a supersampled average color result for a pixel. That's why early 3D titles had no texture filtering and no AA; the color result for a pixel was basically just the texture color on the object at that location, maybe modified by some extremely simple shading if you're lucky.
    Thankfully we have GPU features that help us do this stuff more easily (like configurable TMUs which accelerate the texture-filtering process in hardware).

    The blog describes exactly what GG is doing, albeit using more than just the last field when determining what to reproject.

    They're also using standard upscale on pixels which don't have anywhere to reproject from, to avoid massive artifacts in areas where the interleaved reprojection would have otherwise fallen apart completely.

    In any case, I thought it was well understood in this discussion that the naive "interlace" we were discussing was a highly simplified version of what's happening. I thought we were using it because you were disputing the basic claim that it's possible to get a 1080p native result from small buffers, and it's easier to discuss the matter with a simple case.
    So yes, the actual solution GG is using is more complex, but not conceptually very different.
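    The recombination policy described here (reproject where history exists, fall back to upscaling where it doesn't) can be sketched in a toy 1-D form. The `valid` mask is hypothetical, standing in for whatever motion/occlusion test decides a reprojected pixel is usable:

```python
import numpy as np

def recombine(current_field, prev_field, valid):
    """Interleave this frame's 960-wide field with reprojected history;
    where history is unusable, duplicate the neighbouring current column
    (a crude stand-in for a standard upscale)."""
    out = np.empty(current_field.size * 2, dtype=float)
    out[0::2] = current_field                  # this frame's columns
    out[1::2] = np.where(valid, prev_field,    # history where usable,
                         current_field)        # else nearest-column upscale
    return out

cur = np.array([0.0, 0.2, 0.4, 0.6])
prev = np.array([0.1, 0.3, 0.5, 0.7])
mask = np.array([True, True, False, True])     # third history pixel rejected
out = recombine(cur, prev, mask)
```

In the rejected slot the output simply repeats the adjacent current-frame value, which is why such regions look upscaled rather than full-res.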
     
  5. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    Well, I don't dispute this, because that's what I said: you need to judder the camera, yes? To reconstruct better detail out of a static scene? And when it's moving and you get your motion compensation, you just get better sampling, no?

    But even with that, given a infinitely high resolution scene, color:
    0202020202

    You are saying that you can sample
    00000 in frame 1, and 22222 in frame 2, by juddering the camera.

    I'm saying you cannot, because of texture filtering, and your end result would be something like 1111111111. Sure, you can do point sampling on the texture, but then you are trading detail/noise for shimmering, no?

    That's not what I said, no?

    So like, we are talking about the same thing but then you think I'm wrong? I seriously can't express myself clearly.
     
  6. TheWretched

    Regular

    Joined:
    Oct 7, 2008
    Messages:
    830
    Likes Received:
    23
    Just buy a progressive DVD. They've done it for a LONG while actually.
     
  7. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    So I'm gonna stop you right there.
    You think that, with supersampled texture filtering, you will be able to get as much detail as native by merging 2 offset samples?

    Let's revisit the math:

    0204060802040608020 <- assuming a highly detailed scene, in numbers
    If you sample, with texture filtering, to form your first "native" shot

    You get:
    12341234, do you dispute this?

    So, let's try sampling the 2 offset frames, at half the resolution, from the scene:
    2424, frame 1, do you dispute this?
    2424, frame 2, juddered by 1 original texel, or preferably,
    3333, a better frame 2, when juddered by 2, do you dispute this?

    Combined, naively:
    23432343, when your original is
    12341234

    The resulting image would look very good in terms of reproducing the native, but that's not to say it's a perfect reconstruction, and the numbers do show that it's blurred.

    You can try point sampling on the texture, but I think it'll just give a worse result.
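    The blur in this argument can be reproduced numerically. A sketch using a simple box filter (the exact values differ from the post's, since they depend on the filter taps, but the conclusion is the same: wide half-res footprints average the detail away):

```python
import numpy as np

# A detailed "scene": 16 texels of alternating 0s and rising values.
scene = np.array([0, 2, 0, 4, 0, 6, 0, 8] * 2, dtype=float)

# "Native" render: each of 8 full-res pixels box-filters 2 texels.
native = scene.reshape(8, 2).mean(axis=1)

# Half-res render: each of 4 pixels box-filters 4 texels; the second
# frame is juddered by 2 texels before filtering.
half_a = scene.reshape(4, 4).mean(axis=1)
half_b = np.roll(scene, -2).reshape(4, 4).mean(axis=1)

# Naive interleave of the two half-res frames.
combined = np.empty(8)
combined[0::2] = half_a
combined[1::2] = half_b

# Nonzero mean error vs native: the combined image is blurred.
blur = np.abs(combined - native).mean()
```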
     
  8. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    I think you are missing the point. When your instantaneous moment only contains half the samples, and you try to fill in the gaps from the past frame/field, you get a good-quality reconstruction, but the instantaneous details lost are still lost, because they hadn't happened yet in the past frames. Even when the scene is static and you offset the camera, sampling at the lower resolution means that details are lost and can't be recovered.
     
    #808 taisui, Mar 6, 2014
    Last edited by a moderator: Mar 6, 2014
  9. betan

    Veteran

    Joined:
    Jan 26, 2007
    Messages:
    2,315
    Likes Received:
    0
    Assuming you can sample odd and even columns separately, do you dispute that a stationary scene would be the same as real 1080p sampling?

    I find your texture filtering argument quite interesting, so let's simplify this question as well.
    No AF, no PP, just basic rasterisation (pixel shading, texturing w/ mipmaps etc.): is it possible to render even columns only, for example by appropriately changing the mipmaps?
     
  10. Lucid_Dreamer

    Veteran

    Joined:
    Mar 28, 2008
    Messages:
    1,210
    Likes Received:
    3
    DLP TVs have been doing this for years! DLPs (1080p) scan half the image in and then make a second pass for the other half. The regulation for 1080p is that the full image be on screen within a certain time. Since that time is met by DLP TVs and KZ:SF, they are considered 1080p. ;)
     
  11. Cjail

    Cjail Fool
    Veteran

    Joined:
    Feb 1, 2013
    Messages:
    2,027
    Likes Received:
    210
    This is all I have to say:

    A lot of people "judged" SF (but also Ryse, BF4, etc...) based solely on the resolution, the frame-rate, or the polygon count but have never actually played it.
    Don't follow their example ;)
     
  12. HTupolev

    Regular

    Joined:
    Dec 8, 2012
    Messages:
    936
    Likes Received:
    564
    Well, obviously unfiltered texture sampling isn't ideal.

    However, "not using any filtering" and "treat the pixels in the 960x1080 buffer as though they have anamorphic coverage" are not the only two options. This is the point I've been trying to make through this whole discussion: you want to make graphical choices for the pixels in the 960x1080 buffer as though they were in the 1920x1080 buffer. In the case of texture filtering, that means using a slimmer sample pattern.

    For instance, if you're using basic bilinear or trilinear filtering, you would simply apply a negative LOD bias; it'll look undersampled and cause aliasing if you're just upscaling the 960x1080 buffer, but it'll look correct when interleaved with the other field into the full 1920x1080 buffer. Things probably get a bit more complicated when AF gets involved, because then you have to start thinking hard about sample pattern, but there's no theoretical reason that you couldn't sample textures for your pixels as though they were part of a full-res buffer.
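    A back-of-envelope sketch of the LOD-bias point (using a deliberately simplified, hypothetical mip-selection formula: lod = log2 of the larger per-axis texel footprint, ignoring anisotropy):

```python
import math

# Simplified mip selection: lod = log2(max texels covered per pixel, per axis).
def mip_lod(texels_per_pixel_x, texels_per_pixel_y, bias=0.0):
    return math.log2(max(texels_per_pixel_x, texels_per_pixel_y)) + bias

# Full-res pixel: one texel per pixel in each axis -> sharpest mip (lod 0).
full_res = mip_lod(1.0, 1.0)

# Halving horizontal resolution doubles the horizontal footprint,
# pushing the sampler one mip blurrier...
half_width = mip_lod(2.0, 1.0)

# ...which a -1 LOD bias undoes, so each 960x1080 pixel samples the
# texture as though it sat in the full 1920x1080 buffer.
biased = mip_lod(2.0, 1.0, bias=-1.0)
```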
     
  13. TheWretched

    Regular

    Joined:
    Oct 7, 2008
    Messages:
    830
    Likes Received:
    23
    A movie DVD is actually a progressive image, interlaced for DVD pressing and reconstructed on playback. I know it's an unfair comparison of sorts, but it shows that you can deconstruct an image perfectly into 2 half fields and still have full resolution afterwards (albeit, in this case, at half the framerate of sorts).

    So, to make my comparison work: render all odd pixels in frame one, and all even pixels a frame later. With no movement, you get the same scene, just shifted by one pixel. Now reconstruct the even and odd pixels, the same as you would if you'd authored a DVD. There's NO loss happening here.

    In movement, we have to use movement data to shift the previous fields according to the camera movement. Might work well, might not work at all. H.264 does this to save data, too; just not in fields, but with the whole image. Why redraw (in data) the whole image if the camera just moved a pixel to the left?

    It looks like KZ's full 1080p frame is a keyframe of sorts, to reduce the loss from movement/reprojection artifacts.

    Your argument is invalid against my comparison. My comparison was for static scenes; or rather, on DVDs, it uses twice the framerate to reconstruct the image (as if the game were rendered at 120Hz). Of course your image information is often wrong; nobody is questioning that. If in frame A just the background is there, and in frame B a unit is spawned in front of you, the old frame will have little detail on that new unit, but the rest around it will still be perfect. Thus you need to "blur the lines" to avoid a combing effect between the frames. But that is the point of this technique.
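    The static-scene half of this claim (split a progressive image into fields, weave them back, lose nothing) is trivially checkable. A sketch, with a random array standing in for a frame and column fields rather than the row fields a DVD actually uses, to match the KZ case:

```python
import numpy as np

# A progressive "frame" (random data standing in for an image).
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(4, 8))

# "Authoring": split into even- and odd-column fields.
even_field = frame[:, 0::2]
odd_field = frame[:, 1::2]

# "Playback": weave the two fields back together.
woven = np.empty_like(frame)
woven[:, 0::2] = even_field
woven[:, 1::2] = odd_field

# With no movement between the fields, the split is lossless.
assert np.array_equal(woven, frame)
```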
     
  14. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    Assume that you can; I don't dispute the conditional.
    I dispute the claim, though; that's what I'm getting at.

    That's a good argument: the less detail there is in the original scene, the better you can reconstruct it, because there's less to reconstruct. I actually don't dispute this, but isn't this skipping a more accurate model and trading it for more artificial detail?

    Texture filtering is just one dimension. When you change the camera, even ever so slightly, don't you change the light as observed at the pixels, at least in some extreme cases, like at a tangent? It's probably not detectable with human eyes, which is why it works pretty well, but that's not to say you can reverse it pixel-for-pixel perfectly?

    I don't dispute the fact that this is quite a feat, given the quality obtained and the amount of computation saved. I dispute the claim that this is somehow exactly equal to what you get with native 1080p60 (maybe it's obvious; it just seems to me that a few people are still yammering about it).
     
  15. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    I don't disagree with you on this: you'll get some pixels that are perfect, some pixels that are close, and some pixels that are unusable, which you'll blend.

    Since nobody is saying that it's as good as native 1080p60, I'll just have to ignore the nobody(s) then :roll:
     
  16. HTupolev

    Regular

    Joined:
    Dec 8, 2012
    Messages:
    936
    Likes Received:
    564
    I'm not sure why you're making this quip, because it's true that nobody is claiming otherwise. What people are claiming is that you can get equivalent results for static scenes, which isn't the same thing as claiming that it's always as good as native 1080p60.
     
  17. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    I think the bias can compensate for some of the loss of detail, but I don't think it perfectly solves the problem, which itself is not trivial. And there's a self-defeating aspect here: if this ends up being more expensive than just rendering natively, then you'd just render natively, because it's faster.
     
  18. dobwal

    Legend Veteran

    Joined:
    Oct 26, 2005
    Messages:
    5,494
    Likes Received:
    1,578
    @taisui

    are you basically saying it's impossible to generate a native-like 1080p image from multiple 960x1080 images without judder, because you would simply be sampling from copies of the same 960x1080 image?
     
  19. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    Except that the scene is technically not static, because the camera got moved to create the offset.
     
  20. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    I suppose that's what I'm trying to say? It seems like everyone is interpreting what I'm saying differently??
    I'm also saying it's not going to be a perfect reconstruction, because of filtering.

    I don't even know at this point :roll:
     
    #820 taisui, Mar 6, 2014
    Last edited by a moderator: Mar 6, 2014


  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.