Digital Foundry Article Technical Discussion Archive [2014]

Ever since I saw the Reach screenshots and learned that they were still not full 1280x720 images, I've been convinced that horizontal scaling is a perfectly valid option for video games. I actually wonder why Crytek chose to scale in both dimensions, although the results do seem to justify their approach.

And yeah, 1080p seemed to be far too much for these GPUs and bandwidths from the beginning. I've already said it - Last of Us looked totally amazing at 720p, no one complained about the image quality, so there's not much reason to waste performance on more pixels instead of better looking ones. Twice the fragment shading power will always look better than a somewhat sharper image with worse looking pixels.
 
I understand that resolution is very important to a lot of PC gamers. Resolution is also important in the console realm; no one wants to go back to 640x480 or lower. The thing that I don't understand is why console gamers these days put resolution above everything else when it comes to judging a game's visual performance. Last gen and the jump to HD did a lot of good for console gaming. I worry that with the current attitude of "resolution means everything" we could miss out on some amazingly beautiful and large gaming worlds. When is the expectation of resolution jumps between console gens going to stop?
4K tvs are going to be much more common in 6 years. Do we really need games to be that high def?
Personally, I would rather have amazingly realistic games that come close to cinema-quality visuals, where the worlds feel alive. If we could have quality like that, would it really matter if it only ran at 900p?
 
A 2x jump in pixels is easier to see as an improvement, compared to higher quality pixels at the same resolution.

I've also talked about diminishing returns well before the launch of these systems. Most people are just unable to see many of the subtle advancements like proper gamma-correct lighting pipelines and physically based shading and such, not to mention artistic differences, so it's easier for them to look for checklists and technical parameters when they want to see advancements.

I don't want to sound elitist here, but it really is a fact that sight is a skill that can and has to be trained, which usually takes years, and most gamers just don't get enough information or advice on what to develop and how, so they are sort of blind to the subtle stuff that most developers are trying to implement on these new systems. It's just natural that they get stuck on the easy to read parameters instead.

Fortunately many developers are much better than that and they put their focus on the stuff that really matters - and they do a good enough job to convince even these people. So the spec fans may not know why a game looks better, but they can still sense it... However they'll try to explain the advances with tech specs and bullet points again, missing the point.
 
Can anyone offer a detailed explanation of what this "temporal upscale" phrase means?
Is it just a term for composing the output from the current frame and the n-1 frame, or is there some voodoo happening with reprojection of the pixels?
 
Why did they choose to do a vertical interlace instead of a horizontal interlace? There's got to be more horizontal camera movement than vertical movement. This means they chose to make the game look worse just so the vertical resolution stayed at 1080. This seems to me like they were trying to lie to the public.
 
Is it just a term for composing the output from the current frame and the n-1 frame, or is there some voodoo happening with reprojection of the pixels?

Yeah it should be something like taking the Z and color info from the previous frame and trying to re-apply the colors to the new Z values and blending them with the new color info. At least that's what I'd try to do ;)

Differences at 30+fps should be small enough for the tech to work reasonably well...
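
In rough Python/numpy terms, that idea might look something like the sketch below. Everything here is assumed for illustration: the motion-vector convention, the depth tolerance and the 50/50 blend are not from any actual engine.

import numpy as np

def temporal_blend(curr_color, curr_depth, prev_color, prev_depth,
                   motion, blend=0.5, z_tol=0.01):
    # Reproject the previous frame's colors along per-pixel motion vectors,
    # reject samples whose depth no longer matches the new Z, and blend the
    # survivors with the current frame's colors. Purely illustrative.
    h, w = curr_depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # motion[y, x] = how far this pixel moved since the last frame (assumed).
    src_x = np.clip(np.rint(xs - motion[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys - motion[..., 1]).astype(int), 0, h - 1)
    reproj_color = prev_color[src_y, src_x]
    reproj_depth = prev_depth[src_y, src_x]
    # Keep history only where it still agrees with the new depth buffer,
    # which drops disoccluded pixels instead of smearing them.
    valid = np.abs(reproj_depth - curr_depth) < z_tol
    w_hist = blend * valid[..., None]
    return (1.0 - w_hist) * curr_color + w_hist * reproj_color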
 
Yeah it should be something like taking the Z and color info from the previous frame and trying to re-apply the colors to the new Z values and blending them with the new color info. At least that's what I'd try to do ;)

Differences at 30+fps should be small enough for the tech to work reasonably well...

I don't understand, please elaborate. What's the benefit of doing this versus, say, just stitching the final output together by alternating the even and odd lines?

It would make more sense to do a motion estimate and try to project the n-1 frame, and just double the lines from the current frame for areas that can't be filled from n-1.
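
Assuming that's the approach (and it's only a guess), a toy version for the vertical-interlace case could look like this: the real columns come from the current field, the skipped columns come from the motion-reprojected previous frame, and simple column duplication is the fallback wherever the reprojection lands off screen.

import numpy as np

def rebuild_frame(field, prev_full, motion_x, parity):
    # field:     (1080, 960, 3)  the columns actually rendered this frame
    # prev_full: (1080, 1920, 3) the reconstructed previous frame
    # motion_x:  (1080, 1920)    estimated horizontal motion in pixels
    # parity:    0 or 1, which set of columns 'field' holds
    h, w = prev_full.shape[:2]
    out = np.empty_like(prev_full)
    out[:, parity::2] = field                       # real columns from frame n
    missing = np.arange(1 - parity, w, 2)           # columns skipped this frame
    ys = np.arange(h)[:, None]
    src_x = np.rint(missing[None, :] - motion_x[:, missing]).astype(int)
    ok = (src_x >= 0) & (src_x < w)
    reproj = prev_full[ys, np.clip(src_x, 0, w - 1)]
    neighbor = missing - 1 if parity == 0 else missing + 1
    doubled = out[:, neighbor]                      # fallback: duplicate a real column
    out[:, missing] = np.where(ok[..., None], reproj, doubled)
    return out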
 
I guess it's that most movement is continuous in any video game, so there's a good chance that the camera had different coverage of the scene in the previous frame, and thus there's more info available when combining the current frame with the previous one. A complete change of direction in scene traversal would happen less than once a second most of the time, even in multiplayer.

So if you reproject pixel info from the previous frame, there's a good chance that you can display more info than what you could sample at just 960x1080. Also, because it's most likely that there was some slight movement, there's a good chance that the extra pixel info is mostly a good coverage of the columns that you've skipped rendering on the current frame.

I've never done or seen any measurements of how many pixels the camera movement in a 30 or 60fps game may amount to, but I imagine devs have done the math and decided to use these techniques because they're a good match to the actual data...
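
A rough back-of-the-envelope version is easy enough; the 90-degree FOV and the turn rates below are pure guesses rather than measurements from any shipped game:

import math

def pixels_per_frame(turn_deg_per_sec, fps, hfov_deg=90.0, width=1920):
    # Horizontal shift, in pixels, of a point near the screen centre when
    # the camera yaws by one frame's worth of rotation.
    r = math.radians(turn_deg_per_sec / fps)
    return (width / 2.0) * math.tan(r) / math.tan(math.radians(hfov_deg) / 2.0)

for turn in (2.0, 10.0, 45.0):          # slow drift, normal look, fast flick (deg/s)
    for fps in (30, 60):
        print(f"{turn:5.1f} deg/s @ {fps} fps -> "
              f"{pixels_per_frame(turn, fps):6.2f} px/frame")

With those made-up numbers, a slow aim shifts the image by around a pixel per frame or less, while a fast flick moves it by tens of pixels, which is presumably where any reprojection errors would start to show.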
 
So in reality it lies somewhere between an interlaced 1920x1080 (960 columns per frame) and a full progressive 1920x1080, but closer to the interlaced end?

It probably also depends on camera movement - i.e. if you stay completely still, it's closer to 960 columns, but if your movement speed is a good match then there's more information in the final frame and it gets quite close to 1920 columns.
 
But your original sample (from the previous frame) is offset by X pixels from the current one.
If you're lucky, X is 1, so it's an exact coverage of the empty columns of the current frame at 1920 columns.
If you're not so lucky, it can still offer extra information to patch up the "empty" columns of the current frame to some extent.

Obviously X is going to be a function of things like FOV and player movement speed and framerate. GG has probably measured that X is reasonably close to 1 at their player speed and target resolution + framerate; at least it probably offered better results compared to the other trade-offs available.
 
I'd alternate the 960 columns every frame so that
1) for stills it would be a perfect 1080p
2) the "interpolated" columns would be based on previous frame's "real" columns (in addition to current real columns?).
 
Obviously there are many possible ways to implement this tech and probably each approach offers unique advantages and disadvantages. The important point is that with a high enough frame rate, it would look better than a simple static 960x1080 buffer upscaled to 1920x1080 - and yet offer faster rendering times compared to a full 1920 buffer, and better overall image sharpness compared to a 1280x720 buffer.

I'd still like to see comparisons of 720p upscaled to full 1080p, 960x1080 upscaled to full 1080p, and 960x1080 with temporal reprojection upscaled to full 1080p; with maybe 1600x900 upscaled to full 1080p thrown in as well.
Obviously all would be inferior to a native 1920x1080 image, but I really wonder which trade off would offer the best image quality.
 
Obviously there are many possible ways to implement this tech
The approach is basically always to render frames at alternating offsets. You get perfect coverage in still shots, and reprojection based on stuff like motion buffers handles the rest reasonably naturally (none of this "pray that player movement just happens to usually put the pixels in the right place" shenanigans).

Of course, usually this general technique is used for temporal supersampling, so the "alternating offsets" are jumping between subpixels. For instance, Halo Reach alternates every other frame between two locations offset by a diagonal half-pixel; it doesn't do any reprojection (it attempts to avoid ghosting by only turning on AA blending when the motion buffer registers minimal movement at a pixel), and it blends the results in a quincunx pattern.
In the case of KZSF MP, it's likely that every other field is rendered with a half-pixel horizontal offset (if you're looking at the anamorphic 960x1080 buffer), or a full-pixel horizontal offset (if you're looking at it in terms of the final 1920x1080 buffer).
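
The motion-gated blend mentioned above for Reach is simple to sketch in a generic way; the threshold, the weight, and the uniform (non-quincunx) blend here are all placeholders, not the actual implementation:

import numpy as np

def motion_gated_blend(curr, prev_reproj, motion, max_motion=0.5, history_weight=0.5):
    # Blend in the jittered history only where the motion buffer says the
    # pixel has barely moved; elsewhere keep the current sample and accept
    # losing the AA there rather than ghosting.
    speed = np.linalg.norm(motion, axis=-1)               # pixels per frame
    w = np.where(speed < max_motion, history_weight, 0.0)[..., None]
    return (1.0 - w) * curr + w * prev_reproj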

I'd still like to see comparisons of 720p upscaled to full 1080p, 960x1080 upscaled to full 1080p, and 960x1080 with temporal reprojection upscaled to full 1080p; with maybe 1600x900 upscaled to full 1080p thrown in as well.
You'd really have to compare the results under different circumstances of scene makeup and motion.

This reprojected interlacing thing, barring any really funky approximations and funny business, should look exactly like a raw 1920x1080 render for still shots.
 
But your original sample (from the previous frame) is offset by X pixels from the current one.
If you're lucky, X is 1, so it's an exact coverage of the empty columns of the current frame at 1920 columns.
If you're not so lucky, it can still offer extra information to patch up the "empty" columns of the current frame to some extent.

Obviously X is going to be a function of things like FOV and player movement speed and framerate. GG has probably measured that X is reasonably close to 1 at their player speed and target resolution + framerate; at least it probably offered better results compared to the other trade-offs available.

The problem is that the original sample only has half the resolution at 960; even when you offset it, you don't gain the pixels that were originally missing. Can you elaborate on your color-remap-to-Z technique? I'm having a hard time understanding it.
 
That's precisely the problem: people actually knew something was off, but they still shrug it off when it comes to "their" platform, while they'd put the same shit under a microscope when it's not.

I think GT4 did the same trick on the PS2 for 1080i?
I wonder, given 1080p output, how would the following compare:

1. 960 x 1080, stretched 2X horizontally
2. 1358 x 764, scaled up
3. 960 x 1080, with frames blending

The assumption is that the costs are comparable (they might not be). Obviously GG chose 3 over the other 2, but is that because it's the best looking solution?

Their platform? I don't get it. If they were told this is 1080, which wasn't even a complete lie, then I can understand that they would blame the blur on something else. We have seen examples where the AA made the claimed high resolution questionable. When it then turned out a wacky resolution was to blame, I think their point was proven, as I said, even if they didn't know.

I am not sure these machines are really capable of full 1080p; I would question it for MP titles as we get deeper into the generation.
 