Image Quality and Framebuffer Speculations for WIP/alpha/beta/E3 games *Read the first post*

First, to get Shifty's point across: what he is stating is that the game not only operates at a lower native res but is also using a filter of sorts.

900p + filter.

What I'm saying is that it's 900p supersampled (hence Yerli's native framebuffer claim).

900p supersampled > 900p + filter

The proof is this:

http://oi41.tinypic.com/aay9e0.jpg

VS

http://oi44.tinypic.com/20ae6bk.jpg
 
You misunderstand what's happening. Namco are rendering at >720p resolution and downsampling. This adds supersample antialiasing (a good thing). The target output is 720p. When that 720p image is then upscaled for higher resolution displays, the result is inferior to using the native 1365x960 framebuffer the game is rendered at. Quote DF:
It strikes me as a real shame that the 720p downscale has been included at all in the 360 code, when that extra resolution could have been used for all manner of better things - for example, an improved image at 1080p, better picture quality on 1360x768 LCDs and plasmas or a cracking picture on the forthcoming 1440x900 mode coming to the next dash update. Instead, all that effort seems to have been pretty much wasted.
Downsampling from a higher resolution to a lower one is a good, if expensive, form of AA. Upscaling from that downsample is just a stupid waste that adds blur and is worse than using the higher-resolution render without downscaling.

And the result was better than 720p with blur filters like Portal 2. Proof.
Of course it is. But the result is worse than outputting at native.

If you compare game renders:

1) Render at 800p, downsample to 720p, output to 720p TV
2) Render at 720p, output to 720p TV
3) Render at 720p, blur, output to 720p TV

1 will generally look the best. It also costs the most to render.

For Ryse:

1) Render at 1080p, display on a 1080p TV
2) Render at 1080p, downsample to 900p, upscale to 1080p on 1080p TV
3) Render at 900p, upscale to 1080p TV

Option 2 does nothing good. Option 1 provides the most clarity but costs the most. Option 3 is softer than option 1 but faster to render. Option 2 looks soft and blurry like option 3 but costs as much as, or even more than, option 1!
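To make the pipeline concrete, here's a minimal sketch of what option 2 amounts to, using Pillow and a made-up capture file (purely illustrative; not anything Crytek actually do):

```python
# Illustrative only: option 2 shades a full 1080p frame and then throws
# detail away. "ryse_1080p.png" is a hypothetical file name.
from PIL import Image

frame = Image.open("ryse_1080p.png")  # option 1: use the 1080p render as-is

option2 = (frame
           .resize((1600, 900), Image.Resampling.BOX)         # downsample to 900p...
           .resize((1920, 1080), Image.Resampling.BILINEAR))  # ...and scale back up

# option2 paid the full 1080p render cost (plus two resizes), yet resolves
# no more detail than a straight 900p upscale (option 3).
```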

It isn't worded that way though. The upscaler IS the AA solution they say they're using.
It's a tweet from a non-English speaker AFAIK. Those two don't mix well. Upscalers do not antialias so there's obviously something wrong with the source info (unless they are calling blurring AA). There's no sense in taking some factoid like that at face value when it doesn't fit in with other known quantities (not least of which is Crytek telling us they render at 900p!)

I'm just going by what is worded and what I see.
If you go by rendering engine design, you'd appreciate your suggestion is something no-one would bother with.
 
What I'm saying is that it's 900p supersampled (hence Yerli's native framebuffer claim).

I don't understand, where is the "super" part? If it's rendered at 900p, then it has to be downsampled to 720p or lower to be considered supersampling. If you just scale it up, then call it scaling.
 
I think what Shifty is saying is that if you can render it at 1080p, there's no benefit to shrinking it down to 900p and blowing it back up to 1080p again.

This is different from, say, 1080p->900p versus 900p + blur filtering.
And when all things are equal, higher resolution = better IQ.
 
First, to get Shifty's point across: what he is stating is that the game not only operates at a lower native res but is also using a filter of sorts.

900p + filter.

What I'm saying is that it's 900p supersampled (hence Yerli's native framebuffer claim).

900p supersampled > 900p + filter

What I'm not understanding here is: what is the supersampling res?

All MSAA and SSAA solutions employ a higher-than-final-res render target and then downsample to the final output res. What you appear to be suggesting is that Ryse is rendered at higher than 900p, downsampled using SSAA/MSAA, and then rescaled back to a 1080p buffer for display to the TV. TVs are not monitors and can only handle 720p and 1080p input, so the idea that they are scaling this image down from a higher res and then scaling it again just doesn't seem to make sense to me.

In the article you cite, Namco's final render target is 720p but the intermediate target is about 1365x960, which is frankly far too little for effective SSAA/MSAA, probably accounting for why the AA is barely noticeable (also it's not a linear scale in both directions, which I find even more bizarre: ~7% horizontal and ~33% vertical???). Given that the final render target for XB1 and PS4 is 1080p, it seems to me this technique is not relevant unless Crytek are using an intermediate target of 2047x1440 (for the exact Namco scaling) or higher. Cevat's tweets indicate they are rendering at 1600x900 and upscaling; I'm really not seeing how the Namco technique is meant to apply here.
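For reference, the Namco scale factors work out as 1365 / 1280 ≈ 1.066 (~7% horizontal) and 960 / 720 ≈ 1.333 (~33% vertical), and applying those same factors to 1920x1080 gives the 2047x1440 intermediate target mentioned above.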

Edit: Ninja'd by Shifty with a much better explanation!
 
Of course it is. But the result is worse than outputting at native.

Settling for 900p itself is lesser than 1080p native. At this point it's back and forth, because you say it's worse than outputting at native while still defaulting to a lower res as a favorable choice.


The uncrazy solution here - Ryse is rendered at 900p with AA just as normal, and upscaled just as normal, like every developer does when rendering sub-native.

What if Crytek's goal was just to remove shimmering, and due to Ryse's maximized shader usage they chose supersampling instead of ppAA or filters?
 
Settling for 900p itself is lesser than 1080p native. At this point it's back and forth, because you say it's worse than outputting at native while still defaulting to a lower res as a favorable choice.

You appear to be suggesting that Ryse is rendering at res 'A', downsampling to 1600x900 and then using the h/w upscaler to output 1920x1080 to the TV?

This makes no sense: if A > 1920x1080, then downsampling straight to 1920x1080 would be preferable.

If A < 1920x1080 AND A > 1600x900, then the optimal IQ is obtained by just upscaling A to 1920x1080 and applying ppAA.

Adding a downsample stage to a target below your final output res (1920x1080) and then upscaling will never produce better IQ than just upscaling to the final res and applying a postprocess AA technique. MSAA and SSAA require your render res to be higher than your output res.
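If a demonstration helps, the lossiness of that extra bounce is easy to measure. A quick sketch with NumPy and Pillow, using a worst-case noise image (an assumption for illustration, obviously not real game output):

```python
# Sanity check: a 1080p -> 900p -> 1080p round trip cannot be lossless,
# because the 1600x900 intermediate holds fewer samples than 1920x1080.
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
frame = Image.fromarray(rng.integers(0, 256, (1080, 1920, 3), dtype=np.uint8))

bounced = (frame
           .resize((1600, 900), Image.Resampling.BOX)
           .resize((1920, 1080), Image.Resampling.BILINEAR))

diff = np.abs(np.asarray(frame, np.int16) - np.asarray(bounced, np.int16))
print(diff.mean())  # clearly nonzero: the discarded high frequencies never come back
```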
 
The results are up there as I posted. What you guys are thinking about over and over is the clarity you could have at the higher native res, but you're not thinking about edges or shimmering, or even the fact that 900p is pretty sharp as it is. Supersampling would just give a softer feel.

The conclusion that everyone keeps defaulting to doesn't make any better sense:
"Because supersampling doesn't look better than native they should not bother and just stick with either a lower or higher rez" (as if there can be no in-between.)
 
You keep using the term "supersampling", but you don't define it or use it in the traditional sense. BTW, "shimmering" and "edges" are a product of undersampling, so rendering at a lower framebuffer resolution makes it worse.

At this point I think you are just hand-waving and using imprecise language trying to justify 900p. Occam's razor: 900p is ~69% the number of pixels to process of 1080p; you don't need to look any further for an answer.
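For the record, that figure falls straight out of the pixel counts: 1600 x 900 = 1,440,000 and 1920 x 1080 = 2,073,600, and 1,440,000 / 2,073,600 ≈ 0.694.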
 
But using an internal higher-resolution buffer makes it look better.

Example:


http://oi41.tinypic.com/aay9e0.jpg

VS

http://oi44.tinypic.com/20ae6bk.jpg

Like I said, no one is thinking about an in-between. The realm doesn't exist. :no:

I can't see the links; I'm only reading your words. Just be explicit: every time you post, you leave too much up to interpretation. Map out all the buffers, their sizes, and the operations between them. For example, what "internal higher resolution buffer" are you talking about? They are rendering at 1600x900 and scaling to 1920x1080. What are you suggesting they are doing other than that?
 
I'm just using Yerli's quotation as the base reference (whether it be a good or poor translation).

[image: screenshot of Yerli's tweet - hsgU48L.png]


If 1600x900 IS the native framebuffer, then upscaling it to 1080p would have no AA effect; something like that only works when rendering at a higher native resolution. What I literally find fascinating is Yerli's choice of words: the key words that tie it to the method I'm referring to are his statements "native framebuffer 1080p" and "upscaler for AA".

Which is similar to supersampling (regardless of it being a poor choice compared to a native higher rez).
[image: quote from the Wikipedia article on supersampling - 27yreic.jpg]
Had it been said any other way, it would have been simplified to "900p with AA upscaled to 1080p"; instead there's the choice of six simple words, with the addition of "same as E3".
 
As has been pointed out, this picture is 1365x960 downsampled to 1280x720.

If you believe Ryse is X resolution downsampled to 1600x900, what is that higher res?

Even if X > 1600x900, upscaling would produce worse IQ, including the shimmering etc. you describe; that is only eliminated by downsampling to the display res, not to an intermediate target that is lower than the display res (for next gen, that is 1920x1080).

From the Wikipedia article you quoted: "This is achieved by rendering the image at a much higher resolution than the one being displayed, then shrinking it to the desired size." Supersampling only works when downsampling to the display resolution.
 
Well, there are algorithms that sharpen the image as it scales up; it doesn't necessarily look worse to human perception... it can look sharper at times by enhancing the edges... perhaps that's what Cevat is saying. From an image processing POV it's really introducing artifacts, but IQ perception is more than just math.
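As a rough illustration of that kind of sharpening upscale (using Pillow's unsharp mask as a stand-in; real hardware scalers use their own kernels, and the file name here is made up):

```python
# Upscale then edge-enhance: looks crisper, but no real detail is added.
from PIL import Image, ImageFilter

frame_900p = Image.open("capture_1600x900.png")  # hypothetical 900p capture
upscaled = frame_900p.resize((1920, 1080), Image.Resampling.BICUBIC)
sharpened = upscaled.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=2))
# The enhanced edges are exactly the "introduced artifacts" mentioned above.
```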

Actually, let's visit the one that's been hitting the fan in the last couple of days, COD: Ghosts.

This is the 1080p YouTube feed from E3:
[image: E3 YouTube frame - 028Eomr.jpg]


This is a 1080p capture off IGN's "1080p PS4" feed:
(http://www.ign.com/articles/2013/10/18/inside-call-of-duty-ghosts-squads-mode)
[image: IGN capture - ppxQw9f.jpg]


I don't know about you, but it seems to me that the bottom image is in no way native 1080p...
 
^

Lousy video compression, what is it?

That's before we even take into account that we're basing things off of YouTube compression to begin with.
 
^

Lousy video compression, what is it?

That's before we even take into account that we're basing things off of YouTube compression to begin with.

A high video compression ratio would actually make jaggies less obvious; perhaps it's a bit counter-intuitive.
The jaggies on the gas station roof are pretty clear.
 
If you look at the first IGN videos from Monday and Tuesday, where they are just in the menus, the compression isn't as challenging and the soldier model is still jaggy as hell. I've suspected since then that the PS4 build they were using didn't have any anti-aliasing implemented at all, post-process or otherwise. All the trailers released previously appear to be from PC builds with lots of AA. Hopefully the shipping version of the game on next-gen consoles will have some form of AA.
 
The results are up there as I posted. What you guys are thinking about over and over is the clarity you could have at the higher native res, but you're not thinking about edges or shimmering, or even the fact that 900p is pretty sharp as it is.

The gist of what you're trying to say seems to be that a combination of downsampling/upscaling purportedly lends better overall image quality in the final result than just outputting the image at native res (or, as a matter of fact, just using the resources you spend on all the resizing to run a basic AA algorithm) - and that honestly doesn't make much sense.

Putting it to the extreme, it's just like saying that games should be rendered @1920x1080, then downsampled to 16x9, and finally upscaled to 1080p again - because that would (most certainly) get rid of any shimmering ... :rolleyes:

The result, however, will basically just be a VERY brute force, resource-intensive method of BLURRING the original 1080p image ... and the lower the resolution you initially downsample to, the more intensive the final blur will be.

All that being said, the example you keep posting (http://oi44.tinypic.com/20ae6bk.jpg) is flawed, because it just compares the IQ of one image natively rendered @1280x720 to the IQ of another image originally rendered @1365x960 and then downsampled to 1280x720. In that case (as many have stated before), of course the latter one will look better.

The proper comparison for your argumentation, however, would be to upscale that latter image (originally rendered at 1365x960 and then downsampled to 1280x720) back to 1365x960 - and compare the result to a clean, native 1365x960 render - with that comparison most probably turning out to be practically indiscernible from comparing a native 1365x960 render to a 1365x960 render beneath a simple blur filter ...
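That comparison is easy to run yourself; a sketch under the same assumptions (Pillow, made-up file name for a native 1365x960 capture):

```python
# Bounce the 1365x960 render through 720p and back, then compare against
# the untouched native render and a simply blurred one.
from PIL import Image, ImageFilter

native = Image.open("native_1365x960.png")  # hypothetical capture
bounced = (native
           .resize((1280, 720), Image.Resampling.BOX)
           .resize((1365, 960), Image.Resampling.BILINEAR))
blurred = native.filter(ImageFilter.GaussianBlur(0.7))

bounced.save("bounced_1365x960.png")  # expensive blur...
blurred.save("blurred_1365x960.png")  # ...versus cheap blur; expect them to look alike
```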
 
The results are up there as I posted. What you guys are thinking about over and over is the clarity you could have at the higher native res, but you're not thinking about edges or shimmering, or even the fact that 900p is pretty sharp as it is. Supersampling would just give a softer feel.
The same sort of softer feel as a blur gives (if you're upscaling the supersampled buffer) but at more cost.

"Because suppersampling doesn't look better than native they should not bother and just stick with either a lower or higher rez" (as if there can be no in between.)
So you're saying downsample+upsample is in between native-res rendering and just upsampling, right? Okay, I agree with that. But it costs even more than native rendering! Why bother downsampling when you can just output native? If it's to solve shimmering, which a poky little 20% supersample won't achieve, you can just blur the result to get the same results as your suggestion but for less effort. We're not saying 900p looks better than 1080p downsampled and upsampled. We're saying that 1080p was too taxing for the hardware, so Crytek picked 900p (which they've told us), and are upsampling that because it's less effort. What you propose is more effort than rendering at 1080p.

If you still don't get it, I'll post pictures.
 