Various Rendering Techniques: Sparse Rendering, Temporal Reprojection, Artifacts Filtering

Temporal reconstruction was still blatantly ignored. With it, anything that stays still for even ONE frame will already be resolved with as many samples as it would get in native 4K. Quincunx never did that in any of its uses. I'm not even sure its hardware integration was flexible enough to allow that to be implemented.
 
It's obvious... the X version is sharper than the PC version... DOF explains part of the difference.

Overall, three reasons explain the difference:

- Native 4K vs CB
- Higher-quality assets
- DOF

[screenshots]


When there is no DOF on PS4 Pro, the difference is more like this:

[screenshots]
It doesn't look that bad with the same settings. As you pointed out, there is some loss in detail, but it's no magic and nothing out of the ordinary.

Is it inferior to native? At the moment it's used in cases where the GPU cannot sustain 4K, with first-generation implementations, so you are comparing it to native 4K that uses far more GPU resources.
It is fake 4K. You are discussing the use of resources, and while it uses fewer resources, that doesn't compensate for the loss of detail.

OneX: native 4K (2160p or 3840x2160), which is 8,294,400 pixels

PS4 Pro: 1920x2160, which is 2160c checkerboard = 4,147,200 pixels

What if we allowed a fairer fight with similar GPU usage, say 5K or 6K checkerboard supersampled down to 4K vs 4K native. Would that be as cut and dry?
What do you mean by that? Wouldn't using SSAA on top of checkerboard just blur the image somewhat? I don't think that method is a reliable way to elucidate differences, because if you go 5K-6K checkerboard supersampled you are just adding more nonexistent detail in the first place.
 
Temporal reconstruction was still blatantly ignored. With it, anything that stays still for even ONE frame will already be resolved with as many samples as it would get in native 4K. Quincunx never did that in any of its uses. I'm not even sure its hardware integration was flexible enough to allow that to be implemented.
An interesting comparison is going to be PS4 Pro checkerboard vs OneX 4K Enriched Visuals mode, which uses some kind of upscaling.
 
Quincunx had one solution: blur. CB has many different degrees of implementation, giving anything from blurs to checker patterns to crisp, high-fidelity renders.
It feels like there might be some confusion in this thread where people are assuming that NVidia Quincunx was basically a half-res checkerboard-pattern render that produced intermediate pixels by blending the known neighboring pixel colors. That's not what it was.
NVidia Quincunx was an implementation of 2xMSAA that used a wide resolve filter. While "standard" 2xMSAA only blends the two samples associated with a pixel when calculating the final pixel color, Quincunx used those two samples plus three associated with neighboring pixels. In that sense, it was basically applying a subpixel-wide-ish blur to the image.
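
For concreteness, here's a minimal numpy sketch of that five-tap resolve, assuming the commonly cited quincunx weights of 1/2 for the pixel's own center sample and 1/8 for each of the four surrounding corner samples (the exact hardware weights are an assumption here, not NVidia's documented values):

```python
import numpy as np

def quincunx_resolve(centers, corners):
    """Five-tap quincunx resolve of a 2x-sampled image.

    centers: (H, W) samples at pixel centers (first MSAA sample).
    corners: (H+1, W+1) samples at pixel corners (second MSAA sample,
             offset half a pixel diagonally, shared with neighbors).
    Assumed weights: 1/2 center, 1/8 per corner (sums to 1).
    """
    return (0.5 * centers
            + 0.125 * (corners[:-1, :-1] + corners[:-1, 1:] +
                       corners[1:, :-1] + corners[1:, 1:]))
```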

There's a good reason for using a wide resolve filter like this. The "blur" can also be thought of as weighting samples to pixels, rather than picking a single pixel to tie each sample to.
If a small bright speck is sitting in the middle between two pixel centers, should it add brightness to only one pixel, or be distributed across both?
If the bright speck starts at one pixel center and slowly moves toward the other pixel center, should it have constant contribution to the first pixel until it starts crossing the centerline between the two pixels, at which point it rapidly transitions to having constant contribution to the second pixel? Or should it smoothly fade out of the first pixel and smoothly fade into the second as it goes through the full motion from one pixel center to the next?
The first case will look almost like the detail is popping between the two pixels... even with perfect supersampling! That's an example of reconstruction aliasing, and it's happening because you're using small rectangles (that rectangle covering the area that people often visualize as "the pixel") as a resolve filter.
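
Here's a toy numerical version of that popping, comparing the rectangle (box) weighting with a tent (linear) weighting as the speck slides between two pixel centers (purely illustrative, not a production filter):

```python
import numpy as np

# A bright speck slides from pixel A's center (t=0) to pixel B's (t=1).
for t in np.linspace(0.0, 1.0, 5):
    # Box filter: the speck belongs entirely to whichever pixel's
    # rectangle contains it -- contributions pop at the midpoint.
    box = (1.0, 0.0) if t < 0.5 else (0.0, 1.0)
    # Tent (linear) filter: contributions crossfade smoothly.
    tent = (1.0 - t, t)
    print(f"t={t:.2f}  box A/B = {box}  tent A/B = ({tent[0]:.2f}, {tent[1]:.2f})")
```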

That's not to say that a wide resolve is necessarily the right thing to do. Obviously the softness is a compromise that needs to be weighed. But devs don't do it without reason, which should be especially clear when you consider that a wide resolve tends to be technically costlier than a narrow one, since it blends more samples to calculate the final pixel color.

NVidia Quincunx was a bit of a silly case, though, because NVidia presented it as if blending more samples gave visual results similar to rendering more samples. Which is silly nonsense. The two things solve two different problems: games using NVidia Quincunx still look 2x sampled, and it's because they are 2x sampled.

//==========================

Anyway, "checkerboarding" as the phrase is currently being used has a broader and different meaning. Where Quincunx is a 2xAA pattern, the samples in "checkerboarding" are all full pixels, and the intermediates get reconstructed to create a full non-checkerboard pixel grid... probably typically with the checkerboard being alternated between frames, and temporal sampling assisting in the reconstruction process.

So, checkerboard inevitably looking worse than native doesn't really have anything to do with the sample pattern looking like a Quincunx sample pattern. It inevitably looks worse because it's only producing half as many fresh pixels each frame as native. That's not really any more interesting than pointing out that spatially upscaling an image to a high resolution tends to produce results inferior to rendering at that high resolution.

(Of course, a blurry resolve filter could also be used in a game with checkerboarded sampling.)

Temporal reconstruction was still blatantly ignored. With it, anything that stays still for even ONE frame will already be resolved with as many samples as it would get in native 4K.
Depending on changes between frames, a temporal sample might not end up in a place that makes it very useful, or the sample's color is no longer meaningful because of lighting changes, or maybe motion made it hard to position accurately in the new frame, or it's a part of a surface that's no longer visible (and perhaps some other surfaces have just become visible and have no prior-frame samples to represent them), etc.

Temporal reconstruction is extremely useful, but it's only in very boring special cases (i.e., no scene change between frames) that you sort of have the ability to get "as many samples" as you would if all the potential samples (temporal and new) were produced fresh in the new frame.
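
One common mitigation for those failure cases is to clamp the reprojected color against the current frame's local neighborhood before blending; here's a sketch of that heuristic (details vary widely between engines, so treat this as an assumed illustration rather than any specific game's method):

```python
import numpy as np

def validate_history(history_color, neighborhood):
    """Clamp a reprojected color into the current frame's local range.

    history_color: (3,) color carried over from the previous frame.
    neighborhood:  (N, 3) freshly rendered samples around the pixel.
    If lighting changed or the surface was disoccluded, the stale color
    gets pulled toward something plausible instead of ghosting.
    """
    lo = neighborhood.min(axis=0)
    hi = neighborhood.max(axis=0)
    return np.clip(history_color, lo, hi)
```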

Quincunx never did that in any of its uses. I'm not even sure its hardware integration was flexible enough to allow that to be implemented.
Whether Quincunx can use temporal samples depends on how broadly you're using the phrase "Quincunx." Halo Reach obviously isn't using NVidia's implementation, for instance, but according to Bungie it uses a diagonal half-pixel jitter between frames and a quincunx resolve.

It is fake 4K. You are discussing the use of resources, and while it uses fewer resources, that doesn't compensate for the loss of detail.
Who's saying it does?

What do you mean by that?
They mean a 5K or 6K image using checkerboard sampling and temporal reconstruction internally, then scaling the result down to 4K.

This would be one way of comparing checkerboard with native at similar rendering costs.
(Alternately, compare 4K checkerboarding to a non-checkerboarded render upscaled from a resolution much lower than 4K.)
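
Back-of-envelope numbers for why that's a similar-cost comparison, counting only freshly shaded pixels per frame (a simplification that ignores reconstruction overhead; the exact 5K/6K resolutions are assumed values):

```python
# Freshly shaded pixels per frame (shading cost scales with these, roughly).
native_4k = 3840 * 2160        # 8,294,400
cb_4k     = 3840 * 2160 // 2   # 4,147,200 -- half the fresh pixels
cb_5k     = 5120 * 2880 // 2   # 7,372,800 -- close to native 4K
cb_6k     = 5760 * 3240 // 2   # 9,331,200 -- a bit above native 4K
```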
 
OneX: native 4K (2160p or 3840x2160), which is 8,294,400 pixels

PS4 Pro: 1920x2160, which is 2160c checkerboard = 4,147,200 pixels.
Such a naive statistical comparison misses the point. I point to JPEG as an example - a high quality JPEG is a fraction of the size of a raw bitmap, but virtually indistinguishable from the original in subjective quality. The purpose of any computer rendering is not to be mathematically perfect, but subjectively appealing, using all sorts of hacks and compromises. CBR and other reconstruction techniques are an enabler in this regard, the same as rendering lower resolution buffers, which is so commonplace yet no-one's up in arms about it. Is a 'native 4K' game with quarter-res shadow, light and reflection buffers more true to 4K than a CBR game with all buffers at full resolution?
 
It is fake 4K. You are discussing the use of resources, and while it uses fewer resources, that doesn't compensate for the loss of detail.

OneX: native 4K (2160p or 3840x2160), which is 8,294,400 pixels

PS4 Pro: 1920x2160, which is 2160c checkerboard = 4,147,200 pixels

Wrong. It's the same number of pixels on Pro... the Pro simply uses the data from the previous frame to fill the current frame. It's far less expensive than native 4K, but you have the same number of pixels.

1920x2160 from the previous frame + 1920x2160 rendered this frame = 8,294,400

Obviously, native 4K still looks better for several reasons. It might change in the future if reconstruction techniques become better.
 
Uhh wow, this thread went to shit fast.
This topic is about checkerboarding and quincunx techniques. This is not a thread about PS4 and XBO.
This is not a topic for praising one over the other for the sake of feeling good about your purchases.

Stop with that shit. It's off-putting that this console war shit spreads to every single thread within the console forum.
You guys should focus more on playing games on your console, or both consoles, instead of warring about it on every thread; you might actually be able to appreciate what you have instead of having to put down another to feel good about it.
 
Nicely put, @iroboto.

The rules have been restated as the first post in this thread. We don't want to see any more posturing from either side in the Technology forums. If we do, then don't be surprised to see reply-bans handed out on certain topics.

Some material has been expunged into its own closed purge dump.
 
I have no dog in this fight but how about changing the title to encourage a generic thread about all the different versions of Sparse Rendering + Temporal Reprojection + Artifacts Filtering?

It's the future. A cryptic, acronym-filled future.
 
@MrFox, is this any better? The title and the original post have been expanded.

Personally, I'm all for expanding techniques to provide developers a wide array of choices in how they approach bringing the best experience of their vision(s) to the gamers.
 
@MrFox, is this any better? The title and the original post have been expanded.

Personally, I'm all for expanding techniques to provide developers a wide array of choices in how they approach bringing the best experience of their vision(s) to the gamers.
:yep2:

I can't contribute much, since all I know is either ridiculously outdated or non-realtime rendering, but I really like to watch...
 
Checkerboard rendering in itself is not free. Particularly the final part, when all those buffers (the half-4K ones, plus the full-4K ID buffer and previous framebuffer) are 'smartly' blended into the final 4K framebuffer.

Without severe optimization, that last part (blending) should theoretically cost roughly as much as if the game were 4K native.
 
Checkerboard rendering in itself is not free. Particularly the final part, when all those buffers (the half-4K ones, plus the full-4K ID buffer and previous framebuffer) are 'smartly' blended into the final 4K framebuffer.

Without severe optimization, that last part (blending) should theoretically cost roughly as much as if the game were 4K native.
This is not true. The reconstruction filter simply reads the two frames' half-res color buffers once (really a neighborhood of pixels, but most are in L1/L2 cache, so the amortized read = 1x). Then it does some math and finally writes the result to memory once (to a 4K target). This step is significantly cheaper than rendering everything at 2x resolution. G-buffers are roughly 4x as fat as the final color image, and there's lots of overdraw. Doubling this cost alone is more expensive than reconstruction. Then you need to run the lighting shader at 2x resolution (including sampling all shadow maps and running shadow filters at 2x resolution). Post processing at 2x resolution isn't cheap either. Motion blur, DOF, tone mapping, color grading, etc. become 2x more expensive.

The reconstruction step of a modern temporal upsampler (not exactly checkerboarding, but similar) is around 1.5 ms on a console GPU. That is roughly 10% of the frame time at 60 fps, or 5% of the frame time at 30 fps. Nowhere near the same cost as doubling the resolution.
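
Rough bandwidth math behind that read-once/write-once structure, assuming RGBA16F (8 bytes/pixel) storage; the format is an assumption, the structure follows the description above:

```python
bytes_per_px = 8                       # assumed RGBA16F storage
half_res  = 1920 * 2160                # one checkerboard half of a 4K frame
full_res  = 3840 * 2160

reads  = 2 * half_res * bytes_per_px   # two half-res color buffers, ~66 MB
writes = full_res * bytes_per_px       # one 4K output target, ~66 MB
print(f"read {reads / 1e6:.0f} MB, write {writes / 1e6:.0f} MB per frame")
```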
 

I am just talking about the final blending / glowing part of a complete checkerboard technique using the G-buffers at half 4K + 1 buffer at half 4K for the motion vectors + 4K ID buffer (on Pro) + 1 buffer at 4K containing the previous frame. All this without optimization.

I think the CBR technique explained by @sebbbi is different.
 
I am just talking about the final blending / glowing part of a complete checkerboard technique using the G-buffers at half 4K + 1 buffer at half 4K for the motion vectors + 4K ID buffer (on Pro) + 1 buffer at 4K containing the previous frame. All this without optimization.

I think the CBR technique explained by @sebbbi is different.

So what you mean is using CB just to rasterize the G-buffer, reconstructing that into 4K, and shading at full 4K? Even that would still be faster than rendering at native before any crazy optimization, just because of the overdraw avoided. Regardless, nobody is doing that. Most solutions do the entire rendering at half res and reconstruct the end color buffer.
Using reconstruction to simply undersample the G-buffer sounds a lot more like what sebbbi called the MSAA trick in his SIGGRAPH presentation on Trials.
 
Using reconstruction to simply undersample the G-buffer sounds a lot more like what sebbbi called the MSAA trick in his SIGGRAPH presentation on Trials.
Yes. However in the MSAA-trick technique, the reconstruction is much cheaper than checkerboard reconstruction. And the reconstruction is practically lossless. The downside is that the MSAA-trick technique doesn't save any lighting or post processing cost. It only makes G-buffer rendering cheaper.
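
A toy sketch of why that reconstruction can be practically lossless, assuming the 2xMSAA sample positions are programmed to land exactly on the full-resolution pixel grid (anything here beyond that basic idea is an assumption, not the actual Trials implementation):

```python
import numpy as np

def deinterleave_msaa(samples):
    """Rebuild a full-width image from a half-width 2xMSAA render whose
    two samples per pixel sit on two adjacent full-res pixel centers.

    samples: (H, W//2, 2) array of per-pixel MSAA sample colors.
    returns: (H, W) image -- pure reinterleaving, no blending, no loss.
    """
    h, half_w, _ = samples.shape
    out = np.empty((h, half_w * 2), dtype=samples.dtype)
    out[:, 0::2] = samples[:, :, 0]
    out[:, 1::2] = samples[:, :, 1]
    return out
```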
 
If checkerboard rendering (as explained by Cerny and Sony) were free, then with 2.3x more GPU it would be easy for devs to reach 2160c from 1080p. But most developers reach only 1800c, and some with initial difficulty (more frame drops than on the regular PS4 at 1080p). And some games that try to reach 2160c using a very clean CBR technique (like The Witcher 3) drop frames when lots of alpha is present (dropping to 25 instead of a stable 30, which is a big difference).

Some others that reach 2160c successfully (like Guerrilla) use their own CBR technique with tons of optimization.

So there must be a step in the rendering that is much slower than the rest, compared with a 'native' pipeline. That must be the blending / glowing part.
 
If checkerboard rendering (as explained by Cerny and Sony) were free, then with 2.3x more GPU it would be easy for devs to reach 2160c from 1080p. But most developers reach only 1800c, and some with initial difficulty (more frame drops than on the regular PS4 at 1080p). And some games that try to reach 2160c using a very clean CBR technique (like The Witcher 3) drop frames when lots of alpha is present (dropping to 25 instead of a stable 30, which is a big difference).

Some others that reach 2160c successfully (like Guerrilla) use their own CBR technique with tons of optimization.

So there must be a step in the rendering that is much slower than the rest, compared with a 'native' pipeline. That must be the blending / glowing part.

Guerrilla Games explained that the ID buffer CBR rendering has some steps running at native resolution.
 