Quincunx had one solution: blur. CB has many different degrees of implementation, giving anything from blurs to checker patterns to crisp, high-fidelity renders.
It feels like there might be some confusion in this thread where people are assuming that NVidia Quincunx was basically a half-res checkerboard-pattern render that produced intermediate pixels by blending the known neighboring pixel colors. That's not what it was.
NVidia Quincunx was an implementation of 2xMSAA that used a wide resolve filter. While "standard" 2xMSAA only blends the two samples associated with a pixel when calculating the final pixel color, Quincunx used those two samples plus three associated with neighboring pixels. In that sense, it was basically applying a subpixel-wide-ish blur to the image.
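The five-sample blend described above can be sketched in a few lines. The weights here (1/2 for the pixel's center sample, 1/8 for each of the four surrounding corner samples) are the commonly cited Quincunx weights, not something stated in this thread, so treat them as an assumption:

```python
# Sketch of a Quincunx-style resolve. Each pixel owns two samples
# (center + one corner); the corner samples are shared with neighbors,
# so the resolve reads this pixel's two samples plus three from
# neighboring pixels. Weights assumed: 1/2 center, 1/8 per corner.

def quincunx_resolve(center, corners):
    """center: this pixel's center sample (a float standing in for a
    color); corners: the four corner samples around it, three of which
    belong to neighboring pixels."""
    assert len(corners) == 4
    return 0.5 * center + sum(0.125 * c for c in corners)

# A flat gray region resolves to the same gray (weights sum to 1):
print(quincunx_resolve(0.5, [0.5, 0.5, 0.5, 0.5]))  # -> 0.5
```

Note how a bright center sample in a dark region gets halved on the way out: `quincunx_resolve(1.0, [0.0, 0.0, 0.0, 0.0])` gives 0.5, which is exactly the "blur" being talked about.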
There's a good reason for using a wide resolve filter like this. The "blur" can also be thought of as weighting samples to pixels, rather than picking a single pixel to tie each sample to.
If a small bright speck is sitting in the middle between two pixel centers, should it add brightness to only one pixel, or be distributed across both?
If the bright speck starts at one pixel center and slowly moves toward the other pixel center, should it have constant contribution to the first pixel until it starts crossing the centerline between the two pixels, at which point it rapidly transitions to having constant contribution to the second pixel? Or should it smoothly fade out of the first pixel and smoothly fade into the second as it goes through the full motion from one pixel center to the next?
The first case will look almost like the detail is popping between the two pixels... even with perfect supersampling! That's an example of reconstruction aliasing, and it's happening because you're using small rectangles (the rectangle covering the area that people often visualize as "the pixel") as a resolve filter.
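The moving-speck thought experiment is easy to put numbers on. This toy comparison (my own illustration, with made-up filter shapes) tracks how much the speck contributes to the first pixel under a narrow box filter versus a wide tent filter as it travels from one pixel center to the next:

```python
# Toy illustration of reconstruction aliasing. A point-like bright
# speck moves from pixel 0's center (distance 0) toward pixel 1's
# center (distance 1). Compare its contribution to pixel 0 under a
# box ("small rectangle") filter vs a tent (wide, overlapping) filter.

def box_weight(distance):
    # Sample counts toward the pixel only while inside its rectangle.
    return 1.0 if distance < 0.5 else 0.0

def tent_weight(distance):
    # Contribution fades linearly over a two-pixel-wide support.
    return max(0.0, 1.0 - distance)

for t in [0.0, 0.25, 0.49, 0.51, 0.75, 1.0]:
    print(f"t={t:.2f}  box={box_weight(t):.2f}  tent={tent_weight(t):.2f}")
```

The box column jumps from 1.00 straight to 0.00 as the speck crosses the centerline (the "popping"), while the tent column fades smoothly from 1.00 to 0.00 over the whole motion.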
That's not to say that a wide resolve is necessarily the right thing to do. Obviously the softness is a compromise that needs to be weighed. But devs don't do it without reason, which should be especially clear when you consider that a wide resolve tends to be technically costlier than a narrow one, since it blends more samples to calculate the final pixel color.
NVidia Quincunx was a bit of a silly case, though, because NVidia presented it as if blending more samples gave visual results similar to rendering more samples. Which is nonsense. The two things solve two different problems: games using NVidia Quincunx still look 2x sampled, and it's because they are 2x sampled.
//==========================
Anyway, "checkerboarding" as the phrase is currently being used has a broader and different meaning. Where Quincunx is a 2xAA pattern, the samples in "checkerboarding" are all full pixels, and the intermediates get reconstructed to create a full non-checkerboard pixel grid... probably typically with the checkerboard being alternated between frames, and temporal sampling assisting in the reconstruction process.
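The frame-alternating pattern is easy to visualize. This sketch (my own illustration, not any console's actual pattern) marks which pixels are freshly rendered in two consecutive frames:

```python
# Sketch of alternating checkerboard coverage. Frame 0 renders the
# pixels where (x + y) is even, frame 1 the pixels where it's odd;
# together the two frames render every pixel once, which is what the
# temporal reconstruction step leans on.

W, H = 4, 4

def fresh(x, y, frame):
    return (x + y + frame) % 2 == 0

for frame in (0, 1):
    print(f"frame {frame}:")
    for y in range(H):
        print(" ".join("X" if fresh(x, y, frame) else "." for x in range(W)))

# Every pixel is fresh in exactly one of the two frames:
assert all(fresh(x, y, 0) != fresh(x, y, 1)
           for x in range(W) for y in range(H))
```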
So, checkerboard inevitably looking worse than native doesn't really have anything to do with the sample pattern looking like a Quincunx sample pattern. It inevitably looks worse because it's only producing half as many fresh pixels each frame as native rendering does. That's not really any more interesting than pointing out that spatially upscaling an image to a high resolution tends to produce results inferior to rendering at that high resolution.
(Of course, a blurry resolve filter could also be used in a game with checkerboarded sampling.)
Temporal reconstruction was still blatantly ignored. With it, anything that stays still for even ONE frame will already be resolved with as many samples as it would be in native 4K.
Depending on changes between frames, a temporal sample might not end up in a place that makes it very useful, or the sample's color is no longer meaningful because of lighting changes, or maybe motion made it hard to position accurately in the new frame, or it's a part of a surface that's no longer visible (and perhaps some other surfaces have just become visible and have no prior-frame samples to represent them), etc.
Temporal reconstruction is extremely useful, but it's only very boring special cases (i.e. no scene change between frames) where you sort of have the ability to get "as many samples" as you would if all the potential samples (temporal and new) were produced new in the new frame.
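The failure cases listed above are the kinds of checks a temporal reconstruction pass has to make before trusting a history sample. This sketch is entirely hypothetical (the function names and thresholds are invented for illustration, not any engine's real heuristics), but it captures the shape of the logic:

```python
# Hypothetical history-rejection sketch: a reprojected sample from the
# previous frame is kept only if it survives every check; otherwise the
# pixel falls back to fresh samples alone. All names/thresholds invented.

def history_usable(reprojected_uv, depth_prev, depth_now,
                   color_prev, color_now, motion_confidence,
                   depth_tol=0.01, color_tol=0.2, motion_tol=0.5):
    u, v = reprojected_uv
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return False  # sample reprojects off-screen: nothing to reuse
    if abs(depth_prev - depth_now) > depth_tol:
        return False  # likely disocclusion: a different surface now
    if abs(color_prev - color_now) > color_tol:
        return False  # lighting changed: the old color is stale
    if motion_confidence < motion_tol:
        return False  # motion made the sample hard to place accurately
    return True

# A still, unchanged pixel keeps its history:
print(history_usable((0.5, 0.5), 1.0, 1.0, 0.3, 0.3, 1.0))  # -> True
# A disoccluded pixel rejects it:
print(history_usable((0.5, 0.5), 1.0, 2.0, 0.3, 0.3, 1.0))  # -> False
```

The "boring special case" from the post above is exactly the case where every pixel passes every check, so the full temporal + fresh sample budget survives.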
Quincunx never did that in any of its uses. I'm not even sure its hardware integration was flexible enough to allow that to be implemented.
Whether Quincunx can use temporal samples depends on how broadly you're using the phrase "Quincunx." Halo Reach obviously isn't using NVidia's implementation, for instance, but according to Bungie it uses a diagonal half-pixel jitter between frames and a quincunx resolve.
It is fake 4K. You are discussing resource use, and while it uses fewer resources, that doesn't compensate for the loss of detail.
Who's saying it does?
What do you mean by that?
They mean a 5K or 6K image using checkerboard sampling and temporal reconstruction internally, then scaling the result down to 4K.
This would be one way of comparing checkerboard with native at similar rendering costs.
(Alternately, compare 4K checkerboarding to a non-checkerboarded render upscaled from a resolution much lower than 4K.)