Upscale vs reconstruction vs native

Scott_Arm

http://www.vg247.com/2016/09/17/sony-claims-ps4-pros-upscaled-4k-gaming-is-not-misleading/

It seems like there is general confusion about the difference between upscaling and reconstructive techniques, and how they differ from native rendering. Very interesting situation. It's probably too difficult to explain to the average person why they're different, and how they work. Is it dishonest to market the product as 4k? I don't think it is. How do you market the box as 4k if the average person only understands native resolution to be true 4k?
 
It's pretty impressive the way the entire games media industry managed to forget every single thing said in the PlayStation Meeting about the Pro. One of the very first things Cerny admits is that the Pro isn't quite powerful enough for traditional 4K. I've also heard journalists go on and on about how Sony supposedly didn't even mention VR or the 1080p benefits.
 
Same as every generation, the numbers war is happening only among a vocal minority of forum dwellers trying to justify the choice they already made.

A good marketing trick to sell video games is showing the games. The average gamer is not confused, because the average gamer couldn't care less about technology.

The game journalists however are very confused because most of them are incredibly technologically illiterate. They need click money, so they need to talk about it to get paid. It's a very sad situation.

Some of them have a brain, and instead of writing bullshit, they book interviews with actual developers to demystify things and answer their audience-submitted questions. Some know what they are talking about (Eurogamer), and others are reading the Beyond3D console tech forum to understand more about it.

It's more willful ignorance and cognitive dissonance, there's nothing to be done about it.
 
But here's the thing ... Say their "checkerboard" method is a temporal reconstructive technique. I wouldn't view that as an upscale. It'll have artifacts, but it will be using real rendered pixels to reconstruct half the image. So, is it 4k, or not? It kind of shows the meaninglessness of talking about resolution. The reality is much more complicated than that. It's useful in understanding performance, and that's about it.

I mean, if a game were 4k native, but with dynamic resolution, then would that be false advertising? And pretty much zero games use native resolution for all of their render targets, as far as I know. Lots scale back many different render targets for performance reasons.

I guess the problem is resolution is very easy to market and the general public, understandably, does not have a grasp on the nuance.
 
You can have two sound formats, one lossy, the other lossless. Is it wrong for the lossless one to market itself as such?
The point is that whichever method is used, you need to explain why it was the best one for the job.

Yes, it was a lot easier in the past when everything was brute force. But it's changing for the better, with more options.

Remember all the 1080p comments, regardless of the method used to get to 1080p when it wasn't native.

Native can be a huge waste of resources, but unless we stop wanting to know the details of how it's being done (us right here), they will be asked about it, and it will be used as a selling point, even if not by the companies involved.

General media and gamers will soon move on to the next thing once it becomes obvious that it's hard to tell the difference, when it reaches that point.
 
I guess whether dynamic res is misleading depends on how often it locks to the max resolution, exactly the same as for games advertised as 60fps. There's going to be a lot more fixed frame rate with either dynamic resolution or variable image quality, because engines are now aiming towards that, and VR absolutely needs it.

Also, from the presentation about PSVR, it looks like the checkerboard rendering could possibly be dynamic. That would be an amazing method where you can decide which area to degrade to keep the frame rate constant (start in the corners?), while the less demanding parts of the game could reach closer to full resolution; technically a different method of dynamic resolution. I can't wait to hear more about it from the hardware acceleration point of view, and why it never became widespread. Maybe it's simply that today's more powerful hardware and more complex shaders are the enabler, and the overhead is now a small enough percentage.
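
For illustration, here is a minimal sketch of the kind of frame-time feedback loop a dynamic resolution (or dynamic checkerboard) system might use; the budget, thresholds, step size and resolutions are made-up numbers for this example, not anything from a shipping engine:

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical dynamic-resolution controller: pick the next frame's render
// scale from how close the last GPU frame time came to the 60 fps budget.
// The budget, thresholds and step size are made-up illustrative numbers.
struct DynamicResController {
    float scale = 1.0f;                          // fraction of full 3840x2160
    static constexpr float kBudgetMs = 16.6f;    // 60 fps target
    static constexpr float kMinScale = 0.5f;     // never drop below half res per axis

    void update(float gpuFrameMs) {
        if (gpuFrameMs > kBudgetMs * 0.95f)      // about to miss vsync: back off
            scale -= 0.05f;
        else if (gpuFrameMs < kBudgetMs * 0.80f) // plenty of headroom: scale back up
            scale += 0.05f;
        scale = std::clamp(scale, kMinScale, 1.0f);
    }

    void renderSize(int& w, int& h) const {      // resulting render target size
        w = static_cast<int>(3840 * scale);
        h = static_cast<int>(2160 * scale);
    }
};

int main() {
    DynamicResController drc;
    for (float ms : {14.0f, 17.2f, 16.9f, 15.0f, 12.0f}) {  // simulated GPU times
        drc.update(ms);
        int w, h;
        drc.renderSize(w, h);
        std::printf("frame %.1f ms -> render %dx%d\n", ms, w, h);
    }
}
```

A dynamic checkerboard variant would adjust which regions get reconstructed instead of (or as well as) the render target size, but the control loop idea is the same.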

For years we've had rendering passes that were lower than native resolution; nobody sued anyone.
 
@MrFox Some idiot did try to sue Guerrilla Games over the Killzone: Shadow Fall multiplayer resolution :(

I guess with this gen it's going to be weird. If you say you have a 4k console, what does that mean? Personally, I don't think there's anything dishonest about what Sony has advertised for PS4 Pro. If some game media and gamers get upset about it, I don't think they really have any valid complaints.
 
A good marketing trick to sell video games is showing the games. The average gamer is not confused, because the average gamer couldn't care less about technology.

That isn't true. The sub 1080P business hurt the XBO quite a lot IMO, especially initially.

As for the PS4 Pro, dunno about misleading cause I dunno what exactly they said. Now, they didn't call it PS4K though; THAT would have just put a huge bullseye on them.
 
Sony was not dishonest about the 4K potential of the PS4 Pro, but I would say that calling the resolution 4K is misleading to the majority of customers, who will not understand the difference. The most reasonable thing for them to have done is speak all about their minimum requirement of 1080P and then explain that the system incorporates technology that will make upscaling to higher resolutions look better than it normally would. Basically, Sony has released a console that doesn't really "fit" anywhere: it's not anywhere close enough to be a generational leap if games are rendered at 1080P, and it's not capable of even keeping the same graphical quality if games are rendered natively at 4K.

A better solution would have been to add some extra hardware to the PS4 slim model that would have allowed for better quality upscaling to 4K than the regular PS4, but would not have offered any additional benefit beyond that. This would have been far less expensive than the expansion of the chip in the PS4 Pro, and it could have been something they could push that might have given them an extra couple of years before they needed to release the PS5.
 
That isn't true. The sub 1080P business hurt the XBO quite a lot IMO, especially initially.
You blame sub-1080p whereas I attribute initial sales to bad messaging and Kinect. You consistently believe performance is really important yet you yourself picked the console that was, on paper, much weaker than the alternative. :yep2:
 
This reminds me of when Toshiba came out with its first HD DVD player. The discs were all 1080p but the player output was limited to 1080i. The argument was that the TV set can de-interlace 1080i to 1080p and there would be no noticeable difference.

The Blu-ray group pounced on this heavily and all their in-store training and marketing material shifted to "HD DVD can't do full 1080p. Blu-ray is the only real hi-def format."

Could a consumer tell the difference in PQ between a Gen 1 and a Gen 2 HD DVD player (which now had 1080p output) running the same disc on the same TV? Absolutely not! But the damage was heavy and the HD DVD group had to spend most of their time and energy trying to discredit this.

So what does it all mean? You're already starting to see MS touting how Scorpio will be the only real 4k console. I'm curious to see how aggressive they get with trying to discredit the PS4 Pro and how much time and energy Sony puts into deflecting it. Once the buzz spreads it's *really* hard to pull it back. Will be an interesting few months ahead.
 
I doubt all Scorpio games will be native 4k either. I mean, it will definitely have more native 4k games and generally render games at a higher resolution, but Sony will definitely have a significant price advantage, as well as a year's head start.
 
This reminds me of when Toshiba came out with its first HD DVD player. The discs were all 1080p but the player output was limited to 1080i. The argument was that the TV set can de-interlace 1080i to 1080p and there would be noticeable difference.
Hardware de-interlacing (TV sets) has no extra data beyond two color buffers (half Y resolution). With only color data available, you can only reconstruct still scenes perfectly (matching 1080p).

Game engines have extra data that helps in reconstruction, such as depth buffer, objectID / triangleID / materialID buffer and most importantly per pixel motion vector buffer. Motion vectors allow much better reconstruction and the other extra data can be used to calculate occlusion (removing artifacts). The reconstruction quality is very close to native in areas with no occlusion (both frames' data can be used). Occluded areas of course must be interpolated, as previous frame data is not available. Checkerboard interpolation produces slightly better quality than interlacing, as each reconstructed pixel has 4 valid neighbors (+-X, +-Y). In comparison, interlacing must reconstruct the image without any X neighbors. The quality for occluded areas is equal to upscaled 1920x2160. However it's very hard to see this (esp at 60 fps), as the occluded area will receive full resolution update in the next frame.

It is hard to notice image quality issues of 1080i source when modern de-interlacing hardware is used. Checkerboard rendering in games will have less issues and will look sharper. Most people would not notice any difference in moving image, even when looking up close. Additionally, 4K makes pixels 2x2 smaller. It is considerably harder to notice any de-interlacing issues, and even harder to notice checkerboard rendering issues.
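
To make the reconstruction step above a bit more concrete, here is a heavily simplified CPU-side sketch of a checkerboard resolve: half the pixels come straight from the current frame, the rest are reprojected from the previous frame via motion vectors, with a crude depth comparison standing in for proper occlusion handling. The buffer layout, names and rejection test are illustrative assumptions, not any console's or engine's actual implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Heavily simplified checkerboard resolve, following the description above:
// pixels on one "colour" of the checkerboard were rendered this frame, the
// others are filled from the reprojected previous frame when motion vectors
// say that data is still valid, or interpolated from the four rendered
// neighbours when occluded/disoccluded. Assumes full-resolution depth and
// motion-vector buffers are available; layout and rejection test are guesses.
struct Frame {
    int w, h;
    std::vector<float> color;                 // one float per pixel keeps it short
    std::vector<float> depth;
    std::vector<float> motionX, motionY;      // per-pixel motion, in pixels

    float at(const std::vector<float>& buf, int x, int y) const {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return buf[y * w + x];
    }
};

void checkerboardResolve(const Frame& cur, const Frame& prev, std::vector<float>& out) {
    out.resize(static_cast<size_t>(cur.w) * cur.h);
    for (int y = 0; y < cur.h; ++y) {
        for (int x = 0; x < cur.w; ++x) {
            if (((x + y) & 1) == 0) {                     // rendered this frame
                out[y * cur.w + x] = cur.at(cur.color, x, y);
                continue;
            }
            // Reproject: where was this pixel in the previous frame?
            int px = x - static_cast<int>(std::lround(cur.at(cur.motionX, x, y)));
            int py = y - static_cast<int>(std::lround(cur.at(cur.motionY, x, y)));
            bool occluded =
                std::fabs(prev.at(prev.depth, px, py) - cur.at(cur.depth, x, y)) > 0.01f;
            if (!occluded) {
                out[y * cur.w + x] = prev.at(prev.color, px, py);  // reuse old pixel
            } else {
                // Disoccluded: average the 4 rendered neighbours (+-X, +-Y).
                out[y * cur.w + x] = 0.25f * (cur.at(cur.color, x - 1, y) +
                                              cur.at(cur.color, x + 1, y) +
                                              cur.at(cur.color, x, y - 1) +
                                              cur.at(cur.color, x, y + 1));
            }
        }
    }
}
```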
 
It is hard to notice image quality issues of 1080i source when modern de-interlacing hardware is used. Checkerboard rendering in games will have less issues and will look sharper. Most people would not notice any difference in moving image, even when looking up close. Additionally, 4K makes pixels 2x2 smaller. It is considerably harder to notice any de-interlacing issues, and even harder to notice checkerboard rendering issues.

Am I reading this correctly if I say that native 4K rendering is not worth the resources vs checkerboard or similar, as long as the source resolution from checkerboard is "high" enough to start with? Basically, diminishing returns mean that the extra resources used for native 4K are better used elsewhere in combination with checkerboard or a similar method?

What does checkerboard help most with: CPU/GPU cycles (is it cycles on a GPU?), memory, bandwidth, or all of it?
 
Hardware de-interlacing (TV sets) has no extra data beyond two color buffers (half Y resolution). With only color data available, you can only reconstruct still scenes perfectly (matching 1080p).

Game engines have extra data that helps in reconstruction, such as depth buffer, objectID / triangleID / materialID buffer and most importantly per pixel motion vector buffer. Motion vectors allow much better reconstruction and the other extra data can be used to calculate occlusion (removing artifacts). The reconstruction quality is very close to native in areas with no occlusion (both frames' data can be used). Occluded areas of course must be interpolated, as previous frame data is not available. Checkerboard interpolation produces slightly better quality than interlacing, as each reconstructed pixel has 4 valid neighbors (+-X, +-Y). In comparison, interlacing must reconstruct the image without any X neighbors. The quality for occluded areas is equal to upscaled 1920x2160. However it's very hard to see this (esp at 60 fps), as the occluded area will receive full resolution update in the next frame.

It is hard to notice image quality issues of 1080i source when modern de-interlacing hardware is used. Checkerboard rendering in games will have less issues and will look sharper. Most people would not notice any difference in moving image, even when looking up close. Additionally, 4K makes pixels 2x2 smaller. It is considerably harder to notice any de-interlacing issues, and even harder to notice checkerboard rendering issues.

Thanks for the detailed explanation, Sebbi! I just realized I put "noticeable" instead of "not noticeable" in the 1080i portion. lol changes the context completely!

It's good to see that the 4k rendering techniques will be even less of a concern.

So that naturally begs the question: why go native 4k at all?
 
Am I reading this correctly if I say that native 4K rendering is not worth the resources vs checkerboard or similar, as long as the source resolution from checkerboard is "high" enough to start with? Basically, diminishing returns mean that the extra resources used for native 4K are better used elsewhere in combination with checkerboard or a similar method?
Yes.

What does checkerboard help most with: CPU/GPU cycles (is it cycles on a GPU?), memory, bandwidth, or all of it?
It's rendering fewer pixels, so it helps with everything GPU-related in drawing pixels: less memory (smaller buffers), lower BW requirements, fewer GPU instructions, so you can spend more per pixel.
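
As a rough back-of-the-envelope illustration of those savings (my own numbers, plain 32-bit colour-buffer arithmetic, nothing engine-specific):

```cpp
#include <cstdio>

// Back-of-the-envelope pixel and colour-buffer maths for native 4K vs
// checkerboard (half the shaded samples per frame). Illustrative only.
int main() {
    const long long native  = 3840LL * 2160;                   // 8,294,400 pixels
    const long long checker = native / 2;                      // 4,147,200 shaded pixels
    const double nativeMB   = native  * 4 / (1024.0 * 1024.0); // 32-bit colour buffer
    const double checkerMB  = checker * 4 / (1024.0 * 1024.0);
    std::printf("native 4K   : %lld pixels, %.1f MB colour\n", native, nativeMB);
    std::printf("checkerboard: %lld pixels, %.1f MB colour\n", checker, checkerMB);
}
```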
 
It's rendering fewer pixels, so it helps with everything GPU-related in drawing pixels: less memory (smaller buffers), lower BW requirements, fewer GPU instructions, so you can spend more per pixel.
Pretty much everything, except that checkerboard/interlace doesn't help texture sampling memory bandwidth at all. You still need to fetch textures according to the mip level of the actual resolution (4K in this case). This is similar to temporal supersampling. Temporal supersampling requires negative mip bias to match resolution of real supersampled image (otherwise the result will be blurry). GPUs (like CPUs) don't access memory one byte at a time. 64/128 byte cache lines are used. Checkerboard touches as many texture cache lines as native, thus texture bandwidth cost is the same.

Since checkerboard only samples half the texels, it reduces the filtering cost by half. So you most likely end up being bandwidth limited before being filter limited. Excess filter cycles could for example be used for anisotropic filtering. 2x anisotropic costs 2x filter cycles (4x = 4x, etc). Anisotropic filtering also slightly adds bandwidth cost, so it's likely not free even in this case.

It's worth noting that by "texture sampling" I mean specifically texture sampling of 3d objects (mipmapped trilinear/aniso texture reads). Render targets are obviously tightly packed in memory, so checkerboarding halves the memory bandwidth of post processing, etc.
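
To put a number on the mip bias point, this is the standard per-axis LOD bias relation; the resolutions plugged in are just examples, and the exact bias a real checkerboard renderer applies depends on how its sample pattern is laid out:

```cpp
#include <cmath>
#include <cstdio>

// Standard LOD/mip bias relation when shading fewer samples than the output
// resolution: bias = log2(per-axis render resolution / per-axis output resolution).
// The resolutions below are examples only.
static float mipBias(double renderPixels, double outputPixels) {
    // Per-axis scale is the square root of the pixel-count ratio.
    return static_cast<float>(std::log2(std::sqrt(renderPixels / outputPixels)));
}

int main() {
    const double out4k = 3840.0 * 2160.0;
    std::printf("checkerboard (half samples): bias %.2f\n", mipBias(out4k / 2.0, out4k));    // ~ -0.5
    std::printf("1800p upscaled to 4K       : bias %.2f\n", mipBias(3200.0 * 1800.0, out4k)); // ~ -0.26
}
```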

For more details, I recommend reading this Rainbow Six Siege GDC 2016 presentation:
http://twvideo01.ubm-us.net/o1/vaul...s/El_Mansouri_Jalal_Rendering_Rainbow_Six.pdf
 
What's 'anisotropic filtering'? Never heard of such a thing on consoles before.
And you certainly never will hear of such a thing. :mrgreen:

I think it's very revealing that some of the best-looking exclusives this gen (KZSS, InfamousSS, UC4) and some of the best-looking multiplats (like The Witcher 3) have a decent amount of AF on console (4x to 16x). So the hardware is definitely capable of AF in all types of games.
 