Digital Foundry Article Technical Discussion [2018]

Thank god for PCs.


Most people wouldn't notice if not for DF telling them. Even last gen some people thought they were playing in HD while using RCA cables.

Also, MSAA > 4K.
An assumption and an anecdote. I, and I would say other people on this tech forum, can probably differentiate between 720p and 1080p at least. Personally, I can tell the difference between 900p and 1080p as well, though with, for example, 1440p vs 1600p and dynamic resolution scaling at higher resolutions it becomes murky to me. Native 4K, however, is strikingly obvious to me on the Xbox One X.

And what is that supposed to mean? That 720p with MSAA looks better than 4K with no AA? Than 1080p? I would be shocked if you actually compared this yourself to come to that conclusion and aren't just rambling. :-?

There is so much aliasing that MSAA doesn't touch, since it only works on geometry and not shaders or sub-pixel detail. It's clean, but so is SMAA 1x, which is far cheaper computationally and tackles shader aliasing as well, so I think that's the best option if you don't want any blur.

The downside compared to MSAA would be that it's a post-process effect, so some edges will look better than others, while with MSAA coverage on geometric edges is even throughout. MSAA is probably the best fit for racers: you're moving so fast you won't notice sub-pixel shimmering anyway, but you're always facing the car, which it works well on. That's probably why the Forza devs have used it since the Xbox 360 and still do on the X1X.

Also I can't get over how good FH4 looks.
 
A cleverer solution would be resolution-independent shading. Surfaces with low-frequency detail could be rendered at a 'lower res' and combined with high-quality edges and masking. Using something like the ID buffer in PS4 Pro, you could render all IDs at native resolution, then render some geometry at 720p and combine it, masked, with the high-quality stuff, sort of thing. You could also render different parts of a surface's shaders at different frequencies, with interpolation on low-frequency detail. We already need, and are implementing, computational solutions for high-frequency values that would otherwise generate noise if rendered naively.

The whole notion of a single-resolution buffer is outmoded, and doesn't reflect the advances in rendering tech and know-how being used in other computer-imaging fields, which look at how we perceive information to avoid wasted effort.
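To make the masking idea concrete, here's a minimal CPU-side sketch in Python/NumPy: composite low-res shading into a native-res frame using a full-res ID mask. Purely illustrative; the buffer names and the composite_mixed_res helper are invented for this example, and a real renderer would only shade the unmasked pixels at full rate on the GPU rather than producing two complete buffers.

```python
# Minimal sketch of masked, mixed-resolution compositing (NumPy mock, not a
# real engine pipeline). Assumes: a full-res ID buffer saying which object
# covers each pixel, a set of object IDs we chose to shade at low resolution,
# a full-res shaded buffer for everything else, and a low-res shaded buffer
# that gets upscaled before compositing. All names are illustrative.
import numpy as np

def composite_mixed_res(id_buffer, low_res_ids, full_res_shaded, low_res_shaded):
    """Combine full-res and upscaled low-res shading using a full-res ID mask."""
    h, w = id_buffer.shape
    lh, lw = low_res_shaded.shape[:2]

    # Nearest-neighbour upscale of the low-res shading to native resolution.
    ys = np.arange(h) * lh // h
    xs = np.arange(w) * lw // w
    upscaled = low_res_shaded[ys[:, None], xs[None, :]]

    # Full-res mask: pixels whose ID belongs to a "shade cheaply" surface.
    mask = np.isin(id_buffer, list(low_res_ids))

    # Geometry edges stay crisp because the mask itself is native resolution.
    return np.where(mask[..., None], upscaled, full_res_shaded)

# Toy usage: a 1080p frame where object ID 7 is shaded at 720p.
ids = np.random.randint(0, 16, size=(1080, 1920))
full = np.random.rand(1080, 1920, 3).astype(np.float32)
low = np.random.rand(720, 1280, 3).astype(np.float32)
frame = composite_mixed_res(ids, {7}, full, low)
print(frame.shape)  # (1080, 1920, 3)
```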
 
While I really liked the PS2, I'm more impressed by the graphics the PS4 is producing than I was by the PS2's at the time. I don't think it would be a good idea to move to something like the Emotion Engine again.
 
A cleverer solution would be resolution-independent shading. Surfaces with low-frequency detail could be rendered at a 'lower res' and combined with high-quality edges and masking. Using something like the ID buffer in PS4 Pro, you could render all IDs at native resolution, then render some geometry at 720p and combine it, masked, with the high-quality stuff, sort of thing. You could also render different parts of a surface's shaders at different frequencies, with interpolation on low-frequency detail. We already need, and are implementing, computational solutions for high-frequency values that would otherwise generate noise if rendered naively.

The whole notion of a single-resolution buffer is outmoded, and doesn't reflect the advances in rendering tech and know-how being used in other computer-imaging fields, which look at how we perceive information to avoid wasted effort.

Gravity Rush 2 does something like that: it renders the geometry at 4K while the in-surface detail is 1080p. I haven't seen it in person.

Anyway, that all sounds smart when resources are at an absolute premium, but I can't imagine it looking as good as 100% native even in the best-case scenario. Seems to me this kind of thing will be more valuable when we move past 4K; the minimum res could be 4K while other components are 8K, 16K, etc.

Edit: The way I think about it is that games with a lot of simpler detail will by default have resources left over to splurge on a higher resolution anyway, so a bunch of different buffers would be pointless.
 
A cleverer solution would be resolution-independent shading. Surfaces with low-frequency detail could be rendered at a 'lower res' and combined with high-quality edges and masking. Using something like the ID buffer in PS4 Pro, you could render all IDs at native resolution, then render some geometry at 720p and combine it, masked, with the high-quality stuff, sort of thing. You could also render different parts of a surface's shaders at different frequencies, with interpolation on low-frequency detail. We already need, and are implementing, computational solutions for high-frequency values that would otherwise generate noise if rendered naively.

The whole notion of a single-resolution buffer is outmoded, and doesn't reflect the advances in rendering tech and know-how being used in other computer-imaging fields, which look at how we perceive information to avoid wasted effort.

You're seeing this increasingly on the PC. Heck, resolution scaling options are showing up in more and more games: negative scaling (sub-native resolution scaled up to native) when hardware can't easily maintain the chosen settings, and positive scaling (higher-than-native resolution scaled down to native for a supersampling effect) if you have excess hardware power for the chosen settings.
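As a rough illustration (not any particular engine's API), a resolution-scale setting just resizes the internal render target, and the pixel shading cost goes roughly with the square of the scale:

```python
# Rough illustration: map a resolution-scale slider value to the internal
# render target size. Names are made up for the example.
def render_target_size(native_w, native_h, scale):
    """Internal render resolution for a given resolution-scale setting."""
    return max(1, round(native_w * scale)), max(1, round(native_h * scale))

for scale in (0.75, 1.0, 1.5):
    w, h = render_target_size(1920, 1080, scale)
    cost = (w * h) / (1920 * 1080)
    print(f"scale {scale:.2f}: render {w}x{h}, ~{cost:.2f}x the pixel work of native")
# 0.75 -> 1440x810,  ~0.56x (negative scaling, upscaled to the 1080p output)
# 1.00 -> 1920x1080,  1.00x (native)
# 1.50 -> 2880x1620, ~2.25x (positive scaling / supersampling, downscaled)
```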

And of course, many of the graphics sliders are just adjusting the resolution/detail of components that make up the scene. The ones that have existed for over a decade, for example, are shadow resolution scaling and texture resolution scaling. We see similar things with shaders, particle effects, etc.

So, it isn't anything new...at least on PC. On console, developers traditionally haven't done much scaling as they have a fixed hardware target and they just create the assets WRT that fixed hardware target.

PS4-P and XBO-X have shaken things up a bit for consoles, however, and some console only developers are using that to gain experience with variable resolution/detail assets.

And as we've seen on PC for many years now, a 2x increase in the detail used for X graphical feature sometimes leads to almost imperceptible increases in graphics quality for that graphical feature. So going lower resolution/detail/quality for some things has almost no visual impact whereas it has a relatively larger impact for other things.

Regards,
SB
 
An assumption and an anecdote. I, and I would say other people on this tech forum, can probably differentiate between 720p and 1080p at least. Personally, I can tell the difference between 900p and 1080p as well, though with, for example, 1440p vs 1600p and dynamic resolution scaling at higher resolutions it becomes murky to me. Native 4K, however, is strikingly obvious to me on the Xbox One X.

And what is that supposed to mean? That 720p with MSAA looks better than 4K with no AA? Than 1080p? I would be shocked if you actually compared this yourself to come to that conclusion and aren't just rambling. :-?

There is so much aliasing that MSAA doesn't touch, since it only works on geometry and not shaders or sub-pixel detail. It's clean, but so is SMAA 1x, which is far cheaper computationally and tackles shader aliasing as well, so I think that's the best option if you don't want any blur.

The downside compared to MSAA would be that it's a post-process effect, so some edges will look better than others, while with MSAA coverage on geometric edges is even throughout. MSAA is probably the best fit for racers: you're moving so fast you won't notice sub-pixel shimmering anyway, but you're always facing the car, which it works well on. That's probably why the Forza devs have used it since the Xbox 360 and still do on the X1X.

Also I can't get over how good FH4 looks.
Pixel quality over pixel quantity. An offline CGI film at 720p looks better than 4k game graphics.

MSAA for opaque geometry, something cheaper for transparencies.

A cleverer solution would be resolution-independent shading. Surfaces with low-frequency detail could be rendered at a 'lower res' and combined with high-quality edges and masking. Using something like the ID buffer in PS4 Pro, you could render all IDs at native resolution, then render some geometry at 720p and combine it, masked, with the high-quality stuff, sort of thing. You could also render different parts of a surface's shaders at different frequencies, with interpolation on low-frequency detail. We already need, and are implementing, computational solutions for high-frequency values that would otherwise generate noise if rendered naively.

The whole notion of a single-resolution buffer is outmoded, and doesn't reflect the advances in rendering tech and know-how being used in other computer-imaging fields, which look at how we perceive information to avoid wasted effort.
Yes, remember when rendering particles at quarter-res was all the rage? The Saints Row games also used a technique called inferred lighting, where they could do the shading at half-res and then use a discontinuity filter to blend with the full-res frame. Post-effects are commonly done at lower resolutions, especially AO. This is definitely going to be the case with ray tracing; the PICA PICA demo already did reflections at half-res.
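As a sketch of the general low-res-plus-edge-aware-upsample idea (not the actual Saints Row or PICA PICA code), here's a 'nearest depth' style upsample in Python/NumPy: an effect rendered at half res is brought back to full res by picking, per pixel, the low-res sample whose depth best matches the full-res depth, so the cheap shading doesn't bleed across discontinuities.

```python
# Depth-aware ("nearest depth") upsampling of a half-res effect buffer, e.g.
# ambient occlusion, back to full res. NumPy mock; all names illustrative.
import numpy as np

def nearest_depth_upsample(low_effect, low_depth, full_depth):
    """For each full-res pixel, take the low-res neighbour whose depth is
    closest to the full-res depth (a simple discontinuity-aware filter)."""
    H, W = full_depth.shape
    h, w = low_depth.shape

    # Nearest low-res texel coordinates for every full-res pixel.
    y = np.minimum(np.arange(H) * h // H, h - 1)
    x = np.minimum(np.arange(W) * w // W, w - 1)

    best_val = np.zeros((H, W), dtype=low_effect.dtype)
    best_err = np.full((H, W), np.inf)

    # Consider a 2x2 neighbourhood of low-res texels around each pixel.
    for dy in (0, 1):
        for dx in (0, 1):
            yy = np.minimum(y + dy, h - 1)
            xx = np.minimum(x + dx, w - 1)
            cand_depth = low_depth[yy[:, None], xx[None, :]]
            cand_val = low_effect[yy[:, None], xx[None, :]]
            err = np.abs(cand_depth - full_depth)
            take = err < best_err
            best_err = np.where(take, err, best_err)
            best_val = np.where(take, cand_val, best_val)
    return best_val

# Toy usage: a 540p AO term upsampled against a 1080p depth buffer.
full_depth = np.random.rand(1080, 1920).astype(np.float32)
low_depth = full_depth[::2, ::2]               # pretend half-res depth
low_ao = np.random.rand(540, 960).astype(np.float32)
ao_full = nearest_depth_upsample(low_ao, low_depth, full_depth)
print(ao_full.shape)  # (1080, 1920)
```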
 
Apparently there was an Edge interview with devs in the vein of today's Rich article, which spawned more mainstream articles; links in the post.

https://www.resetera.com/threads/am...4-image-quality-requires-7-4-teraflops.55382/

Timothy Lottes and others talk again about the huge burden of 4K, and some omniscient gaming nerds have their heads up their asses arguing that Forza can do it because it's smart about cutting corners when porting the game from the S to the X, so now we should expect next-gen games looking like a true generational leap at native 4K with only maybe two times the flops, a hundred GB/s more for the GPU, and only double or triple the RAM achievable next gen. Oh god, what is happening with nerds on the internet lately. Mods, please pardon this slightly gassed Saturday night GMT+2 rant and the maybe unrelated links, but every time I see pixel "talk" lately I'm triggered, I swear. Rich & co, if you read this, please next time say bluntly how it is: 50TF and 2TB/s is needed for true next-gen 4K@60, and you won't get that in console form in 2020.
 
Did they know even their Xbox 360 could render Forza at 4K/60fps if the game had 200 polygons per car? :LOL: AMD made a huge mistake not hiring them for the Navi design.
 
Anyway, that all sounds smart when resources are at an absolute premium, but I can't imagine it looking as good as 100% native even in the best-case scenario.
It won't. And a reconstructed 4K image won't look as good as native 4K in all cases. And a compressed video stream won't look as good as the native feed, a JPEG won't look as good as the raw bitmap, a quarter-res Gaussian blur won't look as good as full res, and compressed audio won't sound as good as uncompressed. However, the differences can be so minimal that people can't notice them. What's the point of rendering your blurs at perfect res if people can't perceive the improvement? What's the point of rendering your dirt texture at full res if people can't see the difference between that and half res?

Realtime CG has always been about working smarter, not harder. The more programmable the rendering pipeline has become, the more opportunities there are to work smarter, and it strikes me as silly to adhere to a 'pure' standard if it means what's on screen doesn't end up looking as good.

In the two screenshots below, in one of them I've manually masked 20% of the render with 720p material. Can you tell which? I can't! Then consider it in motion. That's roughly a 10% saving just with a simple process. A smarter in-game process could probably save 20%, minus reconstruction overheads.
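For what it's worth, the ~10% figure checks out as a back-of-the-envelope calculation, assuming the masked fifth of the frame is shaded at 720p instead of 1080p and ignoring compositing overhead:

```python
# Back-of-the-envelope check of the saving from masking 20% of a 1080p frame
# with 720p material (compositing cost ignored).
native = 1920 * 1080
low = 1280 * 720
masked_fraction = 0.20

saving = masked_fraction * (1 - low / native)
print(f"{saving:.1%}")  # ~11.1%, i.e. roughly the 10% quoted above
```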

If it's something possible to do without too much bother, why would you choose not to? Why would you choose to render more pixels/detail if players can't actually notice that detail?

Edit: These images are broken - the edited one is just 720p without it being selectively applied. New images further down.

Image1.jpg

Image2.jpg
 
Apparently there was an Edge interview with devs in the vein of today's Rich article, which spawned more mainstream articles; links in the post.

https://www.resetera.com/threads/am...4-image-quality-requires-7-4-teraflops.55382/

Timothy Lottes and others talk again about the huge burden of 4K, and some omniscient gaming nerds have their heads up their asses arguing that Forza can do it because it's smart about cutting corners when porting the game from the S to the X, so now we should expect next-gen games looking like a true generational leap at native 4K with only maybe two times the flops, a hundred GB/s more for the GPU, and only double or triple the RAM achievable next gen. Oh god, what is happening with nerds on the internet lately. Mods, please pardon this slightly gassed Saturday night GMT+2 rant and the maybe unrelated links, but every time I see pixel "talk" lately I'm triggered, I swear. Rich & co, if you read this, please next time say bluntly how it is: 50TF and 2TB/s is needed for true next-gen 4K@60, and you won't get that in console form in 2020.

The PS4 Pro, with its relatively meagre 4.2TF, is able to output PS4-quality games at 4K, albeit using checkerboarding.

We're looking at a minimum of 8TF more performance, double the bandwidth and capacity of memory, and a major improvement to the CPU.

So take 4K CB Horizon Zero Dawn and apply the above to it. People need to stop flapping; we're all going to be fine.
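For anyone wondering what checkerboarding actually buys, here's a heavily simplified, spatial-only illustration in Python/NumPy: each frame shades only half the pixels in a checker pattern and the rest are filled from shaded neighbours. Real checkerboard rendering (as on the PS4 Pro) also reprojects the previous frame with motion vectors and uses the ID buffer to reject stale samples, so treat this purely as a sketch of the idea, not the actual technique.

```python
# Heavily simplified checkerboard illustration (NumPy mock). Shade half the
# pixels this frame; reconstruct the missing ones from their shaded
# 4-neighbours. Spatial-only; no temporal reprojection.
import numpy as np

def checker_mask(h, w, parity):
    yy, xx = np.mgrid[0:h, 0:w]
    return ((yy + xx) % 2) == parity

def reconstruct(shaded, mask):
    """Fill unshaded pixels with the average of their shaded 4-neighbours."""
    padded = np.pad(shaded, 1, mode='edge')
    pmask = np.pad(mask.astype(np.float32), 1, mode='edge')
    total = (padded[:-2, 1:-1] * pmask[:-2, 1:-1] +
             padded[2:, 1:-1] * pmask[2:, 1:-1] +
             padded[1:-1, :-2] * pmask[1:-1, :-2] +
             padded[1:-1, 2:] * pmask[1:-1, 2:])
    count = (pmask[:-2, 1:-1] + pmask[2:, 1:-1] +
             pmask[1:-1, :-2] + pmask[1:-1, 2:])
    filled = total / np.maximum(count, 1)
    return np.where(mask, shaded, filled)

# Toy usage on a grayscale "frame": shade half the pixels, rebuild the rest.
truth = np.random.rand(1080, 1920).astype(np.float32)
mask = checker_mask(1080, 1920, parity=0)      # pixels shaded this frame
shaded = np.where(mask, truth, 0.0)
frame = reconstruct(shaded, mask)
print(float(np.abs(frame - truth).mean()))     # reconstruction error
```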
 
Technically, but speaking honestly it's a massive difference, to the point where I honestly prefer an artifact-free 1080p image to checkerboard, at least in the games I've played. The uneven sawtooth edges and the sort of interlaced look particles can have are off-putting.

And I'm talking checkerboard vs. 1080p on a 4K screen.
Don't you get a headache trying to play games with your forehead pressed to the TV? The idea that you can see sub mm sawtooth artifacts while actually playing is comical.
 
The bottom image is worse.
Don't you get a headache trying to play games with your forehead pressed to the TV? The idea that you can see sub mm sawtooth artifacts while actually playing is comical.
Whatever you say.

Pretty obvious in Nex Machina in 4K mode. It's 900p on the base PS4, so probably not a full 4K checkerboard on the Pro.

Switching from the native 4K mode in WipEout to the checkerboard mode, the blur is really evident at the start of races and always noticeable on the vehicle.

I've not played Horizon ZD, though, which is supposedly the best implementation.
 
Sadly JPEG ruins the comparison a bit as even the native 1080p is a little soft, but PNGs were too big to be uploaded.
 