Digital Foundry Article Technical Discussion [2020]

I never said it will be faster; using boost clocks, a 3700X is about 25% faster than the PS5 CPU in theory. I said the gap is not that big, and if SMT is implemented well on both the Ryzen CPU and the console CPUs, it will help. There is more than decompression done on the PS5 I/O complex: memory mapping and check-in, and 3D audio is handled on the Tempest Engine side. And console APIs are a bit thinner.
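(For a rough sense of that theoretical gap, with numbers that are mine rather than the poster's: a 3700X boosts to around 4.4 GHz on the same Zen 2 architecture as the PS5 CPU, which caps at 3.5 GHz, and 4.4 / 3.5 ≈ 1.26, i.e. roughly 25% more per-core throughput in theory, before SMT behaviour, cache, and fixed-versus-variable clocks enter into it.)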

I don't think a 3700X and the console CPUs are that far from each other either, due to optimizations etc. True on the I/O side for consoles; on PC, work is underway to let the GPU handle decompression.

My main concern with Cyberpunk 2077 is not the CPU but the GPU and the triangle-based ray tracing.

Let CDPR worry about that; they will find the best settings for the consoles. It's not about matching the PC experience, it's about finding the best experience possible for the given hardware.
 
Triangle-based RT, which is what Crysis uses on ... last-gen consoles? The PS4 Pro and One X, if I'm correct, is what works well on PS5 and AMD hardware. The stuff that needs denoising, like GI, is what is greatly accelerated by Nvidia hardware. So I think they will do reflections before the lighting work, especially as the game seems to be designed without RT; the atmosphere can change dramatically. I love it more with RT, but it looks like the art was authored without RT for the lighting.
 

PC Best Settings


https://www.eurogamer.net/articles/digitalfoundry-2020-cyberpunk-2077-pc-best-settings



Note: we recommend DLSS Quality mode at 1080p, Balanced at 1440p, and Performance at 4K.
For optimised ray tracing settings, I recommend turning off ray-traced shadows, running RT lighting at Medium, and turning on RT reflections.
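To see where those mode recommendations come from, here's the arithmetic using the published DLSS 2.x per-axis render-scale factors (Quality ≈ 0.667, Balanced 0.58, Performance 0.5, Ultra Performance ≈ 0.333); a quick sketch of my own, not something from the article:

```cpp
#include <cstdio>

// Published DLSS 2.x per-axis render-scale factors.
struct Mode { const char* name; double scale; };

int main() {
    const Mode modes[] = {
        {"Quality",           2.0 / 3.0},   // ~0.667
        {"Balanced",          0.58},
        {"Performance",       0.50},
        {"Ultra Performance", 1.0 / 3.0},   // ~0.333
    };
    const int outputs[][2] = {{1920, 1080}, {2560, 1440}, {3840, 2160}};

    for (const auto& out : outputs) {
        std::printf("Output %dx%d:\n", out[0], out[1]);
        for (const auto& m : modes) {
            // Internal resolution the game renders at before DLSS upscales.
            std::printf("  %-17s -> %4.0f x %4.0f internal\n",
                        m.name, out[0] * m.scale, out[1] * m.scale);
        }
    }
    return 0;
}
```

The takeaway: Performance mode at 4K still reconstructs from a native-1080p internal image, while Performance at 1440p works from just 720p, which lines up with recommending Quality at 1080p and only dropping to Performance at 4K.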
 
Agreed on his RT recommendations if you need to budget for your card. At 1440p, I found the IQ hit from dropping DLSS from Quality to Balanced to be too much, personally. I'd like to run Psycho with Balanced, but overall IQ is much better outdoors with Ultra and Quality. It certainly comes down to personal preference and the raw power available for those two options at 1440p.

Using DLSS Performance or Ultra Performance at 1440p starts to feel like launch-window PS3 multi-platform games.
 

Yeah, at 1080p I'm finding Quality is the only good DLSS mode, but I can still see it has pros and cons. It gets rid of a lot of TAA's issues, but at the cost of a somewhat softer image at times. I imagine at 1440p you wouldn't want to go any lower than Balanced without a big cost to image quality. Performance mode probably starts to become viable at 4K.
 
I'm really curious how the overrides work for ray tracing. I'm assuming that if you turn on RT reflections, screen-space reflections are not used. Or is this a game that uses SSR when it can and falls back to RT reflections when SSR fails? Same question for SSAO: if you have RT lighting on, can you just disable SSAO, or does the game override the SSAO setting anyway? I guess the same goes for RT shadows. I'm assuming it overrides all of the other shadow settings, or is it RT shadows only for certain lights? Low settings with ultra RT would be an interesting experiment.

One thing I hadn't really considered is the CPU cost of RT. In Control, my 3600 seems to be a bottleneck at times, or at least I can't identify what the bottleneck is. My framerate can drop in particular scenes, and if I switch between DLSS modes I don't gain any performance. GPU usage shows around 60-70%, so I'm assuming it's the CPU. I imagine there's a sunk cost in the driver for the BVH which ends up being my limit, but it's just a guess.
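On the driver-side BVH suspicion: with DXR, acceleration-structure builds and refits are recorded through the API every frame for anything that moves or deforms, and the driver's CPU-side handling of those calls is real work that changing DLSS modes can't reduce. A minimal fragment showing the kind of per-frame call I mean; this is generic DXR usage, not Control's actual code, and it assumes a valid DXR-capable command list and pre-allocated buffers:

```cpp
#include <d3d12.h>

// Refit (update) one bottom-level acceleration structure for a deforming
// mesh. With many animated objects this gets recorded every frame, and the
// driver-side bookkeeping for these calls shows up as CPU frame time.
void RefitBlas(ID3D12GraphicsCommandList4* cmdList,
               const D3D12_RAYTRACING_GEOMETRY_DESC& geometry,
               D3D12_GPU_VIRTUAL_ADDRESS blas,     // existing BLAS to update
               D3D12_GPU_VIRTUAL_ADDRESS scratch)  // pre-allocated scratch
{
    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    inputs.Flags =
        D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_ALLOW_UPDATE |
        D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PERFORM_UPDATE;
    inputs.NumDescs = 1;
    inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.pGeometryDescs = &geometry;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC desc = {};
    desc.DestAccelerationStructureData    = blas;
    desc.Inputs                           = inputs;
    desc.SourceAccelerationStructureData  = blas;  // update in place
    desc.ScratchAccelerationStructureData = scratch;

    cmdList->BuildRaytracingAccelerationStructure(&desc, 0, nullptr);
}
```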

Edit: Oh, I tried running Nsight with Control, but it said it wasn't supported because it was a D3D11On12 implementation. Kind of interesting: it uses an API layer that converts D3D11 calls to D3D12 instead of being a native D3D12 implementation.
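For context, D3D11On12 is a public Microsoft layer: the application creates a real D3D12 device and command queue, then wraps a D3D11 device around them, and D3D11 calls get translated into D3D12 work underneath. A minimal creation sketch of that public API (standard usage, nothing specific to Control):

```cpp
#include <d3d11on12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a D3D11 device layered on top of a D3D12 device and queue.
// D3D11 calls made through it are translated into D3D12 work underneath,
// which is what tools like Nsight report as a "D3D11On12 implementation".
HRESULT CreateLayeredDevice(ComPtr<ID3D11Device>& d3d11,
                            ComPtr<ID3D11DeviceContext>& ctx)
{
    ComPtr<ID3D12Device> d3d12;
    HRESULT hr = D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                   IID_PPV_ARGS(&d3d12));
    if (FAILED(hr)) return hr;

    D3D12_COMMAND_QUEUE_DESC queueDesc = {};  // default direct queue
    ComPtr<ID3D12CommandQueue> queue;
    hr = d3d12->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));
    if (FAILED(hr)) return hr;

    IUnknown* queues[] = { queue.Get() };
    return D3D11On12CreateDevice(d3d12.Get(), /*Flags=*/0,
                                 /*pFeatureLevels=*/nullptr, 0,
                                 queues, 1, /*NodeMask=*/0,
                                 &d3d11, &ctx,
                                 /*pChosenFeatureLevel=*/nullptr);
}
```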
 

Thanks for the impressions. How would you compare DLSS with native? Is it better than native, like some videos claimed, or is it a slight hit compared to native in this game?
 

He sums up my views pretty well. He also talks about where DLSS can be better than native, but it's not always the case.

https://forum.beyond3d.com/posts/2182614/ I showed some worst case scenarios for DLSS here.
 
Turning on RT uses a completely different path and SSR form - the SSR setting in the menu affects only the SSR used when RT reflections are off. It makes sense if you think about it.

The SSR without RT does its stretching and filtering etc. based on an assumption of the distribution of rays on the surface... it tends to just stretch and elongate all reflections, since it is a dumb screen-space effect. The SSR blended into RT reflections is just those rays that terminate on objects that happen to be in screen space, and those are still denoised in the same manner as the rest of the reflections.
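A toy sketch of the hybrid scheme as I read that description; every name here is hypothetical, nothing from CDPR's code:

```cpp
#include <cstdio>

// Toy sketch of the hybrid reflection scheme described above.
// All names and structure are hypothetical, not CDPR's actual code.
struct Hit { bool found; };

// Cheap screen-space march: can only reflect what is currently on screen.
Hit TraceScreenSpace() { return {false}; }

// Full ray trace against the BVH: handles off-screen geometry too.
Hit TraceBVH() { return {true}; }

Hit ShadeReflection(bool rtReflectionsOn)
{
    if (!rtReflectionsOn) {
        // Pure SSR path: this is where the stretch/elongate filtering that
        // hides missing off-screen data applies, and it is the only path
        // the in-menu SSR quality setting affects.
        return TraceScreenSpace();
    }
    // RT path: a ray whose hit point happens to be visible on screen can be
    // resolved from screen-space data; everything else falls back to the BVH.
    Hit ss = TraceScreenSpace();
    Hit hit = ss.found ? ss : TraceBVH();
    // Both kinds of hits then feed the same RT denoiser, which is why the
    // SSR-style stretching is not applied on this path.
    return hit;
}

int main() {
    std::printf("RT off, SSR hit found: %d\n", ShadeReflection(false).found);
    std::printf("RT on,  hit found:     %d\n", ShadeReflection(true).found);
}
```

If SSR data really is reused as a cheap source of hits on the RT path, that would also explain the observation below that fully disabling SSR with RT maxed out lowers overall IQ.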
 

In my testing (3090 FE), disabling SSR fully with RT maxed out lowers overall IQ. I'll assume there's probably a limit on the RT budget for a given scene, and SSR fills in the blanks. I can try to find some scenes, but it's certainly visible in the early section where the guy is eating ramen.

I was surprised, as I thought I could take the performance budget from SSR and shift it to RT. It didn't work that way.
 

Thanks, those worst-case examples work great and are clear, but I think they only matter if you are sitting in very close proximity to the screen. At 'normal' viewing distances (for example, 3 metres from a 65-inch screen) it might not matter that much, whereas you can see the performance increase even from 100 m away :p
 
Quite strange that the One X has worse performance than the PS4 Pro in CPU-limited scenes.
Funny thing is, there are also more NPCs in the Xbox version in the comparison shots of the DF video. Might just be random, but it is very consistent throughout the video.

But this again points to an API problem on the Xbox side. The Xbox One had the stronger CPU, but most of the time, even in CPU-limited scenes, it ran worse than the PS4 version. It seems like Sony's API still has a CPU-overhead advantage.
 

The XB1X is also pushing a way higher resolution; maybe the CPU is not the limiting factor.
 
That is true, but it is still a dynamic resolution, so before the framerate drops it should drop resolution. If the lowest resolution is reached, then we have a GPU bottleneck; but if the framerate dips while the resolution is still high, we have a CPU bottleneck.
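That inference can be written down as a simple decision rule; the formulation (and the threshold names) are mine, not DF's:

```cpp
#include <cstdio>

// Classify the bottleneck in a dynamic-resolution game from two
// observations: did the framerate miss its target, and was the resolution
// scaler already at its floor when that happened?
const char* ClassifyBottleneck(double fps, double targetFps,
                               int renderHeight, int minHeight)
{
    if (fps >= targetFps) return "target held: no bottleneck exposed";
    if (renderHeight <= minHeight)
        // Scaler bottomed out and frames still dropped: GPU-bound.
        return "GPU-bound";
    // Frames dropped while the scaler still had headroom, so the limit
    // is elsewhere, typically the CPU (or streaming).
    return "CPU-bound (or other non-GPU limit)";
}

int main() {
    // Hypothetical readings: dropping frames at 1512p vs. at the 1080p floor.
    std::printf("%s\n", ClassifyBottleneck(24.0, 30.0, 1512, 1080));
    std::printf("%s\n", ClassifyBottleneck(26.0, 30.0, 1080, 1080));
}
```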
But then again, there are still many bugs in that game that might simply cost performance.
E.g. the physics integration seems to lead to "dancing" objects all over the place (I've already seen this). So there seems to be some kind of physics in play (it is PhysX from Nvidia, as far as I remember) that goes rogue and also costs a lot of performance, especially on PC. The much bigger CPUs of the next-gen consoles can compensate for that, but I guess it is too much for the small Jaguar cores.

And by the way, corpses don't disappear. I came back to a market hours later and the corpses of the guys I was supposed to assassinate were still there (and even some weapons were still lying around). Normally this stuff shouldn't happen in a game like this. Yes, it might be consistent, but in an open market someone should have removed them.
 

I think it's partially procedural and partially random. Some areas are packed during the day but quite empty at night, which makes sense. The same is true of traffic, especially at night.
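A toy model of that day/night-plus-randomness behaviour, entirely hypothetical rather than anything from CDPR's actual system:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Toy model of crowd density driven by a deterministic time-of-day curve
// plus per-spawn randomness. Entirely hypothetical - not CDPR's system,
// just the shape of the idea described above.
int CrowdTarget(double hourOfDay, int maxNpcs, std::mt19937& rng)
{
    const double kPi = 3.14159265358979323846;
    // Peaks around midday, bottoms out in the small hours.
    double daylight = 0.5 - 0.5 * std::cos(hourOfDay / 24.0 * 2.0 * kPi);
    // +/-20% jitter, so two captures of the same spot rarely match even
    // though the day/night trend stays consistent.
    std::uniform_real_distribution<double> jitter(0.8, 1.2);
    return static_cast<int>(maxNpcs * daylight * jitter(rng));
}

int main() {
    std::mt19937 rng(42);
    for (double hour : {3.0, 12.0, 21.0})
        std::printf("%4.1fh -> about %d NPCs\n",
                    hour, CrowdTarget(hour, 64, rng));
}
```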
 
Yeah, Corpo Plaza on PS5 (running the PS4 Pro version) is pretty crowded during the day; I saw more people than in the XSX comparison video, though on XSX Corpo Plaza is even more crowded. It would be nice to see this place on that console (and on PC, for fun).
I remember GTA4 on console had a toggle in the menu to adjust crowd density.
 

No, an API difference isn't the explanation: Cyberpunk 2077 uses the XDK. It's the same set of APIs since the beginning of last gen, so that isn't the answer as to why the One X does not outperform the 4 Pro in this situation.
 

Well, a CPU-overhead advantage for Sony's API isn't really too surprising, considering they aren't writing a devkit API suite that has to accommodate a wide range of hardware configurations. There's the PS5 and that's pretty much it, though it might be arguable that the PS4 falls into that bracket too, since PS5's suite is essentially PS4's with a lot of newer things added on top (according to a few devs).

Somewhat related, but I was watching a podcast the other day and one of the guests (a software dev) made an interesting point about "coding to the metal". I sometimes see that as an advantage because it implies getting more out of the hardware, but if you really think about it, isn't it kind of funny that "to the metal" programming is now seen as a favorable bullet point? Coding "to the metal" didn't help consoles like the Saturn, and it made programming on the PS2 and PS3 a pain in the ass.

Ideally you'd want to spend as little time as possible dropping into assembly, which means you'd want your high-level language tools to be good enough at extracting as much performance from the hardware as possible. That's obviously something Microsoft has been pushing towards for years (even decades), but I think some people who get caught up in "teh wahrz" stuff don't see how Sony is going a similar route. There's a reason Cerny stresses time-to-triangle so much; they may give the option for low-level programming, but they know devs ideally want to stay away from it, because it just increases development time and leaves less time to focus on the more artistic/creative parts of game design.

That said, I guess dipping into assembly (or, as it's called these days, hand-written assembly, which at least still sounds like some type of abstraction from raw assembly) isn't as much of an obstacle to timely development if the underlying architecture is well built and documented. The documentation should ideally explain how the parts of each component in the design impact (positively or negatively) the performance of the other components, but I understand that level of documentation wasn't really a thing back in the older days, partly because so many components of a system design came from disparate manufacturers. Not that there aren't components in modern systems that come from outside companies; there are. But they aren't usually the "main" components.

So yeah, it's sometimes odd to see some folks latch onto "coding to the metal" as a perceived advantage of a system's API design, because in a way you'd always want the high-level dev tools to be good enough for any performance you require. And Sony was arguably the first company to start a wider industry shift away from pure (or near-pure) assembly coding to high-level language support for things like C when they released the PS1. "Time to triangle" was important to them even back then.
 