Digital Foundry Article Technical Discussion [2022]

DF Article @ https://www.eurogamer.net/articles/digitalfoundry-2022-ghostwire-tokyo-pc-tech-review

Ghostwire: Tokyo on PC debuts impressive new DLSS competitor
But it's another port with stutter problems.

Ghostwire: Tokyo is a game with many surprises in terms of its technical make-up. Developer Tango Gameworks has delivered a gameplay concept I wasn't expecting, wrapped up in a very different engine from prior titles, offering up an exceptional level of graphical finesse. The move away from its own id Tech-based engine to Unreal Engine 4 has clearly been a great enabler for the team, but I approached the PC version with some trepidation. Many recent PC releases have arrived with intrusive levels of stutter that impact the experience - no matter how powerful your hardware. It's especially common in Unreal Engine 4 titles - and unfortunately, it impacts Ghostwire: Tokyo too.

And that's frustrating for me, because there's so much to like here from a visual perspective - especially in terms of ray tracing features. On PC and PlayStation 5, ray traced reflections steal the show. RT reflections are applied liberally in Ghostwire: Tokyo, most striking on highly reflective surfaces where we get a perfect mirror-like effect. That said, they apply to duller materials too, with a soft distorted look - computationally expensive but adding greatly to lighting realism.


Oh no, someone check on @Remij :)

(edit: Ah too late)
 

How would TSR compare to FSR2?
PS5 settings in image.

 

They cover FSR 2.0 on consoles. It's a PC solution; watch the video for their take.
Yeah, I mean what a lot of console gamers don't realize is that games on consoles are already doing all sorts of reconstruction techniques to output the visuals they currently have, tailored by the developers for the hardware and engine to hit the image quality and performance they're after. Most do a competent enough job of it as well, given that they don't use ML. So new techniques likely aren't going to move the needle performance-wise or allow for things never before possible... what they're going to get is perhaps a slightly more temporally stable image and refined clarity.

The main thing separating these technologies on console vs PC is that with FSR on console, it will be something you enable and you'll get the visual/performance tradeoffs the developer has chosen for you... whereas on PC, you have different levels and can tailor the experience to better suit a wider range of hardware.

It would actually be pretty cool for the console peeps if FSR 2.0-supported games allowed you to select which level of FSR you wanted... but then again, once you start adding in too many options for people to tinker with... you start to take away from the simplicity of "pick up and go" gaming.
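To put numbers on those levels: FSR 2.0's documented per-axis scale factors are 1.5x (Quality), 1.7x (Balanced), 2.0x (Performance) and 3.0x (Ultra Performance), so the internal resolution for any output target is easy to work out. A quick illustrative Python sketch (the script is just arithmetic, not anything a game actually exposes):

    # Illustrative arithmetic only: internal render resolution per FSR 2.0 mode.
    # Scale factors are the per-axis ratios AMD documents for FSR 2.0.
    FSR2_MODES = {
        "Quality":           1.5,
        "Balanced":          1.7,
        "Performance":       2.0,
        "Ultra Performance": 3.0,
    }

    def internal_resolution(out_w, out_h, mode):
        scale = FSR2_MODES[mode]
        return round(out_w / scale), round(out_h / scale)

    for mode in FSR2_MODES:
        w, h = internal_resolution(3840, 2160, mode)
        print(f"{mode:>17}: renders at {w}x{h}, outputs 3840x2160")

A console title would simply ship one of those ratios (or a dynamic-resolution variant) chosen by the developer.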

I think it would only be more interesting on consoles once you have a tensor-core equivalent inside them that can offload this work from the shader cores. Something like a true system-wide, hardware-based implementation which every developer can easily tap into.
 

FSR 2.0 is the equivalent of, say, TAAU or whatever studios implement themselves on consoles. DLSS and Intel's XeSS are ML/AI hardware-accelerated technologies for Nvidia and Intel hardware (where they work, or work the best). FSR 2.0 is basically a console feature coming to PC (ish); it's a nice addition to the PC gaming space. FSR 2.0 could see some use on the consoles, but I wouldn't know why devs would bother unless their own implementation wasn't good enough or was too time-consuming. UE games will go for TAAU, some for TSR, etc.
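To illustrate what that family of techniques actually does: each frame is rendered at low resolution with a slightly different sub-pixel jitter, and the samples are accumulated over time into a higher-resolution history, which is where the "reconstructed" detail comes from. A deliberately tiny 1D sketch in Python; the scene function, buffer sizes and loop counts are all made up, and real implementations add motion-vector reprojection and history rejection on top:

    import random

    # Toy 1D temporal upsampling in the TAAU/FSR 2.0 spirit: jittered low-res
    # samples are scattered into a 2x accumulation buffer. After a few frames
    # the buffer contains detail that no single low-res frame had.
    LOW_RES, SCALE = 8, 2
    HIGH_RES = LOW_RES * SCALE

    def scene(x):
        # Stand-in for the continuous scene signal being rendered.
        return round(x * 4.0, 2)

    accum  = [0.0] * HIGH_RES
    weight = [0.0] * HIGH_RES

    for frame in range(16):
        jitter = random.uniform(-0.5, 0.5)              # per-frame sub-pixel offset
        for i in range(LOW_RES):                        # one jittered low-res frame
            pos = (i + 0.5 + jitter) / LOW_RES          # where the sample really landed
            hi = min(int(pos * HIGH_RES), HIGH_RES - 1) # nearest high-res pixel
            accum[hi]  += scene(pos)
            weight[hi] += 1.0

    resolved = [a / w if w else 0.0 for a, w in zip(accum, weight)]
    print([round(v, 2) for v in resolved])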
I still think or hope that AMD will come up with their own AI/ML reconstruction tech, one way or another. RDNA3 may or may not have it yet, but RDNA4 perhaps, who knows.
 
I still think or hope that AMD will come up with their own AI/ML reconstruction tech, one way or another. RDNA3 may or may not have it yet, but RDNA4 perhaps, who knows.

I'm confused by this comment. AMD GPUs have supported ML for a few generations now and are more than capable of doing ML-based upscaling.

RDNA2 fully supports INT4 and INT8.

PS5 has already used ML in Spider-Man: Miles Morales, with inference run on the GPU.

So there's no waiting for RDNA3 or 4, it's here right now.
 

They cover FSR 2.0 on consoles. It's a PC solution; watch the video for their take.
Listened to it in the background the other day.

Unsurprisingly, Richard doesn't have much time to play great games, given that he has a library of excellent unfinished games. It's common nowadays to suffer from gaming Diogenes syndrome: you have a lot of superb games to play, but you end up completing only the ones you love to death and that get you hooked.

Like them, I am sooooo excited to learn more about Intel GPUs. From 2005 to 2020 I used Intel GPUs on laptops and yeah, they were poor, but I fondly remember wrestling with them to run games at good framerates (Diablo 2 on day one and the original The Witcher come to mind), and I liked that kinda faulty but professional touch of Intel and how they handled the drivers and their control panels. There was something really special about that.

While I've liked AMD more these last years -I have an AMD desktop computer after all-, I've always been more of an Intel person 'cos my first PC ever had a Pentium 100, then I got a Celeron, then a Pentium III, then a laptop Core Duo, etc. etc., and I was hyped for things like Larrabee...
 
I'm confused by this comment. AMD GPUs have supported ML for a few generations now and are more than capable of doing ML-based upscaling.

RDNA2 fully supports INT4 and INT8.

PS5 has already used ML in Spider-Man: Miles Morales, with inference run on the GPU.

So there's no waiting for RDNA3 or 4, it's here right now.

As noted by DF in their video, AMD GPUs lack the hardware acceleration cores that Intel and Nvidia have on their GPUs. FSR 2.0 is not using ML/AI reconstruction; it is more akin to TAAU and whatever reconstruction is already being used on consoles (like Insomniac's). Digital Foundry explained this in their video, it's worth a watch.

RDNA2 supports INT4; the PS5, as noted by DF, does not. What Spider-Man was doing has nothing to do with ML/AI upscaling at all, it's a totally different thing for a different discussion, and something that probably even hardware as far back as the PS3 could do.

RDNA3/4 might or might not add fixed-function hardware to accelerate ML/AI/neural processing (like every smartphone nowadays has, and Apple computers). With that comes the comprehensive neural network training as well. It's not here right now because there is no AI/ML reconstruction technology on RDNA GPUs as of yet.
Anyway, watch the DF video: FSR 2.0 is mostly a PC solution, a kind of reconstruction that has already been available (and probably more performant as well) on consoles for a while.
 
It's not here right now because there is no AI/ML reconstruction technology on RDNA GPUs as of yet.

Which is a software problem, not a hardware one.

So can you please explain why AMD need to wait for RDNA3/4 to get AI/ML based upscaling tech?

Especially as RDNA2 is capable of running Intel's XeSS via DP4a, which is an AI/ML-based upscaler.
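For anyone unfamiliar with DP4a: it treats each 32-bit register as four packed signed 8-bit values and performs the four multiplies plus an accumulate in a single instruction, which is where the INT8 throughput for inference comes from. A small Python emulation of that one operation (purely illustrative; the function names here are mine, not any real API):

    import struct

    # Software emulation of a DP4a-style operation: two 32-bit words are treated
    # as four packed signed 8-bit lanes, multiplied lane-wise and summed into a
    # 32-bit accumulator. On supporting GPUs this whole thing is one instruction.

    def pack_i8x4(values):
        return struct.unpack("<i", struct.pack("<4b", *values))[0]

    def unpack_i8x4(word):
        return struct.unpack("<4b", struct.pack("<i", word))

    def dp4a(a_word, b_word, acc):
        return acc + sum(x * y for x, y in zip(unpack_i8x4(a_word), unpack_i8x4(b_word)))

    a = pack_i8x4([1, -2, 3, 4])
    b = pack_i8x4([5, 6, -7, 8])
    print(dp4a(a, b, 100))   # 100 + (5 - 12 - 21 + 32) = 104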
 
Which is a software problem, not a hardware one.

Tensor Cores do exist for example, and they are being used for AI/ML work. It is hardware accelerated according to NVIDIA. Do you have any proof they are blatantly lying?

https://developer.nvidia.com/rtx/dlss

''Powered by Tensor Cores, the dedicated AI processors on NVIDIA RTX™ GPUs, DLSS gives you the performance headroom to maximize ray-tracing settings and increase output resolution.''

So can you please explain why AMD need to wait for RDNA3/4 to get AI/ML based upscaling tech?

Because RDNA doesn't have dedicated hardware acceleration for just that purpose. AMD's solution allegedly isn't as fast, hence why FSR 2.0 is akin to TAAU and other reconstruction tech already available on consoles; it is not AI-based. There might be a chance AMD will implement it in the future.

Especially as RDNA2 is capable of running Intel's XeSS via DP4a, which is an AI/ML-based upscaler.

It is, but it isn't as performant. I doubt its usage will be that wide since the baseline (PS5) doesn't support DP4a. Any hardware could potentially run AI/ML, but the cost will be higher when not using fixed-function hardware for the purpose.
 
Tensor Cores do exist for example, and they are being used for AI/ML work. It is hardware accelerated according to NVIDIA. Do you have any proof they are blatantly lying?

https://developer.nvidia.com/rtx/dlss

''Powered by Tensor Cores, the dedicated AI processors on NVIDIA RTX™ GPUs, DLSS gives you the performance headroom to maximize ray-tracing settings and increase output resolution.''
...
I guess what he means is that tensor cores are not really necessary for DLSS. They make it faster, as they are extra processing power on board, but you can do the same using the shader engines on the GPU, accelerated with half-/quarter-rate precision.
Even DLSS ran purely on the shaders when Nvidia experimented with better algorithms between version 1 and 2.x. So it is possible. The question is just how big the impact would be.
So currently it is really just a software problem. It would be something else if the tensor cores were some kind of fixed-function units instead, but they aren't. They are just "normal" mini CPU cores (if you want) that can be quite efficient at brute-forcing stuff like that but are quite bad when used for other stuff.
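To put a very rough number on "how big the impact would be": the per-frame cost of a network is roughly its multiply-accumulate count divided by whatever matrix throughput the silicon running it provides. Every figure in the sketch below is an invented placeholder (not a measurement of DLSS or of any real GPU); only the shape of the calculation matters:

    # Back-of-the-envelope cost model for running an upscaling network per frame.
    # All numbers are invented placeholders, not measurements of DLSS or any GPU.

    def frame_cost_ms(gmacs, tops):
        # gmacs: billions of multiply-accumulates per frame (1 MAC = 2 ops)
        # tops:  tera-ops per second, i.e. billions of ops per millisecond
        return (gmacs * 2) / tops

    network_gmacs    = 50     # hypothetical network size
    shader_fp16_tops = 40     # hypothetical half-rate throughput on the shader cores
    matrix_unit_tops = 160    # hypothetical dedicated matrix-unit throughput

    print(f"on shader cores:  {frame_cost_ms(network_gmacs, shader_fp16_tops):.2f} ms")
    print(f"on matrix units:  {frame_cost_ms(network_gmacs, matrix_unit_tops):.2f} ms")

Whether the shader-path number is acceptable at a given frame rate is exactly the "how big is the impact" question.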
 
Tensor Cores do exist for example, and they are being used for AI/ML work. It is hardware accelerated according to NVIDIA. Do you have any proof they are blatantly lying?

Let me explain my comment further, as you seem to be struggling.... I said this:

"Which is a software problem, not a hardware one."

What that means is this:

If AMD currently don't have an ML-based upscaler, it is not a hardware problem, as the hardware can do ML upscaling; it's merely a software problem, as in the software is just not available for it (yet).

Now can you explain why you started talking about Nvidia and stupidly talking about lying? Like what are you even talking about here? Where did that come from? Who's accusing who of lying?

Because RDNA doesn't have dedicated hardware acceleration for just that purpose. AMD's solution allegedly isn't as fast, hence why FSR 2.0 is akin to TAAU and other reconstruction tech already available on consoles; it is not AI-based. There might be a chance AMD will implement it in the future.

That is irrelevant to your original comment; let me remind you of what you said...

"I still think or hope that AMD will come with their own AI/ML reconstruction tech, one way or another. RDNA3 may or may not have it yet, but RDNA4 perhaps, who knows"

At no point in your original comment did you talk about specific ML hardware being added. Not only that, AMD doesn't need specific ML hardware to get ML-based upscaling on RDNA2.

Which is why I quoted it in the first place, as what you said makes no sense: AMD have ML support right now, so they don't need to wait for RDNA3/4 to get ML upscaling.

It is, but it isn't as performant. I doubt its usage will be that wide since the baseline (PS5) doesn't support DP4a. Any hardware could potentially run AI/ML, but the cost will be higher when not using fixed-function hardware for the purpose.

PS5 has nothing to do with this discussion; AMD have access to ML/AI tech RIGHT NOW... just because you are not happy with how the hardware handles such calculations, or you think it's not fast enough, doesn't mean we won't see it.

The current hardware is capable of it so your "wait for RDNA3/4" comment for AMD ML/AI upscaling is ridiculous.
 
I guess what he means is that tensor cores are not really necessary for DLSS. They make it faster, as they are extra processing power on board, but you can do the same using the shader engines on the GPU, accelerated with half-/quarter-rate precision.
Even DLSS ran purely on the shaders when Nvidia experimented with better algorithms between version 1 and 2.x. So it is possible. The question is just how big the impact would be.
So currently it is really just a software problem. It would be something else if the tensor cores were some kind of fixed-function units instead, but they aren't. They are just "normal" mini CPU cores (if you want) that can be quite efficient at brute-forcing stuff like that but are quite bad when used for other stuff.

Yay, you get it!
 
As noted by DF in their video, AMD GPUs lack the hardware acceleration cores that Intel and Nvidia have on their GPUs. FSR 2.0 is not using ML/AI reconstruction; it is more akin to TAAU and whatever reconstruction is already being used on consoles (like Insomniac's). Digital Foundry explained this in their video, it's worth a watch.

RDNA2 supports INT4; the PS5, as noted by DF, does not. What Spider-Man was doing has nothing to do with ML/AI upscaling at all, it's a totally different thing for a different discussion, and something that probably even hardware as far back as the PS3 could do.

RDNA3/4 might or might not add fixed-function hardware to accelerate ML/AI/neural processing (like every smartphone nowadays has, and Apple computers). With that comes the comprehensive neural network training as well. It's not here right now because there is no AI/ML reconstruction technology on RDNA GPUs as of yet.
Anyway, watch the DF video: FSR 2.0 is mostly a PC solution, a kind of reconstruction that has already been available (and probably more performant as well) on consoles for a while.

AMD lacks dedicated cores, but these formats are hardware-accelerated no less. INT4, INT8 and FP16 do not run at the same rate as FP32 on RDNA2; they run at multiples of the FP32 rate.
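As a concrete illustration of those multiples, here's the arithmetic with the commonly quoted RDNA2 ratios (packed FP16 at 2x, and INT8/INT4 dot products at 4x/8x the FP32 rate); the 20 TFLOPS base figure below is a placeholder I picked, so only the ratios are the point:

    # Illustrative only: how lower-precision rates scale up from an FP32 baseline.
    # The baseline TFLOPS figure is a placeholder; the multipliers are the
    # commonly quoted RDNA2 ratios for packed FP16 and INT8/INT4 dot products.
    fp32_tflops = 20.0
    rate_multiplier = {"FP32": 1, "FP16": 2, "INT8": 4, "INT4": 8}

    for fmt, mult in rate_multiplier.items():
        unit = "TFLOPS" if fmt.startswith("FP") else "TOPS"
        print(f"{fmt}: {fp32_tflops * mult:.0f} {unit}")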
 

Would have liked to see some comparisons of DLSS and the native TAA on the PS5, as the DC provided the first 'native' high-res mode on PS5, whereas with the original, the comparison video Alex did was comparing the enhanced image stability of DLSS to checkerboarding. From my initial look at the game I noticed there was quite a bit more detail being shown in the DC on the PS5 compared to the original on PC, due to changes in contrast/sharpening/the new TAA implementation - not sure what specifically the reason was, but it would have been interesting to see if they were equivalent now.

This does indeed perform extremely well on the PS5, but that is also the result of continual patches - the Quality mode on release was almost never holding 60; now it holds it through much of gameplay. It's viable to play this at native 4K now, whereas before you'd be dropping frames almost constantly.

Also, not really the purview of DF, but just to note: one addition they did make to the PC version is apparently full DualSense haptic support, which was nice (albeit needing a wired connection for it - boooo).
 