Of course it is. But it's another port with stutter problems.
DF Article @ https://www.eurogamer.net/articles/digitalfoundry-2022-ghostwire-tokyo-pc-tech-review
Ghostwire: Tokyo on PC debuts impressive new DLSS competitor
But it's another port with stutter problems.
Ghostwire: Tokyo is a game with many surprises in terms of its technical make-up. Developer Tango Gameworks has delivered a gameplay concept I wasn't expecting, wrapped up in a very different engine from prior titles, offering up an exceptional level of graphical finesse. The move away from its own idTech-based engine to Unreal Engine 4 has clearly been a great enabler for the team, but I approached the PC version with some trepidation. Many recent PC releases have arrived with intrusive levels of stutter that impact the experience - no matter how powerful your hardware. It's especially common in Unreal Engine 4 titles - and unfortunately, it impacts Ghostwire: Tokyo too.
And that's frustrating for me, because there's so much to like here from a visual perspective - especially in terms of ray tracing features. On PC and PlayStation 5, ray traced reflections steal the show. RT reflections are applied liberally in Ghostwire: Tokyo, most striking on highly reflective surfaces where we get a perfect mirror-like effect. That said, they apply to duller materials too, with a soft, distorted look - computationally expensive but adding greatly to lighting realism.
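For anyone curious what "mirror-like" versus "soft distorted" means in practice: the usual approach is to widen the reflection lobe with material roughness. The sketch below is my own minimal C++ illustration of that idea (glossy_reflect and rand_signed are made-up helper names, and production renderers importance-sample a real BRDF like GGX instead of this naive jitter) - it is not Tango's actual shader code.

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) { return v * (1.0f / std::sqrt(dot(v, v))); }

// Perfect mirror reflection: r = v - 2(v.n)n
Vec3 reflect(Vec3 v, Vec3 n) { return v + n * (-2.0f * dot(v, n)); }

// Hypothetical helper: uniform random float in [-1, 1].
float rand_signed() { return 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f; }

// Glossy reflection direction: jitter the mirror direction by an amount
// proportional to roughness. Roughness 0 -> the perfect mirror effect DF
// describes; higher roughness -> the soft, distorted look on duller materials.
Vec3 glossy_reflect(Vec3 view, Vec3 normal, float roughness) {
    Vec3 mirror = reflect(view, normal);
    Vec3 jitter = {rand_signed(), rand_signed(), rand_signed()};
    return normalize(mirror + jitter * roughness);
}

int main() {
    Vec3 view = normalize({0.5f, -1.0f, 0.0f});
    Vec3 up   = {0.0f, 1.0f, 0.0f};
    Vec3 r0 = glossy_reflect(view, up, 0.0f); // mirror-like surface
    Vec3 r1 = glossy_reflect(view, up, 0.6f); // dull, soft-distorted surface
    std::printf("mirror: %.2f %.2f %.2f, rough: %.2f %.2f %.2f\n",
                r0.x, r0.y, r0.z, r1.x, r1.y, r1.z);
}
```

Because each rough reflection ray lands somewhere slightly different, you need many rays per pixel or a denoiser to resolve the blur - which is why DF notes it's computationally expensive.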
I'm like Beetlejuice... except you say ShaderStutter 3 times and I appear
Yeah, I mean what a lot of console gamers don't realize is that games on consoles are already doing all sorts of reconstruction techniques to output the visuals they currently have, tailored by the developers for the hardware and engine to hit the image quality and performance they desire. Most do a competent enough job of it as well, given that they don't use ML. So new techniques likely aren't going to move the needle performance-wise or allow for things never before possible... what they're going to get is perhaps a slightly more temporally stable image and refined clarity.
They cover FSR 2.0 on consoles. It's a PC solution; view the video for their take.
The main thing separating these technologies on console vs PC is that with FSR on console, it will be something you enable and you'll get the visual/performance tradeoffs the developer has chosen for you... whereas on PC, you have different levels and can tailor the experience to better suit a wider range of hardware.
It would actually be pretty cool for the console peeps if FSR 2.0-supported games allowed you to select which level of FSR you wanted... but then again, once you start adding in too many options for people to tinker with, you start to take away from the simplicity of "pick up and go" gaming.
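For reference, those preset trade-offs map to fixed per-axis scale factors in AMD's public FSR 2.0 documentation: Quality 1.5x, Balanced 1.7x, Performance 2.0x, Ultra Performance 3.0x. Here's a small C++ sketch (names and structure are mine, not AMD's API) of what "picking a level" means for the internal render resolution:

```cpp
#include <cstdio>

// Per-axis scale factors from AMD's public FSR 2.0 documentation.
enum class FsrQualityMode { Quality, Balanced, Performance, UltraPerformance };

float scale_factor(FsrQualityMode mode) {
    switch (mode) {
        case FsrQualityMode::Quality:          return 1.5f;
        case FsrQualityMode::Balanced:         return 1.7f;
        case FsrQualityMode::Performance:      return 2.0f;
        case FsrQualityMode::UltraPerformance: return 3.0f;
    }
    return 1.0f;
}

int main() {
    const int out_w = 3840, out_h = 2160; // 4K output
    const FsrQualityMode modes[] = {
        FsrQualityMode::Quality, FsrQualityMode::Balanced,
        FsrQualityMode::Performance, FsrQualityMode::UltraPerformance};
    for (FsrQualityMode mode : modes) {
        float s = scale_factor(mode);
        // The game renders internally at this lower resolution; FSR 2.0 then
        // reconstructs the full output from it plus temporal history.
        std::printf("scale %.1fx -> render ~%dx%d\n",
                    s, (int)(out_w / s), (int)(out_h / s));
    }
}
```

So at 4K output, Quality mode renders at 2560x1440 while Ultra Performance drops all the way to 1280x720 - that's the trade-off space a console preset would be choosing for you.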
I think it would only be more interesting on consoles once you have a tensor core equivalent inside them which can offload this work from the shader cores. Something like a true system-wide hardware based implementation which every developer can easily tap into.
I still think, or hope, that AMD will come out with their own AI/ML reconstruction tech one way or another. RDNA3 may or may not have it yet, but RDNA4 perhaps, who knows.
listened to it in the background the other day.
I'm confused by this comment; AMD GPUs have supported ML for a few generations now and are more than capable of doing ML-based upscaling.
RDNA2 fully supports INT4 and INT8.
PS5 has already used ML in Spiderman: MM, with inference run on the GPU.
So there's no waiting for RDNA3 or 4, it's here right now.
It's not here right now because there is no AI/ML reconstruction technology on RDNA GPUs as of yet.
Which is a software problem, not a hardware one.
So can you please explain why AMD needs to wait for RDNA3/4 to get AI/ML-based upscaling tech?
Especially as RDNA2 is capable of running Intel's XeSS via DP4A, which is an AI/ML-based upscaler.
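For anyone wondering what DP4A actually does: it's a single instruction that multiplies four packed 8-bit integers against another four, sums the products, and adds the result to a 32-bit accumulator - the workhorse operation of quantized neural-network inference. Here's a plain C++ reference of those semantics (my own spelled-out version; on GPUs it's one instruction, e.g. CUDA's __dp4a intrinsic):

```cpp
#include <cstdint>
#include <cstdio>

// Reference semantics of a DP4A instruction: treat each 32-bit word as four
// signed 8-bit lanes, multiply lane-wise, sum, and add to an accumulator.
// On GPU hardware this is a single instruction; here it is spelled out.
int32_t dp4a(uint32_t a, uint32_t b, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        int8_t la = (int8_t)((a >> (8 * i)) & 0xFF);
        int8_t lb = (int8_t)((b >> (8 * i)) & 0xFF);
        acc += (int32_t)la * (int32_t)lb;
    }
    return acc;
}

int main() {
    // Four int8 pairs: (1,4), (2,3), (3,2), (4,1) -> 4+6+6+4 = 20
    uint32_t a = 0x04030201; // lanes (low byte first): 1, 2, 3, 4
    uint32_t b = 0x01020304; // lanes (low byte first): 4, 3, 2, 1
    std::printf("%d\n", dp4a(a, b, 0)); // prints 20
}
```

An INT8 convolution layer is essentially millions of these dot products, which is why a fast DP4A path is what lets XeSS run on non-Intel GPUs at all - just slower than on hardware with dedicated matrix units.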
I guess what he means is that Tensor cores are not really necessary for DLSS. They make it faster as they are extra processing power on board. But you can achieve the same by using the shader engines on the GPU and...

Tensor Cores do exist for example, and they are being used for AI/ML work. It is hardware accelerated according to NVIDIA. Do you have any proof they are blatantly lying?
https://developer.nvidia.com/rtx/dlss
''Powered by Tensor Cores, the dedicated AI processors on NVIDIA RTX™ GPUs, DLSS gives you the performance headroom to maximize ray-tracing settings and increase output resolution.''
Because RDNA doesn't have dedicated hardware acceleration for just that purpose. AMD's solution allegedly isn't as fast, hence why FSR 2.0 is akin to TAAU and other reconstruction tech already available on consoles; it is not AI-based. There might be a chance AMD will implement it in the future.
It is, but it isn't as performant. I doubt its usage will be that wide since the baseline (PS5) doesn't support DP4A. Any hardware could potentially run AI/ML, but the cost will be higher when not using fixed-function hardware for the purpose.
I guess what he means is that Tensor cores are not really necessary for DLSS. They make it faster as they are extra processing power on board. But you can achieve the same by using the shader engines on the GPU and accelerate with half-/quarter-rate accuracy.
Even DLSS ran purely on the shaders when Nvidia experimented with better algorithms between versions 1 and 2.x. So it is possible. The question is just how big the impact would be.
So currently it is really just a software problem. It would be something else if the Tensor cores were some kind of fixed-function units, but they aren't. They are just "normal" mini CPU cores (if you want) that can be quite efficient at brute-forcing stuff like that but are quite bad when used for other things.
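To make the "efficient at brute-forcing stuff like that" point concrete: what a tensor core executes is a small fused matrix multiply-accumulate, D = A*B + C (the first-generation Volta units worked on 4x4 FP16 tiles with FP32 accumulation). Here's a reference C++ sketch of that one operation (my own illustration, not NVIDIA code):

```cpp
#include <cstdio>

// Reference semantics of one tensor-core style operation: D = A*B + C on a
// 4x4 tile (inputs conceptually FP16, accumulation in FP32). A tensor core
// does all 64 multiply-adds below in one hardware operation; a shader core
// issues them as dozens of individual instructions.
void mma_4x4(const float A[4][4], const float B[4][4],
             const float C[4][4], float D[4][4]) {
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float acc = C[i][j];
            for (int k = 0; k < 4; ++k)
                acc += A[i][k] * B[k][j];
            D[i][j] = acc;
        }
}

int main() {
    float A[4][4] = {}, B[4][4] = {}, C[4][4] = {}, D[4][4];
    for (int i = 0; i < 4; ++i) { A[i][i] = 2.0f; B[i][i] = 3.0f; C[i][i] = 1.0f; }
    mma_4x4(A, B, C, D);
    std::printf("D[0][0] = %.1f\n", D[0][0]); // 2*3 + 1 = 7
}
```

Same math, same result either way - the difference is purely throughput, which is the "how big would the impact be" question above.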
As noted by DF in their video, AMD GPUs lack the hardware acceleration cores that Intel and Nvidia have on their GPUs. FSR 2.0 is not using ML/AI reconstruction; it is more akin to TAAU and whatever reconstruction is already being used on consoles (like Insomniac does). Digital Foundry has explained this in their video, and it's worth a watch. (There's a bare-bones sketch of that family of techniques after this post.)
RDNA2 supports INT4; the PS5, as noted by DF, does not. What Spiderman was doing has nothing to do with ML/AI upscaling at all - it's a totally different thing for a different discussion, and something that even hardware as far back as the PS3 could probably do.
RDNA3/4 might or might not add fixed-function hardware to accelerate ML/AI/neural processing (like every smartphone nowadays has, as do Apple computers). With that comes the comprehensive neural network training as well. It's not here right now because there is no AI/ML reconstruction technology on RDNA GPUs as of yet.
Anyway, watch the DF video: FSR 2.0 is mostly a PC solution - a kind of reconstruction that's already been available (and probably more performant as well) on consoles for a while.
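Since "akin to TAAU" keeps coming up in this thread: the common core of these non-ML reconstruction techniques (TAAU, FSR 2.0, Insomniac's temporal injection and the like) is accumulating jittered low-resolution samples into a full-resolution history buffer over time. Here's a deliberately stripped-down C++ sketch of that accumulation step (my own illustration - real implementations also reproject the history with motion vectors, clamp it against the new frame to suppress ghosting, and sharpen):

```cpp
#include <cstdio>
#include <vector>

// A stripped-down temporal-upsampling accumulation step. Each frame the game
// renders at low resolution with a sub-pixel jitter; every output pixel then
// blends the nearest new sample into a persistent full-resolution history.
struct Frame {
    int w, h;
    std::vector<float> pix; // grayscale for simplicity
    float at(int x, int y) const { return pix[y * w + x]; }
};

void accumulate(const Frame& lowRes, float jitterX, float jitterY,
                std::vector<float>& history, int outW, int outH) {
    const float alpha = 0.1f; // blend weight for the new sample
    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x) {
            // Map the output pixel back into the jittered low-res frame.
            int sx = (int)((x + 0.5f + jitterX) * lowRes.w / outW);
            int sy = (int)((y + 0.5f + jitterY) * lowRes.h / outH);
            if (sx < 0) sx = 0; if (sx >= lowRes.w) sx = lowRes.w - 1;
            if (sy < 0) sy = 0; if (sy >= lowRes.h) sy = lowRes.h - 1;
            float& hist = history[y * outW + x];
            hist += alpha * (lowRes.at(sx, sy) - hist); // exponential average
        }
}

int main() {
    Frame low{2, 2, {0.f, 1.f, 1.f, 0.f}};       // one tiny 2x2 frame
    std::vector<float> history(4 * 4, 0.5f);     // 4x4 output history
    accumulate(low, 0.25f, -0.25f, history, 4, 4);
    std::printf("history[0] = %.3f\n", history[0]);
}
```

Over a few frames the jitter walks the sample positions across each output pixel, so the history converges toward a genuinely higher-resolution image - no neural network required, which is exactly why consoles have been doing this for years.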