Current Generation Games Analysis Technical Discussion [2024] [XBSX|S, PS5, PC]

That may be the official line, but in practice both AFMF 2 and Lossless Scaling work well even at 30 FPS. The lag is barely noticeable in single-player games; it will be fine there. I bet they are already working on a console version, which would be better than plain FSR 3 frame generation.
I recently played the PC port of Space Hulk: Vengeance of the Blood Angels, a game locked at 15 fps, bumped up to 60 fps using Lossless Scaling's 4x option. It's transformative, to say the least.
 
Unfortunately, these consoles, built from off-the-shelf hardware, became dated quickly because that hardware predates the current transition.

With RT, ML-based upscaling and frame generation coming into the picture early in this console generation's life, they were unable to adapt, as the hardware was not designed with these technologies in mind.
Lossless Scaling also works on old PCs; a similar technique could be implemented on the RDNA 2-based Series X. It would just need to be written as a compute shader.
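For illustration, the simplest possible version of such a pass is tiny. Here is a minimal sketch written as a CUDA kernel for readability (an HLSL compute shader would have the same structure). It only does a naive 50/50 blend of two frames, whereas Lossless Scaling and AFMF-style interpolation also do motion estimation, so treat it purely as the shape of a compute pass; buffer names are hypothetical.

```cuda
#include <cuda_runtime.h>

// Naive frame interpolation: synthesize an in-between frame by averaging two
// RGBA8 frames. A real interpolator would sample along per-pixel motion
// vectors instead of blending in place; this only shows the pass structure.
__global__ void blend_frames(const uchar4* prev, const uchar4* next,
                             uchar4* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = y * width + x;
    uchar4 a = prev[i];
    uchar4 b = next[i];
    out[i] = make_uchar4((a.x + b.x) / 2, (a.y + b.y) / 2,
                         (a.z + b.z) / 2, (a.w + b.w) / 2);
}

// Host-side launch, assuming the device buffers are already populated:
//   dim3 block(16, 16), grid((width + 15) / 16, (height + 15) / 16);
//   blend_frames<<<grid, block>>>(d_prev, d_next, d_out, width, height);
```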
 
If Alex covers the Monster Hunter Wilds open beta... I suspect he's going to blow a gasket. It's rough, to say the least. Capcom are CLEARLY leaning on frame gen to make up for their inefficiency on the CPU side. It also uses DirectStorage, yet there is still texture and asset pop-in.
 
I think it's likely RDNA 4 will be required to support this. I assume it has the same tweaks to the shaders to accelerate matrix math as seen on PS5 Pro. Expecting something fairly similar to the quality of PSSR seems like a safe bet.
I definitely see that as the safe bet, though, just looking at the progression of things here for AMD and MS:

  • FSR has traditionally targeted a very wide range of hardware; narrowing this down to RDNA 4 would be very restrictive, and if few people purchase RDNA 4, then all the investment in training a model is wasted.
  • AMD has announced it is ceding the high-end market.
  • All their hardware from mobile parts up is roughly DX12 Ultimate-class now.
  • MS released DirectSR, which effectively abstracts the super-resolution code path so developers can code once to support a variety of upscaling algorithms. This would be extremely handy for ML upscaling and for fallback requirements on lower-end systems, and it could allow AMD to support multiple variants of ML algorithms.
  • MS owns CoD Black Ops 6.
  • AMD announced ML upscaling for Black Ops 6, which is already optimized today for FSR 3.1.
  • The Series consoles were delayed for RDNA 2 -- this may have been down to a DP4a requirement. With FSR being the default solution for DirectSR, I would be surprised if the Xbox consoles were excluded from running the ML variant.
  • We know XeSS can run on DP4a, and DLSS 1.9 was entirely compute-based without DP4a.
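For context on the DP4a point: it is a packed dot-product instruction that performs four int8 multiply-accumulates in a single operation, and that is what XeSS's fallback path leans on for quantized inference on GPUs without matrix units. A minimal CUDA sketch of the idea using the `__dp4a` intrinsic (sm_61 and up; RDNA 2 exposes comparable packed dot-product instructions); the kernel and buffer layout are illustrative, not anyone's actual upscaler code:

```cuda
#include <cuda_runtime.h>
#include <stdint.h>

// One int8 "neuron" evaluated per thread, accumulated with DP4A. Each __dp4a
// call performs four int8 multiply-accumulates in one instruction -- the
// primitive a DP4a-path ML upscaler builds its quantized convolutions on.
__global__ void int8_neuron(const int32_t* __restrict__ acts,     // [n_px][k/4], packed int8
                            const int32_t* __restrict__ weights,  // [k/4], packed int8
                            int32_t* __restrict__ out,            // [n_px]
                            int n_px, int k_div4)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= n_px) return;

    int32_t acc = 0;
    for (int i = 0; i < k_div4; ++i)
        acc = __dp4a(acts[p * k_div4 + i], weights[i], acc);

    // A real layer would rescale/requantize and apply an activation here.
    out[p] = acc;
}
```

Without DP4a, those four multiply-accumulates become separate unpack, multiply and add instructions, which is roughly where the gap between the DP4a path and plain shader math comes from.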
Lastly from this article:
2024 for us is a huge year because we have spent so many years developing hardware and software capabilities for AI. We've just completed AI-enabling our entire portfolios, so cloud, edge, PCs, our embedded devices, our gaming devices; we're enabling our gaming devices to upscale using AI, and 2024 is really a huge deployment year for us, so now the bedrock's there, the capabilities are there.

Sounds like AMD is not referring to RDNA 4 here and is intending to go wide.

IMO, the only reason we haven't seen ML upscaling algorithms on all GPUs is that Nvidia keeps DLSS locked to tensor cores: they are the premium market leader, and they will push their users to upgrade to the latest and greatest. XeSS is not quite open source, and AMD had not finished building their own; these are costly models to train. Otherwise I think we could have had ML upscaling available on all sorts of hardware. I've never seen DLSS as a genuine tensor-core requirement. We also don't have any benchmarks on how long these networks take to run on different hardware; it's not something we have any real insight into. So most of this is us simply trusting that the IHV needs this hardware to run this software, and in my experience in the field that has not been the case unless model training is involved. That being said, more tensor throughput would let you run denser networks, resulting in higher-quality output.
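To put rough numbers on that last point, here is a back-of-envelope estimate in which every figure is an assumption for illustration (no real benchmarks for these networks are public): suppose the upscaler costs about 30 GFLOPs per frame and the GPU sustains roughly half of a nominal 20 TFLOPS of FP16 shader throughput on it.

```latex
t_{\text{inference}} \approx
  \frac{30 \times 10^{9}\ \text{FLOPs}}
       {0.5 \times 20 \times 10^{12}\ \text{FLOP/s}}
  = 3\ \text{ms per frame}
```

Three milliseconds fits inside a 16.7 ms (60 fps) budget but is heavy at 120 fps, and it doubles on a 10 TFLOPS part; dedicated matrix hardware mostly buys that time back, or lets the same budget run a denser network, which is the trade-off described above.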

PSSR is likely a customized or earlier release of AMD's FSR 4.0 work, and we've seen this type of behaviour before: the PS5 Pro will be launching with some RDNA 4 tech before RDNA 4 is even announced.

As you stated, the safe bet is RDNA 4, and imo RDNA 3 devices will definitely be supporting this. The question is how much further they are willing to broaden it.
 
Where does the impression that 4K is standard come from? According to the most recent Steam hardware survey, more than half of Steam gamers use a 1080p monitor.
From the fact that it's been the standard resolution for consumer TVs for many years now.

And 'half' of the people on Steam aren't playing AAA games in the first place. They're playing Counter-Strike and whatnot. Either way, a lot more people would upgrade to 4K monitors if GPUs weren't so unreasonably priced for what you get nowadays.

I'm also not talking about demanding native 4K rendering, either. Just some sort of clearly higher resolution than 1080p, be it native 1440p or reconstructed 4K or whatever. 1080p is dated. It'd be like still using 720p in 2017.
 
As you stated, the safe bet is RDNA 4, and imo RDNA 3 devices will definitely be supporting this. The question is how much further they are willing to broaden it.
Maybe AMD offers a token version that runs on older GPUs, similar to XeSS, but I don't think that will be the primary advertised version. XeSS doesn't offer a performance benefit on older GPUs, even with the DP4a support; there is no reason to use it there. I think at this point we have much more reason to believe Nvidia limiting DLSS to tensor cores is a legitimate choice, given the complete absence of a worthwhile ML upscaler from anyone that runs on general shaders.

I don't think Xbox support matters at all here. This type of tech really only affects potential PC sales; the fate of the current Xbox is sealed. The same applies to current RDNA GPUs as well. The goodwill would be nice, but not if the realities of performance make the tech useless to the end user.
 
I don't think a token version of AI FSR makes sense. The non-AI version still exists and isn't going anywhere. The FidelityFX SR 4 SDK will almost certainly include the legacy algorithm alongside the new AI model, and ideally the interface will be the same so developers can code once for both solutions. That being said, I also think RDNA 3 will support AI FSR, because the APUs will stay on RDNA 3/3.5 for a while.
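If the SDK does ship that way, the "code once for both solutions" idea would look roughly like the sketch below. To be clear, this is a hypothetical shape in plain C++, not the actual FidelityFX API; every type and function name here is invented.

```cpp
#include <memory>

// Hypothetical "one interface, two backends" shape. None of these names come
// from the real FidelityFX SDK; they only illustrate the "code once" idea.
struct UpscaleInputs {
    const void* color;           // low-resolution color target
    const void* depth;           // depth buffer
    const void* motion_vectors;  // per-pixel motion vectors
    void*       output;          // high-resolution output target
};

class ISuperResolution {
public:
    virtual ~ISuperResolution() = default;
    virtual void dispatch(const UpscaleInputs&) = 0;
};

class AnalyticUpscaler final : public ISuperResolution {
public:
    void dispatch(const UpscaleInputs&) override { /* legacy hand-written FSR-style passes */ }
};

class MLUpscaler final : public ISuperResolution {
public:
    void dispatch(const UpscaleInputs&) override { /* quantized network inference */ }
};

// The game integrates once against the interface; the backend is picked by
// what the GPU actually supports.
std::unique_ptr<ISuperResolution> create_upscaler(bool gpu_supports_ml) {
    if (gpu_supports_ml) return std::make_unique<MLUpscaler>();
    return std::make_unique<AnalyticUpscaler>();
}
```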
 
Nobody with a modern Nvidia GPU is playing at 1080p. They are using DLSS Performance to upscale to 90% of native 4K quality.

They're not, as the performance overhead is significant: if they're only getting 60-70 fps at native 1080p, then they're not going to get 4K at 60 fps via DLSS Performance mode.
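The arithmetic backs that up: DLSS Performance at a 4K output renders internally at 1080p, so the base 1080p cost does not go away, and the DLSS pass plus 4K-resolution post-processing is added on top. Using the poster's own figures and an assumed 1-2 ms for the DLSS pass itself:

```latex
t_{\text{frame}} \approx
  \underbrace{\tfrac{1000}{65}\ \text{ms}}_{\text{1080p render} \,\approx\, 15.4\ \text{ms}}
  + \underbrace{1\text{--}2\ \text{ms}}_{\text{DLSS pass (assumed)}}
  + \underbrace{\varepsilon}_{\text{4K post/UI}}
  \;>\; 16.7\ \text{ms}
```

So a GPU that only manages 60-70 fps at native 1080p ends up somewhere in the 50s at 4K DLSS Performance, not a locked 60.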
 
I'm also not talking about demanding native 4K rendering, either. Just some sort of clearly higher resolution than 1080p, be it native 1440p or reconstructed 4K or whatever. 1080p is dated. It'd be like still using 720p in 2017.

1080p is not dated; it only appears dated because of the shit-show smear-fest that is modern gaming.

1080p simply isn't enough pixels to overcome that smear-fest.

Older games that don't use TAA look crisp and sharp at 1080p.

And then there's display technology: 1080p on a blurry sample-and-hold display looks like trash.

1080p on a CRT looks crisp and sharp.

1080p is not the problem.
 
On the resolution discussion: I'm looking at MH Wilds on PS5, and they've implemented some really great hair physics. It should look great, but especially in performance mode it's a mess of pixels, since the resolution is so low. Developers are making real advancements in rendering, and they're being obscured by these low resolutions. At that point, was it even worth enabling in the first place?
 
As much as I would love to see the CRT make a comeback, it's sad that it never will.

The solution is better displays that don't blur everything during motion and can replicate CRT's motion clarity.

There should be Nvidia Pulsar monitors before the end of the year; they're claiming high-quality strobing during variable refresh. Now that MediaTek can build G-Sync technology into their scalers, there's some hope of getting mass availability of low-motion-blur sample-and-hold displays, assuming they work as advertised.
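The relationship being chased here is simple: on a sample-and-hold panel, the blur you perceive while eye-tracking a moving object is roughly how long each frame stays lit multiplied by how fast the object moves across the screen, so cutting persistence with a strobe (as a CRT does, and as Pulsar-style backlights aim to) cuts the smear proportionally. With an illustrative 960 px/s pan:

```latex
\text{blur width} \approx t_{\text{persistence}} \times v_{\text{motion}}
\qquad
\begin{aligned}
\text{60 Hz sample-and-hold:}&\quad 16.7\ \text{ms} \times 960\ \text{px/s} \approx 16\ \text{px}\\
\text{1 ms strobe (CRT-like):}&\quad 1\ \text{ms} \times 960\ \text{px/s} \approx 1\ \text{px}
\end{aligned}
```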
 
As much as I would love to see the CRT make a comeback, it's sad that it never will.

The solution is better displays that don't blur everything during motion and can replicate CRT's motion clarity.
Ignoring that 1080p on a CRT isn't actually 'sharper' than on an LCD (it's usually the opposite) and does nothing for the sort of image-quality problems caused by 3D rendering at such a resolution, which is the main issue, I don't understand how you think saying there should be some magic display tech is 'the solution'. A solution is something that's workable and will improve things, not just daydreaming about tech that doesn't exist and that people aren't using.

The solution is better value for GPUs so people don't have to play at 1080p. No magic required!
 
As much as I would love to see the CRT make a comeback, it's sad that it never will.

The solution is better displays that don't blur everything during motion and can replicate CRT's motion clarity.
OLEDs do a pretty good job at that. The problem is that they can't replicate a CRT's ability to blend the colors of neighbouring pixels, which creates a sort of organic, natural AA of its own.
Modern HD displays are too pixel-perfect.
 
OLEDs do a pretty good job at that. The problem is that they can't replicate a CRT's ability to blend the colors of neighbouring pixels, which creates a sort of organic, natural AA of its own.
Modern HD displays are too pixel-perfect.

I've had multiple OLED monitors and they're still nowhere close to a CRT.
 