AMD FSR antialiasing discussion

Games without motion vectors are a dead end. FSR is great for old games, but any cutting-edge engine for forward-thinking titles is going to have to think about temporal upscaling. Unless there's some unforeseen breakthrough in AI upscaling that can "imagine" the missing details.

AI upscalers that need no temporal data already exist; they're currently aimed at non-real-time applications, however. At least my understanding is that part of the reason for relying on motion vectors is that there is a big quality/performance efficiency gain. As in, we don't have hardware capable of doing "effective" AI upscaling in real time for a gaming application otherwise.

I'm actually curious about a comparison of DLSS against those non-real-time AI upscalers, albeit they aren't for the most part designed or trained for gaming (nor tend to be for CG-style images; Pixar, interestingly, actually uses an internal AI upscaler without a temporal component).

Maybe in the future, if hardware gets fast enough, that could lead to the possibility of adding a completely game-agnostic AI upscaler? This would also have an application in "remastering" old games. I know some fan-made texture "remasters" for instance already rely on AI-upscaled textures.

As an aside, the Nvidia Shield does AI upscaling without relying on motion vectors, and therefore without any software support requirement, though I'm not sure if it relies on other temporal data. It's too slow for real-time gaming, however.
 
I totally disagree on small studios. In fact, these companies use ready-made game engines like UE and Unity, which already integrate TAA and DLSS in their core. TAA/DLSS is mostly a checkbox and doesn't require any extra development with these engines. On the other side, big studios doing AAA games generally use in-house engines that need IHV effort and help to integrate their proprietary solution.

That second part is absolutely true, but like I said in the post that you replied to, badly implemented temporal upscaling in a game is much worse than no temporal upscaling.

Even AAA developers with all their technical know-how still screw that up. Just look at Dying Light 2. The developers' DLSS implementation in that game was so bad, introducing so many annoying and obvious rendering artifacts, that many people turned off both DLSS and RT (since they needed DLSS for RT not to completely tank their framerate).

I know that Techland were working on a patch to address the issue, but since no one I know is playing it anymore, I have no idea if those issues were properly resolved or not.

So, yes, you can enable it like a "check box", but will that lead to an improvement? Sometimes it does. Will it lead to a worse-looking game? Sometimes it does.

So, in the end, many developers will continue to choose not to implement it, as they don't have either the time or the technical know-how to ensure that it is implemented well in their game.

Regards,
SB
 
Sorry if the question is stupid, but I'm a little out of the loop.
Is RSR an alternative method that runs in the driver on RDNA2? Is it better than FSR 2.0, or just a catch-all solution? Can they work in tandem?
 
Sorry if the question is stupid, but I'm a little out of the loop.
Is RSR an alternative method that runs in the driver on RDNA2? Is it better than FSR 2.0, or just a catch-all solution? Can they work in tandem?
RSR is just FSR 1.0, but forced through the driver, so the game has no idea it exists. If you use RSR in a game, the HUD will be scaled too (which is bad), and the game will not have the negative LOD bias applied to textures that standard FSR 1.0 integrations recommend (so the textures might be less aliased and more blurry than with a native FSR 1.0 integration).
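For anyone wondering what that LOD bias actually looks like, here's a minimal sketch. The log2 formula and the ~77% "Quality" mode render scale are the usual guidance for FSR 1.0 integrations; the function name is just for illustration:

```python
import math

def fsr_mip_bias(render_width: int, display_width: int) -> float:
    """Negative LOD bias so texture sampling picks sharper mip levels."""
    return math.log2(render_width / display_width)

# FSR 1.0 "Quality" mode renders at ~77% of the display resolution,
# e.g. roughly 2954x1662 for a 4K output:
print(fsr_mip_bias(2954, 3840))  # ~ -0.38
# RSR leaves the bias at 0, so sampling stays on the blurrier mips.
```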
 
AI upscalers that need no temporal data already exist; they're currently aimed at non-real-time applications, however. At least my understanding is that part of the reason for relying on motion vectors is that there is a big quality/performance efficiency gain. As in, we don't have hardware capable of doing "effective" AI upscaling in real time for a gaming application otherwise.

I'm actually curious about a comparison of DLSS against those non-real-time AI upscalers, albeit they aren't for the most part designed or trained for gaming (nor tend to be for CG-style images; Pixar, interestingly, actually uses an internal AI upscaler without a temporal component).

Maybe in the future, if hardware gets fast enough, that could lead to the possibility of adding a completely game-agnostic AI upscaler? This would also have an application in "remastering" old games. I know some fan-made texture "remasters" for instance already rely on AI-upscaled textures.

As an aside, the Nvidia Shield does AI upscaling without relying on motion vectors, and therefore without any software support requirement, though I'm not sure if it relies on other temporal data. It's too slow for real-time gaming, however.

With the current state of the art, faster HW won't enable better AI upscaling without any temporal data, at least not for real-time content. Since image reconstruction is essentially an ill-posed problem, without temporal information it is pretty much impossible to add detail in a temporally stable fashion.
You can get a network to hallucinate high-frequency details, but without temporal data a tiny change in the next frame can cause the network to hallucinate something completely different.

When the input to the network is already sampled at a very high rate (e.g. movies), the problem is not as ill-posed and temporal stability improves, but with real-time content there's not much hope of getting high-frequency, temporally stable results without making use of temporal information.
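To make that concrete, here's a toy sketch of the reproject-and-blend loop that motion vectors enable. Array names, shapes, and the blend factor are illustrative assumptions, not any shipping upscaler's internals:

```python
import numpy as np

def temporal_accumulate(current, history, motion, alpha=0.1):
    """current, history: (H, W) float images; motion: (H, W, 2) offsets
    pointing from each pixel to its location in the previous frame."""
    h, w = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: fetch each pixel's history sample along its motion vector.
    py = np.clip(np.round(ys + motion[..., 1]).astype(int), 0, h - 1)
    px = np.clip(np.round(xs + motion[..., 0]).astype(int), 0, w - 1)
    reprojected = history[py, px]
    # Exponential blend: each new frame contributes ~10%, so detail
    # accumulated over many frames survives and stays stable.
    return alpha * current + (1 - alpha) * reprojected

# Without valid `motion`, `reprojected` is just misaligned history:
# blending it smears moving edges (ghosting), while skipping history
# entirely lets a single-frame network re-hallucinate different detail
# every frame (shimmer).
```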
 
I wish AMD could implement RSR by internally reducing the resolution instead of getting the game to run in windowed mode and then upscaling it. I'm effectively gaming at 1080p, and asset quality can differ wildly between games due to the engine doing different things with LoD.

Days Gone looks fine at both 1080p and 1440p, but Cyberpunk downgrades 1080p to such a level that it seems as if the game is perennially loading textures as you get close to objects. RSR in such a game will downgrade it even further, and those using it at 1440p would see much worse quality.
 
I wish AMD could implement RSR by internally reducing the resolution instead of getting the game to run in windowed mode and then upscaling it. I'm effectively gaming at 1080p, and asset quality can differ wildly between games due to the engine doing different things with LoD.

Days Gone looks fine at both 1080p and 1440p, but Cyberpunk downgrades 1080p to such a level that it seems as if the game is perennially loading textures as you get close to objects. RSR in such a game will downgrade it even further, and those using it at 1440p would see much worse quality.

If the new FSR works well, then we could see wide support for it in games. Neither the Xbox Series X nor the PS5 can really render new games at 4K. But with the new FSR, if it works well, rendering at 1440p (or worst case 1080p) and upscaling to 4K could be a huge performance increase for the consoles. If they are already implementing it for the consoles, it should be trivial for the PC, especially if it's implemented on the Xbox.

So let's hope this new stuff is really good.
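For a rough sense of the headroom involved, some back-of-the-envelope pixel math (shading cost doesn't scale perfectly linearly with pixel count, so treat this as an upper bound on the win):

```python
# Shading cost scales roughly with pixels shaded (a simplification).
NATIVE_4K = 3840 * 2160  # 8,294,400 pixels

for name, (w, h) in {"1440p": (2560, 1440), "1080p": (1920, 1080)}.items():
    print(f"{name}: {w * h / NATIVE_4K:.0%} of the 4K pixel load")

# 1440p -> 44%, 1080p -> 25%: the rest is headroom FSR 2.0's
# reconstruction pass would have to be cheap enough to reclaim.
```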
 
No idea, wrong topic perhaps.
Right topic.
Upscaling solutions are limited on console. TAAU and CBR are about the only ones we see, and these are all custom-built for each specific engine/game, resulting in a variety of inconsistencies from one game to the next. With a universal one like FSR 2.0, we may see widespread adoption, which is a great get for consoles.
 
Right topic.
Upscaling solutions are limited on console. TAAU and CBR are about the only ones we see, and these are all custom-built for each specific engine/game, resulting in a variety of inconsistencies from one game to the next. With a universal one like FSR 2.0, we may see widespread adoption, which is a great get for consoles.

Why? UE4 has built-in TAAU and most UE4 games don't use it.
 
Why? UE4 has built-in TAAU and most UE4 games don't use it.
With respect to what? Sorry, I didn't catch the context of my post you were referring to.

Why are TAAU and CBR about the only ones? Or why is it a great get?
 
Right topic.
Upscaling solutions are limited on console. TAAU and CBR are about the only ones we see, and these are all custom-built for each specific engine/game, resulting in a variety of inconsistencies from one game to the next. With a universal one like FSR 2.0, we may see widespread adoption, which is a great get for consoles.
Do you anticipate that the inconsistency from game to game will disappear across different engines if FSR 2.0 is used?
 