Intel XeSS anti-aliasing discussion

Hey what do you know - Intel literally reached out to us again in the night for *further* clarification. I just tweeted about it.

So I think this means we will see varying levels of XeSS speed depending on the GPU generation and make... which is good.
So they are allowing it to run on cards without DP4a support after all. This will be interesting in terms of performance.
 
Finally got around to looking at this thread.
So glad it's actually clarified, thanks for letting us know.

This is precisely how I expected it to work.
The reason for the DP4a kernel on Intel iGPUs, I suspect, is that there's overhead to doing it through SM6.4 even when using DP4a.
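
For anyone wondering what DP4a actually is: it's a packed 8-bit dot-product-and-accumulate instruction, exposed to D3D12 shaders via the SM6.4 `dot4add_i8packed`/`dot4add_u8packed` intrinsics. A rough Python sketch of what a single signed DP4a op computes (the function name and packing here are my own illustration, not Intel's code):

```python
def dp4a(acc, a_packed, b_packed):
    """Emulate a signed DP4a: dot product of four 8-bit lanes packed
    into two 32-bit words, accumulated into a 32-bit integer."""
    a = (a_packed & 0xFFFFFFFF).to_bytes(4, "little")
    b = (b_packed & 0xFFFFFFFF).to_bytes(4, "little")
    to_signed = lambda v: v - 256 if v > 127 else v
    return acc + sum(to_signed(x) * to_signed(y) for x, y in zip(a, b))

# Four lanes (1, 2, 3, 4) . (5, 6, 7, 8) = 70, done in one instruction
# on DP4a-capable hardware instead of four separate multiply-adds.
print(dp4a(0, 0x04030201, 0x08070605))  # -> 70
```

On hardware without DP4a, the same math falls back to ordinary integer multiply-adds, which is one plausible reason the non-DP4a path is slower.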

@Dictator are you working on follow-up videos, or are you sick of doing TAAU comparisons at the moment?
It'd be interesting to see performance across vendors, especially on the lower-end cards.
 
ATM I am not working *directly* on a follow-up video, but I have already done some performance captures and such on non-Intel GPUs to prep for such a video in the future. I also made some quality comparison videos for the non-Arc version of XeSS.

But to be blunt, I am a bit tired of working on IQ comparison videos. I feel like I have done soooo many in the last few months... I am looking forward to the eventual RTX 4*** series launch, whenever that is, because it hopefully means I can get away from all this IQ stuff for a bit lol
 

/comfort...

More IQ comparisons incoming considering one of the big selling points for RTX 4xxx is DLSS3. :p

Although I'm more interested in seeing input-response benchmarks and some kind of input-consistency benchmark for DLSS3's performance-enhancement mode (assuming IQ remains the same, which it probably doesn't). Are input response and latency improved commensurately with the improvement in frame times, or are they relatively flat? Additionally, does it introduce any potential input bubbles?
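
On the input-consistency idea: one simple way to quantify pacing is to capture frame present timestamps (tools like PresentMon log these) and look at the spread of frame-to-frame deltas. A hypothetical sketch with made-up numbers, not a real benchmark:

```python
import statistics

def pacing_stats(timestamps_ms):
    """Mean and stddev of frame-to-frame deltas; a large stddev
    relative to the mean suggests uneven pacing (potential stalls
    or 'input bubbles')."""
    deltas = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return statistics.mean(deltas), statistics.stdev(deltas)

# Hypothetical capture: mostly ~8.3 ms frames with one 25 ms hitch.
mean, dev = pacing_stats([0.0, 8.3, 16.6, 24.9, 49.9, 58.2])
print(f"mean {mean:.1f} ms, stddev {dev:.1f} ms")
```

A perfectly paced run would show a stddev near zero; a single generated-frame hiccup shows up immediately in the spread even when the average framerate looks fine.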

Regards,
SB
 
Now I'm curious if the issues Alex mentioned in his preview are fixed.

Also, performance on non-DP4a GPUs like RDNA1 would be highly interesting. Anyone got one?
 
Guess you never know, but I wouldn't hold your breath unless they knew about it beforehand.
It's not really like an easily fixed game bug.
It's going to take a lot more training to fix, plus testing against other games to make sure it hasn't made things worse.
Early days, those are the sort of things they'll need to QA.

Hopefully we'll start to see a lot of diverse comparisons, although obviously we're still waiting on @Dictator to get his second wind.
 
So, one caveat that wasn't obvious with XeSS on non-Intel h/w: it works only in D3D12.

From my (very fast and unsophisticated) test on a 3080, XeSS is considerably softer than DLSS.
The latter looks more detailed than native+TAA+CAS, while the former is closer to native+TAA+CAS, although less detailed than that on some surfaces.
This is true for all presets.

The difference in performance is also substantial. XeSS is obviously faster than native but DLSS is about 15-20% faster than XeSS.
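
To put that 15-20% in frame-time terms (illustrative numbers only, not measurements from this test):

```python
def frame_time_ms(fps):
    """Convert frames per second to milliseconds per frame."""
    return 1000.0 / fps

xess_fps = 60.0                  # hypothetical XeSS result
for speedup in (1.15, 1.20):     # "DLSS is about 15-20% faster"
    dlss_fps = xess_fps * speedup
    saved = frame_time_ms(xess_fps) - frame_time_ms(dlss_fps)
    print(f"{speedup - 1:.0%} faster: DLSS {dlss_fps:.0f} fps, "
          f"saves {saved:.2f} ms per frame")
```

The same percentage gap is a larger absolute frame-time saving at lower base framerates, so the difference matters most on slower cards.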
 
Yeah, from my quick tests in SOTT the big downside is the fps cost - it's pretty expensive compared to DLSS, and the gap is bigger than when I've tested FSR 2 in other games. XeSS Performance requires more GPU time than DLSS Balanced. I have to turn off some pretty significant graphical settings (such as tessellation) to have enough headroom to maintain 60fps with XeSS Performance at 4K on my 3060, which isn't great.

I'm sure it will fare much better on Arc of course, but on other cards it seems pretty costly.
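
For context on why that comparison is striking: going by the publicly documented per-axis render-scale factors (these are assumptions about the presets, not values pulled from this game), XeSS Performance actually renders *fewer* pixels than DLSS Balanced yet still costs more GPU time:

```python
# Approximate per-axis render-scale factors for each preset; assumed
# from public documentation and may not match this exact game build.
PRESETS = {
    "DLSS Quality":     0.667,
    "DLSS Balanced":    0.58,
    "DLSS Performance": 0.50,
    "XeSS Quality":     0.67,
    "XeSS Balanced":    0.59,
    "XeSS Performance": 0.50,
}

def internal_resolution(out_w, out_h, scale):
    """Internal render resolution before upscaling (rounded)."""
    return (round(out_w * scale), round(out_h * scale))

for name, scale in PRESETS.items():
    w, h = internal_resolution(3840, 2160, scale)
    print(f"{name:18s} -> {w}x{h} ({w * h / 1e6:.1f} MP)")
```

If those factors hold, the extra cost is coming from the upscaling pass itself rather than from rendering more pixels.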

Updated: added a video showing DLSS Balanced vs. XeSS Balanced + native 4K when rendering water puddles. It's not the exact spot DF got to, so that may differ, but the shimmering effect is definitely more pronounced with XeSS; DLSS also suffers from this in motion, however. Native reveals significantly more detail than either when looking at the objects in the water with the least shimmering.

While this is not necessarily the best example of the 'artifacting' I've complained about in other titles, it's a decent illustration imo of when these reconstruction methods can be particularly noticeable. Effects that incorporate lower-res buffers stand out far more than the occasional ghosting.
 
Not official, but here are some benchmarks running on older hardware (GTX 1060).
Unfortunately, as of right now I'm unable to find benchmarks for the RX 5700 or the Vega GPUs that support DP4a; hopefully people will be testing those soon.
 
Anybody know if the Shadow of the Tomb Raider trial supports XeSS? I can get my hands on a laptop that has an Iris Xe GPU in it, but I don't really want to purchase a game that I have no plans on playing anytime soon.
 