Non-DLSS Image Reconstruction techniques? *spawn*

sonen

Newcomer
https://www.dsogaming.com/news/intel-and-ucsc-showcase-dlss-like-ai-upscaling-technique/

Intel and UCSC have showcased a DLSS-like AI upscaling technique that will be further detailed at SIGGRAPH Asia on December 20th. The teams used the Unreal Engine 4 Infiltrator tech demo to showcase this new AI upscaling image reconstruction tech.

This new AI upscaling technique is called QW-Net. According to the teams, QW-Net is a neural network for image reconstruction, where close to 95% of the computations can be implemented with 4-bit integers.
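Not from the paper itself, but as a rough illustration of what running computations with 4-bit integers means in practice, here is a minimal symmetric-quantization sketch in Python/NumPy. The function names and the per-tensor scaling choice are my own assumptions, not QW-Net's actual scheme.

```python
import numpy as np

def quantize_int4(x, scale=None):
    """Symmetric 4-bit quantization: map floats to integers in [-8, 7]."""
    if scale is None:
        scale = np.abs(x).max() / 7.0  # one scale per tensor; per-channel scales are also common
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)  # stored in int8, values fit in 4 bits
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Toy weights and activations standing in for a convolution layer
w = np.random.randn(16, 16).astype(np.float32)
a = np.random.randn(16, 16).astype(np.float32)

qw, sw = quantize_int4(w)
qa, sa = quantize_int4(a)

# Integer matmul accumulated in int32, rescaled back to float afterwards
acc = qw.astype(np.int32) @ qa.astype(np.int32)
approx = acc.astype(np.float32) * (sw * sa)

print("max abs error vs. float matmul:", np.abs(approx - w @ a).max())
```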


“This is achieved using a combination of two U-shaped networks that are specialized for different tasks, a feature extraction network based on the U-Net architecture, coupled to a filtering network that reconstructs the output image. The feature extraction network has more computational complexity but is more resilient to quantization errors. The filtering network, on the other hand, has significantly fewer computations but requires higher precision. Our network uses renderer-generated motion vectors to recurrently warp and accumulate previous frames. This produces temporally stable results with significantly better quality than TAA, a widely used technique in current games.”
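The "recurrently warp and accumulate previous frames" step is essentially the same history reprojection that TAA performs before any network is involved. Below is a minimal PyTorch sketch of that idea, not the paper's code; the function names, the exponential blend, and the blend factor are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def warp_previous(prev_frame, motion_vectors):
    """Warp the previous frame into the current frame using per-pixel motion vectors.

    prev_frame:     [N, C, H, W] accumulated history (e.g. RGB)
    motion_vectors: [N, 2, H, W] screen-space offsets in pixels (current -> previous)
    """
    n, _, h, w = prev_frame.shape
    # Base sampling grid: pixel coordinates of the current frame
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32, device=prev_frame.device),
        torch.arange(w, dtype=torch.float32, device=prev_frame.device),
        indexing="ij")
    base = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(n, -1, -1, -1)  # [N, 2, H, W]
    # Where each current pixel was located in the previous frame
    coords = base + motion_vectors
    # Normalize to [-1, 1] for grid_sample, which expects a [N, H, W, 2] grid of (x, y)
    coords_x = coords[:, 0] / (w - 1) * 2 - 1
    coords_y = coords[:, 1] / (h - 1) * 2 - 1
    grid = torch.stack((coords_x, coords_y), dim=-1)
    # Off-screen samples come back as zeros; a real implementation would detect and reject them
    return F.grid_sample(prev_frame, grid, mode="bilinear", align_corners=True)

def accumulate(current, history, motion_vectors, alpha=0.1):
    """Exponential accumulation of the warped history, TAA-style."""
    warped = warp_previous(history, motion_vectors)
    return alpha * current + (1 - alpha) * warped

# Toy usage: 1080p RGB frames with zero motion
cur = torch.rand(1, 3, 1080, 1920)
hist = torch.rand(1, 3, 1080, 1920)
mv = torch.zeros(1, 2, 1080, 1920)
out = accumulate(cur, hist, mv)
```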

Unfortunately, though, this image reconstruction technique is for offline usage (at least for now). UCSC claims that an optimized implementation for real-time inference remains future work. In other words, we may never see this technique in games. Still, it's fascinating to witness alternatives to DLSS. It also shows how far ahead of its time NVIDIA was with its image reconstruction technique.

From the paper: “Concurrent to our work, Xiao introduced a reconstruction technique based on U-Net. Using an optimized inference implementation, they reconstruct a 1080p image in 18 to 20 ms on a high-end GPU. In comparison, DLSS reconstructs a 4K image in under 2 ms. Both these approaches can reconstruct images at a higher resolution than the input render.”

https://player.vimeo.com/video/456337582
 
So DLSS is ~10x faster on the same hardware? That's pretty crazy.
Yea, that's what I think many of us were trying to say earlier in response to 'DirectML' being an option; it's not the hardware or hardware support that is driving the speed. It's the DLSS model itself that is the IP that needs to be protected. It's not easy to replicate what Nvidia has done. 2ms is insane.
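Back-of-the-envelope with the numbers quoted above; note they come from different high-end GPUs, so this is only a rough comparison, and the 19 ms midpoint is my assumption.

```python
# Figures quoted in the article (different GPUs, so treat this as a rough comparison only)
xiao_ms, xiao_mpix = 19.0, 1920 * 1080 / 1e6   # ~18-20 ms to reconstruct a 1080p frame
dlss_ms, dlss_mpix = 2.0, 3840 * 2160 / 1e6    # "under 2 ms" to reconstruct a 4K frame

print(f"per-frame ratio:     {xiao_ms / dlss_ms:.1f}x")                               # ~9.5x
print(f"per-megapixel ratio: {(xiao_ms / xiao_mpix) / (dlss_ms / dlss_mpix):.0f}x")   # ~38x
```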
 

Series S would make things interesting in this regard. The Series X has half the INT4/8 throughput of an RTX 2060, meaning a 4K upscale would take ~5ms using DLSS or an equivalent model. Still potentially worth it at 60fps if you're upscaling from 1080p to 4K.

However, the Series S, with likely no more than 50% of the Series X throughput, would be looking at ~10ms. Basically worthless at 60fps. So what happens if XSX does get a DLSS-like solution and starts upscaling from 1080p to 4K? Where would that leave the S? I know the S would be targeting a lower resolution (1440p), but does that have a linear impact on the ML processing requirements? I'd guess not.
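A rough sketch of the arithmetic behind those estimates; the throughput ratios and the implied RTX 2060-class time are the assumptions made in this post, not official figures.

```python
# Rough, assumed figures from this discussion -- not official specs
rtx2060_time_ms = 2.5          # 4K reconstruction time implied by the ~5 ms Series X estimate above
xsx_throughput_ratio = 0.5     # Series X assumed to have ~half the RTX 2060's INT4/8 throughput
xss_throughput_ratio = 0.25    # Series S assumed to have ~half of the Series X again

xsx_time_ms = rtx2060_time_ms / xsx_throughput_ratio   # ~5 ms
xss_time_ms = rtx2060_time_ms / xss_throughput_ratio   # ~10 ms

budget_60fps = 1000 / 60                               # ~16.7 ms per frame at 60 fps
for name, t in [("Series X", xsx_time_ms), ("Series S", xss_time_ms)]:
    print(f"{name}: ~{t:.0f} ms reconstruction, {t / budget_60fps:.0%} of the 60 fps frame budget")
```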
 
Series S is designed with an advanced hardware scaler to 4K. I don't expect DLSS to ever be used on the S, even for lower resolutions.
The resolution won't have an impact on model speed, unfortunately. It shouldn't, at least, but no one knows how Nvidia did it.
 

Indeed, but if the S is rendering at 1440p native, and a dev wants to use a DLSS-type algorithm on the X to target an internal res of 1080p (or 1440p for that matter), that poses a problem. Rendering requirements on the S could potentially be higher than on the X. They could always halve the frame rate if the X is targeting 60fps, but if it's already at 30fps then the only route left is massive graphical compromises, which seems to go against the principle of the S.
 
Yea, I see where you're going with this. I'm not sure what the options are for when they push both PS5 and XSX to the limit. Hopefully MS provides some tools there to help out (e.g. ML upres or something similar), but it's entirely possible that it's going to miss out on some things.

I can only assume that if they manage to get ML upres working on XSX, they can make a faster model for XSS. But the same model used on both should run in the same time.
 
I guess GeForce Now, if offered as an option, could potentially provide DLSS on the S.
 
So there's no need for tensor hardware for DLSS?
No, just that the games already have DLSS support. Granted, this is just a future possibility for consoles.
The requirements for Windows are:
PC HARDWARE REQUIREMENTS
  • Dual core x86-64 CPU with 2.0GHz or faster
  • 4GB of system memory
  • GPU that at least supports DirectX 11
INTERNET REQUIREMENTS
GeForce NOW requires at least 15 Mbps for 720p at 60fps and 25 Mbps for 1080p at 60fps.
You’ll need to use a hardwired Ethernet connection or 5GHz wireless router.
 
But that's not providing DLSS on the S per se; it's just streaming games running on Nvidia hardware through the S.
 
I don't see any other way other than streaming given the S hardware setup.
 
It's not going to happen at all. The only alternative is MS providing a shader-based AI reconstruction that's similar. The original comment said a "DLSS type algorithm", not DLSS specifically, since that's Nvidia-only.
 
True, if MS can provide a decent AI reconstruction process for the S that would be ideal. However, AFAIK there has been no indication that MS will do anything other than support DirectML.
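For context, "supporting DirectML" on PC can be as simple as running an already-trained model through ONNX Runtime's DirectML backend. A minimal sketch below; the model file, tensor shapes, and 2x scale factor are hypothetical, and a console title would go through its own APIs rather than this Python path.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical pre-trained reconstruction model exported to ONNX
session = ort.InferenceSession("upscaler.onnx", providers=["DmlExecutionProvider"])

# Fake 1080p input frame in NCHW float32 -- a real integration would feed
# color + motion vectors + depth straight from the renderer
frame = np.random.rand(1, 3, 1080, 1920).astype(np.float32)

input_name = session.get_inputs()[0].name
(output_4k,) = session.run(None, {input_name: frame})
print(output_4k.shape)  # e.g. (1, 3, 2160, 3840) for a 2x upscale
```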
 
If they are working on something, they have some time. Maybe 2 more years before the consoles start needing it, and another 3 years after that before it's mature and pushing the envelope in 30fps titles.
 
Will console gamers accept 30fps, especially in 5 years' time? For a few years now, anything less than a solid 60fps has been unacceptable to many in the PC space.
 