Good point. DLSS would completely break with mods.
> That's assuming DICE/4A has provided all the source screens already. Keep in mind that ultrawide resolutions aren't even an option yet, since DLSS needs to be trained on every single resolution separately and ultrawide hasn't been done. Whether it's in a queue at Nvidia or DICE hasn't provided non-standard resolutions yet, who knows.

That's not correct. DICE/4A only needs to provide the full solution to Nvidia (textures, code, etc.). How it's trained is fully determined by Nvidia, though to be honest, I'm unsure how Nvidia trains the system. Training would not involve providing screenshots; the NN is not looking at the player's position, extrapolating from images it has been trained on, and trying to fill in the blanks. That would be one interpretation of what it's doing, but it's actually not what it's doing.
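To make that concrete, here is roughly what supervised training of an upscaler looks like in general: pairs of low-resolution renders and high-quality ground-truth frames, with one trained model per resolution pairing. This is a purely illustrative PyTorch sketch with a toy model and random stand-in data, not Nvidia's actual DLSS pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "upscaler": maps a low-res frame to a 2x larger one.
# Purely illustrative -- the real DLSS network is far more sophisticated.
class ToyUpscaler(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, low_res):
        up = F.interpolate(low_res, scale_factor=2, mode="bilinear",
                           align_corners=False)
        return up + self.body(up)        # predict a residual on top of bilinear

# One model per input/output resolution pairing, trained offline on
# (low-res render, ground-truth) pairs supplied for that pairing.
model = ToyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for _ in range(100):                      # stand-in training loop
    low = torch.rand(8, 3, 64, 64)        # random stand-ins for low-res crops
    target = torch.rand(8, 3, 128, 128)   # random stand-ins for ground truth
    loss = F.mse_loss(model(low), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```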
And what happens when there's a significant change in rendering, enough to make the existing DLSS training obsolete, and they need to provide ALL the resolution sources to Nvidia again for re-training?
> How does ML performance change? That is, let's say it is taking 7ms per frame at the moment. How can that be sped up? Is there a minimum time but quality over that time can be increased?

If the model changes, yes, you can improve the speed. We can build some really inefficient models with poor performance and some really efficient ones; some ML algorithms are very fast with lots of attributes, some are really slow with tons of attributes, and so on.
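As a rough illustration of how much the model itself dictates the per-frame cost, you can time a small and a large stack of the same kind of layers on the same frame. Hypothetical PyTorch sketch; the absolute numbers will vary wildly with hardware and say nothing about DLSS specifically.

```python
import time
import torch
import torch.nn as nn

def conv_stack(width, depth):
    """Simple stack of 3x3 convolutions; width/depth control the cost."""
    layers = [nn.Conv2d(3, width, 3, padding=1), nn.ReLU()]
    for _ in range(depth - 2):
        layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
    layers += [nn.Conv2d(width, 3, 3, padding=1)]
    return nn.Sequential(*layers)

frame = torch.rand(1, 3, 540, 960)        # one half-resolution frame

for name, net in [("small", conv_stack(16, 4)), ("large", conv_stack(64, 10))]:
    with torch.no_grad():
        net(frame)                        # warm-up pass
        start = time.perf_counter()
        net(frame)
        ms = (time.perf_counter() - start) * 1000
    params = sum(p.numel() for p in net.parameters())
    print(f"{name}: {params / 1e3:.0f}k params, {ms:.1f} ms per frame")
```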
> Are better models dependent on more data? What size resources would something like this take (I appreciate that might be a broad answer!)? Reconstruction tends to work in megabytes for screen-res buffers. I envision ML datasets becoming huge, but I've no idea really!

Generally, more data is better for a NN. That would be a general statement.
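For a sense of scale, a back-of-envelope calculation with my own assumptions about frame format and count (nothing from Nvidia): a single uncompressed 4K ground-truth frame is tens of megabytes, so even a modest training set lands in the terabyte range, whereas reconstruction only ever keeps a handful of screen-sized buffers around at runtime.

```python
# Rough sizes, assuming uncompressed RGBA16F frames -- purely illustrative.
bytes_per_pixel = 4 * 2                       # RGBA, 16-bit float per channel
frame_4k = 3840 * 2160 * bytes_per_pixel      # one ground-truth frame
frame_1440p = 2560 * 1440 * bytes_per_pixel   # one lower-res input frame

pair = frame_4k + frame_1440p                 # one training example
frames = 50_000                               # hypothetical training-set size

print(f"one 4K frame:      {frame_4k / 2**20:.0f} MiB")        # ~63 MiB
print(f"one training pair: {pair / 2**20:.0f} MiB")            # ~91 MiB
print(f"{frames} pairs:     {frames * pair / 2**40:.1f} TiB")  # ~4.4 TiB
```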
One can't determine that until it's proven itself.
Reconstruction on a 1.8TF PS4 to Spider-Man/HZD quality takes a few ms.
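For context on what that few-millisecond compute pass is doing, here is a deliberately naive sketch of checkerboard-style reconstruction in NumPy. Real implementations (and whatever Insomniac's Temporal Injection actually does) add motion-vector reprojection, edge-aware filtering and history rejection, so treat this as an illustration only.

```python
import numpy as np

def reconstruct_checkerboard(sparse, prev_full, blend=0.5):
    """Fill in the half of the pixels that were not rendered this frame.

    sparse:    (H, W, 3) array in which only pixels with (x + y) even were
               rendered this frame; the rest are zero.
    prev_full: (H, W, 3) previous reconstructed frame (a real technique would
               reproject it with per-pixel motion vectors first).
    """
    h, w, _ = sparse.shape
    yy, xx = np.mgrid[0:h, 0:w]
    hole = (xx + yy) % 2 == 1                     # pixels missing this frame

    # Spatial estimate: average of the four rendered neighbours.
    padded = np.pad(sparse, ((1, 1), (1, 1), (0, 0)), mode="edge")
    spatial = (padded[1:-1, :-2] + padded[1:-1, 2:] +
               padded[:-2, 1:-1] + padded[2:, 1:-1]) / 4.0

    out = sparse.copy()
    out[hole] = blend * spatial[hole] + (1 - blend) * prev_full[hole]
    return out
```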
I'm okay with commentary that it sucks today. But for me, it's only a matter of time before Nvidia finds the right way to model this, and then you're going to get a steep increase in performance very quickly.
> That's not correct. DICE/4A only needs to provide the full solution to Nvidia (textures, code, etc.). How it's trained is fully determined by Nvidia, though to be honest, I'm unsure how Nvidia trains the system.

A high-level explanation of how Nvidia does training. They do mention that, based on gamer feedback and screenshots, they are adding different techniques and the data pool is increasing. I wonder how long training takes on the Saturn V computer, as it ranks 28th in the TOP500 worldwide in performance and is the most efficient.
> A high-level explanation of how Nvidia does training. They do mention that, based on gamer feedback and screenshots, they are adding different techniques and the data pool is increasing. I wonder how long training takes on the Saturn V computer, as it ranks 28th in the TOP500 worldwide in performance and is the most efficient.

Ahhh, I stand corrected then @Malo
> Upscaling just on image data would be the most straightforward, drop-in solution if it worked well. How does Nvidia update the datasets? Are the driver downloads for DLSS including large data files?

Indeed, straight up the most straightforward. I think I had a completely different understanding of what they were trying to accomplish.
> Ahhh, I stand corrected then @Malo

Well, I believe you're right in that Nvidia are generating the screenshots rather than the developers, so the bottleneck on training is really purely on Nvidia once they have what they need from the game devs.
> Since the faux 4K is noticeable, I wouldn't mind a supercomputer/tensor cores taking care of it at less or no cost to performance.

The problem I have is you assumed DLSS would be better. Faux 4K is noticeable*, so you wanted a better solution. Well, DLSS's faux 4K (it's just as fake as compute reconstruction) is also very noticeable; far more so. Why not wait and see what the best solutions for upscaling are, instead of assuming DLSS was the perfect magic bullet? And why not discuss both technologies equally in terms of pros and cons instead of siding 100% with one for no particular reason?
> From what resolution are those 1.8TF base consoles reconstructing?

Half 4K res in the case of HZD, IIRC. We don't know how Insomniac's Temporal Injection works.
> That's what I mean: of course it sucks, but aside from that, the tech/idea seems to be forward-thinking.

Why? Why is using ML more forward-thinking than using more sophisticated algorithms on compute using ever more local, deep data?
> Consoles with their limited tech could use a supercomputer for offloading.

There's no offloading. The supercomputer trains the model; the GPU then does a helluva lot of work implementing that model. Using ML to upscale is demanding (more demanding than compute at the moment). Ergo, we need to see where ML goes and where reconstruction goes, and evaluate the different solutions neutrally to ascertain the best options for devs and gamers alike.
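One way to see why there's no runtime offloading: what comes out of the training side is just a set of weights, small enough to ship with a driver update, while the per-frame work of running the network stays entirely on the local GPU. Hypothetical PyTorch sketch with a stand-in network, not DLSS's real one.

```python
import io
import torch
import torch.nn as nn

# Stand-in for "the model the supercomputer produced" -- only the weights ship.
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)

buf = io.BytesIO()
torch.save(net.state_dict(), buf)   # the downloadable part: weights, not data
print(f"weights: {buf.tell() / 2**10:.0f} KiB")   # tiny next to the training set

# The recurring cost is local inference, paid every single frame on the GPU.
with torch.no_grad():
    frame = torch.rand(1, 3, 1080, 1920)          # one 1080p frame
    output = net(frame)
```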
> Well, I believe you're right in that Nvidia are generating the screenshots rather than the developers, so the bottleneck on training is really purely on Nvidia once they have what they need from the game devs.

Yea, thanks for saving my face.
> Are better models dependent on more data? What size resources would something like this take (I appreciate that might be a broad answer!)? Reconstruction tends to work in megabytes for screen-res buffers. I envision ML datasets becoming huge, but I've no idea really!

Nvidia's answer:
> We have seen the screenshots and are listening to the community’s feedback about DLSS at lower resolutions, and are focusing on it as a top priority. We are adding more training data and some new techniques to improve quality, and will continue to train the deep neural network so that it improves over time.
https://wccftech.com/ffxv-nvidia-dlss-substantial-fps-boost/

> The implementation of NVIDIA DLSS was pretty simple. The DLSS library is well polished, so with DLSS we were able to reach a functional state within a week or so, whereas it could take months if we implemented TAA on our own. The velocity map and how it's generated differ depending on each game engine. In order to support that aspect and to keep pixel jitter under control, we needed to modify parameters.
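The "pixel jitter" mentioned there is the sub-pixel camera offset that TAA-style techniques (and DLSS integrations) apply every frame so that successive frames sample different positions inside each pixel. A common way to generate the offsets is a low-discrepancy sequence such as Halton(2, 3); this is my own minimal illustration, not code from Square Enix or Nvidia.

```python
def halton(index, base):
    """Return the index-th element of the Halton sequence for a given base."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

# Eight sub-pixel jitter offsets in the range [-0.5, 0.5), applied to the
# projection each frame and cycled; the velocity map lets the accumulation
# pass undo the jitter when it reprojects history.
jitter = [(halton(i + 1, 2) - 0.5, halton(i + 1, 3) - 0.5) for i in range(8)]
print(jitter)
```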
> Well, I believe you're right in that Nvidia are generating the screenshots rather than the developers, so the bottleneck on training is really purely on Nvidia once they have what they need from the game devs.

I think you might be correct regarding how the process is currently implemented, though in another Q&A they did mention developers would be providing Nvidia data.
https://news.developer.nvidia.com/dlss-what-does-it-mean-for-game-developers/

> Question: How much work will a developer have to do to continue to train and improve the performance of DLSS in a game?
> At this time, in order to use DLSS to its full potential, developers need to provide data to NVIDIA to continue to train the DLSS model. The process is fairly straightforward with NVIDIA handling the heavy lifting via its Saturn V supercomputing cluster.