Because based on its INT4/INT8 capabilities, the XSX is roughly half as fast at ML as an RTX 2060, and a 4K upscale on an RTX 2060 takes about 2.5ms. That means it would take ~5ms on the XSX, which is nearly a third of the frame time at 60fps. This may still be worth it, but it's also dependent on this single dev studio creating a model comparable in quality and performance to Nvidia's, with all their billions in R&D, access to the world's fastest ML supercomputers, synergy with their own hardware/tensor cores, and massive ML experience (being the world leaders in the hardware that runs it and all).
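Rough back-of-napkin math, assuming the ~2x INT8 throughput gap and the 2.5ms RTX 2060 figure both hold:

```python
# Back-of-napkin estimate: scale the RTX 2060's reported 4K upscale cost
# by an assumed ~2x ML throughput gap to guess at an XSX cost.
rtx2060_upscale_ms = 2.5    # reported 4K upscale cost on an RTX 2060 (assumed)
throughput_ratio = 2.0      # assumption: XSX has ~half the INT8 ML throughput

xsx_upscale_ms = rtx2060_upscale_ms * throughput_ratio  # ~5.0 ms
frame_budget_ms = 1000 / 60                              # ~16.7 ms per frame at 60 fps

print(f"Estimated XSX upscale cost: {xsx_upscale_ms:.1f} ms "
      f"({xsx_upscale_ms / frame_budget_ms:.0%} of a 60 fps frame)")
```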
So while I'm not saying it's impossible, I do think it's sensible to take any such claims with a massive pinch of salt until real-world results have been shown and independently verified. After all, "ML upscaling" could mean almost anything and doesn't necessarily have to be comparable to DLSS, which outputs anywhere between 2.25x and 9x the original number of pixels.
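For context on where that 2.25x - 9x range presumably comes from (assuming 4K output and the commonly cited DLSS input resolutions; exact modes vary per title):

```python
# Pixel-count ratios for a 4K output at assumed DLSS-style input resolutions.
output_px = 3840 * 2160
print(output_px / (2560 * 1440))  # 1440p input (quality-style mode) -> 2.25x
print(output_px / (1280 * 720))   # 720p input (ultra-performance-style mode) -> 9.0x
```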
TBF, couldn't MS leverage their own resources and expose a lot of this functionality to devs through the APIs? That way the work devs need to do to implement their own upscaling models, etc., is lessened. They're already doing this to a large extent with Auto HDR, and while there are some cases of it not working as intended with a small handful of BC titles, by and large it seems to do as advertised, and I'm guessing it can be tuned to what a dev wants in particular if they wish to expend the resources.
I don't see any major barriers preventing a similar model for image upscaling; MS has already mentioned an upscaler in the Series S. Granted, that could just be a reference to the standard upscalers a lot of devices use, but what would be the point in explicitly bringing it up for a games console when they can't be oblivious to the tech-focused discussions happening here and elsewhere WRT resolution upscaling techniques?
At the very least I'm hoping they have some type of API stack present for image upscaling through DirectML that devs can leverage immediately and fine-tune to their own results if they wish. Nvidia has definitely been king in this field, but it's not like MS doesn't have the resources or R&D teams to do their own research into building and training image upscaling models, plus the customized silicon to provide it at the hardware level. In fact I'd argue they have more resources in that department; the question is whether they've put them to such a purpose. Guess we'll find out sometime soon, but AMD seemed pretty bold about this when addressing DX12U support in their new GPUs today.
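To be clear about what "leverage immediately" could mean in practice, here's a minimal sketch of a DirectML-backed inference path from the dev side, via ONNX Runtime's DirectML execution provider. The model file name and the NCHW float32 tensor layout are purely hypothetical, just for illustration:

```python
# Sketch only: running a hypothetical super-resolution model through
# ONNX Runtime's DirectML execution provider (requires the
# onnxruntime-directml package). Model name and layout are assumptions.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "upscale_2x.onnx",                   # hypothetical pre-trained upscaling model
    providers=["DmlExecutionProvider"],  # route inference through DirectML
)

low_res = np.random.rand(1, 3, 1080, 1920).astype(np.float32)  # stand-in 1080p frame
input_name = session.get_inputs()[0].name
high_res = session.run(None, {input_name: low_res})[0]         # e.g. 1x3x2160x3840
print(high_res.shape)
```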
FWIW, a lot of the same could be said for Sony as well; I'm sure they've been doing some legwork into training data models to push checkerboarding techniques forward. They have patents for further implementations of foveated rendering, among other things. Maybe they have a set of APIs for devs on their platform to leverage for image upscaling that's relatively easy to implement into existing engines and frameworks but has flexibility where needed. In either case, we'll find out within a year where the consoles are on this front.
Welp, there it is. Still not sure it says too much regarding the PS5 though. No mention of cache scrubbers today, for example; that could, however, be one of the features Cerny talked about that AMD adopts for a future design (maybe an RDNA 3 GPU?).
I think it's VERY clear tho that DX12U runs very deep in RDNA2's design, so if Sony is providing equivalent features but can't use DX12U, then they'd have had to customize large parts of their GPU out of necessity to provide them.
Yea, it is. In the same manner that the ESRAM was there to make up for the bandwidth deficit on the XBO.
It makes sense to have Infinity Cache on the larger ones: the more CUs you have, the more bandwidth you need to feed them. But that gets extremely expensive as you keep going up, so you need an alternative method. This looks like a decent way of doing it. I was expecting HBM at first, but clearly that's still not ready for prime time here.
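A toy model of why a big on-die cache helps feed the CUs: effective bandwidth ends up being a hit-rate-weighted blend of cache and GDDR6 bandwidth. All of the numbers below are illustrative assumptions, not AMD's figures:

```python
# Toy model: effective bandwidth as a hit-rate-weighted blend of on-die
# cache bandwidth and external GDDR6 bandwidth. Numbers are assumptions.
gddr6_bw_gbs = 512     # assumed external memory bandwidth (GB/s)
cache_bw_gbs = 1500    # assumed on-die cache bandwidth (GB/s)
hit_rate = 0.6         # assumed fraction of traffic served from the cache

effective_bw = hit_rate * cache_bw_gbs + (1 - hit_rate) * gddr6_bw_gbs
print(f"Effective bandwidth: ~{effective_bw:.0f} GB/s")
```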
Yea, I don't think the PS5 or XSX has this cache. Their CU counts are probably too low for it to matter.
Maybe there's still a slim chance the Infinity Cache is present, just in a cut-down fashion. I remember people doing an SRAM count of the Series X GPU and a decent chunk of it still wasn't accounted for. Some things like constant caches were mentioned, but maybe there's a small chance a bit of it is Infinity Cache as well?
The 6900 series is basically two PS5s smashed together, so maybe there's still a small chance that system has some Infinity Cache implementation too? I'm just being a wishful dreamer here, but it's fun.
Also a quick mention: keep in mind AMD switched the cache labeling for RDNA2. The L2 is effectively an L3 on their GPUs, the L1 an L2, and the L0 an L1. So the L2 (really L3) might be 5 MB, but that doesn't mean the L1 (really L2) is collectively smaller.
And while it's a small chance, could they still have some form of Infinity Cache built in? I'd suspect it's scalable; so much of RDNA 2 seems scalable as-is.