PS5 Pro *spawn

According to a known dev at Era, 300 8-bit TOPS is equivalent to a 3090 Ti and about 20% lower than a 4080. He thinks that's well enough to properly reconstruct to 4K similarly to those GPUs using DLSS.

Even if the Pro falls short of 3090/Ti-level RT performance and lands closer to a 3080/Ti, that's still a very respectable upgrade for this generation of consoles. More stable framerates, higher resolution bounds, and nicer RT sound pretty good to me.

The unknowns so far...
* NVMe size
* Any I/O and compression block improvements
* RAM size, bus width, and bandwidth improvements.
* VRR improvements.
 
Even if the Pro falls short of 3090/Ti-level RT performance and lands closer to a 3080/Ti, that's still a very respectable upgrade for this generation of consoles. More stable framerates, higher resolution bounds, and nicer RT sound pretty good to me.

The unknowns so far...
* NVMe size
* Any I/O and compression block improvements
* RAM size, bus width, and bandwidth improvements.
* VRR improvements.

Leaked specs still show it has 16GB RAM.

So I'm guessing it's still a 256-bit bus.
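If that guess holds, the bandwidth arithmetic is simple: peak GDDR6 bandwidth is just bus width times per-pin data rate. A quick sketch, where the base PS5's 256-bit / 14 Gbps / 448 GB/s is the known reference point and the 18 Gbps figure for the Pro is purely an assumed number for illustration:

```python
# Peak GDDR6 bandwidth (GB/s) = bus width (bits) * per-pin data rate (Gbps) / 8
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gb_s(256, 14.0))  # base PS5: 448.0 GB/s
print(peak_bandwidth_gb_s(256, 18.0))  # hypothetical Pro on 18 Gbps chips: 576.0 GB/s
```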
 
It is quite interesting that Intel's XeHPG architecture shows remarkably similar performance figures to the PS5 Pro.
 
According to a known dev at Era, 300 8-bit TOPS is equivalent to a 3090 Ti and about 20% lower than a 4080. He thinks that's well enough to properly reconstruct to 4K similarly to those GPUs using DLSS.

He is wrong; the RTX 3090 is weaker at 285 TOPS 😁. All the INT8 TOPS numbers for the 4xxx series are doubled, and I have no idea how Nvidia calculates that; it could be marketing. Edit: my mistake, the Ti version is 320 TOPS.
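Taking the figures in this thread at face value, the relative positions are easy to sanity-check; note the 4080 number below is back-solved from the "about 20% lower" claim rather than taken from a spec sheet:

```python
# INT8 TOPS figures as quoted in the thread; the 4080 value is implied, not official.
tops = {
    "PS5 Pro (claimed)": 300.0,
    "RTX 3090": 285.0,
    "RTX 3090 Ti": 320.0,
    "RTX 4080 (implied by 'about 20% lower')": 300.0 / 0.80,  # ~375
}

baseline = tops["PS5 Pro (claimed)"]
for name, value in tops.items():
    print(f"{name}: {value:.0f} TOPS ({value / baseline:.0%} of the Pro figure)")
```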

It's not about how many TOPS you throw at the problem. An RTX 3050 has far fewer TOPS than any of those systems, yet it still achieves the full DLSS quality of a 4090. It's about the sophistication of the AI model, and Nvidia has a nearly half-decade advantage over Sony in that regard, to say nothing of the supercomputer training resources they must have available to throw at the problem.

Expecting PSSR to be on par with DLSS is pretty unrealistic IMO (even ignoring Ray Reconstruction and Frame Generation). Neither Microsoft nor AMD has managed to get out of the gate yet, while Intel, with all of its resources, has released an AI-based upscaler, but one that is clearly not up to the same level as DLSS.
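To make the "model sophistication" point concrete: DLSS-style temporal upscalers all consume roughly the same inputs (jittered low-resolution colour, depth, motion vectors, and warped history), and the differentiator is the network and the data it was trained on rather than the raw TOPS it runs on. A purely illustrative PyTorch sketch of that interface; the layer sizes are invented and have nothing to do with anything Sony or Nvidia have published:

```python
import torch
import torch.nn as nn

class ToyTemporalUpscaler(nn.Module):
    """Illustrative only: a tiny network with the same *interface* as a temporal
    ML upscaler (low-res colour + depth + motion vectors + warped history in,
    high-res colour out). Real models are far larger and trained on huge
    curated datasets, which is where the quality differences come from."""
    def __init__(self, scale: int = 2):
        super().__init__()
        # 3 (colour) + 1 (depth) + 2 (motion) + 3 (warped history) = 9 channels
        self.net = nn.Sequential(
            nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a spatial upscale
        )

    def forward(self, color, depth, motion, history):
        return self.net(torch.cat([color, depth, motion, history], dim=1))

# One 1080p frame in, one 4K frame out
m = ToyTemporalUpscaler()
out = m(torch.rand(1, 3, 1080, 1920), torch.rand(1, 1, 1080, 1920),
        torch.rand(1, 2, 1080, 1920), torch.rand(1, 3, 1080, 1920))
print(out.shape)  # torch.Size([1, 3, 2160, 3840])
```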
 
It's not about how many TOPS you throw at the problem. An RTX 3050 has far fewer TOPS than any of those systems, yet it still achieves the full DLSS quality of a 4090. It's about the sophistication of the AI model, and Nvidia has a nearly half-decade advantage over Sony in that regard, to say nothing of the supercomputer training resources they must have available to throw at the problem.

Expecting PSSR to be on par with DLSS is pretty unrealistic IMO (even ignoring Ray Reconstruction and Frame Generation). Neither Microsoft nor AMD has managed to get out of the gate yet, while Intel, with all of its resources, has released an AI-based upscaler, but one that is clearly not up to the same level as DLSS.
I agree with that, but it will get better with time (this specification already notes that it takes 2 ms, and further improvement could reduce that). The good hardware isn't restricting it like it was on the PS5/XSX.
 
It's not about how many TOPS you throw at the problem. An RTX 3050 has far fewer TOPS than any of those systems, yet it still achieves the full DLSS quality of a 4090. It's about the sophistication of the AI model, and Nvidia has a nearly half-decade advantage over Sony in that regard, to say nothing of the supercomputer training resources they must have available to throw at the problem.

Expecting PSSR to be on par with DLSS is pretty unrealistic IMO (even ignoring Ray Reconstruction and Frame Generation). Neither Microsoft nor AMD has managed to get out of the gate yet, while Intel, with all of its resources, has released an AI-based upscaler, but one that is clearly not up to the same level as DLSS.
The small comparison snippet we saw already looks very promising. I'm not really worried about that, and they have likely been optimizing it for years and will continue doing so (as written in the slides).

But on what basis couldn't it be on par with DLSS? Their CBR technique (software) was great with the very little hardware resources it was given (the ID Buffer). The RT implementation in Insomniac's games is the best on consoles (and only in their games, using their own tools), and even more impressive given the weak RT hardware.

What did they fail at previously that would give us a hint they would fail again here?
 
It's not about how many TOPS you throw at the problem. An RTX 3050 has far fewer TOPS than any of those systems, yet it still achieves the full DLSS quality of a 4090. It's about the sophistication of the AI model, and Nvidia has a nearly half-decade advantage over Sony in that regard, to say nothing of the supercomputer training resources they must have available to throw at the problem.

Expecting PSSR to be on par with DLSS is pretty unrealistic IMO (even ignoring Ray Reconstruction and Frame Generation). Neither Microsoft nor AMD has managed to get out of the gate yet, while Intel, with all of its resources, has released an AI-based upscaler, but one that is clearly not up to the same level as DLSS.

Entirely irrelevant; programmers always tell everyone else what they're doing given the chance. XeSS is as good as DLSS 2, and XeSS 2 is already better than DLSS 3.5 before it's even out (thank Intel for putting up all these cool papers early instead of the BS Nvidia does).

The reason AMD has "failed" is singular: a concentration on universal applicability. By avoiding AI acceleration they've made FSR2 backwards compatible all the way down to mid-range phones (FSR2 runs on Switch for No Man's Sky), but this assumes developers are the primary customers rather than people buying new GPUs. Obviously not; AMD is there to sell new GPUs. What a silly strategy.

Sony's patent is (a) BS, because all software patents are, and (b) concentrated on fixing disocclusion holes in framerate doubling, which is the most obvious image quality issue right now. It'll be interesting to see anyway.
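For anyone unfamiliar with the disocclusion problem being referenced: when a rendered frame is forward-warped along its motion vectors to synthesise an in-between frame, pixels that were hidden behind a moving object have no source sample and show up as holes that then need to be filled. A rough numpy illustration of where those holes come from; none of this is taken from the patent itself:

```python
import numpy as np

def forward_warp(frame: np.ndarray, motion: np.ndarray, t: float = 0.5):
    """Scatter each pixel t of the way along its motion vector.
    Returns the warped frame and a mask of disocclusion holes
    (destinations that no source pixel landed on)."""
    h, w, _ = frame.shape
    warped = np.zeros_like(frame)
    covered = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    # motion[..., 0] = dx, motion[..., 1] = dy, in pixels per frame
    dst_x = np.clip((xs + t * motion[..., 0]).round().astype(int), 0, w - 1)
    dst_y = np.clip((ys + t * motion[..., 1]).round().astype(int), 0, h - 1)
    warped[dst_y, dst_x] = frame[ys, xs]
    covered[dst_y, dst_x] = True
    holes = ~covered  # these pixels must be in-filled, which is the hard part
    return warped, holes

# A flat background with a small block moving right: the region the block
# uncovers receives no samples and shows up in `holes`.
frame = np.full((64, 64, 3), 0.5)
frame[28:36, 20:28] = 1.0
motion = np.zeros((64, 64, 2))
motion[28:36, 20:28, 0] = 8.0  # block moves 8 px right per frame
_, holes = forward_warp(frame, motion)
print(holes.sum(), "hole pixels")
```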
 
To be fair, it doesn't have to match DLSS. It just has to be better than the various console solutions and easier for developers to implement. If they do that, they're golden. They just need it to be good enough to hit their framerate targets while maintaining acceptable image quality.

Intel managed to do very well with their first outing.. nothing says Sony can't.

Personally, I'd be happy if it were better than DLSS... it would keep the pressure on Brian and Ed and the rest of the teams at Nvidia to raise the bar even further. :p
 
I don't think Microsoft should respond to Pro with another machine.

They've been absolutely wrecked by Sony this generation in terms of hardware sales, and from that point of view the generation is already over.

So why waste money on a 3rd SKU when it's not really going to allow them to recover some ground?

The best attack would be to price-cut the Series X as much as possible (which might actually cost less than the R&D for a new machine) to get as many machines installed as possible, and look to start the next generation a year earlier than Sony.
This is what I'm saying. Just focus on the Series X; honestly, they've been wrecked. But they can salvage something out of it and launch a single device next gen. Otherwise, PS5 Pro sales paired with GTA 6 might be a match made in heaven.
 
It's not about how many TOPS you throw at the problem. An RTX 3050 has far fewer TOPS than any of those systems, yet it still achieves the full DLSS quality of a 4090. It's about the sophistication of the AI model, and Nvidia has a nearly half-decade advantage over Sony in that regard, to say nothing of the supercomputer training resources they must have available to throw at the problem.

Expecting PSSR to be on par with DLSS is pretty unrealistic IMO (even ignoring Ray Reconstruction and Frame Generation). Neither Microsoft nor AMD has managed to get out of the gate yet, while Intel, with all of its resources, has released an AI-based upscaler, but one that is clearly not up to the same level as DLSS.
Let's wait and see; it's also about the available data. Sony and MSFT have enough training data to create a solution not just comparable to DLSS but one that eventually even outperforms it. FSR performs quite well despite the lack of dedicated hardware acceleration! Combining the strengths of AMD and Sony/MSFT will definitely provide a compelling upscaling solution. It's too early to say that DLSS will surely be better than PSSR, or that PSSR won't perform close to or as well as it does.
 
The small comparison snippet we saw already looks very promising. I'm not really worried about that, and they have likely been optimizing it for years and will continue doing so (as written in the slides).

Even if Sony has been optimizing this upscaler for years, as you suggest, why would that be any different from Nvidia, who could also have done the same before DLSS 2 launched?

But on what basis couldn't it be on par with DLSS?

Because AI upscaling quality depends on the quality of the model. Not only does Nvidia have a four-year head start in refining that model right now (nearer five by the time the Pro launches), but they also dominate the global AI industry, so from both a training-expertise perspective and a training-hardware perspective they are likely to have a noticeable advantage over Sony.

Their CBR technique (software) was great with the very little hardware resources it was given (the ID Buffer). The RT implementation in Insomniac's games is the best on consoles (and only in their games, using their own tools), and even more impressive given the weak RT hardware.

I'm not really sure what any of that has to do with Sony's ability to outperform the global leader in AI hardware and graphics technology in creating an AI upscaling solution for real-time graphics, when said leader has a nearly half-decade head start.

What did they fail at previously that would give us a hint they would fail again here?

I didn't say they would fail. I'm sure Sony's solution will be fine, just like XeSS. I'm simply saying that maybe expectations should be reined in a little based on the relative market positions and time frames.
 
What the heck are you talking about? Of course it increases performance quite dramatically by relying on AI upscaling instead of compute. Just like DLSS.

So can you explain these results then, and why there isn't a 'dramatic' increase in performance despite moving upscaling to AI hardware and away from compute?
(two benchmark screenshots attached)

EDIT: I've been really kind and run the CP2077 benchmark with both FSR2 and DLSS at matched quality levels for you. So once again, can you please explain why there isn't a 'dramatic' increase in performance despite moving upscaling to AI hardware and away from compute?

FSR2
(benchmark screenshot)

DLSS
(benchmark screenshot)

EDIT #2: Alan Wake 2 - Explain why there's no 'dramatic' increase in performance despite moving upscaling to AI hardware and away from compute.

FSR2
(benchmark screenshot)

DLSS
(benchmark screenshot)

Once again you have added absolutely nothing to this discussion or forum and have been made to look extremely foolish.
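For what it's worth, the frame-budget arithmetic explains why the difference can't be dramatic: the upscaling pass is only a few milliseconds of the whole frame, so moving it from compute shaders to matrix hardware can only win back a few milliseconds no matter how fast that hardware is. A rough sketch; the 2 ms figure is the one quoted earlier in the thread for PSSR, and the other numbers are assumptions for illustration:

```python
# Rough frame-budget arithmetic: even eliminating the upscaling pass entirely
# only moves the needle by a few fps when the rest of the frame dominates.
def fps(frame_time_ms: float) -> float:
    return 1000.0 / frame_time_ms

render_ms = 14.0  # assumed cost of everything except upscaling
fsr2_ms = 3.0     # assumed compute-shader upscale cost at 4K
pssr_ms = 2.0     # figure quoted earlier in the thread for the ML pass

print(f"compute upscaler: {fps(render_ms + fsr2_ms):.1f} fps")   # ~58.8
print(f"ML-hardware pass: {fps(render_ms + pssr_ms):.1f} fps")   # ~62.5
```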
 