Sony PlayStation 5 Pro

I'm sorry, but I'm looking at that and just thinking... console gaming is literally devolving (or evolving depending on your viewpoint) into PC gaming right before our eyes.

What a bunch of BS. We essentially have a low-tier console SKU (Series S), mid-tier console SKUs (PS5, Series X) and a high-tier SKU (PS5 Pro), and between all of them you have games with limitations depending on which device you play them on... modes which work on some but not others, support for some graphics effects on some but not others, and so on. They're literally shoving all this shit down people's throats and it's going to backfire on them... because eventually it will become so messy, and people will get so accustomed to being able to tailor graphics and effects to how they want, to get the performance they want from the device they bought, that they might as well have just bought a PC.

They're slowly but surely pushing people who used to swear up and down that the benefit of a console was pick-up-and-play, no BS about having to fiddle with settings, to become accustomed to exactly that fiddling... and once they are, the console loses its distinction.

PCs are infinitely more varied and complex than a handful of console SKUs though. Windows itself is a headache, and then you've got to deal with all the IHV hardware and driver issues. It's still quite a leap.
 
I'm sorry, but I'm looking at that and just thinking... console gaming is literally devolving (or evolving depending on your viewpoint) into PC gaming right before our eyes.

What a bunch of BS. We essentially have a low-tier console SKU (Series S), mid-tier console SKUs (PS5, Series X) and a high-tier SKU (PS5 Pro), and between all of them you have games with limitations depending on which device you play them on... modes which work on some but not others, support for some graphics effects on some but not others, and so on. They're literally shoving all this shit down people's throats and it's going to backfire on them... because eventually it will become so messy, and people will get so accustomed to being able to tailor graphics and effects to how they want, to get the performance they want from the device they bought, that they might as well have just bought a PC.

They're slowly but surely pushing people who used to swear up and down that the benefit of a console was pick-up-and-play, no BS about having to fiddle with settings, to become accustomed to exactly that fiddling... and once they are, the console loses its distinction.
How's it gonna backfire when the people who are paying $700 for the console want to do stuff like this and get these features? I'm glad Insomniac is doing this and letting PS5 Pro players decide how they wanna play.
 
How's it gonna backfire when the people who are paying $700 for the console want to do stuff like this and get these features? I'm glad Insomniac is doing this and letting PS5 Pro players decide how they wanna play.
With the base PS5 and Graphics vs Performance modes and all that, it created this unease among many players about having to make a choice and accept the compromises. It could be unsatisfying knowing that you were 'missing out' on something because you chose one way or the other. It gnawed at people's sense of FOMO.

This was really the main point of the PS5 Pro, wasn't it? To help these folks go from a win-lose compromise to a win-win default. But if you're gonna add in ever more options for the Pro, you're kind of taking people right back to that same win-lose feeling again, ya know?

I do personally think there's something to be said for a straightforward approach to console gaming. As much as I may like options and PC gaming, there's often something quite refreshing about just booting up some Switch or PS4 game or something, getting stuck in, and simply enjoying what's presented to me.
 
I'm sorry, but I'm looking at that and just thinking... console gaming is literally devolving (or evolving depending on your viewpoint) into PC gaming right before our eyes.

What a bunch of BS. We essentially have a low-tier console SKU (Series S), mid-tier console SKUs (PS5, Series X) and a high-tier SKU (PS5 Pro), and between all of them you have games with limitations depending on which device you play them on... modes which work on some but not others, support for some graphics effects on some but not others, and so on. They're literally shoving all this shit down people's throats and it's going to backfire on them... because eventually it will become so messy, and people will get so accustomed to being able to tailor graphics and effects to how they want, to get the performance they want from the device they bought, that they might as well have just bought a PC.

They're slowly but surely pushing people who used to swear up and down that the benefit of a console was pick-up-and-play, no BS about having to fiddle with settings, to become accustomed to exactly that fiddling... and once they are, the console loses its distinction.
You can always ignore those options and just play at default settings. For an even more console-like experience, you can leave your console in sleep mode and never have to wait for updates to download or install, since it does everything while sleeping. I don't understand this sentiment wherever I hear it; you have pretty simple choices.
 
With the base PS5 and Graphics vs Performance modes and all that, it created this unease among many players about having to make a choice and accept the compromises. It could be unsatisfying knowing that you were 'missing out' on something because you chose one way or the other. It gnawed at people's sense of FOMO.

This was really the main point of the PS5 Pro, wasn't it? To help these folks go from a win-lose compromise to a win-win default. But if you're gonna add in ever more options for the Pro, you're kind of taking people right back to that same win-lose feeling again, ya know?

I do personally think there's something to be said for a straightforward approach to console gaming. As much as I may like options and PC gaming, there's often something quite refreshing about just booting up some Switch or PS4 game or something, getting stuck in, and simply enjoying what's presented to me.
To be fair, Insomniac is already presenting you with a win/win default mode for each game on the Pro. I personally love the fact that if I wanna push the console even more, I can do so. If someone doesn't have a 120 Hz TV they'll be missing out too... but that shouldn't stop Insomniac from giving more options to players on a device that only 10-15% of the userbase is gonna buy. And those are most likely the people that want this stuff.
 
You could absolutely specialize hardware to focus on accelerating a specific type of instruction above all. 'ML hardware' is not some strict, fixed thing at all. Different ways to skin a cat.

I really just do not understand your preoccupation with this idea that you need the ML hardware to be active the whole time for it to be justifiable. Using a small bit of extra die space for what essentially gives you a large performance boost is easily justified. Just shifting it to compute instead cuts into the main rendering budget, and because it'll be slower, you're *really* eating into that budget. Making up for it would basically require a fair chunk larger GPU, which will be much more costly in terms of die space.
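As a rough back-of-the-envelope illustration of that budget argument (every number below is an assumption purely for illustration, not a measured PS5 Pro or GeForce figure):

```python
# Illustrative frame-budget arithmetic for the "dedicated ML vs shader compute" point.
# All timings here are assumptions for the sake of the example.

frame_ms = 1000 / 60     # ~16.7 ms frame budget at 60 fps

dedicated_ms = 2.0       # assumed upscale cost when offloaded to ML units
compute_ms = 4.0         # assumed (slower) cost of the same upscale on the shader cores

# Shader time left for actual rendering in each case (assuming the dedicated
# units can overlap with shader work, while the compute path cannot):
render_with_dedicated = frame_ms
render_with_compute = frame_ms - compute_ms

extra_gpu_needed = render_with_dedicated / render_with_compute - 1
print(f"Compute path gives up {compute_ms:.1f} ms of a {frame_ms:.1f} ms frame "
      f"({compute_ms / frame_ms:.0%}); matching the dedicated path would need a GPU "
      f"roughly {extra_gpu_needed:.0%} larger.")
```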

And tensor cores were absolutely included in the GAMING line of GPUs in order for games to make use of them. By your reasoning, GeForce GPUs should strip the tensor cores out and just do DLSS via general compute, because they aren't being utilized enough. But we know that's absurd. They are well worth their inclusion for DLSS alone.
You can customize silicon for a particular type of algorithm, yes. That is precisely what tensor cores do.

But I think it’s more important to discuss what the 5pro has than what ML hardware can potentially be.

There is ample proof that PSSR is running on compute: not only can we calculate its leaked performance numbers, but PSSR titles are pixel counted to be running at a lower resolution before upscaling than the performance mode on PS5. And that shouldn't be the case if it were bespoke dedicated hardware.

In the recent Codemasters game on DF, the 120 Hz mode has no PSSR at all. Which, once again, feels to me like another sign that there isn't dedicated silicon for matrix calculations, just ML-based customizations on the SIMD units.

I'm more than happy to do a donation bet here that PSSR is running on the SIMD units, and that there is no separate silicon like matrix crunchers or tensor cores.

Firstly, it doesn't matter; haves and have-nots don't make the 5pro better or worse in its value proposition. But secondly, there are significant additional costs associated with adding that hardware to a mainstream device. While tensor silicon uses fewer transistors than the general-purpose SIMD units, it's no doubt still a significant number of transistors, and I very much doubt the price point could have come in at $699 for a 5pro with that hardware included.
 
There is ample proof that PSSR is running on compute: not only can we calculate its leaked performance numbers, but PSSR titles are pixel counted to be running at a lower resolution before upscaling than the performance mode on PS5. And that shouldn't be the case if it were bespoke dedicated hardware.
That doesn't make sense? PSSR is a compute-demanding upscaler, so of course it will have some cost even if it were running on dedicated hardware (not saying that's the case). All RTX cards show some performance impact comparing native resolution against DLSS upscaling from that same resolution.
 
That doesn't make sense? PSSR is a compute-demanding upscaler, so of course it will have some cost even if it were running on dedicated hardware (not saying that's the case). All RTX cards show some performance impact comparing native resolution against DLSS upscaling from that same resolution.
There would be some impact, yes. But let's be clear: when you say native, you mean TAA, and there is a cost for that too. There's less impact with dedicated units, but that would be the same GPU compared to itself. Now you're comparing 60CU 5pro with more bandwidth versus 36CU PS5.

If dedicated silicon were there, frame generation (an NN workload) could also run in parallel.
 
There would be some impact, yes. There's less impact with dedicated units, but that would be the same GPU compared to itself. Now you're comparing 60CU 5pro with more bandwidth versus 36CU PS5.

If dedicated silicon were there, frame generation (an NN workload) could also run in parallel.

There is your answer. There is dedicated silicon for AI workloads. It's not as much silicon as Nvidia dedicates, I think, but it's dedicated anyway.
 
Now you’re comparing 60CU 5pro with more bandwidth versus 36CU PS5.
Not sure why you are so focused on the CU number comparison ;d we know from Sony that the real performance advantage is around 45%. Btw, the DLSS performance hit is not that small at all.
 
That was never up for debate.

Whether it is an entirely separate core is what was up for debate.
Why should it be up for debate? Who asked for this debate besides you?

There is custom ML hardware from RDNA4, very likely some new Sparse Wave Matrix Multiply Accumulate instructions, and nobody ever claimed it was in its own NPU / unit (which would be complete overengineering on a console, as this silicon would sit idle for the majority of the frame time). It's just another moving goalpost of yours. By the way, even the tensor cores share some resources with the shaders, maybe less than WMMA, but still.
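For anyone unfamiliar with what a WMMA-style instruction actually computes, here is a minimal sketch; the tile size, precisions and NumPy framing are illustrative assumptions, not the actual RDNA4 / PS5 Pro instruction definition:

```python
import numpy as np

# Sketch of one Wave Matrix Multiply-Accumulate step: D = A @ B + C on a small
# fixed-size tile, with low-precision inputs and a higher-precision accumulator.
TILE = 16
A = np.random.rand(TILE, TILE).astype(np.float16)   # e.g. network weight tile
B = np.random.rand(TILE, TILE).astype(np.float16)   # e.g. activation tile
C = np.zeros((TILE, TILE), dtype=np.float32)        # accumulator tile

D = A.astype(np.float32) @ B.astype(np.float32) + C  # one tile multiply-accumulate

# A convolutional layer in an ML upscaler decomposes into many of these tile
# operations; the debate here is whether they execute on separate matrix units
# or occupy the CUs' SIMD lanes while they run.
```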
 
Why should it be up for debate? Who asked for this debate besides you?

There is custom ML hardware from RDNA4, very likely some new Sparse Wave Matrix Multiply Accumulate instructions, and nobody ever claimed it was in its own NPU / unit (which would be complete overengineering on a console, as this silicon would sit idle for the majority of the frame time). It's just another moving goalpost of yours. By the way, even the tensor cores share some resources with the shaders, maybe less than WMMA, but still.
Then you should go back up and read the thread from the OP I was replying to before getting defensive. SG was clearly referring to dedicated cores, versus ML instructions built into the CUs. Those are far apart in performance.

I did not criticize 5pro performance for not having tensor cores. You can follow any of my discussions on this; in fact, I go as far as to say that this is ideal. But you call it goalpost moving, which is, once again, getting defensive over something I did not say.

Dedicated chips like an NPU would never work well in games. Once you take it off-chip, it's too slow to return the data to the GPU to finish processing; there are limited applications for that type of setup.
 
They are not practically dead silicon though; they are being used concurrently with the shader cores all the time, and they do a lot more than upscaling now: frame generation, denoising in several ray-traced and path-traced titles, and HDR conversion post-processing in most titles. They are almost as active as the shader cores now.
Yes. However, if they were only doing upscaling, the rest of the time they'd be dead. 5Pro's description of its ML abilities is upscaling only, not upscaling + frame generation + denoising + HDR conversion. If 5Pro has general-purpose ML hardware that operates independently of the CUs (doesn't tie them up) and is used for 2 ms of upscaling, why isn't it used for the remaining ~14 ms to do other workloads?
Why should it be at debate? Who asked for this debate besides you?

There is custom ML hardware from RDNA4,
The whole past few pages have been debating the nature of 5Pro's upscaling hardware, and a number of people have been involved - it's not just an Iroboto thing.

The nature of that hardware is unclear, hence the debate on the possible and probable solutions. Rather than talk about a 'unit', it's more a case of whether the use of ML for upscaling ties up a CU, preventing it from working on other things. Whether we call that 'in the compute' or not is immaterial and clearly confuses people about what's being suggested.

So, what might PSSR be using?

1) A dedicated upscaling core
2) ML silicon that runs in parallel to the CUs
3) ML capabilities within the CUs that occupy them on ML workloads

1) Seems highly unlikely for a number of reasons and I don't think that theory has any advocates here
2) Doesn't make sense to me for the reasons I gave in response to DavidGraham. If there's ML hardware in there capable of running parallel to the CU workloads, why is it only described as providing 2 ms of upscaling work in Cerny's description? Why isn't it described as ML hardware that can do more? (See the rough arithmetic below.)
3) Fits the current maths and results most readily of these three options

Now 2) isn't out of the question, but it'd be poor marketing from Sony to massively undersell their hardware potential, particularly in light of the high price. $700 would have gone down a lot better with talk of denoising and framegen and future ML abilities.
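For reference, the utilisation arithmetic behind point 2), using the ~2 ms figure from Cerny's description and a 60 fps frame (a sketch with assumed round numbers, not measured data):

```python
# How busy would parallel ML hardware be if upscaling were its only job?
frame_ms = 1000 / 60      # ~16.7 ms per frame at 60 fps
upscale_ms = 2.0          # the ~2 ms upscaling cost cited above

busy_fraction = upscale_ms / frame_ms     # ~12% of the frame
idle_ms = frame_ms - upscale_ms           # ~14.7 ms with nothing scheduled on it
print(f"Busy ~{busy_fraction:.0%} of the frame, idle for ~{idle_ms:.1f} ms - "
      f"hence the question of why it wouldn't also be given denoising or "
      f"frame-generation work.")
```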
 
There is no debate to have! We know from RDNA4 info (I think?) that it's sharing resources with the CUs, similar to HWRT intersection. It's just more artificial moving-goalpost discussion: "Yeah but it's not in a separate unit like tensor cores so it's not really custom ML".

It's like a combination of the "PS5 is not RDNA2" + "PS5 doesn't have hardware RT" discussions. In the end it made no difference in actual games vs XSX. Right now PSSR is being compared as roughly at the same level as DLSS 3.7, and the console is not even released. The proof is in the pudding, and sharing resources with the CUs is no problem at all on a console. Besides, again, tensor cores share resources with the shaders!
 
"Yeah but it's not in a separate unit like tensor cores so it's not really custom ML".
No one said this.

You're turning every discussion about technology into something that isn't about discussing technology. What is the point of discussion if your only recourse is to say that all people are trying to do is downplay what Sony has done? Your posts have zero value in actually discussing technology, and you always somehow manage to turn this into some sort of situation where people have to pick sides.

People have the right to discuss things they may think they know, or don't know, about. It's perfectly fine for people to debate. A lot of what is up for debate is, of course, definitions and clarifications.

In the CDNA line of cards, they have Matrix Cores, four Matrix Cores per CU. Those Matrix Cores do some heavy-duty work, and the reason they can do that work is because of how memory is set up for them to do matrix math. RDNA is not set up for that. That's why I don't think 5Pro has matrix cores; otherwise they would have had them as early as RDNA 2 or 3.

And before you say, "oh well, how do you know the PS5 custom silicon isn't that?"
Because AMD even talks about it here; that's why they're moving towards UDNA.

What precisely will UDNA change compared to the current RDNA and CDNA split? Huynh didn't go into a lot of detail, and obviously there's still plenty of groundwork to be laid. But one clear potential pain point has been the lack of dedicated AI acceleration units in RDNA. Nvidia brought tensor cores to the entire RTX line starting in 2018. AMD only has limited AI acceleration in RDNA 3, basically accessing the FP16 units in a more optimized fashion via WMMA instructions, while RDNA 2 depends purely on the GPU shaders for such work.

Our assumption is that, at some point, AMD will bring full stack support for tensor operations to its GPUs with UDNA. CDNA has had such functional units since 2020, with increased throughput and number format support being added with CDNA 2 (2021) and CDNA 3 (2023). Given the preponderance of AI work being done on both data center and client GPUs these days, adding tensor support to client GPUs seems like a critical need.

Still, with what we've heard about AMD RDNA 4, it appears UDNA is at least one more generation away.

So when I read this, I think Shifty is very justified in what he's saying. And that's all there is to it. That's what discussion is about. Stop trying to close things down.
 
Who cares about all that? Let's just compare results, shall we? Because in Ratchet PSSR seems to be at about the same generation as DLSS 3.7! And already outperforming XeSS!

From this DF clip comparing PSSR against all the other solutions (FSR 3.1, XeSS 1.3 and DLSS 3.7), using arguably the most difficult use case for reconstruction techniques: the dreaded grass!

DLSS = PSSR > XeSS >>> FSR3.1

PSSR is right there with DLSS...
In motion we can see why PSSR is superior to XeSS.

[image: PSSR vs DLSS/XeSS/FSR grass comparison]
 
There is ample proof that PSSR is running on compute
There is not. There is speculation by some of you. That's a world away from 'proof'.

You can always ignore those options and just play at default settings.
Again, the mere *existence* of options creates FOMO. That's the whole psychological issue that makes the PS5 Pro potentially worthwhile in the first place. But if you're pushing options on PS5 Pro users as well, then you're not getting rid of the FOMO.
 