PS5 Pro *spawn

What would they do with that hardware besides AI things? I'm sure that with time and/or future PlayStation hardware they will add things like ray reconstruction, frame gen and other features. But right now they are simply not ready.

Also, we have games like Alan Wake 2 that are upscaling to 4K from the same base resolution as the PS5 while also adding the quality mode's settings. If they just used the additional CUs to upscale, they wouldn't have the time for the higher settings.

PS5 has something like 50 TOPS (I can't find an exact number), and the Pro has 300. They added something, whatever it is.
17.55 TF FP32 is the PS5 Pro's base rate
×2 for dual issue = 35 TF
×2 dropping to FP16 = 70 TFLOPS
×2 dropping to INT8 = 140 TOPS
×2 allowing for ML sparsity = 280~300 TOPS

It really comes down to that base number, which is between 17 TF and 18 TF.
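
For what it's worth, the chain of doublings checks out arithmetically; here's a quick sketch (the base rate and each factor of 2 are the rumoured figures above, not confirmed specs):

```python
# Back-of-envelope check of the quoted ~300 TOPS figure. The base rate and
# every factor of 2 are assumptions from the post above, not confirmed specs.
base_fp32_tflops = 17.55       # rumoured PS5 Pro FP32 shader throughput
factors = {
    "dual issue": 2,           # RDNA 3-style dual-issue FP32
    "FP16":       2,           # half precision at twice the FP32 rate
    "INT8":       2,           # 8-bit at twice the FP16 rate
    "sparsity":   2,           # structured-sparsity doubling
}

tops = base_fp32_tflops
for name, factor in factors.items():
    tops *= factor             # 35.1 -> 70.2 -> 140.4 -> 280.8

print(f"{tops:.1f} TOPS")      # 280.8 TOPS, i.e. the quoted ~300 class
```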
 
A bit too many responses for the time I have today 😅

We should define "dedicated hardware" in this discussion, because even tensor cores are part of the compute units.

Also, Kepler has talked about how the PS5 Pro takes some aspects of RDNA 4, like the matrix acceleration and sparsity. By my definition, that is dedicated hardware.
 
That's exactly what tensor cores do with DLSS, so I'm not sure why it'd be so strange here.
The Tensor cores are there predominantly for non-gaming tasks such as machine learning. They are fitted as standard, and you're paying a premium for them, so they may as well be used for upscaling, even if only for a fraction of a frame. What's the argument for their inclusion on a console?
 
The Tensor cores are there predominantly for non-gaming tasks such as machine learning. They are fitted as standard, and you're paying a premium for them, so they may as well be used for upscaling, even if only for a fraction of a frame. What's the argument for their inclusion on a console?
You aren't paying a premium for tensor cores being 'fitted'. Of course they have a cost in die size, but they're really not that big, and they wouldn't need to be that big for a more specialized purpose like a console's, purely for reconstruction. Given the performance it effectively adds for that overhead, it's an easily justifiable inclusion. How much they're active per frame really isn't relevant.
 
You aren't paying a premium for tensor cores being 'fitted'. Of course they have a cost in die size, but they're really not that big, and they wouldn't need to be that big for a more specialized purpose like a console's, purely for reconstruction.
ML hardware is just ML hardware. You can't specialise it for upscaling. A dedicated HW upscaler could maybe be smaller but that's clearly not what we've got as it's never been described as such.
Given the performance it effectively adds for that overhead, it's an easily justifiable inclusion. How much they're active per frame really isn't relevant.
If you want them to work in 2 ms, you'll need a certain size, which then does nothing when not upscaling. If you want the optimal HW choice, you want just enough ML hardware to process the frame in 15 ms and run a frame behind, which isn't what we've got.
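
To put rough numbers on that trade-off (the per-pass cost below is an assumed figure for illustration, not a measurement of any real upscaler):

```python
# Sketch of the sizing trade-off: the same upscaling pass finished in 2 ms
# needs far more ML throughput than one allowed to run a frame (~16.7 ms at
# 60 fps) behind. The per-pass cost is an assumption, not a measured value.
pass_teraops = 0.6                          # assumed cost of one upscaling pass

tops_for_2ms = pass_teraops / (2 / 1000)    # unit sized to finish in 2 ms
tops_for_15ms = pass_teraops / (15 / 1000)  # unit sized to run a frame behind

print(tops_for_2ms, tops_for_15ms)          # 300.0 vs 40.0
print(tops_for_2ms / tops_for_15ms)         # 7.5x the throughput (and area),
                                            # sitting idle for most of the frame
```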

The Tensor cores are there because Nvidia wanted ML hardware in their GPUs for AI work, nothing to do with gaming; they then found a use for it for gamers. How much time do those Tensor cores spend on DLSS? Maybe a couple of ms, after which, for the rest of the frame, they're dead silicon. They could of course be used by devs, but only really in Nvidia-exclusive content. Hence we get a situation where the Tensor cores are used briefly to upscale and then do nothing for the rest of the frame. Dead silicon is not really ideal for consoles, which need more efficient HW. Moving the upscaling either to a dedicated upscaler or onto the existing compute achieves that.
 
How much time do those Tensor cores spend on DLSS? Maybe a couple of ms, after which, for the rest of the frame, they're dead silicon
They're not practically dead silicon, though; they're used concurrently with the shader cores all the time, and they do a lot more than upscaling now: frame generation, denoising in several ray-traced and path-traced titles, and HDR conversion post-processing in most titles. They are almost as active as the shader cores now.
 
You aren't paying a premium for tensor cores being 'fitted'. Of course they have a cost in die size, but they're really not that big, and they wouldn't need to be that big for a more specialized purpose like a console's, purely for reconstruction. Given the performance it effectively adds for that overhead, it's an easily justifiable inclusion. How much they're active per frame really isn't relevant.
Tensor silicon is inherently very different from SIMD units. Yes, they are housed in the SM, but how they access cache is dramatically different, and what can be accomplished in a single cycle on tensor cores would take 20+ cycles on a CU. Tensor cores sit idle waiting to be fed data; they spend only a cycle doing all the calculations they need to, then have to write out and wait for the next batch of data to come in.
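
A rough sketch of that ops-per-instruction gap (the tile shape and SIMD width are generic assumed values, not the specs of any particular GPU):

```python
# Why one matrix-unit instruction replaces many SIMD cycles: ops per issue.
# Tile shape and SIMD width are generic assumptions, not real GPU specs.
M = N = K = 16                     # one matrix tile, e.g. a 16x16x16 MMA
fmas_per_tile = M * N * K          # 4096 multiply-accumulates in that tile

simd_fmas_per_cycle = 32           # one FMA per lane on an assumed SIMD32 unit
simd_cycles = fmas_per_tile / simd_fmas_per_cycle

print(simd_cycles)                 # 128.0 cycles of pure ALU work on SIMD

# The matrix unit retires the whole tile per instruction, so its real limit
# is being fed: streaming the 16x16 A and B operands in, writing C back out.
```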
 
They're not practically dead silicon, though; they're used concurrently with the shader cores all the time, and they do a lot more than upscaling now: frame generation, denoising in several ray-traced and path-traced titles, and HDR conversion post-processing in most titles. They are almost as active as the shader cores now.

And thank God for RTX HDR, so I don't have to suffer one shit implementation after another.
 
ML hardware is just ML hardware. You can't specialise it for upscaling. A dedicated HW upscaler could maybe be smaller but that's clearly not what we've got as it's never been described as such.

If you want them to work in 2 ms, you'll need a certain size, which then does nothing when not upscaling. If you want the optimal HW choice, you want just enough ML hardware to process the frame in 15 ms and run a frame behind, which isn't what we've got.

The Tensor cores are there because Nvidia wanted ML hardware in their GPUs for AI work, nothing to do with gaming; they then found a use for it for gamers. How much time do those Tensor cores spend on DLSS? Maybe a couple of ms, after which, for the rest of the frame, they're dead silicon. They could of course be used by devs, but only really in Nvidia-exclusive content. Hence we get a situation where the Tensor cores are used briefly to upscale and then do nothing for the rest of the frame. Dead silicon is not really ideal for consoles, which need more efficient HW. Moving the upscaling either to a dedicated upscaler or onto the existing compute achieves that.
You could absolutely specialize hardware to focus on accelerating a specific type of instruction above all. 'ML hardware' is not some strict, fixed thing at all. Different ways to skin a cat.

I really just don't understand your preoccupation with the idea that the ML hardware needs to be active the whole time to be justifiable. Using a small bit of extra die space for what is essentially a large performance boost is easily justified. Shifting it to compute instead cuts into the main rendering budget, and because it'll be slower, you're *really* eating into that budget. Making up for it would basically require a fair chunk larger GPU, which would be much more costly in terms of die space.

And tensor cores were absolutely included in the GAMING line of GPUs in order for games to make use of them. By your reasoning, GeForce GPUs should strip the tensor cores out and just do DLSS via general compute, because they aren't being utilized enough. But we know that's absurd. They are well worth their inclusion for DLSS alone.
 
I'm sorry, but I'm looking at that and just thinking... console gaming is literally devolving (or evolving, depending on your viewpoint) into PC gaming right before our eyes.

What a bunch of BS. We essentially have a low-tier console SKU (Series S), mid-tier SKUs (PS5, Series X) and a high-tier SKU (PS5 Pro), and between all of them you have games with limitations depending on which device you play them on: modes which work on some but not others, support for some graphics effects on some but not others, and so on. They're literally shoving all this shit down people's throats and it's going to backfire on them... because eventually it will become so messy, and people will get so accustomed to tailoring graphics and effects to how they want, to get the performance they want from the device they bought, that they might as well have just bought a PC.

They're slowly but surely pushing people who used to swear up and down that the benefit of consoles was pick-up-and-play, with no BS about fiddling with settings, to become accustomed to exactly that... and when they do, the console loses its distinction.
 
I see it more as a slow merge than a migration to PC, as PC has also taken things from consoles over time: the Steam launcher becoming more of a "console interface" for easier use, more PC gamers using gamepads instead of KB/mouse, and more and more PC gamers connecting their PCs to a big TV, which was far less common before.
So yeah, consoles and PC are converging.
 