Sony PlayStation 5 Pro

It's been on PS4 since 2016 in, AFAIK, almost all 120Hz PSVR games, going from 60fps to 120fps.
That was just moving the existing frame's FOV; the contents weren't interpolated between frames.
It's not completely different, is it?
Yes, completely different. ;)
It's still frame generation.
No frame is generated. The current 60fps frame is just moved a bit within that half-frame interval. You end up with 120fps FOV motion but 60fps animation.
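Just to illustrate the distinction (a rough, hypothetical sketch, not Sony's actual implementation): reprojection re-samples the *same* rendered frame with a newer head pose, so animation stays at 60fps while view motion updates at 120fps, whereas interpolation blends content from two rendered frames.

```python
# Rough illustrative sketch only; names and the 2D-shift simplification are mine.
import numpy as np

def reproject(frame: np.ndarray, yaw_delta_px: int) -> np.ndarray:
    """Shift the last rendered frame to account for head rotation only.
    No new scene content is produced (simplified here to a 2D shift)."""
    return np.roll(frame, shift=yaw_delta_px, axis=1)

def interpolate(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Frame interpolation blends *content* from two rendered frames,
    which is (very loosely) what DLSS3-style frame generation does."""
    return (prev.astype(np.float32) + curr.astype(np.float32)) / 2

# 120Hz output from a 60fps render loop:
#   display(frame_n)                    # real frame
#   display(reproject(frame_n, dyaw))   # reprojected half-frame: new FOV, old animation
```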
 
No one can say what the PS5's specs are, since Sony was not clear about them, but:

Let's not confuse Rosario Leonardi's statements about PS5 having no ML hardware in it with it not having int4 or int8 support. Rosario was talking about dedicated hardware, not GPU specifications.

This becomes clear since, even if PS5 is lacking int4 and int8, it can still use int16 for ML... Basically, ML is always available in some form, so it becomes clear Rosario was talking about dedicated hardware.

Then if we look at the consoles, both PS5's and Xbox Series X's GPUs are based on the RDNA 2 architecture. And we know that RDNA 2 (and even RDNA 1.1) supports int4 and int8 operations.

While this was advertised for Series X, it was never mentioned for PS5. But does that mean it is not present? Is there a reason to assume that Sony specifically removed the ML feature? If so, why would it?

You can learn more about RDNA and ML in this AMD whitepaper.

What is clear is that PS5 fully supports machine learning at a hardware level, as was shown when Insomniac patched Spider-Man: Miles Morales on PS5 with physically-based muscle and costume deformations in real time.

Yes, some had doubts about whether the inference was done on the PS5 hardware. But this was later clarified by Josh DiCarlo (Lead Character Technical Director at Insomniac), who confirmed the tech is ML-based and that all the ML inference is being done on the PS5 hardware in real time.

So, we seem to have more reasons to believe int4 and int8 support is there than to believe it is not.

If anyone can show more tangible evidence than this that it does not have int4 and int8 support, then please share it.
 
It's not completely different, is it? It's still frame generation. The technique is different from VR to non-VR, as VR gets more predictive data from the headset.

What Shifty said.

But I believe that on console, (non-VR) frame gen could get much better than on PC, with even lower latencies.

What's your basis for believing this? DLSS3 FG is the current state of the art on PC, using an Nvidia-trained AI model that runs on the tensor cores and optical flow processor of Ada specifically.

What makes you think PS5 Pro (assuming you're talking about the Pro) will have comparable hardware capabilities or a comparable (or for that matter any) AI based frame gen model?

I can certainly see the Pro making use of FSR3 FG, but that, while generally considered good enough, is a clear step down from DLSS3 in several areas, e.g. handling of rapid motion/sudden scene changes, minimum viable frame rate, and how it works when not vsynced (although that may have been fixed now).
 
So, we seem to have more reasons to believe int4 and int8 support is there than to believe it is not.

If anyone can show more tangible evidence than this that it does not have int4 and int8 support, then please share it.

If PS5 supported int4 and int8, they'd have said so. In the pre-launch hype wars, AI was a big thing, and Sony said nothing. To this day they've said nothing, and neither has any developer.

PS5 is an older branch of the RDNA family than Xbox and is missing several key features of RDNA 2. MS stated that they specifically asked for this to be added for Xbox, so it's not like it was a core technology that Sony would have removed.

Int4 and int8 are just data formats; you don't need them to do "ML". Spider-Man having muscle deformation says nothing definitive about the data formats that PS5 supports.
 
No one can say what the PS5's specs are, since Sony was not clear about them, but:

Let's not confuse Rosario Leonardi's statements about PS5 having no ML hardware in it with it not having int4 or int8 support. Rosario was talking about dedicated hardware, not GPU specifications.

This becomes clear since, even if PS5 is lacking int4 and int8, it can still use int16 for ML... Basically, ML is always available in some form, so it becomes clear Rosario was talking about dedicated hardware.

Then if we look at the consoles, both PS5's and Xbox Series X's GPUs are based on the RDNA 2 architecture. And we know that RDNA 2 (and even RDNA 1.1) supports int4 and int8 operations.

While this was advertised for Series X, it was never mentioned for PS5. But does that mean it is not present? Is there a reason to assume that Sony specifically removed the ML feature? If so, why would it?

You can learn more about RDNA and ML in this AMD whitepaper.

What is clear is that PS5 fully supports machine learning at a hardware level, as was shown when Insomniac patched Spider-Man: Miles Morales on PS5 with physically-based muscle and costume deformations in real time.

Yes, some had doubts about whether the inference was done on the PS5 hardware. But this was later clarified by Josh DiCarlo (Lead Character Technical Director at Insomniac), who confirmed the tech is ML-based and that all the ML inference is being done on the PS5 hardware in real time.

So, we seem to have more reasons to believe int4 and int8 support is there than to believe it is not.

If anyone can show more tangible evidence than this that it does not have int4 and int8 support, then please share it.
Yes, they also do real-time ML inference in God of War Ragnarok to selectively upscale textures to 4K on PS5. They use FP16 for this, and their implementation is very customized for the PS5 hardware.
 
If PS5 supported int4 and int8, they'd have said so. In the pre-launch hype wars, AI was a big thing, and Sony said nothing. To this day they've said nothing, and neither has any developer.

PS5 is an older branch of the RDNA family than Xbox and is missing several key features of RDNA 2. MS stated that they specifically asked for this to be added for Xbox, so it's not like it was a core technology that Sony would have removed.

Int4 and int8 are just data formats; you don't need them to do "ML". Spider-Man having muscle deformation says nothing definitive about the data formats that PS5 supports.
Would they say it?

Why?

Xbox Series X has 12.15 TFLOPS at 32 bits. That means 24.3 TFLOPS at 16 bits. It also means 48.6 TOPS at 8 bits (they announce 49) and 97.2 at 4 bits (they announce 97).

The question is: why announce something you do not have? After the shaders use most of your TFLOPS, you end up with little power free. So the TOPS number is just marketing; in reality, it means very little unless you dedicate your full GPU to it.

Sony now mentions this for the PS5 Pro because 300 TOPS at 8 bits is not something you can get out of the GPU shaders.

33.5 TFLOPS at 32 bits means 67 TFLOPS at 16 bits, 134 TOPS at 8 bits and 268 at 4 bits. So 300 TOPS at 8 bits must come from a dedicated unit: something you can always count on for PSSR's AI, not dependent on how many TFLOPS your shaders are using. And as such, it makes sense to talk about it! But not in the PS5's case!
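For anyone who wants to check the arithmetic, this is the scaling I'm assuming: FP16 at 2x the FP32 rate, int8 at 4x and int4 at 8x (RDNA-style packed math). That rate assumption is mine, not a confirmed PS5 Pro spec.

```python
# Sanity check of the scaling assumed above: FP16 = 2x FP32, int8 = 4x, int4 = 8x
# (RDNA-style rate doubling; an assumption, not a confirmed PS5 Pro spec).
def rates_from_fp32(tflops_fp32: float) -> dict:
    return {
        "FP32 TFLOPS": tflops_fp32,
        "FP16 TFLOPS": tflops_fp32 * 2,
        "INT8 TOPS":   tflops_fp32 * 4,
        "INT4 TOPS":   tflops_fp32 * 8,
    }

print(rates_from_fp32(12.15))  # Series X: ~24.3 / ~48.6 / ~97.2 (announced 49 / 97)
print(rates_from_fp32(33.5))   # PS5 Pro leak: ~67 / ~134 / ~268 -- well short of 300 TOPS int8
```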
 
Yes, they also do real-time ML inference in God of War Ragnarok to selectively upscale textures to 4K on PS5. They use FP16 for this, and their implementation is very customized for the PS5 hardware.

Not sure about that!
Having consulted the documentation here, they claim this about the use of FP16:

Using halves does bring some precision concerns, But in our case, we didn’t observe significant changes in the results

Pages 41 to 44.

So, precision was a concern... and 16 bits just happened to be enough! But 32 bits could have been needed.

This means 8 bits or fewer was not even an option for this!
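A tiny illustration of why halves raise precision concerns in the first place (my own example, not from the cited presentation):

```python
# FP16 has roughly 3 decimal digits of precision and a max value of 65504,
# so small contributions or large activations can lose information FP32 keeps.
import numpy as np

x32 = np.float32(1.0) + np.float32(1e-4)   # representable in FP32
x16 = np.float16(1.0) + np.float16(1e-4)   # rounds back to 1.0 in FP16

print(x32)                        # 1.0001
print(x16)                        # 1.0 -> the 1e-4 contribution is lost
print(np.finfo(np.float16).max)   # 65504.0, easy to overflow with un-normalised data
```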
 
Would they say it?

Why?

Xbox Series X has 12.15 TFLOPS at 32 bits. That means 24.3 TFLOPS at 16 bits. It also means 48.6 TOPS at 8 bits (they announce 49) and 97.2 at 4 bits (they announce 97).

The question is: why announce something you do not have? After the shaders use most of your TFLOPS, you end up with little power free. So the TOPS number is just marketing; in reality, it means very little unless you dedicate your full GPU to it.

Sony now mentions this for the PS5 Pro because 300 TOPS at 8 bits is not something you can get out of the GPU shaders.

33.5 TFLOPS at 32 bits means 67 TFLOPS at 16 bits, 134 TOPS at 8 bits and 268 at 4 bits. So 300 TOPS at 8 bits must come from a dedicated unit: something you can always count on for PSSR's AI, not dependent on how many TFLOPS your shaders are using. And as such, it makes sense to talk about it! But not in the PS5's case!

You'd announce it because you had it, your competitor has it, and a lot was being made of it in the gaming press. Same reason that they announced their TF and talked about their decompression block, etc. The Road to PS5 would definitely have dropped it in there.

At the point Xbox was specced, it was a customisation, and so it would have been for Sony too.

Current Xbox consoles have more than enough power to use software XeSS. The fact MS has done nothing with this speaks about MS rather than whether there's enough performance to do something with it.

To be fair, Sony also said very little about how RT worked on PS5.

True, but they did make it clear that they have it!
 
You'd announce it because you had it, your competitor has it, and a lot was being made of it in the gaming press. Same reason that they announced their TF and talked about their decompression block, etc. The Road to PS5 would definitely have dropped it in there.

At the point Xbox was specced, it was a customisation, and so it would have been for Sony too.

Current Xbox consoles have more than enough power to use software XeSS. The fact MS has done nothing with this speaks about MS rather than whether there's enough performance to do something with it.

True, but they did make it clear that they have it!
At the time, several sources claimed Sony had a lot more presentations about the PS5 on schedule, but since people misunderstood everything in "The Road to PS5", they stopped there, "letting the games do the talking".
No one understood the dynamic clock speeds, and PS5 was presented as an 8 TFLOPS machine with overclock capabilities. No one understood the SSD and how it could change the graphics in a game. No one understood the power handling of the console, etc., etc.
Besides, Cerny talked ONLY about the PS5, what made it unique, and the design choices, not about RDNA capabilities.
 
At the time, several sources claimed Sony had a lot more presentations about the PS5 on schedule, but since people misunderstood everything in "The Road to PS5", they stopped there, "letting the games do the talking".
No one understood the dynamic clock speeds, and PS5 was presented as an 8 TFLOPS machine with overclock capabilities. No one understood the SSD and how it could change the graphics in a game. No one understood the power handling of the console, etc., etc.
Besides, Cerny talked ONLY about the PS5, what made it unique, and the design choices, not about RDNA capabilities.
I don't think the games have done much talking TBH. They have yet to deliver on any of that hype.
 
It's a completely different scenario and not really comparable. Oculus uses head movement tracking data to predict the next frame and, I believe, internally renders a wider field of view than the current viewport to provide data for the "generated frame". So it's not a fully generated frame; it's more like they just calculate the player's predicted viewpoint in an already rendered image. At least that's how I understood it.
The usage is different, but it's still fundamentally using temporal information from a frame (or frames) before to create an 'artificial frame' in between real frames.
 
It's a completely different scenario and not really comparable. Oculus uses head movement tracking data to predict the next frame and, I believe, internally renders a wider field of view than the current viewport to provide data for the "generated frame". So it's not a fully generated frame; it's more like they just calculate the player's predicted viewpoint in an already rendered image. At least that's how I understood it.
Oculus's tech is a bit more advanced than PSVR's, but the results are horrible! It uses motion vectors to distort 2D frame elements.


It's since been updated with 3D motion vectors, but it's still nothing like Nvidia's work, and it shouldn't be claimed that Oculus beat Nvidia to the punch.
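For clarity, this is roughly what "distorting 2D frame elements with motion vectors" means, as a heavily simplified, hypothetical sketch (the real ASW works on a coarse grid of compositor-supplied vectors, not a per-pixel gather like this):

```python
# Loose, assumed sketch of motion-vector warping (ASW-style extrapolation).
import numpy as np

def extrapolate(last_frame: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Warp the last real frame half a frame forward along per-pixel 2D motion.

    last_frame: (H, W, 3) image
    motion:     (H, W, 2) motion vectors in pixels per frame
    """
    h, w = last_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample where each pixel "came from" half a frame earlier (nearest neighbour).
    src_x = np.clip((xs - 0.5 * motion[..., 0]).round().astype(int), 0, w - 1)
    src_y = np.clip((ys - 0.5 * motion[..., 1]).round().astype(int), 0, h - 1)
    return last_frame[src_y, src_x]

# Disocclusions (regions the vectors can't cover) are where the artefacts come from.
```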
 
...
Sony now mentions this for the PS5 Pro because 300 TOPS at 8 bits is not something you can get out of the GPU shaders.

33.5 TFLOPS at 32 bits means 67 TFLOPS at 16 bits, 134 TOPS at 8 bits and 268 at 4 bits. So 300 TOPS at 8 bits must come from a dedicated unit: something you can always count on for PSSR's AI, not dependent on how many TFLOPS your shaders are using. And as such, it makes sense to talk about it! But not in the PS5's case!
Not necessarily. They could overclock the GPU (to around 2450MHz) for TOPS performance (I mean to reach ~150; not sure how they double that though, maybe by doubling units?). I'd guess the GPU has highly variable clocks depending on the instructions used and the types of instructions. Sony has experience in the matter now; it's probably transparent (easy) from a developer's perspective, like PS5's dynamic clocks were, with very low latencies when clocks change, etc. Very likely another super efficient and novel system from Cerny and Co.
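Back-of-envelope version of that clock idea (the 60 CU count and the int8 rate per lane are assumptions based on the leaks and typical RDNA behaviour, not confirmed PS5 Pro specs):

```python
# Assumed figures: 60 CUs, 64 lanes per CU, and 16 int8 ops per lane per clock
# (2 for multiply-add * 2 for dual-issue * 4 for packed int8), matching the
# 4x-FP32-rate assumption used earlier. Not confirmed specs.
def int8_tops(cu_count: int, clock_ghz: float, ops_per_lane_per_clock: int = 16) -> float:
    lanes_per_cu = 64
    return cu_count * lanes_per_cu * ops_per_lane_per_clock * clock_ghz / 1000.0

print(int8_tops(60, 2.18))  # ~134 TOPS at the rumoured ~2.18GHz clock
print(int8_tops(60, 2.45))  # ~150 TOPS at 2.45GHz -- the figure mentioned above
```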

The usage is different, but it's still fundamentally using temporal information from a frame (or frames) before to create an 'artificial frame' in between real frames.
Yep. PSVR is still technically "frame generation": using temporal information from frames and vector information (from the headset) since 2016 on base PS4, using in-house Sony tech with pretty decent results and very low latencies.
 
Not necessarily. They could overclock the GPU (to around 2450MHz) for TOPS performance (I mean to reach ~150; not sure how they double that though, maybe by doubling units?). I'd guess the GPU has highly variable clocks depending on the instructions used and the types of instructions. Sony has experience in the matter now; it's probably transparent (easy) from a developer's perspective, like PS5's dynamic clocks were, with very low latencies when clocks change, etc. Very likely another super efficient and novel system from Cerny and Co.

Low latencies with dynamic clocks? Where are you getting this information from?

No one outside of developers knows anything about how PS5's boost actually works or what the clock ranges are. So what is your source?
 
Not necessarily. They could overclock the GPU (to around 2450MHz) for TOPS performance (I mean to reach ~150; not sure how they double that though, maybe by doubling units?). I'd guess the GPU has highly variable clocks depending on the instructions used and the types of instructions. Sony has experience in the matter now; it's probably transparent (easy) from a developer's perspective, like PS5's dynamic clocks were, with very low latencies when clocks change, etc. Very likely another super efficient and novel system from Cerny and Co.

This is very, very unlikely. PS5 Pro's clock seems to be a bit lower than PS5's, so if there is enough thermal and power margin for 2450 MHz, why keep it so low?
Besides, how can you double 8-bit performance? That would mean extra hardware, and as such, dedicated hardware for this.
And more: if you could reach 150 TOPS by overclocking to 2450 MHz, why not announce 37.6 TFLOPS, since the overclock would affect the entire GPU?

The 300 TOPS at 8 bits is a strange number, and it seems like some sort of dedicated tensor-core-like unit will be present. And 300 TOPS beats the RTX 3090's tensor core capability, which stands at 285 TOPS.
 
Not necessarily. They could overclock the GPU (to around 2450MHz) for TOPS performance (I mean to reach ~150; not sure how they double that though, maybe by doubling units?). I'd guess the GPU has highly variable clocks depending on the instructions used and the types of instructions. Sony has experience in the matter now; it's probably transparent (easy) from a developer's perspective, like PS5's dynamic clocks were, with very low latencies when clocks change, etc. Very likely another super efficient and novel system from Cerny and Co.


Yep. PSVR is still technically "frame generation": using temporal information from frames and vector information (from the headset) since 2016 on base PS4, using in-house Sony tech with pretty decent results and very low latencies.
They are putting 2 Cells inside, with 5 integer units per SPU instead of 4 floating point ones + 1 integer one. ;)
Well, seriously, they could have modified several CUs in the way they did for the DMA-driven Tempest engine (which is 35% faster in PS5 Pro, so it must be clocked higher than the main GPU). DMA makes sense for ML, right?
 
This is very, very unlikely. PS5 Pro's clock seems to be a bit lower than PS5's

Weren't the leaked specs altered, specifically the CPU clocks, to protect sources? I believe DF doesn't even know the "actual clocks" other than the leaks shared with them.
 
This is very, very unlikely. PS5 Pro's clock seems to be a bit lower than PS5's, so if there is enough thermal and power margin for 2450 MHz, why keep it so low?
Besides, how can you double 8-bit performance? That would mean extra hardware, and as such, dedicated hardware for this.
And more: if you could reach 150 TOPS by overclocking to 2450 MHz, why not announce 37.6 TFLOPS, since the overclock would affect the entire GPU?

The 300 TOPS at 8 bits is a strange number, and it seems like some sort of dedicated tensor-core-like unit will be present. And 300 TOPS beats the RTX 3090's tensor core capability, which stands at 285 TOPS.
I wouldn't be surprised if the GPU can clock to 2.45GHz in the same way that the base model's can clock to 2.23GHz under most circumstances.

I expect the lower-than-2.23GHz figure relates solely to dual-issue SIMD use.
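That would also fit the arithmetic: if the leaked 33.5 TFLOPS counts dual-issue FP32 (and assuming the rumoured 60 CUs, which is not a confirmed spec), the implied clock comes out just under 2.23GHz:

```python
# Solve for the clock implied by a TFLOPS figure, given assumed CU count and
# per-lane FLOPS per clock. The 60 CU figure is an assumption from leaks.
def implied_clock_ghz(tflops: float, cu_count: int, flops_per_lane_per_clock: int) -> float:
    lanes_per_cu = 64
    return tflops * 1000.0 / (cu_count * lanes_per_cu * flops_per_lane_per_clock)

print(implied_clock_ghz(33.5, 60, 4))  # dual-issue counted (2 FMA * 2 issue): ~2.18GHz
print(implied_clock_ghz(33.5, 60, 2))  # single-issue counting would need ~4.36GHz
```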
 
The 300 TOPS at 8 bits is a strange number, and it seems like some sort of dedicated tensor-core-like unit will be present.

I agree; I think there's a dedicated block or blocks in there that the 300 TOPS number is partly or (more likely) wholly derived from. The strongest clues, IMO, are that the leaked documentation refers to:
  • The "Machine Learning Capabilities of GPU" (not the library)
  • That it is "8-bit integer focused"
  • That it is "Accessed only through libraries"
The instructions on the Xbox consoles and other RDNA 2+ devices aren't accessed only through a library, and they aren't specifically 8-bit integer focused. This appears to be something else.

There is no reason for "300 TOPS" and the clock speed to be at odds, as I don't think the number is derived from stuff done only (or at all) on the CUs. Hell, PS5 Pro might still not support int8 and int4 on the CUs for all we know.

Select the colour / depth / motion buffers you're using, tell the upscaler what you want from it and where you want the output to be put, using the libraries, and there you go.

You'll note that now Sony have "TOPS" - and lots of them - they're repeated across high-level slides in the developer docs and are out there in public even before the console is! (Even though developers seemingly can't touch them directly, because they're in a black box behind a library, running a proprietary model on unspecified, custom hardware.) Just a thought!

And 300 TOPS beats the RTX 3090's tensor core capability, which stands at 285 TOPS.

The tensor core units are beasts, designed also for training AI and for running all kinds of models. They support a large range of types and precisions (which only seems to grow), and they have to be very flexible and good with all of them.

Sony's upscaling unit is "8-bit integer focused" and almost certainly much, much more limited than Nvidia's tensor stuff, but less costly in various ways. Probably more like an NPU on mobile SoCs, but a lot faster.
 