PS5 Pro *spawn

What are we all thinking for the layout of the SoC?

The rumoured CU count is 60 active out of 64. That would lend itself to adding an additional row of DCUs on each wing of the butterfly, and 4 more in the centre (which on the base model is occupied by cache, primitive units, etc.).

But then what of the belly of the butterfly, with its primitive units, geometry engine, cache, memory controllers, etc.? Do they all shrink equally well at 5nm, such that the "non-DCU" part of the GPU doesn't have to be an odd shape and can sit rectangularly in the centre?
 
What are we all thinking for the layout of the SoC?

The rumoured CU count is 60 active out of 64. That would lend itself to adding an additional row of DCUs on each wing of the butterfly, and 4 more in the centre (which on the base model is occupied by cache, primitive units, etc.).

But then what of the belly of the butterfly, with its primitive units, geometry engine, cache, memory controllers, etc.? Do they all shrink equally well at 5nm, such that the "non-DCU" part of the GPU doesn't have to be an odd shape and can sit rectangularly in the centre?
It's going to be a totally new layout from AMD, certainly with some new silicon blocks. There are lots of unknowns there, including the process node (some think it could still be 7nm based on some specs), the number of ROPs, and the clocks and how dynamic they'll be.
 
Assumptions about it? We tested DLSS 3 at a 30fps internal frame-rate at launch in many titles and found it not very good at all, except for something like controller-only play in Flight Simulator.

We found a 40fps internal frame-rate to be the point where DLSS 3 starts looking and feeling convincing with m+kb. With FSR 3 it is even worse.

For me, the biggest issue I see with FSR 3 on console is that consoles do not yet employ a Reflex-like system, so devs will have to roll their own automated frame-rate and utilisation limiter or live with the added latency. FSR 3 has a good deal more latency than DLSS 3, which everyone online seems to just categorically ignore for some reason.
So FSR 3 probably won't be good enough to use in PSVR2 games in place of the current 60 -> 120 reprojection?
 
Would be interesting to see DF take a deep dive into how well frame gen(Nvidia and FSR3) actually works at 30fps rather than just making assumptions about it.

That would be a purely subjective test though, as each person would feel differently about how well it works at 30fps.

From my personal experience using DLSS frame generation, I can say that sometimes you can't tell frame generation is even on, and at other times it feels like dog shit.

Alan Wake 2 with path tracing feels perfectly fine with frame generation on at a low input frame rate, while Portal RTX at the same input frame rate as Alan Wake 2 feels like a laggy crapfest.

Some people might have completely different thoughts about it from me.

So any testing DF did wouldn't be a scientific test; it would simply be how they personally feel about the latency and implementation.
 
It's going to be a totally new layout from AMD with certainly some new silicon blocks. Lots of unknowns there including the node process (some think it could still be 7nm based on some specs), number of ROPs, clocks and how dynamic they'll be.
Good point about the new blocks. I'm really looking forward to the die shot.

Do you believe the 6/7nm possibility? I don't really buy it. It's a bit tricky to figure out exactly how much of the power budget is being spent exclusively on the CPU, but if we assume (based on a very swift bit of googling) ~40W for everything non-GPU related, that would render the PS5's GPU ~170W in the most extreme cases.

36 CUs to 60 CUs is a ~1.67x increase. That would take the GPU alone from ~170W to ~283W. There's just no bloody way.
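(If anyone wants to poke at my arithmetic, here's the back-of-the-envelope version as a quick script. The 210W wall figure and the ~40W non-GPU split are my guesses from above, not measured numbers.)

```python
# Back-of-the-envelope GPU power scaling; all inputs are rough guesses.
total_system_w = 210   # assumed PS5 wall draw in the most extreme cases
non_gpu_w = 40         # guessed CPU/memory/everything-else share
gpu_w = total_system_w - non_gpu_w           # ~170W for the GPU
cu_scaling = 60 / 36                         # ~1.67x more CUs on the Pro
print(f"Naive Pro GPU estimate: {gpu_w * cu_scaling:.0f}W")  # ~283W
```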
 
Good point about the new blocks. I'm really looking forward to the die shot.

Do you believe the 6/7nm possibility? I don't really buy it. It's a bit tricky to figure out exactly how much of the power budget is being spent exclusively on the CPU, but if we assume (based on a very swift bit of googling) ~40W for everything non-GPU related, that would render the PS5's GPU ~170W in the most extreme cases.

36 CUs to 60 CUs is a ~1.67x increase. That would take the GPU alone from ~170W to ~283W. There's just no bloody way.

I think you might be slightly overestimating how much of the power measured at the wall is going to the GPU part of the SoC. Going to pull some numbers out of my ass based very loosely on stuff I've read about / observed for PCs:

For the PS5 Slim, assuming 90% PSU efficiency on 210W at the wall, that's 189W for the innards. Take ~10W for the stuff that gets hot on the mobo, ~16W for the memory, and say 30W for the CPU, and you're maybe at about 125W for the GPU component of the SoC. Not all of the GPU will be increasing by ~67% either: the number of memory controllers and probably the amount of L2 seem to be the same. I've no idea how much power they'll use, but my thought is that they won't be inflating like the CU count.
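(Same budget as a quick script so the splits are easy to check; every number below is a guess, as I said. The maths lands at ~133W, which I've rounded down to ~125W to leave some margin.)

```python
# Rough PS5 Slim power budget; every split here is a guess, not measured.
wall_w = 210
psu_efficiency = 0.90
internal_w = wall_w * psu_efficiency   # ~189W available inside the box
mobo_w = 10        # guessed share for hot motherboard components
memory_w = 16      # guessed share for the GDDR6 chips
cpu_w = 30         # guessed CPU share of the SoC
gpu_w = internal_w - mobo_w - memory_w - cpu_w
print(f"GPU share of the SoC: ~{gpu_w:.0f}W")   # ~133W; call it ~125W
```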

RDNA3.5+ will hopefully have some efficiency gains over the PS5 even on the same node. It's also possible that the gap between typical gaming clocks and max boost is larger on the PS5 Pro than on the PS5. A wide GPU on PC can sometimes drop further below max boost clocks than a narrower GPU when loaded across all its compute cores, as those are the most power-hungry part of the chip.

So if, for instance, the PS5 Pro were to spend less time close to 2.18GHz than the PS5 does to 2.23GHz, that could account for a potentially significant reduction in power draw per CU, keeping the total power draw down. It might also help explain why the Pro is ~67% wider at almost the same clock speed but only 45% faster at rasterisation: it could be throttling back further, more often. As we don't know how the clocks behave on either of these machines, it's hard to guess though.
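(To illustrate why average clocks matter so much more than peak clocks for power: dynamic power scales roughly with V²·f, and since voltage tends to track frequency on the same process, per-CU power scales something like f³. The "average clock" figures in this sketch are invented purely to show the shape of the effect.)

```python
# Toy model: per-CU dynamic power ~ f^3 (voltage roughly tracks frequency).
# Both "average clock" figures below are invented for illustration.
def relative_power(avg_clock_ghz: float, max_clock_ghz: float) -> float:
    return (avg_clock_ghz / max_clock_ghz) ** 3

ps5_per_cu = relative_power(2.15, 2.23)   # PS5 holding close to its cap
pro_per_cu = relative_power(1.95, 2.18)   # Pro throttling further below cap
print(f"PS5 per-CU power vs cap: {ps5_per_cu:.2f}")   # ~0.90
print(f"Pro per-CU power vs cap: {pro_per_cu:.2f}")   # ~0.72
```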

I'm not saying that the Pro will be on 6nm, but I think there are too many unknowns to say for sure whether it is or not at the moment. I'm sure whatever node it is on will be the most cost-effective one for Sony though!
 
Who would have designed such a block?
hehe, who knows, but talking of designs and blocks, it wouldn't surprise me if Sony preferred not to make hardware anymore given the current sales numbers, and instead sold exclusives and let others make the hardware, so we could see a PS6 made by Asus, a PS6 made by MSI, etc.
 
I think you might be slightly overestimating how much of the power measured at the wall is going to the GPU part of the SoC. Going to pull some numbers out of my ass based very loosely on stuff I've read about / observed for PCs:

For the PS5 Slim, assuming 90% PSU efficiency on 210W at the wall, that's 189W for the innards. Take ~10W for the stuff that gets hot on the mobo, ~16W for the memory, and say 30W for the CPU, and you're maybe at about 125W for the GPU component of the SoC. Not all of the GPU will be increasing by ~67% either: the number of memory controllers and probably the amount of L2 seem to be the same. I've no idea how much power they'll use, but my thought is that they won't be inflating like the CU count.

RDNA3.5+ will hopefully have some efficiency gains over the PS5 even on the same node. It's also possible that the gap between typical gaming clocks and max boost is larger on the PS5 Pro than on the PS5. A wide GPU on PC can sometimes drop further below max boost clocks than a narrower GPU when loaded across all its compute cores, as those are the most power-hungry part of the chip.

So if, for instance, the PS5 Pro were to spend less time close to 2.18GHz than the PS5 does to 2.23GHz, that could account for a potentially significant reduction in power draw per CU, keeping the total power draw down. It might also help explain why the Pro is ~67% wider at almost the same clock speed but only 45% faster at rasterisation: it could be throttling back further, more often. As we don't know how the clocks behave on either of these machines, it's hard to guess though.

I'm not saying that the Pro will be on 6nm, but I think there are too many unknowns to say for sure whether it is or not at the moment. I'm sure whatever node it is on will be the most cost-effective one for Sony though!
Good points, thanks.

Even just correcting for my overestimation would put the GPU at ~200W. Hmm, yeah, that does start to look a lot more feasible at 6nm, especially compounded by the rumoured odd clock speed.

I'm not quite settled on it being 6nm just yet though. 6nm wafers being cheaper would certainly point that way, but it seems inevitable that both a 5nm PS5 and Pro are coming at some point, so surely it would make the most sense to design the Pro's SoC only once. Having a 5nm Pro precede a 5nm base PS5 would presumably also give them a few months' worth of data on how much of TSMC's 5nm capacity they'd need to book for both lines.

But maybe it's just not that expensive to port the PS5's and Pro's SoCs to different nodes ¯\_(ツ)_/¯
 
Good points, thanks.

Even just correcting for my overestimation would put the GPU at ~200W. Hmm, yeah, that does start to look a lot more feasible at 6nm, especially compounded by the rumoured odd clock speed.

I'm not quite settled on it being 6nm just yet though. 6nm wafers being cheaper would certainly point that way, but it seems inevitable that both a 5nm PS5 and Pro are coming at some point, so surely it would make the most sense to design the Pro's SoC only once. Having a 5nm Pro precede a 5nm base PS5 would presumably also give them a few months' worth of data on how much of TSMC's 5nm capacity they'd need to book for both lines.

But maybe it's just not that expensive to port the PS5's and Pro's SoCs to different nodes ¯\_(ツ)_/¯
Yes indeed. Based on the specs and the small CPU upclock, a 5nm process would be logical. The inevitable 4N shrink would then still be quite cheap because the processes are compatible, and 5nm is cheaper than 4N. I could see the PS5 Pro consuming ~250W, based on how they improved the cooling solution in the ~220W Slim. Basically, at 250W they could even get away with a similarly priced cooling solution to the launch PS5's. Their exotic liquid-metal cooling (and ingenious double-sided fan) are that good for PlayStation!
 
Assumptions about it? We tested DLSS 3 at a 30fps internal frame-rate at launch in many titles and found it not very good at all, except for something like controller-only play in Flight Simulator.

We found a 40fps internal frame-rate to be the point where DLSS 3 starts looking and feeling convincing with m+kb. With FSR 3 it is even worse.

For me, the biggest issue I see with FSR 3 on console is that consoles do not yet employ a Reflex-like system, so devs will have to roll their own automated frame-rate and utilisation limiter or live with the added latency. FSR 3 has a good deal more latency than DLSS 3, which everyone online seems to just categorically ignore for some reason.
I mean a more comprehensive, dedicated video, since it's been out for a while now, and one that includes FSR 3.

I have no doubt there are still plenty of flaws, but surely some testing of this would be interesting and relevant?
 
So, Sony has freed up 1.2GB of RAM on the PS5 Pro compared to the base console. But how exactly are they doing that? OS optimizations? Additional DDR4, like on the PS4 Pro? I don't think a 1.2GB module exists, so what are they doing here?

Also, if they are optimizing the OS, that would apply to the base PS5 too.
 
So, Sony has freed up 1.2GB of RAM on the PS5 Pro compared to the base console. But how exactly are they doing that? OS optimizations? Additional DDR4, like on the PS4 Pro? I don't think a 1.2GB module exists, so what are they doing here?

Also, if they are optimizing the OS, that would apply to the base PS5 too.

I bet they reserved more memory for the OS than they needed at the launch of the PS5, with full knowledge that they were going to release a Pro version that needed more memory.

I think the PS5 was designed with a plan for a Pro from the start.
 
I bet they reserved more memory for the OS than they needed at the launch of the PS5, with full knowledge that they were going to release a Pro version that needed more memory.

I think the PS5 was designed with a plan for a Pro from the start.
1.2GB of memory is a lot. Maybe 200MB from the OS and a gigabyte of additional DDR4 RAM?
 
1.2GB of memory is a lot. Maybe 200MB from the OS and a gigabyte of additional DDR4 RAM?
My guess is they'll replace the current 512MB of DDR4 with 2GB, then allocate 1.2GB of GDDR6 to games and give 1.5GB of DDR4 to slow OS functions, like the memory needed for game recording and app switching. The PS5 Pro could also need more OS memory if they actually want to enable 8K output later.
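(The arithmetic behind that guess, as a sketch: the 1.2GB figure is from the rumours, while the 2GB DDR4 upgrade and the 1.5GB OS split are my assumptions.)

```python
# Guessed memory shuffle, sizes in GB. Only the 1.2GB freed for games is
# from the rumours; the DDR4 upgrade and the OS split are assumptions.
ddr4_old, ddr4_new = 0.5, 2.0
extra_ddr4 = ddr4_new - ddr4_old     # 1.5GB of new slow memory
slow_os_functions = 1.5              # recording, app switching, etc.
gddr6_freed_for_games = 1.2
# Moving ~1.5GB of slow OS functions to DDR4 is what would let the OS hand
# 1.2GB of GDDR6 back to games while keeping a little GDDR6 headroom.
print(extra_ddr4 >= slow_os_functions)   # True: the new DDR4 covers it
print(f"{slow_os_functions - gddr6_freed_for_games:.1f}GB headroom kept")
```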
 
My guess is they'll replace the current 512MB of DDR4 with 2GB, then allocate 1.2GB of GDDR6 to games and give 1.5GB of DDR4 to slow OS functions, like the memory needed for game recording and app switching. The PS5 Pro could also need more OS memory if they actually want to enable 8K output later.
So the PS5 has 512MB of additional memory while the PS4 Pro has 1GB... I didn't know they "downgraded" that aspect. Then upgrading from half a gig to 2GB would make sense.
 
Quite interesting that the CPU will only max out at 3.85GHz. I trust Sony knows what they're doing. This system will be seamless for developers to utilize.
 
For me, the biggest issue I see with FSR 3 on console is that consoles do not yet employ a Reflex-like system, so devs will have to roll their own automated frame-rate and utilisation limiter or live with the added latency. FSR 3 has a good deal more latency than DLSS 3, which everyone online seems to just categorically ignore for some reason.
Reflex is just a library that helps developers reduce the amount of buffering/queuing that a game/driver is doing. There's truly nothing unique about the technology that couldn't be implemented by the game itself. On consoles you have even more control over frame presentation ...
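(A minimal sketch of the general idea, not of how Reflex itself is implemented: delay input sampling and simulation until just before the frame has to be submitted, instead of letting the CPU run ahead and queue frames. The frame-time and simulation-budget numbers are placeholders.)

```python
import time

TARGET_FRAME_TIME = 1 / 60   # assumed 60fps target
SIM_BUDGET = 0.004           # guessed CPU time needed to simulate a frame

def run_frame(sample_input, simulate, render, frame_start):
    # Sleep so that input is sampled as late as possible before submission,
    # trimming input-to-photon latency rather than queuing extra frames.
    sleep_for = TARGET_FRAME_TIME - SIM_BUDGET - (time.perf_counter() - frame_start)
    if sleep_for > 0:
        time.sleep(sleep_for)
    simulate(sample_input())   # sample input late, then simulate
    render()                   # submit the frame right after simulation
```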
 
I mean a more comprehensive, dedicated video, since it's been out for a while now, and one that includes FSR 3.

I have no doubt there are still plenty of flaws, but surely some testing of this would be interesting and relevant?
Maybe later, when frame gen has also come to console. Then it could become a comparison of various methods across different devices.

It'll be a nightmare to test, and to make the report.

But it'll be an amazing read / video.
 