Sony PlayStation 5 Pro

This performance number doesn't even make any sense. Ray tracing needs shading, too, so a 4x figure would maybe only cover tracing rays and doing triangle intersection tests. And without any kind of ray sorting, efficiency will go down on any architecture. Even a 4090 isn't more than 4x faster than a 6900 XT in Cyberpunk/Alan Wake 2 with path tracing, and Lovelace has more than 4x the "ray tracing" performance.
 
Expecting anything like the PT of Cyberpunk out of this console is only going to lead to disappointment.
Sure, but isn't there a middle ground between one light RT effect (like shadows) and path tracing? Two or three RT effects could be that middle ground, like shadows (already in GTA V) and RTGI.
 
Just for curiosity's sake, here's a good article on the modifications to the PS5's Zen 2 core:


Lower local bandwidth, but likely nothing that affects gaming performance much at all. But at the same time, the space savings are pretty paltry. Don't blame Sony for saving a few extra mm² if AMD could offer it.
 
Guys, let's keep the pure GPU benches out of the 5 Pro talk.

Not seeing the value here. Consoles have always been about compromises; they will compromise to get the performance they need to ship while still looking pretty decent. Not sure how benches of top-of-the-line GPUs are going to be reflective of the performance to expect on the 5 Pro.
 
Sure, but isn't there a middle ground between one light RT effect (like shadows) and path tracing? Two or three RT effects could be that middle ground, like shadows (already in GTA V) and RTGI.
Someone mentioned full RT on the prior page. I think even adding a secondary RT effect will be in the minority of games, though.
 
Not sure how benches of top-of-the-line GPUs are going to be reflective of the performance to expect on the 5 Pro.

Because they show results from GPUs roughly in line with the base PS5, so by multiplying that number by 2-3x you can roughly see where the PS5 Pro will be.

I think it's safe to say we can rule out the PS5 Pro getting anywhere close to being able to do path tracing in CP2077.
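As a rough illustration of that kind of estimate (the baseline framerate below is a made-up placeholder, not a measured benchmark; only the 2-3x multiplier comes from this discussion):

```python
# Ballpark PS5 Pro estimate: take a result from a GPU roughly in line with the
# base PS5, then scale by the claimed RT uplift. The baseline fps value is a
# hypothetical placeholder, not real benchmark data.

base_ps5_class_fps = 10.0        # hypothetical CP2077 PT result on a PS5-class GPU
claimed_rt_uplift = (2.0, 3.0)   # the "2-3x" figure being discussed

low = base_ps5_class_fps * claimed_rt_uplift[0]
high = base_ps5_class_fps * claimed_rt_uplift[1]
# Whatever the baseline is, the Pro estimate is only 2-3x that number.
print(f"Estimated PS5 Pro range: {low:.0f}-{high:.0f} fps")
```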
 
Because they show results from GPUs roughly in line with the base PS5, so by multiplying that number by 2-3x you can roughly see where the PS5 Pro will be.

I think it's safe to say we can rule out the PS5 Pro getting anywhere close to being able to do path tracing in CP2077.
I think it’s a given and should never have been considered as a discussion topic here.

I don't even think it's in the cards for next-generation consoles. There's just no reasonable way to see this happen in a $399/499 box sitting under a TV anytime soon.
 
I think it’s a given and should never have been considered as a discussion topic here.

I don't even think it's in the cards for next-generation consoles. There's just no reasonable way to see this happen in a $399/499 box sitting under a TV anytime soon.
I find it hard to believe that none of Sony, AMD, or Microsoft will invest in ray tracing acceleration between now and 2028-2029.
The architecture that gets bankrolled by the console makers (RDNA 6?) will be competitive with Nvidia at the time of release (maybe a generation behind), just like RDNA 2 was against Ampere/Turing.
This gen will feel like a beta compared to the next, just like PS3 before PS4.
 
I think it’s a given and should never have been considered as a discussion topic here.

I don't even think it's in the cards for next-generation consoles. There's just no reasonable way to see this happen in a $399/499 box sitting under a TV anytime soon.
I think it would be very unlikely that the PS6 doesn't have enough grunt to run Cyberpunk's PT mode. Next-gen levels of detail with PT would presumably be out of the question, but running currently available titles should be absolutely doable.
 
I find it hard to believe that none of Sony, AMD, or Microsoft will invest in ray tracing acceleration between now and 2028-2029.
The architecture that gets bankrolled by the console makers (RDNA 6?) will be competitive with Nvidia at the time of release (maybe a generation behind), just like RDNA 2 was against Ampere/Turing.
This gen will feel like a beta compared to the next, just like PS3 before PS4.

Look at the charts above: AMD aren't a generation behind Nvidia, they're multiple generations behind.

I don't think AMD will ever catch up; even Intel, on their first attempt, came up with hardware that's faster at RT than AMD's.
 
One thing I appreciate about the PS5 Pro and Sony is that they're trying to build the most optimal machine that meets a certain goal within engineering constraints. Whatever hardware they add, they plan on fully utilizing. You really wonder how much was saved by using a cut-down Zen 2 CPU in the base PS5, but it's further proof of their focus when you look at how well the PS5 performs compared to the more powerful Series X despite missing a few instructions. I trust Sony is going to build a very powerful system that will deliver impressive results in terms of 4K, higher-fps gaming as well as RT effects. It surely won't match the most powerful consumer GPUs, but it will be good enough that you won't be missing out on much by choosing their $500 box.
 
I can't fathom how the PS5 Pro gets 2-4x the RT performance of the PS5 without new ray/triangle intersection hardware and some very large L3 cache.
 
Suggestions for deeper architectural improvements include dedicated BVH tree building/traversal (i.e., the Nvidia and Intel approach) with dedicated stack memory for ray/BVH coordinates; AMD has a patent (US20230206543) on a 'hardware' (fixed-function) traversal engine describing these techniques.
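For illustration, here is a minimal software sketch of what such a traversal engine does, assuming a simple node layout (the node format, field names, and the `intersect_triangle` callback are all hypothetical): walk the BVH with an explicit node stack, run ray/box tests at inner nodes, and run ray/triangle tests at leaves. The stack and the loop are the parts that a fixed-function engine with dedicated stack memory would take out of the shader's hands.

```python
# Minimal stack-based BVH traversal sketch (software stand-in for a
# fixed-function traversal engine; node layout and names are hypothetical).

def slab_test(ray_o, ray_inv_d, box_min, box_max):
    """Ray vs axis-aligned bounding box test; True if the box is hit."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t0 = (box_min[axis] - ray_o[axis]) * ray_inv_d[axis]
        t1 = (box_max[axis] - ray_o[axis]) * ray_inv_d[axis]
        t_near = max(t_near, min(t0, t1))
        t_far = min(t_far, max(t0, t1))
    return t_near <= t_far

def traverse(bvh, ray_o, ray_d, intersect_triangle):
    """Walk the BVH with an explicit stack; return the closest hit distance."""
    ray_inv_d = [1.0 / d if d != 0.0 else 1e30 for d in ray_d]
    stack = [0]                  # root node index; in hardware this lives in dedicated stack memory
    closest_t = float("inf")
    while stack:
        node = bvh[stack.pop()]
        if not slab_test(ray_o, ray_inv_d, node["min"], node["max"]):
            continue             # ray misses this subtree entirely
        if node["is_leaf"]:
            for tri in node["triangles"]:               # leaf: ray/triangle tests
                t = intersect_triangle(ray_o, ray_d, tri)
                if t is not None and t < closest_t:
                    closest_t = t
        else:
            stack.extend(node["children"])              # inner node: push child indices
    return closest_t
```

On current RDNA the shader itself runs a loop like this (only the box/triangle tests are hardware-accelerated), which is why moving the whole loop and its stack into fixed-function hardware is such a commonly suggested improvement.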
 
I find it hard to believe that none of Sony, AMD, or Microsoft will invest in ray tracing acceleration between now and 2028-2029.
The architecture that gets bankrolled by the console makers (RDNA 6?) will be competitive with Nvidia at the time of release (maybe a generation behind), just like RDNA 2 was against Ampere/Turing.
This gen will feel like a beta compared to the next, just like PS3 before PS4.

I think it would be very unlikely that the PS6 doesn't have enough grunt to run Cyberpunk's PT mode. Next-gen levels of detail with PT would presumably be out of the question, but running currently available titles should be absolutely doable.
That's actually where I'm at. The problem is that there's too much grunt required. And I'm going to be very reductive here, so bear with me on this.

There will always be this sweet spot between clock speed, energy required, cooling required and silicon required.
And I don't really care about which IHV, but if we're having a serious discussion around getting 4090 levels of power into something with a form factor the size of a PS5, then we're talking about cramming all that power into roughly 292-350 mm² of silicon.

So let's assume that to run Cyberpunk PT you need some level of computational power X, and right now X is roughly 4090-class compute. You're looking at about 300-450 W of power consumption on a 4090 to play that game with PT on.

Now combine that with a CPU (~80 mm²) and shrink a 4090 (609 mm²) into a combined die of around 350 mm². Think about all that power now being pushed through a very small area; cooling becomes increasingly hard to do.
And for consoles to exist, they have to be at a very particular price point.

So when you consider the combination of heat, cooling, energy and silicon size: the smaller the die gets and the more computation we require of it, the higher the power density in W/mm² climbs, until eventually we have no materials to cool it, at least nothing that keeps us at console-level pricing. The obvious answer is to go wider with slower clocks, reducing the power requirement while increasing the die size and therefore the cooling area, but now we're paying significantly more per die due to silicon costs.

Thus, regardless of AMD or Nvidia, the issue as I see it is that there's a clear physics barrier that can only be overcome by significant cost increases. The reason PC continues to flourish is that we have more applications for this level of power (whereas consoles are dedicated gaming machines), and also that we're moving back to the mainframe days, where computation moves to the cloud so that it's cheaper for everyone to access.

I just don't see how, given how slowly node shrinks are coming, we'll be able to fit that level of computation into ~350 mm² of silicon by the PS6.

We could develop entirely new hardware accelerators, or come up with a way to use an order of magnitude less silicon to do the same amount of computation, but outside of that, by 2026/27 I don't think we'll be far enough along in node shrinks to make this happen. And even if we were, I don't think the cooling solution would be ideal to keep us at our current price point.
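To put rough numbers on the power-density point, here's a back-of-the-envelope calculation using only the figures quoted above; the frequency/voltage scaling at the end is a textbook DVFS approximation, not measured silicon behaviour:

```python
# Back-of-the-envelope power density, using the numbers from the post above.

gpu_area_mm2 = 609          # 4090 die area
cpu_area_mm2 = 80           # console-class CPU area from the post
console_area_mm2 = 350      # rough console SoC budget
board_power_w = 400         # mid-point of the 300-450 W range quoted for CP2077 PT

pc_density = board_power_w / (gpu_area_mm2 + cpu_area_mm2)
console_density = board_power_w / console_area_mm2
print(f"PC parts:    {pc_density:.2f} W/mm^2")     # ~0.58 W/mm^2
print(f"Console SoC: {console_density:.2f} W/mm^2")  # ~1.14 W/mm^2, roughly double the heat per mm^2

# "Wider and slower": dropping clocks ~20% with a matching voltage drop
# roughly halves dynamic power (P ~ f * V^2), but you need ~25% more units
# (i.e. more silicon) to keep the same throughput.
f_scale, v_scale = 0.8, 0.8
print(f"Power scale: {f_scale * v_scale**2:.2f}, area scale: {1 / f_scale:.2f}")
```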
 
That's actually where I'm at. The problem is that there's too much grunt required. And I'm going to be very reductive here, so bear with me on this.

There will always be this sweet spot between clock speed, energy required, cooling required and silicon required.
And I don't really care about which IHV, but if we're having a serious discussion around getting 4090 levels of power into something with a form factor the size of a PS5, then we're talking about cramming all that power into roughly 292-350 mm² of silicon.

So let's assume that to run Cyberpunk PT you need some level of computational power X, and right now X is roughly 4090-class compute. You're looking at about 300-450 W of power consumption on a 4090 to play that game with PT on.

Now combine that with a CPU (~80 mm²) and shrink a 4090 (609 mm²) into a combined die of around 350 mm². Think about all that power now being pushed through a very small area; cooling becomes increasingly hard to do.
And for consoles to exist, they have to be at a very particular price point.

So when you consider the combination of heat, cooling, energy and silicon size: the smaller the die gets and the more computation we require of it, the higher the power density in W/mm² climbs, until eventually we have no materials to cool it, at least nothing that keeps us at console-level pricing. The obvious answer is to go wider with slower clocks, reducing the power requirement while increasing the die size and therefore the cooling area, but now we're paying significantly more per die due to silicon costs.

Thus, regardless of AMD or Nvidia, the issue as I see it is that there's a clear physics barrier that can only be overcome by significant cost increases. The reason PC continues to flourish is that we have more applications for this level of power (whereas consoles are dedicated gaming machines), and also that we're moving back to the mainframe days, where computation moves to the cloud so that it's cheaper for everyone to access.

I just don't see how, given how slowly node shrinks are coming, we'll be able to fit that level of computation into ~350 mm² of silicon by the PS6.

We could develop entirely new hardware accelerators, or come up with a way to use an order of magnitude less silicon to do the same amount of computation, but outside of that, by 2026/27 I don't think we'll be far enough along in node shrinks to make this happen. And even if we were, I don't think the cooling solution would be ideal to keep us at our current price point.

Well, we could decouple the CPU and GPU again. The single-chip APU is actually a fairly new concept in the console space, appearing with the Xbox One and PS4; previously you had multiple chips, and some systems, like the Saturn, had what, eight-plus?

The PS5 is an extremely large console, and the Xbox Series X isn't far behind. There really isn't a limit on how large you can go with a console; people will just have to be willing to buy it.

So a company could create a console with a 16-core Ryzen with a large 3D cache and put a GeForce 5080 in it if they wanted to. You could even go back to split memory pools: DDR for the CPU and ultra-fast GDDR or HBM for the GPU.
 
Well, we could decouple the CPU and GPU again. The single-chip APU is actually a fairly new concept in the console space, appearing with the Xbox One and PS4; previously you had multiple chips, and some systems, like the Saturn, had what, eight-plus?

The PS5 is an extremely large console, and the Xbox Series X isn't far behind. There really isn't a limit on how large you can go with a console; people will just have to be willing to buy it.

So a company could create a console with a 16-core Ryzen with a large 3D cache and put a GeForce 5080 in it if they wanted to. You could even go back to split memory pools: DDR for the CPU and ultra-fast GDDR or HBM for the GPU.
You can't. There's no TAM for that, and whoever built it would go out of business. Console purchasers are extremely price-sensitive, and the device can do nothing but play games. People won't spend $1000+ on a console.

AMD's greatest market on the GPU side is console players and handheld devices. There's a very specific reason AMD makes the design choices they do: they're designed to save silicon area at the cost of performance, in the hope that developers can somehow claw it back through optimization. The cheaper console will always sell more than the expensive one.
 
About the CPU boost: the only game that matters and will be the real test is GTA 6. No 60 fps on the Pro would be a marketing fail.

Rockstar was even reluctant to unlock 60 fps for Red Dead on PS5, even when that version didn't ship on Xbox; they didn't want to upset MS. Framerate is "feature parity". No matter what CPU there was, it could still have been locked to 30 fps if the base consoles are. DF seems to think it's running at 30 fps, but I'm not sure that's true... how is Cyberpunk not CPU-limited and this is? Sure, it could be 40-60 fps, but whatever.
 
That's actually where I'm at. The problem is that there's too much grunt required. And I'm going to be very reductive here, so bear with me on this.

There will always be this sweet spot between clock speed, energy required, cooling required and silicon required.
And I don't really care about which IHV, but if we're having a serious discussion around getting 4090 levels of power into something with a form factor the size of a PS5, then we're talking about cramming all that power into roughly 292-350 mm² of silicon.

So let's assume that to run Cyberpunk PT you need some level of computational power X, and right now X is roughly 4090-class compute. You're looking at about 300-450 W of power consumption on a 4090 to play that game with PT on.

Now combine that with a CPU (~80 mm²) and shrink a 4090 (609 mm²) into a combined die of around 350 mm². Think about all that power now being pushed through a very small area; cooling becomes increasingly hard to do.
And for consoles to exist, they have to be at a very particular price point.

So when you consider the combination of heat, cooling, energy and silicon size: the smaller the die gets and the more computation we require of it, the higher the power density in W/mm² climbs, until eventually we have no materials to cool it, at least nothing that keeps us at console-level pricing. The obvious answer is to go wider with slower clocks, reducing the power requirement while increasing the die size and therefore the cooling area, but now we're paying significantly more per die due to silicon costs.

Thus, regardless of AMD or Nvidia, the issue as I see it is that there's a clear physics barrier that can only be overcome by significant cost increases. The reason PC continues to flourish is that we have more applications for this level of power (whereas consoles are dedicated gaming machines), and also that we're moving back to the mainframe days, where computation moves to the cloud so that it's cheaper for everyone to access.

I just don't see how, given how slowly node shrinks are coming, we'll be able to fit that level of computation into ~350 mm² of silicon by the PS6.

We could develop entirely new hardware accelerators, or come up with a way to use an order of magnitude less silicon to do the same amount of computation, but outside of that, by 2026/27 I don't think we'll be far enough along in node shrinks to make this happen. And even if we were, I don't think the cooling solution would be ideal to keep us at our current price point.
I think we should be able to squeeze 4090 performance into a console form factor at 1 nm, which is what I expect the final PlayStation to use. It's quite a shrink from 5 nm. We also have to factor in architectural improvements that can come over the next five years. Don't forget, Nvidia hasn't made any significant improvements to its graphics architecture since 2018; they just increase the RT and tensor core throughput and continue to add SMs.
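As a very rough sanity check on how much a shrink like that could buy, here's a toy compounding estimate; every per-step density gain below is a loose assumption for illustration (logic only; SRAM and analog scale far worse), not a foundry figure:

```python
# Toy area-scaling estimate for a 4090-class GPU moving from a 5 nm-class node
# to a 1 nm-class node. All per-step density gains are assumptions.

die_area_mm2 = 609                     # 4090 die on a 5 nm-class process
assumed_density_gain = {               # hypothetical logic density gain per step
    "5nm -> 3nm": 1.6,
    "3nm -> 2nm": 1.3,
    "2nm -> 1.4nm": 1.3,
    "1.4nm -> 1nm-class": 1.3,
}

area = die_area_mm2
for step, gain in assumed_density_gain.items():
    area /= gain
    print(f"{step}: ~{area:.0f} mm^2")

# Under these assumptions the GPU logic alone lands around ~170 mm^2, which
# would leave room for a CPU in a ~350 mm^2 console SoC; the open question is
# whether SRAM, memory interfaces and power density cooperate.
```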
 