What's the performance range of PS5? *spawn*

Status
Not open for further replies.
Can't get more Apples-to-Apples than 2 AMD x86 APUs with Zen 2 CPUs & RDNA 2 GPUs. :oops:

Tommy McClain

At a basic high level, that's exactly what they are. But that oversimplification glosses over the customisation both parties have done to those GPUs and CPUs. Using TFLOPS as a measurement can't take any of that into account, just as it can't account for the differences in IO speed and the impact that will have, or changes to caches, etc. Cerny summed that up in the presentation he did.

Just as MIPS is useless for CPUs, FLOPS is useless for GPUs. It's a number you throw out to people who have only a basic understanding of what they're talking about. The only real way to measure modern GPUs is by how they perform in real-world scenarios.
 
Keep looking at all the minutiae and you'll never be able to compare them. Look at the same games running on both systems and you'll just have somebody say you can't compare them because one was the lead system and the port got the bare minimum of effort. Or one vendor's dev tools are behind the other's. Or one dev environment is to the metal while the other has too much hardware abstraction. Eventually none of them can be compared. So why the hell are we here discussing all of this shit? LOL

Tommy McClain
 
In that case we just have to get smarter about how many levels of meta we can deal with before the variance is too great for any realistic measurement to be useful. And measurements like MIPS and FLOPS are too basic to contain any useful information. Leave those to the n00bs :)
 
Would you have that mindset if the PS5 had more FLOPS? Because I know from last generation it was the one thing that got shoved down everyone's throats. ;)

Tommy McClain
 
A long time ago we estimated that roughly 2070-level performance and feature set would make an ideal console at a specific price point.

That's exactly where a 5700 XT sits. Arguably the only feature missing is DLSS. It could be 8 TF or 10 TF; as long as it's putting out 2070-level benches, that's all that matters. Honestly, I don't know where PS5 will land. It has the CU count of a 5700 and a clock speed above the 5700 XT 50th Anniversary Edition, which gives it a wide range of possible performance.

That being said, I do think there is a complex around any leading digit that isn't a 10. Neither the 5700 nor the 5700 XT peaks above 9.75 TF.

If you walk away with 5700 XT performance, a super-fast SSD and a Zen 2 CPU at an amazing price point, I don't understand the argument over whether it's 8.2 or 10.2 TF. It's not relevant; you're getting a massive amount for your money. The only thing that would make it less desirable is if the competitor manages to ship at the same price point, in which case more people would desire the extra power, though likely not at the cost of giving up a game library.
The larger TF differential is going to be put towards resolution, and that's where most of its benefit will lie.
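All the peak-TF figures being thrown around here come from the same arithmetic: CUs × 64 shaders × 2 FP32 ops (FMA) per clock × clock speed. A quick sketch of that calculation (the clocks are the public desktop boost clocks; the 2.23 GHz / 36 CU PS5 configuration is treated as one hypothetical among several):

```python
def peak_tflops(cus: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    """Peak FP32 throughput in TFLOPS: shaders x 2 ops (FMA) per clock."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000.0

# RX 5700: 36 CUs at 1.725 GHz boost
print(round(peak_tflops(36, 1.725), 2))  # 7.95
# RX 5700 XT: 40 CUs at 1.905 GHz boost -> the 9.75 TF figure above
print(round(peak_tflops(40, 1.905), 2))  # 9.75
# 36 CUs at a 2.23 GHz clock gives the 10.2-ish figure
print(round(peak_tflops(36, 2.23), 2))   # 10.28
```

None of this says anything about delivered performance, which is the point being argued; two parts with identical peak TFLOPS can behave very differently once bandwidth, caches and IO enter the picture.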
 
Totally. They were just as useless a measurement last generation as they are this generation. The proof of that pudding was definitely in the eating, so to speak ;)
 
Yeah, this I believe. The last generation was the last one where hardware was the thing to win people's hearts and minds (and therefore wallets too). Since then services have evolved, and gamers are far more invested in their platforms of choice than they have ever been before. Analysts are predicting a 2:1 split of the market again, on the basis that PS4 owners will get a PS5 and Xbox owners will get an XSX, or more probably a Lockhart (which is why they should bring that out before the XSX). It doesn't matter which is supposedly more powerful on paper. That is so last generation :D
 
Teraflops aren't going to matter; it's going to be bandwidth limited.

I'm wondering about that as well.

But the PS4 Pro manages extremely well with only 218 GB/s and compares very favourably against the X1X with its 326 GB/s of bandwidth. Originally I thought the Pro was going to be really bandwidth starved, and it isn't. Maybe devs have to work around that, I don't know, but it doesn't seem to be a big issue.
 
The Pro has a higher fill rate but runs most games at a lower resolution, sometimes even using optimisations like checkerboarding (making the actual rendered resolution/bandwidth requirements even lower). 3D rendering is like a chain, only as strong (fast) as its weakest link. I would imagine the Pro is more often than not bandwidth constrained, given its fill-rate advantage, while at equal settings the 1X would be bottlenecked by something else. Both are held back by the CPU, though.
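One rough way to frame the "bandwidth limited" question is bandwidth per unit of compute. A toy comparison using ~218 GB/s for the Pro and the commonly quoted ~326 GB/s for the X1X, against their usual peak-TF figures (this crude ratio ignores caches, delta colour compression and access patterns entirely):

```python
def bytes_per_flop(bandwidth_gbs: float, tflops: float) -> float:
    """GB/s divided by GFLOP/s, i.e. bytes of bandwidth per FP32 FLOP."""
    return bandwidth_gbs / (tflops * 1000.0)

print(round(bytes_per_flop(218, 4.2), 3))  # PS4 Pro (~4.2 TF): ~0.052
print(round(bytes_per_flop(326, 6.0), 3))  # Xbox One X (~6 TF): ~0.054
```

By this measure the two machines are actually quite close, which fits the observation above that the Pro isn't as bandwidth starved as expected.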
 
Fitting rumour for the subject: PS5 CUs and RDNA 3 CUs have independent clock domains depending on their activity.

On the other hand, if the PS5 geometry engine's capabilities really bring deferred vertex shading, the TFLOPS number loses much of its value because of huge efficiency gains in the pipeline. Would like to read 3dilettante's take on this...
 
Hmm, haven't heard such a rumour before. I suspect that if true, this would only be for the graphics pipeline.
 
It's cool that Sony committed to giving developers the flexibility to decide where they want to really push the hardware, but if the past is any indication, developers will likely lower CPU clocks to make sure GPU clocks can run at full tilt. I'm sure there will be exceptions, but most games remain GPU limited, and I doubt that has changed. Even with reduced clock speeds, the PS5's CPU is going to be a massive improvement over the Jaguar cores in current-gen consoles, so I doubt developers will really have much trouble. They will always want more, but as we have seen with Switch, even a game like The Witcher 3 was able to get up and running on three Arm A57 cores clocked at 1 GHz. The PS5's CPU having a slower clock than the X's won't be a deal breaker.
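The trade-off described above can be pictured as a shared power budget: whatever the CPU doesn't draw is available to push GPU clocks. A toy model with entirely invented numbers (real clock/power behaviour is nonlinear and far more complex than this sketch):

```python
def gpu_power(total_budget_w: float, cpu_draw_w: float) -> float:
    """Power left for the GPU once the CPU has taken its share."""
    return total_budget_w - cpu_draw_w

def gpu_clock_ghz(power_w: float, base_clock_ghz: float = 2.0,
                  base_power_w: float = 140.0) -> float:
    """Toy assumption: clock scales with the cube root of power."""
    return base_clock_ghz * (power_w / base_power_w) ** (1.0 / 3.0)

# Throttling the CPU back frees power and lets the GPU clock higher
print(round(gpu_clock_ghz(gpu_power(180, 30)), 3))  # CPU-light frame
print(round(gpu_clock_ghz(gpu_power(180, 55)), 3))  # CPU-heavy frame
```

The budget size, CPU draws and scaling exponent here are all made up for illustration; the only point is that the two clock domains trade against each other rather than both running at maximum.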
 
How about we drop the current bout of discussions? It's not productive in the least.

Since no one wants to actually stop the current bout of bitching, it's best this thread is closed.
 