PS5 Pro *spawn

I just watched the IGN guy (and others, like John) respectfully opening their PS5 Pro boxes. I have seen plenty of other unboxings of different stuff, and I have never seen that behavior before, especially when the units were gifted to them. People usually do that kind of thing for a good reason. Doesn't he know how to use gravity (like John did) to easily get it open without tearing it apart?

Is he disrespectful like this with his other stuff and the myriad other consoles he has owned? Does he toss all those console boxes in the garbage and keep only the blank cardboard box?
Maybe he prefers not to keep a box that was already ripped.
 
Could it be that the Pro is using RDNA2 compute units like the vanilla PS5 does? That would equal RX 6800-ish performance, which is in line with the 7700 XT rumors that have been going around. Perhaps for BC with the base PS5? That would be 'just' 4 TF up from the XSX, the previous most powerful console on paper.
It's not '4070' level in raw performance as some have indicated before, nor in RT performance, especially if the GPU doesn't have 'tensor cores'/separate units for the job.
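For reference, the usual back-of-the-envelope FP32 formula is CUs x 64 shaders x 2 ops per clock (FMA) x clock. A quick sketch using the figures thrown around in this thread (the 60 CU / 2.18 GHz Pro numbers are rumored, not confirmed):

```python
# Rough single-issue FP32 throughput estimate.
# Inputs for the Pro are rumored figures, not confirmed specs.
def fp32_tflops(cus: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    """CUs x shaders/CU x 2 ops per clock (FMA) x clock, in TFLOPS."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

print(fp32_tflops(36, 2.23))   # base PS5: ~10.3 TF
print(fp32_tflops(52, 1.825))  # XSX: ~12.1 TF
print(fp32_tflops(60, 2.18))   # rumored Pro: ~16.7 TF
```

That ~4.6 TF gap over the XSX is where the "'just' 4 TF up" framing comes from.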

Or is the console using RDNA3 compute units, with Sony opting for just 16.7 TF and relying on PSSR and improved ray-tracing performance? It's a lot of money (we pay over 1,000 euros for the unit), which is hefty for what you get: no disc drive, no stand, and a meager upgrade compared to what the PS4 Pro did over the base PS4. It's the upscaling and the RT that sell the product, I guess: two technologies that were seen in a negative light when they debuted back in 2018.
 
It's pretty good if it's only using 36 CUs slightly overclocked to 2.35 GHz, as I suspect (a ~5% GPU clock improvement). Bandwidth improvements must surely help the most, IMO. Always above the 48 fps VRR window at a minimum of 1620p native. But it won't resolve the PS5 version's biggest problem: the pop-in.
Is 2.35 GHz for 60CUs?

When running base PS5 games, 24 CUs are completely idle. Is it possible to turn off those CUs?

If only 36 CUs are drawing the GPU's power budget, the clock may go higher than 2.5 GHz. That would explain how base PS5 games improve by 20% or more.
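As a rough sketch of that scenario (assuming performance scales linearly with clock, which ignores the bandwidth side entirely):

```python
# Speculative: clock needed for a ~20% uplift in base PS5 games if only the
# original 36 CUs are active and performance scales linearly with clock.
base_clock = 2.23                   # base PS5 GPU clock, GHz
required_clock = base_clock * 1.20  # clock for a 20% uplift under that assumption
print(f"{required_clock:.2f} GHz")  # ~2.68 GHz
```

So a pure clock explanation for a 20%+ uplift would need well over 2.5 GHz; in practice some of the gain presumably comes from bandwidth instead.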
 
Perhaps they want to give realistic expectations of the Pro's performance relative to the base PS5, so they're comparing like with like...?
That's also a very reasonable explanation.

Or.

Dual-issue FP32 was enabled for devkits but was never intended for the final release, for reasons that are beyond me.
 
How will we now get 300 TOPS int8? :runaway:
I’m not in disagreement, though dual-issue FP32 could be different from dual-issue int8.

To me, it doesn’t make much sense. Does it cost more or less money to remove dual-issue FP32? I really don’t know. I’m struggling to see why they would market 16.7 TF, but the best reason is that it performs closer to 16.7 than it does to 33.4.

That’s reasonable. But if there was some sort of silicon-savings reason, I’d be curious to learn more about the actual silicon cost of supporting dual-issue FP32. Perhaps there is a backwards-compatibility cost. I really don’t know. These are just some interesting technical nuances to learn about; the number itself is not indicative of performance.

I frankly don’t care what the number is, I just want to know why.
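For what it's worth, the 300 TOPS int8 figure only gets close if dual issue is counted somewhere in the chain. A back-of-the-envelope sketch using the rumored 60 CU / 2.18 GHz figures, assuming 2:1 rate steps for FP16, int8, and sparsity (all assumptions, not confirmed specs):

```python
# Speculative arithmetic chain from rumored specs toward the leaked 300 TOPS int8 figure.
# Every rate multiplier below is an assumption, not a confirmed spec.
fp32_single = 60 * 64 * 2 * 2.18 / 1000  # ~16.7 TF: 60 CUs x 64 shaders x FMA x clock
fp32_dual   = fp32_single * 2            # ~33.5 TF if dual-issue FP32 is counted
fp16        = fp32_dual * 2              # ~67 TF at a 2:1 FP16 rate (matches the leak)
int8        = fp16 * 2                   # ~134 TOPS at a 2:1 int8 rate
int8_sparse = int8 * 2                   # ~268 TOPS with 2:1 sparsity
```

Without the dual-issue doubling, the chain tops out around 134 TOPS sparse; even with it, the result is ~268 rather than 300, so a different clock, a different rate, or a separate ML block would have to account for the remainder.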
 

1:04

Have an actual custom silicon AI upscaler
 
That AI upscaler doesn’t do any compute in FP32. It does it in int8 with sparsity.

I don’t want to be a negative Nancy, but there is an insanely high probability that you’re going to see this exact feature with RDNA4 as well. And you’ll see it used with AMD FSR4 instead of it being called PSSR.
 
Is there a PS5 Pro RT performance test compared with an Nvidia GPU?
No, but I would guess it’s comparable to a 2080 Ti in heavy RT.

We have never had any information on the transistor cost of dual issue of any instruction type on any GPU. If they chose to remove it, I would venture it’s for other reasons, though, considering the extremely small quantities they are likely to sell.
 
Maybe Rich should have first lit some incense and doused the PS5 Pro's thin cardboard box with holy water. Now that would be true respect befitting a sacred professional machine.

WRT dual issue: things change. Maybe the original version of the PS5 Pro was going to be dual-issue, but they got rid of it to save die space?

It helps compute performance a bit in games, but not exactly a whole bunch, and maybe not in a way they saw as necessary for their bottom line.
 
Have an actual custom silicon AI upscaler
That kinda muddies the water. It reinforces the notion of a novel unit, but as an engine programmer, I'm not sure his statement is definitive about the nature of the hardware. +1 to "there's a bespoke unit in there", but the match is far from over!
 
Isn't it dual-issue FP16 and single-issue FP32? AMD GPUs are a 2:1 design, so wouldn't it be 33.4 TFLOPS for FP16 and 16.7 for FP32?
 
So now we have added "PS5 Pro doesn't do hardware RDNA3 dual issue" to the FUD pile. It reminds me of the good old days of: the PS4 only has 14 CUs for games, the PS5 doesn't have hardware RT, the PS5 Pro doesn't have dedicated ML silicon. I wonder what the next BS FUD against PlayStation hardware will be. Oddly, I have never heard anyone cast similar doubts about any Xbox console.
Isn't it dual-issue FP16 and single-issue FP32? AMD GPUs are a 2:1 design, so wouldn't it be 33.4 TFLOPS for FP16 and 16.7 for FP32?
It is 67 TFLOPS of FP16 according to the leaked docs, and we have developers confirming it. This is some special conspiratorial stuff I am reading in this thread.

[Attached image: PS5-Pro-PSSR.jpg]
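As a sanity check on the two readings of the numbers, here is the arithmetic both camps are implying (the 2:1 FP16 rate and the dual-issue doubling are assumptions taken from the thread, not confirmed specs):

```python
# Two hypotheses for the FP16 figure, starting from the documented FP32 number.
fp32 = 16.7                    # FP32 TFLOPS quoted in the final documentation
fp16_no_dual   = fp32 * 2      # 33.4 TF: plain 2:1 FP16 rate, no dual issue
fp16_with_dual = fp32 * 2 * 2  # 66.8 TF: 2:1 FP16 rate AND dual issue counted
```

So the leaked ~67 TF FP16 figure only works if something doubles the rate beyond plain 2:1 FP16, whether that is dual issue or a separate block.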
 
...
It is 67 TFLOPS of FP16 according to the leaked docs, and we have developers confirming it. This is some special conspiratorial stuff I am reading in this thread.

That would be cool, but it doesn't line up with what they have in the final documentation. They specifically quote 16.7 TFLOPS there, which would imply they are identifying the FP32 throughput. Maybe the leaked document is referring to a custom block that exists for PSSR and maybe future use. I still think the Pro is just an in-production testbed for the PS6; they'll get tons of real-world performance data and training data for ML.
 
So now we have added "PS5 Pro doesn't do hardware RDNA3 dual issue" to the FUD pile.
There is no FUD pile. No one is trying to spread fear, uncertainty, or doubt. There are only theories, discussion, and bits and pieces of evidence here and there. It doesn't matter which theory is right or wrong. We speculate, debate, and then see who's right when we finally know. This is your last warning: if you don't like people expressing viewpoints you don't agree with, you don't belong on B3D.
It is 67 Tflops of FP16 according to the leaked docs. And we have developers who are confirming it. This is some special conspirational stuff I am reading in this thread.
But we have different information from different sources! If they all said the same thing, it'd be very easy to understand what reality is. Where they differ, we need to understand why to arrive at a correct conclusion. The only people who don't embrace that process of collating more evidence and evaluating it are those who already have a preferred outcome, select evidence to support their preferred understanding, and deride those who challenge it.

Here, we like the process of discovery. We like the process of getting evidence, then getting different evidence, and not quite knowing. Then comes the challenge of forming a hypothesis that can consolidate the disparate evidence and arrive at the truth. Platforms don't come into that process.

One question raised by your source: it claims a 2 ms upscaling time that might be reduced. If PSSR is operating on a discrete upscaling unit, why is there interest in reducing the time taken? If that 2 ms isn't taking away from the GPU, who cares whether it's 2 ms or 5 ms or 1 ms? The worst it'll do is add that much input latency after the frame has been rendered, but 2 ms is neither here nor there, so reducing it won't make any difference.
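To put the 2 ms in perspective, a quick calculation (assuming a 60 fps target; the 2 ms figure comes from the quoted source):

```python
# Share of a 60 fps frame budget that 2 ms of upscaling would consume
# IF it ran on the GPU rather than on a discrete unit.
frame_budget_ms = 1000 / 60  # ~16.7 ms per frame at 60 fps
upscale_ms = 2.0             # upscaling time claimed by the source
share = upscale_ms / frame_budget_ms
print(f"{share:.0%} of a 60 fps frame budget")  # 12%
```

That 12% is why the question matters: on the GPU, shaving the 2 ms buys real render time; on a discrete unit, it only trims latency.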

When all the evidence points to one possibility, yay. Until then, we ponder. Yay!
 