Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
Well, I'd say the price must meet consumer expectations first, and then they try to get the most performance they can out of that profile. It's never been about power first and price second.
All of these consoles have to be designed with a price point in mind.

If Sony naturally targeted a lower price point, then you have to expect lower performance profiles. But that shouldn't be a slight against Sony, nor do I see it as any form of weakness. They set a price point they think they will find success at, and I would trust their process. They have a great deal of experience in knowing how to sell a lot of PlayStations, and they have the leadership position going into next gen. I think they are very focused on making a product that will succeed, but that doesn't necessarily mean it has to meet hardcore enthusiasts' dreams of what acceptable power is.

everyone has a different threshold of what acceptable power is.
Unless MS decides to subsidise their product for aggressive competition
 
There is a very low chance they don't add any CUs and just clock 0.2GHz higher, since Sony already makes other RDNA2 GPUs.

The GPU must exceed 10TF for marketing reasons (9.2TF is a hard sell to core consumers).

I don't know how you can draw any of those conclusions. If anything, I think the chance of a 9.2TF (2.0GHz, 36CU) PS5 is higher now than before. RDNA2 should allow them to hit that clock at a more reasonable power consumption, likely similar to the PS4 Pro.
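For reference, the 9.2TF figure follows from the standard FP32 throughput formula for GCN/RDNA GPUs (CUs × 64 shaders per CU × 2 ops per clock × clock). A minimal sketch; the helper name is my own:

```python
def tflops(cus: int, clock_ghz: float) -> float:
    """FP32 throughput for a GCN/RDNA GPU: CUs * 64 shaders * 2 ops/clock * clock."""
    return cus * 64 * 2 * clock_ghz / 1000.0  # GFLOPS -> TFLOPS

print(tflops(36, 2.0))  # 36 CUs at 2.0 GHz -> 9.216, the "9.2 TF" figure
```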
 
Consoles simply follow price/performance target, and they determine how much power they will pack in console.

Take the PS4 Pro for example: that console was released in 2016 with a 4.2TF chip and 8GB of GDDR5 RAM. The Xbox One X was released a year later with 6TF and 12GB of GDDR5.

One could say: well, if Sony had released the PS4 Pro a year later, they would probably have matched the XBX TF number. But if you think about it, the actual fabric these two systems are made of is pretty much the same. MS didn't have the advantage of a newer node or better RAM technology. They were both at 16nm, with Jaguar cores, and had GDDR5. The XBX chip was ~40mm² bigger, with 4GB more RAM and a UHD drive. This is what determined the $499 price, not specs that were available to them and not to Sony.

Sony made the decision to drop UHD, keep RAM/BW pretty much the same, and deliver a 4.2TF chip because they were aiming for $399. If MS had a $30-40 higher budget for the actual SoC, and they were not as cost conscious as Sony (due to Lockhart), they would simply be able to deliver more TF, but at a higher price. That's all there is to it.

I don't know how you can draw any of those conclusions. If anything, I think the chance of a 9.2TF (2.0GHz, 36CU) PS5 is higher now than before. RDNA2 should allow them to hit that clock at a more reasonable power consumption, likely similar to the PS4 Pro.
I still think a bit more: something like the 4.9TF RX 470 was at around ~120W.

A 9.2TF RDNA2 with 16GB of the highest-clocked GDDR6 RAM would probably come in at around ~150W in the best case. I think this system would likely be around 190-200W at max, and the XSX probably ~30W more.
 
- We have proof Sony designed 2 SoCs, Ariel and Oberon (like Microsoft allegedly did, yep).
- We know the Oberon regression tests are mainly there to run the Ariel BC tests, basically nothing more. Those tests are not called "regression tests" for nothing. See @AbsoluteBeginner's posts for more details.
- So Oberon B0, with about 100GB/s more bandwidth, very probably has more CUs than Ariel. You usually set the bandwidth depending on the TF number, because bandwidth is very expensive; you don't give more than what is enough. So based on 448GB/s for 9.2TF, 550GB/s should be good enough for 11.3TF, say 11-12TF.

Now based on O'dium's testimony, which I believe since they're a real developer, the current (noisy) PS5 devkit is at 11.6TF. So that's 44 CUs at 2060MHz or 52 CUs at 1742MHz.

An 11.6 TFLOPS machine at 2060MHz would beat a 12TF 1800MHz machine in most tests, assuming the rest of the specs are equivalent.
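The arithmetic here is easy to check: under the linear bandwidth-to-TF assumption, 550GB/s at the same GB/s-per-TF budget as 448GB/s for 9.2TF implies ~11.3TF, and the 11.6TF devkit figure can be hit by several CU/clock pairs. A rough sketch (variable names are mine):

```python
# Linear bandwidth scaling: 448 GB/s "covers" 9.2 TF, so 550 GB/s covers:
print(round(550 / 448 * 9.2, 1))  # -> 11.3

# CU/clock pairs that land on 11.6 TF (FP32 TFLOPS = CUs * 128 ops * MHz)
TARGET_TF = 11.6
for cus in (40, 44, 48, 52, 56):
    mhz = TARGET_TF * 1e6 / (cus * 128)
    print(f"{cus} CUs -> {mhz:.0f} MHz")
# 44 CUs -> ~2060 MHz and 52 CUs -> ~1743 MHz, matching the post (rounding)
```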
 
- We have proof Sony designed 2 SoCs, Ariel and Oberon (like Microsoft allegedly did, yep).
- We know the Oberon regression tests are mainly there to run the Ariel BC tests, basically nothing more. Those tests are not called "regression tests" for nothing. See @AbsoluteBeginner's posts for more details.
- So Oberon B0, with about 100GB/s more bandwidth, very probably has more CUs than Ariel. You usually set the bandwidth depending on the TF number, because bandwidth is very expensive; you don't give more than what is enough. So based on 448GB/s for 9.2TF, 550GB/s should be good enough for 11.3TF, say 11-12TF.

Now based on O'dium's testimony, which I believe since they're a real developer, the current (noisy) PS5 devkit is at 11.6TF. So that's 44 CUs at 2060MHz or 52 CUs at 1742MHz.
Just a correction:
  • Oberon is running all the Ariel iGPU test lists: Native, BC1 and BC2. It's not just BC tests.
  • Ariel with 448GB/s and 9.2TF would most likely be BW-starved. Not only would it go completely against the console precedent of having more total BW than the PC GPU equivalent (~25%), you would also have to split that 448GB/s with the CPU. Since the 7.9TF 5700 has the full 448GB/s at its disposal, I think that bandwidth would be far too little for an entire system.
  • Thus I think 528GB/s (which is the actual benchmark for both the Flute AND Oberon tests) would probably be the sweet spot for ~9-10TF with RT + Zen2.
 
Unless MS decides to subsidise their product for aggressive competition
eh.
I just don't see MS doing that. No one wants to get into a subsidy war; everyone just ends up losing money. Back then subsidies were necessary, but I don't think they are as critical in today's environment. With the multiple ways people can access MS content, I doubt they will go with subsidies.
 
Just a correction:
  • Oberon is running all the Ariel iGPU test lists: Native, BC1 and BC2. It's not just BC tests.
  • Ariel with 448GB/s and 9.2TF would most likely be BW-starved. Not only would it go completely against the console precedent of having more total BW than the PC GPU equivalent (~25%), you would also have to split that 448GB/s with the CPU. Since the 7.9TF 5700 has the full 448GB/s at its disposal, I think that bandwidth would be far too little for an entire system.
  • Thus I think 528GB/s (which is the actual benchmark for both the Flute AND Oberon tests) would probably be the sweet spot for ~9-10TF with RT + Zen2.
He just wants the PS5 to beat the XSX in power while being cheaper, you know. You can see that in his edit.
 
Just a correction:
  • Oberon is running all the Ariel iGPU test lists: Native, BC1 and BC2. It's not just BC tests.
  • Ariel with 448GB/s and 9.2TF would most likely be BW-starved. Not only would it go completely against the console precedent of having more total BW than the PC GPU equivalent (~25%), you would also have to split that 448GB/s with the CPU. Since the 7.9TF 5700 has the full 448GB/s at its disposal, I think that bandwidth would be far too little for an entire system.
  • Thus I think 528GB/s (which is the actual benchmark for both the Flute AND Oberon tests) would probably be the sweet spot for ~9-10TF with RT + Zen2.
Ariel doesn't have 448GB/s. It has about 430GB/s measured BW (from memory), and it is that number that must be compared to the 530GB/s number. I really hope you'll at least understand that part. Oberon has about 100GB/s more bandwidth than Ariel; that's in the GitHub data, both using real-world measured numbers.
 
He just wants the PS5 to beat the XSX in power while being cheaper, you know. You can see that in his edit.
And can't the same be said vice versa? Why so cute?
Let's face it, neither side is giving up until the official reveal or some genuinely legit leak happens.
 
And can't the same be said vice versa? Why so cute?
Let's face it, neither side is giving up until the official reveal or some genuinely legit leak happens.
The "other side" here is at least fully aware that if it's indeed 12TF vs 9.2TF, the XSX is more expensive.
 
Ariel doesn't have 448GB/s. It has about 430GB/s measured BW (from memory), and it is that number that must be compared to the 530GB/s number. I really hope you'll at least understand that part. Oberon has about 100GB/s more bandwidth than Ariel; that's in the GitHub data, both using real-world measured numbers.
And I hope you understand that my entire point is that I don't think 530GB/s will be sufficient for a 12TF GPU with RT and the CPU sharing the same BW. My entire point is that 448GB/s for Ariel is too little if it's 9.2TF, and 528GB/s is more like it.

Simply put :

PS4 1.8TF 176 GB/s
HD7850 1.79TF 153 GB/s
R9 270 2.36TF 180 GB/s

PS4 > 1.28x more BW per TF (v R9)

PS4Pro 4.2TF 218 GB/s
RX470 4.9TF 211 GB/s

Pro > 1.21x more BW per TF

XBX 6.0TF 326 GB/s
RX580 6.1TF 256 GB/s

XBX > 1.29x more BW per TF


Notice that even with fewer TFs, consoles had more GB/s, because they share it with the CPU as well; it's not used only by the GPU.
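The BW-per-TF ratios above can be reproduced directly from the listed numbers. A throwaway sketch (names are mine; the PS4 Pro ratio comes out at 1.21x from these figures):

```python
# (TFLOPS, GB/s) pairs as listed in the post
specs = {
    "PS4": (1.8, 176), "R9 270": (2.36, 180),
    "PS4 Pro": (4.2, 218), "RX 470": (4.9, 211),
    "XBX": (6.0, 326), "RX 580": (6.1, 256),
}
bw_per_tf = {name: gbps / tf for name, (tf, gbps) in specs.items()}
for console, gpu in [("PS4", "R9 270"), ("PS4 Pro", "RX 470"), ("XBX", "RX 580")]:
    ratio = bw_per_tf[console] / bw_per_tf[gpu]
    print(f"{console} vs {gpu}: {ratio:.2f}x more BW per TF")
# -> 1.28x, 1.21x, 1.29x
```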
 
And can't the same be said vice versa? Why so cute?
Let's face it, neither side is giving up until the official reveal or some genuinely legit leak happens.
Not really, I am pretty objective when it comes to this. Once specs are revealed and we see deep dives into the tech, I will turn my head towards next gen. I am just commenting on the situation as I see it.
 
I'd be curious about what further tests they've done, as it's possible that GDDR6 may have less of a shared-access penalty, while Zen/Navi have a more robust cache hierarchy (and better culling) relative to the current round.
 
Before confirmation of the perf-per-watt improvement of RDNA2 over RDNA1, I believed the PS5 would be 8, 9, 10, or 11.x TFLOPS. I still believe it will be between 9 and 11.x TFLOPS, but I don't believe the PS5 will be 36 CUs at 2GHz. If it is 9 TFLOPS, maybe 44 or 48 CUs is better. A good compromise.

I think PS5 will be less powerful than XSX.
This - it's getting tiresome to have legitimate questions about the GitHub leaks dismissed as frustration over Xbox being more powerful.

Nobody has offered a good explanation for a 2GHz clock speed, and that on its own is enough to say the GitHub leaks lack context. Most of the pushback I read here, and advance myself, is centered around this very issue.

I've said all along I think PS5 will be somewhere between 9.5 and 11TF, have an SSD, and 16GB of memory. I really don't care if Xbox has a 12TF option; I'm not seeing any reason to believe PS5 is clocked at 2GHz.
 
And I hope you understand that my entire point is that I don't think 530GB/s will be sufficient for a 12TF GPU with RT and the CPU sharing the same BW. My entire point is that 448GB/s for Ariel is too little if it's 9.2TF, and 528GB/s is more like it.

Simply put:

PS4 > 1.28x more BW per TF (v R9)
Pro > 1.21x more BW per TF
XBX > 1.29x more BW per TF

Notice that even with fewer TFs, consoles had more GB/s, because they share it with the CPU as well; it's not used only by the GPU.
If you want to play that game, then 530/560 *12TF = 11.4TF.

Zen 2 itself does not appear BW-constrained even with <50GB/s on desktop.
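The proportional claim in this reply is simple arithmetic; a one-liner to check it (figures taken from the posts above):

```python
# If 12 TF pairs with 560 GB/s, the same GB/s-per-TF budget at 530 GB/s supports:
print(round(530 / 560 * 12, 1))  # -> 11.4
```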
 
2.0GHz still seems high, but if XSX is doing near 1.7GHz (which seems believable in order to hit 12TFLOPS), then it's possible Sony is managing it on a smaller chip.
 
If you want to play that game, then 530/560 *12TF = 11.4TF.

Zen 2 itself does not appear BW constrained even with <50GB/s on desktop.
I know that, but I already said that I don't see a way for XSX to be 560GB/s if 12TF is true. We obviously have no measured data for Arden, so we don't know if they are using chips faster than 14Gbps, but as in the case of Oberon, I do expect it.

The 5700 is 448GB/s with 7.9TF; this would be the first time consoles were down on BW per TF compared to last gen, when they were routinely ~25% up.
 