Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Do you people understand what they were testing?

I know it is a regression test, but there are tons of things in the file; why would the RT and VRS data have disappeared here? I disagree with you, but we will know very soon, in just a few months. I doubt that across the whole repository we would find no trace of RT and VRS if it was present at some point for Oberon, even if it was not tested.

Edit:
And why would they receive an RT and VRS chip long before Microsoft tested their chip?
 
I know it is a regression test, but there are tons of things in the file; why would the RT and VRS data have disappeared here? I disagree with you, but we will know very soon, in just a few months. I doubt that across the whole repository we would find no trace of RT and VRS if it was present at some point inside the test.
Mate, there is not going to be a trace of RT/VRS if you are regression testing a chip against the Navi 10 (Ariel) testlist. It's simply physically impossible.

I have zero doubt the Oberon chip has RT/VRS hardware. I had no doubt about it when it leaked either, so the leak is not discredited at all. In fact, with the new info regarding big perf/watt improvements, the only thing that was missing (2.0GHz) became very much expected.

This leak got all the other chips right (Renoir, Arden, MI100), so I think that will soon include Oberon too. The ONLY cliffhanger is the CU count. The native test suggests 36 CUs @ 2.0GHz is native, but I will leave some wiggle room here because we never got an exact number of shader units, as in the Arden case.
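For reference, this is where the oft-quoted 9.2TF figure comes from; a minimal back-of-the-envelope sketch, assuming the standard 64 shader ALUs per CU and 2 FLOPs per ALU per clock (FMA):

```python
# Rough FP32 throughput estimate for a GCN/RDNA-style GPU.
# TFLOPS = CUs x 64 ALUs x 2 FLOPs per ALU per clock x clock (GHz) / 1000
def tflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000.0

print(tflops(36, 2.0))  # ~9.2 TF: the "native" 36 CU @ 2.0GHz case discussed here
print(tflops(36, 1.8))  # ~8.3 TF: the Gonzalo 36 CU @ 1.8GHz figure mentioned later
```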
 
I am speaking about some trace of theoretical values, like for the Arden and Sparkman chips.
Arden, Sparkman, Renoir, Navi 10 and MI100 all had native theoretical data, in some cases in the Ariel and Navi 10 Excel files.

Oberon does not exist there, and everything we know about Oberon is related to regression testing against the Ariel testlist, therefore it simply cannot contain RT/VRS.
 
Arden, Sparkman, Renoir, Navi 10 and MI100 all had native theoretical data, in some cases in the Ariel and Navi 10 Excel files.

Oberon does not exist there, and everything we know about Oberon is related to regression testing against the Ariel testlist, therefore it simply cannot contain RT/VRS.


And it seems Oberon is linked to RV1x, the 1x again pointing to Navi RDNA1.

Edit:
https://www.resetera.com/threads/ps...stors-running-this-again.173318/post-29668091
 

And it seems Oberon is linked to RV1x, the 1x again pointing to Navi RDNA1.
Unfortunately, no, this is a simple mistake by Komachi, who assigns the Ariel and Oberon duo to the same PCI ID range.


AMD 00840F40 - 13F9



I wrote a post about it some time ago - https://forum.beyond3d.com/posts/2106613/

GitHub actually contains a file named:
  • Oberon_100-000000004-20_4_GPUID_ThermTripLimit_Feb-10
  • Inside that file you can find the code 00840F10, assigned to Oberon B0 (and therefore Komachi's E0 stepping, F40, is Oberon E0, not Ariel).
BTW, RV1x means RavenRidge. It has nothing to do with either chip. This is Komachi's personal spreadsheet, so he put those names in himself (ARL/RVN/ADN etc.).
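As an aside, the B0/E0 reading can be pulled straight out of the low byte of those GPUID values; a minimal sketch, assuming (as inferred from the post, not from any official AMD document) that the last byte encodes the revision with the high nibble as the letter (A=0, B=1, ...) and the low nibble as the minor number:

```python
# Hypothetical decode of the 0x00840Fxx GPUID codes mentioned above.
# Assumption: last byte = silicon revision, high nibble = major letter, low nibble = minor.
def revision(gpuid: int) -> str:
    rev = gpuid & 0xFF
    return f"{chr(ord('A') + (rev >> 4))}{rev & 0xF}"

print(revision(0x00840F10))  # B0 -> the value found in the Oberon GitHub file
print(revision(0x00840F40))  # E0 -> Komachi's entry, i.e. Oberon E0 rather than Ariel
```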
 
Realistically though, devs creating a game will probably set themselves a sane bottom end. How many people on integrated graphics are trying to run Metro Exodus or RDR2?

I can't say, but there's a whole lotta folks running Doom and Witcher 3 on the Nintendo Switch, which is a whole heap less powerful than just about anything with integrated graphics. And those are impressive games.

RDR2 is perfectly playable (30~40 fps outdoors) at 1080p with 3/4 resolution scale on a 3400G (11 Vega CUs at about 1.4 GHz). It's probably a better experience overall than on PS4 or X1. So long as AMD APUs aren't saddled with slow RAM and tiny TDPs, like in most laptops, they're perfectly workable gaming systems, frequently exceeding base consoles.
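To put rough numbers behind "frequently exceeding base consoles" (a minimal sketch using the usual CUs x 64 ALUs x 2 FLOPs x clock estimate; it ignores architectural and bandwidth differences, so treat it as ballpark only):

```python
# Rough FP32 throughput of the 3400G's iGPU versus the base PS4.
print(11 * 64 * 2 * 1.4 / 1000)  # ~1.97 TF: 11 Vega CUs at ~1.4 GHz
print(18 * 64 * 2 * 0.8 / 1000)  # ~1.84 TF: base PS4, 18 CUs at 800 MHz
```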

I'd bet 100 English pounds that R* made sure that decent APUs were within their "sane bottom end!"

If Lockhart isn't bandwidth crippled like laptop APUs, and the cooling is good enough to allow the hardware to hit its full clock potential, then 4TF of RDNA 2 is going to be well within the scalability range of any competently made game. Lockhart is going to be somewhere in the ballpark of a GTX 1660 / Super, but with more RAM and RT support.
 
The tricky part with Lockhart is the overall clocks & front end in terms of bottleneck shifts; I have no data, but one could make the argument that the clocks are that much more important at the lower resolution, where the triangle-to-pixel ratio increases for a given LOD, and so the culling/tiling capabilities of Navi might be more important :?: /gibberish :oops:
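To put a rough number on that intuition (a minimal sketch; the scene triangle count is purely illustrative, and the 1080p vs 4K targets for Lockhart and the big console are assumptions):

```python
# Same geometry rendered at two output resolutions: per-triangle front-end work
# (setup, culling) stays constant while per-pixel work shrinks, so the
# triangle-to-pixel ratio rises at the lower resolution.
triangles = 2_000_000            # illustrative geometry budget, not a real figure
px_4k     = 3840 * 2160          # ~8.3M pixels
px_1080p  = 1920 * 1080          # ~2.1M pixels

print(triangles / px_4k)         # ~0.24 triangles per pixel
print(triangles / px_1080p)      # ~0.96 triangles per pixel, i.e. 4x higher
```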

A smaller die, and higher yields for a given clock speed, might mean Lockhart retains the same clock speeds as the larger part without a significant impact on cost.

Might even be that the number of CUs per shader engine decreases. /shrug

A slight drop in CPU clocks might be okay, given that there should typically be less data to be streamed from the SSD (lower resolution / fewer texture accesses) and potentially fewer draw calls (if LOD is based on pixel area). Hopefully it's exactly the same CPU clocks / cache size etc. though.
 
Arden, Sparkman, Renoir, Navi 10 and MI100 all had native theoretical data, in some cases in the Ariel and Navi 10 Excel files.

Oberon does not exist there, and everything we know about Oberon is related to regression testing against the Ariel testlist, therefore it simply cannot contain RT/VRS.

But then why does only Oberon have no theoretical values? There is something very strange here. Why do all the other chips, before and after Oberon, have theoretical data? This is the only chip without theoretical values inside the repository. There is something strange, and I am not sure if the chip is RDNA1 or RDNA2 at all. And why keep doing so many steppings, at least C0, D0 and E0 and maybe many more intermediate steppings, if we believe what AMD said in the last talk at the financial meeting (46 minutes into the video), where they said they are now able to test in hours or weeks, not months.

EDIT:

Here is the link; at the 46-minute mark they talk about re-engineering chip delivery.
 
Very soon, just a few months :D I think we have heard 'soon we will know' many times now.

A few months is very soon imo. If the console launch is delayed to March 2021, maybe the reveal, like the Switch's, will be in October 2020, or they can decide to do it in September 2020, like the PS4 Pro, whose reveal month was September 2016. And that is the last month I expect a PS5 reveal.
 
Not really strange; it's just that some believe it will be 9.2TF, and some believe it will be 12.6/13/14 or whatever TF, as long as it's better than XSX.
Every theory may or may not have data, rumors or insiders to back it up.

For me, if I have to choose, I would pick 9.2TF, simply because the position right now is like a reversed version of 2013, and I personally think Sony is aiming for a lower price. That said, it's not like Sony is going to block used games or force a PSVR bundle into every PS5 :LOL:, so I can see both avoiding the kind of mistakes that lead to a botched launch this time.
Not that I have a problem if it's not 9.2TF though.
 
But then why does only Oberon have no theoretical values? There is something very strange here. Why do all the other chips, before and after Oberon, have theoretical data? This is the only chip without theoretical values inside the repository. There is something strange, and I am not sure if the chip is RDNA1 or RDNA2 at all. And why keep doing so many steppings, at least C0, D0 and E0 and maybe many more intermediate steppings, if we believe what AMD said in the last talk at the financial meeting (46 minutes into the video), where they said they are now able to test in hours or weeks, not months.

EDIT:

Here is the link; at the 46-minute mark they talk about re-engineering chip delivery.
At the same time, why is it that only Oberon has regression testing data in abundance, while Arden/Sparkman have none of it? Or why is Renoir full of measured data, yet MI100 has only a few lines?

It's a messy repo; they probably ran 100x more regression tests. My opinion is that Sony is ahead of MS by at least 3-4 months with their chip, hence the E0 stepping, with Oberon B0 already in June.

Again, they had to have the APU ready by summer, given that they sent development kits to developers (+ Flute leaked in July). Why did they go all the way to an E0 stepping? Perhaps to make sure every chip clocks at 2.0GHz, which might still be a tall order even for RDNA2.

In any case, the puzzle pieces are fitting together rather well. All the data found on GitHub (Arden, Renoir, MI100, Navi 10) has been correct. It's not a question of whether Oberon is the PS5 chip, it is, otherwise what else did they send in the V dev kits? The question is whether it's 36 CUs. That's the last question left, and I think everything points in that direction.
 
Not really strange; it's just that some believe it will be 9.2TF, and some believe it will be 12.6/13/14 or whatever TF, as long as it's better than XSX.
Every theory may or may not have data, rumors or insiders to back it up.

For me, if I have to choose, I would pick 9.2TF, simply because the position right now is like a reversed version of 2013, and I personally think Sony is aiming for a lower price. That said, it's not like Sony is going to block used games or force a PSVR bundle into every PS5 :LOL:, so I can see both avoiding the kind of mistakes that lead to a botched launch this time.
Not that I have a problem if it's not 9.2TF though.

Before the confirmation of the perf-per-watt improvement of RDNA2 compared to RDNA1, I believed the PS5 would be 8, 9, 10 or 11.x Tflops. Now I believe it will be between 9 and 11.x Tflops, but I don't believe the PS5 will be 36 CUs at 2 GHz. If it is 9 Tflops, maybe 44 or 48 CUs is better. A good compromise.

I think the PS5 will be less powerful than the XSX, imo.
 
At the same time, why is it that only Oberon has regression testing data in abundance, while Arden/Sparkman have none of it? Or why is Renoir full of measured data, yet MI100 has only a few lines?

It's a messy repo; they probably ran 100x more regression tests. My opinion is that Sony is ahead of MS by at least 3-4 months with their chip, hence the E0 stepping, with Oberon B0 already in June.

Again, they had to have the APU ready by summer, given that they sent development kits to developers (+ Flute leaked in July). Why did they go all the way to an E0 stepping? Perhaps to make sure every chip clocks at 2.0GHz, which might still be a tall order even for RDNA2.

In any case, the puzzle pieces are fitting together rather well. All the data found on GitHub (Arden, Renoir, MI100, Navi 10) has been correct. It's not a question of whether Oberon is the PS5 chip, it is, otherwise what else did they send in the V dev kits? The question is whether it's 36 CUs. That's the last question left, and I think everything points in that direction.

If it is 36 CUs and 2 GHz, I am not convinced at all by the second number; like you said, it is a tall order. I find 36 CUs at 1.7 GHz or 1.8 GHz much more realistic than a 2 GHz PS5. I know Oberon is a PS5 chip.
 
If it is 36 CUs and 2 GHz, I am not convinced at all by the second number; like you said, it is a tall order. I find 36 CUs at 1.7 GHz or 1.8 GHz much more realistic than a 2 GHz PS5. I know Oberon is a PS5 chip.
I don't think it's about what we find probable, it's what they decided 2-3 years ago when designing the chip. In any case, AMD promised "multi GHz" RDNA2, so there is every bit of a chance to clock it higher than first-gen RDNA.

I am convinced by that number because they clearly tested the chip at PS4, PS4 Pro and Native (GEN2) clocks. If anything, with the latest confirmation from AMD regarding a big bump in perf per watt, this all makes absolute sense now.

That means that with, say, an RDNA1 chip at 220W, an RDNA2 chip would pull ~170W for the same performance.
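Roughly, that works out as follows; a minimal sketch where the ~30% perf-per-watt gain is simply the factor implied by the 220W to ~170W example above, not a confirmed RDNA2 number:

```python
# Power needed for the same performance scales inversely with perf-per-watt.
rdna1_power_w = 220          # example figure from the post
ppw_gain      = 1.30         # assumed ~30% uplift, implied by 220W -> ~170W

rdna2_power_w = rdna1_power_w / ppw_gain
print(round(rdna2_power_w))  # ~169 W, i.e. the "~170W for the same performance" above
```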

It's also a question of how Sony disables CUs. Do they disable them at the Shader Engine level, which should be straightforward, or can they do fine-grained disabling of CUs on the fly? Because for 44-48 CUs they would probably require more SEs, which makes the entire design inefficient compared to a higher-clocked 36 CU one (as you will pay a price in die size in that case for minimal performance gain).
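For comparison's sake, here is the trade-off in raw throughput terms; a minimal sketch where the wider configurations' clocks are simply picked to land near the same TF figure, and die-size/SE costs are not modelled:

```python
# Two routes to roughly the same FP32 throughput: narrow-and-fast vs wide-and-slow.
def tflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000.0  # 64 ALUs per CU, 2 FLOPs per ALU per clock

print(tflops(36, 2.0))    # ~9.2 TF: 36 CUs at 2.0GHz, smaller die, higher clock
print(tflops(44, 1.675))  # ~9.4 TF: 44 CUs need only ~1.7GHz, but likely more SEs/area
print(tflops(48, 1.50))   # ~9.2 TF: 48 CUs at 1.5GHz, wider still
```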
 
I have zero doubt the Oberon chip has RT/VRS hardware. I had no doubt about it when it leaked either, so the leak is not discredited at all. In fact, with the new info regarding big perf/watt improvements, the only thing that was missing (2.0GHz) became very much expected.

This leak got all the other chips right (Renoir, Arden, MI100), so I think that will soon include Oberon too. The ONLY cliffhanger is the CU count. The native test suggests 36 CUs @ 2.0GHz is native, but I will leave some wiggle room here because we never got an exact number of shader units, as in the Arden case.

Before the confirmation of the perf-per-watt improvement of RDNA2 compared to RDNA1, I believed the PS5 would be 8, 9, 10 or 11.x Tflops. Now I believe it will be between 9 and 11.x Tflops, but I don't believe the PS5 will be 36 CUs at 2 GHz. If it is 9 Tflops, maybe 44 or 48 CUs is better. A good compromise.

There are at least two SoCs made by SONY: Ariel/Gonzalo (Navi 10 Lite, RDNA1) and another RDNA2 SoC.

Previously a lot of people assumed that the PS5 is RDNA1-based, so they thought the PS5 GPU is like Navi 10 with 40 CUs, and this matches the 36 CU (40 CUs in total?) test in the GitHub data.


Some people say SONY can't design two SoCs. But since we have concluded there are at least two SoCs, the chance of 9.2TF is very low. We know:

Gonzalo (RDNA1) 36CUs @1.8GHz

There is a very low chance they don't add any CUs and just choose to clock 0.2GHz higher, since SONY already makes another RDNA2 GPU.

The GPU must exceed 10TF for marketing reasons (9.2TF is a hard sell to core consumers).
The problem is how SONY pushes the PS5 above 10TF and how much they can achieve.



From the Bloomberg news we can say the PS5 still uses a "narrow and fast" design. If Odium is right, Sony seems to have overcome the heat at 2GHz, which is 11.26 TF (44 CUs). They are still pushing for a higher frequency.
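The 11.26 TF figure does check out under the usual formula; a quick sketch, with the 44 CUs @ 2GHz configuration being Odium's claim rather than anything confirmed:

```python
# 44 CUs x 64 ALUs x 2 FLOPs per ALU per clock x 2.0 GHz
print(44 * 64 * 2 * 2.0 / 1000)  # 11.264 TF
```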

We may have to wait a while to see the specs.
 
The GPU must exceed 10TF for marketing reasons (9.2TF is a hard sell to core consumers).
The problem is how SONY pushes the PS5 above 10TF and how much they can achieve.
Well, I'd say the price must meet consumer expectations, and then they try to get the most performance they can out of that profile. It's never been about power first and price second.
All of these consoles have to be designed with a price point in mind.

If Sony naturally targeted a lower price point, then you have to expect lower performance profiles. But that shouldn't be a slight against Sony, nor do I see it as any form of weakness. They set a price point they think they will find success at; I would trust their process. They have a great deal of experience in knowing how to sell a lot of PlayStations, and they have the leadership position going into next gen. I think they are very focused on making a product that will succeed, but that doesn't necessarily mean it has to meet hardcore enthusiasts' dreams of what acceptable power is.

Everyone has a different threshold for what acceptable power is.
 