Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Just food for thought...

Die sizes:

PS2 - 519mm2 ↓
PS3 - 493mm2 ↓
PS4 - 348mm2 ↓
Pro - 320mm2 ↓
PS5 - ???mm2 ?
I'm assuming the PS2 and PS3 values are arrived at by adding the areas of the CPU and graphics chips together. While it is true that there is increasing pressure to keep die size down, the trend is over-emphasized using the PS2 and PS3 because they had two separate chips that experienced much more reasonable yields versus a single combined chip. The later APUs would save costs in other ways, but as single-chip solutions would run into the non-linear relationship between area and cost more readily.
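To put a number on that non-linearity, here's a minimal sketch using a simple Poisson defect-yield model; the defect density and die areas are illustrative assumptions, not actual figures for any of these consoles:

```python
import math

# Illustrative assumption, not a real process figure.
DEFECTS_PER_MM2 = 0.002

def yield_fraction(area_mm2: float) -> float:
    """Poisson defect model: chance a die of this area has no defects."""
    return math.exp(-area_mm2 * DEFECTS_PER_MM2)

def silicon_per_good_die(area_mm2: float) -> float:
    """Silicon area paid (good + scrapped) per good die, in mm2-equivalents."""
    return area_mm2 / yield_fraction(area_mm2)

# PS2/PS3 style: CPU and GPU as two separate ~250mm2 dies.
two_chips = 2 * silicon_per_good_die(250)   # ~824 mm2 per good pair
# APU style: one combined ~500mm2 die.
one_chip = silicon_per_good_die(500)        # ~1359 mm2 per good die

print(f"two 250mm2 dies: {two_chips:.0f} mm2 of silicon per good set")
print(f"one 500mm2 die:  {one_chip:.0f} mm2 of silicon per good die")
```

Same total logic area either way, but a defect on the big die scraps twice the silicon, so the cost per good chip grows faster than the area does.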

Clocks:

PS5 - ???GHz ?
Pro - 911MHz ↑
PS4 - 800MHz ↑
PS3 - 500MHz ↑
PS2 - 147MHz ↑

...as we go further down the manufacturing nodes, there is a clear trend of chip sizes getting smaller and smaller, while frequencies get higher and higher.
While overall true, the risk in an analysis that extends over such a time period and seems focused on the GPUs is that it misses certain trends. The PS2 through PS3 era covered the period of the GHz wars on the desktop, when transistor density scaling was generally coupled with performance scaling. The more dominant factor at the time was the reduction of the cost and performance barriers for the transistors, and it was around the period of the PS3 that the relatively easy scaling ended. The CPU clocks in this comparison would show a steep dive from the PS3 to the PS4, and possibly barely claw back to parity with the next gen.
The GPU portions operate in a more modest range; a supposed 2 GHz GPU is still "modest" relative to where the CPUs go, but I'd be interested to see what the tradeoffs are at that point.

[image: nano3.png, a chart of design costs by process node]


When people ask themselves why Sony would go for a narrower and faster design (besides BC considerations, which are paramount to next-gen success and the ability to transition PS4 players to PS5) - here is your answer.
This kind of accounting has been disputed here before as not reflecting at least some projects, so it's not entirely clear if this is fully applicable. One possibility is that if this is in a mobile context, the feature set and number of components is not being kept constant across all these nodes, so there are other design knobs that are cranking up the cost that aren't directly related to the node--other than the fact that the node gives the designers the room to toss a lot more interacting components into the same chip and offer more product services or functions than before. A large number of these bars may be partially abstracted by the foundry or semi-custom designer, where pre-built or tweaked IP gets a lot of this out of the way from the POV of Sony or Microsoft.


In terms of sheer volume of data, the github leak is the most egregious. It leaked 3 consoles + 2 AMD GPUs.

If we draw parallels to previous gens, once we got detailed CPU and GPU info, the final hardware was pretty much the same outside of clocks. This is the most detailed leak ever! Memory is the one area we've seen late upgrades to, and that's because it is possible to do so.
AMD showed signs of significant leakage and potentially cross-contamination between teams back with the current gen, when it couldn't contain the existence of the semi-custom group much less the two major clients.
It was not a good look for AMD's semi-custom that there were leaks, worse because they were comparative leaks that should be much harder to have happen with appropriate compartmentalization.
I thought AMD had done more to tamp down on this, but if the data points are accurate and from AMD there's not a lot of benefit from firewalled teams if it all gets tossed into an externally accessed common repository.

So I've read pretty much the entire repo and here are a few points:
  • It's as legit as it can be, and a massive fuck up by an intern at AMD (I don't even think this guy knew where these chips would actually end up)
I don't know if the semi-custom teams should have their data going to the same person. Probably wouldn't have that person be an intern either. If it was done by one or more low-level employees, there are organizational lapses and questions about what else AMD is sharing internally. That's not going into the class of deficiencies that might allow even segregated data of this nature to be found externally.

If the XBoneX uses a Polaris GPU (which it probably does) then it does "have FP16". Polaris has instructions for FP16 that use only the cache needed for FP16 instead of having to perform a full "promotion" of every variable to FP32; i.e., it takes less bandwidth to do FP16 operations on the XBoneX, so it's still better to use FP16 when you can. It also takes less power per operation, though that's less relevant to a console.
I'm not certain Microsoft said that it used Polaris. The Digitalfoundry interview said it took on Polaris features, but they only mentioned bandwidth compression and improved geometry/quad scheduling. Those elements would be more freely transplanted, as the PS4 Pro similarly transplanted compression and workload scheduling from Vega.
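The bandwidth half of that claim is easy to sanity-check independent of what the ALUs do: storing values as FP16 halves the bytes moved for the same element count. A trivial sketch (the buffer size is a made-up example):

```python
import numpy as np

# Hypothetical 1920x1080 buffer of RGBA values; size is illustrative only.
pixels = 1920 * 1080 * 4

fp32_bytes = np.zeros(pixels, dtype=np.float32).nbytes
fp16_bytes = np.zeros(pixels, dtype=np.float16).nbytes

print(f"FP32 buffer: {fp32_bytes / 2**20:.1f} MiB")  # ~31.6 MiB
print(f"FP16 buffer: {fp16_bytes / 2**20:.1f} MiB")  # ~15.8 MiB, half the traffic
```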
 
So your friends didn't message you back and say dude, Vega only goes up to 13TF.

Same friends didn't tell you dude, RDNA only has 40 CUs?

Now these friends tell you dude, you should go bet your mortgage on Xbox!!

This is a story that I made up. It's from a developer insider POV.

Disregard if you don't like good stories.
 
So I looked at the leak. In all "regression test results", there's always a tab for "igpu" and another for "igpu_295", with the same tests.
Dual 36CU GPU double confirmed for PS5.



I'm not certain Microsoft said that it used Polaris. The Digitalfoundry interview said it took on Polaris features, but they only mentioned bandwidth compression and improved geometry/quad scheduling. Those elements would be more freely transplanted, as the PS4 Pro similarly transplanted compression and workload scheduling from Vega.

Is there any reason to believe Scorpio wouldn't use Polaris, or rather that Scorpio would be the only new-and-not-shrunk 14nm AMD GPU to ever exist without native FP16 support for bandwidth and power savings?
I get that Microsoft didn't mention that small feature by name, but I wouldn't expect them to list all the GPU specs in an interview with a mainstream-ish publication either.


What was the saying? The uninformed always boast about how sure they are of something, and the wise are always full of doubt.
Hey... you shouldn't talk of @Proelite like that...
 
Ariel, Gonzalo, Prospero... Shakespeare's The Tempest
  • Ariel - first version of chip / leaked in January - confirmed in AMD Github
  • Gonzalo - AMD's codename for dev kit chip (?) / leaked in January - 2 revisions, ES and QA (Jan/April)
  • Prospero - dev kit name / leaked by Gizmodo, confirmed by DF's sources
Oberon, Flute... Shakespeare's A Midsummer Night's Dream
  • Oberon - revision of Ariel / leaked in June - AMD Github
  • Flute - AMD's codename for retail chip (?) / leaked in June - benchmark, now deleted
 
Ariel, Gonzalo, Prospero... Shakespeare's The Tempest
  • Ariel - first version of chip / leaked in January - confirmed in AMD Github
  • Gonzalo - AMD's codename for dev kit chip (?) / leaked in January - 2 revisions, ES and QA (Jan/April)
  • Prospero - dev kit name / leaked by Gizmodo, confirmed by DF's sources
Oberon, Flute... Shakespeare's A Midsummer Night's Dream
  • Oberon - revision of Ariel / leaked in June - AMD Github
  • Flute - AMD's codename for retail chip (?) / leaked in June - benchmark, now deleted
Nice. The Oberon chip's ambition is quite different from Ariel's, as it has about 100GB/s more GDDR6 bandwidth (measured).

I think we don't know if Prospero has the Ariel or Oberon iGPU.
 
Nice. The Oberon chip's ambition is quite different from Ariel's, as it has about 100GB/s more GDDR6 bandwidth (measured).

I think we don't know if Prospero has the Ariel or Oberon iGPU.
I think 448GB/s is too low for both an 8-core Zen 2 and a 5700 XT equivalent with RT. The first Ariel likely had 14Gbps chips, but 16-18Gbps was always the target.
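Those figures fall straight out of bus width × data rate. A minimal sanity check in Python; the 256-bit bus is my assumption, implied by 448GB/s at 14Gbps:

```python
def gddr6_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s: bus width in bits times per-pin data rate, over 8."""
    return bus_width_bits * gbps_per_pin / 8

for rate in (14, 16, 18):
    print(f"256-bit @ {rate}Gbps: {gddr6_bandwidth_gbs(256, rate):.0f} GB/s")
# 14Gbps -> 448 GB/s, 16Gbps -> 512 GB/s, 18Gbps -> 576 GB/s
```

Note the 16-18Gbps range also lines up with the "about 100GB/s more" measured on Oberon versus a 448GB/s Ariel.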
 
Btw, I think we should seriously think about how much power we budget for an 8-core Zen 2 with 8MB of cache in these consoles.


They could go well below the assumed 35W. I think laptop chips (as well as consoles) will profit from the improved N7P process and a more mature node.

A 200W console could fit ~150W worth of GPU in there if we assume 20W is enough for Zen 2 @ 3.2GHz with 8MB of cache.
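A back-of-the-envelope version of that budget, using the numbers above plus a hypothetical allowance for everything that isn't the APU:

```python
# All values in watts; CPU and total are the post's assumptions,
# the misc allowance is my own guess for GDDR6, SSD, fans, PSU loss, etc.
console_total = 200
cpu_zen2_8c_3_2ghz = 20
memory_io_misc = 30  # hypothetical

gpu_budget = console_total - cpu_zen2_8c_3_2ghz - memory_io_misc
print(f"GPU power budget: ~{gpu_budget}W")  # ~150W
```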
 
Is there any reason to believe Scorpio wouldn't use Polaris, or rather that Scorpio would be the only new-and-not-shrunk 14nm AMD GPU to ever exist without native FP16 support for bandwidth and power savings?
I get that Microsoft didn't mention that small feature by name, but I wouldn't expect them to list all the GPU specs in an interview with a mainstream-ish publication either.
Volcanic Islands brought a number of architectural changes, from operations that could use FP16 without converting to FP32 first, to scalar memory writes, sub-word addressing, vector register indexing, and cross-lane operations.
VI and later did improve how compute wavefronts could be scheduled or context switched.
Absence of public disclosure isn't the same as it not being there, but I think some of those would be on the same order of importance as the improved quad scheduling that was disclosed.
These should provide some improvement, but also require re-evaluating the microarchitecture in terms of the old and new software that uses it, and if there are any customizations that need to be adjusted for it.

Volcanic Islands also dropped a number of operations from Sea Islands' ISA or changed some of their behaviors, which would add to the complexity of integrating it into a platform that theoretically is transparently compatible with Sea Islands--or at least the version Microsoft made of it.
It's not impossible to handle, but I tend to err on the side of not doing additional work if most of the previously mentioned upsides are not in evidence.

A third way to get additional FP16 is to have it retrofitted into the architecture like Sony did, just without opting for as invasive a change as 2x rate.


In the igpu tab there are only Ariel A0 and Oberon A0. In igpu_295 there is a second Oberon revision - B0.

*snipped linkedin screenshot*
I feel like AMD should have done more since the last spate of LinkedIn disclosures of internal projects to rein in how freely employees name and describe them in public. If the working theory is that all those projects do not belong to the same semi-custom client, maybe don't publicly document a place where AMD has shared staff with potentially too much awareness of supposedly separate projects?

From what I remember the 1X does have FP16, just not RPM.
People usually conflate the two.
So it does help with register pressure and slightly with bandwidth, I believe.
Such as FP16 VALU operations? There's a more limited form of packing FP16 values into a single register that dates back to Sea Islands.
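For illustration, here's what packing two FP16 values into one 32-bit register looks like at the bit level. The GPU does this in hardware; this numpy sketch just shows the layout (RPM would then operate on both halves with one instruction, while plain FP16 support only gets the storage benefit):

```python
import numpy as np

lo, hi = np.float16(1.5), np.float16(-2.25)

# View the FP16 bit patterns as integers and pack them into one 32-bit word:
# low half holds the first value, high half the second.
packed = np.uint32(lo.view(np.uint16)) | (np.uint32(hi.view(np.uint16)) << np.uint32(16))
print(f"packed register: 0x{int(packed):08x}")  # 0xc0803e00

# Unpack both halves and confirm the round trip.
out_lo = np.uint16(packed & np.uint32(0xFFFF)).view(np.float16)
out_hi = np.uint16(packed >> np.uint32(16)).view(np.float16)
print(out_lo, out_hi)  # 1.5 -2.25
```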
 
I feel like AMD should have done more since the last spate of LinkedIn disclosures of internal projects to rein in how freely employees name and describe them in public. If the working theory is that all those projects do not belong to the same semi-custom client, maybe don't publicly document a place where AMD has shared staff with potentially too much awareness of supposedly separate projects?

Maybe this is where all the confusion lies. AMD seems to be contracting Soctronics to perform chip verification. These codenames may be internal to Soctronics and not AMD. Allotting different sets of codenames to external contractors is an easy way to trace outside leaks.
 
It's just a "baseless next generation rumor without a technical merit".

I guess there is a rumored PS5 reveal in early February, which may or may not be real. I assume he means that. The supposed insider user "Osirisblack" (I think that's the name) on Neogaf is a source for that.

However, I doubt they'll reveal any teraflop specs even if the event occurs.

Early February PS5 reveal does sound reasonable to me.
 
I guess there is a rumored PS5 reveal in early February, which may or may not be real. I assume he means that. The supposed insider user "Osirisblack" (I think that's the name) on Neogaf is a source for that.

However, I doubt they'll reveal any teraflop specs even if the event occurs.

Early February PS5 reveal does sound reasonable to me.

Why? It would be roughly a repeat of last gen.
 