I'm assuming the PS2 and PS3 values are arrived at by adding the areas of the CPU and graphics chips together. While it is true that there is increasing pressure to keep die size down, the trend is over-emphasized by using the PS2 and PS3, because they had two separate chips that saw much more reasonable yields than a single combined chip would have. The later APUs would save costs in other ways, but as single-chip solutions they would run into the non-linear relationship between area and cost more readily. Just food for thought...
Die sizes :
PS2 - 519mm2 ↓
PS3 - 493mm2 ↓
PS4 - 348mm2 ↓
Pro - 320mm2 ↓
PS5 - ???mm2 ?
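To put rough numbers on the yield point above, here is a minimal sketch assuming a simple Poisson yield model with a made-up defect density and an illustrative 500 mm² vs 2×250 mm² split (not actual PS2/PS3 chip areas); it only shows why one big die costs disproportionately more per good chip than two smaller dies of the same total area.

```python
import math

# Minimal sketch, assuming a simple Poisson yield model Y = exp(-D * A).
# Defect density and die areas are made-up illustration values, not real
# PS2/PS3 figures.

def yield_rate(area_mm2, defects_per_cm2):
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

def cost_per_good_die(area_mm2, defects_per_cm2):
    # Raw silicon cost scales ~linearly with area, but you only get to
    # sell the good dies, so cost per good die is area / yield.
    return area_mm2 / yield_rate(area_mm2, defects_per_cm2)

D = 0.3  # defects per cm^2 (hypothetical)

single = cost_per_good_die(500, D)     # one big combined chip
split = 2 * cost_per_good_die(250, D)  # CPU + GPU as two separate chips

print(f"one 500 mm^2 die   : {single:7.0f} (arbitrary cost units)")
print(f"two 250 mm^2 dies  : {split:7.0f}")
print(f"single-die premium : {single / split:.2f}x")
```

With those made-up inputs the single big die comes out a bit over twice as expensive per good chip, which is the kind of penalty a combined APU has to win back through packaging, board and integration savings.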
While overall true, the risk in an analysis that extends over such a time period and seems focused on the GPUs is that it misses certain trends. The PS2 through PS3 era covered the period of the GHz wars on the desktop, when transistor density scaling was generally coupled with performance scaling. The more dominant factor at the time was the reduction of the cost and performance barriers for the transistors, and it was around the period of the PS3 that the relatively easy scaling ended. CPU clocks in this comparison would show a steep dive from the PS3 to the PS4, possibly barely clawing back to parity with the next gen.
Clocks :
PS5 - ???GHz ?
Pro - 911MHz ↑
PS4 - 800MHz ↑
PS3 - 500MHz ↑
PS2 - 147MHz ↑
...as we go further down the manufacturing nodes, there is a clear trend of chip sizes getting smaller and smaller, while frequencies get higher and higher.
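Just to put percentages on the figures quoted above (PS5 left out, since its numbers are still unknown), here is a quick bit of arithmetic over those same values:

```python
# Generation-over-generation change in die size and GPU clock, using the
# figures quoted above (PS5 omitted since its numbers are still unknown).

die_mm2   = {"PS2": 519, "PS3": 493, "PS4": 348, "PS4 Pro": 320}
clock_mhz = {"PS2": 147, "PS3": 500, "PS4": 800, "PS4 Pro": 911}

gens = list(die_mm2)
for prev, cur in zip(gens, gens[1:]):
    area  = die_mm2[cur] / die_mm2[prev] - 1
    clock = clock_mhz[cur] / clock_mhz[prev] - 1
    print(f"{prev:7s} -> {cur:7s}: die {area:+5.0%}, GPU clock {clock:+5.0%}")
```

The die shrinks are modest each generation (-5%, -29%, -8%), while the clock gains front-load heavily in the early generations (+240%, +60%, +14%), which matches the point about the easy scaling era ending around the PS3.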
The GPU portions operate in a more modest range; even a supposed 2 GHz GPU is "modest" relative to where the CPUs go, but I'd be interested to see what the tradeoffs are at that point.
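For a feel of what those tradeoffs look like, here is a minimal sketch of the usual dynamic-power reasoning (power roughly proportional to unit count × V² × f, with the crude assumption that voltage has to rise roughly in proportion to frequency in this range); the unit counts and clocks are purely hypothetical, not leaked or official specs.

```python
# Rough wide-vs-fast comparison under classic dynamic-power scaling:
#   throughput ~ N_units * f
#   power      ~ N_units * V^2 * f, with V assumed ~proportional to f here.
# All numbers are hypothetical, chosen only so both configs have equal throughput.

def throughput(n_units, f_ghz):
    return n_units * f_ghz

def power(n_units, f_ghz, volts_per_ghz=0.5):
    v = volts_per_ghz * f_ghz
    return n_units * v * v * f_ghz

configs = {
    "wide & slow":   (40, 1.5),
    "narrow & fast": (30, 2.0),
}

for name, (n, f) in configs.items():
    print(f"{name:13s}: throughput {throughput(n, f):5.1f}, "
          f"power {power(n, f):5.1f} (arbitrary units)")
```

Under these crude assumptions the narrow-and-fast configuration burns noticeably more power for the same throughput, but it needs fewer units and therefore less area, which ties back to the die-size and yield discussion above.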
This kind of accounting has been disputed here before as not reflecting at least some projects, so it's not entirely clear whether it is fully applicable. One possibility is that, if this is in a mobile context, the feature set and number of components are not being kept constant across all these nodes, so there are other design knobs cranking up the cost that aren't directly related to the node itself, other than the fact that the node gives designers the room to toss a lot more interacting components into the same chip and offer more product services or functions than before. A large number of these cost bars may be partially abstracted away by the foundry or semi-custom designer, where pre-built or tweaked IP gets a lot of this out of the way from the POV of Sony or Microsoft.
When people ask themselves why Sony would go for a narrower and faster design (besides BC considerations - which are paramount to next-gen success and the ability to transition PS4 players to PS5) - here is your answer.
AMD showed signs of significant leakage and potentially cross-contamination between teams back with the current gen, when it couldn't contain the existence of the semi-custom group, much less the two major clients. In terms of sheer volume of data, the GitHub leak is the most egregious: it exposed 3 consoles + 2 AMD GPUs.
If we draw parallels to previous gens, once we got detailed CPU and GPU info the final hardware ended up pretty much the same outside of clocks. This is the most detailed leak ever! Memory is the one area where we've seen late upgrades, and that's because it is possible to do so.
It was not a good look for AMD's semi-custom group that there were leaks, and worse that they were comparative leaks, which should be much harder to have happen with appropriate compartmentalization.
I thought AMD had done more to tamp down on this, but if the data points are accurate and from AMD, there's not a lot of benefit to firewalled teams if it all gets tossed into an externally accessible common repository.
I don't know if the semi-custom teams should have their data going to the same person. Probably wouldn't have that person be an intern, either. If it was done by one or more low-level employees, there are organizational lapses and questions about what else AMD is sharing internally. That's not even getting into the class of deficiencies that might allow even segregated data of this nature to be found externally.
So I've read pretty much the entire repo and here are a few points :
- It's as legit as it can be, and a massive fuck up by an intern at AMD (I don't even think this guy knew where these chips would actually end up)
I'm not certain Microsoft said that it used Polaris. The Digital Foundry interview said it took on Polaris features, but they only mentioned bandwidth compression and improved geometry/quad scheduling. Those elements would be more freely transplanted, much as the PS4 Pro similarly transplanted compression and workload scheduling from Vega.
If the XBoneX uses a Polaris GPU (which it probably does) then it does "have FP16". Polaris has instructions for FP16 that use only the cache footprint FP16 needs instead of having to perform a full "promotion" of every variable to FP32, i.e. it takes less bandwidth to do FP16 operations on the XBoneX, so it's still better to use FP16 when you can. It also takes less power per operation, though that's less relevant to a console.
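As a back-of-the-envelope illustration of the bandwidth point (the resolution and channel count here are just example values, not anything from the consoles):

```python
# Rough illustration of why FP16 storage helps even without double-rate math:
# half-size values halve the bytes moved per pass over a buffer.
# Resolution and channel count are arbitrary example values.

width, height, channels = 3840, 2160, 4     # e.g. a 4K RGBA intermediate buffer

bytes_fp32 = width * height * channels * 4  # 4 bytes per FP32 value
bytes_fp16 = width * height * channels * 2  # 2 bytes per FP16 value

print(f"FP32 buffer: {bytes_fp32 / 2**20:6.1f} MiB")
print(f"FP16 buffer: {bytes_fp16 / 2**20:6.1f} MiB")
print(f"bytes saved per pass over the buffer: {1 - bytes_fp16 / bytes_fp32:.0%}")
```

Even at the same ALU rate, every read and write of that data moves half the bytes, which is where the bandwidth and power savings come from.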