Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Aren't all AMD GPUs in the 36+ CU range "butterfly" like the 5700, with shader engines and CU arrays distributed symmetrically to either side of the command processor and front end logic?

Yes, but one CU bank was also a different size/shape than the other on the Pro.

 
Really curious to see this PS5 APU... can't wait. It's really time Sony told us something solid about it.
 
Old games didn't stand the test of time. Now we've got a lot of artistic games from indie developers that are enjoyable no matter when you play them.
The last few generations of games have also reached a level of quality that can still be enjoyed.
It's not like the days when PS1 games were unbearable to play once the PS2 was released, and PS2 games were unbearable once the PS3 was released.
 
The Pro GPU was "butterfly" in the sense that it had half the CUs on either side of the front end, but the CU banks were not 100% symmetrical like on most AMD GPUs. The CUs in the right bank are narrower/taller.

To me that says there is probably an optimal way to lay out a GPU that enables/disables CUs, and it's just not ideal to enable/disable arbitrary CUs on a whim.

Of course that wouldn't restrict it to 36 CUs in itself, but maybe to a double/half rule. But when you then combine that with maintaining compatibility with PS4 games, it might make sense to stick with 36 CUs.

Xbox One X left, PS4 Pro right
[image: APUComp.jpg]
 
I don't see how a 48-CU design with an 8-core Zen 2 would be anywhere near the reticle limit for 7nm.
I think someone took the list of reasons given for using chiplets in AMD's Threadripper/Epyc presentations and used it to write that PS5 rumor.
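For a rough sense of the numbers, here's a back-of-the-envelope sketch using public die-size figures as anchors (Navi 10 at ~251 mm² for 40 CUs, a Zen 2 CCD at ~74 mm², a ~26 mm x 33 mm single-exposure reticle); nothing here is an official figure.

Code:
# Very rough back-of-the-envelope, using public die-size figures as anchors.
# Assumptions (not official): Navi 10 is ~251 mm^2 for 40 CUs at 7 nm, a Zen 2
# CCD is ~74 mm^2 for 8 cores + L3, and the single-exposure reticle limit is
# roughly 26 mm x 33 mm (~858 mm^2).

NAVI10_MM2 = 251.0        # whole Navi 10 die (40 CUs + front end + 256-bit GDDR6 PHY)
ZEN2_CCD_MM2 = 74.0       # 8 Zen 2 cores + 32 MB L3 (chiplet, includes IF links)
RETICLE_MM2 = 26.0 * 33.0 # ~858 mm^2 single-exposure limit

# Scale the GPU linearly per CU (pessimistic: this also scales the fixed-function
# front end and memory PHYs, which don't actually grow with CU count).
gpu_48cu = NAVI10_MM2 * 48 / 40
apu_estimate = gpu_48cu + ZEN2_CCD_MM2

print(f"48-CU GPU (linear scale): ~{gpu_48cu:.0f} mm^2")
print(f"+ 8-core Zen 2:           ~{apu_estimate:.0f} mm^2")
print(f"Reticle limit:            ~{RETICLE_MM2:.0f} mm^2")
print(f"Fraction of reticle used: ~{apu_estimate / RETICLE_MM2:.0%}")
# -> roughly 375 mm^2, i.e. well under half the reticle, and in the same
#    ballpark as the ~360 mm^2 Xbox One X APU at 16 nm.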
 
The Pro GPU was "butterfly" in the sense that it had half the CUs on either side of the front end, but the CU banks were not 100% symmetrical like on most AMD GPUs. The CUs in the right bank are narrower/taller.

To me that says there is probably an optimal way to lay out a GPU that enables/disables CUs, and it's just not ideal to enable/disable arbitrary CUs on a whim.

Of course that wouldn't restrict it to 36 CUs in itself, but maybe to a double/half rule. But when you then combine that with maintaining compatibility with PS4 games, it might make sense to stick with 36 CUs.

Xbox One X left, PS4 Pro right

I wouldn't say that most butterflies have significantly asymmetric wings, as far as the physical layout choices of the PS4 Pro's GPU go.
Perhaps there could be a functional reason for differences between the halves, if there were some requirement for additional logic or different gating on a CU that must support two modes versus a CU in the half that is only active in full mode. Whether there are sufficient wattage savings from different gating, given the relative insensitivity to power consumption, or enough justification for engineering two separate CU variants just for mode switches, isn't clear.
However, one reason for the layout change I can think of is that the PS4 Pro's GPU has a more lopsided space constraint than most discrete GPUs, and more shader engines than most other APUs. One half is surrounded by the CPU cores, uncore, and memory controllers, while the other has fewer neighbors.
If that outer half had the same physical layout as the other, it would extend further to the right and the patterned silicon would stop short of the upper die edge. If the CUs were instead laid out to be taller and thinner, they could fill in the die rectangle with no wasted space. The dimensions may be more favorable and AMD or Sony wouldn't be paying for dead silicon.

The Scorpio GPU might have had similar pressures if not for the wider memory bus, which would consume extra area and push blocks over to the space around the right side of the GPU.

As far as whether there's an ideal pattern for CU disablement: if you mean for yield recovery, defects are random, so there needs to be some element of arbitrariness. I think the chosen yield recovery was one spare CU per shader engine, in case random defects struck one or more of them.
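To put some numbers behind the "one spare CU per shader engine" idea, here's a toy yield model. Every figure in it (defect density, CU area, 4 shader engines of 10 physical CUs) is hypothetical, just to show why a single spare per engine recovers most of the dice that random defects would otherwise kill:

Code:
# Toy yield model, purely illustrative: assumes random point defects (Poisson),
# a made-up defect density and CU area, 4 shader engines x 10 physical CUs,
# and 1 spare CU per shader engine (so an SE survives if at most 1 CU is bad).
from math import comb, exp

DEFECT_DENSITY = 0.2   # defects per cm^2 (hypothetical)
CU_AREA_CM2 = 0.055    # ~5.5 mm^2 per CU (hypothetical)
SE_COUNT, CUS_PER_SE = 4, 10

p_cu_good = exp(-DEFECT_DENSITY * CU_AREA_CM2)

def p_se_ok(n=CUS_PER_SE, p=p_cu_good):
    """Probability a shader engine has at most one defective CU."""
    all_good = p ** n
    one_bad = comb(n, 1) * p ** (n - 1) * (1 - p)
    return all_good + one_bad

p_no_spares = p_cu_good ** (SE_COUNT * CUS_PER_SE)   # need every CU perfect
p_with_spares = p_se_ok() ** SE_COUNT                # 1 spare per SE

print(f"P(single CU good):        {p_cu_good:.4f}")
print(f"CU-array yield, 0 spares: {p_no_spares:.3f}")
print(f"CU-array yield, 1/SE:     {p_with_spares:.3f}")
# With these made-up inputs the CU array alone goes from ~64% to ~98% yield,
# which is the whole point of harvesting; it says nothing about the rest of the die.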
 
In 2012/2013, a 500GB 2.5" HDD, the exact one in the PS4, was ~$100. Today, a 1TB SSD is ~$120-130, and Sony's proposed solution is presented as a cost-saving approach vs. off-the-shelf parts. Even assuming that last part is BS, how does it end up as a $100 cost to Sony?
1TB seems to add up to $100 of NAND chips and a few dollars for a cheap controller; that's the contract price. The only cost cutting possible here is the controller, and maybe a $0.50 connector if they solder it onto the board.

The 500GB HDD in 2012/2013 was estimated to be a $35 contract price for consoles, but retail drives were more like $60 at the time, if you looked for deals on Amazon and Newegg. The Thailand floods at the end of 2011 didn't help the wild fluctuations at retail; contracts are not affected as much, depending on when they were signed. Retailers took advantage of the low availability and bumped the prices for themselves.
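A quick sanity check on that BOM math, using the rough per-GB NAND figure thrown around above; the controller, DRAM, and connector prices are just guesses for illustration:

Code:
# Quick sanity check on the storage BOM math above. The per-unit prices are
# the rough figures from this discussion plus guesses, not actual quotes.

GB = 1000  # drive capacity in GB

nand_per_gb   = 0.10   # ~$0.10/GB contract NAND (rough 2019 figure)
controller    = 4.00   # cheap SATA/NVMe controller, guess
dram_cache    = 3.00   # optional DRAM cache chip, guess
connector     = 0.50   # saved if the NAND is soldered straight to the board

ots_ssd       = nand_per_gb * GB + controller + dram_cache + connector
soldered_down = nand_per_gb * GB + controller   # custom controller, no connector/DRAM

print(f"1 TB off-the-shelf-style SSD BOM: ~${ots_ssd:.0f}")
print(f"1 TB soldered-down custom BOM:    ~${soldered_down:.0f}")
# Either way the NAND dominates: there isn't much room to "cost save" below ~$100.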
 
The Pro GPU was "butterfly" in the sense that it had half the CUs on either side of the front end, but the CU banks were not 100% symmetrical like on most AMD GPUs. The CUs in the right bank are narrower/taller.

To me that says there is probably an optimal way to lay out a GPU that enables/disables CUs, and it's just not ideal to enable/disable arbitrary CUs on a whim.

Of course that wouldn't restrict it to 36 CUs in itself, but maybe to a double/half rule. But when you then combine that with maintaining compatibility with PS4 games, it might make sense to stick with 36 CUs.

Why and how are the first and second sentences related?

Is there even any public documentation saying the right or the left half of the Pro's iGPU is the one being disabled? What if it's top vs. bottom? What if it's top-left + bottom-right vs. top-right + bottom left?

And even if that left vs. right theory is true, why assume it's somehow an optimal way? There are always two CUs that are turned off in all PS4 Pro chips, and they're not randomly selected. Which pair of disabled CUs provides better results in this case?


Some fresh BS from Reddit for your entertainment pleasure since things are getting kinda dry.
Well that post has 0 upvotes (meaning it's probably got negative total votes) for a reason.

The only way I could see AMD disabling 8 CUs / 4 WGPs is if they're super aggressively clocked and not many CUs can handle those clocks. But even then we'd probably be looking at very suboptimal power consumption and heat output.
 
The only way I could see AMD disabling 8 CUs / 4 WGPs is if they're super aggressively clocked and not many CUs can handle those clocks. But even then we'd probably be looking at very suboptimal power consumption and heat output.
I think disabling CUs is only for random defects? Speed yield would be more related to process strength and variation across large areas, if not the entire wafer. After all, that's why binning entire chips produces such a nice bell curve.
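A toy Monte Carlo along those lines, with made-up numbers, just to illustrate why CU harvesting doesn't buy much speed yield if variation is mostly die-to-die rather than CU-to-CU:

Code:
# Toy Monte Carlo illustrating the point above: if process variation is mostly
# wafer/die-level (global) rather than per-CU (local), then a chip's max clock
# is set by the die as a whole, and fusing off a couple of slow CUs barely
# helps. All numbers are made up for illustration.
import random

random.seed(0)
N_CHIPS, N_CUS = 10_000, 40
GLOBAL_SIGMA, LOCAL_SIGMA = 60.0, 10.0   # MHz, hypothetical
NOMINAL = 1800.0                          # MHz, hypothetical target clock

gain = []
for _ in range(N_CHIPS):
    die_mean = random.gauss(NOMINAL, GLOBAL_SIGMA)          # die-to-die variation
    cus = sorted(random.gauss(die_mean, LOCAL_SIGMA) for _ in range(N_CUS))
    fmax_all = cus[0]        # slowest CU limits the chip
    fmax_minus_4 = cus[4]    # pretend we fuse off the 4 slowest CUs
    gain.append(fmax_minus_4 - fmax_all)

print(f"Average Fmax gained by disabling the 4 slowest CUs: {sum(gain)/len(gain):.1f} MHz")
# Only on the order of 10 MHz here, against a ~60 MHz die-to-die spread: CU
# harvesting is really about defects, while speed binning is about the whole die.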
 
Why and how are the first and second sentences related?

Is there even any public documentation saying the right or the left half of the Pro's iGPU is the one being disabled? What if it's top vs. bottom? What if it's top-left + bottom-right vs. top-right + bottom left?

And even if that left vs. right theory is true, why assume it's somehow an optimal way? There are always two CUs that are turned off in all PS4 Pro chips, and they're not randomly selected. Which pair of disabled CUs provides better results in this case?

Well, I am just speculating, of course. But between the die shot and Cerny's words:
"First, we doubled the GPU size by essentially placing it next to a mirrored version of itself, sort of like the wings of a butterfly. That gives us an extremely clean way to support the existing 700 titles," Cerny explains, detailing how the Pro switches into its 'base' compatibility mode. "We just turn off half the GPU and run it at something quite close to the original GPU."

If I had to wager, I'd say it's left vs. right.
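For reference, the public PS4 / PS4 Pro figures line up with that quote; the base-compatibility clock is only described as "quite close to the original", so treat that number as approximate:

Code:
# Small illustration of the "butterfly" numbers in Cerny's quote. CU counts and
# clocks are the public PS4 / PS4 Pro figures; the base-compatibility clock is
# only described as "quite close to the original", so it's approximate here.

def gcn_tflops(cus, mhz, flops_per_cu_per_clock=128):  # 64 lanes x 2 ops (FMA)
    return cus * flops_per_cu_per_clock * mhz * 1e6 / 1e12

ps4_base   = gcn_tflops(18, 800)   # original PS4: 18 active CUs @ 800 MHz
pro_full   = gcn_tflops(36, 911)   # Pro mode: both "wings", 36 CUs @ 911 MHz
pro_compat = gcn_tflops(18, 800)   # Pro base mode: one wing off, ~original clock

print(f"PS4:              {ps4_base:.2f} TFLOPS")
print(f"Pro, full GPU:    {pro_full:.2f} TFLOPS")
print(f"Pro, compat mode: ~{pro_compat:.2f} TFLOPS (half the CUs, ~original clock)")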
 
Maybe half of the Pro was plain old PS4 CUs and the other half Vega-based.
That wouldn't fit with "essentially a mirrored version". The whole mirror thing is to make it easy to disable half of it in a clean way. There might be extra circuits and data paths required on the CUs that can be enabled/disabled, while the ones that are always on don't require them.
 
Maybe half of the Pro was plain old PS4 CUs and the other half Vega-based.

The Pro had to use all of the GPU, so I don't see how this would really work, unless the half that ran both base PS4 and Pro modes was a slight hybrid of sorts in terms of architecture.
 