Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Shader Engine and Shader Units are different things. One Shader Engine consists (in RDNA1 anyway) of 2 Shader Arrays, one Shader Array has several CUs (depends on chip/configuration how many) and other blocks in it.
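
For anyone keeping score, here's that hierarchy as a quick sketch (the numbers are just Navi 10's known full-die configuration, nothing console-specific):

```python
# Rough model of the RDNA1 block hierarchy: chip -> Shader Engines (SE)
# -> Shader Arrays (SA) -> Workgroup Processors (WGP, a.k.a. dual CU).
NAVI10 = {
    "shader_engines": 2,
    "arrays_per_engine": 2,  # RDNA1 pairs 2 SAs per SE
    "wgps_per_array": 5,     # Navi 10: 5 WGPs (10 CUs) per SA
}

def total_cus(cfg):
    """Total physical CUs: SEs x SAs/SE x WGPs/SA x 2 CUs per WGP."""
    return (cfg["shader_engines"] * cfg["arrays_per_engine"]
            * cfg["wgps_per_array"] * 2)

print(total_cus(NAVI10))  # 40 physical CUs on a full Navi 10 die
```
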
oh he was looking at the big blocks.

So isn't it like 2-4 or something like that then?
 
Well, if MS is using a controller that's already used in millions of drives, that will be cheaper than something being used in the thousands or hundreds of thousands of PS5s at the start.

On a production chip level, yeah. But Microsoft did not spend years R&D'ing a controller and patenting it. Those are probably not insignificant sunk costs. Hardware design and manufacture is Sony's bread and butter, though.

That means it's cheaper for Phison to manufacture, not necessarily cheaper for Microsoft to licence. Microsoft is not paying cost price; companies who licence their tech to other companies are not a charity :nope:

I could be wrong because it depends on volume, but usually off-the-shelf components have their R&D costs absorbed already, spread across a massive client base.

Regardless, Sony's 12-channel controller is 3 times the size of the Phison that MS is using. 12ch is an expensive silicon footprint versus 4ch.
MS’s controller is a custom part from Phison. This was confirmed by one of their engineer’s LinkedIn.

https://www.tweaktown.com/news/7010...ss-pcie-4-nvme-up-3-7gb-sec-speeds/index.html
 
You know, I might have boobed (not the first time).

I can't actually find confirmation of the number of shader engines in PS5 and XSX now that I'm looking again. I was sure I'd seen it...

Anyone?
Given the console budget, it's more likely to me that they're both configured similarly to Navi 10 i.e. 4 SAs, 2 SEs. The overhead for additional front-end is maybe not worth replicating to each SA.

Oberon = 5 WGP per SA
Anaconda = 7 WGP per SA

(I'd chuckle if Lockhart ends up being 3 WGP per SA)
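
For what it's worth, those per-SA counts line up with the announced active CU numbers (52 for XSX, 36 for PS5 are public; the 2 SE / 4 SA layout is the guess being checked):

```python
# Check the 2 SE / 4 SA guess against the announced active CU counts.
def physical_cus(shader_arrays, wgp_per_sa):
    return shader_arrays * wgp_per_sa * 2  # 2 CUs per WGP

oberon = physical_cus(4, 5)    # 40 physical CUs -> 36 active (PS5)
anaconda = physical_cus(4, 7)  # 56 physical CUs -> 52 active (XSX)

# Both consoles then fit a "disable 2 WGPs (4 CUs) for yield" pattern:
print(oberon - 36, anaconda - 52)  # 4 4
```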
 
I believe it's 64 shader units per CU, so...
3328 for XSX (52 CUs)
and
2304 for PS5 (36 CUs)
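
Spelled out with the announced active CU counts (64 ALU lanes per CU is the standard RDNA figure):

```python
# "Shader units" (stream processors) = active CUs x 64 ALU lanes per CU.
LANES_PER_CU = 64

for name, active_cus in [("XSX", 52), ("PS5", 36)]:
    print(name, active_cus * LANES_PER_CU)
# XSX 3328
# PS5 2304
```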

Sorry, I meant shader engines.

The 5700 has 2 shader engines, with 9 active dual compute units in each. (Presumably the number of active DCUs in each shader array within a shader engine doesn't need to be symmetrical.)

I'm trying to get my head around page six of this bad boy, wrt next gen consoles.

https://www.amd.com/system/files/documents/rdna-whitepaper.pdf

If you have to disable an entire DCU, and if all shader engines are required to have the same number of active DCUs, then it's looking like only 2 shader engines for both consoles .....
 
It's a publicly available part in their line up. The "custom" part, if any, has to be the firmware.
AFAIK, there's no commercial part available based around it. I don't think it's unreasonable that MS drove some of the requirements for the design, given the timeline largely points to Scarlett arriving before any commercial part.
 
AFAIK, there's no commercial part available based around it. I don't think it's unreasonable that MS drove some of the requirements for the design, given the timeline largely points to Scarlett arriving before any commercial part.
It's literally an off-the-shelf part according to Phison; it's a generic 4-channel DRAM-less controller, designed for cost cutting:
https://www.tweaktown.com/image.php...less-pcie-4-nvme-up-3-7gb-sec-speeds_full.jpg

There are no devices using it yet because it's a new part, and the XSX isn't even coming out for another 6 months. Sony was the first to use 8Gbit GDDR5 in the PS4 revision (no clamshell); that doesn't mean Samsung made it custom for them.
 
Sorry, I meant shader engines.

The 5700 has 2 shader engines, with 9 active dual compute units in each. (Presumably the number of active DCUs in each shader array within a shader engine doesn't need to be symmetrical.)

I'm trying to get my head around page six of this bad boy, wrt next gen consoles.

https://www.amd.com/system/files/documents/rdna-whitepaper.pdf

If you have to disable an entire DCU, and if all shader engines are required to have the same number of active DCUs, then it's looking like only 2 shader engines for both consoles .....

I believe Navi 14 (RX 5500) is configured as a single SE with 11 of 12 DCUs active (maybe Apple scooped the fully enabled ones). AMD's site mentions 32 ROPs, so it has to have 2 shader arrays, since it's only up to 16 ROPs (4 RBs) per array.

Symmetry is probably more important between SE, but I'm not entirely sure how work distribution is affected between asymmetric SAs within the SE.
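
The ROP-counting step in that inference, written out (the 16-ROPs-per-SA ceiling is RDNA1's 4 RBs x 4 ROPs per array):

```python
# Infer the minimum number of shader arrays from the advertised ROP count.
import math

ROPS_PER_SA_MAX = 4 * 4  # 4 render backends x 4 ROPs each

def min_shader_arrays(total_rops):
    return math.ceil(total_rops / ROPS_PER_SA_MAX)

print(min_shader_arrays(32))  # Navi 14 / RX 5500: 32 ROPs -> at least 2 SAs
print(min_shader_arrays(64))  # Navi 10 / RX 5700: 64 ROPs -> at least 4 SAs
```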
 
Given the console budget, it's more likely to me that they're both configured similarly to Navi 10 i.e. 4 SAs, 2 SEs. The overhead for additional front-end is maybe not worth replicating to each SA.

Oberon = 5 WGP per SA
Anaconda = 7 WGP per SA

(I'd chuckle if Lockhart ends up being 3 WGP per SA)

Yeah, would be funny if Lockhart SAs ended up better fed than on either of the big boys! o_O

It's a publicly available part in their line up. The "custom" part, if any, has to be the firmware.

Possibly with an OS-controlled section of the drive set to work as SLC! Wouldn't be the first time for Phison controllers:

https://www.anandtech.com/show/15442/enmotus-midrive-rethinking-slc-caching-for-qlc-ssds
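
For scale, the capacity cost of pinning a section like that (rule-of-thumb arithmetic only; the 16GB region size is made up for illustration, and TLC NAND is an assumption):

```python
# A static SLC region on TLC NAND stores 1 bit per cell instead of 3,
# so it consumes 3x its advertised size in raw capacity.
BITS_PER_CELL_TLC, BITS_PER_CELL_SLC = 3, 1

def raw_tlc_consumed(slc_region_gb):
    return slc_region_gb * BITS_PER_CELL_TLC / BITS_PER_CELL_SLC

print(raw_tlc_consumed(16))  # a hypothetical 16GB SLC section costs 48GB of TLC
```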
 
Yeah, would be funny if Lockhart SAs ended up better fed than on either of the big boys! o_O

Yeah, assuming it sticks to two SEs.

It'd be more important for Lockhart not to be totally gimped on the geometry side, since geometry throughput matters more at lower resolutions for a given set of model assets, and not everything can be LOD'ed to hold a given triangle:pixel density.

They could even just halve the number of ROPs per SA for Lockhart to save some die space there. Going by the Navi 10 die shot, eight DCUs should be in the region of 32mm^2, while each 64-bit MC is roughly 13mm^2. If they simply shave off 16 DCUs and 128 bits of bus, that's about 90mm^2 off of Anaconda's size. Hopefully they don't gimp the bandwidth much further (<192-bit), but 12GB would be a rather simple configuration with a 6x2GB setup.

Maybe LH is in the region of 240-250mm^2 when all is said and done?
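
Putting numbers on that back-of-the-envelope (per-block areas are the die-shot estimates above; the 360.4mm^2 Anaconda figure is MS's quoted XSX die size):

```python
# Back-of-the-envelope die savings for Lockhart vs Anaconda.
DCU_AREA_MM2 = 32.0 / 8   # ~4mm^2 per dual CU from the Navi 10 die shot
MC64_AREA_MM2 = 13.0      # ~13mm^2 per 64-bit memory controller

savings = 16 * DCU_AREA_MM2 + 2 * MC64_AREA_MM2  # drop 16 DCUs + 128-bit of bus
print(savings)            # ~90mm^2

ANACONDA_MM2 = 360.4      # MS's quoted XSX die size
print(ANACONDA_MM2 - savings)  # ~270mm^2, before trimming ROPs etc. toward 240-250
```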
 
It's literally an off-the-shelf part according to Phison; it's a generic 4-channel DRAM-less controller, designed for cost cutting:
https://www.tweaktown.com/image.php...less-pcie-4-nvme-up-3-7gb-sec-speeds_full.jpg

There are no devices using it yet because it's a new part, and the XSX isn't even coming out for another 6 months. Sony was the first to use 8Gbit GDDR5 in the PS4 revision (no clamshell); that doesn't mean Samsung made it custom for them.
Not the same thing. GDDR5 is produced compliant to a JEDEC spec. This is in contrast to a controller that adheres to a standard but whose implementation is entirely NRE (non-recurring engineering) on the manufacturer's part.

If it was a completely off-the-shelf part with no customization needed, there’d be no need for Phison to direct independent engineering effort towards it.
 
Not the same thing. GDDR5 is produced compliant to a JEDEC spec. This is in contrast to a controller that adheres to a standard but whose implementation is entirely NRE (non-recurring engineering) on the manufacturer's part.

If it was a completely off-the-shelf part with no customization needed, there’d be no need for Phison to direct independent engineering effort towards it.
There's no indication engineering effort was put into it specifically for MS. It has no particular improvement over its predecessor other than PCIe 4.0. The only thing visible is that they have a prominent client buying it.

https://www.anandtech.com/show/1472...18-pcie-40-ssd-controller-up-to-7-gbs-nvme-14

"The PS5019-E19T mainstream controller will be a quick follow-up to the E13T that is currently in production but has not yet shipped in retail products."

"The new PS5019-E19T will be based on the PS5013-E13T (one Arm Cortex-R5 core, four NAND channels, 28nm technology), but featuring a new PCIe 4.0 x4 PHY and thus enabling cost-effective yet fast SSDs."
 
There's no indication engineering effort was put into it specifically for MS. It has no particular improvement over its predecessor other than PCIe 4.0. The only thing visible is that they have a prominent client buying it.

https://www.anandtech.com/show/1472...18-pcie-40-ssd-controller-up-to-7-gbs-nvme-14

"The PS5019-E19T mainstream controller will be a quick follow-up to the E13T that is currently in production but has not yet shipped in retail products."

"The new PS5019-E19T will be based on the PS5013-E13T (one Arm Cortex-R5 core, four NAND channels, 28nm technology), but featuring a new PCIe 4.0 x4 PHY and thus enabling cost-effective yet fast SSDs."
You don’t find it odd that he would single it out if it was completely off the shelf with no engineering effort from Phison?
 
You don’t find it odd that he would single it out if it was completely off the shelf with no engineering effort from Phison?
Not really, because saying "I worked on a low-end chip used by ADATA" doesn't really sound as good on a resume.
 
You don’t find it odd that he would single it out if it was completely off the shelf with no engineering effort from Phison?

The two decompressors are probably on the controller too. I think that's the reason they need a PCIe 4.0 bus: to be able to go above 4GB/s with uncompressed data. Not 100% sure; to be confirmed once someone opens the console.

Edit: I don't think it will show up in any other Phison controller.
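
The bus arithmetic behind that guess (the 2.4GB/s raw and 4.8GB/s typical-compressed figures are MS's public XSX numbers; the link rates are standard PCIe effective bandwidths):

```python
# If decompression happens on the SSD controller, the link has to carry
# the *decompressed* stream, which outruns PCIe 3.0 x4.
PCIE3_X4_GBPS = 3.94   # effective, 8 GT/s x 4 lanes, 128b/130b encoding
PCIE4_X4_GBPS = 7.88   # effective, 16 GT/s x 4 lanes

raw = 2.4              # GB/s, XSX raw SSD throughput
decompressed = 4.8     # GB/s, MS's "typical" figure after decompression

print(decompressed <= PCIE3_X4_GBPS)  # False: PCIe 3.0 x4 can't carry it
print(decompressed <= PCIE4_X4_GBPS)  # True: PCIe 4.0 x4 has headroom
```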
 