Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

Can someone create a plausible BOM for this system? Should be a ~Radeon VII-size GPU, a Zen CPU, and 24 GB of GDDR6, which all sounds jolly expensive to me. 330 mm² for the GPU. 200 mm² for a Threadripper-size CPU? Shrunk if on 7nm.

If Sony or MS ship an 8c/16t Zen 2, I'd bet money on them using the exact same chiplet that's in AMD's CPUs. Sure, it costs some extra power for the link between the CPU and the IO/GPU chiplets, but using the same chiplet for the CPU means they get to bin from the set of all the Zen 2 chips AMD will make. This buys them more performance, less power, and most importantly, much lower cost than shipping a huge monolithic chip that's a write-off if there is a single flaw in a CPU core.

I think the GPU could be smaller than the VII, because there's no need for FP64 support, but broadly in the same class. Assuming this launches for the holiday sales in 2020, 330mm^2 of 7nm would not be terrible.

$10 per GB of GDDR6 x24 for $240 of RAM??

This is the part I find not so credible. The only way that would happen is if DRAM prices nosedived. And a 384-bit bus would add a lot to the BOM even after the chips get cheap.
 
DRAM spot price is already $5.9/GB on average and continues to drop. There is no way GDDR6 is going to be $10/GB about 18 months from now. The cost difference between DDR and GDDR isn't that big, as we saw from the PS4 vs. XB1 launch BOMs.

$5/GB would probably allow 24GB in a $499 console. It's much more within reach than some other predictions, like a 2TB SSD, which would require a 4x price drop in the next 18 months.
 
880 GB/s?!... I think this huge bandwidth is needed for big RAM amounts, otherwise the extra RAM goes unused. If true, those specs are great. Let's hope...
 
Yes you would. There isn't GDDR6 fast enough.

Current top end is 16 Gbps per pin. I don't think any actual products are even past 14.

This "dev kit" couldn't make 880 on anything less than a 512-bit bus.

I'm not basing this on what can be delivered today; I'm looking ahead to what could come next year. The fake leak was "revealing" what the final spec would be, not what the devkits have. Current devkits could have a Radeon VII in them for all we know.
 
Nope! The fake leak was claimed to be about what was in current devkits.

"This data are 100% correct and about the lastet dev-kits sony has sent to devs".

Again, he claimed 880, GDDR6, current dev kits. That would require a 512-bit bus.

A 384-bit bus would require vast quantities of GDDR6 clocking beyond 18 Gbps per pin. (Double edit: brain fart, originally wrote 21+.)

Edit: That would be a ~30% (originally wrote 50%+) bump over clocks in top-end Nvidia stuff.
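To spell out the arithmetic behind those figures, here's a quick sanity-check sketch. It's nothing more than bandwidth = bus width × per-pin rate ÷ 8, applied to the leaked 880GB/s:

```python
# Per-pin GDDR6 data rate needed to hit a target bandwidth on a given bus.
def required_gbps_per_pin(bandwidth_gb_s: float, bus_width_bits: int) -> float:
    return bandwidth_gb_s * 8 / bus_width_bits  # GB/s -> Gb/s, split across pins

for bus in (256, 384, 512):
    print(f"{bus}-bit bus: {required_gbps_per_pin(880, bus):.2f} Gbps/pin")

# 256-bit: 27.50 Gbps/pin (nonexistent)
# 384-bit: 18.33 Gbps/pin (beyond anything actually shipping today)
# 512-bit: 13.75 Gbps/pin (doable with current 14 Gbps parts)
```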
 
There's a semantics argument to be had here, but I'm not biting. Not over a BS leak.
 
I don't believe the supposed leak, but wouldn't it be possible for the first iteration to use GDDR6 on a 512-bit bus, then transition to a couple of stacks of HBM3?

It seems that some of the packaging techniques relevant to lowering the cost of HBM are coming into effect later this year - I'll try to find the post, I think it was @anexanhume or @MrFox - so it's conceivable that it could be on the horizon. Hopefully...
 
HBM3 is planned for 4Gbps, so it would be in line with previous binning ranges to have the lowest bin at 3.4Gbps when it becomes available, for ~880GB/s with only two stacks. The timing of HBM3 is unclear, especially at the high volumes required for a console launch.
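To check that two-stack figure (assuming HBM3 keeps the usual 1024-bit interface per stack, which is not yet confirmed):

```python
# HBM bandwidth: stacks x 1024 bits per stack x per-pin rate (Gbps) / 8 bits per byte.
def hbm_bandwidth_gb_s(stacks: int, gbps_per_pin: float) -> float:
    return stacks * 1024 * gbps_per_pin / 8

print(hbm_bandwidth_gb_s(2, 3.4))  # 870.4 GB/s, just shy of the leaked 880
print(hbm_bandwidth_gb_s(2, 4.0))  # 1024.0 GB/s at the planned top bin
```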

Cost issues are supposedly close to being solved with non-silicon interposers, and HBM3 is claimed to be compatible with them.

As for the heat issues of HBM at increasingly high clocks, I've seen mentions of heat-conducting thin films being used instead of simpler underfill. Samsung also mentioned adding more TSVs and microbumps to conduct more heat between layers.

Going back to Sony's heatsink patent...
Supposedly, the hottest die in the stack is the one on the bottom (it drives all 8 channels through the stack, and also drives the PHY to the SoC). Maybe a heat-conductive underfill could get that heat into the substrate, then through vias right down to the board, with a heat pipe on the other side. That could be a lower thermal resistance path than going up through 8 dies.
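A toy comparison of those two paths, treating each layer as 1-D conduction with R = t/(k·A). Every thickness, conductivity, and area below is a placeholder guess for illustration, not an HBM datasheet value:

```python
# Toy 1-D conduction model: R = t / (k * A) per layer, summed in series.
# All numbers are illustrative placeholders, not HBM datasheet values.
def layer_r(thickness_m: float, k_w_per_mk: float, area_m2: float) -> float:
    return thickness_m / (k_w_per_mk * area_m2)

area = 92e-6  # ~92 mm^2 die footprint (assumed)

# Up through the stack: 8 thinned silicon dies plus 7 bond/microbump layers.
# The low-conductivity bond layers dominate, which is exactly why the extra
# TSVs and microbumps Samsung mentioned would help.
up = 8 * layer_r(50e-6, 120.0, area) + 7 * layer_r(10e-6, 1.0, area)

# Down through a hypothetical heat-conductive underfill into the substrate.
down = layer_r(30e-6, 3.0, area)

print(f"up through stack: {up:.2f} K/W, down to substrate: {down:.2f} K/W")
# up through stack: 0.80 K/W, down to substrate: 0.11 K/W (with these guesses)
```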
 
Can someone create a plausible BOM for this system? [...] $10 per GB of GDDR6 x24 for $240 of RAM??
No-one's risen to my challenge of a realistic costing though. :(

The 8-core Zen 2 chiplet is around 70mm^2. The 330mm^2 Vega 20 might be a good starting point for a 13 TFLOPs GPU, but a 13 TFLOPs Navi could be significantly smaller. Navi could be designed to clock higher, and/or to have significantly more ALUs per CU (more than 64).
See for example the RV670 -> RV770 transition: AMD increased the ALU and TMU counts by 2.5x while increasing the chip size by only around 33%, on the same 55nm process. And considering Turing has separate FP32 and INT32 ALUs, Nvidia too significantly increased the number of ALUs from Pascal to Volta.
While I doubt we'll see a 13 TFLOPs-capable GPU measuring below, say, 200mm^2, it might be significantly smaller than Vega 20's 330mm^2.
We know both Liverpool and Durango were about 350mm^2, and that's where I'd peg next-gen SoCs. If we subtract 70mm^2 for the CPU cores (+L2 +L3), we get around 280mm^2 for the rest of the SoC (GPU, memory controller, I/O). It's not out of the realm of possibility to have a 280mm^2 GPU capable of hitting 12-13 TFLOPs (4 engines, 64 CUs, 64 ROPs at 1.6GHz).
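For reference, the FLOPs math behind those numbers (the standard GCN peak-throughput formula; both configurations are this thread's guesses, not confirmed specs):

```python
# Peak FP32 throughput: CUs x ALUs per CU x 2 ops per clock (FMA) x clock (GHz).
def peak_tflops(cus: int, alus_per_cu: int, clock_ghz: float) -> float:
    return cus * alus_per_cu * 2 * clock_ghz / 1000

print(peak_tflops(64, 64, 1.6))  # 13.1 TFLOPs, the 4-engine/64 CU guess above
print(peak_tflops(40, 64, 1.8))  # 9.2 TFLOPs, a smaller higher-clocked alternative
```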

Cost for such a SoC would probably depend on whether they're using EUV or DUV. The latter should be fairly expensive, probably $150-200 (considering the 350mm^2 SoCs on 28nm back in 2013 were ~$100). EUV should be cheaper, especially given the recent news of TSMC starting EUV mass production in March.

Memory would be a problem, because despite the recent downward trend, prices are still very bloated from over 2 years of "abuse".
I too would point to $200-250 for 24GB of GDDR6 during H2 2019.

So we're at ~$150 for the SoC and ~$250 for memory. Assuming everything else is similar to the PS4, we'd be looking at close to $600 for BOM + manufacturing (without that rumored WiiU-like controller).
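Tallying that up in one place (the SoC and memory figures are the guesses above; everything else is an assumed PS4-like cost, not sourced pricing):

```python
# Rough BOM tally from this thread's own guesses; all values are placeholders.
bom = {
    "SoC (7nm, ~350mm^2)":       150,
    "24GB GDDR6":                250,
    "Cooling, PSU, board, misc": 100,  # assumed, roughly PS4-like
    "Storage":                    50,  # assumed
    "Controller + assembly":      50,  # assumed
}
print(f"estimated BOM: ${sum(bom.values())}")  # ~$600, as estimated above
```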


DRAM spot price is already $5.9/GB on average and continues to drop. There is no way GDDR6 is going to be $10/GB about 18 months from now.
Digikey is listing GDDR6 chips at around $20/chip in 2K-unit quantities at the moment.
I have no doubt Sony and Microsoft can get much better deals for tens of millions of chips, but isn't that $6/GB price you're seeing for DDR4? GDDR6 is an entirely different beast.

Moreover, most rumors point to a March 2020 release date for the PS5. That's 12 months from now, but volume production for the console (which is when the first memory chips would be ordered) would need to start some 4-6 months earlier, so the price 8 months from now is the one that matters.
Unless Digikey is charging ridiculous margins, I don't see how GDDR6 is going to reach $5/GB in 8 months, even when bought directly from the manufacturers.



Again, he claimed 880, GDDR6, current dev kits. That would require a 512-bit bus.
According to Buildzoid (at ~1m30s), GDDR6's layout demands make it very difficult and/or ridiculously expensive to create a 512-bit bus. Honestly, Nvidia's TU102 PCB shows that even a 384-bit bus on GDDR6 is super complex, and maybe too expensive to put into a home console capped at $500.
If Sony or Microsoft want total bandwidth much above 512GB/s, they might be better off using HBM, but then they can't go with insane amounts like 24GB.

In the end, I think 16GB of GDDR6 on a 256-bit bus, plus around 8GB of 64/128-bit DDR4 exclusive to the CPU, would be the most plausible scenario at the moment.
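For scale, here's what that split would deliver. The 14Gbps GDDR6 and DDR4-3200 data rates are my assumptions, not part of the post:

```python
# Bandwidth of the suggested split memory pool (data rates are assumptions).
def bus_bandwidth_gb_s(width_bits: int, gbps_per_pin: float) -> float:
    return width_bits * gbps_per_pin / 8

print(bus_bandwidth_gb_s(256, 14.0))  # 448.0 GB/s of GDDR6 for the GPU
print(bus_bandwidth_gb_s(128, 3.2))   #  51.2 GB/s of DDR4-3200 for the CPU
```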
 
The timing of HBM3 is unclear, especially at the high volumes required for a console launch

Interesting post, thanks.

As for the quoted part, that's why I wonder whether it would be planned for a future revision. If HBM3 can be expected to reach affordability at the same time as a 7nm EUV or 5nm revision, a massive 512-bit GDDR6 bus might not be such a bad idea for the launch model.
 
Again, he claimed 880, GDDR6, current dev kits. That would require a 512-bit bus. [...]

Any dev kit floating around now is likely to be PC hardware. The likelihood of engineering silicon being in there is pretty low, especially since Navi supposedly needed a re-tapeout.

In that sense, they could easily send out Radeon VII GPUs in dev kits with the HBM speed dialed down to deliver that bandwidth.

880GB/s takes 18.3Gbps RAM on a 384-bit bus. Samsung already says they can make 18Gbps modules, and Micron thinks 20Gbps modules are possible based on their own simulation assessment of the interface. So yes, the number is aggressive, but for a 2020 console it may not be that far-fetched.

While I doubt we'll see a 13 TFLOPs-capable GPU measuring below, say, 200mm^2, it might be significantly smaller than Vega 20's 330mm^2. [...]

Navi could also trim down by dropping FP64 support, and perhaps a few other hardware features only applicable to research/training workloads. I also wonder whether a 384-bit GDDR6 interface would be smaller than a 4096-bit HBM2 one.
 
It wouldn't. Just fast (expensive) GDDR6. Just saying it's possible.

Sorry, only just noticed the edit.

Yeah, I mean I could imagine a premium 1X-style model with a 384-bit bus hitting that kind of BW at whatever point 18~20 Gbps is one rung off the top of the ladder.

Console vendors typically don't go for the very fastest bin, barring perhaps the 360 when it began production.

Does anyone know the planned rollout for higher clocked chips?
 
Samsung announced mass production for up to 18Gbps over a year ago.

https://www.anandtech.com/show/12338/samsung-starts-mass-production-of-gddr6-memory

Only 16Gbps is visible on their site, though.
 
Any dev kit floating around now is likely to be PC hardware. The likelihood of engineering silicon being in there is pretty low, especially since Navi supposedly needed a re-tapeout.

Agreed!

880GB/s takes 18.3Gbps RAM on a 384-bit bus. Samsung already says they can make 18Gbps modules, and Micron thinks 20Gbps modules are possible based on their own simulation assessment of the interface. So yes, the number is aggressive, but for a 2020 console it may not be that far-fetched.

Thanks.

I think for us to see 18Gbps in a console, we'd probably need to be seeing 20Gbps on the market, so console vendors could hoover up a lower bin.
 
Samsung announced mass production for up to 18Gbps over a year ago. [...] Only 16Gbps is visible on their site, though.

Makes me wonder where these mass-produced 18Gbps 16Gb chips have been going for the last year, if even multi-thousand-dollar enterprise cards aren't using them.

(Or maybe someone is, and I missed it? Or maybe no-one is interested in those capacities, which would seem odd?)
 