Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Always wanted that one, ended up with the 20 in 1 or something like that :LOL:
My parents saw it as an investment. They thought I would become the next Bill Gates with this.

It turned out to be a bad investment, relatively speaking.
 
So I just did some napkin math on next-gen console SoC costs, based on AMD's yielded mm²/$ for 7nm vs 16nm, the wafer price difference between the two, and the XBX/XBS SoC costs (AMD royalties included).

For XSX, costs are:

380mm² - ~170-190$
400mm² - ~180-200$

This is slightly less than 2x per mm² vs Xbox One (28nm) and around 1.5x per mm² vs XBX.

With 16-20GB of GDDR6 and a 1TB SSD we could see the BOM go above 500$.

Edit

Subtracted ~50-60$ due to packaging and additional costs already being taken into the ~1.5-2x formula.
 
When we saw the actual air intake on the XBSX I became very skeptical of the 12TF figure, and I was also skeptical of the 9.2TF@2GHz for Sony. So I thought maybe we'd get 10TF and 7.5TF respectively, unless something magical happened with 7nm+ or RDNA2.

Now, with RDNA2 delivering both higher clocks and higher perf/watt, it explains both.

The XBSX box is probably closer to 200W, and the form factor was not as necessary as we thought. It's exactly as they said: they designed it to sit on a table or beside the TV.

If PS5 is 36CU and really pushing the clock, they will need a relatively expensive heat spreader and other clever tricks, but it wouldn't be a large or high-wattage console as a whole. The power density would be high, and that's a much more difficult problem to solve than just a larger chip with proportionally more heat.

Interestingly, those who said 12TF was reasonable for the XBSX before learning about RDNA2's big improvement in perf/watt have to admit that 14TF or 15TF is now reasonable on RDNA2 to remain consistent.

There’s still some ambiguity in that 1.5x PPW. Is that N7P to N7+? Does it rely heavily on VRS? How is it normalized? E.g. are they pushing the clocks really high where RDNA 1.0 efficiency goes kaput but their design enhancements improve things? If N7+ is involved, we can't necessarily assume consoles will get that part of the benefit.

So I just did some napkin math on next-gen console SoC costs, based on AMD's yielded mm²/$ for 7nm vs 16nm, the wafer price difference between the two, and the XBX/XBS SoC costs (AMD royalties included).

For XSX, costs are:

380mm² - ~220-230$
400mm² - ~230-250$

This is more than 2x per mm² vs Xbox One (28nm) and around 1.5x per mm² vs XBX.

With 16-20GB of GDDR6 and a 1TB SSD we could see the BOM go above 500$.

The latest estimates don’t use the 2x per yielded mm^2 due to the age of that statistic.

They use a known wafer price and known defect density, which recently became public knowledge. That gets you an XSX SoC price in the mid $1xx range.
 
I actually used it as a top-end estimate; I used wafer price/dies per wafer as well as cost per transistor and got something of an average.

If the price for ~400mm² was in the ~150$ range, that would make it only 23% more expensive per mm² than 28nm and actually cheaper per mm² than 16nm, which doesn't make sense at all.

Cost per transistor is following the expected downtrend, but cost per mm² is up.

Can you post how you calculated it? I tried to use several different data sources. For example, the Polaris line of GPUs at 230mm² is ~200$, while the 250mm² Navi line is ~380$. There has to be a difference somewhere there, and it's not only gross margins.

Edit.

I made a mistake in the calculation, but even with a best-case scenario I didn't get lower than 200$ for 400mm².
 
Wafer cost is just under 10K according to IBS. That's rather close to the 2x factor AMD previously claimed compared to 16nm.


We know N7 yield is excellent historically, thanks to David Schor.

This is a defect density of 0.09 per cm^2. I would note that cross-correlating the two graphs shows large dies are actually a little better than this.


With a wafer cost of $10,000 and the above yield metrics, you get 97 good dies per wafer, assuming a 20x20 die for 70% yield (you can change the aspect ratio to 2:1 and only get one less die yielded). That comes out to $103 per part. That's probably diced/bumped/tested, but not packaged, so there are further costs to add before it reaches the level of assembly people like IHS are using for their BOMs. At the time, I felt a ~50% mark-up was plenty conservative for the further processing and margin of intermediate players to get an estimate.
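For reference, a minimal sketch of that math, using a standard dies-per-wafer approximation and the Poisson yield model; the edge-exclusion assumption is mine, so it lands near, not exactly on, the 97-die / $103 figure:

```python
import math

WAFER_COST_N7 = 10_000    # ~$10K per N7 wafer (IBS figure above)
DEFECT_DENSITY = 0.09     # defects per cm^2 (from the yield data above)

def gross_dies_per_wafer(die_w_mm, die_h_mm, wafer_d_mm=300):
    # Common approximation: wafer area / die area minus an edge-loss term.
    die_area = die_w_mm * die_h_mm
    return math.floor(math.pi * (wafer_d_mm / 2) ** 2 / die_area
                      - math.pi * wafer_d_mm / math.sqrt(2 * die_area))

def poisson_yield(die_area_mm2, d0_per_cm2=DEFECT_DENSITY):
    # Poisson yield model: Y = exp(-D0 * A), with A converted from mm^2 to cm^2.
    return math.exp(-d0_per_cm2 * die_area_mm2 / 100)

gross = gross_dies_per_wafer(20, 20)                 # 20x20 mm = 400 mm^2 die
good = math.floor(gross * poisson_yield(400))
print(f"gross: {gross}, yield: {poisson_yield(400):.0%}, "
      f"good: {good}, cost per good die: ${WAFER_COST_N7 / good:.0f}")
# -> ~70% yield and roughly $100 per good die, in the same ballpark as the
#    97 dies / $103 quoted above (the exact count depends on edge exclusion).
```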
 
I did take this data into the formula, but that would make a 16nm chip like the one in the Pro ~43$ and the one in the XBX ~48$, which seems way too low. Now, yields on 16nm are probably better than back in late 2017, but that chip was at least ~120-130$.

And going by their own calculations, price per mm² on 7nm vs 16nm is higher by 61%. This would mean the Navi 10 chip is 70$ a pop, which does not fit with AMD's gross margins at all.

So if fabbing a 400mm² chip is 97$, by that very same formula a chip the size of the XBX's would be ~86$ on 7nm (ignoring a few % for yields, 400*0.89). It would actually fit at an additional ~60-80% vs 16nm per mm², but there is no way the XBX SoC is less than 50$ in the BOM.

Something doesn't add up.
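A compact sketch of the scaling being done here; the die areas and the 2x wafer-cost factor are assumptions, and this is wafer-level cost only:

```python
# The ~$103 wafer-level cost of a good 400 mm^2 N7 die comes from the math above.
N7_COST_400MM2 = 103        # $ per good 400 mm^2 die on N7, wafer cost only
WAFER_COST_FACTOR = 2.0     # N7 wafer assumed ~2x the price of a 16nm wafer

def implied_16nm_die_cost(area_mm2):
    # Scale the N7 per-die cost down by the wafer-cost factor and by area,
    # ignoring the (small) yield difference, as in the post above.
    return N7_COST_400MM2 / WAFER_COST_FACTOR * (area_mm2 / 400)

for name, area_mm2 in [("PS4 Pro (~325 mm^2)", 325), ("Xbox One X (~360 mm^2)", 360)]:
    print(f"{name}: ~${implied_16nm_die_cost(area_mm2):.0f} at wafer level")
# -> roughly $42 and $46, the 'way too low' wafer-level figures in question.
```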
 
If you assume the rest of the process is relatively the same (it costs no more to test, package, etc. a 7nm die), then the wafer delta of a 7nm part using the above math adds at most a $50 premium over a 16nm part, assuming the generous 2x cost factor and dividing the $103 by 2.

That way you can focus on the known cost factors.
 
True, but I think it's more than a 50$ premium (or the chip is more expensive to produce). I doubt the XBX chip was cheaper than the XBONE's on 28nm (110$).

The XBS chip was 99.5$ for a 240mm² die on 16nm, so probably an additional ~20$ for the XBX.

Therefore, while the 1.5-2x fab price does hold, the rest of the costs stay similar; you are right.

That would make a 380-400mm² chip somewhere between 150-200$, I think.
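As a rough sketch of that reasoning, assuming only the wafer portion of the SoC cost scales between nodes, and reusing the ~$103 wafer-level figure and the ~120$ X1X ballpark from above (die areas are assumed):

```python
N7_WAFER_COST_400 = 103            # $ per good ~400 mm^2 die at wafer level (from above)
X1X_TOTAL = 120                    # $ ballpark for the 16nm X1X SoC in the BOM
X1X_AREA, XSX_AREA = 360, 390      # mm^2; XSX die assumed ~380-400 mm^2

for factor in (1.5, 2.0):
    # Infer the wafer-level share of the X1X cost from the assumed fab-cost factor,
    # keep the remaining (packaging/test/margin) costs flat, then add the 7nm wafer cost.
    x1x_wafer_part = N7_WAFER_COST_400 / factor * (X1X_AREA / 400)
    other_costs = X1X_TOTAL - x1x_wafer_part
    xsx_total = N7_WAFER_COST_400 * (XSX_AREA / 400) + other_costs
    print(f"{factor}x fab-cost factor -> XSX SoC ~${xsx_total:.0f}")
# -> roughly $159-174, inside the 150-200$ range suggested above.
```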
 
That's probably part of the reason that they are going with only a rumored 16GB of RAM to offset those costs.
Even that will be very expensive (especially if 16+ Gbps RAM is used).

These are prices from Micron last year

[Micron slide: GDDR5 vs GDDR6 price per GB]
This would make 16GB of GDDR6 14Gbps RAM - $176 (Jeez...)

I guess bulk buying and long contracts can slash that to at least the low 100s, but for 16-18Gbps chips it's another step up.
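A rough illustration, taking the ~$11/GB implied by the $176-for-16GB figure and applying the 20-40% "big player" discount range quoted from Guru3d later in the thread:

```python
PRICE_PER_GB = 176 / 16            # ~$11/GB for 14 Gbps GDDR6 at small volumes

def ram_cost(capacity_gb, discount=0.0):
    # Simple list-price-times-capacity estimate with an optional volume discount.
    return capacity_gb * PRICE_PER_GB * (1 - discount)

print(f"16 GB list price:      ~${ram_cost(16):.0f}")
print(f"16 GB with 20-40% off: ~${ram_cost(16, 0.40):.0f}-{ram_cost(16, 0.20):.0f}")
# -> about $106-141 with volume discounts, i.e. the 'low 100s' expected here.
```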
 
If XSX does indeed have a 320-bit bus, with GDDR6 known to be 10-15% higher cost now that it is mature, RAM cost should be relatively flat versus the X1X. Add in another cost delta for SSD over HDD, and MS is probably eating $50-80 on top of the launch X1X cost.

I feel that’s in line with Zhuge’s numbers.

 
16GB GDDR6 wouldn't be too bad anyway?

It is not enough; you cannot stream everything from the SSD, you need a lot of rendering data etc. permanently in memory. And I'm wondering about the rumors that the PS5 and Scarlett have only a 256/320-bit bus when the Xbox One X has a 384-bit bus for a much slower CPU/GPU setup compared to next gen's Zen2/RDNA2 combo with far more compute power. 1.3 TFLOPS vs 12 TFLOPS needs a lot more bandwidth, or do they think VRS can solve this?

GDDR6 is a high-latency memory type, not ideal for all computing operations.
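For comparison, a quick sketch of the raw bandwidth those bus widths imply; the per-pin data rates for the next-gen parts are assumptions based on the rumors in this thread:

```python
def bandwidth_gb_s(bus_bits, gbps_per_pin):
    # Peak GDDR bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
    return bus_bits / 8 * gbps_per_pin

print(f"Xbox One X, 384-bit GDDR5 @ 6.8 Gbps: {bandwidth_gb_s(384, 6.8):.0f} GB/s")
print(f"320-bit GDDR6 @ 14 Gbps:              {bandwidth_gb_s(320, 14):.0f} GB/s")
print(f"256-bit GDDR6 @ 14 Gbps:              {bandwidth_gb_s(256, 14):.0f} GB/s")
# A narrower bus at GDDR6 data rates still lands well above the X1X's ~326 GB/s.
```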
 
It is not enough; you cannot stream everything from the SSD, you need a lot of rendering data etc. permanently in memory.

I strongly disagree. Your base of constantly-in-use static memory will not be that large.
 
If XSX does indeed have a 320-bit bus, with GDDR6 known to be 10-15% higher cost now that it is mature, RAM cost should be relatively flat versus the X1X. Add in another cost delta for SSD over HDD, and MS is probably eating $50-80 on top of the launch X1X cost.
From Micron's slide, the price per GB for 14Gbps GDDR6 vs 6Gbps GDDR5 is 71% higher.

12GB of GDDR5 - 81$ (2019)
16GB of GDDR6 - 187$ (2019)
20GB of GDDR6 - 233$ (2019)

As per Guru3d, these are prices at 2k-chip quantities. Big players (Nvidia/AMD) can expect a 20-40% discount.

If MS goes with 20GB (and especially if they go with 16Gbps), I don't think prices will be flat at all.

Also, if Sony goes with 16GB of 18Gbps RAM and MS goes with 20GB of 14-16Gbps, costs will probably be the same if not higher for Sony.
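A small sketch reproducing those numbers from the 12GB GDDR5 baseline and the 71% per-GB premium, with the 20-40% discount range applied on top:

```python
GDDR5_PER_GB = 81 / 12             # 12 GB of GDDR5 at $81 (2019) -> ~$6.75/GB
GDDR6_PER_GB = GDDR5_PER_GB * 1.71 # 14 Gbps GDDR6 taken as 71% more per GB

for capacity_gb in (16, 20):
    list_price = capacity_gb * GDDR6_PER_GB
    print(f"{capacity_gb} GB GDDR6: ~${list_price:.0f} list, "
          f"~${list_price * 0.6:.0f}-{list_price * 0.8:.0f} with a 20-40% discount")
# -> ~$185 and ~$231 list, close to the $187 / $233 figures above.
```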
 
Which is why contracts are important. I wouldn't trust a publicly advertised number. The site I referenced is very well sourced on the manufacturing side, so I believe their numbers for manufacturing cost deltas. Everything on top of that is demand/supply dynamics and the vendor setting their margins, but manufacturing cost is the ground truth from which prices deviate.

Sony paid $88 in 2013 for 16 GDDR5 chips according to IHS. If your numbers were correct, the price per chip would have gone up 4x, from halving the chip count and nearly doubling the total cost. That doesn't make sense based on how much NAND has dropped in that timeline and their shared manufacturing nodes.

You have to have a point of concession somewhere in your numbers. Using X1X as a baseline, there's not room for the SoC to be $100 more expensive, the RAM price to grow, and the SSD to grow over the HDD. The XSX BOM is probably in the realm of $50-80 more at most. Divide those beans up how you choose.
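A quick sketch of that per-chip sanity check; the 8-chip (16Gb-density) configuration for 16GB is an assumption, consistent with the "halving the chip count" point above:

```python
# PS4's 8 GB used 16 GDDR5 chips at $88 total (IHS, 2013); a 16 GB GDDR6
# config at the quoted $187 is assumed to use 8 higher-density (16Gb) chips.
ps4_per_chip = 88 / 16     # ~$5.50 per 4Gb GDDR5 chip in 2013
xsx_per_chip = 187 / 8     # ~$23 per assumed 16Gb GDDR6 chip at the quoted price

print(f"PS4 2013: ~${ps4_per_chip:.2f}/chip  vs  16 GB GDDR6 at $187: ~${xsx_per_chip:.2f}/chip")
print(f"Implied per-chip increase: ~{xsx_per_chip / ps4_per_chip:.1f}x")
# -> ~4.3x per chip, the jump the post above finds implausible.
```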
 