> My parents saw it as an investment. They thought I would become the next Bill Gates with this.

Always wanted that one, ended up with the 20-in-1 or something like that.
It turned out to be a bad investment, relatively speaking.
> Maybe they'll bring their follow-up game, Life of Deer, to next-gen.

So now, with the PS5, Life of Black Tiger can actually have textures?
When we saw the actual air intake on the XBSX, I became very skeptical of the 12TF figure, and I was also skeptical of the 9.2TF @ 2GHz for Sony. So I thought maybe we'd get 10TF and 7.5TF respectively, unless something magical happened with 7nm+ or RDNA2.

Now, with RDNA2 delivering both higher clocks and better perf/watt, that explains both.

The XBSX box is probably closer to 200W, and the form factor was not as necessary as we thought. It's exactly as they said: they designed it to sit on a table or beside the TV.

If the PS5 is 36CU and really pushing the clock, they will need a relatively expensive heat spreader and other clever tricks, but it wouldn't be a large or high-wattage console as a whole. The power density would be high, and that's a much more difficult problem to solve than a larger chip with proportionally more heat.

Interestingly, those who said 12TF was reasonable for the XBSX before learning about RDNA2's big improvement in perf/watt now have to admit, to remain consistent, that 14TF or 15TF is reasonable on RDNA2.
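For reference, those TF figures follow directly from CU count and clock via the standard GCN/RDNA throughput formula. A quick sketch (the 36CU and 56CU configurations below are the rumored ones, not confirmed specs):

```python
def tflops(cus: int, clock_ghz: float) -> float:
    # FP32 throughput: 64 shader lanes per CU, 2 FLOPs per lane per cycle (FMA).
    return cus * 64 * 2 * clock_ghz / 1000

print(tflops(36, 2.0))  # 9.216  -> the 9.2TF @ 2GHz rumor
print(tflops(56, 1.7))  # ~12.19 -> roughly the 12TF figure
```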
So I just did some napkin math on next-gen console SOC costs, based on AMD's cost per yielded mm² for 7nm vs 16nm, the wafer price difference between the two, and the XBX/XBS SOC costs (AMD royalties included).

For the XSX, costs come out to:
380mm² - ~220-230$
400mm² - ~230-250$
This is more than 2x per mm² vs the Xbox One (28nm) and around 1.5x per mm² vs the XBX.

With 16-20GB of GDDR6 and a 1TB SSD, we could see the BOM go above 500$.
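To make that napkin math reproducible, here's a rough sketch of the raw die-cost side. The 300mm wafer at ~9.5K$ follows the IBS figure quoted below; the defect density is purely an assumed number, not anything TSMC has published:

```python
import math

def dies_per_wafer(die_mm2: float, wafer_d_mm: float = 300.0) -> int:
    # Standard approximation: gross wafer area over die area, minus an edge-loss term.
    r = wafer_d_mm / 2
    return int(math.pi * r ** 2 / die_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * die_mm2))

def poisson_yield(die_mm2: float, d0_per_cm2: float = 0.1) -> float:
    # Assumed defect density of 0.1/cm² for 7nm -- a guess, not a published figure.
    return math.exp(-(die_mm2 / 100) * d0_per_cm2)

def cost_per_good_die(die_mm2: float, wafer_usd: float = 9500.0) -> float:
    return wafer_usd / (dies_per_wafer(die_mm2) * poisson_yield(die_mm2))

for size_mm2 in (380, 400):
    print(size_mm2, round(cost_per_good_die(size_mm2)))  # ~92$ and ~99$
```

That lands near the ~97$ fab figure used later in this thread; the gap up to the ~220-250$ estimates above would be packaging, test, royalties and margin.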
> Wafer cost is just under 10K according to IBS. That's rather close to the 2x factor AMD previously claimed compared to 16nm.

I actually used that as the top-end estimate; I combined wafer price / dies per wafer with cost per transistor and got somewhat of an average.
If the price for a ~400mm² chip were in the ~150$ range, that would make it only 23% more expensive per mm² than 28nm, and actually cheaper per mm² than 16nm, which doesn't make sense at all.

Cost per transistor is following the expected downtrend, but cost per mm² is up.
> Can you post how you calculated it?

I tried to use several different data sources. For example, the Polaris line of GPUs at 230mm² is ~200$, while the 250mm² Navi line is ~380$. There has to be a difference somewhere there, and it's not only gross margins.
> If you assume the rest of the process is relatively the same (it costs no more to test, package etc. a 7nm die), then the wafer delta of a 7nm part using the above math adds at most a $50 premium on a 16nm part, assuming the generous 2x cost factor and dividing the $103 by 2.

I did take this data into the formula, but that would make a 16nm chip like the one in the Pro ~43$ and the one in the XBX ~48$, which seems way too low. Now, yields on 16nm are probably better than back in late 2017, but that chip was ~120-130$.
And going by their own calculations, price per mm² on 7nm vs 16nm is higher by 61%. This would mean the Navi 10 chip is 70$ a pop, which does not fit with AMD's gross margins at all.

So if fabbing a 400mm² chip is 97$, by that very same formula a die the size of the XBX chip would be ~86$ on 7nm (ignoring a few % for yields; 400 * 0.89). It would actually fit at an additional ~60-80% vs 16nm per mm², but there is no way the XBX SOC is less than 50$ in BOM.

Something doesn't add up.
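Spelling that cross-check out, using only the numbers from this post (97$ for a 400mm² 7nm die, the ~0.89 size ratio, and the assumed 60-80% per-mm² premium of 7nm over 16nm):

```python
cost_7nm_400mm2 = 97.0
cost_7nm_xbx_size = cost_7nm_400mm2 * 0.89   # ~86$: an XBX-sized die fabbed on 7nm

for premium in (1.6, 1.8):                   # 7nm at +60% and +80% per mm² vs 16nm
    print(round(cost_7nm_xbx_size / premium))  # implied 16nm cost: 54$ and 48$
```

An implied ~48-54$ for the XBX SOC is well under the ~110-130$ that chip is believed to have cost, which is exactly the contradiction.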
> If you assume the rest of the process is relatively the same (it costs no more to test, package etc. a 7nm die), then the wafer delta of a 7nm part using the above math adds at most a $50 premium on a 16nm part, assuming the generous 2x cost factor and dividing the $103 by 2.

True, but I think it's more than a 50$ premium (or the chip is more expensive to produce). I doubt the XBX chip was cheaper than the XBONE on 28nm (110$).

That would make a 380-400mm² chip somewhere between 150$ and 200$, I think.
> Even that will come out very expensive (especially if 16+ Gbps RAM is used).

That's probably part of the reason they are going with only a rumored 16GB of RAM: to offset those costs.
> That's probably part of the reason they are going with only a rumored 16GB of RAM: to offset those costs.

If the XSX does indeed have a 320-bit bus, with GDDR6 known to be 10-15% higher cost now that it is mature, RAM cost should be relatively flat versus the X1X. Add in another cost delta for SSD over HDD, and MS is probably eating $50-80 on top of the launch X1X cost.
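As a side note on the chip count behind that: a 320-bit bus means ten 32-bit GDDR6 devices, and one hypothetical way to reach 16GB across ten chips is a mixed-density configuration. The 6×2GB + 4×1GB split below is an illustration, not a confirmed spec:

```python
bus_bits, bits_per_chip = 320, 32   # GDDR6 devices have a 32-bit interface
chips = bus_bits // bits_per_chip   # -> 10 chips on a 320-bit bus
densities_gb = [2] * 6 + [1] * 4    # assumed mix: six 2GB plus four 1GB devices
print(chips, sum(densities_gb))     # 10 chips, 16 GB total
```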
16GB GDDR6 wouldn't be too bad anyway?
It is not enough. You cannot stream everything from the SSD; you need a lot of rendering data etc. permanently in memory.
> If the XSX does indeed have a 320-bit bus, with GDDR6 known to be 10-15% higher cost now that it is mature, RAM cost should be relatively flat versus the X1X. Add in another cost delta for SSD over HDD, and MS is probably eating $50-80 on top of the launch X1X cost.

From Micron's slide, price per GB is 71% higher for 14Gbps GDDR6 vs 6Gbps.
> From Micron's slide, price per GB is 71% higher for 14Gbps GDDR6 vs 6Gbps.

Which is why contracts are important. I wouldn't trust a publicly advertised number. The site I referenced is very well sourced on the manufacturing side, so I believe their numbers for manufacturing cost deltas. Everything on top of that is demand/supply dynamics and the vendor setting its margins, but manufacturing cost is the ground truth from which prices deviate.
12GB of GDDR5 - 81$ (2019)
16GB of GDDR6 - 187$ (2019)
20GB of GDDR6 - 233$ (2019)
As per Guru3d, these are prices for 2k chips. Big players (Nvidia/AMD) can expect a 20-40% discount.

If MS goes with 20GB (and especially if they go with 16Gbps), I don't think prices will be flat at all.
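Running the quoted Guru3d list prices through that 20-40% big-buyer discount gives a rough range (assuming the discount applies uniformly across configurations):

```python
list_prices_usd = {"12GB GDDR5": 81, "16GB GDDR6": 187, "20GB GDDR6": 233}

for name, usd in list_prices_usd.items():
    lo, hi = usd * 0.6, usd * 0.8   # 40% and 20% discounts respectively
    print(f"{name}: {lo:.0f}-{hi:.0f}$")
# 12GB GDDR5: 49-65$ / 16GB GDDR6: 112-150$ / 20GB GDDR6: 140-186$
```

Even discounted, 20GB of GDDR6 sits roughly 75-140$ above the 12GB GDDR5 figure, so flat RAM cost does look optimistic at those densities.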