Jawed (Legend)
Jebaiting NVidia into revealing 4090Ti, so that AMD can respond with RX 7950XTX?

Kinda the point, really.
TSMC 7nm is well known to be ~$8k, and TSMC N6 is a cost alternative, so (IMO) likely a bit lower.

Where did you get that? Any proof?
Again, there is no confirmed info on N33, so this is still highly inaccurate and speculative stuff.
$5 turned into $20 would change a lot.
Is there any real info on prices, rather than just rumors?
Kept me thinking.

If this is accurate, it means LDS has been practically doubled?
So besides 'no more register pressure' due to +50% VGPRs, no more occupancy drop due to high LDS usage either?
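The occupancy question above can be sketched numerically. Below is a rough back-of-the-envelope model: waves per SIMD are limited by whichever of the VGPR budget and the LDS budget runs out first. All capacities and the kernel's resource usage are assumptions based on the rumors in this thread (+50% VGPRs, doubled LDS), not confirmed specs.

```python
# Rough wave-occupancy model per SIMD.
# All capacities below are ASSUMPTIONS from the rumors in this thread.

def waves_per_simd(vgprs_per_wave, lds_per_workgroup_kb,
                   vgpr_file_kb, lds_kb, waves_per_workgroup=4, max_waves=16):
    """Waves resident on one SIMD, limited by VGPRs and LDS."""
    vgpr_bytes_per_wave = vgprs_per_wave * 32 * 4          # 32 lanes x 4 bytes
    vgpr_limit = (vgpr_file_kb * 1024) // vgpr_bytes_per_wave
    workgroups = int(lds_kb // lds_per_workgroup_kb)        # workgroups that fit in LDS
    lds_limit = workgroups * waves_per_workgroup
    return min(vgpr_limit, lds_limit, max_waves)

# Hypothetical kernel: 128 VGPRs per wave, 32 KB of LDS per workgroup.
# Per-SIMD budgets are illustrative: 128 KB VGPRs / 64 KB LDS share vs
# 192 KB VGPRs (+50%) / 128 KB LDS share (rumored doubling).
old = waves_per_simd(128, 32, vgpr_file_kb=128, lds_kb=64)
new = waves_per_simd(128, 32, vgpr_file_kb=192, lds_kb=128)
print(old, new)
```

With these assumed numbers, the old config is capped at 8 waves by both limits, while the new one reaches 12 waves and is VGPR-bound rather than LDS-bound, which is what "no more occupancy drop due to high LDS usage" would look like in practice.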
I just spent a few minutes calculating this stuff and got a 1.35x cost difference between AD102 and Navi31 GPU packages with the 2x 5nm wafer cost and 4x packaging cost you provided, so it seems there is a good deal of wishful thinking involved in your judgment, because the numbers are again far closer to what was in this table than to some random 2-3x guesses.

$5 into $20 on a product that has a likely BOM of ~$250-$300
This, the 6th one should be a spacer, à la A100.

whether 7900 XT physically lacks an MCD
I recently moved to a 165 Hz monitor, and I found the visual improvement from higher refresh rates very significant (much more than I expected). I wouldn't want to trade that for the (most likely subtle) improvements gained from enabling Series S level RT if that means dropping back down to 60fps. For the target of ">= 120 fps with whatever RT features I can get away with", it's not at all clear that the tradeoffs AMD is making are worse than Nvidia's.

You're simply saying that the first option above is not a bad thing in your opinion because (perhaps more rationally than my argument) you don't care that the Series S might have better core graphics, since you value the PC trade-off of much higher frame rates and resolution more. For my part, the minimum baseline would always be what I could get from a much cheaper piece of hardware, and the extra value of the more expensive/powerful PC comes from building on that. That almost always translates to turning all settings to max (with the potential occasional exception of some invisible Ultra settings), and then adjusting my resolution to hit a suitable frame rate for that particular genre, which in many cases I define as a solid, well frame-paced 30fps or slightly above on my 1070, but which I would look to raise to a minimum of 60 or thereabouts on a modern GPU.
~$18k a wafer.

I just spent a few minutes calculating this stuff and got a 1.35x cost difference between AD102 and Navi31 GPU packages with the 2x 5nm wafer cost and 4x packaging cost you provided, so it seems there is a good deal of wishful thinking involved in your judgment, because the numbers are again far closer to what was in this table than to some random 2-3x guesses.
Also, it will be interesting to see whether the 7900 XT physically lacks an MCD, or whether all MCDs are in the package with one disabled (which would mean the more complex chiplet packaging also affects yields).
A possibility is that they used

How are they getting the cache bandwidth, btw? N21 is 16 slices of 8MB at 64B/clock each, 1024B/clock total at "up to 1.94GHz", i.e. 1986.6GB/s. RDNA3 has 5.3TB/s of bandwidth (per the rest of the slides) including VRAM, which is 4340GB/s without VRAM, and 96MB vs 128MB of cache (0.75x), which makes it 2.91x the bandwidth per slice if the slice count scales with capacity. Did they reduce the clocks a lot (~1.4GHz) and go to 4MB slices with double the width per slice (24 slices, 128B/clock)? Or 192B/clock with 12x 8MB slices (same slice size as RDNA2) at 1.88GHz? Because 2.82GHz would be needed if they only doubled one aspect, and that sounds unrealistic given N31 clocks, although if they fixed it, it becomes realistic again.
*That bandwidth increase is insane; I don't think it's had much attention, but it's very impressive.
**Also means ~2.8TB/s for N32 and ~1.4TB/s for N33 assuming similar clocks
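The slice-configuration guesses above reduce to quick arithmetic; the sketch below just reproduces the post's scenarios. All slice counts, widths, and clocks are the speculative values from the post, not confirmed specs, and the 960GB/s VRAM figure is implied by the 5.3TB/s vs 4340GB/s numbers.

```python
# Infinity Cache bandwidth scenarios from the post (speculative, not confirmed).
def cache_bw_gbps(slices, bytes_per_clock_per_slice, clock_ghz):
    """Total cache bandwidth in GB/s for a given slice configuration."""
    return slices * bytes_per_clock_per_slice * clock_ghz

n21 = cache_bw_gbps(16, 64, 1.94)   # RDNA2 / N21 baseline: 16x 8MB, 64B/clk
target = 5300 - 960                 # 5.3 TB/s slide figure minus implied VRAM BW
scenarios = {
    "24x 4MB, 128B/clk @ 1.40GHz": cache_bw_gbps(24, 128, 1.40),
    "12x 8MB, 192B/clk @ 1.88GHz": cache_bw_gbps(12, 192, 1.88),
    "12x 8MB, 128B/clk @ 2.82GHz": cache_bw_gbps(12, 128, 2.82),
}
print(f"N21: {n21:.1f} GB/s, target: {target} GB/s")
for name, bw in scenarios.items():
    print(f"{name}: {bw:.0f} GB/s")
```

All three speculative configurations land within ~1% of the 4340GB/s target, which is why the post can't distinguish between them from the slides alone.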
Which is why they're using the industry's densest fanout to get all those pins.

64B bidirectional needs 1024 pins if run as parallel... 24 channels means 24576 pins. That's... a lot.
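The pin count above is easy to sanity-check, assuming one pin per bit, single-ended signaling, and no clock/control overhead, as the post implies:

```python
# Pin-count sanity check for the rumored GCD<->MCD links.
# Assumes 1 pin per bit, single-ended, no clock/control overhead.
bytes_per_direction = 64
pins_per_channel = bytes_per_direction * 8 * 2   # x8 bits/byte, x2 bidirectional
channels = 24
total_pins = pins_per_channel * channels
print(pins_per_channel, total_pins)
```

That gives 1024 pins per channel and 24576 in total, matching the post's figures; real links would add clocking and control pins on top.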
It's organic in a way that it's not a Si slab, but it's built with extremely dense RDLs, so...

This seems more in line with the pin-out density that an organic interposer can achieve
You move your goalposts with every post.

~$18k a wafer.
I simply took the lower bound of the range you gave above for the calculations, because of your negative bias towards Nvidia in this discussion.

Most pricing estimates of TSMC 5nm are ~$16-18k
I looked back at my post, and I didn't say anything about 2x.
Well, 16k is 2x 8k.

TSMC 7nm is well known to be ~$8k
Adding more speculative unknowns would obviously skew the results even more than before.

Defect density of 0.07 for TSMC N5, so possibly a bit worse on 4nm, since "the source" said it is only slightly better for Nvidia than the disaster that was Samsung 8LPP.
Apparently, Navi31 is being produced on some aliens' 5nm and 6nm process with perfect yields, while AD102 is on some crappy peasants' 5nm, so that the 3x would make any sense.

Worst case, only ~50 good dies per wafer. $360 per die. I.e. my 3x comment.
Why shouldn't I assume a perfect yield of 100%? And what do you mean by yield at all? Working dies? Then it would be way higher than 80%.

Assuming you can save a good portion and keep yield at 80%, you are still looking at ~$260 per die.
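The per-die figures being argued about here follow directly from wafer price divided by good dies per wafer. Using the ~$18k 5nm wafer figure from this thread (the die counts are the posters' own guesses):

```python
# Cost per good die = wafer price / good dies per wafer.
wafer_price = 18_000            # ~$18k TSMC 5nm, per the thread

# "Worst case, only ~50 good die per wafer. $360 per die."
worst_case = wafer_price / 50

# "~$260 per die" implies roughly this many good dies per wafer:
implied_good_dies = round(wafer_price / 260)
print(worst_case, implied_good_dies)
```

So $360/die corresponds to ~50 good dies, and $260/die to roughly 69, meaning the two posters disagree by about 19 good dies per wafer rather than about the arithmetic itself.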
Our numbers don't add up. With 6nm MCDs, a 5nm GCD, 2.25x wafer cost ($18k for a 5nm wafer) and 4x packaging cost for chiplets, I am getting 1.39x. With a $16k wafer price, that's 1.35x.

or about 1.6x more than my Navi31 estimate which
As noted in the earlier blog, bump pad pitch is 25 x 35 μm, giving a potential 57,000 total pads on the die, although Apple only mentioned 10,000 I/Os (20,000 pads).
Yes, I used this one, but it does the same. Using CALY's calculator, the difference shrinks to 1.32661x with $17,000 for a 5nm wafer and $10,000 for N6, pretty much in line with all my previous calculations.

But to summarise, you can use the below calculator, with a defect density of 0.07 and an edge loss of 4
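For anyone wanting to reproduce these ratios without an online calculator, here is a minimal version of the same math: the classic dies-per-wafer approximation with an edge exclusion zone, plus a Poisson yield model with D0 = 0.07/cm². The die areas (~608mm² AD102, ~305mm² GCD, ~37.5mm² MCD), wafer prices, and the $25 monolithic packaging adder are the speculative figures from this thread or outright assumptions, so treat the output as illustrative only:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300, edge_loss_mm=4):
    """Classic dies-per-wafer approximation with an edge exclusion zone."""
    d = wafer_diameter_mm - 2 * edge_loss_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def yielded_die_cost(die_area_mm2, wafer_price, d0_per_cm2=0.07):
    """Cost per good die under a Poisson yield model."""
    y = math.exp(-d0_per_cm2 * die_area_mm2 / 100)   # convert area to cm^2
    return wafer_price / (dies_per_wafer(die_area_mm2) * y)

# Speculative inputs from the thread: AD102 ~608 mm^2 on 5nm ($17k wafer);
# Navi31 = ~305 mm^2 GCD on 5nm + 6x ~37.5 mm^2 MCDs on 6nm ($10k wafer).
ad102 = yielded_die_cost(608, 17_000)
navi31 = yielded_die_cost(305, 17_000) + 6 * yielded_die_cost(37.5, 10_000)

# Hypothetical packaging adder: chiplet package assumed 4x a monolithic one.
pkg = 25
ratio = (ad102 + pkg) / (navi31 + 4 * pkg)
print(f"AD102 ${ad102:.0f}, Navi31 dies ${navi31:.0f}, ratio {ratio:.2f}x")
```

With these assumptions the silicon-plus-packaging ratio lands around 1.3-1.4x, i.e. in the same ballpark as the 1.32-1.39x figures quoted above rather than anywhere near 2-3x.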
Still trying to wrap my head around how everyone here is getting $340 (must be some magic number) for a chip 2x the size of the GCD in Navi31: $100-122 * 2x (some magic happens here) = $340!

Versus $340 for a 4090 for Nvidia.