> There was nothing especially 'cutting edge' about a 5nm family process in late 2022.

TSMC would disagree with you, as would Nvidia and AMD, both of which are using the N5 family for all their products right now, including those launching this year.
> 150mm² is low end. It just is. There is no process so advanced that it changes this. Also just a 128-bit bus. That's, again, a low-end spec that you use on low-end parts.

So the GeForce4 (NV25) at 128mm² and the Radeon 8500 (R200) at 120mm² (both on 150nm) were low-end then, despite being unquestionably the fastest cards of their day? And a >$20K 2nm wafer price doesn't make any difference compared to ~$4K for 28nm in the Maxwell timeframe? (IIRC; the exact numbers could be slightly off.)
> we don’t have good info on what a given die costs anyway

Exactly. There's also a whole side missing from this discussion: die cost depends on how many dies from a wafer are actually sold. If a 150 mm² die is the only one sold from a wafer, its cost is the same as that of a single 600 mm² die from the same wafer. It doesn't have to be the result of physical defects, either: a 150 mm² die binned for some ultra-high-clock SKU, for example, could end up being just one per wafer. The whole idea that die size says anything about how "low end" a product is is just completely wrong.
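The dies-per-wafer point is easy to put rough numbers on with the standard approximation. A quick sketch (the wafer price is a placeholder, not a real quote, and this ignores yield entirely):

```python
import math

# Rough dies-per-wafer estimate for a 300 mm wafer.
# WAFER_PRICE_USD is an illustrative placeholder, not a real TSMC quote.
WAFER_DIAMETER_MM = 300
WAFER_PRICE_USD = 17000

def dies_per_wafer(die_area_mm2: float) -> int:
    """Classic approximation: wafer area / die area, minus an edge-loss term."""
    d = WAFER_DIAMETER_MM
    side = math.sqrt(die_area_mm2)  # treat the die as square for simplicity
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / (math.sqrt(2) * side))

for area in (150, 600):
    n = dies_per_wafer(area)
    print(f"{area} mm²: ~{n} dies/wafer, ~${WAFER_PRICE_USD / n:.0f} per (unyielded) die")
```

Which is the point above: the ~4x cost gap between a 150 mm² and a 600 mm² die only holds if every candidate die on the wafer actually becomes a sellable product.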
> Very vague but if they’re referring to Blackwell vs 4090 that would be a nice bump.

What's the GDDR6 platform then?
> What's the GDDR6 platform then?

Presumably RDNA3.
A CES launch has been the norm for mobile parts, in sync with new CPUs, so this is pretty much as expected. Good to see them finally move to 8 GB on the xx50 series; I feel it was long overdue, tbh.
> So the GeForce4 (NV25) at 128mm² and the Radeon 8500 (R200) at 120mm² (both on 150nm) were low-end then, despite being unquestionably the fastest cards of their day?

They absolutely would have been low end if much larger GPUs had existed on the same architecture, as they do now.
It has always been fascinating to me to see how different architectures have evolved to achieve greater performance or lower power. Like, Maxwell reconfigured the SM and kept data more local, Pascal ramped the clocks, Turing employed specialized hardware, etc.
Assuming those are the Blackwell die configurations, what is the prevailing thought as to the primary driver of the (presumed) performance enhancements? To my layman's eyes, the main contenders are, in no particular order:
1. Significantly increased clock speeds;
2. Going wider, i.e., each TPC contains more than the 2 SMs of prior architectures, so there are more compute units without having more, or significantly more, GPCs;
3. Internal reconfigurations, such as cache improvements or enlargements, or scheduling enhancements, to improve perf/mm² or the "IPC" of the existing units;
4. A memory bandwidth boost from GDDR7 and/or a larger bus (at least for GB202);
5. Specialized hardware additions or improvements (e.g., fixed-function BVH-building hardware, expanding frame generation to additional frames, hardware to push GPU-driven work generation, etc.).
If the GB dies aren't on a smaller node than Ada, then it'd probably be a combination of 3, 4, and 5. If there is a die shrink, it opens up 1 and 2. I don't know which way I'm leaning, and I'm just talking out of my nethers, but it's fun to speculate.
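For a feel of how contenders 1 and 2 combine, here's a back-of-envelope FP32 throughput sketch. Every unit count and clock below is a made-up illustration, not a leak; the 128-lanes-per-SM figure assumes an Ada-style SM carries over:

```python
# Raw FP32 throughput = SMs x lanes/SM x 2 (FMA) x clock.
# All configs below are hypothetical illustrations, not Blackwell specs.
FP32_LANES_PER_SM = 128  # Ada-style SM; assumed unchanged
OPS_PER_LANE = 2         # an FMA counts as 2 FLOPs

def tflops(sms: int, clock_ghz: float) -> float:
    return sms * FP32_LANES_PER_SM * OPS_PER_LANE * clock_ghz / 1000

baseline = tflops(128, 2.5)   # roughly AD102-class numbers
wider    = tflops(170, 2.5)   # contender 2: more SMs, same clock
faster   = tflops(170, 2.9)   # contenders 1+2: more SMs, higher clock
print(f"{baseline:.0f} -> {wider:.0f} -> {faster:.0f} TFLOPS")
```

The catch, of course, is that contenders 3–5 don't show up in a number like this at all, which is why paper TFLOPS comparisons across architectures tend to mislead.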
Basic unit configs more than "specs" really.
> Unless Nvidia have pulled off some incredible architectural performance improvements (via clockspeed increases or sheer IPC gains), this is looking like a fairly disappointing generation. It would be nice if Nvidia threw us a bone in terms of pricing/value to compensate, but AMD seems half checked out of the GPU market by now, and consumers have decided they are fine with being exploited in order to have the new shiny thing, so.....

Guessing time!
5090ti - 448-bit bus, 28GB of RAM, 525W+, $2K, 40% faster than a 4090
5090 - 384-bit bus, 24GB of RAM, 450W, $1500, 20% faster than a 4090
5080ti - 256-bit bus, 16GB of RAM, 350W, $1K, 20% faster than a 4080
5080 - 224-bit bus, 14GB of RAM, 300W, $750, basically a 4080 Super
5070ti - 192-bit bus, 12GB of RAM, $750, 5% faster than a 4070 Ti Super
5070 - 192-bit bus, 12GB of RAM, $600, a 4070 Super
5060ti - 128-bit bus, 8GB of RAM, $400, a 4070
5060 - 128-bit bus, 8GB of RAM, $329, a 4060 Ti
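Those bus widths pin down peak bandwidth pretty directly once you assume a data rate. A quick sketch, with the 28 Gbps GDDR7 speed being my assumption rather than part of the guesses above:

```python
# Peak memory bandwidth implied by a bus width at an assumed per-pin data rate.
# 28 Gbps GDDR7 is an assumption for illustration, not a confirmed spec.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8  # GB/s

for name, bus in [("5090ti", 448), ("5090", 384), ("5070", 192), ("5060", 128)]:
    print(f"{name}: {bandwidth_gb_s(bus, 28):.0f} GB/s @ 28 Gbps GDDR7")
```

For comparison, the 4090's 384-bit bus at 21 Gbps GDDR6X works out to 1008 GB/s, so even the same-width guess above would be a solid bandwidth bump on GDDR7.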
> They absolutely would have been low end if much larger GPUs had existed on the same architecture, as they do now.

You're arguing semantics, in response to a moderator's post saying that such an argument really isn't adding to the conversation?
150mm² is low end today, and has been for quite a while. I feel like trying to find some wormy argument to slither out of acknowledging such a basic concept is bizarre. We're not talking about some magical super advanced process beyond anything else.