Well, it doesn't look like NV30 syndrome IMO. Why? Because NV30 was a completely new architecture compared to NV25, a totally new generation, and it had about 2X more transistors than NV25.
The difference between GT212 and GT200 (even GT200B) is not that big. There is NO new architecture and NO significant increase in transistor count. Moreover, I think GT212 will only have more ALUs than GT200, and the number of TMUs will be the same as GT200's. I think NVIDIA will go to 32 SPs per cluster (24 SPs now), so we could see something like this: 320 ALU, 80 TMU, 32 ROP, 512-bit MC. This is my opinion about GT212.
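The per-cluster arithmetic behind that guess can be sketched quickly. All figures are the poster's speculation, not confirmed specs:

```python
# Hedged sketch: GT200's shipping configuration vs. the speculated
# GT212 change of 24 -> 32 SPs per cluster, TMUs left untouched.
GT200_CLUSTERS = 10        # TPCs in GT200
GT200_SP_PER_CLUSTER = 24  # 3 SMs x 8 SPs each
GT200_TMU_PER_CLUSTER = 8

GT212_SP_PER_CLUSTER = 32  # the speculated bump

gt200_alus = GT200_CLUSTERS * GT200_SP_PER_CLUSTER   # 240
gt212_alus = GT200_CLUSTERS * GT212_SP_PER_CLUSTER   # 320
gt212_tmus = GT200_CLUSTERS * GT200_TMU_PER_CLUSTER  # 80

print(gt200_alus, gt212_alus, gt212_tmus)  # 240 320 80
```

So the 320 ALU / 80 TMU figure only holds if the cluster count stays at GT200's ten.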
That'd be one of the things amendable with 40nm technology. So do I; my expectations for GT212's die size are too large for it to make much sense in my mind.
I completely agree -- and that's almost exactly what happened to GT200, which is probably the only real example of NV choosing the wrong process since the NV30/130nm fiasco. In this specific case, I think what needs to be realized is that while NV could justify lagging behind if they hit all their milestones, if they get delayed then by the time their part comes out it would have been more attractive not to be so conservative on process technology out of fear for wafer cost/yields.
Exactly. And that's why it's pretty pointless to try and 'guess' die production cost from its size alone. And that's why being "slow" to smaller TSMC nodes doesn't mean paying more for the GPUs, especially when we're talking about node -> half-node transitions. TSMC is not a "dumb" entity that just creates naive roadmap pricing schemes not based on customer relationships. Both capacity and the different pricing models for different customers depend on complex feedback loops, and anything that doesn't take that into account is unlikely to be a very useful theory IMO.
That's a valid theory =) My guess, FWIW, is that it is a G98 replacement that got canned. The fact there was an 'i' (i.e. integrated) version of the same is a strong hint in that direction; given the debacle that is NVIDIA's chipset division, it probably got killed in favour of focusing on future 40nm products.
55nm RV670 went against 65nm G92 (although it's worth mentioning that NV's tactical mistake here made them do it -- they should've put G94 ahead of G92 and up against RV670 instead). I was thinking of RV670 and RV770. And I should have said process/die-size advantage. They went up against considerably larger 80/90nm and 65nm parts from Nvidia.
I don't think it's 'conservative', I think it's 'strategic'. They first 'try' the process with a simple chip and then transition the more complex ones. This 'simple chip' from NV was, for the most part, available as soon as the process allowed it to be. So it's not like they're waiting half a year before switching to a new process; they simply begin the switch in the low-end segment (which nobody here cares about anyway). And this strategy has mostly paid off. I don't think Nvidia's conservative stance on process adoption is debatable. They've openly been willing to take their time moving to new nodes.
130nm - NV31/1Q03 - RV360/4Q03
110nm - RV370/2Q04 - NV43/3Q04
90nm - R520/4Q05 - G7(1/2/3)/1Q06
80nm - RV535/3Q06 - G86/2Q07
65nm - RV630/2Q07 - G92/4Q07
55nm - RV670/4Q07 - G92b/2Q08
Yeah, exactly -- and we all see how that turned out. Perhaps, but remember G92 and G94 hit around the same time, with the 8800GT actually making it to market months before the 9600GT, so there's still a possibility.
If it's a straight GT200 shrink, yes. Wasn't G92 (the fastest G9x chip) the first 65nm GPU from NVIDIA? So IMO GT212 (the fastest GT2xx chip) could be NVIDIA's first 40nm GPU as well.
But I have severe doubts about GT212 being the first 40nm chip from NV. Even ATI's engineers now prefer to go with the simpler chip first. And for NV it's been something of a tradition since NV43. So I'm still pretty sure we'll see GT216 or GT214 before GT212.
It's a known problem; expect it to be fixed in a firmware revision... *waits for VR-Zone, Expreview, and/or Fudzilla to link to this* Another problem lies in the low transistor density of NV's 65/55nm GPUs -- G92b is bigger than RV770 on the same 55nm process while having 160M fewer transistors. I think that's the real problem for NV in the 65/55nm generation: simply put, NV's use of the 65/55nm processes sucks, and they need to improve it considerably on the 40nm node.
Maybe; if it was a 55nm product, almost certainly. If it's 40nm, almost certainly not. So iGT209 is killed too? 8)
You forgot RV350; of course everyone seems to always forget that one and how smoothly it went, poor ATI! If you think about it, NV was never that late with process transitions compared to ATI/AMD:
The cost benefit is likely to be smaller for small chips if they include a lot of I/O or analogue (i.e. this doesn't apply to handheld chips in the same way, etc.) -- however, very big chips are riskier and will suffer from yield problems. That's not just catastrophic defects, which coarse redundancy would partially prevent; it's also variability, among other things. Chips like GT216/RV740 in the 120-150mm² range are likely to be a relatively good compromise, IMO. Since I am no chip production/design expert: isn't it usually the case that you get better shrinkage the more logic and cache, i.e. digital ICs, you have on a chip? Perfect target: large dies.
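The defect-driven part of that yield argument can be sketched with the classic Poisson yield model, yield = exp(-D·A). The defect density used here is purely illustrative, not a real TSMC 40nm figure:

```python
import math

# Hedged sketch: Poisson die-yield model, yield = exp(-D * A).
# D = defects per cm^2 (illustrative value), A = die area in cm^2.
def die_yield(area_mm2, defects_per_cm2=0.5):
    area_cm2 = area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * area_cm2)

small = die_yield(135)  # a GT216/RV740-class die in the 120-150mm^2 range
large = die_yield(576)  # a GT200-class monster die

print(round(small, 3), round(large, 3))  # ~0.509 vs ~0.056
```

Even this crude model shows why the 120-150mm² range is a comfortable compromise: defect-limited yield falls off exponentially with area, and that is before variability effects are counted.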
I think you should consider inserting a GDDR3-based GT212 solution (a la 4850) in there. And that kind of kills the idea of having a GDDR5-based 192-bit solution (especially if GT212 is using a 256-bit bus, as I expect). I'm starting to think about something... When both I and possibly other sites heard about GT214, it certainly hadn't taped out. Assuming they left the possibility open until the end depending on market conditions, which is a big if, maybe they did switch to GDDR5 for GT214 and that LinkedIn entry means more than I thought (not that it really reveals much either way).
After all, this would be a quite impressive roadmap:
GT218: 64-bit GDDR3 [~15GB/s]
GT216: 192-bit GDDR3 [~60GB/s]
GT214: 192-bit GDDR5 [~110GB/s]
GT212: 384-bit GDDR5 [~240GB/s]
GT300: 512-bit GDDR5 [~320GB/s]
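The bracketed bandwidth figures in that roadmap follow from bus width and memory data rate: peak GB/s is (bus width / 8) bytes per transfer times the effective data rate in GT/s. The data rates below are assumptions chosen to match the listed numbers, not known memory specs:

```python
# Hedged check of the roadmap figures above.
# bus_bits: memory bus width; data_rate_gtps: effective transfers/s (GT/s).
def bandwidth_gbs(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

print(bandwidth_gbs(192, 2.5))  # GT216, GDDR3 -> 60.0 GB/s
print(bandwidth_gbs(384, 5.0))  # GT212, GDDR5 -> 240.0 GB/s
print(bandwidth_gbs(512, 5.0))  # GT300, GDDR5 -> 320.0 GB/s
```

So the listed numbers are internally consistent with ~2.5 GT/s GDDR3 and ~5 GT/s GDDR5.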
Well, I sure hope they will be -- for their own sake. Expect them to be much more aggressive on 40nm.
Sorry, I was using the B3D 3D Tables timeline 8) You forgot RV350; of course everyone seems to always forget that one and how smoothly it went, poor ATI! You also forgot G73b, so NVIDIA wasn't that late to 80nm in fact.
...
GT300: 512-bit GDDR5 [~320GB/s]
...
http://www.xfastest.com/viewthread.php?tid=17608&extra=page=1

NVIDIA_DEV.06A0.01 = "NVIDIA GT214"
NVIDIA_DEV.06B0.01 = "NVIDIA GT214 "
NVIDIA_DEV.0A00.01 = "NVIDIA GT212"
NVIDIA_DEV.0A10.01 = "NVIDIA GT212 "
NVIDIA_DEV.0A30.01 = "NVIDIA GT216"
NVIDIA_DEV.0A60.01 = "NVIDIA GT218"
NVIDIA_DEV.0A70.01 = "NVIDIA GT218 "
NVIDIA_DEV.0A7D.01 = "NVIDIA GT218 "
NVIDIA_DEV.0A7F.01 = "NVIDIA GT218 "
NVIDIA_DEV.0CA0.01 = "NVIDIA GT215"
NVIDIA_DEV.0CB0.01 = "NVIDIA GT215 "
NVIDIA_DEV.0A20.01 = "NVIDIA D10M2-30"
NVIDIA_DEV.06FF.01 = "NVIDIA HICx16 + Graphics"
Ooh, interesting, wonder if they're doing screen space ambient occlusion? That could be quite widely applicable -- erm, though my understanding of the algorithm is not exactly in-depth. New option to force ambient occlusion? Wonder how that works. Seems like a very application-specific thing.
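For anyone unfamiliar with the SSAO idea mentioned above: for each pixel you sample nearby depth-buffer values and count how many are closer to the camera, i.e. potential occluders. The toy depth buffer and simple 8-neighbour kernel below are purely illustrative; real implementations sample in a hemisphere with view-space positions and range checks:

```python
# Hedged sketch of screen-space ambient occlusion on a toy depth buffer.
# depth[y][x] is distance from the camera; smaller = closer = in front.
def ssao_factor(depth, x, y, radius=1):
    center = depth[y][x]
    occluded = total = 0
    for dy in (-radius, 0, radius):
        for dx in (-radius, 0, radius):
            if dx == 0 and dy == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(depth) and 0 <= nx < len(depth[0]):
                total += 1
                if depth[ny][nx] < center:  # neighbour occludes this pixel
                    occluded += 1
    return 1.0 - occluded / total if total else 1.0  # 1.0 = fully lit

# Toy 3x3 depth buffer: the centre pixel sits behind all its neighbours.
depth = [[0.2, 0.2, 0.2],
         [0.2, 0.9, 0.2],
         [0.2, 0.2, 0.2]]
print(ssao_factor(depth, 1, 1))  # 0.0 -> centre pixel is fully occluded
```

That per-pixel, depth-buffer-only nature is also why a driver can force it on without application support.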
NVIDIA_DEV.0A20.01 = "NVIDIA D10M2-30"
NVIDIA_DEV.06FF.01 = "NVIDIA HICx16 + Graphics"
Intriguing... Hadn't heard those before.
Ooh, interesting, wonder if they're doing screen space ambient occlusion? That could be quite widely applicable - erm, though my understanding of the algorithm is not exactly in-depth
Jawed