CarstenS said:
Two things in your very own evidence might contradict that: First, the very low memory bw, indicating the use of 800 MHz GDDR3 and, additionally, a disabled quad-ROP/ROP partition, whichever you prefer.
The S1070 also has 800MHz GDDR3, but it really uses the same memory chips as the GTX 280; it's just clocked down to improve reliability. Sorry for nitpicking, but I couldn't just let that point pass!
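To make the bandwidth point concrete, here's a quick back-of-the-envelope sketch; the 512-bit bus and the 1107 MHz GTX 280 memory clock are figures I'm assuming from public specs, not anything stated above:

```python
# Rough GDDR3 peak bandwidth: clock (MHz) * 2 (double data rate) * bus width (bits) / 8, in GB/s
def gddr3_bandwidth_gbps(mem_clock_mhz, bus_width_bits=512):
    """Approximate peak bandwidth in GB/s for a GDDR3 interface."""
    return mem_clock_mhz * 2 * bus_width_bits / 8 / 1000

print(gddr3_bandwidth_gbps(1107))  # GTX 280 @ 1107 MHz -> ~141.7 GB/s
print(gddr3_bandwidth_gbps(800))   # 800 MHz GDDR3 (S1070-style clocks) -> ~102.4 GB/s
```

So a noticeably lower quoted bandwidth can come purely from the memory clock, without any change to the chips themselves.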
Domell said:
But there is NO GT214 or GT218 on their roadmap right now. There are only GT212 and GT216, which are supposed to be released next year (most likely Q2).
GT212/GT214/GT216 all very much do exist and are on NVIDIA's roadmap. GT218 I haven't heard about in a while, but I wouldn't really expect it before the others anyway. Surely you don't think the public leaks always perfectly represent NVIDIA's internal roadmap?
Domell said:
Another way is not to increase the number of TMUs and ROPs, but to significantly increase the number of Shader Processors.
Isn't that exactly what I proposed?
G94: 32 TMUs, 16 ROPs, 64 SPs
G92: 64 TMUs, 16 ROPs, 128 SPs
GT214: 24 TMUs, 12 ROPs, 120 SPs
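Just to spell out the shader-heavy shift those numbers imply (the GT214 figures are speculation on my part, not confirmed specs), a trivial sketch:

```python
# SP-to-TMU and SP-to-ROP ratios for the chips listed above
chips = {
    "G94":   {"tmus": 32, "rops": 16, "sps": 64},
    "G92":   {"tmus": 64, "rops": 16, "sps": 128},
    "GT214": {"tmus": 24, "rops": 12, "sps": 120},  # speculative configuration
}

for name, c in chips.items():
    print(f'{name}: {c["sps"] / c["tmus"]:.1f} SPs/TMU, {c["sps"] / c["rops"]:.1f} SPs/ROP')
# G94 and G92 both sit at 2.0 SPs/TMU; the speculated GT214 jumps to 5.0 SPs/TMU.
```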
Anyone have any idea what kind of shrinkage NVidia will get with the 40nm process, compared against either 65nm or 55nm?
2x compared to 55nm, excluding non-digital stuff which should shrink very little, is a fair bet. That assumes a slight increase in transistors/mm² on top of the process' natural shrink (to compensate for the lower SRAM shrink), which seems like a reasonable bet to make in my mind. Of course, if the feature set/arch isn't 100% the same, or if they optimized noticeably more for power (as I've suggested everyone *should* do on 40nm), it's harder to estimate.
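For reference, the ideal scaling arithmetic behind that ~2x guess is just geometry; this ignores SRAM and analogue/IO, which shrink far less:

```python
# Ideal area scaling between process nodes: (new feature size / old feature size) squared
def ideal_area_scale(old_nm, new_nm):
    return (new_nm / old_nm) ** 2

print(1 / ideal_area_scale(55, 40))  # ~1.89x ideal density gain, 55nm -> 40nm
print(1 / ideal_area_scale(65, 40))  # ~2.64x ideal density gain, 65nm -> 40nm
# A slight improvement in transistors/mm^2 packing on top of the ~1.89x figure
# is what nudges digital logic toward the ~2x estimate above.
```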
Honestly, I'm more interested in the kinds of clocks they could achieve. Obviously the 90->65/55 transition for NV has been awful both in terms of density *and* performance, so if we assume some of those are fixable internal issues and 40nm allows for higher-than-traditional improvements at the same time (although who knows at what cost), it could get interesting. Not that the same (i.e. interesting) isn't true for AMD also, of course!
Jawed said:
The reason I ask is that Arun thinks that NVidia is not squishing features as closely as possible - preferring to space them out. If that methodology is kept for 40nm, what kind of shrink will occur?
Just to be clear, it's certainly not the only factor; I'm just arguing it's very likely to be one of them. Whether the deliberate choice to space things out is the largest factor or the smallest, who knows!
Another thing is that Windows 7 makes D3D10.1 a first class citizen for the desktop UI. Does this increase the likelihood that NVidia will be introducing a top-to-bottom 10.1 line-up before the D3D11 cards arrive?
I've heard the possibility that GT21x is D3D10.1 a few times, but who knows, there's enough FUD flying around that I don't think it makes a lot of sense to speculate about it at this point.