DX ten point one. nvidia am fail.
I heard that details of Larrabee will be available at the next IDF.
No, they should look to their own strategy of how to be a highly profitable business. With the exceptions of G80 and NV30, they've never gone for much more expensive GPUs than their competition in any generation, and they've been most successful when they had substantially better products than their competition at any given production cost (G7x, G8x, G9x before RV770).

What is it with all the armchair experts here? Nvidia has been operating a highly profitable business for years and years. The low-volume high-end segment's halo effect is just one aspect of this.
Are you guys saying they should be looking to AMD for advice on how to run a business, just because they managed to pull off one promising video card generation now?
Yeah, just like unification and HDR+MSAA :smile:

What? The Nvidia focus group members tell me DX 10.1 is pointless.
Yes, but didn't people like PdM (was he really banned from RWT?!?) and others report in the past that those 400+ mm^2 IPF chips cost Intel something like $150 (more or less) to actually make?
(I remember an old RWT discussion about manufacturing costs of Itanium 2 chips)
That'd be laughable if true.
Chances this is real: zero
reason: die size is IDENTICAL to GT200
common sense...
Small note, but the die is 600mm^2, give or take, not 576.
I've always been curious as to what functions those really are. Nvidia - no wonder - is very tight-lipped about it, and no one has ever mentioned which ones, and foremost why, DX10.1 should be giving them such a hard time.

If it's indeed the case that the base G8x architecture doesn't lend itself well to certain DX10.1 functionality, there really isn't a compelling reason for them to make substantial changes until DX11.
looks fake to me.
The same die size across generations seems very, very unlikely.
If you ask me, the 'GT300' (if indeed that is how it will be named) will simply be a 55nm die shrink of GT200 with no major differences other than the obvious: power consumption, clock speeds and YIELDS.
There is a good chance they might include GDDR5, however, allowing them to reduce the number of memory controllers while vastly increasing the effective memory clock speed. I'd sooner expect this in an 8800GTX -> 9800GTX style makeover, though.
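The bandwidth tradeoff here is simple arithmetic: peak bandwidth is bus width times per-pin data rate, so GDDR5's roughly doubled data rate can offset a narrower bus with fewer memory controllers. A quick sketch (the specific clocks are my own assumptions based on public GTX 280 and HD 4870 specs, not from this thread):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_mt_s: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) * (transfers per second)."""
    return bus_width_bits / 8 * data_rate_mt_s / 1000

# GT200 (GTX 280): 512-bit GDDR3 at ~2214 MT/s effective
print(bandwidth_gb_s(512, 2214))  # ~141.7 GB/s

# Hypothetical 256-bit bus with HD 4870-class GDDR5 at 3600 MT/s
print(bandwidth_gb_s(256, 3600))  # 115.2 GB/s
```

So a half-width GDDR5 bus lands in the same ballpark as GT200's full 512-bit GDDR3 bus, which is the appeal of dropping memory controllers.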
Reducing the number of crossbar MC channels would also require a reduction in the number of ROP partitions. NV would have to re-architect their ROP partitions to keep the same fillrates.
Assuming the abundant fillrate of GT200 is of any use to begin with.......
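The MC/ROP coupling described above can be sketched numerically. In GT200 each 64-bit crossbar MC channel pairs with one ROP partition, so cutting channels cuts pixel fillrate proportionally unless the partitions are re-architected (the counts below are assumptions based on public GT200 specs, not from the thread):

```python
def pixel_fillrate_gpix_s(mc_channels: int, rops_per_partition: int, core_mhz: float) -> float:
    """Peak pixel fillrate in Gpix/s, assuming one ROP partition per MC channel."""
    return mc_channels * rops_per_partition * core_mhz / 1000

# GT200 (GTX 280): 8 x 64-bit channels, 4 ROPs per partition, ~602 MHz core
print(pixel_fillrate_gpix_s(8, 4, 602))  # ~19.3 Gpix/s

# Same partitions on a 4-channel (256-bit) crossbar: fillrate halves
print(pixel_fillrate_gpix_s(4, 4, 602))  # ~9.6 Gpix/s
```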
Didn't you just describe GT200b?