Oh, I missed the R700 bit; I was thinking RV770, single die to single die. GT200 vs RV770 is 512b GDDR3 vs either 256b GDDR3 or 256b GDDR5. All I was saying is that if GT300 goes to GDDR5, power use will go up. That's it.
Okay, looking back at the quotes, I realize I should have quoted the original thing you said, to be clear. You said:
"ATI's power budget takes GDDR5 into account, NV's doesn't, so another black mark for NV. How much do you think the rumored 512b GDDR5 will consume?"
My point was basically "we know how much 512b GDDR5 will consume: just look at R700, scale it up for the higher memory clocks, subtract whatever the PCIe bridge takes, and maybe adjust for 1GB vs 2GB". So even assuming (unlikely) that NV engineers are too dumb to read a GDDR5 spec sheet, it's not hard to see that it's not such a big deal. Either way this is just a detail and it's best we don't focus too much on it...
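To make that back-of-envelope concrete, here's the kind of estimate I mean. Every number in it is a placeholder assumption (R700's memory power, the clocks, the capacity saving), not a measured figure; the point is that the method is trivial, not what the exact wattage comes out to:

```python
# Purely illustrative back-of-envelope; all figures are assumptions, not measurements.
r700_gddr5_power_w  = 40.0    # assumed power of R700's 2 x 256-bit GDDR5 (2GB total)
r700_mem_clock_mhz  = 900.0   # HD 4870 X2 GDDR5 clock
gt300_mem_clock_mhz = 1100.0  # assumed higher clock for the rumored 512b GDDR5 part
capacity_factor     = 0.9     # assumed small saving from 1GB instead of 2GB

# R700 already carries 512 bits' worth of GDDR5 in total, so only clock and
# capacity need adjusting; dynamic power scales roughly linearly with clock.
# (The PCIe bridge correction only matters for board-level comparisons.)
estimate_w = r700_gddr5_power_w * (gt300_mem_clock_mhz / r700_mem_clock_mhz) * capacity_factor
print(f"rough 512b GDDR5 estimate: ~{estimate_w:.0f} W")  # ~44 W with these placeholders
```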
With regard to GT300 in general, Charlie, the problem is that you made a whole bunch of bizarre claims in your GT300 article that are almost certainly horribly wrong. And that's before we consider the non-GT300 tidbits you got wrong, such as:
- Process node density. The node numbers are marketing: 55nm is not 0.72x the area of 65nm, it's ~0.81x, and the real density figures are publicly available, so those are what you should look at (quick arithmetic after this list). Even density from one full node to another isn't fully comparable, but you should at least be looking at the kgates/mm² and SRAM cell size numbers.
- "The shrink, GT206/GT200b" - I can tell you with 100% certainty GT206 got canned and had nothing to do with GT200b. However it still exists in another form
- GT212 wasn't a GT200 shrink: it had a 384-bit GDDR5 bus, and that's just the beginning. You have had the right codenames and part of the story of what happened for some time now, which is more than can be said for the other rumor sites/leakers, but you don't have the right details.
- "Nvidia chipmaking of late has been laughably bad. GT200 was slated for November of 2007 and came out in May or so in 2008, two quarters late." - it was late to tape-out, but it was fast from tape-out to availability (faster than RV770). G80 was originally aimed at 2005 for a long time but only taped-out in 2006, does that make it a failure? For failures, look at G98/MCP78/MCP79 instead.
- etc.
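On the density point, the arithmetic behind 0.72x vs 0.81x is simple enough to show; the ~0.9x linear figure below is just what the published ~0.81x area shrink for the 65nm-to-55nm half node works out to:

```python
# Why ~0.81x and not 0.72x: the node names naively suggest a (55/65)^2 area
# scaling, but the actual 65nm -> 55nm half node is roughly a 0.9x linear shrink.
naive_from_names = (55 / 65) ** 2  # ~0.72, what you get from the marketing names
actual_half_node = 0.9 ** 2        # ~0.81, from the ~0.9x linear shrink
print(f"from node names: {naive_from_names:.2f}x, actual: {actual_half_node:.2f}x")
```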
Then we get to GT300:
- "The basic structure of GT300 is the same as Larrabee" - if that's true, you need scalar units for control/routing logic. That would probably be one of the most important things to mention...
- "Then the open question will be how wide the memory interface will have to be to support a hugely inefficient GPGPU strategy. That code has to be loaded, stored and flushed, taking bandwidth and memory." - Uhh... wow. Do you realize how low instruction bandwidth naturally is for parallel tasks? It never gets mentioned in the context of, say, Larrabee because it's absolutely negligible.
- "There is no dedicated tesselator" - so your theory is that Henry Moreton decided to apply for early retirement? That's... interesting. Just like it was normal to expect G80 to be subpar for the geometry shader, it is insane to think GT300 will be subpar at tesselation.
- " ATI has dedicated hardware where applicable, Nvidia has general purpose shaders roped into doing things far less efficiently" - since this would also apply to DX10 tasks, your theory seems to be that every NV engineer is a retard and even their "DXn gen must be the fastest DXn-1 chip" mentality has faded away.
The reason I took the time to list all this is so that I don't feel compelled to keep replying in this thread. It's hard to take you seriously about NV's DX11 gen when that's your main article on it, and I wasn't even exhaustive in my list of issues with it, nor did I include the details I do know. Their DX11 gen is more generalized (duh), but not in the direction/way you think, not as much as you think, and not for the reasons you think. Anyway, enough posting from me for now!