Dave, c'mon now. We all know that a typical process advance yields anywhere from a 30-80% increase in logic budget. It's absolutely huge in both absolute and relative terms.
Vince - what was the transistor difference between R200 and R300? By my calcs about 78% (and it's not that different from NV25 to R300) - that's "absolutely huge in absolute terms", can you acknowledge that? They achieved it on the same process where you claim a move to a new process is required.
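A quick sanity check on those figures - a minimal sketch assuming the commonly cited transistor counts (roughly 60M for R200, 63M for NV25 and 107M for R300; exact numbers vary by source), with the ideal density gain of a 150nm-to-130nm shrink included for comparison against the "30-80%" claim above:

```python
# Approximate transistor counts from period coverage (estimates, not specs).
r200 = 60e6   # ATI R200 (Radeon 8500), 150nm
nv25 = 63e6   # NVIDIA NV25 (GeForce4 Ti), 150nm
r300 = 107e6  # ATI R300 (Radeon 9700 Pro), also 150nm

print(f"R200 -> R300: +{(r300 / r200 - 1) * 100:.0f}%")  # ~ +78%
print(f"NV25 -> R300: +{(r300 / nv25 - 1) * 100:.0f}%")  # ~ +70%

# Ideal density gain from a straight 150nm -> 130nm linear shrink:
# transistor budget per unit area scales with the square of the feature size.
print(f"Ideal 150 -> 130 shrink: +{((150 / 130) ** 2 - 1) * 100:.0f}%")  # ~ +33%
```

In other words, R300's ~78% jump over R200 on the *same* process is well beyond what an ideal shrink alone would buy.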
Designing a new architecture, generally based around a DX/OGL revision, would seem to be the time you'd want that extra logic.
You might want it; the question is whether you need it. History dictates that in this case it was not needed - and indeed the performance figures show that not only was it not needed, but that the architecture could still be achieved with the performance advantage on the side of the 150nm part.
This is what my comment entailed, and your responses totally missed it. I have no idea what you're talking about.
Vince, you stated: "Dave, you should need the new processes to facilitate your new architectures" -- the change from R200 to R300 is clear evidence that this is not the case! It's a new architecture, meeting (indeed exceeding) the needs of a major new API, doubling the performance of its predecessor, and all on the same process as its predecessor; ergo you don't need a new process to facilitate a new architecture. Can you see that?
I'm going to guess you're not saying that using the more advanced process is a bad thing.
Using a more advanced process when it's not ready is a bad thing - both ATI and NVIDIA were warned by TSMC that what NVIDIA wanted to achieve with NV30 would not be ready for some time; on that basis ATI subsequently chose the 150nm process, while NVIDIA chose to forge on despite those warnings. For R300 and NV30, which was the correct choice?
In what way? What you're stating is totally irrelevant.
Again Vince, you stated "you should need the new processes to facilitate your new architectures" - that is quite obviously not the case.
150nm is more expensive than 130nm (per die, not per mask), it's less advanced in thermal attributes, and less dense in logic.
Die size and thermal attributes are also very dependent on what you do with your chip. NV30 and R300 have roughly the same die size, and the thermal advantage probably also lies with R300, even at similar clock speeds (which also hands the performance to R300).
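For reference, a rough comparison using die figures cited in period coverage - approximate numbers only, as reported sizes and counts vary by source:

```python
# Approximate die data from period reports (estimates, not official specs).
dies = {
    "R300 (150nm)": {"transistors": 107e6, "area_mm2": 218},
    "NV30 (130nm)": {"transistors": 125e6, "area_mm2": 199},
}

for name, d in dies.items():
    # Logic density in millions of transistors per square millimetre.
    density = d["transistors"] / 1e6 / d["area_mm2"]
    print(f"{name}: ~{d['area_mm2']:.0f} mm^2, ~{density:.2f} MTr/mm^2")
```

The 130nm part is denser, as you'd expect, but the dies land in the same ballpark - which is the point: the older process did not force an uncompetitive die size.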
Without invoking fanb0y qualities of ATI-nVidia, how can you spin utilizing an older process as a good thing?
Using smaller processes at the right time, for the right requirements, is the right thing to do. ATI did not need to utilise a smaller process to meet the demands of DX9 whilst still delivering large gains in performance over all previous parts - was it not a good thing that they used 150nm? We'd likely not have seen a DX9 part before 2003 had they not done this.
It's also clear that no other graphics manufacturer has pushed the 150nm process to the limit yet - good use of lithography also means understanding how far you can go with current processes, something that NVIDIA obviously didn't grasp in the changeover from NV25 to NV30.
Ok, this is a fanb0y comment. 130nm allows for low-K dielectrics, it's cheaper per die, it allows for higher transistor counts, lower thermal dissipation... the list goes on.
No, Vince. Did any other graphics manufacturer get anywhere near the utilisation of the 150nm process that ATI achieved?
As for what 130nm brings - yeah, that's the case, but were any of those things brought to consumers in 2002? No. I could sit here and list the benefits of a 90nm-based process, but am I actually going to be able to use those benefits to make my gaming better now? No.
And yet I'll get 30 replies about how this one time it didn't work out - so it's the de facto standard. So, the one time there are lithography problems and design faults... it becomes the standard. But we'll forget about the other 6 successful times it worked before.
The entire thing is a crapshoot - sometimes you get lucky, other times you don't. However, as I said, fully understanding the headroom of the current processes is as necessary as understanding the advantages (and drawbacks) of bleeding-edge processes. Both ATI and NVIDIA have warned of increasing cycle times (both in metal revisions and in entirely new processes), so understanding when one process has a suitable amount of headroom left, and when a newer one is needed, is going to be even more critical as time goes by - the mistake of shooting for a process that isn't entirely ready yet may cause even more delays as we move on.