The NV30 Chronicles: A Short story by Mr H. Binder

MuFu

Originally posted by Hellbinder
Guys.. I don't work at nVidia, and I don't have *that* kind of information. And anything I tell you is never going to get validated in any kind of official way. I simply had a conversation with someone who works at another company, whose identity will remain unrevealed. It was basically a long exchange regarding the true nature of the delays behind the GFFX, based on the obvious connections all IHVs have with TSMC. The gist of it is that the .13u process was *NOT* all botched up at TSMC, but there were definitely problems getting the low-K dielectrics working towards the end of the process.

[story summation]
The NV30 originally taped out in its original form back in Feb of last year. Which is why you all around here kept getting into flame wars over the summer after every single stockholder address: it's taped out, it's not taped out, it's taped out, etc. There were even *very* early revisions of the NV30 in people's hands before the R9700 debuted at E3 in that Doom III system.

The problem was that, for whatever reason, the NV30 was having serious signal bleed and other anomalies, which caused various low-level revamps of the design. Yes, there were a few issues with the .13u process where the NV30 was concerned, but other products had already been completely finished at .13u all through last year. However, the entire nature of the issue changed once the true nature of the R300 was revealed. Then the entire focus changed to getting the core speed up. Originally the NV30 was designed to be released at 350 and 400MHz, which was clearly not going to cut it against the R300 from a technology and performance leadership position. Back to the drawing board for a face lift; final revisions etc. were finished around August, and the chip was finally ready for a final tape-out in early September.

However, this is where all the MAJOR issues with .13u come into play. They hit serious issues at that point with the implementation of low-k, which is what they were completely banking on for getting lower power and heat in order to get the chip running at 500MHz, the new target clock speed for the Ultra. The whole train wrecked for about 14 days. nVidia at that point said: screw this low-k, we can make it without it. They got the chip back in its final form without low-k. It *almost* worked, but there was some issue that required a final metal-layer respin, which had them back with a final working prototype board with initial drivers just before their big launch. However, the card was still only running at 400MHz at that point. It would run at 500MHz, but it was very unstable. Thus at this point (I believe) they did one more revision to the chip they have now. They already knew that they would need a giant cooling system as soon as low-k fell through. What they did not count on, until pretty late in the game, was the necessity of a 12-layer PCB for complete stability due to power requirements and other issues. Thus the big announcement a few weeks ago.

Which all brings us to where we are today, and why the NV30 is in the condition that it is. Ultimately it boils down to an NV30 that was originally targeted at 350-400MHz, with about 2.5x the performance of the GF4 in the *best* scenario. Which of course would have been a great product if the R300 had been targeted at the GF4.

In this sense Type is perfectly correct in his statement about the early overhype. It put nVidia in a position where they HAD to deliver or it was going to be a complete disaster; they did the only thing they could.
[/story]

Please don't flame me to hell if you think this is bunk or whatever. I understand if everyone wants to reject it. However, I was asked to elaborate on my little comment, so I did. I believe it's a pretty accurate summary, even if some of the specifics are a little off.

Interesting stuff. I still believe very strongly that the Xbox project was the main contributing factor to nVidia being on the back foot right now (having been privy to direct information from two internal parties supporting that idea), but what Hellbinder details is quite intriguing... it all seems to make sense, does it not?

MuFu.
 
It certainly matches many of the theories/rumors that have been spread about.

Of course, that could just mean this story is a compilation and regurgitation of those rumors.

Whatever the reason, they're plenty late with their new product. Hopefully their next product won't be quite so plagued with whatever delayed this one.
 
RussSchultz said:
It certainly matches many of the theories/rumors that have been spread about.

Yeah - there is one that I am ashamed to say I was probably responsible for, and that was that "something" taped out in Feb/March last year and came back from TSMC looking *not great*. A few people at ATi thought that it might be NV30 on a 0.15u process, and I of course got excited like a total idiot and gasbagged it all over the place. Knowing that the part would rely on high clockspeeds for its real-world performance, a 0.15u tape-out obviously spelled doom & gloom for nVidia. People then put two and two together and assumed *severe* process problems @ 0.13u. I can see now how that whole fuss could well have been about the first NV30 tape-out (0.13u) that Hellbinder mentioned.

...the chip was finally ready for a final tape-out in early September

That I know is 100% true - not from web-based speculation at all.

I believe the fact that it has a 12-layer PCB has more to do with time constraints than anything else. I'm sure they could have developed an 8-layer board (like the first NV30 PCB) if they'd had a longer R&D run with the target clockspeeds in mind.

MuFu.
 
Anyone else here know the story behind Parhelia??

If not, then there is no use asking if someone else has found the same similarities in the design process as I have... though, manufacturer and process changes aside, _the stuff_ has quite a few similarities...
 
I wonder if Nvidia might not have been better off just releasing it at 350/400 MHz six months ago and conceding the performance crown. As it is, it seems like they were slaving away on the part in the back room, desperately trying to make it better than the R300. In the end they barely succeeded, and it seems to me they only threw it out the door at this point because they knew the R350 was on the horizon and that if they didn't get it out now they'd never be able to compete.
 
I am probably wrong, but I was under the impression that the NV2A could technologically be considered a subset of the NV25. Therefore everything that existed in the NV2A either existed previously in the NV20 or was being developed for the NV25. Could the design process alone really set them back that far?

Ninelven
 
You mean Hellbinder doesn't work for NVidia? :oops:
:)

I wonder how early NVidia knew what the R300 would be. I think if they had been able to release a 400 MHz NV30 within a few weeks of the R300, they would've been fine (nearly)--performance wouldn't have been that far off and they could trumpet the more advanced shaders. If the delay was entirely due to a redesign for 500 MHz, they really shot themselves in the foot.
 
ninelven said:
I am probably wrong, but I was under the impression that the NV2A could technologically be considered a subset of the NV25. Therefore everything that existed in the NV2A either existed previously in the NV20 or was being developed for the NV25. Could the design process alone really set them back that far?

Ninelven

The design of any chip, regardless of how derivative, takes months. At least 1 month of the process is in fabrication, 1 month in layout, etc. Add several months of design implementation, verification, and system design, and you're looking at at least 6 months. Even if you don't design a thing but use IP bought from others, it still takes months of work integrating the various pieces.

It's certainly not as simple as 'snip snip' and you're done.
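
To make that schedule arithmetic concrete, here is a rough back-of-the-envelope sketch; the phase names and month counts are illustrative placeholders I've picked, not figures from the poster, nVidia, or TSMC:

# Back-of-the-envelope schedule sum for a derivative chip.
# All phase lengths below are illustrative guesses, not real data.
phases_months = {
    "design implementation": 2,
    "verification": 2,
    "layout": 1,
    "fabrication (tape-out to first silicon)": 1,
    "board/system design and bring-up": 1,
}

for phase, months in phases_months.items():
    print(f"{phase:45s} ~{months} month(s)")

print(f"{'rough minimum total':45s} ~{sum(phases_months.values())} months")

Even with optimistic numbers like these, the total lands in the 6-month-plus range described above, before any respins are counted.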
 
I don't believe it. I actually believe something that Hellbinder posted. Amazing. This looks, to me, to be at least 90% accurate. The only thing that may not be completely accurate is that the FX chip was out in its "current incarnation" in February of last year. Well, that and the following:

While I can't be certain about the points on why nVidia was pushing so hard for the low-K dielectric process and failed, they do seem plausible. Still, there was probably more to it than that. This is, after all, somewhat against what we've seen of the company culture at nVidia over the past few years. Since the original TNT (which was released at .35 micron, even though it was meant to be at .25 micron in order to usurp the Voodoo2 SLI), nVidia has been all about execution. If nVidia delayed just for the purpose of increasing performance, this would be a first (and a grave mistake).
 
The only thing that may not be completely accurate is that the FX chip was out in its "current incarnation" in February of last year

No, I said *original form*, not *current form*.

I also mentioned that there were some revisions done throughout last year, leading up to the big *Final Tape-Out* that is quite famous now.
 
Nappe1 said:
Anyone else here know the story behind Parhelia??

If not, then there is no use asking if someone else has found the same similarities in the design process as I have... though, manufacturer and process changes aside, _the stuff_ has quite a few similarities...

Both chips seemed to take a similar amount of time to go from initial tape out to production. Although if the story is true, Nvidia is late and faster than the initial spec while Parhelia was late and slower.
 
Hellbinder[CE] said:
The only thing that may not be completely accurate is that the FX chip was out in its "current incarnation" in February of last year
No, I said *original form*, not *current form*.

I also mentioned that there were some revisions done throughout last year, leading up to the big *Final Tape-Out* that is quite famous now.
Well, nVidia has publicly stated that they have been testing the .13 micron process since about that time, but I find it more likely that most of those chips in the first few months were far from fully-functional graphics chips.
 
This was quite interesting, but why don't you get some of that privy info and tell us what the heck they are up to now, aye?
 
Well, nVidia has publicly stated that they have been testing the .13 micron process since about that time, but I find it more likely that most of those chips in the first few months were far from fully-functional graphics chips.

I agree. During Feb last year, TSMC was far from capable of making anything close to a functional NV30. Heck, in 1H 2002, they barely produced ANY .13u wafers, let alone anything as complex as the NV30 in transistor count.
 
Chalnoth said:
This is, after all, somewhat against what we've seen of the company culture at nVidia over the past few years. Since the original TNT (which was released at .35 micron, even though it was meant to be at .25 micron in order to usurp the Voodoo2 SLI), nVidia has been all about execution.

I thought it was the other way around - after the .35 micron TNT, Nvidia has banked heavily on the latest unproven processes, and that they, up till NV30, have been "lucky" with that strategy.
 
While the 'evolution' of the NV30 makes sense, it doesn't make sense that the drivers should be so immature at this point if they had different NV30-rev chips to work on for so long. Sure, they were not the same rev, but still. Think about it.
 