Detailed tape-out process

To summarise the editorial:

If you buy the NV30 in December (if it is even on the shelf by then), you are buying a product that is not worth the money. I wonder how they will be able to sell a barely functional product to their customers (read: OEMs + board manufacturers).

I hope they have better luck than ATi. Imagine if ATi had released the A01 chips for production (even higher power draw; <250 MHz). That product would have had lower performance than the GF4 Ti4600 (without AF+AA). If the same happens with the NV30 <-> R300, then all FanATics would have a really good December :D
 
Didn't something similar happen to the original GeForce3? The very first boards had various issues, if my memory serves me right...

Although it doesn't matter much - the availability of "bad" chips will be low, and after half a year everything will be back to normal... and everybody will see how much the NV30 drivers have improved over time (compared to those used in the December reviews) 8)

The most important thing is to keep their promise about the "holiday season". Even if some reviews state that there are minor issues, it won't affect the stock price as much as delaying the product for another couple of months.
 
I thought they had a few tape-outs before, and now they have the final tape-out that only needs a few improvements - or am I totally wrong???
 
Just to clarify: is "tape out" an archaic term referring to the magnetic tape with the design database on it being physically sent ("out") to the fab?
 
mboeller said:
If you buy the NV30 in December (if it is even on the shelf by then), you are buying a product that is not worth the money. I wonder how they will be able to sell a barely functional product to their customers (read: OEMs + board manufacturers).

Ack, I don't believe this "sell a barely functional chip" crap. Not only would it be very damaging to a company's image, but if this were a standard way of getting a chip to market quickly, we would have seen a lot more halfway functional Radeons and GeForces out there... :rolleyes:

I cannot remember a single incident, besides some rumours that the GF3 A03 didn't have support for the 3D textures that the GF3 A05 has. And it was never confirmed anyway.

Sure, it makes perfect sense to showcase first silicon (if it works of course!) but this rapid launch of limited functionality idea to consumers is just stupid.
 
A good read, BTW. So would Nvidia have had a much easier time doing an NV2x refresh on .13 first, then doing the NV30?
 
LeStoffer said:
single incident, besides some rumours that the GF3 A03 didn't have support for the 3D textures that the GF3 A05 has. And it was never confirmed anyway.

Yes, it was. There is/was a presentation in the NVIDIA developer area that clearly states that 3D textures are only available in A5. It is hard to find, though, as it wasn't directly about 3D textures.
 
Gunhead said:
Just to clarify, is "tape out" an archaic term relating to the magnetic tape with the database on it physically sent to the fab ("out")?
Yes. Nowadays 'FTP-out' is probably a better description of the process...
 
Gunhead said:
A good read, BTW. So Nvidia would have had a much easier time doing first a NV2x refresh in .13, then doing NV30?
Yes, but even that would be lucky to be as good as the R9700 - if you look at the benchmarks, the R9700 is between 1.5x and 2x Ti4600 performance, and getting an extra 50%+ clock rate out of a design that was originally simulated at the lower rate is very hard...
 
LeStoffer said:
Ack, I don't believe this sell "barely functional chip" crap. Not only would it be very damaging to a company's image, but if this was a standard way of getting a chip quickly to the market, we would have seen a lot more of halfway functional Radeon's and GeForce's out there... :rolleyes:

Hmmh... this reminds me... point sprite acceleration in the Radeon 7x00 cores has something very wrong with it. You can get point sprites running about 2-10 times faster in software. A GeForce DDR surpasses my Radeon in point sprites; the GF256 DDR is about 20 times faster than the Radeon in the 3DMark point sprites test.

I even wonder why they didn't do anything about it when launching the RV200... it has the same problem.

So what I am saying is that there have probably been some non-working things in chips, but none of them so critical that they would be visible to the user (because the user doesn't actually know how the card/chip should behave if everything were fine).
 
LeStoffer said:
Ack, I don't believe this "sell a barely functional chip" crap. Not only would it be very damaging to a company's image, but if this were a standard way of getting a chip to market quickly, we would have seen a lot more halfway functional Radeons and GeForces out there... :rolleyes:
Sounds right to me. I imagine we'll see the nv30 operating as soon as nvidia can manage (to derail R9700 sales), with a promise of 'immediate production' which is likely to turn into a long, long wait.

When it comes to limp-along hardware... well, they have to capture the top end from the R9700; anything less will be a disaster for them, as it is shipping months ahead and can do everything theirs can.

Here is the hook, though. Not only is nVidia going with a very new design, they are also going with a new process, and that can cause just about anything to go wrong.
That's the big point in there.
 
All they have to do is get one working chip that can run insanely fast speeds and put it on a board. Then they can pass it around to the review sites like a crack pipe.
 
Nappe1 said:
Hmmh... this reminds me... point sprite acceleration in the Radeon 7x00 cores has something very wrong with it. You can get point sprites running about 2-10 times faster in software. A GeForce DDR surpasses my Radeon in point sprites; the GF256 DDR is about 20 times faster than the Radeon in the 3DMark point sprites test.

I even wonder why they didn't do anything about it when launching the RV200... it has the same problem.

So what I am saying is that there have probably been some non-working things in chips, but none of them so critical that they would be visible to the user (because the user doesn't actually know how the card/chip should behave if everything were fine).

Well, there is a big difference between accepting a general (as in: present in all chips) design flaw and releasing it as-is, and sending out chips with individual flaws as that article suggests.

The first can be okay, as the company knows the flaw exactly and how to 'hack around it' in the drivers. The latter is, well, crap.
 
His reticle discussion is a bit overblown. Reticles cost nearly a million dollars, the tools that do the layout and backend cost a million dollars, and there are engineers on all sides doing checks and verification. I'm not saying it's impossible to have reticle generation problems, but it isn't some shoot-from-the-hip, maybe-it'll-work kind of thing.

I'm not sure why they'd say "Everything going perfect, nVidia will have a small, rather imperfect number of NV30 in December". Generally, everything going perfect has a different meaning to me. ;)

It doesn't necessarily take longer to go from tape out to finished goods on .13 than it does on .15.

Modern testing is done via scan chains, where a pattern is injected into a debug port on the die. This debug port connects to a subsystem that touches every gate on the chip and allows any gate to be set to any state. The die is then clocked once, and the chain is read out and verified for correctness. This is independent of the logical design--all it does is test that the chip was manufactured without defects. This will separate the wheat from the chaff when it comes to fabrication errors. (This test, by the way, is run on every single part coming off the production line, even if they've been manufacturing it for years.) It takes a couple of seconds per part (though you can test hundreds in parallel), so it is nothing that will take longer on .13 than .15.
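The shift-in / capture / shift-out idea described above can be sketched like this - a toy Python model only, with a made-up "golden" logic function and a made-up defect, nothing resembling real ATPG or scan insertion:

```python
# Toy model of a scan-chain manufacturing test: shift a pattern into the
# chip's flip-flops, pulse the clock once so the combinational logic
# captures new values, then shift the result out and compare it against
# a fault-free ("golden") simulation. Everything here is illustrative.

def golden_logic(state):
    # Stand-in for the fault-free combinational logic between flip-flops:
    # each bit becomes the XOR of itself and its neighbour.
    n = len(state)
    return [state[i] ^ state[(i + 1) % n] for i in range(n)]

def scan_test(device_logic, pattern):
    # Shift `pattern` in, clock once, shift the captured state out, and
    # compare with what a defect-free die would have produced.
    captured = device_logic(pattern)   # one functional clock
    expected = golden_logic(pattern)   # golden-model response
    return captured == expected

def faulty_logic(state):
    # A die with a manufacturing defect: output bit 2 stuck at 1.
    out = golden_logic(state)
    out[2] = 1
    return out

pattern = [1, 0, 1, 1, 0, 1, 0, 0]
print(scan_test(golden_logic, pattern))  # True  - good die passes
print(scan_test(faulty_logic, pattern))  # False - defective die is caught
```

Note how the pass/fail decision never looks at what the logic is *for* - it only compares against the golden simulation, which is why the same test works regardless of process node.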

Once they know they have parts with no fabrication errors, they can do functional verification. Just because it's .13 doesn't mean the LOGIC is any more difficult to verify (beyond the fact that there is likely more of it).

To sum it up: being a completely new design increases functional verification time, and going to a new process can cause yield issues if the fab did not correctly model the parameters or doesn't have the process dialed in yet. It's not, however, some giant spaghetti bowl so intertwined that it is difficult to separate the two issues.

Plus, he keeps talking about silicone. What do boobies have to do with the semiconductor business? My experience is quite the opposite. ;)
 
BIST is good for regular structures (like RAMs), but I think using BIST for general fabrication verification poses its own challenges:

1) Your test program must be known at tapeout time
2) You can't change it
3) It takes up silicon area (though test time costs money too)
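For contrast with the external-tester scan approach, here is a toy sketch of the usual logic-BIST structure: an LFSR generates pseudo-random patterns on-chip, and a MISR-style compactor squeezes the responses into one signature. The taps, widths, stand-in circuit, and four-pattern run are all made up for illustration:

```python
# Toy logic BIST: an on-chip LFSR generates pseudo-random test patterns,
# and a MISR-style compactor squeezes the circuit's responses into one
# signature that is compared against a precomputed golden value.
# Taps, widths and the stand-in circuit are illustrative only.

def lfsr_patterns(seed, taps, width, count):
    # Fibonacci LFSR: the feedback bit is the XOR of the tapped positions.
    state, patterns = seed, []
    for _ in range(count):
        patterns.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return patterns

def misr_compact(responses, width):
    # Rotate-and-XOR compactor: order-sensitive, so a flipped response
    # bit almost always changes the final signature.
    sig = 0
    for r in responses:
        sig = ((sig << 1) | (sig >> (width - 1))) & ((1 << width) - 1)
        sig ^= r
    return sig

def circuit_under_test(x):
    return (3 * x + 1) & 0xFF   # stand-in for the combinational logic

# A real BIST run uses thousands of patterns; four keeps the demo small.
patterns = lfsr_patterns(seed=0xA5, taps=[7, 5, 4, 3], width=8, count=4)
golden_sig = misr_compact([circuit_under_test(p) for p in patterns], 8)

# The same die with output bit 0 stuck at 1:
faulty_sig = misr_compact([circuit_under_test(p) | 1 for p in patterns], 8)
print(golden_sig != faulty_sig)  # True - the defect changes the signature
```

This also illustrates points 1) and 2) above: the pattern generator and the golden signature are baked into the design, so the "test program" is frozen at tape-out.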

Here are a few articles that compare the two methods, with some historical background:
http://www.semiconductorfabtech.com/features/tap/articles/edition1/taptech1.pdf/tt1_infrastru.pdf
http://www.semiconductorfabtech.com/features/tap/articles/edition1/taptech1.pdf/tt1_systest.pdf

At our company, for example, we are constantly improving our test programs to get closer to the process edge so we don't throw away as many parts.
 
Thanks for the links.
In fact, during the '80s I almost wrote a fault simulator to go with a hierarchical simulator I had developed.
 
RussSchultz said:
Plus, he keeps talking about silicone.

Well, they do have those 3dfx people working for them now. Maybe they brought Asia Carrerra in to consult on the hardware design.

NV30, the first chip that jiggles.
 
All they have to do is get one working chip that can run insanely fast speeds and put it on a board. Then they can pass it around to the review sites like a crack pipe.

:LOL: Until one of the review sites overclocks it too much and it goes up in a puff of smoke :). I wonder how many dies it will take to build another one ;).
 