NV30 not taped out yet....

LeStoffer said:
Maybe, just maybe, the NV30 isn't really delayed according to nVidia's own internal timetable.

Initially, NV30 was targeted to be the spring 02 product. Then there was the GF3 delay and as a consequence of this the GF4 was delayed as well. The NV30 became the fall part. Now we seem to have another delay ...

I don't believe, however, that there was any delay due to "hey, we need to add a 256 bit bus also!". The major design cannot be changed, but some minor tweaking after the test simulation might have taken place during the last month or so.

Yes, I agree. The bus size, and therefore the memory bandwidth expected, is a key design target; you build your architecture around this, designing cache sizes, the number of TMUs, pipelines and the memory controller, and deciding how much engineering effort has to be put into bandwidth-saving features (or not). Simply adding more memory bus lines late in the process breaks down your architecture design.
 
nVidia has always pushed technology. Back in the days when 3DFX was their main competitor, they chose to pursue the riskier new .18 die & expensive DDR memory while 3DFX went with the tried & true .25 die & cheaper SDRAM memory. Due to 3DFX’s mismanagement, and the fact that nVidia was correct in choosing the newer technology, 3DFX is now history. This time, though, the newer technology is biting nVidia in the a$$! Everyone (cept Intel) that is trying .13 die is having problems. Look at how late AMD was with Tbred. And, it’s my personal belief that NV30 was designed around a 128 bit memory interface & DDR II memory. nVidia has stated many times that a 128 bit memory interface was enough, and they have a track record of throwing expensive & very leading edge memory at their products. And who knows what problems they are having with this new memory technology. And, not to mention, nVidia's pride is always on the line!

Of course, maybe it’s the 3DFX factor, that hiring all the old 3DFX employees & beginning to try to use some of that 3DFX tech has cursed nVidia…… I could be wrong........
:rolleyes:
 
Interesting, Martrox

It is all about pride. Most people will not buy an NV30 or an R300, most people will probably buy the sub-$200 card, but there is the pride of having a card from the company that has the fastest chip LOL.

I know both ATI and nVidia will do a good job :)
And I want more good jobs and more options.
People should start to think as consumers.
 
DaveBaumann said:
I was under the impression that the typical rule of thumb for a 'respin' was in the region of 4 (if you're lucky) to 6 weeks - which is about 28-42 days; would you then have another 58+ days for production ramp up after that?

The time for a respin (in the fab) is about what you're saying. However, my best guess is those numbers in the chart were the days between initial tapeout and the time when the silicon was verified and the 'go' button was hit for mass production. In other words, not including the fabrication of the initial lot of production wafers (which would add another 2-3 months).

The nine months number people are tossing around sounds more like the time between design kickoff and initial tapeout.
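Just to put those rules of thumb together, here's a rough back-of-envelope sketch. All the durations are the guesses tossed around in this thread, nothing official, and the split between bring-up and ramp is my own assumption:

```python
# Rough, hypothetical timeline using only the rule-of-thumb numbers from this
# thread -- none of these figures come from NVIDIA or TSMC.

metal_respin_days   = (28, 42)   # ~4-6 weeks per respin in the fab
verification_days   = 58         # the "58+ days" of bring-up/ramp mentioned above (a guess)
production_lot_days = (60, 90)   # ~2-3 months to fab the first lot of production wafers

# Initial tapeout -> "go" for mass production (assuming one respin), then wafer fab:
low  = metal_respin_days[0] + verification_days + production_lot_days[0]
high = metal_respin_days[1] + verification_days + production_lot_days[1]

print(f"Initial tapeout to first boards: very roughly {low}-{high} days")
# -> very roughly 146-190 days, i.e. about 5-6 months after first silicon
```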
 
There may also be another issue here: arrogance. Maybe nVidia never expected ATI to do what they have done with the R300. If nVidia based the NV30 on a 128 bit memory controller with DDR II, and then found that the R300 was faster (even if just a small amount faster), just what would they do? Would their pride let them release a “slower” product? Or, would they backpedal, issue a bunch of FUD, and then only own up to the “facts” when they had no choice? And only admit those facts that they, by law, have to? :oops:

Now, don’t get me wrong. I own 7 nVidia cards(2-TI4600, GF3, 2-200TI, GTS & m64 TNT) and no ATI cards – although I will replace the 4600’s with ATI 9700’s when they are available – and, if anything better comes out (and CompUSA carries it!) the 9700’s will be replaced…… if you want to know why & how I can do this, just ask!
;)
 
martrox said:
Now, don’t get me wrong. I own 7 nVidia cards(2-TI4600, GF3, 2-200TI, GTS & m64 TNT) and no ATI cards – although I will replace the 4600’s with ATI 9700’s when they are available – and, if anything better comes out (and CompUSA carries it!) the 9700’s will be replaced…… if you want to know why & how I can do this, just ask!
;)

:) Cause you're Bill Gates and you can afford it?? :D

j/k

US
 
I don't believe for a moment that the reason the NV30 isn't coming out in September is design changes made in response to ATI.

First of all, nVidia employees have said, time and again, that they firmly believe that their key to success is timing. They specifically decided to time their launch schedules around OEMs, so that each time OEMs were ready to ship new products, nVidia was right there with a new product.

If nVidia had decided to ship in September with a 128-bit bus, then they would have just bitten the bullet and taken the lower performance, with a promise to outdo ATI with a refresh (and a 256-bit bus). In the meantime, the 128-bit bus would be marketed as more technologically capable, and "You don't need the extra performance of a 256-bit bus...it costs too much!"
 

So as they're talking about "correcting errors _in the metal_", it seems it took 9 months from the first to the final silicon!


I'm fairly certain they are referring to the time from when they started the design to the point where they did initial tapeout. Since it was a refresh, it was possible to do in this short period of time. The reason I don't think it was time from initial tapeout to final is that they mention that initial silicon only had 19 functional problems, 12 of which were handled with software workarounds and 7 that were all addressed by a metal fix. Since all 7 were fixed in a metal spin, there's no way it would have taken them 9 months from tapeout to release revision.
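As a quick sanity check on that reasoning (purely illustrative numbers, reusing the 4-6 week respin rule of thumb mentioned earlier in the thread):

```python
# Illustrative only: if just one metal spin was needed, tapeout-to-release
# can't plausibly stretch to nine months (all durations here are guesses).

nine_months_days        = 9 * 30   # ~270 days
validate_first_rev_days = 30       # assumed bring-up/validation of initial silicon
metal_spin_days         = 42       # one metal-layer respin at the slow end (~6 weeks)
validate_fix_days       = 30       # assumed re-validation of the fixed silicon

tapeout_to_release = validate_first_rev_days + metal_spin_days + validate_fix_days
print(tapeout_to_release, "days vs", nine_months_days)   # ~102 days vs ~270 days
```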
 
Cause you're Bill Gates and you can afford it??

NOT! I doubt Bill Gates shops at CompUSA....and I only shop there when I have to. About the time the V5 came out, I purchased 2 of them at CompUSA. As I was making the purchase, I was told by the salesperson that I could pay $25.00 each and have the ability to trade the card in anytime within 2 years, or, if the card quit working, do the same. I figured, what the hell, it's only $50.00.......
So far, these are the cards I've gotten from them:
3DFX V5(x2)
Hercules Geforce 2 Pro 64 meg (had to pay difference of $100.00)(X2)
Visiontek GF3(x2)
Visiontek GF3 500TI
Visiontek GF4 TI4600(x3 - one died...)

On all I had to reup for the trade-in/warranty - the cost seems to be dependent on cost of card, time of month, phase of moon.......
Between $25 & $35 each!
Also, sometimes I have been told that I couldn't trade the card in... But, the employee has ALWAYS taken me aside & then written the card up as a warranty return.
There is nothing immoral about this, it's their policy, on anything except full systems. And you can upgrade to a more expensive device for the difference between what you paid and what they are selling the new device you want for. And your trade-in value is the maximum you have spent, not including the trade-in/warranty purchase - $25.00 to $35.00.
Stay on the bleeding edge....and not pay for it.....it might make a fanperson actually try someone else's product......well, maybe not :rolleyes:
 
Randell said:
There were 2 reasons I believe the Gf3 was slower in some cases -

drivers & the fact that it doesn't like SDR systems. Aceshardware has some upgrade articles which show that just throwing a Gf3/4 into an SDRAM system makes it slower than, say, a Gf2Pro. The Gf3 is now faster in those games it was losing to the Gf2U in, on modern systems with enough power and memory bandwidth.

The LMA and nfiniteFX are integral parts of the Gf3 architecture, so I fail to see how they were put into the Gf3 to 'speed it up over the gf2'

In respect of the NV30, true DX9 games will be a way off; we have no knowledge of how either the R300 or the NV30 will actually perform using DX9 applications.

As for Cg speeding it up over its competitors - don't even go there ;)

I'll disagree with you Randell

I turn your attention to http://www.rivastation.com/gf3_e.htm

[image: chiptable_e.gif - GeForce2 Ultra vs. GeForce3 chip spec comparison table]


As you can see, in core clock and texels/sec the GF2 Ultra gives out higher results... but the GF3's T&L engine gives out 60m triangles/sec and 76 gigaflops, compared with the GF2 Ultra's, which is almost half that. Don't you think the LMA and nfiniteFX make a difference now?? lemme quote

Despite its nominally lower clockspeed, the GeForce 3 will (or let's say should) be noticeably faster than the GeForce 2 Ultra, thanks to a couple of tricks that NVIDIA has pulled out of its hat. Most of these attack one problem specifically that has become most video cards' primary bottleneck: Memory bandwidth and AGP transfer rate. NVIDIA uses three of these "tricks" together and calls them "Lightspeed Memory Architecture":

Crossbar Memory Controller: The internal 256 Bit memory bus is divided into four paths of 64Bit each. This way, smaller data packets don't automatically block the entire bus. When larger data sets need to be transferred, these subdivisions can be reconnected, giving you a 256Bit wide bus. Ideally, the CMC is supposed to be 4x faster. NVIDIA says that the 64-bit data paths are sufficient for about 75% of all memory transfers that occur.
Lossless Z-Compression: One of the biggest memory bandwidth hogs is Z-Buffer data, since it is read from memory and written back to it for every rendering cycle. The GeForce 3 transfers this data only after it has been compressed at a ratio of 4:1 (lossless). The compression and decompression takes place completely in hardware and therefore reduces the strain on the memory. While the feature is strongly reminiscent of ATi's Hyper-Z, NVIDIA maintains that their technique is something completely different...
Z-Occlusion Culling: During the last months we have witnessed rumors of (leaked) drivers supposedly supporting HSR (Hidden Surface Removal) on GeForce 2 cards flying around the web. That's all it was though - a rumour. So what is HSR, you ask? Traditional (so called "brute-force") graphics chips render a lot more polygons than you'll ever see in the completed scene - objects behind a wall, for instance. (Victims of the legendary ASUS cheat/see-through drivers will know what I mean). Z-Occlusion culling is supposed to help the GeForce 3 predetermine the visibility of any given texel. If the texel is judged to be invisible, it won't be rendered and thus frees up space in the framebuffer as well as saving the chip a read/write to/from the memory.
According to NVIDIA, the LMA features alone will make the GeForce 3 up to 4x faster than a GF2 Ultra. Whether these claims are borne out in real-world scenarios remains to be seen.
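To make those three "tricks" a bit more concrete, here is a toy effective-bandwidth model. Only the 4:1 Z-compression ratio and the "75% of transfers fit in 64 bits" figure come from the quote above; the Z-traffic share, overdraw share and ideal crossbar utilisation are my own guesses, so the fact that it lands near NVIDIA's "up to 4x" number is purely illustrative:

```python
# Toy model of the three "Lightspeed Memory Architecture" ideas quoted above.
# Only the 4:1 Z-compression ratio and the "75% of transfers fit in 64 bits"
# figure come from the article; every other number is an illustrative guess.

def crossbar_gain(small_fraction=0.75):
    # On a monolithic 256-bit bus, a 64-bit transfer wastes 3/4 of the width;
    # four independent 64-bit channels (ideally) waste nothing.
    monolithic_utilisation = small_fraction * 0.25 + (1 - small_fraction) * 1.0
    return 1.0 / monolithic_utilisation

def traffic_reduction(z_share=0.30, z_ratio=4, hidden_share=0.20):
    # Z traffic shrinks 4:1 (lossless) and occluded texels are never fetched
    # or written at all; z_share and hidden_share are pure guesses.
    saved = z_share * (1 - 1 / z_ratio) + hidden_share
    return 1.0 / (1 - saved)

speedup = crossbar_gain() * traffic_reduction()
print(f"Toy-model bandwidth-limited speedup: ~{speedup:.1f}x")   # ~4.0x with these guesses
```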

Randell.. although the GF3 was clocked slower, the architecture was made to increase its performance values .. be it with DX8 (remember DX7 didn't support the features) or with NVidia-released drivers (as we all know Nvidia's driver support is awesome), which most of the time yield faster performance. Initially, the drivers released just before the GF3 came out gave better performance on the GF2 Ultra than on the GF3. But drivers released after the GF3 came out yielded faster results for the GF3 than for the GF2 Ultra.

US
 
martrox said:
NOT! I doubt Bill Gates shops at CompUSA....and I only shop there when I have to. About the time the V5 came out, I purchased 2 of them at CompUSA. As I was making the purchase, I was told by the salesperson that I could pay $25.00 each and have the ability to trade the card in anytime within 2 years, or, if the card quit working, do the same. I figured, what the hell, it's only $50.00.......

Martrox .. that's so kewl .. wish they had something like that over here. :cry:

All well .. so now u await the R300 hey ;) kewl.

US
 
Unknown Soldier said:
Randell.. although the GF3 was clocked slower, the architecture was made to increase its performance values .. be it with DX8 (remember DX7 didn't support the features)
US

actually we agree and disagree :)

1. Your post read to me like the LMAII and nFiniteFX (PS/VS) were shoehorned in to make the Gf3 faster than the Gf2. I know now that's not what you meant.
2. I agree totally that the LMA helps the Gf3 outperform the Gf2 on a lower clocked core, regardless of DX version. My understanding was, though, that the Gf3 had a vertex shader and a hardwired TnL unit for backwards compatibility. Even in those games which use DX7 TnL, though, the Gf3 is now faster.
3. Other features like Z-Occlusion culling again are DX agnostic.

So in my view it was purely drivers initially, aided and abetted by SDR systems - here's the link to the Aces article:

http://www.aceshardware.com/read.jsp?id=45000228

Of course their reasoning may be flawed, but in games with a DX7 TnL base, the Gf3 is now faster than a Gf2U in any recent test I can remember (say Giants, Max Payne etc).

Oh, and why would any of what you believe nVidia are planning for the NV30 automatically make it faster in the long run over the R300? The difference in the scenario you paint is that you are comparing how 2 different-generation chips compared as drivers got better on the new generation versus the old generation.

With the NV30/R300 debate we have 2 chips of the same generation, so any speed differences purely due to implementation of DX9 features will be available to both.
 
Ok.... I was under the impression that the "tape out", as it were, is something that is normally only done once...... according to someone who will remain nameless. At any rate the CEO made it clear that this has not occurred as of yet. So with the assumption that "tape out" normally refers to the initial part, it would seem logical to assume that there have been no "metal revs" yet and the part, as far as they know, is "good to go" so to speak. But with a chip as complex as the nv30 is rumored to be (120 million transistors), it would almost be another logical assumption that they will require at least one "metal rev".. requiring yet more time.

Further, the CEO did not say that they were close to having a "tape out" any time soon. In fact his words were "We are in the process of wrapping it up".. what does this mean exactly? We can speculate all we want, but to me he is trying to make it look as though the first "tape out" is nearly finished...... just not in so many words. Besides, if it is true that in the last CC he claimed that it had already been "taped out" but in reality was nowhere close to that, I think this says volumes about the man's credibility... or is that just me?

While we are on the topic I suppose we also could make the educated guess that in fact the nv30 will have numerous problems at the .13um process with yields. To even suggest that the nv30 will be available (on store shelves) in the fall is foolhardy and ignores the reality of the matters facing nvidia entirely. In fact to say that the card will be on store shelves for Christmas is a stretch IMHO.
 
Geek_2002 said:
While we are on the topic I suppose we also could make the educated guess that in fact the nv30 will have numerous problems at the .13um process with yields.

Dunno. I came across this newsbite from www.siliconstrategies.com dated April where TSMC acknowledged that they have some problems with one of the two 0.13-micron processes:

http://www.siliconstrategies.com/story/OEG20020410S0023

At present, TSMC offers two separate and basic versions of 0.13-micron technology. The first version is a copper-based technology, with an FSG (fluorine-doped silicate glass) option. Meanwhile, the second version is a copper-based technology, with a low-k dielectrics option.

While TSMC claims it is shipping parts based on the FSG version, company officials acknowledged that they are still having some "reliability" issues with the 0.13-micron process based on low-k dielectrics.

TSMC is shipping 0.13-micron parts based on low-k, but the yields are low and the technology is more difficult than the company had originally expected, according to sources.

I have no clue whether NV30 is based on the low-k version...
 
LeStoffer said:
I have no clue whether NV30 is based on the low-k version...

Low-k is intended for very high clock rates (pushing 1 GHz +) so I don't see why the NV30 would need it.

Mize
 
LeStoffer said:
Geek_2002 said:
While we are on the topic I suppose we also could make the educated guess that in fact the nv30 will have numerous problems at the .13um process with yields.

Dunno. I came across this newsbite from www.siliconstrategies.com dated April where TSMC acknowledged that they have some problems with one of the two 0.13-micron processes:

http://www.siliconstrategies.com/story/OEG20020410S0023

At present, TSMC offers two separate and basic versions of 0.13-micron technology. The first version is a copper-based technology, with an FSG (fluorine-doped silicate glass) option. Meanwhile, the second version is a copper-based technology, with a low-k dielectrics option.

While TSMC claims it is shipping parts based on the FSG version, company officials acknowledged that they are still having some "reliability" issues with the 0.13-micron process based on low-k dielectrics.

TSMC is shipping 0.13-micron parts based on low-k, but the yields are low and the technology is more difficult than the company had originally expected, according to sources.

I have no clue whether NV30 is based on the low-k version...

Yeah but unless TSMC has managed to fix their issues it really makes no difference. With either process they have significant issues..

Hopefully TSMC gets these problems fixed soon. I would imagine that ATI also has a .13um part in the works as well. I mean, since it is TSMC (or are we talking about UMC?), it isn't as though nvidia will have a monopoly on the .13um process or anything.
 
I'm a little confused by the question asked in the conference call, myself.

On one hand, if it hadn't had the initial tape out yet, there's no way in hell that product will reach the shelves in time for Christmas selling.

But on the other hand, "has it taped out?" is a very very very strange way to ask if it's had its final metal rev taped out and the part is ready for production.

But, if it had its initial tapeout, then the NVIDIA rep could have answered 'yes' to the question posed and be perfectly 'legal'.

To clarify, if you ask me "has your part taped out", I'd take that to mean "have you done your initial tapeout".

If you asked me "how many times has your part taped out", I'd think you were goofy, but answer with how many metal revs because thats the closest thing I can think of what you're asking.

If you asked me "how many tape outs have you had this year", I'd think about all our products and answer with the number of individual products.

As I mentioned before, you technically do a "tape out" for every rev, it's just that that terminology isn't used to count revisions. Kind of like authors publishing books. Having three revisions of the same book doesn't count as three publications, even though you do publish three different editions.
 
RussSchultz said:
I'm a little confused by the question asked in the conference call, myself.

On one hand, if it hadn't had the initial tape out yet, there's no way in hell that product will reach the shelves in time for Christmas selling.

But on the other hand, "has it taped out?" is a very very very strange way to ask if it's had its final metal rev taped out and the part is ready for production.

But, if it had its initial tapeout, then the NVIDIA rep could have answered 'yes' to the question posed and be perfectly 'legal'.

To clarify, if you ask me "has your part taped out", I'd take that to mean "have you done your initial tapeout".

If you asked me "how many times has your part taped out", I'd think you were goofy, but answer with how many metal revs because thats the closest thing I can think of what you're asking.

If you asked me "how many tape outs have you had this year", I'd think about all our products and answer with the number of individual products.

Yes, I think if the part had initially been "taped out" he would have said "yes it is taped out" without having to make reference to a "metal rev" at all.

In fact he would have told them what they want to hear if he could.

If in fact he did say in the last CC that the nv30 was taped out, then the SEC should investigate that.. But since nvidia does not officially back such a claim... there really isn't much to discuss about it.

I am willing to accept that in fact the nv30 has not had its initial "tape out" and that we will indeed be waiting till next year to see the part "on the shelves", but we may get a paper launch towards the end of the fall season, just before Christmas, and not actually be able to get the part till next year. Also, what does "wrapping it up" mean?

On that note I think all this talk about vapourware is meant to be nothing but a distraction, from nvidia, to sway potential buyers away from ATI and the Radeon 9700.. IMHO of course.
 
I'd assume that wrapping it up means the same no matter what you're talking about: they're in the final stages of doing whatever they're doing.

Whether its verifying their metal fixes, or doing the final layout for the first try, or...

I'm personally not putting any money on any predictions, either for product on the shelves by the end of the year or not.

The tea leaves are way too capricious. Every hint is countered by another innuendo; every substantiated fact by an equal but opposite fact. None of it even seems to make a general outline of what's really happening. Since all of it is second hand (via websites), or very strange terminology (the conference call), I'm in the "I'll sit and wait until the smoke clears" camp.

Either way, some day soon, when I want to buy a DX9 card, they'll all be too expensive and I won't. ;)
 