NV-30 problems ?!!

Well, Dan said it himself: "Yes, more or less, but I don't know for sure." That's pretty vague, indicating that while this may have been intended to be the spring part, it was in no way finally decided yet.

A similar delay happened with the GF3 originally.

1) It was taking longer than expected.
2) There was no real competition at the time, so they could afford another refresh (GF2 Pro/Ultra back then) and still keep the top spot.

Whether reason 1) or 2) was "more" responsible for the GF3 Ti launch and the resulting six-month "delay" of the GF4 Ti (which was always planned, unlike the GF3 Ti) and of NV30 this time, is hard to say. NV30 was probably pushed back to be the Fall '02 part from shortly before the GF3 Ti was released, though; as long as it meets that target it should be fine. If not, then...
 
Fahad145 said:
I don't get the difference between a graphics chip and a graphics processor. Like GeForce and nForce, what is the difference?

nForce is not a graphics processor or chip; it's a motherboard chipset (two chips, even, I guess... not sure. I know SiS uses a single chip, while VIA and Intel use two: a north and a south bridge, which SiS also has but combined in one chip... :))
So nForce handles everything in the PC, and they have included a graphics core in it for OEMs and those not needing more. But it's still a graphics chip included in the design: a GF2 MX inside nForce (to save a bit of cash, especially for OEMs, so they don't need to use an add-in graphics card).
GeForce (in this case) is hence a pure graphics chip, put on add-in boards. So even though nForce (or some versions of it, rather) can handle graphics, it's not because nForce itself has something to do with graphics processing; they have just added a graphics processing unit to it...
 
For what it is worth: there were 100 days (14 to 15 weeks) between the tape-out of the GF4 and production. This was a record, as far as I understand, and it was of course on a well-known 0.15 process with a well-known architecture (GF3 plus some extras).

The NV30 won't get out that quickly, but let's play with a best-case scenario: tape-out of the NV30 was at the start of June. Let's say week 24. Then add at least 15 weeks. That'll be the end of September. That gives nVidia two months (October and November) to fumble and still hit the fall timeframe with the chip, albeit maybe not at full production.
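That back-of-the-envelope schedule can be checked with a quick date calculation (the June tape-out and the 15-week figure are this post's assumptions, not confirmed data):

```python
from datetime import date, timedelta

# Assumed tape-out: ISO week 24 of 2002, i.e. the start of June (a guess)
tape_out = date.fromisocalendar(2002, 24, 1)  # Monday of week 24

# Best case seen with the GF4: ~100 days (14-15 weeks) tape-out to production
best_case = tape_out + timedelta(weeks=15)

print(tape_out)   # 2002-06-10
print(best_case)  # 2002-09-23, i.e. end of September
```

So even under GF4-record turnaround, first production silicon would land in late September, leaving October and November as slack before "fall" runs out.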

If nVidia gets nice first-rev silicon back soon, you can bet that this info will leak to rain on the Radeon 9700 parade.

Just remember that this is a company that has been well known for its ability to execute. It would, therefore, be a major bummer if the CEO's promise about the NV30 this fall doesn't hold true.
 
LeStoffer said:
For what it is worth: there were 100 days (14 to 15 weeks) between the tape-out of the GF4 and production. This was a record, as far as I understand, and it was of course on a well-known 0.15 process with a well-known architecture (GF3 plus some extras).

The NV30 won't get out that quickly, but let's play with a best-case scenario: tape-out of the NV30 was at the start of June. Let's say week 24. Then add at least 15 weeks. That'll be the end of September. That gives nVidia two months (October and November) to fumble and still hit the fall timeframe with the chip, albeit maybe not at full production.

If nVidia gets nice first-rev silicon back soon, you can bet that this info will leak to rain on the Radeon 9700 parade.

Just remember that this is a company that has been well known for its ability to execute. It would, therefore, be a major bummer if the CEO's promise about the NV30 this fall doesn't hold true.

I would pretty much have to agree with LeStoffer's view. Granted, I do think it is the best case for nVidia and may very well not come to be. I doubt that the yields at .13 µm will be sufficient for any volume shipping before Christmas.

Dave, if you want to rely on the CEO of nVidia for information, that is your choice. I don't always believe a CEO. You are right in a sense, though, because we won't know it isn't coming on time until he says so. nVidia wants to appear as caught up to ATI as possible, and it is his job to at least make it appear that way, even if the NV30 is only paper-launched before Christmas.

EDIT: Last year, though, the nForce was very late... not a word about this from the CEO.
 
Does the transistor count drop when moving from .15 to .13, or is the whole point just to make more dies per wafer?
 
jvd said:
Does the transistor count drop when moving from .15 to .13, or is the whole point just to make more dies per wafer?

No, but as the numbers would imply, the die size does.
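A rough illustration of the scaling involved (ideal linear shrink, made-up die size, and no account of yield, pad limits, or edge loss, so treat the numbers as purely illustrative):

```python
# Toy model: the same design shrunk from 0.15um to 0.13um.
# Die area scales roughly with the square of the feature size
# (ideal scaling; real shrinks rarely achieve the full factor).
old_node, new_node = 0.15, 0.13
area_ratio = (new_node / old_node) ** 2
print(f"die area shrinks to ~{area_ratio:.0%} of the original")  # ~75%

# Smaller dies mean more candidate dies per wafer (crude estimate:
# wafer area divided by die area, ignoring edge loss and defects).
wafer_area = 3.1416 * (200 / 2) ** 2   # 200 mm wafer, area in mm^2
old_die = 130.0                        # hypothetical die size in mm^2
new_die = old_die * area_ratio
print(int(wafer_area // old_die), "->", int(wafer_area // new_die), "dies per wafer")
```

Same transistor count, roughly 25% less silicon per die, hence noticeably more dies per wafer and lower cost per chip, assuming yield holds up.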
 
Gollum said:
DaveBaumann said:
Well, this is OT for the thread, but wasn't the P3 issue also due to diminishing returns from the FSB? i.e. they had reached an FSB limit, and without the extra memory bandwidth there was no point in going much further.
OT: That's possibly another reason why they developed the P4 the way it is, but Intel also had problems delivering any significant quantity of P3 CPUs that ran stably beyond 1.13 GHz. There was, among other stories, quite some noise about the 1.13 GHz P3 test samples Intel sent to Tom and other members of the online press: one ceased working and others showed several different stability problems too. Shortly thereafter, the retail version of that chip was canned, and they were only delivered in small quantities to OEMs and the workstation market. IMO this shows that there were more problems than just a comparatively low FSB speed. Maybe a die shrink would have helped the P3, but I doubt it would have scaled a lot further. All of this was quite some time ago; forgive me if my memory betrays me somewhere... :)

I have a 1.2 GHz .13 Celeron Tualatin and I would recommend it to anyone who wants a silent PC. This thing puts off almost no heat compared to my Athlon XP and can easily overclock to 1.5 GHz with a better HSF.
From what I've seen, it's possible that the .13u XP 2200+ Athlon is just a filler until the Barton comes out. The .13u Athlon will probably make it into laptops very soon, at lower speeds and much lower power consumption than the current .18u Athlons. If this is true, then it's possible that it isn't designed specifically for a high-end platform but rather for laptops, and thus won't reach much higher clock speeds.
Speculation of course! :p
 
Geek_2002 said:
I doubt that the yields at .13 µm will be sufficient for any volume shipping before Christmas.

So, suddenly we're a production engineer now, are we?

You have absolutely NO basis for making this statement. Why do you say such things when you apparently haven't a clue? You don't know the die size, you don't know the design, you don't know TSMC's average yield for the fab they're in, you don't know the process tweaks, you don't know the design targets. A fully qualified production engineer would only be able to make guesses based on rumors known to the general public, yet here you are making dire predictions.
 
> Of course, I'm just a software guy too, who works in the fabless
>semi industry. I'm sure there's an ASIC guy out there getting
> ready to school me.

RussSchultz, for a *software* guy, you seem to know an AWFUL AWFUL LOT about ASIC design :O I'm waiting for someone else to ask a *software* question, just to see how far you take the first responder all the way back to pre-school :)

> Laying out custom logic is also a pain in the ass, since standard cell basically makes a grid out of the gates and custom doesn't.

Actually the standard-cell's library elements (the gates) have no fixed dimensions. As far as I can tell (looking through our Artisan Library 0.18u TSMC manual), the cell dimensions are all over the place. Standard cell is 'easier' to work with, because each unit cell encapsulates a whole group of transistors (anywhere from 2-50 or so.)

In custom-layout, *you* are responsible for sizing/placing/connecting each *individual* transistor. So the 'unit of work' is much smaller/lower (compared to standard-cell.)

There is a 2nd class of ASICs called 'gate arrays.' Gate arrays are ASICs where the gates are *already placed* in some fixed distribution across the die. In this case, since the gates are already on the die, the tool just connects wires between the gates. (And it'll tell you if you 'run out of gates.' For example, if the gate array only has 3000 flip-flops but your design needs 3001, the tool will give you a general protection fault, heh, just kidding!)

In terms of area efficiency, gate arrays are far less efficient than standard-cell (or custom-layout) digital logic. But they have a faster turnaround time and lower NRE because of their fixed-gate nature. A company targeting a market with an expected limited quantity might choose a gate-array ASIC over a standard-cell one.
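The 'run out of gates' situation above can be sketched as a toy resource check. The cell types and quantities are entirely made up; real tools report per-cell-type utilization in far more detail:

```python
# Hypothetical gate-array inventory: resources are fixed on the die,
# so a design either fits within them or it doesn't.
ARRAY_RESOURCES = {"flipflop": 3000, "nand2": 20000, "buffer": 5000}

def fits(design_needs):
    """Return (cell, shortfall) for every resource the design exceeds."""
    return [(cell, need - ARRAY_RESOURCES.get(cell, 0))
            for cell, need in design_needs.items()
            if need > ARRAY_RESOURCES.get(cell, 0)]

print(fits({"flipflop": 2999, "nand2": 12000}))  # [] -- design fits
print(fits({"flipflop": 3001}))                  # [('flipflop', 1)] -- one FF short
```

With standard cell or custom layout there is no such hard inventory; you pay instead with full mask sets and longer turnaround.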

<ARMCHAIR EXPERT MODE ON>
My coworker told me that if I want to 'sound' knowledgeable, I should say something like "Area, speed, power consumption... most technology upgrades (manufacturing process improvement, CAD tool improvement, or design methodology change) let you pick 2 out of the 3. If you want all 3, that's extra work for you." :)
<ARMCHAIR EXPERT MODE OFF>
 
asicnewbie said:
Actually the standard-cell's library elements (the gates) have no fixed dimensions. As far as I can tell (looking through our Artisan Library 0.18u TSMC manual), the cell dimensions are all over the place.
If your design is 90% NAND cells, I'm guessing the backend lays them out in something looking vaguely like a grid. ;)

But seriously, all the elements have a common height (or width), so it makes strips of cells.
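That row-of-strips idea can be sketched as a toy packer: cells share one fixed height, widths vary, so placement amounts to filling fixed-height rows across the die (real placers also optimize wirelength, timing, and congestion; this just fills rows greedily with hypothetical widths):

```python
# Toy standard-cell row packer: all cells have the same height,
# so the placer only has to slot variable-width cells into rows.
def pack_rows(cell_widths, row_width):
    rows, current, used = [], [], 0
    for w in cell_widths:
        if current and used + w > row_width:  # row full: start a new strip
            rows.append(current)
            current, used = [], 0
        current.append(w)
        used += w
    if current:
        rows.append(current)
    return rows

# Widths in "grid units" -- cells snap to a placement grid
print(pack_rows([4, 2, 6, 3, 5, 2, 4], row_width=10))
# [[4, 2], [6, 3], [5, 2], [4]]
```

The common cell height is what makes this one-dimensional problem possible; with fully custom layout you'd be placing individual transistors in two dimensions instead.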
 
Mulciber said:
I have a 1.2 GHz .13 Celeron Tualatin and I would recommend it to anyone who wants a silent PC. This thing puts off almost no heat compared to my Athlon XP and can easily overclock to 1.5 GHz with a better HSF.
From what I've seen, it's possible that the .13u XP 2200+ Athlon is just a filler until the Barton comes out. The .13u Athlon will probably make it into laptops very soon, at lower speeds and much lower power consumption than the current .18u Athlons. If this is true, then it's possible that it isn't designed specifically for a high-end platform but rather for laptops, and thus won't reach much higher clock speeds.
Speculation of course! :p
Little OT:
Congratulations
I have a P3-S (server Tualatin), .13 micron (no copper), 1.13 GHz, 512 KB L2 cache, 8-way set associative 8)
This little beast is quiet, cool (29 W!!!!), fast with all applications, and rock solid :D
There are people with the 1.4 GHz P3-S going up to 1.6 GHz without changing the vcore (only 1.45 V) and running faster than P4s :eek:
If it had a higher vcore, .13 micron copper, and a DDR FSB, my guess is it could obliterate everything 8)
 
RussSchultz said:
asicnewbie said:
Actually the standard-cell's library elements (the gates) have no fixed dimensions. As far as I can tell (looking through our Artisan Library 0.18u TSMC manual), the cell dimensions are all over the place.
If your design is 90% NAND cells, I'm guessing the backend lays them out in something looking vaguely like a grid. ;)

But seriously, all the elements have a common height (or width), so it makes strips of cells.

Doh, again you're right. I asked one of the layout guys in my office, and he confirmed that the standard cells are placed along a fixed grid. (But most cells occupy more than '1 grid', so relatively speaking, I guess it's like rendering a polygon with subpixel accuracy :) And I reviewed our library documentation again -- as you noted, all the library cells share the same width.

> If [Pentium3-S Tualatin] had a higher vcore, .13 micron copper

Really?!? I thought both Intel's and AMD's fabs had *already* incorporated copper interconnect at the 0.18u design rule? Did Intel regress for 0.13u?
 