NV30 problems?!

DemoCoder said:
BTW, since when is Intel a rival to NVidia's GPU business? The article says that Intel is one of the "reinvigorated rivals". Intel probably has the worst integrated video in the business. The reporter's credibility is being stretched here.

I think since in last month's Wired magazine the CEO of NVIDIA outlined how they were going to relegate the CPU (specifically mentioning Intel) to a secondary position, with the GPU as the do-all, be-all Wizard of Oz. Also, like most real-world magazines, BusinessWeek has their articles written several days before publication (they publish each Sunday night for distribution on Monday afternoons).
 
Hmm, I am still not fully convinced that the NV30 can be delivered by October. Even assuming that everything goes perfectly with the NV30 at .13um (highly unlikely, to say the least), it would take at least six months for them to iron out the drivers, wouldn't it? Further, their third parties will also have to contend with a highly complex PCB.

I mean, really, come on. At the absolute best we may see a paper launch of the NV30 in the late fall, and that is the most optimistic outcome. This will only be used to dissuade end users from purchasing ATI's Radeon 9700 until the NV30 is actually launched in late winter or spring 2003. I really think that, first off, NVIDIA will have issues with the NV30's 120 million transistors at .13um... and then, after that is settled, they have to work out complex driver problems.

NVIDIA, I believe, has been forced to go with the NV30 on the .13um process because they have no .15um design that competes with the Radeon 9700, forcing them to push TSMC for a .13um part that TSMC was not completely prepared for. All this talk of the NV30 as a fall part is just that... talk.
 
sumdumyunguy said:
I think since in last month's Wired magazine the CEO of NVIDIA outlined how they were going to relegate the CPU (specifically mentioning Intel) to a secondary position, with the GPU as the do-all, be-all Wizard of Oz.

The Intel factor has been talked about in investment circles for a long time because of i845G and, as I said earlier, the fact that there was no integrated Intel product for the P4 until then. If you look at PIII chipsets, Intel had a huge video market share with i810, and analysts have been expecting i845G to have a similar effect and hence undercut many low-end discrete graphics board sales. According to NVIDIA, though, this hasn't occurred much as yet.

Geek_2002 said:
Hmm, I am still not fully convinced that the NV30 can be delivered by October. Even assuming that everything goes perfectly with the NV30 at .13um (highly unlikely, to say the least), it would take at least six months for them to iron out the drivers, wouldn't it?

The majority of driver development is carried out on simulators that simulate the features of the chip. AFAIK it's only really the niggly compatibility and possibly performance stuff that you're getting down to once the chip hits silicon – there may be a few workarounds for non-fatal bugs in the chip, but that's about it.

Geek_2002 said:
Further, their third parties will also have to contend with a highly complex PCB.

Is there any reason to assume that the PCB will be any more complex than others out there?
 
So you really think a fall launch of the NV30 is possible, Dave? (A paper launch does not count, IMHO.) If so, I will reconsider my opinions.
 
Geek_2002 said:
So you really think a fall launch of the NV30 is possible, Dave? (A paper launch does not count, IMHO.) If so, I will reconsider my opinions.

At the moment the only information we have says that it is going to be a fall part - the lack of any information to the contrary is not enough to state that it isn't going to be so. It's possible that it will be delayed longer than that, but we have no evidence to show it, so until we do we have to assume they are on schedule.
 
The question that needs to be answered is this: "Has TSMC sorted out their 0.13 micron process yet?"

I know they have started hyping the 0.09 process for next year but it seems to have taken an age to get the 0.13 process working to an appreciable degree. I seem to remember several chip companies (including ImgTch when the Kyro III was still being mooted) waiting for the 0.13 process to come on line.

If I remember correctly, back in the day NVidia took a 'gamble' on the process for the original GeForce and everyone was comparing this choice to 3Dfx's going with the tried and tested process. It appears as though this has happened again this hardware cycle. This 'gamble' was a winner for NVidia in the past, but will it be successful this time? I'm inclined to believe that with the engineering talent NVidia have available (and their past record), any further NV30 delays are likely to be due to problems at TSMC.
 
I agree with Dave.
Until NVIDIA actually says "we have a delay", everything is speculation.
Of course, if they say it's a fall part, that could also mean the last day of fall in December, just before winter starts :)
Maybe it will be a paper launch, who knows. But what, then, is the R300?
Until now it's been nothing more than a paper launch too.

Anyway, I expect the NV30 to hit shelves around October/November.

Regarding the 0.13 problems at TSMC, I have to add that this will hurt all companies which have their chips produced over there, including ATI if they plan a 0.13 R300.
Maybe the NV30 is late, but then they have an advantage over ATI concerning process technology. Just look at Intel vs AMD.
The delay in 0.13 hurts AMD a lot right now, and this of course can happen to other companies as well.

I do not think we need to worry about this. Until the NVIDIA CEO says something different, they are on track.
 
AMD, as an example, demonstrates pretty well that a smaller manufacturing process doesn't necessarily mean higher clockspeeds. It's okay to assume higher clockspeeds are possible on a smaller process, but the newly shrunk K7 core shows little to no advantage over older ones concerning clockspeed, despite all the manufacturing advantages. These can only go so far; the design has to be good too. R300 seems to be an exceptional .15 micron design; if NV30 turns out to be only average on .13 micron, it might end up running similarly or only slightly faster after all...
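
Just to put a rough number on how far apart theory and practice seem to be here (this is only a back-of-the-envelope sketch; the 1.7GHz base clock is an illustrative figure, not a measured one):

# Naive estimate: gate delay roughly tracks feature size, so a .18um -> .13um
# shrink could in theory buy about 0.18/0.13 = ~38% more clock headroom.
old_node, new_node = 0.18, 0.13
ideal_gain = old_node / new_node      # ~1.38
base_clock_ghz = 1.7                  # illustrative pre-shrink clock, not a real datapoint
print(f"idealized post-shrink clock: ~{base_clock_ghz * ideal_gain:.2f} GHz")
# The shrunk K7 showed nothing like that, which is the point: the design,
# not the process, looks like the limiting factor.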

Anyway, I actually mainly wanted to agree with Dave: until we get to hear something more official, all the rumored delays are of little actual worth for judging NV30's release date, so we should assume a "fall" launch until we know better...
 
Gollum, I think it's only slower due to marketing. (Or they used the die shrink to lower the core voltage and reduce power.)

That same die can probably run at much higher speeds, and you'll eventually see those speeds as they release their XP2200, XP2400, XP2600, etc...
 
Russ, I came to this conclusion after having read about ten different articles on overclocking the new K7 core in the weeks following its release. All in all, the new .13 K7 did not seem to overclock very well in 90% of the cases. There were only two or three reported cases where significant overclocking was achieved (as in more than 50-100MHz). I suppose it is possible that this will improve when the process at AMD matures further, yields get better and ongoing tweaks occur, but the overall sentiment I remember from those articles was that the K7 architecture itself appears to be slowly reaching its limits concerning clockspeeds.

It's similar to what happened with the P3: it basically hit its architectural limit at slightly above 1GHz, and while a die shrink might have enabled it to run a little faster still, the resulting increases would have been fairly small, and a new architecture was simply needed to go beyond these limits...

These are CPUs and operate quite differently from GPUs, I know, but I think the same principle might still apply here as well. The dramatic clockspeed difference between the R300 and other, even much less complex, .15 micron parts shows that architecture can have a more significant impact on clockspeeds than the manufacturing process alone.
 
RussSchultz

I'm a software guy so this may be a stupid question, and I certainly don't want to post anything that can get me lumped into any 'fanboy' category, but here goes...

Put simply, what kind of issues are there when moving a design from, say, 0.15 to 0.13?

With ATI having a chip that seems to be pushing the limits of what people expected of the .15 process in terms of size and speed, does this make the process any easier?

Any other thoughts you may have on this, I (and I'm sure others) would be very interested in.

J
edit for typo
 
Gollum said:
It's similar to what happened with the P3: it basically hit its architectural limit at slightly above 1GHz, and while a die shrink might have enabled it to run a little faster still, the resulting increases would have been fairly small, and a new architecture was simply needed to go beyond these limits...

Well then, the .13 Athlon is all about cost and power reduction. :)
 
The difference between a .15 micron and a .13 micron process is smaller than the difference between a .18 micron and a .13 micron process.
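
For a rough sense of scale (idealized numbers only, assuming a perfect linear shrink, which real designs never quite get):

# Idealized process-shrink comparison: each dimension scales by new/old,
# die area by the square of that.
for old, new in [(0.15, 0.13), (0.18, 0.13)]:
    linear = new / old
    area = linear ** 2
    print(f"{old}um -> {new}um: linear x{linear:.2f}, area x{area:.2f}")

# Roughly: .15 -> .13 is about a 13% linear / 25% area shrink,
# while .18 -> .13 is about 28% linear / 48% area.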
 
It's similar to what happened with the P3: it basically hit its architectural limit at slightly above 1GHz, and while a die shrink might have enabled it to run a little faster still, the resulting increases would have been fairly small, and a new architecture was simply needed to go beyond these limits...

Well, this is OT from the thread, but wasn't the P3 issue also due to diminishing returns from the FSB? i.e. they had reached an FSB limit, and without the extra memory bandwidth there was no point in going much further.
 
jasonjlee said:
Put simply, what kind of issues are there when moving a design from, say, 0.15 to 0.13?

With ATI having a chip that seems to be pushing the limits of what people expected of the .15 process in terms of size and speed, does this make the process any easier?
Simply put, a lot of different issues.

In general, most companies do standard cell ASICs, meaning they take the standard, proven, simple logic gates and RAM bit cells provided by the fab and use those to build all of their complex logic. When your design is based on this, it's easy to 'migrate' from one process to another (generally).

When the sims are run, you might find some pieces that fail design rules or timing, and the design needs to be simplified, pipelined, or otherwise changed; generally, though, it doesn't.

Doing this also makes metal revs easier in development: since every logic cell is the same, you can reroute logic without having to worry about needing a particular gate, etc.

Custom logic can be denser: something that takes 1000 NANDs, for example, which is N transistors, might only need 90% of those transistors in custom logic. However, if the design is wrong, it's not so easy to fix with a metal fix. Laying out custom logic is also a pain in the ass, since standard cell basically makes a grid out of the gates and custom doesn't. There are lots of backend tools to do the layout for you; standard cell works great for them, not so great for custom logic.

You can also optimize for speed, power, etc using custom logic, at the expense of flexibility.
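
To make the density point above concrete (toy numbers only: the 1000 NANDs and the 90% figure are just the example from this post, and four transistors is the usual count for a 2-input CMOS NAND):

# Toy comparison of standard-cell vs custom-logic transistor budgets.
nand_gates = 1000
transistors_per_nand = 4                          # typical 2-input CMOS NAND
std_cell = nand_gates * transistors_per_nand      # 4000 transistors, grid layout, easy metal fixes
custom = int(std_cell * 0.9)                      # "might only need 90% of those transistors"
print(f"standard cell: {std_cell} transistors")
print(f"custom logic : {custom} transistors, denser but much harder to fix or migrate")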

Intel has successfully done simple optical shrinks (i.e. making the die smaller by scaling the mask) of the P4 (announced in EE Times, I think). Sometimes you get lucky and sometimes you don't. You definitely don't get lucky so often when analog is involved.

So, my guess is that an "optimized" .15 for the R300 doesn't get it any closer to .13. They'd have a lot more design rules to worry about with the custom logic, and no back-end synthesiser to help them out with it.

Of course, I'm just a software guy too, who works in the fabless semi industry. I'm sure there's an ASIC guy out there getting ready to school me. ;)
 
DaveBaumann said:
Well, this is OT from the thread, but wasn't the P3 issue also due to diminishing returns from the FSB? i.e. they had reached an FSB limit, and without the extra memory bandwidth there was no point in going much further.
OT: That's possibly another reason why they developed the P4 the way they did, but Intel also had problems delivering any significant quantity of P3 CPUs that ran stably beyond 1.13GHz. There was, amongst other stories, quite some noise about the P3 1.13GHz test samples Intel sent to Tom and other members of the online press: one ceased working and others showed several different stability problems too. Shortly thereafter, the retail version of that chip was canned, and chips were only delivered in small quantities to OEMs and workstation markets. IMO this shows that there were more problems than just a comparatively low FSB speed. Maybe a die shrink would have helped the P3, but I doubt it would have scaled a lot further. All of this was quite some time ago, so forgive me if my memory betrays me somewhere... :)
 
I don't get the difference between a graphics chip and a graphics processor. Like GeForce and nForce, what is the difference?
 
RussSchultz said:
Well then, the .13 Athlon is all about cost and power reduction. :)

I think the Thoroughbred is more of a "test series" to get everything running smoothly for the release of Barton... (or rather it became one...)
Barton is the next XP with a larger cache (and rumoured to have some Hammer optimizations too... but those are only rumours AFAIK).
 