Question: Where does Nvidia go from here?

Yes, what is interesting to me is that, if the roadmap is true, this would be the first time nVidia would use three different chips to go after the desktop market. To date, they have only used two.
 
Evildeus said:
Still we don't know if the late release of the NV30 has any impact on the NV35.

The NV35 probably won't be released until next Fall (Unless the NV40 is slated for then...which doesn't seem out of the question given ATI's increased pressure lately...then the NV35 may never be released).

But, the current NV30 probably includes many technologies that were originally slated for the NV35. I have serious doubts that the engineers over at nVidia just sat on their hands while TSMC was having fabrication problems.
 
Chalnoth said:
Evildeus said:
Still we don't know if the late release of the NV30 has any impact on the NV35.

The NV35 probably won't be released until next Fall (Unless the NV40 is slated for then...which doesn't seem out of the question given ATI's increased pressure lately...then the NV35 may never be released).

But, the current NV30 probably includes many technologies that were originally slated for the NV35. I have serious doubts that the engineers over at nVidia just sat on their hands while TSMC was having fabrication problems.

Where do you get the last conclusion? I was under the impression that what they were busy doing was removing features from the nv30 and trying to get better performance out of it, if anything. If they were busy with more advanced technology, I'd actually expect it to be more likely that they'd use it to speed up the introduction of the nv35, based on the lessons learned from the nv30.

I'm not saying you're wrong, but I'm interested in why you think it is the case.
 
I agree with the idea that technology was removed from the NV30 in order to get it out sooner and perhaps even make it perform better.

That removed technology would make its way (or already has) back into the NV35, with the NV35 coming 6 months after the NV30.

IMHO, Nvidia has to have the '35 out 6 months after the '30, or be left behind by ATI.
 
Oh!!!! Oh my, I didn't see that roadmap until after I said what I did above.

Very, very, very interesting :D Thank you, Evildeus, for posting that! :D

Joe, that is some smart speculation about Nvidia's future lineup in the market. 8)

I feel better about 2003 now, knowing that the R350 and NV35 are VERY likely to be out sometime around the middle of the year, assuming that the NV35 isn't pushed back because of the NV30's delay. Hopefully the NV35 has not been delayed, even if the roadmap is outdated.

I don't see why Nvidia would have manufacturing problems/delays with the NV35, as they did with the NV30, now that the .13u hurdle has been crossed.

And just because the NV30 is coming in January, that should not have to INJURE Nvidia's timetable. Why should the NV35, NV40, etc. have to suffer because of the NV30 and .13u?
 
Because ATI have spent the bulk of their effort on the 9700, they don't have enough resources to concentrate on a 0.13 design. Unless ATI's engineers are twice as fast, they're unlikely to come out with a 0.13 product until Q4 2003 or Q1 2004. The best they can do is tweak the 9700, which is unlikely to yield the same improvement as the GeForce FX.

They don't have enough resources to concentrate on a 0.13 design???

lol, and exactly where do you get this kind of information?

If you don't know what you're talking about, then I suggest you shut your mouth and stop making an ass of yourself.
 
The main reason for NV30's delay (and consequently the provider of future headroom) is fairly simple actually.

Things didn't go entirely according to plan on the manufacturing front.

nVidia was prevented from releasing earlier because they had to change 0.13 micron processes when the advanced one they hoped to use (TSMC offers two 0.13 micron processes) proved too troublesome to guarantee a usable supply of product.

nVidia designed the NV30 around Applied Materials' Black Diamond process, which is a low-k dielectric, meaning the chip runs cooler, faster and uses less power.
http://www.businesswire.com/cgi-bin...?story=/www/bw/webbox/bw.011801/210180127.htm

They were forced to switch to TSMC's standard 0.13 micron process, and this meant a lower clock speed; the huge cooler and the molex connector became necessary (to get clock speeds up to where they are needed on the standard process, nVidia is basically overvolting and overclocking the chips deliberately).

nVidia confirmed in a Merrill Lynch report on CNET that they decided not to go with the low-k dielectric process.
http://investor.cnet.com/investor/brokeragecenter/newsitem-broker/0-9910-1082-20687186-0.html

As another poster in this thread said, nVidia has a history of using and relying on the most advanced manufacturing processes available. This time it bit them (just like they were bit with TNT-1).
 
radar1200gs said:
The main reason for NV30's delay (and consequently the provider of future headroom) is fairly simple actually.

Things didn't go entirely according to plan on the manufacturing front.

nVidia was prevented from releasing earlier because they had to change 0.13 micron processes when the advanced one they hoped to use (TSMC offers two 0.13 micron processes) proved too troublesome to guarantee a usable supply of product.

nVidia designed the NV30 around Applied Materials' Black Diamond process, which is a low-k dielectric, meaning the chip runs cooler, faster and uses less power.
http://www.businesswire.com/cgi-bin...?story=/www/bw/webbox/bw.011801/210180127.htm

They were forced to switch to TSMC's standard 0.13 micron process, and this meant a lower clock speed; the huge cooler and the molex connector became necessary (to get clock speeds up to where they are needed on the standard process, nVidia is basically overvolting and overclocking the chips deliberately).

nVidia confirmed in a Merrill Lynch report on CNET that they decided not to go with the low-k dielectric process.
http://investor.cnet.com/investor/brokeragecenter/newsitem-broker/0-9910-1082-20687186-0.html

As another poster in this thread said, nVidia has a history of using and relying on the most advanced manufacturing processes available. This time it bit them (just like they were bit with TNT-1).


Excellent points, all. And thanks...it's nice to see it put so succinctly. Makes all the sense in the world--had nVidia waited out the low-k dielectric process, the nv30 would be many months out yet. Probably the 2nd iteration of the nv30, possibly in the fall of '03, will be the one to watch for, if they're able to go low-k by then, that is. I think this explains what's happened very neatly--I had thought about low-k dielectrics but figured nVidia had never actually planned on it for the nv30 in the first place--but now I see they did plan on it. Everything falls into place nicely.
 
Given current information, my guess for nVidia's next chips:
(No insider info)

nv35 low-k .13 process, 8 pipes, close to 1 GHz clock speed, 256-bit bus
nv31 low-k .13 process, 8 pipes, lower clock speeds, 128-bit bus
nv34 low-k .13 process, 4 pipes, 128-bit bus
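
As a rough sanity check on the guesses above, here is a minimal back-of-the-envelope sketch. Every pipe count, clock and bus width in it is only my speculation from the list, and the 500 MHz DDR memory clock is an assumption, not a spec.

```python
# Back-of-the-envelope peak throughput for the speculated parts above.
# All pipe counts, core clocks and bus widths are guesses, not confirmed
# specs; 500 MHz DDR memory is assumed.

def fillrate_mpix(pipes, core_mhz):
    """Peak single-textured fillrate in Mpixel/s."""
    return pipes * core_mhz

def bandwidth_gbs(bus_bits, mem_mhz, ddr=True):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * mem_mhz * (2 if ddr else 1) / 1000

# Hypothetical "nv35": 8 pipes at ~1000 MHz, 256-bit 500 MHz DDR bus
print(fillrate_mpix(8, 1000))    # 8000 Mpixel/s
print(bandwidth_gbs(256, 500))   # 32.0 GB/s
```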

I expect no new features. (But they can always hype up something they forgot to mention on the nv30.)

I expect nVidia to push these out as soon as they can (when the process is available), even though they'll kill the nv30 with them and leave nv30 buyers disappointed.
(They did the same with the GF1.)

"When" is an important factor, assuming r400 will be out next summer.
 
demalion said:
Where do you get the last conclusion? I was under the impression that what they were busy doing was removing features from the nv30 and trying to get better performance out of it, if anything. If they were busy with more advanced technology, I'd actually expect it to be more likely that they'd use it to speed up the introduction of the nv35, based on the lessons learned from the nv30.

I'm not saying you're wrong, but I'm interested in why you think it is the case.

I still think it's more likely that technologies were removed from the NV35 to get it out sooner, and the result was then called the NV30. This seems particularly likely given that original rumors placed the NV30 at right around 100 million transistors, and now it's sitting at 125 million.
 
Hyp-X said:
nv35 low-k .13 process, 8 pipes, close to 1GHz speed, 256bit bus

Isn't that a bit too much? :)
650-700 MHz is more realistic, especially because they really can't add any more cooling (unless they ship with liquid nitrogen ;)
 
As a side note, I'm really, really curious as to why the PS 3.0 specs are included with DX9. Hopefully these signify hardware on the drawing board that will be released before long. If this is essentially what the NV35 includes, then I'll be very happy.

After this point, it may become much harder to distinguish between new generations of graphics hardware. After all, there's always been a clear split in the past:

Riva 128 (NV3) -> TNT/2 (NV4/5) -> GeForce/2 (NV1x) -> GeForce3/4 (NV2x) -> GeForce FX and derivatives (NV3x)

With the enhanced programmability of the NV30, it really doesn't seem like there's a huge amount of room left to cover in this regard. Sure, there's always more processing power, better efficiency, and a few improvements to the programmability seen today (more branching, texture reads in VP), but there's just no significant change that's obvious, particularly not from a programming perspective.

Sure, there's the possibility of a primitive processor soon, but it still seems that there's not much further to go on the programming side for these new processors. And this should be very, very exciting for gamers! After all, if the hardware does become much more unified on the programming side, it's going to mean that games will make fuller use of the hardware much more quickly. There just won't be the problem anymore of game developers not making the right guesses about where future technology is headed (as happened to Tim Sweeney with the original Unreal).

So, I guess what I'm trying to say is that it seems more and more likely that a so-called NV35 could very well be no less technologically advanced than an R400 (or, if you prefer, an NV40 might not be any more advanced than an R350), depending on how the internal naming scheme is done.
 
Laa-Yosh said:
Isn't that a bit too much? :)
650-700 MHz is more realistic, especially because they really can't add any more cooling (unless they ship with liquid nitrogen ;)

What about a Peltier + water cooling? :D

I don't know what is realistic, actually, as I don't know how much less power the more advanced process needs - or whether the chip design allows such high clock rates at all.
 
Hyp-X said:
I don't know what is realistic, actually, as I don't know how much less power the more advanced process needs - or whether the chip design allows such high clock rates at all.

Realistic possibilities aside, you also have to factor in practical considerations. What good is a 1GHz chip if you cannot provide enough bandwidth? Even if they could get a 256-bit 500 MHz DDR-II bus, it'd have the same relative amount of bandwidth as the NV30. Thus all the necessary magic would only be good for claiming 8 Gigapixels of fillrate.

Then again, 8 Gpixels sounds like a lot... oh man ;) That's 200 times a Voodoo1 in less than a decade :O
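
To put rough numbers on that (a quick sketch only; the 1 GHz, 8-pipe, 256-bit part is purely hypothetical, while the NV30 figures are its announced 128-bit, 500 MHz DDR-II specs):

```python
# Bandwidth available per pixel of peak fillrate: the announced NV30 vs. a
# hypothetical 1 GHz, 8-pipe chip on a 256-bit 500 MHz DDR-II bus.
# The ratio comes out the same either way, which is the "same relative
# amount of bandwidth" point made above.

def bytes_per_pixel(bandwidth_gbs, fillrate_gpix):
    return bandwidth_gbs / fillrate_gpix

print(bytes_per_pixel(16.0, 4.0))  # NV30: 16 GB/s, 4 Gpixel/s -> 4.0 bytes/pixel
print(bytes_per_pixel(32.0, 8.0))  # hypothetical: 32 GB/s, 8 Gpixel/s -> 4.0 bytes/pixel
```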
 
Laa-Yosh said:
Realistic possibilities aside, you also have to factor in practical considerations. What good is a 1GHz chip if you cannot provide enough bandwidth? Even if they could get a 256-bit 500 MHz DDR-II bus, it'd have the same relative amount of bandwidth as the NV30. Thus all the necessary magic would only be good for claiming 8 Gigapixels of fillrate.

Assuming your game isn't using a huge number of surfaces with just one texture, the GeForce FX really isn't in any worse a situation than the GeForce4 Ti line. Since we all know that the Ti line has very good memory bandwidth characteristics, why won't the GeForce FX?
 
Chalnoth said:
Assuming your game isn't using a huge number of surfaces with just one texture, the GeForce FX really isn't in any worse a situation than the GeForce4 Ti line. Since we all know that the Ti line has very good memory bandwidth characteristics, why won't the GeForce FX?

Huh? Pardon me but I don't really get what you mean...

Let's see the parameters...

GF4:
1200 Mpixel/sec
10 GB/sec

GF FX:
4000 Mpixel/sec
16 GB/sec

That's a +233% increase in fillrate and a +60% increase in bandwidth for the GeForce FX. Not quite the same ratio, right? Comparing anything about the two cards' memory subsystems doesn't look logical to me...
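
For anyone who wants to check the arithmetic, a trivial sketch using the peak numbers listed above:

```python
# Percentage increases quoted above (GeForce4 Ti vs. GeForce FX peak specs).

def pct_increase(old, new):
    return (new - old) / old * 100

print(round(pct_increase(1200, 4000)))  # 233 -> +233% fillrate (Mpixel/s)
print(round(pct_increase(10, 16)))      # 60  -> +60% bandwidth (GB/s)
```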
 