Johnny Rotten said:
It's severely outdated, as it shows the NV30 being an early/mid second-half '02 release. Hah!
Evildeus said:
Yeah, but it's more the information contained than the accuracy of the timetable that is interesting (and it shows the delays). Still, we don't know if the late release of the NV30 has any impact on the NV35.
Chalnoth said:
Evildeus said: Still we don't know if the late release of the NV30 has any impact on the NV35.
The NV35 probably won't be released until next Fall (unless the NV40 is slated for then, which doesn't seem out of the question given ATI's increased pressure lately; in that case the NV35 may never be released).
But the current NV30 probably includes many technologies that were originally slated for the NV35. I have serious doubts that the engineers over at nVidia just sat on their hands while TSMC was having fabrication problems.
Because ATI have spent the bulk of their effort on the 9700, they don't have enough resources left to concentrate on a 0.13 design. Unless ATI's engineers are twice as fast, it's unlikely they'll come out with a 0.13 product until 2003 Q4 or 2004 Q1. The best they can do is tweak the 9700, which is unlikely to yield the same improvement as the GeForce FX.
radar1200gs said:
The main reason for NV30's delay (and consequently the provider of future headroom) is fairly simple, actually.
Things didn't go entirely according to plan on the manufacturing front.
nVidia was prevented from releasing earlier because they had to change 0.13-micron processes when the advanced one they hoped to use (TSMC offers two 0.13-micron processes) proved too troublesome to guarantee a usable supply of product.
nVidia designed NV30 around Applied Materials' Black Diamond process, which uses a low-k dielectric, meaning the chip runs cooler and faster and uses less power.
http://www.businesswire.com/cgi-bin...?story=/www/bw/webbox/bw.011801/210180127.htm
They were forced to switch to TSMC's standard 0.13-micron process, which meant a lower clock speed; the huge cooler and the Molex connector became necessary (to get clock speeds up to where they are needed on the normal process, nVidia is basically overvolting and overclocking the chips deliberately).
nVidia confirmed in a Merrill Lynch report on CNET that they decided not to go with the low-k dielectric process.
http://investor.cnet.com/investor/brokeragecenter/newsitem-broker/0-9910-1082-20687186-0.html
As another poster in this thread said, nVidia has a history of using and relying on the most advanced manufacturing processes available. This time it bit them (just like they were bitten with the TNT-1).
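Radar1200gs's point about deliberately overvolting and overclocking is also why the cooler ended up so large. Here is a minimal back-of-the-envelope sketch, assuming only the standard CMOS dynamic-power relation (power scales roughly with frequency times voltage squared); the ratios used are purely illustrative, not actual NV30 figures.

```python
# A rough illustration (not from the thread): dynamic CMOS power scales roughly
# as P ~ C * V^2 * f, so overvolting *and* overclocking compound quickly.
# The ratios below are made-up examples, not actual NV30 figures.

def dynamic_power_scale(freq_ratio: float, volt_ratio: float) -> float:
    """Relative dynamic power for a given clock and voltage increase."""
    return freq_ratio * volt_ratio ** 2

# Hypothetical: +25% clock pushed through with a +10% core voltage bump.
scale = dynamic_power_scale(1.25, 1.10)
print(f"Relative power/heat: {scale:.2f}x")  # ~1.51x as much heat to dissipate
```

Even a modest voltage bump on top of a clock bump compounds quickly, which is the extra heat the oversized cooler and the Molex connector are there to handle.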
demalion said:
Where do you get the last conclusion? I was under the impression that what they were busy doing was removing features from the nv30 and trying to get better performance out of them, if anything. If they were busy with more advanced technology, I'd actually expect it would be to speed up the introduction of the nv35, based on the lessons learned from the nv30.
I'm not saying you're wrong, but I'm interested in why you think it is the case.
Hyp-X said:
NV35: low-k 0.13 process, 8 pipes, close to 1 GHz speed, 256-bit bus.
Laa-Yosh said:
Isn't that a bit too much? 650-700 MHz is more realistic, especially because they really cannot add any more cooling (unless they ship it with liquid nitrogen).
Hyp-X said:
I don't know what is realistic, actually, as I don't know how much less power is needed for the more advanced process, or whether the chip design allows such high clock rates at all.
Laa-Yosh said:
Realistic possibilities aside, you also have to factor in practical reasons. What good is a 1GHz chip if you cannot provide enough bandwidth? Even if they could get a 256-bit 500 MHz DDR-II bus, it'd have the same relative amount of bandwidth as the NV30. Thus all the necessary magic would only be good to claim 8 Gigapixels of fillrate.
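To make that ratio concrete, here is a rough sketch assuming the commonly quoted NV30 figures (500 MHz core, 128-bit bus, 500 MHz DDR-II) and the hypothetical 1 GHz, 256-bit part being discussed; the 8-pixels-per-clock count follows the thread's own marketing-style framing rather than measured output.

```python
# Back-of-the-envelope: bytes of memory bandwidth per (claimed) pixel of fillrate.
# NV30 numbers are the commonly quoted ones; the 1 GHz part is hypothetical.

def bandwidth_gb_s(bus_bits: int, mem_clock_mhz: float, ddr: bool = True) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and memory clock."""
    transfers_per_s = mem_clock_mhz * 1e6 * (2 if ddr else 1)
    return (bus_bits / 8) * transfers_per_s / 1e9

def fillrate_gpix_s(pipes: int, core_clock_mhz: float) -> float:
    """Peak pixel fillrate in Gpixels/s (marketing-style pipes * clock count)."""
    return pipes * core_clock_mhz * 1e6 / 1e9

nv30_bw, nv30_fill = bandwidth_gb_s(128, 500), fillrate_gpix_s(8, 500)    # ~16 GB/s, 4 Gpix/s
hypo_bw, hypo_fill = bandwidth_gb_s(256, 500), fillrate_gpix_s(8, 1000)   # ~32 GB/s, 8 Gpix/s

print(f"NV30:          {nv30_bw / nv30_fill:.1f} bytes per pixel")   # ~4.0
print(f"1 GHz/256-bit: {hypo_bw / hypo_fill:.1f} bytes per pixel")   # ~4.0
```

Both work out to roughly 4 bytes of bandwidth per claimed pixel, which is Laa-Yosh's point: doubling core clock and bus width together leaves the chip no less bandwidth-starved per pixel than the NV30.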
Chalnoth said:
Laa-Yosh said: What good is a 1GHz chip if you cannot provide enough bandwidth?
Assuming your game isn't using a huge amount of surfaces with just one texture, the GeForce FX really isn't in any worse a situation than the GeForce4 Ti line. Since we all know that the Ti line has very good memory bandwidth characteristics, why won't the GeForce FX?
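A similar hedged back-of-the-envelope for Chalnoth's comparison, assuming the commonly quoted GeForce4 Ti 4600 and GeForce FX 5800 Ultra figures (both 4x2 pipeline designs with 8 texture units, 10.4 GB/s and 16 GB/s of memory bandwidth respectively); treat the exact numbers as illustrative.

```python
# Bytes of memory bandwidth per texture lookup: GeForce4 Ti 4600 vs. GeForce FX
# 5800 Ultra, using commonly quoted specs; treat the result as a rough estimate.

def bytes_per_texel(bw_gb_s: float, core_mhz: float, texture_units: int) -> float:
    """Memory bytes available per texel fetched at peak texel rate."""
    texel_rate = core_mhz * 1e6 * texture_units   # texels per second
    return bw_gb_s * 1e9 / texel_rate

ti4600 = bytes_per_texel(10.4, 300, 8)   # 128-bit bus, 650 MHz effective DDR
fx5800 = bytes_per_texel(16.0, 500, 8)   # 128-bit bus, 1 GHz effective DDR-II

print(f"GeForce4 Ti 4600:  {ti4600:.1f} bytes/texel")   # ~4.3
print(f"GeForce FX 5800 U: {fx5800:.1f} bytes/texel")   # ~4.0
```

Per texture lookup the two end up within roughly 10% of each other, which is the sense in which the FX is no worse off than the Ti line once multitexturing keeps all eight texture units busy.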