Question: Where does Nvidia go from here?

Well, I guess we have to wait and see how fast the GeForce FX really is, but I believe ATI can take it on with a 435 MHz core and a 450 MHz RAM clock.

I also feel that ATI would have to be fools not to be well on their way to finishing the 0.13 micron move. They should also be wrapping up the R400 design in the next couple of months. I really think nVidia has some trouble ahead if ATI can stay on the ball.
 
DaveBaumann said:
Actually, I glibly asked Alan Tiquet (NV) when 0.09 was on the cards (pardon the pun), and he pointed out that 0.11 was next...
That's strange, considering 0.11 isn't on anybody's roadmap.
 
Well, I certainly wasn't aware that 0.11 existed as a process (if it does), but this is what he said.
 
I wouldn't be surprised. The move from 0.18 to 0.15 was not originally planned. It was supposed to be from 0.18 to 0.13, but that proved too difficult, so 0.15 was introduced as a stopgap solution.
 
Well, I guess we have to wait and see how fast the GeForce FX really is, but I believe ATI can take it on with a 435 MHz core and a 450 MHz RAM clock.

I also feel that ATI would have to be fools not to be well on their way to finishing the 0.13 micron move. They should also be wrapping up the R400 design in the next couple of months. I really think nVidia has some trouble ahead if ATI can stay on the ball.

Agreed. ATI should be able to make the R300 stand up to the GFFX with just a higher-clocked core and faster memory (GDDR-3 perhaps, or not even), without having to introduce the much-rumored R350.


The R400 should indeed be finished within a few months. This past spring, in ATI's conference call, they mentioned the R400 and R500 by name, along with their desire to win the contract to provide the graphics for Nintendo's next machine. I don't recall the R350 being mentioned, though.

I'd expect R400 to have a summer launch, with product on shelves by fall.
 
sc1 said:
demalion said:
Well, nVidia was retooling for 0.13 well before releasing the GF4, wasn't it? So I don't see how releasing the 0.15 R300 would say anything about their next product.

Also, it is not as if they are NOT going to go to 0.13, so why would there be any further delay? Look at your first statement (I didn't quote it)...I'm pretty sure ATi has known that for a while. :p

Huh? The GF4 was a comparatively small design effort (tweaking the existing GF3 core). The GeForce FX is a huge design challenge. Until you have taken this route, you can't proceed to the next generation of chip.

? Which route do you have to take to achieve the next generation? ATi didn't for the R300...

We are talking about two things: achieving a next-generation design, and implementing a design on a new process size.

nVidia did both at the same time (and has yet to deliver), ATI did the first already (and delivered a while ago)...and you are proposing this indicates they don't/won't have the resources to do the second. I don't get the reasoning.

Because ATI has spent the bulk of its effort on the 9700, it doesn't have enough resources to concentrate on a 0.13 design. Unless ATI's engineers are 2x as fast, it's unlikely to come out with a 0.13 product until 2003Q4 or 2004Q1. The best they can do is tweak the 9700, which is unlikely to yield the same improvement as the GeForce FX.

It has already been mentioned that both companies have alternate design teams. Where do you get that they spent the "bulk" of their effort, and your evaluation of whether they have "enough resources" to concentrate on 0.13? Also, achieving the same thing in two separate steps instead of one does not require engineers to be "2x" as fast, even if the two separate steps are executed faster or just as fast. Keep in mind that 0.13 micron is a means to an end, and ATI only has to achieve it when it is necessary for them to achieve that end. My guess is this is the R400...so let's look at this: they did their first step about 6 months before nVidia, and if they are aggressive they can do their second step 6 months after. You really doubt this is possible?

EDIT: Actually, I think they could do it much sooner if they had a reason to have made that their goal....
 
sc1 said:
Because ATI has spent the bulk of its effort on the 9700, it doesn't have enough resources to concentrate on a 0.13 design. Unless ATI's engineers are 2x as fast, it's unlikely to come out with a 0.13 product until 2003Q4 or 2004Q1. The best they can do is tweak the 9700, which is unlikely to yield the same improvement as the GeForce FX.


I don't think the 0.13 micron move is a matter of design changes--but rather a matter of process implementation and what that may entail. There can be no doubt in anybody's mind now that ATI was right to give 0.15 microns at least one last hurrah with the R300. As far as "resources" go, ATI is in better shape with the R300 and the R300-based product line than it was pre-R300, so I doubt resources will be a problem. (It isn't as if once you spend the resources they're gone for good... ;) The product of your labor replenishes the coffers, if it's worthwhile, which the R300 certainly is.) ATI has proven to my satisfaction that it is serious about retaking its market share and isn't goofing around (which, frankly, I thought the company was doing for a long time, and as a result I didn't take ATI seriously).

nVidia got into the habit, way back in the 3dfx days, of committing prematurely to new chip processes and RAM types, and often based targets and design goals on these new technologies, whose development nVidia does not control. It was bound to catch up with them at some point--but up to nv30 the company's luck had been running strong in that regard.

When nv30 does indeed ship, it will be very interesting to see how driver development has come along. Will it be as good as ATI's initial 9700 Pro driver package (which I thought was excellent for a new-architecture product)? Or will it be on the order of the GF1 drivers in terms of the time it takes to get the desired performance and stability from the GFFX (it took a good while before the GF1 would reliably outperform the TNT2)? I think driver development and efficacy will have something to say about where nVidia sits in the short term.

But as far as hardware goes, I can't imagine anything at this point beyond continuous tweaking of the core--and I don't mean adding new stuff like a 2nd TMU, etc.--which I don't think is going to happen. Because of this, I predict we'll see an enormous amount of marketing from nVidia in 2003, more than we'll see solid development (at least on the nv30 scale). Oh yeah--plus, I think nVidia will be very busy getting out its low- to mid-range nv30-based products so that it isn't too far behind ATI in those markets--hopefully doing some nice things with the nForce2 (which I think I want right now but am still not 100% sure about). And then there's Xbox and whatever that might entail of nVidia's resources. Speaking of resources, especially in the current economy, I think both ATI and nVidia might find themselves stretched more than they would prefer in 2003.
 
Ailuros said:
I think now that NV30 is out, the people who were propagating that rumor will quiet down.

Not any worse than the 4x4 speculations :rolleyes:

Well, some fanboys (regardless of manufacturer, except maybe BitBoys... their supporters don't even dare to say aloud that they support BB, let alone talk about specs; no one would believe them anyway) just take the specs of the current best bet and add a few details to make it look faster yet realistic. I can already guess that the 4x4 rumour started just after the Parhelia chip launch in May.

I call people who make up "reliable" specs wannabe spies. Of course, there is also always a group of people who hear real info, but because they speak faster than they think, they usually end up getting crap from their reliable sources. Nothing shuts someone up more effectively than feeding him some yada yada about an upcoming chip and making him look like a fool in front of the general public.

So... which category do I fall into?? I really don't know anymore... I think most of my hints have been technically pretty close to what ended up coming (or have they?? Maybe I just want to see things in that light.), but otherwise things have gone to hell more than three times already. Hmmh... maybe I should make a poll about my reliability. I really don't have a clue what people think of my posts... but that would be way too lame... oh well... who cares... aaaanyways...

That's why I have been so quiet lately.
 
RussSchultz said:
Their flip chip already has ~1000 balls, so maybe they're anticipating moving to a wider bus?
I don't know if anyone here remembers "The Goodies" but...
Graeme: "That's a lot of balls"
Bill: "No, it's true".

Those pins may be in use already. AFAIU, you've got to have a certain number of power and ground pins for every N external connections and, even worse, as the clock rate goes up "N" goes down. It's a case of diminishing returns.
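
A back-of-the-envelope sketch of that scaling (in Python; every count below is invented purely for illustration and is not real NV30 package data):

```python
# Rough illustration of the power/ground pin point: every N signal balls
# need a power/ground pair, and N shrinks as I/O clock rates rise.
# All numbers here are made up for illustration.

def total_balls(signal_balls, signals_per_pwr_pair):
    """Signal balls plus the power/ground pairs needed to keep them quiet."""
    pairs = -(-signal_balls // signals_per_pwr_pair)  # ceiling division
    return signal_balls + 2 * pairs                   # each pair adds 2 balls

# Hypothetical signal count: 256-bit memory bus + address/control + AGP/misc.
signals = 256 + 60 + 50

for clock_mhz, n in [(200, 4), (400, 2), (500, 1)]:  # N drops as clocks rise
    print(f"{clock_mhz} MHz I/O, one pwr/gnd pair per {n} signals: "
          f"{total_balls(signals, n)} balls")
```

Even with made-up numbers the shape of the problem shows: halving N roughly doubles the power/ground overhead, so the ball count climbs much faster than the bus width does.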
 
Remember SA's post about available 'low-hanging fruit' from improved IMR efficiency?

Would any of the proposed techniques necessitate

more transistors
more MHz
more power draw

to improve IMR performance?
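
For reference, here is a minimal Python sketch of one frequently cited piece of that "low-hanging fruit": early depth rejection, which raises effective fill rate without more MHz or a die shrink. This is a generic illustration, not SA's actual proposal (which isn't quoted in this thread):

```python
from collections import defaultdict

# Early-Z rejection: occluded fragments are discarded before the expensive
# shading step, so fewer pixels pay the full cost of the pipeline.

def rasterize(fragments, shade):
    depth = defaultdict(lambda: float("inf"))   # empty buffer = far plane
    shaded = 0
    for x, y, z, data in fragments:
        if z >= depth[(x, y)]:                  # early-Z: occluded, skip it
            continue
        depth[(x, y)] = z
        shade(x, y, data)                       # only visible fragments pay
        shaded += 1
    return shaded

# Near fragment arrives first, so the far one at the same pixel is rejected:
print(rasterize([(0, 0, 0.1, "near"), (0, 0, 0.9, "far")],
                lambda x, y, d: None))          # -> 1
```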

I don't understand writing off nVidia because the GeForce FX doesn't offer the average gamer a whole lot more than increased fillrate over the R300. It's a first-generation 0.13 micron product and a first-generation DX9 product - it's a hell of a lot more than the GF4 range.
 
Now, SA's posts and the way people take them amuse me a little, since everyone always assumes they are in relation to NVIDIA.

Now, let's look at two of the major topics he's started over the past few months. First there was the combination of some or all parts of the shader architecture, and then there was HSR, possibly by deferred rendering / tiling. NVIDIA has done neither of these things yet.

However, if we remember back, a few months after the shader architecture topic we saw a leaked 3Dlabs slide talking about a unified shader architecture. Then, not long after SA talked about HSR efficiency, 3Dlabs announced the P9, a deferred renderer...

Mmmmmmmm.
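
For readers who haven't followed the deferred rendering discussion: the core idea is binning geometry into screen tiles so visibility can be resolved per tile before any shading happens. A minimal Python sketch (tile size and helper names are illustrative assumptions, not anything from 3Dlabs or SA):

```python
TILE = 32  # tile edge in pixels (assumed)

def bin_triangles(triangles, width, height):
    """Map each triangle to every tile its bounding box touches."""
    last_tx, last_ty = (width - 1) // TILE, (height - 1) // TILE
    bins = {}
    for tri in triangles:
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        for ty in range(max(0, int(min(ys)) // TILE),
                        min(last_ty, int(max(ys)) // TILE) + 1):
            for tx in range(max(0, int(min(xs)) // TILE),
                            min(last_tx, int(max(xs)) // TILE) + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins  # per-tile lists: visibility is resolved tile by tile

# A triangle spanning two tiles horizontally lands in both bins:
tris = [((10, 10), (50, 10), (30, 20))]
print(sorted(bin_triangles(tris, 640, 480)))    # -> [(0, 0), (1, 0)]
```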
 
Simon F said:
I don't know if anyone here remembers "The Goodies" but...
I used to love the Goodies. I loved the "Ecky-Thump" episode, if anyone remembers...

Also, I have asked this question before, but no one answered... :( so if anyone is kind enough to answer, please do: who is SA? He seems to know way too much!
 
DaveBaumann said:
Now, SA's posts and the way people take them amuse me a little, since everyone always assumes they are in relation to NVIDIA.

Now, let's look at two of the major topics he's started over the past few months. First there was the combination of some or all parts of the shader architecture, and then there was HSR, possibly by deferred rendering / tiling. NVIDIA has done neither of these things yet.

However, if we remember back, a few months after the shader architecture topic we saw a leaked 3Dlabs slide talking about a unified shader architecture. Then, not long after SA talked about HSR efficiency, 3Dlabs announced the P9, a deferred renderer...

Mmmmmmmm.

I was just trying to point out that there are other techniques besides die shrinkage / clock increases / expensive memory, since people seem to think nVidia has hit an expensive wall.

Why do people always assume the wall has been hit on IMRs?

I hadn't missed the link between SA's posts and the P9, but the assumption is that nVidia engineers are working on stuff like that themselves, isn't it?
 
You'll apparently have to cut and paste the URL for the roadmap into your browser to see it.

Interesting...

nVidia seems to be positioning the NV30 and follow-up NV35 in a market segment ABOVE the GeForce4 Ti. That would coincide with a $400-$500 price tag we've been hearing about.

The NV31 would be a GeForce4 Ti replacement....presumably $150-$400

The NV34 would be the GeForce4 MX replacement....presumably sub $150...

NV35, NV31, and NV34 are all slated toward the end of "1H 03", though the NV30 is pictured as being close to the beginning of 2H 02...so it's hard to tell how old this roadmap might be...
 
Joe DeFuria said:
You'll apparently have to cut and paste the URL for the roadmap into your browser to see it.

Interesting...

nVidia seems to be positioning the NV30 and follow-up NV35 in a market segment ABOVE the GeForce4 Ti. That would coincide with a $400-$500 price tag we've been hearing about.

The NV31 would be a GeForce4 Ti replacement....presumably $150-$400

The NV34 would be the GeForce4 MX replacement....presumably sub $150...

NV35, NV31, and NV34 are all slated toward the end of "1H 03", though the NV30 is pictured as being close to the beginning of 2H 02...so it's hard to tell how old this roadmap might be...

Yes, I think it's an old roadmap, but it's quite informative nevertheless. We don't know yet whether Nvidia has changed its timetable or not.
 