Not a rumor, NV30 this year, Geforce4 "refresh" of

ben6

Regular
http://www.vr-zone.com/#2265 has a translation of an interview with NVIDIA's David Kirk from http://www.watch.impress.co.jp/pc/docs/2002/0409/kaigai01.htm

NVIDIA's goal is to develop a new GPU with a new architecture every year and release a follow-up product with better performance after six months. For example, NV20, aka GeForce3, is a new GPU with a new architecture (programmable vertex and pixel shaders), and NV25, aka GeForce4, is the follow-up product with performance enhancements.

The reason for having two vertex shaders instead of two pixel shaders is that two vertex shaders can process more vertices at the same time, which increases geometry throughput. Also, a pixel shader consumes nearly twice the silicon area of a vertex shader, and with the current 0.15 micron process, adding a second pixel shader would increase the die size considerably.
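The area trade-off Kirk describes comes down to simple arithmetic. A sketch of it below, where the unit areas are invented purely for illustration - only the roughly 2:1 pixel-to-vertex-shader ratio comes from the interview:

```python
# Back-of-the-envelope model of the die-area trade-off. The unit
# areas are made up; only the ~2:1 ratio (a pixel shader costing
# about twice the silicon of a vertex shader) is from the interview.
VERTEX_SHADER_AREA = 1.0                      # arbitrary units
PIXEL_SHADER_AREA = 2.0 * VERTEX_SHADER_AREA  # "nearly twice" per Kirk

def extra_area(extra_vertex_units, extra_pixel_units):
    """Silicon area added by extra shader units, in the same arbitrary units."""
    return (extra_vertex_units * VERTEX_SHADER_AREA
            + extra_pixel_units * PIXEL_SHADER_AREA)

# Doubling the vertex shaders costs half the area that doubling
# the pixel shaders would:
print(extra_area(1, 0))  # 1.0
print(extra_area(0, 1))  # 2.0
```

Under these assumed numbers, the second vertex shader is the cheaper way to spend transistors, which is the trade NVIDIA says it made on 0.15 micron.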

Accuview Shifted Antialiasing was introduced to improve the performance of anti-aliasing; its goal is to let users turn on anti-aliasing without sacrificing much performance. However, the problem of turning on AA without sacrificing much performance will not be solved completely, but under present conditions it is rather satisfactory. Also, flat panel displays are getting more popular, and since they display sharper images, AA will become more and more important.
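Why turning on AA normally costs so much can be seen from a toy fill-rate model. The numbers and cost factors below are illustrative assumptions, not measured GeForce4 or Accuview figures:

```python
# Toy fill-rate model of AA cost. All costs are illustrative
# assumptions, not measured hardware figures.
def supersample_cost(base_pixels, samples):
    """Supersampling shades every sample: work scales with sample count."""
    return base_pixels * samples

def multisample_cost(base_pixels, samples, shade_cost=1.0, resolve_cost=0.25):
    """Multisampling shades each pixel once but still writes and
    resolves every sample, so the overhead is smaller (resolve_cost
    per sample is an assumed figure)."""
    return base_pixels * (shade_cost + resolve_cost * samples)

pixels = 1024 * 768
print(supersample_cost(pixels, 4) / pixels)   # 4x the no-AA shading work
print(multisample_cost(pixels, 4) / pixels)   # 2x under these assumed costs
```

The gap between those two numbers is roughly the kind of win a multisampling-style scheme chases: less shading work per pixel, at the price of only smoothing polygon edges rather than the whole image.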

NVIDIA is eager to use the new 0.13 micron process technology, but it is not yet mature. Still, the next-generation NVIDIA GPU, NV30, will most likely be 0.13 micron based, and it should arrive in the second half of this year.

The mobile version of the GeForce4 uses TSMC's Low Power ("LV") edition of the 0.15 micron process, while the desktop version uses the High Speed ("HS") edition of the same process.

There will not be 6 rendering pipelines; pipeline counts in powers of two, like 2, 4 and 8, are what NVIDIA would like to see. Therefore there is a chance that the next-generation GPU will have 8 pipelines, since the current GeForce4 has 4.
 
I wish nVidia would stop changing their mind on what counts as a new core versus just a refresh...

NVIDIA's goal is to develop a new GPU with a new architecture every year and release a follow-up product with better performance after six months. For example, NV20, aka GeForce3, is a new GPU with a new architecture (programmable vertex and pixel shaders), and NV25, aka GeForce4, is the follow-up product with performance enhancements.

Well, we immediately know, of course, that it might be nVidia's goal, but they certainly don't always meet it. A new GPU every year? nVidia has already missed that goal for the last two product cycles.

GeForce + 6 months = GeForce2 GTS = REFRESH
GeForce2 + 6 months = GeForce2 Ultra = REFRESH
GeForce2 Ultra + 6 months = GeForce3 = NEW CORE
GeForce3 + 6 months = GeForce3 Ti = REFRESH
GeForce3 Ti + 6 months = GeForce4 = REFRESH
GeForce4 + 6 months = ??

So, with nVidia now claiming that GeForce4 is a "refresh" (which I could have sworn they claimed was a new core just last month...), nVidia has been pretty bad at sticking to their so-called schedule. It has NOT been a "new core, 6-month refresh, 6 months later new core" cycle. They've had a "new core, 6-month refresh, 6 months ANOTHER refresh, 6 months new core" cycle.

Now, I suppose that when nVidia comes out with the GeForce4 Ti "Ultra" this fall ;), nVidia will then turn around again and say that the GeForce4 was in fact a new core, and the Ultra is the "we always do a 6-month refresh" part...
 
And NVIDIA is the first company in recorded history to not do something they said they were going to do... :rolleyes:
 
Livecoma said:
And NVIDIA is the first company in recorded history to not do something they said they were going to do... :rolleyes:

That's fine, but two wrongs don't make a right, and what exactly is wrong with calling a spade a spade? I don't see what Joe posted as a lie; he spoke the truth and has painted a pretty accurate picture of how nVidia's roadmap has progressed so far.
:-?
 
Well, the one thing I take away from this is NV30 in 2002. I certainly hope this happens, because I don't want to see Nvidia go the way of 3dfx. . .milking the same core product cycle after product cycle.
 
And NVIDIA is the first company in recorded history to not do something they said they were going to do... :rolleyes:

And your point is? :rolleyes: Livecoma, can't anyone make an observation about nVidia that is not of praise without getting a "who cares" type of response from you?

To be clear, my points are the following:

Point 1) nVidia is typically "credited" by many for meeting the same 6-month product cycle that Kirk spelled out in the interview: a new product, then 6 months later a refresh, then 6 months later a new product. They have NOT, however, been doing this, so I'm merely pointing it out. Note, this is not merely pointing out that the company didn't do what they said they WOULD do. I'm pointing out that the company did not do what they said they DID. See the difference?

Kirk apparently pointed to the GeForce3/4 as an example of the execution of their 12-month cycle on the same core. However, it has actually been 18 months, and still counting. So what kind of confidence does this give us about nVidia's product cycles?

Point 2) The fact that nVidia may be PLANNING on getting NV30 out this year, does not mean they will, as the subject line of this post suggests. I would not be surprised much at all if we don't see the Nv30 (or R300) this year, and we see the current products refreshed instead. To be clear, I wouldn't be surprised either way at this point. (DX9 products this summer / fall, or next spring.)
 
John Reynolds said:
Well, the one thing I take away from this is NV30 in 2002. I certainly hope this happens, because I don't want to see Nvidia go the way of 3dfx. . .milking the same core product cycle after product cycle.

I agree. I am not looking for another company to take over nVidia's place, but I am hoping for more competition.
 
tEd said:
...about next-gen GPUs, there is an interview with an ATI PR guy here: http://www.reactorcritical.com

R300 is almost ready and will ship as soon as DX9 arrives, and you can expect a jump like the one from the Radeon to the Radeon 8500.

...and DX9 is still not even in beta. Now they (MS) are saying "May-ish". I smell a slip from the target of Q3 2002, gentlemen! We may not see DX9 final released before, say, November. So will R300 be locked away in a cellar until then?

Something is rotten in the state of DX9!
 
DX9 has not been released to the beta program, but there are internal releases, and I'm pretty sure ATI, NVIDIA and all the other guys have their "copies".
 
Ante P said:
DX9 has not been released to the beta program, but there are internal releases, and I'm pretty sure ATI, NVIDIA and all the other guys have their "copies".

Of course they have - they're helping make the darn thing. 8) I just think it's a bit odd that it hasn't reached beta yet. It suggests to me that some DX9 features/implementations aren't nailed down yet. DX9 hardware has been in the works for quite some time - so what gives?
 
However, the problem of turning on AA without sacrificing much performance will not be solved completely, but under present conditions it is rather satisfactory

I'd personally be more concerned with the mindset that antialiasing isn't being addressed from an "image quality" standpoint, but instead from the standpoint of what they can label as AA provided it doesn't have a performance hit.

This is the *wrong* basis for antialiasing IMO, and one that just seems to be more "GF3 multisampling" in derivation.

Whatever happened to focusing on image quality versus box-side feature lists and stretched definitions...
 
Joe DeFuria said:
Point 1) nVidia is typically "credited" by many for meeting the same 6-month product cycle that Kirk spelled out in the interview: a new product, then 6 months later a refresh, then 6 months later a new product.

They are widely credited with a 6 month cycle. The subtlety of the alternating new core/refresh core concept is lost on most people, and is an unrealistic expectation in any case.

The simple fact is that they do pop out a new and superior product approximately every 6 months. And on nearly every occasion, it is also the highest performing product on the market.

Sitting around praising the company is pointless. But nitpicking product schedules does come across as... kinda whiney.
 
Oompa Loompa said:
Sitting around praising the company is pointless. But nitpicking product schedules does come across as... kinda whiney.

I'm sure Joe's a big boy and can defend his own words, but I don't think he was "nitpicking product schedules" so much as nitpicking inconsistent company PR statements regarding those product schedules. I do agree with you, though, in that whether or not the product is based on a new architecture, Nvidia has still managed to continue releasing new chips that are faster than the competition's offerings.
 
The reason for having two vertex shaders instead of two pixel shaders is that two vertex shaders can process more vertices at the same time, which increases geometry throughput. Also, a pixel shader consumes nearly twice the silicon area of a vertex shader, and with the current 0.15 micron process, adding a second pixel shader would increase the die size considerably.

You have 4 pixel pipelines, so you have 4 pixel combiners (shaders), not 1. If you only had one, you'd only be able to process one pixel at a time, not four. So what does "2 pixel shaders" mean? That it can process 2 ops at a time? Wouldn't that be called a superscalar unit? WTF
 
I certainly hope this happens, because I don't want to see Nvidia go the way of 3dfx. . .milking the same core product cycle after product cycle.

The subtlety of the alternating new core/refresh core concept is lost on most people, and is an unrealistic expectation in any case. The simple fact is that they do pop out a new and superior product approximately every 6 months. And on nearly every occasion, it is also the highest performing product on the market.

I do agree with you, though, in that whether or not the product is based on a new architecture, Nvidia has still managed to continue releasing new chips that are faster than the competition's offerings.

If you look at 3dfx, over the course of a bit over a year they released the Voodoo Rush, Voodoo2, Voodoo Banshee and Voodoo3. They stalled out on the Voodoo5 after they started losing money in mid-'99, but it was at least intended to be released late that year. Whatever you say about the technology, the Voodoo5, as fast as a GF256, was light years ahead of the Voodoo2 in terms of sheer speed, and gaming had gone from 16-bit at 640x480 to 32-bit at 1024x768 with maybe a little FSAA thrown in. Even the Voodoo2 to Voodoo3 transition included a huge increase in speed - about twice the speed for half the price - and throw in good-quality 2D as well.

If you look at ATi, their Rage Fury was a bit faster than a TNT (and more so in 32-bit), the Rage Fury Pro pretty much equal to a TNT2 but with a broader featureset, and the Rage Fury Maxx about as fast as a GeForce256 SDR, and all those cards came out in '99. The drivers were poor in a lot of areas and the cards were perpetually three to six months late to market, but the hardware was moving forward. The Radeon was in GF2 GTS range and also continued their approximate six-month cycle. But then nothing for over a year, until the 8500. Well, almost nothing - there was a faster-clocked "SE" version of the Radeon64 DDR that got little notice. But the 8500 was again about as fast and capable as the GF3, so ATi had really lost no ground to nVidia over three years.

And if you look at nVidia, you do see a new and faster product every six months, and almost always the fastest thing on the market. But look closer and what do you see? The original TNT didn't quite deliver as promised, but the six-month refresh TNT2 did deliver on that six-month-old promise. The GF256 SDR was extremely memory bandwidth-limited, but the later DDR version was better and they finally got it right with the GF2 GTS. The new architecture was finally notably faster than the previous TNT2 Ultra - double the framerate in Q3 at 10x7, but less in other benchmarks. But that took a year, just like 3dfx with the V2-V3, and now the card cost 50% more, not 50% less. Yes, it had hardware T&L, but what good did that do you in '00?

So then they speed-bin the chips, require very fast memory and put out the GF2 Ultra, now reaching an incredible $500. It kicks some ass in Q3, some 150% faster than a TNT2 Ultra, but is only marginally faster than the GTS and only about 50% faster than the year-old GF256. And a UT benchmark in Anandtech's review showed it to be slower than the V5, Radeon, and RFMaxx at 10x7, whether 16-bit or 32-bit. A real product cycle? Real progress?

Then the GF3, not much faster than the Ultra at first, includes programmable shaders still mostly useless today, but it did introduce advanced memory bandwidth management with Lightspeed, a very good thing. But only 25-50% faster than a year-old GTS in Q3 this time, and the price hadn't dropped much from the GF2 Ultra's record level.

Maybe ATi could have released a faster, high-cost Ultra version of the Radeon, or a Maxx dual-chip version, but they didn't. They did finally release an Ultra version in the 7500, using the old nVidia trick - speed increase via a die-shrink. But it was at half the price of the original Radeon and not intended to be new generation, rather the 8500 was.

So in reaction, nVidia releases their Ti lineup, which featured the Ti 500, nothing new. But barely faster than the original GF3 because of the meager 9% memory speed increase, and still at $400. New generation or purely a reaction to competition? And now the GF4s, which do have feature improvements and even more speed, but not a huge step forward.

So has nVidia really delivered well beyond the others, or is it a designed-in product cycle that includes only marginal speed and feature improvements, plus occasional (18 month) architecture overhauls including major feature additions with no real-world applications to truly test them? Could ATi have put out a card a year ago that ran with the GF2 Ultra or GF3, assuming a $400-500 price tag? I imagine they could have, but it would have been panned for not including DX8 capability (like the V5 re T&L), and it might well have slowed the 8500 to market. Or maybe they simply chose not to pursue this rapid upgrade path.

In any case, I think this whole nVidia superiority thing based on the six-month product cycle and superior technology based on T&L and programmable shaders is both oversimplified and overstated. It's not necessarily that they're moving forward faster than anyone else, rather it's at least partially that they are trying very hard to look like they are. And they're very good at that.
 
Joe DeFuria said:
And NVIDIA is the first company in recorded history to not do something they said they were going to do... :rolleyes:

And your point is? :rolleyes: Livecoma, can't anyone make an observation about nVidia that is not of praise without getting a "who cares" type of response from you?


Hmm, well, I say "who cares" simply because I fail to see the point in scrutinizing something that minuscule. If I meticulously searched for and pointed out every discrepancy between PR talk and truth, I would be 100% insane instead of 50%. You can poke and prod at NVIDIA's PR speak all you want, of course, but let's not forget how much they have actually delivered.

How many times has NVIDIA moved fast enough to take the overall performance crown away from their own product? I think the only other company to steal the performance crown from themselves was 3dfx, with the V1 to V2, perhaps? And who's right there releasing yet another graphics chip that outdoes everything else on the market right now? It isn't the RV250... 8)

NVIDIA, so far, has left everybody else in the dust. If anybody's product cycles should be under the microscope, it's Matrox's. Now I'd love to know what they are up to. NVIDIA's execution has become rather reliable, and therefore predictable. Boring!



I wish nVidia would stop changing their mind on what counts as a new core versus just a refresh...

Don't worry, I will e-mail big bad NVIDIA and ask them to modify their interview responses to be less irritating to you. :LOL: :LOL:
 
LeStoffer said:
I just think it's a bit odd that it hasn't reached beta yet. It suggests to me that some DX9 features/implementations aren't nailed down yet. DX9 hardware has been in the works for quite some time - so what gives?

It might be related to MS's taking a month or so to focus on security and robustness, rather than feature creep. The fact that I get a Critical Update notification almost every other day seems to support this.

Is MS' bug-quashing month already over?
 
I'm sure Joe's a big boy and can defend his own words, but I don't think he was "nitpicking product schedules" so much as nitpicking inconsistent company PR statements regarding those product schedules.

Exactly. nVidia is to be rightly credited with shipping products, and shipping them on a 6-month schedule. I'm not nit-picking their schedule. I'm not saying they should be doing even MORE than they have been doing. I'm nit-picking what nVidia claims their schedule to be, because it has direct implications when trying to "predict" what their next product might be.

Livecoma,

Hmm, well, I say "who cares" simply because I fail to see the point in scrutinizing something that minuscule.

Again, what we're trying to do here is make a prediction of what is going to be shipped this fall, no? (See thread topic.)

To do this, we partly rely on rumors and speculation, and we partly rely on historical performance. So getting the historical perspective CLEAR is important when trying to make a "best guess" as to what's going to happen this fall, and what might happen alternatively. Wouldn't you agree?

Now, if we take Kirk's word at face value, there is only one conclusion: there will be an NV30 this fall. However, when you realize that what he's saying doesn't really make sense (the GeForce3/4 product schedule), that calls into question what might really happen.

How many times has NVIDIA moved fast enough to take the overall performance crown away from their own product?

Almost always. But what's your point? I would not be surprised if this fall we see EITHER a "NV30" or a "GeForce4 Ultra"...EITHER of which would take away the performance crown relative to the current Geforce4.

NVIDIA's execution has become rather reliable, and therefore predictable. Boring!

No, this is where I disagree. What has been reliable is their product release rate. What has not been reliable is "new architecture" vs. "refresh" cycle release rate.

Part of the confusion arises between what should be considered a new core, and what is a "refresh." I think one could make a case for GeForce4 being either.

So, I am fully confident that nVidia will release SOME new part this fall, just like they always do. And whatever it is, nVidia is to be congratulated for shipping something. I am less confident that it will be an NV30. Just this past fall, nVidia had told some of us to expect the follow-up to GeForce3 to be not just "the same thing but faster"... and we ended up with the GeForce3 Ti. Back then nVidia was telling us their "cycle" was "new core in the fall, refresh in the spring." Somewhere along the line that changed, because the GeForce3 Ti is obviously not a new core.

Don't worry, I will e-mail big bad NVIDIA and ask them to modify their interview responses to be less irritating to you.

Just ask them to tell the truth, and you won't be irritated by my responses to their interviews. :LOL: (While you're at it, you can e-mail the marketing department of every other company as well...)

You have to stop implying that just because I point out contradictions in nVidia's statements, I am labeling them as "big and bad." If we're going to start making guesses as to WHAT will be shipped this fall, it's prudent that we get the facts straight.

For the record, here are my top three guesses as to what we'll see from nVidia, in order from most to least probable...all IMO:

1) NV30, but not on 0.13...on 0.15 micron. (So, lower clock and/or fewer pipes than the "real" NV30.)
2) NV30 on 0.13 (This is the "real" NV30).
3) GeForce4 Ti "refresh".

Similarly, I think we'll see from ATI:
1) R300 on 0.15 (4 pipes?)
2) R300 on 0.13 (8 pipes?)
3) R250 (Radeon 8500 "ultra"...or possibly a MAXX variant.)
 