GeForce4 Next Gen?

Clayberry

Newcomer
Is the GeForce4 next gen, or is it just a rehash of the GeForce family? I believe it is just a rehash: nothing new, just refined and polished (well, the new shader, that's new). I like the numbers it puts out, but I would like to get something that breaks new ground, like Bitboys or the next PVR chipsets. I hope that nVidia does something like this soon. Please don't kill me, but that is just my opinion.
 
If NV works like it did before, the next chip will have something new (NV30).
NV25 (like NV15 and the TNT2) is not really something new, so I would not call it a new generation.

 
If you have a Radeon 8500/GeForce3 Ti series card, you would be crazy to upgrade unless you have endless pockets. I'm impressed by the Ti4600 and would buy one if it weren't so expensive, so I will be buying a 128 MB Radeon 8500 to replace my 64 MB 8500, as they are the same price in Canada. I was able to sell my 64 MB 8500 for a great price :smile:
I like the TruForm technology and the Pixel Shader 1.4 support. The ATI PP demonstration shows Blizzard supporting TruForm; I'm excited to see which of their titles supports it, as Warcraft III is on my TO BUY list.
The thing I'm watching closely is the Unreal 2 technology benchmark; that is the game I will be playing a lot, and if the 8500 can deliver 60+ fps with the eye candy on... I'll ride the 8500 till Dec. and evaluate what's the next best buy.
 
Well, I was expecting this sort of reaction to the GF4 myself. Although I'm impressed by the benchmarks, I am not very impressed by this card. After doing a lot of digging around, I discovered the card is really only a GF3 with an extra pixel shader. It is obviously running very hot; one only has to look at all the cooling the card requires to keep it from burning up. This would also suggest that it is an overclocked GF3. The technology on the thing is basically the same as the GF3's. Why did nVidia create this card? One would think all they are really doing is estranging clients who just spent a fortune on the GF Ti series. My guess is that they were beginning to fear for the market share that ATi is taking with the Radeon 8500. As it is, the GF4 4600 is too expensive a price to pay for a GF3 with one more pixel shader, overclocked to hell. The card is still only DX 8.0. Personally, I think the product is overhyped... but that is par for the course. I wouldn't upgrade to the thing if I owned a Radeon 8500 or a GF3 Ti 500; that would be silly, IMHO.

On that note: IMHO, THE GF4 IS NOT A NEW GENERATION. No new tech, no new generation. A brand-naming scam.


Sabastian

 
Sabastian:
Going by that logic, a P4 is only an overclocked P3... Anything that is more complex/bigger (read: more features) and at a higher clock rate will most probably require more aggressive cooling, unless the fab process is radically improved upon.

And "new tech" is such an inaccurate term. But to include better/faster AA, fixed S3TC (still a debate whether the original implementation was bugged or just suboptimal), more optimized everything, an extra vertex shader, new multi-monitor support, etc. etc., while only adding 3 million transistors, says they've done a lot. What exactly is defined as new tech beats me, but take a look at the GF1 vs. the GF2. That generation jump actually has fewer architectural differences, IIRC.
 
Well, GF4 could be next-generation.

The real question is whether developers will target it soon. I guess not; see this:

Matthias Worch
Sr. Level Designer
Legend Entertainment:

How about getting ME a GeForce4 (or 3, for that matter) first? Come on, I need a GeForce4 ASAP so that I can add millions of more polygons and shader effects! Everybody else out there already's got one!!! :smile:

http://www.ina-community.com/forums/showthread.php?s=&threadid=160751
 
I think I will buy the OEM 8500, but the scores from the GF4 4600 are not bad. How overclockable is the 8500?
 
On 2002-02-07 22:13, Bogotron wrote:
Sabastian:
Going by that logic, a P4 is only an overclocked P3... Anything that is more complex/bigger (read: more features) and at a higher clock rate will most probably require more aggressive cooling, unless the fab process is radically improved upon.

And "new tech" is such an inaccurate term. But to include better/faster AA, fixed S3TC (still a debate whether the original implementation was bugged or just suboptimal), more optimized everything, an extra vertex shader, new multi-monitor support, etc. etc., while only adding 3 million transistors, says they've done a lot. What exactly is defined as new tech beats me, but take a look at the GF1 vs. the GF2. That generation jump actually has fewer architectural differences, IIRC.

No, there is a big difference. The first-generation GeForce was a 0.22 micron core, whereas the GeForce2 was 0.18 micron. The GeForce3 and 4 are both on the same 0.15 micron process, adding more credence to the idea that they really shouldn't be separate generations. The transistor count difference between the GF1 and GF2 was 8%; the transistor count difference between the GF3 and GF4 is only 5%. Also, the clock rate jump between the GF1 and GF2 was more substantial, 40%, as opposed to the 20% clock speed differential between the GF3 and GF4. The Shading Rasterizer was new technology introduced with the GF2... So your claim is not fair. There is absolutely no new hardware in the architecture with the exception of the second pixel shader. IMHO it is an overclocked GF3. You can call it whatever you like, but a "new generation"? I wouldn't say that... unless I was trying to sell them.

BTW, would you really drop your GF Ti500 or Radeon 8500 to buy a GF "4"? I wouldn't.
 
Going by that logic, a P4 is only an overclocked P3... Anything that is more complex/bigger (read: more features) and at a higher clock rate will most probably require more aggressive cooling, unless the fab process is radically improved upon.

The P3 -> P4 transition is a completely different thing from NVidia's progress with the GeForce line. A much better comparison would be PPro -> P2 -> P3.

The P4 is a completely, totally different architecture. Its pipeline is 20 stages deep, and the caching is also handled very differently. The original P3 was 10 stages deep, I think.

P3 -> P4 is like K6-2 -> Athlon.

Or, in video card terms, TNT2 -> GeForce.
 
On 2002-02-08 03:46, Clayberry wrote:
I think I will buy the OEM 8500, but the scores from the GF4 4600 are not bad. How overclockable is the 8500?

You should get 300/600 with a normal 8500, but the OEMs are barely overclockable; most don't reach 275/550.
 
No, there is a big difference. The first-generation GeForce was a 0.22 micron core, whereas the GeForce2 was 0.18 micron. The GeForce3 and 4 are both on the same 0.15 micron process, adding more credence to the idea that they really shouldn't be separate generations.

That's a pretty flimsy argument for not being "impressed." The chip design itself matters more than the fab process.

The transistor count difference between the GF1 and GF2 was 8%; the transistor count difference between the GF3 and GF4 is only 5%.

That's also another pretty flimsy argument. The percentage increase usually decreases as the transistor count increases, even when the absolute increase is equal to a previous generation's. A chip going from 50M transistors to 53M is a 6% increase, a 3 million difference. A chip going from 60M to 63M is only a 5% difference, even when increasing by the same 3M or slightly more. Why exactly do transistor counts have to grow by ever-larger percentages in order to be "impressive"?

If China's economy grew 6% last year and the U.S. economy grew 3% (for argument's sake), does that mean you would not be impressed by U.S. growth as opposed to China's? Well, 3% of $10 trillion is a helluva lot more than 6% of $900 billion.
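To put rough numbers on that (a minimal sketch; the transistor and GDP figures are the hypothetical ones from this post, not real chip specs or economic data):

[code]
# Percentage vs. absolute increase, using the hypothetical figures above.

def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# The same +3M transistors yields a shrinking percentage as the base grows:
print(pct_increase(50e6, 53e6))  # 6.0
print(pct_increase(60e6, 63e6))  # 5.0

# GDP analogy: a smaller percentage of a much larger base wins in absolute terms.
us_growth = 0.03 * 10e12     # 3% of $10 trillion = $300 billion
china_growth = 0.06 * 900e9  # 6% of $900 billion = $54 billion
print(us_growth > china_growth)  # True
[/code]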

Also, the clock rate jump between the GF1 and GF2 was more substantial, 40%, as opposed to the 20% clock speed differential between the GF3 and GF4.

Just like above, low base numbers give higher percentages. Why don't you look at actual performance increases instead of MHz increases? The GF4 Ti4600 running 2x FSAA as fast as a GF3 Ti500 runs without FSAA is pretty damned impressive to me. You should be more concerned with how this is achieved through the GF4's efficiencies, like LMA II, than with raw MHz.

The Shading Rasterizer was new technology introduced with the GF2... So your claim is not fair. There is absolutely no new hardware in the architecture with the exception of the second pixel shader. IMHO it is an overclocked GF3. You can call it whatever you like, but a "new generation"? I wouldn't say that... unless I was trying to sell them.

Exactly what kind of "revolutionary" features are you looking for?
 
The percentage increase usually decreases as the transistor count increases, even when the absolute increase is equal to a previous generation's.

That's not what NVidia has been saying for the past year, first with the "Moore's law squared!" claim from Mr. Perez himself, and more recently the "exceeding Moore's law!" boasting.

For those not keeping track, Moore's law would dictate a 100% transistor count increase per year.
 
For those not keeping track, Moore's law would dictate a 100% transistor count increase per year.

The original paper quoted a 100% increase per year, but Moore himself revised this figure to a 100% increase per 1.5 years :smile: (and this has held true...)

ciao,
Marco
 
That's not what NVidia has been saying for the past year, first with the "Moore's law squared!" claim from Mr. Perez himself, and more recently the "exceeding Moore's law!" boasting.

Oh, that quote changes periodically. According to Dan Vivoli at the launch, they are now up to Moore's Law cubed!
 
You know, the Moore's Law Squared and Cubed stuff depends on what value is being squared or cubed.

For instance, if you squared or cubed the 18 months (or in this case 1.5 years), you'd get the power doubling every 2.25 and every 3.375 years!

Hehe. So, is that more like what is happening? :smile:
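To spell out the arithmetic behind the joke (a minimal sketch in Python, assuming Moore's revised 1.5-year doubling period quoted above):

[code]
# Moore's law: transistor count doubles every PERIOD years.
PERIOD = 1.5  # Moore's revised doubling period, in years

def relative_transistor_count(years, period=PERIOD):
    """Transistor count after `years`, relative to the starting count."""
    return 2 ** (years / period)

print(relative_transistor_count(1.5))  # 2.0 -- doubled after one period
print(relative_transistor_count(3.0))  # 4.0 -- quadrupled after two

# If "squared" and "cubed" were applied to the period itself, progress
# would actually slow down, which is the joke:
print(PERIOD ** 2)  # 2.25  years per doubling
print(PERIOD ** 3)  # 3.375 years per doubling
[/code]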
 
Exposed made most of my arguments for me. That's what you get for being slow on a message board. :smile:

Sabastian:
The Shading Rasterizer was new technology introduced with the GF2...

IIRC, my old GF DDR was able to run the NSR demos. It wasn't as efficient as the GF2 due to the texture unit/pipe layout, but it ran them just fine.

And like Exposed says, the manufacturing process has very little impact on what generation a chip is; the layout and the algorithms in use are the interesting stuff. They are what determines how advanced an architecture is, not which fabbing process is used.
 
The original paper quoted a 100% increase per year, but Moore himself revised this figure to a 100% increase per 1.5 years (and this has held true...)

I wonder how much Intel paid for that revision? :smile:
 
The GF4 is a typical NV spring refresh product, like usual. It has a few nifty features (nView, a new FSAA mode) and speed improvements. It's definitely not revolutionary, and if you already own a recent-generation card (8500/GF3 Ti), there's really not enough here to make it a must-have. Cost is secondary, since the Ti4400 lists for the same price as the 128MB 8500 but still outperforms it.

NV could be holding back features again that will be exposed in future drivers, but that's nothing to really count on right now.

The GF4MX cards are pretty strange at their price points. The performance doesn't warrant the slightly high prices. When the prices fall, the DDR cards will be fine and price-competitive in the budget gaming segment.
 