NV30 Update

Bigus Dickus said:
I don't consider the NV25 a "refresh."

I consider the GF256 DDR (was that NV11?), NV16, NV20-Ti, and NV28 to be "refresh" parts, as all but the GF256 DDR fell on approximately 6-month cycles after the "new architecture" was released (GF DDR was more like 4 months).

The GF256DDR a refresh? Now that's a laugh. The GeForce DDR was the exact same chip as the SDR. The NV11 was the GeForce2 MX. There was never an NV16. In fact, the DDR was officially released at the same time, even though availability wasn't there just yet (actually, it was so hard to even get a GeForce SDR for so long without preordering, that I was able to buy a GeForce DDR before an SDR...).

NVIDIA's naming scheme seems consistent: first number denotes DX generation/major core family, second number denotes position in that family. x0 and x5 parts are architecturally different from the previous cores, and subsequent numbers are typically core/mem speed bumped parts (or perhaps AGP 8X added).

Generally, every part with a different number is architecturally different. Quick example:

NV10: 4 single-texture trilinear pipes (.22um)
NV11: 2 dual-texture bilinear pipes (.18um)
NV15: 4 dual-texture bilinear pipes (.18um)
NV17: 2 dual-texture bilinear pipes, aniso + MSAA (.18um)

If the parts just have different clocks, they have the same part number.

NV10, NV15, NV20, NV25, NV30... these are architecturally different, even if only minor changes (like NV20 to NV25).

Now that's kind of funny, as the NV20 made up the GeForce3/Ti cards, while the NV25 was the GeForce4 Ti line.

No matter which way you slice it, the GeForce4 Ti cards were not only supposed to have a 6-month life span (which no new architecture from nVidia has had...), but they also follow essentially the same programming path as the GeForce3 line of cards. Most of the advancements were in performance, with a couple in terms of image quality (namely the Quincunx/4x 9-tap FSAA modes).
 
Joe DeFuria said:
Here's the PDF (from *ahem*, nVidia)

http://videos.dac.com/videos/39th/26/26_2/26_2slides.pdf

See Page 7.

No matter how you look at the data, one thing is clear:

Well, now that I've actually looked at the original PDF, I'll point out something nobody else here has:

Notice the "dual edge clocking" marker on the Gen2->Gen4? That means memory interface to me, though those clocks are still rather...off... In particular, the Gen1 clock seems low for a memory interface (memory clocks of the TNT/TNT2 cards: 120MHz, 150MHz, 175MHz, 183MHz).

In other words, particularly after seeing the actual PDF, I see even less reason to think that this means anything in terms of the core clock of the NV30.
 
Chalnoth said:
Joe DeFuria said:
Here's the PDF (from *ahem*, nVidia)

http://videos.dac.com/videos/39th/26/26_2/26_2slides.pdf

See Page 7.

No matter how you look at the data, one thing is clear:

Well, now that I've actually looked at the original PDF, I'll point out something nobody else here has:

Notice the "dual edge clocking" marker on the Gen2->Gen4? That means memory interface to me, though those clocks are still rather...off... In particular, the Gen1 clock seems low for a memory interface (memory clocks of the TNT/TNT2 cards: 120MHz, 150MHz, 175MHz, 183MHz).

In other words, particularly after seeing the actual PDF, I see even less reason to think that this means anything in terms of the core clock of the NV30.

I read that too at first, but then the speeds are way off. So for me it only indicates that these chips support DDR SDRAM, nothing more.
 
I don't understand the change in the number from 120 million to 100 million transistors. When did that happen? That sounds like a major design change to me.

Now I hope that NV30 supports N-patches; I see that as a very smart and efficient means of increasing model complexity and, subsequently, IQ. Yes, if you turn TRUFORM on for models that are not designed for N-Patches you can get that balloon-looking effect (see the sketch below for why that happens). I don't think RTCW has any problems when TRUFORM is used; it does increase model smoothness and has better lighting. I see TRUFORM as another piece of the puzzle in creating more realistic environments for game play.
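For anyone wondering where the balloon effect comes from: N-Patches (the "curved PN triangles" scheme that TRUFORM implements) build a cubic patch from nothing but each triangle's vertex positions and normals, so the curvature is guessed from the normals rather than modelled by the artist. Here's a rough C++ sketch of one edge control point from the published construction; it's my own illustration (names and numbers are mine, not driver code), just to show how smoothed normals push the surface off the flat triangle:

Code:
// Rough sketch of how an N-Patch (curved PN triangle) derives curvature
// from only the three vertex positions and normals. Illustration only;
// variable names are mine, not from any SDK or driver.
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(float s, Vec3 v)  { return {s * v.x, s * v.y, s * v.z}; }
float dot(Vec3 a, Vec3 b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One of the six cubic Bezier edge control points, e.g. the one near P1 on
// edge P1->P2. The point is pushed off the flat triangle by however much
// the vertex normal N1 disagrees with the edge direction -- which is why a
// low-poly model with smoothed normals "inflates" when tessellated.
Vec3 EdgeControlPoint(Vec3 P1, Vec3 P2, Vec3 N1) {
    float w = dot(P2 - P1, N1);                    // edge projected onto the normal
    return (1.0f / 3.0f) * (2.0f * P1 + P2 - w * N1);
}

int main() {
    Vec3 P1{0.0f, 0.0f, 0.0f}, P2{1.0f, 0.0f, 0.0f};
    Vec3 flatN {0.0f, 1.0f, 0.0f};                 // flat shading normal: stays in plane
    Vec3 smoothN{-0.707f, 0.707f, 0.0f};           // smoothed normal: control point lifts off
    Vec3 a = EdgeControlPoint(P1, P2, flatN);
    Vec3 b = EdgeControlPoint(P1, P2, smoothN);
    std::printf("flat:   %.3f %.3f %.3f\n", a.x, a.y, a.z);
    std::printf("bulged: %.3f %.3f %.3f\n", b.x, b.y, b.z);
}

If the normals were authored with tessellation in mind (as in RTCW's TRUFORM path), that bulge is exactly the smoothing you want; if they weren't, you get balloons.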

I got a kick out of the last statement made by Nvidia. Ahem, from their slide show.


Our biggest problem is "getting it right"

:) , sorry I just had to quote them on their word. You see Nvidia is truthful ;).
 
I'm not sure we should read too much into the 100+ M transistor count statement. My understanding is that the decision to drop the primitive processor was made early in the design. I'm fairly positive the 120M transistor count declaration came well after that decision. My guess is that it will most likely ship with around 120M transistors.
 
Typedef Enum said:
Those shots were mocked for a very simple and obvious reason...they looked ridiculous, and deserved to be mocked. It's that simple.

When a feature ends up making players look like they're constructed from a bunch of balloons, it actually has the effect of making a mockery out of both the game, as well as the hardware...

Surely you can't sit there (though I'm sure you can/will/would) and deny the fact that those screenshots were ridiculous looking.

What a load of bullshit. You took my screen shots out of context; as you can see from the shots, I was trying to show the improvement to the models, and I always stated this: it was an after-the-fact hack by Croteam AND required exclusions on some textures to stop certain of the effects...

I even linked to the article by Croteam themselves, who, while praising TRUFORM, stated the same thing about it being an after-the-fact hack to show off the technology... but you took my screen shots and ran with them all over the net... typical :rolleyes:

If a game is designed around it, like RTCW, the image quality is greatly improved... what a concept...

The improvement to IQ is no different than with FSAA, yet you downplayed it because Nvidia doesn't have it... it is as simple as that.
[image: tru-form_sample.jpg]
 
TRUFORM, or should I say N-Patches, is a big improvement when done right, just like any other feature. It is an API feature which ATI supports and Nvidia has yet to support. I am looking forward to my Radeon 9700 and using TRUFORM, or just plain N-Patches for short. N-Patches can make a scene or model look utterly realistic in a way that would be impossible or unworkable with high-poly models, which put a large overhead on the AGP bus and on slower cards (see the sketch below). This is a feature that we should be promoting.
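That's also why the AGP-overhead argument holds: the application keeps uploading the same low-poly mesh, and the tessellation happens on the chip. From memory, turning it on under DirectX 8 is little more than a render state. The sketch below is a rough, from-memory illustration (EnableNPatches is my own helper name; double-check the exact cap and render-state names against the SDK):

Code:
// Rough sketch of enabling N-Patch tessellation under DirectX 8,
// written from memory of the DX8 interface -- verify names against the SDK.
#include <d3d8.h>

// Ask the driver to subdivide every triangle we draw from here on.
// The mesh we upload stays low-poly; the extra vertices never cross AGP.
bool EnableNPatches(IDirect3DDevice8* device, float segments)
{
    D3DCAPS8 caps;
    if (FAILED(device->GetDeviceCaps(&caps)))
        return false;

    // Only TRUFORM-capable parts (Radeon 8500 and newer) report this cap;
    // the GeForce3/4 of the thread's era did not.
    if (!(caps.DevCaps & D3DDEVCAPS_NPATCHES))
        return false;

    // The tessellation level is a float passed through a DWORD render state.
    device->SetRenderState(D3DRS_PATCHSEGMENTS,
                           *reinterpret_cast<DWORD*>(&segments));
    return true;
}

// Usage (hypothetical): EnableNPatches(device, 4.0f); then issue the same
// DrawIndexedPrimitive() calls as before -- the driver does the rest.
// Setting the level back to 1.0f effectively turns tessellation off.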
 
Chalnoth said:
The GF256DDR a refresh? Now that's a laugh. The GeForce DDR was the exact same chip as the SDR.
BING! And the light of comprehension shines in Chalnoth's head. That's precisely why I call it a refresh... NVIDIA's refresh parts have always been essentially the same core as the previous one. This started with the TNT2 to TNT2 Ultra - same core, but the refresh part was clocked higher. This is all NVIDIA's "refreshes" have ever been. I don't understand how some people have convinced themselves that NVIDIA actually releases a new chip every 6 months.

To reiterate the point: the GF4 isn't a "refresh" part, that was the GF3 Ti and whatever the NV28 will be.
The NV11 was the GeForce2 MX.
I was making a guess there, as indicated by the "?" immediately following it. In any case, the DDR hit the shelves later... but it is really the only blip in NVIDIA's otherwise consistent schedule.
There was never an NV16.
I thought I had read that the GF2 Ultra core was the NV16. Oh well, I was talking about the GF2 Ultra, whatever the name of the core might have been.

Note: I'm leaving out all of the "MX" series of cards in this sequence, since they muddy the water considerably.

Let me try to make this clearer:

TNT2 - refresh: TNT2 Ultra
GF256 - refresh: GF256 DDR (odd one in the series)
GF2 GTS - refresh: GF2 Ultra
GF3 - refresh: GF3 Ti500
GF4 - refresh: NV28

As you can see, ever since the TNT2 the refresh part has been the same core. The NV28 will deviate by adding AGP 8X support most likely (very minor change).

So when you call the GF4 the "6-month refresh," I just have to ask... refresh of what? The GF3 was a year before, not 6 months. If you mean a refresh of the GF3 Ti, then what does that make the Ti? The GF4 isn't a refresh, because (and perhaps that light is still on) its core is actually an improvement over the previous one.
 
Bigus Dickus: I assume that your definition of "refresh" is when only the clock speeds are higher (GF3 --> GF3 Ti, for example), while the GF4 Ti is an improved core with other improvements besides higher clock speeds, which is why you don't define it as a "refresh".

If that's the case, then I do understand your line of thinking and it's logical...
 
GF4 isn't a refresh, it is a new core with mediocre improvements at best. A refresh of the GF4 should be coming, which I don't think will enhance anything over the GF4 except a faster core/mem and AGP 8x. Nvidia wasted time and money on it as far as I am concerned. The NV30 is Nvidia's next generation after the GF3/4 era, and it is looking at around two years to do. Nvidia is falling behind ATI at this stage.
 
noko said:
GF4 isn't a refresh, it is a new core with mediocre improvements at best. A refresh of the GF4 should be coming, which I don't think will enhance anything over the GF4 except a faster core/mem and AGP 8x. Nvidia wasted time and money on it as far as I am concerned. The NV30 is Nvidia's next generation after the GF3/4 era, and it is looking at around two years to do. Nvidia is falling behind ATI at this stage.

NV28 will be NV25 with AGP 8x support and probably higher clock speeds.

nVidia didn't waste any money or time on NV28, as the NV28 will replace the NV25 and the costs to produce it will be much lower, which is the main reason nVidia are getting it out.

nVidia are behind ATI at this stage, but there are reasons for that, the main reason being the fact that they focused on NForce and NV2A in the past.

Everything will return to its usual pace and routine with the NV30.
 
alexsok said:
Bigus Dickus: I assume that your definition of "refresh" is when only the clock speeds are higher (GF3 --> GF3 Ti, for example), while the GF4 Ti is an improved core with other improvements besides higher clock speeds, which is why you don't define it as a "refresh".

If that's the case, then I do understand your line of thinking and it's logical...
Yes, that's my logic.

If NVIDIA hadn't produced a string of clock-bumped cards in between their roughly year-apart new cores, I wouldn't have that opinion.
 
nVidia didn't waste any money or time on NV28, as the NV28 will replace the NV25 and the costs to produce it will be much lower, which is the main reason nVidia are getting it out.

I hope so; still, it will compete against a DX9-capable card, the Radeon 9500. In short, the NV28 may not look to be a good option for its price unless it is a sub-$250 card from the get-go. Something else Nvidia has to overcome.
 
I hope so; still, it will compete against a DX9-capable card, the Radeon 9500. In short, the NV28 may not look to be a good option for its price unless it is a sub-$250 card from the get-go. Something else Nvidia has to overcome.

Think of it like this:

NV28: NV25 with AGP 8x & higher clock speeds. Since the costs to produce it will be lower, the price will also be lower, especially when the R9500 is released. Also, I expect NV28 to be faster in certain situations than the R9500, since it's 4x2 where the R9500 is 4x1, although it's hard to deny the fact that the R9500 is a DX9-capable card...

In my opinion, NV28 is nVidia's mainstream product until a mainstream NV30 solution is brought to the market.
 
alexsok said:
Also, I expect NV28 to be faster in certain situations than the R9500, since it's 4x2 where the R9500 is 4x1, although it's hard to deny the fact that the R9500 is a DX9-capable card...

Supposition.
 
Doomtrooper said:
Let me help you with some History here:

Radeon 1 was released at the same time as the GTS; following that, Nvidia released the GeForce2 Pro, GeForce2 Ultra, then the GeForce3.

Let me help YOU with history. The Radeon was released at the end of summer 2000, around the same time of year the R8500 and R9700 were released in the following years. The GTS was released in spring 2000, to compete with the V5.
 
Well, I could not get a GeForce2 GTS in Canada when the Radeon 1 was released. In fact, I ended up getting a Radeon 1 after a PowerColor GeForce2 MX crapped out; then I RMA'd it and got a GTS a month later, which also crapped out... both with memory failures... so I RMA'd it and got a 64MB Radeon for the same price as my 32MB GTS...

So where I live they were not available. Maybe in the US, but not in Canada... and I had A LOT of suppliers...

A good example is this Radeon 9700 launch... even though ATI is a Canadian-based company, the US market got it first, and most large online shops here are only shipping now, September 10th... while a lot of US consumers have had theirs for weeks.

Kind of sad really that ATI ignores us even though they do business here, but they are out to make money and the US market is much larger than little ol' Canada.
 
I understand the regional variations in availability, but the point stands: GF2 was finished several months prior to Radeon, giving ATI extra time to improve on their design.
 
Bigus Dickus said:
BING! And the light of comprehension shines in Chalnoth's head. That's precisely why I call it a refresh... NVIDIA's refresh parts have always been essentially the same core as the previous one. This started with the TNT2 to TNT2 Ultra - same core, but the refresh part was clocked higher. This is all NVIDIA's "refreshes" have ever been. I don't understand how some people have convinced themselves that NVIDIA actually releases a new chip every 6 months.

The TNT2 Ultra was no more a refresh of the TNT2 than the GeForce4 Ti 4600 was a refresh of the GeForce4 Ti 4400.

Your definition of refresh is obviously quite different from what is normally used, and is thus wrong.

Here's a little bit of history:

1. TNT -> TNT2: die shrink, increased 32-bit performance
2. GeForce -> GeForce2: die shrink, performance optimizations, two bilinear textures per pixel.
3. GeForce3 -> GeForce4: second vertex shader unit, performance/visual quality optimizations, a few more pixel shader instructions.

These are what are normally termed as the refreshes from nVidia. The GeForce3 Ti and GeForce2 Pro/Ultra were only termed as "refreshes" because they came out at about the time people thought refreshes should come out. Still, the GF2 Ultra sort of counts as a refresh because it was done on a slightly different process at TSMC.

What you are describing as "refreshes" would be more accurately described as "re-releases."

And regardless of which way you slice it, the GeForce4 was based on the GeForce3 core, and was not an entirely new architecture.
 
Chalnoth said:
These are what are normally termed as the refreshes from nVidia. The GeForce3 Ti and GeForce2 Pro/Ultra were only termed as "refreshes" because they came out at about the time people thought refreshes should come out. Still, the GF2 Ultra sort of counts as a refresh because it was done on a slightly different process at TSMC.

Oh I see...


Well these... ummm... refresh parts were marketed as NEW cards, including the GeForce2 Pro, and of course the Ultra (another $700 Canadian card)... and then there is the GeForce3 Ti, which was a die shrink and... oh yes, enabled features the first GeForce3 was marketed as having...

Silly us.
 