ATI - Entire family of products in first half 2004

Chalnoth said:
And this also isn't what ATI claims to want to do. They claim to release an entire line in the first half. That's six months of leeway.

(Um, the point Chalnoth...is how is ATI "copying" anything based on nVidia's past?)
 
Chalnoth, I think you're misinterpreting that statement.
It's not because they release a full line that it's all based on the R420 technology. The low-end will be based on the R300 for sure. That's not the same thing as what NVIDIA did with the NV34, which had the exact same features as the NV30.


Uttar
 
Yes Joe - and to take that a step further, for the past few releases ATI has offered two chips with the introduction of a new generation: a new architecture plus a lower-end chip based on the previous architecture (the 8500 was introduced with the 7500, and the 9700 with the 9000), which is quite unlike NVIDIA's model so far.
 
;) 8-pipeline ;), 256-bit cards have already been at the magic $200 price point for several weeks now. Check newegg.com for 5900 non-ultra card prices. 400MHz cores + 128MB of 850MHz memory ~ $200. They've been priced there since mid-November.

-The Rockster
 
Joe DeFuria said:
(Um, the point Chalnoth...is how is ATI "copying" anything based on nVidia's past?)
Joe, you are the one that brought up copying, not me.

And the comment was in response to a statement that no company had released a complete family of products.
 
Rockster said:
;) 8 Pipeline ;) , 256-bit cards have already been at the magic $200 price point for several weeks now.

I know, I got myself one of those $200 Radeon 9800 non-pros from CircuitCity. ;)

We're talking about introductory MSRPs though.
 
nelg said:
Uttar said:
The low-end will be based on the R300 for sure.

:D That is music to my ears.

Bah, wait 3-4 more months and you'll get the NV42/NV43, low-end parts fully replacing the NV34, with Shader 3.0 support and much more acceptable performance than the NV34.
0.13u low-k at TSMC for one, I suspect. Maybe 0.11u SOI at IBM for the other, or maybe TSMC is going to support 0.11u too even though they're rushing 0.09u - a strange choice that'd be, though!


*entering speculation mode*
NV35 = 130M
NV36 = slightly more than 80M

That means 50M transistors saved - on what, though?
Transistor costs in the NV3x AFAIK:
Full FP32 unit: 3M/component ( -> 12M for a Vec4 unit -> 48M for the four units on the NV30 )
VS Unit: 9M ( 3 in both the NV35 and NV36 ).

So, 24M transistors are saved by halving the number of full PS units; BUT there are also the mini-units! I've got no idea how much they cost, but I'd suspect that they, plus the ROPs and so on, cost at least 8M per "pipeline".

So we've got 40M saved out of the 50M required. The rest most likely comes from cache size reductions, mostly in the LMA. I've confirmed through a few small tests that the number of registers per quad in the NV36 is the same as in the NV35.

Roughly, the NV34 is the NV31 (which is nearly identical to the NV36, except that a lot more stuff is bugged and it's FX12):
80M - ( 2x10M for VS ) = 60M
60M - 5M for LMA = 55M
55M - 10M for smaller caches (fewer registers, ...) and some of the LMA caches simply gone = 45M
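The NV34 estimate above can be restated as a quick running subtraction (a sketch using the thread's own speculative figures, in millions of transistors):

```python
# All figures are the speculative per-block costs quoted in the post above.
nv34 = 80        # start from the NV31's ~80M transistors
nv34 -= 2 * 10   # two vertex shader units removed
nv34 -= 5        # trimmed LMA
nv34 -= 10       # smaller caches (fewer registers), some LMA caches gone
print(nv34)      # 45
```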

To get to the 130M transistor count of the NV35, we do:
3x9 = 27M (VS)
3x4x4 = 48M (PS full)
8x4 = 32M (rest of pixel pipeline)
10+5 = 15M ( LMA )

27+48+32+15 = 122M
Add 8M for 2D, Triangle Setup, Rasterization, and so on to get to 130M.
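For clarity, the NV35 tally above works out like this (a sketch; all per-unit costs are the speculative figures given earlier, in millions of transistors):

```python
# Speculative NV35 transistor budget, itemized per the post above.
vs = 3 * 9            # three vertex shader units at ~9M each        -> 27
ps_full = 3 * 4 * 4   # 3M/component x 4 components x 4 Vec4 units   -> 48
pipes = 8 * 4         # ROPs, mini-units, etc., ~4M per pipeline     -> 32
lma = 10 + 5          # LMA caches                                   -> 15
misc = 8              # 2D, triangle setup, rasterization, ...
nv35 = vs + ps_full + pipes + lma + misc
print(nv35)           # 130
```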

Remember this is all speculation guys.

Thus, let us imagine a similar scenario for the NV40/NV41/NV42!
Let's imagine the NV40 has 5 VS units (no idea really) and 8 pipelines.
Each VS unit would cost 10M.
LMA would cost 20M.
Misc. would cost 10M.

This means 50+20+10 = 80M for non-pixel stuff, and 95M for pixel pipelines (this includes EVERYTHING pixel-related: ROPs, FPUs, pseudo-TMUs, ...)
So, a 3VS & 4 pipeline NV41 would be:
30+15+10+50 = 105M

Hmm, a mid-range part at 100M? Sounds logical to me, considering production will start in late H1 or early H2 2004 on 0.11u!

Then, the NV42/NV43 would be 1 VS & 4 pipelines, with roughly 25% of each pipeline's units/cache removed, and fewer transistors for LMA/misc/..., thus:
10+10+10+35 = 65M
A reasonable target IMO, considering it's done on 0.13u and will maybe even move to 0.11u after some time - although considering those transistor figures, it's possible the NV34 will survive afterwards, just like the GF4 MX did (this could very well be the end of the GF4 MX, though, thank god! Or at least, only current TNT2 M64 buyers will still consider a GF4 MX (420, 64-bit), eh).
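The NV40/NV41/NV42 guesses above all follow the same scheme, so they can be laid out with one shared formula (everything here is hypothetical, including the NV40 total implied by the 80M + 95M split):

```python
# Speculative NV4x transistor budgets, in millions, per the post above.
def budget(vs_units, vs_cost, lma, misc, pixel):
    """Sum one chip's guessed transistor budget (millions)."""
    return vs_units * vs_cost + lma + misc + pixel

nv40 = budget(5, 10, 20, 10, 95)  # 175M implied by 80M non-pixel + 95M pixel
nv41 = budget(3, 10, 15, 10, 50)  # 105M, as in the 30+15+10+50 tally
nv42 = budget(1, 10, 10, 10, 35)  # 65M, as in the 10+10+10+35 tally
print(nv40, nv41, nv42)           # 175 105 65
```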

However, considering those transistor figures, I'd say the NV42/NV43 will be MUCH more viable DX9 solutions than the NV34 ever will be. Unless NVIDIA screws up and does a 2x2/4x1, meh, then it would suck, yes....

Assuming the per-transistor cost of performance increased by 20% due to new features, but balances out to a net 10% loss thanks to increased efficiency compared to the NV3x... and factoring in a nearly 20% clock speed disadvantage compared to the NV36, we could say it's about 25% slower PS-wise.

Not bad at all, for a $99 part!
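One possible way to turn that estimate into numbers (this interpretation is mine, not the poster's: a ~10% net loss in per-transistor PS efficiency combined with a ~20% clock deficit versus the NV36):

```python
# Hypothetical reading of the estimate: multiply the two penalties together.
efficiency_factor = 0.90   # ~10% net per-transistor efficiency loss
clock_factor = 0.80        # ~20% lower clock than the NV36
relative_perf = efficiency_factor * clock_factor
print(f"about {1 - relative_perf:.0%} slower")  # about 28% slower
```

This comes out at roughly 28%, in the same ballpark as the quoted "about 25%".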

*stops useless speculation*

Uttar
 
Seriously, transistor counting for various parts really is the most futile guesswork. Most of the time the basis you are starting from isn't correct in the first place.
 
Joe DeFuria said:
I know, I got myself one of those $200 Radeon 9800 non-pros from CircuitCity. ;)

We're talking about introductory MSRPs though.

I know we were talking MSRP. Just making a point about how much costs have come down lately - hence, more evidence to support future mainstream performance specs. Glad to see you got that good deal! Although I must say, it is certainly more difficult to shop for 9800 non-pros than 5900 non-ultras, especially online, because many 9800s are configured with 4 pipelines and/or 128-bit memory interfaces. Do you suppose the day will come when the IHVs can simplify all this for consumers?
 
DaveBaumann said:
Seriously, transistor counting for various parts really is the most futile guesswork. Most of the time the basis that your are starting from isn't correct in the first place.

Yep, you're right there :(
I just wanted to do some speculation, I admit - it's been a while since I did...

BTW, the transistor counts I assumed for the NV41 and NV42 are consistent with several other ways of arriving at them:
NV41/NV36 = +30% = NV31/NV25
NV42/NV34 = +45% = NV34/NV18

Those are really guesses, but I think 100M and 60M transistor counts for the NV41 and NV42 are quite likely indeed.
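Those scaling ratios can be cross-checked against the earlier budgets (a sketch; the NV36 and NV34 counts are the thread's own rough figures, in millions):

```python
# Cross-check: apply the generational scaling ratios to the baseline chips.
nv36, nv34 = 80, 45      # rough transistor counts from earlier in the thread
nv41_est = nv36 * 1.30   # "NV41/NV36 = +30%" -> ~104M vs. the 105M budget
nv42_est = nv34 * 1.45   # "NV42/NV34 = +45%" -> ~65M vs. the 65M budget
print(round(nv41_est), round(nv42_est))  # 104 65
```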
BTW, the main reason I wrote that huge but useless thing is that I think the RV381 is clearly, once again, lagging behind NVIDIA's low-end. Unless ATI gets more aggressive in the low-end, NVIDIA is certain to do pretty darn well next year.


Uttar
 
woooooooooooooow I can't believe so many people didn't know that 4:20 (April 20th) is the international pot smoking day.... hell, even my mom knows that.... I wonder if maybe I should get new friends :?
 
Hellbinder said:
First of all, Nvidia does not have a history of releasing a full family of products at one time. It virtually *never* happens, if ever. Their releases and introductions have been staggered.
Typically the initial release is high-end only. The refresh has been shipping with an entire family since the GeForce2 days.
 
Typically the initial release is high-end only. The refresh has been shipping with an entire family since the GeForce2 days.

I think I'd have to disagree there.

The refresh of the GF3 was the GF3Ti500 and Ti200. The Ti200 was not a low-end card, but rather was aimed at the mid-range of the market. The lower end was covered by updated GF2 technology including MXs with no shaders.

The next refresh after the GF3Ti cards was GF4. These were all mid to high end with the exception of the GF4MX which was still based on GF2 tech with no shaders.

In fact, the first time NVidia has actually released a whole range of new technology from low to high end is with the GFFX range. Kudos to them for this, although it is a pity that the 5200 and 5600 were such weak offerings in terms of performance.

It is disappointing that ATI haven't really released a low-end DX9 card as yet. I know the 9600SE is such a card in theory, but it really irritates me to see a much more capable chip bastardised with such a weak memory interface and hobbled the same way as the 9500 was.

Incidentally, was anyone else as surprised as I was with Anand's latest budget card round-up?

http://www.anandtech.com/video/showdoc.html?i=1933

The 9600SE beat the 5200 Ultra in half of the tests despite having less than a third of the memory bandwidth. I know the 5200 Ultra isn't the best chip in the world but I'd have thought it might do a bit better against such a bandwidth-starved competitor.
 
I was surprised by this round-up... Where are the FX 5200 and the 64-bit FX 5200? You can find these boards everywhere. Can you find an FX 5200 Ultra so easily? In Europe the FX 5200 Ultra is not very common... unless we talk about reference boards ;)
 
Tridam said:
Can you find a FX 5200 Ultra so easily ? In Europe FX5200U is not very common... unless we talk about reference boards ;)
Yea... the Wal-Mart up the street sells them for $149 ;)
 
I remember reading (but not where) that ATI is working on HDTV decoders for their cards, with the intent to market them in spring '04. Perhaps an HDTV AIW will be one of their new announcements?
 
Mariner said:
Typically the intial release is high-end only. The refresh has been shipping with an entire family since the GeForce2 days.
I think I'd have to disagree there.

The refresh of the GF3 was the GF3Ti500 and Ti200.
I'm not really sure you can call that a refresh. The GF3 Ti cards were still NV20's. By "refresh" I was talking about the GeForce4, the NV25, and the GeForce4 MX, the NV17.

Oh, and just FYI, if I remember correctly, nVidia actually did release the GeForce2 MX200 and MX400 at about the same time as the GeForce3. If I am remembering this correctly, nVidia has actually released an entire family at the introduction of a new processor (though, of course, the MX200 and MX400 were slightly redone versions of an older core, the original MX).
 