RV380 and R420 info @ xbit

Back in the "Good Old Days" (TM), Nvidia managed to "get it right" by always moving for the smallest manufacturing process and most expensive and exotic RAM : many people (myself included) doubted the introduction of DDR for the GF1, then the move to 0.18 (IIRC) for the GF2... It was an engineering gamble that paid of pretty well. I'm starting to believe that what was an engineering decision back then may have turned into some kind of mantra for the company, ie the management at Nvidia may very well believe the success of the company is tied to "using the smallest process and the best RAM money can buy".
 
Guys, I think we've missed a major point: nVidia diverted resources from NV30 to NV2A. Remember, anyone? Anyhow, due to the lack of resources they must have cut corners to keep costs down. The low yields may just be an extension of that. Just some pondering on my part.
 
Don't you guys remember the "overkill" argument that David Kirk repeated about the 256-bit bus?
If there had been no competition (at least at the high end), the FX 5800 "normal" at 400/400 could (IMO) have ended up as the "Ultra".
Then an NV30 at 350/350 core/mem would have been the vanilla card.
These cards would have been much faster than the NV25 line.

So when I look at it from that view, I take it that Nvidia was stunned by the R300 and later the R350.
Today, as I see it, ATI is the market leader and has been for nearly a year.
When the NV35 ships in volume and at a competitive price, then I'll see them as equal.
 
DemoCoder said:
I think the loss is due to poor design. Even if the NV3x had great yields on .13, it wouldn't change the fact that the architecture has problems.

The real "gamble" NVidia made was their multi-precision architecture, support for the old register combiner stuff, and stencil acceleration, while not really doing anything to their rasterization and AA. Nvidia bet that the multi-precision design would yield real benefits, but their implementation is too constrained.

I think the problem with the NV30 is that, contrary to what I said a long time ago, it is not a completely new architecture, but an evolution of the NV2x. ATI had a totally new team work on the R300, so they threw out a lot of the legacy stuff that didn't work on the R200 and gambled that a clean, simple new implementation would be better (e.g. booted the integer pipelines).

Nvidia's design is overly complex, and they didn't get it right this generation. Reminds me of AMD's early attempts with the K5/K6. Perhaps they will get it right with the NV40 and the architecture will start to get legs.

I would agree with this--it's pretty much how I've seen the issue myself. I disagree on the nV3x architecture getting legs, however. I've always thought nVidia should dump most of the nV3x design and go for something clean and new instead. I think they retained just enough of their older architecture to create real problems in bolting on their newer concepts, and, as you say, this has resulted in an overly complex design that generates poor manufacturing yields, along with an inefficient architecture that performs relatively poorly. In other words, nV3x is architecturally a mess...;)

The real question is "Can they do better?", I think. If you look at the inordinate amount of time and money they invested in PCB design and in bolting a giant fan & cooling system onto the nv30U simply to clock it up to 500MHz, and step back and see that this was all done for what turned out to be a dead-end product they couldn't market in quantity and were not prepared to back for reliability, it raises some interesting questions along those lines. Especially when you consider that as of last August nVidia saw what nv30 would be competing with in the R300, it really begs the question.

Looking at the fact that nv35U simply lowers the nv30U clock by 10%, ostensibly to reduce power & thermal requirements and to increase yields, and the fact that yields are still not on par for nv35, it doesn't look good.

Then there's the knee-jerk "blame it on TSMC, we're going to IBM" phase the company publicized. Now we see, at the very least, a steady easing of the company's initial rhetoric blaming TSMC, and it is apparent that nVidia will continue to use TSMC for the bulk of its manufacturing, while IBM's future role seems to have a kind of "probationary" status, to the degree that nVidia isn't even naming which of its GPUs has in fact taped out at IBM. It just looks, at least circumstantially, like nVidia is actually coming to realize that many of its nV3x problems are design related instead of fab related.

I would have thought nVidia would have realized this when the company decided FX-FLOW was required to pump up the nv30U to a point where it could compete with the R300. Maybe the company did, but their actions since last August relative to nV3x production do not indicate that anyone in the company had a clear idea of the design problems fundamental to the architecture. At least to me...;)

In their place, what I probably would have done when R300 shipped last year and I saw what I would be competing with is quietly pull nV3x and go back to the drawing board. I can't see how this would have cost them more than going ahead with nv30 in terms of production and advertising, thermal R&D, and so on, only to turn around a couple of months later and declare it a failure. Had they simply scrapped it prior to attempting to ship it, there never would have been a need to admit to such an obvious failure, and the company would have been no worse off on the product front than they were anyway (since, because nv30 didn't fly, they were left with their GF4 products in the main--just where they'd have been if they'd scrapped nv30 last fall and started over).

Maybe if they strip out the integer pipe and some other things, that would help--but if the nV3x architecture is fundamentally based on the nV2x, they may not be able to do that. If the only thing they are using IBM for is a third attempt at nV3x, though, instead of to manufacture a new design, they may have no better experience than they did with TSMC. I'd like to see them make a clean break with "GF-FX" in every way and put it behind them. But maybe they can't, is what I'm wondering, because they have nothing else to bring to the party...

It's a crap shoot, really. Same with TBR and other technologies. Engineers are trying out new ideas; sometimes their timing is right and things work, and sometimes the timing is bad and it doesn't work. You bet that an architecture will yield savings and improvements, but either there are unintended consequences, or it takes so long to come out because of the increased complexity that, by the time it is delivered, the originally predicted efficiency savings are no longer relevant.

This is why these companies, as they get bigger, sometimes err towards conservatism and incrementalism, because big, bold architectural changes are unpredictable.

I agree with this and often said last year that it would be inadvisable to assume a performance delta between nv30 and R300 that many were assuming, because they were predicting nVidia's future performance based on its past record. When you go to "new" architectures, the past gets tossed out of the window and the counters are reset...more or less...
 
WaltC said:
Maybe if they strip out the integer pipe and some other things, that would help--but if the nV3x architecture is fundamentally based on the nV2x, they may not be able to do that. If the only thing they are using IBM for is a third attempt at nV3x, though, instead of to manufacture a new design, they may have no better experience than they did with TSMC. I'd like to see them make a clean break with "GF-FX" in every way and put it behind them. But maybe they can't, is what I'm wondering, because they have nothing else to bring to the party...

1. The NV35/NV36 no longer support FX12. It is automatically treated as "FP16 without exponent", making it maybe 1% faster than normal FP16.
2. nVidia is manufacturing the NV40 at IBM
3. I actually fear that nVidia's limited experience with IBM might limit their core clock speeds, giving them little advantage from using IBM for their first product there, but letting them reap the rewards more with the NV45, for example.

Also, nVidia guesstimates that for now, IBM will be reserved for the real high end, thus keeping the vast majority of their production at TSMC, but that in a few years they expect a 50/50 split (medium/high-end at IBM & low-end at TSMC, maybe?).


Uttar
 
Uttar said:
1. The NV35/NV36 no longer support FX12. It is automatically treated as "FP16 without exponent", making it maybe 1% faster than normal FP16.
You can't treat FX12 as "FP16 without exponent" because FP16 has only 1 sign bit + 10 mantissa bits = 11 bits. So the exponent is used in some way when you need to emulate FX12.
 
The FP16 format (like FP32 and FP64) doesn't store the most significant bit of the mantissa, as this bit is known to always be 1 for normalized numbers. This implicit bit gives an additional bit of precision on top of the 10 bits explicitly stored and the sign bit, giving you the full 12 bits needed to emulate FX12.
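
To make that concrete, here's a quick sketch (my assumption: FX12 is the signed 1.10 fixed-point format from the register combiners, i.e. the values k/1024 for k in [-2048, 2047]) checking in Python that every such value survives a round-trip through IEEE half precision:

[code]
import numpy as np

# Assumption: FX12 covers the values k/1024 for k in [-2048, 2047] (signed 1.10 fixed point).
# FP16 stores 1 sign + 5 exponent + 10 mantissa bits; the implicit leading 1 supplies the
# extra bit of precision, so every FX12 value should convert to FP16 and back unchanged.
exact = all(float(np.float16(k / 1024.0)) == k / 1024.0
            for k in range(-2048, 2048))
print("every FX12 value representable exactly in FP16:", exact)   # prints True
[/code]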
 
Well, it seems the R420 rumors are true as sources have been getting beta hardware leaks around the circles.

Of particular interest is the new silk-screen logo on the chip. It's quite an interesting angle for ATI to be taking... possibly trying to appeal to an older, hipper crowd.

r420.txt
 
arjan de lumens said:
The FP16 format (like FP32 and FP64) doesn't store the most significant bit of the mantissa, as this bit is known to always be 1 for normalized numbers. This implicit bit gives an additional bit of precision on top of the 10 bits explicitly stored and the sign bit, giving you the full 12 bits needed to emulate FX12.
So, precision-wise FP16 is equivalent to FX12, but to emulate FX12 with FP16 you will still need this additional implied 12th bit, for example by distinguishing between zero and non-zero values of the mantissa.
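
Just to show where the exponent comes in, here's a little illustrative helper (nothing from any driver, purely a sketch) that pulls apart the FP16 bit fields of a few FX12-representable values; the stored mantissa alone is only 10 bits, and the exponent is what places the implied leading 1:

[code]
import numpy as np

def fp16_fields(x):
    """Split a value into IEEE half-precision sign / biased exponent / stored mantissa."""
    bits = int(np.float16(x).view(np.uint16))
    return (bits >> 15) & 0x1, (bits >> 10) & 0x1F, bits & 0x3FF

# A few FX12-representable values (assuming the k/1024 interpretation above):
# the exponent field varies even though all of these fit in 12 bits of fixed point.
for v in (1.0 / 1024, 0.5, 1.5, 2047.0 / 1024):
    print(v, fp16_fields(v))
[/code]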
 
McElvis said:
That's a fake.

Gee, and you figured this out all by yourself did ya'? :)

I guess "sense of humor" for obvious parody/joke is something some people didn't wake up with this morning.
 
McElvis said:
Sharkfood said:
Well, it seems the R420 rumors are true as sources have been getting beta hardware leaks around the circles.

Of particular interest is the new silk-screen logo on the chip. It's quite an interesting angle for ATI to be taking... possibly trying to appeal to an older, hipper crowd.

r420.txt

That's a fake.

Compare the picture with this link:
http://www.hardocp.com/image.html?image=MTA1NDY4MTYyN2QwQkg1elh2NVZfMV8xMV9sLmpwZw==

Mr. King of RNR, you're a genius. :rolleyes: :p
 
It appears ATi isn't "smoking something hallucinogenic," but rather they're going to smoke nVidia with something hallucinogenic. ;)
 
Uttar said:
1. The NV35/NV36 no longer support FX12. It is automatically treated as "FP16 without exponent", making it maybe 1% faster than normal FP16.

Well, if that's the case, it may be something else related to the nV2x architecture that they can't toss out which is causing the nv35 yield problems, I would guess.

2. nVidia is manufacturing the NV40 at IBM

Rumor, it would seem, at this point. I would consider it likely if nVidia is still of the opinion that "the FAB makes the difference." Recent statements they've made indicate they may be pulling back from that mantra. In any event, I've not seen any official mention from nVidia that IBM is fabbing the "nv40". We know they are fabbing something for nVidia, however.

3. I actually fear that nVidia's limited experience with IBM might limit their core clock speeds, giving them little advantage from using IBM for their first product there, but letting them reap the rewards more with the NV45, for example.

Yes, and I think it remains to be seen whether any sort of ongoing relationship with IBM will work for nVidia. They may just be engaging in this to "light a fire" under TSMC in their estimation.

Also, nVidia guesstimates that for now, IBM will be reserved for the real high end, thus keeping the vast majority of their production at TSMC, but that in a few years they expect a 50/50 split (medium/high-end at IBM & low-end at TSMC, maybe?).

Uttar

I guess time will tell...
 
Sharkfood said:
Well, it seems the R420 rumors are true as sources have been getting beta hardware leaks around the circles.

Of particular interest is the new silk-screen logo on the chip. It's quite an interesting angle for ATI to be taking... possibly trying to appeal to an older, hipper crowd.

....

Cool..... Is that a starburst or a palm tree? Actually what I'd like to see is the silhouette of somebody at ATi...a sort of "guess the profile" game, maybe?

Heh... :D
 
WaltC said:
Cool..... Is that a starburst or a palm tree?
On the off chance that a couple of you weren't kidding, that's a picture of a marijuana leaf. It seems the artist took the "ATI must be smoking something hallucinogenic" quote a bit seriously on this one...
 
Chalnoth said:
On the off chance that a couple of you weren't kidding, that's a picture of a marijuana leaf. It seems the artist took the "ATI must be smoking something hallucinogenic" quote a bit seriously on this one...

Russ Schultz said:
Ummm, hopefully you're kidding.

I thought Sharkfood's remarks would have made things clear. Just to make sure I wasn't taken seriously I added in the "profile" thing. But oh, well....;) I guess I'll have to be more careful and sign everything "just kidding" if I'm kidding around...;)
 