ATi same mistake as nVidia?

RoOoBo said:
I was surprised last week when I got a new R9000 from Sapphire. While my old GF2 MX 200 was using a small fan (that was full of dust when I removed the card), the R9000, at double the frequency, was using just a somewhat large heatsink (more or less the size of the old PIII heatsink).

My old GF2 MX (the original one, not the 200 or 400) has no fan, just a heatsink.
 
Chalnoth said:
What picture? The FX Ultra is a high-end part, one that won't see much use. The RV350 is a mainstream part, meant to compete against nVidia's MX line. The products are utterly different.

Ok, my bad.

Look at this: the OCS L2 Enhanced Radeon 9700 Pro: http://www.monster-hardware.com/reviews/ocsystemati9700 1.htm

The same still applies, though: you probably can't cool an FX Ultra with a Zalman, but you can with the R9700, even overclocked.

If, when the R9900 is released, some vendors ship solutions with a Zalman, that will certainly raise a few eyebrows :)
 
demonic said:
But what I meant is: if ATi *released* the RV350 with just a heatsink, just imagine the picture it would send to everyone.

In the blue corner we have ATi, with just a heatsink.
In the red corner we have Nvidia, with the dustbuster!!

Now you get what I mean :LOL:

It would surprise me to find that there were still people around who didn't already have that picture firmly engraved in their gray matter...;) I know I certainly do.

With a good design you should be able to take it down a size and have it run on less voltage and cooler than it did on the former process--which means you can then ramp it up in MHz speed and increase the voltage a tad and still come out with an overall thermal profile close to what you had on the larger process at slower MHz speeds and less voltage. IMO. You can foul this up if you abuse the shrink process and tack on too much additional circuitry, though--which is maybe what nVidia did.
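A rough back-of-the-envelope way to see that argument (illustrative numbers only, not any actual chip's figures): dynamic switching power scales roughly with capacitance, the square of voltage, and clock frequency, so a shrink that lowers both capacitance and voltage leaves headroom to raise the clock and still land near the old thermal profile.

P ≈ a * C * V^2 * f    (a = activity factor, C = switched capacitance)

Hypothetical shrink: C drops to ~0.7x, V goes from 1.5 V to 1.3 V, f goes from 300 MHz to 400 MHz:

P_new / P_old ≈ 0.7 * (1.3/1.5)^2 * (400/300) ≈ 0.70

So even at a 33% higher clock, power comes out lower--which is exactly the room for bumping the voltage "a tad" back up.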
 
WaltC said:
With a good design you should be able to take it down a size and have it run on less voltage and cooler than it did on the former process--which means you can then ramp it up in MHz speed and increase the voltage a tad and still come out with an overall thermal profile close to what you had on the larger process at slower MHz speeds and less voltage. IMO. You can foul this up if you abuse the shrink process and tack on too much additional circuitry, though--which is maybe what nVidia did.

Or you can get burned by a process that isn't ready for prime time. While I can't say for sure that NVIDIA's engineers didn't bungle something, it's been "the word on the street" that TSMC's .13u process is still not doing so well.
 
RussSchultz said:
Or you can get burned by a process that isn't ready for prime time. While I can't say for sure that NVIDIA's engineers didn't bungle something, it's been "the word on the street" that TSMC's .13u process is still not doing so well.

No question of that--it's true...however, it was nVidia engineering which decreed a similar chip was "impossible" at .15 microns. So regardless, it's an nVidia engineering problem, IMO.
 
WaltC said:
RussSchultz said:
Or you can get burned by a process that isn't ready for prime time. While I can't say for sure that NVIDIA's engineers didn't bungle something, it's been "the word on the street" that TSMC's .13u process is still not doing so well.

No question of that--it's true...however, it was nVidia engineering which decreed a similar chip was "impossible" at .15 microns. So regardless, it's an nVidia engineering problem, IMO.

No, the CEO said that.
 
RussSchultz said:
No, the CEO said that.

Right--I forgot that not only does the nVidia CEO know nothing about chip design, he apparently doesn't listen to his engineers, either....? :?:
 
Or maybe he was saying stuff to pump up his upcoming product (which is in .13) and denigrating the competitor's product by saying nothing as whizbang could be made in .15.
 
RussSchultz said:
Or maybe he was saying stuff to pump up his upcoming product (which is in .13) and denigrating the competitor's product by saying nothing as whizbang could be made in .15.


*chuckle* Let's end the one-liners, OK, Russ?......;) OK, I'll say it first, "Uncle."

:D
 
No matter how you slice it:

The NV30 "launch" is a f'n MESS.

And nVidia, AS A COMPANY, looks bad because of it. Whether Huang doesn't have a clue, or he's just pretending not to have one for the sake of "hype", it reflects poorly on nVidia.
 
Joe DeFuria said:
No matter how you slice it:

The NV30 "launch" is a f'n MESS.

And nVidia, AS A COMPANY, looks bad because of it. Whether Huang doesn't have a clue, or he's just pretending not to have one for the sake of "hype", it reflects poorly on nVidia.

Oh, yeah--I can't remember anything this bad. nVidia's well on its way to usurping 3dfx's record for "the worst-botched 3D card/chip release in history"....! The company has done nothing but foul this up for months--quite unbelievable. If I didn't think otherwise, I might be tempted to think the remnant of 3dfx within nVidia had Trojan-horsed nVidia, but good....;) More likely, though, it's something much less spectacular--like them tripping over their own hubris. You might also think the R300 rattled them senseless. Who knows.
 
As I said back when the NV30 hype was building, I have hope that the lessons nVidia takes away from this will change their PR and marketing strategies.
That is assuming they conclude there was a problem in the nature of their marketing and PR conduct, instead of believing there just wasn't enough of it. :-?


Let's see how this year shapes up, and I hope their engineering is up to competing with ATI so that the alternative I'm hoping for is more appealing to them...
 
demalion said:
... That is assuming they conclude there was a problem in the nature of their marketing and PR conduct, instead of believing there just wasn't enough of it. :-?...

Ha, Ha--Good point!...:D Let's hope not--'O Saints preserve us!....;)
 
WaltC:

That Trojan theory was my idea!! MINE!! :devilish: :D ;)

Seriously though, it's simple. NVIDIA has been missing product cycles ever since the NV15. This time a competitor made them pay.
 
Chris, the RV350 is a value part; it's not meant to run faster than an R350, it's just meant to be built cheaper. Read the comments at Guru3D--they say the same thing.
 
Chris123234 said:
Maybe I was right

Upcoming ATI desktop graphics cards:

Card             Process   Core clock   Memory   Memory size   Memory clock
Radeon 9800      0.15 µm   375 MHz      DDR-I    128 MByte     350 MHz
Radeon 9600 Pro  0.13 µm   400 MHz      DDR-II   128 MByte     400 MHz
Radeon 9600      0.13 µm   300 MHz      DDR-II   128 MByte     300 MHz
Radeon 9200      0.15 µm   275 MHz      DDR-I    64 MByte      275 MHz

The 9600 (RV350) has a higher clock rate and faster memory.
The RV350 probably has fewer pipes and a 128-bit memory bus.
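To put those clocks in context (a rough illustration--the pipe counts and bus widths here are my assumptions, not from the list above): peak fill rate goes as pipes times core clock, and memory bandwidth as bus width times effective memory clock, so the 9600 Pro's higher clocks don't make it faster than the 9800.

fill rate ≈ pipes * core clock;  bandwidth ≈ (bus width / 8) * 2 * memory clock

Radeon 9800 (assuming 8 pipes, 256-bit):     8 * 375 MHz = 3000 Mpix/s;  32 bytes * 700 MHz ≈ 22.4 GB/s
Radeon 9600 Pro (assuming 4 pipes, 128-bit): 4 * 400 MHz = 1600 Mpix/s;  16 bytes * 800 MHz ≈ 12.8 GB/s

Fewer pipes and half the bus width are what make it cheaper to build, which is the point of a value part.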
 
JF_Aidan_Pryde said:
WaltC:

That Trojan theory was my idea!! MINE!! :devilish: :D ;)

*chuckle*... 8)

Seriously though, it's simple. NVIDIA has been missing product cycles ever since the NV15. This time a competitor made them pay.

I've always thought nVidia's idea here was both logical and stupid. Logical because it helped them cook 3dfx's goose (kind of an "arms race" manifesto). Stupid because it was obviously unsustainable from the start. You are right, they've missed several of their self-ordained cycles, most of them I think after 3dfx dropped out of the picture. But I really think what ATI did here with the R300 goes far beyond nVidia's product cycle schedule--I think ATI just leapfrogged 'em, plain and simple--kind of like AMD did Intel back in '99 (although I'd end the comparison there, because ATI and nVidia are much closer in terms of resources than AMD and Intel are). nVidia played a good come-from-behind game with 3dfx, but ATI is no 3dfx, and as such I think the kind of lead ATI has will be much more difficult for nVidia to overcome than the situation was with 3dfx. I think 2003 will cement the trends on these issues that we are likely to see for at least several more years after.
 