VSA100 ?

Honestly...

Does the situation with nVidia/ATI/R300/NV30/R350 remind anybody of the VSA100 debacle from a few years ago?

You know...3dfx was paper-launching the VSA100 during its delay. Meanwhile, nVidia launched the GF1 SDR, then the GF1 DDR. By the time 3dfx was finally getting close to launching the thing, nVidia was able to crank out the GTS, which bought them a significant amount of time to get their drivers up to scratch.

The more I look at the current situation, the more it reminds me of what happened a few years ago.

If the R350 has, indeed, completed its tapeout...and might just hit the streets right around NV30 timeframe...boy oh boy...
 
I think though that Nvidia is in a worse situation than 3dfx was at the time, in terms of technology.

The leap from a Voodoo 3/TNT2 Ultra to a GF1 SDR/DDR wasn't really all that great in terms of performance. It was noticeable, but nothing that really made your jaw drop.

The GTS, on the other hand. Dear lord, that was sick. Basically, what happened now with the release of the R300 is, imo, equivalent to what would have happened if Nvidia had released the GTS instead of the GF1 SDR/DDR in 1999.

I consider the R350 to be the Ultra in this comparison. But who knows, it might end up being a GF3 to the GTS.
 
I guess if initial fab goes well they will be able to launch/demo at just about the time NV30 becomes widely available.

How much faster than the NV30 are they going to claim to be? It's hard to imagine them beating a 500 MHz NV30 by any significant margin without being at least 500 MHz themselves-- which seems difficult on a .15 process. Perhaps it's less difficult if you target that speed upfront and are willing to pay the price in power consumption.
 
The two situations share some amusing similarities...including the fact that both respective paper launches occurred at Comdex. ;)

And while I agree that the technological difference is greater this time around, (so it is worse for nVidia in that respect), there are other differences that work in nvidia's favor:

1) nVidia actually demoed NV30 silicon at Comdex. 3dfx did not have working VSA-100s at Comdex.

2) Everyone pretty much expects NV30 based cards to ship at least a month or two before R350 ships. IIRC, GeForce2 GTS and Voodoo 4/5 boards shipped at about the same time.

In short: timing wise, nVidia fares better than 3dfx did.

It's hard to imagine them beating a 500 MHz NV30 by any significant margin without being at least 500 MHz themselves--

Actually, it's not hard at all, in specific situations. With significantly more bandwidth than the NV30, a 400 MHz+ core/mem R300 variant should be able to beat a 500 MHz NV30 quite handily in FSAA situations, while being on par with it in most other situations.

And we still don't know what possible hardware performance tweaks (if any) might be done with R350 to make it clock-for-clock faster than the R-300.

Where clock speed will really matter is likely shader performance. An R-300 variant clocked lower than an NV30 will probably lose in synthetic shading benchmarks.
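The bandwidth argument above can be sanity-checked with quick arithmetic. This is a back-of-envelope sketch: the 400 MHz memory clock for an R-300 variant and the 500 MHz DDR-II clock for the NV30 are the speculative figures from this thread, not confirmed specs; only the 256-bit vs. 128-bit bus widths are the known configurations.

```python
# Back-of-envelope peak memory bandwidth comparison.
# Clocks are the thread's speculative figures; bus widths
# (R300: 256-bit, NV30: 128-bit) are the known configurations.

def peak_bandwidth_gb_s(mem_clock_mhz: float, bus_bits: int,
                        transfers_per_clock: int = 2) -> float:
    """Peak bandwidth = clock * transfers/clock (DDR = 2) * bytes/transfer."""
    bytes_per_transfer = bus_bits / 8
    return mem_clock_mhz * 1e6 * transfers_per_clock * bytes_per_transfer / 1e9

r300_variant = peak_bandwidth_gb_s(400, 256)  # hypothetical 400 MHz DDR
nv30 = peak_bandwidth_gb_s(500, 128)          # rumored 500 MHz DDR-II

print(f"400 MHz / 256-bit: {r300_variant:.1f} GB/s")  # 25.6 GB/s
print(f"500 MHz / 128-bit: {nv30:.1f} GB/s")          # 16.0 GB/s
```

Even spotting the NV30 a 100 MHz memory clock advantage, the wider bus gives the R-300 variant roughly 60% more raw bandwidth, which is why AA (where bandwidth dominates) is where a win seems most plausible.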
 
Natoma said:
I think though that Nvidia is in a worse situation than 3dfx was at the time, in terms of technology.

Except the NV30 is supposed to be more advanced than the R300. The Voodoo4/5 cards were not more advanced than the GeForce line (as it relates to programming-side features).
 
Joe DeFuria said:
Actually, it's not hard at all, in specific situations. With significantly more bandwidth than the NV30, a 400 MHz+ core/mem R300 variant should be able to beat a 500 MHz NV30 quite handily in FSAA situations, while being on par with it in most other situations.

There's no reason to do the R350 if it is only going to beat the NV30 in "specific situations." There are probably bandwidth-limited benchmarks where the R300 will beat the NV30. I think the raison d'être of the R350 is to be the performance leader, which means matching or exceeding the competition's performance in essentially all benchmarks. I think ATI can afford to ignore DX9-shader-specific benchmarks, but it needs to measure up in fill-rate-limited as well as bandwidth-limited tests.
 
Chalnoth said:
Except the NV30 is supposed to be more advanced than the R300. The Voodoo4/5 cards were not more advanced than the GeForce line (as it relates to programming-side features).

True. But the V5 did have one ace up its sleeve that has only been bested in the past few months, and it was significantly better than the GF2's: FSAA. Granted, by today's standards it's old tech, but back then it was a big difference...
 
I think the raison d'être of the R350 is to be the performance leader, which means matching or exceeding the competition's performance in essentially all benchmarks.

That's basically what I said. ;) That a 400/400 R-300 variant will probably beat or match NV30 in most cases. (Save synthetic shading benchmarks.)

In short, I believe a 400/400 R-300 variant will specifically beat the NV30 in AA benchmarks, and match it in most other situations. And with nearly all web sites now on board with testing AA and aniso at high resolutions as a test of a card's "power", that would be a pretty "clear" victory for ATI.

Look at the 9500 Pro vs. the GeForce4 Ti. Most web sites claim the 9500 Pro is superior to the GeForce4 Ti 4600...even though the 4600 is a bit faster in non-AA benchmarks, the 9500 Pro takes a decisive lead in AA/aniso performance. I am sort of expecting a similar thing to happen with NV30 vs. R350...

EDIT:

There is one "wildcard" though...and that's Doom3. We don't really have a good idea of how these new architectures with different strengths / weaknesses will perform on Doom3, particularly with AA and/or Aniso. (Will pixel power be more important, or bandwidth?)

Doom3 benchmarks have the potential (rightfully or wrongfully) to be seen as "the" performance metric for comparing all cards. Now, based on iD's recent job posting for a sound engineer for Doom3, I don't expect to actually see Doom3 released for benchmarking until probably Q3 next year, so Doom3 probably won't have much of an impact on perceived performance in the NV30 / R350 product timeframe.
 
Joe DeFuria said:
That's basically what I said. ;) That a 400/400 R-300 variant will probably beat or match NV30 in most cases. (Save synthetic shading benchmarks.)

In short, I believe a 400/400 R-300 variant will specifically beat the NV30 in AA benchmarks, and match it in most other situations. And with nearly all web sites now on board with testing AA and aniso at high resolutions as a test of a card's "power", that would be a pretty "clear" victory for ATI.

It really depends on how efficient the respective architectures are. I think it will be very interesting to see how they match up. Bah, I suppose I should put my R9700 back in and see what happens when I drop the memory bandwidth down to what a GeForce FX would have at an equivalent core speed, to see just how efficient the R9700 is with its memory bandwidth.
 
Bah, I suppose I should put my R9700 back in and see what happens when I drop the memory bandwidth down to what a GeForce FX would have at an equivalent core speed, to see just how efficient the R9700 is with its memory bandwidth.

Or, you could just look at some of the recent 9500 Pro reviews, since it is effectively the same configuration as the NV30. (8 pipes plus a 128-bit DDR interface.)

Having said that, we really have no good idea how "efficient" the NV30 is with its memory interface...all we have is a couple of nVidia-supplied benchmarks...
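A rough sketch of why the 9500 Pro is a useful NV30 proxy: compare the memory bandwidth available per pixel of peak fillrate for the two configurations. Treat the numbers as assumptions: the 9500 Pro clocks below are its shipping specs as recalled, and the NV30 figures are the rumored 500/500.

```python
# Bytes of memory bandwidth available per pixel of peak fillrate.
# 9500 Pro: 8 pipes, 275 MHz core, 270 MHz DDR, 128-bit (specs as recalled).
# NV30:     8 pipes, 500 MHz core, 500 MHz DDR-II, 128-bit (rumored).

def bw_per_pixel(pipes: int, core_mhz: float, mem_mhz: float,
                 bus_bits: int, ddr: int = 2) -> float:
    fillrate = pipes * core_mhz * 1e6                   # pixels/s
    bandwidth = mem_mhz * 1e6 * ddr * (bus_bits / 8)    # bytes/s
    return bandwidth / fillrate                         # bytes per pixel drawn

print(f"9500 Pro: {bw_per_pixel(8, 275, 270, 128):.2f} bytes/pixel")
print(f"NV30:     {bw_per_pixel(8, 500, 500, 128):.2f} bytes/pixel")
```

If those assumptions hold, the two ratios come out nearly identical (about 3.93 vs. 4.00 bytes per pixel), so the 9500 Pro's behavior under AA should hint at how bandwidth-constrained an 8-pipe/128-bit part is, just scaled down in absolute speed.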
 
It might also be very interesting if, when Nvidia does release samples of the GF FX for testing, ATI releases R350-powered Radeon samples as well :). Talk about a possible upset in the making :D.
 
Natoma said:
I think though that Nvidia is in a worse situation than 3dfx was at the time, in terms of technology.

The leap from a Voodoo 3/TNT2 Ultra to a GF1 SDR/DDR wasn't really all that great in terms of performance. It was noticeable, but nothing that really made your jaw drop.

The GTS, on the other hand. Dear lord, that was sick. Basically, what happened now with the release of the R300 is, imo, equivalent to what would have happened if Nvidia had released the GTS instead of the GF1 SDR/DDR in 1999.

I consider the R350 to be the Ultra in this comparison. But who knows, it might end up being a GF3 to the GTS.

Actually, I don't think you could be more wrong.

nVidia is in a much better position than 3dfx ever could have been. 3dfx never had a fraction of the spending or marketing clout that nVidia has now. Not only that, but from a technological merit point of view, I wouldn't be surprised if the R350 and nv30 turn out to be roughly equal in most respects.
 
The 3dfx situation with the Voodoo5 and them going out of business has nothing to do, in any way, with the situation nvidia is in now.

The reason 3dfx went out of business was mainly management problems. Nvidia, on the other hand, has rock-solid management, and the only reason the nv30 was late was TSMC problems, not nvidia problems.



And how can you compare the full-scene anti-aliasing speed and efficiency of the GeForce FX to the R350 when you have final benchmarks for neither the GeForce FX nor the R350, with or without FSAA or anisotropic filtering?

Also, will the R350 use exactly the same FSAA methods as the R300? How efficient is the GeForce FX's memory bandwidth when using FSAA and anisotropic filtering? Will the R350 have any memory enhancements that would improve FSAA/anisotropic filtering performance compared to the R300 on a clock-for-clock basis?

-------------------------

A little note: I do expect the R350 to beat the GeForce FX. But if you are all implying that the R350 and the NV30 are an equal match, with the R350 having an advantage in FSAA/anisotropic filtering, imagine if TSMC had had absolutely no problems with the .13-micron fabrication and the NV30 had been released on time and on spec, a few weeks after the 9700 Pro was released?
 
This is a rehashed discussion, simon, and you missed it (judging only by your registration date). Do a search on nv30 and r300 and you'll see the reasons for doubts about the nv30 clearly "winning" versus the r300 that you seem to take as a given. Here is an incomplete rehash of the discussions you'd find with the search...

Basic issues include the benchmark results released at the "launch" of the nv30, the comparison nVidia themselves gave of the nv30 to the GF 4, and observations about the videos and information already released.

It boils down to this: the nv30 is considered by some (including myself) to be an r300 competitor, with the "wins", and yes, "losses", compared to the r300 yet to be determined. So, when you insist the r350 is the only competition (here and in another thread), it contrasts with the view some of us have formed based on the above: if the r350 significantly outperforms the r300, which seems likely given it is a redesign, it seems likely to outperform the nv30 clearly as well (though shader performance may still be a "win" for the nv30, even if its applicability is limited). That doesn't mean those of us who lean towards this opinion are right, by any means, but the questions the nv30 launch left us with also leave that looking like a reasonable outcome.

Regarding the benchmarks, the basic concerns include the lack of specification of settings and driver revisions (for the r300) behind the "wins" nVidia has claimed for the nv30 over the r300, the use of an unspecified version of Doom III with an nVidia-designed test map for comparison, and observations of how r300 scores on systems similar to the one nVidia used for its specific nv30 figures compare with those figures. One assumption that seems reasonable is that the concrete comparison values provided by nVidia are near the maximum lead nVidia is able to establish for the nv30; given the limited data they released, they would have chosen the specific examples that make the strongest case possible for their market mind share and future nv30 sales.

Regarding the comparison of the nv30 to the GF4, it is in the same range (>= 2.5x GF4 performance) as the R300 already is to the GF4 under the conditions it seems likely nVidia is using (4x AA, and perhaps 8x aniso).

Regarding the videos, some concern has been expressed because the launch demo videos did not exhibit AA in some cases, which causes some to discount the significance of the claims of a "nearly free" AA performance hit...i.e., no real advantage compared to the R300 in this regard.

With a bandwidth deficit, hype in place of informative specifications on many occasions during the launch, and ATi promising to deliver driver optimizations of their own (I don't have a 9700, but some owners claim the latest driver set has already begun to do this), I do think dismissing the r300 versus nv30 competition is premature.

Of course, it is quite possible that driver issues are responsible for the hesitancy to release solid figures, and the nv30 may deliver on the hype in full (it is pretty much accepted that it will "win" in shader performance in any case, so the possibility of it being a clear "loser" to the R300 seems easily dismissible), but I will note that possibility seems to run counter to your expectation, stated elsewhere, of driver stability at launch.

Basically, I think we're all waiting for the first actual independent benchmarks (hopefully a Beyond3D one early on to actually answer questions definitively!) of the nv30 before we'll be able to say anything that hasn't been said already in the past month and more.

Regarding r350 versus nv30, we still really don't even have any real idea what the r350 is in any case, or to what degree the expected performance gains (over the r300) will actually manifest.
 
The 3dfx situation with the Voodoo5 and them going out of business

My intent had nothing to do with 3dfx going out of business...just the events that took place during the Fall/Winter, and then Spring, of (??) 1999-2000, and how similar they are to what we've got right now...

3dfx paper launched the T-Buffer/VSA-100 while nVidia had, without any shadow of a doubt, finally delivered a higher performing part than anything 3dfx had to offer. When that wasn't good enough, nVidia then released a DDR version.

The VSA-100 delay was so long that it enabled nVidia to release not 1...not 2...but 3 products. Along came the GTS, coupled with more and more delays (and the recall).

And here is where the similarity _might_ continue. At the end of the day, I think most people agreed with the following statement:

"Had 3dfx delivered the VSA-100 on time, think about how it would have compared with the original GeForce. Bandwidth wise, it would have been superior. nVidia would never have had their Antialiasing implementation in place to go head-to-head with 3dfx. Furthermore, it would have allowed them to work on churning out a faster part for the upcoming Spring."

...or something like that :)

Anyhow, I wasn't really looking @ 3dfx folding...just the actual launching of these products, and the possibility that we could see even more parallels.

Think about this...If the R350 has just taped out, they could very well get these things out within the ~March timeframe, which is just a couple of weeks post-GeForceFX.

What if this thing sports even more bandwidth (likely), higher clock (likely), better mature/tuned drivers (likely), and a couple of features here/there (likely)...and all of this adds up to either diminishing the performance difference altogether, or _really_ putting some distance between itself and the FX?
 
demalion, I presume that every r300 vs nv30 discussion is the same on every computer community on the internet; if that is the case, I already know exactly what was said in the previous threads on this site.

My reply and comments were meant as a reply to the people in this thread who believe the r350 would be an equal match for the nv30. Both are unreleased cards, and the performance of both is still completely unknown.

The benchmarks that nvidia released mean nothing to me; the percentage increase from the GeForce4 to the GeForce FX means nothing to me. These come from a source that is hardly unbiased, using immature drivers rather than full release drivers.

However, from what I have read in both r300 and nv30 discussions, I would assume that the nv30 is able to beat the r300 in most, if not all, benchmarks, and I have never seen any proof to tell me different.

And when I talk about the r350 vs the nv30, I am talking about release dates and nothing more.


I will finish off by rephrasing what I have just said. Comparing the real-world performance of 2 unreleased graphics cards, 1 of which hasn't even been announced yet, is stupid and foolhardy. The benchmarks given by nvidia use unoptimised drivers; the card should be faster when released. Also, nvidia do not have to build the nv30 to beat the r300, as in my eyes that is irrelevant; they have to build a card that can beat the r350 in some benchmarks and still be competitive in pricing.

My policy is a wait and see policy and in my eyes, it is the best policy.
 
sas_simon said:
demalion, I presume that every r300 vs nv30 discussion is the same on every computer community on the internet; if that is the case, I already know exactly what was said in the previous threads on this site.

Hmm...I'd say presuming these forums are just like "every computer community on the internet" is a pretty off base assumption. Perhaps your presumption would be fulfilled, perhaps not, but why make an assumption instead of taking a look?

My reply and comments were meant as a reply to the people in this thread who believe the r350 would be an equal match for the nv30. Both are unreleased cards, and the performance of both is still completely unknown.

Well, I referred you to the reasons for their belief (which, in some cases, is actually that the r350 will tend to "win" versus the nv30, and at least be an equal match for it)...it's still there if you want to take a look.

The benchmarks that nvidia released mean nothing to me; the percentage increase from the GeForce4 to the GeForce FX means nothing to me. These come from a source that is hardly unbiased, using immature drivers rather than full release drivers.

Right, I tend to agree, but I was addressing the following:

However, from what I have read in both r300 and nv30 discussions, I would assume that the nv30 is able to beat the r300 in most, if not all, benchmarks, and I have never seen any proof to tell me different.

We don't have "proof" either way...but repeating the discussion seems pointless. I was trying to prevent that and provide an alternative (pointing you to discussions that have already taken place here about exactly this, since people are responding to you based on them).

And when I talk about the r350 vs the nv30, I am talking about release dates and nothing more.

? OK.

I will finish off by rephrasing what I have just said. Comparing the real-world performance of 2 unreleased graphics cards, 1 of which hasn't even been announced yet, is stupid and foolhardy. The benchmarks given by nvidia use unoptimised drivers; the card should be faster when released. Also, nvidia do not have to build the nv30 to beat the r300, as in my eyes that is irrelevant; they have to build a card that can beat the r350 in some benchmarks and still be competitive in pricing.

Well, there are a lot of assumptions in that. They may be right. But they may be wrong, too. If you didn't want feedback about them and don't care what others have thought and reasons they have given, why did you share them in a discussion board?

My policy is a wait and see policy and in my eyes, it is the best policy.

Well, you've decided not to wait and see for some of your assumptions. Having done that, I thought you might be interested in why others reached some different conclusions, and the reasons for them. If not, sorry for the wasted reading time, but if you change your mind the info is still there to read.

I'm not trying to convert you away from your beliefs, so you don't have to defend them to me. I'm simply trying to tell you where you can find why people say they believe otherwise, and that it might be in your interest to take a peek so you don't have to ask.
 
demalion said:
if the r350 significantly outperforms the r300, which seems likely given it is a redesign, it seems likely to outperform the nv30 clearly as well

RE-design? I highly doubt it. The R400 might be a redesign; the R350 is an architecture tweak. R250 is to R200 as R350 is to R300. If anything, the R350 is most likely designed more to get better yields, increase margins, and lower cost than to significantly boost performance or features. In other words, it's a refresh.
 
DemoCoder said:
demalion said:
if the r350 significantly outperforms the r300, which seems likely given it is a redesign, it seems likely to outperform the nv30 clearly as well

RE-design? I highly doubt it. The R400 might be a redesign; the R350 is an architecture tweak. R250 is to R200 as R350 is to R300. If anything, the R350 is most likely designed more to get better yields, increase margins, and lower cost than to significantly boost performance or features. In other words, it's a refresh.

The "V" in the name signifies what you are talking about, Democoder.
The RV350 will be what you described.
The R350 will not.
Just like the R200 and the RV200.

Now, I don't think the R350 will be a redesign, but it will be a tweak for performance. Cheaper cost and better yields, but not better performance, is what ATI usually signifies with the "V".
À la the RV250.
I don't recall an R250 at all.
 