VSA100?

On the original topic, I don't see any similarity between the NV30/R300 situation and the GTS/V5 generation.

I also see a very distorted picture of that whole debacle, which is evidence of marketing money at work... and doing its job nicely. I know, because I rushed out to get both the GTS (two "generations" of the product, to be exact) and the V5 at the time. The consumer GTS release "beat" the V5 by a total of 34 days, during which Elsa jumped the gun and released the Gladiac 32MB, which was a different design from the reference board and had an average 86% RMA return rate between my local Fry's and CompUSAs. It wasn't until two weeks AFTER the V5 that a consumer-"ready" board was available (I chose a CLAP, which was a totally different board design from the three (3) failed Elsas I'd returned a full month prior).

So the only way I could see a parallel would be if:
1) R300 boards are released in January, from which they have a near 100% RMA return rate from faulty boards.
2) NV30 based boards then are released 30-35 days later, and work perfectly fine in most hardware configurations.
3) 2-3 weeks after the NV30 based boards are released, correctly functioning R300 boards are released.
4) ATI would have to rely on specially designed benchmarks to market the product as having any edge on the NV30... and rely on feature sets that we won't see until 2004+.
 
Well, I stand corrected on a technicality, but my point still stands. The R350 is a refresh, a tweak, not a significant new architectural design. I doubt it will even have any additional functional units (no extra pipelines, shader units, etc.), and probably no new AA modes either. If anything, ATI will be tweaking this thing to increase clock speed and yields, not adding tons more transistors and trying to cram more functional units into the same .15um process. NVidia will be doing the exact same thing with the NV31. It makes no sense to make significant design changes so soon unless you are fixing bugs.

Moreover, it goes against what ATI told us at Mojo about the two teams working on the R300 and the R400. If the R400 team has presumably been designing for a long period of time, and the R300 team just got off the R300 project, where did ATI get the resources to do a REDESIGN in such a short period of time?



(BTW, I think ATI's codenames suck as badly as NVidia's. The R350 is an ultra-high-end design, but what is the RV350? Lower than an R300 in capability?)
 
DemoCoder said:
demalion said:
if the r350 significantly outperforms the r300, which seems likely given it is a redesign, it seems likely to outperform the nv30 clearly as well

RE-design? I highly doubt this. The R400 might be a redesign.

When I say "redesign", I don't mean to imply "complete redesign". "Tweak" seems suitable for what I have in mind...I guess I should have specified a degree of redesign as the consensus seems to be that "redesign" is more extensive than what I had in mind.

The R350 is an architecture tweak. R250 is to R200 as R350 is to R300.

Already corrected, though in the context of "one being a tweak of another", I actually agree with the statement (except for that missing V).

If anything, the R350 is most likely designed to get better yields and increase margins while lowering cost, rather than significantly boosting performance or features. In other words, it's a refresh.

The problem may be that my use of "significantly" is open to a wide array of interpretations. Suffice it to say, your statements are not incompatible with mine, at least to me. That's what I meant earlier by "Regarding r350 versus nv30, we still really don't even have any real idea what the r350 is in any case, or to what degree the expected performance gains (over the r300) will actually manifest."

EDIT: BTW, yes, the codenames suck just as much between the two (i.e., they work for the engineers, so who cares what we think of them :LOL:), but why even bring it up? Trying to start a flame war? :eek:
 
sas_simon said:
demalion, I presume that every r300 vs nv30 discussion is the same on every computer community on the internet; if that is the case, I know exactly what was said in the previous threads on this site already.


Hmm...I'd say presuming these forums are just like "every computer community on the internet" is a pretty off-base assumption. Perhaps your presumption would be fulfilled, perhaps not, but why make an assumption instead of taking a look?

I was surfing around this website long before I actually joined, and I have already read all of the nv30 and r300 discussions.

Quote:
However, from what I have read in both the r300 and nv30 discussions, I would assume that the nv30 is able to beat the r300 in most if not all benchmarks, and I have never seen any proof to tell me otherwise.


We don't have "proof" either way...but repeating the discussion seems pointless. I was trying to prevent that and provide an alternative (pointing you to discussions that had already taken place here about exactly this, since people are responding to you based on them).

The things that led me to believe the nv30 will beat the r300 are the technical specifications of each card; taking these into account, along with how all the different people on Beyond3D translate them into real-world performance, I predicted that the nv30 would beat the r300. I have never seen anyone here or on other websites asking "will the nv30 beat the r300"; I have only read "by how far will the nv30 beat the r300".

Quote:
I will finish off by rephrasing what I have just said. Comparing the real world performance of 2 unreleased graphics cards, 1 of which hasn't even been announced yet, is stupid and foul hardy. The benchmarks given by nvidia are using unoptimised drivers, the card should be faster when released. Also, nvidia do not have to build the nv30 to beat the r300, as in my eyes that is irrelivent, they have to build a card that can beat the r350 in some benchmarks and still be competitive in pricing.


Well, there are a lot of assumptions in that. They may be right. But they may be wrong, too. If you didn't want feedback about them and don't care what others have thought and the reasons they have given, why did you share them on a discussion board?

I didn't think I made any assumptions; the part about the drivers being unoptimised was me going on the fact that the first revisions of drivers are done to get the card working, and later revisions are done to optimise the card and iron out the bugs. All software is developed like this.

And I didn't say anywhere that I didn't want feedback; if I am incorrect about anything I have said, then I would like to be told. My comment that comparing two cards when no one actually knows the final release performance of either is foolhardy was my opinion of their opinions.



My post was about making people understand that you can't really compare the r350 to the nv30, as the final release performance of both cards is still unknown. It was also about explaining that people should wait and see what sort of performance the nv30 will bring when it is benchmarked by unbiased websites, and about waiting to see what sort of speed and features the r350 will bring before making any assumptions about its performance compared to the nv30 and the r300.


 
sas_simon said:
demalion, I presume that every r300 vs nv30 discussion is the same on every computer community on the internet; if that is the case, I know exactly what was said in the previous threads on this site already.

Hmm...I'd say presuming these forums are just like "every computer community on the internet" is a pretty off-base assumption. Perhaps your presumption would be fulfilled, perhaps not, but why make an assumption instead of taking a look?

I was surfing around this website long before I actually joined, and I have already read all of the nv30 and r300 discussions.

Why didn't you answer like this the first time? :LOL: Your prior wording conveys something completely different. Note the bolded text, and contrast it with your latest wording.

Skipped a bunch of text as my prior reply and the text I referred to address it already.

Well, there are a lot of assumptions in that. They may be right. But they may be wrong, too. If you didn't want feedback about them and don't care what others have thought and the reasons they have given, why did you share them on a discussion board?

I didn't think I made any assumptions; the part about the drivers being unoptimised was me going on the fact that the first revisions of drivers are done to get the card working, and later revisions are done to optimise the card and iron out the bugs. All software is developed like this.

Ok, we have a difference in opinion on what constitutes an assumption then.

And I didn't say anywhere that I didn't want feedback; if I am incorrect about anything I have said, then I would like to be told.

Again, my points are: 1) I think your expectations are assumptions; 2) other people have other assumptions, and I was telling you where to find their reasons; 3) there is no point in repeating the dispute over these assumptions, as we've likely gone as far as we can without an actual nv30. To my mind, number 3 was what was starting to happen. The problem seems to be either that you don't consider some of your statements assumptions, while I do, or that you think I'm trying to say your conclusions aren't reasonable when I label them as assumptions.

No need to agree with me; if you've already read the previous discussion here, I've just wasted our time :-? as I was primarily trying to make sure you knew why other people thought as they did.

I hope you can see how your final point (not quoted) was echoed in my posts as well.
 
We all have different opinions; mine were in reply to their opinions, i.e. what I personally thought of their opinions: telling them that in my mind it would be better to wait and see what sort of performance the nv30 has before making assumptions about how it compares to the r350, and also to wait for the r350 to be officially announced. That would mean we could look at the improvements the r350 has over the r300 and then make an educated guess about its performance compared to the nv30.


And could you explain what you thought my assumptions were? I did make some assumptions, but in my mind they were educated assumptions, "educated" meaning based on previous discussions and hardware facts. I didn't have much else to do anyway, which is strange for me.

And I guess if we all waited for websites to benchmark cards before talking about how their performance relates to other cards, this and many other websites would be very boring.



And when I said that this is pretty much like other computer communities on the internet, there are similarities, i.e. the topic of discussion; however, one of the reasons I have surfed these forums for quite some time is that they go into far more detail than a lot of other websites.
 
At the moment, everything is pure speculation because no one has an NV30 to test and trying to "benchmark" performance by paper specs is an exercise in numerology.

I think what you will find is that the NV30, R300, and R350 will all be "on par" with minor performance increases in some areas and reductions in others, vis-a-vis competing cards. I think the raw-fillrate curve is starting to level off, and future cards will make much greater strides in AA'ed fillrate and shader performance, and less radical increases in raw fillrate.

The NV40/R400 probably aren't going to have 2.5x the performance of the R300/NV30 in terms of old games/single- or dual-textured fillrate. But they probably will be able to do higher levels of AA and run shaders a lot faster.

I think if you're expecting miraculous leaps in performance from the R350, you're probably setting yourself up for disappointment. From DX8 to DX9 cards, we really did have a large performance increase, but I don't think it will be sustained every product cycle. In particular, going to a 256-bit bus or 1GHz RAM in one large leap is a feat that you can't keep duplicating every refresh. E.g., what's next, a 512-bit bus on the R400? 2GHz RAM? Can't be done that quickly. Thus, for the foreseeable future, I foresee only modest evolutionary increments in raw fillrate performance, unless someone does a deferred renderer or eDRAM design.
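To put rough numbers on the bandwidth point (back-of-the-envelope only; the clocks below are the commonly reported figures for the 9700 Pro and the announced NV30 boards, not official specs):

Code:
#include <cstdio>

// Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).
// DDR moves data twice per clock, so effective rate = 2 x memory clock.
static double peak_gb_per_s(int bus_bits, double mem_clock_mhz, int transfers_per_clock)
{
    double bytes_per_transfer = bus_bits / 8.0;
    double transfers_per_sec  = mem_clock_mhz * 1e6 * transfers_per_clock;
    return bytes_per_transfer * transfers_per_sec / 1e9;
}

int main()
{
    // R300 (9700 Pro):     256-bit bus, ~310 MHz DDR    -> ~19.8 GB/s
    // NV30 (as announced): 128-bit bus, ~500 MHz DDR-II -> ~16.0 GB/s
    std::printf("R300: %.1f GB/s\n", peak_gb_per_s(256, 310.0, 2));
    std::printf("NV30: %.1f GB/s\n", peak_gb_per_s(128, 500.0, 2));
    // Doubling either figure again means a 512-bit bus or ~1 GHz
    // (2 GHz effective) RAM -- exactly the leap that can't happen
    // every refresh.
    return 0;
}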
 
How about a deferred renderer and eDRAM? :) I think we can see a pretty big performance increase from the r300 to the r400 if ATI goes 700MHz/700MHz core/RAM and a 256-bit bus. Don't you think so?
 
I agree, DemoCoder: future graphics cards are unlikely to push out 500 frames per second in Unreal Tournament 2003, but they will be able to push out far more detailed and far sharper images at the same frame rate.
Future graphics cards are also going to be a lot more programmable and a lot more intelligent, rather than simply more powerful.



Looking at the specs of the nv30, r300 and r350, the ATI cards will have the advantage in some areas and the nv30 in others. I can't see the r350 having vertex shaders quite as long or as programmable as the nv30's, but does that really matter at this early stage? Probably not.

Hopefully, in a few weeks we will see the GeForce FX benchmarks and reviews coming out from sites such as FiringSquad, Beyond3D, Tom's Hardware, etc.
 
sas_simon said:
I agree, DemoCoder: future graphics cards are unlikely to push out 500 frames per second in Unreal Tournament 2003, but they will be able to push out far more detailed and far sharper images at the same frame rate.
Future graphics cards are also going to be a lot more programmable and a lot more intelligent, rather than simply more powerful.

I would disagree here.
Until every game can be run at 1600x1200x128bit with 16xAA and high quality aniso, cards need to get much more powerful.
 
Althornin said:
sas_simon said:
I agree, DemoCoder: future graphics cards are unlikely to push out 500 frames per second in Unreal Tournament 2003, but they will be able to push out far more detailed and far sharper images at the same frame rate.
Future graphics cards are also going to be a lot more programmable and a lot more intelligent, rather than simply more powerful.

I would disagree here.
Until every game can be run at 1600x1200x128bit with 16xAA and high quality aniso, cards need to get much more powerful.

Some people forget (not saying you are, Althornin) that it is not only the graphics card doing the work but also the CPU.

For something like you have mentioned [1600x1200x128bit with 16xAA and high quality aniso], we will more than likely need 8th- or even 9th-generation processors, and the memory subsystem will need to improve, as well as the bus system (AGP just doesn't cut it).

Then there is the host OS. Looking at the XBOX as a direct example, even John Carmack commented that the XBOX is capable of something like twice the performance of an equivalent PC with Windows, due to numerous reasons.

Edit: typo
 
misae said:
For something like you have mentioned [1600x1200x128bit with 16xAA and high quality aniso], we will more than likely need 8th- or even 9th-generation processors, and the memory subsystem will need to improve, as well as the bus system (AGP just doesn't cut it).

Umm, I don't see how higher resolutions, higher color depth, better AA/aniso, etc. would increase the load on the CPU at all, as the additional load we are talking about consists of per-pixel calculations that are done 100% on the GPU side...
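To illustrate where that load actually lands (illustrative arithmetic only; the 8 bytes per sample assumes 32-bit color plus 32-bit Z, and ignores texture reads, overdraw, and any compression):

Code:
#include <cstdio>

int main()
{
    const double pixels  = 1600.0 * 1200.0; // ~1.92M pixels per frame
    const double samples = pixels * 16.0;   // 16x AA -> ~30.7M samples per frame
    const double fps     = 60.0;
    const double bytes_per_sample = 8.0;    // assumed: 32-bit color + 32-bit Z

    // Framebuffer traffic for one full-screen pass, writes only:
    double gb_per_s = samples * fps * bytes_per_sample / 1e9;
    std::printf("~%.0f GB/s of framebuffer writes alone\n", gb_per_s); // ~15 GB/s

    // None of this touches the CPU: it submits the same triangles
    // whether the GPU resolves 1 sample per pixel or 16.
    return 0;
}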
 
Althornin, the technology that is coming out today is about doing more for less. Making things more efficient. Imagine turning on 4x FSAA and 8x anisotropic filtering with absolutely no performance hit.
 
Typedef Enum said:
Honestly...

Does the situation with nVidia/ATI/R300/NV30/R350 remind anybody of the VSA100 debacle from a few years ago?.....

The more I look at the current situation, the more it reminds me of what happened a few years ago.

I'm tired of these comparisons with 3dfx. The implication that Nvidia could be going the way of 3dfx (that is, into oblivion) is premature, to say the least. The similarities are only skin-deep. Nvidia differs from 3dfx in a number of ways that will ensure its viability for a long time:

-- Tons of cash reserves (after a long string of profitable quarters).
-- Strong continuing cashflow from a variety of products in diverse market categories.
-- Strong OEM presence.

Sure, it's clear that Nvidia has stumbled recently, and they've lost the high-end (kudos to ATI) for now. But if I recall the facts correctly, Nvidia's high-end consumer card has historically accounted for less than 1% of revenues. As 3D fanatics and gamers we tend to lose sight of this fact, but the big money is mostly in the low-end, and somewhat in the mid-range. Nvidia is still strong in the low-end and mid-range. If you don't believe me then go to Dell, Gateway, MicronPC, etc, configure a low-end to mid-range PC, and see what video cards come up as the default or low-cost upgrades.

And don't forget about the success of Nforce2, and the Quadro boards in the professional market. Nvidia may not be the technology leader at the moment, but they continue to move lots of product. That revenue stream will enable them to recover from their recent stumbles, and give them plenty of chances in the future to regain the high end, chances that 3dfx never had.

Rumors of Nvidia's demise are greatly exaggerated. Expect the computer graphics market to be a two-horse race for many years, and expect those two horses to continue to trade the lead from time to time.
 
sas_simon said:
Althornin, the technology that is coming out today is about doing more for less. Making things more efficient. Imagine turning on 4xfsaa and 8xanisotropic filtering with absolutely no performance hit.

Ok, I'll bite... GFFX?
 
SteveG

In the low end and midrange of the market, ATI is clearly in a better position than Nvidia. ATI's third-in-line video card (the Radeon 9500 Pro) beats Nvidia's best in a number of features and benchmarks, yet costs only a little bit more than the Ti4200. The Radeon 9000/9100 makes the GF4 MX look pathetic in comparison, yet at virtually the same cost. ATI's lineup right now is far superior to what Nvidia has, and it looks like it will be for another 3 months at least. ATI also has newer, better technology ready to market whenever needed. The only thing I see Nvidia having going for it right now is the NForce2 chipset, which is the best chipset at the moment for the AMD line of processors.

3dfx went out of business because it could no longer effectively compete and make a profit. Nvidia's lineup right now is becoming obsolete rather quickly, and its near-future prospects are unclear.
 
arjan de lumens said:
misae said:
For something like you have mentioned [1600x1200x128bit with 16xAA and high quality aniso], we will more than likely need 8th- or even 9th-generation processors, and the memory subsystem will need to improve, as well as the bus system (AGP just doesn't cut it).

Umm, I don't see how higher resolutions, higher color depth, better AA/aniso, etc. would increase the load on the CPU at all, as the additional load we are talking about consists of per-pixel calculations that are done 100% on the GPU side...

As a real-world test, try putting a GeForce4 Ti or Radeon 9700 in a K6-2 system with PC100 RAM and AGP 2x.
 
What "features" do you think they could add?

I'm kinda sceptical about adding new stuff; I think they'll just bump frequencies and perhaps use DDR-II.

Fully hardware-accelerated N-patches would be nice... :devilish: it's WAY too CPU-dependent right now. The 8500 wasn't like that, and according to the ATi guys at Rage3D only minor speed improvements might be possible in future drivers. (Personally I wonder if it's even present at all; I get the same performance hit as I did in the good old days with my Radeon 64 ViVo using full software emulation.) A rough sketch of what the setup looks like is at the end of this post.

Constant color compression.

Perhaps adding back support for a 32-bit Z buffer and W buffer.
A second TMU? Heh, nah, honestly I don't think we'll see that, but this is just a "wishlist".

I dunno.. I guess I'm just too satisfied with my 9700 Pro to ask for much more.
Just gimme SSAA and better performance with TRUFORM and I'm all set. :p
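Since I brought up the N-patch path, here's roughly what turning it on looks like under Direct3D 9 (just a sketch; the cap check is the important part, since without hardware support the runtime tessellates on the CPU, which is the slow path I'm complaining about):

Code:
#include <d3d9.h>

// Enable N-patch (TRUFORM-style) tessellation if the driver claims
// hardware support; otherwise leave it off rather than eat the
// CPU-side software tessellation hit.
bool EnableNPatches(IDirect3DDevice9 *dev, const D3DCAPS9 &caps)
{
    if (!(caps.DevCaps & D3DDEVCAPS_NPATCHES))
        return false;             // no hardware N-patches reported

    dev->SetNPatchMode(2.0f);     // 2 segments per edge; 0.0f disables
    return true;                  // later DrawPrimitive calls get tessellated
}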
 
misae said:
arjan de lumens said:
misae said:
For something like you have mentioned [1600x1200x128bit with 16xAA and high quality aniso], we will more than likely need 8th- or even 9th-generation processors, and the memory subsystem will need to improve, as well as the bus system (AGP just doesn't cut it).

Umm, I don't see how higher resolutions, higher color depth, better AA/aniso, etc. would increase the load on the CPU at all, as the additional load we are talking about consists of per-pixel calculations that are done 100% on the GPU side...

As a real-world test, try putting a GeForce4 Ti or Radeon 9700 in a K6-2 system with PC100 RAM and AGP 2x.

In real games, sure, it would be a lot slower. But "1600x1200x128bit with 16xAA and high quality aniso" does not imply a higher CPU load at all. By the time this becomes the norm, CPU capabilities will be way higher, and consequently games will be more demanding on the CPU too. That's just a sign of the times, rather than a sign that those graphics settings require a better CPU. I'm certain that, for instance, most of my demos would run at pretty much the same framerate on the K6-2 as on a 2400+, because there aren't a whole lot of CPU tasks in most of them.
 
SteveG said:
Typedef Enum said:
Honestly...

Does the situation with nVidia/ATI/R300/NV30/R350 remind anybody of the VSA100 debacle from a few years ago?.....

The more I look at the current situation, the more it reminds me of what happened a few years ago.

I'm tired of these comparisons with 3dfx. The implication that Nvidia could be going the way of 3dfx (that is, into oblivion) is premature, to say the least. The similarities are only skin-deep. Nvidia differs from 3dfx in a number of ways that will ensure its viability for a long time:

-- Tons of cash reserves (after a long string of profitable quarters).
-- Strong continuing cashflow from a variety of products in diverse market categories.
-- Strong OEM presence.

Sure, it's clear that Nvidia has stumbled recently, and they've lost the high-end (kudos to ATI) for now. But if I recall the facts correctly, Nvidia's high-end consumer card has historically accounted for less than 1% of revenues. As 3D fanatics and gamers we tend to lose sight of this fact, but the big money is mostly in the low-end, and somewhat in the mid-range. Nvidia is still strong in the low-end and mid-range. If you don't believe me then go to Dell, Gateway, MicronPC, etc, configure a low-end to mid-range PC, and see what video cards come up as the default or low-cost upgrades.

And don't forget about the success of Nforce2, and the Quadro boards in the professional market. Nvidia may not be the technology leader at the moment, but they continue to move lots of product. That revenue stream will enable them to recover from their recent stumbles, and give them plenty of chances in the future to regain the high end, chances that 3dfx never had.

Rumors of Nvidia's demise are greatly exaggerated. Expect the computer graphics market to be a two-horse race for many years, and expect those two horses to continue to trade the lead from time to time.

It's not the 1% of revenue that matters; it's the fact that your product is recognized as number 1. It's the high-end cards that SELL the low end. Your point about el-cheapo cards in OEM boxes is perfectly correct, but it's only because the high end is top-notch that people are willing to buy the watered-down version. When you lose the high end, consumers lose the incentive to buy the low end.
 