Rumor: R350 out before GFFX and is 10% faster......

Yes "The Sims" is the best selling title of all time :rolleyes: ...

By "most popular" I meant currently most popular titles; BF 1942 is approaching Counter-Strike numbers...12,000 players last weekend.
 
Posted: Tue Jan 21, 2003 6:11 am

The current trend in recent titles shows that, more than anything, developers are still leaning about 60% on the CPU (platform) and 40% on the graphics card..so in other words, upgrading the platform will deliver a more significant performance gain than a new video card.

The Radeon 9700 is mostly CPU-limited in current popular titles:

Game of the Year: Dungeon Siege (CPU limited)
BF1942: Most popular online game (CPU limited)
UT2003: (CPU limited)
I don't agree with you at all; most gamers will get more benefit upgrading to the fastest card they can get their hands on, especially those playing at ultra-high res / AA + AF
 
borntosoul said:
I don't agree with you at all; most gamers will get more benefit upgrading to the fastest card they can get their hands on, especially those playing at ultra-high res / AA + AF

No they will not..high-end cards require high-end platforms; one without the other is a waste...
 
DemoCoder said:
I just got finished playing the C&C generals beta (awesome BTW), now here is a game that pushes your hardware:

Min Req: 2Ghz CPU, 512mb RAM, GF4 or better performance. Of course, it's a debug build, but the game has a very detailed physics engine and the best in-game explosions I've ever seen.

It runs locked at 800x600. If I turn on 6X FSAA and 16x AF, the game slows down noticeably. I have to scale back to 2x FSAA before it's acceptable (i.e. non-jerky; I don't get "silk smooth" until I turn off AA).

And if you think that 4-6X sampling is "perfect" anti-aliasing, I think you need to look again. If I look closely, I can still see some edge crawling and shimmering, especially at 800x600 resolution. When you look at pre-rendered graphics, you are looking at 16-64x sampling. There is a noticeable difference when you go up that high.


Man, calling in A10 airstrikes, special forces raids, and Daisy Cutter bombs is so freakin cool!

I see shimmering/crawling at 1280x1024 with 4x AA on my 9700 Pro. C&C Generals sounds fun, though; thanks for the details.
 
Doomtrooper said:
borntosoul said:
i dont agree with you at all ,most gamers will get more benifit upgrading to the fastest card they can get their hands on ,especially those playing at with ultra high res / aa + af

No they will not..high end cards require high end platforms, one without the other is a waste...

It isn't a waste, you just won't be getting the same performance you could with a faster processor because you'll be vertex bound at some point. Higher resolution, upping the anti-aliasing or anisotropic filtering shouldn't be affected by CPU speed at all and should be essentially "free". (assuming the scene doesn't change when you increase the resolution)
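To put the "free AA" argument in concrete terms, here's a toy model of the bottleneck. All numbers are made up for illustration; the point is only that the slower of CPU and GPU sets the framerate:

```python
# Toy model of the CPU-vs-GPU bottleneck discussed above.
# All numbers are illustrative, not measurements.

def frame_time_ms(cpu_ms, gpu_ms):
    """CPU and GPU work largely overlap; the slower side sets the pace."""
    return max(cpu_ms, gpu_ms)

def fps(cpu_ms, gpu_ms):
    return 1000.0 / frame_time_ms(cpu_ms, gpu_ms)

# A CPU-limited game: 25 ms of CPU work per frame, 10 ms of GPU work.
base = fps(25.0, 10.0)      # 40 fps, set entirely by the CPU

# Doubling the GPU-side cost (e.g. turning on AA) leaves the CPU
# cost unchanged -- so the framerate does not move.
with_aa = fps(25.0, 20.0)   # still 40 fps: the AA was "free"

# Only once GPU work exceeds CPU work does the card become the limit.
heavy_aa = fps(25.0, 40.0)  # 25 fps, now GPU-limited
```

So both sides of the argument are right in different regimes: the fast card is wasted on raw framerate when the CPU is the wall, but the AA/AF headroom it buys costs nothing until the GPU becomes the bottleneck.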
 
Doomtrooper said:
This upgrade has been happening for years, so why all of a sudden is it not 'ok'? When buying a video card today, since the feature set is way ahead of developers, one should look at what the card will deliver over its lifespan. E.g., the 9700 can deliver very high frame rates in current popular titles with FSAA and AF.

Hang on just a second. I never said that running current games with the highest possible IQ (by which I mean resolution, framerate, in-game settings and AA/AF) is not a valid reason to buy a top-end card. It's a great reason, and the 9700 Pro is a fabulous card on this measure. I'm just saying that *some* people value longevity as much or more than performance on current games. (Not to say the 9700 won't be a great card on that measure either.)

Basic said:
Five people here at work who just got new home computers with R9700PRO / R9700TX cards, replacing old computers that all had TNT/GF2MX-range cards, suggest that Dave H has a point. They are non-gamers with kids that might play some games, or ..uhm... "softcore" gamers. They just wanted a computer that would last as long as possible without further care.

Conclusive market data! I am vindicated!! :D

With that out of the way, onto the bigger question: what makes a long-lasting card?

I think it's best to acknowledge at this point that this question, like all the important debates of our time, is really just another way of saying "R300 vs NV30!!!!" That is, since we're going to be reading the characteristics of these GPUs into everything we say on the topic, we might as well be explicit about it.

I think I'm not going too far out on a limb to say that, while of course we need to wait for benchmark results to be certain, most of us substantially expect that GFFX 5800 will outperform the 9700 Pro on current games without AA/AF (but no one will care because the framerates for both cards are higher than anyone could want, and most times will be CPU-limited anyways), but that 9700 Pro will draw more-or-less equal on current games with AA/AF on (plus its RGMS will look better).

The issue of which card will do better on, say, a late 2004 game is of course harder to say. I think it's fair to say that those games--at medium settings, with AA/AF off--will probably not be bandwidth-limited on either card, for the simple reason that the low-end mainstream card (or chip) of late 2004 will probably not have > 15 GB/s bandwidth; it's just too expensive.
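For a rough sense of the numbers involved, here's a back-of-envelope bandwidth sketch. Every figure in it (bytes per pixel, depth complexity, framerate target) is an illustrative assumption, not a measured one, and it ignores texture reads and compression entirely:

```python
# Back-of-envelope framebuffer bandwidth estimate, in the spirit of
# the "> 15 GB/s" figure above. Every number is an assumption.

def bandwidth_gbs(width, height, fps, bytes_per_pixel, depth_complexity):
    """Bytes moved per second for color+z traffic, very roughly."""
    pixels_per_second = width * height * depth_complexity * fps
    return pixels_per_second * bytes_per_pixel / 1e9

# 1024x768 at 60 fps, 8 bytes/pixel (32-bit color + 32-bit z),
# average depth complexity of 3:
gb = bandwidth_gbs(1024, 768, 60, 8, 3)
print(round(gb, 1))  # ~1.1 GB/s -- a small fraction of 15 GB/s
```

Which is the point: without AA (and its multiplied sample traffic), plain framebuffer writes don't come close to saturating a modern bus, so a late-2004 game at medium settings plausibly won't be bandwidth-bound on either card.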

Will they be fillrate-limited (thus benefiting the GFFX)? Could be. The Doom 3 engine seems (as I understand it) to be a real fillrate-guzzler, with 1 pass for z-buffer values and then 1 pass each for every light source, each of those consisting of something like 5 loops back through the pipeline--one for stencil buffer calculations, and 2 each (color and bump map) for diffuse and specular lighting. (What's the proper term for "loops back", anyways?)
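A rough sketch of that pass accounting. The "5 loops per light" breakdown is the reading of the engine given above, not id's own numbers, so treat the constants as assumptions:

```python
# Rough pass count for the Doom 3 rendering approach as described
# above: one z-fill pass, then ~5 loops through the pipeline per
# light (stencil, plus color/bump for diffuse and specular).

def total_passes(num_lights, loops_per_light=5):
    z_pass = 1
    return z_pass + num_lights * loops_per_light

print(total_passes(3))   # 16 passes for a scene with 3 lights
print(total_passes(8))   # 41 -- cost grows linearly with light count
```

The linear growth in lights is why a fillrate-heavy card could age better on D3-style engines, if licensed games push light counts up.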

Of course both 9700 Pro and GFFX 5800 will do just fine on Doom 3; I'm more wondering what D3-engine-licensed games will be like, as well as games built on other engines but with similar principles. Presumably light counts will go up which, as I understand it, will demand more and more fillrate. (And I doubt "# of lights" will be easily adjustable in the game settings.)

Another issue is that Doom 3 has a relatively low poly count, apparently for performance reasons. Can someone explain this to me? Why would adding geometry stress the D3 engine more than most? After all, the big performance hit of the D3 engine is that everything is calculated per-pixel instead of per-vertex! My only guess is that keeping poly count low helps reduce "overdraw" (from the POV of the light) in the stencil buffer calculations, in which case increasing poly count would seem to be another fillrate hit.

Well so far things are looking good for GFFX in the longevity contest. But another possibility is that our late 2004 game will be shader limited. While Doom 3 is limited to DX7 features, presumably its successors, as well as engines which barely resemble it, will be heavy into shaders. And AFAICT, we know next to nothing about how R300 and NV30 compare in shader performance. (Indeed, all we know is that R300 can theoretically T&L 43% more vertices per clock than NV30, but that doesn't tell us anything about how they compare on other vertex shaders much less pixel shaders.)

And this would seem to be a big hole in our current benchmarking capabilities. In the past, the new game of today at high settings was often a pretty good proxy for the game of the future at lower settings; the principles, at least, were the same, just with more texture applications and such. Shaders, on the other hand, are a completely separate ballgame, and so far the only software we have that really exercises them are IHV-supplied demos (not likely to be a fair comparison).

Hopefully HLSLs will allow the rapid creation of plenty of representative shader benchmarks. The upcoming (hopefully) round of GFFX reviews won't have them, though, and that IMO is a big loss. Perhaps 3dMark 2003 will step up to the plate, although that perhaps seems too much to hope for. Of course there will be plenty of controversy over what constitutes a well-written, "representative" forward looking shader benchmark, but controversy is at least usually indicative of an interesting problem...

So...is there any good way for the longevity-minded buyer to choose between high end cards at their introduction? When will there be good benchmarks to help this task? Or are longevity-minded buyers, being the sort of people who don't want to replace their video card, also not the sort of people who will pay any attention even if there are good benchmarks developed to try to predict this sort of thing? Won't they just go with whatever top-end card (whether ATI or Nvidia) Dell is offering that day?

Hmm....
 
I hate to quote myself, but a little bit off topic...

Until both NVIDIA and ATI can saturate the installed base (which includes the budget market as the highest percentage) with high-powered, DX9.0-compliant video cards, the high end will never truly be exercised to its capabilities.

I surely hope the following link is nothing but a bunch of guesswork/hogwash, but if it holds any amount of relevant credence, all hopes of the above are doomed:
http://www.xbitlabs.com/news/story.html?id=1042994509

I was looking forward to a complete line of NV30-based products, with the single assurance that all the price points would be DX9.0. If the budget line of cards is going to be another GeForce4 MX all over again...
 
No one really supports all DX9 features, so we don't really know what that quote means, but I could give you one example of something bad - non-support of PS2.0.

If Nvidia removes VS2.0 from the core, it's no big deal, because vertex shaders are easy to run on the CPU, especially a fast business desktop. The real shame would be if PS2.0 isn't there, because a large installed base of cheap non-PS2.0 NV31s is exactly what would prevent developers from writing PS2.0. If there is no VS2.0, developers can still target it; you'll just need a fast CPU. Pixel shader throughput is the important thing to worry about anyway.
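To illustrate why vertex shading can fall back to the CPU so easily: at its core it's just per-vertex arithmetic. A minimal, deliberately naive sketch of the basic transform (a real driver path would be vectorized and handle lighting, clipping, etc.):

```python
# Vertex work is per-vertex math the CPU can do in software --
# here, applying a 4x4 row-major matrix to a list of positions.

def transform(vertices, matrix):
    """Transform (x, y, z) positions by a 4x4 row-major matrix."""
    out = []
    for x, y, z in vertices:
        v = (x, y, z, 1.0)  # homogeneous coordinate
        out.append(tuple(
            sum(v[i] * matrix[i][j] for i in range(4)) for j in range(3)
        ))
    return out

# An identity matrix leaves vertices untouched.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(transform([(1.0, 2.0, 3.0)], identity))  # [(1.0, 2.0, 3.0)]
```

Pixel shading has no such escape hatch: it runs once per covered pixel rather than once per vertex, and at any real resolution the CPU can't come close.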
 
I don't think that comment is particularly worrying, Shark. It certainly doesn't necessarily imply a GF 4 MX type situation to me yet. There is a lot of room in "not all DX 9 features" for critical DX 9 level functionality to be implemented (i.e., support necessary to encourage adoption of DX 9 functionality moving forward).

...

Regarding shader benchmarking, this has been a question for a while. Rightmark 3D has been one benchmark expected to possibly address some of that concern. I'm pretty sure plenty of shader benchmarks (besides whatever Rightmark and 3dmark show us) will appear throughout the year and spark some interesting benchmarking discussions...
 
Dave H said:
Another issue is that Doom 3 has a relatively low poly count, apparently for performance reasons. Can someone explain this to me? Why would adding geometry stress the D3 engine more than most? After all, the big performance hit of the D3 engine is that everything is calculated per-pixel instead of per-vertex! My only guess is that keeping poly count low helps reduce "overdraw" (from the POV of the light) in the stencil buffer calculations, in which case increasing poly count would seem to be another fillrate hit.
Increasing poly count usually means only a very small fillrate hit, simply because the area covered by those polygons stays roughly the same (average depth complexity increases a bit as you add finer details).

But you pointed it out before: each light source means one or many extra passes (depending on hw capabilities)! So if you have 10 light sources, adding one polygon to a model is roughly the same as adding 10 to another game's model.
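That multiplication, spelled out with illustrative numbers:

```python
# With per-light passes, each triangle added to a model is drawn
# once per light (or more), so geometry cost scales with light count.
# Numbers are illustrative only.

def triangles_drawn(model_tris, num_lights, passes_per_light=1):
    return model_tris * num_lights * passes_per_light

# Adding 1 triangle under 10 lights costs what adding 10 triangles
# would cost a single-pass renderer:
print(triangles_drawn(1, 10))    # 10
print(triangles_drawn(500, 10))  # 5000 draws for a 500-tri model
```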
 
RussSchultz said:
It isn't a waste, you just won't be getting the same performance you could with a faster processor because you'll be vertex bound at some point. Higher resolution, upping the anti-aliasing or anisotropic filtering shouldn't be affected by CPU speed at all and should be essentially "free". (assuming the scene doesn't change when you increase the resolution)

Well, take whatever high-end graphics card you have, load up Dungeon Siege, and compare the game on a KT133 900 MHz Thunderbird with PC133 SDRAM vs. a KT333 (nForce 2) with DDR; set the resolution to 1024x768..full shadows...100% drawing distance..object detail = MAX.

Then you will see what I mean; Dungeon Siege is so platform-limited that the video card is choking..I know, I tried it.
Someone who goes out and spends that much money on a high-end card will not be pleased with their results at all.
So IMO it's a waste for someone to upgrade only their video card without upgrading the platform first.
 
John Reynolds said:
I see shimmering/crawling at 1280x1024 with 4x AA on my 9700 Pro. C&C Generals sounds fun, though; thanks for the details.

Do you expect MSAA to take care of texture shimmering?
Or are you talking about edges ("shimmering" isn't usually the term for those)?
 
Doomtrooper said:
Yes "The Sims" is the best selling title of all time :rolleyes: ...

By "most popular" I meant currently most popular titles; BF 1942 is approaching Counter-Strike numbers...12,000 players last weekend.

Finally, something has arrived to kill it! :devilish:

Actually I just remembered, Evercrap is more popular than CS too.

Doomtrooper said:
Then you will see what I mean; Dungeon Siege is so platform-limited that the video card is choking..I know, I tried it.
Someone who goes out and spends that much money on a high-end card will not be pleased with their results at all.
So IMO it's a waste for someone to upgrade only their video card without upgrading the platform first.

The only issue I have with this is that DS would run fine on the slower system anyway. Also, DS is not at all a good measure of games in general. WCIII isn't likely to be graphics-card-limited either, in all but a couple of rare cases. You don't need a great graphics card to play pseudo-3D top-down games, which are essentially 2D with polygons (and low poly counts at that).

If you're into FPS games it's a whole different story.
 
Er.....

Doomtrooper said:
Well, take whatever high-end graphics card you have, load up Dungeon Siege, and compare the game on a KT133 900 MHz Thunderbird with PC133 SDRAM vs. a KT333 (nForce 2) with DDR; set the resolution to 1024x768..full shadows...100% drawing distance..object detail = MAX.

Then you will see what I mean; Dungeon Siege is so platform-limited that the video card is choking..I know, I tried it.
Someone who goes out and spends that much money on a high-end card will not be pleased with their results at all.
So IMO it's a waste for someone to upgrade only their video card without upgrading the platform first.

While I don't disagree that a newer platform will let a top-end video card really perform, I disagree that it's a waste to just upgrade the card...for now...

I upgraded out of sequence. I couldn't wait, and bought the 9500np to mod. The "9700" is SO much faster than my old GF3 in every game, with 2X the FSAA/Aniso settings and higher resolutions. It really amazed me, and it's breathed a longer lifespan into my computer. Really cool side-effect of this vid-card upgrade. :)

Soyo 7vca PIII 700@945 288MB PC133

As you can see, not the shiniest tool in the shed. :cry:

(edit: fixed quote I hope)
 
Hey Dave H, good going there mate! WaltC is not the only one anymore who writes 1000 word essays on these forums. :D
 
Dave H:
I'll add some to what Xmas said.
It's actually even worse. Every light means that each poly is rendered twice: once for the stencil pass, and once to actually add the light.

And on top of that, you'll have to render silhouette polys. And the worst part: finding where the silhouette is. (There are hacks that do this in a VS, but they are inexact.)

And finally, the silhouette polys of a mesh with small triangles become the most inefficient shape - long and thin. (I haven't tested how much this affects performance, though; it just seems reasonable that such a shape is harder to render.)
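For the curious, the silhouette-finding step described above can be sketched like this. The data layout is a simplification of what a real engine would use (a proper winged-edge or edge-list structure), but the test itself is the standard one: an edge is on the silhouette if exactly one of its two adjacent triangles faces the light.

```python
# Minimal CPU silhouette-edge finder for stencil shadow volumes.
# tris maps a triangle id to (face_normal, center); edges maps an
# edge to the pair of triangle ids that share it.

def facing(normal, tri_center, light_pos):
    """True if the triangle faces the light (dot-product test)."""
    to_light = tuple(l - c for l, c in zip(light_pos, tri_center))
    return sum(n * d for n, d in zip(normal, to_light)) > 0.0

def silhouette_edges(edges, tris, light_pos):
    result = []
    for edge, (a, b) in edges.items():
        fa = facing(*tris[a], light_pos)
        fb = facing(*tris[b], light_pos)
        if fa != fb:  # one triangle lit-facing, the other not
            result.append(edge)
    return result

# Two triangles sharing edge (0, 1): one faces +z, the other -z.
tris = {0: ((0, 0, 1), (0, 0, 0)), 1: ((0, 0, -1), (0, 0, 0))}
edges = {(0, 1): (0, 1)}
print(silhouette_edges(edges, tris, (0, 0, 5)))  # [(0, 1)]
```

Since this walks every shared edge per light per frame, it's easy to see why the per-frame cost of the lookup (and the long, thin extruded quads it produces) worries people.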
 
Doomtrooper, a 9700 would make a huge difference on most gaming platforms 1000 MHz and over; that's what counts, not the fact that it's CPU-limited. It's the end result that counts, and yes, I know some games need all the CPU power they can get.
 
Of course it will, but you will not see the full benefit of the card. At the 9700's debut there were so many posts of disappointment over no huge gains in UT2003 or Dungeon Siege, and they were all from people running sub-1000 to 1000 MHz systems..

People expect to put a new video card in and instantly double their frames..that's the nature of the beast..that's why I always tell people to balance their system as much as possible, and it's what I ensured was done when I had my company.
 
Doomtrooper said:
By "most popular" I meant currently most popular titles; BF 1942 is approaching Counter-Strike numbers...12,000 players last weekend.

Last time I checked, StarCraft was at 40,000 players.

You don't even need 3d acceleration to play that!
 
Doomtrooper said:
Of course it will, but you will not see the full benefit of the card. At the 9700's debut there were so many posts of disappointment over no huge gains in UT2003 or Dungeon Siege, and they were all from people running sub-1000 to 1000 MHz systems..

People expect to put a new video card in and instantly double their frames..that's the nature of the beast..that's why I always tell people to balance their system as much as possible, and it's what I ensured was done when I had my company.

That's fine, but if you HAD a 9500 and got 40 fps, and moved to a 9700, you could enable 2xAA and still get 40 fps.

That is the "free" part I'm talking about.

Yes, you could get better results with a faster processor, but a 700 MHz processor won't stop you from turning on AA and getting "twice" the performance (i.e. the same framerate at 2x AA).
 