Xbit on R350: they say 16 TMUs + DDRII

DemoCoder said:
Doomtrooper said:
I personally doubt an NV30 will be able to outperform a Radeon 9700 in performance where it counts (eye candy: FSAA and anisotropic filtering enabled), especially at high resolutions >1024 x 768... I mean, that's what we pay $400 for, correct?

Just what I need, more framerate in counter-strike.

You never answered my question I asked a long time ago: Would you buy a Radeon which has identical performance to the 9700 PRO and identical AA/ANISO IQ, but half the price, because it is only DirectX7 capable?

Seems to me that either you care about DX9 performance, or you don't. If you do, then the NV30's DX9 performance becomes a relevant benchmark measure. If you don't, then you're admitting that you're paying a lot of extra $$$ for a "token" feature which will never really be used.

Of course, I suspect that what you really care about, and what's really relevant, is whatever feature the 9700 does faster. If feature X is faster, you will claim it is a feature that matters. If feature Y is slower than on NV30, you will claim it is a feature that doesn't matter.

And whichever he does, you will be there to disagree.
Who cares.
 
DemoCoder said:
Just what I need, more framerate in counter-strike.

Sounds like someone is trying to find a scapegoat for a particular design in order to discount its feature set.

Seems to me that either you care about DX9 performance, or you don't. If you do, then the NV30's DX9 performance becomes a relevant benchmark measure.

You summed it up to perfection. If I shell out $400+ for a video card, it had better excel at more than benchmark measures. That means the stack of modern games (WC3, SOFII, Rally Sport Challenge, NFS-PU2, UT2003, and the upcoming Doom 3) had better see the impact.

Some folks prefer to categorize "all the games currently sitting on the shelf that people are playing" as "Counter-Strike", and it's rather unfortunate.

I couldn't care less how fast Card A runs IHV A's designed DX9 benchmark. I would also need to see at least 3 or 4 DX9 titles, to even come close to the 25-30 DX8.0+ titles revolving every 2 months at current, before some sort of shader superiority counts as value added to a purchasing decision.

It's like us folks in California: we don't usually rush out and buy mud+snow rated tires since, hey, it only rains here a few days out of the year. Computer hardware has sunk to the level of making acid+meltdown rated tires and showing benchmarks of such tires being driven on rovers on Mars. As I don't plan on driving on Mars anytime soon, and the tread on these tires will be well worn out before consumer shuttles to Mars are available, I somehow don't see this being a big need for this generation of product. Now, if shuttles to Mars are suddenly released tomorrow, this could all change...
 
DemoCoder said:
Just what I need, more framerate in counter-strike.

This is a specious assertion. I couldn't play Morrowind on my Ti4400 with 4x AA and 8x AF like I can on my 9700. The same likely goes for other fairly new games, such as NFS:HP2, NOLF2, Mafia, etc.

Seems to me that either you care about DX9 performance, or you don't. If you do, then the NV30's DX9 performance becomes a relevant benchmark measure.

Only if you plan on owning the board through 2003. Which, sure, there are people out there who, after shelling out $300+ for a video card, can't or won't upgrade it again for several years.

Of course, I suspect that what you really care about, and what's really relevant, is whatever feature the 9700 does faster. If feature X is faster, you will claim it is a feature that matters. If feature Y is slower than on NV30, you will claim it is a feature that doesn't matter.

No comment. 8)
 
DemoCoder said:
Doomtrooper said:
I personally doubt an NV30 will be able to outperform a Radeon 9700 in performance where it counts (eye candy: FSAA and anisotropic filtering enabled), especially at high resolutions >1024 x 768... I mean, that's what we pay $400 for, correct?

Just what I need, more framerate in counter-strike.

OMG, it's really stupid. Frankly, honestly.

You can't be serious...
 
John Reynolds said:
DemoCoder said:
Just what I need, more framerate in counter-strike.

This is a specious assertion. I couldn't play Morrowind on my Ti4400 with 4x AA and 8x AF like I can on my 9700. The same likely goes for other fairly new games, such as NFS:HP2, NOLF2, Mafia, etc.

Agreed.
 
I do agree with DemoCoder's opinions as to caring or not caring about DX9 performance. As a gamer, you either take the card for what it can give you today, or for what it will theoretically be capable of tomorrow. Personally, I believe a good deal will give a combination of both. Even though the R300 meets this mark, let's not count the NV30 out just yet.
 
John Reynolds said:
DemoCoder said:
Just what I need, more framerate in counter-strike.

This is a specious assertion. I couldn't play Morrowind on my Ti4400 with 4x AA and 8x AF like I can on my 9700. The same likely goes for other fairly new games, such as NFS:HP2, NOLF2, Mafia, etc.

DemoCoder's statement was about Doomtrooper's doubts about the NV30 outperforming the R9700 with FSAA + aniso.

As I interpret his statement, he's simply saying that he doesn't need any more framerate than the R9700 gives (since that's the card he has and what Doomtrooper was comparing it to), and that DX9 performance will then be a very important benchmark.
 
All I'm stating is simply that the DX9 feature set is not a big seller for me. I was very excited about the Radeon 8500 and all the new advanced features it brought to the table...

The reality is, games today that support any of these features are few and far between. Neverwinter Nights just added water effects for 8500-and-up cards (what a difference that made)... UT2003's pixel shader support is minimal, etc.

So for myself, I look at things differently now: I will purchase a card that will carry me through the gaming year with great features that will actually be used in that life span. To me, a 9500 Pro is just what the doctor ordered.

I personally don't believe a 128-bit bus card can outperform a 256-bit bus card in AA, and of course in final IQ output... but I've been wrong before. :LOL:
 
128-bit bus, 256-bit bus, 0.13 micron, 0.15 micron....

They're a means to an end, that is all.

Bandwidth, clockspeed, and design complexity are those ends.

As far as nv30 and r300:

The real significance of 128-bit versus 256-bit is ~16 GB/s at DDR-II protocol efficiency versus ~19 GB/s at DDR-I protocol efficiency. What use is made of this bandwidth is even more significant still.

The real significance of 0.13 micron versus 0.15 micron is 500 MHz w/ ~120M transistors versus 325 MHz w/ ~110M transistors. What use is made of these transistors is even more significant still.

JMO.
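The bandwidth figures above follow from simple arithmetic: bus width in bytes times effective transfers per second. A minimal sketch, assuming the rumored clocks from this thread (500 MHz DDR-II on a 128-bit bus for NV30, ~310 MHz DDR on a 256-bit bus for the 9700 Pro; these are forum figures, not confirmed specs):

```python
def bandwidth_gb_s(bus_bits, mem_clock_mhz, transfers_per_clock=2):
    """Peak memory bandwidth in GB/s (1 GB = 10^9 bytes).

    bus_bits: memory bus width in bits (128 or 256 here)
    mem_clock_mhz: base memory clock; DDR moves data twice per clock
    """
    bytes_per_transfer = bus_bits / 8
    transfers_per_sec = mem_clock_mhz * 1e6 * transfers_per_clock
    return bytes_per_transfer * transfers_per_sec / 1e9

# Rumored figures from the thread, not confirmed specs:
nv30 = bandwidth_gb_s(128, 500)   # 128-bit DDR-II at 500 MHz -> 16.0 GB/s
r300 = bandwidth_gb_s(256, 310)   # 256-bit DDR at ~310 MHz  -> ~19.8 GB/s
print(nv30, r300)
```

This is raw peak bandwidth only; as the post says, protocol efficiency and how the bandwidth is actually used matter at least as much.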
 
Hmm, I know that clock for clock the R300 should be faster than the NV30, but if we take a 128-bit card... a 9500 Pro!... and double its clocks to 500/500 (*cough* NV30 *cough*), I think we can say that a 128-bit card can beat a 256-bit card (9700 or 9700 Pro).
 
The card's speed or framerate is not everything.
If the difference between two cards is less than 40% or 50%, I don't think it's important at all.

The purpose of a gfx-card is to output a signal to the monitor. That signal is the picture you will have to look at.
The quality of that signal is what is really important.
A card with a low-quality HF filter is a poor card regardless of how fast it is.
If my eyes hurt when I try to read text at 1600*1200, and it's so blurry at 2048*1536 that I can't read it at all, then it makes no difference how fast that card is.
The output signal must be undistorted and crystal clear.

The card will of course have to be fast enough to support high levels of FSAA and anisotropic filtering.
Features intended to improve the image quality in DX9 (and earlier DX versions, and OGL) must of course be supported, and be implemented in a way that gives the user the highest-quality picture possible.
Implementations and designs intended to give high framerates with those features enabled, but that sacrifice quality, are not very good.
The purpose of a feature is to give the user a better picture, not to win benchmark tests with it on.

The fact that most people (magazines, hardware sites) that test and compare 3D cards only focus on speed and framerate is not good for the consumer or for the progress of 3D technology.
It means that companies will be afraid to implement new features that would improve the picture quality if they might make the card slower.
Cards will be designed to be as fast as possible, and if money needs to be saved, it will be the picture quality that suffers.
The driver teams will have to make new drivers that improve framerates, not drivers that make the picture look better.

I will try to compare a GFfx and a Radeon 9700 Pro. The card with the best picture will be the best card, IMO.
Because if it looks better, it IS better. 20% faster or slower is not that important.

Regards!
 
Well, is it 40-50% or 20% that doesn't matter?
You state one at the beginning of your comments, the other at the end.

And honestly, both are wrong.

If the difference is 50%, that's the difference between 40 fps and 60 fps - vastly different in gameplay. One is kinda choppy in some games, the other is almost always OK.
And even 20% can be the difference between being able to enable that one last feature or not, and still retaining playability.
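To make the percentage argument concrete, here is a quick sanity check of those numbers (the function names are just illustrative):

```python
def faster_fps(base_fps, percent_faster):
    """Frame rate after a given percentage speedup (e.g. 50 -> 50% faster)."""
    return base_fps * (1 + percent_faster / 100)

def frame_time_ms(fps):
    """Time budget per frame in milliseconds."""
    return 1000 / fps

# A 50% difference really is 40 fps vs 60 fps:
print(faster_fps(40, 50))                     # 60.0
# ...which in frame-time terms is 25 ms vs ~16.7 ms:
print(frame_time_ms(40), frame_time_ms(60))
# And 20% moves 40 fps to 48 fps - often enough headroom for one more feature:
print(faster_fps(40, 20))                     # 48.0
```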
 
It could be 100 fps compared to 150 fps. It depends on how fast the cards are and on the game.

If you enable a feature on one card and the picture looks almost the same as with the feature off, then why use the feature?
Perhaps the performance hit is small when you use the feature.
But if that's the result of an implementation that gives the picture quality low priority, I wouldn't use the feature.
It's pointless, IMO.

A different card might take a bigger performance hit but the picture looks much better with the feature enabled.
Both compared to when the feature is disabled and compared to the other card.
Unless the card is terribly slow I would prefer it.

Framerate in itself is not important.
A feature on a card is no better than its implementation.
If I only want high framerates, I can run the game at 640*480.
The reason I use 1600*1200 instead is that I want better picture quality.
The same is true for any feature I might enable.

If one card (slower) has better image quality at 1024*768 than another card (faster) has at 1600*1200, then the slower card is better.
It's better to use the slower card at 1024*768 in that case, provided the slower card is still fast enough at 1024*768.

High resolutions and features are pointless unless they improve the picture quality.
The problem is that some people forget that. They only compare numbers.
And if companies think that high numbers sell better than high image quality, that's very wrong and will hurt true progress.

Of course it's true that you need good framerates if you want to enjoy and play a game.
I agree with that.
But there are a lot of ways to get good enough framerates. Some good ways and some bad ways.

Regards!
 