ATI engineering must be under a lot of strain. MS funding?

Ailuros said:
That I want to see. I haven't yet seen a 5200Ultra able to outperform an NV25 in preliminary Doom3 benchmarks; I'll be generous and leave the 64bit versions of the former out of the discussion.
It doesn't need to outperform the NV25. The NV25 has quite a lot more fillrate. All I'd like to see is it outperform cards that perform similarly to it in DX8 games. The GeForce4 MX would be an okay comparison, but I'm not sure that's really fair. A better comparison may be against a product in ATI's R2xx line (I'm not sure...is there a card there that performs similarly in DX8 games? I would hope so...for the 5200's sake...).

As for real dx9 games, only time will tell. But to be honest, I don't even expect today's high-end dx9 cards to be able to cope adequately with true dx9 games, let alone a budget iteration of those.
"Cope" can take different meanings. Remember that DOOM3 is supposedly designed with a GeForce2 MX or GeForce SDR in mind as the minimum. Many already have the feeling that these cards will not be enough to feel the full experience of DOOM3. Remember that this is coming from a mindset of people who want to be able to play games at high resolution, with high levels of FSAA and anisotropic, and with all effects turned on. Not all users need all of these things, and one who ons the bare minimum to play a game certainly won't be able to do all of these things.

I'd say someone is lucky if he gets 30fps with high detail at 1024*768 in Doom3 with a 5200Ultra, unless we mean some weird version of point-sampling AF here.
I wouldn't expect a 5200 Ultra to be playing at 1024x768 in DOOM3. I would expect somewhere between 640x480 and 800x600 with all details turned on (and this probably without FSAA/anisotropic...all details simply meaning full shaders, meaning full specular highlights, or whatever other unannounced things there may be).

Update:
I just looked at Anand's review of the 5200 Ultra and 5600 Ultra. It looks to me like the Radeon 9000 Pro would be a good card to compare against the 5200 Ultra. The only issue is that the results from just that one review are all over the place, with the 9000 sometimes coming out way ahead and the 5200 sometimes coming out way ahead. So, we'd need a number of data points to show a change in trend.

So, I propose a specific definition: The GeForce FX 5200 Ultra can be described as a "DX9 card" if its performance becomes noticeably higher than that of the Radeon 9000 Pro in the majority of games that use DirectX 9-level effects not to do additional math, but to improve performance.
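Mechanically, the test would look something like this (every fps number below is a made-up placeholder, since the benchmarks in question don't exist yet):

#include <stdio.h>

/* The proposed definition, mechanically: across games that use DX9-level
   effects for speed rather than for extra math, does the 5200 Ultra beat
   the 9000 Pro in a majority of them?  All fps values are placeholders. */
int main(void)
{
    const double fx5200u[] = { 41.0, 28.5, 33.2, 47.8 };  /* hypothetical fps */
    const double r9000p[]  = { 38.0, 31.0, 30.1, 45.0 };
    const int n = 4;
    int i, wins = 0;

    for (i = 0; i < n; i++)
        if (fx5200u[i] > r9000p[i]) wins++;

    printf("5200 Ultra ahead in %d of %d titles -> %s\n",
           wins, n, 2 * wins > n ? "call it a DX9 card" : "don't");
    return 0;
}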

Second edit:
Yes, I realize that this may be hard to really see as a definition. An easier one may be to just say that it's a "DX9 card" if its performance is still acceptable in games (bound to be more numerous) that use DX9-level shaders to actually do more math (and hence will reduce performance). The only problem is that this is a highly subjective judgement (an example is that I thought the 32-bit color on the original TNT was quite usable). I think the only thing close to an objective judgement would be the one I listed above. Hopefully we'll have enough sample points to apply it...
 
Chalnoth said:
So, I propose a specific definition: The GeForce FX 5200 Ultra can be described as a "DX9 card" if its performance becomes noticeably higher than that of the Radeon 9000 Pro in the majority of games that use DirectX 9-level effects not to do additional math, but to improve performance.

And from here on out the waters get murky. What kind of precision are we talking about here? Surely it should be obvious that if the shader uses all FP32, the GeForce FX 5200 Ultra almost certainly won't outperform the R9000. Personally I feel a mix of FP32 and partial precision would be very fair. Or do you feel such a comparison would only be fair if only partial precision was used? What if FX12 is used, which is not part of the DX9 spec? Is this still a valid comparison?
 
It doesn't need to outperform the NV25. The NV25 has quite a lot more fillrate. All I'd like to see is it outperform cards that perform similarly to it in DX8 games. The GeForce4 MX would be an okay comparison, but I'm not sure that's really fair. A better comparison may be against a product in ATI's R2xx line (I'm not sure...is there a card there that performs similarly in DX8 games? I would hope so...for the 5200's sake...).

Let me get this straight: a user should upgrade to a 5200 to get barely playable (more or less) 30fps in Doom3, and a tad better performance than a GF4MX in all other applications?

The NV25 IS a dx8.1 card, and they can be had nowadays for prices similar to the 5200's. And yes, for the immediate future one can make far better use of the additional texel fillrate than of the 5200's dx9 features.

Remember that DOOM3 is supposedly designed with a GeForce2 MX or GeForce SDR in mind as the minimum. Many already have the feeling that these cards will not be enough for the full experience of DOOM3. Remember that this is coming from the mindset of people who want to be able to play games at high resolution, with high levels of FSAA and anisotropic, and with all effects turned on. Not all users need all of these things, and one who owns the bare minimum to play a game certainly won't be able to do all of these things.

Just as the GF2MX today is capable of playing UT2k3 adequately? I'd urge you to try one out, especially in the heavier maps, and experience the glory of single-digit framerates.

If one is to play recent games at 640x480, he'd be better off investing his money in a console, as harsh as that may sound. Of course one can also turn the stencil shadows off and gain quite a bit of performance, but I'm not so sure there's any point in playing Doom3 at that stage.

Yes, I realize that this may be hard to really see as a definition. An easier one may be to just say that it's a "DX9 card" if its performance is still acceptable in games, which are bound to be more numerous, that use DX9-level shaders to actually do more math (and hence will reduce the performance). The only problem is that this is a highly subjective judgement (an example is that I thought the 32-bit color on the original TNT was quite usable). I think the only thing close to an objective judgement would be the one I listed above. Hopefully we'll have enough sample points to realize it...

For all of those paradigms, past and today's, there's only one thing I have to say: not enough fillrate/bandwidth/polygon throughput, and whatever else one could list.
 
StealthHawk said:
And from here on out the waters get murky. What kind of precision are we talking about here? Surely it should be obvious that if the shader uses all FP32, the GeForce FX 5200 Ultra almost certainly won't outperform the R9000. Personally I feel a mix of FP32 and partial precision would be very fair. Or do you feel such a comparison would only be fair if only partial precision was used? What if FX12 is used, which is not part of the DX9 spec? Is this still a valid comparison?
Of course each case would have to be examined individually, but I will state that I'm using "DX9-level" in the loose sense. That is, it can include similar functionality in OpenGL. So, yes, FX12 is definitely fair game.
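To make those precision grades concrete, here's a quick C sketch of what the formats can and can't hold. The FX12 encoding is an assumption on my part (the commonly cited 12-bit signed fixed point covering [-2, 2) in 1/1024 steps), and the FP16 rounding is likewise only approximated:

#include <math.h>
#include <stdio.h>

/* Quantize to the assumed FX12 format: 12-bit signed fixed point, [-2, 2). */
static float to_fx12(float x)
{
    float q = floorf(x * 1024.0f + 0.5f);   /* round to the nearest 1/1024 step */
    if (q >  2047.0f) q =  2047.0f;         /* clamp to the representable range */
    if (q < -2048.0f) q = -2048.0f;
    return q / 1024.0f;
}

/* FP16 (s10e5) approximated by rounding the significand to 11 bits;
   denormals and overflow are ignored, which is fine for a demo. */
static float to_fp16(float x)
{
    int e;
    float m = frexpf(x, &e);                /* x = m * 2^e, 0.5 <= |m| < 1 */
    m = floorf(m * 2048.0f + 0.5f) / 2048.0f;
    return ldexpf(m, e);
}

int main(void)
{
    const float vals[] = { 0.0001f, 1.000244f, 3.5f };
    int i;

    for (i = 0; i < 3; i++)
        printf("FP32 % .6f -> FP16 % .6f, FX12 % .6f\n",
               vals[i], to_fp16(vals[i]), to_fx12(vals[i]));
    return 0;
}

The point: FX12 flushes tiny values toward zero and clamps anything beyond 2, while FP16 keeps relative precision across its exponent range. That's roughly why FX12 is cheap and fast, and also why it sits below the DX9 spec.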

I do have to say, though, that I really don't like this use of DirectX versions for identifying features. I'll see if I can discipline myself to use other terminology. I was thinking along the lines of calling it "3rd gen" (to match NV3x and R3xx, and also to mark the third generation since the introduction of the term "GPU").
 
Here we are arguing semantics over whether a half-a**ed Porsche deserves to be called a Porsche, when it may look like one but carries the engine of a Beetle :rolleyes:
 
Ailuros said:
Let me get this straight: a user should upgrade to a 5200 to get barely playable (more or less) 30fps in Doom3, and a tad better performance than a GF4MX in all other applications?
I never said that. I never made any connection to what users should upgrade to or why. This is something different entirely.

The NV25 IS a dx8.1 card, and they can be had nowadays for prices similar to the 5200's. And yes, for the immediate future one can make far better use of the additional texel fillrate than of the 5200's dx9 features.
Well, this is a good question. I did really like my GeForce4 Ti 4200 (it recently died, unfortunately...). But, as I said, I made no connection to what people should buy and whatnot.

My stance on that has always been that people should buy a video card that complements their current games. Any other type of purchase would just be based upon speculation, and there is always the chance that the speculation will not pan out. If a new video card that you purchase now improves the games that you play now, then you will have gotten part of your money's worth. It will, in all likelihood, not be horrible compared to the competition in a short time, so you will get the rest of your money's worth there.

On a personal note about the current state of the market, I don't like it. To be frank, I am somewhat disappointed with nVidia's current product line. And I have had enough hassles with the ATI Radeon 9700 Pro that I don't like their product line, either. So I just wouldn't make a purchase right now.

Just as the GF2MX today is capable of playing UT2k3 adequately? I'd urge you to try one out, especially in the heavier maps, and experience the glory of single-digit framerates.
I played it just fine on my little brother's computer. He was using the integrated video of the original nForce, with one stick of RAM. You just have to turn the detail settings down, and it plays just fine.

If one is to play recent games at 640x480, he'd be better off investing his money in a console, as harsh as that may sound. Of course one can also turn the stencil shadows off and gain quite a bit of performance, but I'm not so sure there's any point in playing Doom3 at that stage.
You won't be able to turn the stencil shadows off.

And there are still many more reasons than high resolution to go with a PC.

For all of those paradigms, past and today's, there's only one thing I have to say: not enough fillrate/bandwidth/polygon throughput, and whatever else one could list.
Fillrate/bandwidth can be scaled hugely by changing resolution, anisotropic settings, and FSAA settings. Polygon throughput will (hopefully) be scalable through the use of higher-order surfaces within a couple of years. In the meantime, the poly throughput of even low-end cards is quite good compared to what today's games put them through. I don't see why that needs to change.
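To put rough numbers on that scaling (the 3x overdraw and 60fps target here are just illustrative assumptions):

#include <stdio.h>

/* Back-of-the-envelope fill demand: pixels * target fps * assumed overdraw.
   Supersampling-style FSAA would multiply the result again. */
int main(void)
{
    const int res[][2] = { {640, 480}, {800, 600}, {1024, 768} };
    int i;

    for (i = 0; i < 3; i++) {
        double mpix = (double)res[i][0] * res[i][1] * 60.0 * 3.0 / 1e6;
        printf("%4dx%-4d -> ~%.0f Mpixels/s\n", res[i][0], res[i][1], mpix);
    }
    return 0;
}

Dropping from 1024x768 to 640x480 cuts the fill demand by about 2.6x before you touch any other setting.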
 
Ailuros said:
Here we are arguing semantics wether a half-a**ed Porsche deserves to be called a Porsche, while it may look like one but carries the engine of a Beetle :rolleyes:
Um, no.

The third generation featureset (the term I'll now try my best to use for the common NV3x and R3xx featureset) doesn't define performance. It defines programming-side features.
 
Chalnoth said:
What we want, I should think, is an improvement in the way games play on our mid-to-high-end video cards. Now, game developers may add effects to games that only those with high-end video cards will be able to see. But they will not let something that fundamentally changes the way a game is played remain optional. It will have to be required.

This is already here with UT2K3. If you compare the low-end version of the game with the high-end version, there are significant optional features missing that have a great effect on gameplay. Things like water, fog, smoke, grass, etc. *greatly* affect how the game is played. Sure, you don't *need* these things, but they are part of a more complex environment that is designed to impact gameplay. Start removing all of those, and you might as well go all the way to vector graphics.

By your definition, these optional features are not necessary to have the game running, but they *do* have a real impact on gameplay. Epic have allowed these features to be optional (even though some consider their absence a cheat) in order to allow those with low performing cards to still enjoy reasonable and playable framerates - and of course not to cut down their target audience by insisting on high end hardware only.
 
Well, this is a good question. I did really like my GeForce4 Ti 4200 (it recently died, unfortunately...). But, as I said, I made no connection to what people should buy and whatnot.

OT: funny how my Ti4k4 died a completely unexpected death too, after 13 months.

Let's stay on topic here: you mentioned a dx8.1 accelerator, and I immediately thought of the one I considered the best of the available bunch.

I played it just fine on my little brother's computer. He was using the integrated video of the original nForce, with one stick of RAM. You just have to turn the detail settings down, and it plays just fine.

Ever heard of the bonus packs? Try those, or some very heavy 3rd-party maps. If you played the rather light standard indoor maps, it's natural that you'll get q3a-like performance, heh. Running around just shooting walls helps too.

Have a look here at how "great" an NV34 performs in a custom demo with 9 bots on CTF-Face3:

http://www.3dcenter.de/artikel/r9500+9700+9800_vs_gffx5200+5600+5800/zj_bench_ut2003_a.php


Fillrate/bandwidth can be scaled hugely by changing resolution, anisotropic settings, and FSAA settings. Polygon throughput will (hopefully) be scalable through the use of higher-order surfaces within a couple of years. In the meantime, the poly throughput of even low-end cards is quite good compared to what today's games put them through. I don't see why that needs to change.

The NV34's raw numbers, as in fillrate/bandwidth etc., are still pathetic for a dx9.0 card. And where is the relevance here to what future cards will sport, anyway?

And I really can't get rid of the feeling that you have in mind only the 5200 Ultras and seem to intentionally ignore the 64bit versions of those. In that case the numbers scale down from pathetic to abysmally pathetic.

Finally, we are NOT talking about today's games but about future dx9 games, over which you guys have caught yourselves debating in circles as to whether feature A or B makes game X or Y a dx9 game. I doubt a 5200 will stand a chance even in a real dx8.1 game, but that's because I'm thinking a bit further into the future.

You can argue back and forth for pages about the same issue; it doesn't change the fact that the NV34's dx9 capabilities are merely checkbox features.
 
Chalnoth said:
Ailuros said:
Here we are arguing semantics over whether a half-a**ed Porsche deserves to be called a Porsche, when it may look like one but carries the engine of a Beetle :rolleyes:
Um, no.

The third generation featureset (the term I'll now try my best to use for the common NV3x and R3xx featureset) doesn't define performance. It defines programming-side features.

I know, performance is a secondary consideration when it comes to 3D. After all, everyone is fond of slideshows at low resolutions... oh please.
 
Chalnoth said:
Hmmm...Doom3's NV2x/R2xx and higher codepaths are "gimmicks" then?
Yes.
They will improve the way the game looks, as well as the performance.
They will not significantly alter the way the game is played.

Well then, if those code paths are gimmicks, and the "base" code path (ARB-1) doesn't touch DX9 capability at all....

Then Doom3 is not a DX9 game even by your own definition.
 
Bouncing Zabaglione Bros. said:
This is already here with UT2K3. If you compare the low-end version of the game with the high-end version, there are significant optional features missing that have a great effect on gameplay. Things like water, fog, smoke, grass, etc. *greatly* affect how the game is played.
Heh, no they don't. They greatly affect how the game looks. Being able to see these effects doesn't change how one aims and shoots at other players.

Unless you want to call stopping and looking at the environment an effect on gameplay :)
 
Chalnoth said:
Unless you want to call stopping and looking at the environment an effect on gameplay :)

It's not "stopping and looking" that effects game play. It's reducing visibility of your opponents.

Feigning death or even crouching in tall grass is certainly more strategic than doing so on flat terrain....
 
Joe DeFuria said:
Well then, if those code paths are gimmicks, and the "base" code path (ARB-1) doesn't touch DX9 capability at all....

Then Doom3 is not a DX9 game even by your own definition.
It uses DX9 to improve the way the game looks. That's all there is to it.

One additional thing: I now seem to remember a comment that the Radeon 9700 was faster with the R2xx codepath, but at slightly lower precision than with the R3xx codepath. Obviously, not all of the improvements from using the 3rd generation features will be about performance alone.
 
Chalnoth said:
Joe DeFuria said:
Chalnoth, do you consider TR a "DX9 game?"
How many times have I answered that question in this thread?

Directly? Never, at a quick glance; that's why I asked. You've clouded your response by using language such as "gimmicks", "so-called DX9" apps, etc.

I would infer that you are saying:

TR's and Doom3's uses of DX9 features are completely consistent and comparable with one another, so to call one a DX9 game, you must call the other a DX9 game. Correct?
 
Chalnoth said:
Essentially, yes.

Right. Then essentially, I disagree.

They use DX9 features/paths fundamentally differently from one another, so it is completely valid to say that one is, and one is not, a DX9-level app.

Doom running on ARB2 is not fundamentally different, graphics-output-wise, from Doom running on the R200 path. You might be able to look at two pictures, side by side, and point out some minor difference here or there.

Carmack's most recent summation from his .plan (my emphasis added):

John Carmack said:
The R300 can run Doom in three different modes: ARB (minimum extensions, no specular highlights, no vertex programs), R200 (full featured, almost always single pass interaction rendering), ARB2 (floating point fragment shaders, minor quality improvements, always single pass).

The NV30 can run DOOM in five different modes: ARB, NV10 (full featured, five rendering passes, no vertex programs), NV20 (full featured, two or three rendering passes), NV30 (full featured, single pass), and ARB2.

You can run the R300 core on the R200 (DX8) code path, and in Carmack's terms you are running "full featured". He further qualifies ARB2 as "minor quality improvements" over the R200 (DX8) path.

Again, there just is no significant difference between the "DX9" paths and the "full featured" DX8 R200, NV20, or even DX7 NV10 path. Of course, "significant" is subjective, but I'll take the developer's word for it.

With TR, you are not running "full featured" unless you have DX9 hardware.
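Purely as a hypothetical sketch of the fallback scheme Carmack describes (this is not id's code; the extension names are just the real OpenGL extensions each path roughly corresponds to):

#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

typedef enum { PATH_ARB, PATH_NV10, PATH_NV20, PATH_R200,
               PATH_NV30, PATH_ARB2 } renderpath_t;

/* Naive substring check against the driver's extension string. */
static int has_ext(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts != NULL && strstr(exts, name) != NULL;
}

/* Try the most capable path first, then fall back toward plain ARB. */
static renderpath_t pick_render_path(void)
{
    if (has_ext("GL_ARB_fragment_program"))  return PATH_ARB2;  /* FP fragment shaders */
    if (has_ext("GL_NV_fragment_program"))   return PATH_NV30;
    if (has_ext("GL_ATI_fragment_shader"))   return PATH_R200;
    if (has_ext("GL_NV_texture_shader"))     return PATH_NV20;
    if (has_ext("GL_NV_register_combiners")) return PATH_NV10;
    return PATH_ARB;  /* minimum extensions: multiple passes, no specular */
}

int main(void)
{
    /* In a real program this runs after GL context creation; with no
       context, glGetString returns NULL and we land on PATH_ARB. */
    printf("chosen path: %d\n", (int)pick_render_path());
    return 0;
}

The point being: the engine always has a "full featured" DX8-class path to land on, which is exactly why the ARB2 path reads as a quality refinement rather than a requirement.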
 
All I see is that Core never wrote some of the effects for PS 1.x.

From what I've heard, one can fully replicate Renderman in PS 1.1. The main problems are precision and performance, so improved precision and performance are precisely the things that going for PS 2.0 (or the OpenGL equivalent) will get you.

Whether or not you bother to write a fallback is something else entirely...
 