ATI engineering must be under a lot of strain. MS funding?

Bouncing Zabaglione Bros. said:
I don't agree with ATI renaming the 8500 core as a 9000 series card. I think that is misleading. However, AFAIK, they are not saying that it is a DX9 card,
ATI first launched their naming scheme to mesh with DirectX. Remember?
Radeon 7500 = DX7
Radeon 8500 = DX8
...etc.

This naming scheme was actually publicly announced. It wasn't even just implied. They then broke this naming scheme with the 9000 (and a few later cards). It is the exact same thing.
 
I don't think that ATI actually ever claimed that the 7500, 8500, etc. related to the DX version, did they? I seem to remember that it was 'hinted', but they then claimed the initial number related to technology generation when the 9000 was released.

I don't have any proof of this (and can't be bothered to look for a source!) - these are merely my recollections.

I think most people can agree that the GF4MX was named as such so people would associate it with the full GF4s, and similarly, the Radeon 9000 was named so it would be associated with the 9500 & 9700. Misleading in both cases.
 
Chalnoth said:
OpenGL guy said:
Tomb Raider: Angel of Darkness is DX9.
Fine. But one is almost as pointless as zero.

A sample size of one means that this game tells us next to nothing.
First you say there are no DX9 games. I point out there is. Now you say that "one is almost as pointless as zero". Just can't make you happy.
 
Fine. But one is almost as pointless as zero.

A sample size of one means that this game tells us next to nothing.

Along with all the DX9 benchmarks out there it tells us quite a lot actually, you just don't want to see it.

Tell you what though, let's turn this question around: How many DX9 titles would it take to convince you either way?
 
Heathen said:
Fine. But one is almost as pointless as zero.

A sample size of one means that this game tells us next to nothing.

Along with all the DX9 benchmarks out there it tells us quite a lot actually, you just don't want to see it.

Tell you what though, let's turn this question around: How many DX9 titles would it take to convince you either way?
Let's see quite a few, so we know whether:
a) Developers are targeting the 9600 as the lowest common denominator
b) Developers are targeting the 5200 as the lowest common denominator

There's no doubt that the 5200 performs slower than the 9600. The real question is how much will it matter? One or two titles won't tell us that. A larger sampling of titles over the upcoming season or two will.
 
Just an FYI: the DX9 sample being discussed at the moment chooses to disable all DX9 features by default for the 5200 and runs it as it would a 9000/9200-class board (PS1.4).
 
DaveBaumann said:
Just an FYI: the DX9 sample being discussed at the moment chooses to disable all DX9 features by default for the 5200 and runs it as it would a 9000/9200-class board (PS1.4).
Which DX9 sample are we talking about? Tomb Raider?
 
OpenGL guy said:
Fine. But one is almost as pointless as zero.

A sample size of one means that this game tells us next to nothing.
First you say there are no DX9 games. I point out there is. Now you say that "one is almost as pointless as zero". Just can't make you happy.
Fine. I'll explain exactly what I mean.

For anybody to draw a good conclusion about so-called "DX9 games," one needs to do a few things:

1. Have a definition. It could mean a few different things. For the purposes of this thread I'll make it simple: games that make significant use of DX9-generation shaders (which will include, for example, DOOM3).

2. Have a significant number of games fitting that definition. Depending on how much the games coincide with one another, more or fewer may be needed. But one is always going to be too few. At least three would be necessary for any kind of trend to be apparent.

3. If a pattern does emerge, alternative explanations must first be ruled out before making a definitive conclusion about the trend of "DX9 games." One example would be lack of focus on the PC version of the game, in the context of games developed for both a console and the PC. Another issue may be just a low-budget, quick game. Remember, for example, the Test Drive title (I think it was 6?) that first showed the T&L of the original GeForce in such a poor light? It turned out that was merely due to the game developers not drawing enough triangles with each call.
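To put rough numbers on that batching point (the figures below are invented for illustration, not anything measured from Test Drive or a real driver): with a fixed CPU cost per draw call, the frame becomes CPU-bound long before T&L is ever stressed if each call only submits a handful of triangles.

#include <cstdio>

int main() {
    const double callOverheadUs = 50.0;    // hypothetical CPU cost per draw call (made up)
    const double triangleCostUs = 0.05;    // hypothetical GPU cost per triangle (made up)
    const int    totalTriangles = 100000;  // triangles in the scene

    // Compare tiny batches against reasonably sized ones.
    for (int trianglesPerCall : {4, 500}) {
        int calls = totalTriangles / trianglesPerCall;
        double cpuUs = calls * callOverheadUs;
        double gpuUs = totalTriangles * triangleCostUs;
        std::printf("%4d tris/call: %6d calls, CPU %9.0f us, GPU %9.0f us\n",
                    trianglesPerCall, calls, cpuUs, gpuUs);
    }
    return 0;
}

With those made-up numbers the GPU work is identical in both cases; only the call overhead changes, which is why the benchmark made the hardware T&L look bad through no fault of the chip.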

Regardless, I do have to say one thing. If there is one DX9 game, while it means close to nothing, it's not quite nothing. How does the 5200 do in the latest Tomb Raider?
 
Dave suggests they turn off the DX9 features if it's a 5200.

Though I can't fathom why they'd use PS1.4 as a fallback, which is just as slow on the 5200... (or so I thought the data showed for the 5800, so I'm extrapolating a bit)
 
RussSchultz said:
Dave suggests they turn off the DX9 features if it's a 5200.

Though I can't fathom why they'd use PS1.4 as a fallback, which is just as slow on the 5200... (or so I thought the data showed for the 5800, so I'm extrapolating a bit)
Yeah, read that afterwards, but that's only by default. Shouldn't there be a way to turn the features on?

Regardless, I really don't like people using Tomb Raider in these comparisons of shader performance without any knowledge about how Tomb Raider implements the various technologies.

For example, is more processing done with DX9 effects enabled? Or are the same effects done for higher performance?
 
Chalnoth said:
For the purposes of this thread I'll make it simple: games that make significant use of DX9-generation shaders (which will include, for example, DOOM3).

Last I heard Doom3 uses one shader for the lighting model which can be done in one pass on PS1.4 - if this is still the case then I'd hardly classify this as making "significant use of DX9 shaders".

RussSchultz said:
Though I can't fathom why they'd use PS1.4 as a fallback, which is just as slow on the 5200... (or so I thought the data showed for the 5800, so I'm extrapolating a bit)

I think the DX8 shaders may be simpler in operation than the DX9 ones.
 
Chalnoth said:
1. Have a definition. It could mean a few different things. For the purposes of this thread I'll make it simple: games that make significant use of DX9-generation shaders (which will include, for example, DOOM3).

Doom3 doesn't make significant use of DX9 generation shaders. It's DX7 and DX8 level features, actually.

Basically, this definition is a non-starter...because everyone can have a different definition of "significant" when it comes to DX9 feature support.

3. If a pattern does emerge, alternative explanations must first be ruled out before making a definitive conclusion about the trend of "DX9 games." One example would be lack of focus on the PC version of the game, in the context of games developed for both a console and the PC. Another issue may be just a low-budget, quick game.

Neither of which is particularly relevant. If the 5200 is not "powerful enough" to run "games that make significant use of DX9 shaders", then that's all that really matters. Why are "low budget / quick" games, as you call them, less important than other games?

Are you implying that if it takes some significant resources to get 5200 DX9 game performance to acceptable levels, that doesn't count?

Regardless, I do have to say one thing. If there is one DX9 game, while it means close to nothing, it's not quite nothing. How does the 5200 do in the latest Tomb Raider?

See Dave's post. Tomb Raider turns off DX9 features by default on the 5200.
 
DaveBaumann said:
I think the DX8 shaders may be simpler in operation than the DX9 ones.
I don't doubt that, but from my understanding of benchmarks run to date, PS1.1 is considerably faster on the FX hardware than PS1.4.

Which leads me to question why they would choose PS1.4 as a fallback.
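One possibility, and this is purely my speculation with made-up names rather than anything from the game's actual code: if, once the PS2.0 path is disabled, the fallback simply picks the highest pixel shader version the caps report, it lands on PS1.4 rather than PS1.1, regardless of which one the hardware actually runs faster. Something like:

#include <cstdio>

// Stand-in for the pixel shader version a caps query would report (e.g. 2.0, 1.4, 1.3).
struct PsVersion { int maj, mnr; };

// Naive "use the highest shader model the card claims" selection, with a switch
// the game (or a config default) can flip to force the DX8-class path on a given card.
const char* pickShaderPath(PsVersion ps, bool forceDx8Path) {
    if (!forceDx8Path && ps.maj >= 2)
        return "PS2.0 path (full DX9 effects)";
    if (ps.maj > 1 || (ps.maj == 1 && ps.mnr >= 4))
        return "PS1.4 path (single-pass DX8.1-style)";
    if (ps.maj == 1)
        return "PS1.1 path (multi-pass DX8-style)";
    return "fixed-function path";
}

int main() {
    // A 5200 reports PS2.0 support, but the game forces it onto the DX8-class path
    // by default; "highest version below 2.0" then lands on PS1.4, not PS1.1.
    std::printf("5200 (DX9 off by default): %s\n", pickShaderPath({2, 0}, true));
    std::printf("9600:                      %s\n", pickShaderPath({2, 0}, false));
    std::printf("GeForce4 Ti (PS1.3):       %s\n", pickShaderPath({1, 3}, false));
    return 0;
}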
 
Chalnoth said:
For example, is more processing done with DX9 effects enabled? Or are the same effects done for higher performance?

GAH!!!! Read the F*^%ing site. Rev has put extensive work in with the developer to understand what is going on. Plus, for DX9 testing we are utilising a baseline so everything will be rendered the same on DX9 boards.
 
Chalnoth said:
Yeah, read that afterwards, but that's only by default. Shouldn't there be a way to turn the features on?

I don't know, but it's the "by default" that matters, IMO. That is, performance is slow enough that the developers don't recommend it. That's important for two reasons:

1) Developers believe the gaming experience on a 5200 is better without shaders. (Performance improvement outweighs visual quality improvement.)

2) Developers aren't spending time and resources trying to get 5200 shader performance up to speed, and are targeting a higher-level DX9 card as the base.

Regardless, I really don't like people using Tomb Raider in these comparisons of shader performance without any knowledge about how Tomb Raider implements the various technologies.

The bottom line is, it's a "real game" (that all the 3DMark bashers were clamoring for). This is a real "data point." Is it indicative of future data points? Hard to tell, sure. That doesn't diminish this one data point's validity.
 
Joe, I agree with you, except for the "s" on "developers". All we have is one data point. Once we get some more, the picture will be less fuzzy.
 
Joe DeFuria said:
I don't know, but it's the "by default" that matters, IMO. That is, performance is slow enough that the developers don't recommend it. That's important for two reasons:
The only problem is, you don't state the reasons that developers don't recommend it. It could be a number of things. After reading the link Dave posted, I'm inclined to think that it's due to the driver automatically dropping to integer precision when applying the DOF effect, which screws that effect up. Again, I think Microsoft should have implemented an integer type so that nVidia was not forced to go the route of not following the spec for reasonable performance under DX9.
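As a toy illustration of why reduced precision would hurt a depth-of-field effect (made-up numbers and an arbitrary fixed-point step, not the game's actual shader or nVidia's actual format): a blur factor that ramps smoothly in full float collapses into a couple of coarse bands once quantized.

#include <cstdio>
#include <cmath>

// Blur strength as a function of distance from the focal plane, computed in full float.
float blurAmount(float depth, float focusDepth, float focusRange) {
    float b = std::fabs(depth - focusDepth) / focusRange;
    return b > 1.0f ? 1.0f : b;
}

// The same value squeezed through a coarse fixed-point representation
// (4 fractional bits here is an arbitrary stand-in, not any real hardware format).
float quantize(float v, int fractionalBits) {
    float scale = float(1 << fractionalBits);
    return std::floor(v * scale) / scale;
}

int main() {
    for (int i = 0; i <= 5; ++i) {
        float depth = 10.0f + 0.1f * i;   // sample depths just behind the focal plane
        float full  = blurAmount(depth, 10.0f, 5.0f);
        float low   = quantize(full, 4);  // only 16 distinct levels across [0, 1]
        std::printf("depth %.1f  float blur %.4f  low-precision blur %.4f\n",
                    depth, full, low);
    }
    return 0;
}

The smooth ramp of blur values turns into flat steps, which is exactly the kind of banding that would make the effect look broken enough to disable.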
 