3DMark03 PS2.0 IQ comparison: R300(6307) vs NV31(4300/4345)

Evildeus said:
OpenGL guy said:
jjayb said:
Sorry, the reference pic and the ATI pic are wrong. Everyone knows the Nvidia pics show it "the way it's meant to be played". ;)
Good luck "playing" 3D Mark 2003 ;)
But... but that's what ATI recommend to play with :p

I think you'll find that ATI recommend that you play anything you want whereas nVidia recommend you only play the things that they like... ;)
 
Mintmaster said:
Does this sound reasonable?

ATI: FP24
43.00: FP16, black dots due to 10-bit mantissa and its poor precision
43.45: I12, low dynamic range due to lack of exponent, but higher precision than FP16
Reference: FP32 because that's what CPUs use

The black dots look like they're from bugs. FP16 has the same or more precision than I12 (10 bits mantissa + 1 implicit bit + 1 sign bit = 12 bits integer).
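As a rough illustration of pcchen's point, here is a small sketch comparing step sizes between adjacent representable values. It assumes FP16 is the usual s10e5 layout and that the 12-bit integer format resolves steps of 2^-10 over its range; the exact fixed-point layout is an assumption, not something stated in the thread.

```python
import math

# Assumptions: FP16 = 1 sign + 5 exponent + 10 mantissa bits;
# the 12-bit integer/fixed-point format has a uniform step of 2^-10.
I12_STEP = 2.0 ** -10

def fp16_step(x):
    """Spacing between adjacent FP16 values at magnitude x (normalized range)."""
    return 2.0 ** (math.floor(math.log2(abs(x))) - 10)

for x in (1.0, 0.5, 0.1, 0.01):
    print(f"x = {x:<5}  FP16 step = {fp16_step(x):.2e}  I12 step = {I12_STEP:.2e}")

# Near 1.0 both resolve about 2^-10, which is the '10 + 1 implicit + sign'
# argument; below 1.0 the FP16 step shrinks, so FP16 only gets finer.
```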
 
andypski said:
Evildeus said:
OpenGL guy said:
jjayb said:
Sorry, the reference pic and the ATI pic are wrong. Everyone knows the Nvidia pics show it "the way it's meant to be played". ;)
Good luck "playing" 3D Mark 2003 ;)
But... but that's what ATI recommend to play with :p

I think you'll find that ATI recommend that you play anything you want whereas nVidia recommend you only play the things that they like... ;)
You are sure? I like CS :devilish:
 
pcchen said:
The black dots look like they're from bugs. FP16 has the same or more precision than I12 (10 bits mantissa + 1 implicit bit + 1 sign bit = 12 bits integer).

Good point. I never thought of it like that.
 
I'm curious as to how much precision versus dynamic range matters for this scene. It'd be nice if we could try different mantissa and exponent values for the reference driver and see what kind of effect it has on the scene. It seems like it would be very much in nVidia's interests to use FX12 (and to a lesser extent FP16) whenever possible. Could they get away with faking precision/dynamic range with lower precision drivers if they hand tuned the drivers to the specific scene?

Nite_Hawk
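Nite_Hawk's experiment can be approximated in software without touching the reference rasterizer: quantize a full-precision image to an arbitrary mantissa/exponent budget and compare the results. A minimal sketch under those assumptions (the quantize function is hypothetical and handles overflow/underflow only crudely):

```python
import math

def quantize(x, mantissa_bits, exponent_bits):
    """Round x to a hypothetical float format with the given explicit mantissa
    width and exponent width (round-to-nearest; overflow/underflow are only
    crudely clamped rather than modelled exactly)."""
    if x == 0.0:
        return 0.0
    bias = (1 << (exponent_bits - 1)) - 1
    e = math.floor(math.log2(abs(x)))
    e = max(min(e, bias), 1 - bias)        # clamp to the representable exponent range
    scale = 2.0 ** (mantissa_bits - e)
    return round(x * scale) / scale         # snap the mantissa onto the format's grid

v = 0.123456789
print("s23e8 (FP32-like):", quantize(v, 23, 8))
print("s16e7 (FP24-like):", quantize(v, 16, 7))
print("s10e5 (FP16-like):", quantize(v, 10, 5))
```

Running every shader output through something like this at different settings would give a crude visual answer to the precision-versus-range question for a given scene.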
 
Evildeus said:
andypski said:
I think you'll find that ATI recommend that you play anything you want whereas nVidia recommend you only play the things that they like... ;)
You are sure? I like CS :devilish:

Well, when you're inhaling that CS gas that you apparently like so much just remember that, although it's not my place to tell you to stop, it could be hazardous to your health. :LOL:

More seriously, with regards to CS I can't recall anyone at ATI ever telling you not to play it, or complaining about how it was coded. I certainly can't recall us ever implying that its programmers didn't know what they were doing when they created it.

In fact, I can't recall us saying that about any application or its developers. Maybe we're just too nice.

If you are having a problem with a bug in CS then I'm sure that if you have reported it then we will fix it promptly. Either way we're really not telling you whether to play it or not, are we?

So I'm not sure what relevance your comment has in this case.

- Andy.
 
Oh I forgot, on a GF4 :LOL: I don't have the money to buy one of the latest cards made by ATI, it's a pity :cry:
Edit: but I do know some people having issues with the latest Catalyst on CS. But they can still use older ones ;).
 
Evildeus said:
Oh I forgot, on a GF4 :LOL: I don't have the money to buy one of the latest cards made by ATI, it's a pity :cry:
Edit: but I do know some people having issues with the latest Catalyst on CS. But they can still use older ones ;).

You don't need to worry about us not knowing about things like this - we do keep a careful eye on things ;)

Anyway - lay off the CS gas (Oh - I wasn't supposed to say that) :LOL:
 
Nite_Hawk said:
Could they get away with faking precision/dynamic range with lower precision drivers if they hand tuned the drivers to the specific scene?
This is the kind of crap that nVidia shouldn't be forced to do.

This is why going with the standardized assembly was stupid.

Microsoft should have gone all HLSL with DirectX 9 (as OpenGL 2.0 intends), combined with specific data types, where the data type specifies the minimum accuracy for a particular piece of data if that accuracy is available (there could be more data types defined than are used by current hardware).

nVidia innovated with an architecture that they felt could be leveraged well: many programs have different computing requirements, and many won't need to be all floating-point. Microsoft didn't accommodate their specific architecture in the assembly (which would have been easy...just define data types!), and nVidia has gotten screwed.
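For what it's worth, the "just define data types" idea would amount to something like the sketch below: a shader value declares a minimum precision, and the driver promotes it to whatever the hardware actually offers. The format table, bit counts and selection rule are illustrative assumptions only, not any real API.

```python
# Hypothetical driver-side mapping from a declared minimum precision to the
# smallest hardware format that meets it (all names and values are assumptions).
HW_FORMATS = {
    "FX12": (10, False),   # (mantissa-equivalent bits, is floating point)
    "FP16": (10, True),
    "FP24": (16, True),
    "FP32": (23, True),
}

def pick_format(min_bits, needs_float):
    adequate = [(bits, name) for name, (bits, is_fp) in HW_FORMATS.items()
                if bits >= min_bits and (is_fp or not needs_float)]
    return min(adequate)[1] if adequate else "FP32"   # smallest adequate format

print(pick_format(8,  needs_float=False))   # FP16 (ties on bits break alphabetically)
print(pick_format(12, needs_float=True))    # FP24
```

Hardware without a matching format would simply run everything at its single precision, which is effectively what the one-datatype DX9 assembly already allows.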
 
nVidia innovated with an architecture that they felt could be leveraged well: many programs have different computing requirements, and many won't need to be all floating-point.

Alternatively you could view it that they produced an overly convoluted architecture, slightly confused in its implementation, that no-one really wants to take the time to code for. MS opted for the easier path of only having one datatype for ease of programmability, and in the burgeoning area of developer uptake on programmable pipelines 'ease of use' would be key. Seeing as at least one vendor would be supporting this in their pipeline in the right timeframe, they probably made the right decision.
 
The only thing I find very suspicious is the typo from Microsoft in their previous DirectX 9 specifications (as DeannoC tells it). Before people say that I accuse him of lying, that is not the case. I only say I find Microsoft's move suspicious. Why?

Well, Microsoft defines the specifications for DirectX 9, which were most probably finished quite some time before both ATi and nVidia started developing hardware for them. If I recall correctly, there were some arguments between nVidia and Microsoft, and nVidia was kicked out of the DirectX 9 specification process. So, we have had the DirectX 9 specs with that so-called typo in them for a long time now, and just before nVidia launches its card Microsoft says that it was a typo. That kind of typo in a specification is hardly something one should overlook. It sounds more like a not-so-nice move from Microsoft to embarrass nVidia.

I am not a fan of any product, but I find this all very suspicious. My replacement for my GF3 Ti 200 will be the very worthy Radeon 9800. :)
 
DX specifications are not set before hardware is underway, but are built in conjunction with the hardware vendors. Ultimately MS has the final say in what they want to include, but it is generally built as a consensus of the hardware to be available within its release timespan.
 
I think it was M$ getting the last slap in over the XBox stuff. It makes no difference to them whether nVidia gets into trouble with their hardware.
 
Chalnoth said:
nVidia innovated with an architecture that they felt could be leveraged well: many programs have different computing requirements, and many won't need to be all floating-point. Microsoft didn't accommodate their specific architecture in the assembly (which would have been easy...just define data types!), and nVidia has gotten screwed.
I don't see what's so 'innovative' about creating an architecture that has legacy support for fixed point data types. They can certainly leverage their fixed point support in making legacy applications run fast, so they are getting what they paid for out of those gates. They didn't get screwed at all.

"Didn't accommodate their architecture in the assembly?" :?

The partial precision modifier doesn't make a Radeon 9700 run faster, does it?

Why not make an architecture that has enough floating point power to run legacy applications well? Then your modern applications also benefit from the power that you placed into floating point processing without any need to resort to hand-waving and playing games with calculation precision. I think that 9700 has proved beyond doubt that you can run legacy applications quickly through a well designed floating point pipeline.

That's what seems like a forward-looking and innovative architecture to me, but maybe I'm biased. ;)

IMO fixed point support in a DX9 part is redundant. It's a dinosaur. It's a dead parrot. If it wasn't nailed to the perch it would be pushing up the daisies.

Why don't we now see people arguing for fast fixed-point support in vertex shaders? There are colour calculations carried out in these as well - do we want these available at lower precisions? Apparently not - the original vertex shader as specced (in GF3) was 32-bit floating point, and so for some reason nVidia never seemed to see any need to evangelize fixed point vertex processing as an 'innovative' feature in any of their future products. If PS1.1 in GF3 had also been entirely floating point we would similarly never have seen fixed point pixel processing included in any future hardware.
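To put some rough numbers on range rather than precision (the FP24 layout is assumed to be s16e7 and the fixed-point format to cover roughly [-2, 2); both are assumptions, not figures from the thread):

```python
# Approximate dynamic range of the formats being argued about.
FLOAT_FORMATS = {"FP16": (10, 5), "FP24": (16, 7), "FP32": (23, 8)}

print("FX12 (assumed): max ~2, uniform step 2^-10")
for name, (m, e) in FLOAT_FORMATS.items():
    bias = (1 << (e - 1)) - 1
    largest = (2 - 2.0 ** -m) * 2.0 ** bias    # largest finite normal value
    smallest = 2.0 ** (1 - bias)               # smallest normal value
    print(f"{name}: max ~{largest:.3g}, smallest normal ~{smallest:.3g}")
```

Even where FP16 only matches the fixed-point format for precision near 1.0, the exponent buys orders of magnitude more range, which is the "low dynamic range due to lack of exponent" point from earlier in the thread.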
 
sonix666 said:
The only thing I find very suspicious is the typo from Microsoft in their previous DirectX 9 specifications (as DeannoC tells it). Before people say that I accuse him of lying, that is not the case. I only say I find Microsoft's move suspicious. Why?

Well, Microsoft defines the specifications for DirectX 9, which were most probably finished quite some time before both ATi and nVidia started developing hardware for them. If I recall correctly, there were some arguments between nVidia and Microsoft, and nVidia was kicked out of the DirectX 9 specification process. So, we have had the DirectX 9 specs with that so-called typo in them for a long time now, and just before nVidia launches its card Microsoft says that it was a typo. That kind of typo in a specification is hardly something one should overlook. It sounds more like a not-so-nice move from Microsoft to embarrass nVidia.

The problem is that if Microsoft decided that the minimum requirement for precision was 16 bit and NVidia defaulted their drivers to use 16 bit (maximum), there is no way that NVidia would be able to get their 32 bit precision exposed under D3D.
 
Ostsol said:
The problem is that if Microsoft decided that the minimum requirement for precision was 16 bit and NVidia defaulted their drivers to use 16 bit (maximum), there is no way that NVidia would be able to get their 32 bit precision exposed under D3D.
According to thepkrl, there's no difference in performance between FP16 and FP32 :?
 
I agree with all the jazz about having a well-implemented fp pipeline vs. an inefficient one which relies on legacy integer support for speedup. What I don't understand is the reason for accepting a CPU's integer ALUs (which run integer-based calculations in a more streamlined manner) while wanting to be rid of a VPU's.

Granted, it is only correct to concentrate on what the architecture was built for, in this case programmability and DX9 fp precision with reasonable legacy/current software performance; focus cannot be lost. However, wouldn't keeping integer hardware alongside fp be a step in the right direction? Is it because integer will see no more use once fp is widespread enough in the 3D arena, or because transistors would be better spent optimizing for fp ability/performance?
 
For CPUs, it is kinda hard to do pointer arithmetic, multiple-precision arithmetic or bit manipulations without integer types. Also, integer operations have lower latency than their FP counterparts, which is important given the highly serial execution model of CPUs. Neither of these concerns particularly applies to GPU vertex/pixel shaders, and at least ATI seems to do just fine without integer units in their shaders.
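A mundane example of the kind of CPU-side bit manipulation meant here is unpacking a packed 32-bit colour with shifts and masks (the 0xAABBGGRR byte order below is just an assumption for illustration):

```python
def unpack_rgba8(pixel):
    """Split a packed 0xAABBGGRR value into channel bytes using integer
    shifts and masks -- trivial with an integer ALU, awkward without one."""
    r = pixel & 0xFF
    g = (pixel >> 8) & 0xFF
    b = (pixel >> 16) & 0xFF
    a = (pixel >> 24) & 0xFF
    return r, g, b, a

print(unpack_rgba8(0x80FF4020))   # -> (32, 64, 255, 128)
```

In a pixel shader the equivalent work is done by the fixed-function texture units and format conversion on input/output, so the shader core itself never needs this sort of integer juggling.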
 
Luminescent said:
I agree with all the jazz about having a well-implemented fp pipeline vs. an inefficient one which relies on legacy integer support for speedup. What I don't understand is the reason for accepting a CPU's integer ALUs (which run integer-based calculations in a more streamlined manner) while wanting to be rid of a VPU's.

Granted, it is only correct to concentrate on what the architecture was built for, in this case programmability and DX9 fp precision; focus cannot be lost. However, wouldn't keeping integer hardware alongside fp be a step in the right direction?

The issue of binary backward compatibility is a thorny one for CPUs - they either have to be able to run the unmodified binary, or some conversion software has to be run. All the opcodes and data types of the previous generation effectively have to be supported.

In the 3D graphics industry the translation step has always been implicit - going from some abstracted representation (eg. texture stage states in D3D) to the actual hardware implementation. The issue of binary backwards compatibility therefore does not affect graphics chips in anything like the same way as CPUs. In addition there have never really been any defined data types on anything other than input and output to the processing pipeline - you only ever had a defined minimum precision and range for operations, so you can simply plug in versions with higher precision and range without affecting compatibility.
 