radar1200gs said: The only FUD around here is coming from the ATi fanboys - "FP16 is not supported by DX9", "Dawn is a DX9 showcase"...

The 5200 runs Half-Life 2 - it may run it slowly, but that's beside the point. If you want performance, buy a more expensive card, not a value/entry-level card.
jvd said:
radar1200gs said: The 5200 is capable of running HL2 at DX9 levels if the developer allows it to. I'm sure that if Valve hardcodes the 5200 & 5600 to DX8 with no option for the user to alter it, some enterprising person will devise a patch that tells the game the card is really a 5900 or whatever.

Yea, it runs it alright: 8 fps. That's with a 2.8 GHz CPU, which is not entry-level at all. Imagine what it will run like with a 2 GHz CPU. That's pathetic on nVidia's part, and on yours for defending such a crap part. The 5600 should be in the price zone of the 5200, and even then I'd have trouble recommending it for any DX9 code.
You wouldn't run the 5200 at the same resolution as the other cards.

jvd said: Yea, it runs it alright: 8 fps.
Well, if the source data is only RGBA8, there would be no reason to use FP32 at all...

Ostsol said: Well, really, any RGBA8 texture could be sampled into an FP16 register (then manipulated at FP32, I suppose).
It depends entirely on the situation... In your own example of normal maps, one would be using the sampled pixel as a parameter in a lighting equation along with data not derived from an FX8 source, but from FP32/FP24 interpolated and normalized data. This is, of course, unless one is using an RGB8 cubemap for normalization (in which case basically all parameters of the equation are FX8) instead of a higher-precision cubemap, or simply performing arithmetic normalization. Also, if the texture is simply to be combined with other textures without any significant other operations, or if that texture is simply to be modulated with the result of the lighting equation at the end of the shader, then just about any low precision -- even FX12 -- is fine. However, when it is used as a parameter amongst other data that's at FP32/FP24, running everything at a lower precision could result in a loss of potential quality.

Chalnoth said: Well, if the source data is only RGBA8, there would be no reason to use FP32 at all...
I would say this is typically the case. Additionally, FP16 will be enough for most any calculation on color data. For example, if a specific normal map happens to have problems at FP16, so that FP32 is used for most calculations involving that normal map, you could still go back to FP16 once color information is obtained.

Ostsol said: EDIT: The way it's been sounding seems to indicate that you think FP32 is necessary only for operations with an expectancy of high-precision input data, such as dependent texture reads or render-to-vertex-buffer. Am I close?
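As a rough illustration of the trade-off being argued here, a minimal sketch follows - none of it is from any poster. numpy's float16/float32 merely stand in for shader FP16/FP32, and the texel and light values are invented; running it lets you see how large the FP16-vs-FP32 gap actually is for a lighting calculation whose sources are all 8-bit.

```python
# Minimal sketch: numpy float16/float32 stand in for shader FP16/FP32.
# All values are invented; the point is only to compare the two precisions.
import numpy as np

def lighting(dtype):
    n_fx8 = np.array([181, 181, 128], dtype=np.uint8)           # normal map texel (FX8)
    albedo = np.array([200, 150, 100], dtype=np.uint8) / 255.0  # colour texel

    n = n_fx8.astype(dtype) / dtype(255) * dtype(2) - dtype(1)  # decode to [-1, 1]
    n = n / np.sqrt(np.sum(n * n))                              # normalize at 'dtype'
    l = np.array([0.577, 0.577, 0.577], dtype=dtype)            # interpolated light dir
    ndotl = max(float(np.dot(n, l)), 0.0)
    return albedo * ndotl                                       # final colour modulate

fp16, fp32 = lighting(np.float16), lighting(np.float32)
print("FP16:", fp16)
print("FP32:", fp32)
print("max abs difference:", float(np.abs(fp16 - fp32).max()))
```

Comparing the printed difference against one step of an 8-bit channel (1/255) is a quick way to check whether the gap would even be visible at the output precision.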
Would that not imply there is no need for precision modifiers, as it can be done automatically and easily by analysis?

Chalnoth said: In other words, what I'm trying to say is that with a few simple rules, one should be able to easily determine which instructions can use what precision, with no perceivable loss in image quality.
OpenGL guy said: I believe Valve said the 5900 will be running with a mixed DX8/DX9 mode.

AlphaWolf said: I believe it was mentioned that the 5200s will be running HL2 on a DX8 path, as will the 5600. The 5900 will be running it with _pp.
Well, it sounds very complex, but might be doable. When a texture is uploaded to DirectX, the driver could detect its format, minimum and maximum values, and so on, and feed that to the pixel shader compiler. The shader compiler can then analyse the code to see whether a lower-precision type can be used with almost no precision loss in the result. With enough correct data, automatic type selection might be possible without any perceivable loss of precision.

DemoCoder said: This would work in a high-level language with type inferencing and numerical analysis in the compiler, using a heuristic like "minimize error". The problem is, there is no standard for telling the HLSL shader what the precision of the input textures is or the precision of the output framebuffer. The DCL shader instruction in DX9 only allows you to specify the input mask and whether something is 2D, a cube, or a volume. Similarly, there is no way, specified in the shader itself, to say what the desired output precision is. This is not insoluble, but it increases the workload for the driver/compiler and renders FXC even more impotent. The driver would have to do on-the-fly compilation of shaders based on the pipeline state (detect the texture format being used and infer precision) and the output render target, and use that to reorder expressions and select instructions to minimize error.
sonix666 said: Well, it sounds very complex, but might be doable. When a texture is uploaded to DirectX, the driver could detect its format, minimum and maximum values, and so on, and feed that to the pixel shader compiler. [...]
If nVidia manages to do something like that in their drivers, I will bow to their developers.
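For what it's worth, here is a toy sketch of the per-instruction inference being described - invented purely for illustration, nothing like a real driver. Each register is tagged with the precision of whatever produced it, and an instruction is given the widest precision among its operands; as DemoCoder notes, a real compiler would also have to track error accumulation, dependent reads and the render-target format.

```python
# Toy sketch of driver-side precision inference (illustration only, invented here).
# Each register carries the precision of whatever produced it; an instruction is
# assigned the widest precision among its operands.
RANKS = {"fx8": 0, "fp16": 1, "fp24": 2, "fp32": 3}

def infer_precision(shader, input_precision):
    """shader: list of (dest, op, sources); input_precision: precision of shader inputs."""
    known = dict(input_precision)
    chosen = {}
    for dest, op, sources in shader:
        need = max((known.get(src, "fp32") for src in sources), key=RANKS.get)
        chosen[dest] = known[dest] = need
    return chosen

# r0 = RGBA8 albedo sample, r1 = normal decoded from an RGBA8 map,
# v0 = interpolated light vector coming in at full precision.
shader = [
    ("r2", "dp3", ["r1", "v0"]),   # N.L - pulled up to FP32 by the interpolant
    ("r3", "mul", ["r0", "r2"]),   # colour modulate - this pass is conservative;
                                   # a smarter one could drop back to FP16 here
]
print(infer_precision(shader, {"r0": "fp16", "r1": "fp16", "v0": "fp32"}))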
This definitely looks like a fairly complex problem.

sonix666 said: Well, it sounds very complex, but might be doable. [...]
ROFL~~

Reverend said: You know, I got a pair of legs. I can run a marathon with them. Doesn't mean I will enjoy running the marathon.
Totally agreed, it's fun when people choose impossible positions to argue and then insist on arguing them passionately... 'specially at this place, where everyone knows what the what is and doesn't put up with any FUD.

Man, this thread is getting more amusing by the page.
Which you could do, but it is dangerous. There may still be problems with shaders in which errors accumulate, so that the problem isn't related to the input or output formats, or to the dynamic range, but is rather due to recursive errors. A perfect example is a Mandelbrot set.

Dio said: Would that not imply there is no need for precision modifiers, as it can be done automatically and easily by analysis?
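To make the Mandelbrot example concrete, here is a small, purely illustrative sketch (not from any poster; numpy's float16/float32/float64 stand in for the shader precisions, and the sample point is arbitrary). The iteration feeds its own rounding error back into itself, so the input and output formats say nothing about how much precision the middle of the loop needs.

```python
# Iterating z -> z^2 + c at three precisions; each step feeds its rounding error
# back into the next, so the low-precision run can escape at a different count.
import numpy as np

def escape_count(cx, cy, dtype, max_iter=256):
    # Note: at low precision even the constant c is already rounded.
    x, y = dtype(0), dtype(0)
    cx, cy = dtype(cx), dtype(cy)
    for i in range(max_iter):
        x, y = x * x - y * y + cx, dtype(2) * x * y + cy
        if float(x * x + y * y) > 4.0:
            return i
    return max_iter

c = (-0.7453, 0.1127)   # an arbitrary point near the set's boundary
for dt in (np.float16, np.float32, np.float64):
    print(dt.__name__, escape_count(*c, dtype=dt))
```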
jvd said: If you buy a card that says it's DX9, it better be able to play DX9-class games. Otherwise it's false advertising. Just as if you got a car and it wouldn't work on a road, you'd be pretty damn pissed, wouldn't you?

Yes, the 5200 is functional for DX9. Being functional does not necessarily imply good performance.
BTW: I never recommended the 5200 in any post; I simply stated that the non-64-bit variants are okay for the entry-level market.
Personally, the lowest-end discrete graphics part I sell to clients is the 5600 Ultra or GF4 4200. For entry-level PCs I use the nForce1 IGP (business) or nForce2 IGP (home/'net PC).
So the point is, it's not just 'a few simple rules'.

Chalnoth said: Which you could do, but it is dangerous. There may still be problems with shaders in which errors accumulate, so that the problem isn't related to the input or output formats, or to the dynamic range, but is rather due to recursive errors. A perfect example is a Mandelbrot set.
People buy Festivas (or Echos, or Geos) all the time. They're not (in my opinion) fit to drive on Texas freeways, as their acceleration stinks and they're fragile. It doesn't make them any less of a car.

jvd said: If you buy a card that says it's DX9, it better be able to play DX9-class games. Otherwise it's false advertising. Just as if you got a car and it wouldn't work on a road, you'd be pretty damn pissed, wouldn't you?