maven said:
I can't see why you don't just use PointSprites (together with per-particle PSIZE). Yes, it's only supported on GF3+ (don't know about ATI, but I suspect 8500 upwards should do), but then quad rendering is supported nowhere in DX (as it's ill-defined when the vertices aren't coplanar). Use D3DXSprite as a fallback for non-PSIZE hardware...
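For reference, maven's suggestion boils down to something like the following D3D9 sketch: one point-list draw call, per-particle size via D3DFVF_PSIZE, and a caps check so you know when to fall back to ID3DXSprite. The struct and function names are illustrative, and error handling is omitted:

```cpp
// Minimal sketch of the PointSprites-with-per-particle-PSIZE idea.
#include <d3d9.h>

struct PointVertex
{
    float x, y, z;   // particle position
    float size;      // per-particle point size (D3DFVF_PSIZE)
    DWORD color;     // diffuse colour
};

const DWORD POINT_FVF = D3DFVF_XYZ | D3DFVF_PSIZE | D3DFVF_DIFFUSE;

bool DrawParticles(IDirect3DDevice9* dev, const PointVertex* verts, UINT count)
{
    // Fall back (e.g. to ID3DXSprite) if the hardware can't do per-vertex PSIZE.
    D3DCAPS9 caps;
    dev->GetDeviceCaps(&caps);
    if (!(caps.FVFCaps & D3DFVFCAPS_PSIZE) || caps.MaxPointSize <= 1.0f)
        return false;

    dev->SetRenderState(D3DRS_POINTSPRITEENABLE, TRUE);  // expand points to screen-aligned quads
    dev->SetRenderState(D3DRS_POINTSCALEENABLE, FALSE);  // per-vertex sizes are in screen pixels

    dev->SetFVF(POINT_FVF);
    dev->DrawPrimitiveUP(D3DPT_POINTLIST, count, verts, sizeof(PointVertex));
    return true;
}
```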
JohnH said:
Oh, hang on, you're using it for sprites, so you'd have to submit in separate prim calls. Though I still doubt that the extra data for the indices is going to make that much difference to your performance, unless of course there's something dodgy with the HW you're running on.
Of course this would be fixed by a programmable prim processor.
John.
Xmas said:
I think we're even better off with a primitive type called "quad" that is defined as a 2-triangle-fan. Simple as that, no "ill-defined" problems any more. Just the same as it is in reality in any OpenGL implementation.

Dio said:
Two ways of specifying the same thing aren't necessarily better than one way...

Xmas said:
The point is simply, it is useful functionality. And D3D doesn't offer it, for whatever reason. If you mind that there are already other definitions of "quads", well, then call it something different. "Restricted quads", perhaps?

Dio said:
I don't like the current situation. I'm in favour of anything that makes it better. But I don't have much control either so <shrug>
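As a concrete sketch of what the "restricted quad" Xmas describes costs today: with no quad primitive in D3D, every application has to expand each quad (v0, v1, v2, v3) into the two fan triangles (v0, v1, v2) and (v0, v2, v3) by hand, which is exactly the extra index data JohnH refers to. The helper name and the choice of 16-bit indices are our assumptions:

```cpp
#include <vector>

// Expand quads into a triangle-list index buffer, two triangles per quad.
// Assumes each quad contributes 4 unique, consecutive vertices.
std::vector<unsigned short> QuadsToTriangleList(unsigned short quadCount)
{
    std::vector<unsigned short> indices;
    indices.reserve(quadCount * 6);      // 6 indices per quad vs. 4 vertices
    for (unsigned short q = 0; q < quadCount; ++q)
    {
        unsigned short base = q * 4;
        // first triangle of the fan
        indices.push_back(base + 0);
        indices.push_back(base + 1);
        indices.push_back(base + 2);
        // second triangle of the fan shares the edge (v0, v2)
        indices.push_back(base + 0);
        indices.push_back(base + 2);
        indices.push_back(base + 3);
    }
    return indices;
}
```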
Chalnoth said:
(my current beef being integer calcs in PS 2.0 and higher).
Mariner said:
Chalnoth said:
FP16 isn't much better for recursive calcs than INT12.
Which I assume is the reason that the minimum for PS2.0 was defined as FP24 by Microsoft.
The minimum for PS 2.0 is FP16, when using the partial precision hint. And FP24 also pales in comparison to FP32 when dealing with recursive calcs (which pales in comparison to FP64, etc.).

Mariner said:
Personally, I think that FP16 will probably be enough accuracy for the first generation of DX9 games as it is doubtful that these will make too much use of complex shaders which might require the additional accuracy of FP32/FP24. I can therefore understand why FP16 could possibly have been useful as part of the DX9 spec.
One also has to recognize that not all calculations will exacerbate the errors. Some calculations will tend to hide them, by their very nature. Just because a shader is complex doesn't necessarily mean that it will require much higher accuracy than the final output. It all depends on what calculations are done, and what kind of data those calculations are done on.

Mariner said:
Arguing that INT12 should have been supported, however, seems too much of a retrograde step to me.
I'll make it simple. If INT12 had been supported, nVidia wouldn't be inclined to force the use of lower precisions. The way it is now, DirectX 9 is ensuring lower-quality rendering on the NV30-34 processors, as nVidia must use auto-detection to make use of the significant integer processing power. If INT12 was supported in the API, games could both perform higher and look better on these video cards.
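The "recursive calcs" point is easy to reproduce outside a shader. Here is a toy C++ sketch, not shader code, that emulates a narrower storage format by re-rounding after every operation and then runs a chain of dependent calculations; the mantissa widths assumed (10 bits for FP16, 16 for FP24, 23 for FP32) are the commonly cited ones:

```cpp
#include <cmath>
#include <cstdio>

// Round x to 'bits' explicit mantissa bits (plus the implicit leading bit),
// crudely emulating storage in a narrower floating-point format.
double RoundToMantissa(double x, int bits)
{
    if (x == 0.0) return 0.0;
    int exp;
    double m = std::frexp(x, &exp);            // x = m * 2^exp, 0.5 <= |m| < 1
    double scale = std::ldexp(1.0, bits + 1);  // keep bits+1 significant bits of m
    return std::ldexp(std::floor(m * scale + 0.5) / scale, exp);
}

int main()
{
    const int formats[] = { 10, 16, 23 };      // FP16, FP24, FP32 mantissas
    for (int m : formats)
    {
        double ref = 0.1, lo = RoundToMantissa(0.1, m);
        for (int i = 0; i < 64; ++i)           // 64 dependent ops in a row
        {
            ref = ref * 1.02 + 0.003;          // full double-precision reference
            lo  = RoundToMantissa(lo * RoundToMantissa(1.02, m)
                                     + RoundToMantissa(0.003, m), m);
        }
        std::printf("%2d mantissa bits: result %.8f, error %.2e\n",
                    m, lo, std::fabs(lo - ref));
    }
}
```

Each extra mantissa bit roughly halves the accumulated error, which is why FP16 trails FP24, FP24 trails FP32, and so on, exactly as the post argues.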
Chalnoth said:
I'll make it simple. If INT12 had been supported, nVidia wouldn't be inclined to force the use of lower precisions. The way it is now, DirectX 9 is ensuring lower-quality rendering on the NV30-34 processors, as nVidia must use auto-detection to make use of the significant integer processing power. If INT12 was supported in the API, games could both perform higher and look better on these video cards.
Talk about reverse logic!

Chalnoth said:
The NV30-34 need to use some integer precision for good performance.
At probable cost of violating the DX9 spec.

Chalnoth said:
nVidia will ensure that the NV30-34 will have good performance under DirectX.
At definite cost of violating the DX9 spec.

Chalnoth said:
nVidia therefore must drop down to INT12 precision in a somewhat arbitrary way. If Microsoft would simply support the format, at least programmers would have control over when this happens, so that they could use the higher precisions when it's necessary to do so. Right now, programmers don't even have that option on this hardware.
Which is not Microsoft's problem. NVIDIA knew what the DX9 requirements were. NVIDIA chose not to implement them as efficiently as the competition. NVIDIA is to blame here, not Microsoft. I don't care about "woulda, shoulda, coulda"; the point is that the DX9 spec is FP24 minimum, with FP16 allowed on operations specified with _pp, and that is it.
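For readers who haven't met it: the _pp modifier FUDie mentions is a per-instruction hint in ps_2_0 assembly, so a shader can opt individual operations into FP16 while everything else must run at FP24 or better. A hypothetical fragment, assembled with D3DXAssembleShader; the shader itself is illustrative, not from any of the posters:

```cpp
#include <d3dx9.h>
#include <cstring>

// A trivial ps_2_0 shader: only the colour fetch and modulate carry the
// partial-precision hint; the rest runs at full (FP24+) precision.
const char* g_ps =
    "ps_2_0               \n"
    "dcl t0.xy            \n"
    "dcl_2d s0            \n"
    "texld_pp r0, t0, s0  \n"   // FP16 is fine for a plain colour fetch
    "mul_pp r0, r0, c0    \n"   // ...and for a simple modulate by a constant
    "mov oC0, r0          \n";

bool AssemblePS(LPD3DXBUFFER* ppCode)
{
    LPD3DXBUFFER errors = NULL;
    HRESULT hr = D3DXAssembleShader(g_ps, (UINT)strlen(g_ps),
                                    NULL, NULL, 0, ppCode, &errors);
    if (errors) errors->Release();
    return SUCCEEDED(hr);
}
```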
FUDie said:
Which is not Microsoft's problem. NVIDIA knew what the DX9 requirements were. NVIDIA chose not to implement them as efficiently as the competition. NVIDIA is to blame here, not Microsoft. I don't care about "woulda, shoulda, coulda"; the point is that the DX9 spec is FP24 minimum, with FP16 allowed on operations specified with _pp, and that is it.
-FUDie
There are two possible reasons for what happened.
Chalnoth said:
API specs have always been designed after hardware. It's not nVidia who decided to violate the DirectX 9 spec. It's Microsoft who decided to write the DirectX 9 spec to not work well with the NV30-34.
Times change... Also, what if MS supported int? Then the NV35 would have problems, because mixing ints and floats is just not a good idea on this GPU. If you feel int is enough, you can as well use ps_1_1...
MDolenc said:
Times change... Also, what if MS supported int? Then the NV35 would have problems, because mixing ints and floats is just not a good idea on this GPU. If you feel int is enough, you can as well use ps_1_1...
Int is enough only for some calculations. Even with PS 1.1, many calculations were done in a much higher-precision floating-point format (specifically, the texture ops). Anyway, PS 2.0 added a fair amount more than just FP.

MDolenc said:
On the other hand, you seem to claim that the NV30 is slow in DX9 JUST because MS didn't let ints into the DX9 spec, right?
No. The NV30 clearly has other significant shader performance problems. Microsoft's writing of the DX9 spec is but one aspect.
Chalnoth said:
CPUs have always been about picking the right precision for the job. They support multiple precisions for a very good reason, and that reason is that sometimes it is better to sacrifice precision for speed, because that sacrifice in precision will mean nothing for the final output. This is particularly the case in 3D graphics, where the final output will be, at most, 12-bit integer (currently the highest in PC 3D is 10-bit, but most still output at 8-bit).
The only question that should be asked is, for most 3D graphics programs, will it be better for hardware to support integer calcs (or any given precision) explicitly? Or will more performance be obtained if those transistors were instead used to improve performance for a higher-precision format?
And I'll say it one last time. Stating that INT12 or FP16 are just bad for 3D graphics is an arbitrary judgement. Whether or not they are useful depends on the algorithm. Both formats are still higher in quality than the final output, so obviously there will be a number of calculations that will not benefit from higher precisions.
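The "some calculations hide errors" claim is also easy to illustrate. A toy C++ sketch follows; the specific numbers are invented, and the FP16-sized starting error is an assumption. It contrasts an error-contracting blend (as in fogging toward a constant) with an error-amplifying difference of nearly equal values:

```cpp
#include <cstdio>
#include <cmath>

int main()
{
    double exact = 0.5, approx = 0.5 + 1e-3;   // value carrying an FP16-ish error

    // Error-hiding: repeatedly blend toward a constant, as fog does.
    // Each step multiplies the error by (1 - 0.25), so it decays.
    double e = exact, a = approx;
    for (int i = 0; i < 16; ++i)
    {
        e = e + 0.25 * (1.0 - e);              // lerp(e, 1.0, 0.25)
        a = a + 0.25 * (1.0 - a);
    }
    std::printf("after blending:   error %.2e\n", std::fabs(a - e));

    // Error-amplifying: subtract nearly equal values, as in a numeric
    // derivative. The absolute error survives but the result is tiny,
    // so the relative error explodes.
    double de = exact - 0.4995, da = approx - 0.4995;
    std::printf("after difference: relative error %.0f%%\n",
                100.0 * std::fabs(da - de) / std::fabs(de));
}
```

The blend ends up with a far smaller error than it started with, while the difference turns the same input error into a result that is off by a factor of three, which is the sense in which usefulness "depends on the algorithm".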