LeStoffer said:
Chalnoth said: Personally, I don't care that much about how Cg performs with the NV30 profiles.
Pssst: You will for damn sure if you're going to run an NV3X card (the word was twitchy)!
Hyp-X said:
Chalnoth said: Personally, I don't care that much about how Cg performs with the NV30 profiles.
If I understand correctly, in GL2 every IHV will have to write their own compiler for gslang. So the performance of Cg with the NV30 profile is interesting, as it is what nVidia's GL2 driver will likely use.
antlers4 said:
Hyp-X said: If I understand correctly, in GL2 every IHV will have to write their own compiler for gslang. So the performance of Cg with the NV30 profile is interesting, as it is what nVidia's GL2 driver will likely use.
Because of the decisions made about gslang (no support for reduced precisions), it may be that the ARB Cg profile is more representative of what GL2 on the NV30 will be able to do than the NV30 profile.
"When writing general programs, programmers have long given up worrying if it is more efficient to do a calculation in bytes, shorts or longs and we do not want shader writers to believe they have to concern themselves similarly."
pocketmoon_ said:
Sadly yes. gslang won't provide an implementation of reduced precision (boo!), but the keywords are 'reserved for future use', e.g. "long short double half fixed unsigned".
So perhaps the NV30 architecture is too far ahead of its time!
andypski said:
Why do you want lower precisions so much? Do you have a particular calculation in mind that for some reason can't be done with higher precision?
Joe DeFuria said:
You want lower precision on certain architectures like the NV30, where higher precision = lower performance. So if you don't "need" the higher precision, you don't want to use it... at least on the NV30.
The point I was trying to make was that Cg won't see widespread use unless it's useful across a wide range of video cards.
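Joe DeFuria's trade-off is easy to see numerically: an IEEE 754 half float has only about 11 significant bits, which is plenty for 8-bit colour arithmetic but not for large values that need fractional accuracy. A small illustrative sketch in Python (not shader code), using the `struct` module's half-precision format character:

```python
import struct

def to_half(x):
    """Round a Python float to IEEE 754 half precision (binary16) and back."""
    return struct.unpack('e', struct.pack('e', x))[0]

# 8-bit colour values survive a round trip through half precision:
# every value n/255 still maps back to the same 8-bit integer.
for n in range(256):
    assert round(to_half(n / 255) * 255) == n

# But with only ~11 significant bits, a large value like a texture
# coordinate of 1024.3 loses its fractional part entirely - the spacing
# between representable half values at 1024 is a whole unit.
print(to_half(1024.3))  # 1024.0
```

Whether that error matters depends on what the number is used for, which is exactly why the "don't use precision you don't need" argument only applies per-calculation.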
pocketmoon_ said:
Surely implementing lower precision in hardware is inherently faster. That's why Intel etc. give you the CHOICE of using doubles, floats, ints and so on. NV30 has that choice - other architectures don't have the flexibility, and arguably don't offer the performance they could if they at least supported half data types.
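The CPU analogy can be made concrete: picking a narrower type trades accuracy for speed and bandwidth, and the programmer decides whether the lost accuracy matters. A rough Python sketch (emulating a 32-bit `float` with the `struct` module, since Python's own floats are doubles):

```python
import struct

def to_f32(x):
    """Round a double to IEEE 754 single precision, emulating a C 'float'."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.1 ten thousand times at both precisions.
acc64 = 0.0
acc32 = 0.0
for _ in range(10000):
    acc64 += 0.1
    acc32 = to_f32(acc32 + to_f32(0.1))

# The exact answer is 1000.0; the narrower type drifts much further
# from it. Whether that drift is acceptable depends on the calculation -
# which is exactly the choice being argued for here.
print(abs(acc64 - 1000.0))  # tiny
print(abs(acc32 - 1000.0))  # orders of magnitude larger
```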
Xmas said:
Just a quick question: what prevents NVidia from 'silently' supporting half floats in their GLslang compiler, just spitting out a warning like "this shader won't compile on all hardware" but still running it (with increased performance)?
Xmas said:
DeanoC said: Edit: Unless you mean in a non-forced way, but then it would be another language with new datatypes?
Yes, I mean just that: compiling a shader where you actually write "half" in the source code. It's a reserved word anyway.