What's up with this unlimited instructions craze?

K.I.L.E.R

Unlimited instruction support is useless with current technology. I mean, can an NV30 or R300 handle that many instructions before the cards have trouble processing them? I would seriously doubt that.
I realise the need to remove limitations, but isn't this more about marketing hype than anything? How many instructions can the R300 handle (not theoretically, hard to say eh? :))? Will the R350 handle more than that at acceptable speeds in upcoming games?
 
The issue isn't that games will necessarily use super-long shaders; the issue is to catch exceptional cases and handle them for the developer automatically.

Let's say I wrote a shader that uses 160 instructions. Then I edit one thing, recompile with HLSL, and it's 161 instructions. BOOM, my code is broken. Wouldn't it be nice if the HW handled the few degenerate cases smoothly?


Today, when I write C code, the C compiler will allocate registers to each of my program variables. If I happen to have more live variables than registers on the CPU, the compiler will automatically "spill" the registers onto the stack, freeing up some of them for reuse. This allows arbitrarily complex code to be written without the developer having to try and "count" lines of code, or # of variables.
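A minimal sketch of the idea, assuming an ordinary C compiler and a register-poor target such as 32-bit x86 (the function and variable names below are made up for illustration):

```c
#include <stdio.h>

/* Illustrative only: twelve values stay live until the final sum, which is
 * more than the eight general-purpose registers of a 32-bit x86 CPU can
 * hold at once. A compiler targeting such a machine will typically spill
 * some of them to the stack and reload them as needed; the source code
 * never has to count registers. */
static int combine(int seed)
{
    int a = seed + 1, b = seed + 2,  c = seed + 3,  d = seed + 4;
    int e = seed + 5, f = seed + 6,  g = seed + 7,  h = seed + 8;
    int i = seed + 9, j = seed + 10, k = seed + 11, l = seed + 12;

    /* Every variable above is still live here. */
    return a * b + c * d + e * f + g * h + i * j + k * l;
}

int main(int argc, char **argv)
{
    (void)argv;
    /* Use argc so the compiler can't fold the whole thing to a constant. */
    printf("%d\n", combine(argc));
    return 0;
}
```

The code compiles and runs no matter how many registers the target has; whether and where spills happen is entirely the compiler's problem.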

The whole point of HLSL is to remove much of this low level housekeeping. If instead, I have to keep track of my register usage and shader length, for fear of "breaking", I may as well code in assembly.


It's just irritating having to count instructions, registers, cycles, and what have you. (Some developers enjoy doing it; that's why we have the demo scene and the 4k and 64k demo challenges. But most developers just want to be productive.)
 
It may not benefit real time graphics too much (talking about this generation), but for non-real time rendering it could be a huge advantage.

For example, if it takes 1 minute to render a frame that you've created using Maya (or whatever) on your CPU, imagine being able to send it off to your graphics card to render that same frame in about 4 or 5 seconds.
 
Fuz said:
It may not benefit real time graphics too much (talking about this generation), but for non-real time rendering it could be a huge advantage.

For example, if it takes 1 minute to render a frame that you've created using Maya (or whatever) on your CPU, imagine being able to send it off to your graphics card to render that same frame in about 4 or 5 seconds.

Heard that one before. :)
 
K.I.L.E.R (could you lose the periods in your nick? I always seem to get them wrong... :) ), read (or maybe re-read) what Tim Sweeney has to say about this (you can just pay attention to his last paragraph and forget the rest as far as your topic question goes); it's basically the same thing as DC's post above (who, bless his heart, always comes up with examples).
 
Reverend said:
K.I.L.E.R (could you lose the periods in your nick? I always seem to get them wrong... :) ), read (or maybe re-read) what Tim Sweeney has to say about this (you can just pay attention to his last paragraph and forget the rest as far as your topic question goes); it's basically the same thing as DC's post above (who, bless his heart, always comes up with examples).

K.I.L.E.R is just an acronym. :)
Yes, I have read what TS had to say, including the part about how the CPU only has 8 registers but we are able to use far more variables. I thought that was due to being able to store all the extra variables on the stack.
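Something like this, written out by hand (a purely illustrative sketch; in reality the compiler's register allocator does this invisibly, and the names here are made up):

```c
#include <stdio.h>

/* Hand-written sketch of what "spilling to the stack" amounts to. The
 * slots[] array stands in for the stack slots a compiler would use for
 * values that don't fit in registers: store them out, load them back
 * when they're needed again. */
static int sum_of_squares(const int *vals, int n)
{
    int slots[16];      /* "spilled" values live in stack memory */
    int total = 0;      /* a value that stays "in a register"    */

    if (n > 16)
        n = 16;

    for (int i = 0; i < n; ++i)
        slots[i] = vals[i] * vals[i];   /* store: write the value out */

    for (int i = 0; i < n; ++i)
        total += slots[i];              /* load: read it back when needed */

    return total;
}

int main(void)
{
    int vals[] = { 1, 2, 3, 4 };
    printf("%d\n", sum_of_squares(vals, 4));   /* prints 30 */
    return 0;
}
```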
 