First of all, hi everybody. I've been lurking here for years, and now I've finally overcome my laziness and decided to register!
Disclaimer: I know nothing about programming except the very basics (little more than hello-world kind of stuff), and certainly nothing about hardware design, though over time I've formed a vague and simplified idea of many terms and concepts relating to 3D graphics.
If you answer, please try to explain it as simply as possible.
On to my question:
The GeForce FX had both PS 1.1 and (rather crappy) PS 2.0 shading units. NV has since abandoned this approach, but at the time it seemed like a good idea (well, in theory) to me.
Why wouldn't it be desirable (I'm thinking mainly about consoles here) to make a hybrid GPU that has different types of shader units, maybe even each with its own precision?
For example, a big pool of FP24 PS 2.0 units and a smaller pool of FP32 PS 3.0 ones for the shaders that actually need them.
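I know I just said I can barely program, but here's a toy sketch (in C++, with every name and number invented by me, so don't read anything into the exact figures) of the kind of unit split I'm picturing:

```cpp
// Toy model of the hypothetical hybrid GPU I'm describing.
// All names and numbers here are made up for illustration.
#include <cstdio>

struct ShaderPool {
    const char* model;     // shader model the units implement
    int         precision; // floating-point precision in bits
    int         count;     // how many units of this type
};

int main() {
    // A big pool of cheaper FP24 PS 2.0 units, plus a small pool
    // of FP32 PS 3.0 units for the shaders that actually need them.
    ShaderPool pools[] = {
        { "PS 2.0", 24, 24 },
        { "PS 3.0", 32,  8 },
    };
    for (const ShaderPool& p : pools)
        std::printf("%d x %s units @ FP%d\n", p.count, p.model, p.precision);
}
```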
It seems to me that such a configuration could, in most circumstances, with the kind of techniques and shaders in use now or (I think) in the next few years, offer higher performance per transistor.
I can imagine that this might make things a bit more difficult for programmers. But especially on a console, where the exact resources of the GPU are known and there are no drivers messing with the code, it could be a way to raise efficiency, since the coder can choose how to allocate the shaders: have a shader run on a PS 3.0 unit, or write a PS 2.0 version that runs a little slower on its own, but has the many faster PS 2.0 units ready to work on it.
Or even make two versions of some shaders and run whichever one fits the unit type that is less occupied at the moment.
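Here's the load-balancing idea in the same made-up C++ terms: a shader that ships in both a PS 2.0 and a PS 3.0 version simply goes to whichever pool is less busy at that moment. Again, everything here is hypothetical, just my guess at how it could work:

```cpp
// Toy sketch of the load balancing I have in mind: a shader that
// exists in both a PS 2.0 and a PS 3.0 version gets dispatched to
// whichever unit pool is relatively less occupied right now.
#include <cstdio>
#include <string>

struct Pool {
    std::string name;
    int units;  // total units of this type
    int busy;   // units currently occupied
    double load() const { return busy / double(units); }
};

// Pick the pool with the lower relative load for a dual-version shader.
Pool& pickPool(Pool& ps20, Pool& ps30) {
    return (ps20.load() <= ps30.load()) ? ps20 : ps30;
}

int main() {
    Pool ps20{"PS 2.0 (FP24)", 24, 20}; // 20 of 24 busy -> ~83% load
    Pool ps30{"PS 3.0 (FP32)",  8,  4}; //  4 of  8 busy ->  50% load
    Pool& chosen = pickPool(ps20, ps30);
    std::printf("Run the shader on the %s pool\n", chosen.name.c_str());
}
```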
Maybe this kind of load balancing would be too problematic? Or do programmers simply not WANT to have to deal with this (not to mention that shader development seems to be moving away from coders and toward artists, who might not be able to deal with these issues)?
Sure, it'd be simpler to have a single shader model, but I think this would be a pragmatic way of increasing efficiency and performance in real-world situations in the near future.
Or maybe there are also difficulties from a hardware design point of view? Or am I even making any sense here? I'm not so sure.
Anyway, in your opinion, is there any chance that we might see such a design decision in one of the next-gen console GPUs?