ultimate_end
Newcomer
RSX redundancy
This is something that has bugged me for ages, ever since Ken Kutaragi made the comment that RSX will also include redundancy measures to improve yields - just as Cell does with its disabled SPE. I keep wondering what these redundancy measures are. The topic of RSX redundancy has been briefly touched on by people here before, but I've been waiting all this time for someone to make an actual thread of its own (I'm not a very pro-active person). So I thought, what better way to discuss this than with a poll?
Just for the record, I've always thought option 1 was a good idea, but I'm unsure of how effective it would be. I really want to know what everyone thinks, even if it's very much just speculation at this point. If I haven't been very clear, or if you don't know what an FP16 normalize is (I don't really either), please vote anyway. Any further speculation is very welcome.
Where appropriate, for the purpose of this poll I have made the following assumptions:
1. RSX is directly based on an nVidia G7x GPU.
2. KK isn't BS-ing about RSX redundancy :smile:.
3. The FP16 normalize operation has been counted as part of RSX's "136 shader ops per clock" (E3 presentation), giving a maximum of (FP16nrm + 2x vec2) + (2x vec2) or (FP16nrm + vec3 + scalar) + (vec3 + scalar) operations per clock, i.e. 5 ops/cycle, per pixel shader.
4. Vertex shaders represent 2 shader ops per clock each.
5. I don't know what a normalize operation is, or what it is used for, so I'm not sure whether it should be classified as a "shader op" or not. I only know that it runs in parallel with the main shader ALU(s), on the first shader unit in each pixel pipe.
6. AFAIK the "mini-ALUs" are there for shader model backwards compatibility and as such do not run in parallel with the main ALUs they are attached to (thereby not adding to the maximum shader ops per clock). Please Dave or somebody correct me if I'm wrong about what these mini-ALUs do.
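For anyone else fuzzy on the terms above, here's a little sketch of what a normalize does and how the "136 shader ops per clock" figure could add up under assumptions 3 and 4. The unit counts (24 pixel pipes, 8 vertex shaders) are my assumed G70-style breakdown, not anything official:

```python
import math

# A "normalize" scales a 3-component vector to unit length - common in
# lighting math, e.g. for surface normals and light-direction vectors.
def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

print(normalize((3.0, 0.0, 4.0)))  # -> (0.6, 0.0, 0.8)

# Sanity check of "136 shader ops per clock" under my assumptions:
PIXEL_PIPES = 24     # assumed G70-style pixel pipeline count
OPS_PER_PIXEL = 5    # 2 dual-issue ALUs (4 ops) + the FP16 normalize
VERTEX_UNITS = 8     # assumed vertex shader count
OPS_PER_VERTEX = 2   # assumption 4 above

total = PIXEL_PIPES * OPS_PER_PIXEL + VERTEX_UNITS * OPS_PER_VERTEX
print(total)  # -> 136
```

If those counts are right, the arithmetic does land exactly on 136, which is partly why I'm inclined to believe the FP16 normalize was counted as a shader op.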
---
Here is a diagram of G70's pixel shader for reference (hijacked from Dave's G70 article):
Disclaimer: Option 5 is just a little joke. Otherwise, why would Sony put an SLI 6800 Ultra setup in the PS3 devkits when a single 6200 or 6600GT would do?