More info about RSX from NVIDIA

Fafalada said:
unless you're processing quaternions, why would you want to normalize a r4 vector?
Same reason we have DOT4 in the ISA even though most of the time everyone just uses DOT3.

Well, whereas DOT4 is useful for various things, normalisation of an r4 vector would effectively be useful only for quaternions. Until we move to 4D spatial graphics, that is ;)
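To make the quaternion case concrete, here's a minimal Python sketch of a 4-component normalize (illustrative only - function names are made up, and this is of course not how the hardware implements it):

```python
import math

def nrm4(q):
    """4-component normalize. In practice this is mostly used to
    renormalize quaternions (w, x, y, z) after accumulated
    floating-point drift from repeated rotations."""
    inv_len = 1.0 / math.sqrt(sum(c * c for c in q))
    return tuple(c * inv_len for c in q)

# Length of (1,1,1,1) is 2, so each component normalizes to 0.5.
q = nrm4((1.0, 1.0, 1.0, 1.0))  # -> (0.5, 0.5, 0.5, 0.5)
```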

I was under the impression that NV4x already supported free Normalize4, so it would be weird if they downgraded G70 in that respect.

According to the NV_fragment_program2 spec, NV40 has normalization of an r3 vector that leaves the w component undefined.
 
bbot said:
Wow. Looks like G70 and RSX will be pretty good parts. C1 is beginning to look like doodoo.

I really think there's hardly cause for that. R500 is awesome - take the numbers with a grain of salt.

It's what Cell can bring to the table here, now that it seems RSX is a straight G70, that I'm most interested in.
 
bbot said:
Wow. Looks like G70 and RSX will be pretty good parts. C1 is beginning to look like doodoo.
:rolleyes: Nah... I don't think so. Even if RSX has 'bigger numbers', I'm not so sure it would be faster than C1, which should be quite a bit more efficient than RSX.
 
Efficiencies aside, my interpretation of the specs would lead me to believe that Xenos can be significantly faster in vertex-heavy work like a Z-only pass or extremely small triangles, while as the load shifts to pixel shading they become very comparable. A combination of Dave's comments plus Nvidia's re-use of NV40 block diagram components for the G70 leads me to think that the TMUs still block ALU1 in some way.
 
Rockster said:
What would RSX do with physics or collision data?

The "data" in this case is just vertices. The blog quote isn't really saying anything that hasn't been mentioned before - it just talks about greater physical modelling, about "movement" and how Cell can shine in making more things move more realistically, and about the possibility of doing more vertex shading on the CPU if necessary.
 
Rockster said:
What would RSX do with physics or collision data?
Let's imagine a boat on water. It looks crappy on current consoles because the water physics don't accurately respond to the object on it. With RSX having access to physics data, all kinds of cool things can be done. Imagine the PS3 rendering waves tossing a ship around. The GPU is able to render this scene accurately in real time according to the physical interaction between the ship and the waves. That's just one example I can think of.
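A rough Python sketch of the idea being described - CPU-side wave physics producing per-vertex offsets that the GPU would apply. All names here are hypothetical, and on real hardware the displacement step would run in a vertex shader rather than on the CPU:

```python
import math

def wave_height(x, t, amplitude=0.5, wavelength=4.0, speed=2.0):
    """CPU-side physics: height of a simple travelling sine wave
    at position x and time t."""
    k = 2.0 * math.pi / wavelength
    return amplitude * math.sin(k * (x - speed * t))

def displace_vertices(vertices, t):
    """GPU-side analogue: offset each vertex's y by the simulated
    wave height. On real hardware the heights would be handed to the
    GPU (e.g. as a displacement map or a vertex stream) and applied
    per vertex in a shader."""
    return [(x, y + wave_height(x, t), z) for (x, y, z) in vertices]

# A flat 'hull' strip of vertices along x, tossed by the wave.
hull = [(float(x), 0.0, 0.0) for x in range(5)]
tossed = displace_vertices(hull, t=0.25)
```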
 
Alpha's post reminded me - I guess, aside from "just" vertices, Cell could more generally be feeding the results of simulation into vertex or pixel shaders, and not just relating to object movement (data for lighting would be another example). Again, the concept isn't fundamentally new as such, but the sheer power + bandwidth makes it much more feasible than before.
 
I know the intention of the blog, I was just picking on the rough translation.

In Xenos' case, the "fluid reality" marketing term relates to texture access by vertex shader programs. I think the blogger was getting at a CELL-generated displacement map, among other things. Honestly, both consoles have interesting potential, and neither has an unmitigated advantage in any area. It's just different techniques that exploit their efficiencies.
 
Alpha_Spartan said:
Rockster said:
What would RSX do with physics or collision data?
Let's imagine a boat on water. It looks crappy on current consoles because the water physics don't accurately respond to the object on it. With RSX having access to physics data, all kind of cool things can be done. Imagine the PS3 rendering waves tossing a ship around. The GPU is able to render this scene accurately in real-time according to the physical interaction between the ship and the waves. That's just one example I can think of.
Interesting - how would that work in practice? What would the GPU do with that physics data? Would it offset the location of vertices based on it? Why can't this be done at the CPU level?
 
Alpha_Spartan said:
Rockster said:
What would RSX do with physics or collision data?
Let's imagine a boat on water. It looks crappy on current consoles because the water physics don't accurately respond to the object on it. With RSX having access to physics data, all kind of cool things can be done. Imagine the PS3 rendering waves tossing a ship around. The GPU is able to render this scene accurately in real-time according to the physical interaction between the ship and the waves. That's just one example I can think of.
There's nothing that can't be done on a current console or PC
 
nAo said:
Alpha_Spartan said:
Rockster said:
What would RSX do with physics or collision data?
Let's imagine a boat on water. It looks crappy on current consoles because the water physics don't accurately respond to the object on it. With RSX having access to physics data, all kind of cool things can be done. Imagine the PS3 rendering waves tossing a ship around. The GPU is able to render this scene accurately in real-time according to the physical interaction between the ship and the waves. That's just one example I can think of.
There's nothing that can't be done on a current console or PC

On a "cpu computes data and hands it to the GPU" level, or the level of Alpha's example, that's correct, but on a practical level I think there is a separation between what you can do with current PCs and consoles in this regard and what you can do with next-gen systems, if only because of the greater CPU power, not to mention bandwidth. That goes without saying, I guess - just wanted to clarify.
 
Quaz51 said:
I think it's 2*(vec4 + scalar) per pixel pipeline, not 2 vec4 + 1 scalar (2 vec4 + 2 scalar + 1 norm = 5 instructions).
total = 56 scalar ops, not 32

I was thinking that. And that would leave 7 fp ops for the nrm operation. Now what the hell is a nrm operation anyway? Is there any chance it refers to a texture address op?
 
Titanio said:
Alpha's post reminded me, I guess asides from "just" vertices, Cell could more generally be feeding results of simulation into vertex or pixel shaders, and not just relating to object movement or whatever (data for lighting would be another example). Again, the concept isn't fundamentally new as such, but the sheer power + bandwidth makes it much more feasible than before.

And that could also be done entirely within Xenos using the MEMEXPORT instruction.
 
Love_In_Rio said:
Titanio said:
Alpha's post reminded me, I guess asides from "just" vertices, Cell could more generally be feeding results of simulation into vertex or pixel shaders, and not just relating to object movement or whatever (data for lighting would be another example). Again, the concept isn't fundamentally new as such, but the sheer power + bandwidth makes it much more feasible than before.

And that could also be done entirely within Xenos using the MEMEXPORT instruction.

Sorry, not sure if I understand?
 
nrm = vector normalization.
nrm(V) = V/||V|| = V*(1/sqrt(DOT3(V,V)))
So you need a DOT3 (5 ops), an invsqrt (1 'high level' op) and a MUL3 (3 ops) to do a vec3 normalization: 9 ops total.
NV40 can do a nrm on a 16-bit-per-component vec3 per cycle (and it's free..).
If they haven't changed the nrm hardware on G70, I don't think Nvidia should be adding 32-bit math ops together with 16-bit math ops.
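The op tally above can be sketched in Python, with comments marking where the 5 + 1 + 3 = 9 scalar ops come from (a didactic sketch, not what the hardware's dedicated nrm unit does):

```python
import math

def nrm(v):
    """Normalize a 3-component vector: nrm(V) = V * (1 / sqrt(DOT3(V, V))).

    Op count as tallied in the post: DOT3 is 3 muls + 2 adds = 5 ops,
    the reciprocal square root is 1 'high level' op, and the final
    per-component multiply (MUL3) is 3 ops -- 9 scalar ops in total."""
    x, y, z = v
    dot3 = x * x + y * y + z * z       # DOT3: 5 ops
    inv_len = 1.0 / math.sqrt(dot3)    # invsqrt: 1 op
    return (x * inv_len, y * inv_len, z * inv_len)  # MUL3: 3 ops

n = nrm((3.0, 0.0, 4.0))  # approximately (0.6, 0.0, 0.8)
```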
 
Sorry, wrong quote - it was meant for Alpha_Spartan. But anyway, I think the MEMEXPORT function in Xenos allows the physics operations to be done in the ALUs and the results sent to main memory as vertex data, as new input for the vertex shaders. This way physics can be done on Xenos without CPU aid.
 