The X-Ray engine (Stalker) uses some nVidia-only features :(

After a visit to the Stalker forums, I asked if they are still using nVidia-only features, which would result in visual effects visible on nVidia cards only :( and why?

They stated nVidia's shaders are longer compared to ATI's, but we know that's not relevant, because nVidia's shaders are way too slow to use their full length. Am I right?

Look at this small thread:

http://www.gsc-game.com/index.php?t=community&s=forums&s_game_type=xr

Maybe we can save Stalker.
 
They stated nVidia's shaders are longer compared to ATI's, but we know that's not relevant, because nVidia's shaders are way too slow to use their full length. Am I right?

I believe a few long shaders don't lose as much performance as many short ones do.
Correct?
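
For what it's worth, the "longer" claim just reflects the limits ARB_fragment_program reports. A rough sketch (untested; it assumes the extension entry point was already fetched, e.g. with wglGetProcAddress) of reading them back:

#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_fragment_program tokens */

/* entry point assumed fetched elsewhere, e.g. via wglGetProcAddress */
extern PFNGLGETPROGRAMIVARBPROC p_glGetProgramivARB;

void print_fp_limits(void)
{
    GLint total, alu, tex;

    p_glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                        GL_MAX_PROGRAM_INSTRUCTIONS_ARB, &total);
    p_glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                        GL_MAX_PROGRAM_NATIVE_ALU_INSTRUCTIONS_ARB, &alu);
    p_glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                        GL_MAX_PROGRAM_NATIVE_TEX_INSTRUCTIONS_ARB, &tex);

    /* R3xx reports on the order of 64 ALU + 32 TEX instructions,
       NV3x reports up to 1024 -- the "longer shaders" in question */
    printf("max instructions: %d (native ALU %d, native TEX %d)\n",
           (int)total, (int)alu, (int)tex);
}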
 
Well, IIRC, the F-buffer isn't exposed under DX9 at the moment?

I think the game will feature a handful of very slow longer shaders, mostly for demo purposes. And that's not a problem; I find it cool that Nvidia hardware is actually used to push the envelope. As the developer said, ATI hardware will run the game fine. I think we can all agree that Nvidia holding back the development or improvement of shader games because their hardware can't handle it is bad, but I don't see anything wrong with developers using some of the most advanced GFFX features provided the performance is decent. Let's be fair...
 
Has anyone tried to write and run a fragment program for the GeforceFX that is longer than what the R3xx series can support (ignoring the F-Buffer, of course)? Given the FX's slow FP performance and significant register usage issues, would such a program run well enough to be practical in a realtime engine?

Also, what features outside of the programmable pipeline does NVidia have an advantage over ATI in?
 
The F-buffer isn't exposed, period, and I don't know if it will be.

The F-buffer is apparently an OpenGL 2.0 feature and can't be used transparently on OpenGL 1.x, which basically means it either can't be used transparently anywhere or only works through glslang. Now that glslang is in 1.5 as an extension, does that mean ATI will be exposing the F-buffer? I rather doubt it. The F-buffer is vapourware :/
 
Bah... ATI will probably release some extension to support it in OpenGL 1.x. It'll probably be just a new token for glEnable/glDisable.
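
Purely speculative, but it could be as simple as this; the extension string and the token below are invented, since nothing of the sort has actually shipped:

#include <string.h>
#include <GL/gl.h>

#define GL_F_BUFFER_ATI 0x8F00   /* hypothetical token, made up for illustration */

void try_enable_fbuffer(void)
{
    /* hypothetical extension name -- no such extension exists today */
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (ext && strstr(ext, "GL_ATI_f_buffer"))
        glEnable(GL_F_BUFFER_ATI);   /* long shaders would then multipass transparently */
}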
 
CorwinB said:
but I don't see anything wrong with developers using some of the most advanced GFFX features provided the performance is decent. Let's be fair...

I agree, but aren't these effects also possible on a R9800 Pro via another route?

I'm afraid they won't expose the full force of a R9800 in an engine I'm looking forward to so much.

The F-buffer is really vapourware?
 
The F-buffer is really vapourware?

IIRC, the F-buffer is already exposed through an ATI-specific OpenGL extension, and may be exposed in the next revision of DX9 (together with some of the PS2.0+ features of the NV3x). I may be wrong, though.
 
Ostsol said:
Has anyone tried to write and run a fragment program for the GeforceFX that is longer than what the R3xx series can support (ignoring the F-Buffer, of course)? Given the FX's slow FP performance and significant register usage issues, would such a program run well enough to be practical in a realtime engine?
Yes, this has already been done: http://www.ati.com/developer/ashli.html.
Also, what features outside of the programmable pipeline does NVidia have an advantage over ATI in?
None that I am aware of. In fact, I would argue the opposite since ATI is exposing float buffers, MRTs, etc.
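
For example, the MRT side is already usable through ATI_draw_buffers; a small sketch (untested, entry point assumed loaded, buffer choice arbitrary):

#include <GL/gl.h>
#include <GL/glext.h>

/* entry point assumed fetched elsewhere, e.g. via wglGetProcAddress */
extern PFNGLDRAWBUFFERSATIPROC p_glDrawBuffersATI;

void bind_two_render_targets(void)
{
    GLint maxBufs = 1;
    glGetIntegerv(GL_MAX_DRAW_BUFFERS_ATI, &maxBufs);   /* 4 on R3xx */

    if (maxBufs >= 2) {
        /* a fragment program with "OPTION ATI_draw_buffers;" can then
           write result.color[0] and result.color[1] in one pass */
        static const GLenum bufs[2] = { GL_BACK_LEFT, GL_AUX0 };
        p_glDrawBuffersATI(2, bufs);
    }
}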
 
CorwinB said:
The F-buffer is really vapourware?

IIRC, the F-buffer is already exposed through an ATI-specific OpenGL extension, and may be exposed in the next revision of DX9 (together with some of the PS2.0+ features of the NV3x). I may be wrong, though.

I didn't know it was exposed through the ATI-specific functions, but yeah, it's not in the ARB shaders to my knowledge; it wasn't in older 9800 drivers.

Yeah, dunno if DX9 shaders use the F-buffer, but I don't think anyone has said that they use it anywhere.
 
Ostsol said:
Also, what features outside of the programmable pipeline does NVidia have an advantage over ATI in?

Dynamic Branching support in the VS is the biggest one IMO - unless that would fit in the "programmable pipeline" category? I think not, but then again, it's 21:10 here...
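
Something like this, I mean -- a rough, untested sketch of a vs_2_x shader branching on per-vertex data (the constant layout is made up). vs_2_0 on R3xx only branches on constant booleans, so you'd need two shaders or a flattened branch there:

#include <d3dx9.h>

/* vs_2_x exposes dynamic flow control (if_gt below branches on register
   contents); vs_2_0 only has static flow control on constant booleans */
static const char g_branchyVS[] =
    "vs_2_x\n"
    "dcl_position v0\n"
    "def c8, 0.0, 0.0, 0.0, 0.0\n"
    "mov r0, v0\n"
    "if_gt r0.y, c8.x\n"        /* per-vertex condition, not a constant */
    "  m4x4 r1, v0, c0\n"       /* transform with matrix in c0..c3 (layout made up) */
    "else\n"
    "  m4x4 r1, v0, c4\n"       /* transform with matrix in c4..c7 */
    "endif\n"
    "mov oPos, r1\n";

HRESULT AssembleBranchyVS(LPD3DXBUFFER *ppCode, LPD3DXBUFFER *ppErrors)
{
    /* assembles anywhere; whether CreateVertexShader accepts the result
       depends on D3DCAPS9's VS20Caps.DynamicFlowControlDepth */
    return D3DXAssembleShader(g_branchyVS, sizeof(g_branchyVS) - 1,
                              NULL, NULL, 0, ppCode, ppErrors);
}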


Uttar
 
He has a time machine, or for whatever reason uses military time... you never know with him...
 
Uttar said:
Ostsol said:
Also, what features outside of the programmable pipeline does NVidia have an advantage over ATI in?

Dynamic Branching support in the VS is the biggest one IMO - unless that would fit in the "programmable pipeline" category? I think not, but then again, it's 21:10 here...


Uttar

Ah, past your bedtime is it, Uttar? ;)
 
9:10 in the p.m.?! How do you stay awake that late--intravenous caffeine?

:p
 