Siggraph Rendermonkey Details and more....

I haven't read the final spec for DX9 (Is it available for download? Does it exist?), but I would be extremely surprised if it didn't allow 1024 PS instructions. The only shader length limits in the DX8 spec were of the form "at least X instructions".

I would also be rather surprised if the NV30 instruction sets don't make it into PS/VS 2.1 (or some other version number).
 
Didn't you know... Nvidia owns DirectX too... they tell everyone what to do. It wouldn't make sense for the operating system creator (the one whose OS drives the video card) to have a say in a 'standard'...

This is where Cg comes in: it is not Microsoft-controlled... Nvidia can change, twist, and optimize code paths to its heart's content and no one can do a damn thing about it :rolleyes:
 
Basic said:
I haven't read the final spec for DX9 (Is it available for download? Does it exist?), but I would be extremely surprised if it didn't allow 1024 PS instructions. The only shader length limits in the DX8 spec were of the form "at least X instructions".

I would also be rather surprised if the NV30 instruction sets don't make it into PS/VS 2.1 (or some other version number).

Last time I looked at the Meltdown specs:
PS 2.0

2 color iterators
8 texcoord iterators
16 textures/samplers
16 temporary registers
32 constant registers
Arbitrarily intermixable instructions:
32 address ops
64 math ops
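
To put those numbers in perspective, here's a rough sketch (my own, in HLSL/Cg-style syntax, with made-up sampler and constant names) of the kind of ps_2_0-level fragment shader that has to live inside those budgets:

Code:
// Hypothetical ps_2_0-style fragment shader, HLSL/Cg syntax.
// Everything here has to fit the PS 2.0 budgets listed above:
// 8 texcoord iterators, 16 samplers, 32 constants, 32 address ops, 64 math ops.
float4 main(float2 baseUV   : TEXCOORD0,      // 2 of the 8 texcoord iterators
            float2 detailUV : TEXCOORD1,
            uniform sampler2D baseMap,        // 2 of the 16 samplers
            uniform sampler2D detailMap,
            uniform float4 tintColor) : COLOR // 1 of the 32 constant registers
{
    float4 base   = tex2D(baseMap, baseUV);   // texture reads count against the 32 address ops
    float4 detail = tex2D(detailMap, detailUV);
    return base * detail * tintColor;         // arithmetic counts against the 64 math ops
}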
 
The PURPOSE of DX is to have a LEVEL playing field for all hardware manufacturers... so a title can be developed on DX and run approximately the same on all cards.

This is going to be my last post in this thread, since it's obviously going nowhere.

PS1.4 was obviously added to enable the full featureset of ATI's R200 hardware (and PS1.3 was added to enable most of the featureset of NVIDIA's NV25 hardware). Which is why they are occasionally referred to as PS1.NV and PS1.ATI.

Both versions are very closely spec'ced to the hardware that originally implemented them (the "phase" instruction is not a feature of the R200 -- it's a limitation put in place so that ATI could better optimize dependent fetches). SiS may have a brilliant pixel shader implementation in Xabre that is severely handicapped due to the restrictions placed in the PS1.1 spec (it's not likely, but it's possible)... a spec which bears more than a passing resemblance to NV2x texture shaders/register combiners in OpenGL.
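
(For anyone unfamiliar with the term: a dependent fetch is just a texture read whose coordinates come from the result of an earlier read or calculation. A rough sketch in HLSL/Cg-style syntax, with invented sampler names -- on R200-class PS 1.4 hardware, the fetches on either side of that dependency are what the "phase" marker separates.)

Code:
// Illustrative dependent texture fetch (hypothetical sampler names).
// On R200-class PS 1.4 hardware, the first fetch and the dependent fetch
// fall on opposite sides of the "phase" boundary.
float4 main(float2 uv : TEXCOORD0,
            uniform sampler2D offsetMap,
            uniform sampler2D colorMap) : COLOR
{
    float2 offset = tex2D(offsetMap, uv).rg * 0.05;  // first fetch, plus some math
    return tex2D(colorMap, uv + offset);             // dependent fetch: coords derived from a texture read
}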

If you think that PS1.4 was created by Microsoft, and then within a month ATI had hardware which mapped _exactly_ to the new features, you're completely mistaken about the hardware design process. ATI had hardware that went above and beyond the PS1.1 spec, and Microsoft decided to extend DirectX to support it. It's really no different than OpenGL extensions, except for the fact that all progress is ultimately governed by Microsoft.

The PURPOSE of DX is to have a LEVEL playing field for all hardware manufacturers... so a title can be developed on DX and run approximately the same on all cards.

That was the purpose. That purpose was completely lost with the introduction of PS1.3/1.4. Ultimately, it's now up to the developers whether a title runs approximately the same on all cards, or whether the advanced features of some cards are used to provide more pleasant (or faster) experiences for their customers.

Huh? Wouldn't the company defining the features to be supported within its own API be considered the standard to which hardware manufacturers should target their hardware?

Nope... at least not the very first manufacturers. As I said above -- DirectX specs are finished long after hardware design begins (remember, hardware design is the first step in the long process between thinking it would be nice to have a new chip and actually having one). For products like SiS's Xabre, which were started after the DirectX specs were finalized, targeting the DX spec might make sense. But you can't target a spec that doesn't exist.
 
Doomtrooper,

What exactly are you arguing against here? Are you arguing about the need to go beyond the DX9 spec, or are you arguing about how useful Cg is? Make up your mind please; you appear to be jumping back and forth between points.

For one thing, how do we know that the DX spec hasn't changed slightly since ATI finalized the Radeon 9700 hardware? Simply put, we don't know, and I wouldn't rule it out. After all, it happened to Nvidia with DX8, correct? DX8.1, anyone?

Also, I don't know why you're making such a huge deal out of all of this. For all we know, these features WILL be included in DX9, or MS may well allow more than the specified/recommended number of instructions. If I remember correctly, for many DX features MS only requires you to have at "least" what they state in order to be compliant; it doesn't say you are restricted to the numbers they ask for.

You're going to look pretty foolish if this ends up being the case.
 
Gking,

I'm not a graphics engineer, but I know how it works with DirectX... yes, ATI worked with Microsoft to get PS 1.4 implemented. Yes, this had to be approved by the company that engineered the operating system, yet Microsoft does not make graphics cards. It is a conflict of interest to allow a competing company to set the standards in an HLSL.
So let's fast forward... Assume Cg takes over as the HLSL of choice... ATI wants to implement Pixel Shader 3.0 in DX10... is Nvidia going to support a Pixel Shader 3.0 that they didn't design? NO.
Does Nvidia support Pixel Shader 1.4 in Cg right now? Even though ATI cards are currently the only ones that support that highest pixel shader level, it is NOT an ATI-specific feature... all the documentation needed to include PS 1.4 in hardware is available to anyone.
I thought Cg was platform independent, yet it doesn't support the DX 8.1 standard of PS 1.4... I thought Cg was neutral...

I simply want Cg to go away.
 
Assume Cg takes over as the HLSL of choice... ATI wants to implement Pixel Shader 3.0 in DX10... is Nvidia going to support a Pixel Shader 3.0 that they didn't design? NO.

Why are you making an assumption like this? For the sake of the discussion (which has vanished since you started complaining), do you actually know that you CAN'T use pixel shader 1.4 in Cg as it's currently written in v1.0? I mean yes, it would make sense that that's the case, since Nvidia created it for themselves first, but does it completely restrict you from using any DX features? Also, will that NOT change once they release the NV30 and the second version of Cg?
 
Of course it will, because Nvidia chips will finally be able to use it with the NV30 :LOL:


I mean yes, it would make sense that that's the case, since Nvidia created it for themselves first

Huh? I thought Cg was platform independent, designed for DX8 hardware (yes, the Radeon 8500 is DX 8.1)...

Nvidia only wants what is good for them, and they won't pull any wool over my eyes.
 
Why would you assume NVIDIA would have control over m$'s HLSL if they included features from Cg? (Apart from pigheadedness.)
 
Doomtrooper:
That's a preliminary presentation, and it's not strange that they just used the numbers from the reference platform.
 
"
Nvidia only wants what is good for them, and they won't pull any wool over my eyes.
"

Well, I hope so, and I hope that ATI wants the same.
I don't think ATI, AMD, Intel and so on are any different in that respect.
Of course, if ATI doesn't want what is best for them, then it's no wonder to me that they don't make any money :)

I like Cg and I think Microsoft likes it too. Anything that might boost Xbox or Xbox 2 sales in the future is good for them.
If Microsoft didn't like Cg, the guys at Nvidia would not have announced it the way they did.

I simply think that's competition. How boring would this business be if hardware companies only developed according to the next DirectX version?
ATI will try to use their vote to block everything concerning OpenGL and Nvidia, and Nvidia will try to get developers on their side.
That's how business works - just face it guys :)
Maybe the NV3x architecture goes well beyond DirectX. I don't know, but Nvidia wants the best support for their hardware, and that is what the other companies want too.
 
I don't... I've seen enough evidence to stand by my claim... Cg is not needed. RenderMonkey is the proper way to approach the shader complexity issue.
If Nvidia had spent 10% of its time pushing OGL 2.0 through instead of making their own language... this would never have happened.
We would then have the DX9 HLSL, the OGL 2.0 HLSL, and RenderMonkey-style plugins from both ATI and Nvidia... too simple :p
 
Well, OpenGL 2.0 doesn't help the Xbox market at all.
Most games on PC are DirectX games - only a few are OpenGL.

Concerning RenderMonkey: not enough has been said about it yet, but I think it might be a nice tool too.
But finally, I have to say that ATI is not in a position, market-share-wise, to set up any standards. I am sorry, dude, but that is how business works.

They will try their way with RenderMonkey, and Nvidia will go its way with Cg.
The future will show which is the better way. Maybe it's one not even mentioned here - I don't know.
I don't think we can judge right now which is the best way. We don't know enough, and so far there is no practical experience with these tools.
Nvidia will try to get the best developer support for their hardware, and ATI will try the same.
I can live with that.
 
Why not use both? I personally like the Cg language approach to doing shaders. It spans DX9 and OGL 2.0 because you write Cg and it then gets compiled into DX9 or OGL 2.0 code... unless I'm gravely mistaken about a lot of this (I haven't had a chance to read the whole thread).
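
Something like this is what I mean - one small Cg-style fragment program (the names are my own invention), which the compiler could in principle turn into a DX pixel shader or an OpenGL fragment program depending on the profile you ask for (the exact profile names depend on the Cg release and the hardware):

Code:
// Minimal Cg fragment program (illustrative names only).
// The same source would be compiled once per target profile,
// e.g. with the cgc compiler's -profile switch, once for a
// DirectX pixel shader target and once for an OpenGL target.
float4 glow(float2 uv : TEXCOORD0,
            uniform sampler2D baseMap,
            uniform float4 glowColor) : COLOR
{
    float4 base = tex2D(baseMap, uv);
    return base + glowColor * base.a;   // add a glow term scaled by the base alpha
}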

Wouldn't it be beneficial if, say, I'm developing on a Radeon 9700 (and I will be), to write Cg code in RenderMonkey (once a plug-in is made) and do it that way? As I understand it, NVIDIA isn't making Cg itself proprietary; rather, NVIDIA is making their Cg development program proprietary. If that's the case, then I say: write a Cg plug-in for RenderMonkey and enjoy BOTH worlds at once :)
 
Yes, I know, ATI is losing because they are aiding in developing 2 other VENDOR-independent shader languages:

1) DX9 HLSL
2) OGL HLSL

RenderMonkey plugs directly into them... yes... yes... I see the error in ATI's thinking :LOL:
 
multigl2 said:
Why not use both? I personally like the Cg language approach to doing shaders. It spans DX9 and OGL 2.0 because you write Cg and it then gets compiled into DX9 or OGL 2.0 code... unless I'm gravely mistaken about a lot of this (I haven't had a chance to read the whole thread).

Wouldn't it be beneficial if, say, I'm developing on a Radeon 9700 (and I will be), to write Cg code in RenderMonkey (once a plug-in is made) and do it that way? As I understand it, NVIDIA isn't making Cg itself proprietary; rather, NVIDIA is making their Cg development program proprietary. If that's the case, then I say: write a Cg plug-in for RenderMonkey and enjoy BOTH worlds at once :)

RenderMonkey requires a shader language (DX9 HLSL or OGL) or a software renderer (Maya) to plug into... Nvidia's Cg is not on ATI's to-do list :). Look at pages 2-4 of the ATI .pdf.
That will not work; both tools are designed to make shaders easier for developers... the question is, if OGL survives, why put any effort into a third HLSL (Cg)?
 
I wish I had more time to read about both Cg and RenderMonkey (my girlfriend is dragging me away from the computer!), but from what I've read so far, it seems both are open source, or are becoming open source. Why couldn't I, say, write a Cg plug-in for RenderMonkey?
 
RenderMonkey and Cg are quite different approaches to a similar problem. Both have their pros and cons. Cg is probably more easily implemented directly in the coding of a game, plus it has the ability to work with content creation software as well. RenderMonkey appears to be more of a content-creation-focused tool, while still being able to produce shader code for a variety of languages. What it comes down to is that both can generate shader code more easily than was possible until now. Bravo! What I have yet to see is whether RenderMonkey is really "neutral" concerning its output, or whether it is optimizing its code for ATI hardware. There is simply no way to know that yet from what I have read...

It's nice, Doomtrooper, that you wish for Cg to go away (and not very surprising), but from what I hear from developers, some are actually happy that it's there! Fact remains, Cg is available now and has been for a while, plus it will be compatible with upcoming APIs and even HLSLs (at least DX9, maybe even OGL 2.0). It's all nice and dandy that other languages are up and coming, but DX9 is probably still a good 3-4 months away, and OGL's HLSL will most likely take a good while longer (OGL has been dragging along slowly for years now, hardly because of Nvidia or MS alone holding it back; many other companies have their own agendas, and blaming Nvidia is dramatically oversimplifying the situation). Believe it or not, some developers actually like to have tools at their disposal sooner rather than later - a LOT can be accomplished in a couple of months of programming, even more so if that programming is simpler than before thanks to said tool!

The main concern I have with Cg is the same one I have with RenderMonkey: whether the code it produces is "neutral" or not. Right now it isn't (no PS 1.4, and we don't know what else is going on), but that can change with time; Nvidia is certainly opening Cg up and providing documentation that should help others improve the usefulness of Cg for hardware other than Nvidia's.

Like I said, I have my doubts about how optimal the shader code produced by both tools will be for competing IHVs' products. Unlike Doomtrooper, I don't automatically see ATI doing only good vs. Nvidia being out to take over the world. Sometimes his statements remind me of Pinky and the Brain episodes based on Chris Carter scripts, with all the conspiracies and stuff going on; no offence intended...
 
There is no evidence, NOT ONE IOTA, that Cg will only work on NV30.


Secondly, are you people complete idiots? The only functional difference posted so far between NV30 and DX9 pixel shaders is the increased instruction count. The R300 is also functionally different than DX9. DX9 specifies that pixel shaders are a maximum of 96 instructions long, but the R300 extends this to 160!

All Cg 2.0 will do for the NV30 is allow it to generate longer programs without going to multipass - the exact same thing a programmer would do if coding for it by hand!


Doomtrooper, you don't sound like a developer, so why are you even participating in this discussion?
 
DX9 specifies that pixel shaders are a maximum of 96 instructions long, but the R300 extends this to 160!

I think the two numbers describe exactly the same limit, like I posted elsewhere, since ATI documentation refers to DX9 pixel shaders as both 96 and 160 instructions, and the example shaders make a pretty significant distinction between rgb and alpha operations (count the co-issued rgb and alpha halves of each math op separately and the same 96-slot budget can be quoted as 160).

The only functional difference posted so far between NV30 and DX9 pixel shaders is the increased instruction count

Derivatives and condition codes, per-pixel sine and cosine, more temporaries and constants, no limits on texture fetching. Derivatives can be emulated with multiple passes, but that wouldn't be terribly fun. Trig functions can be computed using Taylor series, but you'll fill up your instruction slots pretty quickly if you need a bunch of them.
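
To give an idea of the cost, here's a rough sketch (HLSL/Cg-style syntax, my own illustration) of a Taylor-series sine. Even this short version eats roughly 8-10 arithmetic slots per call, where hardware with native per-pixel sine/cosine pays a single instruction (or close to it):

Code:
// Rough sketch: Taylor-series approximation of sin(x), reasonable for |x| <= pi.
// Costs roughly 8-10 arithmetic instructions per call on hardware
// without a native sine/cosine instruction.
float sin_taylor(float x)
{
    float x2 = x * x;
    float x3 = x2 * x;
    float x5 = x3 * x2;
    float x7 = x5 * x2;
    // sin(x) ~= x - x^3/3! + x^5/5! - x^7/7!
    return x - x3 / 6.0 + x5 / 120.0 - x7 / 5040.0;
}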
 