State of 3D Editorial

What causes the performance degradation from using additional temp registers on the NV3x architecture?

Is it storing values off-chip, or into some slower cache?
I think the threshold at which performance starts to suffer was mentioned earlier, but I couldn't find it.
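(For what it's worth, the explanation usually offered for this - not something established in this thread - is that the temps aren't spilled off-chip at all: the register file is a fixed size, so every extra live temp per fragment reduces how many fragments the chip can keep in flight to hide texture latency. A toy back-of-the-envelope sketch of that idea in Python; REGISTER_FILE_SLOTS and the latency figure are made-up illustrative numbers, not real NV3x specs.)

Code:
# Illustrative only: made-up sizes, not real NV3x figures.
REGISTER_FILE_SLOTS = 256   # assumed FP32 temp slots available per pixel pipe
TEXTURE_LATENCY = 100       # assumed cycles of texture latency to cover

def pixels_in_flight(fp32_temps_per_pixel):
    """More live temps per pixel -> fewer pixels that can be kept in flight."""
    return REGISTER_FILE_SLOTS // fp32_temps_per_pixel

for temps in (2, 4, 8):
    n = pixels_in_flight(temps)
    print(f"{temps} temps -> {n} pixels in flight, latency hidden: {n >= TEXTURE_LATENCY}")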
 
Another interesting aspect of the NV3x hardware is the ability to combine several simple shader instructions into one complex instruction.

Apparently the nvidia cards have a JIT compiler on their chips now?????
 
nelg said:
penstar said:
NVIDIA on the other hand looks to have 3 vertex units on the NV35/36/38 series of GPU’s, but these units nearly reach the VS 3.0 level of complexity
What's that saying about being half pregnant?

OMG... :LOL:

that's just tff... :cool:
 
bloodbob said:
Another interesting aspect of the NV3x hardware is the ability to combine several simple shader instructions into one complex instruction.

Apparently the nvidia cards have a JIT compiler on their chips now?????

I wonder if he was referring to how NV3x can collapse what used to take many passes on previous generations into fewer passes. However, that is more an avenue the architecture opens up to programmers than something built into the hardware itself.
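(Whether or not NV3x does this in hardware, folding simple operations into one complex one is exactly the kind of peephole rewrite a driver-side shader recompiler can do - for example, a multiply followed by a dependent add becoming a single MAD. A minimal Python sketch of the idea; the instruction encoding and the pass itself are hypothetical, not NVIDIA's actual driver.)

Code:
from dataclasses import dataclass

@dataclass
class Instr:
    op: str       # e.g. "MUL", "ADD", "MAD"
    dst: str      # destination register
    srcs: tuple   # source registers / constants

def fuse_mul_add(program):
    """Peephole pass: MUL r,a,b followed by ADD d,r,c  ->  MAD d,a,b,c.
    Only safe when the MUL result isn't read anywhere else (ignored here for brevity)."""
    out, i = [], 0
    while i < len(program):
        cur = program[i]
        nxt = program[i + 1] if i + 1 < len(program) else None
        if cur.op == "MUL" and nxt and nxt.op == "ADD" and cur.dst in nxt.srcs:
            other = next(s for s in nxt.srcs if s != cur.dst)
            out.append(Instr("MAD", nxt.dst, (*cur.srcs, other)))
            i += 2
        else:
            out.append(cur)
            i += 1
    return out

prog = [Instr("MUL", "r0", ("v0", "c0")), Instr("ADD", "r1", ("r0", "c1"))]
print(fuse_mul_add(prog))  # [Instr(op='MAD', dst='r1', srcs=('v0', 'c0', 'c1'))]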
 
Reverend said:
Dave said:
I'll bet not many - why the hell aren't ATI telling us about these things?
Because, truth be told, these sorts of things fly over 99% of people's heads. By "people", I'm not talking about 3D enthusiasts like many here. By "people", I mean those that really matter, folks that buy video cards.

....

Of course, it wouldn't hurt ATI one bit if they took that one further step of educating media outlets... but then again, most media outlets don't bother to understand 3D anyway and just regurgitate whatever is given to them.

Rev, it matters not that perhaps .001% of your target market may actually understand the principles of what's being said. This is what I term NVIDIA's "air of infallibility" - even after the shit they have come through this year they still come out smelling of roses with much of the media, and the driver compiler is being touted as one of the largest contributing factors (despite the fact that they haven't actually quantified exactly how much of the performance gain is really due to the compiler and how much is really down to the same old tricks).

It matters not that few people will read and understand it; all that matters, in fact, is that said media outlets do parrot it. Take the shader compiler as an example: every review site is currently congratulating NVIDIA for this driver compiler, but had ATI actually educated some people when they did it, people might be saying "Well, NVIDIA were late, but they've finally done it - but why haven't they got some of this stuff in hardware?"

Understanding is nothing, perception is everything.
 
Of course Cg was heavily optimized for NVIDIA parts, but NVIDIA also took great pains to make Cg run well on the competitions’ parts.

This part of NVIDIA's Cg effort must have completely slipped me by. I certainly don't remember it ever producing code that was remotely optimal for an R300.
 
andypski said:
Of course Cg was heavily optimized for NVIDIA parts, but NVIDIA also took great pains to make Cg run well on the competitions’ parts.

This part of NVIDIA's Cg effort must have completely slipped me by. I certainly don't remember it ever producing code that was remotely optimal for an R300.

Tss, you ATI guys are so negative. That thread http://www.beyond3d.com/forum/viewtopic.php?t=7422&postdays=0&postorder=asc&start=0 showed us how great Cg was for ATI cards, and you would dare to be doubtful? :devilish:
 
PatrickL said:
andypski said:
Of course Cg was heavily optimized for NVIDIA parts, but NVIDIA also took great pains to make Cg run well on the competitions’ parts.

This part of NVIDIA's Cg effort must have completely slipped me by. I certainly don't remember it ever producing code that was remotely optimal for an R300.

Tss, you ATI guys are so negative. That thread http://www.beyond3d.com/forum/viewtopic.php?t=7422&postdays=0&postorder=asc&start=0 showed us how great Cg was for ATI cards, and you would dare to be doubtful? :devilish:
Yup - that was a classic example of the 'great pains' being taken to make it run well on the competition's parts right there.

The only great pains produced by Cg in that case would have been great pains for us, not them.

It's just as well we made our hardware so fast, otherwise taking that 30%+ performance hit from CG might have actually managed to make us look slower than the competition... ;)
 
Just a quick Ooops apology :oops: for not looking in this section first before reporting this strange article - taa
 
I admit I haven't read the whole article; I just quickly scanned the paragraph headings to see if anything could tell me whether the whole thing was BS or not... and I stumbled on the register usage paragraph.

That paragraph was so hilariously wrong that I didn't even laugh :(


Uttar

P.S.:

nelg: Okay, maybe the NV30 is half pregnant, but AFAIK, the baby was cancelled ;)
That is, NVIDIA engineers did try to get VS 3.0 running by (ab)using the PS lookup units, but they failed.
 
andypski said:
PatrickL said:
andypski said:
Of course Cg was heavily optimized for NVIDIA parts, but NVIDIA also took great pains to make Cg run well on the competitions’ parts.

This part of NVIDIA's Cg effort must have completely slipped me by. I certainly don't remember it ever producing code that was remotely optimal for an R300.

Tss, you ATI guys are so negative. That thread http://www.beyond3d.com/forum/viewtopic.php?t=7422&postdays=0&postorder=asc&start=0 showed us how great Cg was for ATI cards, and you would dare to be doubtful? :devilish:
Yup - that was a classic example of the 'great pains' being taken to make it run well on the competition's parts right there.

The only great pains produced by Cg in that case would have been great pains for us, not them.

It's just as well we made our hardware so fast, otherwise taking that 30%+ performance hit from CG might have actually managed to make us look slower than the competition... ;)

As ATi would be well aware, any hardware vendor is free to create their own Cg backend, and nVidia actively encourage this. Of course, ATi has never taken the time to actually do this, being far too busy slagging off Cg instead...
 
As ATi would be well aware, any hardware vendor is free to create their own Cg backend, and nVidia actively encourage this. Of course, ATi has never taken the time to actually do this, being far too busy slagging off Cg instead...

Sure. Nvidia encourages other hardware manufacturers to write back-ends for a language of which Nvidia controls all the specifications. The upside of doing this, compared to using HLSL and GLSlang? None that I can think of.

To continue your reasoning, you could say that Nvidia was far too busy slagging off the R300 technology ("A 256-bit bus is overkill", "You can't build a true next-generation part on 0.15 microns", "I personally think 24 bits is the wrong answer") that they forgot to build a competing part...
 
CorwinB said:
As ATi would be well aware, any hardware vendor is free to create their own Cg backend, and nVidia actively encourage this. Of course, ATi has never taken the time to actually do this, being far too busy slagging off Cg instead...

Sure. Nvidia encourages other hardware manufacturers to write back-ends for a language of which Nvidia controls all the specifications. The upside of doing this, compared to using HLSL and GLSlang? None that I can think of.

To continue your reasoning, you could say that Nvidia was far too busy slagging off the R300 technology ("A 256-bit bus is overkill", "You can't build a true next-generation part on 0.15 microns", "I personally think 24 bits is the wrong answer") that they forgot to build a competing part...

nVidia controlling the specifications for the Cg language makes no difference whatsoever. Every backend implementation must successfully compile the Cg program handed to it in the first place, or it isn't doing its job properly... What the backend does is allow the hardware vendor to optimise the output for their own architecture and take full advantage of the features found in that architecture.

If ATi is unhappy with how Cg currently runs they only have themselves to blame. nVidia is under no obligation to make their competitors look any better than they have to...
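(For readers following the backend argument: as described above, the front end of Cg - which NVIDIA controls - parses the language, and a per-target backend is what turns the result into something tuned for a particular chip. Purely to make the shape of that work concrete, here is a hypothetical Python sketch of what a vendor backend interface could look like; the class and method names are invented for illustration and are not the actual Cg runtime API.)

Code:
from abc import ABC, abstractmethod

class ShaderBackend(ABC):
    """Hypothetical vendor backend: lowers a compiler's intermediate
    representation (IR) into output tuned for one GPU family."""

    @abstractmethod
    def supports(self, profile):
        """Whether this backend can target the requested profile (e.g. 'ps_2_0')."""

    @abstractmethod
    def lower(self, ir):
        """Turn generic IR into vendor-tuned shader assembly."""

class GenericBackend(ShaderBackend):
    """The 'nobody wrote a tuned backend for us' case: emit the IR verbatim,
    leaving architecture-specific scheduling and register allocation on the table."""
    def supports(self, profile):
        return profile.startswith("ps_2")
    def lower(self, ir):
        return "\n".join(ir)

backend = GenericBackend()
print(backend.lower(["dp3 r0, v0, c0", "mad r0, r0, c1, c2"]))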
 
Why would ATI care about Cg? Nvidia tried to enforce its personal view and failed to force the market to follow.

Why would you expect ATI to spend time and money making extensions work well with Cg when they are happy with DX9 as it is?

Anyway what was said in the article was exactly the opposite:
NVIDIA also took great pains to make Cg run well on the competitions’ parts.

So you would have the job done by ATI instead? And what would be their interest in that?
 
nVidia controlling the specifications for the Cg language makes no difference whatsoever.

Sure. And I agree, it would be a pretty smart move for a company to hand a strategic hold over part of its core business to its biggest competitor...

If ATi is unhappy with how Cg currently runs they only have themselves to blame.

Agreed. While at the same time Nvidia gets to blame MS for the abysmal performance of the NV3x product line under DX9. How convenient, and not a double standard in the slightest.

nVidia is under no obligation to make their competitors look any better than they have to...

Which is why it would not be such a good move for any graphics company apart from Nvidia to invest in Cg... Don't you agree? Even Nvidia has acknowledged Cg as a failure, IIRC.
 
JoshMST said:
...but rather because now that DX9 VS/PS 3.0 specifications are well known, both companies are playing on a much more level field

I'm sick of this red herring! I originally heard it from an OEM source as postulated by a particular sales rep when a 2nd tier VAR was about to pull an account...
 
radar1200gs said:
As ATi would be well aware, any hardware vendor is free to create their own Cg backend, and nVidia actively encourage this. Of course, ATi has never taken the time to actually do this, being far too busy slagging off Cg instead...
I believe that the point being highlighted was that, according to the article, nVidia was apparently taking 'great pains' to make sure Cg ran well on competitors' cards. Clearly, if running well required us to write the whole back-end of the compiler, then that is hardly nVidia taking great pains - that seems to me to be them leaving the pains entirely up to us.

Instead of writing back-ends for an unnecessary and divisive additional shading language we were busy concentrating our efforts on providing high-quality support for the two industry-standard high level shading languages.

I don't see that you're making any relevant point here - perhaps instead of automatically repeating some tired, irrelevant rhetoric about ATI's lack of 'support' for CG you should instead read the thread more carefully.
 
radar1200gs said:
nVidia controlling the specifications for the Cg language makes no difference whatsoever.


Sure, sure. That is, until everyone is on board the Cg bandwagon and then Nvidia decides it wants to start licensing the technology and charging everyone who uses it. After it's well established, of course. Then you can play ball with Nvidia or not get any updates. :oops:

Some drug dealers give away free samples until you're hooked, too.

I am not so foolish as to believe that Nvidia's Cg is an altruistic attempt to help everyone. It's a rather bald attempt to push the industry backwards towards proprietary APIs (remember 3dfx, Rendition?).

Nvidia controlling the Cg spec after wide adoption means Nvidia controlling the 3D landscape. Their master plan all along.
8)

PS: On topic, that article was very entertaining. I always need a light fluff piece in the morning to brighten my day. ;)
 
JoshMST said:
So please, help me to understand where I am mistaken or confused. I have no problems updating the article, or even re-writing the entire thing if my information and conclusions are so mind-numbingly bad.
I'd say the core issue is the description of R300 as a less clever, brute-force design.

R300 is an extremely smart architecture, designed from the ground up to run DX9 applications extremely well, and, as we at ATI expected, it has performed superbly on every application - particularly DX9 - because it is very well balanced. "It Just Works" is not an idle boast: R300 will take whatever you throw at it and run it well. No 'checkbox DX9', no PS 2.0 games having to run in DX8 mode to get acceptable performance.

Note also that R300 is the only chip in the shops right now that runs all the Shadermark 2.0 tests - because it has the most DX9 functionality of any chip. R300 is the chip that is ahead of its time, and it is an architecture for the future.
 