Which card is the king of the hill (NV40 or R420)?

Poll options:

  • NV40 wins
  • They are equally matched

Total voters: 415
Status
Not open for further replies.
SM3.0 already *is* a standard. It's been in DX9 since it was shipped. Secondly, developers don't have to do that much extra work to target it. When PS1.x was shipped, HLSL didn't exist, so developers had to hand-code several versions. Now, more work can be done by the compiler, easing the work needed to target multiple shader versions.

As for card availability, we know we're talking about SM3.0 usage 12 months from now, not "right now". Games shipping "right now" were started 18 months ago. The fact that PS2.0 got little to no usage when the R300 first shipped didn't stop you from super-boosting PS2.0 back then and using it to beat the NV3x over the head, even though 99% of games, even games shipped in the last 12 months, were using 1.1 shaders or less.

So why the dismissal? No one's saying the R420 sucks because it lacks SM3.0. We're just saying SM3.0 isn't the worthless feature you keep asserting it is.
 
DemoCoder said:
So why the dismissal? No one's saying the R420 sucks because it lacks SM3.0. We're just saying SM3.0 isn't the worthless feature you keep asserting it is.
Evildeus, do you see what I mean?

DemoCoder, who said that SM3.0 is worthless? Show me where I said that. Show me where Dave said that. (And I don't mean your interpretation of what I or he said.)
 
Madashi, I think you didn't get my point. I'm not asserting that DC, in your example, is arguing without being biased (though I think he's talking about jvd, who does seem to dismiss SM3.0 the way NV fans dismissed PS1.4); I'm saying that one person being more biased doesn't alter the fact that someone else is also biased.
 
Evildeus said:
Madashi, I think you didn't get my point. I'm not asserting that DC, in your example, is arguing without being biased (though I think he's talking about jvd, who does seem to dismiss SM3.0 the way NV fans dismissed PS1.4); I'm saying that one person being more biased doesn't alter the fact that someone else is also biased.

Wait a second. I'm not dismissing anything.

When people were saying "standard", I understood it as the standard the shaders would be developed for.

It won't be, because of the small user base.

It will still be used mostly for speed increases, as it should be quick and simple to take advantage of.

But just because nvidia has a PS3.0 part out, devs aren't going to stop developing PS2.0 games.

That will be the standard the games are based on, if anything.

After all, we can thank nvidia for slowing that adoption.
 
Evildeus said:
Of course, SM2.0 won't be stopped because SM3.0 is released.

Yes, and games will be made to take advantage of SM2.0 because of its larger installed base, rather than the very small base of SM3.0.

Or am I wrong in assuming that the trend in PC games, which has held as long as I can remember, will change?
 
jvd said:
Yes, and games will be made to take advantage of SM2.0 because of its larger installed base, rather than the very small base of SM3.0.

Or am I wrong in assuming that the trend in PC games, which has held as long as I can remember, will change?

No. But this isn't the point in this discussion.

After all, we can thank nvidia for slowing that adoption.

Should we "thank" ATi then for slowing down the adoption of SM3.0, FP blending...?
 
As NV and ATI will both develop full-range SM3.0, today's and future hardware will benefit from SM3.0. So the trend is not changing; it's just that both will deliver SM3.0 cards, and then there's no reason not to develop for it (especially if it simplifies the coding).
jvd said:
Evildeus said:
Of course, SM2.0 won't be stopped because SM3.0 is released.

Yes, and games will be made to take advantage of SM2.0 because of its larger installed base, rather than the very small base of SM3.0.

Or am I wrong in assuming that the trend in PC games, which has held as long as I can remember, will change?
 
@jvd,
As already said, you would most likely be right if there were no HLSL.
The simplest way of adding SM3.0 support is to just compile to that target; not much effort at all.
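christoph's workflow, one HLSL source compiled once per shader profile, with the engine picking the best variant the hardware supports, can be sketched in Python. This is a hypothetical illustration: the profile names are real D3D9 compile targets, but the dicts, bytecode placeholders, and `pick_variant` helper are invented for the example (a real engine would hold blobs produced offline by fxc with `/T ps_2_0` etc.).

```python
# Hypothetical sketch: one HLSL source, several precompiled variants,
# keyed by D3D9 target profile. The bytes here stand in for fxc output.
compiled_variants = {
    "ps_2_0": b"<ps_2_0 bytecode>",
    "ps_2_a": b"<ps_2_a bytecode>",
    "ps_3_0": b"<ps_3_0 bytecode>",
}

# Profiles in order of preference, most capable first.
PREFERENCE = ["ps_3_0", "ps_2_a", "ps_2_0"]

def pick_variant(supported_profiles):
    """Return the most capable compiled variant this hardware can run."""
    for profile in PREFERENCE:
        if profile in supported_profiles and profile in compiled_variants:
            return profile
    raise RuntimeError("no usable shader variant")

print(pick_variant({"ps_2_0", "ps_2_a", "ps_3_0"}))  # NV40-class part -> ps_3_0
print(pick_variant({"ps_2_0"}))                      # R300-class part -> ps_2_0
```

The point of the sketch is that adding an SM3.0 path is one more entry in the table, not a hand-written shader.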
 
Evildeus said:
Madashi, I think you didn't get my point. I'm not asserting that DC, in your example, is arguing without being biased (though I think he's talking about jvd, who does seem to dismiss SM3.0 the way NV fans dismissed PS1.4); I'm saying that one person being more biased doesn't alter the fact that someone else is also biased.
Generally that's true. But I think it's not right for a strongly NVidia-biased person to jump on a slightly ATI-biased person. That's just plain wrong IMHO.

(And just for the record: I don't think Dave is biased at all. He might not always be right, but that's different to being biased).
 
Am I the only one who thinks that using FP blending and filtering isn't anything like "giving up half your performance"?
Who is going to use one-cycle shaders to render to FP16 color buffers anyway? I'm rather inclined to believe that shader execution time will hide the cost of FP blending (increased FB bandwidth, lower ROP rate) almost completely in the most common cases. Of course there's still the lack of MSAA, but people have the option of choosing what they like most.

I applaud 3Dlabs for their fully orthogonal FP16 framebuffer implementation.
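Xmas's bandwidth argument can be put in rough numbers. Here is a back-of-the-envelope sketch, assuming a 4-channel color buffer and that alpha blending reads then writes the destination; the figures are illustrative only, real hardware adds framebuffer compression, caches, and Z traffic on top.

```python
# Back-of-the-envelope framebuffer traffic per blended pixel.
# Assumptions (not measured): 4-channel color buffer, and one blend is a
# read-modify-write of the destination. Everything here is illustrative.

BYTES_PER_CHANNEL = {"int8": 1, "fp16": 2}

def blend_traffic(fmt, channels=4):
    """Bytes moved per pixel for one read-modify-write blend."""
    bpp = BYTES_PER_CHANNEL[fmt] * channels
    return 2 * bpp  # read destination + write result

int8 = blend_traffic("int8")   # 8 bytes per pixel
fp16 = blend_traffic("fp16")   # 16 bytes per pixel
print(f"RGBA8 blend: {int8} B/pixel, FP16 blend: {fp16} B/pixel "
      f"({fp16 / int8:.0f}x the bandwidth)")
```

The doubled framebuffer traffic is exactly the fixed cost that a long enough shader can hide, which is the claim being made above.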
 
Xmas said:
Am I the only one who thinks that using FP blending and filtering isn't anything like "giving up half your performance"?
Who is going to use one-cycle shaders to render to FP16 color buffers anyway? I'm rather inclined to believe that shader execution time will hide the cost of FP blending (increased FB bandwidth, lower ROP rate) almost completely in the most common cases.
Well, I can't really judge what the performance will be like. If you believe the drop won't be big, then I'm eagerly awaiting the first benchmarks of FP16 blending/filtering. Until then I'd better withhold judgement... :)
 
Bjorn said:
jvd said:
Yes, and games will be made to take advantage of SM2.0 because of its larger installed base, rather than the very small base of SM3.0.

Or am I wrong in assuming that the trend in PC games, which has held as long as I can remember, will change?

No. But this isn't the point in this discussion.

After all, we can thank nvidia for slowing that adoption.

Should we "thank" ATi then for slowing down the adoption of SM3.0, FP blending...?

Yes, you can.

But that's 2 to nvidia and 1 to ATI :)
 
christoph said:
@jvd,
As already said, you would most likely be right if there were no HLSL.
The simplest way of adding SM3.0 support is to just compile to that target; not much effort at all.

Right, but if all they do is convert SM2.0 to SM3.0, you will just see speed increases.

A game is going to have to be designed for SM3.0 to see any glaring differences. It's not like the jump from SM1 to SM2, at least going by all the posts on this site.

Also, we don't know if or when ATI will support SM3.0. They may never; they may go straight to SM4.0.
 
DemoCoder said:
So you don't think ATI is going to ship any cards with new SM support before mid 2006? Let's see your attitude when ATI releases their SM3.0 card. Then, it won't be pooh-poohing how SM3.0 does nothing for games, it will be about how lovely it is, and of course how awesome SM3.0 features are made "usable" only by ATI.

You're putting words in my mouth. I know ATI is making an SM3.0-compliant card, and no, I will not praise it as the best thing ever. While I believe SM3.0 is obviously an evolutionary step, I also believe it's a small one. Nvidia tries to make it sound like the best thing since sliced bread.

Yes, let's recall them.

6800 offers vertex texturing, predicates, arbitrary swizzle, dynamic branching, geometry instancing, FP filtering, FP blending, gradient instructions, indexable output registers, vPos register, vFace register, indexable input registers, 224 constant registers, loops, procedure calls, and of course a texture fetch instruction which can use gradients to choose mipmap LOD.

Better ease of use, some features which "can't be done in SM2.0 without slow emulation" and other features which can boost performance.

Again, all these additions are primarily for performance. Nothing new visual-quality-wise, and if ATI manages to keep up with or surpass nvidia even with these speed enhancements, they really are unimportant.

I will reiterate my point. Until games start to use SM3.0 in a widespread fashion, I'll consider it a useless addition to the 6800. Certainly nothing to base a purchase on. Look how long it has taken games to finally incorporate SM2.0 shaders (~2 years); SM3.0 won't be any different, and it offers much less of an advantage than SM2.0 did over SM1.3.
 
jvd said:
Right, but if all they do is convert SM2.0 to SM3.0, you will just see speed increases.

Why? NV40 (and any other future hardware) does not use different units for ps_2_0 and ps_3_0. The only thing you are likely to get by simply recompiling a shader to a different HLSL target is instruction ordering that better matches NV40 (just as ps_2_a generates code that's more suited to NV30). How much do you expect to get from that? 1%?

When it comes to new PS3.0 features: the facing register may save a few cycles (nothing huge, really), and dynamic branching may turn out to be a performance win in some cases; I am sure the HLSL compiler will generate it when appropriate. But a simple question: how many of today's HLSL shaders, recompiled to ps_3_0, will actually use it? I'd risk saying close to none; today's shaders are highly linear.

I am not trying to bash PS3.0; it is a great move forward. I just don't expect any real performance increases. In fact, I expect quite the opposite, considering the number of shader instructions that can be executed: a few hundred times more than base PS2.0, and there are quite a few cool things that can be done with the added flexibility. They'll definitely not run on any PS2.0/2.x hardware, and they'll definitely be way slower :)
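The branching trade-off above can be sketched with a toy cost model: without real dynamic branching, both sides of an if are executed and the result is selected per pixel; with PS3.0-style dynamic branching, only the taken side costs cycles. All cycle counts and names here are invented for illustration, and the model deliberately ignores branch overhead and SIMD divergence, which matter a lot on real hardware.

```python
# Toy cost model: dynamic branching vs. "execute both sides and select".
# Cycle counts are made up; real GPUs add branch overhead and pay for
# divergence when pixels in a batch take different paths.

THEN_COST = 20   # cycles for the expensive 'then' side
ELSE_COST = 4    # cycles for the cheap 'else' side

def cost_without_branching():
    # PS2.0-style: run both sides, select the result per pixel.
    return THEN_COST + ELSE_COST

def cost_with_branching(taken_then):
    # PS3.0-style dynamic branch: only the taken side executes
    # (assuming all pixels in the batch agree on the branch).
    return THEN_COST if taken_then else ELSE_COST

print(cost_without_branching())    # 24 cycles for every pixel
print(cost_with_branching(False))  # 4 cycles when the cheap side is taken
```

A highly linear shader never takes the cheap early-out path, so it sees none of this win, which is the point being made about today's shaders.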
 
ANova said:
Nvidia tries to make it sound like the best thing since sliced bread.

I would actually say that they've been rather upfront about, e.g., the main benefits of PS3.0. That is, mainly making things easier for developers.
 
Bjorn said:
ANova said:
Nvidia tries to make it sound like the best thing since sliced bread.

I would actually say that they've been rather upfront about, e.g., the main benefits of PS3.0. That is, mainly making things easier for developers.

What about the launch event where nvidia was praising Far Cry running on SM3.0? They never once mentioned that what they were showing is possible on SM2.0; the developer was the only one who said that. They also made it sound like displacement mapping was a new feature only available with SM3.0. Then there were the screenshots claiming to show the difference between 2.0 and 3.0, which was obviously not the case at all.
 
ANova said:
What about the launch event where nvidia was praising Far Cry running on SM3.0? They never once mentioned that what they were showing is possible on SM2.0; the developer was the only one who said that. They also made it sound like displacement mapping was a new feature only available with SM3.0. Then there were the screenshots claiming to show the difference between 2.0 and 3.0, which was obviously not the case at all.

It was pretty obvious from the presentation that the Far Cry patch would be SM2.0/SM3.0. And the screenshots weren't claiming to show the difference between SM2.0/3.0 but rather SM3.0(SM2.0) vs SM1.0. That was still not correct though since they used low quality settings on the SM1.0 path.
 
ANova said:
DemoCoder said:
Better ease of use, some features which "can't be done in SM2.0 without slow emulation" and other features which can boost performance.

Again, all these additions are primarily for performance. Nothing new visual-quality-wise, and if ATI manages to keep up with or surpass nvidia even with these speed enhancements, they really are unimportant.

I tend to agree with the SM3.0 naysayers in that it will provide negligible performance benefit. However, instead of trying to derail Nvidia's efforts to push technology forward, why not recognize that some of these technologies will allow us to do more at the same level of performance, which in my opinion is more worthwhile than another 50 fps in UT?

I will reiterate my point. Until games start to use SM3.0 in a widespread fashion, I'll consider it a useless addition to the 6800. Certainly nothing to base a purchase on. Look how long it has taken games to finally incorporate SM2.0 shaders (~2 years); SM3.0 won't be any different, and it offers much less of an advantage than SM2.0 did over SM1.3.

I would consider your point valid but for the use of the word useless. Consider what the market would be like if ATI hadn't adopted SM2.0 so early. Although few games are built with 2.0 in mind from the bottom up, it has trickled into the market and we will soon be seeing such titles. Should ATI have waited for Nvidia? Should Nvidia have waited for ATI?

I agree that SM3.0 is not all it's cracked up to be by Nvidia PR (that's their job, anyway), but it is quite far from useless. Whatever trickle of SM3.0 usage Nvidia's support stimulates will definitely accelerate the adoption of SM3.0 in the future. It is quite apparent that DemoCoder's comment, that such technology is only appreciated by some if it comes from one particular IHV, was not completely unfounded.
 