Which card is the king of the hill (NV40 or R420)?

Status
Not open for further replies.
It is quite apparent that DemoCoder's comment about such technology only being appreciated by some if it comes from one particular IHV was not completely unfounded.

Sorry, but the fact that there are people with preferences within a forum doesn't mean the entire forum is like that - forums are made up of a mix of people, and that's the basis of the discussions, numerous as they are here.

However, I don't find your statement a truism, since I don't see reams of discussion and praise for the new features in ATI's hardware - there's been a thread on 3Dc, a thread on TAA, and no discussion of PS2.0b since the hardware's announcement, which is all within the bounds of normalcy. This is in contrast to the number of discussions of the implementation of PS3.0 there have already been, and will continue to be.
 
DaveBaumann said:
It is quite apparent that DemoCoder's comment about such technology only being appreciated by some if it comes from one particular IHV was not completely unfounded.

Sorry, but the fact that there are people with preferences within a forum doesn't mean the entire forum is like that - forums are made up of a mix of people, and that's the basis of the discussions, numerous as they are here.

Umm, Dave, where did I say anything about the entire forum? :rolleyes:

However, I don't find your statement a truism, since I don't see reams of discussion and praise for the new features in ATI's hardware - there's been a thread on 3Dc, a thread on TAA, and no discussion of PS2.0b since the hardware's announcement, which is all within the bounds of normalcy. This is in contrast to the number of discussions of the implementation of PS3.0 there have already been, and will continue to be.

I'm not sure about the relevance of this to my previous post, but the point is: how many people have claimed that 3Dc, TAA or PS2.0b are useless?
 
trinibwoy said:
I'm not sure about the relevance of this to my previous post, but the point is: how many people have claimed that 3Dc, TAA or PS2.0b are useless?

The point being these are all technologies brought forth by ATI at the moment, but I don't see a disproportionate "appreciation" for them. Your point can be turned around equally for either vendor.

While I've not read everyone's comments in this forum, I don't see that many people claiming that NV40's features are "useless". I see skepticism as to whether it's something they will benefit from now, and historical patterns suggest that skepticism is well founded - inevitably SM3.0 titles will turn up, but there is a question as to whether the lack of SM3.0 will hurt them within the lifetime they expect to use the card, and that is weighed against the more immediate benefits they may get from hardware that doesn't offer it but has other advantages. That's normal.
 
DaveBaumann said:
trinibwoy said:
I'm not sure about the relevance of this to my previous post, but the point is: how many people have claimed that 3Dc, TAA or PS2.0b are useless?

The point being these are all technologies brought forth by ATI at the moment, but I don't see a disproportionate "appreciation" for them. Your point can be turned around equally for either vendor.

You mean turned around like you're doing now ;)

My personal take on it is that people are reacting to Nvidia's PR on SM3.0. From a purely technological standpoint I see nothing wrong with it, so what is up with all the negative vibes? It is infinitely better to have new tech that runs slowly than not at all. From a consumer's point of view, it's not like Nvidia would have sold their cards cheaper if there was no SM3.0 support anyway, so what's the gripe about? TAA is similar in that it garners its base value from merely being present and accessible.

What is it that I'm missing here :?:
 
Well, the obvious: none of this really matters until the cards ship. Shader Model 3.0 could be the best thing since sliced bread, but if the cards ship late, and there are then so few of them that only devs are really getting them, there will be too few in the public sector to really matter for now.

We really just need to wait and see.
 
It is infinitely better to have new tech that runs slowly than not at all.
For owners of the next gen of cards, maybe.

From a consumer's point of view it's not like Nvidia would have sold their cards cheaper if there was no SM3.0 support anyway....
Possibly, though it's also possible nV could have clocked their cards higher (like ATi) if they didn't have to work in SM3.0.
 
trinibwoy said:
It is infinitely better to have new tech that runs slowly than not at all.

For a consumer, how is that true? If it doesn't run fast enough, it's useless.

From a consumer's point of view it's not like Nvidia would have sold their cards cheaper if there was no SM3.0 support anyway....so what's the gripe about? TAA is similar in that it garners its base value from merely being present and accessible.

True, but perhaps they would have run at 500MHz and taken only one slot and 80 watts of power?

<edit> damn you Pete for typing faster :devilish:
 
trinibwoy said:
It is infinitely better to have new tech that runs slowly than not at all.

While I'm not going to suggest anything about NV40's feature support in relation to this particular comment, as a historical precedent that's not necessarily always the case. We've seen two developers publicly state they were not looking at PS2.0 last year because of the performance on NV3x boards - this was to the detriment of all R3x0 users that may look at those titles, and of any future NV4x / R4x0 users.

From a consumer's point of view it's not like Nvidia would have sold their cards cheaper if there was no SM3.0 support anyway....so what's the gripe about? TAA is similar in that it garners its base value from merely being present and accessible.

Perhaps some people have made up their minds based on other factors that they feel are more beneficial to them at the moment (AA support / TAA, power+size to performance, etc.), and they're hoping they aren't going to lose out on anything because it can be achieved via PS2.0. This position doesn't seem that different from DemoCoder's insistence on supporting alternative DXT5 compression formats, or voicing his dislike of TAA because it doesn't appear to be working well for his requirements.
 
For a consumer, how is that true? If it doesn't run fast enough, it's useless.
Well, for one thing, we don't know how fast it will run on NV40. Just as with SM2.0 adoption, there may be specific areas that will benefit from SM3.0 and run acceptably on it. And if it's not there at all, then it's all the same to the consumer.


True, but perhaps they would have run at 500MHz and taken only one slot and 80 watts of power?
Yep, I agree, and I would have no problem if this were the only argument used to discredit the worth of Nvidia's early adoption. Maybe the 9700 PRO would've run faster without PS2.0, but since Nvidia had no faster PS1.1 solution this never came up.

damn you Pete for typing faster :devilish:
Yeah, and his quoting style is cooler too. Think I'll steal it :p
 
DaveBaumann said:
Perhaps some people have made up their minds based on other factors that they feel are more beneficial to them at the moment (AA support / TAA, power+size to performance, etc.), and they're hoping they aren't going to lose out on anything because it can be achieved via PS2.0. This position doesn't seem that different from DemoCoder's insistence on supporting alternative DXT5 compression formats, or voicing his dislike of TAA because it doesn't appear to be working well for his requirements.

I would never suggest making a 3.0-only game. I think it's insane.

On the other hand, the biggest things holding back PS2.0 adoption are, of course, integrated chipsets and the massive base of NV2x and R2x0 chipsets out there. We had the dot product in HW for a while, and it ran reasonably fast, but no one seriously used polybump-like techniques until this year. Now we have FarCry, D3, and Halo2 coming out this year.
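
The core of those polybump-like techniques is just per-pixel diffuse lighting driven by a normal map. A minimal sketch of the math (the vectors here are invented for illustration):

```python
def lambert(normal, light_dir):
    """Per-pixel diffuse term: the N.L dot product that early
    shader hardware accelerated, clamped so surfaces facing away
    from the light receive no light."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, d)

# A normal fetched from a normal map perturbs the lighting result
# without adding geometry - the essence of polybump-style detail.
flat = lambert((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))    # lit head-on
bumped = lambert((0.6, 0.0, 0.8), (0.0, 0.0, 1.0))  # perturbed normal, dimmer
```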

I would suggest that adoption of HW features goes beyond the mere presence of a feature, or the size of the market, to the invention of valuable techniques that utilize those features. The features in NV40 will be heavily dependent upon people figuring out cool algorithms and tricks using those features.

We are starting to gather critical mass in effects now. We have compilers, effects frameworks, tie-ins with 3D modeling environments, and hundreds and hundreds of pages of books with shader effects in them. Adoption of PS2.0-level effects no longer requires you to be a Carmack or a pioneer. People can pick up these new algorithms and implement them much more easily now.
 
trinibwoy said:
For a consumer, how is that true? If it doesn't run fast enough, it's useless.
Well, for one thing, we don't know how fast it will run on NV40. Just as with SM2.0 adoption, there may be specific areas that will benefit from SM3.0 and run acceptably on it. And if it's not there at all, then it's all the same to the consumer.

I was merely replying to the suggestion that slow is better than nothing at all. Slow is useless - if I can't use the feature, how is it useful to me? The early signs for PS3.0 aren't all that promising. Perhaps that's related to bad drivers - /shrug - we'll see.
 
We've seen two developers publicly state they were not looking at PS2.0 last year because of the performance on NV3x boards - this was to the detriment of all R3x0 users that may look at those titles, and of any future NV4x / R4x0 users.
Well, that's not an actual parallel to the current situation... would we have been better off if neither ATI nor Nvidia had support for PS2.0?

Perhaps some people have made up their minds based on other factors that they feel are more beneficial to them at the moment (AA support / TAA, power+size to performance, etc.), and they're hoping they aren't going to lose out on anything because it can be achieved via PS2.0.
Agreed, but it is one thing to say that Nvidia should have focused on such things instead of SM3.0. However, what I perceive to be attacks on SM3.0 seem to be based on its technical merits, not on the relative merit of other technologies.

This position doesn't seem that different from DemoCoder's insistence on supporting alternative DXT5 compression formats, or voicing his dislike of TAA because it doesn't appear to be working well for his requirements.
He is entitled to his opinion, but in my view something like TAA has inherent value even if I can use it in just one game - because that's one game more than I could if it wasn't there at all, and its availability is not detrimental to the games where it is not applied. And it's free!!
 
Pete said:
From a consumer's point of view it's not like Nvidia would have sold their cards cheaper if there was no SM3.0 support anyway....
Possibly, though it's also possible nV could have clocked their cards higher (like ATi) if they didn't have to work in SM3.0.

And these are valuable points. For instance, SM3.0 is being sold on the fact that some of its features will give performance increases or are easier to support, and that's very true, especially when you may be comparing SM2.0 and SM3.0 on the same board - however, when you have different hardware with different properties, you can't just take everything as read.

For instance, SM3.0 introduces vertex instancing, which should provide a performance benefit on NV40 - however, NV40's vertex performance appears to be lower than R420's; will the use of vertex instancing allow NV40 to regain that ground? We won't know until we've tested it in a wide variety of scenarios. NV40's PS3.0 unit supports dynamic branching, which can be a performance benefit in some cases, though (as we've learnt recently) it also has some detrimental performance points - and R420's PS2.0 performance is generally higher than NV40's. Will the cost of state changes for unrolled PS2.0 shaders on R420 be higher or lower than NV40 dynamically branching?
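
That trade-off can be made concrete with a deliberately simplified cost model (every number below is invented for illustration, not measured on either chip): a dynamic branch pays a per-pixel overhead but skips work where the condition fails, while unrolled PS2.0 variants run a full static shader on every pixel and pay CPU-side state changes per variant.

```python
def cost_dynamic_branch(pixels, taken_fraction, cost_taken, cost_skipped,
                        branch_overhead):
    # Every pixel pays the branch overhead; only pixels where the
    # condition holds execute the expensive path.
    taken = pixels * taken_fraction
    skipped = pixels - taken
    return taken * cost_taken + skipped * cost_skipped + pixels * branch_overhead

def cost_unrolled(pixels, cost_taken, num_variants, state_change_cost):
    # Every pixel runs the full static shader; the CPU pays one
    # state change per shader variant used in the frame.
    return pixels * cost_taken + num_variants * state_change_cost

# With the condition rarely true, branching wins; with it almost
# always true, the per-pixel branch overhead is pure loss.
rare = cost_dynamic_branch(10000, 0.05, 20, 1, 2)
full = cost_unrolled(10000, 20, 8, 100)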

These things aren't determined, because we've not seen any metrics for them, and even when we do, they are unlikely to be definitive, since they will inevitably change from application to application. From the end-user perspective these questions aren't settled. As trinibwoy points out, R300's PS1.x support showed parity, or better, with NVIDIA's PS1.x support in PS1.x applications - but what would you say if it turned out that ATI's PS2.0 support was faster than NVIDIA's PS3.0 with similar IQ?

Factors like these are neither settled nor, frankly, understood yet, as we've not seen sufficient tests, or even titles that go to this level of complexity.
 
The FFP results show their vertex performance isn't really lower. It's a problem with the drivers.


Vertex instancing solves a big problem with DX batch overhead, which scales differently than the compiler issue. I disagree that it's been proven that the clock-for-clock PS2.0 performance of the R420 is higher. I'd say more studies have to be done, especially after NVidia and ATI fix their drivers.
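
The batch-overhead point can be sketched with another toy model (the per-call cost here is invented, not a measured DX9 figure): without instancing, the CPU pays a driver/API cost for every object drawn, while instancing submits all identical objects in a single call.

```python
def submission_cost_ms(objects, per_call_ms, instanced):
    # CPU-side cost of issuing draw calls for `objects` identical
    # meshes: one call each without instancing, one call total with it.
    calls = 1 if instanced else objects
    return calls * per_call_ms

# A crowd of 2000 identical meshes at a hypothetical 0.02 ms/call:
crowd_naive = submission_cost_ms(2000, 0.02, instanced=False)
crowd_instanced = submission_cost_ms(2000, 0.02, instanced=True)
```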

I agree these issues aren't settled. But by the same token, I could have claimed the same thing about PS2.0 - that ATI perhaps could have created a PS1.4 chip with vastly higher clocks, more pipelines, and lower power if they had left out PS2.0. Since PS1.4 is more than sufficient for the majority of the pixel shaders being used today (D3, FarCry, even most HL2 shaders), one could argue that the R300's PS2.0 support was a marginal gain for gamers. The increased precision helped out in only a few areas.

At the time when PS2.0 was introduced, did anyone have any clue as to whether or not the tradeoffs made implementing the more complex pipeline were worth it compared to what games needed?

I think people need to chill out and wait for a few months before making conclusions. If I had to make my conclusions solely upon the PS2.0 titles available a year ago (e.g. Tomb Raider: Angel of Crappiness), I would have said "screw it". Funnest game I played recently was Call of Duty, a Quake3-era game with beefed-up graphics and artwork that look pretty nice, and no PS2.0.

We won't know what SM3.0 is like for at least 6 months. Drivers have to be fixed, developers have to sink their teeth into it, etc.
 
trinibwoy said:
Agreed, but it is one thing to say that Nvidia should have focused on such things instead of SM3.0.

Personally, I'm not suggesting they should have. They have, for the most part, fixed their main issue, which was PS2.0 performance - it may not quite be up to the performance of R420 at the moment, but it's clearly streets ahead of any of the previous generation's hardware. Some of the other peripheral issues surrounding the NV40 implementation may well be of more pressing concern to end users, though.

He is entitled to his opinion

And nobody is suggesting that he isn't - but then, aren't others entitled to theirs as well? The TAA case is just down to personal preference / not getting any value from it in his situation; however, his call for the use of DXTC normal map compression because most hardware doesn't support 3Dc doesn't seem that different from the position others have expressed over NVIDIA's implementation of SM3.0 and the discussions around it presently (and that ignores the fact that 3Dc / DXT5 normal map compression could actually see quicker adoption, because it's based on currently used and understood principles, and may be a fast route to providing the end user more detail).
 
DaveBaumann said:
He is entitled to his opinion

however, his call for the use of DXTC normal map compression because most hardware doesn't support 3Dc doesn't seem that different from the position others have expressed over NVIDIA's implementation of SM3.0 and the discussions around it presently (and that ignores the fact that 3Dc / DXT5 normal map compression could actually see quicker adoption, because it's based on currently used and understood principles, and may be a fast route to providing the end user more detail).

Agreed - I would make the same argument for support of 3Dc as I have for SM3.0. As a potential owner of an R420, I would gladly welcome any new tech that would improve my gaming experience in select titles if it has no adverse effect on titles that do not employ it. One caveat, however, is that there may be more effort required on the part of the developer to support both 3Dc and DXTC than it takes to support SM2.0/3.0 in the same title. I'm not a 3D programmer, so this is just speculation based on forum posts and a little research.
 
DemoCoder said:
I disagree that it's been proven that the clock-for-clock PS2.0 performance of the R420 is higher. I'd say more studies have to be done, especially after NVidia and ATI fix their drivers.

Why is clock-for-clock important? If the product is faster because it's capable of higher clocks that's just as valuable.

DemoCoder said:
I agree these issues aren't settled. But by the same token, I could have claimed the same thing about PS2.0 - that ATI perhaps could have created a PS1.4 chip with vastly higher clocks, more pipelines, and lower power if they had left out PS2.0. Since PS1.4 is more than sufficient for the majority of the pixel shaders being used today (D3, FarCry, even most HL2 shaders), one could argue that the R300's PS2.0 support was a marginal gain for gamers. The increased precision helped out in only a few areas.

I agree that PS2.0 was a marginal benefit to R300 users. The fact is that R300 was also released as the clear performance leader, by a significant margin. NV40 can't claim that.

At the time when PS2.0 was introduced, did anyone have any clue as to whether or not the tradeoffs made implementing the more complex pipeline were worth it compared to what games needed?

I think people need to chill out and wait for a few months before making conclusions. If I had to make my conclusions solely upon the PS2.0 titles available a year ago (e.g. Tomb Raider: Angel of Crappiness), I would have said "screw it". Funnest game I played recently was Call of Duty, a Quake3-era game with beefed-up graphics and artwork that look pretty nice, and no PS2.0.

We won't know what SM3.0 is like for at least 6 months. Drivers have to be fixed, developers have to sink their teeth into it, etc.

I agree the vote isn't in yet on the value of PS3.0 in NV40. However, I believe it won't be in for another 18 months, when an end user may start to see significant benefits from PS3.0 support, assuming NV40 will run it at usable speeds.
 
DemoCoder said:
I would never suggest making a 3.0-only game. I think it's insane.

And I didn’t suggest that you did or would.

I would suggest that adoption of HW features goes beyond the mere presence of a feature, or the size of the market, to the invention of valuable techniques that utilize those features. The features in NV40 will be heavily dependent upon people figuring out cool algorithms and tricks using those features.

IMO, tools and education are really the critical issues. HLSLs appear to be a massive help to the greater adoption of shaders, but normal mapping only really took off when Polybump and other free tools became available, making it easier to produce and understand.

The FFP results show their vertex performance isn't really lower. It's a problem with the drivers.

Well, that may be the case, but you can't necessarily tell what the final performance will be - neither vendor's vertex shaders are exactly the same as their previous versions. If you look at things from a purely theoretical perspective, both have a vector and a scalar unit, so, assuming instruction performance parity, the per-clock performance should be similar on that basis - however, there is a large clock disparity.

I disagree that it's been proven that the clock-for-clock PS2.0 performance of the R420 is higher

I didn't say anything in relation to per-clock PS2.0 performance, but then you can't remove clock rate from these types of comparisons when you are comparing products.

But by the same token, I could have claimed the same thing about PS2.0, that ATI perhaps could have created a PS1.4 chip with vastly higher clocks, more pipelines, and lower power if they had left out PS2.0.

That's not quite the situation we are talking about, though, because they have still provided more headroom in terms of pixel shader instruction support, well beyond what they feel most developers are using currently; it just doesn't match what NVIDIA has provided.

At the time when PS2.0 was introduced, did anyone have any clue as to whether or not the tradeoffs made implementing the more complex pipeline were worth it compared to what games needed?

But for the end user, was that much of a concern? The PS2.0 boards, on introduction, provided tangible benefits in almost all areas, and not many detriments.
 
trinibwoy said:
One caveat, however, is that there may be more effort required on the part of the developer to support both 3Dc and DXTC than it takes to support SM2.0/3.0 in the same title. I'm not a 3D programmer, so this is just speculation based on forum posts and a little research.

In terms of developer support for compressed normal maps, the process is much the same as it is currently: you save a higher resolution version, use a compression tool to produce the compressed version, and then test in the code whether the hardware supports a compressed format; if not, you fall back to (probably) a lower resolution uncompressed map. There is a question as to the tools required for good DXT5-compressed normal maps, but good 3Dc compression tools are likely to be available soon enough.
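
As a rough illustration of the DXT5 route (a common trick of the era, sketched here with invented helper names rather than any particular tool's API): the normal's X component is moved into the alpha channel, which DXT5 compresses independently and at higher quality, Y stays in green, and Z is rebuilt in the shader from the unit-length constraint.

```python
import math

def pack_normal_dxt5_style(nx, ny, nz):
    """Swizzle a unit normal for DXT5 compression: X -> alpha,
    Y -> green; Z is dropped and reconstructed later."""
    to_byte = lambda v: int(round((v * 0.5 + 0.5) * 255.0))
    r, g, b, a = 0, to_byte(ny), 0, to_byte(nx)
    return (r, g, b, a)

def unpack_normal(r, g, b, a):
    """What the pixel shader does: read X from alpha, Y from green,
    and derive Z assuming the normal points out of the surface."""
    nx = a / 255.0 * 2.0 - 1.0
    ny = g / 255.0 * 2.0 - 1.0
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    return (nx, ny, nz)
```

The round trip loses a little precision to quantization, but keeping X out of the jointly-compressed RGB endpoints is what makes DXT5 tolerable for normals.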

As for SM2.0/3.0, that entirely depends on what you intend to use SM3.0 for. If you are using it just for the increased instruction lengths over PS2.0, then you'd code it in HLSL and compile it to a target; it may be that the instruction lengths automatically fit the PS_2_a and PS_2_b targets as well, and nothing much more need be done by the developer - and if it falls within PS2.0 instruction lengths, you'd probably just keep it as a PS2.0 shader in the first place. However, if you are making use of a feature such as dynamic branching in SM3.0, then larger changes are required to support PS2.0, as you'd also need to unroll the branch combinations and provide separate PS2.0 versions of them.
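
That unrolling step is essentially bookkeeping (sketched here with a hypothetical helper, not how any real engine names things): each boolean branch in the SM3.0 shader doubles the number of static PS2.0 permutations that must be compiled and switched between.

```python
from itertools import product

def unroll_branches(branch_flags):
    """Enumerate every static PS2.0 variant needed to cover the
    paths of an SM3.0 shader with the given boolean branches."""
    return [dict(zip(branch_flags, combo))
            for combo in product([False, True], repeat=len(branch_flags))]

variants = unroll_branches(["shadowed", "specular", "fogged"])
# 3 dynamic branches -> 2**3 = 8 static shaders, each selected
# on the CPU with a state change instead of branching on the GPU
```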
 
DemoCoder said:
I think people need to chill out and wait for a few months before making conclusions. If I had to make my conclusions solely upon the PS2.0 titles available a year ago (e.g. Tomb Raider: Angel of Crappiness), I would have said "screw it". Funnest game I played recently was Call of Duty, a Quake3-era game with beefed-up graphics and artwork that look pretty nice, and no PS2.0.

We won't know what SM3.0 is like for at least 6 months. Drivers have to be fixed, developers have to sink their teeth into it, etc.

This is exactly the point I'm trying to make. My intention is not to discredit SM3.0 - I support any technology that is an improvement over past technology. However, I also see people proclaiming the 6800 the winner strictly because of SM3.0, and yet they most often know nothing about it or its benefits. Nvidia's PR is incredible, I'll give them that, but I look past the BS to see how things truly are. At the moment (and, it seems, for quite a while longer) SM3.0 offers little to nothing for me. I see a lot more features on the R420 that appeal to me than I see on the 6800.

I also see Nvidia outright lie to the average consumer on a daily basis. This doesn't sit well with me, and only serves to increase my dislike for them. I admit they sometimes make a worthwhile product, but in my honest opinion they are an unprofessional and unethical business.
 