Which card is the king of the hill (NV40 or R420)?

I was speaking on behalf of the average consumer. Yes, the 6800 has Shader Model 3.0, and yes, it's better for developers, but that is the only technological feature I consider it to have over the X800.

Oh come on. SM 3.0 is not only being embraced by many developers as we speak, but it is also the standard that the entire industry is moving towards.

Some other "technical" features that NV has in their favor: Ultrashadow II technology, hardware that is very suitable for shadow-intensive games, OpenGL performance, and arguably sharper/clearer AF quality, even now with NV's new angle-dependent AF algorithm.

There are many other advantages to the X800 as well.

Yes, there are advantages to the X800 too, and no one here denies that as far as I know. These include 3Dc, temporal AA, relatively low power consumption, etc.

And anyone who chooses the 6800 over the X800 from a gamer's perspective is doing so primarily because it is made by nvidia. That's not to say there aren't exceptions; nvidia's better support for Linux is one that comes to mind.

I think this is a bit presumptuous. How about people preferring NV for the OpenGL performance? For advanced software and hardware features suitable for shadow-intensive games like Doom 3? For the slight edge in AF quality? For features like SM 3.0 that the industry is moving towards? Seriously, try to be more open-minded here.
 
My Gawd, the X800s are nowhere to be seen, the GeForce 6800s are nowhere to be seen. Nobody I know has one, and yet everyone is commenting on the performance and usability of these cards as if they'd had them for 10 years.

I seriously doubt we can judge the performance of the X800s and GeForce 6800s today. All we have are today's games, which weren't really designed with the X800 or GeForce 6800 in mind. Does it matter?

Remember the Pentium 66 and 486DX-133 comparisons? We all know now that those were a farce, since we had to code and compile things differently before the real power of the Pentiums was exposed.

Think about the next 6 months! We have nothing today that can truly expose the power of these 2 GPUs. I suggest we stop being crusaders and let performance speak for itself when the 2 supposed summer releases finally arrive. It's not like you are out at Best Buy camping the shelves, waiting to grab your favorite vendor's card right now. Besides, I assume only the FanATIcs and Nvidiots would buy the first cards off the shelf; they made their decision already. Most smart buyers, I assume, are still on the fence, waiting to find out which would run tomorrow's software better.
 
jimmyjames123 said:
Oh come on. SM 3.0 is not only being embraced by many developers as we speak, but it is also the standard that the entire industry is moving towards.

Ok, maybe, maybe not. When they have the cards, then they can code to it. For now it doesn't mean much of anything.

Seriously, try to be more open-minded here.

For me, I make no bones about it. I'm biased, because I don't like being lied to, and nvidia has done more than their fair share. That means I speak with my wallet. I didn't care for them even when they had the fastest cards. I have been selling ATI exclusively for about 3 or 4 years now. I took flak for it then, but I still don't care. Sometimes you just have to do things because it's the right thing to do. Sometimes I wish I knew nothing about computers at all, but owning a computer shop makes that hard.
 
For me, I make no bones about it. I'm biased, because I don't like being lied to, and nvidia has done more than their fair share. That means I speak with my wallet. I didn't care for them even when they had the fastest cards. I have been selling ATI exclusively for about 3 or 4 years now. I took flak for it then, but I still don't care. Sometimes you just have to do things because it's the right thing to do. Sometimes I wish I knew nothing about computers at all, but owning a computer shop makes that hard.

And sometimes you do things not because they are right, but because you are blinded by zealotry. Oh, the horrors of history committed by people believing they are "doing the right thing" in personal crusades.

Can we know the name of your computer shop, so I can recommend people to stay away, because it runs a sales staff that pushes products based on ideology, not on what the consumer needs?
 
DemoCoder said:
And sometimes you do things not because they are right, but because you are blinded by zealotry. Oh, the horrors of history committed by people believing they are "doing the right thing" in personal crusades.

Can we know the name of your computer shop, so I can recommend people to stay away, because it runs a sales staff that pushes products based on ideology, not on what the consumer needs?

No, I said nothing about zealotry. In fact, if ATI, Matrox, XGI (whoever, it doesn't matter) makes a good card, I'll sell it. I've just had it with nvidia. They just happened to show their colors during the whole NV30 fiasco. Any company that acts like that doesn't deserve anybody's money, no matter how good the product is.

Having said that, I'm quite pleased with the 6800 product as a whole. It's just the company that sucks. They are learning that the hard way right now, and if they clean their act up I will be willing to sell them again some day.
 
jimmyjames123 said:
Oh come on. SM 3.0 is not only being embraced by many developers as we speak, but it is also the standard that the entire industry is moving towards.

Some other "technical" features that NV has in their favor: Ultrashadow II technology, hardware that is very suitable for shadow-intensive games, OpenGL performance, and arguably sharper/clearer AF quality, even now with NV's new angle-dependent AF algorithm.

SM3.0 will eventually become the standard, but that won't really happen for at least another year. Games that will use it today (like Far Cry) simply use it for its performance improvements. If the X800 runs the game just as well or better using SM2.0b, then I don't see it as an advantage at all.

We have yet to see how much of an impact UltraShadow provides, ATI is rewriting its OGL code (and games are slowly but surely moving away from OGL in favor of DX at any rate), and nvidia's AF is not sharper from what I have seen.

I think this is a bit presumptuous. How about people preferring NV for the OpenGL performance? For advanced software and hardware features suitable for shadow-intensive games like Doom 3? For the slight edge in AF quality? For features like SM 3.0 that the industry is moving towards? Seriously, try to be more open-minded here.

Again, see my comments about OGL and AF above. As for Doom 3, I think I'll wait for benchmarks before I comment on that.

I am being completely open-minded here. SM4.0 will have a much larger impact on the industry than SM3.0.
 
ANova said:
SM3.0 will eventually become the standard, but that won't really happen for at least another year. Games that will use it today (like Far Cry) simply use it for its performance improvements. If the X800 runs the game just as well or better using SM2.0b, then I don't see it as an advantage at all.

There are a whole bunch of people who don't buy a new card every year.

I am being completely open-minded here. SM4.0 will have a much larger impact on the industry than SM3.0.

That might (or might not) be true. But we don't know exactly what SM4.0 is, nor when it's going to be released. Well, some people here might have some clues, but I don't :)
 
but it is also the standard that the entire industry is moving towards.

And as soon as some of them have started using it, they'll be moving off to SM4. What's your point?

OpenGL performance

That's not technology, it's a driver issue, and if you've been paying attention, ATi's got a brand new OpenGL renderer in the works.

and arguably sharper/clearer AF quality, even now with NV's new angle-dependent AF algorithm.

Had a 5950, thought it pretty good for about a week or so, till I ripped it out and replaced it with a 9800 Pro. Why? Because the AF IQ well and truly sucked (mip-map banding, if you haven't been paying attention). Now, ATi's angle dependency is annoying, but at least the mip-map boundaries don't stand out like a sore thumb. If the NV40 has both the dodgy boundaries and the angle dependency, yurk. So where's the extra tech lead here for me?

Ultrashadow II technology,

So where are the shipping games which use it?
 
SM4 is not an argument. It won't exist until Longhorn ships, assuming it does then. It's in a pretty embryonic state now. Developers are far more likely to support some of the features in SM3.0 and those exposed on the 6800 next year (HDR rendering, geometry instancing, etc.) than they are to jump directly to SM4.0. Even ATI is looking to ship an SM3.0 part before an SM4.0 part, if you read the leaked Huddy presentation in a certain way.

The situation with new shader models is a lot different now than it was with PS1.x, because of HLSL.
 
DemoCoder said:
SM4 is not an argument. It won't exist until Longhorn ships, assuming it does then. It's in a pretty embryonic state now. Developers are far more likely to support some of the features in SM3.0 and those exposed on the 6800 next year (HDR rendering, geometry instancing, etc.) than they are to jump directly to SM4.0. Even ATI is looking to ship an SM3.0 part before an SM4.0 part, if you read the leaked Huddy presentation in a certain way.

The situation with new shader models is a lot different now than it was with PS1.x, because of HLSL.

Longhorn is going to reach beta by mid to late 2005. It is very likely we will see it in its finished form by early to mid 2006. This means DXNext (or DX10) won't be far behind. Now let's recall the advantages SM3.0 holds over SM2.0: dynamic branching and an increased number of executable instructions. UE3 uses shaders composed of 50-100 instructions. The instruction limit for PS 2.0b is 512, which means UE3 doesn't even come close to requiring SM3.0. The point of all of this is that by the time games actually start to surpass the 512 mark, SM4.0 will already be available and offer many more and better features than SM3.0. Thus the 6800's support for SM3.0 looks good on paper and can be touted as the latest and greatest, but that's about it.
 
Even NVidia admitted that the main purpose of SM3.0 is to make programming easier and to gain a performance advantage (e.g. geometry instancing and cleverly used dynamic branches). No game developer can afford to ship an SM3.0-only game; that would be just stupid. So what do *GAMERS* lose by not having SM3.0? They might lose some speed, but that's it, as far as I can see. How much speed advantage SM3.0 will bring is still to be decided. Maybe FarCry will later give us a first hint about that.
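To make the geometry instancing point concrete: it lets one draw call submit many copies of a mesh, each with its own per-instance data, instead of one draw call per copy. Here's a minimal D3D9 sketch of the idea, assuming SM3.0-class hardware (the buffer names, vertex layouts, and counts are illustrative, not from any shipping game):

```cpp
// Minimal D3D9 geometry-instancing sketch. 'device' is an IDirect3DDevice9*;
// the vertex declaration binding stream 1 as per-instance data is omitted.
#include <d3d9.h>

void DrawInstanced(IDirect3DDevice9 *device,
                   IDirect3DVertexBuffer9 *meshVB,      // shared mesh geometry
                   IDirect3DVertexBuffer9 *instanceVB,  // one record per instance
                   IDirect3DIndexBuffer9 *meshIB,
                   UINT vertexCount, UINT triCount, UINT numInstances)
{
    // Stream 0: the mesh, replayed numInstances times.
    device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | numInstances);
    device->SetStreamSource(0, meshVB, 0, sizeof(float) * 8); // pos + normal + uv

    // Stream 1: per-instance data, advanced once per instance.
    device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);
    device->SetStreamSource(1, instanceVB, 0, sizeof(float) * 4);

    device->SetIndices(meshIB);
    // One call draws every instance, instead of numInstances separate calls.
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, vertexCount, 0, triCount);

    // Restore the default stream frequencies afterwards.
    device->SetStreamSourceFreq(0, 1);
    device->SetStreamSourceFreq(1, 1);
}
```

The win is mostly CPU-side: far fewer draw calls and state changes when rendering crowds, foliage, debris, and so on.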

If I were doing game development, I'd surely buy an NV40. But as a gamer I have to say that I'd buy an X800, for 2 reasons, namely usable 6xAA and less heat/noise (important for my silent computing needs).

HDR: I think Dave didn't say anything against HalfLife2 + HDR + 9800 because it didn't seem to cost much performance and it improved the IQ noticeably. So why should anybody complain? The fp16 blending on NV40 looks nice, but we don't know yet what the performance will look like. As far as I understand it, Dave thinks that fp16 blending will cost serious performance, and technically that seems to make sense to me. I can't believe that some of you guys call him biased because of that. I think some of you guys should clean your own room before jumping on others. Just ask yourself: if we did a poll here at Beyond3D about who's more biased (you or Dave), who would "win"? Might it be that man in the mirror?
 
But the more biased one shouldn't jump at the less biased one. (If the less biased one is biased at all, that is).
 
ANova said:
Longhorn is going to reach beta by mid to late 2005. It is very likely we will see it in its finished form by early to mid 2006. This means DXNext (or DX10) won't be far behind.

So you don't think ATI is going to ship any cards with new SM support before mid 2006? Let's see your attitude when ATI releases their SM3.0 card. Then, it won't be pooh-poohing how SM3.0 does nothing for games, it will be about how lovely it is, and of course how awesome SM3.0 features are made "usable" only by ATI.

Now let's recall the advantages SM3.0 holds over SM2.0: dynamic branching and an increased number of executable instructions.

Yes, let's recall them.

6800 offers vertex texturing, predicates, arbitrary swizzle, dynamic branching, geometry instancing, FP filtering, FP blending, gradient instructions, indexable output registers, vPos register, vFace register, indexable input registers, 224 constant registers, loops, procedure calls, and of course a texture fetch instruction which can use gradients to choose mipmap LOD.

Better ease of use, some features which "can't be done in SM2.0 without slow emulation" and other features which can boost performance.
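For what it's worth, several of these features are directly queryable at runtime through stock D3D9, nothing vendor-specific. A minimal sketch of probing a few of them (the function name and format choices here are illustrative):

```cpp
// Minimal sketch: probing SM3.0-class features through standard D3D9 queries.
#include <d3d9.h>
#include <stdio.h>

void ProbeSM3Features(IDirect3D9 *d3d, IDirect3DDevice9 *device, D3DFORMAT displayFmt)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);

    // Shader Model 3.0 support, pixel and vertex side.
    printf("PS 3.0: %s\n",
           caps.PixelShaderVersion  >= D3DPS_VERSION(3, 0) ? "yes" : "no");
    printf("VS 3.0: %s\n",
           caps.VertexShaderVersion >= D3DVS_VERSION(3, 0) ? "yes" : "no");

    // Vertex texturing: can the vertex shader sample an FP32 texture?
    HRESULT hr = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, displayFmt,
        D3DUSAGE_QUERY_VERTEXTEXTURE, D3DRTYPE_TEXTURE, D3DFMT_A32B32G32R32F);
    printf("FP32 vertex texturing: %s\n", SUCCEEDED(hr) ? "yes" : "no");
}
```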


The point of all of this is that by the time games actually start to surpass the 512 mark, SM4.0 will already be available and offer many more and better features than SM3.0.

SM4.0 will complement SM3.0. The point of SM4.0 isn't to run shaders of over 512 instructions; those won't be realtime on 2006-era hardware either. The point of the shader models is to make programming more general purpose, even for short programs, for example by generalizing the I/O model. The first step was giving a texture unit to the vertex shaders and adding the vertex stream frequency divider. SM4.0 will take that further. Another major addition will be tessellation. But these are *in addition* to SM3.0.

Everything that developers write for SM3.0 will still be applicable in 4.0. It's not even guaranteed that 4.0 shaders will run faster, just that they will be easier to program and will enable some algorithms which are blocked by the current I/O model.



HDR: I think Dave didn't say anything against HalfLife2 + HDR + 9800 because it didn't seem to cost much performance and it improved the IQ noticeably. So why should anybody complain? The fp16 blending on NV40 looks nice, but we don't know yet what the performance will look like. As far as I understand it, Dave thinks that fp16 blending will cost serious performance, and technically that seems to make sense to me

You don't understand what was being said. Blending is a *performance* optimization over what is being done today: *emulating blending in the pixel shader*. There is no way that NV40 FP16 blending will run slower than the method used in HL2. It saves a complete shader pass.
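For context on what "FP16 blending" means at the API level: on hardware without it, blending into a floating-point render target has to be emulated by binding the previous result as a texture and combining it in the pixel shader, which is the extra pass being discussed. Whether the hardware can blend directly is advertised through a standard D3D9 format query; a minimal sketch (the function name is illustrative):

```cpp
// Minimal sketch: can this device alpha-blend directly into an FP16
// (A16B16G16R16F) render target? NV40-class hardware advertises this;
// R3x0/R420-class hardware does not, hence the shader-emulated blend path.
#include <d3d9.h>

bool SupportsFP16Blending(IDirect3D9 *d3d, D3DFORMAT displayFmt)
{
    HRESULT hr = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, displayFmt,
        D3DUSAGE_RENDERTARGET | D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING,
        D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F);
    return SUCCEEDED(hr);
}
```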


Sigh... :rolleyes:
 
madshi said:
But the more biased one shouldn't jump at the less biased one. (If the less biased one is biased at all, that is).
Why? That doesn't make his comment on the biased comments less accurate. :?
 
Evildeus said:
Why? That doesn't make his comment on the biased comments less accurate. :?
Why? I think Matthew 7:5 explains quite well why:

"First get rid of the log from your own eye; then perhaps you will see well enough to deal with the speck in your friend's eye."

Understand?
 
SM3.0 will eventually become the standard, but that won't really happen for at least another year. Games that will use it today (like Far Cry) simply use it for its performance improvements. If the X800 runs the game just as well or better using SM2.0b, then I don't see it as an advantage at all.

Right, or it might amount to nothing, like PS 1.4.

Let's look at the facts.

The GeForce 3/Ti, GeForce 4 Ti, FX, and 6800 lines, the Radeon 8500-9200, the Radeon 9500-9800 XT, and the Radeon X800 line will all do PS 1.1. That is a huge market. So I would think that for a while, at least another year or so, games will have PS 1.1 shaders built in as a fallback for these older cards.


The Radeon 8500-9200, Radeon 9500-9800 XT, and Radeon X800 lines and the GeForce FX and 6800 lines all do PS 1.4.

That once again is a whole lot of cards. But it will most likely be skipped in favor of PS 2.0, as we are seeing now.

The Radeon 9500-9800 XT, Radeon X800, and GeForce 6800 lines can all run PS 2.0 at a good speed. That is a good number of cards, and with the 9500-9800 XTs moving down the price ladder it will become bigger. Then of course with the 6800 and X800 lines it will expand again.


Now, PS 3.0 has only the GeForce 6800 line.

That is the only line of cards that can do it, and right now you can't get any. The lowest price we have heard for one of these is $300, which is still above the sweet spot for video cards.

Meanwhile all of ATI's cards will be selling, and they will all do PS 2.0.

So I don't see how you can say it will become the standard.

Nvidia may keep releasing cards that support it, but if ATI skips over it then it will not become standard.

Moreover, the 6800s may end up like the GeForce 3: very, very slow running the games finally designed for its feature set.

The future is very foggy .

Back when I got my 9700 Pro it was the fastest card out. A few months later it was still the fastest card out, and the tests out there at the time showed it would be the best DX9 card out.

This has been proven true with regard to the R3x0 tech.

Right now there are no tests showing how usable PS 3.0 will be on the GeForce 6800.

Till we get those test numbers and see some performance info, I can't really say the GeForce 6800 line is PS 3.0 capable, just like I couldn't say the FX line was PS 2.0 capable.
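To make the PS 1.1 / 1.4 / 2.0 / 3.0 fallback ladder described above concrete, here's a minimal D3D9 sketch of how a game picks its shader path at startup (the enum and function names are made up for illustration):

```cpp
// Minimal sketch: choosing a pixel-shader code path from the device caps,
// mirroring the PS 1.1 / 1.4 / 2.0 / 3.0 ladder described above.
#include <d3d9.h>

enum ShaderPath { PATH_PS11, PATH_PS14, PATH_PS20, PATH_PS30 };

ShaderPath PickShaderPath(IDirect3DDevice9 *device)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);

    if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0))
        return PATH_PS30;   // GeForce 6800 line only, today
    if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
        return PATH_PS20;   // Radeon 9500+, X800, GeForce FX/6800
    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4))
        return PATH_PS14;   // Radeon 8500 and up
    return PATH_PS11;       // GeForce 3/4 Ti class fallback
}
```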
 
Sorry, but that's BS (ad hominem argumentation). There's still a speck in your friend's eye; that's a fact. Having a log in your own eye doesn't make the speck disappear.
madshi said:
Evildeus said:
Why? That doesn't make his comment on the biased comments less accurate. :?
Why? I think Matthew 7:5 explains quite well why:

"First get rid of the log from your own eye; then perhaps you will see well enough to deal with the speck in your friend's eye."

Understand?
 
Evildeus said:
Sorry, but that's BS. There's still a speck in your friend's eye; that's a fact. Having a log in your own eye doesn't make the speck disappear.
But with that log in your eye you simply can't see clearly enough, so you might misjudge your friend.
 