Gainward talks smack

Eronarn said:
mikechai said:
Hellbinder said:
You know, how is it that so many people have forgotten that Nvidia has never released a new card that delivered playable performance in the new features they PR about?

NV40 is already behind the curve in PS2.0 performance compared to the X800, yet people seem to think they are magically going to have great PS3.0 performance. Not going to happen.

The X800 will continue to increase its advantage in shader-intensive games over the next couple of years. It's that simple.

You're a fanATIc. It's that simple.

I can see his point: they're having problems with PS2.0, and I'm willing to bet money that they'll have the same problems with 3.0.
I fail to see how they're having problems in PS2.0, though. "As fast as ATI?" No. Attributable to the clock speed difference? Possibly; I haven't been able to test one, so I don't know (nor have I seen an X800 XT PE tested at 400MHz. Hey Dave, you listening? ;) ). PS3.0 DOES improve efficiency if used correctly. We will see it used within the next 18 months because of TWIMTBP. Will it be Der Uberfeature? No, but if the X800's advantage is primarily because of the clock speed difference (GODDAMNIT, SOMEBODY TEST THIS FOR ME. I'm curious now.), PS3.0 will almost certainly significantly reduce that advantage.
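
To picture what "used correctly" buys you: with SM2.0 there's no real per-pixel branching, so a shader pays for both sides of a conditional and then selects a result, while SM3.0's dynamic branching lets pixels skip work they don't need. A toy model of that (plain C standing in for shader logic, NOT actual shader code; the "expensive" math is made up):

Code:
#include <math.h>

/* Toy cost model, not shader code. SM2.0-style: no real per-pixel
   branching, so both sides of a conditional are evaluated and one
   result is selected. SM3.0-style: a real branch can skip the
   expensive side entirely for pixels that don't need it. */

static float expensive_lighting(float ndotl)
{
    /* stand-in for a long stretch of ALU work */
    return powf(ndotl, 32.0f) + 0.5f * ndotl;
}

float shade_sm2_style(float ndotl)
{
    float lit  = expensive_lighting(ndotl); /* always paid for */
    float dark = 0.0f;
    return (ndotl > 0.0f) ? lit : dark;     /* select afterwards */
}

float shade_sm3_style(float ndotl)
{
    if (ndotl <= 0.0f)  /* real branch: unlit pixels */
        return 0.0f;    /* skip the expensive path entirely */
    return expensive_lighting(ndotl);
}

The catch, of course, is "if used correctly": if nearly every pixel takes the expensive path anyway, the branch buys you nothing.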

I do fail to see how two cards with very disparate clock speeds but similar ALU and pipeline configurations result in one having significant PS2.0 performance problems. Slower? Yes. But if it's clock speeds, saying the one that's 20% slower is at a significant disadvantage is... hmmm...
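
(To put rough numbers on the clock difference, assuming the widely reported stock clocks of 400MHz for the 6800 Ultra and 520MHz for the X800 XT PE: 520/400 = 1.30, so the XT PE has a 30% clock advantage, or equivalently the Ultra is clocked about 23% lower. If shader throughput scales roughly with clock at equal per-clock work, an SM3.0 path would need to cut per-pixel work by about that much just to break even.)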

<TheBaron> Eronarn, what's the word I'm thinking of?
<Eronarn> Smacktarded?
<TheBaron> Yup. Smacktarded.

As much as it pains me to say it, PS3.0 is coming simply because of TWIMTBP. If those games are shader-limited, then NVIDIA could have a huge advantage there. Oh, R420. Can't we just call you R399? You're a 16-pipe R350 with GDDR3 and low-k, Snookums. But it's okay. I love you for your antialiasing.
 
Eronarn said:
Remember, if you're not as fast, it is a problem in an industry like this.
Right now, games where the X800's advantage really matters:

1. Far Cry
2. ...

If PS3.0 offers no performance benefit in HL2, that will be a big problem for NV. But, like I said, it can and probably will offer a big efficiency increase. If that increase is large enough (and it will vary considerably), it will be more than enough to compensate for the lower clocks. But, right now, nobody knows.
 
mikechai said:
NV40 is faster than R420 in OpenGL.

And ATI is writing new drivers (or so I hear); wait for those (since by the time they come out, the NV40 will be available).

And Baron, sure, the speed may not matter now. But if there isn't enough of an increase from PS3.0, the X800 may end up being the card of choice for people who aren't upgrading to PCI-E for a while but want an immediate performance boost. It'd end up being the fastest-performing AGP card, remember.
 
Eronarn said:
mikechai said:
NV40 is faster than R420 in OpenGL.

And ATI is writing new drivers (or so I hear); wait for those (since by the time they come out, the NV40 will be available).

So, wait, it's acceptable for you to rectify ATi's inferior performance at this time by stating they're "writing new drivers" - but all you folks have a problem with the author of this email for stating the same thing wrt nVidia and PS2.0?

Why not? Let's "wait for those" new drivers - the NV40 ones...
 
Vince said:
Eronarn said:
mikechai said:
NV40 is faster than R420 in OpenGL.

And ATI is writing new drivers (or so I hear); wait for those (since by the time they come out, the NV40 will be available).

So, wait, it's acceptable for you to rectify ATi's inferior performance at this time by stating they're "writing new drivers" - but all you folks have a problem with the author of this email for stating the same thing wrt nVidia and PS2.0?

Why not? Let's "wait for those" new drivers - the NV40 ones...

It doesn't rectify their performance. They do have inferior performance. It's my opinion that PS2.0 performance is more important than OGL performance, not that it really matters; the point was that 'NV40 is faster than R420 in OpenGL' is irrelevant because the NV40 can't be bought yet, and by the time it IS out, it may not be faster anymore due to the driver changes.

As it stands now, the R420 is out, and is faster. That may change when the NV40 comes out.
 
The Baron said:
If PS3.0 offers no performance benefit in HL2, that will be a big problem for NV. But, like I said, it can and probably will offer a big efficiency increase. If that increase is large enough (and it will vary considerably), it will be more than enough to compensate for the lower clocks. But, right now, nobody knows.

At E3, Gabe said he has more important things to do than support SM3.0 in HL2.
 
mikechai said:
The Baron said:
If PS3.0 offers no performance benefit in HL2, that will be a big problem for NV. But, like I said, it can and probably will offer a big efficiency increase. If that increase is large enough (and it will vary considerably), it will be more than enough to compensate for the lower clocks. But, right now, nobody knows.

At E3, Gabe said he has more important things to do than support SM3.0 in HL2.

Important things to do, like not making HL2, right? :LOL:
 
Hellbinder said:
You know, how is it that so many people have forgotten that Nvidia has never released a new card that delivered playable performance in the new features they PR about?

NV40 is already behind the curve in PS2.0 performance compared to the X800, yet people seem to think they are magically going to have great PS3.0 performance. Not going to happen.

The X800 will continue to increase its advantage in shader-intensive games over the next couple of years. It's that simple.

I haven't forgotten; I cancelled a preorder of an NV30 and got the 9700 Pro instead. You do not need to insinuate that I know nothing about graphics cards. I buy the card I deem best for the money.

I have not seen any indication that the PS 2.0 performance of the X800 Pro is head and shoulders better than the 6800 GT's (as was the case with the NV30 and the R300). Granted, Far Cry plays a lot better on the Radeon, but that is one game. If I were going to buy either of the two, I'd wait to see how they perform in Doom and Half-Life. Until then, I do not make assumptions when dealing with $400; I cannot afford to make assumptions. When I have a clearer picture of the situation, I'll purchase the best card for the money.

I am interested in Shader Model 3.0 since I hear it is mostly a speed booster. If 2.0 and 3.0 produce the same image, but 3.0 shaders are X% faster, then I'd want a 3.0-enabled product. And in 2-3 years a lot of shaders will most likely use 3.0, due to both IHVs having a 3.0 card available. I do not know this for sure, but there is a great likelihood.

I am glad you are trying to look out for my card purchases, but I think I have a handle on what I need. And it is up to ATI and NVIDIA to make a good card to convince me which one to purchase, not anyone else.

Edit: spelling (enabled is not spelled "endabled")
 
Pete said:
Brandon said:
The Gainward/FS conversation never happened; the Gainward rep cut and pasted from an article we posted. I honestly haven't spoken with anyone from Gainward in over six months.

What was meant to be an entertaining article for the weekend is now really getting out of hand. :(
Wow, that was a direct cut 'n paste? Did they even include a link to your article (if not to include your rebuttal, then at least to properly credit FS)?

As for the article itself, well, let's just say I get enough forum talk in the forums; I don't need to see Jakub repeat it in a more formal setting. And I was a bit scared at how closely your answers mirrored my forum replies.

Yes, it was a direct cut and paste. AFAIK there was no mention of the article or a link, so it read as if I had a conversation with him, when it was actually a conversation I had with Jakub while I was in Toronto for their Tech Days, or whatever it was called.

I didn't get the letter, though, so I wouldn't know.
 
I'm not sure where people get the idea that the GeForce 6 series is really lagging behind in PS 2.0 speed. The X800XT is generally slightly faster than the 6800U in Shadermark, but the 6800U wins some tests too. The 6800GT seems to be generally slightly faster in Shadermark than the X800Pro. All of these cards have very fast PS 2.0 speed, in general. At the moment, the NV beta 6x.xx series Forceware drivers seem to be a bit more raw and buggy than the ATI beta drivers used on the R4xx cards.

Far Cry in its current form, according to FiringSquad, does not make very heavy use of PS 2.0. Apparently, most of the shaders used are PS 1.1, with PS 2.0 being used relatively little, primarily for lighting effects.
 
The Baron said:
I fail to see how they're having problems in PS2.0, though. "As fast as ATI?" No. Attributable to the clock speed difference? Possibly; I haven't been able to test one, so I don't know (nor have I seen an X800 XT PE tested at 400MHz. Hey Dave, you listening? ;) ). PS3.0 DOES improve efficiency if used correctly. We will see it used within the next 18 months because of TWIMTBP. Will it be Der Uberfeature? No, but if the X800's advantage is primarily because of the clock speed difference (GODDAMNIT, SOMEBODY TEST THIS FOR ME. I'm curious now.), PS3.0 will almost certainly significantly reduce that advantage.

I do fail to see how two cards with very disparate clock speeds but similar ALU and pipeline configurations result in one having significant PS2.0 performance problems. Slower? Yes. But if it's clock speeds, saying the one that's 20% slower is at a significant disadvantage is... hmmm...

You're forgetting one very important aspect of this argument. All sources point to the NV40 core as being unable to scale very well. Most reviewers were not able to overclock the card beyond 450 MHz, some only to 425. Then there are the power requirements, which increase disproportionately. If the core is unable to go higher than 450 MHz, then it doesn't matter that it's at a lower clock speed than the R420; that is simply its limit. I believe the Ultra Extreme was around that clock, and it still lost to the XT PE in the majority of the DX tests.

I do, however, agree that the 6800 does not have problems with PS2.0, certainly nothing to call a "significant disadvantage". From a neutral viewpoint, it's hard to say the 6800 isn't equal to the X800 in SM2.0 visual quality this go-around; the X800 is simply faster.

As for whether or not SM3.0 will reduce that gap: all indications point to yes, it will reduce the speed differences in DirectX. However, you also have to take into account that ATI is currently optimizing its DX shaders further and is completely rewriting its OGL code. So even if nvidia manages to increase its performance, ATI may counter; and who knows, maybe the same will happen in the case of the OGL gap.
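
(A rough sketch of why power climbs like that, using the standard dynamic-power rule of thumb P ~ C x V^2 x f: a 15% clock bump that also needs a 10% voltage bump costs about 1.15 x 1.10^2 = 1.39, i.e. nearly 40% more power. The specific percentages here are illustrative, not measured figures for the NV40, but it's why the last hundred MHz come so expensive.)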
 
You're forgetting one very important aspect of this argument. All sources point to the NV40 core as being unable to scale very well. Most reviewers were not able to overclock the card beyond 450 MHz, some only to 425.

You are mixing up the words "scale" and "overclock". As an architecture, the NV GeForce 6 series is designed to "scale" very well, meaning that NV can easily bring out lower-priced variants that share features and properties with the flagship card. ATI's architecture is also very scalable.

Now, regarding overclocking, it's way too early to tell. Most of the 6800U cards were A1 samples. Firingsquad had an A2 6800GT, and they were able to overclock their card to 425MHz core. They were able to overclock the Ultra Extreme card to 475MHz core.

Also, be careful when talking about the performance of "X800" vs "6800", because there is more than one card that makes up each grouping. Looking at the initial set of reviews, the X800XT is generally slightly faster than the 6800U, and the 6800GT is generally slightly faster than the X800Pro. But then again, it totally depends on the game and the settings, of course.

Here you can see some Shadermark 2.0 tests on the X800XT, 6800U, 6800 GT, and X800Pro:

http://www.firingsquad.com/hardware/geforce_6800_ultra_extreme/page6.asp
 
jimmyjames123 said:
You're forgetting one very important aspect of this argument. All sources point to the NV40 core as being unable to scale very well. Most reviewers were not able to overclock the card beyond 450 MHz, some only to 425.

You are mixing up the words "scale" and "overclock". As an architecture, the NV GeForce 6 series is designed to "scale" very well, meaning that NV can easily bring out lower-priced variants that share features and properties with the flagship card. ATI's architecture is also very scalable.

Now, regarding overclocking, it's way too early to tell. Most of the 6800U cards were A1 samples. Firingsquad had an A2 6800GT, and they were able to overclock their card to 425MHz core. They were able to overclock the Ultra Extreme card to 475MHz core.

Also, be careful when talking about the performance of "X800" vs "6800", because there is more than one card that makes up each grouping. Looking at the initial set of reviews, the X800XT is generally slightly faster than the 6800U, and the 6800GT is generally slightly faster than the X800Pro. But then again, it totally depends on the game and the settings, of course.

If the NV40 can scale well, why then does nvidia not plan on bringing the Ultra Extreme to market? Like I said, power consumption also increases in a nonlinear fashion, which is definitely detrimental. And so what if the GT could be overclocked to 425; that still doesn't prove the Ultra will be able to go any higher than what we have seen, even if Ultras come out using the A2 core.

When talking about the X800 vs 6800, I'll always be referring to the performance cards (X800 XT and 6800 Ultra), since they should best represent what each core is capable of. I would also disagree with you on the GT being faster than the X800 Pro. In fact, the Pro catches up to the Ultra in a fair number of tests. If anything, I would say they are equal in performance, if not the Pro slightly faster.
 
If the NV40 can scale well, why then does nvidia not plan on bringing the Ultra Extreme to market? Like I said, power consumption also increases in a nonlinear fashion, which is definitely detrimental.

I don't think you are clear on what I mean by "scalability". The NV 6 series is very scalable as an architecture. Read the 3dcenter article for more info. Power consumption is a separate issue altogether, and is based more on fabrication process than sheer architectural design. The 6800GT and all lower-priced variants will be single-slot, single-molex cards, so power consumption shouldn't be a huge issue on those anyway.
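
To make the distinction concrete, here's a toy sketch of what "scaling" an architecture into a product line means (plain C; the pipe counts and clocks are the widely reported stock figures for the 6800 line, so treat them as approximate):

Code:
#include <stdio.h>

/* Toy model: "scaling" an architecture means deriving cheaper
   variants by varying pipeline count and clock, which is a separate
   question from how far any one chip overclocks. Specs are the
   widely reported stock figures (approximate). */

struct variant {
    const char *name;
    int pipes;
    int core_mhz;
};

int main(void)
{
    struct variant lineup[] = {
        { "6800 Ultra", 16, 400 },
        { "6800 GT",    16, 350 },
        { "6800",       12, 325 },
    };
    int n = sizeof lineup / sizeof lineup[0];

    for (int i = 0; i < n; i++) {
        /* peak pixel fillrate in Mpixels/s: pipes x core MHz */
        int mpix = lineup[i].pipes * lineup[i].core_mhz;
        printf("%-10s %2d pipes @ %d MHz -> %d Mpix/s\n",
               lineup[i].name, lineup[i].pipes,
               lineup[i].core_mhz, mpix);
    }
    return 0;
}

Same core design, three price points; none of that says anything about how far a given chip will overclock.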

And so what if the GT could be overclocked to 425; that still doesn't prove the Ultra will be able to go any higher than what we have seen, even if Ultras come out using the A2 core.

And it doesn't disprove this, either. Like I said, way too early to tell.

When talking about the X800 vs 6800, I'll always be referring to the performance cards (X800 XT and 6800 Ultra), since they should best represent what each core is capable of.

They represent what each CARD is capable of, so what's the point of overgeneralizing? The X800Pro is a 12-pipeline card anyway. As you can see from the Shadermark 2.0 results, it is much too simplistic to overgeneralize like this when talking about performance.

I would also disagree with you on the GT being faster than the X800 Pro. In fact, the Pro catches up to the Ultra in a fair number of tests.

I stand by my comments. I also said that the 6800GT is generally slightly faster than the X800Pro. With Shadermark 2.0 in particular, you can see that the 6800GT generally has a slight edge over the X800Pro. Fillrate at stock core clocks is about equal for the 6800GT and X800Pro, but the 6800GT has faster memory, and therefore more memory bandwidth. Again, time will tell.
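
(The arithmetic behind that, assuming the widely reported stock specs: the 6800GT is 16 pipes x 350MHz = 5600 Mpixels/s, the X800Pro is 12 pipes x 475MHz = 5700 Mpixels/s, essentially a wash. On memory, 1000MHz effective versus 900MHz effective on a 256-bit bus works out to 32.0 vs 28.8 GB/s in the 6800GT's favor.)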
 
jimmyjames123 said:
I stand by my comments. I also said that the 6800GT is generally slightly faster than the X800Pro. With Shadermark 2.0 in particular, you can see that the 6800GT generally has a slight edge over the X800Pro. Fillrate at stock core clocks is about equal for the 6800GT and X800Pro, but the 6800GT has faster memory, and therefore more memory bandwidth. Again, time will tell.

You are correct about the GT being faster in Shadermark 2.0; I would expect it to be, with an additional four pipes. Overall, though, the X800 Pro seems to be equal or faster. There was a recent chart showing this; I just cannot remember what site it was from atm.
 