Can someone tell me why ATI's PS 3.0 is better than Nvidia's?

Bouncing Zabaglione Bros. said:
There's no doubt that going to .09 with such a complex chip will be difficult for Nvidia. There's no doubt that ATI has upped the stakes dramatically as far as SM3.0 branching performance goes (one of the main reasons for SM3.0 at all).

I just don't know what Nvidia would be thinking if after having SM3.0 in NV40 and then G70 and promoting it so heavily, they don't fix one of the most important parts of SM3.0 that is severely lacking on their chips with their third attempt.

I doubt there is much Nvidia can do now to change their design for a .09 G70, so if they were expecting to get a free pass from R520 and planned accordingly, they will be in big trouble six months from now. Nvidia will get slaughtered as soon as we see games and benchmarks that use the SM3.0 they've been plugging so heavily.

Knowing nV's level of aggression, they probably have at least three different implementations they've been testing for a while already. Don't expect them to give up that easily, they've never been so serious about destroying the competition :)
 
neliz said:
I wonder how STALKER would do (if it ever gets released). It used to be a "pure win" for the NV40 in the beta benchmarks with SM3.0... now I wonder how it compares...

Also, benchmarks on new games like AoE3 should tell you something about SM3.0 performance...

I suppose it depends on the complexity of the shaders, the length of the shaders, and whether much branching is used. Given the limitations of NV40/G70's SM3.0 implementation, I think it would be very easy to create something that runs well on R520 but would bring an NV40/G70 to its knees. I doubt developers would want to do that unless they are willing to have two paths. I can see the possibility of R520 running a bells-and-whistles branching SM3.0 path, and G70/NV40 getting a simplified SM2.0 path.

Until games start using the more advanced features of SM3.0, I think there may not be that much difference evident between ATI and Nvidia. When those advanced features come in, G70/NV40 will show its SM3.0 weaknesses.
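To make the "two paths" idea a bit more concrete, here is a rough sketch in Python-style pseudocode (real shaders would be HLSL; the shade_sm3_branching, shade_sm2_flattened, in_shadow, cheap_ambient and expensive_lighting names are all made up for illustration). The point is only that a branching SM3.0 path can skip expensive work per pixel, while a flattened SM2.0-style path has to evaluate both sides for every pixel and select one at the end:

```python
# Toy model of the two render paths; nothing here is real shader code.
# The "pixel" is just a dict with a few made-up inputs.

def in_shadow(pixel):
    return pixel["shadow"] > 0.5

def cheap_ambient(pixel):
    return 0.1 * pixel["albedo"]

def expensive_lighting(pixel):
    # Stands in for many instructions of per-pixel lighting math.
    return sum(0.9 * pixel["albedo"] * w for w in pixel["light_weights"])

def shade_sm3_branching(pixel):
    """SM3.0-style path: a real dynamic branch skips the costly work
    for pixels that don't need it (e.g. pixels fully in shadow)."""
    if in_shadow(pixel):
        return cheap_ambient(pixel)      # cheap early-out
    return expensive_lighting(pixel)     # only paid where needed

def shade_sm2_flattened(pixel):
    """SM2.0-style path: no dynamic branch, so both sides are computed
    for every pixel and one result is selected at the end."""
    lit = expensive_lighting(pixel)      # always paid
    ambient = cheap_ambient(pixel)       # always paid
    return ambient if in_shadow(pixel) else lit

pixel = {"albedo": 0.8, "shadow": 0.9, "light_weights": [0.3, 0.5, 0.2]}
print(shade_sm3_branching(pixel), shade_sm2_flattened(pixel))  # same result
```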
 
_xxx_ said:
Knowing nV's level of aggression, they probably have at least three different implementations they've been testing for a while already. Don't expect them to give up that easily, they've never been so serious about destroying the competition :)

That's what I mean. Nvidia have already had two bites at the SM3.0 cherry, and now ATI has put them to shame on their first go. I can't believe that Nvidia wouldn't have already been planning on fixing their SM3.0 weaknesses on the third attempt, especially now that they have some real competition in that area.
 
Bouncing Zabaglione Bros. said:
That's what I mean. Nvidia have already had two bites at the SM3.0 cherry, and now ATI has put them to shame on their first go. I can't believe that Nvidia wouldn't have already been planning on fixing their SM3.0 weaknesses on the third attempt, especially now that they have some real competition in that area.

Which is why the 5950 destroyed the 9800XT? Oh, wait. . .;)

I have tremendous respect for NV engineering, but unless you tell me the code name for the first 90nm part starts with G8x, I just don't know how much serious mucking about they are going to do. And I'm guessing it would take some serious mucking about...
 
Bouncing Zabaglione Bros. said:
That's what I mean. Nvidia have already had two bites at the SM3.0 cherry, and now ATI has put them to shame on their first go. I can't believe that Nvidia wouldn't have already been planning on fixing their SM3.0 weaknesses on the third attempt, especially now that they have some real competition in that area.
Tell me, in what SM3 game does ATI put Nvidia to shame?
 
HaLDoL said:
Tell me, in what SM3 game does ATI put Nvidia to shame?

Well, you need games that make use of SM3 first :)

/me runs for cover :)

*Disclaimer: when the first PS2.0 games came out early in the PS2.0 days, they were just quick tack-on features. Same thing for today's games that use PS3.0. While today's games may have been recompiled to run on a PS3.0 path, that does not mean they were written to take advantage of the SM3.0 features... PF excluded, of course :)
 
HaLDoL said:
Tell me, in what SM3 game does ATI put Nvidia to shame?

That's the problem with people making such claims. Yes, the X1800 XT outperforms a 7800 GTX by a significant margin in titles like FEAR and Chaos Theory in high-res testing, but is that due to the larger amount of physical RAM and its nearly 10 GB/s bandwidth advantage rather than the architecture itself? I would tend to say yes.

That said, we shouldn't be too quick to discount the potential benefit of synthetic testing for analyzing architectural peculiarities of these parts. IMO, 'natch.
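For reference, the "nearly 10 GB/s" figure falls straight out of the published memory specs. A quick back-of-the-envelope check, using the commonly quoted launch clocks (assumed here, not taken from this thread):

```python
# Back-of-the-envelope bandwidth check. Assumed launch specs:
#   X1800 XT:        256-bit bus, 750 MHz GDDR3 (1.5 GHz effective)
#   7800 GTX (256MB): 256-bit bus, 600 MHz GDDR3 (1.2 GHz effective)

def bandwidth_gb_s(bus_bits, mem_clock_mhz, ddr_factor=2):
    bytes_per_clock = bus_bits / 8
    return bytes_per_clock * mem_clock_mhz * 1e6 * ddr_factor / 1e9

xt  = bandwidth_gb_s(256, 750)   # ~48.0 GB/s
gtx = bandwidth_gb_s(256, 600)   # ~38.4 GB/s
print(xt, gtx, xt - gtx)         # difference ~9.6 GB/s, i.e. "nearly 10"
```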
 
John Reynolds said:
That's the problem with people making such claims. Yes, the X1800 XT outperforms a 7800 GTX by a significant margin in titles like FEAR and Chaos Theory in high-res testing, but is that due to the larger amount of physical RAM and its nearly 10 GB/s bandwidth advantage rather than the architecture itself? I would tend to say yes.

That said, we shouldn't be too quick to discount the potential benefit of synthetic testing for analyzing architectural peculiarities of these parts. IMO, 'natch.

Sadly, though, this advantage in RAM isn't going to magically go away. When choosing between the GTX and the XT, you can get the 512MB card and have the higher bandwidth. This is a choice now and for the next few months, until Nvidia can up the MHz of their GPU.

So going forward, which do you believe will outperform the other, the GTX or the XT? I'd have to go with the XT. It has more RAM and higher bandwidth, which of course is going to help going forward as newer games push more data and textures.
 
John Reynolds said:
That's the problem with people making such claims. Yes, the X1800 XT outperforms a 7800 GTX by a significant margin in titles like FEAR and Chaos Theory in high-res testing, but is that due to the larger amount of physical RAM and its nearly 10 GB/s bandwidth advantage rather than the architecture itself? I would tend to say yes.

That said, we shouldn't be too quick to discount the potential benefit of synthetic testing for analyzing architectural peculiarities of these parts. IMO, 'natch.
I thought FEAR isn't an SM3 title?

jvd said:
Sadly, though, this advantage in RAM isn't going to magically go away. When choosing between the GTX and the XT, you can get the 512MB card and have the higher bandwidth. This is a choice now and for the next few months, until Nvidia can up the MHz of their GPU.

So going forward, which do you believe will outperform the other, the GTX or the XT? I'd have to go with the XT. It has more RAM and higher bandwidth, which of course is going to help going forward as newer games push more data and textures.
The 512MB GTX is expected to be on the market sooner than the 512MB XT.
 
HaLDoL said:
I thought FEAR isn't an SM3 title?

Well, if it isn't, the point still stands that new games are probably benefiting from the memory configuration more than from the graphics chip itself.

The 512MB GTX is expected to be on the market sooner than the 512MB XT.

At this point in time, would one week mean diddly-jack squat? And regardless of which hits the market first, will NVIDIA use the same memory? Slower? Faster? There's really no point in even discussing this until we know for certain.
 
John Reynolds said:
At this point in time, would one week mean diddly-jack squat? And regardless of which hits the market first, will NVIDIA use the same memory? Slower? Faster? There's really no point in even discussing this until we know for certain.

About the only thing we know for certain is that it will not be at the same price point the current GTX cards are going for, so that should stop some ******is from saying "but so-and-so is 100 bucks cheaper" :)
 
HaLDoL said:
Tell me, in what SM3 game does ATI put Nvidia to shame?
Anything that makes extensive use of SM3.0, especially branching. Which card is going to be useful in six months' time when UE3-engined games or Alan Wake hits? Which one will need to be replaced because it can't do branching?
 
trinibwoy said:
He asked "does" not "will" :LOL:

Which one are you going to be glad you bought, and which one will you be cursing when it comes to playing those games in the next year? Or are you just rich enough to drop $500 on a new graphics card every six months?
 
John Reynolds said:
At this point in time, would one week mean diddly-jack squat? And regardless of which hits the market first, will NVIDIA use the same memory? Slower? Faster? There's really no point in even discussing this until we know for certain.

http://www.nvnews.net/vbulletin/showpost.php?p=717357&postcount=23

Translated from http://www.hkepc.com/bbs/viewthread.php?tid=486092

According to Samsung's disclosed order manifest, apart from ATi, nVidia has also placed an order for the 1.26ns GDDR3 memory chips, so we expect to see nVidia release a new product featuring 1.26ns GDDR3 in the near future, which is believed to be the weapon against the X1800 XT. The Samsung K4J52324QC-BJ12 currently used on the ATi Radeon X1800 XT is a different size from the more commonly used GDDR3 memory we've seen, because the packaging has been changed from 136-ball FBGA to 144-ball lead-free FBGA; the specs are 2M x 32bit x 8 banks, voltage 2.0V. BJ12 means the speed grade is 1.26ns; the highest official Samsung clock speed is 1.6GHz, much higher than the 1.6ns parts on the GeForce 7800 GTX. If this memory is to be used, the PCB will have to be modified.

In addition, there appears to be a new G70 definition in the recently released ForceWare 81.84 beta driver. This could well be the 1.26ns GDDR3 GeForce 7 part. If the 1.26ns GDDR3 is used along with an increase in core clock speed, it won't be difficult to surpass the ATi Radeon X1800 XT in performance.
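For a rough sanity check on those speed-grade figures: the ns rating is essentially the minimum cycle time, so the rated clock is roughly 1 divided by it, and GDDR3 transfers twice per clock. A small sketch (approximate, ignoring binning headroom):

```python
# GDDR3 speed grades: the ns rating is roughly the minimum cycle time,
# so rated clock ~= 1 / cycle_time, and the DDR data rate is twice that.

def rated_data_rate_ghz(cycle_time_ns):
    clock_ghz = 1.0 / cycle_time_ns      # e.g. 1.26 ns -> ~0.79 GHz
    return 2.0 * clock_ghz               # DDR: two transfers per clock

print(rated_data_rate_ghz(1.26))   # ~1.59 GHz -- the "1.6GHz" in the quote
print(rated_data_rate_ghz(1.6))    # ~1.25 GHz -- the 1.6ns parts on the GTX
```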
 
Bouncing Zabaglione Bros. said:
Which one are you going to be glad you bought, and which one will you be cursing when it comes to playing those games in the next year? Or are you just rich enough to drop $500 on a new graphics card every six months?

Yes, I could drop $500 on a new card every six months without thinking twice. But that's beside the point - I was just pointing out that your answer wasn't relevant to his question. And yes, the XT will be the better purchase should games pop up soon with a greater reliance on dynamic branching performance.
 
Bouncing Zabaglione Bros. said:
Anything that makes extensive use of SM3.0, especially branching. Which card is going to be useful in six months' time when UE3-engined games or Alan Wake hits? Which one will need to be replaced because it can't do branching?

afaik the 6800 and 7800 series of chips can do branching. It is just a matter of how efficiently.

I have a feeling that when UE3 shows up, these cards will both get slammed on high settings. And chances are the advantages of branching that we see in the X1800 won't magically make it an over-the-top best card.

Chances are the next generation cards will be required to play the thing with any kind of decent settings.

What some of the devs mentioned after the first showing was that a 6800 Ultra didn't really provide playable performance, just viewable results.

I am sure it will scale and be playable but just not at the highest settings.
 
Maintank said:
afaik the 6800 and 7800 series of chips can do branching. It is just a matter of how efficiently.

I have a feeling that when UE3 shows up, these cards will both get slammed on high settings. And chances are the advantages of branching that we see in the X1800 won't magically make it an over-the-top best card.

Chances are the next generation cards will be required to play the thing with any kind of decent settings.

What some of the devs mentioned after the first showing was that a 6800 Ultra didn't really provide playable performance, just viewable results.

I am sure it will scale and be playable but just not at the highest settings.
He meant do it efficiently, I'm sure.
 
That's another thing that most people seem to be overlooking.

The R520 has almost a 10 GB/s bandwidth advantage and a 625 MHz core clock, yet most of the time it *barely* outperforms or even loses to the 7800 GTX.

A 10 GB/s bandwidth advantage... just to break even. That is somehow a more efficient architecture? Hardly.

Which is why DriverHeaven's efficiency test is valid. It shows what is really going on. The R520 is not even close to an efficient architecture: it uses extremely high clocks and gross overkill in bandwidth just to get a 3 FPS lead...
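For anyone who wants to run that kind of per-clock or per-GB/s comparison themselves, a minimal sketch follows. The helper names are made up, the spec figures are the commonly quoted ones (625 MHz / ~48 GB/s for the X1800 XT, 430 MHz / ~38.4 GB/s for the 7800 GTX), and the frame rates in the example are purely hypothetical placeholders, not benchmark results and not DriverHeaven's methodology:

```python
# Normalize a measured frame rate by core clock and by memory bandwidth.
# Plug in your own benchmark numbers for the fps values.

def fps_per_mhz(fps, core_mhz):
    return fps / core_mhz

def fps_per_gbs(fps, bandwidth_gb_s):
    return fps / bandwidth_gb_s

# Example with placeholder frame rates (purely hypothetical, not measured):
for name, fps, mhz, bw in [("X1800 XT", 60.0, 625, 48.0),
                           ("7800 GTX", 57.0, 430, 38.4)]:
    print(name, round(fps_per_mhz(fps, mhz), 3), round(fps_per_gbs(fps, bw), 3))
```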
 
Maintank said:
afaik the 6800 and 7800 series of chips can do branching. It is just a matter of how efficiently.

I have a feeling that when UE3 shows up, these cards will both get slammed on high settings. And chances are the advantages of branching that we see in the X1800 won't magically make it an over-the-top best card.

Chances are the next generation cards will be required to play the thing with any kind of decent settings.

What some of the devs mentioned after the first showing was that a 6800 Ultra didn't really provide playable performance, just viewable results.

I am sure it will scale and be playable but just not at the highest settings.

Actually, the G70 and NV40 take a big hit on branching - that's why Nvidia tells devs not to use branching if they can help it. It's been this way for so long (since NV40) that I can't believe Nvidia will leave it that way in the face of the new SM3.0 competition from ATI.

Because branching can be used to increase efficiency, we should actually see SM3.0 producing the same results as SM2.0, but faster. That should make quite a difference in upcoming shader-heavy games like Unreal Tournament 2007, Far Cry, Stalker, Alan Wake, FEAR, etc.
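Part of why that hit is so architecture-dependent comes down to branch granularity: pixels are shaded in SIMD batches, and if any pixel in a batch takes the expensive side of a branch, the whole batch pays for it. Here's a toy Python cost model of that effect; the batch sizes (roughly 16 pixels for R520 versus on the order of a thousand or more for NV40/G70, as commonly reported at the time), the 4/40 "cycle" costs and the 10%-in-strips shadow pattern are all assumed figures for illustration, not measurements:

```python
def batch_cost(mask, batch_size, cheap_cost=4, expensive_cost=40):
    """Cost (in arbitrary 'cycles') of shading all pixels when dynamic
    branching is resolved per SIMD batch: if any pixel in a batch needs
    the expensive side, every pixel in that batch pays the expensive cost."""
    total = 0
    for start in range(0, len(mask), batch_size):
        batch = mask[start:start + batch_size]
        per_pixel = expensive_cost if any(batch) else cheap_cost
        total += per_pixel * len(batch)
    return total

# 256x256 "screen", with the expensive path needed in coherent strips
# (e.g. shadow regions) covering about 10% of the pixels.
mask = [(i % 640) < 64 for i in range(256 * 256)]

print("16-pixel batches:  ", batch_cost(mask, 16))     # most batches stay cheap
print("1024-pixel batches:", batch_cost(mask, 1024))   # nearly every batch pays full price
```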
 