ATI responses about GeforceFX in firingsquad

Pete said:
Chalnoth, I recall you complaining about the 3-pin power connector on the 9700 Pro. Did you complain even more about the harder-to-insert-and-remove 4-pin connector on the NV30 U yet? Not to mention the beast of a cooler.... ;)

Oh, I think he managed to denounce the 9700 in all kinds of ways when it was announced, and yet we hear nary a peep from him over NV30's similarities.

For instance, I seem to recall him saying that 9700, with its 8x1 design, was designed specifically for DoomIII. Of course, that was when NV30 was thought to be an 8x2 card. Now, with NV30's high clock rate, 8x1 design and low(er) bandwidth, which would seem to be a good case for DoomIII, we certainly hear no mutterings from him now... ;)
 
Oh, I think he managed to denounce the 9700 in all kinds of ways when it was announced, and yet we hear nary a peep from him over NV30's similarities.

For instance, I seem to recall him saying that 9700, with its 8x1 design, was designed specifically for DoomIII. Of course, that was when NV30 was thought to be an 8x2 card.

And I specifically recall how he stated the 9700 was just too hot for the normal person. Of course, compared to the GFFX, the 9700 looks like a cooler.

Same situation with the power connector. Wow, from some people you would have thought the 9700 was a sin because of it. Now the GFFX does it and not a peep.
 
Mintmaster said:
As for NVidia's much touted 128-bit rendering, that will need even more bandwidth.

...

In the end, ATI has a very good point. NVidia's pipes are quite unbalanced and bandwidth starved nearly all the time.
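To put rough numbers on that 128-bit point, here's a quick back-of-envelope sketch (Python); the fill rate used is an assumed, purely illustrative figure, not a measured one:

```python
# Why 128-bit (FP32 RGBA) rendering needs more bandwidth: each pixel written
# carries 4x the colour data of a conventional 32-bit framebuffer.
# The fill rate below is an assumed, illustrative number.

BYTES_RGBA8  = 4    # 4 channels x 8 bits
BYTES_RGBA32 = 16   # 4 channels x 32-bit float

fillrate_gpixels = 2.0   # assumed billions of pixels written per second

for name, bpp in (("32-bit int", BYTES_RGBA8), ("128-bit FP", BYTES_RGBA32)):
    print(f"{name:>11}: ~{fillrate_gpixels * bpp:.0f} GB/s of colour writes alone")
```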

I'll ask my question again, since no one answered last time. Is GFFX aimed primarily at performance desktop computing (e.g. games), or is it equally (or more so) aimed at digital cinema (Pixar, Square)?

All the debate has so far focused on GFFX's performance relative to 9700 in real-time games where having enough bandwidth to maintain high fps is crucial. As many have pointed out, GFFX has some nice new features, but appears to lack the bandwidth to use any of them at high enough frame rates for games. ATI has said that the GFFX appears poorly balanced, a little heavy on the features and too light on the bandwidth.

Now correct me if I'm wrong, but CGI creation does not require high fps, and therefore does not require extremely high bandwidth. Of course, production time and costs are probably roughly inversely proportional to bandwidth, but CGI can still be created without it.
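A minimal sketch of that reasoning (all figures are assumptions for illustration): the sustained bandwidth a renderer needs scales with frame rate, so offline frame times of seconds or more ask very little of the memory bus.

```python
# Sustained bandwidth ~ traffic per frame x frames per second.
# The 0.5 GB/frame figure is an assumed, purely illustrative number.

traffic_per_frame_gb = 0.5

for label, fps in (("60 fps game", 60.0), ("1 fps preview", 1.0), ("30 s per frame", 1.0 / 30.0)):
    print(f"{label:>15}: ~{traffic_per_frame_gb * fps:6.2f} GB/s sustained")
```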

However, doesn't digital animation benefit greatly from some of GFFX's new features? For example, don't color and image quality improve with a full 128-bit FP pipeline? Won't digital animation CGI quality improve with the use of some of GFFX's other unique features, bandwidth-limited as they may be? I'm no animator, so I won't guess, but hopefully someone can make some informed comments on this.

I think Nvidia knew this was going to be an issue, but wanted to go ahead and make a card that included all the features necessary to power the next generation of CGI and digital animation. If you were Pixar, and were in the process of upgrading your entire rendering farm, which solution would you pick: a farm based on GFFX or one based on 9700, all else being equal?
 
fbg1 said:
As many have pointed out, GFFX has some nice new features, but appears to lack the bandwidth to use any of them at high enough frame rates for games.

Wow. Considering:
a) it's got pretty much the exact same feature set as the R300 (except for longer shaders)
b) from the few benchmarks we've seen (or extrapolated), it performs on par with or slightly better than the current R300 incarnations (though we presume that at very high resolutions and FSAA settings the R300 would beat the NV30)

I guess nothing out there is good enough for games right now. ;)

As for which one Pixar et al. would pick? It would be the one with the right software, and neither competitor has really demonstrated to us (or we haven't really dissected it) that they've got the right software to adequately enable render farms with these chips.

I SUSPECT that NVIDIA has a software team working on this, because they've pushed the cinematic aspect in their marketing material (their Toy Story and Final Fantasy statements), and ATI less so. Given this wild and off-the-cuff speculation, I'd say we'll see NVIDIA chips in render farms rather than ATI--but it's all wild speculation on my part.
 
I SUSPECT that NVIDIA has a software team working on this, because they've pushed the cinematic aspect in their marketing material (their Toy Story and Final Fantasy statements), and ATI less so.

Russ, it's not that ATI haven't been pushing R300 for cinematic/DCC uses; I just don't think they have been doing it quite so publicly. I have reason to believe there are already multichip/render farm R300 incarnations out there. Plus, if they aren't pushing it, what the hell is this '3December' all about?

OT - Is that DIY done yet? ;)
 
DaveBaumann said:
I SUSPECT that NVIDIA has a software team working on this, because they've pushed the cinematic aspect in their marketing material (their Toy Story and Final Fantasy statements), and ATI less so.

Russ, it's not that ATI haven't been pushing R300 for cinematic/DCC uses; I just don't think they have been doing it quite so publicly. I have reason to believe there are already multichip/render farm R300 incarnations out there. Plus, if they aren't pushing it, what the hell is this '3December' all about?

OT - Is that DIY done yet? ;)
*cough* Ummm, it's moving along. Still too many things to pick out with the house to devote much spare time to anything else, but I am taking notes at work as we work on our next chip and trying to understand more of the process that goes on.
 
Nagorak said:
3December, LOL, those marketing guys are actually pretty good. :D
I will be at the Sydney showing. Hope it will be a good event... I doubt it though, Australians always get the reject shows.
 
Fuz said:
Nagorak said:
3December, LOL, those marketing guys are actually pretty good. :D
I will be at the Sydney showing. Hope it will be a good event... I doubt it though, Australians always get the reject shows.

You mean you aren't rejects? Damn British didn't know what they started there....


;)
 
Randell said:
Thanks for that Jandar. I agree 40fps doesn't seem that much higher, but it is HIGHER, which is a long way from DT's assertion of not being able to compete on FSAA/Aniso.

Well, just keep in mind that the 40 fps number came on a P4 3.0 GHz CPU, whereas the other scores were from slower-clocked CPUs.
 
Jandar - Rejects???? Come and play us in sport, buddy, we'll show you why a country with a population the size of New York came top 5 in the world at the last Olympics! At least we don't wear half a car every time we step onto the football field :) If you wanted to beat us in rugby (either code) you might just get a hand-picked international league dream team up to the standard of our women's side in the foreseeable future. Just don't challenge us to any water sports, it's way too embarrassing when we win that easily! Or, if you're feeling really brave we could have a friendly drinking competition :LOL:

Sadly, though we're right on Asia's doorstep, the tech often gets to the USA before we see it. Don't they realise we're a perfect test market for high-technology adoption?
 
Nah, I'll pass.

Aussie Rugby is rough.

The only thing that comes close in American football is the scary fact that a 330# lineman can run as fast as a running back half his weight. (Yes, it hurts to have a 300+# guy landing on top of you.)

But to play without helmets? That's barbaric.... ;) (but very manly)


but back on topic here...

Yeah, your computer parts selection sucks. You get stuff late and then pay more than it's worth.
 
g__day said:
If you wanted to beat us in rugby (either code) you might just get a hand-picked international league dream team up to the standard of our women's side in the foreseeable future.

Alternatively in Union you could just get the Ireland or England rugby teams to beat their men's side for you instead... ;)
 
fbg1 said:
As many have pointed out, GFFX has some nice new features, but appears to lack the bandwidth to use any of them at high enough frame rates for games.

I don't think anyone said that. I just said that NV30's really high clock rate is not going to help it a whole lot. It is not that far behind R300 in terms of bandwidth (R300 has 25% more), so what's playable on one will likely be playable on the other.

It will be faster at longer shaders that aren't bandwidth limited. The argument I'm making is that it's unbalanced. The Geforce2 is also unbalanced. It has over double the fillrate of the Radeon, yet can barely outperform it (and often falls behind). That's just an utter waste of silicon, as a Geforce2 MX with the same bandwidth as a Geforce2 GTS would probably perform almost as well. However, the GTS, as unbalanced as it is, was still usually a tad faster than the Radeon.
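To illustrate the balance argument with rough numbers: the clock and bus figures below are the commonly quoted launch specs, and the 8 bytes/pixel of colour+Z traffic is an assumption that ignores compression, texturing and overdraw, so treat the output as illustrative only.

```python
# Compare each chip's theoretical fill rate against the fill rate its memory
# bus could actually feed for plain colour+Z rendering. Illustrative only.

chips = {
    # name: (core MHz, pixel pipes, memory bandwidth in GB/s)
    "NV30 (GeForce FX 5800 Ultra)": (500, 8, 16.0),
    "R300 (Radeon 9700 Pro)":       (325, 8, 19.8),
}
BYTES_PER_PIXEL = 8.0   # assumed colour write + Z traffic per pixel, no compression

for name, (mhz, pipes, bw_gbs) in chips.items():
    core_gpix = mhz * pipes / 1000.0        # Gpixels/s the pipelines can emit
    bus_gpix  = bw_gbs / BYTES_PER_PIXEL    # Gpixels/s the bus can sustain
    print(f"{name}: core {core_gpix:.1f} Gpix/s, bus can feed ~{bus_gpix:.1f} Gpix/s")
```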
 
It's not unbalanced for compute bound shaders. NVidia has spent an enormous amount of transistors on their shader execution units, so obviously, they see shader execution performance as the crucial thing the card is designed to handle. You could say that ATI is unbalanced, because on long compute bound shaders, the memory bus will be inefficiently used.

NVidia bought ExLuna and got a state of the art RenderMan renderer, and some of Pixar's best engineers, so obviously they are planning on going after the DCC market hard.

I question the whole point of continually harping on what you think is "unbalanced." If this card performs as well as the R300 on short shaders or no shaders, and better on longer shaders, and costs on par, just what is your point? If they had designed the card differently according to your wishes, it would execute cinematic-quality shaders more slowly and hence miss one of the major use cases for the part.
 
The thing is, you would assume they would just make custom cards for render farms anyway (designed so that there would be the fewest possible bottlenecks for 3D rendering), so there's really no reason to release a consumer card that's balanced for rendering as opposed to gaming. I'm not really sure how valid that argument is. It's like a pizza shop giving you a bowl of spaghetti instead and then claiming they are an Italian food restaurant. Maybe so, but that's not what you wanted from them...
 
NVidia thinks that cinematic quality rendering can be done in "real time", so while they may be looking at the offline market, they also think that in the real-time market, complex shaders will be possible as well.
 
DemoCoder said:
It's not unbalanced for compute bound shaders. NVidia has spent an enormous amount of transistors on their shader execution units, so obviously, they see shader execution performance as the crucial thing the card is designed to handle. You could say that ATI is unbalanced, because on long compute bound shaders, the memory bus will be inefficiently used.

So, are you saying that on a long, compute bound shader NV30 is getting high utilisation from its memory bus resources? :eek:

Since it's compute-bound one would suspect not. :-?

So, in the very long shader case neither chip would be getting good use of the available memory bus bandwidth. (In fact, you could make the argument that with really long shaders nVidia gets almost no use of their memory bus at all, whereas an R300 implementation is making more intelligent use of its overall available resources by multipassing - at least the bus gets some use and isn't completely idle.)

In the very short shader case if we assume (probably safely) that NV30 cannot get full use of its core because it is completely memory bound, then R300 with nearly twice the memory bandwidth per pixel available could theoretically get much better utilisation.

Of course there will be some case in the middle of this range where you have the correct situation of instructions vs. bandwidth for NV30 to be 'balanced'. We will have to wait and see whether this turns out to be an interesting case from a price/performance standpoint.
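A quick back-of-envelope sketch of that crossover - every figure here is an assumption picked just to show the shape of the trade-off, not a claim about either chip:

```python
# As shader length grows, the per-pixel cost shifts from waiting on the memory
# bus to waiting on the shader units; the 'balanced' point is where the two
# limits meet. All numbers below are assumptions, chosen for illustration.

CORE_OPS_PER_SEC = 500e6 * 8      # assumed: 8 shader ops per clock at 500 MHz
BANDWIDTH_BPS    = 16e9           # assumed memory bandwidth, bytes/s
BYTES_PER_PIXEL  = 8.0            # assumed framebuffer traffic per pixel

for instr in (1, 2, 8, 32):                       # shader instructions per pixel
    compute_limit = CORE_OPS_PER_SEC / instr      # pixels/s the core can shade
    memory_limit  = BANDWIDTH_BPS / BYTES_PER_PIXEL
    if memory_limit < compute_limit:
        bound = "bandwidth-bound"
    elif memory_limit > compute_limit:
        bound = "compute-bound"
    else:
        bound = "balanced"
    print(f"{instr:3d} instr/pixel -> {min(compute_limit, memory_limit) / 1e9:5.2f} Gpix/s ({bound})")
```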

- Andy.
 
andypski said:
So, are you saying that on a long, compute bound shader NV30 is getting high utilisation from its memory bus resources? :eek:

Since it's compute-bound one would suspect not. :-?
Huh? I understood the contrary... :-?

In the very short shader case if we assume (probably safely) that NV30 cannot get full use of its core because it is completely memory bound, then R300 with nearly twice the memory bandwidth per pixel available could theoretically get much better utilisation.

Excuse me, a silly question: More bandwidth to do what? :-?

We will have to wait and see...
Yeah, that's right till the first reviews.
 
I've previously mentioned this (I can't remember which thread).

Multipass shaders are not slower than single pass shaders IF bandwidth is not a limitation. In fact, under some circumstances a multipass shader can be faster than a single pass shader.
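A minimal sketch of that claim, using assumed, illustrative figures for shader rate, bandwidth and per-pixel traffic (none of these are measured numbers):

```python
# If a pass is limited by max(compute time, memory time), splitting a long
# shader into two passes only adds the intermediate write/read traffic; while
# both halves stay compute-bound, the total time is unchanged. Figures assumed.

def pass_time(instr, extra_bytes, pixels=1e6,
              ops_per_sec=4e9, bytes_per_sec=16e9, base_bytes=8.0):
    compute = pixels * instr / ops_per_sec
    memory  = pixels * (base_bytes + extra_bytes) / bytes_per_sec
    return max(compute, memory)     # limited by whichever resource saturates first

single = pass_time(40, extra_bytes=0)
multi  = pass_time(20, extra_bytes=4) + pass_time(20, extra_bytes=4)  # intermediate RGBA8 write, then read

print(f"single pass: {single * 1e3:.2f} ms   two passes: {multi * 1e3:.2f} ms")
```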
 