ATI responses about GeforceFX in firingsquad

jpeter

Newcomer
http://firingsquad.gamers.com/features/comdex2002/page3.asp

The focus of our conversation with ATI was dealing with the misconceptions brought about by NVIDIA during the GeForce FX launch. ATI essentially feels that the RADEON 9700 is a more balanced solution than GeForce FX, which doesn’t have the bandwidth to perform many of the operations it’s boasting at an acceptable frame rate.

For instance, NVIDIA is proud to claim that GeForce FX boasts pixel and vertex shaders that go beyond DirectX 9.0’s specs, yet a 400-500MHz chip with 8 pixel pipelines running very long shaders would spend all of its time in geometry, bringing frame rate to a crawl. ATI feels that with RADEON 9700’s multi-pass capability, having native support for thousands of shaders is useless, as the RADEON 9700 can loopback to perform those operations. ATI ran a demonstration of a space fighter that was rendered using this technique.

As far as NVIDIA’s bandwidth claims of GeForce FX’s 48GB/sec memory bandwidth, ATI states that the color compression in their HYPERZ III technology performs the same thing today, and with all of the techniques they use in RADEON 9700, they could claim bandwidth of nearly 100GB/sec, but if they did so no one would believe them, hence they’ve stood with offering just shy of 20GB/sec of bandwidth.

One other clarification is in regards to DDR2 memory support. Late last week rumors were floating around that ATI’s DDR2 demonstration wasn’t actually running as DDR2 memory. ATI reiterated that the RADEON 9700 memory controller does indeed support DDR2 and that was the memory type used in the demonstration board.
 
I do wonder why NVIDIA have been playing up the importance of their colour compression, implying that this is what will give the GFFX the advantage in real-world effective fillrate, when ATI's Radeon 9700 Pro already does it.
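For what it's worth, the raw numbers behind both claims are easy to check. A back-of-the-envelope sketch (the GeForce FX figures assume the widely reported 128-bit bus with 500MHz DDR2, which the article itself doesn't confirm):

```python
def raw_bandwidth_gbs(bus_bits, effective_mhz):
    """Raw memory bandwidth in GB/s: bytes per transfer times transfers per second."""
    return (bus_bits / 8) * (effective_mhz * 1e6) / 1e9

# Radeon 9700 Pro: 256-bit bus, 310MHz DDR (620MHz effective)
r9700 = raw_bandwidth_gbs(256, 620)    # ~19.8 GB/s -- "just shy of 20GB/sec"

# GeForce FX (assumed specs): 128-bit bus, 500MHz DDR2 (1000MHz effective)
gffx = raw_bandwidth_gbs(128, 1000)    # 16 GB/s raw

# NVIDIA's 48GB/sec "effective" figure would therefore imply ~3:1 compression
ratio = 48 / gffx                      # 3.0
```

So the marketing argument on both sides is really about what compression ratio you're willing to advertise on top of roughly 16-20GB/sec of physical bandwidth.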

Another thing I would like someone to clarify is the FSAA method... does it include gamma correction too, like the Radeon 9700 Pro?
 
misae said:
Another thing I would like someone to clarify is the FSAA method... does it include gamma correction too, like the Radeon 9700 Pro?

My understanding, from what little info NV has released, is that it does include gamma-correct FSAA like ATI's.
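For anyone wondering what "gamma correct" actually buys you here: the idea is that the AA samples get averaged in linear light rather than as raw gamma-encoded framebuffer values. A rough sketch, using a simple power-law gamma of 2.2 as an approximation (real hardware uses sRGB-like curves):

```python
def resolve_naive(samples):
    # average the gamma-encoded values directly -- edges come out too dark
    return sum(samples) / len(samples)

def resolve_gamma_correct(samples, gamma=2.2):
    # decode to linear light, average, then re-encode for the display
    linear = [s ** gamma for s in samples]
    avg = sum(linear) / len(linear)
    return avg ** (1.0 / gamma)

# a 50%-covered edge between black (0.0) and white (1.0):
edge = [0.0, 1.0]
resolve_naive(edge)          # 0.5
resolve_gamma_correct(edge)  # ~0.73 -- displays as true half-intensity
```

The naive average looks darker than half coverage on a gamma-2.2 monitor, which is why gamma-correct downsampling makes edge gradients look smoother.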
 
Don't forget page2:

Finally, it will be interesting to see what ATI is up to come February. Word on the street is that ATI’s follow-up to RADEON 9700 PRO (codenamed R350) has taped out recently and will be marketed under the name RADEON 9900. We’ve heard conflicting reports on its manufacturing process, we naturally assumed it would be a 0.13-micron part, but we’ve also been informed by another source that the design is 0.15-micron.

In any case, R350 will certainly boast higher clock speeds and performance, so GeForce FX could be in for quite a battle if it slips any further.


Sounds more interesting to me... ;)
 
I am still waiting for the time that gfx and cpu performance will be so abundant there will be no need to upgrade anymore.

:eek:
 
jpeter said:
http://firingsquad.gamers.com/features/comdex2002/page3.asp

The focus of our conversation with ATI was dealing with the misconceptions brought about by NVIDIA during the GeForce FX launch. ATI essentially feels that the RADEON 9700 is a more balanced solution than GeForce FX, which doesn’t have the bandwidth to perform many of the operations it’s boasting at an acceptable frame rate.

For instance, NVIDIA is proud to claim that GeForce FX boasts pixel and vertex shaders that go beyond DirectX 9.0’s specs, yet a 400-500MHz chip with 8 pixel pipelines running very long shaders would spend all of its time in geometry, bringing frame rate to a crawl. ATI feels that with RADEON 9700’s multi-pass capability, having native support for thousands of shaders is useless, as the RADEON 9700 can loopback to perform those operations. ATI ran a demonstration of a space fighter that was rendered using this technique.

As far as NVIDIA’s bandwidth claims of GeForce FX’s 48GB/sec memory bandwidth, ATI states that the color compression in their HYPERZ III technology performs the same thing today, and with all of the techniques they use in RADEON 9700, they could claim bandwidth of nearly 100GB/sec, but if they did so no one would believe them, hence they’ve stood with offering just shy of 20GB/sec of bandwidth.

One other clarification is in regards to DDR2 memory support. Late last week rumors were floating around that ATI’s DDR2 demonstration wasn’t actually running as DDR2 memory. ATI reiterated that the RADEON 9700 memory controller does indeed support DDR2 and that was the memory type used in the demonstration board.

Except for the first one (about balance; nobody can really judge that without in-depth analysis of a retail product, IMHO), those are fair enough statements. I'd like to know more precisely what they mean by the "very long shaders would spend all of its time in geometry" comment; do they suggest that using loopback in such situations is going to be faster? Other than that, most educated people know that Nvidia's bandwidth numbers are crap; it's only fair that ATI clarifies it also has colour compression and chose not to make misleading marketing statements. The last comment doesn't make sense to me though: nobody ever claimed on Tech TV that it wasn't DDR2 memory, but rather that the DDR2 was running in DDR compatibility mode, which is a different thing.
 
Gollum said:
The last comment doesn't make sense to me though: nobody ever claimed on Tech TV that it wasn't DDR2 memory, but rather that the DDR2 was running in DDR compatibility mode, which is a different thing.
This has been answered already by sireric.
 
Gollum said:
Thanks GLguy, I will search for his response... :)


http://www.rage3d.com/board/showthread.php?s=&threadid=33647901&perpage=20&pagenumber=1

Middle of the page, posted by sireric:

The card demonstrated on TechTV and the ones mentioned in the ATI press release are all full DDR2 based graphics cards. ATI is the first and only company to have demonstrated the use of DDR2 in a graphics card. There are no tricks being played. In fact, the president of JEDEC demonstrated that DDR2 card on TechTV. No games.

FUD from "unknown" sources is just a sign of ignorance or simple denial of reality.
 
mmmm.... bring on the R350.. ....then NV35

not to mention the R400 later in the year... to which Nvidia will probably NOT be able to respond, with NV40, until 2004.

Nvidia needs to be whipped back into shape. :D


competition is good for us. :D
 
I'd really like to dissect this one...

jpeter said:
The focus of our conversation with ATI was dealing with the misconceptions brought about by NVIDIA during the GeForce FX launch. ATI essentially feels that the RADEON 9700 is a more balanced solution than GeForce FX, which doesn’t have the bandwidth to perform many of the operations it’s boasting at an acceptable frame rate.

Irrelevant. If nVidia can produce performance numbers as advertised (Particularly with FSAA enabled), then it doesn't matter who has more raw bandwidth.

For instance, NVIDIA is proud to claim that GeForce FX boasts pixel and vertex shaders that go beyond DirectX 9.0’s specs, yet a 400-500MHz chip with 8 pixel pipelines running very long shaders would spend all of its time in geometry, bringing frame rate to a crawl. ATI feels that with RADEON 9700’s multi-pass capability, having native support for thousands of shaders is useless, as the RADEON 9700 can loopback to perform those operations. ATI ran a demonstration of a space fighter that was rendered using this technique.

This is just stupid. Yes, the GeForce FX has a higher fillrate-to-geometry-rate ratio than the Radeon 9700, but that doesn't matter. From what I've been hearing on these very forums, most of the calculations will be moving away from the vertex shader and onto the fragment shader.

Not only that, but there's absolutely no way you can say that when running long shaders, the NV30 will be bogged down in geometry. That statement says nothing about the average size of each triangle, or the relative lengths of the vertex program and the fragment program (i.e. the NV30 will do very well with relatively short vertex programs but comparatively long fragment programs on near-pixel-sized geometry).
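The point about triangle size can be made concrete with a toy throughput model (entirely hypothetical instruction rates and program lengths, just to show how the bottleneck shifts):

```python
def bottleneck(tris, pixels_per_tri, vs_len, ps_len,
               vtx_instr_rate, px_instr_rate):
    # per-frame work for each stage, in seconds (3 vertices per triangle,
    # ignoring vertex caching/sharing for simplicity)
    vertex_time = 3 * tris * vs_len / vtx_instr_rate
    fragment_time = tris * pixels_per_tri * ps_len / px_instr_rate
    return "vertex-limited" if vertex_time > fragment_time else "fragment-limited"

# long fragment programs on modest geometry: pixel shading dominates
bottleneck(100_000, 50, 20, 200, 1.5e9, 2e9)   # 'fragment-limited'

# near-pixel-sized triangles with heavy vertex programs: geometry dominates
bottleneck(5_000_000, 1, 100, 10, 1.5e9, 2e9)  # 'vertex-limited'
```

Which side wins depends entirely on pixels-per-triangle and the two program lengths, which is exactly why a blanket "it will spend all its time in geometry" claim doesn't hold.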

As far as NVIDIA’s bandwidth claims of GeForce FX’s 48GB/sec memory bandwidth, ATI states that the color compression in their HYPERZ III technology performs the same thing today, and with all of the techniques they use in RADEON 9700, they could claim bandwidth of nearly 100GB/sec, but if they did so no one would believe them, hence they’ve stood with offering just shy of 20GB/sec of bandwidth.

Again, irrelevant. Final performance is what matters.

One other clarification is in regards to DDR2 memory support. Late last week rumors were floating around that ATI’s DDR2 demonstration wasn’t actually running as DDR2 memory. ATI reiterated that the RADEON 9700 memory controller does indeed support DDR2 and that was the memory type used in the demonstration board.

It doesn't make any difference until ATI releases a board with DDR2.

And, I will say again, it will most likely be the case that when ATI releases a DDR2 board, it will be on a 128-bit bus. Yes, there's a chance for it to be a 256-bit bus, but it would require a much more beefy memory controller or much higher core clock speeds.
 
Chalnoth, what would we do without your constant rebuttal of any comments that don't put Nvidia in a good light? You're right, based on the hand-fed Nvidia benchmarks from a card that could heat an average-sized house, it's obvious that ATI is outclassed in just about every way, right? ATI shouldn't even attempt to defend themselves, right?

Please. The comments from ATI did not attack or slander anything. To label the comments from someone who is obviously not stupid as irrelevant or stupid is insulting both to the original person and to readers in general.

What ATI is saying, and anyone who is honest with themselves would agree, is that the benefits of features like 128-bit color (as opposed to 96-bit) and the difference in the number of shader instructions that can be executed are simply not applicable to 99.99% of the user base (especially gamers), nor to anything we are likely to see running on our desktops.
 