ATI responses about GeForce FX at FiringSquad

Russ, neither you (or a thousand of you) nor I will ever get what we both want (simplistically, you're asking for a perfect world - er, forum). It's useless. It is more down to the Beyond3D staff, not its participants.

Just quit whining and live with it (like I have/do). It has become rather easy for me to ignore.
 
andypski said:
nggalai said:
ATI feels that with RADEON 9700’s multi-pass capability, having native support for thousands of shaders is useless, as the RADEON 9700 can loopback to perform those operations. ATI ran a demonstration of a space fighter that was rendered using this technique.
Correct me if I'm wrong, but wouldn't that mean that the Radeon 9700 would lose precision, as the framebuffer doesn't support the pipeline's 96-bit FP format?

ta,
-Sascha.rb

You're wrong.

The framebuffer supports 128-bit floating point format, so no precision is lost.

Consider yourself corrected. ;)
Thanks!

*stands corrected* ;)

ta,
-Sascha.rb
 
Joe DeFuria said:
Well, it seems to me you tend to be more vocal about these things when people are taking "jabs" and "jokes" specifically at nVidia's expense.
Maybe because the majority of the jabs and jokes on this messageboard are aimed specifically at nVidia?

-dksuiko
 
As for R350, I'd expect that to be a 256-bit bus (perhaps an 8*32 crossbar) with GDDR-3 or DDR-II, not a 128-bit bus with DDR-II. That makes no sense.

128-bit might be used on RV350, though.
 
As others have pointed out, 128-bit DDR-II is nowhere close to sufficient for any new high-end product from ATI, as it would guarantee that they couldn't beat NV30. And I think it's a bit too early for GDDR3; last I read, even the specs weren't 100% complete yet.

An 8-way crossbar makes sense considering that DDR-II doubles the minimum burst length; I would expect much of R300's innards to be optimized around 128-bit memory accesses (64 bits per controller * burst length of 2 for DDR-I).
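A back-of-the-envelope sketch of that granularity point (a minimal illustration only; the controller widths and burst lengths are assumptions for the sake of the example, not confirmed R300/R350 specs):

# Minimum access granularity per memory controller.
# Assumes DDR-I minimum burst length of 2 and DDR-II of 4, as per the post above;
# controller widths are illustrative guesses, not confirmed specs.

def min_access_bits(controller_width_bits, burst_length):
    """Smallest contiguous transfer a single controller can make."""
    return controller_width_bits * burst_length

print(min_access_bits(64, 2))  # R300-style 4 x 64-bit controllers, DDR-I  -> 128 bits
print(min_access_bits(64, 4))  # same 64-bit controllers on DDR-II         -> 256 bits
print(min_access_bits(32, 4))  # hypothetical 8 x 32-bit crossbar, DDR-II  -> 128 bits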
 
IIRC ATI showed a WORKING 9700 Pro w/ DDR-II on a 256-bit bus - so do we really need to care about preposterous speculation like a new ATI high-end card equipped w/ DDR-II on a 128-bit bus?

C'mon, it's just a funny (but really stupid, IMHO) idea from Chalnoth ("And, I will say again, it will most likely be the case that when ATI releases a DDR2 board, it will be on a 128-bit bus. Yes, there's a chance for it to be a 256-bit bus, but it would require a much more beefy memory controller or much higher core clock speeds." :LOL:), nothing more. Typical FUD...

EDIT: typo
 
Randell said:
Doomtrooper said:
Effective bandwidth is also limited by hardware; no matter how exotic the bandwidth-saving features included in the NV30 are, if it is only using a 128-bit bus it will not be able to compete in the FSAA and anisotropic benchmarks vs 256-bit bus cards... IMO

An example is here from Ante P:

http://www.beyond3d.com/forum/viewtopic.php?t=3232&highlight=835

UT2003 Inferno 1024x768: Max FSAA Max Aniso:
GeForce 4 Ti 4600 (306/660): 8 fps
Radeon 9700 Pro (325/620): 75 fps

OK, do me a favour please, DT, and test this. What's your Nature 1280x1024 4xAA/8xAF score? My score at 10x7 is 33fps, but my LCD can't do 1280x1024.

On a stock XP2200 512MB PC2100 DDR
Radeon 9700 Pro, 6193 drivers
1280x1024@32bpp, 4xAA 8xAF

I get 32fps.

EDIT:
with the 6228 Drivers, same settings:
34.8

Suddenly, 40fps doesn't seem that much higher.
 
Chalnoth, I recall you complaining about the 3-pin power connector on the 9700 Pro. Have you complained even more about the harder-to-insert-and-remove 4-pin connector on the NV30 U yet? Not to mention the beast of a cooler.... ;)

As for the nForce comment, you know Athlons weren't made with a 333MHz FSB in mind, and the dual-channel nForce memory architecture seems to benefit the onboard video the most. Still, the newer 333FSB Athlons are showing very impressive performance paired with a dual-channel nF2 and DDR333.

I'm very excited about the 9900, and more so about the prospect of a sub-$200 9700 (non-Pro) with improved drivers in a few months. Imagine that kind of performance reaching the upper mainstream--sweetness.
 
jandar said:
Suddenly, 40fps doesn't seem that much higher.

Thanks for that, Jandar. I agree 40fps doesn't seem that much higher, but it is HIGHER, which is a long way from DT's assertion that it wouldn't be able to compete on FSAA/Aniso.
 
no_way said:
Developers don't sell games, publishers do. What I'm saying is, not all developers are in this industry just for $$$.

Well I hate to say it, but if they're not in the industry to make money, they won't be in the industry very long... :cry:

Also, I agree that if you were starting development today, aiming for DX 8.1 specs would be smart. But the thing is, by the time you finished 1.5 years from now, DX 8.1 would be old anyway. So it's not just about aiming for the lowest common denominator, but also about production time. If you kept adopting the newest tech that comes along, you'd never finish the game. Also, I think a lot of people on this forum overestimate the importance of graphics in the overall scheme of things (but that's to be expected, seeing as it's a video card forum).
 
Chalnoth said:
I'd really like to dissect this one...

jpeter said:
The focus of our conversation with ATI was dealing with the misconceptions brought about by NVIDIA during the GeForce FX launch. ATI essentially feels that the RADEON 9700 is a more balanced solution than GeForce FX, which doesn’t have the bandwidth to perform many of the operations it’s boasting at an acceptable frame rate.

Irrelevant. If nVidia can produce performance numbers as advertised (Particularly with FSAA enabled), then it doesn't matter who has more raw bandwidth.

I'm going to give you a few examples of what ATI is talking about. One of the most important upcoming 3D effects is HDR lighting and tone mapping, e.g. Paul Debevec's "Natural Light" demo. Tone mapping is one thing, but what really looks great is the glow / light bleeding. Even John Carmack said this is probably the next big thing in 3D graphics.

To do this you need to either a) make a few mipmap levels of the current scene (though you get a bit of a blocky blur) or b) do a Gaussian sampling of neighboring pixels as described in ATI's papers. Either way, you need at least a 64-bit framebuffer.

For a), that's 4 texture samples (of your original rendered scene) per pixel, each 64 bits, plus a 64-bit write, and it takes 4 cycles (due to 1 texture unit per pipe and the lack of FP texture filtering). That's 80 bits of memory access required per pixel, per clock cycle. R300 has about 61 bits of memory access per pipe per cycle; NV30 has 32. Then you have to composite these mipmaps together, with similar bandwidth requirements (although lower mipmaps average the texels over more pixels, so caches alleviate this somewhat).
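(Here's a quick arithmetic sketch of those figures, assuming 8 pixel pipes on both parts and the clock/bus numbers quoted around the forum; treat it as a rough estimate, not exact math.)

# Per-pipe bandwidth estimate for the mipmap-based glow pass described above:
# 4 x 64-bit texture reads plus one 64-bit write, spread over 4 cycles, versus
# each card's memory bandwidth divided across 8 pixel pipes.

def required_bits_per_pixel_per_clock(samples, sample_bits, write_bits, cycles):
    return (samples * sample_bits + write_bits) / cycles

def available_bits_per_pipe_per_clock(bus_bits, mem_mhz, core_mhz, pipes):
    return bus_bits * 2 * mem_mhz / core_mhz / pipes  # x2 for DDR

need = required_bits_per_pixel_per_clock(samples=4, sample_bits=64, write_bits=64, cycles=4)
r300 = available_bits_per_pipe_per_clock(bus_bits=256, mem_mhz=310, core_mhz=325, pipes=8)
nv30 = available_bits_per_pipe_per_clock(bus_bits=128, mem_mhz=500, core_mhz=500, pipes=8)

print(f"needed: {need:.0f} bits/pixel/clock")  # ~80
print(f"R300:   {r300:.0f} bits/pipe/clock")   # ~61
print(f"NV30:   {nv30:.0f} bits/pipe/clock")   # ~32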

For b), it's a similar situation, except you're not making mipmaps and you're taking more texture samples (16 in ATI's paper). Still, you will never average below 64 bits per pixel per cycle.

A similar situation happens for depth of field. If you don't use 64-bit pixels, you can get 4 32-bit samples in one clock via bilinear texture filtering, so bandwidth is still needed.

As for NVidia's much touted 128-bit rendering, that will need even more bandwidth.

In the end, ATI has a very good point. NVidia's pipes are quite unbalanced and bandwidth-starved nearly all the time. Only when we start seeing mostly mathematical shaders (even the "Dawn" demo looked like more texture blending than math) will NV30's high clock be of much use, and that is a long way off. NV30's high clock will be good for low-bandwidth stencil-only operations like Doom 3, or for low-bandwidth multitexturing situations where the required bandwidth is spread over several cycles (like Quake3 w/o FSAA).

For instance, NVIDIA is proud to claim that GeForce FX boasts pixel and vertex shaders that go beyond DirectX 9.0’s specs, yet a 400-500MHz chip with 8 pixel pipelines running very long shaders would spend all of its time in geometry, bringing frame rate to a crawl. ATI feels that with RADEON 9700’s multi-pass capability, having native support for thousands of shaders is useless, as the RADEON 9700 can loopback to perform those operations. ATI ran a demonstration of a space fighter that was rendered using this technique.

This is just stupid. Yes, the GeForce FX has a higher ratio of fillrate to geometry rate than the Radeon 9700, but that doesn't matter. From what I've been hearing on these very forums, most of the calculations will be moving away from the vertex shader and onto the fragment shader.

I think FiringSquad misinterpreted this. "Geometry" probably means vector math in the pixel shader. Many 1024-instruction pixel shaders won't run in realtime, and if they are used sparingly, multipass will suffice. You'd need a really pathological pixel shader for it not to be breakable into passes. If they were talking about vertex shaders, which can't be broken up, they must be talking about NVidia's theoretical 65536-instruction shaders, which would allow very low-poly models.
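(As a rough illustration of the multipass point, here's a minimal sketch; the per-pass instruction cap below is my assumption of a PS 2.0-level limit, not a confirmed R300 figure.)

import math

# Sketch: how many loopback passes a long pixel shader would need if each pass
# were capped at an assumed PS 2.0-style limit of 64 arithmetic instructions.
MAX_INSTRUCTIONS_PER_PASS = 64  # assumption for illustration only

def passes_needed(shader_length):
    return math.ceil(shader_length / MAX_INSTRUCTIONS_PER_PASS)

print(passes_needed(1024))  # 16 passes, with intermediate results written to textures
print(passes_needed(96))    # 2 passes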

NVidia's "beyond DX9" shaders won't be used any more than ATI's 1.4 pixel shaders. DX9 already gives developers more freedom than they can handle, so there is really no need to go beyond that. Anyway, this has been argued countless times already so I won't mention it any more.

Again, irrelevant. Final performance is what matters.

OK, let's look at their quotes:
- 2.7 times GF4 Ti4600 HQ fillrate: the 9700 already does that (OK, fine, only 2.5 times; look at Q3 at 16x12 with FSAA)
- 173 fps in Quake3 vs 93 for the GF4 @ 2048x1536: ATI would be a bit slower here, this being one of the situations I mentioned above
- 40 fps in Nature w/ 4xFSAA @ 1280x1024: ATI does 35+ from what I've heard

Furthermore, NV reps have indicated that NV30 has ordered-grid FSAA, so from what we know about FSAA you'll likely need 4xS or 6xS to match ATI's 4xFSAA.

Doom3 is the only game which shows NV30 in particularly good light, and that's not even final yet.

Finally, to top it all off, ATI is quite likely to have a faster card out to compete with NV30. I'm not sure of it myself, since ATI hasn't done this in the past, but this is also the first time ATI has been in this position. You've been a huge NV30 supporter for a very long time, but in the end it has many flaws and won't have much more to offer.
 
Reverend said:
Russ, neither you (or a thousand of you) nor I will ever get what we both want (simplistically, you're asking for a perfect world - er, forum). It's useless. It is more down to the Beyond3D staff, not its participants.

Just quit whining and live with it (like I have/do). It has become rather easy for me to ignore.

How is it down to the B3D staff? Are they just going to start banning people who seem biased? That would lead to pretty boring conversations after about a week, when the only person still posting is AlexSOK... err wait, he'd be the first to go, NM.
 
Mintmaster said:
Doom3 is the only game which shows NV30 in particularly good light, and that's not even final yet.
I suppose you know what market DOOM3 is aimed at, so the "only" is pure misinformation ;)
 
Reverend said:
Russ, neither you (or a thousand of you) nor I will ever get what we both want (simplistically, you're asking for a perfect world - er, forum). It's useless. It is more down to the Beyond3D staff, not its participants.

How is the staff responsible for the overall tone of a message board? A conversation's tone is created by those conversing. Sure, heavy-handed moderation would result in the removal of anything even remotely resembling rudeness, vulgarity, and/or excessive zealotry (and I have locked quite a few fanboi-ish threads that served no purpose other than to say, "Hey, new drivers!"), but that's not something the current staff is interested in. And there's always going to be a certain element who feel it's their 'job' to cheerlead their favored IHV's market approach and products. IMO, ignoring such people means they'll most likely take their cheerleading elsewhere, whereas always reacting negatively simply generates more noise.
 
The B3D forums have always been pretty self-regulated. The only times we step in are when things go really bad in terms of language or completely improper content, which IIRC has only happened once.

Fanatics come and go, and it usually balances out quite quickly... everybody has a preference, knowingly or not... jabs happen in real life, and it's just easier to jab/joke about some PR than others. And it's a seasonal thing... usually coupled with excessive PR activity. I dread the day the PowerVR season is declared open :LOL:

I don't think we'll ever move to heavy handed moderation of the forums.
 
Back to topic
What about some of the points Mintmaster makes... can anyone refute any of them? That is, without actual hardware, just looking at the specs. I would love a game developer to make some educated guesses... even anonymously.

Truth be told, I was slightly underwhelmed by the specs NVIDIA released for the GFFX - it no longer seems as revolutionary as it was originally hyped to be, even by NV's CEO. It seems more like an extension of the R300 core, and even then it seems to be lacking in some places. But I am not technical enough to make these calls...

I believe NVIDIA spent too many resources on making sure the NV30 would be faster in all benchmarks (by adding native support for past technology rather than emulating it), and the vertex performance already seems weaker (3 vertex units?).
 
misae said:
Back to topic
Truth be told, I was slightly underwhelmed by the specs NVIDIA released for the GFFX - it no longer seems as revolutionary as it was originally hyped to be, even by NV's CEO. It seems more like an extension of the R300 core, and even then it seems to be lacking in some places. But I am not technical enough to make these calls...

Does nVidia recognize ATI as a competitor, or do they try to hold on to that old vision where they were years ahead of everyone else? If you look at all the stuff they showed, does any of it talk about ATI and how the GeForce FX does better than the Radeon 9700 Pro? Or do they still want to give consumers the impression that a Ti4600-based product is the best that money can buy until GeForce FX based boards arrive and replace it as the fastest high-end card?

a lot of questions there...
 
Now, I don't believe NVIDIA see the GF4 Ti4600 competing in the same bracket as the Radeon 9700 Pro. The market is small at that price range, and it seems that if you are going to spend £250+ on a graphics card then your eyes are either on the Radeon 9700 Pro or on waiting for the GF4 Ti4600's successor (no, not that 8x version ;)).

In fact, I heard from a couple of news sites that NVIDIA has stopped, or will stop, making GF4 Ti4600 chips. But that could simply be because they want to concentrate on the GFFX.

I wonder how soon OEMs are going to take up the GFFX, because most of them (as in 90% of the OEMs I work with, not including DELL) have SiS chipset motherboards and are using the GF4MX...

What kind of penetration has the Radeon 9700 Pro made into the OEM market? I believe the NVIDIA GFFX will make at least double the impact, but it will still be abysmally small in the bigger picture.
 
Evildeus said:
Mintmaster said:
Doom3 is the only game which shows NV30 in particularly good light, and that's not even final yet.
I suppose you know what market DOOM3 is aimed at, so the "only" is pure misinformation ;)

Well, I'll give you that. It's the most important game in the eyes of the hardcore gamer looking at future tech. But designing a graphics card to excel at one game engine can lead to a card being imbalanced (as I've mentioned), which is not the greatest idea in my opinion. From a cost/benefit viewpoint, it's also very risky.

Using Doom 3 as a performance argument is even more speculative than our current NV30 talk, but in all likelihood it will be NV30's trump card. We'll see how it pans out in a few months.

Good point.
 