ATI responses about the GeForce FX at FiringSquad

What ATI is saying, and anyone who is honest with themselves would agree, is that the benefits of issues like 128-bit color (as opposed to 96-bit) and the difference in the number of shaders that can be computed are simply not applicable to 99.99% of the user base (especially gamers), nor to anything we are likely to see running on our desktops.

That may be true...and I'll probably sit here and agree with you on that...But let's also not forget the ole' PS 1.4 arguments that were made both by ATI and their following. They weren't saying that PS 1.3 was "good enough" or that PS 1.3 would be about the only shader version to gain any acceptance by game developers...It was quite a bit different (and rightly so).
 
Chalnoth (the irrelevant one) said:
I'd really like to dissect this one...

jpeter said:
The focus of our conversation with ATI was dealing with the misconceptions brought about by NVIDIA during the GeForce FX launch. ATI essentially feels that the RADEON 9700 is a more balanced solution than GeForce FX, which doesn’t have the bandwidth to perform many of the operations it’s boasting at an acceptable frame rate.

Irrelevant. If nVidia can produce performance numbers as advertised (particularly with FSAA enabled), then it doesn't matter who has more raw bandwidth.

And if they can't? What makes you think they can?
Your "irrelevancy" is irrelevant to the discussion. ATI didn't say "we will stomp the GFFX"; they said their part is more balanced. And quite frankly, more bandwidth and a lower core clock speed do make it more balanced, if past history (which you are so fond of using to prove nVidia will have better performance) means anything. The GFFX will be bandwidth-limited with such a high core clock and a lack of memory bandwidth, imo.

As far as NVIDIA’s bandwidth claims of GeForce FX’s 48GB/sec memory bandwidth, ATI states that the color compression in their HYPERZ III technology does the same thing today, and with all of the techniques they use in RADEON 9700, they could claim bandwidth of nearly 100GB/sec, but if they did so no one would believe them, hence they’ve stuck with quoting just shy of 20GB/sec of bandwidth.

Again, irrelevant. Final performance is what matters.
Not at all; you totally miss the point (again!)
ATI is simply saying that nVidia isn't the only one with color compression (most web sites' previews left this little fact out), and ATI also says that talking about "effective" bandwidth is worthless. The only thing "irrelevant" that I see is nVidia's claim of 48GB/sec...
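(For scale, a back-of-the-envelope raw-bandwidth calculation in Python, using the commonly quoted launch specs rather than anything official: a 256-bit bus at 310MHz DDR for the 9700 Pro and a 128-bit bus at 500MHz DDR2 for the GeForce FX.)

# Raw memory bandwidth from bus width and clock; the figures are illustrative.
def raw_bandwidth_gb_s(bus_bits, mem_clock_mhz, transfers_per_clock=2):
    """Bytes moved per second, in GB/s (1 GB = 1e9 bytes)."""
    return bus_bits / 8 * mem_clock_mhz * 1e6 * transfers_per_clock / 1e9

r9700pro = raw_bandwidth_gb_s(256, 310)  # ~19.8 GB/s -- the "just shy of 20GB/sec"
gffx     = raw_bandwidth_gb_s(128, 500)  # 16.0 GB/s raw
print(f"Radeon 9700 Pro raw: {r9700pro:.1f} GB/s")
print(f"GeForce FX raw:      {gffx:.1f} GB/s")
# NVIDIA's 48GB/sec figure therefore implies roughly a 48/16 = 3x "effective"
# multiplier from compression -- a best-case number, not raw bandwidth.
print(f"Implied multiplier behind the 48GB/sec claim: {48 / gffx:.1f}x")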

And, I will say again, it will most likely be the case that when ATI releases a DDR2 board, it will be on a 128-bit bus. Yes, there's a chance for it to be a 256-bit bus, but it would require a much more beefy memory controller or much higher core clock speeds.
And why exactly do you think this?
Despite all the facts (ATI has stated their current memory controller supports DDR2, ATI has shown a card running DDR2, etc.)...
Why would they cut the bus width back? Do you even know what you are talking about?
 
Typedef Enum said:
What ATI is saying, and anyone who is honest with themselves would agree, is that the benefits of issues like 128-bit color (as opposed to 96-bit) and the difference in the number of shaders that can be computed are simply not applicable to 99.99% of the user base (especially gamers), nor to anything we are likely to see running on our desktops.

That may be true...and I'll probably sit here and agree with you on that...But let's also not forget the ole' PS 1.4 arguments that were made both by ATI and their following. They weren't saying that PS 1.3 was "good enough" or that PS 1.3 would be about the only shader version to gain any acceptance by game developers...It was quite a bit different (and rightly so).

Actually, I maintain that if there is a visible difference that can be demonstrated to be gained by "pure" 128-bit, they have every right to demonstrate it and harp on it. The same with their generally enhanced pixel shader functionality. The problem is that at this level of features, there doesn't seem to be a demonstrable difference (of course the nv30 isn't actually out yet, so nVidia has plenty of time to make their case...it is just that they've had opportunities to demonstrate it already and haven't so far). If the GF3/4 had high-precision floating point intermediate buffers, the situation might be a closer parallel, but it doesn't.

The difference so far is that what nVidia has done is not just say their functionality is better and/or faster, but tried to marginalize the R300 by saying the nv30 is the first "cinema shading GPU" and consistently pretending that the statements that follow about what the nv30 can do support this, when the similarity of the R300's featureset to those statements seems to simply make the initial claim false. I think it is working pretty well so far, and I'm pretty sure many forum members from other websites believe it totally. I don't think it is surprising that ATI found it necessary to respond to this and other implications.

That said, I do agree with all of Chalnoth's "final performance is all that matters" comments; I just happen to think the "irrelevant" and "just stupid" commentary before them is a bit biased. :p Too bad we still have so many questions about final performance between the two parts, though.

I don't get the 128-bit bus assumption at all, though; why would they go backwards in bandwidth? I doubt they will try for 1GHz right away, they don't need to; the speed increase using DDR II will likely be much more gradual. *shrug* Anything is possible, I guess.
 
Chalnoth said:

And, I will say again, it will most likely be the case that...

Now, Chalnoth...what did I tell you about using phrases like "most likely", "definitely", "obviously"?

The rest of you should learn to basically ignore anything Chalnoth says after any such phrases...
 
I will say this: even if I don't always agree with Chalnoth (or fanatics on either side), I do have to say that he rarely (if ever) resorts to slander against anyone on this board (that I've seen). I really respect that and read what he says just because of that. It tells me that his beliefs/opinions are more solid and don't require knee-jerk attacks.

My personal fault is that I'm, for some (foolish?) reason, enjoying seeing Nvidia slightly stumble. I enjoy my Ti4400, and if I really think about it, I want them to succeed as much as or more than ATI. However, I really find that Nvidia's marketing in areas such as these is borderline unethical.

Back to the topic at hand... What I should have said earlier is that it seems petty that so many people take such strong sides on issues like 128bit color vs 96bit color, etc, etc when the fact is that both cards are so far above what we are all using today that it's really comical. It's like comparing 8-track tape to Dolby Digital and DTS.
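(To put rough numbers on the 128-bit vs 96-bit point, a small Python sketch assuming the usual reading that "96-bit" means four FP24 channels with a 16-bit mantissa and "128-bit" means four FP32 channels with a 23-bit mantissa.)

# Smallest relative step each per-channel format can represent, versus the
# 1/255 steps of an ordinary 8-bit-per-channel display output.
fp24_step = 2 ** -16   # FP24 mantissa precision, ~1.5e-5
fp32_step = 2 ** -23   # FP32 mantissa precision, ~1.2e-7
eight_bit = 1 / 255    # ~3.9e-3
print(f"8-bit step: {eight_bit:.2e}")
print(f"FP24 step:  {fp24_step:.2e}")
print(f"FP32 step:  {fp32_step:.2e}")
# Both FP formats are orders of magnitude finer than what today's displays can
# show, which is exactly the "is the difference visible?" argument above.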
 
I simply can't understand how people can read some of the comments made on this board and not get at least *irritated*. I know that most of you guys just say, *hey, it's only video cards, who cares*. But some things matter to me.

For instance, Chalnoth's comments.

And, I will say again, it will most likely be the case that when ATI releases a DDR2 board, it will be on a 128-bit bus. Yes, there's a chance for it to be a 256-bit bus, but it would require a much more beefy memory controller or much higher core clock speeds.

The above is a completely untrue statement whose only motivation seems to be sheer product favoritism. No, it would not require a beefier memory controller or higher core clock speeds. Just because Nvidia decided to go for a 128-bit bus.... I mean.. come on. Where is the integrity to technology in general??? You act like you are reading right off an Nvidia talking-points handout.

This is just stupid. Yes, the GeForce FX has a higher ratio of fillrate to geometry rate than the Radeon 9700, but that doesn't matter. From what I've been hearing on these very forums, most of the calculations will be moving away from the vertex shader and onto the fragment shader.

Not for years and years.. You know very well how long game developers take to catch up to technology. The point is that by the time any games come out that would use such long shader routines, the actual routines would run at a snail's pace even on the Nv30.. It will take a couple of full generational hardware upgrades to get to in-game performance levels. It is no different than what we see with games like Doom III that will run extremely slowly on standard GF3 hardware, yet it was the base for its development. Also, the only reason they even have a fill rate advantage is core clock speed alone. Yet even that advantage will disappear before the Nv30 has barely hit the street.
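(A quick Python sanity check of the "fill rate advantage is core clock speed alone" claim, assuming the 8-pixel-pipeline configurations both parts were marketed with and the commonly quoted 500MHz / 325MHz core clocks.)

# Peak pixel fill rate is just pipelines x core clock.
def peak_fillrate_mpix(pipelines, core_mhz):
    return pipelines * core_mhz  # Mpixels/s

gffx_fill  = peak_fillrate_mpix(8, 500)  # 4000 Mpixels/s
r9700_fill = peak_fillrate_mpix(8, 325)  # 2600 Mpixels/s
print(f"GeForce FX: {gffx_fill} Mpix/s, 9700 Pro: {r9700_fill} Mpix/s, "
      f"ratio {gffx_fill / r9700_fill:.2f}x")
# With the pipeline counts equal, the ~1.54x gap is exactly the 500/325 clock
# ratio -- i.e. the advantage comes from clock speed and nothing else.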

It doesn't make any difference until ATI releases a board with DDR2.

So where is the DDR-II Nvidia board? Both were seen, but not experienced. Both exist, but neither is available. There is no difference.

Again, irrelevant. Final performance is what matters.

I totally agree with this statement. And will be saving it for February.

Another thing.. The only Nv30 card numbers that have been shown are for the *mythical* 500MHz Ultra version. There is no guarantee that they will actually be able to ship at that speed, nor do I think the non-Ultra version is going to outperform the 9700 Pro. Also, their performance advantage at 500MHz is due to its massive fillrate. Yet... no one outside of Nvidia has actually been able to test one.

We will see what happens when the actual shipping cards are reviewed.
 
Hellbinder[CE] said:
Another thing.. The only Nv30 card numbers that have been shown are for the *mythical* 500MHz Ultra version. There is no guarantee that they will actually be able to ship at that speed.
Well, I don't know about the other presentations, but I do know that in France, the Nvidia rep guaranteed the 500MHz core speed.
 
I'm not at all sure about the GeForce FX speeds. The launch said 500MHz+ parts, didn't it?

Is it guaranteed that the 500MHz part is the Ultra?
 
Hellbinder[CE] said:
Not for years and years.. You know very well how long game developers take to catch up to technology. The point is that by the time any games come out that would use such long shader routines, the actual routines would run at a snail's pace even on the Nv30.. It will take a couple of full generational hardware upgrades to get to in-game performance levels. It is no different than what we see with games like Doom III that will run extremely slowly on standard GF3 hardware, yet it was the base for its development.
I don't care for one or the other company's next-gen products (they're both quite impressive pieces of sand :p), but I do care for 3D tech, thus...
Don't base your judgement only on what happened in the past (slow T&L adoption, slow DX8 shader adoption, slow cubemaps on GF1). At the moment, both major IHVs are making some heavy efforts to make using their new tech easier for developers (RenderMonkey, Cg, etc.).
Here's hoping that software developers will really pick up on that.
As stated before, long shaders on the NV30 are not useless for interactive applications even when they are slow, because one doesn't need to fill 100% of the display with something like procedural fire done in PS. What matters for performance is the average shader length per scene.
Where does your statement about DOOM III and GF3 come from? Have you tested the final game or even an official demo on the aforementioned graphics card?
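(A minimal Python sketch of that "average shader length per scene" point, with made-up screen-coverage and instruction-count numbers purely for illustration.)

# Weight each shader's instruction count by how much of the screen it covers.
coverage_and_length = [
    (0.05, 200),  # 5% of pixels: an expensive procedural-fire style shader
    (0.25,  30),  # 25% of pixels: mid-length material shaders
    (0.70,   8),  # 70% of pixels: short, simple shaders
]
avg_len = sum(cov * length for cov, length in coverage_and_length)
print(f"Average shader length per pixel: {avg_len:.1f} instructions")
# -> 23.1 instructions, far below the 200-instruction worst case, so hardware
#    that runs long shaders slowly can still hold up if they cover few pixels.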
 
And, I will say again, it will most likely be the case that when ATI releases a DDR2 board, it will be on a 128-bit bus. Yes, there's a chance for it to be a 256-bit bus, but it would require a much more beefy memory controller or much higher core clock speeds.

How about this for an idea:

Assuming DDR2 is greatly reduced in price and freely available when RV350 is released (around, say, mid-2003), what are the chances that we'll see RV350 as an 8-pipeline 0.13 micron device using 128-bit DDR2 at ~450MHz?

R400 would then be 0.13 micron 256-bit DDR2.

Of course, in the interim, I'm sure ATI could release a 0.15 micron 256-bit DDR2 device without too many problems.
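(Plugging those hypothetical configurations into the same raw-bandwidth arithmetic used earlier in the thread; all figures are speculative, and the R400 clock is simply assumed to be the same 450MHz for comparison.)

# Raw bandwidth in GB/s for the hypothetical parts above.
def raw_bandwidth_gb_s(bus_bits, mem_clock_mhz, transfers_per_clock=2):
    return bus_bits / 8 * mem_clock_mhz * 1e6 * transfers_per_clock / 1e9

print(f"Hypothetical RV350, 128-bit DDR2 @ 450MHz: {raw_bandwidth_gb_s(128, 450):.1f} GB/s")  # 14.4
print(f"Hypothetical R400,  256-bit DDR2 @ 450MHz: {raw_bandwidth_gb_s(256, 450):.1f} GB/s")  # 28.8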
 
no_way said:
I don't care for one or the other company's next-gen products (they're both quite impressive pieces of sand :p), but I do care for 3D tech, thus...
Don't base your judgement only on what happened in the past (slow T&L adoption, slow DX8 shader adoption, slow cubemaps on GF1).

So are we supposed to ignore that in the past the sun has always risen? I mean, maybe tomorrow it won't come up...

Slow adoption of 3D tech isn't going to change. Developers target the lowest common denominator not because adopting new tech is hard, but because they want to sell games. Unfortunately, that means the game has to run on a crappy GF1 card, and the only things they can really add are little effects on top of the base that add a little extra but don't really take advantage of the newer cards' features.

The slow adoption of 3D tech is definitely as much a "law" as Moore's Law. Maybe we should call it something like Carmack's Constant. :p
 
Nagorak said:
Developers target the lowest common denominator not because adopting new tech is hard, but because they want to sell games. Unfortunately, that means the game has to run on a crappy GF1 card
Developers don't sell games, publishers do. What I'm saying is, not all developers are in this industry just for $$$.
Of course, you're right: if you want to make some major money, you have to cater to as large an audience as possible, thus you could theoretically still choose DirectX 3 as your target platform. But this still limits your audience, and you could try to develop for all the consoles out there as well.
What I'm trying to illustrate is that, at a given point in time, the "lowest common denominator" is not a fixed spec. Someone developing a Sims clone might do fine with Intel i810 graphics and be happy about that. A Quake clone will obviously require more.
So choosing your target platform is a tradeoff between the size of your potential audience and platform capabilities, and everyone is free to choose for themselves.
If, today, as an indie developer, I were starting development of, let's say, an arcade space sim with a very low budget, I'd happily choose DX8 and Cg as my development tools just based on my gut feeling (I know gut feelings don't justify business decisions, but hey.. it's MY money), thus limiting my already limited audience (there are scant few Wing Commander fans left) even further to DX8-capable gfx card owners.

The target platform is not a "set in stone" limitation; there are tradeoffs when choosing one, and choosing higher can perhaps pay off better (if development time is drastically reduced, content creation is easier, more focus can be put on the game itself rather than the technology, etc.).
 
ATI feels that with RADEON 9700’s multi-pass capability, having native support for thousands of shaders is useless, as the RADEON 9700 can loopback to perform those operations. ATI ran a demonstration of a space fighter that was rendered using this technique.
Correct me if I'm wrong, but wouldn't that mean that the Radeon 9700 would lose precision, as the framebuffer doesn't support the pipeline's 96-bit FP format?

ta,
-Sascha.rb
 
"You're a fanboi!"

"Me? No, You're a fanboi!"

"No, obviously, you're the fanboi!" (followed by long list of reasons)

"Whatever, its plain to everybody else who the real fanboi is."

etc.

Is anybody as tired of this as I am?
 
Is anybody as tired of this as I am?

In all honesty...not quite as tired of that as I am of seeing you post about it in every other thread. :-?

Seriously, you are going to have to learn to tolerate it to an extent, because although it goes on to a degree here, there are at least valid points of view and good points and ideas being brought up in these threads as well, despite the fanboish tone at times.

The fanboism on B3D isn't anywhere near that of sites like Rage3D or NVNews...but it's never going to completely disappear. You simply cannot "mod out" every single post that hints of fanboism...it would be counterproductive.
 
nggalai said:
ATI feels that with RADEON 9700’s multi-pass capability, having native support for thousands of shaders is useless, as the RADEON 9700 can loopback to perform those operations. ATI ran a demonstration of a space fighter that was rendered using this technique.
Correct me if I'm wrong, but wouldn't that mean that the Radeon 9700 would lose precision, as the framebuffer doesn't support the pipeline's 96-bit FP format?

ta,
-Sascha.rb

You're wrong.

The framebuffer supports 128-bit floating point format, so no precision is lost.

Consider yourself corrected. ;)
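(A toy numeric sketch in Python of why intermediate-buffer precision matters for multipass rendering; the 8-bit rounding below is purely illustrative, not a claim about what either card actually does.)

# Carry an intermediate result into a second pass as a float, versus rounding
# it to an 8-bit-per-channel buffer in between.
def quantize_8bit(x):
    """Round a [0,1] value to the nearest of 256 levels, like an 8-bit buffer."""
    return round(x * 255) / 255

value = 0.123456          # some intermediate shading result in [0,1]
scale = 1.7               # an arbitrary second-pass operation

exact     = value * scale                  # float kept between passes
quantized = quantize_8bit(value) * scale   # 8-bit buffer between passes
print(f"float intermediate: {exact:.6f}")
print(f"8-bit intermediate: {quantized:.6f} (error {abs(exact - quantized):.6f})")
# If the intermediate buffer stores full floating point instead -- which is the
# claim above about the R300's render targets -- the two results match.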
 