FP16 and market support

radar1200gs said:
I would not put too much weight on what OpenGL guy has to say. Remember he is/was responsible for OpenGL on the S3 Savage series and for ATi cards. We all know how the OpenGL quality of those products stacks up vs nVidia's OpenGL.
I'm sure Dave will ban me for this... but you're a real dick, aren't you, radar1200gs? I bet you don't have many friends.

As has been mentioned, OpenGL guy works on the D3D driver at ATI. Maybe you should get your facts straight before you open your yap.

-FUDie
 
don't worry too much about him. I also used to be very pissed at S3 but you know what, I got over it but obviously he has not :)

OGLguy does a great job for all of us ATi users and most of us really appreciate him posting here and at rage3d.........
 
I hope everyone had a Merry Christmas and I want to wish you all a Happy New Year.

I also want to take the time to thank all the people who took the time and effort to post their thoughts on this thread; even those I don't agree with. This is one of the best, if not the best forum on the web for discussing graphics and 3D issues and I hope to see it continue for a good long time.
 
YeuEmMaiMai said:
don't worry too much about him. I also used to be very pissed at S3 but you know what, I got over it but obviously he has not :)

OGLguy does a great job for all of us ATi users and most of us really appreciate him posting here and at rage3d.........


Judging by your posts in every thread, it's a pity you can't get over NVidia.
 
get over what? the fact that people in here are saying that ATi does this and that while all the time ignoring the significant drawbacks to NVidia's hardware.


We've got too many people like Chalnoth saying that ATi sucks and NVidia rocks.....

I only mention valid points such as

1. NVidia's IQ is inferior to ATi's when the cards run at a comparable performance level.

2. Nvidia has to resort to driver hacks to get acceptable performance

3. When you objectively compare the R3X0 to NV3X it is quite clear who has the best interest of the consumer in mind, and sorry to say it is not nVidia.....

4. Futuremark, which a lot of Nvidia diehards are very quick to discredit, accurately predicted the dismal performance of the NV3X when it comes to running DX 9 code. This was backed up by Halo and Tomb Raider AOD. Did you notice how quickly Nvidia threw a wad of cash at Eidos to get them to remove the benchmark mode from the game?

Valve, whom diehard nVidia fans easily dismiss as having been bought out by ATi, stated that it took them 5x the time to develop for NV3x. Valve and ATi made a marketing deal so that ATi could include HL2 in the box. Valve stated that the reason Nvidia did not get it was solely the inferior performance of NV3x.

Oh, BTW where is doom III?

The only two things that are superior about the NV3x are:

1. anisotropic filtering
2. FP32 (unfortunately it is too slow to be of any use)

Now let's talk about the superior features on R3X0

1. DX 9 performance right out of the box
2. FSAA performance compared to NV3X at the same settings and the fact that ATi's 6x MSAA looks BETTER than NV3x 8x
3. Single slot design
4. TRUE 8x1 design, unlike nVidia's, which for all practical purposes is a 4x2 when playing current games....

Judging by your remarks it looks like you cannot get over the fact that ATi's DX 9 card is superior to NVidia's...........

DemoCoder said:
YeuEmMaiMai said:
don't worry too much about him. I also used to be very pissed at S3 but you know what, I got over it but obviously he has not :)

OGLguy does a great job for all of us ATi users and most of us really appreciate him posting here and at rage3d.........

Judging by your posts in every thread, it's a pity you can't get over NVidia.
 
Dude, I don't show up in every thread with a snide comment about some gfx company. You are obviously the one with the vendetta. And when you start talking about which large corporation has the "best interests" of the consumer at heart, you've revealed your fanboy colors. I don't even own an NVidia card, and I was one of the first to criticize NVidia after I attended the launch of the NV3x in Las Vegas.


Anyway, with regard to Valve's comments, this company has shown themselves to be even bigger liars than NVidia. September 30th, my ass. The pirated source code shows that the game wasn't anywhere near finished. It also shows that Gabe Newell lied during his presentation at E3 with regard to AI. Then in a recent interview, Gabe used the wrong verb tense when talking about how HL2 developers were currently experimenting with adding "soundscapes". Umm, for a game that almost went gold on 9/30, why are they experimenting with new technology instead of supposedly fixing any security holes left open by the theft? All features should have been frozen by August if they were shipping on 9/30.

I fear we've all been hoodwinked with an elaborate tech demo, and I now have my doubts that HL2 will ship by April. I wouldn't be surprised if it shipped next Christmas, just like HL1 was delayed. So maybe what Gabe meant to say was that their smoke-and-mirrors tech demo took 5 times as long to develop on the FX. In any case, any statements from Valve are highly suspect.

I always wondered how they were able to develop a complete game in complete secrecy and spring it on the world only 1 month after announcing it. Turns out, they couldn't. And don't even get me started on TF2.
 
Ostsol said:
. . .

I vote that this thread be closed. Nothing good is coming out of it anymore.

Seconded.
There's some good discussion here, IMO, even in recent pages. However, every single interesting bit has been 100% OT, and could perfectly well be done in another thread, or a new one.


Uttar
 
I have ATI and Nvidia cards. Hell, I even have SiS, Matrox, S3, 3Dfx, and Trident cards.

Having said that, I have some serious issues with Nvidia right now: Not so much for what the GfFX cards are, but for what they have been wrongly claimed to be. They are good solid DX8 cards, but not much for DX9. In and of itself, that's not a bad thing. What is a bad thing is their continued claims that poor DX9 performance is someone else's fault rather than their own responsibility and the tactics they have employed to make up for their deficits.

I don't like TWIMTBP, especially when it leads to things like locking out resolutions on a competitor's card just because of the device ID. I think that's slimy. The Tomb Raider fiasco bothers me too. I don't want to reward a company that does things like that for not having acceptable performance when forced to adhere to the spec. That sets a bad precedent and I don't want to do that.

Regardless of the reasons, NV3x based cards don't perform as well using full precision shaders at FP32 as they do when performing partial precision shaders at FP16. So needless to say, they will try to get the PP hint used wherever they can in order to gain acceptable performance. However, this should only be seen as a temporary workaround for this generation of cards. Precision has nowhere to go but up, and anything that seeks to cut it is a move in the wrong direction. ATI will have to move up beyond FP24 at some point, but for the current generation their FP24 implementation seems to be working just fine.
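The precision gap the post describes can be made concrete with a little arithmetic. The sketch below compares the three shader float formats under discussion, assuming the commonly cited bit layouts (FP16 = 10-bit mantissa / 5-bit exponent, ATI's FP24 = 16-bit mantissa / 7-bit exponent, FP32 = 23-bit mantissa / 8-bit exponent); the exact hardware behavior (denormals, rounding) may differ.

```python
# Rough comparison of the shader float formats discussed above.
# Bit layouts are the commonly cited ones, not vendor-verified specs.
FORMATS = {
    "FP16": {"mantissa_bits": 10, "exponent_bits": 5},
    "FP24": {"mantissa_bits": 16, "exponent_bits": 7},
    "FP32": {"mantissa_bits": 23, "exponent_bits": 8},
}

def describe(name):
    f = FORMATS[name]
    m, e = f["mantissa_bits"], f["exponent_bits"]
    bias = 2 ** (e - 1) - 1
    # Machine epsilon: gap between 1.0 and the next representable value.
    epsilon = 2.0 ** -m
    # Largest finite value: (2 - 2^-m) * 2^bias
    max_value = (2 - 2.0 ** -m) * 2.0 ** bias
    return epsilon, max_value

for name in FORMATS:
    eps, mx = describe(name)
    print(f"{name}: epsilon = {eps:.2e}, max = {mx:.3e}")
```

FP16's epsilon of about 1e-3 is why long dependent-texture-read chains and large texture coordinates visibly break down at partial precision, while FP24 and FP32 hold up.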
 
Rugor said:
They are good solid DX8 cards, but not much for DX9. In and of itself, that's not a bad thing. What is a bad thing is their continued claims that poor DX9 performance is someone else's fault rather than their own responsibility and the tactics they have employed to make up for their deficits.

er how is that not a bad thing in and of itself? Nvidia are selling their 5xxx range as "solid DX9" cards, not DX8, so they are actually misleading the public.
 
My point was that it is the false claims that the FX series are good DX9 cards, rather than the fact that they are good DX8 cards, that is the bad thing. There is nothing intrinsically wrong with a card being better at DX8 than DX9, but there is something wrong with claiming that the cards' DX9 problems are all the fault of everyone else.
 
Regardless of the reasons, NV3x based cards don't perform as well using full precision shaders at FP32 as they do when performing partial precision shaders at FP16. So needless to say, they will try to get the PP hint used wherever they can in order to gain acceptable performance. However, this should only be seen as a temporary workaround for this generation of cards. Precision has nowhere to go but up, and anything that seeks to cut it is a move in the wrong direction.

Since we all know this is the case, the last thing Nvidia should be doing is saying things like "fastest DX9 gaming solution bar none"

Showcasing their supposed DX9 demo Dawn while using 99.9% FP16 (this was before the crapola hit the fan). I find it extremely funny that ATi's Radeon 9500 Pro or greater cards can run the demo faster than NV3x can, even though ATi has to use a WRAPPER

Claiming things like 4X FSAA that is actually more like 2x......

lowering of IQ to get speed

documents that get released that just happen to put competitors in a bad light

calling other companies cheaters when in fact they are the ones cheating...


the list goes on and on and on.....
 
YeuEmMaiMai said:
Showcasing their supposed DX9 demo Dawn while using 99.9% FP16 (this was before the crapola hit the fan). I find it extremely funny that ATi's Radeon 9500 Pro or greater cards can run the demo faster than NV3x can, even though ATi has to use a WRAPPER
Why is it that people always emphasize the use of a wrapper with a purely graphics card limited demo?

Claiming things like 4X FSAA that is actually more like 2x......
"Claiming" things like 4xAA that is 4xAA, you mean?
 
Althornin said:
X2 said:
Why is it that people always emphasize the use of a wrapper with a purely graphics card limited demo?
you think that a wrapper does not impact the speed at all?
If the demo still runs fast enough (CPU) to keep the driver command buffer filled at any time, then there should be no impact at all. Dawn certainly isn't terribly CPU-hungry.

But the wrapper does some operations differently (normalization?) IIRC, doesn't it?
 
Like I said, claiming 4x FSAA that looks like 2x. Every single reviewer states that NV3X 4X FSAA looks like ATi's 2X setting....

As for the Dawn demo, what exactly are you trying to say? It has been proven that ATi renders it faster and it looks better due to FP24 vs mostly FP16

What Nvidia is doing to the community is not good. Why should I, as an ATI owner, be forced to suffer because of a competitor's lousy DX9 performance and lower precision?
 
YeuEmMaiMai said:
Like I said, claiming 4x FSAA that looks like 2x. Every single reviewer states that NV3X 4X FSAA looks like ATi's 2X setting....

As for the Dawn demo, what exactly are you trying to say? It has been proven that ATi renders it faster and it looks better due to FP24 vs mostly FP16

What Nvidia is doing to the community is not good. Why should I, as an ATI owner, be forced to suffer because of a competitor's lousy DX9 performance and lower precision?
Yes, we all know that nVidia's AA looks inferior (over 2 samples, it's OG, and it's not gamma corrected).
This does NOT mean that nVidia isn't doing 4xAA.
Stop ranting and raving!
 
Althornin said:
YeuEmMaiMai said:
Like I said, claiming 4x FSAA that looks like 2x. Every single reviewer states that NV3X 4X FSAA looks like ATi's 2X setting....

As for the Dawn demo, what exactly are you trying to say? It has been proven that ATi renders it faster and it looks better due to FP24 vs mostly FP16

What Nvidia is doing to the community is not good. Why should I, as an ATI owner, be forced to suffer because of a competitor's lousy DX9 performance and lower precision?
Yes, we all know that nVidia's AA looks inferior (over 2 samples, it's OG, and it's not gamma corrected).
This does NOT mean that nVidia isn't doing 4xAA.
Stop ranting and raving!
I think the point is that what good is 4x AA if it arguably looks no better than the competition's 2x AA. It can be argued that ATI is being penalized for having better image quality.
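The "4x that looks like 2x" argument comes down to sample-grid geometry: on a near-horizontal edge, the number of distinct gradient shades you get equals the number of distinct sample offsets along the axis perpendicular to the edge. A rough sketch, using illustrative sample positions rather than measured hardware offsets (NV3x 4x is ordered grid, ATI's 4x is a rotated/sparse grid):

```python
# Illustrative (not measured) sample positions inside a unit pixel.
# Ordered grid (OG) 2x2 layout vs a rotated/sparse 4x layout.
ordered_grid_4x = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
rotated_grid_4x = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]

def distinct_steps(samples, axis):
    """Count distinct sample offsets along one axis (0 = x, 1 = y).

    For a near-horizontal edge sweeping through the pixel, each distinct
    y offset contributes one intermediate coverage level (shade).
    """
    return len({s[axis] for s in samples})

print("ordered grid, near-horizontal edge:", distinct_steps(ordered_grid_4x, 1), "shades")
print("rotated grid, near-horizontal edge:", distinct_steps(rotated_grid_4x, 1), "shades")
```

Both modes take 4 samples, so both are "real" 4xAA; but on the axis-aligned edges that dominate most scenes, the ordered grid only produces as many gradient steps as 2x, which is what reviewers were seeing.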
 