FP16 and market support

radar1200gs said:
Althorin: What part of "entry level" or "value" product do you not understand?

Baron: you need to read what I originally posted:
The 5200 is capable of running HL2 at DX9 levels if the developer allows it to.

I'm sure that if Valve hardcodes 5200 & 5600 to DX8 with no option for the user to alter it, some enterprising person will devise a patch that tells the game the card is really a 5900 or whatever.
Yes, because we all know that since MS is calling HL2 the next big game benchmark, there IS NO OPENGL PATH! Excuse the fact that GLSL wasn't ready, or that GL1.5 drivers barely exist yet... IT MUST BE THE MICROSOFT CONNECTION!

Do you realize how silly that sounds? Valve isn't going to hardcode the mixed-mode pathway for 5200s any more than they hardcoded 640x480 for GF2MXs. They will default to a specific pathway because that way, the casual gamer (which will make up the bulk of HL2's fanbase, thanks to the success of HL1 multiplayer and CS in particular) will be able to click "Multiplayer" and have the game run decently. It's not some grand ATI-Valve conspiracy; it's so millions of people don't flood message boards with "omg valv is teh suxxor bcuz hl2 runz lke @$$."
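(A minimal sketch of the distinction being argued here - a default that the user can still override, as opposed to a hardcoded path. The function and names below are made up purely for illustration; this is not anything from Valve.)

```cpp
// Illustrative only. The point is that a *default* leaves an override
// available, whereas a hardcoded path does not.
enum class RenderPath { DX8, DX9 };

RenderPath choosePath(bool isGeForceFX5200, bool userForcedDX9)
{
    if (userForcedDX9)
        return RenderPath::DX9;                  // the user can still opt in
    return isGeForceFX5200 ? RenderPath::DX8     // sensible out-of-the-box default
                           : RenderPath::DX9;
}
```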
 
RussSchultz said:
Why you insist on calling it (the 5200) incapable is beyond me. It's quite plainly capable--just not at the speeds you (or most people) would desire it to be.

In this particular field, the speed of execution is an important part of "capable". The current low speed of the cards is like having a DVD player that only plays at 10 frames per second, or skips every third second of film.

Below the minimum speed required to give a reasonable playing experience, you can't call the card "capable" when getting to that reasonable playing experience is one of the prerequisites for being "capable".
 
Bouncing Zabaglione Bros. said:
In this particular field, the speed of execution is an important part of "capable". The current low speed of the cards is like having a DVD player that only plays at 10 frames per second, or skips every third second of film.

Below the minimum speed required to give a reasonable playing experience, you can't call the card "capable" when getting to that reasonable playing experience is one of the prerequisites for being "capable".

As long as the card is fillrate limited, you can always decrease the resolution to gain speed. This is and always was a trade-off of budget cards.
I guess you don't remember the MX200, do you? ;)
 
This may be OTish, but what isn't? ;)
Chalnoth said:
Which you could do, but it is dangerous. There may still be problems with shaders in which errors accumulate, so that the problem isn't related to the input or output formats, or the dynamic range, but is rather due to recursive errors. A perfect example is a Mandelbrot set.
I am under the impression that the issue with Mandelbrot sets is coordinate interpolation, i.e. the starting values you feed into the iteration process for each sample. Higher temporary precision won't improve Mandelbrot fidelity if you can't interpolate at at least the same precision.

I've been *cough* playing around with this stuff, so I hope I'm qualified to tell you that this isn't a particularly good example. Sorry ;)
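For what it's worth, here's a tiny C++ illustration of the interpolation point (mine, purely illustrative): promoting the loop temporaries to a higher precision doesn't help if the starting coordinate was already produced at a lower one.

```cpp
// Illustrative sketch only. 'cInterpolated' stands in for the interpolated
// coordinate; if it arrives at low precision, iterating in double can't
// recover the lost bits.
#include <complex>

int mandelbrotIters(std::complex<float> cInterpolated, int maxIter = 256)
{
    std::complex<double> c = cInterpolated;   // promoted too late -- precision already lost
    std::complex<double> z = 0.0;
    int i = 0;
    while (i < maxIter && std::norm(z) < 4.0)
    {
        z = z * z + c;                        // z(n+1) = z(n)^2 + c
        ++i;
    }
    return i;
}
```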
 
RussSchultz said:
jvd said:
If you buy a card that says it's DX9, it had better be able to play DX9-class games. Otherwise it's false advertising. Just as if you got a car and it wouldn't work on a road, you'd be pretty damn pissed, wouldn't you?
People buy Festivas (or Echos, or Geos) all the time. They're not (in my opinion) fit to drive on Texas freeways as their acceleration stinks and they're fragile. It doesn't make them any less of a car.

If you buy a cheap car, you get a cheap car.

Why you insist on calling it (the 5200) incapable is beyond me. It's quite plainly capable--just not at the speeds you (or most people) would desire it to be.
I don't find it capable at all at DX9 tasks, especially not at its price range.

Buy an Echo for the 12 grand it costs and you get close to the same horsepower as other $12k cars like the Civic and the Focus. Buy this card and you get 1/10th the frame rate of other cards in its class, e.g. the 9600 (which costs $10 more than the 5200 Ultra).
 
andypski said:
You really have very little knowledge at run-time of the usage of a texture - basically, as far as I see it, if you have two differing input precisions and want to guarantee the correct result, you really need to default to the higher of the input precisions.

Yes, this is exactly what you would do, but what percentage of games use FP32 textures in every one of their shaders? It seems to me that the vast majority of input textures in games, even DX9 games, are not FP, but plain old integer. (yes, even HL2) That means in most cases, the compiler would be able to detect that lower precision operations could be used.
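Something like this toy check is all I mean by "detect" (purely illustrative, not any real driver's compiler):

```cpp
// Toy sketch of the heuristic: if every texture feeding a shader is a plain
// 8-bit integer format, FP16 temporaries already exceed the input precision.
// (This ignores error accumulation across many operations -- see below.)
#include <vector>

enum class TexFormat { RGBA8, FP16, FP32 };

bool inputsAllowFP16(const std::vector<TexFormat>& sampledTextures)
{
    for (TexFormat f : sampledTextures)
        if (f != TexFormat::RGBA8)      // any float input forces full precision
            return false;
    return true;
}
```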

As more and more games start to adopt FP/HDR textures, the effectiveness of the heuristic would go down, but by the time games use FP32 inputs for most of their pipeline, we'll probably have the NV50/R500 and a whole new set of problems to worry about.

The core issue is multipass. The compiler can estimate error in a single pass, but how can it estimate ACCUMULATED ERROR without saving it in an additional buffer (E-Buffer? Error Buffer?) for every pixel? Perhaps it would use MRT or some kind of packing to store the error to pick it up in later passes.
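One simple way to frame what such an estimate would have to track per pass (my framing, not an actual implementation): each pass amplifies the error coming in and adds its own round-off.

```cpp
// Illustrative only: a first-order bound a compiler could propagate when
// deciding whether FP16 temporaries are safe across multiple passes.
// 'amplification[i]' is an assumed bound on how much pass i magnifies
// incoming error; 'roundoff' is the unit round-off of the chosen precision.
#include <vector>

double accumulatedErrorBound(const std::vector<double>& amplification, double roundoff)
{
    double err = 0.0;                  // error entering the first pass
    for (double a : amplification)
        err = a * err + roundoff;      // old error gets scaled, new round-off added
    return err;
}
```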

Anyway, like a Haskell version of "quicksort", the smart compiler inferencing theory sounds nice, but it is really hard to get it to work as fast and easily as a strongly typed system. (What I mean by the Haskell reference is that functional languages can express quicksort in a single line, but no compiler I know of can generate the in-place sorting of the C version. In theory, it is possible to go from the specification of the sort to an in-place version, but compilers haven't advanced that far yet.)

It's better to get the developer to explicitly say what his intentions and requirements are. Not only does it usually work better, but it serves as documentation for later. When you see _PP, you know what it means. Without it, or other semantic identifiers, you are left running the compiler "inference" heuristic in your head to guesstimate what's going on.
 
Hyp-X said:
As long as the card is fillrate limited, you can always decrease the resolution to gain speed. This is and always was a trade-off of budget cards.
I guess you don't remember the MX200, do you? ;)

When a card gets 17 fps in TR:AOD at 640x480, that's unplayable.


Sure, any card can rule the roost at 4x3 getting 100 fps!
SWEET!

where do YOU draw the line?
I draw it at 640x480. If a card can't perform at 640x480, it's unplayable.

I don't see how you can argue that the FX 5200 is gonna look better at 320x240 DX9 in HL2 than at 640x480 DX8.

Ergo, most people and most developers are going to treat the 5200 as a DX8 card.
 
Hyp-X said:
As long as the card is fillrate limited, you can always decrease the resolution to gain speed. This is and always was a trade-off of budget cards.

There's always a bottom limit, even for budget cards - and that includes a minimum ability for resolution, as well as speed. For instance, if you have to drop below 800x600 (or for the more forgiving, 640x480) with no AA/AF, is that a "capable" resolution? Heck, is it even worth *trying* to use it as a DX9 card? Isn't it kind of like plugging your top-of-the-range DVD player into a 14-inch TV?

Hyp-X said:
I guess you don't remember the MX200, do you? ;)

Stuff that bad doesn't even register on my radar. Again, it's marketing instead of capability from Nvidia, which is just their standard operating procedure, especially on first generation functionality.
 
Isn't the 5200 as fast as a GF4 4200? That seems pretty capable.

I find it ironic that people are using a horrible, badly bugged game like TR:AOD as a DX9 benchmark. Isn't this game a PS2 port?
 
Basically, a GeForce FX 5200 is half a GeForce 4 Ti with the texture address processor extended into a general FP32 ALU. It can do single texturing at the same speed as a GeForce 4 Ti, but everything else is half as fast. Its DX9 ability is good for the sticker on the box and for full Longhorn support, but not really for 3D gaming. Anyway, this doesn't change the fact that it is a DX9 GPU.
 
DemoCoder said:
Isn't the 5200 as fast as a GF4 4200? That seems pretty capable.
The 5200 lags far behind the 4200 in many cases.

From Tom's:
UT2003 Inferno Timedemo - 4200 = 55.3, 5200 U = 42.4
Call of Duty - 4200 = 85.5, 5200 U = 55.5
Warcraft 3 - 4200 = 53.1, 5200 U = 37.1
Halo - 4200 = 28.0, 5200 U = 19.6
NASCAR - 4200 = 54.5, 5200 U = 35.6
X2 - 4200 = 40.2, 5200 U = 24.6
I find it ironic that people are using a horrible, badly bugged game like TR:AOD as a DX9 benchmark. Isn't this game a PS2 port?
"Horrible" - subjective
"Badly bugged" - What bugs and do they affect benchmarking?

It's the only current game out using lots of DX9 features (float buffers, PS/VS 2.0, etc.). Whether the game is good or not has nothing to do with that.
 
Althornin said:
where do YOU draw the line?
I draw it at 640x480.

Me, too.

Btw, if the card is too slow at 640x480, chances are that it isn't fillrate limited, so reducing resolution further doesn't make sense anymore.

Notice that games nowadays can use a lot of off-screen render targets (like shadow buffers) whose resolution might not be automatically reduced with the screen resolution - so you might need to adjust those as well.
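For example, something along these lines (hypothetical, not from any particular engine) - tie the shadow map size to the chosen screen resolution so that dropping the resolution actually cuts that fillrate too:

```cpp
// Hypothetical helper: pick the largest power-of-two shadow map size that
// doesn't exceed the screen width, so off-screen targets shrink along with
// the chosen resolution.
int shadowMapSize(int screenWidth)
{
    int size = 256;                              // floor for very low resolutions
    while (size * 2 <= screenWidth && size * 2 <= 2048)
        size *= 2;
    return size;
}
```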

If it's vertex limited it is possible that reducing geometry/skinning detail would do the trick. (Better than giving up on the extra effects.)

If it's CPU limited - then either it runs well, or it won't run well on any other card either.

Ergo, most people and most developers are going to treat the 5200 as a DX8 card.

Well, that's likely.
Although it can depend on the game.
 
Yeah, one thing developers can do is write simpler shaders for the FX5200. I guess the reason they don't is that they already have to write shaders for DX8 cards anyway, shaders that are necessarily simpler than those used for speedy DX9 cards. The only quality advantage that simple DX9 shaders will have over the DX8 shaders would be due to floating point precision, so why bother? Just treat it like a DX8 card, as its speed isn't much greater than theirs.
 
Hyp-X said:
Althornin said:
where do YOU draw the line?
I draw it at 640x480.
Me, too.

Btw, if the card is too slow at 640x480, chances are that it isn't fillrate limited, so reducing resolution further doesn't make sense anymore.

Notice that games nowadays can use a lot of off-screen render targets (like shadow buffers) whose resolution might not be automatically reduced with the screen resolution - so you might need to adjust those as well.
Yes, and these extra features can consume fillrate, leaving the card fillrate limited even at low resolutions.
 
OpenGL guy said:
"Horrible" - subjective

Got panned by most review sites (e.g. < 7.0 score).

"Badly bugged" - What bugs and do they affect benchmarking?
No, they affect game play. Fubar'ed camera, controls, and clipping issues. But it is possible to code an engine poorly, just look at the first Max Payne engine, a game engine with a fubar'ed resource/memory management system.


It's the only current game out using lots of DX9 features (float buffers, PS/VS 2.0, etc.). Whether the game is good or not has nothing to do with that.

Any more detail on this? How can a game that uses so many DX9 features be so graphically underwhelming? And I found the textures blurry for a PC title. Is this a symptom of its relation to the PS2 port?

HL2 looks like what a DX9 title should be. TR:AOD doesn't IMHO. The game looks like it could have been done on DX7 hardware.
 
Mr. Chairman, the junior member from California moves for a mercy killing of this thread.

Friends don't let friends go 'round and 'round in ever tighter circles indefinitely.
 
OpenGL guy said:
DemoCoder said:
Isn't the 5200 as fast as a GF4 4200? That seems pretty capable.
The 5200 lags far behind the 4200 in many cases.

From Tom's:
UT2003 Inferno Timedemo - 4200 = 55.3, 5200 U = 42.4
Call of Duty - 4200 = 85.5, 5200 U = 55.5
Warcraft 3 - 4200 = 53.1, 5200 U = 37.1
Halo - 4200 = 28.0, 5200 U = 19.6
NASCAR - 4200 = 54.5, 5200 U = 35.6
X2 - 4200 = 40.2, 5200 U = 24.6

Note that it's actually an FX 5200 Ultra. There are also vanilla 5200s and 64-bit versions of the cards.

If I'm not mistaken, the 128MB 4200s are actually slightly cheaper than the 5200 Ultras.
 
It was a while ago, but I saw a 5200 Ultra retailing for the same price as a BBA 9600 Pro locally. And since a very large number of 5200s are the 64-bit model, they will generally be a lot slower than those numbers indicate.
 
DemoCoder said:
Any more detail on this? How can a game that uses so many DX9 features be so graphically underwhelming? And I found the texture's blurry for a PC title. Is this a symptom of it's relation to the PS2 port?
First of all, how do you define X number of DX9 features as "so many"?

Secondly, is there a single way -- the only real way -- that showcases the entire feature set of an API, as a whole, via a PC game?

Thirdly, many -- many -- game developers I know place much less importance on the quality of textures when you're talking about the ability of shaders (via current HW) to promote creativity in programming. Do you agree/disagree?

TRAOD uses DX9 features. That should (reasonably) be enough to justify its use in B3D reviews. I/B3D explained such DX9 features in TRAOD. If you're going to associate "brilliant graphics" with "the use of the latest API features", then you have a one-track mind or are sadly misled. You have a great deal of experience, DC, and I'm surprised by your comments regarding the game and its use as a benchmark, especially taking into account that it's used at a site such as B3D. The game may be buggy, but that has much to do with its game logic... not its rendering engine.

Excuse me while I try to figure out why you read certain B3D content and participate in the B3D forums, while also trying to figure out why you lambast games for poor gameplay quality.
 