Luminescent
What about aquanox, doesn't aquanox utilize a robust directx 8 engine with pixel and vertex shader support, along with an advanced lighting model?
I can assure you that we did NOT lock out anything from customers that is on the CD.
We pick (conservative) default settings based on detected HW, and the code currently in place will only pick the highest texture detail on 256 MByte cards by default, though you can easily change this either via the menu or by modifying the ini file.
At the highest texture detail some levels might use up to 180 MByte of data (textures + geometry), and if you have a lot of players this number might be even higher.
Here's how the detail settings work:
FWIW, we didn't ship with any textures that require a detail setting above High to be shown at full detail. The texture detail is basically a bias against the LOD level defined in the texture package. So e.g. large textures might have a "NormalLOD" of 2, which means at normal texture LOD ("Normal") the engine will skip the first two mip levels. At "Lower" it will skip 3 and at "Higher" it will skip only 1. To the best of my knowledge the highest NormalLOD used in UT2k3 is 2, which means that by setting your texture detail to "High" (ini and menus use a different notation as texture detail ended up being too fine-grained, and I'm referring to ini options) you'll end up with the full quality textures. We also do some clamping so small textures won't lose mipmaps at low texture detail settings.
Below are the ini options and their bias levels.
-4 UltraHigh
-3 VeryHigh
-2 High
-1 Higher
0 Normal
+1 Lower
+2 Low
+3 VeryLow
+4 UltraLow
As this is too fine-grained for the regular user we mapped the words differently, so XYZ in the menus doesn't necessarily map to XYZ in the ini, which might have caused some confusion.
-- Daniel, Epic Games Inc.
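To make the description above concrete, here is a minimal sketch of how a global detail bias plus a per-texture NormalLOD could translate into skipped mip levels, with clamping so small textures keep their mips. This is my own illustration, not Epic's code; only the bias table, the NormalLOD example and the 256 MByte default come from Daniel's post, everything else (names, mip counts) is assumed.
Code:
// Hypothetical sketch of the texture-detail logic described above.
// Bias values follow the ini table: -4 (UltraHigh) ... 0 (Normal) ... +4 (UltraLow).
#include <algorithm>
#include <cstdio>

// Pick a conservative default bias from detected video memory
// (the post only states that 256 MByte cards default to the highest detail).
int DefaultDetailBias(int videoMemoryMB)
{
    return (videoMemoryMB >= 256) ? -4 /*UltraHigh*/ : 0 /*Normal*/;
}

// Number of mip levels the engine would skip for one texture:
// NormalLOD comes from the texture package, detailBias from the global setting;
// clamp so small textures never lose all their mips.
int MipLevelsToSkip(int normalLOD, int detailBias, int numMips)
{
    int skip = normalLOD + detailBias;   // e.g. NormalLOD 2 at "High" (-2) -> skip 0
    skip = std::max(skip, 0);            // a negative bias can't add mips that don't exist
    skip = std::min(skip, numMips - 1);  // always keep at least one mip level
    return skip;
}

int main()
{
    // A large texture with NormalLOD 2 and 11 mips (2048x2048 base, assumed numbers):
    std::printf("Normal: skip %d\n", MipLevelsToSkip(2, 0, 11));   // skips 2
    std::printf("High:   skip %d\n", MipLevelsToSkip(2, -2, 11));  // skips 0 -> full detail
    std::printf("Lower:  skip %d\n", MipLevelsToSkip(2, +1, 11));  // skips 3
    std::printf("Default bias for a 256 MB card: %d\n", DefaultDetailBias(256));
    return 0;
}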
rhink said:It would be nice if it is a mode that works on all cards.... however, it wouldn't be unprecedented for Epic to support this feature on only one card.... we all remember UT and the extra CD you could only use with 2 S3 cards until Vogel added support into OpenGL, right?
The fragment level processing is clearly way better on the 8500 than on the
Nvidia products, including the latest GF4. You have six individual textures,
but you can access the textures twice, giving up to eleven possible texture
accesses in a single pass, and the dependent texture operation is much more
sensible. This wound up being a perfect fit for Doom, because the standard
path could be implemented with six unique textures, but required one texture
(a normalization cube map) to be accessed twice. The vast majority of Doom
light / surface interaction rendering will be a single pass on the 8500, in
contrast to two or three passes, depending on the number of color components
in a light, for GF3/GF4 (*note GF4 bitching later on).
Initial performance testing was interesting. I set up three extreme cases to
exercise different characteristics:
A test of the non-textured stencil shadow speed showed a GF3 about 20% faster
than the 8500. I believe that Nvidia has a slightly higher performance memory
architecture.
A test of light interaction speed initially had the 8500 significantly slower
than the GF3, which was shocking due to the difference in pass count. ATI
identified some driver issues, and the speed came around so that the 8500 was
faster in all combinations of texture attributes, in some cases 30+% more.
This was about what I expected, given the large savings in memory traffic by
doing everything in a single pass.
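Purely to illustrate the arithmetic behind that last point, here is a back-of-the-envelope model of how texture accesses per pass turn into pass count, and hence memory traffic. This is my own toy model, not anything from the .plan; the access counts are taken from the quote above and the per-pass figures are rough.
Code:
// Toy model: estimate render passes for one light/surface interaction.
#include <cstdio>

struct Gpu {
    const char* name;
    int texturesPerPass;   // texture accesses available in a single pass (rough figures)
};

int PassesNeeded(int accessesNeeded, const Gpu& gpu)
{
    // ceil(accessesNeeded / texturesPerPass)
    return (accessesNeeded + gpu.texturesPerPass - 1) / gpu.texturesPerPass;
}

int main()
{
    // The Doom interaction described above: six unique textures, one of them
    // (the normalization cube map) sampled twice -> seven accesses.
    const int accesses = 7;

    // Per the quote, the 8500 allows up to eleven accesses per pass; GF3/GF4
    // end up at two or three passes depending on the light's color components.
    const Gpu cards[] = { { "Radeon 8500", 11 }, { "GeForce 3/4", 4 } };

    for (const Gpu& gpu : cards) {
        // Each extra pass re-reads and re-writes the framebuffer, so memory
        // traffic grows roughly with the pass count.
        std::printf("%s: %d pass(es)\n", gpu.name, PassesNeeded(accesses, gpu));
    }
    return 0;
}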
If ATI had paid Epic to include a special TruForm mode (which only, to this day, really works well on ATI hardware), I doubt Doomtrooper and others would be complaining.
In other words, if there are some advanced textures in UT2003 that require 256MB of vmem to support, then any videocard with 256MB should be able to use them. The complaints would start when you have competing products of equal or similar capabilities and a feature has been intentionally thwarted to give an illusion of hardware superiority.
quattro said:umm, call me stupid, but UT2003 DOES support truform. and quite nicely, too. Pity it is visible just on player (and weapon) models. those adrenaline pills sure could use some rounding. (and remember tim's rant about how truform sucks)
also, Doomtrooper, AFAIK Loki = Vogel. Vogel, please correct me, if i'm wrong.
And if you ask me, this is just another example of a hearsay theresay argument thread. there are still MONTHS until we'll see the nv30 on the shelves and i don't care the smallest bit if there will be an nv specific feature or not. i got mine.
in other news: i just received a 9700 board. woohoo!
You however, are operating on a false rumor that somehow UT simply says
Code:
If ULTRA_HIGH_RES == ON and CARD != NV30 then FAIL
when the more likely reality is something along the lines of
Code:
IF OpenGL_extension_exists(NVidia_Extension) AND can_run_extension_fast(NVidia_Extension) then Enable_Special_Feature();
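To spell out the difference, here is a rough sketch of what capability-based detection looks like in practice: the renderer asks the driver which extensions exist and only then turns a feature on, rather than checking the vendor name. This is hypothetical code, not anything from UT2003; GL_NV_vertex_array_range is just an example extension and the engine hooks are made-up stand-ins.
Code:
// Hypothetical capability check (not actual UT2003 code): enable a fast path
// only when the driver advertises the extension, regardless of vendor string.
#include <GL/gl.h>
#include <cstdio>
#include <cstring>

static bool HasExtension(const char* name)
{
    // glGetString(GL_EXTENSIONS) returns a space-separated extension list
    // (classic pre-GL3 query). A production check should match whole tokens;
    // strstr is good enough for this sketch.
    const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return all != nullptr && std::strstr(all, name) != nullptr;
}

// Stand-ins for whatever the engine would actually do.
static void EnableSpecialFeature() { std::puts("special path enabled"); }
static void UseFallbackPath()      { std::puts("fallback path enabled"); }

void SelectRenderPath()
{
    // GL_NV_vertex_array_range is just an example NVidia extension name.
    if (HasExtension("GL_NV_vertex_array_range"))
        EnableSpecialFeature();
    else
        UseFallbackPath();
}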
The question is whether there is no difference between High and UltraHigh due to a lack of vmem requirements, or whether the setting is simply being ignored in software until the next patch.
As said before, UT2003 could have different bottlenecks which limit these gains. But I would suspect that, since UT and Doom seem to be using these pass systems for different reasons, they will not necessarily yield the same set of results.
Doomtrooper said:Carmack has stated that executing everything in a single pass can yield speed gains of as much as 30%:
Luminescent said:What about aquanox, doesn't aquanox utilize a robust directx 8 engine with pixel and vertex shader support, along with an advanced lighting model?
Humus said:Mephisto said:No, UT2003 uses neither Pixel nor Vertex Shader.
From UT2003.ini:
[D3DDrv.D3DRenderDevice]
UseHardwareVS=True
MaxPixelShaderVersion=255
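For what those two lines suggest: a value like MaxPixelShaderVersion=255 reads as "no artificial cap", i.e. the renderer would use whatever pixel shader version the hardware reports, clamped by the ini value. Here is a minimal sketch of that idea; it is my own illustration with hypothetical names and a made-up integer encoding of shader versions, only the ini keys UseHardwareVS and MaxPixelShaderVersion come from the file above.
Code:
// Illustration only (hypothetical names, not the actual UT2003 renderer):
// clamp the pixel shader version reported by the device against the ini cap.
#include <algorithm>
#include <cstdio>

struct RenderSettings {
    bool useHardwareVS;         // UseHardwareVS from the ini
    int  maxPixelShaderVersion; // MaxPixelShaderVersion from the ini (255 = effectively uncapped)
};

// Pick the pixel shader version to use: the lower of what the card supports
// and what the ini allows. 0 would mean "no pixel shaders".
int EffectivePixelShaderVersion(int deviceVersion, const RenderSettings& s)
{
    return std::min(deviceVersion, s.maxPixelShaderVersion);
}

int main()
{
    RenderSettings s{ true, 255 };
    // Made-up encoding purely for the example: ps.1.4 -> 14, ps.1.1 -> 11.
    std::printf("ps.1.4 card: effective version %d\n", EffectivePixelShaderVersion(14, s));
    std::printf("ps.1.1 card: effective version %d\n", EffectivePixelShaderVersion(11, s));
    return 0;
}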