is all of this possible?

would a video card with these driver options and hw specs be possible? I'm not trying to troll or anything, that's why I'm posting in the beginners section. If all of this is possible, what's stopping nvidia and/or ati from doing this?


checkboxes:
emulate high quality dithering
enable alt pixel center
emulate w-buffer (disables z units)
emulate palettised textures
disable dxtc
disable s3tc
disable 3dc

fog types:
off
dx10
emulate type 1
type 2
type 3
type 4

open gl:
max reported version
conformant texture clamp hw/gl spec/off

d3d/opengl:
double/triple buffer
max number of frames to render ahead: 0-8
vsync force on/off
4x rgss forced
mip lod bias slight shift left/neutral lod mip bias.
8x unoptimized trilinear af (forced; angle invariant; only mode in hardware; all fp and int formats)

256 dual issue fp40 shaders @ 1.65 GHz
core @ 475 MHz.
320 z/stencil units (fp64 total precision) (4:1 lossless z compression; early z)
32 ROPs (2:1 lossless color compression)
96 TMUs 96 TLAF units

2GB GDDR3 (reduced v; .6ns) on gb bus @ 1.75GHz effective (memory cooling not necessary?)

55nm; .88-1.02V operating range; 3 digital pwm phase power for core; 2 digital pwm phase for memory; 1 PCI-E 8-pin connector; 1 PCI-E 6-pin connector; copper dvi-port; display port; hdmi port (highest revision; full content protection for lossless 8 channel audio); all solid made in japan caps; 16 layer pcb; ac mx2; all copper hs base, heatpipes, and fins; scythe s-flex 2600 rpm 120mm fan.

dx 10.1 compliant

external tmds:

full rgb range on/off

HW high def video decoding.

bicubic fx aspect ratio scaling

centering

custom res; max 2560x1600 @ 75Hz; 1920x1200 @ 120Hz
 
emulate high quality dithering
Possible, with a minor perf hit.

enable alt pixel center
Already possible by driver panel options. However alt pixel center is very problematic for programmers (it's impossible to create a blur filter for example if you do not know the pixel sampling center point).
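
A tiny standalone sketch of why this bites (the half-texel pixel-center convention and the shifted "alt" center assumed here are the usual ones; the numbers and names are purely illustrative, not from any driver):

```cpp
// Minimal sketch: shows how a shifted pixel center breaks a blur filter.
// UVs that are supposed to land on texel centers miss them if the
// rasterizer's pixel center moves by half a pixel.
#include <cstdio>

int main() {
    const int   width     = 8;             // tiny 1D texture/render target
    const float texelSize = 1.0f / width;

    const float centerShifts[] = { 0.5f,    // usual convention: pixel i rasterized at i + 0.5
                                   0.0f };  // "alt pixel center": shifted by half a pixel
    for (float centerShift : centerShifts) {
        std::printf("pixel center at x + %.1f:\n", centerShift);
        for (int x = 0; x < 3; ++x) {
            float u   = (x + centerShift) * texelSize; // UV derived from the rasterized position
            float tap = u + texelSize;                 // blur tap one texel to the right
            std::printf("  pixel %d: u=%.4f  tap=%.4f\n", x, u, tap);
        }
    }
    // With the 0.5 convention every fetch lands on a texel center, so bilinear
    // filtering returns exactly one texel. With the shifted center every fetch
    // lands on a texel edge and bilinear filtering averages two texels,
    // silently changing the blur kernel the programmer thought they wrote.
    return 0;
}
```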

emulate w-buffer (disables z units)
You can already write any value in a pixel shader to the depth buffer. You can write W instead of Z in both DX9 and DX10. This decision is made by the game programmer (for compatibility reasons).
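
For what it's worth, here's a hedged sketch of the game-side code that does this (Shader Model 4.0 HLSL embedded in a C++ string, the way it would be handed to the shader compiler; the struct layout, entry point name, and the 1000.0 far-plane divisor are made up for illustration):

```cpp
// Sketch only: a pixel shader that writes interpolated view-space W to
// SV_Depth instead of letting the rasterizer write Z. This is the
// "w-buffer emulation" the game itself has to opt into; a driver cannot
// reliably force it per application.
const char* kWriteWDepthPS = R"hlsl(
struct PSInput
{
    float4 pos   : SV_Position;
    float  viewW : TEXCOORD0;   // view-space W passed down by the vertex shader
    float4 color : COLOR0;
};

struct PSOutput
{
    float4 color : SV_Target;
    float  depth : SV_Depth;    // overrides the rasterizer's Z output
};

PSOutput main(PSInput input)
{
    PSOutput output;
    output.color = input.color;
    // Normalize W into [0,1] with a far-plane constant chosen by the game.
    output.depth = saturate(input.viewW / 1000.0f);
    return output;
}
)hlsl";
```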

emulate palettised textures
All drivers emulate palettised textures already. However the performance is not as good as with hardware friendly formats such as DXT/BC.
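
Roughly what that emulation amounts to, as a hedged sketch (the function name is made up; this is not actual driver code):

```cpp
// Sketch: expanding a palettized (P8) texture into uncompressed RGBA at
// upload time, which is essentially what driver emulation has to do on
// hardware that no longer samples palettes natively.
#include <cstdint>
#include <vector>

std::vector<uint32_t> ExpandP8ToRGBA8(const std::vector<uint8_t>& indices,
                                      const uint32_t palette[256])
{
    std::vector<uint32_t> rgba;
    rgba.reserve(indices.size());
    for (uint8_t index : indices)
        rgba.push_back(palette[index]);   // one palette lookup per texel
    return rgba;
}
// The expanded texture is 4x the size of the P8 original, which is why the
// emulated path costs more memory and bandwidth than DXT/BC formats do.
```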

disable dxtc
disable s3tc
disable 3dc
DXTC and S3TC are identical formats. At least the ATI driver control panel allows you to disable support for these.

3dc (ATI2 fourcc) is disabled with the same switch on ATI cards (if I remember correctly).

fog types:
off
dx10
emulate type 1
type 2
type 3
type 4

Pixel shaders do not understand the term "fog" at all. Shaders are only math/logic instructions. It's completely a developer decision how to calculate the final pixel colors (including any form of fog/dust/environment effects). So it's impossible for drivers to detect any fog calculation code, as the code is completely different in every game released.

Also, DX10 fog does not exist. The API does not have any fixed-function fog calculation support (and neither does DX9 when you are using pixel shaders). The game developer has to code the "fog" themselves.

Also, even if the driver were clever enough to detect and remove the fog calculation from all the pixel shaders, it would not increase the visibility range at all. The majority of games are optimized so that they only send the visible geometry to the graphics card every frame (none of the far-away geometry hidden in fog is sent). With fog disabled you would only see emptiness in the far distance (a single color or last frame's data, most likely).
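
To make the "fog is just shader math" point concrete, here's a hedged example of the kind of code a game might contain (Shader Model 4.0 HLSL in a C++ string; the formula, constant names and semantics are illustrative, not taken from any real engine):

```cpp
// Sketch: one of countless ways a game can implement "fog" inside its own
// pixel shader. To the driver this is just ordinary ALU math, indistinguishable
// from any other blend in the shader, so there is nothing generic to toggle.
const char* kExpFogPS = R"hlsl(
float4 g_fogColor;     // set by the game
float  g_fogDensity;   // set by the game

float4 main(float4 pos      : SV_Position,
            float4 litColor : COLOR0,
            float  viewDist : TEXCOORD0) : SV_Target
{
    // Classic exponential fog: fade the lit color toward the fog color
    // with distance from the camera.
    float fogFactor = saturate(exp(-g_fogDensity * viewDist));
    return lerp(g_fogColor, litColor, fogFactor);
}
)hlsl";
```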

dx 10.1 compliant

No problem to disable dx 10.1 support: just report reduced hardware capabilities / caps if the check box is enabled in the control panel.

full rgb range on/off
HW high def video decoding.
bicubic fx aspect ratio scaling
centering
All of these are entirely possible, and most are supported by both vendors.

custom res; max 2560x1600 @ 75Hz; 1920x1200 @ 120Hz
You can add custom resolutions with display tweaking tools. No problem integrating these features into the manufacturer's control panel.
 
Possible, with a minor perf hit.


Already possible by driver panel options. However alt pixel center is very problematic for programmers (it's impossible to create a blur filter for example if you do not know the pixel sampling center point).
I thought only on ati; this would just be a compatibility option, like high quality dithering=]

You can already write any value in a pixel shader to the depth buffer. You can write W instead of Z in both DX9 and DX10. This decision is made by the game programmer (for compatibility reasons).

But couldn't the driver have a compatibility option to force the ps units to write W?

I don't know so that's why I'm asking=]


All drivers emulate palettised textures already. However the performance is not as good as with hardware friendly formats such as DXT/BC.

What is BC? If I disable dxtc in riva tuner, will the palettized textures look as good as they do on hardware that supported them natively?

DXTC and S3TC are identical formats. At least the ATI driver control panel allows you to disable support for these.

I had thought that ati's CP only allowed disabling it for D3D.

3dc (ATI2 fourcc) is disabled with the same switch on ATI cards (if I remember correctly).



Pixel shaders do not understand the term "fog" at all. Shaders are only math/logic instructions. It's completely a developer decision how to calculate the final pixel colors (including any form of fog/dust/environment effects). So it's impossible for drivers to detect any fog calculation code, as the code is completely different in every game released.

Also, DX10 fog does not exist. The API does not have any fixed-function fog calculation support (and neither does DX9 when you are using pixel shaders). The game developer has to code the "fog" themselves.

Also, even if the driver were clever enough to detect and remove the fog calculation from all the pixel shaders, it would not increase the visibility range at all. The majority of games are optimized so that they only send the visible geometry to the graphics card every frame (none of the far-away geometry hidden in fog is sent). With fog disabled you would only see emptiness in the far distance (a single color or last frame's data, most likely).

I had been mistaken. I had thought that fog (pixel fog?) table emulation was possible thru shaders. What would have to be done to restore 90% or higher accurate fog to older games? Would it have to be done per-app (for older games)? Or could anything else be done?

One thing that led me to believe that there was dx10 fog, was b/c hl2 had different fog on dx 10 hw, so I thought that dx10 changed the fog. And then the 6/7 series had different fog from ati (prior to the hd2k series; i.e., x1k series and before) in half-life 2.


No problem to disable dx 10.1 support: just report reduced hardware capabilities / caps if the check box is enabled in the control panel.


All of these are entirely possible, and most are supported by both vendors.

Right=]

You can add custom resolutions with display tweaking tools. No problem integrating these features into the manufacturer's control panel.

True; I forgot about that=]

Thanks for answering all of my questions=] I tried to make a better thread, and not blow smoke out of my ass. Was I an epic fail (at creating a good thread), or an epic success?

What I had basically done was made specifications for an imaginary gpu and its drivers=]
 
True; I forgot about that=]

Thanks for answering all of my questions=] I tried to make a better thread, and not blow smoke out of my ass.

Would that be table smoke or range smoke?

Don't forget that not all games use compressed textures for performance; some of them use them for improved quality.
 
What is BC? If I disable dxtc in riva tuner, will the palettized textures look as good as they do on hardware that supported them natively?
I had thought that ati's CP only allowed disabling it for D3D.
S3TC compressed textures and palettized textures have nothing in common. Disabling compressed formats won't improve palettized textures in any way.
BC stands for block compressed. Since S3TC is block compressed (4x4 texels) and there are now other variations (3Dc, DX11 formats) it's better to refer to all these formats as BC (+ some number).
Also, disabling S3TC compression might cause problems very quickly. Games might rely on such support and ship assets on DVD in compressed form...
 
Don't forget that not all games use compressed textures for performance; some of them use them for improved quality.

This is very much true. One uncompressed A8R8G8B8 texture takes 32 bits per pixel. One compressed DXT5 texture takes 8 bits per pixel. With DXT5 compression you can use 4 times higher texture resolution with the same memory and bandwidth usage (actually bandwidth usage for 4 times larger DXT5 is usually less, as the largest mipmap is rarely used for all the rendered pixels).

DXT5 with 4 times (2x2) higher resolution almost always looks considerably better than a similarly sized uncompressed texture. DXT compression was the key feature that allowed new games to use high quality 1024x1024 (and 2048x2048) textures on the majority of graphics objects. Without hardware texture compression we would still have to rely mainly on low quality 512x512 textures for our graphics objects. Texture compression allows us to improve the image quality a lot.
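
The memory math behind this, as a quick standalone check (top mip level only; mip chains and padding ignored):

```cpp
// Quick check of the claim above: a 2048x2048 DXT5 texture costs the same
// as a 1024x1024 uncompressed A8R8G8B8 texture.
#include <cstdio>

int main() {
    const unsigned uncompressed = 1024u * 1024u * 4u; // A8R8G8B8: 32 bpp = 4 bytes per texel
    const unsigned dxt5         = 2048u * 2048u * 1u; // DXT5: 8 bpp = 1 byte per texel
    std::printf("A8R8G8B8 1024x1024: %u bytes\n", uncompressed); // 4194304
    std::printf("DXT5     2048x2048: %u bytes\n", dxt5);         // 4194304
    return 0;
}
```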
 
S3TC compressed textures and palettized textures have nothing in common. Disabling compressed formats won't improve palettized textures in any way.
BC stands for block compressed. Since S3TC is block compressed (4x4 texels) and there are now other variations (3Dc, DX11 formats) it's better to refer to all these formats as BC (+ some number).
Also, disabling S3TC compression might cause problems very quickly. Games might rely on such support and ship assets on DVD in compressed form...
I know that. I don't recall ever saying they were the same. But sometimes you can disable tc for better iq. So why not allow users that option? If you start the game up and it doesn't work w/ tc off, then turn dxtc back on.

Fortunately rivatuner allows dxt formats to be turned off.
 
But sometimes you can disable tc for better iq. So why not allow users that option? If you start the game up and it doesn't work w/ tc off, then turn dxtc back on.

Yes, of course. Turning DXT off can improve image quality in older games on newer hardware with extra performance to spare. I was talking about new games mainly. The developer would rather have 4 times more texture resolution than use uncompressed textures.

Even for older games, disabling texture compression support only works if the uncompressed images are provided on the game disc. Most games have all images precompressed during production (to save disk space and to reduce loading times). If you turn off image compression support for games like this, the result depends on developer choices. If the developer has used the common D3DX image loaders, the API will automatically decompress the compressed texture to an A8R8G8B8 buffer. The resulting image will look identical (same compression artifacts and all), but the game will run a bit slower compared to the case where the hardware can use compressed textures. If the game directly manipulates its own textures (for example, loads data using its own image format) and assumes that the buffers are properly compressed, the game will likely crash if DXT support is disabled.

I doubt many commercial games released nowadays (and during the DX8 and DX9 era) include uncompressed textures on their discs anymore. You would rather fill the disc with some useful content. Old (DX5 and DX6 era) games had uncompressed textures included mainly for compatibility reasons.
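
As an illustration of the "developer choices" point, here is a hedged D3D9-style sketch (the helper name and the fallback policy are invented for this example; D3DX-based loaders handle the fallback for you as described above):

```cpp
// Sketch: a loader that asks the D3D9 runtime whether DXT5 is available
// before creating a texture, and falls back to uncompressed A8R8G8B8
// (decompressing on the CPU) when the format has been disabled or is
// missing. Games that skip this kind of check and assume DXT support are
// the ones that break when it is switched off.
#include <d3d9.h>

D3DFORMAT ChooseTextureFormat(IDirect3D9* d3d, UINT adapter, D3DFORMAT displayFormat)
{
    HRESULT hr = d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL, displayFormat,
                                        0, D3DRTYPE_TEXTURE, D3DFMT_DXT5);
    return SUCCEEDED(hr) ? D3DFMT_DXT5 : D3DFMT_A8R8G8B8;
}
```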
 
Yes, of course. Turning DXT off can improve image quality in older games on newer hardware with extra performance to spare. I was talking about new games mainly. The developer would rather have 4 times more texture resolution than use uncompressed textures.

Even for older games, disabling texture compression support only works if the uncompressed images are provided on the game disc. Most games have all images precompressed during production (to save disk space and to reduce loading times). If you turn off image compression support for games like this, the result depends on developer choices. If the developer has used the common D3DX image loaders, the API will automatically decompress the compressed texture to an A8R8G8B8 buffer. The resulting image will look identical (same compression artifacts and all), but the game will run a bit slower compared to the case where the hardware can use compressed textures. If the game directly manipulates its own textures (for example, loads data using its own image format) and assumes that the buffers are properly compressed, the game will likely crash if DXT support is disabled.

I doubt many commercial games released nowadays (and during the DX8 and DX9 era) include uncompressed textures on their discs anymore. You would rather fill the disc with some useful content. Old (DX5 and DX6 era) games had uncompressed textures included mainly for compatibility reasons.
Most, if not all, opengl games allowed turning off tc. Doom 3 and Quake IV had Ultra Quality settings.

The 1st (IIRC) dx9 game, TR AOD, showed a huge difference between selecting dxt1, 3, or 5 vs. uncompressed (in game). Can't believe that was 5 years ago. I don't think it would've hurt frame rates on a 9800 pro (the best at the time). I had a geforce fx 5900 ultra, and was the only person who didn't regret it. Yes, I know that's retarded, but I had a 9700 pro before and did not like it. I guess it was probably b/c the drivers weren't mature at the time, since I got a Hypersonic system with it as soon as it came out and only kept it for a few months.

I kind of find it hard to understand why dxtc and s3tc were even invented when lossless tc was available with the same compression ratio, IIRC. Didn't the DC's PowerVR use 4:1 lossless TC? Isn't that also the same ratio, and effectively the same performance, as dxtc?

Generally lossy tc with higher res textures looks better than blurry low-res uncompressed textures, but I personally don't think there was ever any excuse to use lossy TC and especially no excuse to have used it for as long as it has been, since lossless tc with the same ratio exists. Just my opinion.
 
I kind of find it hard to understand why dxtc and s3tc were even invented when lossless tc was available with the same compression ratio, IIRC.
You don't remember correctly; "lossless" compression technology certainly existed, but it was nowhere near the same compression ratio -- and still isn't to this day.

Generally lossy tc with higher res textures looks better than blurry low-res uncompressed textures, but I personally don't think there was ever any excuse to use lossy TC and especially no excuse to have used it for as long as it has been, since lossless tc with the same ratio exists. Just my opinion.

Don't start down this road. You were proven wrong about audio compression; let's not go down the road of proving you wrong about texture compression in the same embarrassing way. Unless that's the kind of thing you just like to do...
 
I kind of find it hard to understand why dxtc and s3tc were even invented when lossless tc was available with the same compression ratio, IIRC. Didn't the DC's PowerVR use 4:1 lossless TC? Isn't that also the same ratio, and effectively the same performance, as dxtc?
PowerVR has PVRTC, which is lossy.

For a GPU, the only way to get performance benefits out of TC is with fixed ratio block compression. And any kind of fixed ratio compression is by definition lossy. See also the Pigeon Hole Principle.
 
I kind of find it hard to understand why dxtc and s3tc were even invented when lossless tc was available with the same compression ratio, IIRC.
No, that would involve magic. Can I suggest you read the introduction to a paper on texture compression? I'm sure google scholar will find some for you.
 
Uncompressed data quality should only be better if the output device is limited to some predetermined bit depth and resolution or sampling rate (limiting the compressed data to be smaller than the uncompressed data at the hardware limit). In all other cases lossy compression should deliver better quality for the same storage and bandwidth requirements.

In 3d graphics, the maximum texture resolution (8192x8192) is rarely used, so there is no limit preventing developers from using (lossy) compressed textures with higher resolution than uncompressed ones. A compressed texture of the same size (in bytes) almost always looks considerably better than an uncompressed one.

Non-lossy compression is better than uncompressed in size and bandwidth requirements. However, non-lossy compression is usually very inefficient (poor compression ratio and very high decompression cost).
 
You don't remember correctly; "lossless" compression technology certainly existed, but it was nowhere near the same compression ratio -- and still isn't to this day.



Don't start down this road. You were proven wrong about audio compression; let's not go down the road of proving you wrong about texture compression in the same embarrassing way. Unless that's the kind of thing you just like to do...
I wasn't embarrassed=] I thought it was a good test and I appreciate you creating it=] Many times I can tell, maybe as many times I can't tell, but if you give me a double blind test between mario's crying out for me in whatever format myspace uses and the wma lossless version, I'll pass it. The same goes for imogen heap/frou frou's let go. I didn't lie to you when I said what version I thought sounded best. So I wouldn't lie to you when I can tell a difference.

I could also tell you that I forgot that the crystalizer was on and at 100% (I hadn't heard the track you tested me on before you did, so I didn't notice the small amount of distortion the crystalizer added until I checked the audio console and saw it there), but you probably wouldn't believe me. I just wonder why. I'm not angry with you or anything, but I don't get why you don't believe me at times. I know my knowledge and iq are probably less than anyone's here, but the same applies to just about every non-member of beyond 3d.

And I certainly can tell the difference between the geforce 2's s3tc rendering of the sky in quake 3 and the uncompressed version, as well as with 3dfx's fxt1. And tomb raider aod's pink sky wasn't blotchy with the textures not set to dxt formats.

PowerVR has PVRTC, which is lossy.

For a GPU, the only way to get performance benefits out of TC is with fixed ratio block compression. And any kind of fixed ratio compression is by definition lossy. See also the Pigeon Hole Principle.
I thought that the g80 did 2:1 lossless color compression. Isn't that fixed? or not? I don't know lol.
 
Regarding the geforce 2's compressed textures in the quake 3 sky, that's a well-known problem. I believe s3tc is essentially "broken" in the geforce ddr, geforce 2 and even geforce 3, so it's only 16-bit:
The MX400 supports DXTC & S3TC texture compression modes, although disappointingly the GeForce 2 family was allowed to inherit the same fault the Geforce 256 had. This fault being in relation to DXT1 compression. The problem is that in DXT1 the MX 400 uses only 16-Bit interpolation, while other graphics cards use higher interpolation depths. As a result any DXT1 compressed textures with the MX400 (or any other GeForce card, including the GeForce 3) look hideous. The most commonly used example to illustrate this is the sky in Quake 3
http://www.techspot.com/reviews/hardware/siluro_mx400/siluro-3.shtml
 
I thought that the g80 did 2:1 lossless color compression. Isn't that fixed? or not? I don't know lol.
That's not it! If you talk about texture compression, there is only DXTC in hardware, which essentially uses two 16-bit color keys followed by a 4x4 texel block with two-bit interpolation indices for each texel. Interpolation is typically done in 32 bits, but this was not so on GeForce 2 and 3.
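
To make that fixed 4x4 layout concrete, here is a hedged decode sketch for a single 8-byte DXT1 block (plain C++, not any vendor's code; interpolation is done in full float precision here, unlike the 16-bit GeForce shortcut mentioned above):

```cpp
// Hedged sketch of decoding one 8-byte DXT1 block: two RGB565 endpoint
// colors followed by sixteen 2-bit indices covering the 4x4 texel block.
#include <cstdint>

struct RGB { float r, g, b; };

static RGB Decode565(uint16_t c) {
    return { ((c >> 11) & 31) / 31.0f, ((c >> 5) & 63) / 63.0f, (c & 31) / 31.0f };
}

// block = 8 bytes, out = 16 decoded texels in row-major order.
void DecodeDXT1Block(const uint8_t block[8], RGB out[16]) {
    uint16_t c0 = uint16_t(block[0] | (block[1] << 8));
    uint16_t c1 = uint16_t(block[2] | (block[3] << 8));
    RGB p0 = Decode565(c0), p1 = Decode565(c1);

    RGB palette[4] = { p0, p1, {}, {} };
    if (c0 > c1) {   // opaque 4-color mode: two interpolated colors
        palette[2] = { (2*p0.r + p1.r)/3, (2*p0.g + p1.g)/3, (2*p0.b + p1.b)/3 };
        palette[3] = { (p0.r + 2*p1.r)/3, (p0.g + 2*p1.g)/3, (p0.b + 2*p1.b)/3 };
    } else {         // 3-color mode: midpoint plus (transparent) black
        palette[2] = { (p0.r + p1.r)/2, (p0.g + p1.g)/2, (p0.b + p1.b)/2 };
        palette[3] = { 0, 0, 0 };
    }

    uint32_t indices = uint32_t(block[4]) | (uint32_t(block[5]) << 8) |
                       (uint32_t(block[6]) << 16) | (uint32_t(block[7]) << 24);
    for (int i = 0; i < 16; ++i)
        out[i] = palette[(indices >> (2 * i)) & 3];   // 2 bits per texel
}
```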
If you talk about color/z buffer compression, then you need to know that this compression doesn't strictly save space; it saves bandwidth. For example, instead of writing 4 identical samples into the MSAA framebuffer (say 128 bits) you only write one sample (say 32 bits) and some control mask. This is obviously lossless, it's also obviously 4:1, and it also obviously doesn't work all the time.
If you had actually read what silent_guy linked, you would know why fixed-ratio lossless compression is impossible!
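
And a minimal sketch of the color compression idea described two paragraphs up (the layout is invented and simplified; real hardware schemes work on tiles and differ per vendor): if all MSAA samples of a pixel match, write one sample plus a control flag instead of four.

```cpp
// Hedged sketch of the framebuffer-compression idea, not a real hardware
// format: the storage reserved per pixel never shrinks, which is why this
// saves bandwidth rather than space.
#include <cstdint>

struct CompressedPixel {
    uint8_t  allSamplesEqual;  // control flag (the "mask")
    uint32_t samples[4];       // only samples[0] is valid when the flag is set
};

void StorePixel(CompressedPixel& dst, const uint32_t samples[4]) {
    bool equal = samples[0] == samples[1] &&
                 samples[0] == samples[2] &&
                 samples[0] == samples[3];
    dst.allSamplesEqual = equal;
    dst.samples[0] = samples[0];              // one write instead of four...
    if (!equal)
        for (int i = 1; i < 4; ++i)           // ...unless the samples differ
            dst.samples[i] = samples[i];
}
```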
 
Is fixed ratio lossless compression actually impossible as a mathematical exercise, so to speak, or just impossible on the hardware we have?

ps: does anyone remember fractal compression as used on ms encarta?
 
Is fixed ratio lossless compression actually impossible as a mathematical exercise, so to speak, or just impossible on the hardware we have?
It is impossible, or we would have working compression programs which would compress a file 100 times and we would have a 1-bit packet with gigabytes of information. ;)

There is always a point where packing doesn't work anymore, and you can end up with a larger file as a result.
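
Spelled out as the standard pigeonhole counting argument (a general math sketch, not tied to any particular hardware or format):

```latex
% Suppose a lossless compressor maps every n-bit input to exactly n/2 output
% bits (a fixed 2:1 ratio). Counting the possibilities:
\[
  \#\{\text{inputs}\} = 2^{n}
  \;>\;
  2^{n/2} = \#\{\text{outputs}\}
  \qquad (n > 0),
\]
% so at least two different inputs must share the same compressed output, and
% the decompressor cannot tell them apart. A fixed-ratio scheme therefore has
% to discard information, and a truly lossless codec must sometimes produce
% output that is longer than its input.
```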
 