ms/nv/ati=bad iq for games; ?'s

This is my first post, so obviously I'm new here. First, I'm going to list the many things ATI/NV/MS are responsible for that make games look rather awful compared to what's easily and more sensibly possible. Second, I have several questions that will come a bit later. Any comments are welcome, and I would really like some opinions on my ideas; even though I'll never accept the way things currently are, it's always good to know what others think and to debate using facts and logical reasoning.

I've become an IQ activist, so I was told to come here. I'm a technophile, and I don't accept what ATI and Nvidia release as the "best" (best only in performance, but awful in IQ and compatibility, with sub-par reference coolers and reference PCB quality/features), and I'm furious with Microsoft's influence.

I'm obviously an OpenGL supporter.

_ The MS reference rasterizer looks worse than any rasterizer before it. I'd say 3dfx's products' general rasterization was the best (especially with colors and fog). The GeForce 6/7 series rasterizer looked much better than the MS reference rasterizer used in everything current. The MS reference rasterizer makes textures look very blotchy, comparable to Quake 3's sky with S3TC on; the GeForce 6/7 rasterizer doesn't have that blotchy texture problem. Half-Life 2 shows the difference quite well.

_ Distance fog shouldn't be used today, since all it does is cover up depth buffer optimizations and a lack of depth precision that shouldn't be there in the first place, as well as the lack of dynamic clip planes in DX.

_ All depth buffer optimizations (including but not limited to z compression, z-range optimization, hierarchical z, early z-kill, fast z clear, z-mask, and color compression, as well as fast color clear and all texture compression formats except DXTC 1/3/5) available as a compatibility option.

_ All available AA modes except ATI's EDAA and adaptive high-quality AA.

_ ATI's AF sample optimizations.

_ Bilinear and any trilinear optimizations for any format.

_ Not forcing 8888 (RGBA) textures and frame buffers when an app calls for anything less.

_ FX24 z-buffers; less than FP32 depth and FP32 shadow maps being forced.

_ Less than FP16 RGBA frame buffer modes in DX9/10 games [2 bits for transparency is "wholly inadequate and unacceptable"; water looks white in X360 games and PC games that use the 1010102 format.]

_ Lack of compatibility options, including but not limited to: no invert z-values option, no alternate pixel center option, no option to disable DX10 fog.

Sensible abolitions and replacements:

_ Whole-frame-buffer bilinear filtering (or trilinear, if that's possible; I've never heard of an emulator offering trilinear, though) plus a 2x or 4x aniso-equivalent filter as the only FSAA mode. Each ROP would have one bilinear FSAA unit and one FSAF unit. It could be enabled by default. It would be free, 100% compatible, would anti-alias everything perfectly, and there would be no worrying about or confusion over other AA modes.

_ Remove ALL compression and optimization techniques from the transistor budget and replace them with more z units, and maybe more texture units.


Finally, I'm infuriated that games have lower audio quality (due to the WMA "lossless" format and other compressed formats, which are unacceptable given the storage and processing power we have today) than the ages-old Red Book standard allows. Personally, the only audio formats I find acceptable today are raw uncompressed PCM at 24-bit/96 kHz (2 channel) and the highest bit-rate LPCM (5.1 channel). They aren't used in any non-PS3 games, though. People shouldn't accept the awful quality we're being given at the hands of Microsoft. Highs and lows are much better in games that used Red Book so long ago, and there's more clarity too.
 
I'm obviously an OpenGL supporter.

Wait, is OpenGL that API everyone stopped caring about? :p

the WMA "lossless" format and other compressed formats

You do realize there are three WMA formats, right? And guess what, the WMA Lossless format is (brace yourself) lossless! I would really be interested to see whether people could actually tell the difference between the compressed audio developers use in games and its lossless counterpart (it's doubtful).
 
Wait, is OpenGL that API everyone stopped caring about? :p

You do realize there are three WMA formats, right? And guess what, the WMA Lossless format is (brace yourself) lossless! I would really be interested to see whether people could actually tell the difference between the compressed audio developers use in games and its lossless counterpart (it's doubtful).

I would say that OpenGL isn't used for games anymore, other than for PS3 games.

I am eternally grateful to Nvidia for making all OpenGL games look better than ever before with the 8 series on up; ATI abandoned OpenGL, likely due to Microsoft's influence.

I liked OpenGL a lot better. I know DX may have become easier to develop for with DX10, but OpenGL was always better. My main reason is dynamic clip planes: many Quake 3 engine games (as well as the now eight-year-old classic MDK2) have much longer draw distances, and lose less precision in the distance, compared to 99% of PC games today. And those Quake 3 games used a 24-bit fixed-point depth buffer; imagine the possibilities with an FP32 depth buffer in OpenGL. Whenever I see shots of DX9 and DX10 games, there's way too much fog in the distance (any fog is too much) and the draw distances are really weak.

No compression is lossless; some compression types may be better than others, but they're all lossy, even if 99% of people can't tell the difference. And the highest-end WMA is only 470-940 kbps. All I can say is that I can tell a huge difference in dynamic range (highs and lows) and clarity (in vocals) between a raw uncompressed Red Book soundtrack and any X360 game. I can, so I'm pretty sure most people can.

People would be surprised at how much more enjoyable games would be if they used uncompressed audio (even at 16-bit/44.1 kHz). Sega CD and Saturn games have way higher dynamic range thanks to the raw uncompressed Red Book format; granted, they may have fewer instruments and shorter tracks, but that's only because of the limited space back then. More than a decade later, extra storage space and more processing power have rendered compressed "lossless" audio formats useless. 5.1-channel 24/192 LPCM is lossless enough for multi-channel audio, and for stereo, 24/96 raw uncompressed PCM is perfect, with Red Book (2-channel raw uncompressed 16/44.1) the next best thing, although once a format is dead, there's no reviving it.

DMC4 for the PS3 wouldn't have been encoded in LPCM (less lossy than any of the WMA formats) if people couldn't tell the difference between it and the PS3's low-bit-rate equivalent of the X360's and Games for Windows' lossy "WMA lossless."
 
While you blame Nvidia/ATI and Microsoft, you seem to overlook that most of the points you list here are game developer decisions. And those decisions are necessary, as not everybody has a high-end GPU.

What's your problem with the refrast? It's a debugging tool. Nobody cares about its image quality, as it is far too slow for playing games.

I also don't understand why you want to remove the depth buffer optimizations. These techniques are transparent and have no impact on image quality; they only save bandwidth and shader performance.

I'm sorry if this sounds harsh, but do you really know how FSAA works?

EDIT:

I am not sure what you mean by dynamic clip planes. But if you are referring to techniques that modify the near and far planes based on the objects in the frustum, that can easily be done with Direct3D and any other 3D API I know of.
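
Something like this minimal sketch, for example, assuming D3D9 with the D3DX helpers (minZ/maxZ are just placeholders for whatever bounds the application computes for its own visible geometry):

Code:
#include <d3dx9.h>

// Sketch only: recompute the projection each frame so the near/far planes
// hug the visible geometry instead of staying fixed for the whole game.
// minZ/maxZ are hypothetical values from the app's own bounding pass.
void UpdateProjection(IDirect3DDevice9* device, float fovY, float aspect,
                      float minZ, float maxZ)
{
    float zn = (minZ > 0.1f) ? minZ : 0.1f;           // never let near reach zero
    float zf = (maxZ > zn + 1.0f) ? maxZ : zn + 1.0f;

    D3DXMATRIX proj;
    D3DXMatrixPerspectiveFovLH(&proj, fovY, aspect, zn, zf);
    device->SetTransform(D3DTS_PROJECTION, &proj);
}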

Nobody who isn’t totally crazy writes PS3 games with OpenGL.
 
no compression is lossless

That's just an ignorant remark. Of course there are compression formats that are lossless. When I zip a file, I get the exact same file back when I unzip it (this is roughly how lossless audio compression works). If I compress a PCM (.wav) file with WMA Lossless and then decompress it back to PCM, it will be the exact same file (bit for bit) as the original PCM file. This is fact, something one can't argue.
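
To make the "bit for bit" point concrete, here is a tiny sketch of a lossless round trip using zlib as the example codec (the buffer contents are made up; the same principle applies to WMA Lossless or any other lossless format):

Code:
#include <cassert>
#include <cstring>
#include <vector>
#include <zlib.h>

int main()
{
    // One second of fake 16-bit stereo "PCM" data, just for demonstration.
    std::vector<unsigned char> original(44100 * 2 * 2);
    for (size_t i = 0; i < original.size(); ++i)
        original[i] = static_cast<unsigned char>((i * 31) % 251);

    // Compress...
    uLongf packedSize = compressBound(original.size());
    std::vector<unsigned char> packed(packedSize);
    compress(packed.data(), &packedSize, original.data(), original.size());

    // ...decompress...
    uLongf unpackedSize = original.size();
    std::vector<unsigned char> unpacked(unpackedSize);
    uncompress(unpacked.data(), &unpackedSize, packed.data(), packedSize);

    // ...and the output is byte-identical to the input. That's what "lossless" means.
    assert(unpackedSize == original.size());
    assert(std::memcmp(unpacked.data(), original.data(), original.size()) == 0);
    return 0;
}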

You also seem grossly uninformed. The 360 supports WMA Pro (which is not WMA Lossless), a lossy compression format. However, games on the 360 are not compressed with WMA Pro. So I don't quite get your vendetta against the WMA format (or lossy compression formats in general). Just because audio is compressed with a lossy codec doesn't mean it isn't transparent to the user (i.e. the user can't tell it's compressed).

My main reason is dynamic clip planes: many Quake 3 engine games (as well as the now eight-year-old classic MDK2) have much longer draw distances, and lose less precision in the distance, compared to 99% of PC games today.

I lol'd. It makes me wonder when you last even played a game.
 
While you blame Nvidia/ATI and Microsoft, you seem to overlook that most of the points you list here are game developer decisions. And those decisions are necessary, as not everybody has a high-end GPU.

What's your problem with the refrast? It's a debugging tool. Nobody cares about its image quality, as it is far too slow for playing games.

I also don't understand why you want to remove the depth buffer optimizations. These techniques are transparent and have no impact on image quality; they only save bandwidth and shader performance.

I'm sorry if this sounds harsh, but do you really know how FSAA works?

EDIT:

I am not sure what you mean by dynamic clip planes. But if you are referring to techniques that modify the near and far planes based on the objects in the frustum, that can easily be done with Direct3D and any other 3D API I know of.

Nobody who isn't totally crazy writes PS3 games with OpenGL.
working backwards=]

I appreciate your reply.

To tell you the honest truth, all I know about dynamic clip planes is that they made it so OpenGL had the benefits of both the w-buffer and z-buffer methods. That's obvious when you look at DX games like Crysis (where precision is awful in the distance) and other DX games, all of which use distance fog to hide low or nonexistent precision in the distance. Many OpenGL games from the past have much less fog, or none at all, and much more evenly distributed precision between the near and far planes.

In DX, games have much more fog and much smaller draw distances, with detail lost further away.

Yes, I know how AA works =] Current hardware uses a point-sampled pattern (OG, RG), coverage point sampling (not very good), or edge detect (which ATI uses and which is great when used with their quality adaptive mode; it's 100% compatible with the latest drivers, and it AAs the edges of every object. It's better than anything before; no worrying about turning AA on in-game, the drivers handle it fine with the HD 2k-4k series).

The only other way of doing AA that people should find acceptable is this: every pixel is hardware bilinear filtered rather than point-sample filtered, so there would be no aliasing. The blurriness could be reduced by hardware minification sampling. Emulators bilinear filter the whole image and it purges all aliasing; emulator bilinear filtering isn't a perfect replacement for playing on the original system on an SDTV, but it would be perfect for 3D FSAA, as long as each ROP also had a minification filter unit to offset the slight blurriness caused by bilinear filtering.

Just like on an 8800 GTX, where each texture address unit has two texture filter units allowing free bilinear + 2:1 AF, Nvidia's HQ AF would still work for textures while the whole frame buffer could be bilinear + minification AA'd.

The depth buffer optimizations (at least some of them) hurt compatibility with games that used the w-buffer, and you lose a tiny amount of image quality with at least some of them. But even for those not interested in that, they effectively hurt performance, because if they were removed from the GPU there would be more room in the transistor budget to simply replace them with more depth buffer units. Take the 8800 GTX, for example, with its 192 z/stencil units: if the transistors spent on depth optimizations were replaced with more depth/stencil units, that would at least boost compatibility and IQ, and at least not hurt performance, if not help it. All of those optimizations take up a lot of transistor space individually, and together they take a huge amount of transistor space that could be used for something better. The frame rate increase they provide is negligible, no more than 10%, and we live in a world where two GPUs are basically standard and even more are optional.

A real-world case from '98: the Riva 128 couldn't filter textures (or its drivers didn't), in the false hope that this would bring performance closer to the Voodoo2, which used no such optimizations but instead had another TMU. The Riva 128 was still 30% or more behind the Voodoo2 in many benchmarks at the time, and it still looked awful due to the lack of texture filtering.

Well, the 8 series on up does match the MS reference rasterizer more closely than the 6/7 series. With DX10, MS asked both ATI and Nvidia to match the reference rasterizer more closely, and Nvidia and ATI, not caring that much about image quality, agreed.

No, it's not the developers' fault; Nvidia and ATI could always force higher precision formats and end all optimizations and compression through the drivers, if the hardware to do so existed. ATI has already created a 100% compatible AA mode. 3dfx created a 100% compatible AA mode in 2000, and they created their own texture compression that was 100% compatible and looked way better than S3TC/DXTC. Back in the day, 3dfx largely determined the technical quality of games, so why can't Nvidia and ATI? They can. They could even force all games to have their textures rendered internally at 8K x 8K; in high-performance texture filtering mode, they already force the internal texture size to be lower than the application calls for.

The only two real non-performance differences between Nvidia's and ATI's latest GPUs/drivers (other than Nvidia being perfect with OpenGL compatibility and IQ) are that ATI has superior AA while Nvidia has much better filtering, for those who truly care about IQ. Neither one has really made any other excellent hardware features, ever, really; they just follow Microsoft's low-quality standards and do very little else. Was there anything the GeForce 8/9/GTX 2x0 did better than the 6/7 series that shouldn't already have been done, probably starting with the GeForce FX? No, just a higher DX version. Same thing with ATI: their EDAA method was long overdue, as was Nvidia supporting AA with HDR and giving the option to disable all filtering optimizations on the 8 series.

They don't have to listen to MS. It's just like how Asus made DS3D GX as an alternative to EAX with their D2X, and Creative Labs couldn't do anything about it.
 
working backwards=]
Yes, I know how AA works =] Current hardware uses a point-sampled pattern (OG, RG), coverage point sampling (not very good), or edge detect (which ATI uses and which is great when used with their quality adaptive mode; it's 100% compatible with the latest drivers, and it AAs the edges of every object. It's better than anything before; no worrying about turning AA on in-game, the drivers handle it fine with the HD 2k-4k series).

The only other way of doing AA that people should find acceptable is this: every pixel is hardware bilinear filtered rather than point-sample filtered, so there would be no aliasing. The blurriness could be reduced by hardware minification sampling. Emulators bilinear filter the whole image and it purges all aliasing; emulator bilinear filtering isn't a perfect replacement for playing on the original system on an SDTV, but it would be perfect for 3D FSAA, as long as each ROP also had a minification filter unit to offset the slight blurriness caused by bilinear filtering.
No mister, you have no idea how AA works... Point sampling? Sure, sample positions are either fixed or somewhat programmable, but what does that have to do with point sampling? The GPU doesn't just take the sub-sample closest to the pixel center and render it. It also doesn't just take a random sub-sample. It takes all the sub-samples and blends them together (or, slightly better, does so with gamma correction). Or does whatever ATI wants with them, in ATI's case...
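
For what it's worth, here is a rough CPU-side sketch of what a simple box-filter resolve with gamma correction amounts to (the sample count and the 2.2 gamma value are illustrative assumptions; real GPUs do this in fixed-function hardware, not in code like this):

Code:
#include <cmath>
#include <vector>

// Resolve one pixel from its sub-samples: average in linear space, then
// convert back to gamma space. Purely illustrative of the idea above.
float ResolvePixel(const std::vector<float>& subSamples)    // gamma-space values in 0..1
{
    const float gamma = 2.2f;                      // assumed display gamma
    float sum = 0.0f;
    for (float s : subSamples)
        sum += std::pow(s, gamma);                 // gamma -> linear
    float linearAverage = sum / subSamples.size(); // box filter over all sub-samples
    return std::pow(linearAverage, 1.0f / gamma);  // linear -> gamma
}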

The depth buffer optimizations (at least some of them) hurt compatibility with games that used the w-buffer, and you lose a tiny amount of image quality with at least some of them. But even for those not interested in that, they effectively hurt performance, because if they were removed from the GPU there would be more room in the transistor budget to simply replace them with more depth buffer units. Take the 8800 GTX, for example, with its 192 z/stencil units: if the transistors spent on depth optimizations were replaced with more depth/stencil units, that would at least boost compatibility and IQ, and at least not hurt performance, if not help it. All of those optimizations take up a lot of transistor space individually, and together they take a huge amount of transistor space that could be used for something better. The frame rate increase they provide is negligible, no more than 10%, and we live in a world where two GPUs are basically standard and even more are optional.
I think you are seriously underestimating the performance implications and hugely overestimating the number of transistors these optimizations take. If you want to know how a chip without them would work, go find some Matrox Parhelia reviews.
 
This is my first post, so obviously I'm new here. First, I'm going to list the many things ATI/NV/MS are responsible for that make games look rather awful compared to what's easily and more sensibly possible.
...
I've become an IQ activist, so I was told to come here. I'm a technophile, and I don't accept what ATI and Nvidia release as the "best" (best only in performance, but awful in IQ and compatibility, with sub-par reference coolers and reference PCB quality/features), and I'm furious with Microsoft's influence.
...
_ Distance fog shouldn't be used today, since all it does is cover up depth buffer optimizations and a lack of depth precision that shouldn't be there in the first place, as well as the lack of dynamic clip planes in DX.
D3D games are welcome to change clip planes as needed. They aren't fixed.
_ All depth buffer optimizations (including but not limited to z compression, z-range optimization, hierarchical z, early z-kill, fast z clear, z-mask, and color compression, as well as fast color clear and all texture compression formats except DXTC 1/3/5) available as a compatibility option.
Most of these options (aside from DXTC) have no impact on image quality and are merely performance optimizations.
_ All available AA modes except ATI's EDAA and adaptive high-quality AA.
So all MSAA is bad? *scratch*
_ Not forcing 8888 (RGBA) textures and frame buffers when an app calls for anything less.
Why would the driver or runtime force a mode the app didn't request? Doesn't make sense.
_ FX24 z-buffers; less than FP32 depth and FP32 shadow maps being forced.
No idea what you're saying here.
_ Less than FP16 RGBA frame buffer modes in DX9/10 games [2 bits for transparency is "wholly inadequate and unacceptable"; water looks white in X360 games and PC games that use the 1010102 format.]
The 2 alpha bits in 1010102 are only a problem if you're using destination alpha, which is rather rare. If a game was using destination alpha, I doubt the developer would use 1010102.
_ Remove ALL compression and optimization techniques from the transistor budget and replace them with more z units, and maybe more texture units.
Not viable. Z and frame buffer compression is lossless and boosts performance with no impact on image quality. Removing compression would lower performance. You're not going to make that up by adding more units because you'd likely be bandwidth limited.
 
_ All depth buffer optimizations (including but not limited to z compression, z-range optimization, hierarchical z, early z-kill, fast z clear, z-mask, and color compression, as well as fast color clear and all texture compression formats except DXTC 1/3/5) available as a compatibility option.
Have you ever heard of bandwidth compression?

That's what fast color clear, fast z, hierarchical z and so on are all about: instead of reducing the total amount of memory required, all the techniques you mention (except texture compression) actually increase the memory footprint. For each compressible unit (say, a 16x16 pixel tile), you need to keep a few bits indicating whether or not that unit is compressed. Even the zip-compression comparison made by another poster is a bad analogy, since there the goal is still to reduce the memory footprint; not so with the techniques above.
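
To make that concrete, here is a toy sketch of such per-tile metadata (the tile size and flag meanings are invented for illustration; real hardware layouts are proprietary):

Code:
#include <cstdint>
#include <vector>

// Toy illustration of tile-based framebuffer compression control data.
// The flags are *extra* storage on top of the full-size surface: the surface
// itself never shrinks, which is why this saves bandwidth, not footprint.
struct CompressionControl
{
    static const int kTileSize = 16;              // assumed 16x16 pixel tiles
    std::vector<uint8_t> tileFlags;               // e.g. 0 = raw, 1 = cleared, 2 = compressed

    CompressionControl(int width, int height)
        : tileFlags(((width + kTileSize - 1) / kTileSize) *
                    ((height + kTileSize - 1) / kTileSize), 0) {}

    // "Fast clear": flag every tile as cleared instead of writing the whole surface.
    void FastClear() { for (uint8_t& f : tileFlags) f = 1; }
};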

The rest of your rant shows a similar lack of technical understanding.

Did you just pick up a number of terms, invent a strawman story about them, and then get really angry about your own inventions?

With all your points made, I'm surprised you didn't ask for perfect aniso filtering...
 
OP,

Almost all of real-time rasterisation is a compromise in order to get the final pixels on your screen, so it's natural that software and hardware designed to do the job will co-conspire in order to make it happen with good performance. That includes some possible reduction in IQ in order to gain speed, since the whole damn thing is a delicate balance of one over the other.

As pointed out, some optimisations have no reduction in quality whatsoever (and are easily proven to have that property), and are (usually) geared up towards the chip saving bandwidth. The very way these chips work means that just throwing more units at the problem is never going to help solve your particular problem in a given area (people always forget chips can't get infinitely big, and all that entails, including power and cost).

The upshot is, as Demirug says, that most of the optimisations are actually programmable, so the developer has full control. IMRs can produce truly beautiful pixels because of it, when told to do so, and it's up to the developer to trade those off for performance for his or her users on their given platform. Obviously you have a nicer time of that on a console, but it's still the case there too.

As an OpenGL supporter... go back and see what SGI's and the industry's motivations were in creating and fostering GL in the first place. Some of it is precisely to provide what you seem to dislike, and D3D is only following in its footsteps. I'm sure you wouldn't want me to edit your thread title and put OpenGL in there too.
 
D3D games are welcome to change clip planes as needed. They aren't fixed.

Most of these options (aside from DXTC) have no impact on image quality and are merely performance optimizations.

So all MSAA is bad? *scratch*

Why would the driver or runtime force a mode the app didn't request? Doesn't make sense.

No idea what you're saying here.

The 2 alpha bits in 1010102 are only a problem if you're using destination alpha, which is rather rare. If a game was using destination alpha, I doubt the developer would use 1010102.

Not viable. Z and frame buffer compression is lossless and boosts performance with no impact on image quality. Removing compression would lower performance. You're not going to make that up by adding more units because you'd likely be bandwidth limited.
MSAA and CSAA aren't any good. They're better than nothing, but only if the game supports them.

By FX24 z, I meant a 24-bit integer (i.e. fixed-point) z-buffer; by FP32 d and s I mean a 32-bit floating-point depth buffer and 32-bit FP shadow mapping.

Since formats below INT8 look awful on DX10 hardware, it would make sense for the driver to force higher-precision formats in games that call for less precision.

Well, maybe D3D games are welcome to change clip planes, but there's something OpenGL does that allows much more depth precision in the distance and little or no distance fog.
 
No mister, you have no idea how AA works... Point sampling? Sure, sample positions are either fixed or somewhat programmable, but what does that have to do with point sampling? The GPU doesn't just take the sub-sample closest to the pixel center and render it. It also doesn't just take a random sub-sample. It takes all the sub-samples and blends them together (or, slightly better, does so with gamma correction). Or does whatever ATI wants with them, in ATI's case...


I think you are seriously underestimating the performance implications and hugely overestimating the number of transistors these optimizations take. If you want to know how a chip without them would work, go find some Matrox Parhelia reviews.
OK, but what would the equivalent of a Genesis emulator's bilinear filtering be? It seems like if the entire frame buffer were filtered, it would purge all aliasing.

Trilinear filtering textures without optimizations purges texture aliasing, as long as the LOD is neutral or higher, but point-sampled texture filtering doesn't.

If Nvidia wants 100% compatibility and superior AA to what they offer currently, they'll just have to copy ATI, right? Supersampling can be 100% compatible and technically filters the whole framebuffer by increasing the resolution internally, but it should only be used for transparencies because it doesn't handle edges well at all. Even when combined with MSAA, as in the SLI AA modes, the edge quality still isn't as good as edge-detect, and it isn't 100% compatible.

The Voodoo5 worked fine without any of the optimizations I mentioned, and it had the highest image quality of its day.

The optimizations are fine as long as they don't hurt compatibility with older games that use the w-buffer. They should all be able to be disabled in the drivers. Other issues that could easily be rectified are hurting backward compatibility too.

Finally, I guess the optimizations are fine, but the distance fog used in DX10 isn't necessary with OpenGL, especially if you're using an FP32 depth buffer. DX10 depth precision isn't any good, and excessive fog is used to cover up the lack of precision in the distance.
 
Finally, I guess the optimizations are fine, but the distance fog used in DX10 isn't necessary with OpenGL, especially if you're using an FP32 depth buffer. DX10 depth precision isn't any good, and excessive fog is used to cover up the lack of precision in the distance.
What the heck are you talking about? Distance fog is usually used to cover up low draw distances - not hide depth buffer precision artifacts.

Direct3D10 guarantees full support for 32-bit depth buffers. In Direct3D10, the following depth formats are supported:
Code:
DXGI_FORMAT_D32_FLOAT_S8X24_UINT
    32-bit depth, 8-bit stencil, 24-bits unused
DXGI_FORMAT_D32_FLOAT
    32-bit depth
DXGI_FORMAT_D24_UNORM_S8_UINT
    24-bit depth, 8-bit stencil
DXGI_FORMAT_D16_UNORM
    16-bit depth
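
For example, asking for a full 32-bit float depth buffer in Direct3D 10 is straightforward. A minimal sketch (error handling omitted; the device, width and height are assumed to come from the app's init code):

Code:
#include <d3d10.h>

// Sketch: create a DXGI_FORMAT_D32_FLOAT depth buffer and a view for it.
void CreateFp32DepthBuffer(ID3D10Device* device, UINT width, UINT height,
                           ID3D10Texture2D** outTexture,
                           ID3D10DepthStencilView** outView)
{
    D3D10_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_D32_FLOAT;   // full 32-bit float depth
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D10_USAGE_DEFAULT;
    desc.BindFlags        = D3D10_BIND_DEPTH_STENCIL;

    device->CreateTexture2D(&desc, NULL, outTexture);          // error checks omitted
    device->CreateDepthStencilView(*outTexture, NULL, outView);
}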
 
If you find IQ in 2008 unacceptable, maybe you should wait until 2015. Perhaps then you will be happy :)

On a more serious note, IQ may not be perfect today, but it is getting better all the time. You can only do as much as your hardware allows. And while IQ is important, it isn't everything. All the optimizations/tricks/dirty hacks are necessary to run your games at a minimum of 30 fps.

APIs have to strike a balance between many needs. If you focus on any one parameter alone, I'd be much mistaken if any widely used API escaped criticism. D3D has to balance hardware availability, rendering speed, and more; it can't enforce minimum hardware requirements in one particular area without upsetting the overall balance of all 3D apps.
 
OK, but what would the equivalent of a Genesis emulator's bilinear filtering be? It seems like if the entire frame buffer were filtered, it would purge all aliasing.
I don't know what that Genesis emulator is doing (can we have some screenshots?), but by the sound of it you're comparing an old console that did everything by plotting little 2D pictures onto a framebuffer with current tech, which consists mainly of rasterizing 3D polygons (triangles). And the entire framebuffer is already processed (resolved) to get an AA'd frame onto the screen.

Trilinear filtering textures without optimizations purges texture aliasing, as long as the LOD is neutral or higher, but point-sampled texture filtering doesn't.
Tell me, what good does trilinear filtering do for polygons that are parallel to the screen? And how the hell is this supposed to help AA?

If Nvidia wants 100% compatibility and superior AA to what they offer currently, they'll just have to copy ATI, right? Supersampling can be 100% compatible and technically filters the whole framebuffer by increasing the resolution internally, but it should only be used for transparencies because it doesn't handle edges well at all. Even when combined with MSAA, as in the SLI AA modes, the edge quality still isn't as good as edge-detect, and it isn't 100% compatible.

The Voodoo5 worked fine without any of the optimizations I mentioned, and it had the highest image quality of its day.
And why exactly is MSAA currently not 100% compatible? Sure, there are some techniques that don't work with it, or require some hacking, but by and large it works.
And don't bring up the Voodoo 5 and how well it ran and looked back in the day, because the techniques developers used then were a lot different from what they use now.

The optimizations are fine as long as they don't hurt compatibility with older games that use the w-buffer. They should all be able to be disabled in the drivers. Other issues that could easily be rectified are hurting backward compatibility too.
So you basically have a problem in some older game, and now you're throwing out the "IQ is unacceptable because of the optimizations" line. And even though an ATI driver guy tells you that these optimizations have nothing to do with quality, you now switch to saying they break compatibility with older games. Do you know that the w-buffer was ditched altogether?

Finally, I guess the optimizations are fine, but the distance fog used in DX10 isn't necessary with OpenGL, especially if you're using an FP32 depth buffer. DX10 depth precision isn't any good, and excessive fog is used to cover up the lack of precision in the distance.
DX10 doesn't have fixed-function distance fog anyway; developers have to do it in their shaders. So you are comparing what some developer using OpenGL has done with what some other developer using DX10 has done, and drawing conclusions while totally missing the point.
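
To illustrate what "fog in the shaders" means, here is the typical per-pixel math written out as plain C++ (the exp2-style formula and constants are just one common choice; in a real DX10 game this would live in the pixel shader, in HLSL):

Code:
#include <cmath>

struct Color { float r, g, b; };

// Example per-pixel fog blend: compute an exponential fog factor from the
// eye-to-fragment distance, then blend the lit color toward the fog color.
// 'density' is a tuning constant chosen by the developer.
Color ApplyFog(Color lit, Color fogColor, float distance, float density)
{
    float f = std::exp(-density * density * distance * distance); // exp2-style fog factor
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;

    Color out;
    out.r = fogColor.r + (lit.r - fogColor.r) * f;   // lerp(fogColor, lit, f)
    out.g = fogColor.g + (lit.g - fogColor.g) * f;
    out.b = fogColor.b + (lit.b - fogColor.b) * f;
    return out;
}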
 
Voodoo5-style AA would still work with any technique, I think. It's only supersampling done right (with a rotated grid). Only S3 has offered it since (on the DeltaChrome; not sure if they still support it).

This is a feature I still miss. On Nvidia you have the xS modes, which can be useful, but the supersampling part is still ordered grid, as on the original GeForce and Radeon (though Transparency Supersampling is said to be rotated grid?). Anyway, I find 8xS very useful in Doom 3, for instance; supersampling appears to clean up shader aliasing.

Note that the Voodoo5 was far from perfect; it couldn't do trilinear filtering! Any system is a compromise. I'm upset by the GeForce 6/7 texture filtering (less than I was upset by the GeForce 4 Ti's worthless 4xAA), but thanks to the rest (features and speed) I can get some great usable IQ. (Still, that terrible filtering makes me want to upgrade to G8x/G9x.)

Regarding a bilinear-filtered framebuffer, that looks a bit ridiculous to me. If you're looking for that kind of blur, there's 2x Quincunx on Nvidia since the GeForce 3 (and also a bad "4x 9-tap" mode, at least on NV2x/3x), and the "wide tent" and "narrow tent" filters on DX10 ATI cards.
I found Quincunx to be occasionally useful (Doom 3, an N64 emulator on the GeForce 4 Ti).
 
MSAA and CSAA aren't any good. They're better than nothing, but only if the game supports them.
OpenGL isn't magical at removing aliasing either.
By FX24 z, I meant a 24-bit integer (i.e. fixed-point) z-buffer; by FP32 d and s I mean a 32-bit floating-point depth buffer and 32-bit FP shadow mapping.

Since formats below INT8 look awful on DX10 hardware, it would make sense for the driver to force higher-precision formats in games that call for less precision.
Actually, it doesn't make sense. If the developer has concluded that an effect only needs certain precision, why would the driver change that?
Well, maybe D3D games are welcome to change clip planes, but there's something OpenGL does that allows much more depth precision in the distance and little or no distance fog.
Such as? Direct3D and OpenGL are largely the same when it comes to rasterization.
 