
Correct, but what I was saying was that dx could've been written so that the things conflicting with forced aa could be changed, removed, or replaced with something else that didn't conflict.
How would you do that? We are in more or less totally programmable waters now. Will you go to developers and say: "You can do whatever you want, but please for the love of God, don't render anything else than images into render targets"? Games do a lot more complex algorithms today than they did in the Voodoo 3 era. Ideally every application would be aware of AA and present you with a nice option to enable x-sample AA, and only create AA render targets when it actually can.

I do know that as it is now, it's not possible for aa to be forced 100% of the time and work perfectly. However, Microsoft could spec the next dx version to allow 4x RGSS thru HW (ROPs?) if they wanted to, right?
What's preventing anyone right now from implementing 4x RGSS on the D3D side? Nothing. Performance is a huge problem though. Compression doesn't help much, as others have already told you. You also need to take into account 4x the pixel shader power requirements...
That's why, instead of going brute force, a ton of other features were brought into hardware: MSAA for edge AA, anisotropic filtering to improve texture detail inside triangles, TRSS/TRMS to improve edges resulting from transparency (texture kill, alpha tests)...
 
Incorrect. TRSS doesn't affect every pixel, only those that have "texture" edges (i.e. alpha test or texkill). Super sampling affects every pixel.

Less heat? How do you do 4 times as much work and expect to get less heat? Color compression saves work. MSAA saves work. CSAA saves work. You've got everything backwards.

You don't think that's done now? *scratch*
I didn't say it did, but it still affects some pixels just as much as FSRGSS would.

I know that they save work; what I meant was that they still require transistors (unless they're being secretly emulated thru shaders, lol), and if you have more transistors, then you have more heat or have to lower clock speeds, iirc. Of course, I could be wrong about that.

If you removed 16xq m/csaa and color compression and replaced them with only 4x RGSS, then I don't know that there would be a huge transistor difference. I could be wrong, but anyways.

How would you do that? We are in more or less totally programmable waters now. Will you go to developers and say: "You can do whatever you want, but please for the love of God, don't render anything else than images into render targets"? Games do a lot more complex algorithms today than they did in the Voodoo 3 era. Ideally every application would be aware of AA and present you with a nice option to enable x-sample AA, and only create AA render targets when it actually can.


What's preventing anyone right now from implementing 4x RGSS on the D3D side? Nothing. Performance is a huge problem though. Compression doesn't help much, as others have already told you. You also need to take into account 4x the pixel shader power requirements...
That's why, instead of going brute force, a ton of other features were brought into hardware: MSAA for edge AA, anisotropic filtering to improve texture detail inside triangles, TRSS/TRMS to improve edges resulting from transparency (texture kill, alpha tests)...
Well, a more logical argument I could make is that a Voodoo3 is just as much of a leap over a Super NES's graphical capabilities as a GeForce is. I mean, was anything lost from a Riva 128 to a Voodoo5?

Too many sacrifices are being made as we go on, and I really don't think that they would have to be made if microsoft either got more creative or quit dictating specs. 3dfx's standards were far better than microsoft would ever attempt, and nvidia outlasted 3dfx, absorbed them, and they're still here 8 years later, so they should be able to meet 3dfx's standards, since most people think they beat 3dfx. Although I'm going to negate my own argument since I believe 3dfx was consumed b/c of their bad business decisions while creating a superior product.
 
Too many sacrifices are being made as we go on, and I really don't think that they would have to be made if microsoft either got more creative or quit dictating specs.
And alas, this is where you display your ignorance. I don't mind you wanting better AA and image quality (don't we all). I don't even mind that you don't have a good concept of the performance trade-offs involved in various algorithms and hardware implementations. Hell, I'm at the point where I almost don't even mind that, even though you've been told the aforementioned things many times now, you refuse to listen to people who do know these things.

But the biggest trolling thing about your posts is that you constantly misattribute blame (as if you're in any position to blame anyone...) for these apparent image quality blasphemies. For the last time I'm going to say this, and say it bluntly: if you don't like a game's IQ, do not buy it. If the things that you care about graphically are getting worse, there is no one to "blame" but the game/application developers themselves. There is nothing useful that has been taken away or denied. The APIs are now more flexible than ever while at the same time making many more quality guarantees than in the past. These are indisputable facts, and thus continually posting stuff like the above quote is what drives you out of the realm of useful criticism and far into the realm of trolling.

So seriously: stop ignoring what people are telling you, and do some reading and learning before posting this stuff. You've clearly never even glanced at the DirectX API, so you have no right to be making claims about it.

Anyways I should really just stop reading any of your threads, but as one who hates misinformation and too frequently has to deal with the results of it, I feel a responsibility to correct at least the obvious lies. That said, if you refuse to listen to reason and fact, and instead continue your largely baseless crusade, I doubt that people will show much more patience.
 
I know that they save work; what I meant was that they still require transistors (unless they're being secretly emulated thru shaders, lol), and if you have more transistors, then you have more heat or have to lower clock speeds, iirc. Of course, I could be wrong about that.

If you removed 16xq m/csaa and color compression and replaced them with only 4x RGSS, then I don't know that there would be a huge transistor difference. I could be wrong, but anyways.
Your claim is that removing 16xq m/csaa and color compression support would offset the cost of making 4x RGSS fast? It's easy to show that's impossible. Just look at a die shot from a recent GPU and see that the shader core is larger than the rest. Now imagine quadrupling that while removing a few features from the ROPs and scan conversion.
RV770 die shot
GT200 die shot

You really think that removing a couple of features from the ROPs and scan conversion is going to make that much difference in clock speeds/heat when the shader cores are so large?
 
Processing 4x the resolution isn't really that much of a performance killer. I know it takes 4x as much pixel/texel processing power, but it wouldn't be that far off from 16xQ AA plus TRSS. With RGSS you don't really need as many samples as you do with AA. Also, if the only HW AA mode was 4x RGSS, then that would save transistors (no color compression, no CSAA, no MSAA), allowing for more ROPs and less heat, which would allow higher clock speeds.

What makes you think 4x is enough? I was just playing with 4xSSAA to see what you were on about, and I still saw aliasing; hell, I still saw bits of aliasing at 16xSSAA, and this is at 1600x1200.

Compared to 16xQ AA and SS TSAA, where my framerate doesn't get raped like a cheap hooker, I would say the quality increase, if there is any, isn't good enough to warrant switching to SSAA.
 
But rotated grid supersampling would look better than 4x4 SS given the same number of samples, right? I mean 4x4 SS is 4 times as many samples as 4xRGSS :???:. You could get 16xRGSS at that cost..??

Please correct me if I'm wrong as I'd like to know :smile:
Yes, 16x sparse sampling would look better than 16x regular grid sampling.
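To illustrate why a sparse/rotated pattern wins at the same sample count, here's a small standalone C++ sketch (the offsets are just commonly cited example positions, not any particular GPU's pattern) that prints the sub-pixel positions of a 2x2 ordered grid versus a 4x rotated grid. The rotated grid covers four distinct horizontal and four distinct vertical positions, while the ordered grid covers only two of each, which is what matters for near-horizontal and near-vertical edges:

```cpp
// Sketch: compare sub-pixel coverage of a 2x2 ordered grid vs. a 4x
// rotated/sparse grid. Offsets are illustrative, not vendor-specific.
#include <cstdio>

struct Sample { float x, y; };   // offsets within a pixel, range [0,1)

int main()
{
    // 2x2 ordered grid: only 2 distinct x positions and 2 distinct y positions.
    const Sample ordered[4] = {
        {0.25f, 0.25f}, {0.75f, 0.25f},
        {0.25f, 0.75f}, {0.75f, 0.75f}
    };

    // 4x rotated grid: 4 distinct x positions and 4 distinct y positions,
    // so near-horizontal/vertical edges get 4 coverage steps instead of 2.
    const Sample rotated[4] = {
        {0.375f, 0.125f}, {0.875f, 0.375f},
        {0.125f, 0.625f}, {0.625f, 0.875f}
    };

    std::puts("2x2 ordered grid:");
    for (const Sample& s : ordered) std::printf("  (%.3f, %.3f)\n", s.x, s.y);

    std::puts("4x rotated grid:");
    for (const Sample& s : rotated) std::printf("  (%.3f, %.3f)\n", s.x, s.y);
    return 0;
}
```

The same argument scales up, which is why 16x sparse sampling beats a 4x4 ordered grid at identical cost.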

Good job there, bringing a tactical nuke to a knife fight :p
LOL
 
Increased, but many things have been taken away.

But definitely not increased exponentially, as it seems most believe.

Increased proportionally at best.
See, that's your problem... You live in the past!
Across all your threads we can read about:
-Palettized textures: Were dropped. However, no games released in the last couple of years use them anyway (though they could just implement them with shaders). If some old game uses them and doesn't have a fallback, it simply won't run if the driver does not provide some support for them. So is one of your favorite old games crashing, or why is this a problem?

-Dithering: Another feature that was dropped. Again, no games released in the last couple of years have any use for this feature. The only problem is old games that use 16bpp back buffer formats, and only those, since many games even back then allowed the user to select 32bpp rendering.

-RGSS: Was never really dropped or made illegal by any API or Microsoft. The company that used it simply died, and no one else decided to pick it up due to its huge performance costs (see OpenGL guy's reply).
If you are a game developer and really, really want RGSS, I'm sure you can implement it in DX10 with geometry shaders or simply by looping a couple of times through your pixel shaders with slightly different inputs (you are covered by RGMS for polygon edges).
Such an implementation won't be any slower "in software" than it would be "in hardware", since the additional work to jitter geometry or to offset interpolated coordinates would be minimal in comparison to the increased pixel shader load.
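As a rough illustration of the "do it in the application" route described above, here's a hedged C++ sketch; the matrix convention and the render_scene/accumulate_frame helpers are hypothetical placeholders, not any real engine's API. The idea is simply to fold a sub-pixel offset into the projection matrix, render the scene once per offset, and average the four results:

```cpp
// Sketch only: application-level 4x RGSS by jittering the projection per pass.
// Assumes a D3D-style row-vector perspective projection where w_clip equals
// view-space z; render_scene() and accumulate_frame() are hypothetical helpers.
struct Mat4 { float m[4][4]; };

void render_scene(const Mat4& proj);      // draw the whole frame once
void accumulate_frame(float weight);      // add current frame * weight to an accumulator

// Shift the image by (dx, dy) pixels: a pixel is 2/width wide in NDC, and adding
// the NDC offset into the third row makes x_clip += offset * z, i.e. a constant
// shift after the perspective divide (y is negated because NDC y points up).
Mat4 jitter_projection(Mat4 proj, float dx_pixels, float dy_pixels,
                       int width, int height)
{
    proj.m[2][0] +=  2.0f * dx_pixels / float(width);
    proj.m[2][1] += -2.0f * dy_pixels / float(height);
    return proj;
}

void render_4x_rgss(const Mat4& base_proj, int width, int height)
{
    // Rotated-grid offsets in pixels, relative to the pixel centre.
    const float offsets[4][2] = {
        {-0.125f, -0.375f}, { 0.375f, -0.125f},
        {-0.375f,  0.125f}, { 0.125f,  0.375f}
    };

    for (const float (&o)[2] : offsets) {
        render_scene(jitter_projection(base_proj, o[0], o[1], width, height));
        accumulate_frame(0.25f);          // average the four passes
    }
}
```

The cost is exactly what the post above says: roughly 4x the shading and bandwidth, with only trivial extra work for the jitter itself.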
 
If you removed 16xq m/csaa and color compression and replaced them with only 4x RGSS, then I don't know that there would be a huge transistor difference. I could be wrong, but anyways.

It has already been pointed out that you're wrong, but since you're so far off the mark...

It's not enough to change the ROPs to go from 4xRGMS to 4xRGSS. You would also need 4x the shaders (from 800 to 3200 for ATI), 4x the texture units (40 to 160), 4x the interpolators (32 to 128), 4x the BW (a 256-bit to 1024-bit memory bus) and so on. It would be a gigantic chip, with power consumption to match. I don't think it would even be possible for nVidia to do; there are limits to how big a chip foundries can make at the moment, something like 33x26mm or thereabouts, I think I've read somewhere...
 
Thanks. I take it there wouldn't be much of a performance hit to... um... sparsify the sample points?
Not a great deal. Obviously in a regular grid there is greater opportunity to share parts of the calculation but I can't see it being massively significant.
 
It should be noted that if you're willing to re-project your geometry, any sampling pattern for SS can be implemented trivially in the application at a cost proportional to the number of samples you're taking. If you want to get clever, you can even implement conservative rasterization using geometry shaders and do edge tests in the pixel shader, but I doubt on current architectures that would be a win over the aforementioned "brute force" approach.
 
It has already been pointed out that you're wrong, but since you're so far off the mark...

It's not enough to change the ROPs to go from 4xRGMS to 4xRGSS. You would also need 4x the shaders (from 800 to 3200 for ATI), 4x the texture units (40 to 160), 4x the interpolators (32 to 128), 4x the BW (a 256-bit to 1024-bit memory bus) and so on. It would be a gigantic chip, with power consumption to match. I don't think it would even be possible for nVidia to do; there are limits to how big a chip foundries can make at the moment, something like 33x26mm or thereabouts, I think I've read somewhere...

True, but man what a chip that would be :oops:

I'll take 4, and a nuclear reactor and LN2 cooling with it :p
 
What specifically do you believe has negatively impacted image quality over the years?
many games don't support aa.
staying with the z-buffer (they seriously need to ditch it, and just use HW programmable clipping with 6 boundaries that can be of any size)
ati no longer supports true trilinear
more distance fog in games today, generally.
we're still using 32 bit frame buffers. (DMC4 used RGB10FPA2FP)

See, that's your problem... You live in the past!
Across all your threads we can read about:
-Palettized textures: Were dropped. However, no games released in the last couple of years use them anyway (though they could just implement them with shaders). If some old game uses them and doesn't have a fallback, it simply won't run if the driver does not provide some support for them. So is one of your favorite old games crashing, or why is this a problem?

-Dithering: Another feature that was dropped. Again, no games released in the last couple of years have any use for this feature. The only problem is old games that use 16bpp back buffer formats, and only those, since many games even back then allowed the user to select 32bpp rendering.

-RGSS: Was never really dropped or made illegal by any API or Microsoft. The company that used it simply died, and no one else decided to pick it up due to its huge performance costs (see OpenGL guy's reply).
If you are a game developer and really, really want RGSS, I'm sure you can implement it in DX10 with geometry shaders or simply by looping a couple of times through your pixel shaders with slightly different inputs (you are covered by RGMS for polygon edges).
Such an implementation won't be any slower "in software" than it would be "in hardware", since the additional work to jitter geometry or to offset interpolated coordinates would be minimal in comparison to the increased pixel shader load.
I said those are needed for better IQ for older games; what I was saying was that backward compatibility looks like shit, if it even half-way works.

I'm fine with shaders emulating older features and making them look as good as they do on a Voodoo5. The problem is that they've made no attempt to. I'm not going to be happy until I can play my DX5 and 6 games with 3dfx quality. I really don't think that's too much to ask for, if shaders are as great as people say they are.

To tell you the honest truth, I'm willing to settle for 8x RGMS plus TRSS in every game (all thru HW). The problem is that many games don't even work with that. You can neither force it thru the drivers nor enable it in game.
 
staying with the z-buffer (they seriously need to ditch it, and just use HW programmable clipping with 6 boundaries that can be of any size)

Just curious, how do you think the z-Buffer affects image quality? And what does it have to do with clipping? I think you should really look up what a z-Buffer really does.
 
Why don't games using the GRAW engine allow aa?

Could nvidia make a driver to allow any aa mode to be forced from the control panel?

Also, what HW/rendering technique/game engine limitation prevents control panel aa from working?

I can tell you how I solved the AA problem in BR2:

The main problem is that developers write the game using textures as render targets (to use the output in other effects like blooming), and, by D3D definition, a texture cannot have an AA buffer bound to it. Only surfaces can.

The trick to fix this sort of problem is to render to a valid surface (with an AA buffer bound to it) and then copy the surface to the texture.

As for the rest of the engines, I think that SSAA is virtually compatible with any rendering technique (render 2x2 times bigger and then shrink the image). Of course, this is a brute force method, but the image quality is awesome.
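For what it's worth, a bare-bones Direct3D 9 sketch of the surface-to-texture trick described above could look roughly like this (the 4x sample count and formats are just example values; BeginScene/EndScene, capability checks and error handling are omitted, and render_scene() stands in for whatever the engine normally draws):

```cpp
// Sketch of the "render to an AA surface, then copy into the texture" trick
// described above, in Direct3D 9. Error checking omitted for brevity.
#include <d3d9.h>

void render_scene(IDirect3DDevice9* dev);   // placeholder for the engine's normal drawing

void render_aa_into_texture(IDirect3DDevice9* dev, UINT width, UINT height,
                            IDirect3DTexture9** out_tex)
{
    IDirect3DSurface9* msaa_rt    = nullptr;
    IDirect3DSurface9* msaa_depth = nullptr;
    IDirect3DSurface9* tex_surf   = nullptr;

    // Multisampled color and depth surfaces; textures can't be multisampled
    // in D3D9, but plain render-target surfaces can.
    dev->CreateRenderTarget(width, height, D3DFMT_A8R8G8B8,
                            D3DMULTISAMPLE_4_SAMPLES, 0, FALSE,
                            &msaa_rt, nullptr);
    dev->CreateDepthStencilSurface(width, height, D3DFMT_D24S8,
                                   D3DMULTISAMPLE_4_SAMPLES, 0, TRUE,
                                   &msaa_depth, nullptr);

    // Ordinary (non-multisampled) texture that later passes will sample from.
    dev->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                       D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, out_tex, nullptr);
    (*out_tex)->GetSurfaceLevel(0, &tex_surf);

    // Draw the scene into the multisampled surface.
    dev->SetRenderTarget(0, msaa_rt);
    dev->SetDepthStencilSurface(msaa_depth);
    dev->Clear(0, nullptr, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
    render_scene(dev);

    // StretchRect from a multisampled surface to a plain one performs the
    // AA resolve; the texture now holds the antialiased image and can be
    // fed into bloom or other post-processing as usual.
    dev->StretchRect(msaa_rt, nullptr, tex_surf, nullptr, D3DTEXF_NONE);

    tex_surf->Release();
    msaa_depth->Release();
    msaa_rt->Release();
}
```

The same StretchRect path with D3DTEXF_LINEAR can also be used to shrink an oversized render target, which is essentially the brute-force 2x2 SSAA mentioned above.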
 