When I'm buying a $650 video card I have zero intention of being forced to make choices between IQ options. I want it all, bay-bee, and I paid to get it.
That's nice and all, Ailuros, but it misses the point that we don't even have the option to enable or disable a "quality" mode of AA that would involve some form of SSAA.
If you noticed, I mentioned previously that it would be a feature most likely used for past games, and for current and future games that don't go "balls to the wall" on 3D bells and whistles.
For games such as those there would always be the fallback option of enabling a "performance" mode of AA in the control panel that would involve some form of MSAA and Transparency AA.
Speaking for myself, I would gladly lower resolution and texture quality in order to experience the wonderful goodness that was 3dfx's RGSS AA. And if that was unplayable even at lower resolutions/texture details/etc., then I'd still have the option of trying MSAA.
And if RGSS AA proved too slow and MSAA didn't work with said game engine, I could then put the game away and decide not to play it until faster hardware came out that could effectively handle SSAA on it.
I think I and most proponents of SSAA would argue that it's most likely to be used with older games and current and future games that aren't bleeding edge. And we're fine with that.
If only IHVs designed their latest-and-greatest video chips knowing exactly what kind of performance to expect on all machines (i.e. even those with a slowish CPU) with not only current games but games expected to be released at the same time they released that chip of theirs...
I am sure Derek or Anand will say they're not psychic either!

Wow, talk about timely and relevant articles...
http://www.anandtech.com/video/showdoc.aspx?i=2947
Rev, you sure you're not psychic?
Nothing against that; in fact, I'm still protesting that NV has stripped the hybrid modes from the G80.
You'd have to go back way further than, say, UT2k4: something extremely CPU-bound, with a resolution threshold no higher than 1280*something. Unfortunately I don't have any SS available anymore to test with on the G80, but the only game that was fully playable on the former 7800GTX with 16xS and 16xAF at its maximum available resolution (1280*960) was Unreal1.
It's not the quality of Supersampling I'm questioning here, rather its usability, due to its inevitable fillrate penalty.
Super-sampling on the card is different, as AFAIK it's a uniform grid, consumes more memory, etc. Doing it programmatically with blending means that you need no more memory, can use whatever sampling pattern you want (rotated grid, quincunx, whatever), and the results scale exactly as you'd expect: 1/2 fps for 2x, 1/4 for 4x, etc. So if your game is running at 300fps, why not throw some 4x super-sampling in there and have it run at 75fps instead, since you're not even seeing the 300fps anyway (even on a CRT)...
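To make the blending math concrete, here's a minimal CPU-side sketch of that jitter-and-blend idea in Python. The scene is just an analytic diagonal edge, and the names (render, RGSS_OFFSETS) are illustrative, not from any real API:

    # Minimal sketch of programmatic super-sampling: render the frame N
    # times with sub-pixel camera jitter and blend with equal weights.
    # render() is a stand-in for a full frame render of an analytic scene.

    WIDTH, HEIGHT = 8, 8

    def render(jitter_x, jitter_y):
        """Render one full frame with the view nudged by a sub-pixel offset."""
        frame = []
        for y in range(HEIGHT):
            row = []
            for x in range(WIDTH):
                sx, sy = x + 0.5 + jitter_x, y + 0.5 + jitter_y
                row.append(1.0 if sy < 0.75 * sx else 0.0)  # a diagonal edge
            frame.append(row)
        return frame

    # 4x rotated-grid offsets (in pixels), 3dfx-RGSS style; any pattern works.
    RGSS_OFFSETS = [(-0.375, -0.125), (0.125, -0.375),
                    (0.375, 0.125), (-0.125, 0.375)]

    # N full renders blended: the cost scales exactly as stated above,
    # 1/2 fps for 2x, 1/4 fps for 4x.
    accum = [[0.0] * WIDTH for _ in range(HEIGHT)]
    for jx, jy in RGSS_OFFSETS:
        frame = render(jx, jy)
        for y in range(HEIGHT):
            for x in range(WIDTH):
                accum[y][x] += frame[y][x] / len(RGSS_OFFSETS)

    for row in accum:
        print(" ".join(f"{v:.2f}" for v in row))  # edge pixels land between 0 and 1

The freedom to pick the pattern is the point: for the same sample count, a rotated grid like the above resolves near-horizontal and near-vertical edges better than the hardware's uniform grid.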
True, it's quite expensive, but as I mentioned, it may be unavoidable to get good AA in the future. Furthermore, as polygons get near pixel-sized, it'll become almost free on the current architecture.
And when I say "temporal anti-aliasing", I don't mean ATI's alternating sampling mode... in fact I hate the fact that the term is now ruined... I mean taking samples from the game time domain as well so as to get some nice motion blur.
For the time being, however, I agree with nAo: deferred shading + MSAA is how I'd design any future engine. I'd even go fully deferred lighting, as it avoids the combinatorial explosion of shader permutations and gets the rasterizer to cleverly solve the light contribution problem per-pixel.
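As a toy illustration of why deferred lighting sidesteps the permutation problem, here's a CPU-side Python sketch (invented data, not any particular engine's code): geometry is written to a G-buffer once, and then one lighting loop handles every material/light combination per pixel:

    import math

    # Toy deferred lighting: the G-buffer stores (position, normal, albedo)
    # per pixel, and a single lighting pass loops over lights per pixel.
    # One "shader" covers every material/light combo: no permutations.

    def normalize(v):
        l = math.sqrt(sum(c * c for c in v)) or 1.0
        return tuple(c / l for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # A 2x2 "G-buffer" with made-up contents.
    gbuffer = [
        {"pos": (0.0, 0.0, 0.0), "normal": (0.0, 1.0, 0.0), "albedo": (0.8, 0.2, 0.2)},
        {"pos": (1.0, 0.0, 0.0), "normal": (0.0, 1.0, 0.0), "albedo": (0.2, 0.8, 0.2)},
        {"pos": (0.0, 0.0, 1.0), "normal": normalize((1.0, 1.0, 0.0)), "albedo": (0.2, 0.2, 0.8)},
        {"pos": (1.0, 0.0, 1.0), "normal": (0.0, 1.0, 0.0), "albedo": (0.8, 0.8, 0.8)},
    ]

    lights = [
        {"pos": (0.0, 2.0, 0.0), "color": (1.0, 1.0, 1.0)},
        {"pos": (2.0, 1.0, 1.0), "color": (0.5, 0.5, 0.0)},
    ]

    # Lighting pass: accumulate each light's diffuse contribution per pixel
    # (no attenuation or specular, to keep the sketch short).
    for pixel in gbuffer:
        out = [0.0, 0.0, 0.0]
        for light in lights:
            to_light = normalize(tuple(l - p for l, p in zip(light["pos"], pixel["pos"])))
            ndotl = max(0.0, dot(pixel["normal"], to_light))
            for i in range(3):
                out[i] += pixel["albedo"][i] * light["color"][i] * ndotl
        print([round(c, 3) for c in out])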
As was I, but ChrisRay has found that they've now come back. They are available via the registry or using nHancer with the 100.xx leaked drivers. Sorry, I don't recall the specific number and can't find mention of it in that thread, but I think there are a few sets with it included now.
How many games are there really out there, released past let's say 2004, that can sustain an average 300fps?

I still play Myth II a fair bit, so I'm not the right person to ask.
How free exactly? I would imagine that Supersampling would render Multisampling redundant, if polygon interior data gets smaller than polygon edge data.

Hard to say, and actually it's a little bit less that it's "free" (unless we're talking pixel-sized polygons and 4xSSAA) than a much more efficient use of the rendering pipeline. The problem with pixel-sized polygons is that we effectively run FOUR PROGRAMS (3 vp + 1 fp) on them. REYES is much more efficient at this kind of workload, but unless/until that gets implemented in GPU hardware, there will be a minimum polygon size to stay efficient that is definitely more than 4 pixels, and probably more like 16 or more.
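That "four programs per pixel" arithmetic is easy to sanity-check. A quick back-of-the-envelope Python sketch, using the idealized counts above rather than measurements from any real GPU:

    # Idealized shader-invocation count: a triangle covering A pixels costs
    # 3 vertex-program runs (one per vertex, assuming no vertex reuse) plus
    # A fragment-program runs, so per-pixel cost grows as triangles shrink.

    def shader_invocations_per_pixel(pixels_per_triangle):
        vertex_runs = 3                      # 3 vp, as in the post
        fragment_runs = pixels_per_triangle  # 1 fp per covered pixel
        return (vertex_runs + fragment_runs) / pixels_per_triangle

    for area in (64, 16, 4, 1):
        print(f"{area:3d}-pixel triangles: "
              f"{shader_invocations_per_pixel(area):.2f} programs per pixel")
    # 64-pixel: 1.05, 16-pixel: 1.19, 4-pixel: 1.75, 1-pixel: 4.00 (3 vp + 1 fp)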
I'm not saying that you're wrong on the course of things as you describe them, I just think that you're addressing the less foreseeable future.

Perhaps, but that's my job.
I'm very well aware of what real temporal AA means, both in terms of output quality and in terms of performance.

That comment wasn't meant for you - it was meant for the person who said that they didn't like temporal AA and related it to monitor refresh rates, etc.
Incidentally, in D3D10 there's another option that I haven't pursued... theoretically one could duplicate and jitter the geometry in the GS and send it off to different render targets. This would cost a bit more memory, but it would avoid the CPU work necessary to re-render different frames, and AFAIK it would be almost as efficient as any hardware super-sampling method.

Can each render target have its own Z-buffer in D3D10? Or do you basically just use a single giant rendertarget and divide it up manually?
That's a good point; I believe it's only one. Thus you would have to divide it up and do the clipping yourself... actually wait, I believe there are N viewports.
I'm perfectly happy with 30 fps for anything other than competition-grade FPS play. And even 20-25 fps is perfectly fine if it isn't an FPS.
So, that said, if a game runs anywhere from 80-120 fps depending on the type of game, I'd be happy to enable 4xRGSS and take the 1/4 hit to FPS.
Regards,
SB
Trouble is that in most flight or racing sims (which don't need excessive framerates) I've tried in the past few years, I have a hard time sustaining anything over a 30-40fps average at high resolution with all the bells and whistles enabled.
A game isn't usually a movie, where the scene is controlled and there are people who know how most of us will be watching and just how shutter time will affect our perception.
Temporal aliasing is a function of movement... and movement is relative. Motion blur is an effect more than anti-aliasing.

That's only true if you're blending over a domain larger than your frame time. If you're sampling only from the time between "this frame and the next", it's a reasonably good approximation of what we see, and I can't think of a better definition of "temporal anti-aliasing". The "aliasing" in this case is the discrete frames that are being shown on the monitor rather than a continuous sequence of intermediate positions (as in real life).
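Here's a tiny Python sketch of that "sample between this frame and the next" definition, with a stand-in scene (a one-pixel object sweeping along a scanline); every name and number is made up for illustration:

    # Temporal anti-aliasing in the sense above: the frame displayed at
    # time t is the average of several sub-frame samples taken between
    # t and t + DT, turning fast motion into blur instead of discrete jumps.

    SAMPLES = 8          # sub-frame time samples blended per displayed frame
    DT = 1.0 / 30.0      # displayed frame interval (30 fps)
    SPEED = 240.0        # object speed in pixels/second (8 pixels per frame)

    def scene(t):
        """Render the scanline at an instant t: the object occupies one pixel."""
        pos = int(SPEED * t) % 16
        return [1.0 if x == pos else 0.0 for x in range(16)]

    def frame(t):
        """Blend samples from [t, t + DT): the 'this frame to the next' domain."""
        accum = [0.0] * 16
        for i in range(SAMPLES):
            sub = scene(t + DT * i / SAMPLES)
            accum = [a + s / SAMPLES for a, s in zip(accum, sub)]
        return accum

    print(frame(0.0))  # the object smears across ~8 pixels at 1/8 weight each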