Xmas said:
> Actually, DAC downsampling is more bandwidth efficient even before you reach a 1:1 fps/refresh ratio. When the framerate is higher than (s-1)/(s+1) times the refresh rate (where s is the number of samples), DAC downsampling saves bandwidth (e.g. 1:3 for 2 samples, 3:5 for 4 samples). This means, however, that the more samples you take, the closer your framerate needs to be to the refresh rate to benefit from DAC downsampling.

I must admit I was only giving an approximation, but thanks for the more precise maths anyway.
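Xmas's break-even point can be checked with a toy bandwidth model. The cost accounting below is my own assumption (render writes s samples per frame; the DAC reads either all s samples per refresh or just 1; a separate filter pass reads s samples and writes 1 per frame), not something stated in the thread:

```python
# Per-pixel bandwidth, in sample accesses per second, for the two strategies.
# s = samples per pixel, fps = render rate, hz = refresh rate.

def dac_downsample_bw(s, fps, hz):
    # render writes s samples per frame; DAC reads s samples per refresh
    return s * fps + s * hz

def filter_pass_bw(s, fps, hz):
    # render writes s, filter pass reads s and writes 1, DAC reads 1 per refresh
    return s * fps + (s + 1) * fps + hz

def breakeven_fps(s, hz):
    # solving dac_downsample_bw < filter_pass_bw for fps gives
    # fps > (s - 1) / (s + 1) * hz, matching Xmas's formula
    return (s - 1) / (s + 1) * hz
```

With this model, breakeven_fps(2, 60) is 20 (the 1:3 ratio) and breakeven_fps(4, 60) is 36 (the 3:5 ratio).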
Xmas said:
> Well, no big difference here, as this only affects where the samples are stored in memory. I'm not sure whether GF3/4 puts the samples one after another in one buffer (= linear memory area) or uses separate buffers for each sample position. I guess this depends on what the memory controller can do best.

In some respects, yes, it is 'just a matter of where data is located', but in other respects, it isn't. I am interested, to some extent, in the HW configuration. The T-buffer approach is possibly more flexible, in that you could more easily do N independent renders, but then you either have separate memories or possibly suffer greater page-break costs, etc.
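The two layouts Xmas mentions can be sketched as address calculations. This is purely illustrative (addresses in units of samples, simple linear buffers) and not a description of any actual chip's memory layout:

```python
# Interleaved: all s samples of a pixel sit next to each other,
# so the downsample/scanout stage reads one contiguous run per pixel.
def addr_interleaved(x, y, sample, width, s):
    return (y * width + x) * s + sample

# Planar (T-buffer-like): one separate full-size buffer per sample
# position, so each sample buffer can be rendered (or read) independently.
def addr_planar(x, y, sample, width, height, s):
    return sample * (width * height) + y * width + x
```

The trade-off in the post falls out of this: the planar form makes N independent renders trivial, but reading all s samples of one pixel touches s widely separated pages.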
Chalnoth said:
> (other than the Kyro series...they do FSAA internally)

Really? I must remember that.... or perhaps that's why I said I was certain that not all systems downsample in the DAC.
rAvEN^Rd said:
> I thought this would be an easy question to answer. Now look what I stirred up!
> Are images rendered in the back buffer (which thus needs the Z-buffer and stencil buffer) and then stored in the frame buffer(s)? And are multiple frame buffers then used to keep the screen from flashing at high framerates? Having done a little programming I realize the need for a double buffer.

Correct. Typically the frame buffer swap is just a matter of changing a couple of registers but, then again, if you need to do a downsampling pass (as was the case with early GeForce systems), then it'd be a bit more complicated.
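The "swap is just a couple of registers" point can be modelled in a few lines: flipping means repointing the scanout base address, not copying pixels. The class and register names here are made up for illustration, not real hardware:

```python
# Toy model of double buffering with page flipping.
class Display:
    def __init__(self):
        self.buffers = [bytearray(4), bytearray(4)]  # two tiny framebuffers
        self.scanout = 0  # index our pretend CRTC base-address register selects

    def back(self):
        # the buffer currently hidden from the DAC
        return self.buffers[1 - self.scanout]

    def flip(self):
        # one register write; no pixel data moves
        self.scanout = 1 - self.scanout

d = Display()
d.back()[:] = b"ABCD"  # render into the hidden buffer
d.flip()               # the DAC now scans out what we just drew
```

A downsampling pass, by contrast, would have to read and filter every pixel before the result could be displayed, which is why it is the more complicated case.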
> Why do I need a third one?

Are you referring to triple buffering? If you are double buffering and synchronising the swap of buffers to the VSync signal (in order to avoid 'tearing' of the display), then your instantaneous framerate will always drop to an integer fraction of your refresh rate. For example, if your refresh rate is 60Hz, then the framerates you can get are 60Hz, 30, 20, 15, etc. That is, if your system could only render frames at 59Hz, then you will actually only get 30Hz because of the locking to VSync. A third buffer lets rendering continue while a finished frame waits for the next VSync, avoiding that stall.
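The integer-fraction behaviour falls out of a one-line calculation: with VSync'd double buffering, each frame must occupy a whole number of refresh intervals.

```python
import math

# Effective displayed framerate under double buffering with VSync.
# The swap can only happen on a refresh boundary, so a frame that
# takes slightly longer than one interval ends up taking two.
def vsync_fps(render_fps, refresh_hz):
    refreshes_per_frame = math.ceil(refresh_hz / render_fps)
    return refresh_hz / refreshes_per_frame

# e.g. rendering at 59 fps on a 60Hz display yields only 30 fps
```

This reproduces the example in the post: vsync_fps(59, 60) gives 30.0, while vsync_fps(60, 60) gives the full 60.0.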
> Is front buffer the same as the frame buffer? I'm just trying to get the terms right in my head.

Both front and back buffers are frame buffers. The front one is the one being read by the DAC and hence displayed. The back one (i.e. hidden behind the front) is the one in which the next image is being constructed.