Interesting AA quote from SIGGRAPH paper

But the interesting thing is that the CineFX papers have been discussing 32-bit-per-component floating point. I'd really like to see how this pans out.

As a side note, I don't expect any color quality difference between 32-bit and 24-bit floating point, though there may be some differences for non-color data (because I still think that 16-bit floating point is enough for color data...provided there are error controls, such as doing calculations at higher bit depths...).
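To put a rough number on that claim, here's a quick illustrative calculation (the colour value below is arbitrary; the point is only the relative size of the errors): the rounding error from storing a colour component as a 16-bit float is well below the smallest step an 8-bit display channel can even show.

Code:
import numpy as np

value = 0.7137254901960784                          # an arbitrary colour intensity in [0, 1] (182/255)
fp16_error = abs(float(np.float16(value)) - value)  # error introduced by storing it as a 16-bit float
display_step = 1.0 / 255.0                          # smallest step an 8-bit output channel can show

print(f"FP16 rounding error: {fp16_error:.2e}")     # about 1.4e-4 for this value
print(f"8-bit display step:  {display_step:.2e}")   # about 3.9e-3, more than an order of magnitude coarser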
 
Chalnoth said:

Wonder how I missed that?

A little disappointing, but not overly surprising, given the added complexity of switching to a floating-point pipeline in the first place.

I wonder what Nvidia is targeting? If they're going for 32-bit per component throughout the pipeline, the NV30's expected higher transistor count may not pack in as much extra capability as many have assumed.
 
Well, considering how much precision, how many pipelines, and how much capability ATI managed to pack into the R300, which was beyond what I originally expected (I thought it would only really have 64-bit color, and I was doubtful about 8 pipelines, though I did hope for larger program sizes...), perhaps the increase in size from increasing precision isn't as great as I originally thought. Perhaps there are other factors that are much more important, and so the move from internal 96-bit to 128-bit isn't all that costly in terms of transistor count (But ATI couldn't do it because of the more restrictive .15 micron process...).

Anyway, I would just like to reiterate that in the vast majority of circumstances, I don't believe there will be a functional difference between 96-bit and 128-bit color.
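For a rough sense of scale, here's my own back-of-the-envelope (assuming a 16-bit mantissa for the 24-bit float format and the standard 23-bit mantissa for 32-bit float): both resolve steps far smaller than even a 10-bit display channel can show.

Code:
# Back-of-the-envelope only; the 16-bit mantissa for the 24-bit format is my assumption.
fp24_step = 2.0 ** -16       # relative step of a 24-bit float, about 1.5e-5
fp32_step = 2.0 ** -23       # relative step of a 32-bit float, about 1.2e-7
display_step = 1.0 / 1023.0  # even a 10-bit output channel only resolves about 1e-3

print(f"FP24 step:           {fp24_step:.2e}")
print(f"FP32 step:           {fp32_step:.2e}")
print(f"10-bit display step: {display_step:.2e}")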
 
Chalnoth said:
Perhaps there are other factors that are much more important, and so the move from internal 96-bit to 128-bit isn't all that costly in terms of transistor count (But ATI couldn't do it because of the more restrictive .15 micron process...).

Anyway, I would just like to reiterate that in the vast majority of circumstances, I don't believe there will be a functional difference between 96-bit and 128-bit color.
If you think there is no functional difference between 96-bit and 128-bit color, don't you think others might have come to the same conclusion? Maybe people did some real studies and found that 96-bit was enough? Why should gates be wasted on something that isn't needed? Why must you always think that "ATI couldn't"?

It's obvious you aren't a hardware designer, so why are you so eager to look for limitations of a process you know little about?
 
OpenGL guy said:
If you think there is no functional difference between 96-bit and 128-bit color, don't you think others might have come to the same conclusion? Maybe people did some real studies and found that 96-bit was enough? Why should gates be wasted on something that isn't needed? Why must you always think that "ATI couldn't"?

First of all, why must you always read my posts as some sort of slight against ATI, their engineers, or something like that? I just meant that ATI had a much more limited transistor budget than nVidia does.

Anyway, you may well be right. nVidia may have come to the same conclusion, and may use 96-bit color. The only problem is that the CineFX papers refute this.

It's obvious you aren't a hardware designer, so why are you so eager to look for limitations of a process you know little about?

It doesn't take a brilliant engineer to realize that the .15 micron process is more limiting in terms of transistor count than the .13 micron process. Fewer transistors means fewer features. I was just stating the possibility that ATI felt it could sacrifice some color depth.
 
Chalnoth said:
It doesn't take a brilliant engineer to realize that the .15 micron process is more limiting in terms of transistor count than the .13 micron process. Fewer transistors means fewer features. I was just stating the possibility that ATI felt it could sacrifice some color depth.

It's not that simple. The budget that everybody works to is the cost of the chip.
The main factors which determine die cost are the number of dies per wafer, the percentage yield (i.e. how many on a wafer actually work), and whether the fabrication house is still trying to pay off its development costs on that process.

Yield tends to be determined by the maturity of the process and the area of each chip. The larger the chip, the more likely it won't work. The newer the process, the more likely it won't work.

0.13u elevates the cost because it's newer, and yield takes a hit because of that. It lowers costs because the chips tend to be smaller (more per wafer), and yield benefits from the smaller size.

The point is that it's not as simple as "0.13u is better than 0.15u". There's more to think about. Also bear in mind that ATI may have said "24-bit floats are enough. We'll get a smaller/cheaper chip using 24 bits regardless of using 0.15 or 0.13" or "We can spend those transistors on <some other feature>".
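To make that concrete, here's a rough back-of-the-envelope sketch. The die area, wafer size and defect densities below are made-up illustrative numbers (not real figures for R300, NV30, or any fab), just to show how die size, process maturity and yield pull against each other.

Code:
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=200.0):
    """Crude estimate: usable wafer area divided by die area (ignores edge loss)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2.0) ** 2
    return wafer_area / die_area_mm2

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Classic Poisson yield model: bigger dies and dirtier processes both hurt."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Same hypothetical design: a shrink from 0.15u to 0.13u scales die area
# by roughly (0.13 / 0.15) squared.
area_015 = 110.0
area_013 = area_015 * (0.13 / 0.15) ** 2

# Assume the newer 0.13u process is less mature, so give it a higher defect density.
good_015 = dies_per_wafer(area_015) * poisson_yield(area_015, defects_per_mm2=0.004)
good_013 = dies_per_wafer(area_013) * poisson_yield(area_013, defects_per_mm2=0.008)

print(f"Good dies per wafer at 0.15u: {good_015:.0f}")
print(f"Good dies per wafer at 0.13u: {good_013:.0f}")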
 
Well... how can they support a 128-bit backbuffer then? (A better question would be, why should they?)
I understand a 64-bit FP framebuffer (dithered from internal 96 bits), but a 128-bit FP framebuffer? What would be the point of it? (Sorry if this question has an obvious answer, I might be too tired at the moment...) ;)
 
Panajev2001a said:
Well... how can they support a 128-bit backbuffer then? (A better question would be, why should they?)
I understand a 64-bit FP framebuffer (dithered from internal 96 bits), but a 128-bit FP framebuffer? What would be the point of it? (Sorry if this question has an obvious answer, I might be too tired at the moment...) ;)
This has been answered already:
It's not for use as a 128-bit backbuffer; it's for use as an intermediate stage in multipass rendering or as a texture format. The point is that you lose no data when doing multipass, so you can maintain the desired precision.
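A toy example of what keeping the intermediate result in floating point buys you: quantising to 8 bits between passes accumulates error, while a float intermediate does not. The modulation factors below are arbitrary made-up values, purely for illustration.

Code:
import random

random.seed(0)
passes = [random.uniform(0.9, 1.1) for _ in range(8)]  # eight hypothetical modulation passes

exact = 0.2      # result kept in a float intermediate buffer
quantised = 0.2  # result written back to an 8-bit buffer after every pass
for p in passes:
    exact *= p
    quantised = round(quantised * p * 255) / 255  # 8-bit round trip each pass

print(f"Float intermediate: {exact:.6f}")
print(f"8-bit intermediate: {quantised:.6f} (error {abs(quantised - exact):.6f})")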
 
Z3

Tried to understand that Z3 stuff, but it's over my head... :oops:

Could someone please help me out?

Is Z3 edge AA or full scene AA? Does it work on *all* edges, including alpha texture edges (unlike GeForce3/4), including stencil edges (unlike Parhelia)?

I think it's time for some decent AA with *all* edges antialiased. The 9700's AA looks quite impressive to me in this area. Would Z3 really be able to top 9700's AA?

Thank you!
 
Forget about full scene AA, the term has lost all meaning ...

I assume ATI just switches to supersampling when the alpha test is enabled. In theory with Z3 you could do the same to form the fragment mask; in practice you probably would not. If the fragment mask were exposed in the pixel shader, a developer could encode edge direction in a texture map, use alpha blending, and construct the mask in the pixel shader (still cheaper than supersampling), but again, in practice you probably would not. :) Depending on the maximum number of fragments stored per pixel it might not be a real issue though, because Z3 supports a limited form of sort-independent transparency ... with a sufficient number of fragments a developer can use alpha blending without much worry (and it's a trivial task for a developer to support both alpha test and alpha blending).
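Just to illustrate the general idea of building a coverage mask from alpha (this is not the Z3 algorithm, just a toy alpha-to-coverage style sketch with a made-up 4-sample mask):

Code:
def alpha_to_coverage(alpha, num_samples=4):
    """Turn an alpha value into a per-sample coverage mask: set roughly
    alpha * num_samples of the samples instead of an all-or-nothing alpha test."""
    covered = int(round(alpha * num_samples))
    return [i < covered for i in range(num_samples)]

for alpha in (0.0, 0.3, 0.6, 1.0):
    print(alpha, alpha_to_coverage(alpha))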

Z3 can recognise edges formed by intersections, and intersections are really what form the edges of shadows with stencil buffer shadows ... so yes it can AA those.

Marco
 
Thank you, Marco.

Well, I thought the 9700's MSAA does smooth alpha texture edges, but maybe you're right and they can claim that only for their SSAA method. Looking forward to the reviews. Hopefully someone will cover that question...
 
Well, I thought the 9700's MSAA does smooth alpha texture edges, but maybe you're right and they can claim that only for their SSAA method.

ATi say that their MSAA does smooth alpha textures; MfA is suggesting that during MSAA, if the chip detects an alpha texture, it dynamically switches to SSAA for that texture and then switches back once the alpha texture is rendered.
 
DaveBaumann said:
ATi say that their MSAA does smooth alpha textures; MfA is suggesting that during MSAA, if the chip detects an alpha texture, it dynamically switches to SSAA for that texture and then switches back once the alpha texture is rendered.

And what do you think? Does that sound reasonable to you? I mean, if I remember right, the Unreal Tournament test had some alpha textures, yet the 9700 performed exceptionally well in that test. So *if* it switches to SSAA for alpha textures, it seems to do so without a big performance loss.
 