OpenGL ARB notes are up

I'm pretty sure Apple's talking about the front buffer. When they say "our customers don't want more than 10 bits of precision", they are talking about DACs.

In this case, I think they may be shading the truth.

Apple no longer sells CRTs; they only sell LCD displays. I think Apple's LCDs are all currently digital (not that I have any proof of that), so Apple's own hardware isn't going to take advantage of anything beyond 8 bits per component.

I think they still provide analog video out to be backwards compatible with CRTs. I think they don't want to get stuck paying for 12-bit-per-component D-to-A converters that don't add any value to their LCD screens.
 
Joe DeFuria said:
I found this interesting:

Revisited issue 23, ability to read from the framebuffer. After considerable discussion, including the performance costs of this in deeply pipelined architectures and desirability of overturning WG decisions in the full ARB, we revoted and decided this would not be supported by a large margin.

I think there have been some discussions here on this before...(Humus?) on the merits, or lack thereof, of being able to read the framebuffer. Apparently, IHVs don't like it. ;)

NOOOOOOOOOO!!!!!! :devilish: :devilish: :devilish: :devilish: :devilish:

Well, Carmack for one isn't likely to be very happy:

Unfortunately, using a floating point framebuffer on the current
generation of cards is pretty difficult, because no blending operations are
supported, and the primary thing we need to do is add light contributions
together in the framebuffer. The workaround is to copy the part of the
framebuffer you are going to reference to a texture, and have your fragment
program explicitly add that texture, instead of having the separate blend unit
do it. This is intrusive enough that I probably won't hack up the current
codebase, instead playing around on a forked version.

The IHVs are going to have to allow some sort of blending or frame-buffer access, period. If they don't, floating-point frame-buffers will be stillborn. Just plain too difficult to use.
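
For reference, the workaround Carmack describes would look something like this (just a rough sketch, assuming ARB_fragment_program and an already-created RGBA texture covering the viewport; the variable and function names here are mine, not from the notes):

    /* 1. Snapshot the framebuffer region we're about to add light into */
    glBindTexture(GL_TEXTURE_2D, fbCopyTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, width, height);

    /* 2. Render the light pass with a fragment program that samples
       fbCopyTex and ADDs the light contribution itself, instead of
       letting the blend unit do it */
    glEnable(GL_FRAGMENT_PROGRAM_ARB);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, addLightProgram);
    drawLightPass();   /* hypothetical helper for one light's geometry */

Having to re-copy before every pass that touches the same pixels is exactly what makes it so intrusive.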
 
I thought super buffers were addressing that need? There was some mention of the super buffer proposal covering blending, etc...
 
Mmmm. I'm not sure I understand what super-buffers are. It's the first time I've heard that term.

Reading the rest of the notes, I'm wondering if they will implement blending in FP buffers along the same lines as the integer fixed-function pipeline, as opposed to the more flexible (and presumably harder to implement) option of providing the frame-buffer contents as an input to the fragment program.
 
Revisited issue 23, ability to read from the framebuffer. After considerable discussion, including the performance costs of this in deeply pipelined architectures and desirability of overturning WG decisions in the full ARB, we revoted and decided this would not be supported by a large margin.
I can understand the link to "performance costs of this in deeply pipelined architectures". With current frame buffer reads, which only allow some small fixed-function modifications (alpha blend), it's possible to do the read->blend->write in one sweep without getting a page break in between. But if the read value is used a long time before the write, you're bound to get page breaks in between. So it's understandable that there is some reluctance to it.
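
To be concrete about what that current fixed-function path looks like (just ordinary GL additive blending; drawLightPass is a hypothetical helper):

    /* The read->blend->write is handled entirely by the blend unit,
       so the hardware can do it in one sweep per pixel block */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);   /* dst = src + dst */
    drawLightPass();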

BUT!
I can see no reason why it should cost more than a texture read. It should in fact be a lot simpler than a texture read, since the pixels are nicely aligned. So if they don't add frame buffer reads, they should at least provide an official way to use the frame buffer as a texture. Especially since it already works on some chips. (An indication that either the designers like that method, or it's so natural that it worked even though it wasn't explicitly thought of.)
 
Using the framebuffer as a texture will produce rendering glitches unless you keep the texture and framebuffer caches coherent with each other (which adds considerably to the complexity of their designs) and/or flush the caches between texture access and framebuffer access. Also, reading the framebuffer in the pixel shader is kinda hard to combine with Multisampling AA in a proper manner.
 
If you limit yourself to only reading the same pixel as your rendering, then there should be no problems with caching.

But the multisampling problem is of course still there. And why did I forget to mention it here, when I've discussed it in a few other threads? :) I'll refer to the other threads about what could be done about it. (I'm too lazy to find them, but some of the discussion was in a Delta Chrome thread.)

So it would be a special kind of texture. You wouldn't be able to insert your own texcoords, and it would need a special filtering mode. For non-MS it should always use pointsampling, but with MS it would need some special way (like something we discussed in the other threads).

Jallen:
Have you tried to do what you said in a multisampling mode? If so, what happened? I doubt it would work in any way you'd want, but it could be interesting to see what happened. :)
 
Basic said:
If you limit yourself to only reading the same pixel as your rendering, then there should be no problems with caching.

Only if you specifically make sure to read from the framebuffer cache instead of the texture cache, or keep the caches coherent. Consider the case where 2 small polygons overlap each other. The first polygon will presumably be rendered properly, but unless you maintain coherency, the texture cache will still contain the contents of the framebuffer before the first polygon was drawn, and you end up reading old and incorrect framebuffer data for the second polygon.
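
In copy-to-texture terms the same hazard looks like this (purely an illustrative sketch; drawPolygon and fbCopyTex are hypothetical, and in hardware the stale copy would be the texture cache rather than an explicit texture):

    /* take one snapshot of the destination up front */
    glBindTexture(GL_TEXTURE_2D, fbCopyTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, w, h);

    drawPolygon(A);   /* blends against the snapshot -- fine */
    drawPolygon(B);   /* overlaps A, but still samples the pre-A snapshot,
                         so it reads stale data unless the copy (or the
                         texture cache) is refreshed in between */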
 
#¤%&@! Yes, you're right. I was thinking about it as a way to pass on values when multipassing non-transparent polys. Then you'd only need to sync the caches when you start the next pass, which would put much less strain on cache syncing. (But there are still holes in that argument.)

So it still needs to be linked to the framebuffer cache in some way. Probably best to skip the texture idea and keep it all in the frame buffer cache. Reading from it shouldn't be a problem. The only difference from now is that if you're running an FP that wants to read the frame buffer, you'd need to preload the cache with the framebuffer contents (if you don't already have that block in there). So the differences in hardware are a slightly larger framebuffer cache (since each pixel stays in there a longer time), and circuitry to find out earlier what to cache.

But I still think that if they can handle the precaching and page breaks for multitexturing, then it shouldn't be a big problem for the frame buffer.
 
Well, considering that they already do blending on normal 16 bpp and 32 bpp buffers, they must already be doing some precaching.
 
Yes, in a sense, and hopefully it won't be too much work to do it a bit earlier. But there is a difference with the current architecture: you can run the FP for a whole block before you bother to read the frame buffer, since blending is separated from the FP. That also means you get the info on what frame buffer blocks to read "for free".
 