Why is DirectX10 required for this?

At WinHEC, Microsoft gave a presentation on Longhorn's advancements in text display/management. It's available (http://download.microsoft.com/downl...41f2-893d-a6f2295b40c8/TW04007_WINHEC2004.ppt) as a PowerPoint presentation.

Specifically, it mentions bits like this:
http://home.comcast.net/~cdorrell/dx10.jpg

Which part of the currently understood DX10 spec makes this possible/necessary?

P.S. I apologize for not contributing to the oh-so-interesting discussions regarding various manufacturers' filtering optimizations...

 
There are no bitwise operations in DX9 (that covers the first op), and doing an 8x8 box filter would require a custom pixel shader.

Perhaps DX10 mandates an 8x8 filter kernel without the need for a complex pixel shader?
 
I'm interested in how they'll add ClearType support to their 3D environment (which is what their new GUI technically is). ClearType has to affect the rendering AFTER every 3D step has been done (i.e. once the text is already on screen).

we'll see..
 
1bpp bitwise OR should be possible with a 1-bit framebuffer and 1-bit texture support (1-bit additive blend + clamping = bitwise OR; if you need bitwise AND instead, use multiplicative blend), which shouldn't be too difficult to add or even emulate. A box filter should take about a dozen or so instructions in PS3.0 (two nested loops should be enough for a filter of any size).
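For illustration, a rough, untested HLSL sketch of that PS3.0 loop (the sampler name, the texel-size constant and the idea of an oversampled coverage texture are my assumptions, not anything from the slides):

// Sketch only: averages an 8x8 footprint of a coverage texture,
// i.e. the glyphs are assumed to have already been OR-ed into an
// oversampled 1-channel texture. Compile as ps_3_0.
sampler2D CoverageTex : register(s0);
float2 TexelSize;   // 1 / coverage texture dimensions

float4 BoxFilter8x8PS(float2 uv : TEXCOORD0) : COLOR0
{
    float sum = 0.0;
    // Two nested loops, as described: 64 taps centred on uv.
    [unroll] for (int y = 0; y < 8; ++y)
    {
        [unroll] for (int x = 0; x < 8; ++x)
        {
            float2 offset = (float2(x, y) - 3.5) * TexelSize;
            sum += tex2D(CoverageTex, uv + offset).r;
        }
    }
    return (sum / 64.0).xxxx;
}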
 
Seems bogus to me. You could obviously do that today if you used more bits per pixel and did blending instead of OR. You could do OR if you wanted, but it would be pointless: OpenGL has supported framebuffer logic ops since forever, yet they aren't widely supported in hardware. An 8x8 box filter might not fit in PS 2.0 hardware, but you could do it in multiple passes if you really wanted to. I doubt the fill-rate demands of text rendering are big.
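To sketch what the multipass route might look like (untested, names and constants made up): split the 8x8 box into an 8x1 horizontal pass and a 1x8 vertical pass, each of which fits comfortably in ps_2_0.

// Horizontal 8-tap pass; the vertical pass is the same shader with
// the offset applied to y instead of x. The constant-bound loop is
// unrolled, since ps_2_0 has no real flow control.
sampler2D SrcTex : register(s0);
float2 TexelSize;   // 1 / source texture dimensions

float4 BoxFilter8x1PS(float2 uv : TEXCOORD0) : COLOR0
{
    float sum = 0.0;
    [unroll] for (int x = 0; x < 8; ++x)
        sum += tex2D(SrcTex, uv + float2((x - 3.5) * TexelSize.x, 0)).r;
    return (sum / 8.0).xxxx;
}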
 
GameCat said:
You could do OR if you wanted, but it would be pointless: OpenGL has supported framebuffer logic ops since forever, yet they aren't widely supported in hardware.
Untrue. Pretty much all cards today support logic ops in hardware. This includes Radeon (R100- and R200-based), Matrox (G200), SiS 300, GLINT Gamma, and Intel Extreme Graphics (1 & 2, which are the same chip).
If you're wondering about the strange selection of cards I mentioned, those are the ones with open-source DRI drivers ;-). I'm pretty sure the newer cards and NVIDIA cards can do it too; you could easily test it with a simple test program, since performance would be really low if it's not supported in hardware (rasterization fallbacks really suck!).
(The cards do not, however, usually announce GL_EXT_blend_logic_op, which achieves pretty much the same functionality as the OpenGL 1.1 color logic op feature.)
 
davepermen said:
I'm interested in how they'll add ClearType support to their 3D environment (which is what their new GUI technically is). ClearType has to affect the rendering AFTER every 3D step has been done (i.e. once the text is already on screen).

we'll see..

Not really; generally all you have to do is render the text last in each rendering context, and then compose the contexts at the end.
 
GameCat said:
Seems bogus to me. You could obviously do that today if you used more bits per pixel and did blending instead of OR. You could do OR if you wanted, but it would be pointless: OpenGL has supported framebuffer logic ops since forever, yet they aren't widely supported in hardware. An 8x8 box filter might not fit in PS 2.0 hardware, but you could do it in multiple passes if you really wanted to. I doubt the fill-rate demands of text rendering are big.

It isn't really an issue of text rendering on its own; the root issue is that the text rendering, together with the rendering and composing of everything else, starts to become fairly graphics-intensive. So part of the reasoning for requiring higher-end video cards is that low-end cards just don't have the performance for the 'full experience'.
 
It's not really that they're saying you must have DX10-class hardware to render the earlier stages; it's just not efficient enough to do so. Yes, you could do PS3.0-style multipass emulation, but what's the point? It would be almost too slow to use (you'd have to render all the glyphs to a render target, THEN do the 8x8 filtering on the result). They just wanted to show at which point the acceleration will begin.

They go on to explain some of their reasoning in later slides (starting at slide 30).

DX10-class hardware can do ALL stages of composition in hardware, extremely fast (8x faster than DX9 hardware). The hardware also has dedicated support for 1-bit operations and a dedicated glyph cache for even more acceleration.

DX9/DX7-class cards will do all of the composition and filtering in software, and do the blending (pixel shaders for DX9, multipass fixed function for DX7) by combining alpha bitmaps (one for each of RGBA, from the looks of it).
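For what it's worth, my guess at that per-channel blend as a DX9 pixel shader (not from the slides; the texture layout and names are assumptions): the glyph texture holds one coverage value per colour channel, and each channel is blended separately against the background.

// Untested sketch. Assumes the compositor has the background
// available as a texture and a per-subpixel coverage texture for
// the glyph run (R, G, B = per-channel ClearType alpha).
sampler2D Background    : register(s0);
sampler2D GlyphCoverage : register(s1);
float4 TextColor;

float4 ClearTypeBlendPS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 dst = tex2D(Background, uv);
    float3 a   = tex2D(GlyphCoverage, uv).rgb;   // per-channel coverage
    float3 rgb = lerp(dst.rgb, TextColor.rgb, a * TextColor.a);
    return float4(rgb, dst.a);
}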
 
lyme said:
davepermen said:
I'm interested in how they'll add ClearType support to their 3D environment (which is what their new GUI technically is). ClearType has to affect the rendering AFTER every 3D step has been done (i.e. once the text is already on screen).

we'll see..

Not really; generally all you have to do is render the text last in each rendering context, and then compose the contexts at the end.

I'm talking about text that sits on a 3D window which gets rotated, translated, scaled, morphed, etc.; some screenshots already show that this is doable. ClearType has to be done as a last step (just like antialiasing; it's really just another form of antialiasing).

Now I'm wondering how they want to handle this.. dunno :D
 