I seriously don't think so. Too much caching of unbounded data sets. _xxx_ said: I say all gfx chips will soon be based on some kind of deferred rendering, regardless of lighting/shadowing (which will become better, for sure).
Remi said: Maybe quadrics will make their entry into real-time?
Well, with some modifications to the core, the only reason lots of small polygons would be inefficient is the ineffectiveness of z-buffer compression under such a scheme. But even then, z-buffer compression might be changed so that, instead of just storing flat surfaces, it starts storing curved surfaces as a compression method. Remi said: Of course the increase of arithmetic intensity should continue to push devs to do "better pixels", but with a PPP it'll be easy to do lots of small triangles too - just not too small if we still want the graphics processors to remain as efficient as they are today.
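To make the z-compression point concrete, here's a toy sketch of the common plane-equation trick: a tile of depth values compresses well when it lies on a single plane, and fails to compress when a triangle edge cuts through it (which is exactly what lots of tiny triangles cause). The tile size, fitting method, and tolerance are all illustrative assumptions, not any real GPU's scheme.

```python
# Hypothetical sketch: tile-based z compression that stores a depth plane
# (z = a*x + b*y + c) per 4x4 tile instead of 16 raw depth values.
# Fit method and tolerance are illustrative, not any vendor's design.

def fit_plane(tile):
    """Fit z = a*x + b*y + c to a 4x4 tile of depths from three corners."""
    z00, z30, z03 = tile[0][0], tile[0][3], tile[3][0]
    a = (z30 - z00) / 3.0          # depth gradient along x
    b = (z03 - z00) / 3.0          # depth gradient along y
    c = z00
    return a, b, c

def compressible(tile, tol=1e-6):
    """Tile compresses to a plane if every depth matches the fit within tol."""
    a, b, c = fit_plane(tile)
    return all(abs(tile[y][x] - (a * x + b * y + c)) <= tol
               for y in range(4) for x in range(4))

# A planar tile compresses; a tile crossed by a silhouette edge does not.
flat = [[0.5 + 0.01 * x + 0.02 * y for x in range(4)] for y in range(4)]
bumpy = [[0.5 if x < 2 else 0.9 for x in range(4)] for y in range(4)]
assert compressible(flat)
assert not compressible(bumpy)
```

The more small triangles land in a tile, the fewer tiles stay planar, which is why this style of compression degrades as triangle size shrinks.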
Sage said: Remi said: Maybe quadrics will make their entry into real-time?
there's a little failure called the NV1 you might want to look up...
Well, no, the problem with the NV1 was that its quadrics were horrible to program for. Killer-Kris said: That doesn't mean it was a bad idea, just not supported. Personally I think quadrics/HOS are absolutely necessary for us to move forward. If nothing else, think of them as very good geometry compression, or as always having the proper LOD for the object's distance from the screen.
Wow... Thanks for the info. I knew it did quadrangles, but I didn't know they attempted quadrics... Sage said: there's a little failure called the NV1 you might want to look up...
I was under the impression that the rasterization order was rather important to keep texels flowing without much trouble. Really small triangles (let's say 4/5 pixels) would rather defeat that scheme, right? Does this mean that today's processors can cope better with a more hectic fetch order? Chalnoth said: Well, with some modifications to the core, the only reason why lots of small polygons would be inefficient would be due to the ineffectiveness of z-buffer compression under such a scheme.
Right, so you'd need to generalize the way pixels are dispatched to the pixel pipelines. Specifically, you'd make use of some sort of tiling mechanism where you attempt to cache as many pixels as you can, then sort them into tiles and render each tile separately. I believe ATI already does this. Remi said: I was under the impression that the rasterization order was rather important to keep texels flowing without much trouble. Really small triangles (let's say 4/5 pixels) would rather defeat that scheme, right? Does this mean that the processors can now cope better with a more hectic fetch order?
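The "sort them into tiles" step can be sketched in a few lines: buffer triangles, bin each one into the screen tiles its bounding box overlaps, then process one tile at a time so framebuffer and texture accesses stay spatially coherent. Tile size and data layout here are assumptions for illustration, not any vendor's actual design.

```python
# Illustrative tile-binning sketch: map each triangle to the screen tiles
# its bounding box touches, so a later pass can render tile by tile.

TILE = 16  # tile edge length in pixels (illustrative)

def bin_triangles(tris, width, height):
    """Map (tile_x, tile_y) -> list of triangle indices overlapping that tile."""
    bins = {}
    for i, tri in enumerate(tris):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        x0, x1 = max(0, min(xs)) // TILE, min(width - 1, max(xs)) // TILE
        y0, y1 = max(0, min(ys)) // TILE, min(height - 1, max(ys)) // TILE
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins

# Two tiny triangles: one inside the top-left tile, one spanning two tiles.
tris = [((2, 2), (5, 2), (2, 5)),
        ((14, 3), (20, 3), (14, 8))]
bins = bin_triangles(tris, 64, 64)
assert bins[(0, 0)] == [0, 1]   # second triangle's bbox starts at x=14
assert bins[(1, 0)] == [1]      # and reaches x=20, into the next tile
```

Rendering each bin in turn is what restores the coherent fetch order that a stream of scattered tiny triangles would otherwise destroy.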
OK, it does make sense then. Chalnoth said: ...you attempt to cache as many pixels as you can, then sort them into tiles and render each tile separately. I believe ATI already does this.
With unified hardware units (helping to minimize communication costs, as Dave hinted), each unit should be able to act as a VS and therefore have its own control logic. Doesn't that mean the end of quads? (This is rather unclear to me for now - 4-way SIMD at the PS looks like an idea good enough to be kept...) If that's the case, then we might be a lot closer to (really) small triangles than I thought we were... Chalnoth said: ...Then, after that, all you need to do is make sure that all pixels on a quad need not be executed on the same triangle...
Sounds like a texture to me. Simon F said: Absolutely, except that those shaders will use a big table of constants. maniac said: I don't think in 2017 they'll use textures anymore. Everything will be shader or other stuff they come up with.
JF_Aidan_Pryde said: Ailuros said: Probably a stupid layman idea on my behalf, but would something like a SoC stand a chance in the not so foreseeable future?
Ail, what kind of "System on chip"?
nelg said: What, if anything, would have to change for true 3D displays? Not stereoscopic, but 3D displays that allow for a 360-degree walk-around (holographic?).
Chalnoth said: Well, it's not really possible to do that sort of thing on an IMR. A deferred renderer may use such a system to reduce the amount of framebuffer caching that needs to be completed. Such a system would add some latency to the pipeline.
That said, if you really want to make such a technique optimal, you're not going to want to AA every triangle edge. But there's probably no truly robust system for proper edge detection (as Matrox's FAA showed us).
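One way to see why edge detection for selective AA is fragile: a simple depth-discontinuity test (the kind of heuristic an FAA-like scheme has to rely on) catches silhouettes but misses edges where the two surfaces happen to sit at nearly the same depth. The threshold here is an illustrative assumption, not a value from any real implementation.

```python
# Sketch of a naive depth-discontinuity edge detector for selective AA.
# It flags big depth jumps but is blind to near-coplanar creases, which is
# one reason "robust" edge detection for AA is hard in practice.

def edge_mask(depth, threshold=0.1):
    """Mark a pixel as 'edge' if depth jumps vs. its right/bottom neighbor."""
    h, w = len(depth), len(depth[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and abs(depth[y][x] - depth[ny][nx]) > threshold:
                    mask[y][x] = True
    return mask

# A silhouette (large depth jump) is caught; flat interior pixels are not.
depth = [[0.2, 0.2, 0.9],
         [0.2, 0.21, 0.9],
         [0.2, 0.2, 0.9]]
mask = edge_mask(depth)
assert mask[0][1] and not mask[0][0]
```

Geometry-based schemes avoid the threshold problem but then have to know which edges are actually visible, which is its own can of worms.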