Mintmaster
Veteran
Nicely done, neliz.

(Jan 2000)
2006-7: CPU's become so fast and powerful that 3D hardware will be only marginally beneficial for rendering relative to the limits of the human visual system, therefore 3D chips will likely be deemed a waste of silicon (and more expensive bus plumbing), so the world will transition back to software-driven rendering
http://www.scribd.com/doc/93932/Tim-Sweeney-Archive-Interviews
There's a big difference, though. Back then D3D was adding stages/abilities to each element, whereas now they're adding stages at the high level. The pixel/vertex pipelines became programmable because hardware was already implementing features this way. The original Radeon's pixel and vertex hardware was already structured very much like DX8-class hardware such as the GF3.

Back when GPUs added ever more fixed-function abilities (e.g. texture coordinate generation and bump mapping), they quickly realized that it was pointless to spend ever more silicon on individual features. Many features were left unused most of the time. So the solution was to use generic arithmetic operations to perform the calculations in 'software'.

Nowadays we're at the point where Direct3D keeps adding ever more 3D pipeline stages, with many of them frequently left unused, but they still have an overhead in the hardware and the drivers. The solution is again to add more programmability. It lets the current API pipeline be implemented more efficiently, and it also unleashes many new abilities by giving more control to developers.
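Just to make that concrete, here's a rough sketch in plain C (not real shader code, and the names are made up): the per-pixel work that a fixed-function "dot3" bump-mapping unit used to do is only a handful of multiply-adds, so it maps naturally onto the same generic ALUs as everything else.

/* Illustrative only, not real shader code: DX8-era dot3 bump mapping
 * expressed as generic arithmetic instead of a dedicated fixed-function
 * unit.  The names are invented; both vectors are assumed to be stored
 * as [0,1] colour values in the same (tangent) space. */
typedef struct { float r, g, b; } vec3;

float dot3_bump(vec3 normal_sample, vec3 light_colour)
{
    /* Expand [0,1] colour values back to [-1,1] vectors. */
    float nx = normal_sample.r * 2.0f - 1.0f, lx = light_colour.r * 2.0f - 1.0f;
    float ny = normal_sample.g * 2.0f - 1.0f, ly = light_colour.g * 2.0f - 1.0f;
    float nz = normal_sample.b * 2.0f - 1.0f, lz = light_colour.b * 2.0f - 1.0f;

    /* N.L, clamped to zero: just a few MULs and ADDs on generic ALUs. */
    float d = nx * lx + ny * ly + nz * lz;
    return d > 0.0f ? d : 0.0f;
}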
The most important thing in hardware design is data flow. Going from DX7 to DX8/DX9 wasn't very hard because the data going in and out of each pixel or vertex computation barely changed. DX9c/DX10 added a wrinkle with texture data flow into the vertices. What you're proposing, though, makes a complete mess of all the nice constraints of data flow that make GPUs so efficient at graphics.
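To show what I mean by the data flow barely changing (just a sketch, the field names are invented): whether the per-vertex work is fixed-function T&L or a DX8/DX9 vertex program, the record crossing the stage boundary stays small and fixed in format, and that's what the rest of the pipeline hardware is built around.

/* Sketch only; field names are made up. */
typedef struct {              /* what the vertex stage consumes */
    float position[3];
    float normal[3];
    float texcoord[2];
} vertex_in;

typedef struct {              /* what it hands to setup/rasterization */
    float clip_position[4];
    float diffuse[4];
    float texcoord[2];
} vertex_out;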
While I agree that it will eventually happen, I disagree about the cause. It won't be for efficiency/simplification or by graphics programmer demand. The only way I see the GPU pipeline going away is through drastic shifts in the market, i.e. stream computing growing to maybe 50% of the GPU market, through growth on the compute side coupled with shrinkage on the graphics side as integrated graphics becomes increasingly adequate.
I think the tipping point will be the development of the last mass-market, high demand (hence high margin) application for HPC in our society: Automated vehicle driving.
EDIT:
This is exactly the point I was making earlier. The pixel and vertex pipelines became programmable because the hardware was basically already structured that way to support the features that consumers were projected to want. The GF3 had to add dependent texturing, but ATI barely did anything when going to a DX8 GPU. For DX10, unified pipelines had marginal overall cost because the efficiency gain from load balancing negated the overhead of generality.

The problem is not so much the software side, but the hardware. It is incapable of efficiently supporting anything that deviates significantly from the Direct3D pipeline. Today's hardware still dedicates massive amounts of silicon to ROPs, texture samplers, rasterization, etc. That's pretty useless for anything other than Direct3D.
All the things you mentioned, though, will never have any economic drive to become generalized or merged. People have been talking about shader-based texture filtering for ages, but it's not going to happen because you don't come close to beating dedicated filtering hardware. If you removed ROPs then you'd have to add some other way of writing data to memory and dealing with generalized read-after-write hazards, so you're not going to get close to saving anything there. Rasterization and Z-testing have very well-defined data flow and use very different data formats/precision from the general computation units, so just like filtering it makes no sense to generalize them, and it never will unless the hardware idles >99% of the time. The only thing that has a chance of generalizing, IMO, is triangle setup.
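On the filtering point, here's roughly what a single bilinear fetch looks like when you do it with generic shader arithmetic (plain C sketch, invented names, simplified addressing): four dependent reads plus a pile of lerps, versus one request to a dedicated sampler that also handles wrapping modes, mipmapping, anisotropy and compressed formats for free.

/* Illustrative only: the cost of one bilinear fetch done "in the shader".
 * Names are made up, u/v are assumed to be in [0,1], and addressing is
 * simplified to clamp-to-edge. */
typedef struct { float r, g, b, a; } texel;

static int clampi(int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); }
static float lerp(float a, float b, float t) { return a + (b - a) * t; }

texel bilinear_fetch(const texel *tex, int width, int height, float u, float v)
{
    /* Texel-space coordinates with the usual half-texel offset. */
    float x = u * width  - 0.5f;
    float y = v * height - 0.5f;
    if (x < 0.0f) x = 0.0f;
    if (y < 0.0f) y = 0.0f;

    int   x0 = (int)x,                       y0 = (int)y;
    int   x1 = clampi(x0 + 1, 0, width - 1), y1 = clampi(y0 + 1, 0, height - 1);
    float fx = x - x0,                       fy = y - y0;

    /* Four dependent memory reads... */
    texel t00 = tex[y0 * width + x0], t10 = tex[y0 * width + x1];
    texel t01 = tex[y1 * width + x0], t11 = tex[y1 * width + x1];

    /* ...plus three lerps per channel, all on the general-purpose ALUs. */
    texel out;
    out.r = lerp(lerp(t00.r, t10.r, fx), lerp(t01.r, t11.r, fx), fy);
    out.g = lerp(lerp(t00.g, t10.g, fx), lerp(t01.g, t11.g, fx), fy);
    out.b = lerp(lerp(t00.b, t10.b, fx), lerp(t01.b, t11.b, fx), fy);
    out.a = lerp(lerp(t00.a, t10.a, fx), lerp(t01.a, t11.a, fx), fy);
    return out;
}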
Intel is going to use its manufacturing edge to not get completely blown out of the water with Larrabee, but nobody else can generalize without committing suicide.