What if?

Suppose a new graphics core came out with the capability to determine what load, during a given frame, will be placed on the pixel and vertex shading units, and used this information to dynamically re-allocate logic from, say, the vertex shader to the pixel shader when the pixel shader will be the bottleneck for that frame. Also, if certain features are not being used, then the logic that executes them could be re-allocated to another part of the graphics pipeline.


I'm trying to fuse Metagence (the technology that, according to its patent, was supposed to be used in graphics cards, but has done nothing of the sort so far) with VideoLogic's comment that Series 5 would involve 'processing elements that could be switched in and out to render graphics'. Is the logic in the different parts of the GFX pipeline similar enough to be manipulated that way by Metagence (which would be good, because it apparently does it on the fly)?

What do you think?
First thing I thought was unfeasibly high synthetic benchmark results....
 
Consider 3Dlabs' P10. It could statically shift some processing power between geometry and pixel processing. I assume all it needs is a brain to dynamically balance the load.
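The "brain" could be as simple as recomputing the P10-style geometry/pixel split every frame from the previous frame's measured load. A minimal sketch, assuming a unified pool of units and invented numbers (`TOTAL_UNITS`, the cycle counts, and the function name are all hypothetical, not from any real architecture):

```python
# Hypothetical sketch: per-frame split of a unified pool of shader units,
# in the spirit of the P10's static geometry/pixel split but recomputed
# every frame from measured load. All names and numbers are invented.

TOTAL_UNITS = 16  # unified ALUs that can run either vertex or pixel work

def balance(vertex_cycles: int, pixel_cycles: int) -> tuple[int, int]:
    """Split the pool proportionally to last frame's measured load,
    keeping at least one unit on each task."""
    total = vertex_cycles + pixel_cycles
    vs_units = round(TOTAL_UNITS * vertex_cycles / total)
    vs_units = min(max(vs_units, 1), TOTAL_UNITS - 1)
    return vs_units, TOTAL_UNITS - vs_units

# A pixel-bound frame: most units migrate to pixel shading.
print(balance(vertex_cycles=2_000, pixel_cycles=14_000))  # -> (2, 14)
```

This reacts one frame late, of course; a real scheduler would balance at a much finer granularity.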
 
I believe the old Not for Idiots site had hinted rather strongly that NV50 and R500 would have shared floating point vector units for both vertex and fragment processing. And more recently in a discussion in the console forum* someone inquired if anyone knew the number of vertex shader units in the XGPU2 (some odd R5XX) and Baumann answered that with "as many as there are pixel shader units."

* Yes it's a cesspool, but occasionally an interesting cesspool.
 
Interesting, but at what level will these be switched over? Are we talking a driver slider, or could it be re-calculated on the fly between frames?

Certainly that is the sort of thing that Metagence is designed to do. If ATI are going this way too, then what method might they be using to match load availability to load requirements?

I think this sort of thing will definitely help the graphics card market; they are becoming true processors, soon they are gonna be doing all sorts ;p
 
Dave B(TotalVR) said:
Interesting, but at what level will these be switched over? Are we talking a driver slider, or could it be re-calculated on the fly between frames?

I'd say load balancing should be dynamic - and it doesn't sound like such a complicated thing (once you have an architecture that can run a vertex or a pixel shader on the same units).

Make the vertex shader higher priority - don't start new vertices when the buffers between the VS and the PS are full.
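The priority rule above can be sketched as a simple decision an idle unified unit makes: prefer vertex work, but let a full VS-to-PS buffer push it onto pixel work. This is only an illustration; `BUFFER_CAP`, `pick_work`, and the queue contents are invented:

```python
# Hypothetical sketch of the priority rule: unified units prefer vertex
# work, but stop issuing new vertices once the VS->PS buffer is full,
# so they fall through to pixel work. Names and sizes are invented.

from collections import deque

BUFFER_CAP = 8  # transformed vertices the VS->PS queue can hold

def pick_work(vs_ps_buffer: deque, pending_vertices: int,
              pending_pixels: int) -> str:
    """Decide what an idle unified unit should run next."""
    if pending_vertices and len(vs_ps_buffer) < BUFFER_CAP:
        return "vertex"   # VS has priority while the buffer has room
    if pending_pixels:
        return "pixel"    # buffer full (or no vertices left): do pixel work
    return "idle"

buf = deque(["v"] * BUFFER_CAP)  # buffer already full of transformed vertices
print(pick_work(buf, pending_vertices=100, pending_pixels=100))  # -> pixel
```

The nice property is that the balance emerges from backpressure rather than from any explicit per-frame calculation.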
 
It's surprisingly fiddly, as you need to ensure that your available memory bandwidth is evenly utilised, rather than getting hot spots...

John.
 
"I'd say load balancing should be dynamic - and doesn't sounds like such a complicated thing (once you have an architecture that can run a vertex of a pixel shader on the same units.)"

Well yes, an instantaneous change into whatever kind of processing unit you need would be best, but what is feasible with current technology?

I'm also not just talking about using PS and VS interchangeably. What if you don't have anti-aliasing turned on? Surely that large area of silicon used to do the multisampling could be used elsewhere. Or say the logic used to do hidden surface removal is sitting idle because of a pipeline stall - why not use it to help speed up the pixel shading until it's needed again?

That sort of thing
Dave
 
Dave B(TotalVR) said:
I'm also not just talking about using PS and VS interchangeably. What if you don't have anti-aliasing turned on? Surely that large area of silicon used to do the multisampling could be used elsewhere. Or say the logic used to do hidden surface removal is sitting idle because of a pipeline stall - why not use it to help speed up the pixel shading until it's needed again?
While that might be possible, it's much more difficult than sharing VS and PS units. Dedicated hardware that only does Z tests is much smaller than a general ALU that can help out with shader duties.
 