What Are The Benefits Of Having More Vertex Shader Pipes?

Hardknock

Veteran
Xenos is a peculiar beast, with 48 ALUs that can be dynamically assigned to either pixel or vertex shaders. What new techniques or effects might all this vertex processing power introduce that weren't feasible before?

What current pixel shader effects might be improved in performance or quality if coded to take advantage of Vertex shaders?

I feel the Unified Shader route was a waste if developers don't take advantage of Xenos' Vertex processing power.

Any 360 devs care to comment?
 
I actually don't know, so I'll let one of the more educated members answer your question. If I had to guess, could they be used for better animation? And in the case of Xenos, would using more for vertex leave less for pixel shading?
 
It could be useful for vertex texturing or R2VB (render to vertex buffer).
edit :oops: both seem more pixel shader heavy... misleading names...
 
!eVo!-X Ant UK said:
I actually don't know, so I'll let one of the more educated members answer your question. If I had to guess, could they be used for better animation? And in the case of Xenos, would using more for vertex leave less for pixel shading?

Just curious, but why would you say better animation? Also yes, Xenos using more for vertex leaves less for pixel shading.
 
Two obvious cases:
1. A 2-pass renderer with a Z-prepass. In the first pass there is no pixel shading whatsoever -> the pixel pipes would just sit idle, while the vertex shader might be overburdened.

2. Occluded pixels (particularly true for a 2-pass renderer). Modern GPUs can reject a lot of pixels per cycle thanks to hierarchical Z optimizations -> little or no pixel shading is taking place -> vertex shaders overburdened.

A GPU with unified shaders solves this by dynamically allocating shader units towards whatever the workload demands.

Cheers
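To make the idle-hardware point concrete, here's a toy throughput model (the unit counts and workload numbers are made up, purely illustrative): a fixed vertex/pixel split stalls during a Z-prepass, while a unified pool keeps every ALU busy.

```python
# Toy model of fixed vs. unified shader allocation (illustrative numbers only).

def frame_time_fixed(vertex_work, pixel_work, vs_units=8, ps_units=16):
    """Fixed pipes: each pool processes only its own work, so the pass
    takes as long as the slower pool."""
    return max(vertex_work / vs_units, pixel_work / ps_units)

def frame_time_unified(vertex_work, pixel_work, units=24):
    """Unified pool: all units chew through the combined workload."""
    return (vertex_work + pixel_work) / units

# Z-prepass: heavy vertex work, no pixel shading at all.
print(frame_time_fixed(vertex_work=960, pixel_work=0))    # 120.0 (pixel pipes idle)
print(frame_time_unified(vertex_work=960, pixel_work=0))  # 40.0 (all 24 units busy)
```

The same model shows why unified hardware also helps in pixel-heavy passes: the bottleneck pool is whatever the workload makes it, not whatever the silicon split fixed at design time.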
 
Hardknock said:
Just curious, but why would you say better animation? Also yes, Xenos using more for vertex leaves less for pixel shading.

Animation is largely dependent on the animators, on the artists, the people who actually spend hours tweaking the animation routines; it really doesn't depend on the hardware today.
If evo meant physics, then today that's something the CPU takes care of.
 
london-boy said:
Animation is largely dependent on the animators, on the artists, the people who actually spend hours tweaking the animation routines; it really doesn't depend on the hardware today.
If evo meant physics, then today that's something the CPU takes care of.

He probably meant skinning.

Cheers
 
Hardknock said:
Just curious, but why would you say better animation? Also yes, Xenos using more for vertex leaves less for pixel shading.

Because I remember the hype about the Xbox 1 having vertex shaders, and how devs said that animation would improve a lot when they were used.
 
Yeah well, I suppose you could skin a lot more characters with more vertex units, but I thought that was the most obvious answer.
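For reference, the skinning being discussed boils down to linear blend skinning: each vertex is transformed by every bone that influences it, and the results are blended by per-vertex weights. A minimal NumPy sketch (the matrices and weights below are hypothetical, kept tiny for clarity):

```python
import numpy as np

def skin_vertices(positions, bone_matrices, weights):
    """Linear blend skinning: each output vertex is the weighted sum of the
    vertex transformed by each influencing bone (4x4 homogeneous matrices)."""
    out = np.zeros_like(positions)
    homo = np.hstack([positions, np.ones((len(positions), 1))])  # homogeneous coords
    for b, mat in enumerate(bone_matrices):
        transformed = homo @ mat.T            # apply bone b to every vertex
        out += weights[:, b:b+1] * transformed[:, :3]
    return out

# One vertex, two bones: identity and a +2 translation in x, blended 50/50.
identity = np.eye(4)
shift_x = np.eye(4)
shift_x[0, 3] = 2.0
pos = np.array([[1.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])
print(skin_vertices(pos, [identity, shift_x], w))  # [[2. 0. 0.]]
```

More vertex throughput means more of these per-vertex blend operations per frame, which is exactly why extra units translate into more (or more densely boned) skinned characters.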
 
At the risk of being painfully obvious, more vertex shader pipes means more polygons.

On Xenos, you might be able to use MEMEXPORT in combination with its tessellation schemes to get denser geometry.
 
Inane_Dork said:
At the risk of being painfully obvious, more vertex shader pipes means more polygons.

I'm not sure if that is so obvious. More vertex units doing more work per vertex wouldn't necessarily result in more polygons versus fewer doing less. So it's all dependent on what the developer is doing.

Your total world geometry, or what is possible, also may be dependent on some other things, not even directly under the GPU's remit.
 
Titanio said:
I'm not sure if that is so obvious. More vertex units doing more work per vertex wouldn't necessarily result in more polygons versus fewer doing less. So it's all dependent on what the developer is doing.
Well of course it depends on what the developer does with them. We need not fall back to that level of obviousness.

From what I've seen, vertex shading is support for pixel shading and that's about it. So there's really not much more to do at the vertex level. Hence my comment about more polygons. More processors doing the same shaders means more vertices can be processed.

Your total world geometry, or what is possible, also may be dependent on some other things, not even directly under the GPU's remit.
It's true that having more vertex pipes does not automatically increase the number of polygons present. Again, though, you're in the realm of being too obvious. We're discussing what can be done with more vertex processing capabilities.
 
Inane_Dork said:
Well of course it depends on what the developer does with them. We need not fall back to that level of obviousness.

Maybe for you and me, but for some reading, you can probably never over-qualify a statement ;)

Inane_Dork said:
From what I've seen, vertex shading is support for pixel shading and that's about it. So there's really not much more to do at the vertex level. Hence my comment about more polygons. More processors doing the same shaders means more vertices can be processed.

It's true that having more vertex pipes does not automatically increase the number of polygons present. Again, though, you're in the realm of being too obvious. We're discussing what can be done with more vertex processing capabilities.

That's all true, and fine as far as theoretical discussion goes. But practically speaking, this will be tied to any number of other issues (both in terms of GPU activity - the budgeting of pixel versus vertex shading - and non-GPU activity).

I think Gubbi was pretty much on the mark. I don't think it's so much about increasing vertex activity overall per frame versus being able to utilise all resources for one or the other at certain points (i.e. for short bursts), where that suits.
 
Titanio said:
I think Gubbi was pretty much on the mark. I don't think it's so much about increasing vertex activity overall per frame versus being able to utilise all resources for one or the other at certain points (i.e. for short bursts), where that suits.
Why not? It's a matter of trade-offs, no? If they want to trade off some pixel shaders for vertex, and make use of that extra vertex processing, they could.

Just like Ninja Theory trades pixel shaders to save bandwidth, couldn't Xenos devs trade pixel shaders for extra vertex shading power?

Obviously, since it's a unified approach it's slightly different, because the scheduling is done automatically. But if they typically had a game with 20% vertex shading, and decided to make it more like 40% or 50% for whatever reason (I dunno... that's what this thread is for, though), they would effectively be trading pixel shading for extra vertex shading.

The question is, what sort of tangible gains are there to be made by using a higher ratio of vertex shading than is possible with traditional, non-flexible, non-unified hardware.
 
scooby_dooby said:
Why not? It's a matter of trade-offs, no? If they want to trade off some pixel shaders for vertex, and make use of that extra vertex processing, they could.

Of course. I meant to say "typically"; I think that's where the benefit will be.
 
Well, I'm trying to go beyond the "typical" here and delve into the more extravagant cases. Can't more vertices be used to make more realistic water (not sure what technique would be used here)? And for far more complex occlusion mapping?

I think Scoob said it best:

The question is, what sort of tangible gains are there to be made by using a higher ratio of vertex shading than is possible with traditional, non-flexible, non-unified hardware.
 
If I'm not mistaken, most of the "realistic" water on the next-generation systems will be done procedurally on the CPU, which is completely separate from the vertex shaders in the GPU.
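A common CPU-side approach for that kind of procedural water is a sum of directional sine waves evaluated over a heightfield. A small sketch (the wave parameters below are invented for illustration):

```python
import math

def water_height(x, z, t, waves):
    """CPU-side procedural water: height at (x, z) and time t is a sum of
    directional sine waves. Each wave is a tuple of
    (amplitude, frequency, speed, direction_x, direction_z)."""
    h = 0.0
    for amp, freq, speed, dx, dz in waves:
        phase = (x * dx + z * dz) * freq + t * speed
        h += amp * math.sin(phase)
    return h

waves = [(0.5, 1.0, 2.0, 1.0, 0.0),   # hypothetical wave parameters
         (0.2, 3.0, 1.5, 0.0, 1.0)]
print(water_height(0.0, 0.0, 0.0, waves))  # 0.0 at the origin at t=0
```

The resulting heights would then be written into a vertex buffer (or streamed per frame), which is why the work sits naturally on the CPU rather than in the GPU's vertex shaders.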
 
AFAIK more vertex processing would mean better "quality" polygons...

And for the US architecture, it means better flexibility for vertex/pixel shader ratio...
 
I'm the most curious about what devs think of displacement mapping. Will the tessellator and displacement mapping see any real action? Or will they both, and HOS, be relegated to the junk pile (which seems like familiar territory for HOS in general)? It seems like one could implement displacement mapping (with tessellation to save on polygon setup) in fewer instructions than, say, parallax occlusion mapping, with the benefit of silhouettes appearing correctly.
 
TurnDragoZeroV2G said:
I'm the most curious about what devs think of displacement mapping. Will the tessellator and displacement mapping see any real action? Or will they both, and HOS, be relegated to the junk pile (which seems like familiar territory for HOS in general)? It seems like one could implement displacement mapping (with tessellation to save on polygon setup) in fewer instructions than, say, parallax occlusion mapping, with the benefit of silhouettes appearing correctly.

Current GPUs become significantly less efficient with very small polygons.
And displacement mapping has its own set of challenges, notably LOD, but it might be used as a form of geometry compression by some people.
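As a rough illustration of the displacement-mapping idea being discussed - tessellate first, then push vertices out by a sampled height - here's a CPU-side sketch (the grid resolution, height map, and scale are all hypothetical; real hardware would do this in the tessellator/vertex stage):

```python
import numpy as np

def tessellate_and_displace(res, heightmap, scale=1.0):
    """Tessellate a unit quad in the XZ plane into a res x res vertex grid,
    then displace each vertex along +Y by a nearest-sample height lookup."""
    xs = np.linspace(0.0, 1.0, res)
    zs = np.linspace(0.0, 1.0, res)
    h, w = heightmap.shape
    verts = []
    for z in zs:
        for x in xs:
            # Nearest-neighbour sample of the height map (no filtering).
            ty = heightmap[int(round(z * (h - 1))), int(round(x * (w - 1)))]
            verts.append((x, ty * scale, z))
    return np.array(verts)

# A 2x2 height map with one raised corner, tessellated to a 3x3 grid.
hm = np.array([[0.0, 0.0], [0.0, 1.0]])
grid = tessellate_and_displace(3, hm, scale=0.5)
print(grid.shape)  # (9, 3): 9 displaced vertices
```

This also makes the LOD problem visible: `res` has to scale with on-screen size, or the mesh either aliases the height map or drowns the GPU in tiny polygons - the very inefficiency noted above.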
 