Ford vs Chevy?

Democoder,

I am no expert but:

1. Wouldn't the CPU still have to transfer the object's vertices to the GPU initially?

2. Once the object is inside the VPU, GPU, or whatever, doesn't the CPU still have to supply position updates driven by mouse movement, key presses, or other activity such as physics calculations on the object?

I really don't see the great savings in bandwidth; yes, I see some, but overall it seems rather low and raises many more questions. The VPU would have to take over what the CPU normally did with all the code dealing with an object. It does sound like a step toward implementing physics on the VPU with objects, but by itself it doesn't seem to be that big a step as far as I can see.
 
Last time on this...

Example:

The CPU transfers an HOS to the GPU consisting of N control points; the GPU tessellates the HOS into N*100 vertices.

Bandwidth savings = ~factor of 100 in data space.
Speed savings = however much faster the GPU is at tessellation compared to the CPU.
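To make the bandwidth claim concrete, here's a back-of-envelope sketch. The vertex size and control-point count are illustrative assumptions, not figures from any actual hardware; the point is only that the ratio is set by the tessellation factor:

```python
# Rough bandwidth comparison for the HOS example above.
# All numbers are illustrative assumptions.
BYTES_PER_VERTEX = 32      # e.g. position + normal + one UV set
n_control_points = 1_000   # assumed size of the HOS patch data
tess_factor = 100          # vertices produced per control point

# Sending the compact HOS vs. sending the fully tessellated mesh:
hos_bytes = n_control_points * BYTES_PER_VERTEX
mesh_bytes = n_control_points * tess_factor * BYTES_PER_VERTEX

savings = mesh_bytes // hos_bytes
print(f"HOS: {hos_bytes} B, mesh: {mesh_bytes} B, savings: {savings}x")
```

Whatever per-vertex size you assume cancels out, which is why the savings come out as a flat factor-of-100 in data space.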
 
With dynamic flow control in the vertex shader, why not just make the tessellator a vertex shader with one addition to its instruction set: an instruction to output a vertex :)
 
Presumably because you'd expand the primitive processor to allow reading and writing complex data structures instead of trying to emulate that by hacking it into the constant registers. Also, if a vertex shader outputs a vertex, what is it outputting it to? A vertex buffer where it is building a mesh? That would necessitate more than just adding one instruction, especially if you have to deal with mesh connectivity issues.
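To illustrate the connectivity point above, here's a conceptual CPU-side sketch (Python stand-in, not real shader code, and the linear-interpolation "tessellator" is a made-up toy): each "emit vertex" has to land in a growing vertex buffer, and on top of that something has to record which vertices form primitives, which is exactly the extra state a single new instruction wouldn't cover:

```python
def lerp(a, b, t):
    """Linear interpolation between two points (toy stand-in for
    real higher-order-surface evaluation)."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def tessellate(control_points, factor):
    """Hypothetical tessellator emulated on the CPU: subdivide each
    control segment into `factor` vertices. Note it must maintain
    BOTH an output vertex buffer and connectivity (index) data."""
    vertices, indices = [], []
    for a, b in zip(control_points, control_points[1:]):
        for i in range(factor):
            # "emit vertex": append to a growing vertex buffer
            vertices.append(lerp(a, b, i / factor))
            if len(vertices) > 1:
                # "emit primitive": connectivity tracked separately
                indices.append((len(vertices) - 2, len(vertices) - 1))
    vertices.append(control_points[-1])
    indices.append((len(vertices) - 2, len(vertices) - 1))
    return vertices, indices

curve = [(0.0, 0.0), (1.0, 2.0), (3.0, 1.0)]
verts, idx = tessellate(curve, 10)
print(len(verts), len(idx))  # 21 vertices, 20 segments
```

A real unit would emit triangles into a mesh rather than line segments into a polyline, but the bookkeeping problem is the same: the output target and the connectivity stream are state the vertex shader doesn't have today.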
 