JF_Aidan_Pryde
Regular
The way I see it is this: today we have 16 half-flexible pipelines running at 500MHz. All of this does rasterisation.
To get to interactive RenderMan quality, we need more of what we have now, but also stuff we don't have.
Can 256 fully programmable pipelines at 5GHz do the job?
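For scale, here's a quick back-of-envelope on the raw throughput jump those numbers imply. It just multiplies pipes by clock, and ignores per-pipe efficiency, memory bandwidth, and everything else that actually matters, so treat it as an upper bound on the scaling:

```python
# Back-of-envelope: raw per-clock throughput scaling from a 16-pipe
# 500MHz part to a hypothetical 256-pipe 5GHz part.
# (The numbers are the ones from the post, not any real GPU's spec.)

pipes_now, clock_now = 16, 500e6        # 16 pipelines at 500 MHz
pipes_future, clock_future = 256, 5e9   # 256 pipelines at 5 GHz

ops_now = pipes_now * clock_now         # pixel-ops/sec today
ops_future = pipes_future * clock_future

print(f"today:   {ops_now / 1e9:.0f} Gops/s")
print(f"future:  {ops_future / 1e9:.0f} Gops/s")
print(f"speedup: {ops_future / ops_now:.0f}x")
```

So even granting all of that, it's only a 160x jump in raw rate, which is why the "stuff we don't have" part of the question matters as much as the counts.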
It seems we need some other things too.
- How do we get better AA?
- Primitive processor?
- Should texture memory be totally virtualised and addressable?
- Does it make sense to use the same hardware block for pixel and vertex units? (I asked John Montrym about this and he said there's very little difference between PS and VS 'capabilities', but he was neither for nor against the idea of unifying the hardware.)
How will GPUs evolve to close the gap?