Why have Vertex Shaders on the CPU instead of the VPU?

freq said:
OK, I'm a little confused here by all sorts of people claiming the next paradigm of 3D graphics will be "texture-less", and how all these things like displacement mapping and stuff will be the next best thing... OK, 2 questions: have you people done any 3D modelling, and if you have, no offence intended (really), but are you on CRACK? Textures ain't going anywhere, at least for the next 10 years or so, and I would suggest many years past that. Textures aren't mutually exclusive with all these other fun methods like displacement mapping...

first, nobody claimed textures and micro-polygon techniques were mutually exclusive. second, i was talking of getting rid of textures in the sense we have them today - as multiple, random-access sampling per pixel for (virtually) each of the components of the lighting equation. the issue i have with that is that bandwidth requirements are getting insane with the increase of realism and the advent of more adequate sampling techniques -- too many resources are thrown at satisfying the bandwidth hunger. there are alternatives to that - what Marco mentioned is one viable direction, but FWIW i wouldn't mind even some kind of 'sparse' texture sampling where a traditional lookup is done once per arbitrary-sane screen/mesh area - i don't see why that would be impossible (even now that i've run out of crack) or what makes us so irreversibly stuck on the present approach (aside from mindset inertia)
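For what it's worth, a minimal sketch of what such 'sparse' sampling could look like, assuming a fixed block size (the function names and the checkerboard stand-in texture are hypothetical, not from any shipping hardware):

#include <algorithm>
#include <cstdint>
#include <vector>

struct Color { uint8_t r, g, b, a; };

// stand-in for a traditional filtered texture lookup; checkerboard
// placeholder just to keep the sketch self-contained
static Color sample_texture(float u, float v)
{
    bool on = (int(u * 8) ^ int(v * 8)) & 1;
    return on ? Color{255, 255, 255, 255} : Color{0, 0, 0, 255};
}

// one random-access lookup per 4x4 pixel block, reused across the block,
// instead of one or more lookups per pixel
void shade_sparse(std::vector<Color>& out, int width, int height,
                  const std::vector<float>& us, const std::vector<float>& vs)
{
    const int B = 4; // block size -- the 'arbitrary-sane screen area'
    for (int by = 0; by < height; by += B)
        for (int bx = 0; bx < width; bx += B) {
            int cx = std::min(bx + B / 2, width - 1);  // sample at the
            int cy = std::min(by + B / 2, height - 1); // block centre
            Color c = sample_texture(us[cy * width + cx], vs[cy * width + cx]);
            for (int y = by; y < std::min(by + B, height); ++y)
                for (int x = bx; x < std::min(bx + B, width); ++x)
                    out[y * width + x] = c; // reuse (or cheaply interpolate)
        }
}

The per-block sample could of course be interpolated between neighbouring blocks rather than flatly reused; the point is only that the expensive random-access fetch happens once per block rather than once per pixel.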
 
Sort-first parallel rendering using hierarchical space subdivision, fair memory access arbitration, and batching to remove redundant accesses. Their raytracing is really just tiled rasterization with front-to-back rendering; it doesn't really talk about reflected/refracted/shadow/etc. rays (they mention reflection, but don't actually address reflected rays). Only the operation for primary rays is described, so I don't really see why they call it raytracing in the first place (a rough sketch of that reading follows below).

Can't see anything revolutionary, but then I rarely can.

Marco

PS. I like the approach, because it is basically everything I have always said tiled rendering should be ... but not revolutionary for the same reason.
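For reference, a rough sketch of that 'tiled rasterization with front-to-back rendering' reading of the patent -- a flat tile grid here instead of its hierarchical space subdivision, and all names are hypothetical:

#include <algorithm>
#include <vector>

struct Tri { float min_x, min_y, max_x, max_y; float depth; /* shading data */ };

const int TILE = 32, TILES_X = 20, TILES_Y = 15; // e.g. a 640x480 target

void bin_and_render(const std::vector<Tri>& tris)
{
    std::vector<std::vector<const Tri*>> bins(TILES_X * TILES_Y);

    // sort-first: assign each primitive to every tile its bound overlaps
    for (const Tri& t : tris) {
        int x0 = std::max(0, int(t.min_x) / TILE);
        int y0 = std::max(0, int(t.min_y) / TILE);
        int x1 = std::min(TILES_X - 1, int(t.max_x) / TILE);
        int y1 = std::min(TILES_Y - 1, int(t.max_y) / TILE);
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                bins[y * TILES_X + x].push_back(&t);
    }

    // per tile, process front to back so occluded work can be rejected
    // early -- the part that only covers primary rays / first hits
    for (auto& bin : bins) {
        std::sort(bin.begin(), bin.end(),
                  [](const Tri* a, const Tri* b) { return a->depth < b->depth; });
        // ... rasterize bin's triangles in order, skipping occluded pixels ...
    }
}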
 
SimonF:

The patent was nice to me because it's the first time I've read about an RT (or, like Mfa pointed out, just first-hit..) hw implementation.
I don't know whether it's novel or not.
In addition to what Mfa wrote, the patent describes how units from a pool of tile rasterizers are assigned to different tiles in a way that optimizes workload distribution, taking into account spatial and temporal coherency (a rough sketch follows below).
On the first page there is a pic showing reflected and refracted rays, but only primary rays are addressed.

ciao,
Marco
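A rough sketch of how that assignment could work, keeping only the temporal-coherency part (each tile's cost estimated from what it cost last frame) plus a greedy heaviest-first balance; the patent's actual heuristic, including whatever its spatial-coherency term is, isn't reproduced here:

#include <algorithm>
#include <numeric>
#include <vector>

// returns owner[tile] = index of the rasterizer that gets the tile
std::vector<int> assign_tiles(const std::vector<float>& last_frame_cost,
                              int num_rasterizers)
{
    int n = int(last_frame_cost.size());
    std::vector<int> order(n);
    std::iota(order.begin(), order.end(), 0);
    // heaviest tiles first, so the expensive ones get balanced early
    std::sort(order.begin(), order.end(), [&](int a, int b) {
        return last_frame_cost[a] > last_frame_cost[b];
    });

    std::vector<float> load(num_rasterizers, 0.0f);
    std::vector<int> owner(n);
    for (int tile : order) {
        // hand each tile to the currently least-loaded rasterizer
        int u = int(std::min_element(load.begin(), load.end()) - load.begin());
        owner[tile] = u;
        load[u] += last_frame_cost[tile];
    }
    return owner;
}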
 
nAo said:
SimonF:

The patent was nice to me because it's the first time I've read about an RT (or, like Mfa pointed out, just first-hit..) hw implementation.
HW RT? Have a google for "ART".
 
It could be an idea, but it wouldn't be efficient regarding the bandwidth 'wasted' to store and read intermediate data between APUs..
Even if it seems from the CELL patents that such a technique could be easily implemented..
Well that part depends mostly on how interconnects between APUs are supposed to work. The EE already had lots of inter-unit communication that took place off the main bus - I don't see why that couldn't work here :p (I sorta expect it)

Pixel engines would (wishfully) fetch texels from edram, so latency could be greatly reduced. In the patent, the 4 PUs in the Visualizer have half the APUs of the PUs that sit in the BE. 4 APUs' worth of die size devoted to just a single 'standard' (GS-like..) scan conversion engine seems like overkill to me.. there should be something more in those pixel pipes.. OK.. that could just be some more edram..
Well I'm still skeptical, as GS eDram latencies were not exactly ideal for this kind of thing. I guess if you can keep latency to around 10 cycles it would work (I assume APU pipelines will be longer anyhow now that they are targeting much higher clock speeds).
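To illustrate the latency point: on a deeply pipelined unit you'd hide a ~10-cycle eDRAM fetch by software-pipelining the loop, keeping the next texel's fetch in flight while the current one is shaded. A hypothetical scalar sketch (fetch_texel and shade are placeholders standing in for a memory access and ~10 cycles of ALU work):

#include <cstdint>
#include <vector>

// placeholders so the sketch compiles; imagine an eDRAM read and real shading
static uint32_t fetch_texel(int i) { return uint32_t(i) * 2654435761u; }
static uint32_t shade(uint32_t t)  { return t ^ (t >> 16); }

void shade_span(std::vector<uint32_t>& out, int count)
{
    if (count <= 0) return;
    uint32_t in_flight = fetch_texel(0); // prime the pipeline
    for (int i = 0; i < count; ++i) {
        uint32_t texel = in_flight;
        if (i + 1 < count)
            in_flight = fetch_texel(i + 1); // next fetch overlaps this shade
        out[i] = shade(texel);              // ALU work covers the fetch latency
    }
}

As long as the shading work per texel takes at least as long as the fetch, the memory access is effectively free; once latency climbs well past that, you need more independent work in flight, which is where deeper pipelines come in.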
 
Simon F said:
HW RT? Have google for "ART".
Thanks, I'd never heard of them before!

Fafalada said:
Well I'm still skeptical, as GS eDram latencies were not exactly ideal for this kind of thing. I guess if you can keep latency to around 10 cycles it would work (I assume APU pipelines will be longer anyhow now that they are targeting much higher clock speeds).
Well, if they are targeting 2+ GHz we should expect at least a doubling of the PS2 VU pipeline stages.
And what about texture filtering? ;)
Either Sony will provide full-fledged TMUs or we are going to devote some APUs just to TMU tasks (see the sketch below)..

ciao,
Marco
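Bilinear filtering is the canonical TMU task in question; a minimal scalar sketch of doing it in software on a general-purpose unit (texture layout and names are made up, and a real APU version would be SIMD across several texels) shows the four fetches plus three lerps a dedicated TMU would otherwise absorb per sample:

#include <algorithm>
#include <cmath>
#include <vector>

struct Texture { int w, h; std::vector<float> texels; }; // single channel for brevity

float bilinear(const Texture& t, float u, float v)
{
    // map [0,1] UVs to texel space with the half-texel centering convention
    float x = u * t.w - 0.5f, y = v * t.h - 0.5f;
    int x0 = int(std::floor(x)), y0 = int(std::floor(y));
    float fx = x - x0, fy = y - y0;

    auto at = [&](int xi, int yi) { // clamp-to-edge addressing
        xi = std::min(std::max(xi, 0), t.w - 1);
        yi = std::min(std::max(yi, 0), t.h - 1);
        return t.texels[yi * t.w + xi];
    };

    // four fetches + three lerps per sample -- the per-pixel cost at stake
    float top = at(x0, y0) * (1 - fx) + at(x0 + 1, y0) * fx;
    float bot = at(x0, y0 + 1) * (1 - fx) + at(x0 + 1, y0 + 1) * fx;
    return top * (1 - fy) + bot * fy;
}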
 