Cube Maps and Next Gen Consoles

u missed a couple of words
"cubemaps are a way to do (crap) reflections on modern(? == last century) gpus"
ok, cubemaps are crap for reflections, though they do have other possible uses, e.g. point-light shadow maps
 

I'm interested in what you consider a better reflection solution for "modern GPU's".
 
I know this is slightly off topic but ATI usually refers to geometry shaders as being part of the US model (according to their papers they want US to do all three) so what is missing in the ALUs of the Xenos to make GS impossible on the hardware?
 

It doesn't have any explicit geometry shader thread allocation as it does for VS and PS (up to 32 and 64 threads, respectively). Though, otherwise, I'd (perhaps ignorantly) think everything else is just a matter of data interpretation, thus you could run a GS, simply considering it VS for thread purposes.
 
Dual Paraboloids are nice and I also don't mind latitude-longitude maps, but in general, when you start getting into non-affine camera geometries, I tend to find that per-vertex projection is problematic. You really need per-pixel projection for those things, and that means either raycasting or extreme tessellation. Hence why I have little qualms with zed referring to GPUs as semi-stoneage.
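To make that per-vertex vs. per-pixel point concrete, here is a small numeric sketch (Python; the front-half dual-paraboloid mapping (u, v) = (x, y)/(1 + z) is one common convention, and the sample directions are made up): interpolating projected UVs across an edge does not agree with projecting the interpolated direction.

```python
import math

def paraboloid_uv(d):
    """Front dual-paraboloid mapping (one common convention):
    (u, v) = (x, y) / (1 + z) for directions with z >= 0."""
    x, y, z = d
    return (x / (1.0 + z), y / (1.0 + z))

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

# Reflection directions at the two ends of an edge.
a = normalize((0.9, 0.0, 0.4))
b = normalize((0.0, 0.9, 0.4))

# Per-vertex: project at the vertices, let the rasterizer lerp the UVs.
ua, va = paraboloid_uv(a)
ub, vb = paraboloid_uv(b)
lerped = ((ua + ub) / 2.0, (va + vb) / 2.0)

# Per-pixel: lerp the direction, renormalize, then project.
mid = normalize(tuple((ca + cb) / 2.0 for ca, cb in zip(a, b)))
correct = paraboloid_uv(mid)

print(lerped)    # ~(0.325, 0.325)
print(correct)   # ~(0.391, 0.391) -- visibly different at the edge midpoint
```

The gap between the two results is the warping you see when a non-affine projection is only evaluated per vertex.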
 
The main problem with GS is that unlike VS and PS, it allows for a relationship between input and output data that is not 1:1. As such, the control circuitry needed to support geometry shaders is very different from that of vertex/pixel shaders, even though the execution units themselves are quite similar.
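A toy illustration of that 1:1 vs. 1:N contrast (Python; the "shaders" here are plain functions and everything is purely illustrative):

```python
# A 'vertex shader' maps each input to exactly one output; a 'geometry
# shader' may emit zero, one, or many primitives per input -- which is
# what a fixed per-stage thread allocation has no slot for.
def vertex_shader(verts, f):
    return [f(v) for v in verts]            # strictly 1:1

def geometry_shader(prims, f):
    out = []
    for p in prims:
        out.extend(f(p))                    # 0..N outputs per input
    return out

# Example 'GS' over 2D line segments: cull some, subdivide long ones.
def split_or_cull(seg):
    (x0, y0), (x1, y1) = seg
    if x1 < x0:                             # pretend this means 'culled'
        return []
    if abs(x1 - x0) > 1.0:                  # amplify: split in two
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        return [((x0, y0), (mx, my)), ((mx, my), (x1, y1))]
    return [seg]

segs = [((0, 0), (2, 0)), ((3, 0), (1, 0)), ((0, 1), (2, 1))]
print(len(geometry_shader(segs, split_or_cull)))   # 3 primitives in, 4 out
```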
 

well, call me old-fashioned but i still find the main problem with cube maps not that they're usually implemented as flat-camera based (our viewports have been like that for the past, erm, 20 years) but that they're not per surface, even less so per triangle. when do we get to see self-reflections finally?
 

Dual-Paraboloids do offer better storage efficiency than cube-maps (less angular anisotropy) and are as such somewhat preferable for static artwork, although they are pretty much unusable for dynamic reflections; latitude-longitude maps are a PITA to translate into a Cartesian-like coordinate-space, requiring a pair of ATANs per lookup that will make most contemporary GPUs squeal. Other than that, neither offers any major advantages or disadvantages over cube-maps.
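For reference, a sketch of that lat-long lookup cost (Python; the orientation convention varies between engines, this is just one choice) - note the atan2 plus acos per fetched direction:

```python
import math

def latlong_uv(d):
    """Unit direction -> lat-long texture coords. Needs an atan2 and an
    acos (or a second atan2) per lookup -- the per-pixel transcendental
    cost complained about above."""
    x, y, z = d
    u = (math.atan2(x, -z) / (2.0 * math.pi)) + 0.5   # longitude
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi   # latitude
    return (u, v)

def uv_to_dir(u, v):
    """Inverse mapping (used when building the map itself)."""
    phi = (u - 0.5) * 2.0 * math.pi
    theta = v * math.pi
    return (math.sin(theta) * math.sin(phi),
            math.cos(theta),
            -math.sin(theta) * math.cos(phi))

# Round trip on 'straight ahead' lands in the middle of the map.
d = (0.0, 0.0, -1.0)
u, v = latlong_uv(d)
print(u, v)   # -> (0.5, 0.5)
```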

Per-pixel cube-map reflections have been doable in pixel shaders ever since Geforce3, although the method usually used does make the assumption that the reflected object is "far away" from the reflector.
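A minimal sketch of that technique (Python; vector math only, names are illustrative): the lookup consumes only the reflection direction, which is exactly where the "reflected environment is far away" assumption comes in - the pixel's position never enters the fetch.

```python
def reflect(incident, normal):
    """R = I - 2*(N . I)*N -- the per-pixel reflection vector fed
    straight into the cube-map lookup (normal assumed unit-length)."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def cube_face(d):
    """Which cube face a direction falls on (dominant-axis rule)."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

# Eye ray hitting an upward-facing floor:
r = reflect((0.0, -0.7, 0.7), (0.0, 1.0, 0.0))
print(r, cube_face(r))   # (0.0, 0.7, 0.7), lands on the +y face
```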
 

arjan, in case you're referring to my post - i meant per-triangle cube-map building, not the per-pixel reflection calculations. i was complaining about the fact that cube-maps are predominantly built per-object, in the best case per surface, and this does not exactly warrant any good self-reflections. this is particularly apparent in car renditions, where car surfaces never reflect each other when they should.
 
well, call me old-fashioned but i still find the main problem with cube maps not that they're usually impemented as flat-camera based (our viewports have been like that for the past, erm, 20 years)
Well, even flat cameras still demand per-pixel projection because perspective is by nature a non-linear transformation, and not every quantity is linear in screen space even if it's linear in 3D space. It's just less noticeable when you perspective-correctly interpolate components like texture coordinates.
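That perspective-correct interpolation can be sketched in a few lines (Python; a single edge with made-up attribute and w values):

```python
def screen_lerp(a0, a1, t):
    """Naive screen-space interpolation -- wrong under perspective."""
    return a0 + (a1 - a0) * t

def perspective_lerp(a0, w0, a1, w1, t):
    """Interpolate a/w and 1/w linearly, then divide -- the
    'perspective-correct' scheme hardware applies to texture coords."""
    num = screen_lerp(a0 / w0, a1 / w1, t)
    den = screen_lerp(1.0 / w0, 1.0 / w1, t)
    return num / den

# One edge: near vertex (w=1) carries u=0, far vertex (w=4) carries u=1.
naive = screen_lerp(0.0, 1.0, 0.5)
correct = perspective_lerp(0.0, 1.0, 1.0, 4.0, 0.5)
print(naive, correct)   # 0.5 vs 0.2 -- textures 'swim' without the divide
```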

but that they're not per surface, even less so per triangle. when do we get to see self-reflections finally?
*scoff* I'd love that, too... but the fact of the matter is that everything about a rasterizer is against you in this, and by design. Getting information about everything around you at every point will never be feasible unless GPUs have more transistors in them than there are atoms in the plants they're made in ;). Short of a raytracing/raycasting or even raycasting+rasterizer hybrid architecture, it won't ever happen. And quite frankly, I don't think the typical hardware manufacturer (read : nVidia) gives a damn -- or more accurately, they give a damn about making sure it doesn't happen.
 
darkblu said:
if the API does allow for good old-fashioned dynamic cubemaps in a reasonable manner this 'abstraction' buys me negligible little.
Considering the rendering cost of updating cubemaps in your typical app is already optimized down to negligible little, low-level optimizations are all but irrelevant in the context of reflections (as we know them so far) anyhow.

Honestly though - IMO this is just some marketing person's idea of being 'clever' - e.g. point out how new hardware is better with existing features, because listing new unknown things tends to fly over the heads of most people.
As ERP mentioned, the real usability usually lies well beyond the PR.

acert93 said:
So my question: With the current hardware in the PS3 and Xbox 360 are there ways to speed up cube mapping?
Specifically in reference to "multipass" culling - you can think of cubemap rendering as analogous to predicated tiling, so yes, you can do similar optimizations on PS3/360 (though I imagine you'd do it at lower granularity than triangles).
Edit: Ok, after reading the thread again - another argument is apparently that reducing read bandwidth for triangles spanning multiple faces is a significant cost saving for this, which I flatly disagree with.
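The predicated-tiling analogy can be sketched as a coarse per-face binning pass (Python; the plane-based bounding-sphere test is a conservative simplification, and all names are illustrative):

```python
import math

# The six 90-degree face frusta of a cubemap rendered from the origin.
AXES = {"+x": (1, 0, 0), "-x": (-1, 0, 0), "+y": (0, 1, 0),
        "-y": (0, -1, 0), "+z": (0, 0, 1), "-z": (0, 0, -1)}

def face_planes(axis):
    """Inward-facing side planes of one face frustum (45-degree tilt)."""
    ax = AXES[axis]
    planes = []
    for other in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
        if other == tuple(abs(c) for c in ax):
            continue
        for sign in (1, -1):
            planes.append(tuple((a + sign * o) / math.sqrt(2.0)
                                for a, o in zip(ax, other)))
    return planes

def sphere_touches_face(center, radius, axis):
    """Conservative sphere-vs-face-frustum test: issue an object to a
    face pass only if its bounding sphere can touch that frustum."""
    return all(sum(c * n for c, n in zip(center, p)) >= -radius
               for p in face_planes(axis))

# A small object off to the right only needs the +x pass:
faces = [f for f in AXES if sphere_touches_face((10, 0, 0), 1.0, f)]
print(faces)   # ['+x']
```

Objects near the cubemap center still hit all six frusta, so the win comes from the typical case of small, distant objects.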
 
there's no dedicated abstraction for doing "render to cube map". there is a collection of abstractions that serve multiple purposes:
Geometry shader - perform per-prim computations with data amplification
Render target array - allow the geometry shader to "bin" geometry into different render targets
Resource views - allow some flexibility in reinterpreting a texture or render target in a different structure - e.g., cube map as array of 6 textures or 3D texture as array of k 2D textures.

Render to cube map is an example of how to compose those abstractions for an example task.
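A miniature sketch of that composition (Python; the dominant-axis binning is a cheap, not fully conservative stand-in for what a real geometry shader would do with proper per-face frustum tests, and everything here is illustrative):

```python
# One pass over the scene replays each primitive into the render-target
# slice(s) it may land on, instead of the CPU issuing six draw calls.
def faces_for_triangle(tri):
    """Union of the vertices' dominant axes -- a cheap bin for a
    triangle (3 verts) around the cubemap origin."""
    faces = set()
    for x, y, z in tri:
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax >= ay and ax >= az:
            faces.add("+x" if x > 0 else "-x")
        elif ay >= az:
            faces.add("+y" if y > 0 else "-y")
        else:
            faces.add("+z" if z > 0 else "-z")
    return faces

def bin_geometry(triangles):
    """Single traversal -> per-face triangle lists ('slices')."""
    slices = {f: [] for f in ("+x", "-x", "+y", "-y", "+z", "-z")}
    for tri in triangles:
        for f in faces_for_triangle(tri):
            slices[f].append(tri)
    return slices

tri = ((5, 0, 1), (5, 0, -1), (5, 1, 0))    # sits in front of the +x face
slices = bin_geometry([tri])
print([f for f, ts in slices.items() if ts])   # ['+x']
```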

One of the intents of these abstractions is to allow more work to be done on the GPU without requiring CPU intervention (reissuing geometry, reading data back from the GPU, etc.) - for example, by amplifying detail using the GS. This assumes that GPU processing performance is increasing at a faster rate than CPU processing performance, and that multicore processing isn't necessarily going to translate into a way to feed GPUs more efficiently. Or at least that is the theory according to this: http://download.microsoft.com/download/f/2/d/f2d5ee2c-b7ba-4cd0-9686-b6508b5479a1/Direct3D10_web.pdf

It's marketing cleverness in the same way that unified shaders, texture arrays, HDR storage formats, and 1080p are all marketing ploys if developers don't make effective use of them.
 
I'm interested in what you consider a better reflection solution for "modern GPU's".

well, u have a latent millions-of-rays/sec force there with the cell (yes it ain't a gpu, but)
anyways my comment was
'cubemaps are a way to do (crap) reflections on modern(? == last century) gpus"'
A/ it's crap cause 99.9999999999% of the answers u get with a cubemap lookup are wrong. now if someone can spin this as not 'crap' i'd pin a medal on them, cause basically it is crap. true, to date it's a good placeholder (like spheremaps before them) but u gotta admit basically they're bollux.
case in point: instead of whatever cubemap in game X, try this - substitute a cubemap of ronald macdonald eating an icecream (does he ever eat!). it will go unnoticed in game except by some keen observers. in fact i'll do this in my game; if anyone has said cubemap, send it to me + i'll use it (im not joking)
B/ modern... well, the mainstream card gf256 (from 1999) does cubemap lookups, thus hardly qualifies as modern. in fact, cubemaps are rather dated
 
zed, i think you're oversimplifying things considerably. put your ronald macdonald eating an icecream static reflection on a car in a racing game and see what people will say.
 
B/ modern..., well mainstream card gf256 (from 1999) does cubemap lookups thus hardly qualifies as modern, infact cubemaps are rather dated

You seem to have misread what I wrote to take an opportunity to grind an axe:

Cube Maps are one way to do reflections on modern GPUs. From what I have read it can take 6 passes to do a cube map on current GPUs

Texturing has been used by GPUs for a decade, hence it is one way to represent detail on onscreen objects on modern GPUs. It doesn't matter if the technique is old; the point was that it was "one way" to do something on "modern GPUs", i.e. the graphics chips currently available.

Anyhow, not to obscure ERP's question: Can you suggest a better (quality & performance) method of achieving reflections on the PS3 or Xbox 360 /or/ a way to accelerate Cube Maps on the current hardware? (i.e. the thread isn't necessarily about the shortcomings of Cube Maps, but a) are there ways to accelerate Cube Maps, or b) are there alternatives with better performance and quality).
 
i agree this method is great, ie rendering all 6 sides of a cubemap in one go
my beef is with cubemap basically being a totally bollux method of doing reflections

btw im still waiting on the ronald macdonald cubemap (or something similar, im not joking)
it'll be great, i kid u not, u want evidence, well picture this

flying in deep space, the player glances at a passing window + sees ronald smiling back at him
turns to his companion, ... shit they're opening everywhere
 
I hardly see the point of this "feature"; it would have been useful if draw calls were still prohibitive, but since that's not the case anymore, it's not that useful, if at all.

It's not like it's hard to render to a cubemap, and doing good culling & HSR (occlusion...) aggressively should help get good performance by reducing the amount of geometry rendered...

On a side note, there's an ATi paper on cubemaps correcting the 'cubemap at infinity' problem
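The usual form of that correction - intersect the reflection ray with a proxy volume, then re-aim the lookup at the hit point - can be sketched as follows (Python; the axis-aligned box proxy and all names are illustrative assumptions, not the paper's exact formulation):

```python
# A plain cubemap lookup uses only the reflection direction, so every
# point on the reflector fetches as if it sat at the capture centre.
# The fix: find where the reflection ray exits a proxy volume and look
# up toward that point instead.
def parallax_corrected_dir(pos, refl_dir, box_min, box_max, capture_center):
    """Ray/AABB exit point (slab method, pos assumed inside the box),
    then re-aim the lookup from the capture centre at the hit."""
    t = float("inf")
    for axis in range(3):
        d = refl_dir[axis]
        if d == 0.0:
            continue
        slab = box_max[axis] if d > 0.0 else box_min[axis]
        t = min(t, (slab - pos[axis]) / d)
    hit = tuple(p + t * d for p, d in zip(pos, refl_dir))
    return tuple(h - c for h, c in zip(hit, capture_center))

# Reflector 4 units right of the capture centre, reflecting straight up
# inside a 10-unit room: the naive direction is (0, 1, 0), the corrected
# one leans back toward the centre.
corrected = parallax_corrected_dir((4, 0, 0), (0, 1, 0),
                                   (-5, -5, -5), (5, 5, 5), (0, 0, 0))
print(corrected)   # (4.0, 5.0, 0.0) -- not straight up
```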
 