Fafalada said:
In current GPUs those are so small they barely register in terms of die space used. Apparently only a few odd freaks like nAo and me think larger sized pixel caches would be useful.
Okay. I wasn't sure about specifics, but I knew they were there. Anyway, the point is there's still a substantial amount of additional logic needed if there was no eDRAM.
Well, I do think nAo was also saying you'd be surprised given your actual expectations from this hardware. Whether something has wow-factor in absolute terms is mostly a matter of individual perspective anyway.
I think I gave the wrong impression here. I fully expect this level of hardware (everything from ~7800GT level and upwards) to put out far better graphics than we're seeing today. A closed platform will help that, hopefully. But he predicted that one day I'll think, "God, how'd they do that?", which to me is the highest tier of awe.
I don't necessarily agree with this - Warhawk already presents one interesting way to do quality volume smoke without going overdraw-happy. Not to say everything has an alternative - but I do believe we have lots of room to rethink particle approaches with new machines.
I've been thinking about these alternatives for a couple of years. For example, I was thinking about doing multiple offset texture accesses per pass instead of multiple single-texture passes. The problem is that you drift farther away from the realistic model. When racing games generate smoke/wheelspray/dust at the tires, the sprites intersect with the ground, and you see the discrete polygons. Using more sprites, each more lightly coloured, ameliorates this.
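The collapsed-passes idea can be sketched in scalar form. This is only an illustrative model of the shader math, not anyone's actual implementation; `sample_smoke`, the offsets, and the per-layer alpha are made-up stand-ins for a texture fetch and its parameters:

```python
# Sketch: fold several smoke layers into one pass by taking multiple
# offset texture samples and compositing them in-shader, instead of
# blending one textured quad per layer. All names are illustrative.

def sample_smoke(u, v):
    # Stand-in for a texture fetch: a simple radial density falloff.
    d2 = (u - 0.5) ** 2 + (v - 0.5) ** 2
    return max(0.0, 1.0 - 4.0 * d2)          # alpha in [0, 1]

def composite_layers_one_pass(u, v, offsets, layer_alpha=0.25):
    """Front-to-back 'over' compositing of several offset samples."""
    colour, transmittance = 0.0, 1.0
    for du, dv in offsets:                    # one offset per virtual layer
        a = layer_alpha * sample_smoke(u + du, v + dv)
        colour += transmittance * a * 1.0     # white smoke
        transmittance *= (1.0 - a)
    return colour, transmittance

c, t = composite_layers_one_pass(
    0.5, 0.5, [(0.0, 0.0), (0.02, 0.01), (-0.01, 0.03), (0.03, -0.02)])
```

Since the smoke colour here is pure white, the accumulated colour and the remaining transmittance always sum to one - a handy sanity check on the compositing loop.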
The Warhawk video showed a similar problem when the plane passed through the clouds (although it doesn't matter for that game since plane-cloud intersections are rare and fleeting), with an abrupt transition in colour when crossing the cloud boundary. Looks to me like the final compositing is done with single z, alpha, and colour values per pixel.
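The hard edge where a sprite cuts an opaque surface is exactly what depth-based "soft particle" fading addresses: attenuate the sprite's alpha by its distance to the surface behind it. A minimal sketch, assuming a linear fade window; `fade_range` and the function name are illustrative:

```python
def soft_particle_alpha(base_alpha, scene_depth, particle_depth, fade_range=0.5):
    """Fade a sprite's alpha as it nears an opaque surface, so the
    intersection dissolves instead of showing a hard polygon edge.
    Depths are in view-space units; fade_range is an assumed constant."""
    depth_gap = scene_depth - particle_depth  # distance to the surface behind
    if depth_gap <= 0.0:
        return 0.0                            # sprite is behind the surface
    fade = min(1.0, depth_gap / fade_range)   # linear ramp over fade_range
    return base_alpha * fade
```

This needs the scene depth readable in the particle shader, which is why it suits platforms where the depth buffer (or eDRAM copy of it) is cheaply accessible.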
If intersection isn't a problem, though, then this technique does indeed look promising. I'm quite curious to know what exactly they're doing. I wonder if there's any precomputation, and thus animation restrictions? From what I've heard around here, they said something about raytracing on CELL. Although I think a realistic scattering simulation is infeasible, they could cast two rays to determine the distance through the cloud in the view direction (for transparency) and the sun direction (for shading). That's possible in a low-poly cloud, I think.
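The two-ray idea sketches out simply if the cloud is approximated by an analytic hull. In the sketch below a sphere stands in for the low-poly cloud, and Beer-Lambert extinction (with an assumed coefficient `sigma`) turns each chord length into transparency and sun shading. This is a guess at the general approach, not what the Warhawk team actually does:

```python
import math

def chord_length(origin, direction, centre, radius):
    """Length of the segment a ray cuts through a sphere (0 if it misses).
    The sphere is a stand-in for a low-poly cloud hull; direction is unit."""
    ox, oy, oz = (origin[i] - centre[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c                   # quadratic discriminant (a = 1)
    if disc <= 0.0:
        return 0.0
    return math.sqrt(disc)                   # t_far - t_near for unit direction

def cloud_sample(point, view_dir, sun_dir, centre, radius, sigma=1.2):
    """Transparency from the view-ray chord, shading from the sun-ray
    chord, via Beer-Lambert extinction; sigma is an assumed constant."""
    transparency = math.exp(-sigma * chord_length(point, view_dir, centre, radius))
    shading = math.exp(-sigma * chord_length(point, sun_dir, centre, radius))
    return transparency, shading

# Point on the near surface, looking through the full diameter;
# the sun ray only grazes the sphere, so shading is unoccluded.
t_view, s_sun = cloud_sample((-1.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                             (0.0, 1.0, 0.0), (0.0, 0.0, 0.0), 1.0)
```

Two rays per vertex (or per sample point) like this is cheap enough to be plausible on CELL, and needs no precomputation, so it wouldn't restrict animation.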
I like the cleverness of the volumetric fog technique, but IMO it doesn't give you the feel for parallax and the variety that textured alpha layers do.
I've seen ray marching techniques (i.e. steep parallax mapping and variants) used in fur and grass rendering, but not only is that expensive, it doesn't look as good as alpha blending techniques (like the Tomohide demo). It's not as flexible or accurate either.
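For reference, the kind of ray march that steep parallax mapping performs can be sketched as a fixed-step walk through a heightfield in texture space. This is a generic illustration of the technique, not the method from any demo mentioned above; the step count and conventions are arbitrary:

```python
def ray_march_height(heightfield, u, v, view_tan, steps=16):
    """Steep-parallax-style march: step the view ray through a heightfield
    in texture space until it dips below the surface, return the hit UV.
    view_tan = (du, dv) texture offset per unit depth; names illustrative."""
    du = view_tan[0] / steps
    dv = view_tan[1] / steps
    layer = 1.0 / steps                       # depth advanced per step
    depth = 0.0
    while depth < 1.0:
        if heightfield(u, v) >= 1.0 - depth:  # ray has entered the surface
            return u, v
        u += du
        v += dv
        depth += layer
    return u, v                               # fell through: return last UV

# A full-height surface is hit immediately at the entry point.
hit = ray_march_height(lambda u, v: 1.0, 0.2, 0.3, (0.1, 0.0))
```

The cost is one texture fetch per step, every pixel, every frame - which is the expense complained about above, and why a handful of pre-blended alpha layers can still win on both speed and look.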
By no means is this list exhaustive, but I'm skeptical that there's a good substitute out there for most alpha effects.
Well, someone smart once said here that marketing makes most of the hw-design decisions, and I'd say the rest are dictated by the target platform - and closed boxes have quite different requirements than PCs.
For one, you absolutely don't care how your product runs existing/legacy software/benchmarks.
Okay, that's a very good point. Nonetheless, I find it rather shocking that there would be such low incentive for PC devs to conserve bandwidth.