IgnorancePersonified
I like the Toy Shop demo best - it had the biggest visual impact on me - but the Parthenon one looks better than the "artist's impression" scenes from recent docos I've seen. COOL!!
Joe DeFuria said:
Bump...there's a new demo: Parthenon

Yeah, and it didn't really seem to have the same quality as the other demos to me. I mean, they showcased some excellent parallax mapping in conjunction with a whole lot of texture data, but other than that it just didn't seem all that impressive.
Mintmaster said:
I think it will only be in ATI's best interests if you make it work on NV hardware.

That is a darn good point. If it did run, but dog slow, people might think a bit.
Can you just use alternate formats? I assume you're talking about single/double channel FP formats, but I'm not sure. Maybe floating point depth maps to be used as a shadow buffer?
natashaATI said:
The demo could run on Xenos and will run quite well. But we haven't tested this theory.
natashaATI said:
Also, the parallax occlusion mapping technique really takes advantage of the excellent dynamic branching that X1K cards have.

That's good news! I think the good control flow implementation is what interests me the most in the new ATI hardware. Well, that and AA with fp16 render targets (although I'm still concerned that, by my understanding, we don't get aniso/mip-mapped fp16 filtering...). It should certainly be a fun card to play around with if I can get my hands on one.
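For reference, the parallax occlusion mapping being discussed boils down to a per-pixel ray march through a depth map: step along the view ray layer by layer until the ray dips below the stored depth, then interpolate between the last two samples. A minimal CPU sketch in Python - the flat toy depth map, function names, and step count are my own illustration, not the demo's actual shader:

```python
def parallax_march(depth_at, u0, du_total, steps=32):
    """March along the view ray through a 1D depth map.

    depth_at: function returning depth (0 = surface top, 1 = deepest)
    u0:       starting texture coordinate
    du_total: total uv shift the ray covers over the full depth range
    Returns the offset texture coordinate where the ray hits the surface.
    """
    d_layer = 1.0 / steps          # depth advanced per step
    du = du_total / steps          # uv shift per step
    u, layer = u0, 0.0
    prev_u, prev_layer, prev_d = u, layer, depth_at(u)
    for _ in range(steps):
        u += du
        layer += d_layer
        d = depth_at(u)
        if layer >= d:             # ray went below the surface: hit found
            # linearly interpolate between the last two samples
            denom = (layer - d) + (prev_d - prev_layer)
            t = (prev_d - prev_layer) / denom if denom > 0 else 1.0
            return prev_u + t * du
        prev_u, prev_layer, prev_d = u, layer, d
    return u
```

The `if layer >= d` test lets a pixel stop marching as soon as it finds the intersection, which is exactly the kind of per-pixel exit that benefits from good dynamic branching.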
natashaATI said:
Of course, one could think up alternative ways to implement some of the algorithms that we have used in this demo. For example, you could use relief mapping instead of parallax occlusion mapping.

I agree that relief mapping probably isn't going to give you as good results, but did you look into Per-Pixel Displacement Mapping with Distance Functions (Donnelly, GPU Gems 2)? It seems to produce some very nice results, run extremely fast, and support complex geometry with undercuts, etc. I'd be interested to know how it compares to your modified parallax mapping solution, as it is dependent-texture-read heavy but much lighter on control flow.
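The distance-function technique mentioned here trades the per-step height search for a precomputed distance map: a small volume above the surface where each sample stores the distance to the nearest surface point. A brute-force Python sketch of that precomputation - grid sizes and names are my own, and Donnelly's chapter builds a 3D texture with a far faster distance-transform algorithm rather than this O(n^2) loop:

```python
def build_distance_map(height, nx=16, nz=16):
    """Precompute an unsigned distance map over an (x, z) slice.

    height: heightfield function, height(x) in [0, 1]
    Returns dist[k][i] = distance from point (i/(nx-1), k/(nz-1))
    to the nearest surface sample, clamped to 0 below the surface.
    """
    # sample the surface curve
    surf = [(i / (nx - 1), height(i / (nx - 1))) for i in range(nx)]
    dist = [[0.0] * nx for _ in range(nz)]
    for k in range(nz):
        z = k / (nz - 1)
        for i in range(nx):
            x = i / (nx - 1)
            if z < height(x):
                dist[k][i] = 0.0          # inside the surface
            else:
                dist[k][i] = min(
                    ((x - sx) ** 2 + (z - sz) ** 2) ** 0.5
                    for sx, sz in surf
                )
    return dist
```

This is where the storage cost comes from: the map is a volume rather than a 2D heightfield, in exchange for very cheap per-step lookups at render time.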
AndyTX said:
That's good news! I think the good control flow implementation is what interests me the most in the new ATI hardware.

I would tend to agree. If the NV50 doesn't have a good flow control implementation as well, it's going to be a very hard sell (same with HDR FSAA, of course), since by that time we'll have games that have begun to use dynamic flow control, such as UT2007.
OICAspork said:
The real reason I posted a reply goes back to my original question about animation production. Is ATI pushing for use in development of commercial 3D animation with their hardware solutions? Considering what it was able to render in real time, it seems like an array of current-generation cores from ATI could render television-series-quality animation orders of magnitude faster than is done through multiple CPUs...

Maybe, but remember that it's not CPU time that costs, it's man hours. Making a scene realtime requires a lot more work than doing an offline render. In fact, the latter is often the first step in the former.
psurge said:
AndyTX - is that the method using sphere tracing? If so, its major drawback appears to be storage space (a 3D texture versus a 2D heightfield). Also, IIRC the authors mention that it is amenable to optimization with dynamic branching (early exit) as well, even if they haven't implemented it in the example shaders provided.
AndyTX said:
psurge - yes, that's the one, and yes, the major disadvantage is memory usage, although that can be handled in a variety of ways (compression, a lower-res distance map, etc.). And yes, good dynamic branching support would allow an "early-out" on pixels that converge quickly. This would be a moderate win at most for high-frequency distance maps due to thread-coherency problems, but I suspect a huge win for smooth data sets.
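The early-out being described fits naturally into the sphere-tracing loop: each iteration advances the ray by the queried distance (a safe step that cannot overshoot), and a pixel can stop as soon as that distance falls below a threshold. A small Python sketch - the loop structure follows sphere tracing, while the unit-sphere distance function in the test below is just an assumption for illustration:

```python
def sphere_trace(dist, origin, direction, max_steps=64, eps=1e-4):
    """Sphere-trace a ray against a distance field.

    dist: function giving a lower bound on distance to the surface
    Returns (t, steps_taken); converged pixels exit well before
    max_steps, which is the branch a dynamic-branching GPU exploits.
    """
    t = 0.0
    for step in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = dist(p)
        if d < eps:        # converged: this thread can stop early
            return t, step
        t += d             # safe step: cannot pass through the surface
    return t, max_steps
```

Rays hitting a surface head-on converge in a handful of steps, while grazing rays near silhouettes need many more - which is why the win depends on how coherent neighbouring pixels are.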
OICAspork said:
=D My post (and the entries in it) lured the ATI demo team into the forums. Thanks so much Chris, Thorsten, and Natasha for your time and information. XD Out of curiosity, does Humus hang out with you guys a lot?