?s about next gen software

pc999

Veteran
Can anyone explain (in an "easy" way) what these techniques are, how they work, and why they are good (besides being cheaper and adding detail) in terms of performance and graphics?

The new 3DMark engine is a next generation shader engine, with a wide variety of shaders dynamically built and runtime compiled

http://www.beyond3d.com/interviews/fm04/

"I've seen demos of terrain and worlds, with no textures in them whatsoever and no geometry - it's just a program that's creating a scene for you," Allard said. Going on to describe the time- and money-saving techniques facilitated by next-generation tools and hardware, he explained his notion of 'procedural synthesis':

"There's a lot of new techniques," Allard continued. "Like what shaders have done for 3D, there are a lot of new next-generation techniques for procedural synthesis that's really going to change how game construction is done, but also what the environment looks like so it feels a lot less 'cookie cutter' [i.e. repetitive]." Rather the like the 'Library' level in Halo, we'd imagine, where it was very easy to become disorientated by the repeated environmental features


http://www.computerandvideogames.com/r/?page=http://www.computerandvideogames.com/news/news_story.php(que)id=105001

Thanks in advance
 
I think virtual displacement mapping (I think I've heard it called offset mapping?) will be predominant next gen. Although I don't know enough about it, it's why Unreal Engine 3 graphics look so good.
 
Unreal Engine 3 (NOT Unreal 3, which will apparently use an older engine) looks as good as it does for a number of reasons, not just because of offset mapping, which is hard to find in any of the screenshots provided so far anyway.
UE3 is using an array of features that make it look as good as it does: first and foremost the lighting system (amazing), soft shadows and normal maps (poly reduction, which is the same thing being used today). I'm not sure I have seen any offset mapping in the screens yet. Just very good *old* per-pixel lighting. Oh, and let's not forget, a whole lot of geometry!
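For what it's worth, the core trick behind offset/parallax mapping is small: shift each texture lookup along the view direction by an amount proportional to a height-map sample, so flat geometry appears to have relief. A minimal CPU-side sketch of just that math (in a real engine this runs per-pixel in a shader; the scale/bias defaults below are only illustrative):

```cpp
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Classic parallax ("offset") mapping: nudge the texture coordinate along
// the tangent-space view direction, scaled by the sampled height. The
// shifted UV is then used for the actual colour/normal-map fetch.
Vec2 parallaxOffsetUV(Vec2 uv,
                      Vec3 viewTS,   // view direction in tangent space, normalized
                      float height,  // 0..1 sample from the height map at uv
                      float scale = 0.04f,
                      float bias = -0.02f)
{
    float h = height * scale + bias;   // remap height into a small UV offset
    return { uv.x + viewTS.x * h,      // high spots shift toward the viewer,
             uv.y + viewTS.y * h };    // faking depth on flat polygons
}
```

At grazing angles the offset grows, which is exactly where this simple version starts to "swim"; fancier variants actually trace through the height field instead.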
 
"I've seen demos of terrain and worlds, with no textures in them whatsoever and no geometry - it's just a program that's creating a scene for you," Allard said. Going on to describe the time- and money-saving techniques facilitated by next-generation tools and hardware, he explained his notion of 'procedural synthesis':

I guess he is talking about the demoscene, where runtime geometry and texture generation have been used for a long time as a means to conserve space. Prime examples would probably be farbrausch/theprodukkt, with stuff like the.product and poem.to.a.horse (64k intros), .kkrieger (a 96k PS/VS 1.2 "game") and .werkkzeug1 (a content creation tool).
 
Yeah, the farbrausch guys have really mastered the art of geometry and texture generation. While it does rely on large amounts of precomputation and isn't all that friendly on the RAM front, it is really a great achievement.
 
pc999 said:
Can anyone explain (in an "easy" way) what these techniques are

I'll give it a shot. :D

The new 3DMark engine is a next generation shader engine, with a wide variety of shaders dynamically built and runtime compiled

One reason there have been controversies over 3DMark in the past is that the shaders written for it ran sub-optimally on some hardware, due to differences in basic GPU design between vendors' products. (Comparison: compiling Half-Life without 3DNow! support made it run poorly on the AMD K6-2, while it ran well on Intel processors because they had better FPUs.)

Instead, by writing shaders in a high-level language and compiling them at run-time, each GPU's driver can tailor the shader as best it can to the actual hardware design; i.e., it would be like getting Half-Life to compile at run-time WITH 3DNow! support when it's started on a K6-2 system. The output would be the same in all cases - the game being played is Half-Life - but the shaders on different hardware would contain different instructions to best take advantage of THAT particular platform's strengths.
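To make that concrete, here's roughly what runtime shader compilation looks like through the Direct3D 9 D3DX helpers - a minimal sketch assuming an already-created device; the "main" entry point and the ps_2_0 profile are just example choices, and 3DMark's actual pipeline is of course more involved:

```cpp
#include <cstring>
#include <d3dx9.h>  // D3DX utility library for Direct3D 9

// Compile HLSL source at run-time. The runtime/driver lowers the high-level
// code to whatever instruction mix suits the installed GPU - same source,
// hardware-tailored output, which is the point made above.
IDirect3DPixelShader9* compilePixelShader(IDirect3DDevice9* device,
                                          const char* hlslSource)
{
    ID3DXBuffer* code = nullptr;
    ID3DXBuffer* errors = nullptr;

    HRESULT hr = D3DXCompileShader(hlslSource, (UINT)std::strlen(hlslSource),
                                   nullptr, nullptr,
                                   "main",    // entry point in the HLSL source
                                   "ps_2_0",  // target shader profile
                                   0, &code, &errors, nullptr);
    if (FAILED(hr)) {
        if (errors) errors->Release();  // errors would hold the compiler message
        return nullptr;
    }

    IDirect3DPixelShader9* ps = nullptr;
    device->CreatePixelShader((const DWORD*)code->GetBufferPointer(), &ps);
    code->Release();
    return ps;
}
```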

"I've seen demos of terrain and worlds, with no textures in them whatsoever and no geometry - it's just a program that's creating a scene for you,"

This might be a time- and money-saving technique IN A WAY, but not necessarily. What it really does allow, however, is greater variety.

For example, instead of having an artist painstakingly build a tree out of polygons, you write a piece of software that runs a (fractal) algorithm that builds the tree for you. By tweaking certain parameters - like how far apart branches should be on the trunk, whether branches sprout out in concentric rings or not, the angle they point outwards from the trunk, etc. - different trees can be created, and they'd all look different, even trees of the same species. Generating a tree with millions of polygons in it would only require a few dozen bytes of data to describe the tree to the algorithmic generator, but storage space would of course be traded for computation when the tree has to be built in order to be displayed on-screen. This would likely happen during level load.
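A toy sketch of that idea: the whole tree below is described by one four-field struct plus a starting position, in the spirit of "a few dozen bytes in, lots of geometry out". Every name and number here is invented for the example - a real generator would emit meshes, not printed line segments:

```cpp
#include <cstdio>
#include <cmath>

// A handful of parameters fully describes the tree's shape; vary them (or
// add a random seed) and every tree comes out different.
struct TreeParams {
    int   depth;      // recursion levels
    int   branches;   // children per branch
    float spreadDeg;  // fan angle between a branch and its children
    float shrink;     // length ratio child/parent
};

void growBranch(float x, float y, float angleDeg, float length,
                int level, const TreeParams& p)
{
    float rad = angleDeg * 3.14159265f / 180.0f;
    float x2 = x + std::cos(rad) * length;
    float y2 = y + std::sin(rad) * length;
    std::printf("segment (%.2f,%.2f)-(%.2f,%.2f)\n", x, y, x2, y2);

    if (level >= p.depth) return;
    // Fan the children out around the parent's direction.
    for (int i = 0; i < p.branches; ++i) {
        float t = p.branches > 1 ? (float)i / (p.branches - 1) : 0.5f;
        float childAngle = angleDeg + (t - 0.5f) * 2.0f * p.spreadDeg;
        growBranch(x2, y2, childAngle, length * p.shrink, level + 1, p);
    }
}

int main() {
    TreeParams oak{5, 3, 35.0f, 0.7f};  // tweak these and the tree changes
    growBranch(0.0f, 0.0f, 90.0f, 1.0f, 1, oak);  // grow straight up
}
```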

The same basic technique can be applied to textures: a general fractal algorithm that, when its parameters are set a certain way, creates patterns that look like stone, wood grain, brushed metal, skin/leather/scales or whatever. The program could even generate bump/parallax maps to create fake 3D depth at the same time, but lots of these maps at high resolutions would require lots of computation. They would also need to be compressed afterwards, and doing automatic realtime texture compression that still gets good results is also computation-heavy.
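In the same spirit, here's a minimal procedural "wood grain": concentric rings warped by a bit of value noise, so changing a couple of numbers yields a different plank. The hash constants, ring count and warp amount are all arbitrary example choices; real tools would use better noise (Perlin/simplex) and many layered patterns:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Tiny hash-based value noise - just enough to perturb a pattern.
static float hash2(int x, int y) {
    uint32_t h = (uint32_t)x * 374761393u + (uint32_t)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xffff) / 65535.0f;  // 0..1
}

static float valueNoise(float x, float y) {
    int xi = (int)std::floor(x), yi = (int)std::floor(y);
    float fx = x - xi, fy = y - yi;
    // Bilinear blend of the four surrounding lattice values.
    float a = hash2(xi, yi),     b = hash2(xi + 1, yi);
    float c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
    float top = a + (b - a) * fx, bot = c + (d - c) * fx;
    return top + (bot - top) * fy;
}

// "Wood grain": rings around the texture centre, warped by noise.
float woodGrain(float u, float v, float ringCount = 12.0f, float warp = 0.4f) {
    float cx = u - 0.5f, cy = v - 0.5f;
    float r = std::sqrt(cx * cx + cy * cy) + warp * valueNoise(u * 8, v * 8);
    return 0.5f + 0.5f * std::sin(r * ringCount * 6.28318f);  // 0..1 intensity
}

int main() {
    for (int y = 0; y < 8; ++y) {        // dump a tiny 8x8 patch of the pattern
        for (int x = 0; x < 8; ++x)
            std::printf("%4.2f ", woodGrain(x / 8.0f, y / 8.0f));
        std::printf("\n");
    }
}
```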

So in part it would perhaps reduce the workload on artists in some regards, but only for certain features that can be generated algorithmically - mostly plant life really, and also larger geographical features such as hills, valleys, mountain ranges and such, though I feel level designers would rather retain control over those themselves than entrust them to some random generator. ;) Also, the tools would have to be made first (and that is probably not a small task), and they have to be used correctly too. And houses, man-made objects, creatures etc. still have to be made by hand...
 
Sorry to bring this topic back, but now I have more questions.

Considering DeanoC's comments about Xenon, could the XeGPU do those things on the fly, if it has the memory and/or compression for it?
Would the massively multi-threaded environment be good for that kind of stuff?

Thanks.
 
You can do stuff like this on a C64 if you want to. There was a game called Rescue on Fractalus a loong, long time ago...

The real question is rather: "How detailed can we make this stuff in realtime?"

The answer is: still not very detailed. Considerably better than Rescue on Fractalus of course, but precomputing would allow lots more detail, with no real downside to it. There's simply no point in spending millions of cycles re-drawing all the textures and models every frame.
 
That is the other point of view. I must admit I hadn't thought of it that way, and it's a good one, but it still doesn't solve the problem of the rising cost of art creation :( . Maybe they should use procedural techniques during development, review the results, and then bake them into the final work; they should have enough cycles to do that :D.

Thanks
 
One area that is very well suited to fractal generation is landscapes. You can have an artist and/or level designer map out all the large-scale details like hills, canyons and the mountain troll's cave, and then have a fractal algorithm add the small-scale detail like pebbles, bumps and small rocks. This both saves time and is fairly computationally efficient.
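A rough sketch of that split, with the designer's coarse heightmap as a placeholder function and fractal (fBm-style) noise layered on top for the fine detail. The octave count, gain and every name here are invented for the example:

```cpp
#include <cmath>
#include <cstdint>

// Tiny hash-based value noise, signed so it adds bumps both up and down.
static float hash2(int x, int y) {
    uint32_t h = (uint32_t)x * 374761393u + (uint32_t)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xffff) / 65535.0f - 0.5f;  // -0.5..0.5
}

static float valueNoise(float x, float y) {
    int xi = (int)std::floor(x), yi = (int)std::floor(y);
    float fx = x - xi, fy = y - yi;
    float a = hash2(xi, yi),     b = hash2(xi + 1, yi);
    float c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
    float top = a + (b - a) * fx, bot = c + (d - c) * fx;
    return top + (bot - top) * fy;
}

// Stand-in for the hand-authored coarse terrain (one big rolling hill here;
// in practice this would sample the designer's heightmap).
float designerHeight(float x, float y) {
    return 40.0f * std::sin(x * 0.01f) * std::cos(y * 0.01f);
}

// Final height = designer's large features + fractal small-scale detail.
float terrainHeight(float x, float y) {
    float h = designerHeight(x, y);
    float amp = 2.0f, freq = 0.1f;
    for (int octave = 0; octave < 5; ++octave) {
        h += amp * valueNoise(x * freq, y * freq);
        amp *= 0.5f;   // each octave is smaller...
        freq *= 2.0f;  // ...and finer-grained
    }
    return h;
}
```

Because each octave halves in amplitude and doubles in frequency, the designer's shape dominates at large scales while the noise only contributes the pebbles and bumps - which is exactly the division of labour described above.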
 