I know that no one here knows what VS 3.0-capable hardware will actually be capable of (and if you do, you're under NDA), but I wanted to ask whether my approach is so ridiculously slow that it's impossible for the NV40/R420 to handle unless an unimaginable improvement comes along.
As a hobby project, I would like to design a game with huge, extremely detailed terrains. Since I can't possibly store the entire terrain geometry, I have to make it almost completely procedural. That is, a texture map specifies the height every 64 or 128 meters or so, and all points in between are interpolated, with Perlin noise added for variety. I'd like to think we're at the point where I don't have to deal with slow, CPU-intensive, needlessly complicated algorithms that constantly rework a limited pool of geometry; basically, I want to push a static flat mesh (with built-in LOD) to the vertex shader, which would perform all the height calculations and extrude the terrain to the appropriate height, one vertex at a time, every frame.
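To make the idea concrete, here's a rough CPU-side sketch of the per-vertex work the shader would have to do: bilinear interpolation of the coarse heightmap plus a few octaves of noise for the in-between detail. The constants are made up, and the hash-based value noise is just a stand-in for real Perlin noise; this isn't code from a working implementation.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr float kCellSize = 64.0f;   // metres between coarse heightmap samples (assumed)
constexpr int   kMapSize  = 256;     // coarse heightmap is kMapSize x kMapSize (assumed)

// Coarse heightmap: one stored height per 64 m grid point.
std::vector<float> g_heightMap(kMapSize * kMapSize, 0.0f);

float sampleMap(int x, int z) {
    // Wrap coordinates so the terrain can repeat "endlessly".
    x = ((x % kMapSize) + kMapSize) % kMapSize;
    z = ((z % kMapSize) + kMapSize) % kMapSize;
    return g_heightMap[z * kMapSize + x];
}

// Cheap hash-based value noise in [-1, 1]; a stand-in for Perlin noise.
float valueNoise(float x, float z) {
    int   xi = (int)std::floor(x), zi = (int)std::floor(z);
    float fx = x - xi, fz = z - zi;
    auto hash = [](int ix, int iz) {
        uint32_t h = (uint32_t)ix * 374761393u + (uint32_t)iz * 668265263u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return (float)(h & 0xFFFF) / 65535.0f * 2.0f - 1.0f;
    };
    auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
    float sx  = fx * fx * (3.0f - 2.0f * fx);   // smoothstep fade
    float sz  = fz * fz * (3.0f - 2.0f * fz);
    float top = lerp(hash(xi, zi),     hash(xi + 1, zi),     sx);
    float bot = lerp(hash(xi, zi + 1), hash(xi + 1, zi + 1), sx);
    return lerp(top, bot, sz);
}

// The per-vertex function: bilinearly interpolate the coarse map, then add a
// few octaves of noise for the detail between the 64 m samples.
float terrainHeight(float worldX, float worldZ) {
    float gx = worldX / kCellSize, gz = worldZ / kCellSize;
    int   x0 = (int)std::floor(gx), z0 = (int)std::floor(gz);
    float fx = gx - x0, fz = gz - z0;

    float h00 = sampleMap(x0, z0),     h10 = sampleMap(x0 + 1, z0);
    float h01 = sampleMap(x0, z0 + 1), h11 = sampleMap(x0 + 1, z0 + 1);
    float base = (h00 * (1 - fx) + h10 * fx) * (1 - fz)
               + (h01 * (1 - fx) + h11 * fx) * fz;

    // Detail octaves: amplitude halves and frequency doubles each step.
    float detail = 0.0f, amp = 8.0f, freq = 1.0f / 16.0f;
    for (int octave = 0; octave < 4; ++octave) {
        detail += amp * valueNoise(worldX * freq, worldZ * freq);
        amp  *= 0.5f;
        freq *= 2.0f;
    }
    return base + detail;
}

int main() {
    // Example: query the height at one world position.
    std::printf("height at (100, 250): %f\n", terrainHeight(100.0f, 250.0f));
}
```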
Would this be impossibly slow? I only own a GF2MX right now, so I don't know anything about the speed of vertex shaders. My 2 GHz Pentium 4 can run the algorithm on 64k points in 200 milliseconds. I figure that with their 16 parallel vertex shaders, the cards of the future (NV40) ought to have no problem with 64k vertices. The problem is that I'll probably need a lot more vertices than that; how many exactly, I don't know. I'm doing lots of testing now to find the smallest count that still looks good. Does anyone who has created "realistic" terrain have an idea? I'd also like to be able to see quite far...
Anyway, it seems like it should be possible, given that the card is at least 16 times faster than my test CPU (which doesn't perform terribly) and probably much faster than that, since it can do certain math operations very quickly. On the other hand, I've heard people say that shaders running into the hundreds of instructions will crawl if given more than a few objects. That runs counter to my estimates, and since my vertex shader would be quite long (it might even skirt the 512-instruction limit), I'm worried that my forward-looking hope for simple, good-looking, near-infinite terrain is simply impossible. I know no one knows exactly how these cards will perform, but since a lot of you have some experience with the hardware, I figured you'd at least know whether the plan is ridiculous. And that's obviously something I should know before I spend more time on it.
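For what it's worth, here's the back-of-envelope arithmetic I'm doing. Every GPU number in it is a pure guess (clock speed, vertex units, instructions per clock), since nothing real is public yet; it's only meant to show how the estimate is put together against my measured CPU baseline.

```cpp
#include <cstdio>

int main() {
    // Measured CPU baseline from above: 64k vertices in 200 ms.
    double cpuVertsPerSec = 65536.0 / 0.2;                            // ~328k verts/s

    // Hypothetical GPU: 16 vertex units, an assumed 400 MHz clock, and an
    // assumed 1 instruction per clock per unit. All three are guesses.
    double gpuClock       = 400e6;
    int    vertexUnits    = 16;
    int    shaderLength   = 400;                                      // instructions, near the 512 limit
    double gpuVertsPerSec = gpuClock * vertexUnits / shaderLength;    // ~16M verts/s under these guesses

    std::printf("CPU: %.0f verts/s, hypothetical GPU: %.0f verts/s\n",
                cpuVertsPerSec, gpuVertsPerSec);
    std::printf("At 60 fps that would be roughly %.0f verts per frame\n",
                gpuVertsPerSec / 60.0);
    return 0;
}
```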