A lot of people just think, “Oh, of course I want more flexibility, I’d love to have multiple CPUs doing all these different things,” and there are a lot of people that don’t really appreciate what the suffering is going to be like as we move through that. That’s certainly going on right now as software tries to move things over, and it’s not “oh, just thread your application”. Anyone that says that is basically an idiot, not appreciating the problems.
Amen.
Don't let the console fans hear you or they'll call you a lazy, over-the-hill, relic of the cold war.
Getting good utilization out of multiprocessor systems, SMP UMA systems in particular, is non-trivial. That doesn't mean that monolithic single-threaded cores are the way forward; I've been lamenting the lack of explicitly MP-aware programming courses at our local universities for well over a decade.
Just because Carmack is right to point out the difficulties of parallelising computational tasks doesn't mean it isn't the way forward anyway. It is, but expectations for the degree of core utilization in the general case shouldn't be too optimistic.
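To make the "it's not just thread your application" point concrete, here's a minimal C++11 sketch (my own illustration, not anything from the discussion): a naive parallel sum that races on a shared accumulator, next to a corrected version that gives each thread its own partial result.

```cpp
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::vector<int> data(1000000, 1);
    const unsigned n_threads = 4;
    const std::size_t chunk = data.size() / n_threads;

    // Naive "just thread it": every thread bumps one shared counter.
    // This is a data race (undefined behavior); even patched up with
    // std::atomic, all cores would serialize on a single cache line.
    long long racy_sum = 0;
    {
        std::vector<std::thread> threads;
        for (unsigned t = 0; t < n_threads; ++t)
            threads.emplace_back([&, t] {
                for (std::size_t i = t * chunk; i < (t + 1) * chunk; ++i)
                    racy_sum += data[i];  // unsynchronized read-modify-write
            });
        for (auto& th : threads) th.join();
    }

    // Correct decomposition: each thread reduces its own chunk into a
    // local accumulator, and the partial results are combined after the
    // join. No shared mutable state while the threads run.
    std::vector<long long> partial(n_threads, 0);
    {
        std::vector<std::thread> threads;
        for (unsigned t = 0; t < n_threads; ++t)
            threads.emplace_back([&, t] {
                long long local = 0;  // local sum avoids false sharing on partial[]
                for (std::size_t i = t * chunk; i < (t + 1) * chunk; ++i)
                    local += data[i];
                partial[t] = local;
            });
        for (auto& th : threads) th.join();
    }
    long long correct_sum =
        std::accumulate(partial.begin(), partial.end(), 0LL);

    std::cout << "racy: " << racy_sum
              << "  correct: " << correct_sum << '\n';
}
```

Even this toy shows the shape of the work: decompose the task, eliminate shared state, recombine the results. Real applications rarely decompose this cleanly, which is exactly the suffering being described.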
The problem is not that it looks bad for distant objects (it doesn't). The problem is that you want near-smooth gradation of LODs to limit pop-up/flickering. Pop-up is annoying and destroys the overall impression. If you're aiming for realism and have lots of LOD pop-up, it just looks bad.

There's not enough geometry density right now to use any real-time mesh decimation algorithms, but if one wants that for distant LODs, I'm not sure people would mind slight inaccuracies in models when they're 50 or 100 feet away from the player.
The problem is that you can't avoid LOD at all. Regardless of speed, you can't just throw all of your geometry at the GPU and expect it to look good. Unless you're willing to do a ton of super-sampling, you want some way to "pre-filter" your geometry. This is just as true for ray tracing, which doesn't have the same performance problems without LOD but still has the same quality issues.

Not to mention it's usually faster to just render the full-resolution geometry than to do all the LOD calculations, transitions, and bus transfers, unless you're doing only the simplest forms of LOD, in which case the popping issues are usually too unsightly.
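To illustrate why the popping is so stubborn (a generic sketch of distance-based LOD selection, not anything specific from this thread), the standard cheap fix is hysteresis: each level has a different switch-in and switch-out distance, so a model hovering near a boundary keeps its current mesh instead of flickering between two every frame.

```cpp
#include <cstddef>
#include <vector>

// One entry per detail level, finest first. The thresholds are
// hypothetical tuning values, not from any particular engine.
struct LodLevel {
    float switch_in;   // switch TO this level once at least this far away
    float switch_out;  // switch back to the finer level below this distance
    // Mesh* mesh; ... whatever the renderer needs
};

// Distance-based selection with hysteresis. Requires
// switch_out < switch_in for each level; the gap between them is
// what suppresses flickering at LOD boundaries.
std::size_t select_lod(const std::vector<LodLevel>& lods,
                       std::size_t current, float distance) {
    // Go coarser only once we're clearly past the next level's threshold.
    while (current + 1 < lods.size() &&
           distance >= lods[current + 1].switch_in)
        ++current;
    // Go finer only once we're clearly back inside the current level.
    while (current > 0 && distance < lods[current].switch_out)
        --current;
    return current;
}
```

Note this only hides the discrete switches; the near-smooth gradation mentioned above needs cross-fading or geomorphing between levels on top of it, which costs more again.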
Sure, fair enough. The tessellation stuff in DX11 is kind of a step in that direction, but it's probably in the wrong place in the pipeline in the long run... indeed, it practically guarantees lots of sub-pixel triangles, hosing the efficiency of the current vertex/fragment GPU pipeline.

I would like to think that long before that ever became a major concern we would've moved beyond a strictly triangle-mesh format for our geometric data, into something where the frequency of the data depends on the frequency of the sampling, in which case aliasing is handled automatically (as in, you only tessellate as far down as you need to based on proximity to the camera); i.e., something like what Carmack is proposing here: subdivision surfaces, NURBS, etc. Once you're at the level of detail we're talking about here (triangles smaller than pixels), you're certainly not going to be modeling with triangle meshes, and there will definitely be better ways to represent your geometry.
Since the programmer will be choosing the tessellation factor, it'll be the programmer's fault if the efficiency tanks.
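For what it's worth, the usual way that factor gets chosen is from projected edge length, which is exactly the knob that keeps triangles out of the sub-pixel regime. Here's a back-of-the-envelope C++ rendition of the math (in DX11 proper this would live in the hull shader's patch-constant function; all names and parameters here are illustrative):

```cpp
#include <algorithm>
#include <cmath>

// Choose a tessellation factor for one patch edge so the generated
// triangles end up roughly `target_px` pixels across on screen.
float edge_tess_factor(float edge_world_length,
                       float edge_view_distance,  // camera to edge midpoint
                       float vertical_fov_rad,
                       float viewport_height_px,
                       float target_px) {
    // Standard perspective projection: a world-space length L at
    // distance d covers about L * H / (2 * d * tan(fov/2)) pixels.
    float projected_px =
        edge_world_length * viewport_height_px /
        (2.0f * edge_view_distance * std::tan(0.5f * vertical_fov_rad));

    // One tessellation segment per `target_px` pixels of edge.
    float factor = projected_px / target_px;

    // D3D11 clamps tessellation factors to [1, 64].
    return std::clamp(factor, 1.0f, 64.0f);
}
```

With target_px around 8-16 the triangles stay comfortably above a pixel; push it toward 1 and you get exactly the sub-pixel soup described above.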