Carmack talks about id Tech 6, Ray Tracing, Consoles, Physics etc.

A lot of people just think “oh of course I want more flexibility I’d love to have multiple CPUs doing all these different things” and there’s a lot of people that don’t really appreciate what the suffering is going to be like as we move through that; and that’s certainly going on right now as software tries to move things over, and it’s not “oh just thread your application”. Anyone that says that is basically an idiot, not appreciating the problems.

Amen.
 

Don't let the console fans hear you or they'll call you a lazy, over-the-hill relic of the Cold War.

As for the article: Carmack is, as usual, very interesting, but I do wish he'd be a little more precise about what kind of time-frame he is talking about (id Tech 5 is still not out, so I'm guessing id Tech 6 is at least 5 years away). Also, he has previously said virtualised geometry was pushed out of id Tech 5 into id Tech 6 because there's not enough geometry density right now to use any real-time mesh decimation algos, but if he wants that for distance LoDs, I'm not sure people would mind slight inaccuracies in models when they're 50 or 100 feet away from the player.
 
Don't let the console fans hear you or they'll call you a lazy, over-the-hill relic of the Cold War.

And rightly so. :p

Everyone who is still having this discussion right now... come on! :p

He's just saying that to discourage developers from developing their own engine instead of purchasing the Rage6 engine (never mind the Unreal Engine). :p
 
Don't let the console fans hear you or they'll call you a lazy, over-the-hill relic of the Cold War.

:)
Getting good utilization out of multiprocessor systems, SMP UMA systems in particular, is non-trivial. That doesn't mean that monolithic single-threaded cores are the way forward; I've been lamenting the lack of explicitly MP-aware programming courses at our local universities for well over a decade.

Just because Carmack is right in pointing out the difficulties with parallelising computational tasks, doesn't mean that it isn't the way forward anyway. It is, but expectations for the degree of core utilization in the general case shouldn't be too optimistic.
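As a rough illustration of why those expectations should be modest, Amdahl's law puts a hard ceiling on speedup from whatever fraction of the work stays serial. A minimal sketch (the 70% parallel fraction is an invented figure, purely for illustration, not a measurement from any engine):

```cpp
// Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
// the work that can run in parallel and n is the core count.
#include <cstdio>
#include <initializer_list>

int main() {
    const double p = 0.7;  // hypothetical parallelizable fraction (illustrative only)
    for (int n : {1, 2, 4, 8, 16, 32}) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("%2d cores: %.2fx speedup, %3.0f%% utilization\n",
                    n, speedup, 100.0 * speedup / n);
    }
}
```

Even with 70% of the frame parallelized, 32 cores buy only about a 3.1x speedup, i.e. under 10% utilization.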
 
:)
Getting good utilization out of multiprocessor systems, SMP UMA systems in particular, is non-trivial. That doesn't mean that monolithic single-threaded cores are the way forward; I've been lamenting the lack of explicitly MP-aware programming courses at our local universities for well over a decade.

I'm actually considering enrolling in one for my master's. It's definitely the future, and I've been looking at more programming-heavy courses; it seems nowadays most just focus too much on management and high-level architectural design.

Just because Carmack is right in pointing out the difficulties with parallelising computational tasks, doesn't mean that it isn't the way forward anyway. It is, but expectations for the degree of core utilization in the general case shouldn't be too optimistic.

Absolutely, and I think many people take Carmack's complaining at face value, when Q3 was the first game (or at least the first AAA game) to take advantage of SMP, and Q4 was the first (again, among AAA titles) to properly use Hyper-Threading/dual core.

id Tech 5, in particular, seems very thread-happy: being a multi-platform engine, it uses two primary threads for graphics and gameplay and then splits work off onto available cores for effects physics, content streaming/(de)compression, etc., as well as the usual async sound thread. Quake Wars already follows this model with two major threads plus the MegaTexture thread.

So while he complains, he's still at the forefront doing the actual work.
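For anyone wondering what that "couple of primary threads plus jobs on spare cores" shape looks like in code, here's a minimal sketch; the job names are made up for illustration and are not id Tech 5's actual task names:

```cpp
// Minimal "two primary threads + worker jobs" sketch (C++11). A render thread
// and the gameplay code run every frame, while small independent jobs
// (effects physics, streaming decompression, ...) are farmed out to spare cores.
#include <cstdio>
#include <future>
#include <thread>
#include <vector>

void runGameLogic()   { std::puts("game logic"); }
void runRenderer()    { std::puts("renderer"); }
void effectsPhysics() { std::puts("effects physics job"); }
void decompressPage() { std::puts("texture page decompression job"); }

int main() {
    for (int frame = 0; frame < 2; ++frame) {
        // Kick off this frame's worker jobs.
        std::vector<std::future<void>> jobs;
        jobs.emplace_back(std::async(std::launch::async, effectsPhysics));
        jobs.emplace_back(std::async(std::launch::async, decompressPage));

        // The two "primary" threads: rendering on its own thread, gameplay here.
        std::thread render(runRenderer);
        runGameLogic();
        render.join();

        for (auto& j : jobs) j.get();  // sync workers before starting the next frame
    }
}
```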
 
I thought it was really interesting that he mentioned "refraction skeletons" as a way to do animated characters with raytracing, but he didn't talk much about it and he seems to have coined the term "refraction skeleton" just now as a Google search returns nothing else. Surely he didn't invent this whole concept just now; what do other people call it?
 
...there's not enough geometry density right now to use any real-time mesh decimation algos, but if he wants that for distance LoDs, I'm not sure people would mind slight inaccuracies in models when they're 50 or 100 feet away from the player.
The problem is not that it looks bad for distant objects (it doesn't). The problem is that you want near-smooth gradation between LODs to limit pop-up/flickering. Pop-up is annoying and destroys the overall impression. If you're aiming for realism and have lots of LOD pop-ups, it just looks bad.
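One common way to keep pop-up from turning into flicker at the LOD boundaries is to put a hysteresis band around each switch distance. A rough sketch (the distances and margin are arbitrary example values, not from any engine):

```cpp
// Sketch of distance-based LOD selection with a hysteresis band, so an object
// sitting right on a threshold doesn't flicker between two levels every frame.
// The switch distances and margin are arbitrary example values.
#include <array>
#include <cstdio>

int selectLod(float distance, int currentLod) {
    static const std::array<float, 3> thresholds = {20.f, 50.f, 100.f};  // metres
    const float margin = 5.f;  // hysteresis band around each threshold

    // LOD the plain distance test would pick.
    int target = 0;
    while (target < (int)thresholds.size() && distance > thresholds[target]) ++target;

    // Only switch once we've moved clearly past the boundary, one level at a time.
    if (target > currentLod && distance > thresholds[currentLod] + margin)
        return currentLod + 1;
    if (target < currentLod && distance < thresholds[currentLod - 1] - margin)
        return currentLod - 1;
    return currentLod;
}

int main() {
    int lod = 0;
    for (float d : {10.f, 22.f, 26.f, 24.f, 60.f, 48.f})
        std::printf("distance %.0f -> LOD %d\n", d, lod = selectLod(d, lod));
}
```

An object hovering around the 20 m boundary then stays on its current level until it has clearly crossed it, instead of swapping meshes every frame.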
 
Not to mention that it's usually faster to just render the full-resolution geometry than to do all the LOD calculations, transitions, and bus transfers, unless you're doing only the simplest forms of LOD, in which case the popping issues are usually too unsightly.

This is one of those areas of computer graphics where tons and tons of research has been done over the years, and it never really made sense from the beginning and will make even less sense as time goes on. All that research (and there's probably more on this subject than on any other single one) was, IMHO, almost a complete waste of time and resources.
 
The problem is not that it looks bad for distant objects (it doesn't). The problem is that you want near-smooth gradation between LODs to limit pop-up/flickering. Pop-up is annoying and destroys the overall impression. If you're aiming for realism and have lots of LOD pop-ups, it just looks bad.

Remembers the pointy heads in Doom 3.
 
I'm not convinced that pop-up would be an issue with LOD in a raycasting method. I say raycasting instead of raytracing because raycasting has already been proven to work quite well on GPUs, and perhaps Carmack is simply extending this with virtual 3D texturing... For one thing, if you were raycasting and the surface was defined as an iso-surface, wouldn't the geometry subtly morph between LOD levels?
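That intuition is easy to demonstrate in one dimension. This is not Carmack's method, just a toy sketch of why an iso-surface defined by filtered volume data morphs instead of popping: blending a fine and a coarse copy of a distance field moves the zero crossing continuously.

```cpp
// Toy 1-D illustration: the "surface" is the zero crossing of a distance
// field, and linearly blending a fine field with a filtered (coarse) field
// moves that crossing continuously with the blend factor, so the geometry
// morphs rather than pops. The sample values are invented.
#include <cstdio>

// Position along the ray where the blended field crosses zero (linear search).
float findSurface(const float* fine, const float* coarse, int n, float t) {
    for (int i = 1; i < n; ++i) {
        float a = (1 - t) * fine[i - 1] + t * coarse[i - 1];
        float b = (1 - t) * fine[i]     + t * coarse[i];
        if (a > 0 && b <= 0)              // sign change: surface lies between samples
            return (i - 1) + a / (a - b); // interpolate the exact crossing
    }
    return -1.f;                          // no surface hit
}

int main() {
    const float fine[]   = {3.f, 2.f, 1.f, -1.f, -2.f};    // detailed field
    const float coarse[] = {3.f, 2.5f, 1.5f, 0.5f, -1.f};  // filtered, low-res field
    for (float t : {0.f, 0.25f, 0.5f, 0.75f, 1.f})
        std::printf("blend %.2f -> surface at %.2f\n", t, findSurface(fine, coarse, 5, t));
}
```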
 
It's an inspiring article. I also agree that pop-up isn't necessarily going to be an issue; that's just a matter of being smarter with storage. I think his idea of using a tree is very interesting, and I can immediately imagine how that would work. I could also imagine that you could build in semi-dynamic lighting and shadow maps fairly easily to scale exactly the same way, with only dynamically moving objects or lights modifying the existing values (not saying that's the most perfect way to do it, but it could be very scalable). In this way, you would always have a matching level of geometry and lighting/shadow maps (actually that's perhaps not entirely the right word), no matter how deeply you dive into the details (well, up to however much detail you've provided at the different levels of the tree, of course).
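One way to picture that "matching level of geometry and lighting at every depth" idea, purely as a hypothetical sketch (the field names and the averaging scheme are invented for illustration):

```cpp
// Hypothetical sketch: an octree where every node (not just the leaves) stores
// a filtered version of both geometry (a density value here) and baked
// lighting, so a traversal can stop at any depth and still read a consistent
// pair of values.
#include <array>
#include <memory>

struct OctreeNode {
    float density  = 0.f;   // filtered geometry for this cube
    float lighting = 0.f;   // filtered lighting for this cube
    std::array<std::unique_ptr<OctreeNode>, 8> children;

    // Pull filtered values up from the children so every interior node is a
    // valid coarse LOD of the subtree beneath it.
    void refreshFromChildren() {
        float d = 0.f, l = 0.f;
        int n = 0;
        for (auto& c : children) {
            if (!c) continue;
            c->refreshFromChildren();
            d += c->density;
            l += c->lighting;
            ++n;
        }
        if (n) { density = d / n; lighting = l / n; }
    }
};

// Descend only as far as the detail we need; geometry and lighting always come
// from the same depth. (Toy traversal: always follows child 0.)
const OctreeNode& sample(const OctreeNode& node, int depthWanted) {
    if (depthWanted == 0 || !node.children[0]) return node;
    return sample(*node.children[0], depthWanted - 1);
}

int main() {
    OctreeNode root;
    root.children[0] = std::make_unique<OctreeNode>();
    root.children[0]->density  = 1.0f;
    root.children[0]->lighting = 0.5f;
    root.refreshFromChildren();

    const OctreeNode& coarse = sample(root, 0);  // stop at the root: filtered values
    const OctreeNode& fine   = sample(root, 1);  // one level deeper: detailed values
    (void)coarse; (void)fine;
    return 0;
}
```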
 
Not to mention it's usually faster to just render the full resolution geometry than to do all the LOD calculations, transitions and bus transfers unless you're doing only the simplest forms of LOD in which case the popping issues are usually too unsightly.
The problem is that you can't avoid LOD altogether. Regardless of speed, you can't just throw all of your geometry at the GPU and expect it to look good. Unless you're willing to do a ton of super-sampling, you want to look at some way to "pre-filter" your geometry. This is just as true for ray tracing, which doesn't have the same performance problems without LOD but still has the same quality issues.

It's really the same argument as texture filtering... if you ignore it, you need to do more super-sampling than you want to, which is going to be more of a waste than just doing some proper LOD. Avoiding popping is just a matter of doing your LOD on a fine enough scale.
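Carrying the texture-filtering analogy through, LOD selection can be driven by projected geometric error, much like picking a mip level; the per-LOD error values below are hypothetical numbers, purely for illustration:

```cpp
// Sketch of the "filter geometry like a texture" idea: pick the coarsest LOD
// whose geometric error, projected to screen space, still stays under roughly
// one pixel, much like choosing a mip level.
#include <cmath>
#include <cstdio>

int lodForScreenError(float distance, float verticalFovRadians, int screenHeight) {
    const float lodError[] = {0.001f, 0.01f, 0.05f, 0.2f};  // worst-case error per LOD, metres

    // World-space size covered by one pixel at this distance.
    float metresPerPixel =
        2.f * distance * std::tan(verticalFovRadians * 0.5f) / screenHeight;

    // Coarsest LOD whose error still projects to less than about a pixel.
    int lod = 0;
    for (int i = 0; i < 4; ++i)
        if (lodError[i] < metresPerPixel) lod = i;
    return lod;
}

int main() {
    for (float d : {2.f, 10.f, 50.f, 200.f})
        std::printf("distance %5.0fm -> LOD %d\n", d, lodForScreenError(d, 1.0f, 1080));
}
```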
 
I would like to think that, long before that ever became a major concern, we would have moved beyond a strictly triangle-mesh format for our geometric data into something where the frequency of the data depends on the frequency of the sampling, in which case aliasing is handled automatically (as in, you only tessellate as far down as you need to based on proximity to the camera), i.e. something like what Carmack is proposing here, subdivision surfaces, NURBS, etc. Once you're at the level of detail we're talking about here (triangles smaller than pixels), you're certainly not going to be modeling using triangle meshes, and there are definitely going to be better ways to represent your geometry.

In case it wasn't clear, in my original post I was talking purely about the traditional triangle-mesh level-of-detail algorithms you see, in the form of SLOD/DLOD/CLOD applied to characters and terrain in today's games, not about all forms of detail management.
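On the point above about tessellating only as far down as the sampling rate requires, a rough sketch of keeping subdivided edges near pixel size (all numbers are illustrative only):

```cpp
// Sketch of sampling-driven tessellation depth: keep halving a patch edge
// until its projected length is about one pixel, so triangle density tracks
// the sampling rate automatically.
#include <cmath>
#include <cstdio>

int subdivisionLevels(float edgeLengthWorld, float distance,
                      float verticalFovRadians, int screenHeight) {
    float metresPerPixel =
        2.f * distance * std::tan(verticalFovRadians * 0.5f) / screenHeight;
    float edgeInPixels = edgeLengthWorld / metresPerPixel;

    // Each subdivision level halves the edge; stop once edges reach ~1 pixel.
    return edgeInPixels <= 1.f ? 0 : (int)std::ceil(std::log2(edgeInPixels));
}

int main() {
    for (float d : {1.f, 10.f, 100.f})
        std::printf("1 m edge at %4.0f m -> %d subdivision levels\n",
                    d, subdivisionLevels(1.f, d, 1.0f, 1080));
}
```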
 
I would like to think that, long before that ever became a major concern, we would have moved beyond a strictly triangle-mesh format for our geometric data into something where the frequency of the data depends on the frequency of the sampling, in which case aliasing is handled automatically (as in, you only tessellate as far down as you need to based on proximity to the camera), i.e. something like what Carmack is proposing here, subdivision surfaces, NURBS, etc. Once you're at the level of detail we're talking about here (triangles smaller than pixels), you're certainly not going to be modeling using triangle meshes, and there are definitely going to be better ways to represent your geometry.
Sure, fair enough. The tessellation stuff in DX11 is kind of a step in that direction, but it's probably in the wrong place in the pipeline in the long run... indeed, it practically guarantees that we get lots of sub-pixel triangles and hose the efficiency of the current vertex/fragment GPU pipeline.
 
Sure, fair enough. The tessellation stuff in DX11 is kind of a step in that direction, but it's probably in the wrong place in the pipeline in the long run... indeed, it practically guarantees that we get lots of sub-pixel triangles and hose the efficiency of the current vertex/fragment GPU pipeline.
Since the programmer will be choosing the tessellation factor, it'll be the programmer's fault if the efficiency tanks.

Jawed
 
I'm curious how closely the tessellation spec of DX11 matches what was originally planned for DX10. That was one of the features I was pretty disappointed about being dropped (although I understand why). I had started working on a complete displaced subdivision surface renderer when I first saw the specs and later largely abandoned my efforts after seeing the differences between what was proposed and what was actually being implemented.
 
The tessellation stuff in DX11 is kind of a step in that direction, but it's probably in the wrong place in the pipeline in the long run... indeed, it practically guarantees that we get lots of sub-pixel triangles and hose the efficiency of the current vertex/fragment GPU pipeline.

Where did they put it in the pipeline? I haven't seen any details on DX11 (apart from vague rumors); is there some public info I've missed?

Edit: Looks like Jawed answered this; I didn't see that until after I posted.
 