Carmack On Future Graphics Engines

Discussion in 'General 3D Technology' started by MikeC, Oct 31, 2002.

  1. MikeC

    Newcomer

    Joined:
    Feb 9, 2002
    Messages:
    194
    Likes Received:
    0
    Reminded me of David Kirk's comments a while back about rendering "smarter polygons."

    In a recent statement, id Software's John Carmack said that he didn't want more polygons. Instead, he wants "100 passes per polygon." (With each rendering "pass" a computer makes before it displays the graphics, more and more detail can be added.)

    Carmack is no longer demanding more polygons; he wants smarter polygons. He wants the ability to bump map -- simulate raised textures on the surfaces of polygons -- and to add sophisticated lighting effects. By adding more rendering passes, designers will be able to polish their creations on a pixel-by-pixel basis.
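    Roughly, each "pass" re-draws the same surface and blends another layer of detail (base texture, bump-mapped lighting, gloss, and so on) into what's already in the frame buffer. A toy CPU-side sketch of the idea -- every name and number here is made up for illustration, and a real renderer does this per triangle on the GPU:

    [code]
    // Toy model of multipass rendering: each "pass" re-shades the same pixel
    // and accumulates another layer of detail into the frame buffer.
    #include <cstdio>

    struct Color { float r, g, b; };

    // Stand-in for whatever one pass computes (diffuse, bump, specular, ...).
    Color shade_pass(int pass, float u, float v)
    {
        float x = (pass + 1) * 0.01f * (u + v);
        return { x, x * 0.5f, x * 0.25f };
    }

    int main()
    {
        const int num_passes = 4;          // Carmack's point: push this towards 100
        Color framebuffer = { 0, 0, 0 };

        for (int pass = 0; pass < num_passes; ++pass) {
            Color layer = shade_pass(pass, 0.5f, 0.5f);  // one pixel of one polygon
            framebuffer.r += layer.r;                    // additive blend (one of many blend modes)
            framebuffer.g += layer.g;
            framebuffer.b += layer.b;
        }
        std::printf("final pixel: %.3f %.3f %.3f\n", framebuffer.r, framebuffer.g, framebuffer.b);
        return 0;
    }
    [/code]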

    http://www.gamespy.com/futureofgaming/engines/
     
  2. Tagrineth

    Tagrineth SNAKES... ON A PLANE
    Veteran

    Joined:
    Feb 14, 2002
    Messages:
    2,512
    Likes Received:
    9
    Location:
    Sunny (boring) Florida
    Crikey, 100 passes... WTF would you do with 100 bloody passes? Surely nothing realtime. :|

    Carmack's gone bonkers, IMO...
     
  3. MikeC

    Newcomer

    Joined:
    Feb 9, 2002
    Messages:
    194
    Likes Received:
    0
    That's probably a bit extreme with today's technology :)

    But Kevin Stephens of Monolith followed up by stating that "10 or 20 passes per polygon will not be uncommon."
     
  4. sancheuz

    Newcomer

    Joined:
    Jul 27, 2002
    Messages:
    44
    Likes Received:
    0
    What is a pass? (in plain English please :lol: )
     
  5. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,805
    Likes Received:
    473
    When he says passes per polygon he doesn't necessarily mean with the same projection, I think.

    BTW, I would like Doom3 to have a couple more polygons ;)
     
  6. fresh

    Newcomer

    Joined:
    Mar 5, 2002
    Messages:
    141
    Likes Received:
    0
    Sure, everybody said he went bonkers too when he asked for 128-bit frame buffers.
     
  7. Ilfirin

    Regular

    Joined:
    Jul 29, 2002
    Messages:
    425
    Likes Received:
    0
    Location:
    NC
    Ah... wha? No one who knew what they were talking about did.
     
  8. Fuz

    Fuz
    Regular

    Joined:
    Apr 17, 2002
    Messages:
    373
    Likes Received:
    0
    Location:
    Sydney, Australia
    Why not have both?
    Carmack says he doesn't want more polygons, but wants more passes per polygon. Imho, you cannot make up for a lack of polygons no matter how many special effects you add to a pixel.

    Probably explains why Doom III looks so poly-deprived. :wink:
     
  9. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,902
    Likes Received:
    218
    Location:
    Seattle, WA
    I just have to say, the majority of DOOM3's effect on the gamer will be dependent upon motion. That means you should absolutely not judge it until you see it in motion.

    That said, yes, we do need higher polycounts, but for what DOOM3's going to be, I definitely think JC made the right decisions. As technology improves, we will have both better polygons and more polygons.
     
  10. Brent

    Regular

    Joined:
    Apr 11, 2002
    Messages:
    584
    Likes Received:
    4
    Location:
    Irving, TX
    I think the biggest thing about Doom 3 is its lighting and shadow system.

    From some of the screenshots people have debated over, it looks like the poly count in some situations isn't as high as some would like :p

    I dunno, I guess we'll just have to wait to see it in action in person to judge it...

    100 passes may sound like a lot now, but so have many other things that are common now compared to a year or two ago...

    One thing is for certain, and it's the theme of that article: we are coming into some very cool 3D times up ahead... things keep getting better and better...
     
  11. Johnny Rotten

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    327
    Likes Received:
    0
    Location:
    Edmonton, Alberta
    I always snicker at those who make comments about Doom3's looking 'low poly'.

    It just tells me they haven't seen it in motion. :)
     
  12. V3

    V3
    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    3,304
    Likes Received:
    5
    More polygons also let you put more enemies in one room at once; it's not just about models looking blocky. Bigger rooms, to boot.

    Before he does the 100 passes or so, I want to see more dynamic environments: not just shadows, lights and the occasional object, but everything.

    100 passes are probably doable if you went with some eDRAM solution.
     
  13. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    You don't need 100 passes per surface if you have unlimited texture reads, "effectively" unlimited per-pixel shader lengths, and the ability to render to multiple targets.

    On most architectures you could collapse most of these passes using shaders. You only need to resort to multipass for some rare calculations.
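    A rough sketch of what "collapsing" means, as a software model rather than real shader code (the sample() function and the pass count are invented for the example): N blend passes over a pixel are the same maths as one shader invocation that does the N texture reads itself, provided each pass only depends on textures and the previous blend result.

    [code]
    // Toy illustration of collapsing multipass into a single shader.
    // sample() stands in for a texture read.
    #include <cassert>

    float sample(int texture_unit) { return 0.1f * (texture_unit + 1); }

    // Old style: one framebuffer blend per pass.
    float multipass(int passes)
    {
        float framebuffer = 0.0f;
        for (int p = 0; p < passes; ++p)
            framebuffer += sample(p);      // pass p: read a texture, additive blend
        return framebuffer;
    }

    // Collapsed: one shader invocation does all the reads and arithmetic itself,
    // once texture reads and program length are no longer the limit.
    float single_pass_shader(int reads)
    {
        float result = 0.0f;
        for (int t = 0; t < reads; ++t)
            result += sample(t);
        return result;                     // written to the framebuffer once
    }

    int main()
    {
        assert(multipass(16) == single_pass_shader(16));   // identical result, one pass
        return 0;
    }
    [/code]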
     
  14. Humus

    Humus Crazy coder
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,217
    Likes Received:
    77
    Location:
    Stockholm, Sweden
    I'm not sure about "rare". As long as you need information about the rest of the scene and not only the current texture, you'll have to resort to multipass. Shadows are such a case, and I would argue it is or will become a quite common case.
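    A stripped-down illustration of why (using a shadow-map-style approach as a stand-in; all the numbers and names are invented): the shadow term is a function of the whole scene, so it has to be produced by a separate pass over the geometry before the surface itself can be shaded.

    [code]
    // Pass 1 walks *all* geometry as seen from the light; pass 2 shades one
    // surface using that result. Pass 1 can't be folded into pass 2's pixel
    // shader because it depends on the rest of the scene, not just the
    // surface being drawn.
    #include <algorithm>
    #include <vector>

    struct Triangle { float depth_from_light; };

    // Pass 1: "render" the scene from the light, keeping the nearest depth.
    float build_shadow_depth(const std::vector<Triangle>& scene)
    {
        float nearest = 1e30f;
        for (const Triangle& t : scene)
            nearest = std::min(nearest, t.depth_from_light);
        return nearest;
    }

    // Pass 2: shade one pixel of one polygon, using the pass-1 result.
    float shade(float surface_depth_from_light, float shadow_depth, float lit, float ambient)
    {
        bool in_shadow = surface_depth_from_light > shadow_depth + 0.001f;
        return in_shadow ? ambient : lit;
    }

    int main()
    {
        std::vector<Triangle> scene = { {2.0f}, {5.0f}, {9.0f} };
        float shadow_depth = build_shadow_depth(scene);       // pass over the whole scene
        float pixel = shade(5.0f, shadow_depth, 1.0f, 0.2f);  // this surface is occluded
        (void)pixel;
        return 0;
    }
    [/code]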
     
  15. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    Except for shadows (e.g. stenciled shadow volumes) and some 2D image-based effects, can you really provide an example of where you need 100 passes?

    Most algorithms touch a dataset O(N) - O(N^4) times. Can you point out any algorithms or effects that require O(N^100)? Seems ludicrous. If you're using hacks like stencil for conditionals (no pixel shader predicates) or branching emulation, that's one thing, and something which will be alleviated as shaders get more powerful. We already acknowledged multiple passes for each light, and I'll even acknowledge extra passes for occlusion queries, but this is a far cry from 100.

    I stand by my point. For any given procedure P, chosen from the list of all possible computable algorithms that are relevant to 3D shading, the proportion that requires explicit multipass is small compared to the proportion that can be encoded strictly in a pixel shader.
     
  16. LeStoffer

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,253
    Likes Received:
    13
    Location:
    Land of the 25% VAT
    Maybe Gamespy didn't understand the term "passes per polygon" the way we normally do around here (e.g. on DX9 hardware you only need to send the polygon once to render 16 textures in one pass)?

    Anyway, in DX10 vertex shaders and pixel shaders are set to be integrated/blended much more. This huge future flexibility really shouldn't point in the direction of having to use 100 passes per polygon to add the detail you need. You will have advanced 2nd- or 3rd-generation displacement mapping in future VS and very good per-pixel lighting algorithms in future PS.
     
  17. Dio

    Dio
    Veteran

    Joined:
    Jul 1, 2002
    Messages:
    1,758
    Likes Received:
    8
    Location:
    UK
    I think that's a touch confused on computational complexity. An O(n^100) algorithm implies that for every increase in the size of the dataset (or some other 'number' on which it is dependent) the time goes up with the 100th power. I.e. that's a totally unimplementable algorithm - if it even took 2 instructions or had 2 items of data, the execution time would be longer than the age of the universe, etc.
     
  18. arjan de lumens

    Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,274
    Likes Received:
    50
    Location:
    gjethus, Norway
    Just a nit on the O-notation: 100 passes would imply O(100*N) operations, which is still just O(N) with a large hidden constant. O(N^100) is O(N*N*N*N*N* ... 100 times), which is something totally different.

    Still curious about what anybody would need 100 passes for ... doing stencil shadows using an extended light source emulated as 100 point light sources, perhaps?
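    Just to put a number on that hidden constant (resolution picked arbitrarily), 100 passes still scale linearly with the pixel count:

    [code]
    // 100 passes cost passes * N pixel operations, i.e. O(N) with a constant of 100.
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        const std::uint64_t pixels = 1024ull * 768ull;   // example resolution
        const std::uint64_t passes = 100;
        std::printf("pixel ops per frame: %llu\n",
                    (unsigned long long)(passes * pixels));  // ~78.6M - nothing like N^100
        return 0;
    }
    [/code]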
     
  19. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    Err, you're right, my bad. Need more sleep.

    Let me rephrase the statement by making another analogy (this time hopefully correct):

    In the theory of computation, we have this notion of computability on a given class of abstract machine: e.g. regular expressions/DFA, context-free grammars/PDA, and of course Turing machines and mostly everything else.

    I would assert that in 3D shading, there is a similar notion of computability classes with respect to abstract shader languages/machine models. Note, I am not talking about whether such an algorithm would run on machine M efficiently, but whether it can run at all within the machine (without unbounded augmentation by the CPU).

    For the DX9 streaming processor + VS2.0/PS2.0, there would exist a class of algorithms that can be implemented on this model in a single pass (ignoring # regs and max program length; I'm talking about whether an algorithm fundamentally needs more passes because of the way it evaluates the scene). Call this PS2.0 Class 1. Then there would exist a class of algorithms which run on PS2.0 in a minimum of 2 passes; call these Class 2 shaders. And so on....

    However, sooner or later you would encounter an algorithm for which the number of passes required on PS2.0 could not be bounded by looking at the algorithm alone, but would be data-dependent. Call these Class Null. I would say that Class Null shaders are effectively not computable on M (PS2.0) and require outside help (CPU). The fact that you can translate Class Null into having the CPU push hundreds of passes aimed at emulating a general purpose processor IMHO does not mean that Class Null "runs" on machine M, since you have augmented M with an external CPU.


    Similarly, you can design computability equivalence classes for PS3.0, architectures with a per-primitive processor, 3DLabs P10, etc.


    Now comes my final unproven conjecture:

    The Cardinality of Class Null is less than Epsilon * the Cardinality of Class 1 Union Class 2 Union .....

    That is, Class Null algorithms are "rare" compared to all other classes on PS2.0/PS3.0.


    An example of how to build Class N algorithms (an algorithm that *requires* N passes) is to design a recursive function F(a,b,c,d) where a,b,c,d are pixels and F is not analytic. To compute F(a_n, b_n, c_n, d_n) you need to compute F(a_n-1, b_n-1, c_n-1, d_n-1) and so on. Since F is not analytic, you really can't "unroll" the recursion into a linear pixel shader.

    I don't really know if there are any "useful" Class N (where N > some large integer) 3D shading algorithms.

    I suspect you will find that there are oodles and oodles of useful Class 2, 3, and 4 algorithms, but after a certain constant, you will find nothing really useful until you hit Class Null. Call it a hunch.
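    For what a Class Null case might look like in practice, here's a toy example (completely made up, not any real shading algorithm): an iteration whose trip count depends on the data itself. On a shader model with no loops or data-dependent branches, each step has to be a separate CPU-issued pass, and the CPU keeps issuing passes until everything has converged.

    [code]
    // Toy "Class Null" computation: the number of steps depends on the input,
    // so on a loop-less shader model each step is a separate CPU-issued pass.
    #include <cmath>
    #include <cstdio>

    // One fixed shader step: what a single pass could compute.
    float step(float x) { return 0.5f * (x + 2.0f / x); }   // Newton step towards sqrt(2)

    int main()
    {
        float x = 10.0f;    // "pixel" input
        int passes = 0;
        while (std::fabs(x * x - 2.0f) > 1e-6f) {   // CPU keeps issuing passes...
            x = step(x);                            // ...each one runs the fixed shader once
            ++passes;
        }
        std::printf("converged after %d passes, x = %f\n", passes, x);
        return 0;
    }
    [/code]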
     
  20. PC-Engine

    Banned

    Joined:
    Feb 7, 2002
    Messages:
    6,799
    Likes Received:
    12
    Would photon mapping require 100 or more passes? :-?
     