Some DOOM3 tidbits

Discussion in 'Beyond3D News' started by Reverend, May 1, 2004.

  1. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
Carmack started the DOOM3 engine basing his feature set on what was achievable on the GeForce1 (IIRC) generation of GPU's. Along the way, things were added and changed as hardware advanced. Which features he uses in his games is a business and engineering decision. He has always written his software to run the best looking application on the widest range of hardware, which necessitates him supporting the "lowest common denominator" as you call it.

Beyond that, he has been looking to handle lighting and materials in a general way. This means that he will be thinking about and talking about hardware capabilities in a different way than those guys focusing on creating collections of vertex and pixel shaders. That probably disappoints those people who are looking for him to tick off checklist features. Oh well, I doubt the expectations of video card enthusiasts and the Ati vs. Nvidia debate occupy a prominent space in his thoughts. :lol:
     
  2. Hellbinder

    Banned

    Joined:
    Feb 8, 2002
    Messages:
    1,444
    Likes Received:
    12
So, in the end, exactly what I said happened happened. People still want to try and say that Carmack is not completely in Nvidia's pocket? Please... :roll:

Gotta love that *new improved* ARB2 path, don't you? Gotta love that it goes out of its way to please Nvidia and their marketing execs. I can't wait to see the incredible flow of message board posts at various web sites once the game goes public. It will be an Nvidia fan's dream.
     
  3. Hellbinder

    Banned

    Joined:
    Feb 8, 2002
    Messages:
    1,444
    Likes Received:
    12
    :lol:

You are either blind or in denial. Look at the facts of what transpired. If he didn't care, he would have had one standard for all DX9-class hardware and one standard for all DX8, etc. You don't completely reinvent the wheel to favor ONE company's limitations and supposedly *have no interest in it*..

    Think about it.
     
  4. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
Please explain how he has "reinvented the wheel" :? Do you know what his code looks like and how it works? :shock: If so, please share :D
     
  5. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,992
    Likes Received:
    3,533
    Location:
    Winfield, IN USA
    Ooooh, may I?

I think HB is referring to how Carmack was specially coding a path just for the nV30; that's the "wheel" he's talking about.

We DO know that Carmack had coded a separate nV30 path in addition to the ARB2 one. ;)
     
  6. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    Which means...what, exactly?...;) To me, it means only that if you fed R3x0 a shader instruction chain long enough to slow it down unacceptably, it would run even slower on nV3x. Not a single nV3x demo nVidia produced exceeded the instruction limits you're talking about, nor a single game. Talking about "unlimited" instruction chains here in relation to 3d-gaming is meaningless, because the longer and more complex the chain, the slower the framerate. If you've paraphrased Carmack correctly, that's another black mark, imo, because he certainly knows better. Specifications like "65,000 instruction limit" have no place in 3d-gaming discussions, where instruction chains of 120-150 instructions for a scene could bring nV3x to its knees and produce single-digit frame rates....;) In the context of 3d gaming, phrases like "65K" or "unlimited" instructions just have no meaning or relevance to nV3/4x or R3/4x0, but are merely marketing bullets.

    Well, think about it...it's only *now* that he's actually announced it, isn't it? If he had intended to do that back in January of '03, he could have made the same announcement then, couldn't he? Since he didn't, but he has now, I won't rationalize and try to infer cryptic, hidden meanings in remarks he made 14-15 months ago...;) As he had no trouble plainly announcing it now, I believe that he'd have had no trouble plainly announcing it then, had it been his intention to do so.

Heh...;) Apparently you haven't read too many of his .plan updates, have you? Carmack enjoys "screaming to the four winds" about the most minuscule of details in his work, and I've never seen him avoid detailed accounts of the big features he was working on, either.

Yes, he has said all of that, indeed, which is what made me wonder so much about his grossly truncated, abbreviated answer here. Had he said here what you paraphrase him as having said before, I'd be in complete agreement, and would not have remarked as I did.

    Good idea...;) I am always in favor of clearing up obscure remarks.

I have zero evidence to support it. What I'm saying is that, given the evidence I have, I no longer consider it a foolish notion to imagine that the delays of both HL2 and D3 were in some way tied to nVidia's desire not to see these games ship until it could field hardware capable of supporting them competitively. The idea seems more credible with each passing day, I'm afraid.
     
  7. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,992
    Likes Received:
    3,533
    Location:
    Winfield, IN USA
    OMG, FLASHBACK MEMORY!!!! :shock:

That's actually what brought me to Beyond3D, one of Carmack's .plan updates. His updates are so technical that I couldn't understand what they meant at all, and a buddy of mine (Who will remain nameless to spare him from possible repercussions. Ain't that nice of me Reid?) recommended that I check out B3D since it had the biggest brains on the planet when it came to all things 3D.

    He was right, and I did finally figure out the .plan update. (Or rather had it explained to me in littler words that I could understand by some kind people here, thanks again for all the edumaction!)
     
  8. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
Right, but the crucial thing here is that we don't know how much work that actually involves. Remember, he runs all the lighting through a large generalized per-pixel lighting shader. This could mean that all the work needed to go from one piece of hardware to another is centralized and easily done. It's probably by no means reinventing the wheel. Of course, we don't actually know, so Hellbinder could be right and I could be wrong.
     
  9. VtC

    VtC
    Newcomer

    Joined:
    Feb 13, 2004
    Messages:
    20
    Likes Received:
    0
    Location:
    Atlanta
    Because we had so much respect for what he and id did for the gaming industry with Doom. He proved that the little guy could make it playing against the big guys like Sierra, and that's the stuff legends are made of.

    I was willing to attribute the whole investment in the NV30 path to an uber-geek who loves to get the last bit of performance out of quirky hardware. I figured he considered it a challenge, and a testament to his talents and the quality of the Doom 3 engine.

    However, I can't see that same uber-geek just throwing away all that work and sweat (even though it makes sense to do so) without so much as a barb directed at the cause of all that extra work.
    EDIT: Quoting Carmack:
    This is his game. It's his baby. It's his optimized-to-the-nth-degree code! Now, he's just saying "By the way, I just decided to throw all that work out the window, and it doesn't bother me one bit." That sounds like somebody who has just decided to place the interests of a particular IHV ahead of those of his game and engine.
     
  10. demalion

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,024
    Likes Received:
    1
    Location:
    CT
    He didn't diversify according to DX numbering, he diversified according to available extensions and needed functionality.

    Since he was using OpenGL and not DX, this makes sense, since he couldn't just write once for DX 8 and write once for DX 9 (and once for DX 8.1, which is probably what the NV30 would have been using).

    There might be issues with Doom 3 benchmarks, or valid reasons for disagreement with some of JC's statements. That's one matter.

But what we have in your assertions as stated here is fair representation of the facts becoming a casualty of you trying to make your extreme and absolute opinion on JC seem like it represents the final word on the matter.

    For instance: There is a R200 path. Does this mean JC reinvented the wheel to favor ATI? It does according to your simplified assertion of JC's absolute "in-the-pocketedness" because of the NV30 path existing.

    Might there be some other reason for the R200 path? Maybe even one consistent with the NV30 path? Hmm...but then, still, there is no "R300" path. Could there be a reason consistent with all of the above?

    While not forgetting that 1) ATI didn't create a specific extension for the R300 for Carmack to write for 2) because the R300 didn't need it, let's evaluate...

    R300: It runs the standardized ARB fragment program, in floating point, and runs it so well that from the beginning, ATI never even offered a special extension to expose shaders for it in OpenGL, and used the standard. Carmack didn't give it a special path, consistent with that there seems no reason to have done so.

    R200: It can't run the standardized ARB fragment program, and it has its own separate OpenGL extension for its lesser functionality. Carmack gave it a special path, consistent with that there seems reason to have done so.

NV30-NV34: They run floating point ARB fragment programs very poorly, and have their own separate OpenGL extension. Carmack gave them a special path; consistent with that, there seems reason to have done so.

    Overall, seems to have created paths according to the hardware capable of using shaders, with a consistent basis.

What recently changed: NV30-NV34 now utilize their integer processing units, and not just floating point, when running the path Doom 3 asks of them (as they would be required to in order to run at the speed of the prior OpenGL extensions). Carmack removed the special path; consistent with that, there seems no more reason to have one.
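The per-hardware path logic described above can be sketched in code. This is an illustrative guess at the decision structure, not id Software's actual code; the function name `PickRenderPath` is hypothetical, though the OpenGL extension strings are the real ones the post refers to:

```cpp
#include <string>

// Hypothetical sketch of extension-driven render path selection, in the
// spirit of the post above. Each path corresponds to what the hardware
// actually exposes, not to a DX version number.
std::string PickRenderPath(const std::string& extensions) {
    auto has = [&](const char* ext) {
        return extensions.find(ext) != std::string::npos;
    };
    // Standard floating-point fragment programs: the ARB2 path (R300, NV3x).
    if (has("GL_ARB_fragment_program"))
        return "ARB2";
    // R200-class hardware has its own lesser fragment shader extension.
    if (has("GL_ATI_fragment_shader"))
        return "R200";
    // NV2x-class register combiners.
    if (has("GL_NV_register_combiners"))
        return "NV20";
    // Fallback: basic multitexture, many passes, no specular.
    return "ARB1";
}
```

On this reading, an "R300 path" never needed to exist: the R300 already ran the standard `GL_ARB_fragment_program` path well, so it falls into the first branch with no special casing.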

That you consider this consistency proof of him being in nVidia's pocket defies logic.

    ...

    There are other issues, such as misrepresenting NV3x ability to perform standard ARB fragment program shaders. However:

1) It isn't Doom 3's purpose, and Doom 3 is limited in what it needs for its basic shader featureset. The problem with this combination is that many people aren't going to be aware of the significance of it not being Doom 3's purpose, and the "ignore synthetics" mantra will actively seek to have them remain ignorant about it. But that mantra isn't JC's mantra.
    2) Doom 3 seems like it will now include a shader material functionality. If it is going to be provided via a consistent interface for content creation, it is probably going to be restricted primarily to some standard, and letting nVidia handle things (with whatever success or failure) on the driver side of the standard extension probably facilitates that.

    As a result: NV3x is still problematic, JC no longer has to deal with it specifically, and people who accept things because they are short and easy to remember ("synthetics are bad") will be misinformed by the benchmarks being represented as something they are not.

    There are problems with this, but I don't see how Doom 3's consistent basis for "special paths" is at fault for those problems.
     
  11. protomech

    Newcomer

    Joined:
    Jul 23, 2002
    Messages:
    9
    Likes Received:
    0
    Location:
    Huntsville, AL
    Re: So, this means...?

I don't see anything about that which indicates "nvidia are up his backside". NV3X used to have very poor ARB2 performance, so he created an IQ/performance trade-off. Now that ARB2 performance has improved, the trade-off isn't necessary.

    ..
     
  12. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,992
    Likes Received:
    3,533
    Location:
    Winfield, IN USA
    I have been SOOO waiting for someone to say that to me!!! :D

    I put the same question to a programming friend of mine over here:
     
  13. Laa-Yosh

    Laa-Yosh I can has custom title?
    Legend Subscriber

    Joined:
    Feb 12, 2002
    Messages:
    9,568
    Likes Received:
    1,455
    Location:
    Budapest, Hungary
    Don't forget about the market situation either - as far as I can see, ATI has gained a massive dominance in the gamer segments, and thus most of the NV3x cards sold are way too slow to run Doom3 in acceptable quality anyway. A recent poll in a small hw related forum in my country indicated that ATI has a 2:1 lead, and a very large percentage of those Nvidia cards are previous generation cards (mostly Ti4200 and GF4MX), with the occasional NV3x scattered around. Yes Nvidia has a lot of 5200-5600 models sold, but only the minority of those users will run Doom3...
     
  14. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
This might have been the case in the past, but this time around he is dealing with a unified lighting model, not a collection of disparate graphics hacks. Thus, there may be a substantial level of abstraction at the interface between lighting engine and specific features and effects, with the core engine itself being quite small and self-contained, and, hence, easy to rewrite for specific hardware.

    Unless there is a substantial speed penalty, generally I find that it's easier to structure programs in modular fashion and abstract the interface between different modular components, allowing "plug and play" type interactions between them. This localizes functionality in the code and simplifies debugging, supporting different platforms, etc.

Of course, over the course of design there may be many revisions of the component abstraction, and there are also times when you really can't do this. I don't have graphics programming experience, so I don't know how applicable this is to graphics applications. However, I would have to say that besides inherent elegance, reduced complexity should be one of the prime motivating factors for going to any sort of unified model. 8)
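The "plug and play" modularity described above can be sketched with a small interface. This is an assumed illustration of the general design pattern, not id's actual engine structure; all type and function names here (`LightingBackend`, `MakeBackend`, etc.) are hypothetical:

```cpp
#include <memory>
#include <string>

// Sketch of the abstraction the post describes: the core lighting engine
// talks only to a small interface, and each hardware path is a plug-in
// implementation behind it.
struct LightingBackend {
    virtual ~LightingBackend() = default;
    virtual std::string Name() const = 0;
    virtual void DrawInteraction() = 0;  // one light/surface interaction
};

struct Arb2Backend : LightingBackend {
    std::string Name() const override { return "ARB2"; }
    void DrawInteraction() override { /* bind ARB fragment program, draw */ }
};

struct R200Backend : LightingBackend {
    std::string Name() const override { return "R200"; }
    void DrawInteraction() override { /* ATI_fragment_shader setup, draw */ }
};

// The core engine never changes: supporting new hardware means swapping
// one object, not rewriting the lighting loop.
std::unique_ptr<LightingBackend> MakeBackend(bool hasArbFragmentProgram) {
    if (hasArbFragmentProgram)
        return std::make_unique<Arb2Backend>();
    return std::make_unique<R200Backend>();
}
```

If the lighting engine is structured this way, adding or removing a vendor-specific path is localized to one class, which is why it need not amount to "reinventing the wheel".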
     
  15. Richard

    Richard Mord's imaginary friend
    Veteran

    Joined:
    Jan 22, 2004
    Messages:
    3,508
    Likes Received:
    40
    Location:
    PT, EU
    I quoted that because you said only now (with the NV40) JC has realised the power of fragment programs. The quote above demonstrates he was aware of it long ago with the R300 but he didn't intend to put them to full use in DOOM 3. No doubt because of the game's "delay" he has decided to go ahead and implement them.

If you remember that Activision had a big banner saying "DOOM 3 Coming in 2003" then it makes sense, on Jan 29 2003, not to throw a wrench into the works and implement discrete fragment programs. When it was clear the game wouldn't be out in 2003 (and the first official confirmation of this came as early as QCon 2003), it's likely that was when JC decided he would be able to put this new functionality in without a big risk of impacting the game's release date. His Jan 29 .plan mentions he would fork the codebase to keep experimenting with the ARB2 path. It's then quite likely that the decision to put them in was much easier, since it could be done on his secondary source tree anyway.

    Understood and agreed. However, his mention of higher maximum instruction count is on the subject of developers experimenting with fragment programs. Not really Present-Day-Games.

I'm not saying categorically that he implemented them on Jan 30th. I'm saying that your assumption that it was only now, with NV40's release, that he decided to do it is a flawed assumption. It might be right, but it's not a solid deduction from what we know (publicly).

I have read them actually. However, you have to remember two fine points. First of all, he's no longer doing them; this came about for several reasons, but most important of all is that he started to be regarded as a god of programming. He abhors that notion. Also, what was once a small community of 'geeks' turned into an industry larger than the movie industry. The mails he got no longer contained intelligent queries. I still remember when he got hate mail for putting in one of his .plans "changed the air control in multiplayer". Finally, he has talked of quitting the game business as an active participant. He has said he'd only be interested in doing one more engine after DOOM 3's.

The second point goes to the heart of my remark. When JC used to write those frequent .plans he didn't do it to boast. He did it to influence IHVs (32 bit colour, fp framebuffers, etc.) and to promote garage-based developers (again, this is especially evident in his GDC speech).

    It's also for this reason that he opened the original DOOM, then Quake's sources. You don't do that when you're concerned with money.

    Well... not sure I get what you mean. Do you want him to keep repeating what he said?

Well, there's always that chance of course (though after Gabe's tirade at Shader Day I don't know how much pull nVidia has on Valve). But I happen to think people should refrain from making accusations when there's no proof. Many gamers still believe ATi cheated in that quack incident because no one reads the aftermath; the only thing that matters is headlines. And during this especially hectic/silly time there is a higher propensity to read an opinion and cross-post it on another forum as rumour that then appears in a website-that-shall-remain-nameless as fact. :wink:
     
  16. Wolfmage

    Newcomer

    Joined:
    Oct 4, 2003
    Messages:
    3
    Likes Received:
    0
Ok, I've got to ask the more knowledgeable types here -
    -The ARB2 path is now used by both the R3x0 and Nv3x?

    -At least previously nothing D3 did qualitatively required FP24 precision?

    -Higher precision was irrelevant to the way a scene was rendered and indeed, the use of int12 + FP16 proved to be a speed boon?

    -Now; however, there is shader materials functionality which will use SM2.0 in a more traditional sense of D3D shader effects?

    [speculation]
    -Does this mean that there is a kind of lowest common denominator in the palette of possible effects and visuals D3 will use?

-Carmack isn't going to stand for a potentially damaging IQ exposé here or at DriverHeaven or anywhere else, so obviously he isn't going to let shader replacement be an issue.

-This seems to suggest that his ARB2 path is not really full precision ARB2 at all in terms of material shaders... It is SM2.0 lite, i.e. nothing that can't be rendered identically with FP16.

    -Given that, is it logical to conclude that the potential for superior graphics in D3 via the R3x0's SM2.0 capabilities is truncated because it is tied to crass performance parity with the NV3x?
    [/speculation]

Perhaps I'm misunderstanding how OGL coding works and the notion of the ARB2 path in relation to the use of material shaders; however, if I am on the right track then it would seem disappointingly oblique of Carmack. In the past he has hardly been shy about championing NV cards no matter what the competition was doing. He has always been portrayed as the fearless coding warrior - forging forward with technology and making better engines for games regardless of IHVs. So I ask again: is this reputation bollocks?
     
  17. Gump

    Newcomer

    Joined:
    Mar 12, 2002
    Messages:
    28
    Likes Received:
    0
    The fact that he's using Cg instead of GLSlang shouldn't be surprising (nor should it be taken to mean a bias in favor of Nvidia). They're trying to finish up the game and currently GLSlang support in both ATI's and Nvidia's drivers is partial and unstable. If id wants to release this summer/fall then they need a high level shader language that is working fully now, and the only choice for OpenGL happens to be Cg.

    Of course, this move will probably mean ATI will officially support Cg with their own optimized compiler since DOOM3 is destined to be such a big title... which I'm sure other Cg-using developers will like.

    Of course, that's my take on the situation; but it seems a reasonable assumption.
     
  18. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,493
    Likes Received:
    474
Agreed. If I were writing a game, and had the time, I'd add optimized paths where needed to make the game run acceptably on as much hardware as I could.
     
  19. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    -The ARB2 path is now used by both the R3x0 and Nv3x?

    yes

    -At least previously nothing D3 did qualitatively required FP24 precision?

Basically nothing does. You can get away with 8bit for 99% of the calculations. Look at how things look fine on the GeForce3/4Ti and 8500. This is why the nv3x sucked at 32bit. There is no point to it other than research. Nothing needs it. nvidia saw this and didn't initially waste transistors on it. ATI saw 32bit wasn't needed, but went with 24bit instead. The only thing that exists now that needs more than 16bit is texture coordinates.

    -Higher precision was irrelevant to the way a scene was rendered and indeed, the use of int12 + FP16 proved to be a speed boon?

    yes

    -Now; however, there is shader materials functionality which will use SM2.0 in a more traditional sense of D3D shader effects?

    I have no idea what you mean. Doom requires multiple passes to render things, even on PS1.3 level hardware. With PS1.4 hardware it can generally be done in one pass. Read his old .plan file which has been linked to multiple times in the thread.

Note: D3 uses stencil shadows, which is fundamentally a multipass algorithm.
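For readers unfamiliar with why stencil shadows force multiple passes, the logical structure can be sketched as an ordered list of passes. This is an assumed outline of the general stencil shadow volume technique (depth-fail variant), not Doom 3's actual render loop, and the function name is descriptive only:

```cpp
#include <string>
#include <vector>

// The multipass skeleton of stencil shadow volume rendering. A real engine
// repeats passes 2 and 3 for every light in the scene.
std::vector<std::string> StencilShadowPasses() {
    return {
        // 1. Fill the depth buffer (plus any ambient term) so the later
        //    passes can depth-test against the scene geometry.
        "depth/ambient fill",
        // 2. Render shadow volume geometry into the stencil buffer only:
        //    with depth-fail, increment on back faces that fail the depth
        //    test and decrement on front faces that fail it.
        "stencil volume pass",
        // 3. Additively blend this light's contribution, stencil-tested so
        //    only unshadowed pixels (stencil == 0) receive light.
        "additive lighting pass",
    };
}
```

Because the lighting pass must be repeated per light and gated by the stencil result of the volume pass, there is no way to collapse the whole algorithm into one pass, regardless of shader model.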

    [speculation]
    -Does this mean that there is a kind of lowest common denominator in the palette of possible effects and visuals D3 will use?

    yes. JC called it the ARB1 path for cards like the Radeon 7x00, etc. Lots of passes, no specular lighting.

    -Carmack isn't going to stand for a potentially damaging IQ exposé here or at DrivenHaven or anywhere else so obviously he isn't going to let shader replacement be an issue.

    No application developer has any control over that. It is purely up to the driver guys.

    -This seems to suggest that the his ARB2 path is not really full precision ARB2 at all in terms of material shaders.... It is SM2.0 lite ie. nothing that can't be rendered identically with FP16.

It is full precision if the driver wants it to be. With fragment shaders you don't specify fixed 12, floating 16, floating 24, floating 32, floating 64, etc. You simply hint fastest, nicest, or supply no hints. These are just hints the driver can ignore, of course. It is up to the driver to decide how to make it run *best* (IQ/speed trade-off).

Since almost everything can be rendered nearly identically with fixed 12, there won't be any issue with fp16 or fp24 instead of fp32. And if there is, then the drivers will probably increase the IQ.
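Concretely, the precision hints mentioned above are part of the ARB_fragment_program extension itself: a program may declare `OPTION ARB_precision_hint_fastest;` (or `_nicest`), and the driver is free to pick the actual precision. Here is a minimal, illustrative fragment program carrying such a hint (not a Doom 3 shader), held as a string the application would hand to the driver:

```cpp
#include <string>

// A trivial ARB_fragment_program: modulate a 2D texture by the
// interpolated vertex color. The OPTION line is the precision hint the
// post describes; the driver may honor or ignore it.
const std::string kFragmentProgram =
    "!!ARBfp1.0\n"
    "OPTION ARB_precision_hint_fastest;\n"  // or ARB_precision_hint_nicest
    "# Sample the diffuse texture and modulate by the fragment color.\n"
    "TEMP diffuse;\n"
    "TEX diffuse, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color, diffuse, fragment.color;\n"
    "END\n";
```

Note that nowhere in the program text is a bit depth specified; that is exactly why an NV3x driver can run it as a mix of fp16/fp32 while an R3x0 driver runs everything at fp24, with the same source.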

    -Given that, is it logical to conclude that the potential for superior graphics in D3 via the R3x0's SM2.0 capabilities is truncated because it is tied to crass performance parity with the NV3x?

    nope. The R3x0 is going to run D3 just as well as before. The only advantage it has is it can use fp24 for everything while the nv3x will use a mixture of fp16 and fp32.

    nvidia deserved to be championed. They had the best OpenGL drivers. Even better than 3Dlabs!! nVidia also had the best linux drivers. This is super important. The casual user doesn't even know what a driver is. With nvidia an OpenGL application would just work. In the past it probably wouldn't on an ATI card, and if it did, it would have problems. Hell, just a few months ago ATI cards were suffering in the OpenGL game Homeworld2. ATI eventually sorted it out, but it wasn't a problem for nvidia owners. ATI is generally pretty good now, but looking back they had their problems.

What he has now said is that he is dropping special optimization for nvidia and making them run the same path that was previously only used by ATI. You can look at it as though he is making nvidia run on ATI's rendering path, because ATI's path was the standard one. Way to go JC! No special treatment for any IHV!
     
  20. Laa-Yosh

    Laa-Yosh I can has custom title?
    Legend Subscriber

    Joined:
    Feb 12, 2002
    Messages:
    9,568
    Likes Received:
    1,455
    Location:
    Budapest, Hungary
    Doom3 would require higher precision, but Carmack has coded in some trickery ("pre modulation / post scaling") to deal with the problems.
     