Something wrong with the HL2 Story

Discussion in 'Architecture and Products' started by palmerston, Jul 19, 2003.

  1. ZoinKs!

    Regular

    Joined:
    Nov 23, 2002
    Messages:
    782
    Likes Received:
    13
    Location:
    Waiting for Oblivion
    I figured I'd interject here: First off, we'd need to find some consensus on what it means to "run acceptably." Also, what metric are you using? Frame rate? Image quality? Visual effects? Resolution? Obviously, any of these can be better on a DX9 card than on a DX6 card, but "acceptable" has a certain amount of opinion in it.

    IIRC, the demo at E3 ran at (about) 1300x700 with no AA or AF. Valve is targeting 60 fps, with effects and detail levels adjusted to maintain that framerate.



    HL2 (the game) in its current implementation is targeted at DX9. Note I said "in its current implementation." Source (the engine) is designed for well beyond DX9.

    Source (if you'll pardon the pun): http://www.halflife2.net/forums/showthread.php?s=b8f1072ac492b10d76623316a4f20739&threadid=1298
     
  2. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    What is Valve's targeted FPS and resolution for the bulk of their expected buyers?
     
  3. Tridam

    Regular Subscriber

    Joined:
    Apr 14, 2003
    Messages:
    541
    Likes Received:
    47
    Location:
    Louvain-la-Neuve, Belgium
    Many people think about water when they think about shaders. That's only logical, since in many games shaders are used only for the water. For many people, CineFX and Smartshader are WaterFX and SmartWater.

    But I assume you know that shaders can be used for many other things. You talk about impressive textures. Shaders can be used to greatly improve texture quality and detail. Maybe that's the case in Half-Life 2…
     
  4. Exxtreme

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    87
    Likes Received:
    0
    Location:
    Germany
    Hehe, but what do you do when a pixel shader effect produces very strong aliasing? SSAA is then the only method to reduce it. Neither anisotropic filtering nor the texture filter will help you reduce the aliasing. Developers should keep this in mind when they write their shader effects.
     
  5. Ilfirin

    Regular

    Joined:
    Jul 29, 2002
    Messages:
    425
    Likes Received:
    0
    Location:
    NC
    Wow.. is Valve actually going to get the IHVs to do their work for them?
     
  6. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    I don't see it that way. I'd rather applaud that kind of flexibility in a game engine with future hardware, since the graphics card's capabilities can be exploited to a higher degree.
     
  7. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    Problem is, we've heard it all before. Need I mention Shiny?

    Every time someone has promised us a scalable engine, we have seen one of two things:

    1) the entire game designed with hi-res artwork, which the engine is supposed to "scale" on the fly (progressive geometry, etc.), but the result is still poor performance plus artifacts like polygon popping, or worse (e.g. Enter the Matrix, anyone? Messiah?)

    2) the entire game designed with moderate levels of artwork, but with the ability for the developer to drop in and sprinkle a few extra special effects, used sparingly.

    #2 has been the most successful. How many DX8 games have we seen that were really DX6 games, except for a bump map here or there, or some shiny surfaces, all added as an afterthought?

    There is a third option, the one you can be sure will succeed:

    3) do all the artwork twice (have fallbacks for everything)


    I'm just a little bit skeptical of claims of games designed with content for uber-devices not yet invented, which then scale back automagically and algorithmically for whatever device they are being played on.

    I haven't seen many people pull it off.
     
  8. Dave H

    Regular

    Joined:
    Jan 21, 2003
    Messages:
    564
    Likes Received:
    0
    I asked this very question a month or so ago. The answer would appear to be that antialiasing will have to be built into the shader itself.

    If the math is tractable, one can calculate the integral of the shader value over the region covered by the current pixel, and use that to determine the final color value. (Rather than just calculating the value of the shader at a single point, e.g. at the center of the pixel region.)

    If the math for analytically determining the integral doesn't work out, one can calculate the "feature-size" of the shader value and, as it approaches the Nyquist limit (where aliasing will occur), blend with a precomputed "average" value for the shader.

    These techniques are how offline procedural shading systems (e.g. RenderMan) handle the problem of shader aliasing. (There may be other methods as well that I didn't find in my search.) There wouldn't seem to be any reason why real-time hardware would do it any differently.

    Another way of putting this: there doesn't seem to be any cheap way to antialias shaders automatically in hardware. Still, these methods should be much more efficient than supersampling complex shaders.
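The second technique above (fading toward a precomputed average as the pattern's feature size nears the Nyquist limit) can be sketched in plain Python. This is a hypothetical 1-D stripe pattern; the function names and fade thresholds are illustrative, not taken from any real shading system:

```python
import math

def stripe(x, freq):
    """Pointwise procedural stripe pattern, value in [0, 1]."""
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * freq * x)

STRIPE_AVERAGE = 0.5  # analytic average of the pattern over one period

def antialiased_stripe(x, freq, pixel_width):
    """Fade toward the pattern's average as its feature size nears the
    Nyquist limit (one cycle per two pixels) of the sampling grid."""
    cycles_per_pixel = freq * pixel_width
    fade_start, fade_end = 0.25, 0.5  # start fading before Nyquist (0.5)
    t = (cycles_per_pixel - fade_start) / (fade_end - fade_start)
    t = min(max(t, 0.0), 1.0)  # clamp the blend factor to [0, 1]
    return (1.0 - t) * stripe(x, freq) + t * STRIPE_AVERAGE
```

At low frequencies the pattern is returned unchanged; once its cycles-per-pixel rate approaches the Nyquist limit, the output flattens to the average instead of aliasing.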

    The post I wrote about it back then
    The SIGGRAPH presentation I took most of my info from
     
  9. Dave H

    Regular

    Joined:
    Jan 21, 2003
    Messages:
    564
    Likes Received:
    0
    It was my impression that this sort of thing happens all the time, particularly on higher-profile games. That is, that the ISV lets developer relations add GPU-specific code to their game, whether it's a special code path to work around performance issues with a particular card, or various add-in effects to show off their cards' new features. Obviously the developer makes the decision about what code goes in the final game, but this sort of thing--particularly when asked--is what devrel does.
     
  10. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,493
    Likes Received:
    474
    I have no inside knowledge of HL2, but I'd like to point out that shaders do not have to be visible on the screen to be there. Typically everyone thinks of pixel shaders, but HL2 could have a lot of DX9 vertex shaders and screen shots would never show this.
     
  11. K.I.L.E.R

    K.I.L.E.R Retarded moron
    Veteran

    Joined:
    Jun 17, 2002
    Messages:
    2,952
    Likes Received:
    50
    Location:
    Australia, Melbourne
    Correct. Shaders don't necessarily mean awesome IQ. They can be used to offload calculations from the CPU and speed the game up, or to save VRAM, e.g. procedural textures.
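As an illustration of the VRAM point, here is a minimal procedural-texture sketch in Python. The checkerboard pattern and the texture size are hypothetical examples, not anything from HL2: the texel value is computed on demand instead of being stored.

```python
def checker_texel(u, v, scale=8):
    """Evaluate a procedural checkerboard at normalized coords (u, v)
    on demand -- no stored texture data at all."""
    return 1.0 if (int(u * scale) + int(v * scale)) % 2 == 0 else 0.0

# The stored equivalent: a 1024x1024 RGBA8 texture costs 4 MiB of VRAM,
# while the procedural version costs only a few ALU operations per sample.
stored_bytes = 1024 * 1024 * 4
```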
     
  12. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    Democoder,

    (don't know actually if the former was in reply to my post)

    I don't disagree. Yet I still prefer the flexibility present in the Serious Sam engine as an example, compared to many other engines out there.
     
  13. Dave H

    Regular

    Joined:
    Jan 21, 2003
    Messages:
    564
    Likes Received:
    0
    Re: reply

    There were numerous such debates in the months before NV34 was released. There certainly aren't any now.

    DX9 compliance is entirely a matter of feature support, and the 5200 has exactly the same DX9 feature support as the 5900 or any other NV3x card. There is absolutely no question that the hardware is DX9 compliant. NV3x may lack support for some optional DX9 features like MRTs, but by the same token R3x0 lacks support for different optional DX9 features like PS 2.0_x, FP32, and partial precision. Some of Nvidia's drivers appear to be operating outside the bounds of the DX9 spec, but again that's not the hardware's fault, and it is true of the entire GF FX line.

    The "only" problem with the 5200 is that it's slower than a tree sloth on Valium glued to the side of a barn. Well, that's fine, but it doesn't make it any less DX9 compliant. You could probably live out a full and rewarding life in the time it takes for the DX9 refrast to run through a complete 3DMark03 run, but no one is claiming that the refrast isn't DX9 compliant!

    Ok. First of all, this comment seems to misunderstand the DX standards-setting process. You make it seem as if MS sits in a room on its own, comes up with some big list of features, and hands it to Nvidia and ATI, who then run off and try to design a part that conforms to that list. In reality, design cycles for a substantially redesigned GPU core run on the order of 3 years, and ATI and Nvidia would both have had the rough featuresets of NV3x/R3x0 set in stone well before MS released the DX9 standard. Instead, what happens is that Nvidia and ATI (and whoever else) go to MS with the rough featuresets of their upcoming cores and try to convince them to support those features in the API. MS chooses some subset of the features, based in part on what the IHVs will be supporting, and in part on what the IHVs can convince MS is worth supporting. The IHVs can then tweak their designs to best support the details of the upcoming spec. But it's generally too late to design in support for substantial new features, or to rebalance a part to better fit the performance characteristics implied by the spec.

    There's no denying that in terms of implementation, R3x0 is a much better "DX9 oriented" design than NV3x. One might say fairly that the implementation of NV30-34 is "semi DX9, semi DX8," because of the greater resources available for PS 1.1-1.3 shaders than PS 1.4+. But this is only a matter of performance issues, not of feature support. In terms of DX9 feature support, NV3x is at least on par with R3x0, and if you concentrate specifically on "forward-looking" DX9 features--ones that are not required for PS/VS 2.0 but will be for PS/VS 3.0--NV3x is clearly ahead. It is thus something of an anomaly that R3x0 has hardware support for centroid multisampling while NV3x does not. (Of course it's not so strange in the context of R3x0's much more capable multisampling implementation.)

    In any case, it's no more a matter for praise or blame (in R3x0's context as a PS/VS 2.0 part) than NV3x's forward-looking featureset. I mean, at least NV3x's PS 2.0_x features can be enabled in DX; centroid sampling on the R3x0 cannot be, which drastically reduces its usefulness.

    It seems doubtful that NV3x could be made to support centroid sampling. NV3x's MSAA implementation appears very hardwired and not very flexible. The only solution I can think of would be to perform multisampling via a pixel shader, which would be extraordinarily inefficient to say the least.

    But it's utterly incomprehensible to try to blame Nvidia for not supporting centroid sampling in hardware that doesn't claim to be PS/VS 3.0 compliant. Centroid sampling isn't a part of the PS/VS 2.0 spec. It just isn't. The fact that a PS/VS 2.0 compliant architecture doesn't support it is absolutely in no way a "flaw".
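To make the centroid-sampling discussion concrete, here is a small Python sketch. The sample positions below are a hypothetical 4x rotated-grid pattern, not any specific chip's layout: with centroid sampling, the shader is evaluated at the centroid of the *covered* MSAA samples rather than the pixel center, so partially covered edge pixels never sample attributes from outside the triangle.

```python
def centroid_sample_point(sample_offsets, coverage_mask):
    """Return the in-pixel position at which to evaluate the shader:
    the centroid of the covered sample positions, falling back to the
    pixel center (0.5, 0.5) if no sample is covered."""
    covered = [p for p, hit in zip(sample_offsets, coverage_mask) if hit]
    if not covered:
        return (0.5, 0.5)
    n = len(covered)
    return (sum(x for x, _ in covered) / n,
            sum(y for _, y in covered) / n)

# Hypothetical 4x rotated-grid sample offsets within one pixel:
samples = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]
```

For a fully covered pixel the centroid coincides with the pixel center; for a half-covered edge pixel it shifts toward the covered side, inside the triangle.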
     
  14. hkultala

    Regular

    Joined:
    May 22, 2002
    Messages:
    297
    Likes Received:
    38
    Location:
    Herwood, Tampere, Finland
    off-topic, but:
    http://www.urbanlegends.com/products/beta_vs_vhs.html
     
  15. hkultala

    Regular

    Joined:
    May 22, 2002
    Messages:
    297
    Likes Received:
    38
    Location:
    Herwood, Tampere, Finland
    7500, 8500 and 9000 do not support multisampling,
    so it is not an issue on those chips.

    now this reminds me about Parhelia:
    AFAIK it should be immune to this effect, and its FAA is much faster than any SSAA...
    so parhelia might be quite a good card to run HL2 ;)
     
  16. Fred

    Newcomer

    Joined:
    Feb 18, 2002
    Messages:
    210
    Likes Received:
    15
    Thanks DaveH and Ilfirin for voices of reason.

    Valve can do as they please with their engine, they are sufficiently reputable to be trusted to do the right design choice.

    However, I personally fail to see why the texture packing tradeoff is necessary; it strikes me that the worst-case scenario is a large enough loss in performance that the small average gains seem a poor choice. Particularly when we know that vanilla MSAA is impossible.
     
  17. Dio

    Dio
    Veteran

    Joined:
    Jul 1, 2002
    Messages:
    1,758
    Likes Received:
    8
    Location:
    UK
    Texture packing should not have any significant performance downside unless the hardware texture caching is somewhat broken. If only a small portion of a texture is used, only that portion and small overspills should be read into the cache.

    Reducing texture state changes can be a big win. It enables several other optimisations as well (relaxing sort-by-material enables more aggressive sort-by-depth, for example). How much this helps is very dependent on the engine - an engine written around the assumption of fewer state changes could have a huge performance upside. One does presume Valve know what they are doing...
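The state-change saving described above is easy to see in a toy model (plain Python; the draw list and material names are made up): packing textures so more draws share a material lets a sorted draw list trigger far fewer state changes.

```python
def count_state_changes(draws):
    """Count how often consecutive draw calls bind a different material;
    each change costs driver and hardware work."""
    changes, last = 0, None
    for material in draws:
        if material != last:
            changes += 1
            last = material
    return changes

# Interleaved materials force a state change on nearly every draw...
interleaved = ["wood", "stone", "wood", "stone", "wood"]
# ...while sorting by material (cheap once fewer textures exist)
# collapses runs of identical bindings into one.
batched = sorted(interleaved)
```

Here the interleaved list costs five state changes but the batched one only two; in a real engine the freed-up ordering budget can then be spent on sort-by-depth instead.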
     
  18. Hyp-X

    Hyp-X Irregular
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,170
    Likes Received:
    5
    So will HL2 use point sampling and no mipmapping for textures, since bilinear/trilinear filtering also causes texture samples to be taken from the nearby area?

    It will be very ugly if they do that...
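The concern behind the sarcasm is real: bilinear filtering always blends the four texels surrounding the sample point, so a sample near the edge of one sub-texture in a packed atlas pulls in texels from its neighbour unless padding is added. A minimal sketch in plain Python, with a hypothetical 2x2 texture:

```python
def bilinear_sample(tex, u, v):
    """tex: 2D list of floats indexed [row][col]; (u, v) in texel coords.
    The result blends the four surrounding texels, which is exactly why
    adjacent charts in a packed texture can bleed into each other."""
    h, w = len(tex), len(tex[0])
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)  # clamp at the border
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

# Left column black (one "chart"), right column white (its neighbour):
tex = [[0.0, 1.0],
       [0.0, 1.0]]
```

Sampling exactly on a texel returns that texel, but sampling halfway between the two columns returns a 50/50 blend: the black chart has bled into the white one.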
     
  19. biffz0r

    Newcomer

    Joined:
    Jul 21, 2003
    Messages:
    6
    Likes Received:
    0
    First time posting. I've visited these forums regularly over the past years - a refreshing change from other sites. While I don't know nearly as much as other folks here, I do know this:

    DemoCoder == Troll

    Pretty much the most dangerous kind (knows enough to inflame and be dangerous). DemoCoder - you know a friggin lot more than I do on these subjects... so why can't you stop polarizing issues and just speak to the matter at hand, rather than posing (silly) rhetorical questions that you KNOW no one has the answer to?

    /me fears for the continued high level of discussion on these forums...
     
  20. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    /me too, but not due to Democoder, if you catch my drift.
     