Something wrong with the HL2 Story

DemoCoder said:
Are you claiming that HL2 will only run acceptably on DX9 hardware?
I figured I'd interject here: first off, we'd need to find some consensus on what it means to "run acceptably." Also, what metric are you using? Frame rate? Image quality? Visual effects? Resolution? Obviously, any of these can be better on a DX9 card than on a DX6 card, but "acceptable" has a certain amount of opinion in it.

IIRC, the demo at E3 ran at (about) 1300x700 with no AA or AF. Valve is targeting 60 fps, with effects and detail levels adjusted to maintain that framerate.



Are you claiming that HL2 was designed from the beginning for DX9? Seems unlikely that a project that started in what, 1999, would be designed specifically for 3 API revisions later.
HL2 (the game) in its current implementation is targeted at DX9. Note I said "in its current implementation." Source (the engine) is designed for well beyond DX9.

Gabe Newell said:
The way Source is designed, hardware manufacturers can update materials to take advantage of new hardware as it comes out by shipping updates (probably via Steam). So if they come out with a 512 MB card or double the number of instructions possible in a pixel or vertex shader, then customers who have that card can be updated to take advantage of that.
Source (if you'll pardon the pun): http://www.halflife2.net/forums/showthread.php?s=b8f1072ac492b10d76623316a4f20739&threadid=1298
 
DemoCoder said:
DaveBaumann said:
DemoCoder said:
Example? Only thing I can remotely see approaching it is some of the fire and water effects.

For DX9 level boards this is probably going to be one of the most shader intensive titles to date. It is riddled with shader code.

There are publicly available screenshots out there from the game levels (not the "tech" walkthrough part). Apart from the water, I would like someone to point out in one of these screenshots something that is truly a DX9-level pixel shader. I watched that E3 video dozens of times, and I saw very little shading that looked impressive. I saw impressively high-res textures. I saw impressive fire and smoke and water shaders. I saw a little bit of bump mapping here and there, but nothing we haven't seen before. (e.g. on the pheromone level, the bathroom tile has that overly shiny specular highlight look we have all come to know and love, not really a DX9-required effect)

Many people think about water when they think of shaders. That's only logical, as in many games shaders are used solely for the water. For many people, CineFX and SmartShader might as well be WaterFX and SmartWater.

But I assume you know that shaders can be used for many other things. You talk about impressive textures; shaders can be used to greatly improve texture quality and detail. Maybe that's the case in Half-Life 2…
 
DaveBaumann said:
Actually, shader limited (in terms of pixel shaders) does actually mean fill-rate limited. Think of the difference between MSAA and SSAA. With SSAA all pixel operations need to be duplicated for each subsample, ergo for 2X SSAA each pixel shader operation will need to be carried out 2X per pixel.

This is why MSAA is very important to have with shader limited titles - the shader ops are still only carried out once per pixel (inside a poly) and if you are shader limited then you have plenty of bandwidth to spare.
Hehe, but what do you do when a pixel shader effect produces very strong aliasing? SSAA is then the only method to reduce it. Neither anisotropic filtering nor the texture filter will help you reduce the aliasing. The developers should keep this in mind when they develop their shader effects.
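To put some rough numbers on the point DaveBaumann makes above (the resolution, op count and sample count here are made-up figures, purely for illustration), a quick back-of-the-envelope sketch:

Code:
#include <cstdio>

// Rough cost model, illustrative only: with supersampling (SSAA) the pixel
// shader runs once per subsample; with multisampling (MSAA) it runs once per
// pixel and only coverage/depth/colour are handled per subsample.
int main() {
    const long long pixels    = 1280LL * 720;  // hypothetical render target
    const long long shaderOps = 60;            // hypothetical ALU ops per pixel
    const long long samples   = 4;             // 4x AA

    long long ssaaOps = pixels * samples * shaderOps; // shader cost scales with sample count
    long long msaaOps = pixels * shaderOps;           // shader cost stays per-pixel

    std::printf("4x SSAA shader ops per frame: %lld\n", ssaaOps);
    std::printf("4x MSAA shader ops per frame: %lld\n", msaaOps);
    return 0;
}

Which is exactly why MSAA is so attractive for shader-limited titles - and also why, as noted above, it does nothing for aliasing produced inside the shader itself.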
 
ZoinKs! said:
Gabe Newell said:
The way Source is designed, hardware manufacturers can update materials to take advantage of new hardware as it comes out by shipping updates (probably via Steam). So if they come out with a 512 MB card or double the number of instructions possible in a pixel or vertex shader, then customers who have that card can be updated to take advantage of that.
Source (if you'll pardon the pun): http://www.halflife2.net/forums/showthread.php?s=b8f1072ac492b10d76623316a4f20739&threadid=1298

Wow.. is Valve actually going to get the IHVs to do their work for them?
 
ZoinKs! said:
Wow.. is Valve actually going to get the IHVs to do their work for them?

I don't see it that way. I'd rather applaud that kind of flexibility in a game engine with respect to future hardware, since the graphics card's capabilities can be exploited to a higher degree.
 
Problem is, we've heard it all before. Need I mention Shiny?

Every time someone has promised us a scalable engine, we have seen one of two things:

1) The entire game is designed with hi-res artwork, and the engine is supposed to "scale" it on the fly (progressive geometry, etc.), but the result is still poor performance plus artifacts like polygon popping, or worse. (e.g. Enter the Matrix anyone? Messiah?)

2) The entire game is designed with moderate levels of artwork, but the developer has the ability to drop in and sprinkle a few extra special effects, used sparingly.

#2 has been the most successful. How many DX8 games have we seen that were really DX6 games, except for a bump map here or there, or some shiny surfaces, all added as an afterthought?

There is a third option, the one you can be sure will succeed:

3) do all the artwork twice (have fallbacks for everything)


I'm just a little bit skeptical of claims of games designed with content for uber-devices not yet invented, which then scale back automagically and algorithmically for whatever device they are being played on.

I haven't seen many people pull it off.
 
Exxtreme said:
Hehe, but what do you do when a pixel shader effect produces very strong aliasing? SSAA is then the only method to reduce it. Neither anisotropic filtering nor the texture filter will help you reduce the aliasing. The developers should keep this in mind when they develop their shader effects.

I asked this very question a month or so ago. The answer would appear to be that antialiasing will have to be built into the shader itself.

If the math is amenable, one can calculate the integral of the shader value over the region covered by the current pixel, and use that to determine the final color value. (Rather than just calculating the value of the shader at a single point, e.g. at the center of the pixel region.)

If the math for analytically determining the integral doesn't work out, one can calculate the "feature-size" of the shader value and, as it approaches the Nyquist limit (where aliasing will occur), blend with a precomputed "average" value for the shader.

These techniques are how offline procedural shading systems (e.g. RenderMan) handle the problem of shader aliasing. (There may be other methods as well that I didn't find in my search.) There wouldn't seem to be any reason why hardware shaders would do it any differently.

Another way of putting this: there doesn't seem to be any cheap way to antialias shaders purely in hardware. Still, these in-shader methods should be much more efficient than supersampling complex shaders.

The post I wrote about it back then
The SIGGRAPH presentation I took most of my info from
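For what it's worth, here's a minimal sketch of the second technique (fade to average). The pattern, constants and function names are hypothetical - this is just to make the idea concrete, not code from any actual shipping shader:

Code:
#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical procedural pattern: hard black/white stripes along x.
float stripes(float x, float frequency) {
    return std::fmod(x * frequency, 1.0f) < 0.5f ? 0.0f : 1.0f;
}

// Antialiased version: estimate how many stripe periods fall inside one pixel
// (the "feature size"); as that approaches the Nyquist limit, blend the exact
// value toward the pattern's precomputed average so it fades out rather than
// aliasing.
float stripesAA(float x, float frequency, float pixelWidth) {
    const float average = 0.5f;                      // precomputed mean of the pattern
    float periodsPerPixel = frequency * pixelWidth;  // feature size relative to one pixel
    // Under ~0.5 periods per pixel we are below Nyquist; by ~1.0 the pattern
    // would alias badly, so we show only the average.
    float fade = std::clamp((periodsPerPixel - 0.5f) / 0.5f, 0.0f, 1.0f);
    return stripes(x, frequency) * (1.0f - fade) + average * fade;
}

int main() {
    // Same point on screen, pattern getting finer: the AA version settles on
    // the average instead of flickering between 0 and 1.
    for (float freq = 10.0f; freq <= 1000.0f; freq *= 10.0f)
        std::printf("freq %6.0f: raw %.1f  antialiased %.2f\n",
                    freq, stripes(0.37f, freq), stripesAA(0.37f, freq, 1.0f / 1024.0f));
    return 0;
}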
 
Ilfirin said:
ZoinKs! said:
Gabe Newell said:
The way Source is designed, hardware manufacturers can update materials to take advantage of new hardware as it comes out by shipping updates (probably via Steam). So if they come out with a 512 MB card or double the number of instructions possible in a pixel or vertex shader, then customers who have that card can be updated to take advantage of that.
Source (if you'll pardon the pun): http://www.halflife2.net/forums/showthread.php?s=b8f1072ac492b10d76623316a4f20739&threadid=1298

Wow.. is Valve actually going to get the IHVs to do their work for them?

It was my impression that this sort of thing happens all the time, particularly on higher-profile games. That is, that the ISV lets developer relations add GPU-specific code to their game, whether it's a special code path to work around performance issues with a particular card, or various add-in effects to show off their cards' new features. Obviously the developer makes the decision about what code goes in the final game, but this sort of thing--particularly when asked--is what devrel does.
 
I have no inside knowledge of HL2, but I'd like to point out that shaders do not have to be visible on the screen to be there. Typically everyone thinks of pixel shaders, but HL2 could have a lot of DX9 vertex shaders and screen shots would never show this.
 
3dcgi said:
I have no inside knowledge of HL2, but I'd like to point out that shaders do not have to be visible on the screen to be there. Typically everyone thinks of pixel shaders, but HL2 could have a lot of DX9 vertex shaders and screen shots would never show this.

Correct. Shaders don't necessarily mean awesome IQ. They can be used to offload calculations from the CPU and speed the game up, or to save VRAM, e.g. procedural textures.
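A toy example of the "save VRAM" point, just to make it concrete (the checkerboard and its parameters are made up - real procedural materials would be noise, wood, marble and so on):

Code:
#include <cmath>
#include <cstdio>

// Toy procedural texture: a checkerboard computed purely from UV coordinates.
// Evaluated in a pixel shader this costs a handful of ALU ops per pixel and
// zero video memory, instead of storing and fetching a bitmap.
float checker(float u, float v, float tiles) {
    int cu = static_cast<int>(std::floor(u * tiles));
    int cv = static_cast<int>(std::floor(v * tiles));
    return ((cu + cv) & 1) ? 1.0f : 0.0f;   // alternate dark/light cells
}

int main() {
    // Print a small grid of samples - no texture data is stored anywhere.
    for (float v = 0.0f; v < 1.0f; v += 0.125f) {
        for (float u = 0.0f; u < 1.0f; u += 0.125f)
            std::printf("%c", checker(u, v, 4.0f) > 0.5f ? '#' : '.');
        std::printf("\n");
    }
    return 0;
}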
 
DemoCoder,

(I don't actually know if the above was in reply to my post)

I don't disagree. Yet I still prefer the flexibility present in the Serious Sam engine as an example, compared to many other engines out there.
 
Re: reply

Striker said:
(And we can safely rule out GF FX 5200 from 'DX 9' since there have been numerous debates concerning whether they are 'compatible' or 'compliant'...)

There were numerous such debates in the months before NV34 was released. There certainly aren't any now.

DX9 compliance is entirely a matter of feature support, and the 5200 has exactly the same DX9 feature support as the 5900 or any other NV3x card. There is absolutely no question that the hardware is DX9 compliant. NV3x may lack support for some optional DX9 features like MRTs, but by the same token R3x0 lacks support for different optional DX9 features like PS 2.0_x, FP32, and partial precision. Some of Nvidia's drivers appear to be operating outside the bounds of the DX9 spec, but again that's not the hardware's fault, and it is true of the entire GF FX line.

The "only" problem with the 5200 is that it's slower than a tree sloth on Valium glued to the side of a barn. Well, that's fine, but it doesn't make it any less DX9 compliant. You could probably live out a full and rewarding life in the time it takes for the DX9 refrast to run through a complete 3DMark03 run, but no one is claiming that the refrast isn't DX9 compliant!

Striker said:
It's promising to me that ATI followed the typical DX 9 spec as much as possible so far (in this DX 9 generation) and thus can enable the feature Valve calls for in its drivers. If nVidia chose to follow a 'semi DX9, semi DX 8' route in their hardware, they are to blame.

Ok. First of all, this comment seems to misunderstand the DX standards-setting process. You make it seem as if MS sits in a room on its own and comes up with some big list of features which they hand to Nvidia and ATI, who then run off and try to design a part that conforms to that list. In reality, design cycles for a substantially redesigned GPU core run on the order of 3 years, and ATI and Nvidia would both have had the rough featuresets of NV3x/R3x0 set in stone well before MS released the DX9 standard. Instead, what happens is that Nvidia and ATI (and whoever else) go to MS with the rough featuresets of their upcoming cores, and try to convince them to support those features in the API. MS chooses some subset of the features, based in part on what the IHVs will be supporting, and in part on what the IHVs can convince MS is worth supporting. The IHVs can then tweak their designs to best support the details of the upcoming spec. But it's generally too late to design in support for substantial new features, or to rebalance the part to better fit the performance characteristics implied by the spec.

There's no denying that in terms of implementation, R3x0 is a much better "DX9 oriented" design than NV3x. One might say fairly that the implementation of NV30-34 is "semi DX9, semi DX8," because of the greater resources available for PS 1.1-1.3 shaders than PS 1.4+. But this is only a matter of performance issues, not of feature support. In terms of DX9 feature support, NV3x is at least on par with R3x0, and if you concentrate specifically on "forward-looking" DX9 features--ones that are not required for PS/VS 2.0 but will be for PS/VS 3.0--NV3x is clearly ahead. It is thus something of an anomaly that R3x0 has hardware support for centroid multisampling while NV3x does not. (Of course it's not so strange in the context of R3x0's much more capable multisampling implementation.)

In any case, it's no more a matter for praise or blame (in R3x0's context as a PS/VS 2.0 part) than NV3x's forward-looking featureset. I mean, at least NV3x's PS 2.0_x features can be enabled in DX; centroid sampling on the R3x0 cannot be, which drastically reduces its usefulness.

Striker said:
It's purely some hardware issue, and they never said nVidia can't try to fix the thing (they could emulate it or produce a similar result I suppose, but they won't be able to 100% fix it, hardware restrictions apply). Valve is a company wanting to sell games, remember? They wouldn't leave nVidia users high and dry, but I suppose AA will remain strictly DX9 related in that title, and since the NV3x architecture has many flaws, you can't blame Valve for that.

It seems doubtful that NV3x could be made to support centroid sampling. NV3x's MSAA implementation appears very hardwired and not very flexible. The only solution I can think of would be to perform multisampling via a pixel shader, which would be extraordinarily inefficient to say the least.

But it's utterly incomprehensible to try to blame Nvidia for not supporting centroid sampling in hardware that doesn't claim to be PS/VS 3.0 compliant. Centroid sampling isn't a part of the PS/VS 2.0 spec. It just isn't. The fact that a PS/VS 2.0 compliant architecture doesn't support it is absolutely in no way a "flaw".
 
DemoCoder said:
Well, it doesn't sound specific to the NV3x, it sounds like all cards < R300 which implement multisampling will have this issue. He didn't mention if the 7500 or 8500 had the hardware.

The 7500, 8500 and 9000 do not support multisampling, so it is not an issue on those chips.

Now this reminds me of the Parhelia: AFAIK it should be immune to this effect, and its FAA is much faster than any SSAA... so the Parhelia might be quite a good card to run HL2 ;)
 
Thanks DaveH and Ilfirin for being voices of reason.

Valve can do as they please with their engine; they are sufficiently reputable to be trusted to make the right design choices.

However, I personally fail to see why the texture packing tradeoff is necessary; it strikes me that the worst-case scenario is a sufficiently large loss in performance that the small average gains seem a poor trade. Particularly when we know that vanilla MSAA is impossible.
 
Texture packing should not have any significant performance downside unless the hardware texture caching is somewhat broken. If only a small portion of a texture is used, only that portion and small overspills should be read into the cache.

Reducing texture state changes can be a big win. It enables several other optimisations as well (relaxing sort-by-material enables more aggressive sort-by-depth, for example). How much this helps is very dependent on the engine - an engine written around the assumption of fewer state changes could have a huge performance upside. One does presume Valve know what they are doing...
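For anyone trying to picture what the packing actually involves: the usual trick is to place several source textures in one big atlas and remap each surface's UVs into its sub-rectangle, with a small border ("gutter") so bilinear filtering doesn't sample a neighbour's texels. A minimal sketch - the struct, padding value and function name are my own guesses, not anything Valve has described:

Code:
#include <cstdio>

// Where one packed sub-texture lives inside the atlas, in atlas UV space.
struct AtlasEntry {
    float offsetU, offsetV;  // top-left corner of the sub-rectangle
    float scaleU,  scaleV;   // width/height of the sub-rectangle
};

struct UV { float u, v; };

// Remap a surface's local 0..1 UVs into the atlas, shrinking the usable range
// by a gutter so a bilinear/trilinear footprint stays inside the rectangle.
UV remapToAtlas(UV local, const AtlasEntry& e, float gutterTexels, float atlasTexels) {
    float g = gutterTexels / atlasTexels;  // gutter expressed in atlas UV units
    UV out;
    out.u = e.offsetU + g + local.u * (e.scaleU - 2.0f * g);
    out.v = e.offsetV + g + local.v * (e.scaleV - 2.0f * g);
    return out;
}

int main() {
    // A hypothetical 256x256 tile packed into the top-left quarter of a 1024x1024 atlas.
    AtlasEntry brick = { 0.0f, 0.0f, 0.25f, 0.25f };
    UV centre = remapToAtlas({ 0.5f, 0.5f }, brick, 4.0f, 1024.0f);
    std::printf("local (0.5, 0.5) -> atlas (%.4f, %.4f)\n", centre.u, centre.v);
    return 0;
}

Mipmapping is the awkward part (lower mip levels of an atlas average neighbouring sub-textures together), which is presumably where the point-sampling question below comes from.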
 
So will HL2 use point-sampling and no mipmapping for textures, since bilinear/trilinear filtering also causes texture samples to be taken from nearby areas?

It will be very ugly if they do that...
 
DemoCoder said:
What is Valve's targeted FPS and resolution for the bulk of their expected buyers?
First time posting. I have visited these forums regularly over the past few years - a refreshing change from other sites. While I don't know nearly as much as other folks, I do know this:

DemoCoder == Troll

Pretty much the most dangerous kind (knows enough to inflame and be dangerous). DemoCoder - you know a friggin lot more than I do on these subjects... so why can't you stop polarizing issues and just speak to the matter at hand, rather than posing (silly) rhetorical questions that you KNOW no one has the answer to?

/me fears for the continued high level of discussion of the forums...
 