Something wrong with the HL2 Story

Doomtrooper said:
I may be wrong, but wasn't the same issue somehow corrected in Splinter Cell? (Not that it did any good -- FSAA even on an R9800 Pro is still unplayable due to extremely low framerates in night-vision mode... :()

DemoCoder said:
Well, in general, it will break "cinematic" style effects
Like motion blur, DOF and so? I'd honestly prefer FSAA over these...

DemoCoder said:
Time for the industry to move on from MSAA and find something better that doesn't force programmers to code around it, but is transparent to the application I guess.

Won't happen any time soon, I'm afraid... NV50/R500 maybe...
 
Most of this is from the edited out portion of my original post. I edited it out because I thought I came up with a reason for why they wouldn't be able to simply unpack the textures. After reconsidering, even that wasn't a valid reason (see last part of this post).

Some using declarations (these terms have other definitions, but these definitions are what I mean in this post):
Texture packing - packing a bunch of small textures into a big texture
Texture pack - the big texture

Texture packing is usually done as a lukewarm performance optimization to cut down on the # of batches (and thus increase the number of triangles per batch), which decreases CPU overhead. I say lukewarm because most of the time it only offers a relatively small gain (though it does allow for better scaling on future hardware), so it usually isn't something worth making major sacrifices for.

The primary problems with it are:
A) AA won't work
B) You can actually end up hurting performance quite easily, since you're packing a bunch of small textures into one big 4096x4096 or 2048x2048 texture. That means if all you need is one 64x64 texture from a texture pack, you still have to load the entire thing in.

In cases where you can ensure that almost every object is both unique, and confined to a small area, that's when you get the best performance boost. But if you have a decent physics engine (as HL2 does) that will never happen. It's very possible that the user will bring a whole bunch of objects from different texture packs into one room, so that not only are you not saving on batching, you're also transferring several thousand times the amount of data you're using.

The fact that the optimization has a pretty bad, and not all that uncommon, worst-case scenario, and only a mildly beneficial best-case scenario (when levels are so small that you can fit all the textures into a single pack) is reason enough for most developers not to do it (the technique isn't anything new, after all). You could very easily end up hurting performance with this "optimization", and you're losing AA too -- not a very good deal IMO.
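To put rough numbers on the worst case above, here's a back-of-the-envelope sketch. The sizes are illustrative only (uncompressed RGBA8, no mipmaps), not anything from Valve's actual assets:

```python
# Rough illustration of the worst case described above: pulling one small
# texture out of a large pack forces the whole pack across the bus.
# Assumes uncompressed 32-bit RGBA textures with no mipmaps.

BYTES_PER_TEXEL = 4  # RGBA8

def texture_bytes(width, height):
    """Uncompressed size of a single texture in bytes."""
    return width * height * BYTES_PER_TEXEL

needed = texture_bytes(64, 64)           # the one sub-texture actually used
transferred = texture_bytes(4096, 4096)  # the whole pack it lives in

waste_factor = transferred // needed
print(f"needed: {needed} B, transferred: {transferred} B, "
      f"overhead: {waste_factor}x")
# -> needed: 16384 B, transferred: 67108864 B, overhead: 4096x
```

That 4096x factor is where the "several thousand times the amount of data you're using" figure comes from.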

Texture packing of this kind is usually (and should be) done automatically, transparent to the content developers (artists). It's a pretty simple process, and just as easy to undo. All you're doing is making every object that references one of the textures reference the texture pack instead, then modifying the texture coords to the correct position within the texture pack. Undoing this is simply a matter of taking one of those textures out of the texture pack, creating it as a separate texture, and making every mesh that uses that texture's texcoords point to the separate texture with a (0,0) offset. After that, no more AA problems.
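The remapping described above is just an offset-and-scale on the UVs, with unpacking as the exact inverse. A minimal sketch (all names here are illustrative, not from Valve's engine):

```python
# Minimal sketch of the texcoord remapping described above. A sub-texture
# occupying a rectangle of the pack maps a mesh's (u, v) in [0, 1] into
# the pack's coordinate space; unpacking applies the inverse so the mesh
# can use the stand-alone texture with a (0, 0) offset again.

def pack_uv(u, v, offset_u, offset_v, scale_u, scale_v):
    """Map a per-texture UV into the texture pack's UV space."""
    return (offset_u + u * scale_u, offset_v + v * scale_v)

def unpack_uv(pu, pv, offset_u, offset_v, scale_u, scale_v):
    """Inverse mapping: recover the original per-texture UV."""
    return ((pu - offset_u) / scale_u, (pv - offset_v) / scale_v)

# A 64x64 sub-texture placed at texel (512, 256) inside a 4096x4096 pack:
params = (512 / 4096, 256 / 4096, 64 / 4096, 64 / 4096)
packed = pack_uv(0.5, 0.5, *params)
assert unpack_uv(*packed, *params) == (0.5, 0.5)
```

Running this over every mesh's texcoords (in either direction) is all the "toggle in the config settings" would really have to do.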


But it has become clear to me that Valve isn't doing it as an optimization, but as a way of reducing redundant wrapping of textures. Stuff like having the exact same floor tile repeated a thousand times across the floor could easily benefit from a texture pack of different floor tiles (of the same type, of course) that could be readily substituted in to provide a more natural-looking floor with cracks and such. But I still fail to see why this ever leaves the content-creation side of things - there's no reason to do all the offsetting in vertex shaders if the offset isn't going to change during normal operation (and I don't think it should, but Valve artists might disagree with me on that). And if it's not in the vertex shaders, a simple toggle in the config settings to let the user force texture unpacking should solve the problem and require minimal time and effort on Valve's part.

But if for some reason Valve does have all the texture packing stuff in vertex shaders and such (and I hear they have quite a few different shaders), it's understandable that they don't have time to go back through everything and provide a separate, un-packed version of the same shader. But then the question comes up of why they didn't think of this all along... all these problems have been well known for about as long as HL2 has been in development, if not longer.


But then, I know Valve knows all of this, so I'm quite bewildered..
 
DemoCoder said:
OpenGL guy said:
Unlikely. Most of HL2's environments I see in the videos are traditional DX7 style multitexture "shading", with the exception of a few, like water, and some demo surfaces (church colored glass, "Predator" cloak effect) I doubt they would exceed the limitations of PS1.0. I see nothing that suggests DX9 level shading.
I do.

Example? Only thing I can remotely see approaching it is some of the fire and water effects.
You'll have to wait for comments from Valve.
For it to be relevant, it would have to be something quite common in the artwork to cause such a bottleneck. (Not the old ISV trick of sugaring a few showoff effects on top of old game artwork to show off one or two materials.) I find it hard to believe that a game 4 years in the making would have a significant amount of content designed to be experienced on an API that just shipped at the beginning of the year.

I am willing to entertain the thought that a small fraction of the materials library might have some experimental DX9-only effects sprinkled about. Unless of course you're talking about effects which convert to multiple passes on lower hardware by design, and which, by virtue of newer HW (DX9), can now automagically run in a single pass. However, if Valve's game doesn't run acceptably on DX7 and DX8 hardware using multipass for these effects, they will be in trouble anyway. If they want to make money, their game will have to run on existing hardware. I highly doubt they can force people to upgrade to a 9800 just to get playable framerates.
Entertain what you like, when the game is here you can see for yourself.
Nevertheless, my point stands that if the game can run at 1600x1200 on a DX7 or 8 card, it can run at 800x600 supersampled.
And who said it would run at such resolutions on a DX7 or 8 card?
 
Are you claiming that HL2 will only run acceptably on DX9 hardware? I'd like to get you on the record on this.

Are you claiming that HL2 was designed from the beginning for DX9? Seems unlikely that a project that started in what, 1999, would be designed specifically for 3 API revisions later.

Doom3 isn't even a DX9 game.
 
DegustatoR said:
And since when does FSAA need API support? Don't we just "force" it from the driver?
Forcing AA from the driver is the worst possible solution and should only be used if the application doesn't support AA.
Now. IIRC HL2 has something like DX6 in its minimum video specs. And what about FSAA at this level of graphics? DX7? DX8? MSAA worked fine for those until today.
Did it? Maybe you should look around more carefully.
From my point of view it IS an issue of Valve, not NVIDIA. It is their choice to dump the majority of FSAA supporting hardware for something named "packed textures"...
Which has been used by many, many games.
DaveBaumann

The title is most likely to be shader limited - you probably don't want to be doing supersampling in this situation.

Why? Shader limited means not fillrate or bandwidth limited, which means you can use SSAA if you want - 2x1, for example. I doubt it'll be a very big performance hit on NV35 hardware.
Just 2x performance hit if you are fillrate limited.
OpenGL guy

Lots of games already have issues with MSAA

Lots? I can only think of Splinter Cell...
Maybe you should look around more carefully. Look for comments about artifacts.
And what will this technique which conflicts with MSAA bring us? Has anyone seen anything very beautiful or special in HL2's texturing?
Yes, the game is very cool looking. As far as MSAA problems go, read Valve's comments.
 
I think Ifrin makes a good point about how a "materials library"-based engine like HL2 could suffer performance-wise if the packed textures are being used as they historically were.

After all, Valve is bragging about how level designers can pull materials from a rich library and plop them down into a level. If I place Red Barrel A into a level, and I incur the penalty of loading Grey Barrel B and Chrome Barrel C because of the packing, then it isn't exactly an optimization, but a waste of AGP bandwidth and vid mem, loading a much larger texture for a small chunk of it.

The floor texture trick sounds like a better justification. Still, it seems to be giving up a lot for it. From what we've seen of the E3 video, HL2 looks like it has great content, level design, AI, physics, etc., but it isn't exactly state of the art when it comes to lighting and shading. I'd gladly turn off a few optimization tricks if I could get AA running, even if I have to drop a few showoff DX9 shaders on some materials.
 
I have a feeling OpenGL Guy has already solved the problem with 9500+ cards. :)

If it hasn't been solved already, then he knows what to do and is in the process of doing it. :D

OpenGL Guy, you're a genius and I have 100% faith that you will make FSAA in HL2 work perfectly on the R3xx line of cards.
You're a FSAA god. :LOL:

You made games that use 16bpp work with FSAA by forcing 32bpp on them, which made those games look even better by getting rid of the banding.

I suppose you think outside the box a lot and that gives you an advantage. :)

I keep reading that all you have to do is change the sample pattern on the R3xx cards; if so, there won't be much work to do to fix the issue.

Guys, arguing with OpenGL Guy is pointless; he sure as heck knows all about the problem, as he deals with things like this and most likely has more information than any of us do.
 
I can really only imagine how much ATI had to shell out for this:
Perhaps they didn't pay for it the way you are thinking, eh??

1. ATi has had DX9 cards out for a Year.

2. *Virtually ALL* Dx9 development work for Hl2 has been done on Radeon cards for the last year.

3. HL2 shaders run more than 5x FASTER on R350.

Now does it seem so unreasonable that HL2 would be *optimized* for the Radeon 9800 Pro?? Does it really seem that it is nothing but some desperate ploy with lots of cash?? Because of course ATi couldn't possibly actually *deserve* that distinction...

Apparently only Nvidia does...
 
I happen to know something about this subject so I thought I'd share.

The real bug here, in my opinion, is with Valve, and I'm sure that if they had known when they started HL2 what they know now, this issue with multisample wouldn't exist. There is a fairly simple fix for this if Valve were willing to modify their textures, but I'm guessing they've put too much time into doing the art and validating it to go back and change things around. But before I mention the real fix, let's review multisample and this centroid multisample method being talked about.

The reason standard multisample doesn't work so well with packing multiple textures into a single texture is that the sample pattern can cause nearby textures to bleed together at the edges of polygons if texture coordinates go to the very edge of a sub-texture for that polygon. This means that with standard multisample it is possible for the edges of polygons to get the texture color from a neighboring sub-texture rather than the sub-texture they want. With the centroid version of multisample, the sample pattern is modified at the edges of triangles to ensure you don't grab texel info from the wrong sub-texture. If you think about this, you will realize that the centroid version of multisampling actually introduces a different type of corruption, since a person would expect a consistent sampling pattern. I think the centroid version of multisample is a hacky way to handle issues developers don't want to deal with. If I had my way it never would have happened, but Microsoft felt differently, and it will be coming to a DX near you in the future.
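The centroid idea above can be shown with a toy model. This is a deliberate simplification of what real hardware does (actual implementations vary in where exactly they place the interpolation point); the sample positions and coverage below are made up:

```python
# Toy illustration of centroid sampling as described above: with ordinary
# MSAA the texture coordinate is evaluated at the pixel center, which can
# fall OUTSIDE a polygon that only partially covers the pixel (and so can
# land in a neighboring sub-texture). Centroid sampling instead evaluates
# at the centroid of the covered samples, which is guaranteed to be
# inside the polygon.

def interpolation_point(sample_positions, covered, centroid=False):
    """Where the texcoord gets evaluated for a partially covered pixel."""
    if not centroid:
        return (0.5, 0.5)  # pixel center, even if the polygon misses it
    inside = [p for p, c in zip(sample_positions, covered) if c]
    n = len(inside)
    return (sum(x for x, _ in inside) / n, sum(y for _, y in inside) / n)

# 4x grid pattern; only the two left samples are covered by the polygon.
samples = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
covered = [True, False, True, False]
print(interpolation_point(samples, covered))                 # (0.5, 0.5)
print(interpolation_point(samples, covered, centroid=True))  # (0.25, 0.5)
```

The shift from (0.5, 0.5) to (0.25, 0.5) on partially covered pixels only is exactly the "inconsistent sampling pattern" being objected to.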

Now there are two ways for Valve to fix the problem more correctly than using the hacky centroid fix. The first is to tweak the texture coords a little so the edges of triangles don't use the edge of a sub-texture. This isn't a very good solution, because you can lose the outermost border pixels of each sub-texture, and it's sometimes difficult to figure out how much to fudge the texture coords. The good solution would be to put a border around each sub-texture of the same color as its outermost pixels. This would fix the bleeding together of sub-textures when using multisample. The problem is that Valve would have to go and redo all their textures, and they probably don't have time for that.
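The border fix described above is simple edge replication. A minimal sketch, treating a texture as a plain 2D list of pixel values (real tooling would operate on image files, and the border would need to be at least as wide as the sample pattern can stray):

```python
# Sketch of the border ("gutter") fix described above: before placing a
# sub-texture into a pack, surround it with copies of its outermost
# pixels, so samples straying just past a polygon edge still land on the
# right colors instead of a neighboring sub-texture.

def add_border(texture, border=1):
    """Pad a texture by replicating its edge pixels outward."""
    h, w = len(texture), len(texture[0])
    padded = []
    for y in range(-border, h + border):
        cy = min(max(y, 0), h - 1)                 # clamp row into range
        row = texture[cy]
        padded.append([row[min(max(x, 0), w - 1)]  # clamp column too
                       for x in range(-border, w + border)])
    return padded

tex = [[1, 2],
       [3, 4]]
print(add_border(tex))
# The 2x2 texture becomes 4x4, each edge pixel duplicated outward.
```

This is the same clamp-to-edge behavior the hardware would give a stand-alone texture, baked into the pack itself, which is why it restores correct results without touching the renderer.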
 
Hellbinder said:
I can really only imagine how much ATI had to shell out for this:
Perhaps they didn't pay for it the way you are thinking, eh??

1. ATi has had DX9 cards out for a Year.

2. *Virtually ALL* Dx9 development work for Hl2 has been done on Radeon cards for the last year.

3. HL2 shaders run more than 5x FASTER on R350.

Now does it seem so unreasonable that HL2 would be *optimized* for the Radeon 9800 Pro?? Does it really seem that it is nothing but some desperate ploy with lots of cash?? Because of course ATi couldn't possibly actually *deserve* that distinction...

Apparently only Nvidia does...

Are you out of your mind? Do you have any idea how big an impact an IHV's logo and the words "optimized for" have on a title as big as Half Life 2? Marketing execs at ATI and NVIDIA would give up their first-born children for this.

While the actual programmers at Valve are most likely singing the praises of the R3xx (deservedly; I have no idea where your NVIDIA comment came from - knee-jerk reaction?), they don't call the business shots.

I bet many moons ago that ATI and NVIDIA were banging on Valve's door, showing their roadmaps, trying to get their cards in the development workstations. Co-promotion (ie: sharing E3 booth space), dev resource assistance (ie: donating hardware, dev relations), and even just monetary subsidies are the norm with a SKU this big.

Think about it: a lot of gamers, many uninformed, will be saying "I need to upgrade my video card to play Half Life 2, what should I get?". Well, a bigass sticker that says "optimized for Radeon 9800 Pro" will definitely help. And again, sun-deprived programmers have nothing to do with the business and marketing of their game.

It could very well have been a mutual decision between Valve and ATI, but don't think an honour such as plugging a specific IHV's video card on the biggest PC game in many many years comes free.
 
DemoCoder said:
Are you claiming that HL2 will only run acceptably on DX9 hardware? I'd like to get you on the record on this.
Where did I say such a thing? What I was questioning is why you seem to think that 1600x1200 will be playable on a DX7 or 8 card.
Are you claiming that HL2 was designed from the beginning for DX9? Seems unlikely that a project that started in what, 1999, would be designed specifically for 3 API revisions later.
Why are you trying to put words into my mouth?
Doom3 isn't even a DX9 game.
What does Doom 3 have to do with HL2?
 
Where did I say such a thing? What I was questioning is why you seem to think that 1600x1200 will be playable on a DX7 or 8 card.

Well, if someone defines playable as <= 1 fps, then all they need is a DX7 card and a lot of system memory to store textures, geometry, etc... Also, there are DX7 AGP cards, so I guess it won't be a problem for that specific person. :LOL:
 
Enbar said:
I happen to know something about this subject so I thought I'd share.
Let's see how much is speculation and opinion and how much is fact.
The real bug here, in my opinion, is with Valve, and I'm sure that if they had known when they started HL2 what they know now, this issue with multisample wouldn't exist.
Speculation.
There is a fairly simple fix for this if Valve were willing to modify their textures, but I'm guessing they've put too much time into doing the art and validating it to go back and change things around.
Speculation.
But before I mention the real fix, let's review multisample and this centroid multisample method being talked about.

The reason standard multisample doesn't work so well with packing multiple textures into a single texture is that the sample pattern can cause nearby textures to bleed together at the edges of polygons if texture coordinates go to the very edge of a sub-texture for that polygon. This means that with standard multisample it is possible for the edges of polygons to get the texture color from a neighboring sub-texture rather than the sub-texture they want. With the centroid version of multisample, the sample pattern is modified at the edges of triangles to ensure you don't grab texel info from the wrong sub-texture.
Fact.
If you think about this, you will realize that the centroid version of multisampling actually introduces a different type of corruption, since a person would expect a consistent sampling pattern.
It's called "filtering". Obviously you would only use such a technique where such filtering would not be noticeable.
I think the centroid version of multisample is a hacky way to handle issues developers don't want to deal with. If I had my way it never would have happened, but Microsoft felt differently, and it will be coming to a DX near you in the future.
All speculation and opinion.
Now there are two ways for Valve to fix the problem more correctly than using the hacky centroid fix. The first is to tweak the texture coords a little so the edges of triangles don't use the edge of a sub-texture. This isn't a very good solution, because you can lose the outermost border pixels of each sub-texture, and it's sometimes difficult to figure out how much to fudge the texture coords. The good solution would be to put a border around each sub-texture of the same color as its outermost pixels. This would fix the bleeding together of sub-textures when using multisample. The problem is that Valve would have to go and redo all their textures, and they probably don't have time for that.
None of this works anyway. Why? Because you can have a polygon with a steep enough slope that any sample outside the polygon will touch texels from other packed textures.

Why do you think it's hacky? Because it's not supported by all IHVs? Maybe you think MRTs are hacky too. :rolleyes:
 
I bet OpenGL Guy thinks METs are hacky. (They are, as a poor man's MRT.) I'd still like it if DX9 had pack/unpack style instructions. Really comes in handy if you need to stuff a bunch of data into a small space.
 
hELLBINDER

Hellbinder is entitled to his own fantasies, but I do wish he wouldn't keep acting like he is speaking facts.

How on earth can he know that HL2 shaders run 5x faster on the R350? How?

But it's all senseless talk anyway. My original thread does not argue whether the R3xx may run HL2 better. That's been argued to death in other threads. My point is that it's a poor business decision by Valve to alienate the vast bulk of their target customers by optimising for a minority product.

Let's also not forget that there are *some* consumers with Kyro or Matrox products - what about them?

If Valve no longer supports OpenGL, would a port be possible?

Valve should think about more than just supporting ATI; feeling sorry for the underdog is one thing, letting down most of your customers is another. Assuming, of course, that your customers - in this case NVIDIA users - would like to have FSAA support.

What were Valve thinking of?
 
Re: hELLBINDER

palmerston said:
My point is that its a poor business decision by valve to alienate the vast bulk of their target customers by optimising for a minority product.

According to Valve's speakeasy poll, the vast bulk of their customers can't do MSAA anyway, and even most of the rest would probably do it too slowly.

Only a fraction of people have top-end cards, and even of those only some use FSAA. Many of even the better CS players don't actually have a clue what FSAA is and run at 640x480/800x600 anyway, even those with R300 cards.
 
Re: hELLBINDER

palmerston said:
But it's all senseless talk anyway. My original thread does not argue whether the R3xx may run HL2 better. That's been argued to death in other threads. My point is that it's a poor business decision by Valve to alienate the vast bulk of their target customers by optimising for a minority product.

What makes you think Nvidia users who want AA are in the majority? Given the current performance of AA on Nvidia cards, the small number of people who have bought high-end Nvidia cards (because of poor availability of DX9 parts), and Nvidia's assertion that high-end users are only 2 percent of their market, what makes you think that Valve are optimising for a minority product?

You keep saying "minority" and "vast bulk" but you are not considering anything more than "Nvidia = big, ATI=small". The market is more complex than that.

Chances are Valve have taken the view that only those users with high-end DX9 parts would want to turn on all the eye candy and still want AA. Those users with high-end cards are a tiny fraction of their target audience. It's Nvidia users that keep saying they don't need AA or IQ, that speed is all that matters. It's ATI that bothered to put centroid support in their hardware when Nvidia did not.
 
Hi Baron,
The Baron said:
DemoCoder said:
The Baron said:
Can NV3x cards even use supersampling without any multisampling? I remember the NV2x cards could, but I've never seen it with an NV3x.

All cards can do supersampling as long as they have the memory for a large framebuffer.
Well yeah, but is there an option available through RivaTuner or something of the kind right now?
Yes--you can use aTuner for enabling SSAA in D3D. Only 2x2 SSAA is supported, mind.

93,
-Sascha.rb
 
DemoCoder said:
Example? Only thing I can remotely see approaching it is some of the fire and water effects.

For DX9 level boards this is probably going to be one of the most shader intensive titles to date. It is riddled with shader code.

DegustatoR said:
DaveBaumann

The title is most likely to be shader limited - you probably don't want to be doing supersampling in this situation.

Why? Shader limited means not fillrate or bandwidth limited, which means you can use SSAA if you want - 2x1, for example. I doubt it'll be a very big performance hit on NV35 hardware.

Actually, shader limited (in terms of pixel shaders) does actually mean fill-rate limited. Think of the difference between MSAA and SSAA. With SSAA, all pixel operations need to be duplicated for each subsample; ergo, for 2X SSAA each pixel shader operation will need to be carried out 2X per pixel.

This is why MSAA is very important to have with shader-limited titles - the shader ops are still only carried out once per pixel (inside a poly), and if you are shader limited then you have plenty of bandwidth to spare.
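The cost difference above can be made concrete with a trivial count. Numbers are illustrative only (edge pixels under MSAA do incur some extra work in practice, which this toy model ignores):

```python
# Back-of-the-envelope comparison of per-pixel shader work under MSAA vs.
# SSAA, following the reasoning above: MSAA runs the pixel shader once
# per covered pixel regardless of sample count, while SSAA runs it once
# per subsample.

def shader_invocations(pixels, samples, supersampled):
    """Pixel-shader executions for a frame of `pixels` shaded pixels."""
    return pixels * samples if supersampled else pixels

frame = 1024 * 768
print("no AA:  ", shader_invocations(frame, 1, False))
print("4x MSAA:", shader_invocations(frame, 4, False))  # same shader cost
print("4x SSAA:", shader_invocations(frame, 4, True))   # 4x shader cost
```

So on a shader-limited title, 4x MSAA leaves the shader workload untouched while 4x SSAA quadruples it, which is the whole argument.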
 