A question about the Unreal Engine 3 demo shown at E3.

Chalnoth said:
Well, I'm sure UE2 can support VS/PS 3.0, because it's pretty trivial to add such support. So it seems pretty apparent that future games released using UE2 technology will use SM3, but this has nothing to do with Epic and their current games.
Well, it's trivial to add support, yes, but the entire materials system would have to be altered as well. If an existing game were patched to support SM3, the patch would also have to include new versions of all the materials. It's just not worth the time and effort that could be better spent fixing bugs or working on an entirely new game.

I think it's possible that a game developer would add SM 3.0 to the engine for a new game (Ion Storm added SM 1.x), but I don't think Epic would bother adding it to the licensing package. After all, that's what UE3 is for.
 
Diplo said:
Perhaps, just perhaps, that is because the NV3x cards ran UT2003 fine? The NV3x cards only did badly in games that heavily used DX9 features, which wasn't the case with UT2003. So what is wrong with them telling people with NV3x cards that UT2003 will run fine on them? Should their marketing guy be lying and saying it doesn't? Are you mad?

The point is that people in the forum were giving their experiences in a thread. The majority of posters were commenting on how much better the IQ and framerate were with ATI cards. Then, out of nowhere, Epic's business guy (not a programmer or technical person) pops up just to tell us how happy they are with Nvidia products, at the same time as customers are recommending a competitor.

Even recently at B3D, Mark Rein made a point of registering and posting when rumours started surfacing that the UE3 demo was running better on ATI products.

It's not difficult to read between the lines: for Epic, TWIMTBP is more than just a bit of advertising for their publisher. It's obviously a closer relationship, with regular public support from Epic, even defending Nvidia against adverse views regarding their games.
 
It's the same thing that's going on with Sony.

For EverQuest 2, they keep saying that no video card will be able to run it maxed out. When people ask about the X800 XT, all they'll say is that they aren't ready to discuss cards that aren't on the market yet, and then they go on to say how they're optimizing the game to run on the NV40.
 
Chalnoth said:
Not really. It would just be a new target for the chosen shader language.
That's assuming a shader language is being used in the first place -- well, HLSL, really. For Unreal Engine 2.0, pretty much everything is performed in the fixed function pipeline.
 
FP blending is really important for efficiently doing HDR with multiple light sources.
Absolutely not. Rather than transforming the geometry for each light, simply transform it once and perform all the lighting calculations in the pixel shader. That way you can handle all the lights in just one pass, which is far more efficient.
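To make that concrete, here's a minimal HLSL sketch of the one-pass approach (the light count, names, and simple diffuse model are all assumptions for illustration):

```hlsl
// One-pass multi-light sketch (hypothetical names): the vertex shader
// transforms the geometry once; the pixel shader evaluates every light.
#define NUM_LIGHTS 3              // assumed, fixed at compile time

float3 LightPos[NUM_LIGHTS];      // world-space light positions
float3 LightColor[NUM_LIGHTS];    // light intensities

float4 MultiLightPS(float3 worldPos : TEXCOORD0,
                    float3 normal   : TEXCOORD1) : COLOR
{
    float3 n = normalize(normal);
    float3 result = 0;
    // The HLSL compiler unrolls this loop for SM2 targets, so all lights
    // are handled in one pass instead of one geometry pass per light.
    for (int i = 0; i < NUM_LIGHTS; i++)
    {
        float3 l = normalize(LightPos[i] - worldPos);
        result += LightColor[i] * saturate(dot(n, l));
    }
    return float4(result, 1.0);
}
```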
 
poly-gone said:
Absolutely not. Rather than transforming the geometry for each light, simply transform it once and perform all the lighting calculations in the pixel shader. That way you can handle all the lights in just one pass, which is far more efficient.
You have to re-transform for any kind of shadow calculations.

Anyway, the point is, if you want to do HDR on pre-SM3 hardware, you either suffer massive inefficiencies, resort to significant workarounds, or limit what can be rendered. With SM3 hardware you can use HDR in any situation with optimal performance. Thus more games will implement HDR on SM3 hardware.
 
Yes, you do. But again, you could definitely "accumulate" the shadow factor(s) without blending.

With SM3 hardware you can use HDR in any situation with optimal performance.
That I have to agree with ;)
 
poly-gone said:
Yes, you do. But again, you could definitely "accumulate" the shadow factor(s) without blending.
No, you can't, but you don't need to do that in color buffer space, so that in and of itself doesn't make FP blending necessary.
 
No, you can't, but you don't need to do that in color buffer space, so that in and of itself doesn't make FP blending necessary.
Well, if we take shadow mapping as an example, you could render all the shadow maps in separate passes but compare them in a final pass and combine all their terms together. This essentially means that you can accumulate shadow terms in color space.
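A rough sketch of that final combine pass, assuming two planar shadow maps and made-up names and bias:

```hlsl
// Final combine pass (assumed names/bias): each shadow map was rendered in
// its own earlier pass; the depth comparisons happen here, in the shader,
// so the per-light terms are summed without any framebuffer blending.
sampler2D ShadowMap0;
sampler2D ShadowMap1;

float ShadowTest(sampler2D map, float4 projCoord, float depthFromLight)
{
    float stored = tex2Dproj(map, projCoord).r;
    return (depthFromLight <= stored + 0.001) ? 1.0 : 0.0;   // bias assumed
}

float4 CombinePS(float4 proj0 : TEXCOORD0, float d0 : TEXCOORD1,
                 float4 proj1 : TEXCOORD2, float d1 : TEXCOORD3) : COLOR
{
    // "Accumulate in color space": both lights' shadow factors in one pass.
    float lit = ShadowTest(ShadowMap0, proj0, d0)
              + ShadowTest(ShadowMap1, proj1, d1);
    return float4(lit.xxx / 2, 1.0);   // averaged purely for illustration
}
```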
 
jvd said:
It's the same thing that's going on with Sony.

For EverQuest 2, they keep saying that no video card will be able to run it maxed out. When people ask about the X800 XT, all they'll say is that they aren't ready to discuss cards that aren't on the market yet, and then they go on to say how they're optimizing the game to run on the NV40.

I have to deal with this every day: Nvidia's FUD is working. Every day, without fail, a new thread is made about PS3.0 in EQ2 or whether the X800 will lose to the 6800 because EQ works with Nvidia. (I'm one of the main users of the EQ2 tech forums and probably the best-informed person there, thanks to you guys here at B3D.)
 
poly-gone said:
Well, if we take shadow mapping as an example, you could render all the shadow maps in separate passes but compare them in a final pass and combine all their terms together. This essentially means that you can accumulate shadow terms in color space.

You could do that, but it is a horrible kludge. How many shadowed lights per surface are you going to support? Ya gotta know ahead of time cuz you probably won't have time to make the buffers on the fly. How much texture RAM are you willing to sacrifice for your n shadow buffers at 1024 or 2048 resolution and 24 bits of precision? Don't forget that if you use point lights instead of spotlights, they'll need more than one shadow map (basically a cube shadow map; I'm not sure native support exists for those). Are you going to make a version of each of your shaders for 0...n shadowed light sources?

I never said that FP blending is NEEDED to do shadowed HDR with multiple light sources. I said it is important for doing it efficiently, and I stand by that remark.

Chalnoth said:
With SM3 hardware you can use HDR in any situation with optimal performance.

SM3 is less important than FP blending for HDR, although still useful. Remember that they're separate features even though the only hardware that has either of them happens to have both.
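For the record, a minimal sketch of the FP-blending path being argued for here (all names assumed): each light gets its own cheap pass, and the blend unit, set to additive blending over a float render target, does the HDR accumulation for free.

```hlsl
// Per-light HDR pass (assumed names). The application sets additive
// blending (D3DRS_SRCBLEND = D3DBLEND_ONE, D3DRS_DESTBLEND = D3DBLEND_ONE)
// over a D3DFMT_A16B16G16R16F render target, so the blend unit sums each
// light's contribution in float precision -- the step that needs FP blending.
float3 LightPos;      // this pass's light
float3 LightColor;    // HDR intensity, may exceed 1.0

float4 OneLightPS(float3 worldPos : TEXCOORD0,
                  float3 normal   : TEXCOORD1) : COLOR
{
    float3 l = normalize(LightPos - worldPos);
    float3 contribution = LightColor * saturate(dot(normalize(normal), l));
    return float4(contribution, 1.0);  // summed into the FP16 target by HW
}
```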
 
Diplo said:
No, you actually said: "The fact that they're adding SM3.0 support to a game that doesn't even heavily use DX8 should speak volumes." You stated it was a FACT they were adding SM3 support to a game (by implication UT2004) when you are totally and utterly wrong. Trying to wriggle out of it by changing your statement just makes you look more ridiculous.

Here we go with the bashing. Yes, I'll admit I mistakenly thought they were adding SM3 support to UT2004. I retract my statement.

Oh, I see. If they do add it they are damned, and if they don't they are damned too, eh? What a ridiculous and laughable argument. The fact that they are not adding it to UT2004 in no way makes it 'unuseful'; it's just that it wouldn't be practical, useful or feasible to add it in any significant form to the current engine, given it hardly uses PS1 at the moment. This in no way means it won't be useful in Unreal Engine 3, which makes extensive use of shaders.

My reasoning was that if they do not add SM3 to a future game, and I'm not talking about UE3 (which, btw, doesn't use it either atm), then what does that say about the benefits of SM3? If they had added it to UT2004 then their loyalty to Nvidia would be quite evident. Since this isn't the case and I was wrong, let's just drop it, shall we?

Virtually every game you buy has Nvidia's logo on it somewhere. It's called advertising. The marketing guys give them big money and they stick the advert on the box. Welcome to the world of capitalism.

Yes, virtually every game is TWIMTBP because Nvidia's marketing department is very convincing. They offer companies their help (or possibly other benefits) in exchange for advertising. I find this distressing, as it creates a monopoly. Remember, the general public does not know all the facts. If they see the makers of the game suggest Nvidia's hardware 'plays this game best', they assume it to be true. Far Cry is a TWIMTBP game; do you think it runs better on a GeForce FX than a Radeon 9x00? This only serves to hurt the consumer through false advertising, with the ultimate goal of undermining the competition and putting more money into the pockets of JHH and crew. It's sad that developers are choosing sides in exchange for bribes. Yes, it's the world of capitalism; that doesn't mean it's right.

First, at the time of the first demo back in March, the only hardware they had that could run it was the NV40, since ATI hadn't given them an X800.

The first showing of UE3 was at GDC, at which time Epic did have an X800.

Second, why on earth should they say how well it runs on either card, when the performance is irrelevant given that the engine is still in development and games based on it are 2 years away? Who needs to know, and why? By the time games based on it are released, both cards will be redundant anyway.

Odd that you would say this. I distinctly remember hearing Sweeney say that past cards were unable to run the demo at decent speeds and the 6800 was the first to allow for it, with no mention of the X800. Oh, that's right: because it was at an Nvidia launch. :rolleyes: This sounds like marketing to me. If the sole reason was to display their next engine, they could have demonstrated it to the public in their own presentation, regardless of what hardware they were using. Don't let your obvious bias cloud your judgment.

Sadly, whilst trying to insinuate Epic are biased you in fact do nothing more than show your own bias towards ATI which totally blinds you to reality. These forums really don't need any more people like you. Sorry.

And you think Nvidia needs more people like you? I'll admit I do not like Nvidia, but this has nothing to do with being a blind fanboy. I've followed the graphics industry for years now, and what I've learned in that time is that Nvidia is an unethical and unprofessional business willing to do whatever it takes to stay on top. I will still buy their products if I deem them worthwhile, but I would rather go with an alternative. At this time that happens to be ATI; in the past it was 3dfx.
 
If you guys are going to continue to talk about NVIDIA or ATI or Epic or any other company, or business ethics, or why you like this company and not the other one, I'll lock this thread up. There's another forum for doing what you like to do. Not here.

Understood?
 
How many shadowed lights per surface are you going to support? Ya gotta know ahead of time cuz you probably won't have time to make the buffers on the fly.
About 4 to 5 would be nice. And what do you think is the whole point of shadow mapping? The buffers HAVE to be generated on the fly; we're not talking about projected shadows :)

Don't forget that if you use point lights instead of spotlights, they'll need more than one shadow map (basically a cube shadow map; I'm not sure native support exists for those).
You can do it with plain old cubemaps (floating-point ones would be better). I've implemented this, so I know they work fine :)

Are you going to make a version of each of your shaders for 0...n shadowed light sources?
Of course not. A single vertex and pixel shader can handle that. While the planar shadow maps use projective texture coordinates, the cubic shadow maps would use light vectors to sample the depth. All this can be handled even within the R3X0's limited 96 instructions.
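A sketch of that cubic lookup, with assumed names and bias; the key point is that the light vector doubles as the sampling direction:

```hlsl
// Cubic shadow map lookup (assumed names): the light pass stored each
// pixel's normalized distance to the light in the cubemap; the unnormalized
// light vector itself is the sampling direction, so no projection is needed.
samplerCUBE CubeShadowMap;

float CubeShadow(float3 worldPos, float3 lightPos, float lightRange)
{
    float3 toPixel = worldPos - lightPos;
    float  current = length(toPixel) / lightRange;       // this pixel's depth
    float  stored  = texCUBE(CubeShadowMap, toPixel).r;  // depth the light saw
    return (current <= stored + 0.005) ? 1.0 : 0.0;      // bias assumed
}
```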
 
poly-gone said:
About 4 to 5 would be nice. And what do you think is the whole point of shadow mapping? The buffers HAVE to be generated on the fly; we're not talking about projected shadows :)

I was talking about having to pre-allocate the memory for the shadow maps... obviously they'd have to be updated on the fly. This is memory that you can no longer use for other textures or geometry that will actually contribute to the detail the user sees on screen. So supposing you have 5 cube shadow maps with 24-bit precision and 1024x1024 resolution per side, that's 5 maps x 6 faces x 1024x1024 texels x 3 bytes, or about 90 megs of video memory down the drain (only 30 megs if you can get away with 8-bit shadow maps like you suggest). Do you really think that is an efficient use of that memory? Remember most DX9 cards probably only have 128 megs.

You can do it with plain old cubemaps (floating-point ones would be better). I've implemented this, so I know they work fine :)

I wouldn't think there'd be enough precision unless you used a floating point cubemap, but if there is, cool. I haven't personally tried it. I know Humus did a demo with it, though, so I guess it can be made to work in small environments :)

Of course not. A single vertex and pixel shader can handle that. While the planar shadow maps use projective texture coordinates, the cubic shadow maps would use light vectors to sample the depth. All this can be handled even within the R3X0's limited 96 instructions.

But if you're not going to make a separate shader for 0-5 shadowed light sources, you will either a) need a PS 3.0 style loop or b) do a lot more calculation than you need when fewer than the maximum number of lights are applied to a surface.
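Option (a) would look something like this ps_3_0 sketch (array size and names are assumptions):

```hlsl
// ps_3_0 sketch of option (a): the loop bound is a uniform the application
// sets per draw call, so one shader covers 0..n lights and a surface lit by
// two lights doesn't pay for eight. Array size and names are assumptions.
int NumLights;
float3 LightPos[8];
float3 LightColor[8];

float4 DynamicLightPS(float3 worldPos : TEXCOORD0,
                      float3 normal   : TEXCOORD1) : COLOR
{
    float3 n = normalize(normal);
    float3 result = 0;
    for (int i = 0; i < NumLights; i++)   // a real dynamic loop on SM3
    {
        float3 l = normalize(LightPos[i] - worldPos);
        result += LightColor[i] * saturate(dot(n, l));
    }
    return float4(result, 1.0);
}
```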

I disagree with you about what you can fit into ATI's limited instruction count, too. In my experience, 3 per-pixel point lights with diffuse, specular, ambient, and emissive terms is the max I can squeeze into a single pass; 4 is too much. And this is without any shadow tests.

Anyway, my whole point was that FP blending removes all of these issues. Wouldn't you agree that is nice? :)
 
Eronarn said:
jvd said:
It's the same thing that's going on with Sony.

For EverQuest 2, they keep saying that no video card will be able to run it maxed out. When people ask about the X800 XT, all they'll say is that they aren't ready to discuss cards that aren't on the market yet, and then they go on to say how they're optimizing the game to run on the NV40.

I have to deal with this every day: Nvidia's FUD is working. Every day, without fail, a new thread is made about PS3.0 in EQ2 or whether the X800 will lose to the 6800 because EQ works with Nvidia. (I'm one of the main users of the EQ2 tech forums and probably the best-informed person there, thanks to you guys here at B3D.)

Right. I had to deal with this on ogaming when I was telling them to get 9800 Pros or XTs if they had to upgrade at the time. They kept telling me, "Oh no, it's going to be optimized for Nvidia and it will run faster on Nvidia hardware," and I had to explain everything to them.
 
Eronarn said:
jvd said:
It's the same thing that's going on with Sony.

For EverQuest 2, they keep saying that no video card will be able to run it maxed out. When people ask about the X800 XT, all they'll say is that they aren't ready to discuss cards that aren't on the market yet, and then they go on to say how they're optimizing the game to run on the NV40.

I have to deal with this every day: Nvidia's FUD is working. Every day, without fail, a new thread is made about PS3.0 in EQ2 or whether the X800 will lose to the 6800 because EQ works with Nvidia. (I'm one of the main users of the EQ2 tech forums and probably the best-informed person there, thanks to you guys here at B3D.)

Heh, I'm dealing with a similar reaction in the Far Cry community. Despite the fact that I just released an offset bump mapping demo for Far Cry that is PS 2.0, in the very thread where I posted it I still have people talking about getting a 6800 and things like that so they can run it... unbelievable. Nvidia really made an impression with their event, I guess... which doesn't matter to me 'cause I know what I'm doing, but I hate to see people buying and evangelizing stuff because of its "blast processing", so to speak ;) NV marketing deserves a raise though :LOL:
 
My guess is NV marketing folks make a ton of $.
I wouldn't do what they do for chump change myself.
Sooner or later we all have to answer. :oops:
 