Carmack to use shadow maps in the next game?

According to Tim, a UE3-powered game shouldn't be out until 2006. I'd be VERY DISAPPOINTED if a 2006 game were not graphically better than a 2004 one.

In the same way, I was disappointed that Doom3 was graphically worse than 3dmark03, and also performed a lot worse. Which is why I want that patch.

Well, if you listened to JC's QuakeCon keynote you'd pick up the fact that toggling between shadow buffers and stencils in his version of DOOM gives you:

- Much lower performance with shadow buffers (this should come as a surprise to those who claim that fill rate is not improving as fast as shader throughput, making buffers an "inevitability").
- Shadow buffers also use up "invisible" fillrate (and a lot of it in D3, since most lights are point lights).
- Only moderate quality improvements (he mentions that regular people might not even notice the soft shadows at first glance).

With that in mind, for a 2004 game, shadow volumes were a good decision, especially for the game they were making. Quite a few people assumed that, since D3 was using stencils, every engine JC did afterwards would also use stencils, forgetting that you have to look at the here and now, and at the advantages stencils have here and now for the D3 game.

In another thread I already pointed out the cost of cubemap shadowmaps.
But my point here was not to get shadowmaps into Doom3, but GPU-accelerated shadow volumes/skinning, since the CPU demands are insanely high for this particular game.
Case in point: above 2-3 enemies, the framerate on my XP1800+ with Radeon 9600Pro drops well below 10 fps, making the game completely unplayable. Compare that to e.g. the 3dmark03 Ragtroll test, which has many more skinned and shadowed characters, a lot more polygons onscreen, and no fillrate optimizations for the shadows at all, and still runs well over 10 fps on my system. Carmack has done something horribly wrong.
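To make the CPU cost concrete, here is a rough sketch of the kind of work a pure CPU stencil-shadow path has to repeat every frame, for every skinned character and every visible light. This is illustrative only (hypothetical names, simplified to one bone per vertex), not id's actual code:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Mat34 { float m[3][4]; };           // one bone transform (3x4 matrix)
struct SkinVert { Vec3 pos; int bone; };   // simplified: one bone per vertex
struct Edge { int v0, v1, faceA, faceB; }; // an edge shared by two triangles

// Returns extruded silhouette quads; the caller re-uploads them over AGP every
// frame and draws them twice (front-face and back-face stencil passes) per light.
std::vector<Vec3> BuildShadowVolumeCPU(const std::vector<SkinVert>& verts,
                                       const std::vector<Mat34>& bones,
                                       const std::vector<int>& tris,   // 3 indices per triangle
                                       const std::vector<Edge>& edges,
                                       Vec3 lightPos, float extrude)
{
    // 1. Skin every vertex on the CPU.
    std::vector<Vec3> p(verts.size());
    for (std::size_t i = 0; i < verts.size(); ++i) {
        const Mat34& b = bones[verts[i].bone];
        const Vec3 v = verts[i].pos;
        p[i] = { b.m[0][0]*v.x + b.m[0][1]*v.y + b.m[0][2]*v.z + b.m[0][3],
                 b.m[1][0]*v.x + b.m[1][1]*v.y + b.m[1][2]*v.z + b.m[1][3],
                 b.m[2][0]*v.x + b.m[2][1]*v.y + b.m[2][2]*v.z + b.m[2][3] };
    }
    // 2. Classify every triangle as facing the light or not.
    std::vector<bool> lit(tris.size() / 3);
    for (std::size_t f = 0; f < lit.size(); ++f) {
        Vec3 a = p[tris[3*f]], b = p[tris[3*f+1]], c = p[tris[3*f+2]];
        lit[f] = dot(cross(sub(b, a), sub(c, a)), sub(lightPos, a)) > 0.0f;
    }
    // 3. Edges between a lit and an unlit triangle form the silhouette; extrude a
    //    quad from each one, away from the light.
    std::vector<Vec3> quads;
    for (const Edge& e : edges) {
        if (lit[e.faceA] == lit[e.faceB]) continue;
        Vec3 a = p[e.v0], b = p[e.v1];
        Vec3 da = sub(a, lightPos), db = sub(b, lightPos);
        Vec3 aFar = { a.x + da.x*extrude, a.y + da.y*extrude, a.z + da.z*extrude };
        Vec3 bFar = { b.x + db.x*extrude, b.y + db.y*extrude, b.z + db.z*extrude };
        quads.insert(quads.end(), { a, b, bFar, aFar });
    }
    return quads;
}
```

Everything that function produces then has to be pushed over the AGP bus again every frame; a GPU path keeps the mesh static in video memory and does the skinning and extrusion in the vertex shader instead.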
 
991060 said:
The lower performance with shadow buffers is probably because you have to switch between different rendering contexts in OpenGL, while in D3D that's not a problem.

He mentions that (and I think that's where he points out this was as close as he has ever been to wanting to change to D3D :) but that's not the whole story. He also mentions the cubemap geometry (since D3 uses a lot of point lights instead of spotlights), where shadow buffer mode actually has to generate _more_ polys than stencils.
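For context, here is a sketch of what a shadow-buffer path has to do for D3-style point lights (hypothetical types and helper names, declarations only, not JC's code); this is where the extra geometry passes and the "invisible" fill rate come from:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Mesh { /* vertex/index buffers elided */ };
struct Light { bool isOmni; Vec3 position; unsigned cubeDepthMap, spotDepthMap; };

// Stand-ins for the engine's render-target and draw calls (assumed, not a real API).
void BindDepthTarget(unsigned texture, int face);
void SetCameraForCubeFace(const Vec3& origin, int face);   // 90-degree FOV per face
void SetCameraForSpot(const Light& light);
void DrawDepthOnly(const Mesh& mesh);

void RenderShadowMapForLight(const Light& light, const std::vector<const Mesh*>& casters)
{
    // A spotlight needs one depth render; an omni light needs six, one per cube face.
    const int passes = light.isOmni ? 6 : 1;
    for (int face = 0; face < passes; ++face) {
        if (light.isOmni) {
            BindDepthTarget(light.cubeDepthMap, face);
            SetCameraForCubeFace(light.position, face);
        } else {
            BindDepthTarget(light.spotDepthMap, 0);
            SetCameraForSpot(light);
        }
        for (const Mesh* m : casters)
            DrawDepthOnly(*m);    // geometry and fill cost scale with the number of faces
    }
    // With stencil volumes the same omni light needs one extruded volume per caster,
    // traded against stencil fill rate in the main view instead.
}
```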
 
Scali said:
In the same way, I was disappointed that Doom3 was graphically worse than 3dmark03, and also performed a lot worse. Which is why I want that patch.

I know this is subjective but the only part of 3dmark03 that's comparable to D3 is Battle of Proxycon, and IMHO D3 is significantly better (even if purely from an artistic pov). Regarding performance, I'm sorry but I get 41fps in the D3 timedemo (9800 Pro 128MB, 1024 @ High Quality, VSYNC + triple buffering) and I get lower than that in 3dmark03.

In another thread I already pointed out the cost of cubemap shadowmaps.
But my point here was not to get shadowmaps into Doom3, but GPU-accelerated shadow volumes/skinning, since the CPU demands are insanely high for this particular game.
Case in point: above 2-3 enemies, the framerate on my XP1800+ with Radeon 9600Pro drops well below 10 fps, making the game completely unplayable. Compare that to e.g. the 3dmark03 Ragtroll test, which has many more skinned and shadowed characters, a lot more polygons onscreen, and no fillrate optimizations for the shadows at all, and still runs well over 10 fps on my system. Carmack has done something horribly wrong.

In my experience, D3 is primarily GPU bound. If, however, your gfx card costs twice as much as your CPU then... heh. ;-) So, it stands to reason that people with better video cards will have better CPUs; and apart from the hardcore crowd, usually the CPUs are better than the gfx cards the computers come with. Where I live I constantly find stores selling P4 2.8s with crummy GFFX 5200s.

I don't know the details, but isn't a DX9-level card required for GPU silhouette extrusion? Anyway, what I'm saying is that, in principle, I don't disagree with you. Technically speaking at least; there could have been other (market) considerations when that decision was made.
 
I know this is subjective but the only part of 3dmark03 that's comparable to D3 is Battle of Proxycon, and IMHO D3 is significantly better (even if purely from an artistic pov). Regarding performance, I'm sorry but I get 41fps in the D3 timedemo (9800 Pro 128MB, 1024 @ High Quality, VSYNC + triple buffering) and I get lower than that in 3dmark03.

You probably have a CPU of 2.5 GHz or more then. If you don't, the CPU is not powerful enough to skin and shadow multiple characters at a decent framerate.
And as said before, Battle of Proxycon has no specific optimizations, and has higher geometric complexity (regardless of whether you think that Doom3 looks better, 3dmark03 has a higher polycount, more self-shadowing, and more skinned characters on screen than the D3 timedemo). If you get lower performance than in Doom3, that is most probably because of the extra fillrate requirements.
In my case, Doom3 is CPU-limited, while the GPU could have accelerated it, as 3dmark03 clearly shows. Even with the higher polycount, the extra fillrate required for the shadows, and the extra skinning, it is still faster on a simple mid-range 2-vertex-shader GPU like mine than on my CPU (which is well above the minimum 1500+ requirement for Doom3).

(FYI, in 640x480 I get 37.5 fps average on Battle of Proxycon, while I get 24 fps in Doom3 timedemo 1).

In my experience, D3 is primarily GPU bound. If, however, your gfx card costs twice as much as your CPU then... heh. So, it stands to reason that people with better video cards will have better CPUs;

Nonsense, many people upgrade their GPU more often than their CPU. Besides, my CPU was a lot more expensive a few years ago than my Radeon is now. I have one of the slowest DX9-cards available and already it is faster than most CPUs.

and apart from the hardcore crowd, usually the CPUs are better than the gfx cards the computers come with. Where I live I constantly find stores selling P4 2.8s with crummy GFFX 5200s.

Who cares? I want the GPU method ADDED to the current method, I don't want it to REPLACE the current method. This way owners of both types of systems can benefit.
It will also open up the opportunity for people to only upgrade their videocard to the level of a Radeon 9600Pro or so (about 100e), rather than buying a new CPU (and motherboard, and memory, and PSU, etc... at least 300e?) or even an entire PC, which is a LOT cheaper. Also, my 1800+ has never been too slow for any other game, so investing in a CPU for games would be a bad decision; only the extra graphics power is required at this time. Doom3 is the exception.

I don't know the details, but isn't a DX9-level card required for GPU silhouette extrusion?

No, any card with vertex shaders can do it. 3dmark03 uses vs1.1 code.
Although in general only DX9-level cards are actually faster than the CPU. But as I said before, I want it added, not as a replacement, so this is a non-issue.
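For reference, the usual vs1.1-era approach is to preprocess the mesh so that every edge carries a degenerate quad and every vertex stores the normal of the face it belongs to; the vertex shader then pushes away-facing vertices out along the light direction, which stretches those quads into the volume sides. Here is the per-vertex math written out in C++ for clarity (illustrative only, not 3dmark03's actual shader, which would also do the skinning and use arithmetic instead of a branch):

```cpp
struct Vec3 { float x, y, z; };

// One vertex of the shadow-volume mesh: its (already skinned) position plus the
// normal of the triangle it came from, duplicated per face during preprocessing.
Vec3 ExtrudeVertex(Vec3 pos, const Vec3& faceNormal, const Vec3& lightPos, float extrudeDist)
{
    // Direction from the light to this vertex.
    Vec3 toVert = { pos.x - lightPos.x, pos.y - lightPos.y, pos.z - lightPos.z };
    float facing = faceNormal.x * toVert.x + faceNormal.y * toVert.y + faceNormal.z * toVert.z;
    if (facing > 0.0f) {                   // the face points away from the light:
        pos.x += toVert.x * extrudeDist;   // push the vertex out along the
        pos.y += toVert.y * extrudeDist;   // light-to-vertex direction
        pos.z += toVert.z * extrudeDist;
    }
    return pos;  // the shader would then transform this by the view-projection matrix
}
```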

Anyway, what I'm saying is that, in principle I don't disagree with you. Technically speaking at least; there could have been other (market) considerations when that decision was made.

I think it has to do with the fact that Carmack is NVIDIA's puppet, and the performance of GF4/FX shaders was not good enough for a GPU-based method (as 3dmark03 so painfully demonstrated at the time). And apparently NVIDIA dictated that Carmack may only implement rendering techniques if they benefit NVIDIA hardware (especially if it also puts the competition at a disadvantage), not if they benefit other (superior) hardware. During development of the Doom3 renderer, the R300 was THE card. Even Carmack himself said so in his .plan. How ironic that the R300 turns out to run the game worse. Of course, many decisions have influenced that (using OpenGL, not using GPU skinning/shadowing, using a lot of textures in the shader...); whether that was deliberate or not is up to everyone to decide for themselves.
 
Scali said:
If you want to license a product, releasing the full source code is not a Good Idea(tm).
Um, it's still protected by copyright law. If anybody did any significant project based on the engine, they'd be in a world of trouble if they didn't pay licensing fees.
 
Mordenkainen said:
and I think that's where he points out this was as close as he has ever been to wanting to change to D3D :)

Yes, and I understand him: I've used pbuffers a lot in our current project and all I can say is that it is a horrible interface for rendering to texture. Hopefully with EXT_render_target we will get a decent interface soon.
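To illustrate the pain, a rough sketch of one pbuffer render-to-texture round trip on Windows (WGL_ARB_pbuffer + WGL_ARB_render_texture), assuming the extension entry points have already been fetched with wglGetProcAddress; pixel-format setup and error checking are omitted, and the scene callbacks are hypothetical:

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   // HPBUFFERARB, WGL_FRONT_LEFT_ARB, extension typedefs

// Assumed to be resolved elsewhere via wglGetProcAddress.
extern PFNWGLBINDTEXIMAGEARBPROC    wglBindTexImageARB;
extern PFNWGLRELEASETEXIMAGEARBPROC wglReleaseTexImageARB;

// Hypothetical scene callbacks.
void DrawShadowCastersDepthOnly();
void DrawLitScene();

// The sore point: the pbuffer has its own device context and render context, so
// every render-to-texture pass costs a full wglMakeCurrent switch each way.
void RenderToTextureViaPbuffer(HDC windowDC, HGLRC windowRC,
                               HPBUFFERARB pbuffer, HDC pbufferDC, HGLRC pbufferRC,
                               GLuint colourTexture)
{
    wglMakeCurrent(pbufferDC, pbufferRC);        // heavyweight context switch in
    DrawShadowCastersDepthOnly();

    wglMakeCurrent(windowDC, windowRC);          // and back out again

    glBindTexture(GL_TEXTURE_2D, colourTexture);
    wglBindTexImageARB(pbuffer, WGL_FRONT_LEFT_ARB);    // use the pbuffer as a texture
    DrawLitScene();
    wglReleaseTexImageARB(pbuffer, WGL_FRONT_LEFT_ARB); // release before rendering to it again
}
```

An in-API render target, which is exactly what EXT_render_target proposes, would remove both the per-target context and the wglMakeCurrent round trips.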
 
Um, it's still protected by copyright law. If anybody did any significant project based on the engine, they'd be in a world of trouble if they didn't pay licensing fees.

In theory yes. But how will you discover that someone has used your source in any way? And how will you prove it?
It's a non-issue anyway, since the full code is NOT released, apparently.
 
Scali said:
In my case, Doom3 is CPU-limited, while the GPU could have accelerated it, as 3dmark03 clearly shows. Even with the higher polycount, the extra fillrate required for the shadows, and the extra skinning, it is still faster on a simple mid-range 2-vertex-shader GPU like mine than on my CPU (which is well above the minimum 1500+ requirement for Doom3).

That's my point. You have a CPU which is only barely above the min req while your gfx card is mainstream (high end if you consider the Valve survey ;). Your system would benefit from hardware generation of shadow volumes, but how many others are in your situation?

Nonsense, many people upgrade their GPU more often than their CPU. Besides, my CPU was a lot more expensive a few years ago than my Radeon is now. I have one of the slowest DX9-cards available and already it is faster than most CPUs.

That's why I said apart from hardcore users. Regular people rarely upgrade discrete components but buy a new computer at Dell or whatever. And these usually come with crappy gfx cards (when not integrated <shudder>).

Who cares? I want the GPU method ADDED to the current method, I don't want it to REPLACE the current method. This way owners of both types of systems can benefit.

Sure. This I can agree with completely. But you don't call someone "below-average software coder" because he hasn't done this. ;)

It will also open up the opportunity for people to only upgrade their videocard to the level of a Radeon 9600Pro or so (about 100e), rather than buying a new CPU (and motherboard, and memory, and PSU, etc... at least 300e?) or even an entire PC, which is a LOT cheaper. Also, my 1800+ has never been too slow for any other game, so investing in a CPU for games would be a bad decision; only the extra graphics power is required at this time. Doom3 is the exception.

I don't share your opinion. UT2004 is horribly CPU bound, and FarCry as well unless you play at high res. I take it you've tried turning off shadows to see how much fps you gain. Your CPU limit might be from something else.

I think it has to do with the fact that Carmack is NVIDIA's puppet, and the performance of GF4/FX shaders was not good enough for a GPU-based method (as 3dmark03 so painfully demonstrated at the time). And apparently NVIDIA dictated that Carmack may only implement rendering techniques if they benefit NVIDIA hardware (especially if it also puts the competition at a disadvantage), not if they benefit other (superior) hardware.

I'll cut the 5 pages of discussion of this paragraph alone and just say we'll agree to disagree on this. :)

During development of the Doom3 renderer, the R300 was THE card. Even Carmack himself said so in his .plan. How ironic that the R300 turns out to run the game worse.

JC has already explained why the NV3x are on par with the R3xx. He even said nVidia was "cheating" (that's a direct quote).
 
Scali said:
I have one of the slowest DX9-cards available and already it is faster than most CPUs.
Can you expand a bit on this ?

Scali said:
I think it has to do with the fact that Carmack is NVIDIA's puppet, and the performance of GF4/FX shaders was not good enough for a GPU-based method (as 3dmark03 so painfully demonstrated at the time). And apparently NVIDIA dictated that Carmack may only implement rendering techniques if they benefit NVIDIA hardware (especially if it also puts the competition at a disadvantage), not if they benefit other (superior) hardware.
I hope this is based on the fact that you get more emails from Carmack than I do.

Do you know how many GF4s were sold and still exists in systems?
 
Scali said:
In theory yes. But how will you discover that someone has used your source in any way? And how will you prove it?
It's a non-issue anyway, since the full code is NOT released, apparently.
Typically when code is copied, it is copied outright. So, one way would be to simply look for comments. Another would be to look for specific function blocks.
 
Chalnoth said:
Scali said:
In theory yes. But how will you discover that someone has used your source in any way? And how will you prove it?
It's a non-issue anyway, since the full code is NOT released, apparently.
Typically when code is copied, it is copied outright. So, one way would be to simply look for comments. Another would be to look for specific function blocks.

That's assuming the offending party releases their own source code which would be doubtful. For instance, if a company has used some of the HL2 source in their own game they're not going to release the source in the first place, let alone for Valve to get a chance to compare.

Besides, when I look at coding examples in books and whatnot and use them in my own programs, I never copy & paste them outright. At a bare minimum I change the variable names, and I usually end up changing instructions or even variable types (I might prefer to use a long instead of a short).
 
That's my point. You have a CPU which is only barely above the min req while your gfx card is mainstream (high end if you consider the Valve survey ;). Your system would benefit from hardware generation of shadow volumes, but how many others are in your situation?

As I said before, anyone who'd rather pay 100e for an upgrade to a 9600Pro-ish card than 200e or more for a CPU upgrade.
And let's not get carried away with the CPU requirements of Doom3. First of all, they are not correct, since you need a 2.5 GHz CPU at least, to not have severe drops in framerate during combat with more than 2 or 3 enemies at a time. Secondly, it is not representative of games in general; no other game requires such a fast CPU. I have played Call of Duty, Halo, FarCry, and of course run 3dmark03, all without any CPU-related problems.
You'd be surprised how many people don't have a 2.5+ GHz CPU, and how many of those use an R300 or similar card, or would upgrade to one if it would benefit their games more than upgrading to a faster CPU, and be cheaper too.

That's why I said apart from hardcore users. Regular people rarely upgrade discrete components but buy a new computer at Dell or whatever. And these usually come with crappy gfx cards (when not integrated <shudder>).

Who cares? There is a considerable market for aftermarket videocard upgrades, so apparently there are many people who upgrade their display card. Just because it's not the largest group doesn't mean it isn't important, let alone that it doesn't exist.
And anyone with a decent GPU will benefit from GPU-acceleration. Even if their CPU would be fast enough to make it playable, there is only a VERY small group where the performance would actually be very good.
For example, only the fastest Athlon64-based systems can get ~100 fps in timedemo1, using a 6800Ultra videocard. If you look at 3dmark03 results for Battle of Proxycon, you will find that pretty much any system gets ~100 fps with a 6800Ultra. And yes, there are systems with 2 GHz CPUs or lower that have a 6800Ultra in the FutureMark database.
They would be able to play Doom3 just as well as anyone with the most expensive Athlon64, if it would use a GPU-based method. Now they can barely play it if there are a few enemies.

Sure. This I can agree with completely. But you don't call someone "below-average software coder" because he hasn't done this.

Yes you do, if the other people (the 'average') HAVE done this. That is the very thing that makes him below-average. See?

I don't share your opinion. UT2004 is horribly CPU bound, and FarCry as well unless you play at high res. I take it you've tried turning off shadows to see how much fps you gain. Your CPU limit might be from something else.

FarCry is a lot more playable on my system than Doom3. I didn't notice any slowdowns during combat.
I haven't played UT2004, so I don't know how that would run.
And my CPU limit most definitely comes from the skinning and shadowing. That is the only explanation for the extreme framerate drops when multiple enemies (== skinned characters with shadows) are onscreen (it isn't fillrate since 640x480 low detail and 1024x768 high detail make little or no difference).
Another factor could be that ATi's drivers/hardware are less efficient at handling updated geometry through the AGP bus every frame than NVIDIA's. Another good reason to avoid CPU-based methods anyway.

JC has already explained why the NV3x are on par with the R3xx. He even said nVidia was "cheating" (that's a direct quote).

The point is not whether or not NVIDIA cheats, but why the code is suboptimal for R3x0. Even with cheats NVIDIA was usually not faster than R3x0 with the FX anyway.
 
Reverend said:
Can you expand a bit on this ?

What is there to expand? As I say... When we take 3dmark03, which does everything on the GPU, we get 37.5 fps on average, on scenes that are more complex and less optimized than Doom3's timedemo, which gets 24 fps at most. And that is on a Radeon 9600Pro vs an XP1800+.
Now, on a linear scale, we would need about a 2800+ CPU to match the performance of my Radeon, which I take it is still well above what the average user has.
And that is not including the higher polycount in 3dmark03, and the fact that 3dmark03 has no fillrate optimizations, while Doom3 does.

And since the Radeon9600Pro is one of the cheapest and slowest DX9-cards around... It only has 2 vertex shader units; the only thing that differs on the even cheaper 9600 and SE models is the clockspeed.
Most DX9 cards have 4 vertex shader units, or even more. So they would get close to twice the skinning/shadow volume performance of my card, or even better. And that gets you to a level of performance that can't be achieved with any kind of CPU yet (that would have to be a 5600+ at least? And what of the 8-vertex-shader cards like the X800 and 6800?).
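A back-of-envelope for that scaling claim, with purely illustrative numbers (clock speeds deliberately set equal so only the unit count varies); real throughput also depends on clocks, instruction mix and setup limits:

```cpp
#include <cstdio>

int main() {
    struct Card { const char* name; int vsUnits; float mhz; };
    const Card cards[] = {
        { "Radeon 9600 Pro (2 VS units)", 2, 400.0f },
        { "typical 4-VS-unit DX9 card",   4, 400.0f },
        { "8-VS-unit card (X800/6800)",   8, 400.0f },
    };
    // Vertex throughput scales roughly with (vertex shader units x clock).
    const float baseline = cards[0].vsUnits * cards[0].mhz;
    for (const Card& c : cards)
        std::printf("%-30s -> ~%.1fx the 9600 Pro's vertex rate\n",
                    c.name, (c.vsUnits * c.mhz) / baseline);
    return 0;
}
```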

I hope this is based on the fact that you get more emails from Carmack than I do.

It is based on the fact that NVIDIA condemned 3dmark03's approach and favoured the 'Doom3' approach, while clearly the 3dmark03 approach is the best for R3x0 cards and most probably also NV4x. And then there is the number of NVIDIA-specific paths and features in Doom3 (even though one has moved into the driver now), versus only one path for another vendor, and that for a non-competitive product (ATi R200).

Do you know how many GF4s were sold and still exists in systems?

Do you know how many R3x0-based cards were sold and still exist in systems?
 
Scali, you claim that you are a developer. May I ask which company you work for, and what games you have developed?

Using 3dmark03 as a means of gaming comparison with Doom3 is somewhat inane. 3dmark03 has demos that are designed to stress the GPU and limit the effect of CPU in the test results. 3dmark03 game tests are also not interactive gaming environments. Top that off with the fact that graphics drivers are generally heavily optimized for the 3dmark03 benchmark, and the comparison is nonsensical.

Comparing how playable the game is on your system of a 9600Pro and 1.5 (?) GHz processor with another, unrelated, older game is also very vague and not really meaningful. Most reviewers have said that Doom3 is one of the few games that looks great even at very low resolutions, and looks great even without AA. Some of these same reviewers have claimed that low min framerates are less noticeable and less distracting in Doom3 than in many other games. On top of this, there are different quality settings that can be adjusted so that the game is playable (whatever that means, anyway).

The point about the Doom3 code being "suboptimal" for the R3xx cards, and Carmack being an NV puppet, is also quite silly and childish. The 9800XT runs Doom3 nearly as fast as a 5950 Ultra, and most reviewers claim that the experience is comparable. Also, if you bothered to do the research, you would see that there are some games that run as fast or faster on a 5950U than a 9800XT, especially when using newer Forceware drivers and without AA/AF.

Your general negative tone leads me to believe that your personal "issues" are more than about simply Carmack and Doom3, but also about NV vs ATI. Suspicious, to say the least.
 
Scali said:
As I said before, anyone who'd rather pay 100e for an upgrade to a 9600Pro-ish card than 200e or more for a CPU upgrade.
And let's not get carried away with the CPU requirements of Doom3. First of all, they are not correct, since you need a 2.5 GHz CPU at least, to not have severe drops in framerate during combat with more than 2 or 3 enemies at a time.

The HardOCP guide shows a 1.5 GHz CPU + GF4 MX getting an average of 30 fps at 640x480 just by turning off specular maps. In all the other games you mention, the min req system would be lucky to run at all, let alone be playable with some features turned on.

You'd be surprised how many people don't have a 2.5+ GHz CPU, and how many of those use an R300 or similar card, or would upgrade to one if it would benefit their games more than upgrading to a faster CPU, and be cheaper too.

Yes, I would be surprised. Most people I know have good CPUs and crappy or outdated graphics cards.

Who cares? There is a considerable market for aftermarket videocard upgrades, so apparently there are many people who upgrade their display card. Just because it's not the largest group doesn't mean it isn't important, let alone that it doesn't exist.

I didn't say it doesn't exist. There's a market for linux games but you only see id Software releasing clients while everyone else just releases dedicated server bins and milk the linux community. There's a market for DVD-only games and yet you still see 6-CD games being released. Just because that market exists doesn't mean it's big enough for the people in charge to make the decisions we want them to.

They would be able to play Doom3 just as well as anyone with the most expensive Athlon64, if it would use a GPU-based method. Now they can barely play it if there are a few enemies.

Even if you use a 6800U with a similarly priced CPU you're still limited by the GPU if you play at 1600x1200 with 4xAA and 16xAF. Moving stuff to the GPU wouldn't help.

Yes you do, if the other people (the 'average') HAVE done this. That is the very thing that makes him below-average. See?

If you can point me to other games using stencil shadows extensively (there's Deus Ex and Thief 3, and I think there's that crappy Secret Services game) and that have done this...

FarCry is a lot more playable on my system than Doom3. I didn't notice any slowdowns during combat.

In FarCry with the min req system you also get a game that looks completely different from running it on the optimal system, while in D3, even if you play on the min req system, you are still getting a very similar graphical experience.

And my CPU limit most definitely comes from the skinning and shadowing. That is the only explanation for the extreme framerate drops when multiple enemies (== skinned characters with shadows) are onscreen (it isn't fillrate since 640x480 low detail and 1024x768 high detail make little or no difference).

You should try it with "r_shadows 0" and compare.

Another factor could be that ATi's drivers/hardware are less efficient at handling updated geometry through the AGP bus every frame than NVIDIA's. Another good reason to avoid CPU-based methods anyway.

Because IHV A is worse at it than IHV B? Then JC would be accused of being ATi's puppet.

The point is not whether or not NVIDIA cheats, but why the code is suboptimal for R3x0. Even with cheats NVIDIA was usually not faster than R3x0 with the FX anyway.

Because if the code was optimal for ATi, it would be suboptimal for nVidia, and then nVidia fanboys would cry foul. You can't please everyone every time with the same code. Just look at what Valve said, 5 times more time just optimising for the GFFX's. Right now, D3 runs very close on GFFX and R3xx cards in their respective price brackets.
 
Scali, you claim that you are a developer. May I ask which company you work for, and what games you have developed?

Not all developers are game developers. I am currently doing a project for a niche CAD program to design hydraulic manifolds, as I mentioned before elsewhere, I'm sure. Ironically enough, this is in OpenGL.

Using 3dmark03 as a means of gaming comparison with Doom3 is somewhat inane. 3dmark03 has demos that are designed to stress the GPU and limit the effect of CPU in the test results.

Any developer will realize the type and amount of work being done in both, and come to the conclusion that they are very comparable. That is the entire point of 3dmark03 anyway.
And if we take your claim that 3dmark03 stresses the GPU, then it would be even MORE remarkable that 3dmark03 is actually considerably faster than Doom3, which doesn't stress the GPU. Then we can conclude that obviously Doom3 is not using the GPU to its full potential. That would be fine, if it wasn't overusing the CPU instead.

3dmark03 game tests are also not interactive gaming environments.

Neither are Doom3 timedemos. Carmack said in an interview that no AI or physics are calced in the timedemo.
In fact, 3dmark03 DOES calc physics in realtime. So 3dmark03 is actually doing more realtime work.

Top that off with the fact that graphics drivers are generally heavily optimized for the 3dmark03 benchmark, and the comparison is nonsensical.

That's nonsense, since FM checks for any cheats, and has only found one shader replacement in Radeon drivers at all... Other than that, Radeon performance has been pretty much constant in 3dmark03 since the beginning.
We know for a fact that NVIDIA has Doom3-specific optimizations, and there may also be Doom3-specific optimizations in the latest ATi drivers, but for 3dmark03 it is highly unlikely.

Some of these same reviewers have claimed that low min framerates are less noticeable and less distracting in Doom3 than in many other games.

Depends on how low, doesn't it? If it gets below 10 fps, like on my system, you are no longer able to aim clearly, or avoid enemy fire.
I guess that is not noticeable or distracting?

On top of this, there are different quality settings that can be adjusted so that the game is playable (whatever that means, anyway).

Not at all. Whether I set 640x480 low detail or 1024x768 high detail, I get the same low framerates in combat. The settings have absolutely no effect, since the game is CPU-limited.

Also, if you bothered to do the research, you would see that there are some games that run as fast or faster on a 5950U than a 9800XT, especially when using newer Forceware drivers and without AA/AF.

Gee, I wonder why those newer Forceware drivers make the games run so fast!

Your general negative tone leads me to believe that your personal "issues" are more than about simply Carmack and Doom3, but also about NV vs ATI. Suspicious, to say the least.

You couldn't be more wrong. It is not NV vs ATi at all. Firstly, as I said before, NV4x would also benefit from a GPU-based solution.
Secondly, if I were to buy a card at this point in time, and money was no object, it would most certainly be a 6800Ultra, since it has more features than any Radeon, and performance is very good.

The point is just that ATi had cards that were capable of doing GPU-acceleration before NV did, and this most probably held Carmack back.
This has nothing to do with NV vs ATi, other than the fact that NV had a really bad time when the R300 came out: they had to compete with the outdated GF4 at first, and with the underpowered FX later. That is what caused them to flame 3dmark03 in all kinds of ways, while 3dmark03 was just doing the right thing; NV simply didn't have the hardware yet. Now they do, with NV4x. Even Carmack must have seen 3dmark03 and the performance the R300 got at the time, and Carmack must have known that NVIDIA would eventually come up with a card that would also have the performance, as would any other competitor (yes, XGI and S3 too).
 
The HardOCP guide shows a 1.5 GHz CPU + GF4 MX getting an average of 30 fps at 640x480 just by turning off specular maps. In all the other games you mention, the min req system would be lucky to run at all, let alone be playable with some features turned on.

Let's not confuse average framerate in a simplistic timedemo with playability, shall we?
Also, as I said before, NVIDIA may perform better with regard to dynamic vertex buffers.

Yes, I would be surprised. Most people I know have good CPUs and crappy or outdated graphics cards.

Well I don't, and I know quite a few people who don't either.
So better get used to the idea that not everyone is the same.

I didn't say it doesn't exist. There's a market for linux games but you only see id Software releasing clients while everyone else just releases dedicated server bins and milk the linux community.

I think it is safe to assume that there are more people who upgrade their videocard than there are people who play games on linux only.
It is also safe to assume that adding GPU-acceleration for skinning and shadowvolumes is a lot less work than making a linux port of a Windows game.
So this is not exactly a proper comparison.

Even if you use a 6800U with a similarly priced CPU you're still limited by the GPU if you play at 1600x1200 with 4xAA and 16xAF. Moving stuff to the GPU wouldn't help.

You don't get it.
If you happen to have that fast CPU, perhaps it wouldn't be faster. But you no longer NEED to have that fast CPU.
Only the GPU will matter if you use it.
So instead of only the people with a 3800+ getting 100 fps, all people with 1500+ and up will get 100 fps, as long as they have the 6800U. And even the 6600 users will get very good framerates, and the people with R3x0 cards. So it's a win-win situation.
That's what hardware-acceleration is all about. Doing things on the CPU is outdated, and should only be done as a fallback.

If you can point me to other games using stencil shadows extensively (there's Deus Ex and Thief 3, and I think there's that crappy Secret Services game) and that have done this...

We have 3dmark03 of course. And if we just look at offloading processing to the GPU in general, not specifically shadow volumes, we will see that most games today, and most releasing in the near future, use the GPU for skinning.
Carmack basically hasn't moved on from the CPU-based stuff that he used in Quake.

In FarCry with the min req system you also get a game that looks completely different from running it on the optimal system, while in D3, even if you play on the min req system, you are still getting a very similar graphical experience.

Whether you see that as an advantage or a disadvantage is up to you I suppose.
I personally expect better hardware to provide better graphics.

You should try it with "r_shadows 0" and compare.

Why? I've already uninstalled it now... And what I do know is that the game runs faster on my brother's P4 2.4 GHz with an R8500 than on my system, and that on my system, 3dmark03 runs lots better than Doom3.
Turning shadows off would put us back in the Quake age. I don't care if it runs well then. I have the hardware to handle stencil shadowing as well, if people bother to use it.

Because IHV A is worse at it than IHV B? Then JC would be accused of being ATi's puppet.

No, all IHVs are worse than IHV B.
Besides, just because it works acceptably on selected hardware doesn't mean it's the best solution. Obviously it's always better to have less AGP traffic. It will reduce waiting time for both the CPU and the GPU, at the least.

Because if the code was optimal for ATi, it would be suboptimal for nVidia, and then nVidia fanboys would cry foul. You can't please everyone every time with the same code.

Yes you can. NV4x can handle code that is 'optimal for ATi' just fine. Since the architecture is nearly identical, it would be nearly optimal for both vendors.
On top of that, I believe that the later FX models also had extra vertex shader performance, so those would most probably run it fine as well, with the exception of low-budget junk like the 5200 perhaps. But those could simply demand a high-end CPU and use the CPU path instead.

Just look at what Valve said, 5 times more time just optimising for the GFFX's.

That says more about the FX than about the amount of time required for optimizing for a certain videocard.
The FX just doesn't have any kind of floating point performance. Since the Radeon can do everything it does at a decent speed, it is very simple to write fast code for it. There is no need to spend much time on optimizations. The same goes for NV4x. FX was just a total miss.
 
Silly said:
Not all developers are game developers.

Ah, that explains your ignorance regarding game development! :D

And if we take your claim that 3dmark03 stresses the GPU, then it would be even MORE remarkable that 3dmark03 is actually considerably faster than Doom3, which doesn't stress the GPU.

Looks like you really have no clue what you are talking about. Most 3dmark03 scores are given at 1024x768 without AA/AF anyway, using relatively short rendered scenes with no interactivity, so where is the basis for comparison? Anyone who claims that Doom3 "doesn't stress the GPU" is probably not in their right mind anyway, because this is not a rational statement based on the data that we have seen so far.

Neither are Doom3 timedemos. Carmack said in an interview that no AI or physics are calced in the timedemo.

Again you are missing the point. Many, many reviewers have made it painfully obvious that Doom3 is quite "playable" using a variety of different cards and systems. That means playable in an interactive environment. Your basis for comparison is completely illogical.

Since you are so fond of criticizing other people's work, why don't you code up an interactive game yourself, and then we will see who is boss? :D

That's nonsense, since FM checks for any cheats, and has only found one shader replacement in Radeon drivers at all

It is very ignorant to believe that the major IHV's do not optimize for 3dmark03, even with the new guidelines. Ever wonder why NV and ATI scores sometimes jump up with new beta drivers, even though gaming performance is largely unchanged? But hey, you need some excuse to hammer your illogical points home ;)

Depends on how low, doesn't it? If it gets below 10 fps, like on my system, you are no longer able to aim clearly, or avoid enemy fire.
I guess that is not noticeable or distracting?

If you are consistently hitting into the 10fps range, it doesn't take a rocket scientist to figure out that you need to either: lower the in-game detail, lower the resolution settings, upgrade your cpu, upgrade your quantity of RAM, upgrade your graphics card, or any combination of these. Looks like you are not a very good problem solver if you can't figure this out. Instead of constantly complaining about your problem, how about working to solve your problem?

Gee, I wonder why those newer Forceware drivers make the games run so fast!

Gee, do ya think that maybe an optimized compiler could do it? I don't recall anyone saying that the IQ is any worse than at inception, so there is really no downside for FX users using newer and better drivers.

You couldn't be more wrong. It is not NV vs ATi at all. Firstly, as I said before, NV4x would also benefit from a GPU-based solution.
Secondly, if I were to buy a card at this point in time, and money was no object, it would most certainly be a 6800Ultra, since it has more features than any Radeon, and performance is very good.

Well, I'll be darned...maybe this is just about Carmack :LOL:

P.S. This is my final response to you in this thread. Keep on bashing Carmack, Doom3, etc, at least you are being consistent about it! ;)
 
Scali said:
Not all developers are game developers. I am currently doing a project for a niche CAD program to design hydraulic manifolds, as I mentioned before elsewhere, I'm sure. Ironically enough, this is in OpenGL.
As a developer you should know that it's not possible to implement every code path, even if some are 'simple'.

It's entirely possible that the hit detection system, which requires skinning on the CPU, looked like it would negate any benefits, and as such it wasn't a priority to try.

I'm sure JC would love to support every system configuration as well as possible, and if it were a quick job he would have done it, but the fact that it's not there shows you don't have all the information.

I know internet forums love to shout bias at people like JC or TS, but they're both really passionate about technology and their games, and I can't see either of them just not bothering without a good technical reason.
 