Chalnoth said:
Actually, from what I've been reading, the shadows are going to make it much more GPU-limited than the previous engines, with approximately 2-3 passes over the entire scene per light source. (The same goes for the new Unreal engine...though they've also added a lot of CPU-intensive tasks to bring the CPU requirements up as well.)
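To give a rough idea of what that pass structure looks like, here's some purely illustrative C on my part--based on the stencil-shadow approach Carmack has described publicly, with every type and function name hypothetical:

    /* Per-light multi-pass rendering, stencil-shadow style.
       All types and draw_* functions are hypothetical stand-ins. */
    typedef struct { int num_lights; } scene_t;

    void draw_depth_pass(scene_t *s);             /* pass 0: fill the Z-buffer once */
    void draw_shadow_volume(scene_t *s, int li);  /* pass 1: mark shadowed pixels in stencil */
    void draw_lit_surfaces(scene_t *s, int li);   /* pass 2: add light li where stencil allows */

    void render_frame(scene_t *scene)
    {
        draw_depth_pass(scene);
        for (int li = 0; li < scene->num_lights; li++) {
            draw_shadow_volume(scene, li);
            draw_lit_surfaces(scene, li);
        }
        /* Roughly 1 + 2*num_lights passes over the visible geometry:
           the fill-rate burden scales with the light count while the
           CPU's share barely grows--hence GPU-limited. */
    }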
Just based on the fact that games cannot easily scale how much processing needs to be done on the CPU, good engines will always try not to put too much pressure on it.
Oh, and Hammer will be a beast in processing power, particularly for programs compiled for it.
I just hope that AMD has the foresight to include support for DDR400 and DDR II in the initial release...anything less would be a serious mistake.
Carmack needs to make his games more CPU-dependent than he has been accustomed to; otherwise, offering SMP options for his software will seem mainly an exercise in futility (it sure doesn't help the current Q3 very much--if at all).
Instead, as Carmack has indicated he prefers nVidia products as his build baseline, it seems to me he's ended up with a sort of "ass backwards" CPU dependency which does nothing to scale his software with faster or additional CPUs.
nVidia chips have always been abundantly CPU-dependent, and the nv25s are certainly no exception. In most of the testing I've done I see 100% CPU monopolization by the nVidia GPU. Want your GeForce4 Ti 4600 to run faster? Well, throw it a more powerful CPU and you'll get it....
While Carmack is throwing most of his code at the GPU, the nVidia GPU in turn is throwing much of its work back to the CPU. So in an "ass backwards" kind of way, Carmack's code is very much CPU-dependent--without the benefits of scaling, however. That's why I think he needs to write more of his code directly to the CPU than he presently does.
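For what it's worth, here's the kind of mechanism I suspect is behind that 100% reading (strictly my guess, illustrative C only--nothing from nVidia's actual driver): when the app gets ahead of the GPU, the driver can either put the thread to sleep or spin-poll the hardware, and spinning makes the CPU meter read 100% even though the CPU is really just waiting.

    /* Hypothetical sketch of why a GPU-bound app can still peg the CPU.
       fifo_has_space stands in for a GPU status-register read. */
    #include <stdbool.h>

    extern volatile bool fifo_has_space;   /* stand-in for hardware status */
    void write_to_fifo(const void *cmd);   /* hypothetical: hand command to GPU */

    void submit_command(const void *cmd)
    {
        /* Busy-wait until the GPU drains its command FIFO. No useful work
           happens here, but the OS sees a thread running flat out--hence
           "100% CPU" on a GPU-limited frame. A blocking wait (sleep until
           a GPU interrupt fires) would show low CPU use instead. */
        while (!fifo_has_space)
            ;   /* spin */

        write_to_fifo(cmd);
    }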
First, DDR 400 will most likely be stillborn, as few DRAM manufacturers will actually make JEDEC-spec'd DDR 400 (just as few of them now make JEDEC-compliant PC2700--apart from Crucial, I don't know of anyone else who does). DDR II most likely will not be in wide circulation until mid-'03 to mid-'04 (your guess is as good as mine), so there's absolutely no rush at all to manufacture a Hammer with a DDR II RAM controller right now (what would they even test it with, for gosh sakes?)
Second, there will be as much as a 25% improvement in latency between DDR 333 controlled by Hammer's on-die memory controller and DDR 333 controlled by today's off-die memory controller chips. This means Hammers will run quite a bit better with DDR 333 than current CPUs do.
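To show where a figure like 25% could come from (these numbers are made up purely for illustration, not measurements): suppose a cache-miss load costs about 120 ns today, with roughly 30 ns of that being the round trip over the FSB to the external northbridge and back. Move the controller on-die and that hop largely disappears:

    120 ns - 30 ns = 90 ns effective latency
    30 / 120 = 25% improvement

The exact numbers will differ, but the point is that the saving comes off every single memory access, regardless of bandwidth.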
Third, since Hammer will of course receive periodic refreshes for MHz ramps and revisions, AMD can easily drop in a newer memory controller as time goes on--but only when the time is right. Believe me--DDR 400 versus DDR 333 in Hammer systems will produce very little performance gain (and that's assuming companies actually jump on the "DDR 400" bandwagon in the first place), and DDR II is completely useless at present and will likely remain so for at least another year.