How come, are they inefficient or do they even leak?
Artists need more, and behold, they get 8 GB...
Wonder why Capcom uses Server CPUs though.
AFAIK server chips are pretty much the same as the desktop variants, with the exception of larger caches and support for ECC memory... and wayyyy more expensive.
Perhaps floating-point performance?
Seems a bit silly that the artists are running a different OS version from the programmers, too (64-bit as opposed to 32-bit).
Artists need the address space. Programmers too -- if your target machine only has 256 MB of RAM, you can probably get away with 32-bit, but I recently had to go to x64 on my main PC due to this issue.
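The arithmetic behind that address-space point is worth spelling out (a minimal sketch; the 2 GiB figure is the default user-mode split on 32-bit Windows of that era, which is an assumption about the OS being discussed):

```python
# A 32-bit pointer can address at most 2**32 bytes = 4 GiB of
# virtual address space, regardless of physical RAM installed.
total_vas = 2**32
print(total_vas // 2**30)  # 4 (GiB)

# On 32-bit Windows the kernel reserves half of that by default,
# leaving a user process roughly 2 GiB -- so art tools juggling
# large scenes and textures hit the wall long before an 8 GB
# workstation runs out of physical memory.
user_vas = total_vas // 2
print(user_vas // 2**30)  # 2 (GiB)
```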
The target platform for the PC version of Lost Planet is a 32-bit OS.
I know that...
I was referring to the fact that the programmers should be using the 64-bit OS too,
unless it's down to incompatibility with the devkit interface, x86 development issues on an x64 platform, etc.
They feel the vertex performance of the Xbox 360 GPU can match that of the NVIDIA GeForce 8800. The bad points are that still images don't look very good and that MSAA can't be applied to the blur. MT Framework can output supersampled images for media PR to hide these defects.
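The supersampled-PR-shot trick mentioned above is conceptually simple: render at a multiple of the target resolution, then average blocks of samples down to one output pixel, which smooths the aliasing MSAA would otherwise have to handle. A toy sketch with made-up grayscale values (the 2x factor and box filter are illustrative assumptions, not MT Framework's actual pipeline):

```python
# Render target at 2x resolution (4x4), made-up grayscale samples.
hi_res = [
    [0, 255, 0, 255],
    [255, 0, 255, 0],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]

# Downsample: average each 2x2 block into one 2x2-output pixel
# (a box filter), which blends jagged edges away.
lo_res = [
    [sum(hi_res[2 * y + dy][2 * x + dx]
         for dy in (0, 1) for dx in (0, 1)) // 4
     for x in range(2)]
    for y in range(2)
]
print(lo_res)  # [[127, 127], [0, 255]]
```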
Of course it's ballpark; I didn't write "fully able to match" anyway. That sentence follows a comment where they describe a benchmark they ran for 2.5D motion blur ("2.5D motion blur took 5 ms, of which vertex processing was roughly 1 ms"), so their impression is most likely based on that particular effect. Since their PC-version development machines had 8800 GTX cards, I think it's natural that they brought it into the comparison.
Is that how it exactly translates? Looking at the Babelfish translation:
"the apex efficiency of Xbox 360 GPU is the response that ", it is good game, for up-to-date PC by comparison with the NVIDIA GeForce 8800 series of GPU,"
it seems to suggest that it's just comparable to an 8800 GPU. Perhaps more along the lines of "in the ballpark" rather than fully able to match it. That comparison may have been drawn simply because the 8800 was the only other unified-shader architecture at the time.
I find it extremely hard to believe that Xenos could match the 8800's vertex processing performance given its much lower shader power and knowing the R6xx's relative performance (to both Xenos and 8800).
First of all, 10-15,000 triangles isn't that much at all. Even though it's an order of magnitude more than in Quake 3, it's still far from enough to display detailed shapes properly, especially curved ones; and low-poly hair and fur of any kind make it count for a lot less, too.
Second, most games today take many passes to render things, including depth buffers for shadow maps and the like. A single texture-mapped particle also means two triangles, and you can easily throw a couple of thousand particles at an explosion, smoke trail, or the like.
It all adds up in the end...
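To see how it adds up, here's a back-of-the-envelope tally using the figures from the post above (the pass count and particle count are illustrative assumptions, not measured numbers):

```python
# Per-frame triangle tally under the assumptions stated above.
base_mesh_tris = 15_000   # visible character/level geometry
render_passes = 3         # e.g. depth pre-pass, shadow map, main pass
particles = 2_000         # one explosion or smoke burst
tris_per_particle = 2     # a texture-mapped quad = two triangles

# Mesh geometry is resubmitted each pass; particles add on top.
total = base_mesh_tris * render_passes + particles * tris_per_particle
print(total)  # 49000
```

So a scene that sounds like "15,000 triangles" can easily triple or more once every pass and effect is counted.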
Hmm, that's what I would figure: when I run the CryEngine 2.0 SDK at "high settings", I typically get poly counts hovering around 1.1 million triangles at ~30 frames per second. Now, I know perfectly well that Crysis and CE2.0 use a voxel terrain system, but there's no way Lost Planet is pushing more polygons than Crysis.