> Is there more detailed technical info in the Japanese text?
No, it's all there. Fafalada already gave an explanation in his comment above and I think it's a valid one.
> If anything it means RE5 is still headed for both platforms, and the same goes for future titles like Dead Rising 2 and other unannounced titles. Big games like Devil May Cry that can be supported by the fan base of one system, and that likely received funding from one of the console manufacturers, will likely remain exclusive, while lesser titles that need all the help they can get will be multiplatform. I do hope to see games like Breath of Fire and RE Online (should have been multiplatform last gen) do this.
Well, one thing this seems to point to is LP on PS3 and DMC on 360. If they're building an entire engine and one of its main goals is "multi-platform support", it would seem they are positioning themselves to do this.
From what I've read (not sure if it's been discussed here), this engine strategy from Capcom is a major change for Japanese developers. Previously they had built games from scratch each time, shunning the Western practice of engines such as UE3.0. So I just think that aspect is very interesting.
Thanks for the info.
This is really bad IMO; it will mean that we see very few games at 60 FPS. Do they provide any reason for that estimate?
I'm not a big fan of Game Frameworks, not merely from a performance point of view, but more from a productivity point of view. Surely there are parts that can be moved from project to project fairly easily (math library, debugging tools, memory allocators, lots of low-level stuff), but when it comes to the engine itself, I think it takes much less time and costs much less to just write what a game needs and nothing more, rather than trying to force a Big Framework To Bind Them All to do what you need while still having to maintain all the code that you don't really need, but that is there to add flexibility.
I definitely don't think that the old myth of reusable code is the way to reduce costs; there are other, more effective ways with a good record of success in other fields.
Not everyone agrees with me on this, though. Well, in the game industry, very few people agree with me on this.
Fran/Fable2
The 2.6x sounds like a good figure. You've got 3 cores, so the very most performance gain you could get over 1 core is 3x. The only way you'd get better than a 3x increase is if a single thread is poor at using a core's resources; an efficient thread won't leave much for a second hardware thread to run on.
I'd be pleased with 2.5x as much performance from 3 cores on an SMP processor. I think dual-core processors are typically around 1.5x the performance of their single-core varieties.
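For what it's worth, Amdahl's law puts a number on that intuition: 2.6x on 3 cores implies roughly 92% of the frame is parallelized, which would project to about 1.86x on a dual core. A minimal sketch (the 2.6x figure is from the discussion above; everything else is my own back-of-envelope math):

```cpp
// A minimal sketch of Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
// where p is the fraction of the frame that runs in parallel on n cores.
#include <cstdio>

int main()
{
    // Solving 1 / ((1 - p) + p / 3) = 2.6 for p gives p = (1 - 1/2.6) * 3/2.
    const double p = (1.0 - 1.0 / 2.6) * 3.0 / 2.0;
    std::printf("parallel fraction: %.2f\n", p);                           // ~0.92
    std::printf("dual-core speedup: %.2f\n", 1.0 / ((1.0 - p) + p / 2.0)); // ~1.86
    return 0;
}
```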
> Previously they had built games from scratch each time, shunning the Western practice of engines such as UE3.0.
> our mind isn't actually precise enough to notice one extra 1/60th of a second of delay.
> I think AA as we know it (including MSAA) will disappear, or at the very least become less important as other effects take over.
Umh... such as? We need more full-scene AA, not less!
It's talking about rendering effects at low resolution to save rendering time/fillrate (using MSAA to reduce the artifacts of the lowered resolution).
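As a rough illustration of that idea, here's a hypothetical D3D9-style sketch; the surface names and sizes are mine, and a real composite would usually draw an alpha-blended quad rather than a plain StretchRect:

```cpp
// Sketch of the half-res-plus-MSAA effects path: fillrate-heavy effects
// (particles, heat haze, etc.) are drawn into a half-res multisampled
// target, resolved, then upscaled into the scene. Half the resolution on
// each axis means only 1/4 of the pixels get shaded.
#include <d3d9.h>

void CompositeLowResEffects(IDirect3DDevice9* device,
                            IDirect3DSurface9* msaaHalfRes,     // 640x360, 4x MSAA
                            IDirect3DSurface9* resolvedHalfRes, // 640x360, plain
                            IDirect3DSurface9* backBuffer)      // 1280x720
{
    // Resolve the MSAA samples into a plain half-res surface; the resolve
    // smooths the edges that the lowered resolution would otherwise expose.
    device->StretchRect(msaaHalfRes, NULL, resolvedHalfRes, NULL, D3DTEXF_NONE);

    // Upscale into the full-res back buffer with bilinear filtering.
    device->StretchRect(resolvedHalfRes, NULL, backBuffer, NULL, D3DTEXF_LINEAR);
}
```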
I think dual cores are generally a smaller performance increase than that. For games supporting SMP, the increase is closer to 30%.
Uh, Capcom? The company that would release 17 different iterations of virtually the same game designed its engines from the ground up each time?
> As an ex-pianist and occasional keyboard player on my PC, with a MIDI keyboard and softsynths, I can assure you that a latency of 33 ms (30 Hz) is "almost unplayable".
Game input devices aren't MIDI keyboards though, and besides, games already have at least a frame of latency added to any input, if not more. Plus games always have things like inertia that will smooth over any bumps in input and make differences of a few hundredths of a second even less noticeable.
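To put numbers on the "at least a frame of latency" point, a quick sketch (the two-frames-of-delay pipeline is an assumption for illustration, not a measurement):

```cpp
// Back-of-envelope input-to-display latency at 30 Hz vs 60 Hz, assuming a
// hypothetical pipeline with two frames of delay (input sampling plus one
// buffered frame before display).
#include <cstdio>

int main()
{
    const double framesOfDelay = 2.0;
    std::printf("60 Hz: %.1f ms\n", framesOfDelay * 1000.0 / 60.0); // ~33.3 ms
    std::printf("30 Hz: %.1f ms\n", framesOfDelay * 1000.0 / 30.0); // ~66.7 ms
    return 0;
}
```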
> But what's the "for effects" part?
Stuff achieved with render targets: could be reflections or displacement effects (heat haze, refraction, etc.), things like the original Unreal's force fields, and so on. Maybe explosions, particle effects, etc. that are composited into the scene... Most likely, more things can be done than I can think of.
MSAA can be useful to accelerate shadow map rendering...
> Can you elaborate on that?
Yep.
Instead of, e.g., rendering the shadow map into a 1024x1024 rendertarget with a 1024x1024 Z-buffer, you render into a 512x512 rendertarget with 4x MSAA, which effectively has a 1024x1024 Z-buffer?
> When is the MSAA resolved?
Why would you resolve it? You need the supersampled depth data; you don't want to resolve it.
> You only need the Z-buffer in both cases anyway, you would have color writes disabled...
That's pretty much it.
> OK, so if you have color writes disabled in both cases, why is the MSAA case faster? Is the Z-buffer which serves as the Z-buffer for an MSAA rendertarget somehow inherently faster than the "plain" Z-buffer, even if they are virtually the same pixel size?
It depends on the particular HW implementation, but your GPU can do a much better job of compressing Z tiles and of walking/rejecting more fragments per clock.
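For concreteness, a minimal D3D9-style sketch of creating the targets being described (the sizes and the 4x sample count are illustrative, not from the thread; 'device' is assumed to be an initialized IDirect3DDevice9*):

```cpp
// A 512x512 render target with 4x MSAA, whose depth buffer effectively
// carries the sample count of a 1024x1024 single-sample Z-buffer.
#include <d3d9.h>

void CreateMsaaShadowTargets(IDirect3DDevice9* device,
                             IDirect3DSurface9** color,
                             IDirect3DSurface9** depth)
{
    // The color target is only there to satisfy the API; color writes stay
    // disabled while the shadow map is rendered.
    device->CreateRenderTarget(512, 512, D3DFMT_A8R8G8B8,
                               D3DMULTISAMPLE_4_SAMPLES, 0, FALSE,
                               color, NULL);

    // The multisampled depth-stencil surface is the part we actually use:
    // 512x512 pixels with 4 depth samples each.
    device->CreateDepthStencilSurface(512, 512, D3DFMT_D24S8,
                                      D3DMULTISAMPLE_4_SAMPLES, 0, TRUE,
                                      depth, NULL);
}
```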