Capcom's "Framework" game engine

Well, one thing this seems to point to is LP on PS3 and DMC on 360. If they're building an entire engine and one of its main goals is 'multi-platform support', it would seem they are positioning themselves to do this.
If anything it means RE5 is still headed for both platforms, and the same goes for future titles like Dead Rising 2 and other unannounced titles. Big games like Devil May Cry that can be supported by the fan base of one system, and likely received funding from one of the console manufacturers, will likely remain exclusive, while lesser titles that need all the help they can get will be multiplatform. I do hope to see games like Breath of Fire and RE Online (should have been multiplatform last gen) do this.
 
From what I've read (not sure if it's been discussed here), this engine stuff from Capcom is a major change for Japanese developers. Previously they had built games from scratch each time, shunning the Western practice of engines such as UE3.0. So I just think that aspect is very interesting.

I'm not a big fan of Game Frameworks, not merely from a performance point of view, but more from a productivity point of view. Sure, there are parts that can be pretty easily moved from project to project (math library, debugging tools, memory allocators, lots of low-level stuff), but when it comes to the engine itself, I think it takes much less time and costs much less to just write what a game needs and nothing more, rather than trying to force a Big Framework To Bind Them All to do what you need and still have to maintain all the code that you don't really need but is there to add flexibility.

I definitely don't think that the old myth of reusable code is the way to reduce costs; there are other, more effective ways with a good record of success in other fields.

Not everyone agrees with me on this, though; well, in the game industry, very few people agree with me on this.

Fran/Fable2
 
Thanks for the info.



This is really bad IMO; it will mean that we will see very few games at 60 FPS. Did they provide any reason for that estimate?

It's my firm belief that neither Xbox 360 nor PS3 really has enough raw pixel fillrate
(for most developers) to do 60 FPS while displaying truly next-generation imagery.

Both Xenos and RSX only have 8 ROPs and only ~4 Gpixel/s of fillrate. 720p uses most of that up, so they're almost down to the fillrates of last-generation consoles once they've ascended to HD resolution. I made a thread about this.

IMO both consoles should've had ~12 Gpixels/sec, nearly free 4x AA (like Xenos), and the option to go to 8x AA at moderate fillrate/bandwidth cost (less cost than with PC GPUs).


Does that mean we will never see next-gen graphics at 60 fps? No, not at all, but there will be few games that do it. The options are to go down to 480p or cut out anti-aliasing
(cutting out AA will help the PS3 more than the Xbox 360).

I've heard time after time that fillrate is no longer very important, with shaders taking over in importance. Well, that might be true to some extent, but it doesn't mean fillrate is no longer important or that 4 Gpixels/sec will "cut it".
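
To put rough numbers on the fillrate point: taking the ~4 Gpixel/s figure above at face value and ignoring MSAA samples, extra passes (shadows, reflections, post-processing) and alpha-blended overdraw, the raw per-frame budget at 60 fps works out like this (a back-of-the-envelope sketch, not a measurement):

```cpp
// Raw overdraw budget per frame = fillrate / (pixels per frame * frames per second).
// Uses only the ~4 Gpixel/s figure quoted above; real budgets are far lower once
// MSAA samples, shadow maps, reflections, post passes and alpha blending are counted.
#include <cstdio>

int main() {
    const double fillrate_pixels_per_sec = 4.0e9;   // ~4 Gpixel/s (Xenos/RSX figure above)
    const double fps = 60.0;

    const double pixels_720p = 1280.0 * 720.0;      // HD target
    const double pixels_480p = 640.0  * 480.0;      // last-gen style resolution

    std::printf("720p @ 60fps: %.0fx overdraw budget per frame\n",
                fillrate_pixels_per_sec / (pixels_720p * fps));   // ~72x
    std::printf("480p @ 60fps: %.0fx overdraw budget per frame\n",
                fillrate_pixels_per_sec / (pixels_480p * fps));   // ~217x
    return 0;
}
```

The move from 480p to 720p alone cuts the per-pixel headroom by a factor of three, which is essentially the argument being made above.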
 
I think AA as we know it (including MSAA) will disappear, or at the very least become less important as other effects take over.
 
I'm not a big fan of Game Frameworks, not merely from a performance point of view, but more from a productivity point of view. Sure, there are parts that can be pretty easily moved from project to project (math library, debugging tools, memory allocators, lots of low-level stuff), but when it comes to the engine itself, I think it takes much less time and costs much less to just write what a game needs and nothing more, rather than trying to force a Big Framework To Bind Them All to do what you need and still have to maintain all the code that you don't really need but is there to add flexibility.

I definitely don't think that the old myth of reusable code is the way to reduce costs; there are other, more effective ways with a good record of success in other fields.

Not everyone agrees with me on this, though; well, in the game industry, very few people agree with me on this.

Fran/Fable2

Fran, I am actually surprised by your post. I think the start-from-scratch approach would be more suitable for a small but experienced group of developers. The frameworks should allow more junior developers to contribute and learn without tying up the gurus. Or are these just pipe dreams in practice?
 
The 2.6x sounds like a good figure. You've got 3 cores, so the very most performance gain you could get over 1 core is 3x. The only way you'll get a better than 3x increase is if your single thread is poor at using core resources. An efficient thread won't leave much left for a second thread to run off.

I'd be pleased with 2.5x as much performance from 3 cores on an SMP processor. I think typically dual-core processors are around 1.5x the performance of their single core varieties.

I think dual cores are generally a smaller performance increase than that. For games supporting SMP, the increase is closer to 30%.
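
One way to sanity-check figures like 2.6x from three cores, ~1.5x from two, or the "closer to 30%" estimate is Amdahl's law. This is my own framing rather than anything from the posts above, and it assumes the only limit is the serial fraction of the frame (no memory or synchronisation overheads):

```cpp
// Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N), where p is the fraction of the
// frame that actually runs in parallel. Illustrative only; it ignores memory
// bandwidth, synchronisation and scheduling costs.
#include <cstdio>

double amdahl(double parallel_fraction, int cores) {
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main() {
    // A game with ~90% of its frame parallelised:
    std::printf("2 cores, p=0.90: %.2fx\n", amdahl(0.90, 2));  // ~1.82x
    std::printf("3 cores, p=0.90: %.2fx\n", amdahl(0.90, 3));  // ~2.50x

    // A more typical early SMP port with ~35% of the frame parallelised:
    std::printf("2 cores, p=0.35: %.2fx\n", amdahl(0.35, 2));  // ~1.21x
    return 0;
}
```

By this measure, getting 2.6x out of three cores implies well over 90% of the frame running in parallel, which is why it sounds like such a good figure.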

Previously they had built games from scratch each time, shunning the Western practice of engines such as UE3.0.

Uh, Capcom? The company that would release 17 different iterations of virtually the same game designed engines from the ground up each time?

our mind isn't actually precise enough to notice one extra 1/60th of a second of delay.

Don't know if it's the same thing, but the Kaillera online system for emulators allows you to control the number of input frames. 30 fps is noticeably less responsive than 60, and below 30 fps it starts reaching unplayable levels.

I believe it's been found that well-trained individuals can detect response latencies as low as 5ms.
 
Kind of disappointing news, if only because of the silly graphical issues that I've seen in Dead Rising (some similar ones in the LP demo too). I hope they fix the issue with alpha/transparent textures (like hair) blocking DoF-type effects (it's really apparent in a lot of the cutscenes in Dead Rising, and kind of jarring), and the faint outline that character models get when DoF/blur effects are used. I'd rather DoF be removed altogether if it's going to be put in with issues like that. Forgetting the personal issues I have with the engine (hopefully they are fixed over time and aren't deep-rooted engine issues), I suppose it's good news for Capcom.
 
If anything it means RE5 is still headed for both platforms, and the same goes for future titles like Dead Rising 2 and other unannounced titles. Big games like Devil May Cry that can be supported by the fan base of one system, and likely received funding from one of the console manufacturers, will likely remain exclusive, while lesser titles that need all the help they can get will be multiplatform. I do hope to see games like Breath of Fire and RE Online (should have been multiplatform last gen) do this.

If a console manufacturer PAYS for development and exclusivity then sure, they'll get it, but the context of my original point was that there are no 'side deals' in the equation. Those deals obviously change everything.

I do see your point, but just because a single platform can 'support' a game doesn't mean there isn't more money to be made by putting it on 2 or more. I think the real equation is whether the added cost of 'platform 2 development' is significantly less than the projected sales on that platform.
 
It's talking about rendering effects at low resolution to save rendering time/fillrate (using MSAA to reduce the effects of the lowered resolution).

How do you suppose they merge the main scene with the additionally rendered low-res rendertarget?

Effects, as in explosions or similar stuff, are usually built up from many polygons with soft textures and alpha blending; MSAA doesn't help with that type of object, and the lowered resolution would be much less of a problem with them.
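
On the merging question above: one plausible way to do it (a minimal software sketch of the general idea, not anything confirmed about Capcom's framework; the buffer sizes and colours below are made up for illustration) is to render the particles into a reduced-resolution RGBA target, then upsample it with bilinear filtering and alpha-blend it over the full-resolution scene:

```cpp
// Software sketch of compositing a low-resolution effects buffer over the main scene:
// render smoke/explosions into a reduced-resolution RGBA target, then upsample it
// with bilinear filtering and blend it over the full-res frame ("over" operator).
#include <vector>
#include <algorithm>
#include <cstdio>

struct RGBA { float r, g, b, a; };

// Bilinear sample of a low-res buffer at normalised coordinates (u, v) in [0, 1].
RGBA sampleBilinear(const std::vector<RGBA>& buf, int w, int h, float u, float v) {
    float x = u * (w - 1), y = v * (h - 1);
    int x0 = (int)x, y0 = (int)y;
    int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    float fx = x - x0, fy = y - y0;
    auto lerp = [](const RGBA& a, const RGBA& b, float t) {
        return RGBA{ a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
                     a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
    };
    RGBA top = lerp(buf[y0 * w + x0], buf[y0 * w + x1], fx);
    RGBA bot = lerp(buf[y1 * w + x0], buf[y1 * w + x1], fx);
    return lerp(top, bot, fy);
}

int main() {
    const int fullW = 8, fullH = 8;   // "full-res" scene (tiny, for illustration)
    const int lowW  = 4, lowH  = 4;   // reduced-resolution effects target

    std::vector<RGBA> scene(fullW * fullH, RGBA{0.2f, 0.3f, 0.8f, 1.0f}); // blue-ish background
    std::vector<RGBA> effects(lowW * lowH);                               // particles go here
    effects[1 * lowW + 1] = RGBA{1.0f, 0.6f, 0.1f, 0.5f};                 // one soft "explosion" pixel

    // Composite: for every full-res pixel, fetch the upsampled effect colour and blend it over.
    for (int y = 0; y < fullH; ++y)
        for (int x = 0; x < fullW; ++x) {
            float u = (x + 0.5f) / fullW, v = (y + 0.5f) / fullH;
            RGBA e = sampleBilinear(effects, lowW, lowH, u, v);
            RGBA& d = scene[y * fullW + x];
            d.r = e.r * e.a + d.r * (1 - e.a);   // standard "over" blend
            d.g = e.g * e.a + d.g * (1 - e.a);
            d.b = e.b * e.a + d.b * (1 - e.a);
        }

    std::printf("centre pixel after composite: %.2f %.2f %.2f\n",
                scene[3 * fullW + 3].r, scene[3 * fullW + 3].g, scene[3 * fullW + 3].b);
    return 0;
}
```

On real hardware this composite step would just be a full-screen quad textured with the low-res target and blended over the frame, so its cost is small compared with filling all those particle layers at full resolution.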
 
I think dual cores are generally a smaller performance increase than that. For games supporting SMP, the increase is closer to 30%.

Dual-thread CPUs see increases of 15-25%; a true dual-core CPU should see a much higher increase given decent code.

Uh, Capcom? The company that would release 17 different iterations of virtually the same game designed engines from the ground up each time?

They mean they used to design an engine for each game series, rather than making one engine and basing all of their different games on that one engine.
 
As an ex-pianist and occasional keyboard player on my PC, with a MIDI keyboard and softsynths, I can assure you that a latency of 33 ms (30 Hz) is "almost unplayable"
Game input devices aren't MIDI keyboards though, and besides, games already have at least a frame of latency added to any inputs, if not more, plus games always have things like inertia that will smooth over any bumps in input and make differences of a few hundredths of a second even less noticeable.

You're not bunnyhopping at a precise beat of 4:5ths through a CS match now, are you? :D Bringing up piano playing is totally irrelevant in this discussion...
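
For reference, the frame-time arithmetic behind these numbers (the buffered-frame counts below are illustrative assumptions, not measurements of any particular game):

```cpp
// Frame time and input latency arithmetic: one refresh at 60 Hz is ~16.7 ms, at 30 Hz ~33 ms,
// and any frames of buffering in the input/render pipeline multiply that base delay.
#include <cstdio>

int main() {
    const double hz[] = { 60.0, 30.0 };
    for (double rate : hz) {
        double frame_ms = 1000.0 / rate;
        std::printf("%2.0f Hz: frame time %.1f ms, +1 buffered frame -> %.1f ms, +2 -> %.1f ms\n",
                    rate, frame_ms, 2 * frame_ms, 3 * frame_ms);
    }
    return 0;
}
```

So a 30 Hz game with even one extra buffered frame is already well past the 33 ms figure quoted above.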

But what's the "for effects" part?
Stuff achieved with render targets: could be reflections or displacement effects (heat haze, refraction, etc.), things like the original Unreal's force fields and so on. Maybe explosions, particle effects, etc. that are composited into the scene... Most likely, more things can be done than I can think of. :)
 
Stuff achieved with render targets: could be reflections or displacement effects (heat haze, refraction, etc.), things like the original Unreal's force fields and so on. Maybe explosions, particle effects, etc. that are composited into the scene... Most likely, more things can be done than I can think of. :)

Reflections are already handled with a lower-resolution rendertarget by everyone; it's nothing to brag about in a presentation (and they don't even need MSAA). For heat haze, force fields etc., you probably should just distort the original, full-resolution rendertarget, since you need to be able to render it at full resolution when the heat haze is absent anyway. For explosions etc., see my previous post.
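
As a sketch of the "just distort the original, full-resolution rendertarget" idea (my own minimal illustration, not something from the presentation; the wobble function and amplitude are arbitrary): sample the already-rendered scene with a small per-pixel offset and write the result out.

```cpp
// Minimal software sketch of a heat-haze style distortion: for each output pixel,
// offset the lookup into the already-rendered full-resolution scene by a small
// wobble and copy that colour. A real implementation would do this in a pixel
// shader with an animated noise texture; this just shows the data flow.
#include <vector>
#include <cmath>
#include <algorithm>
#include <cstdio>

int main() {
    const int w = 16, h = 16;
    std::vector<float> scene(w * h), distorted(w * h);
    for (int i = 0; i < w * h; ++i) scene[i] = (i % w) / float(w);  // simple gradient "scene"

    const float amplitude = 1.5f;   // offset strength in pixels (assumed value)
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            // Offset the source coordinate by a sine wobble; clamp to the image.
            int sx = std::clamp(int(x + amplitude * std::sin(y * 0.8f)), 0, w - 1);
            int sy = std::clamp(int(y + amplitude * std::sin(x * 0.8f)), 0, h - 1);
            distorted[y * w + x] = scene[sy * w + sx];
        }

    std::printf("pixel (8,8): %.2f -> %.2f after distortion\n",
                scene[8 * w + 8], distorted[8 * w + 8]);
    return 0;
}
```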
 
MSAA can be useful to accelerate shadow map rendering.

Can you elaborate on that?
Instead of, e.g. rendering the shadowmap into a 1024x1024 rendertarget with 1024x1024 Z-buffer, you render into a 512x512 rendertarget with MSAA, which has effectively a 1024x1024 Z-buffer? When is the MSAA resolved? You only need the Z-buffer in both cases anyway; you would have color writes disabled...
 
Can you elaborate on that?
Instead of, e.g. rendering the shadowmap into a 1024x1024 rendertarget with 1024x1024 Z-buffer, you render into a 512x512 rendertarget with MSAA, which has effectively a 1024x1024 Z-buffer?
Yep
When is the MSAA resolved?
Why would you resolve it? You need supersampled depth data; you don't want to resolve it.
BTW, how would you resolve a Z-buffer (I mean, which filter would you apply)? ;)
You only need the Z-buffer in both cases anyway; you would have color writes disabled...
That's pretty much it.
It might also happen that some artist decides to replace some geometry with a quad + texture + alpha test to simulate some complex
shadow geometry, and you suddenly get depressed because your shadow map has lost resolution and you don't know why... (based on a true story ;) )
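
To make the sample-count bookkeeping concrete (a toy illustration only; the real win nAo describes, better Z-tile compression and faster fragment walking/rejection, lives in the hardware and can't be shown in a snippet, and real MSAA uses rotated sample patterns rather than the ordered 2x2 grid assumed here):

```cpp
// A WxH depth buffer with 4x MSAA holds the same number of depth samples as a
// (2W)x(2H) single-sample buffer. This toy just shows the index bookkeeping:
// mapping a "virtual" 1024x1024 texel onto a (pixel, sample) pair of a 512x512
// 4x MSAA buffer, assuming an ordered 2x2 sample pattern for simplicity.
#include <cstdio>
#include <cstddef>

struct SampleRef { int px, py, sample; };  // pixel coordinates + sample index (0..3)

SampleRef toMsaaSample(int hiResX, int hiResY) {
    SampleRef r;
    r.px = hiResX / 2;
    r.py = hiResY / 2;
    r.sample = (hiResY & 1) * 2 + (hiResX & 1);  // which of the 2x2 sub-positions
    return r;
}

int main() {
    const std::size_t samples512x4 = 512u * 512u * 4u;
    const std::size_t samples1024  = 1024u * 1024u;
    std::printf("512x512 with 4x MSAA: %zu depth samples\n", samples512x4);
    std::printf("1024x1024, no MSAA:   %zu depth samples\n", samples1024);   // identical

    SampleRef r = toMsaaSample(1023, 511);
    std::printf("virtual texel (1023,511) -> pixel (%d,%d), sample %d\n", r.px, r.py, r.sample);
    return 0;
}
```

The alpha-test gotcha then follows directly: the alpha test runs at pixel frequency rather than sample frequency, so an alpha-tested quad only contributes detail at 512x512 and that part of the shadow map silently drops back to the lower resolution, which is presumably the "true story" above.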
 
OK, so if you have color writes disabled in both cases, why is the MSAA case faster? Is the Z-buffer that serves as the Z-buffer for an MSAA rendertarget somehow inherently faster than the "plain" Z-buffer, even if they are virtually the same pixel size?
 
OK, so if you have color writes disabled in both cases, why is the MSAA case faster? Is the Z-buffer that serves as the Z-buffer for an MSAA rendertarget somehow inherently faster than the "plain" Z-buffer, even if they are virtually the same pixel size?
It depends on the particular HW implementation, but your GPU can do a much better job at compressing Z tiles and also at walking/rejecting more fragments per clock.
 
It depends on the particular HW implementation, but your GPU can do a much better job at compressing Z tiles and also at walking/rejecting more fragments per clock.

Well, this should have dawned on me earlier, but... I guess you're talking about functionality exposed only on, say, the RSX by Sony/NVIDIA? I don't think you can use a 512-but-MSAA Z-buffer as a 1024 Z-buffer under DirectX.
 