HDR - The Halo Way * Spin-Off

Status
Not open for further replies.
I also wonder how many PS3 users would have preferred Heavenly Sword to render at 640p with a more stable framerate. This is no flamebait or an attack against the devs, I highly respect their work and consider the game's graphics to be one of the best of this generation so far. But I've heard a lot of people moan about the game, and especially because of the 4xAA, it would be a far better candidate for scaling. Although I think the PS3 hardware scaler couldn't help there... right?

Scaling and dropping AA is only a benefit if you're GPU bound...
Turning off AA (and/or moderate scaling) didn't improve the frame rate in most areas, so it was pointless (there were only a few empty vistas that got any noticeable benefit from no AA).

It's not like we didn't try the easy things :p
 
I think it's fairly likely that they render to the two buffers (in EDRAM, via MRT) in a single pass. Going from there to actually submitting all the primitives twice is a huge drain on resources (both GPU and CPU) so there wasn't much choice as far as I can see.
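To illustrate the single-pass idea: one common way to fill two LDR targets from one shader invocation is to write the same HDR color at two different exposures, then pick whichever target hasn't clipped when resolving. This is a hedged sketch of that general technique (the exposure ratio and reconstruction rule here are illustrative assumptions, not Bungie's actual encoding):

```python
# Illustrative sketch: emulating a single-pass MRT write where one HDR
# color is split across two 8-bit render targets at different exposures,
# then reconstructed. EXPOSURE_RATIO is a hypothetical value.

EXPOSURE_RATIO = 8.0

def quantize8(x):
    """Clamp to [0,1] and quantize to 8 bits, like an LDR render target."""
    return round(max(0.0, min(1.0, x)) * 255) / 255

def mrt_write(hdr_value):
    """One shader invocation writing the same color to two targets."""
    rt0 = quantize8(hdr_value)                    # normal exposure
    rt1 = quantize8(hdr_value / EXPOSURE_RATIO)   # darker exposure
    return rt0, rt1

def reconstruct(rt0, rt1):
    """Fall back to the darker target once the bright one has clipped."""
    return rt0 if rt0 < 1.0 else rt1 * EXPOSURE_RATIO

for v in (0.25, 1.0, 3.5, 7.9):
    rt0, rt1 = mrt_write(v)
    print(v, "->", reconstruct(rt0, rt1))
```

The point is that both targets are fed by the same shader output, so the geometry only has to be submitted once.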
 
Bungie and Rare are Microsoft Game Studios' two principal studios. Couldn't Bungie have recycled the Perfect Dark engine?
The same strange resolution?
The same HDR?
The same motion blur?
...
 
I don't see a studio like Bungie (with all the amazing talent they have) recycling any tech from another studio, unless we're talking about something revolutionary...
 
I don't see a studio like Bungie (with all the amazing talent they have) recycling any tech from another studio, unless we're talking about something revolutionary...

What about the other way around? Perfect Dark Zero was a very early title, but if anything I would have expected Halo development to start at least as early, if not earlier, than PDZ.
 
People seem to forget a few things about the Halo 3 engine here: 4-player co-op over Xbox Live, a unique HDR lighting pipeline, saved films (probably a lot more complicated than we would think) and Forge (again, not so simple). I seriously doubt that one could simply add these features to an existing engine...
 
People seem to forget a few things about the Halo 3 engine here: 4-player co-op over Xbox Live, a unique HDR lighting pipeline, saved films (probably a lot more complicated than we would think) and Forge (again, not so simple). I seriously doubt that one could simply add these features to an existing engine...

I also don't think they can just fit features in retroactively. *If* they want to do it, they have to plan for it from the beginning, and then roll out the implementations incrementally.

They did "saving films" in the original Marathon series. Assuming it's implemented the same way, an old Marathon interview mentioned that the game saves the stream of "instructions and user parameters" and recreates the playback later.



Heavenly Sword should not even appear in this thread. The slowdowns are mostly due to in-transit loading; the PIP cinematics and fight scenes are fine.
 
They did "saving films" in the original Marathon series. Assuming it's implemented the same way, an old Marathon interview mentioned that the game saves the stream of "instructions and user parameters" and recreates the playback later.

Yeah, it really just sounds like it's demo playback and editing, kind of like fans did a lot with the original Quake.
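The core trick behind Quake/Marathon-style demos is that you never save frames at all: you save the timestamped input stream and replay it through the same deterministic simulation. A minimal sketch of that idea (all names here are illustrative, not from either game):

```python
# Demo-recording sketch: the "saved film" is just a log of timestamped
# commands; replaying them through the same deterministic game tick
# reproduces the exact same run.

def simulate(state, command):
    """A trivially deterministic 'game tick': same inputs -> same state."""
    if command == "forward":
        state += 1
    elif command == "back":
        state -= 1
    return state

def play(commands, record=None):
    state = 0
    for tick, cmd in enumerate(commands):
        if record is not None:
            record.append((tick, cmd))   # this log IS the saved film
        state = simulate(state, cmd)
    return state

film = []
live_result = play(["forward", "forward", "back"], record=film)
replay_result = play([cmd for _, cmd in film])
assert live_result == replay_result      # playback matches the live run
```

This is also why such films are tiny compared to video, and why features like Halo 3's free camera are possible: the playback re-runs the full simulation, so you can view it from anywhere.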
 
My personal opinion is that the lighting and shadowing are some of the best I have seen in a game. The engine leaves a lot to be desired elsewhere (AF... boo!), but my disappointment with most next-gen games is lighting engines where ambient light tends to destroy a ton of contrast. The self-shadowing and HDR highlights bring out a lot of subtle detail.

With their current engine in view, if I had to choose between castrating the HDR engine (and possibly the shadows) to get 720p and AA, I'm taking their current implementation. The models are pretty basic, and a small bump in resolution isn't going to add substantially to the image. Cutting out the lighting engine would totally castrate the game, though.
 
I looked at some of the videos, and most notably the "first 10 minute" video on Gamersyde makes me wonder about something.

Is there a lack of a "full" dynamic lighting system? Because the muzzle flashes don't seem to have a proper effect on the environment and the flashes don't seem to cause any shadow changes.
 
Dynamic point lights don't typically cast shadows in what I have seen of Halo 3. A couple of us spotted this in the Gamersyde footage before the game's release; notably the warthog footage indoors where the headlights don't cause dynamic objects to cast shadows.
 
I read those slides this morning and I can't really say I'm impressed, but I don't want to go OT. Just one thing relevant to this thread: you don't need to keep two render targets in EDRAM at the same time to do what they are (maybe) doing, so the argument that their HDR technique needs extra memory doesn't sound good to me.
I have to agree with you. IMO it's just a waste of space.

Xenos doesn't support 32 bit floating point blending
I think he meant 32-bpp, hence the comparison to NAO32. NAO32 has much more dynamic range, but Xenos' FP10 has blending support.

I personally think the 13-stop range of FP10 is fine for a render target. Even a quality camera like the EOS 5D has a range of only 8.2 stops. Only the most extreme circumstances, like a very bright object viewed through a barely transparent material, will cause range problems, and things like the sun can be flagged with the alpha channel to ameliorate that problem.

Thus I feel FP10 is enough not to limit the pursuit of photorealism in any meaningful way. If you want to go beyond the abilities of today's best cameras then more range may be needed, but there are other challenges in 3D graphics that have higher priority.
 
I looked at some of the videos, and most notably the "first 10 minute" video on Gamersyde makes me wonder about something.

Is there a lack of a "full" dynamic lighting system? Because the muzzle flashes don't seem to have a proper effect on the environment and the flashes don't seem to cause any shadow changes.

It does seem that much of the lighting isn't dynamic. I've also only ever seen one light source casting shadows at a time, at least in gameplay. The shadows also fade when the occluder is a set distance away from the receiver (try jumping in the air when looking at your shadow), and also aren't cast on alpha-blended surfaces. However the tradeoff is that they seem to be quite high-quality when visible, which is a big plus.
 
Explain? I thought that (no matter how you divide it) FP10 gives you a lot less than that; NAO32/LogLuv32 is IIRC ~30 stops.
FP10 has denorm support. Range is 1/256 to 32, which is 8,192:1, i.e. ratio of 2^13. 3 bits of exponent give you a range from 1.0 * 2^-2 to 1.984 * 2^4 plus an additional state indicating denorm (i.e. 0/64...63/64 * 2^-2). Tonal range is 512 steps when ignoring negative values, and that should be enough to avoid banding.

Did you want me to explain that or why 13 stops is enough? I think I covered the latter pretty well in my previous post.
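The arithmetic in the post above can be checked directly. Following its stated parameters (3-bit exponent with a denormal state, 6-bit mantissa steps of 1/64, normal exponents spanning 2^-2 to 2^4) - and hedging that the exact Xenos FP10 bit layout may differ - a tiny decoder reproduces the quoted range:

```python
# Small-float decode following the numbers in the post above: 3-bit
# exponent (value 0 = denormal), mantissa in 64ths, normals spanning
# 2^-2 .. 2^4. Illustrative only; not an exact Xenos FP10 spec.

def decode(exp, mant):
    """exp in 0..7, mant in 0..63 -> real value."""
    if exp == 0:                         # denormal: 0/64 .. 63/64 * 2^-2
        return (mant / 64.0) * 2.0 ** -2
    # normal: implicit leading 1, exponents 2^-2 .. 2^4
    return (1.0 + mant / 64.0) * 2.0 ** (exp - 3)

smallest = decode(0, 1)     # smallest nonzero value: 1/256
largest = decode(7, 63)     # largest value: 1.984375 * 2^4 = 31.75
print(smallest, largest, largest / smallest)
```

The ratio comes out at 8128:1, i.e. almost exactly the 2^13 (13 stops) quoted above.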
 
Encoding to a different color space can be done in many ways. NAO32 was a first implementation and I tried to make it as good as FP16, but the vast majority of games can use simpler implementations that take fewer cycles.

Pardon my lack of knowledge, but 4 or 5 cycles out of how many cycles? Plus could funky color space be used for fp10 on RSX? and would that be as cheap as 16-bit color? I find that really fascinating!
 
Pardon my lack of knowledge, but 4 or 5 cycles out of how many cycles?
Out of n cycles, from one to hundreds of cycles. What you do is just append to the end of a shader some code that converts from RGB colors to something else.
Plus could funky color space be used for fp10 on RSX?
I wouldn't call it a 'funky color space', as FP10 just works with RGB, but the answer is yes: it would be possible to implement the FP10 format (with no blending) in software on a programmable GPU. Too bad it would cost a lot more cycles than much better solutions that employ alternative color spaces.
and would that be as cheap as 16-bit color? I find that really facinating!
Shader-wise it wouldn't be, though it would use less external bandwidth.
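To make the "append a few instructions at the end of the shader" idea concrete, here is a simplified LogLuv-style encode/decode: log-encode luminance into 16 bits and store two chromaticity coordinates in the remaining 8+8 bits. This is an illustration of the general idea only, not the actual NAO32 math, and it assumes a nonzero input color:

```python
# Simplified LogLuv-style pack into a 32-bpp layout: 16 bits of
# log2-encoded luminance plus two 8-bit chromaticity channels.
# Illustrative sketch; NAO32's real transform differs in detail.
import math

def encode(r, g, b):
    lum = 0.2126 * r + 0.7152 * g + 0.0722 * b
    # log2 luminance mapped from [2^-16, 2^16) into 16 bits
    L = int((math.log2(lum) + 16.0) / 32.0 * 65535) if lum > 0 else 0
    s = r + g + b                      # assumes a nonzero color
    u = int(r / s * 255)               # 8-bit chromaticity coordinates
    v = int(g / s * 255)
    return L, u, v

def decode(L, u, v):
    lum = 2.0 ** (L / 65535 * 32.0 - 16.0)
    r_frac, g_frac = u / 255, v / 255
    b_frac = 1.0 - r_frac - g_frac
    s = lum / (0.2126 * r_frac + 0.7152 * g_frac + 0.0722 * b_frac)
    return r_frac * s, g_frac * s, b_frac * s
```

Note how few operations the encode adds (one dot product, one log, a couple of divides), which is why appending it to an already long shader costs so little relative to the whole frame.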
 
Out of n cycles, from one to hundreds of cycles. What you do is just append to the end of a shader some code that converts from RGB colors to something else.

So, roughly, percentage-wise... how much shading time is taken up by the conversion per frame in a shader-heavy game like Heavenly Sword?

Also... I'm curious about something... with all the talk of shading on Cell, could the conversion be done on the SPUs?
 