Digital Foundry tech analysis channel at Eurogamer

What does the source say? The table of contents doesn't have a whole lot of info. Maybe there is something subtle during the day on top of the LOD system, but the detail takes a hit in the distance no matter what time of day it is.

That the haze is different from LOD was the point I was making. Your original post implied that it was simply "just" LOD. Yes, there is an LOD system. No, the atmospheric haze is not it. I apologize if I read your post wrong.
 
Bah, DOF is awesome when done well. I especially like how it was used in Gears of War 2, where whatever you are looking at always comes into focus while the surroundings at other distances from that focal point blur. Sorta like real life.
Unfortunately, games and real life differ: unless there's something on the TV tracking your eyeballs, you can't tell what the person playing the game is actually looking at.
What you can do is guess where the player is looking (e.g. under the aim cursor, or some other important position) and focus on that, but this guess is often wrong (*). E.g. an enemy appears at the side of the screen; the player glances at it to see if he should shoot it, but only sees a blur.
A valid reason to have DOF, though, is to hide rendering artifacts (e.g. low-res textures etc.).

(*) I've decided to ditch in-game DOF for this reason; it detracts from gameplay. (I'll keep it in for screenshots though, because it looks good.)
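
For illustration, here's a rough sketch of the cursor-focus scheme just described, in C: sample the scene depth under the aim cursor and ease the focal plane toward it. All names and constants are hypothetical; depth_under_cursor stands in for whatever hook the engine provides at the crosshair.

Code:
#include <math.h>

static float focal_dist = 10.0f;    /* current focal distance (metres) */

/* Guess the player's focus: assume they look at the aim cursor and
   pull the focal plane toward the depth found there. */
void update_dof(float depth_under_cursor, float dt)
{
    const float pull_speed = 4.0f;            /* focus pull rate */
    float t = 1.0f - expf(-pull_speed * dt);  /* framerate-independent ease */
    focal_dist += (depth_under_cursor - focal_dist) * t;
}

/* Blur factor for a pixel at 'depth': 0 = in focus, 1 = fully blurred.
   An enemy whose depth differs from the depth under the cursor only
   ever shows up blurred, which is the glance-and-see-a-blur problem
   described above. */
float blur_amount(float depth)
{
    const float sharp_range = 5.0f;   /* depth band kept sharp (metres) */
    return fminf(fabsf(depth - focal_dist) / sharp_range, 1.0f);
}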
 

Which probably explains why it's very understated when not aiming in Gears 2.

Either way, it's the best DOF implementation I've seen so far.

Anyway, not to keep derailing this thread, I'll leave my opinion at that. :)

Back to Infamous (and this one isn't a direct reply to you).

And again, just personal taste, but I certainly prefer how Crackdown dealt with long-distance views to the way Infamous does. That massive blur is just distracting. Perhaps it's something I'd grow used to, or something I'd be able to ignore when playing.

And yes, I realize it's probably an artistic way to deal with a technical limitation. Just not an artistic touch I'm fond of. :)

Regards,
SB
 
Crackdown feature should be ready for this weekend. We have created a quite remarkable time-lapse video which shows that, while Crackdown is technically outdated in some areas, as a whole it's still a remarkably good-looking game. Somewhat apt considering the discussion we're having here.

Tried to sponge some info out of Ruffian Games on the sequel (I know Gary Liddon from days of yore), but they weren't having it!

Anyway, expect that on Saturday and a Fight Night R4 piece today. Bionic Commando is also up for analysis later this week.
 
The problem with Uncharted is that its static lighting means that a single line is often repeated from one frame to the next, if the camera is not moving.

AFAIR, Drake has dynamic lighting for every light source. What "static lighting" are you talking about?
Lighting updates are not executed every frame for every source, but that doesn't mean the lighting is static, you know.
 

It's probably pointless trying to get info before E3 ... everything that publishers want to get out will get out in a very planned manner at a very planned time. ;) I reckon we'll probably hear about Crackdown 2 (or another project from that developer) at E3.
 

I am not talking globally; I am talking in terms of specific scenes, and in those scenes the lighting is completely static, resulting in horrendously skewed results. I apologise if I was not clear.
 
I have a question in regard to the last comparison (Bionic Commando): how do you estimate whether HDR is implemented in a game? I mean, lighting can be implemented with varying success whether or not HDR is at work. Are there concrete hints in a screenshot?
There's also something I would like to learn more about, and I'm not sure this is the proper thread: it's clearly stated that NAO32 is superior to FP10, and I would like to understand why in a comprehensible way.
(I tried to read this and this, but it's too technical... simple enough for me would be a matrix of comparison screenshots :LOL: and an idea of the impact of no support for alpha blending.)

Keep up the good work, both of you!! (There are two of you, right?)
 

To put it simply, the LogLuv encoding has a very, very (very!) high dynamic range as well as colour precision.

Lighting accumulation is better because of the separation of chrominance and luminance. If you consider multiple overlapping lights in RGB space, you can saturate the area affected by those lights, as brightness is tied to the magnitude of the colour. That also has implications for the mixing of colours. Under LUV, colours are much better preserved under multiple lights.

edit: the dynamic range works out to be in the region of 10^38.
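
To make that concrete, here is a minimal C sketch of the 32-bit LogLuv layout that NAO32 builds on: a 16-bit log2 luminance channel plus 8 bits each for the CIE u'v' chromaticities. The constants follow Ward's published LogLuv format (the sign bit of the full spec is dropped here for simplicity); this is an illustration, not Heavenly Sword's actual shader code.

Code:
#include <math.h>
#include <stdint.h>

/* 32-bit LogLuv: 16-bit log2 luminance + 8-bit u' + 8-bit v'. */
typedef struct { uint16_t Le; uint8_t ue, ve; } LogLuv32;

LogLuv32 logluv_encode(double r, double g, double b)
{
    /* linear RGB -> CIE XYZ (sRGB primaries) */
    double X = 0.4124*r + 0.3576*g + 0.1805*b;
    double Y = 0.2126*r + 0.7152*g + 0.0722*b;
    double Z = 0.0193*r + 0.1192*g + 0.9505*b;

    /* CIE u'v' chromaticities; fall back to the white point for black */
    double s  = X + 15.0*Y + 3.0*Z;
    double up = (s > 0.0) ? 4.0*X / s : 0.1978;
    double vp = (s > 0.0) ? 9.0*Y / s : 0.4683;

    /* log2 luminance in 1/256-stop steps, biased by 64 stops */
    double Le = 256.0 * (log2(fmax(Y, 1e-38)) + 64.0);

    LogLuv32 p;
    p.Le = (uint16_t)fmin(fmax(Le, 0.0), 65535.0);
    p.ue = (uint8_t)fmin(410.0 * up, 255.0);
    p.ve = (uint8_t)fmin(410.0 * vp, 255.0);
    return p;
}

void logluv_decode(LogLuv32 p, double *r, double *g, double *b)
{
    double Y  = exp2((p.Le + 0.5) / 256.0 - 64.0);
    double up = (p.ue + 0.5) / 410.0;
    double vp = (p.ve + 0.5) / 410.0;

    /* u'v'Y -> XYZ -> linear RGB */
    double X = (9.0 * up) / (4.0 * vp) * Y;
    double Z = (12.0 - 3.0*up - 20.0*vp) / (4.0 * vp) * Y;
    *r =  3.2406*X - 1.5372*Y - 0.4986*Z;
    *g = -0.9689*X + 1.8758*Y + 0.0415*Z;
    *b =  0.0557*X - 0.2040*Y + 1.0570*Z;
}

Note that the luminance channel alone spans roughly 2^-64 to 2^64, which is where the ~10^38 figure above comes from.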
 
Thanks for the response :)
I found something interesting here.
I find this part interesting; it backs up what Nao and the others have said really well:
Even though the gamut and dynamic range of the 32-bit LogLuv format is superior to that of 24-bit RGB, we will not get the exact pixel values again if we go through this format and back to RGB. This is because the quantization size of a LogLuv pixel is matched to human perception, whereas 24-bit RGB is not. In places where human error tolerance is greater, the LogLuv encoding will have larger quantization volumes than RGB and therefore may not reproduce exactly the same 24-bit values when going back and forth. Over most of the gamut, the LogLuv tolerances will be tighter, and RGB will represent lower resolution information. (This is especially true for dark areas.) In other words, the losses incurred going through the LogLuv format may be measurable in absolute terms, but they should not be visible to a human observer since they are below the threshold of perception.
This one too:
We performed two tests to study the effects of going between RGB and LogLuv formats, one quantitative test and one qualitative test. In the quantitative test, we went through all 16.7 million 24-bit RGB colors and converted to 32-bit LogLuv and back, then measured the difference between the input and output RGB colors using the CIE ΔE* perceptual error metric. We found that 17% of the colors were translated exactly, 80% were below the detectable threshold and 99.75% were less than twice the threshold, where differences may become noticeable.
To go back to my topic: so the naked eye could be a decent indicator, in that lighting which is more or less pleasing to the eye may hint at the effect being at work.

EDIT
But it's interesting to note the lengths developers are willing to go to in order to make the games different 8O
I mean, this HDR implementation is accessible on both platforms for most likely the same cost (a few more cycles).
Basically they saved cycles on this but implemented SSAO, put better textures in some places, and didn't bother to fix an obvious tearing problem... The world is a strange place... :LOL:
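
The paper's quantitative test is easy to approximate, for anyone curious. Here's a sketch reusing the logluv_encode/logluv_decode functions from the earlier snippet; the paper measured perceptual error with CIE ΔE*, while this simplified version only counts exact 8-bit round trips (so it speaks to the "17% translated exactly" figure, not the perceptual thresholds) and treats the 8-bit values as linear RGB.

Code:
#include <stdio.h>
#include <math.h>
#include <stdint.h>

/* Prototypes from the earlier LogLuv sketch. */
typedef struct { uint16_t Le; uint8_t ue, ve; } LogLuv32;
LogLuv32 logluv_encode(double r, double g, double b);
void     logluv_decode(LogLuv32 p, double *r, double *g, double *b);

int main(void)
{
    long exact = 0;
    const long total = 256L * 256L * 256L;  /* all 16.7M 24-bit colours */
    for (int r = 0; r < 256; ++r)
    for (int g = 0; g < 256; ++g)
    for (int b = 0; b < 256; ++b) {
        double rd, gd, bd;
        logluv_decode(logluv_encode(r / 255.0, g / 255.0, b / 255.0),
                      &rd, &gd, &bd);
        /* back to 8 bits and compare with the input */
        if (lround(rd * 255.0) == r &&
            lround(gd * 255.0) == g &&
            lround(bd * 255.0) == b)
            ++exact;
    }
    printf("%.2f%% of colours round-trip exactly\n",
           100.0 * exact / total);
    return 0;
}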
 
I have a question in regard to the last comparison (Bionic Commando): how do you estimate whether HDR is implemented in a game? I mean, lighting can be implemented with varying success whether or not HDR is at work. Are there concrete hints in a screenshot?
Not really! It amazes me how quickly people claim HDR. HDR is rendered down to the standard limited RGB range in screenshots. You have no way of knowing from the frontbuffer whether it was rendered in the backbuffer that way, pixel for pixel, or whether the backbuffer intensity was scaled and processed to make the frontbuffer. E.g. compare a photorealistic raytrace of a car in a garage (low dynamic range) with a photograph of the same displayed on a monitor: the range of optical intensities will be similar, even though the photograph resolves data from a source with a huge dynamic range, versus the CG effort.

Video can show tone-mapping variations in effect, which is the principal giveaway. Aliasing along bright edges in an antialiased game is also a giveaway. I guess if you have a few comparable screenshots showing tone mapping with brightness variations, that's a clue. HDR does help with realistic lighting, making it easier to achieve a convincing look instead of needing to resort to tricks, but you can't look at a game with good lighting and assume it uses HDR. Bloom and such can be comfortably faked in LDR.
 
Got it ;)
Interesting! Thus it would be worthwhile for Grandmaster and Mazingerdude to put some emphasis on this, because the HDR claims for this game (and some others) seem pretty unsupported; they should give more information about how they came to that conclusion.

Back to Bionic Commando: I've just noticed (I still haven't downloaded the demo) that none of the versions use anti-aliasing; that's bad for a "non-HD" (I hate this expression...) game.

Actually I'm happy with my question (as if anyone cares :LOL: ), as it forced me to search for information about things I had only heard about in passing quite a while ago.
So I think I have properly understood the benefits of NAO32 (actually it looks like some people came to the same conclusions he did).
The precision for luminance is greater, and the log function seems more of a match for the way our eyes perceive variations in light intensity. The only con is that transparent objects must be rendered "normally".
All of this brings me to my next questions...
Isn't the RGB standard dumb (for video games at least)? I mean, 8 bits for alpha/transparency sounds like a lot. FP10 as implemented in Xenos goes with 2 bits, and sebbbi hints that 1 bit can fit the bill.
And would it be possible to implement on the 360 a NAO32-like HDR rendering that would support alpha blending? Think something like this:
Luminance 14/15 bits, U 8 bits, V 8 bits, alpha 1/2 bits
 
how do you estimate whether HDR is implemented in a game? I mean, lighting can be implemented with varying success whether or not HDR is at work. Are there concrete hints in a screenshot?

Nope, the only concrete evidence of an HDR implementation is the real-time tone-map change, which of course is something you can't see in a still image.

Here are some videos I made a while ago that show HDR tone mapping at work for dark and light adaptation.

http://www.youtube.com/watch?v=PPrPSEWid2w&fmt=22

http://www.youtube.com/watch?v=VgSe8BCCqgo&fmt=22

hope this helps ;)
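
For anyone wondering what drives that adaptation effect, a common approach (sketched below in C with illustrative names and constants; not necessarily what this particular game does) is to track an "adapted" average luminance that slowly chases the scene's current average, then expose each frame relative to it before tone mapping. Walking from a dark area into sunlight leaves the adapted value low for a moment, so the frame briefly over-exposes and then settles, which is exactly the visible tone-map change mentioned above.

Code:
#include <math.h>

static float adapted_lum = 0.5f;   /* luminance the "eye" is used to */

/* Ease the adapted luminance toward the scene's current average. */
void adapt(float scene_avg_lum, float dt)
{
    const float rate = 1.5f;       /* adaptation speed per second */
    adapted_lum += (scene_avg_lum - adapted_lum)
                 * (1.0f - expf(-rate * dt));
}

/* Expose by the adapted luminance, then apply a Reinhard-style curve. */
float tonemap(float hdr_lum)
{
    const float key = 0.18f;       /* mid-grey target */
    float exposed = hdr_lum * key / adapted_lum;
    return exposed / (1.0f + exposed);
}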
 
Tone mapping can be done in LDR (albeit worse).
Where HDR shines is when a colour is accessed multiple times (with LDR, because of the limited range, things get lumped into the same value), i.e. in reflections etc.

For the vast majority (the meat and potatoes) of what you see in a game, there's going to be very little difference between the two.
I've always maintained HDR is very overrated because of this (OK, it's getting better now that it doesn't have as much performance impact as it once did).
To have true HDR you've got to put in a lot of extra work texture-wise etc.
I'm not sure this is the proper thread: it's clearly stated that NAO32 is superior to FP10, and I would like to understand why in a comprehensible way.
FP10 stores colours as RGB; LogLuv (NAO32) stores colours as luminance plus chrominance.
The human eye is far more sensitive to luminance than it is to colour.
http://en.wikipedia.org/wiki/Logluv_TIFF
As humans can't distinguish colour across a very wide spectrum of possible colours, LogLuv satisfies human observers with 8 bits on each of the U/V components. The lightness component is then the most critical information carrier: it has to satisfy the requirement of storing the high range offered by the input data, and is the component to which humans are most sensitive. LogLuv chooses a 16-bit representation with base-2 logarithmic scaling of the component (hence LogLuv), enabling the representation of lightness values over a range of about 38 orders of magnitude.
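
As a quick sanity check on that range figure, the span of the 16-bit log2 luminance channel from the earlier sketch (stepping in 1/256-stop increments, using 15 bits of magnitude) works out as follows:

Code:
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* value = 2^((code + 0.5)/256 - 64) */
    double lo = exp2(    0.5 / 256.0 - 64.0);   /* smallest code */
    double hi = exp2(32767.5 / 256.0 - 64.0);   /* largest 15-bit code */
    printf("min %.2e, max %.2e, ratio %.2e\n", lo, hi, hi / lo);
    /* prints roughly: min 5.42e-20, max 1.84e+19, ratio 3.40e+38,
       i.e. about 38 orders of magnitude, matching the 10^38 figure
       quoted earlier in the thread */
    return 0;
}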
 
Zed, thanks for the extra explanation :)

Does somebody know whether this would be possible?
liolio said:
would it be possible to implement on the 360 a NAO32-like HDR rendering that would support alpha blending? Think something like this:
Luminance 14/15 bits, U 8 bits, V 8 bits, alpha 1/2 bits
 
Probably a noob question, but anyway.

Do you always transform the LogLuv/NAO32 representation into RGB space before using it for calculations, or are you able to perform some of the GPU passes without the transformation?
You can do whatever you want with the data! It only *has* to be converted to RGB for the GPU to render it, because that aspect is hardwired in, or for the operations where LogLuv fails, like alpha blending.
liolio said:
would it be possible to implement on the 360 a NAO32-like HDR rendering that would support alpha blending? Think something like this:
Luminance 14/15 bits, U 8 bits, V 8 bits, alpha 1/2 bits
How do you propose to do alpha blending on semi-transparent surfaces? 1 bit is okay for a mask but can't handle more than that. :???:
 
:oops:
I've just realised that I misread this part:
sebbbi said:
(R11G11B10_FLOAT)
For some reason I thought that 1 bit was dedicated to alpha... not to mention that I hadn't noticed this:
Regarding the notable absence of the alpha channel in this format:

Quote:
Originally Posted by sebbbi
The alpha channel of the back buffer (render target) is rarely used for blending. All the most common blending techniques can be implemented with only the source alpha (object's texture alpha channel). Destination alpha is only useful for some multipass rendering tricks (and most can be implemented more efficiently with the stencil buffer).

For custom color representations (logluv, RGBE8, etc), the alpha channel is used to store the color data component (exponent, color multiplier or fourth color channel needed for the color space). When writing custom color representations to the back buffer, the alpha channel is directly written to just like the RGB channels, and no alpha blending is used. This also means that source alpha cannot be used either (as the pixel shader alpha output is used for the alpha write and contains color data instead of alpha). When you are rendering with a custom color space, the only way to get alpha channel blending to work is to ping-pong between 2 buffers (sampling the pixel color from one and rendering to the other). This slows down the performance considerably (the game has to copy the whole backbuffer to a texture for every alpha blended object).
I feel really stupid now... (by the way, this information comes from this very topic)

So, for my question to make sense, the proper values should be: luminance 14 bits, U 8 bits, V 8 bits, alpha 2 bits (as 2 bits seems to be enough for transparency in the "standard" FP10 format).
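
For what it's worth, here's a purely hypothetical sketch of that L14/U8/V8/A2 packing. The layout is liolio's proposal, not any real render-target format, and the closing comment shows why fitting the bits isn't actually the hard part.

Code:
#include <stdint.h>

/* Hypothetical 32-bit layout: 14-bit log luminance, 8-bit u',
   8-bit v', 2-bit alpha. */
typedef uint32_t L14UV8A2;

static L14UV8A2 pack_l14uv8a2(uint32_t Le14, uint32_t ue,
                              uint32_t ve, uint32_t a2)
{
    return  (Le14 & 0x3FFFu)
         | ((ue   & 0xFFu) << 14)
         | ((ve   & 0xFFu) << 22)
         | ((a2   & 0x3u)  << 30);
}

/* The catch, per sebbbi's quote above: the fixed-function blender
   would treat these fields as ordinary colour channels. Linearly
   blending a log-encoded luminance or a chromaticity coordinate
   produces nonsense, so alpha blending would still need a
   decode-blend-encode pass (ping-ponging between two buffers),
   exactly the slow path described in the quote. */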
 