Halo: Reach

I don't care if they still use a sub-720 framebuffer, but I really hope they add some AA and AF. Anyone heard more about this (other than the preliminary comment that it's improved)?
 
No news at all, it probably hasn't been locked down yet because of performance variables. AF in particular can have a large effect on performance, because of the relatively limited speed of the XGPU TMUs (at least as far as I know).

On a side note, what was the actual resolution of the Halo 3 multiplayer beta?
 
No news at all, it probably hasn't been locked down yet because of performance variables. AF in particular can have a large effect on performance, because of the relatively limited speed of the XGPU TMUs (at least as far as I know).

They could maybe set the max level of AF per texture/surface, but the number of extra cycles required for the higher sample counts is somewhat unpredictable. You'd certainly hope they're doing something about it with the larger-scale terrain, though.
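Purely to illustrate what a per-surface cap could look like (made-up structures and numbers, nothing from Bungie's engine), here's a minimal sketch that clamps each surface's requested anisotropy against a global budget, with the rough rule of thumb that N:1 AF can cost up to ~N bilinear taps:

#include <algorithm>
#include <cstdio>

// Hypothetical per-surface data; a real engine would carry this in material/sampler state.
struct Surface {
    const char* name;
    int requestedMaxAniso;   // what the material asks for (1 = no anisotropy)
};

// Clamp anisotropy per surface so less important textures don't eat TMU cycles.
int clampAniso(const Surface& s, int globalCap) {
    return std::min(s.requestedMaxAniso, globalCap);
}

int main() {
    const Surface surfaces[] = { {"terrain_albedo", 16}, {"decal_scorch", 4}, {"skybox", 1} };
    const int globalCap = 8;   // engine-wide budget, chosen arbitrarily here
    for (const Surface& s : surfaces) {
        int aniso = clampAniso(s, globalCap);
        // Rough rule of thumb: N:1 anisotropic filtering can take up to ~N bilinear taps,
        // so worst-case texture-unit cost grows roughly with the clamped degree.
        std::printf("%s: %dx AF (worst case roughly %dx the bilinear cost)\n",
                    s.name, aniso, aniso);
    }
    return 0;
}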

FWIW, ODST had a tweaked amount of filtering compared to Halo 3 (it was a negative LOD bias, though).

On a side note, what was the actual resolution of the Halo 3 multiplayer beta?
Ackshully... I don't think we ever determined that. The whole pixel counting biz started with one after he looked at the E3 2007 trailer. It definitely had no AA though.

Might have to see about some captures from Gamersyde (if those are still online), but I don't recall any other site having 720p grabs.


Edit: Getting 1152 for width...
Edit2: and 640 for height

So it was the same.
 
Yarp.

When I read about the new LOD system and how it let them save memory, which in turn allowed them to up the number of dynamic lights to a healthy amount, I wonder more about the possibility of a light pre-pass deferred renderer than about a possible use of the tessellator (not that I wouldn't want it to see some use...).

It certainly sounds like a deferred lighting of sorts, and a Light Pre-Pass renderer can give them a little more material flexibility than a full blown G-Buffer implementation... of course, this is at the expense of a second geometry pass, which incidentally would be necessary for adaptive/continuous tessellation on dynamic objects on the 360 as it lacks the DX11 Hull/Domain shaders that would have made it a single pass event. (not that I'm saying that's what they're doing :p)

It's entirely possible they're using the other form of Light Pre-Pass (CryEngine, Stalker), which avoids the second geometry pass at the expense of material variety, but still uses more render targets for more accurate spec.


It was probably mentioned in the polygon thread, but the tessellator would be an easy fit for the terrain (it's static), and they can still do the lighting on top of that.

edit: extra disclaimer because some might take it the wrong way. "IF IF IF". But no, the article doesn't say much other than the order-of-magnitude increase in dynamic lights.
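For anyone who hasn't seen a light pre-pass pipeline laid out, here's a bare-bones pass-ordering skeleton - placeholder types and empty functions only, not anyone's actual renderer, just the two variants discussed above:

#include <vector>

// Placeholder types and empty pass functions -- this only illustrates pass ordering.
struct Mesh {};
struct Light {};

void geometryPassNormalsDepth(const std::vector<Mesh>&) {} // write normals + depth (thin G-buffer)
void lightAccumulationPass(const std::vector<Light>&) {}   // accumulate per-pixel lighting in screen space
void geometryPassMaterials(const std::vector<Mesh>&) {}    // re-draw geometry, apply full material shaders

int main() {
    std::vector<Mesh> scene;
    std::vector<Light> lights;

    // Light pre-pass / deferred lighting as described in the post above:
    geometryPassNormalsDepth(scene);  // geometry pass #1
    lightAccumulationPass(lights);    // cost scales with lights, not (lights x materials)
    geometryPassMaterials(scene);     // geometry pass #2 reads the light accumulation buffer

    // The CryEngine/Stalker-style variant mentioned above keeps extra terms (e.g. spec)
    // in additional render targets during pass #1; whether that truly removes the second
    // geometry pass is debated further down the thread.
    return 0;
}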
 
I'm certainly expecting a sub-720 resolution, but so long as they improve IQ in other ways (and lighting to me is the most obvious IQ factor), I'm happy.
 
OK, stop repeating the non-truth about tessellation before you repeat it enough times that it becomes truth.

Nothing in that article hints at tessellation; on the contrary, they are talking about LODs, techniques for reducing detail, not increasing it.

What is usually called "impostors" is actively used in Crysis; the idea is that if you have a small, faraway object, it changes very little from frame to frame, so you can render it to a tiny texture - e.g. 32x32, and render in the full scene a single camera-facing quad with this texture. Then you keep reusing this texture for a set number of frames, or until the camera or the object itself moves enough, or the lighting conditions change enough to make the approximation too rough - when you have to re-render it. The engine keeps an "impostor cache" - e.g. a pool of several thousand impostors, typically sub-rectangles in a few large textures.
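To make the re-render condition concrete, here's a tiny sketch of the kind of check described above - the thresholds and structure names are invented, not Crytek's code:

#include <cmath>
#include <cstdio>

// Minimal sketch of when an impostor (a cached render of a distant object) must be
// re-rendered. All thresholds are invented for illustration.
struct Vec3 { float x, y, z; };

static float angleBetween(const Vec3& a, const Vec3& b) {
    float dot = a.x * b.x + a.y * b.y + a.z * b.z;
    float la  = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    float lb  = std::sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
    return std::acos(dot / (la * lb));
}

struct Impostor {
    Vec3 viewDirAtCapture;   // object-to-camera direction when the tiny sprite was rendered
    int  framesSinceCapture;
};

// Re-render when the view direction has changed too much or the cached sprite is too old.
bool needsRefresh(const Impostor& imp, const Vec3& currentViewDir) {
    const float maxAngle  = 0.05f;  // radians; arbitrary "approximation got too rough" threshold
    const int   maxFrames = 60;     // arbitrary maximum reuse interval
    return angleBetween(imp.viewDirAtCapture, currentViewDir) > maxAngle
        || imp.framesSinceCapture > maxFrames;
}

int main() {
    Impostor imp{{0.0f, 0.0f, 1.0f}, 10};
    Vec3 newDir{0.1f, 0.0f, 1.0f};
    std::printf("refresh: %s\n", needsRefresh(imp, newDir) ? "yes" : "no");
    return 0;
}

A real implementation would also track lighting changes and the object's own movement, and would keep the sprites as sub-rectangles of a few large atlas textures, as described above.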

There have been zero reports of any connection whatsoever between Ensemble's engine for Halo Wars and the Bungie Halo engine, so there is exactly this much - zero - in the assumption that because Halo Wars uses tessellation, Halo: Reach must also use tessellation.

They might or might not actually use tessellation; it would be wise of them to use it. The point is, we have absolutely no info on that; we might just as well assume Bungie uses slave unicorn labor to power their server farms.

This is tessellation-fetishism, pure and simple. Please stop until you have at least a shred of evidence.
 
What is usually called "impostors" is actively used in Crysis; the idea is that if you have a small, faraway object, it changes very little from frame to frame, so you can render it to a tiny texture - e.g. 32x32, and render in the full scene a single camera-facing quad with this texture. Then you keep reusing this texture for a set number of frames, or until the camera or the object itself moves enough, or the lighting conditions change enough to make the approximation too rough - when you have to re-render it. The engine keeps an "impostor cache" - e.g. a pool of several thousand impostors, typically sub-rectangles in a few large textures.

Thanks, interesting info. The only strange thing is that they seem to have used it only for clouds, as disabling impostors removes the clouds. I assume there are some drawbacks to using this method in certain situations?
 
It certainly sounds like a deferred lighting of sorts, and a Light Pre-Pass renderer can give them a little more material flexibility than a full blown G-Buffer implementation... of course, this is at the expense of a second geometry pass, which incidentally would be necessary for adaptive/continuous tessellation on dynamic objects on the 360 as it lacks the DX11 Hull/Domain shaders that would have made it a single pass event. (not that I'm saying that's what they're doing :p)

Yep, sounds like some form of deferred; on the Xbox 360, it seems, deferred lighting is a better choice.

Why do you need two geometry passes for tessellation?

I think some have talked about two passes in the context of partially preparing data via MEMEXPORT in one "pass", then rendering in the second pass; these are two passes in terms of "the GPU processing some data twice", but the two passes operate on very different inputs. In contrast, the two passes for deferred lighting are run on the same geometry and have to produce exactly the same results in terms of rasterized pixels, or else there will be severe artefacts (similar to what you see with z-fighting polygons of almost the same depth, or the horrible "surface acne" with not-exactly-the-same-depth in shadowmaps vs. scene rendering).

It's entirely possible they're using the other form of Light Pre-Pass (CryEngine, Stalker), which avoids the second geometry pass at the expense of material variety, but still uses more render targets for more accurate spec.

Are you sure that CryEngine works with one pass only?

Stalker is traditional deferred shading, not deferred lighting (DL = light prepass).

It was probably mentioned in the polygon thread, but the tessellator would be an easy fit for the terrain (it's static), and they can still do the lighting on top of that.

Yes, it would be great for terrain. But unlike Halo Wars and other RTSes, where terrain is a significant, almost mandatory part of the rendered scene, in Halo: Reach they probably have a lot of environments where terrain isn't present. So it would make less sense to invest in terrain tech.

Fair enough... again, I'm not saying that's what they're doing!

I wasn't pointing at you, sorry :) You were moderate enough :)

Thanks, interesting info. The only strange thing is that they seem to have used it only for clouds, as disabling impostors removes the clouds. I assume there are some drawbacks to using this method in certain situations?

You mean Crysis? Well, what you describe might mean that they didn't have an impostor-less implementation of clouds, while for trees they had other ways to render them - e.g. higher-performance but uglier pre-rendered billboards. I'm fairly certain they use it for vegetation.
 
Lighting being one of those aspects
From Halo 3's 3 or 4 light sources, Reach's engine presents 20 to 40 dynamic lights at the same time
Weapon effects benefit greatly from the improved lighting, i.e. Plasma bolts moving across the screen with their own independent light source, casting colour and shadow across the environment
That seems interesting; games could really benefit from more shadow-casting dynamic light sources. The end of the trailer seemed to suggest this, but since it's probably a cutscene I wonder if it will indeed be part of the gameplay graphics.
 
Why do you need two geometry passes for tessellation?...
...
I think some have talked about two passes in the context of ...*snip*

Ah... this makes sense... Thanks. :)

I'll have to grab the chapter from ShaderX7 again for details, but I recall it having to do with the lack of the hull shader on Xenos & with respect to continuous/adaptive/view-dependent tessellation.

I'll get back to you on this one.


Are you sure that CryEngine works with one pass only?

Stalker is traditional deferred shading, not deferred lighting (DL = light prepass).
It was mentioned in Engel's presentation on LPP, I think?

edit:
http://www.bungie.net/images/Inside/publications/siggraph/Engel/LightPrePass.ppt

Mm... slide 11. At least it reads like how Crytek described their implementation. In "Version B", they fill up a few RT channels with the specular info, whereas this is not done in Engel's original implementation, where a second geometry pass is done for applying different materials.

Yes, it would be great for terrain. But unlike Halo Wars and other RTSes, where terrain is a significant, almost mandatory part of the rendered scene, in Halo: Reach they probably have a lot of environments where terrain isn't present. So it would make less sense to invest in terrain tech.
I guess we'll see how outdoors the game will end up being. :)

I have a feeling we won't see any tech presentations until 2011. :(
I wasn't pointing at you, sorry. You were moderate enough.
I learned something from your post though! :)
Time to outlaw "tessellation" from the console forum, hur hur :p
 
http://www.bungie.net/images/Inside/publications/siggraph/Engel/LightPrePass.ppt

Mm... slide 11. At least it reads like how Crytek described their implementation. In "Version B", they fill up a few RT channels with the specular info, whereas this is not done in Engel's original implementation, where a second geometry pass is done for applying different materials.

Hmmm, interesting, thanks for that.
This is closer to "full deferred" in that it needs another RT, a color buffer, to store the diffuse color of the objects. I still think you lose the ability to apply different material shaders per-object in this scenario.

I think Crytek are doing the two-pass variant, see here:

http://www.slideshare.net/guest11b095/a-bit-more-deferred-cry-engine3

Slide 9, Pass 3: Forward shading with light accumulation texture is a full-scene geometry pass IMHO.
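Put as data rather than slides, the difference between Engel's original scheme and the slide-11 "Version B" boils down to what the first geometry pass writes per pixel. The channel packing below is invented purely for illustration, not taken from either presentation:

#include <cstdio>

// Illustrative per-pixel layouts only -- packing is made up for the example.
struct ThinGBuffer {            // Engel's original light pre-pass ("Version A")
    float nx, ny, nz;           // normal
    float depth;                // depth, to reconstruct position
    // diffuse/spec are applied later, in the second geometry pass, per material
};

struct FatterGBuffer {          // the slide-11 "Version B" / CryEngine-flavoured variant
    float nx, ny, nz;
    float depth;
    float specPower, specIntensity;  // spec terms kept in extra RT channels
    float albedoR, albedoG, albedoB; // diffuse colour buffer, closer to "full deferred"
};

int main() {
    std::printf("thin: %zu bytes/pixel, fatter: %zu bytes/pixel\n",
                sizeof(ThinGBuffer), sizeof(FatterGBuffer));
    return 0;
}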
 
With tessellation, you take an existing model and start to add polygons to it. If it's adaptive and view-dependent then the change is continuous and without 'popping'. So far so good. There are many possible tessellation schemes: some will also smooth out the newly created vertices (eventually turning a cube into something close to a sphere*), some will leave the new vertices on the plane of the starting triangle.

If you have smoothing going on, you need to add extra vertices to your geometry to accentuate sharp edges and control the newly created curves. So for example with a car model, where you have lots of smooth surfaces and sharp edges, you need a LOT of extra geometry. Your base model will have to be more detailed, and thus if you only rely on tessellation, the simplest 'LOD' - the untessellated version - is still going to be faaaar too heavy.
Since most of the vehicles and character armor in Halo are such sharp-edged hard surfaces, tessellation with smoothing isn't going to be beneficial there.

In order to add detail to a tessellated model, you also need to use a displacement map, preferably with a higher bit depth (16-bit int or float) and decent resolution. Because relative triangle size (in terms of final pixels) can't really be reduced on current platforms, it can only add mid-frequency geometry detail, and you will also have to keep the normal map. Also, not enough geometry means that the details are quite rough and you can't create sharp edges and smooth surfaces this way. Again, completely useless for hard-surface stuff; it's much better suited for creating muscles, folds, spikes etc. on creatures, or rough surfaces for rocks, terrain and trees.
So Bungie can't just build relatively low-polygon models and use tessellation and displacement mapping for close-ups, because they'd look quite ugly.
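A toy example of what displacement on tessellated geometry amounts to (made-up names, a single vertex, no real subdivision scheme): each generated vertex gets pushed along its normal by a height sampled from the displacement map, which is why it gives you mid-frequency bumps rather than crisp hard-surface edges:

#include <cstdio>

// Toy illustration only: one vertex, a fake displacement-map lookup, no actual
// subdivision. Real tessellation generates the new vertices first, then displaces them.
struct Vec3 { float x, y, z; };

// Stand-in for sampling a 16-bit/float displacement map at (u, v).
float sampleDisplacement(float u, float v) {
    return 0.25f * u + 0.1f * v;   // arbitrary values for the example
}

Vec3 displace(const Vec3& position, const Vec3& normal, float u, float v, float scale) {
    float h = sampleDisplacement(u, v);
    return { position.x + normal.x * h * scale,
             position.y + normal.y * h * scale,
             position.z + normal.z * h * scale };
}

int main() {
    Vec3 p{0.0f, 0.0f, 0.0f};
    Vec3 n{0.0f, 1.0f, 0.0f};                 // unit normal
    Vec3 out = displace(p, n, 0.5f, 0.5f, 1.0f);
    std::printf("displaced: %.3f %.3f %.3f\n", out.x, out.y, out.z);
    // How sharp the result can be is limited by how densely the surface was tessellated
    // and by the displacement map's resolution -- hence the "mid-frequency detail" caveat above.
    return 0;
}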

What Halo: Reach needs is to replace their carefully crafted and highly optimized high-res models, textures and complex multi-layer shaders with fewer polygons, smaller textures and simpler shaders. Tessellation cannot help with any of that.

* like this: [animated image of a cube being subdivided and smoothed into a sphere - cube_movie_thumbnail.png]

I'm still not entirely sure about that...

http://www.youtube.com/watch?v=yRlfgwRDCew&fmt=22

From the Aliens vs. Predator DX11 tech highlights. They are using LOD tessellation to increase/decrease the number of polys on the alien depending on how far it is from the camera. Once it is enabled, you can watch as the number of polygons is reduced as the camera moves away from the alien model.

And as far as I'm aware, even the early tech demos of tessellation on R(V)6xx hardware had this sort of dynamic tessellation, which is done by the hardware.
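Roughly, that kind of distance-driven LOD boils down to something like this - all numbers invented, just to show the tessellation factor falling off with camera distance:

#include <algorithm>
#include <cstdio>

// Toy distance-based tessellation factor: more subdivision up close, less far away.
// All ranges and factors here are invented for illustration.
float tessellationFactor(float distanceToCamera) {
    const float nearDist  = 2.0f;    // at or inside this distance: maximum tessellation
    const float farDist   = 50.0f;   // at or beyond this distance: no extra tessellation
    const float maxFactor = 16.0f;
    const float minFactor = 1.0f;
    float t = (distanceToCamera - nearDist) / (farDist - nearDist);
    t = std::clamp(t, 0.0f, 1.0f);
    // Linear falloff; a real engine might instead target a constant screen-space edge length.
    return maxFactor + t * (minFactor - maxFactor);
}

int main() {
    const float distances[] = {1.0f, 10.0f, 30.0f, 100.0f};
    for (float d : distances)
        std::printf("distance %.0f -> tessellation factor %.1f\n", d, tessellationFactor(d));
    return 0;
}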

BTW assen, this isn't me saying that Reach will have tessellation. Just trying to figure this out. Probably something I'm just not getting in how Laa-Yosh is trying to explain it to me.

Regards,
SB
 
They are not removing any polygons from the base mesh; that's just how low-poly a general game character is nowadays. Probably 10-15K at first glance. They are just adding extra geometry using the tessellation scheme and then they're displacing the vertices based on a texture map.

And once again, this approach is absolutely not suitable for precise, sharp-edged and smooth hard-surface models like Spartan and Covenant armor and vehicles.

Can we please stop arguing about this?
 
I don't care if they still use a sub-720 framebuffer, but I really hope they add some AA and AF. Anyone heard more about this (other than the preliminary comment that it's improved)?
I think this is all that's been mentioned about AA in Reach, from the weekly update about the trailer:

"rest assured that Reach will be significantly improved in this department compared to Halo 3."
 
They are not removing any polygons from the base mesh; that's just how low-poly a general game character is nowadays. Probably 10-15K at first glance. They are just adding extra geometry using the tessellation scheme and then they're displacing the vertices based on a texture map.

And once again, this approach is absolutely not suitable for precise, sharp-edged and smooth hard-surface models like Spartan and Covenant armor and vehicles.

Can we please stop arguing about this?

Furthermore, they don't actually seem to be displacing all geometry (characters, for example) - smoothed, yes, but not displaced. Judging by the base meshes, they don't look like they'd displace very well anyway - no edge refinement. If the edges aren't reinforced (built up), they go to mush when the tessellation kicks in (hence Laa-Yosh saying a tessellated cube = a sphere). Yeah, while tessellation can be a boon for realtime, you still need more geometry to start with, and that means more memory, more bandwidth, etcetera.
 
Well, even Halo CE had sparks that could bounce. ;) Until we actually see things in motion, it's hard to say much.
 