The Game Technology discussion thread *Read first post before posting*

Pretty sure that Lost Coast came before the DoD update with HDR. The LC commentary made it sound like LC was used to test and showcase the technology.
 
Did any of those first implementations use proper linear-space lighting and such? I kinda doubt it...
 
DoD:S released a month before LC. It had HDR from the start, but Lost Coast has a better implementation.

Ah, right. I guess a bigger deal was made of LC in the run-up to its release, probably because of its Half-Life setting. DoD was a pretty cool game.
 
In summary, is everyone in agreement that it's extremely curious that Valve, with a high-profile title like Portal 2, is unable to provide at the very least visual parity on a title that seemingly does very little to push the performance envelope relative to its contemporaries?

Do you have any idea what those portals cost in the rendering pipeline?
 
Well, with my 3D artist background I'd say just instance the hell out of everything, but it probably wouldn't work with actually moving through them... or maybe it could, can't tell.
 
hm... You're dynamically placing camera views, so I suppose just render to texture/off-screen buffers. The hard part, I think, is getting things to appear to travel through it seamlessly... dealing with clipping the model geometry at a surface and then seeing it/projecting it on the surface, if that makes sense? That would also imply rendering to 2 or 4 more full-res targets (two complete doorways in co-op). Since it's dynamic, it'd probably have to be done in extra render passes?
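For illustration, the off-screen-buffer version of that idea looks roughly like this in plain OpenGL (a sketch with made-up names; not how Source actually does it):

```cpp
#include <GL/glew.h>

// One full-res colour + depth target per visible portal.
struct PortalTarget { GLuint fbo, color, depth; };

PortalTarget CreatePortalTarget(int w, int h) {
    PortalTarget t{};
    glGenFramebuffers(1, &t.fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, t.fbo);

    // Colour texture the portal view is rendered into; the portal
    // surface is then drawn textured with it.
    glGenTextures(1, &t.color);
    glBindTexture(GL_TEXTURE_2D, t.color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, t.color, 0);

    // Depth buffer so the scene seen through the portal sorts correctly.
    glGenRenderbuffers(1, &t.depth);
    glBindRenderbuffer(GL_RENDERBUFFER, t.depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, t.depth);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return t;  // caller binds t.fbo, renders the virtual camera's view,
               // then samples t.color when drawing the portal quad
}
```

One such full-res colour target per visible portal is exactly where the memory cost comes from, which (per the Valve quote further down) is part of why they abandoned this approach.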

*shrug*
 
Couldn't instancing work here? Basically, create a copy of the scene and place it at the other side of the portal, but maintain a relationship that lets you update the copy in sync with the original (and vice versa). Instancing in offline 3D apps basically means applying multiple transform matrices to the same object, so if you manipulate one instance there's just an additional reverse transformation.

It might seem like a backwards approach: instead of looking at the same geometry from different POVs, you use multiple pieces of geometry. But it could simplify a lot of things, like actually moving through the portal. Even if you open both portals you only need to handle three instances of everything, and the levels are fairly simple...

Then again, in co-op this means up to four portals; with split-screen rendering that's 5 instances and two POVs...?
Would be nice to hear an engine programmer's opinion...
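For what it's worth, in renderer terms the instancing idea boils down to drawing the same geometry again with a portal-to-portal transform prepended, with no data duplication. A toy GLM sketch (all names hypothetical, not any engine's actual API):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Transform that maps space around portal A to space around portal B
// (the two portals face each other, hence the 180-degree turn).
glm::mat4 PortalToPortal(const glm::mat4& a, const glm::mat4& b) {
    glm::mat4 flip = glm::rotate(glm::mat4(1.0f), glm::radians(180.0f),
                                 glm::vec3(0, 1, 0));
    return b * flip * glm::inverse(a);
}

// Hypothetical scene-draw entry point: draws every object once, with
// the given extra transform prepended to its model matrix.
void DrawScene(const glm::mat4& extraTransform);

void DrawWorldWithPortalInstance(const glm::mat4& portalA,
                                 const glm::mat4& portalB) {
    DrawScene(glm::mat4(1.0f));                   // the "real" world
    DrawScene(PortalToPortal(portalA, portalB));  // the instanced copy
    // With both portals open you'd add the inverse transform as well,
    // giving the three instances the post above talks about.
}
```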
 
This is what they say about portal rendering in the first game:

For the first few months of development, we rendered the views through portals to two offscreen textures. This approach was easy to implement and was compatible with a wide range of hardware. Unfortunately, this method was incompatible with antialiasing and consumed a large amount of video memory to handle recursive views through several portals. Because of these disadvantages, we switched to a system which renders portal views recursively into the frame buffer with the aid of the stencil buffer to isolate pixels corresponding to a given portal. This is a more effective scheme because it is compatible with antialiasing and does not consume any additional video memory through offscreen textures.
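Mechanically, that stencil approach looks something like this per portal (an OpenGL sketch under my own assumptions, not Valve's actual code):

```cpp
#include <GL/glew.h>

// Hypothetical helpers, not real API:
void DrawPortalQuad();              // draws just the portal opening
void DrawPortalQuadAtFarPlane();    // same quad, pushed to the far plane
void DrawSceneFromVirtualCamera();  // scene as seen from the exit portal

// Assumes the main scene has already been drawn, so depth testing
// confines the stencil mark to the visible part of the portal.
void RenderPortalView() {
    // 1) Mark the portal's pixels in the stencil buffer, writing
    //    neither colour nor depth.
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    DrawPortalQuad();

    // 2) Confine all further drawing to the marked pixels.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

    //    Push depth to the far plane inside the portal region so the
    //    through-the-portal scene isn't rejected by old depth values.
    glDepthFunc(GL_ALWAYS);
    DrawPortalQuadAtFarPlane();
    glDepthFunc(GL_LESS);

    // 3) Render the view through the portal; it goes straight into the
    //    main frame buffer, which is why MSAA keeps working and no extra
    //    video memory is consumed.
    DrawSceneFromVirtualCamera();
    glDisable(GL_STENCIL_TEST);
    // Recursive portal-in-portal views repeat this with higher stencil
    // reference values (restoring the portal's depth afterwards is
    // glossed over here).
}
```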

And, on the virtual camera:

When rendering the player's view through a portal, we must render a separate image using a virtual camera which looks out of the opposite portal. To obtain a correct image and efficient rendering performance, we render only what is visible through the limited field of view of the opposite portal and exclude objects which lie between the virtual camera and the plane of the opposite portal.

They also mention that for a short radius around the portal the physics model is less accurate, to make transitions easier to handle. They say generating a portal costs around 10ms (they don't say on what hardware, though), and earlier versions took about 500ms.

(From the in-game commentary).
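For reference, the "virtual camera which looks out of the opposite portal" transform usually works out to something like this (a GLM sketch under my own assumptions; Source's actual math may differ):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// View matrix for the virtual camera: take the player's view matrix
// and map it through the entry portal into the exit portal's space.
// `view` is the player's view matrix; `entry`/`exitPortal` are the
// portals' world transforms (all names are mine, for illustration).
glm::mat4 PortalViewMatrix(const glm::mat4& view,
                           const glm::mat4& entry,
                           const glm::mat4& exitPortal) {
    // Portals face each other, so flip 180 degrees about the up axis.
    glm::mat4 flip = glm::rotate(glm::mat4(1.0f), glm::radians(180.0f),
                                 glm::vec3(0, 1, 0));
    return view * entry * flip * glm::inverse(exitPortal);
    // On top of this you'd clip to the exit portal's plane (e.g. an
    // oblique near plane) so that geometry between the virtual camera
    // and the portal is excluded, as the quote describes.
}
```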
 
Well, thanks for the info; seems like that's the more efficient/practical approach. Would be fun to see an instanced world though ;)
 
Honestly, L4D/L4D2 has the same crappy IQ, and probably the same Source engine implementation, as Portal 2 on the 360.

You get heavily aliased (and lower-resolution?) edges when things are hit with some kind of light source. Play around with the flashlight in the game; it almost reminds me of when they do half-res alphas on PS3.

http://xboxlivemedia.ign.com/xboxli...sacrifice-left-4-dead-2-20101007044429045.jpg

It's more like they put effort into just getting it running on the PS3, the same way they put effort into getting it running on the 360 with their first 360 game.

Well, that's not true. The Orange Box came out on the PS3, and there's been a significant improvement in IQ and performance since then; heavy optimization and even recruiting went into the PS3 version.
 
Again, this was not Valve.
That doesn't mean they didn't use the existing code for Portal 2. Seems reasonable to think they would use what they had already and would make improvements on it. Hell, the Orange Box patch for the PS3 made a few stability improvements, if I remember correctly.
 
I really doubt they would have used another dev's code, which was probably a mess. Even after the patch there was still a short delay before every explosion, frame drops everywhere, and the potential to fall into a slideshow in some places that only a restart could fix.

I think it would have been much easier to port the latest build of the PC version to PS3 than to worry about fixing what EA did wrong, fixing the huge number of bugs, and then adding the engine improvements that have been made since then.
 
There's supposedly a patch to resolve some of the issues plaguing the Xbox version. Also, it's not really a fair comparison either: the PS3 version is installed, so it's streaming off the HDD; the Xbox version with the optional install will most likely have similar streaming performance.
The difference between disc and HDD seek times is a huge factor for a virtual texturing system. Each page load (or page tile load) needs a seek, and disc seeks can be up to 100ms each, while HDD seeks can be up to 20ms (five times faster). Since each page is rather small, the seek time dominates the latency (the delay between seeing a surface and loading the texture detail it needs).
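In numbers (a trivial sketch using the worst-case figures above, which are assumptions, not measurements):

```cpp
#include <cstdio>

int main() {
    // Worst-case seek times from the post above.
    const double discSeekSeconds = 0.100;  // optical disc, up to 100 ms
    const double hddSeekSeconds  = 0.020;  // HDD, up to 20 ms

    // If seek time dominates, this caps how many texture pages
    // can be fetched per second.
    std::printf("disc: ~%.0f page loads/s max\n", 1.0 / discSeekSeconds);
    std::printf("hdd:  ~%.0f page loads/s max\n", 1.0 / hddSeekSeconds);
    // => roughly 10/s from disc vs 50/s from HDD, which is why an
    //    install matters so much for virtual texturing.
    return 0;
}
```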

Honestly though, MS should allow developers to detect an HDD and suggest an install, especially for heavy-streaming games like this.
XBLA games (larger than 512MB) can require an HDD, but retail games cannot. That's kind of a bummer for fine-grained streaming technologies such as virtual texturing. It doesn't seem so bad, however. There's some minor texture detail popping when you look really closely at detailed surfaces (in the Brink video), but nothing major. id Software's virtual texturing system seems to be working really well. There should be even fewer texture issues in Rage, since it runs at 60 fps (assuming they do page ID rendering at full rate). It already looks better than Crysis 2's streaming (in Brink).
 
There's some minor texture detail popping when you look really closely at detailed surfaces (in the Brink video), but nothing major.

Have you actually played the game on the 360? The popping and streaming is horrendous. You won't even see a top-level mip unless you've stared at a surface for a few seconds.
 
Have you actually played the game on the 360? The popping and streaming is horrendous. You won't even see a top-level mip unless you've stared at a surface for a few seconds.

Are you installed on the HDD?

Also, a day-one patch for the texture pop-in is available.
 
hm...

Q: We've talked in brief about the deferred rendering in SHIFT 2. Can you go into more depth on this? Bearing in mind the 4x MSAA we see on Xbox 360, is it safe to assume you're working with the light pre-pass approach?

Tom Nettleship:
We use a three phase light pre-pass approach for our deferred renderer. The main reason we chose this as opposed to a two-phase approach was because being a racing game, we needed to focus on high quality rendering of diverse material types, ranging from rubber and cloth all the way up to paintwork, glass and carbon fibre. While a BRDF-lookup texture approach allows for decent material variety in a two-pass deferred renderer, it doesn't cut it when you need to implement a totally different lighting model (carbon fibre, brushed metals).

Also, our most important visual components are the cars, which need high quality environment mapping. We tried every encoding possible for the normals channel of the G-buffer before deciding that it wasn't possible to get an acceptable quality level for bodywork reflections. So, we dropped back to the cheapest normal encoding (888 view space XYZ, which helped a lot with PS3 performance) and use those for general lighting. For bodywork reflections we re-evaluate the normal mapping in the third phase of the light prepass render. It's more expensive, but the quality you get makes it worthwhile.
http://www.gamesindustry.biz/articles/digitalfoundry-under-the-bonnect-shift-2-part-2?page=2 (gonna need a login for the rest)
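The "888 view space XYZ" encoding they fell back to is about as simple as normal storage gets; something like this (shader-side logic written as a plain C++/GLM sketch for illustration):

```cpp
#include <glm/glm.hpp>

// Cheapest G-buffer normal encoding: view-space XYZ remapped from
// [-1,1] into an RGB8 target's [0,1] range.
glm::vec3 EncodeNormal(const glm::vec3& viewSpaceN) {
    return viewSpaceN * 0.5f + 0.5f;
}

glm::vec3 DecodeNormal(const glm::vec3& rgb) {
    // Re-normalize to undo the 8-bit quantization error; that error is
    // exactly what made mirror-like bodywork reflections hard, per the
    // interview, hence re-evaluating normals in the third phase.
    return glm::normalize(rgb * 2.0f - 1.0f);
}
```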

Seems kind of nuts using 4xMSAA with LPP, but I suppose Blur did it too. I wonder what their g-buffer looks like at each phase though. :s
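For anyone unfamiliar, the phases of a light pre-pass renderer break down roughly like this, which partly answers the g-buffer question: the first phase's buffer is just depth plus encoded normals. A hand-wavy C++ outline with hypothetical names, not Shift 2's actual code:

```cpp
// Rough shape of a three-phase light pre-pass frame.
// All functions are hypothetical placeholders.
void RenderNormalsDepth();   // phase 1: thin G-buffer
void AccumulateLighting();   // phase 2: light buffer
void RenderMaterials();      // phase 3: full material shading

void RenderFrame() {
    // Phase 1: lay down depth plus encoded normals (the 888
    // view-space XYZ from the interview) into a small G-buffer.
    RenderNormalsDepth();

    // Phase 2: for each light, read normals/depth and accumulate
    // diffuse/specular terms into a light buffer; no materials yet.
    AccumulateLighting();

    // Phase 3: re-render the geometry with real material shaders that
    // read the light buffer; exotic materials (carbon fibre, car
    // paint) can use completely different lighting models and
    // re-evaluate their own normal maps here, which is the quality
    // step the interview describes.
    RenderMaterials();
}
```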

Q: You opted for MLAA on PlayStation 3 - is this the standard code supplied by SCEE's ATG as part of the PlayStation Edge tools? How easy was it to integrate into your engine?

Tim Mann: It's a slightly modified version of the pre-release Edge code. It's very easy to integrate in the pipeline and is placed between the HDR tone-mapping phase and motion blur. It can be quite fussy tuning-wise but we managed to find a good balance between too much edge detection - which produces too much blurring - and not enough, which leaves you with jagged edges.

edit: http://www.eurogamer.net/articles/digitalfoundry-the-making-of-shift-2?page=2

Ah right, forgot about this.

[attached image: 3_deferred.jpg]
 