DICE's Frostbite 2

Tap In

DICE GDC TALKS 2011
Mon Feb 28 1:45 pm DX11 Rendering in Battlefield 3
Johan Andersson

Wed Mar 02 10:30 am SPU-based Deferred Shading in Battlefield 3 for PlayStation 3
Christina Coffin

Wed Mar 02 3:00 pm Culling the Battlefield: Data Oriented Design in Practice
Daniel Collin

Thu Mar 03 1:30 pm Lighting You Up in Battlefield 3
Kenny Magnusson

Fri Mar 04 4:05 pm Approximating Translucency for a Fast, Cheap & Convincing Subsurface Scattering Look
Colin Barré-Brisebois
 
The destruction looks great!

Enlighten looks great, but it isn't clear to me how they are making it work with destructible environments.
 
DICE sound design in BF3 video

Also, an update to the GDC info:

Tuesday, March 1

* EA offsite event – This is the main GDC games event. EA will be showing Battlefield 3, Alice: Madness Returns, Shadows of the Damned, Kingdoms of Amalur: Reckoning, Crysis 2, Battlefield: Play4Free, Gatling Gears, The Fancy Pants Adventures and more. The main reveal will be Battlefield 3, coverage of which is embargoed until Wednesday, March 2 at 6.00am PST (that’s 9.00am EST, 2.00pm GMT and 3.00pm CET), with other editorial being released in a staggered fashion over the following week.
 
SRAA

We introduce a new technique for subpixel reconstruction antialiasing (SRAA). The core idea is to extend the success of MLAA-style postprocessing with enough input to accurately reconstruct subpixel geometric edges. SRAA operates as a postprocess that combines a G-buffer sampling strategy with an image reconstruction strategy. The key part is to sample the shading at close to screen resolution while sampling geometry at subpixel precision, and then estimate a superresolution image using a reconstruction filter. That superresolution image is then filtered into an antialiased screen-resolution image. In practice, the reconstruction and downsampling occur simultaneously in a single reconstruction pass. Our results demonstrate that SRAA can approximate the quality of supersampling using many fewer shading operations, yielding a net 4-16x speedup at minor quality degradation compared to shading at each subpixel sample.
http://enterbf3.com/images/samplesraa.jpg
http://enterbf3.com/images/sraa3.jpg
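 
For the curious, here is a rough CPU sketch of the reconstruction step described in the abstract, under some simplifying assumptions: geometry (depth + normal) is sampled at 2x2 subpixel resolution, shading at screen resolution, and each subpixel's colour is reconstructed from the 3x3 neighbourhood of shaded samples weighted by geometric similarity before the box filter down to one pixel. The buffer layouts, similarity metric and falloff constants are all made up for illustration; this is not the paper's actual implementation.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Geom  { float depth, nx, ny, nz; };   // one G-buffer geometry sample
    struct Color { float r, g, b; };

    // Bilateral-style weight: large depth or normal differences -> small weight.
    static float geomWeight(const Geom& a, const Geom& b)
    {
        float dz = a.depth - b.depth;
        float dn = 1.0f - (a.nx * b.nx + a.ny * b.ny + a.nz * b.nz);
        return std::exp(-30.0f * dz * dz - 4.0f * dn);   // arbitrary falloff constants
    }

    // shade  : width x height shaded colours (one per screen pixel)
    // geomLo : geometry at the shaded (screen-resolution) sample positions
    // geomHi : geometry at 2x2 subpixel resolution (2*width x 2*height)
    std::vector<Color> sraaResolve(const std::vector<Color>& shade,
                                   const std::vector<Geom>&  geomLo,
                                   const std::vector<Geom>&  geomHi,
                                   int width, int height)
    {
        std::vector<Color> out(width * height);
        for (int y = 0; y < height; ++y)
        for (int x = 0; x < width;  ++x)
        {
            Color pixel{0.0f, 0.0f, 0.0f};
            for (int sy = 0; sy < 2; ++sy)           // reconstruct 4 subpixels,
            for (int sx = 0; sx < 2; ++sx)           // then box-filter them down
            {
                const Geom& g = geomHi[(2 * y + sy) * 2 * width + (2 * x + sx)];
                Color c{0.0f, 0.0f, 0.0f};
                float wsum = 0.0f;
                for (int oy = -1; oy <= 1; ++oy)     // 3x3 shaded neighbourhood
                for (int ox = -1; ox <= 1; ++ox)
                {
                    int cx = std::clamp(x + ox, 0, width  - 1);
                    int cy = std::clamp(y + oy, 0, height - 1);
                    float w = geomWeight(g, geomLo[cy * width + cx]);
                    const Color& s = shade[cy * width + cx];
                    c.r += w * s.r; c.g += w * s.g; c.b += w * s.b;
                    wsum += w;
                }
                pixel.r += c.r / wsum;               // wsum > 0 since exp() > 0
                pixel.g += c.g / wsum;
                pixel.b += c.b / wsum;
            }
            out[y * width + x] = { pixel.r * 0.25f, pixel.g * 0.25f, pixel.b * 0.25f };
        }
        return out;
    }

If I'm reading the abstract right, the point is that only the geometry is sampled at subpixel rate while shading stays at roughly one sample per pixel, which is where the 4-16x saving over supersampling comes from.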
 
I wanted to see the destruction. I imagine this will be a long slow tease. Stupid, repi.

Everything about the game looks fantastic so far. We'll see how good the SP really is when more is revealed, but I really want to start seeing the MP stuff.
 
Waiting right now for Christina Coffin's talk on SPU-based deferred shading in Battlefield 3. I hear they use DX11 compute shaders heavily on the PC side, so it'll be interesting to see how they leverage the SPUs.

* Previous DICE games were forward rendered, with just a few lights. Mirror's Edge, which they SHOULD be making a sequel to, used pre-rendered radiosity lighting. The goal for Battlefield 3 was to really expand the lighting and materials, so DICE picked deferred.
* Their PS3 implementation uses 5-6 SPUs in parallel with the GPU and CPU.
* The GPU does the initial G-buffer fill, then passes it off to the SPUs for shading. Still waiting to see what that means. Are they using SPUs as fragment shaders?
* The framebuffer is sliced into 64x64 tiles; sync is just a simple shared incrementing counter that hands the tiles out to the SPUs (see the sketch after the write-up link below).
* SPUs compute 16 pixels at a time, in SoA format at full float precision
* The GPU is still busy on other shading while the SPUs eat through the deferred shading
* About 8 ms of wall-clock time on 5 SPUs for the deferred shading while the GPU does other work, i.e. up to 40 ms of total compute time contributed by the SPUs per frame.
* Light culling is done with a two-stage tile hierarchy, after whole-camera frustum culling and coarse Z tests on the light volumes.
* A branch is used to skip all 16 pixels if the attenuation makes them unlit. This is a net win despite the branching cost.
* The SPU code is very even-pipe heavy due to all of the FPU work, so complex functions can be replaced with look-up tables built from strictly odd-pipe instructions, cutting some functions from 21 cycles down to 4.
* There is still a pure GPU implementation of the shading pipeline that maintains visual parity, in order to facilitate debugging and validation.
* SPU job code can be reloaded into the live game, enabling bug fixes with quick iteration.

http://www.gamedev.net/blog/984/entry-2249573-spu-based-deferred-shading-in-battlefield-3/
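 
To make a couple of those notes concrete (the shared-counter tile hand-out and the 16-pixel early-out on attenuation), here is a rough CPU sketch with std::thread workers standing in for SPU jobs. The data layout, light model and constants are assumptions for illustration, not Frostbite code.

    #include <atomic>
    #include <cmath>
    #include <cstdio>
    #include <thread>
    #include <vector>

    constexpr int kTile    = 64;                 // 64x64 pixel tiles, as in the talk
    constexpr int kTilesX  = 20, kTilesY = 11;   // ~1280x704 framebuffer
    constexpr int kWorkers = 5;                  // stand-in for the 5-6 SPUs

    struct Light { float x, y, z, radius, r, g, b; };

    std::vector<float> g_px, g_py, g_pz;   // per-pixel world position (SoA)
    std::vector<float> g_r, g_g, g_b;      // per-pixel accumulated lighting (SoA)
    std::vector<Light> g_lights;
    std::atomic<int>   g_nextTile{0};      // the shared incrementing counter

    // Shade one 16-pixel block against a list of lights (tile-level light
    // culling is omitted here to keep the sketch short).
    void shadeBlock(int base)
    {
        for (const Light& light : g_lights)
        {
            float att[16];
            float maxAtt = 0.0f;
            for (int i = 0; i < 16; ++i)           // SoA: 16 attenuations at once
            {
                float dx = g_px[base + i] - light.x;
                float dy = g_py[base + i] - light.y;
                float dz = g_pz[base + i] - light.z;
                float d  = std::sqrt(dx * dx + dy * dy + dz * dz);
                att[i]   = std::fmax(0.0f, 1.0f - d / light.radius);
                maxAtt   = std::fmax(maxAtt, att[i]);
            }
            if (maxAtt <= 0.0f)                    // the branch from the notes:
                continue;                          // whole block unlit, skip all 16
            for (int i = 0; i < 16; ++i)
            {
                g_r[base + i] += att[i] * light.r;
                g_g[base + i] += att[i] * light.g;
                g_b[base + i] += att[i] * light.b;
            }
        }
    }

    void workerMain()
    {
        const int pixelsPerTile = kTile * kTile;
        for (;;)
        {
            // Grab the next unprocessed tile; the atomic counter is the only sync.
            int tile = g_nextTile.fetch_add(1, std::memory_order_relaxed);
            if (tile >= kTilesX * kTilesY)
                return;
            int first = tile * pixelsPerTile;      // tiles stored contiguously here
            for (int block = 0; block < pixelsPerTile; block += 16)
                shadeBlock(first + block);
        }
    }

    int main()
    {
        const int pixels = kTilesX * kTilesY * kTile * kTile;
        g_px.assign(pixels, 0.0f); g_py.assign(pixels, 0.0f); g_pz.assign(pixels, 0.0f);
        g_r.assign(pixels, 0.0f);  g_g.assign(pixels, 0.0f);  g_b.assign(pixels, 0.0f);
        g_lights.push_back({0.0f, 0.0f, 0.0f, 10.0f, 1.0f, 1.0f, 1.0f});

        std::vector<std::thread> workers;
        for (int i = 0; i < kWorkers; ++i)
            workers.emplace_back(workerMain);
        for (auto& w : workers)
            w.join();
        std::printf("shaded %d tiles on %d workers\n", kTilesX * kTilesY, kWorkers);
    }

The relaxed fetch_add is enough here because each tile covers a disjoint pixel range and the only ordering that matters is the join at the end of the frame.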
 
Yeah I was at that talk today, it was pretty good. Much more practical than novel. It's nice that they spent enough time to adapt their renderer to the hardware in such a major way. Although they do spend a lot of time rendering out their fat (4x) G-buffer and copying it all from local memory to system memory, since they do regular deferred rendering and not deferred lighting like Uncharted or Blur.
 
^ Wow that sounds really impressive. I wonder how the 360 will keep up doing most of that on the GPU alone.

She mentioned that on the 360 they have the CPU do some of the setup + culling work that the SPUs handle on PS3, although she was a bit vague on the specifics. Plus the 360's GPU has decent throughput, so they shouldn't have too much of a problem keeping the two versions comparable.
 
So that's about 1.2 SPUs' worth of time per frame at 30 fps, and deferred shading really isn't compatible with any sort of hardware multisample AA on DX9-class hardware. I wonder if they still have budget for post-process AA on the consoles.
 
Did they mention doing anything like this with the CPU on the PC side? Or is everything going through the GPU?
 
Presumably they are doing the same on PC via Compute Shaders.
She did hint at a Siggraph presentation to cover the X360 approach.

Overall great presentation. Many clever/practical optimizations combined together to achieve their goals.
 
In their earlier talks about compute shader-based deferred rendering, they were doing the culling, setup, and actual lighting all on the GPU. I'd imagine that this is still the case.

However they do their software rasterizer-based occlusion culling on the CPU for all platforms.
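 
For anyone unfamiliar with that technique, software occlusion culling of this flavour looks roughly like the sketch below: a handful of big occluder triangles are rasterized into a small CPU-side depth buffer, and each object's screen-space bounding rectangle is then tested conservatively against it. The buffer size, the plain barycentric rasterizer and the rect test are illustrative assumptions, not DICE's actual culling code (the "Culling the Battlefield" talk covers their real approach).

    #include <algorithm>
    #include <cfloat>
    #include <cmath>
    #include <vector>

    constexpr int W = 256, H = 114;          // low-resolution CPU depth buffer

    struct Vec3 { float x, y, z; };          // already projected: x in [0,W),
                                             // y in [0,H), z = depth (0 = near)

    // Rasterize one occluder triangle into the depth buffer (nearest depth wins).
    void rasterizeOccluder(std::vector<float>& depth, Vec3 a, Vec3 b, Vec3 c)
    {
        int minX = std::max(0,     (int)std::floor(std::min({a.x, b.x, c.x})));
        int maxX = std::min(W - 1, (int)std::ceil (std::max({a.x, b.x, c.x})));
        int minY = std::max(0,     (int)std::floor(std::min({a.y, b.y, c.y})));
        int maxY = std::min(H - 1, (int)std::ceil (std::max({a.y, b.y, c.y})));

        auto edge = [](Vec3 p, Vec3 q, float x, float y) {
            return (q.x - p.x) * (y - p.y) - (q.y - p.y) * (x - p.x);
        };
        float area = edge(a, b, c.x, c.y);
        if (std::fabs(area) < 1e-6f) return;            // degenerate triangle

        for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
        {
            float px = x + 0.5f, py = y + 0.5f;
            float w0 = edge(b, c, px, py) / area;       // barycentric weights
            float w1 = edge(c, a, px, py) / area;
            float w2 = edge(a, b, px, py) / area;
            if (w0 < 0.0f || w1 < 0.0f || w2 < 0.0f) continue;   // outside
            float z = w0 * a.z + w1 * b.z + w2 * c.z;
            depth[y * W + x] = std::min(depth[y * W + x], z);
        }
    }

    // Conservative test: the object is culled only if its nearest possible depth
    // is behind the stored occluder depth for every texel its screen rect covers.
    bool isOccluded(const std::vector<float>& depth,
                    int minX, int minY, int maxX, int maxY, float nearestZ)
    {
        minX = std::max(minX, 0);     minY = std::max(minY, 0);
        maxX = std::min(maxX, W - 1); maxY = std::min(maxY, H - 1);
        for (int y = minY; y <= maxY; ++y)
            for (int x = minX; x <= maxX; ++x)
                if (nearestZ < depth[y * W + x])
                    return false;            // some texel might see the object
        return true;
    }

    int main()
    {
        std::vector<float> depth(W * H, FLT_MAX);   // start with "far" everywhere
        rasterizeOccluder(depth, {10.0f, 10.0f, 0.2f},
                                 {200.0f, 12.0f, 0.2f},
                                 {100.0f, 100.0f, 0.2f});
        // A box whose screen rect lies behind the occluder triangle is culled.
        return isOccluded(depth, 65, 30, 90, 55, 0.5f) ? 0 : 1;
    }

The point of the low resolution is that only a handful of large occluder triangles need to be rasterized, which keeps the whole thing cheap enough to run as a per-frame CPU/SPU job.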
 
damn... video pulled :cry:

rapid share still up


 