DICE's Frostbite 2

I've almost spilled my drink upon seeing the first terrain screenshot ;)
Very impressive, especially considering how much trouble we have building any kind of terrain, we usually use per-shot matte paintings instead because it's so complicated to get good results...

Beautiful terrain that you can blow up! Destructible heightfields. I wonder exactly how deep a hole you can dig? Is this where the "procedural virtual texture" comes in? Obviously if you deformed the terrain without changing the terrain texture, you'd get weird stretching that would make the terrain look kind of ugly, or am I wrong?
 
The "Lighting you up" talk is still missing, but nevertheless... I want an RPG on this engine, pronto. Landscapes, characters, dungeons with torch lights, you get the idea.

This is really advanced and amazing tech; the fact that we still haven't learned everything (materials? virtual texturing? destruction engine?) after all these presentations speaks volumes about the effort and ability put into the engine, and it's somewhat scary to see how quickly the industry is adopting offline CG tech these days. Stuff like energy-conserving shaders is pretty new: ILM only started using it on Iron Man 1, and now we're getting it in games released this year!
Of course it's also interesting how quickly offline is adopting raytracers while increasing the detail and fidelity of the content; close as they might look at first glance, there's still a very wide gap between games and movie VFX, or even game cinematics.
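For readers wondering what "energy conserving" buys you in practice: a common trick is to scale the specular lobe so that a tighter highlight gets proportionally brighter, keeping the total reflected energy roughly constant. Here's a minimal Python sketch of the widely used (n+8)/(8π) Blinn-Phong normalization; the function name and signature are mine, not from any particular engine:

```python
import math

def blinn_phong_specular(n_dot_h, spec_power, spec_color=1.0):
    """Energy-conserving Blinn-Phong: the (n+8)/(8*pi) factor makes the
    highlight brighter as spec_power narrows it, so the total energy
    reflected over the hemisphere stays approximately constant."""
    norm = (spec_power + 8.0) / (8.0 * math.pi)
    return spec_color * norm * (n_dot_h ** spec_power)
```

Without the normalization, raising the specular power would simply dim the highlight, which is why artists used to hand-tweak specular intensity per material.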

It's also interesting how much convergence there is among engines like UE3, CE3 and FB2; I wonder if it'll be enough to get Carmack to look into new tech. They're really falling behind, and it's increasingly impossible for a single man to cover every aspect of rendering technology.
 
Yeah, terrain destruction is interesting, you'll also have to pay attention to composition (dirt vs rock), maybe they're reprojecting texture coordinates on the fly.
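One plausible way to do that on-the-fly reprojection is triplanar mapping: instead of stored UVs, the texture is projected along all three axes and blended by the surface normal, so a freshly exposed crater wall picks up a side projection rather than a stretched top-down one. A toy Python sketch of the idea (nothing here is from Frostbite itself):

```python
import math

def triplanar_weights(normal, sharpness=4.0):
    """Blend weights for the X/Y/Z planar projections. Steep surfaces
    (normal pointing sideways) take their texture from a side
    projection instead of the stretched top-down one."""
    ax = [abs(c) ** sharpness for c in normal]
    total = sum(ax)
    return [a / total for a in ax]

def sample_triplanar(tex, pos, normal, scale=1.0):
    """Sample a procedural texture three times (YZ, XZ and XY planes)
    and blend by the weights above. No stored UVs means deforming the
    heightfield never stretches the mapping."""
    wx, wy, wz = triplanar_weights(normal)
    x, y, z = pos
    return (wx * tex(y * scale, z * scale) +
            wy * tex(x * scale, z * scale) +
            wz * tex(x * scale, y * scale))

def checker(u, v):
    """Toy procedural 'texture': a unit checker pattern."""
    return float((math.floor(u) + math.floor(v)) % 2)
```

On flat ground the normal is (0, 1, 0), so only the top-down XZ projection contributes; on a vertical crater wall the side projections take over smoothly.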
 
The goal is to roughly estimate how far we still are from some form of software rendering.

Very, very far. Just matching the image quality (texture filtering and AA) in real time is out of the question, and most studios use some form of raytracing to render their images by now, which involves far more computation per sample than what current GPUs are doing.
 
OK, I removed my post; the idea was proven wrong, and it was distracting the thread from the merits of what DICE achieved.
Let's talk about their great work and cheer them on :)
Laa-Yosh, thanks for your insights ;)
 
Repi's DX11 paper is now up at: http://publications.dice.se/

Awesome thanks!

*Shrug* We won't know until we look at what those 4 PS3 developers were doing. They may need to undo existing work and develop PS3-specific components from scratch. Sometimes they may also need to "live" with early decisions that may not make sense for PS3.

What? That would be 9 people who worked on the PS3 version of the engine, not 4, and that was from 4 years ago. My point is that your assumption that the PS3 versions received fewer resources than their PC/360 counterparts is questionable at best.

I feel sorry for developers who have had to deal with previously crap tools on a vastly different architecture, putting in serious effort, just for gamers to write them off as not trying or not keeping up with the standards.
 
Well, one of the slides explicitly mentions Frostbite 2 for future EA titles. Is it possible that one day I'll be playing a sports game running on the Frostbite engine?

Compute shaders: a 16x16 thread group for each tile? 1 thread per pixel? Do modern GPUs really have the threading performance to handle the overhead of such small granularity? I guess the answer is yes?

A pixel shader also runs 1 thread per pixel; the main difference between our use of a compute shader and a standard pixel shader is that we can share results & computations within the 16x16 thread group.
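The saving repi describes, sharing work across the 16x16 tile, is the heart of tiled deferred shading: the 256 threads of a tile cooperatively cull the light list once, then each pixel loops only over the survivors. A CPU-side Python sketch of that idea, with 2D circle lights standing in for the real screen-space/depth bounds test (illustrative only, not DICE's code):

```python
TILE = 16  # 16x16 pixels per thread group, 1 thread per pixel

def lights_for_tile(tile_min, tile_max, lights):
    """Done once per 256-pixel tile (cooperatively, in groupshared
    memory on the GPU) instead of once per pixel: keep only lights
    whose circle of influence overlaps the tile's rectangle."""
    visible = []
    for (cx, cy, radius) in lights:
        # clamp the light centre to the tile rectangle, then test distance
        nx = min(max(cx, tile_min[0]), tile_max[0])
        ny = min(max(cy, tile_min[1]), tile_max[1])
        if (cx - nx) ** 2 + (cy - ny) ** 2 <= radius ** 2:
            visible.append((cx, cy, radius))
    return visible

def shade_tile(tile_min, lights):
    """Each of the tile's 256 'threads' now loops over the short
    culled list instead of the full scene light list."""
    tile_max = (tile_min[0] + TILE, tile_min[1] + TILE)
    visible = lights_for_tile(tile_min, tile_max, lights)
    return {(tile_min[0] + x, tile_min[1] + y): len(visible)
            for y in range(TILE) for x in range(TILE)}
```

With hundreds of lights in a scene but only a handful touching any given tile, the per-pixel work drops dramatically, which is exactly what a pixel shader (no cross-pixel sharing) can't do.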
 
The idea is based more on information and people's experiences (so you qualify ;)

I'm not a game graphics programmer either; I've only worked in offline rendering. I do have a degree in computer science, studied microarchitectures, wrote x86 assembly code and such, so I have some grasp of this stuff, but all that was more than a decade ago.
 
I've almost spilled my drink upon seeing the first terrain screenshot ;)
Very impressive, especially considering how much trouble we have building any kind of terrain, we usually use per-shot matte paintings instead because it's so complicated to get good results...

Thanks! We have a _long_ history of working with large scale terrains in our games and it is definitely not easy; it's very difficult to scale things properly while keeping detail high and memory usage low.

We had a quite competent system already in Frostbite 1 with procedural shader splatting and dynamic heightfields that I talked about at SIGGRAPH'07 (http://www.slideshare.net/repii/ter...l-shader-splatting-presentation?from=ss_embed) that we have since then improved and revamped quite significantly for Frostbite 2.

Hopefully something we will be able to describe in more detail at SIGGRAPH'11 :)
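For a flavour of what "procedural shader splatting" means: material masks (grass/rock/snow) are computed from terrain attributes like slope and altitude rather than painted by hand. A toy Python version with made-up thresholds; the real system evaluates this per pixel in generated shader code, not on the CPU:

```python
def splat_weights(height, slope, snow_line=600.0, rock_slope=0.7):
    """Toy procedural splat masks: grass on flat low ground, rock on
    steep slopes, snow above the snow line. slope runs from 0 (flat)
    to 1 (vertical). Weights always sum to 1 so the blended material
    stays energy-sane."""
    rock = min(max((slope - rock_slope) / (1.0 - rock_slope), 0.0), 1.0)
    snow = (1.0 - rock) * (1.0 if height > snow_line else 0.0)
    grass = max(1.0 - rock - snow, 0.0)
    return {"grass": grass, "rock": rock, "snow": snow}
```

Because the masks come from the heightfield itself, deforming the terrain (a crater steepening a slope, say) automatically re-splats it to rock with no artist intervention.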
 

I would like to congratulate you guys once again... for your set of ~16 microwave ovens in the kitchen.
(have some game journalist friends who visited your studio recently)

J/K :) Very cool tech in BF3 and very pretty results! Hope to get some cool single player there!
 
What? That would be 9 people who worked on the PS3 version of the engine, not 4, and that was from 4 years ago. My point is that your assumption that the PS3 versions received fewer resources than their PC/360 counterparts is questionable at best.

I feel sorry for developers who have had to deal with previously crap tools on a vastly different architecture, putting in serious effort, just for gamers to write them off as not trying or not keeping up with the standards.

My point was you can't count the number of people without knowing what they do in any project. It's unclear whether all 9 worked on the PS3 or some continued to work on the PC/360 version exclusively. It is possible that they tried to build a common base first. It's also unclear what the 4 PS3 developers did… and whether they need to undo some work, especially in the early days.
 
WTF? You guys have Repi active on the thread and you're arguing about something you have no info/data on? It's best to drop it.

On topic, Repi: great work, guys. I'm looking at those slides and that terrain shot and I almost fell off my fucking chair :oops:

Is there any sneak peek we could get at what you guys are doing on 360 regarding deferred shading? :smile:
 
A pixel shader also runs 1 thread per pixel; the main difference between our use of a compute shader and a standard pixel shader is that we can share results & computations within the 16x16 thread group.

Well, that exposes my ignorance of GPUs. I suppose it's only logical if you have programmable shaders, that a pixel shader would be a thread.

I'm curious if you can give any insight into my question earlier in the thread: why was shading offloaded onto the SPUs rather than the kinds of post-process effects other games seem to be offloading? You hear a lot about MLAA, depth of field, motion blur etc. on SPUs.

Great looking game. Can't wait to play it.
 
WTF? You guys have Repi active on the thread and you're arguing about something you have no info/data on? It's best to drop it.

On topic, Repi: great work, guys. I'm looking at those slides and that terrain shot and I almost fell off my chair. Fucking impressive! :oops:

Is there any sneak peek we could get at what you guys are doing on 360 regarding deferred shading? :smile:

OK, about MLAA: is it something new, or a similar approach to Sony's method? And about the fully rasterized method presumed in KZ3: is that conjecture, or does it open the way to full transparencies on the PS3 via SPU? Thank you repi.
 
Well, that exposes my ignorance of GPUs. I suppose it's only logical if you have programmable shaders, that a pixel shader would be a thread.

I'm curious if you can give any insight into my question earlier in the thread: why was shading offloaded onto the SPUs rather than the kinds of post-process effects other games seem to be offloading? You hear a lot about MLAA, depth of field, motion blur etc. on SPUs.

Great looking game. Can't wait to play it.
Actually, maybe they do? I mean, SPUs do part of the job and send it to RSX, which does its part of the job while the SPUs are shading in the meantime.
 
Here's a quote about the development of the MT Framework:



So they start with 4, then 5 while working on the PC/360 version, then add an additional 4, making that a total of 9 people for the ps3 version versus 5 for the pc/360. And this can be found in this forum in the game developer presentation thread. This was back in 2007 and a good example to what I believe was and still is the norm.
You need to explain your math here. Your quote says 3 (then 5) for PC/360 vs 4 PS3 (which came later).

Also, Capcom released at least 2 successful 360 games before they released anything on PS3; they clearly spent much more time developing the 360 version.
Finally, rendering engines are not drag-and-drop replacements; other developers need to use them. It's not like all those Capcom games were developed by only 9 devs.

Hilarious...

Okay, how about... Bizarre Creations & Blur. *sigh*...

How about DICE, *sigh*...
 
Actually, maybe they do? I mean, SPUs do part of the job and send it to RSX, which does its part of the job while the SPUs are shading in the meantime.

The presentation says post processing is done on RSX after shading on SPUs which I also find interesting like Scott.
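That overlap is classic pipelining: while RSX post-processes frame N, the SPUs can already be shading frame N+1, so the steady-state cost per frame approaches the slower of the two stages rather than their sum. A toy Python model with invented stage timings (nothing here is measured from the engine):

```python
def serial_time(frames, spu_ms, rsx_ms):
    """No overlap: every frame pays both stages back to back."""
    return frames * (spu_ms + rsx_ms)

def pipelined_time(frames, spu_ms, rsx_ms):
    """SPU shading of frame N+1 overlaps RSX post of frame N, so the
    steady-state cost per frame is just the slower stage, plus a
    one-off pipeline-fill cost for the faster stage."""
    bottleneck = max(spu_ms, rsx_ms)
    fill = min(spu_ms, rsx_ms)
    return fill + frames * bottleneck
```

With, say, 10 ms of SPU shading and 6 ms of RSX post per frame, 100 frames cost 1600 ms serially but only about 1006 ms pipelined: the RSX work effectively becomes free as long as it's not the bottleneck.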
 
Don't feel too vindicated :) You'll always be able to do more with fully programmable hardware than with fixed function; I don't think anyone ever disputed that. But fully programmable means less performance per watt than fixed function, so it's not a silver bullet. The stuff you see taking many milliseconds on SPU could be done far faster with dedicated hardware, and use less power in the process.
Yes, but would that have been possible with the hardware available to PS3's designers? If PS3's silicon budget had gone on x86 and nVidia or ATi GPU, would that same transistor budget be able to match/exceed the Frostbite 2 results that PS3 is getting with Cell+RSX?

But as I say, that's an interesting discussion for another day. We are getting some nice multiplatform engines as a basis for comparing system designs, and with technological insights like these showing how the most eclectic of them, Cell+RSX, is being used, we'll be able to weigh the pros and cons and look forward to what future system designs might want to incorporate. This thread is specifically for talking about what DICE are doing on consoles.
 
The presentation says post processing is done on RSX after shading on SPUs which I also find interesting like Scott.
Oh, I missed it... then that makes 3 of us :smile:

I remember GG and ND mentioning they couldn't get the desired quality of post processing on RSX, so they moved it to SPUs.
 