Brink: Splash Damage's latest FPS (Thread features DeanoC as master of secrets)

Farid

Artist formerly known as Vysez
Veteran
Supporter
So, it seems that Splash Damage unveiled their latest project: Brink

Video (teaser) and artwork can be found here:

http://www.brinkthegame.com/

[gt]49784[/gt]

Not a lot of info about the game was revealed during the unveiling, other than the usual claim that it sets out to revolutionise the FPS genre, but we know one particularly interesting fact about it: Dean Calver (DeanoC to the B3D regulars) is the lead programmer on the project.

:devilish:
 
Wow.

I've been told by anonymous sources that this is the first game to break 300 billion triangles per second.


:mrgreen:
 
LOL we might not get quite that many triangles ;)

The teaser doesn't show a lot; the screenshots that are around give a better feel, being in-game shots.
 
So, how awesome is your game?

(Why is this labelled as a new trailer? I'm pretty sure I saw it last week.)
 
On the well-known chembro scale of awesomeness, it's roughly salmon dance awesome... :p

The teaser is a week old, which is fairly new but not that new...
 
So the first bit of tech that's worth talking about is our Virtual Texturing. Feel free to ask etc. :)

Instead of traditional streaming textures, we use a virtual texture system. This means we know which textures are visible and at what resolution, so we then load them off disk (if they aren't in memory).

This has the obvious benefit of allowing far more texture data in the level than we could keep in memory. The main cost is rendering a low-resolution view of the world and decoding on the CPU which textures and which mip map levels are needed. A threaded loader then loads those bits of textures if required.

It's similar, at least in theory, to what idTech 5 and Crysis 2 are doing, so it's not revolutionary, but it's very useful for getting really detailed levels on all 3 platforms :)
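The CPU-side decode step described above can be sketched as follows. This is purely illustrative (the function names, buffer layout, and page keys are assumptions, not Brink's actual code): the GPU has written a (texture_id, mip_level) pair per pixel of a low-resolution pass, and the CPU collects the unique pages touched and diffs them against what is already resident.

```python
# Hypothetical sketch of the CPU-side feedback decode for a virtual
# texture system. The GPU rendered a low-res pass writing a
# (texture_id, mip_level) pair per pixel; we collect the unique pages
# needed this frame and queue loads for any that are not resident.

def decode_feedback(feedback_buffer, resident_pages):
    """feedback_buffer: iterable of (texture_id, mip_level) pixels.
    resident_pages: set of (texture_id, mip_level) already in memory.
    Returns the set of pages a threaded loader should fetch from disk."""
    needed = set(feedback_buffer)      # unique pages touched this frame
    return needed - resident_pages     # only request what we don't have

# Usage: a tiny feedback buffer requesting two textures at three mips.
feedback = [(7, 0), (7, 0), (7, 1), (12, 2)]
resident = {(7, 0)}
to_load = decode_feedback(feedback, resident)  # {(7, 1), (12, 2)}
```

The real system would hand `to_load` to the background loader thread rather than loading synchronously, so a page that misses simply renders from a coarser resident mip for a frame or two.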
 
Virtual Texturing, huh? I think it's a wise investment of time. It would be superb if you could run a presentation or release some slides for others to follow some day! :p

Needless to say, I'll be following this closely. I still miss Heavenly Sword. :(

What's the second bit of tech information? :devilish:
 
New methods are always cool. :cool: How much of a bottleneck is loading textures on demand? The HDD wouldn't be fast enough to serve up textures at a peak per-frame rate. And if the textures aren't changing much, how much difference is there compared to just loading it all into RAM in the first place?!
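The tradeoff raised here can be illustrated with a toy page cache (my own sketch, not anything from Brink): if the visible working set is small and changes slowly, an LRU cache of texture pages behaves almost like having everything resident, while the total data on disk can be far larger than the cache.

```python
from collections import OrderedDict

# Toy LRU cache of texture pages: the resident working set stays
# bounded even though the level holds far more texture data than RAM.
class PageCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()          # page_key -> pixel data

    def get(self, key, load_from_disk):
        if key in self.pages:
            self.pages.move_to_end(key)     # fast path: mark recently used
            return self.pages[key]
        data = load_from_disk(key)          # slow path: hits the disk
        self.pages[key] = data
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict least recently used
        return data

cache = PageCache(capacity=2)
cache.get("a", str.upper)
cache.get("b", str.upper)
cache.get("a", str.upper)                   # refreshes "a"
cache.get("c", str.upper)                   # evicts "b", keeps "a"
```

As long as the camera moves smoothly, most frames touch pages already in the cache, so the disk only has to keep up with the rate at which *new* pages become visible, not the full per-frame texture bandwidth.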
 
Instead of traditional streaming textures, we use a virtual texture system. This means we know which textures are visible and at what resolution, so we then load them off disk (if they aren't in memory).

Do you guys do virtual texturing only on the color textures, or on all textures including normal maps, gloss maps, etc?
 
We do it on all the maps, with transparency being an interesting case we're not completely happy with.
 
Instead of traditional streaming textures, we use a virtual texture system. This means we know which textures are visible and at what resolution, so we then load them off disk (if they aren't in memory).

This has the obvious benefit of allowing much more textures in the level than we need to store in memory. The main cost is rendering a low resolution view of the world and decoding on the CPU which textures and which mip map levels are needed. A threaded loader then loads those bits of textures if required.
How do you decode on the CPU? Are you rendering texture IDs and mip levels to an RT and then collecting those on the CPU? But then you can miss some smaller textures not touched by the sub-resolution render...

What's your secret sauce? :D
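One common way to implement the "texture IDs and mip levels to an RT" idea is to pack both values into the channels of an ordinary 8-bit RGBA render target and unpack them on the CPU during the readback. This is purely illustrative of that general scheme; the bit widths here are assumptions, not Brink's actual encoding:

```python
# Hypothetical packing of virtual-texture feedback into an RGBA8 pixel:
# a 20-bit texture/page id and a 4-bit mip level share 24 bits, leaving
# alpha free (written as 0xFF here).

def pack_feedback_pixel(texture_id, mip):
    assert 0 <= texture_id < (1 << 20) and 0 <= mip < 16
    value = (texture_id << 4) | mip                 # 24 bits total
    return (value >> 16 & 0xFF, value >> 8 & 0xFF, value & 0xFF, 0xFF)

def unpack_feedback_pixel(r, g, b, a):
    value = (r << 16) | (g << 8) | b
    return value >> 4, value & 0xF                  # (texture_id, mip)

# Round trip survives the 8-bit channels:
assert unpack_feedback_pixel(*pack_feedback_pixel(1234, 3)) == (1234, 3)
```

The "missed small textures" problem the question raises is real for any sub-resolution feedback pass: a texture covering less than one feedback pixel generates no request, which is one reason systems bias the requested mip or jitter the feedback render across frames.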
 
Thanks for sharing the info!

What would you say is the main difference between the three Virtual Texturing technologies? In addition, why do games based on this technology look a little blurry?
 
Any particular measures to prevent texture pop-in on "just in time" async loads? Caching or prefetching? Artist-driven or automatic?
 
Its similar at least in theory to what idTech 5 and Crysis 2 is doing, so its not revolutionary but very useful to get really detailed levels on all 3 platforms :)

Now that we have the Siggraph paper on idTech 5, it could be interesting if you explained how your Virtual Texturing differs from it at the various levels (which includes the question above). It does seem different in that the first representation of the world in idTech 5 is done immediately, where the initial low-res information of 3D data and textures is basically upscaled until more information has been streamed in, whereas I'm getting the impression you do a one-pass low-resolution render first that's not visible, and then fetch the exact textures you need in the next. Is your capacity to load the textures factored into that estimation, or is the estimation conservative enough that the textures can always be loaded regardless of where they need to come from?
 