Battlefield V reflection tech *spawn

@Shifty Geezer , what say you that we ban all discussions about RayTracing on the entire site? I'm fine with stamping out this virus until everything settles and there is no rasterization left in the world.
 
It would definitely be more constructive to hear from devs actively engaged with the technology instead of guessing what is really happening behind the scenes.
Yeah, but unfortunately there seem to be only two kinds of devs nowadays:
Those that hang out on Twitter, where any useful information gets lost very quickly.
The other 99.9% just let Epic and Unity take care of the tech.

An exaggeration, but discussion about gfx dev really has almost completely vanished from other related forums.
Having no time to play around with RTX myself yet, and assuming that's similar for most others too,
let me say all the information you collect here in one spot is very useful. Thanks for that, guys!
 
To criticize BFV is absolutely legitimate; we can say their implementation has artefacts, somewhat poorer performance, etc. But we can't say that DXR or RT has artefacts or poorer performance: we don't know whether it's a driver problem, an API problem, or whether it sits with BFV.
Exactly. There are still many bugs in the game related to reflections; some water puddles reflect light shafts that are not there, for example. The game also uses a LOD system for reflections: it will only reflect things inside the LOD dome around the player, so reflections can't be used at infinite distances, which causes objects to pop in and out in the reflection itself.
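The LOD-dome behaviour described above can be illustrated with a simple distance test. This is a hypothetical sketch of the idea (all names made up, not DICE's actual code): an object only contributes to reflections while it sits inside a radius around the player, which is exactly what produces pop-in and pop-out in the reflected image.

```python
import math

def in_reflection_dome(obj_pos, player_pos, dome_radius):
    """Hypothetical sketch: an object is only included in the set of
    reflection-visible geometry while inside a dome around the player."""
    dx, dy, dz = (o - p for o, p in zip(obj_pos, player_pos))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= dome_radius

# A distant building pops out of reflections once the player moves away.
building = (300.0, 0.0, 0.0)
print(in_reflection_dome(building, (0.0, 0.0, 0.0), dome_radius=500.0))  # True
print(in_reflection_dome(building, (900.0, 0.0, 0.0), dome_radius=500.0))  # False
```

As the camera crosses the dome boundary, reflected objects appear or vanish in a single frame, which matches the pop-in described in the post.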

Again, I stress criticizing from a position of hands-on experience; there are too many systems at play here, and jumping to hurried conclusions does nothing to help. For example, all of Shifty Geezer's complaints about the gun clip scene are just the game using two different systems at once to simulate player movement: a first-person model for the player himself and a third-person model for the other players to see. Both of these models are represented in the reflections; the first-person model uses screen-space reflections, the third-person one uses ray-traced reflections.

The player is crawling, yet his third-person model is doing a completely different animation from his first-person model. Each animation has its own gun, so two guns are reflected here even though there is only one player:
http://i67.tinypic.com/33ff21v.jpg


The presence of two reflections for the player: a ray traced one for the third person model, and an SSR one for the first person:
http://i66.tinypic.com/155s1z8.jpg


So no artifacts or anything, just the game using different systems to handle its multiplayer.
 
I see four grand conclusions in your original complaint post:
Those are my posits. They should be challenged in the ongoing discussion - that's the whole point of this discussion. I have not concluded anything about the future or limits of RTX. This excerpt of yours lies between a lot of conditionals and posturing...

There just isn't the power, seeing as reflections only are already taking considerable shortcuts
...was actually...

If this is representative, I can't see reflections ever working well with lighting on the current RTX cards. There just isn't the power, seeing as reflections only are already taking considerable shortcuts. I guess a lot of that depends on where the bottlenecks lie. If its surface shading, lighting and shadows may not have a huge impact in addition to reflections such that the quality drop won't be significantly different from the current top attainable.
Those are discussion points, theories based on observation. The very basis of beloved DF videos! I even presented the need for more future data by looking at an RTX 2070 to compare!

It's getting really tiring. People here are simply asking for an objective analysis without jumping to massively grand conclusions. All you had to do was point out the artifacts and then start theorizing about their causes; people would have chimed in and a conversation would have started. But how can we start one when you've already decided the problem, the cause and the effect?
Discussion consists of people presenting ideas and those ideas being discussed. Those ideas will include things like, "I don't think it's fast enough." Those ideas might later turn out to be wrong. It doesn't matter if they're wrong. We're not a marketing arm of any of these IHVs, and an idea that a new GPU isn't very quick that proves to be wrong doesn't matter. The only thing that matters here is that people have ideas and discuss those ideas intelligently. I present artefacts and your response is "but screen-space reflections have artefacts too," which has nothing to do with discussing this tech and everything to do with trumpeting RT as better.

I have not at any point criticised RTX, because it doesn't matter. I don't care whether it's awesome or crap - it makes no difference to anything. The only things I've challenged are its value to games in a first-gen version, and whether it is an optimal design for realtime raytracing or whether it's targeted at offline rendering and is being pushed onto the gaming market a little too early. And it doesn't matter one jot whether I'm right or wrong in those ideas! What I want when I present something like a theory about why the reflections are goofed up is a good technical explanation. There's a very peculiar reflection that looks to me like a distortion, but there's no way I'm sharing that here because the response will be something like I'm nitpicking or I'm seeing things or screenspace reflections are worse anyway.

All of us have an agenda somehow, or prefer a tech or platform, but you also have mod status.
Really? What's mine? Am I anti-nVidia - did they kill my cat? Am I pro-AMD because they pay me money? Am I a console fanboy who hates on raytracing because the consoles don't have it? If people look at my posts imagining there's an agenda there, they'll keep seeing stuff that looks like conclusions instead of theories.

To criticize BFV is absolutely legitimate; we can say their implementation has artefacts, somewhat poorer performance, etc. But we can't say that DXR or RT has artefacts or poorer performance: we don't know whether it's a driver problem, an API problem, or whether it sits with BFV. I'm willing to bet it's more probably on the developer side, so it's important to make this distinction. I think that if we are more specific we become aligned on the same topic and people shouldn't get defensive.
Indeed, discussion on the rendering shouldn't really be here. The original video wasn't pointing at metrics but visual results. I'll move some posts.

@Shifty Geezer , what say you that we ban all discussions about RayTracing on the entire site? I'm fine with stamping out this virus until everything settles and there is no rasterization left in the world.
Indeed!

For those confused, there is no ban proposition on the cards. It's just that every time someone talks about issues with raytracing at the moment, people swoop in saying it's early days and we have to give the devs time to get to grips with the hardware. As such, if we can't comment until the devs have got to grips with the hardware, we shouldn't discuss anything for a few years. That presents a choice: either don't talk about raytracing as it's happening and wait until it's matured, or talk about it now as it develops, sharing ideas and results. I'm for the latter.
 
Exactly. There are still many bugs in the game related to reflections; some water puddles reflect light shafts that are not there, for example. The game also uses a LOD system for reflections: it will only reflect things inside the LOD dome around the player, so reflections can't be used at infinite distances, which causes objects to pop in and out in the reflection itself.

Again, I stress criticizing from a position of hands-on experience; there are too many systems at play here, and jumping to hurried conclusions does nothing to help. For example, all of Shifty Geezer's complaints about the gun clip scene are just the game using two different systems at once to simulate player movement: a first-person model for the player himself and a third-person model for the other players to see. Both of these models are represented in the reflections; the first-person model uses screen-space reflections, the third-person one uses ray-traced reflections.

The player is crawling, yet his third-person model is doing a completely different animation from his first-person model. Each animation has its own gun, so two guns are reflected here even though there is only one player:
https://www.photobox.co.uk/my/photo/full?photo_id=501512697678



The presence of two reflections for the player: a ray traced one for the third person model, and an SSR one for the first person:
https://www.photobox.co.uk/my/photo/full?photo_id=501512697493


So no artifacts or anything, just the game using different systems to handle its multiplayer.
That's discussion. Why didn't you post this rebuttal in the first instance? Your image links don't work, though.
 
Those are my posits. They should be challenged in the ongoing discussion - that's the whole point of this discussion. I have not concluded anything about the future or limits of RTX. This excerpt of yours lies between a lot of conditionals and posturing...
They were not presented as posits, I'm afraid, as you only used conditionals for some of the conclusions. Maybe it's a misunderstanding issue.
That's discussion. Why didn't post this rebuttal in the first instance?
Didn't have the time to test the game myself to be frank.
Your image links don't work though.
Reuploaded:

The player is crawling, yet his third-person model is doing a completely different animation from his first-person model. Each animation has its own gun, so two guns are reflected here even though there is only one player:

33ff21v.jpg


The presence of two reflections for the player: a ray traced one for the third person model, and an SSR one for the first person:

155s1z8.jpg


So no artifacts or anything, just the game using different systems to handle its multiplayer.
 
Hence, why shadows are the first thing I turn off on PC if I need the performance.
Speaking of things to turn down, mine is resolution. I have a 43-inch 4K TV, and while I find playing at 4K is sharp, it's not really that much better than 1440p. Worse yet, playing at 4K really brings out the bad stuff in games: textures look more washed out, LOD pop-in becomes more prominent, and lack of detail at long draw distances sticks out like a sore thumb. Playing at 1440p hides a lot of these things for me, so I stick to it most of the time.
 
They were not presented as posits I am afraid, as you only used conditionals for some the of the conclusions.
Ideas presented here are general posits because we're not engineers working on the software or games. ;)

But still...

"These are a result of us not having enough power for 1 ray per pixel per frame." - That's a fact. Raytracing as an algorithm is perfect; it doesn't generate artefacts. If we could raytrace everything with one ray per pixel (actually we need more to solve aliasing, leading some to prefer cone tracing as an algorithm), there'd be no artefacts. The hardware isn't that powerful though, requiring us to use hacks and fakes such as denoising and hybrid screen reflections, leading to such artefacts.
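To put numbers on "not enough power for 1 ray per pixel per frame": a back-of-the-envelope budget (my own arithmetic, not a vendor figure) shows how large the ray count is even before any bounce, shadow, or anti-aliasing rays are added.

```python
def rays_per_second(width, height, fps, rays_per_pixel=1):
    """Back-of-the-envelope ray budget for a given resolution and frame rate."""
    return width * height * fps * rays_per_pixel

# 1080p at 60 fps with a single ray per pixel already needs ~124 million
# rays per second, and that covers just one bounce with no shadow, AO,
# or anti-aliasing rays on top.
print(f"{rays_per_second(1920, 1080, 60) / 1e6:.0f} Mrays/s")  # 124 Mrays/s
print(f"{rays_per_second(3840, 2160, 60) / 1e6:.0f} Mrays/s")  # 498 Mrays/s
```

Every extra effect (shadows, AO, GI, reflections of reflections) multiplies this figure, which is why hybrid shortcuts and denoising are unavoidable on current hardware.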

"I can't see reflections ever working well with lighting on the current RTX cards." - personal opinion.

"I'm also guessing the rays are concentrated in important areas or when there's less reflectivity, so as more reflections are present, the quality of those reflections decrease." - clearly a theory.

"There just isn't the power, seeing as reflections only are already taking considerable shortcuts." - wrapped in conditionals.

Maybe it's a misunderstanding issue.
Often it is, and if people stopped approaching discussions with interpretations of agendas, and instead asked for clarification on a point or challenged it with a counterpoint, we'd all do a lot better. ;) Members of B3D should be expected to engage in tech discussions rather than fanboy warring, and if they can't talk about a piece of tech without getting emotionally attached to the outcome of a discussion, they should bugger off to some other corner of the internet. Let's please have one place on the entire internet where we can talk ideas without them turning into polarised bickering!!

The player is crawling, yet his third person model is doing a completely different animation from his first person model, each animation has it's own gun, so two guns are reflected here even though there are only one player:
I saw the same thing in the player reflection in the car, where the reflected character didn't match the movements of the player. That explains things looking different (something that could be solved in future by player avatars being used in first person), but not why the reflections break up. If the reflected model is being traced, there's something weird going on. I wonder if the BVH update rate can fall behind and lead to errors?

The presence of two reflections for the player: a ray traced one for the third person model, and an SSR one for the first person:
The presence of two reflections is definitely artefacting. ;) The outside reflection in this case is screen-space? There's full texture detail, which I wouldn't expect from traced reflections. There's quite possibly a lot of SSR going on, making it hard to separate contributions from RT. The noise under the truck in my other video I would attribute to the low sample rate - anyone got any other theories?
 
The outside reflection in this case is screen-space?
The one to the right is the ray traced one, showing the whole gun with no gaps, the left one is SSR, you can tell by the gun reflection missing parts of the gun.
The presence of two reflections is definitely artefacting.
I get that, but it's not related to ray tracing implementation, but the game itself.
 
The only things I've challenged are its value to games in a first-gen version, and whether it is an optimal design for realtime raytracing or whether it's targeted at offline rendering and is being pushed onto the gaming market a little too early.

My trail of thoughts here:
We know in BFV hit shading is the bottleneck.
To fix this we could cache the shading, so it can be reused for both ray hits and the frame; hit shading becomes a single texture fetch, problem solved.
Likely we have to use a simplified unified material (just diffuse) at lower resolution for the caching.
If we simplify this, we could simplify the geometry at the same time too, allowing us to support LOD as well (with voxels being the most popular example, but there are many options). So less detail, but high performance and infinite distance.
Unfortunately DXR / RTX does not allow this. It's limited to classic triangle meshes. So this brings me back to my initial criticism: it would have been better to make compute more suitable for raytracing instead of adding restricted fixed-function hardware.

So while I am personally fine with RTX performance, I agree with your doubt. :)
 
My trail of thoughts here:
We know in BFV hit shading is the bottleneck.
To fix this we could cache the shading, so it can be reused for both ray hits and the frame; hit shading becomes a single texture fetch, problem solved.
Likely we have to use a simplified unified material (just diffuse) at lower resolution for the caching.
If we simplify this, we could simplify the geometry at the same time too, allowing us to support LOD as well (with voxels being the most popular example, but there are many options). So less detail, but high performance and infinite distance.
Unfortunately DXR / RTX does not allow this. It's limited to classic triangle meshes. So this brings me back to my initial criticism: it would have been better to make compute more suitable for raytracing instead of adding restricted fixed-function hardware.

So while I am personally fine with RTX performance, I agree with your doubt. :)
It's great to see everyone coming together again.

I just wanted to follow this thread as well: and I was hoping to see more from this post:
#567
The problem is that the reflection rays land all over the scene, with very little locality. This causes warp divergence, and completely trashes instruction caches since neighboring pixels can be executing code from a multitude of different shaders. Undersampling makes locality even worse.

Engines will have to focus on cutting overhead by using a small number of generalized shaders rather than a large number of specialized shaders. There are a number of ways to do this, with different pros and cons, and reworking your entire shader system is a nice chunk of work, so we can expect it to take a while before we see results in games.

Is there some background context as to why games went with a large number of specialized shaders vs a small number of generalized shaders? Is it a hardware-optimization design choice?

If this is a big problem, then the challenges here for a simple bolt-on are fairly obvious, as the two are directly competing. If you're optimizing for rasterization performance, you're not going to get good RT performance. And we can't yet optimize for RT performance until everyone has RT. This makes for an interesting discussion for titles going forward. Quite curious as to what upper limit developers can hit while their titles optimize shaders for rasterization/compute, and what the upper limit becomes when they optimize shaders for RT.
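One common answer to the divergence problem described in the quoted post is to batch ray hits by shader/material before shading, so that neighbouring threads run the same code. A minimal CPU-side sketch of that idea (hypothetical and purely illustrative; real implementations do this on the GPU with sorting or stream compaction):

```python
from collections import defaultdict

def batch_hits_by_shader(hits):
    """Group incoherent ray hits by shader ID so each batch can be shaded
    without divergence; 'hits' is a list of (shader_id, hit_data) pairs."""
    batches = defaultdict(list)
    for shader_id, hit in hits:
        batches[shader_id].append(hit)
    return dict(batches)

# Reflection rays land all over the scene and hit many different materials;
# unsorted, adjacent threads would execute different shaders (divergence).
hits = [("metal", 0), ("glass", 1), ("metal", 2), ("foliage", 3), ("metal", 4)]
batches = batch_hits_by_shader(hits)
print(sorted(batches))   # ['foliage', 'glass', 'metal']
print(batches["metal"])  # [0, 2, 4]
```

After batching, each group executes one shader coherently, which is also why a small number of generalized shaders helps: fewer groups means bigger, fuller batches.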
 
It's fairly important imo to get developers something hands-on to play with; there are just too many levels of complexity, and you need to try it for real (and account for developers finding different approaches). So the statement about delaying it imo doesn't give you the best results if you want progress. You cannot "solve" this in-house alone, you need it in the open.

There is a lot of flexibility in how/what gets exposed in public APIs. DXR is one way that is pretty "safe" in terms of abstraction, even safer is the Metal 2 way, but it is just the beginning. It is very typical that you want to hide new features behind some thicker abstractions, so you can change the hw/sw etc. without carving some details into stone forever. Once you have gained more experience and you feel confident certain things are good "forever", you lower the access barriers to the hw primitives.

Imo having a sparse spatial data structure in hw was just a matter of time (and devs have been asking for some level of BVH access forever), and it is yet another option compared to previous approaches to GI (voxels, SDFs etc.) for growing the toolbox on those topics. (btw you can trace against AABBs as well, not just triangles).

As several write, the divergence and shader-explosion problems are real, but they are also a problem in rasterization long term:
runtime stuttering/hiccups can be related to shader compilation. Even when compiled in advance, anyone working with big engines knows how fun it is to wait for permutations to be compiled, so it increases iteration times/costs of production.
These are not "active" choices, but the result of systems having grown naturally over the years, which is why from time to time (and as hardware advances) you need to rethink the abstractions at hand.
RT can make some of these issues pop out a lot more, but it also introduces the concept of "callable shaders" (aka proper function pointers). Ultimately, fixing it in the context of RT helps the overall programmability of GPUs, and I am sure it will "flow back" into more low-level primitives in the future (my personal opinion). By "solving" one portion of the equation (trace/BVH traversal) and putting it into the hw, you can now focus more on the rest (divergent shading etc.)

Anyway, exciting years ahead ;) and while unfortunately not all of it is public, there is a good amount of discussion/exchange going on between IHVs and ISVs on all these topics.
 
Is there some background context as to why games went with a large number of specialized shaders vs a small number of generalized shaders? Is it a hardware-optimization design choice?

If this is a big problem, then the challenges here for a simple bolt-on are fairly obvious, as the two are directly competing. If you're optimizing for rasterization performance, you're not going to get good RT performance. And we can't yet optimize for RT performance until everyone has RT. This makes for an interesting discussion for titles going forward. Quite curious as to what upper limit developers can hit while their titles optimize shaders for rasterization/compute, and what the upper limit becomes when they optimize shaders for RT.

I can only refine the question and add some assumptions...

If we remember when deferred shading came up, using a single uber-shader was quite common. Nowadays complexity has grown due to transparency and special materials, but it should be easy to use a single shader just for reflections again. Likely for BFV this would have been too much effort in a short time, or they knew it would not help much.
First we would need to know: does RTX already batch ray hits until all 32 threads can execute the same shader? I really think so, and then reducing shader count would work no wonders.

So more likely the problem is the shading complexity itself: Fetching multiple textures, fetching multiple shadow maps, or worse: emit new rays for shadows, reflections, AO, GI... whatever.

If I'm right, caching shading is the solution. But specular shading in reflections is not easily possible; it is at best a compromise of accuracy vs. storage, or we just go without it. (Caching is a lot of work! You can almost start from scratch with your renderer if you aim for texture-space shading at full detail.)

Also, if I'm right, BFV reflections would become 4 times faster or even more. Mid-range GPUs would be enough; RT would be affordable for anyone, not just for a niche of rich kids. Here it becomes really interesting. But the price to pay is increased shading area: not just the visible screen needs shading, everything needs it. So we need more memory, stochastic updates, ... whole lots of other problems.

(Sorry for bringing up texture space shading again and again, but you see how it would solve the problem.)


I do not think there is a compromise to make between rasterization and RT performance. If you have any concrete reason in mind, let me hear it. I can't think of a single one. If using just one shader for RT already helps, we can do this without changing the raster pipeline at all.


Looking forward to Exodus and the upcoming Futuremark benchmark.
I assume Exodus already uses a simplified shader for GI, and perhaps they ignore not only reflections of reflections but also shadows (except the sun, maybe) completely. Hoping they publish all this so we can draw more conclusions.
Futuremark is probably the opposite: reflections of reflections, RT shadows, RT AO as well? An interesting worst-case scenario then.
(Silence about RT shadows in Tomb Raider. Is it still expected?)


The thing I like about RTX is that I do not see a performance problem that can't be fixed with software. It's not what I wanted, but it's usable.
 
Turing got a few instructions to aid texture space shading as well, so there is a lot of flexibility in approaches possible.
https://github.com/KhronosGroup/GLSL/blob/master/extensions/nv/GL_NV_shader_subgroup_partitioned.txt
https://github.com/KhronosGroup/GLSL/blob/master/extensions/nv/GLSL_NV_shader_texture_footprint.txt

However there is a "round trip" cost of texture space shading as well (tag surfaces first, then compute shading, then resample), whilst dxr abstraction allows for better in-pipeline / on-chip handling of partial results. I am sure we will see many different approaches...
 
Really? What's mine? Am I anti nVidia - did they kill my cat? Am I pro AMD because they pay me money? Am I console fanboy who hates on raytracing because the consoles don't have it?

Don't see any reason why you would be anti-nVidia or anti-anything, really; don't think you are :) I do think you like console tech more than PC tech, though; not saying there's anything wrong with that at all. You have doubts about the ray tracing abilities of the new RTX 2000 series of GPUs, but you have to consider it's the first RT implementation, and it works quite well for what it is in BFV. We will have to see how it pans out in the future.
2020 will eventually see the next line of GPUs with perhaps better RT functions.
I also think that next-gen consoles will have RT, maybe not as fast or advanced, but some ability to use it in games, combined with normal rendering like BFV does. It would be a bit odd if they didn't. I'm going off topic here though :p

If people look at my posts imagining there's an agenda there, they'll keep seeing stuff that looks like conclusions instead of theories.

True to that.
 
...it works quite well for what it is in BFV.
That's subjective and a matter of debate. I'm not clear how much is RT and how much is screen space, e.g. the reflection on the marble floor or in the puddles. The doorway reflection in the shiny floor has full texture detail, suggesting to me it's screen space*, plus we get all sorts of artefacts like appearing/disappearing reflections. It's important to determine what RTX does and doesn't do; that's really what I'm getting at in all this RT discussion. In the RT-for-consoles discussion, all the benefits of raytracing were listed. However, we need to be realistic about the hardware, and that starts by looking objectively at what's being accomplished with RTX. Comparing low-quality 15% pixels to high-quality 40% pixels should be done to see what impact the denoising has and how much 'spill' there is in the reflections, for example.

* Edit: of course I'm forgetting temporal sampling to fill in the detail, while distortion is present in the window frames. There should be a visible difference between low and ultra settings in reflection quality.
 
The doorway reflection in the shiny floor has full texture detail, suggesting to me it's screen space,
I tested the game, and it's not. The only SSR reflections in that whole scene are some leaves, and the first person player model.

As DICE have stated, SSR only applies to certain foliage objects, and certain LOD culled objects. Ray Tracing is not curtailed back here in favor of SSR.

plus we get all sorts of artefacts like appearing/disappearing reflections.
As mentioned earlier, reflections are subject to the LOD of the game, and there are also several bugs that need sorting out. It's important to make that distinction: BFV uses a specific, tailor-made RT implementation that suits its technology; any drawbacks we uncover right now are in the game, and not necessarily in the implementation itself.
 

Thanks, awesome stuff. I wish some other vendor would expose things so quickly too! (Thinking of GCN's ability to launch vertex shaders from compute, or access to other threads' VGPRs...)


I tested the game, and it's not. The only SSR reflections in that whole scene are some leaves, and the first person player model.

As DICE have stated, SSR only applies to certain foliage objects, and certain LOD culled objects. Ray Tracing is not curtailed back here in favor of SSR.

I don't think it's so easily distinguishable. If you see a reflection of a leaf, you know it's from SSR. But if it's not a leaf, you can't be sure it's ray traced. (Also, 'wrong' SSR reflections won't necessarily be corrected with rays.)
I assume they do SSR first, and results with a high probability of being wrong get replaced by ray tracing. Their given percentage numbers of rays per quality tier may be affected or not, and maybe those numbers have changed with the patch as well.

Personally I guess it's a mix of both: SSR mostly helps to distribute more rays to other parts of the screen, but it also allowed them to reduce the ray count a bit to hit a robust 60fps. (The bounding-box bug fix likely improved only the tracing performance but not the shading bottleneck.)
A bit of hackery but it works.
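The hybrid scheme guessed at above can be sketched as a per-pixel decision (a hypothetical sketch of my reading of the posts, not DICE's actual code): take the cheap SSR result where it is trustworthy, and only spend an expensive traced ray where it is not.

```python
def choose_reflection(ssr_hit_found, ssr_confidence, threshold=0.8):
    """Hypothetical hybrid selection: use the cheap SSR result where it is
    reliable, fall back to an (expensive) traced ray where it is not."""
    if ssr_hit_found and ssr_confidence >= threshold:
        return "ssr"
    return "ray_traced"

print(choose_reflection(True, 0.95))   # ssr
print(choose_reflection(True, 0.30))   # ray_traced (e.g. ray near screen edge)
print(choose_reflection(False, 0.0))   # ray_traced (reflected point off-screen)
```

The threshold is the tuning knob: raising it spends more rays for correctness, lowering it saves rays at the cost of SSR artefacts surviving, which matches the "robust 60fps" trade-off guessed at above.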


The clipping of the raytraced scene is a more interesting limitation. I thought they would 'fix' this by blending with the sky before a building pops out. They did not? And is it the case that smaller objects pop out earlier than distant large objects? Would be interesting to know...
In any case we see that RT cannot handle such large amounts of geometry. We may need additional very-low-LOD models, but smooth transitions will remain almost impossible for now.
Things like progressive meshes that transform smoothly between LODs would require updating the low-level BVH too frequently. Combining progressive meshes and a progressive BVH becomes possible only if BVH generation becomes programmable. (The future of realtime engines is insanely complex; I never believe promises about 'it becomes easy, simple and lower cost' :) )
 
Right, the problem space simply shifts. In lighting/shading, RT becomes easier/simpler to use and set up. But feeding and managing a global scene now becomes tougher, given that 2D raster allowed so many tricks (particles etc.) and you could cull aggressively.

Devs for sure will present their findings at GDC.
 