Viability and implementation of SVO on next-gen consoles *spawn

[sorry for talking about this in this thread BUT seeing as Alex the original article writer is here and it is a direct response to his post...]

Hey Alex,

A while back, in June 2013, the Forza devs made a statement on their use of materials in Forza 5:

ref: http://www.nowgamer.com/features/19...dware_building_tracks_drivatar_explained.html


This sounds like Voxel Cone Tracing, no?! I wasn't sure back then, but from your post it sounds like this technique.

Great find. I hadn't seen that before. He's clearly referring to some type of procedural textures with dynamic light interaction, but not necessarily SVO cone tracing. Microsoft Game Studios licensed Allegorithmic's Substance engine for procedural textures (well, procedurals "lite", as they refer to them as "smart textures"), along with a number of other prominent publishers. So it's not surprising to see it in Forza.

Whether he's actually using cone tracing, HDR lighting, or some other global illumination technique, either way it confirms they're using both procedural textures and at least partial global illumination. And you can see there's some dynamic lighting from the sun going on in Forza.

But no, I wouldn't go as far as confirming SVO cone tracing, because I don't see why they would stick with cube maps for reflections in that case. Unless they are ONLY using SVO cone tracing as a plugin to target that very specific effect. And when it comes to hacking rasterized graphics with things like this, that's way beyond my understanding. Every dev house has its own programmers who come up with their own hacks which only they know. It's a big mess if you ask me.

Having said that, I'm pretty sure the first dev that uses SVO cone tracing, or any type of ray tracing, as their base lighting engine will scream it from the rooftops. And you'll know it when you start looking at reflections, transparencies, and refractions. It's hard to miss. That game will have reflective surfaces, water, and refractive surfaces all over the place to show it off.

Forza does appear to me to be using some sort of dynamic illumination and procedural textures for the car paint. So that's cool. But if you're going to use SVO cone tracing, reflections and global illumination, as well as indirect lighting, would be the primary reason. It's because you want those things. And ditching cube maps for reflections would be one of the main reasons :)
 

Physically Based Rendering?
 

Sure, it's possible, but that's not what they're saying. They're clearly saying they're not using textures as you know them: no image textures. They're talking about procedural textures, or procedural materials, and algorithms which interact with light realistically. And you can see this immediately.

Procedurals are perfect for creating materials and 3D detail at close-up zoom levels, like the paint chips or metallic flakes they keep talking about in Forza. Unlike a regular image texture, they're not bound by resolution: the more you zoom in, the finer the detail the algorithm creates. A regular image texture has a finite resolution, determined by the resolution at which the picture of the material was taken, or at which it was created in something like Photoshop. So if you zoom in past the native distance or resolution of the original image, it will start getting pixelated or blurry. If you wanted an image texture that covers the entire surface of a car, with resolution adequate for viewing paint chips from a centimetre away, you're talking about a very large texture, a huge panoramic scan, or multiple very large textures.

But that's not the only issue. If you were to take a snapshot of a real-life material at 1 cm away, you would get a particular pattern suitable for up-close zooming. But if you were then to zoom out in a video game, it wouldn't look like the material you wanted when viewed from, say, a foot away. The engine wouldn't know what to create.

Let me see if I can explain it better. Say you were to take a snapshot of a cotton sweater with a twisted knitted pattern at 1 foot away. If you then wanted the ability to zoom in very close and maintain clarity and resolution, you would zoom in and take a snapshot at 1 cm away, where you would get the pattern of the actual cotton fibers, and use that texture for your object at zoomed-in levels. However, in a game, if you texture an object with something like the pattern of cotton fibers, when you zoom out it's not going to look like your twisted knitted cotton sweater anymore. It will look like something else. Not to mention, at some point you would be swapping one texture for the other, and it would be noticeable. So you can see why this just isn't practical. Procedurals are the only way to deal with this.
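For what it's worth, the resolution independence is easy to see in a toy procedural texture. This value-noise sketch is my own minimal implementation (not Allegorithmic's Substance, whose internals I don't know): the function is defined at every coordinate, and adding octaves adds detail, so there's no "native resolution" to zoom past.

```python
import math

def hash01(ix: int, iy: int) -> float:
    """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
    return (h & 0xFFFF) / 65536.0

def value_noise(x: float, y: float) -> float:
    """Bilinearly interpolated lattice noise, defined for ANY (x, y)."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    # smoothstep fade so the interpolation has no visible grid creases
    ux, uy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    a = hash01(ix, iy)
    b = hash01(ix + 1, iy)
    c = hash01(ix, iy + 1)
    d = hash01(ix + 1, iy + 1)
    return (a * (1 - ux) + b * ux) * (1 - uy) + (c * (1 - ux) + d * ux) * uy

def fbm(x: float, y: float, octaves: int = 6) -> float:
    """Fractal sum: each octave contributes finer detail, so zooming never 'runs out'."""
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        norm += amp
        amp *= 0.5
        freq *= 2.0
    return total / norm

# The same function serves every zoom level; no source image to exceed:
far  = fbm(3.2, 1.7)            # viewed from far away
near = fbm(3.2001, 1.7001, 10)  # zoomed way in: just evaluate with more octaves
```

That's the whole trick behind "paint flakes at any zoom": the material is a function you evaluate, not a picture you magnify.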


I just thought of something, liquidboy... SVO cone tracing might be in use in Forza... in the garage.

It's really hard to tell, though, because you can fake it really well under those circumstances with cube maps too, and I don't think there are any dynamic objects or light sources you can move around in front of the car to see whether they reflect accurately.

Screen-space reflections are out of the question, since those currently still look pretty bad in comparison to cube maps and they don't reflect what's outside the camera's view. But SVO cone tracing or similar in the garage isn't out of the question. The lighting and reflections in the garage do seem to be at a higher level.

maxresdefault.jpg



Forza5_E3_Screenshot_13.jpg


Notice the clouds and other objects which are not actually visible to the camera? That's beyond what screen-space reflections can do. So we can eliminate that option.

That leaves three options: ray tracing, cone tracing, or some really amazing high-res reflective cube maps, which is what we know they're using in the actual gameplay.
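To make the off-screen argument concrete, here's a toy sketch (all numbers, names and the pinhole setup are my own assumptions, not any engine's code): SSR can only answer when the reflected point re-projects inside the frame, while a cube map is indexed purely by direction, so it always has an answer, even for clouds behind or above the camera.

```python
def project_to_screen(p, width=1920, height=1080, focal=1000.0):
    """Pinhole projection of a camera-space point; None if off-screen or behind."""
    x, y, z = p
    if z <= 0:                       # behind the camera: no screen pixel exists
        return None
    sx = width / 2 + focal * x / z
    sy = height / 2 + focal * y / z
    if 0 <= sx < width and 0 <= sy < height:
        return (sx, sy)
    return None

def ssr_lookup(reflected_point):
    """Screen-space reflections: fails whenever the reflected hit isn't on screen."""
    return project_to_screen(reflected_point) is not None

def cubemap_lookup(direction):
    """A cube map is a function of direction only; it is never 'off screen'."""
    x, y, z = direction
    return max((abs(x), 'x'), (abs(y), 'y'), (abs(z), 'z'))[1]  # dominant face axis

cloud_above_camera = (0.0, 50.0, -5.0)   # above and behind: invisible to the camera
print(ssr_lookup(cloud_above_camera))    # False: SSR has no data for it
print(cubemap_lookup((0.0, 1.0, 0.1)))   # 'y': the cube map's top face still answers
```

Which is exactly why clouds in the car-paint reflections rule SSR out but fit cube maps fine.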

Anyway, I agree we shouldn't be getting so off topic in this thread, so let me just wrap it up and go back to my original point: I'm interested in finding out how eSRAM and the data move engines, with tile and untile, might benefit procedural generation and ray tracing "via" parametric surfaces. Or cone tracing.
 


In Forza 4, environments like the one in the Top Gear studio are extremely low-poly with high-resolution textures. The reflections in Forza 5's photo mode are probably a pre-rendered cubemap.
 
I'm saying I can only think of the eSRAM and the tile/untile features on the DMEs. And PRTs in particular are what accelerate SVO cone tracing.
I'm not at all convinced anyone anywhere has said SVO cone tracing is accelerated. From your post, I see how you interpret the dev quote as meaning that, but I don't think that's what's meant at all.

If that's the case, you would have to ask yourself, what type of ray tracing uses parametric surfaces to achieve ray tracing? Doesn't make much sense, does it? It's entirely backwards.
It doesn't, which is why I think he means ray tracing of parametric surfaces.
Although SVO cone tracing might fit because SVO uses a cone which could be considered a parametric surface...
Actually that's a volume, not a surface. SVO doesn't create a cone per se, nor use the parametric representation of a cone to form the sampling pattern of the trace.
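To illustrate that point, a cone trace is just point samples stepped along a ray whose sampling footprint (and hence the mip level of the voxel volume you fetch from) widens with distance. This is my own toy sketch with made-up parameters, not any engine's implementation:

```python
import math

def cone_trace_samples(max_dist, aperture, base_voxel_size=1.0):
    """Yield (distance, footprint_radius, mip_level) for one cone trace.

    No parametric cone surface anywhere: the 'cone' emerges from the
    footprint radius growing linearly with distance along the ray.
    """
    samples = []
    d = base_voxel_size
    while d < max_dist:
        radius = d * math.tan(aperture)        # footprint grows linearly with distance
        mip = max(0.0, math.log2(max(radius, base_voxel_size) / base_voxel_size))
        samples.append((d, radius, mip))
        d += max(radius, base_voxel_size)      # step size tracks the footprint
    return samples

for d, r, mip in cone_trace_samples(max_dist=64.0, aperture=math.radians(15)):
    print(f"dist {d:6.2f}  radius {r:5.2f}  mip {mip:4.2f}")
```

A wider aperture means bigger steps and coarser mips, which is why one cheap cone can stand in for a whole bundle of rays.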

What it comes down to, regarding that dev quote, is: did the dev mean 'rendering of parametric surfaces via use of raytracing' or did he mean 'rendering of arbitrary scenes by volumetric sampling'? Neither is what his words actually mean, but for me the former is definitely the more likely.

But on the topic of wordage we should be a little careful here, because in relation to his phrase it makes all the difference in the world.
Firstly, we haven't any context for that quote, so we can't hold the dev to it. Secondly, it was probably just an off-the-cuff remark, not really thought through. If you listen closely to people, it's common for someone to use the wrong word yet everyone in the conversation understands what they mean. Transcribe that conversation verbatim and it'll read wrong without ever being wrong to the listeners.

First I need to point out that your example of "rendering a million marbles" is not something that's typically referred to as procedural generation. That's usually referred to as instancing, which is quite a bit different from procedural generation as it is known and used the large majority of the time (tessellation, procedural textures, terrain generation, etc.). And most of those use parametric-surface algorithms.
I was envisaging a million procedurally textured marble marbles, or glass with some swirly feature inside. Every marble would be computed as a Perlin or whatever noise algorithm. Every point in the scene would be calculated as procedural (if very simple!) geometry and texture.

If it was a "missphrase", and all he meant was ray tracing a bunch of marbles, then why even bring it up?
Because he (or even she; we have no details beyond a single one-line quote, which may not even be accurate) was possibly asked, e.g.:

"So PS4 is faster than XB1"
"Yep"
"In every way?"
"Yeah...well, (shrugs), I guess, if you wanted to do something oddball like a CSG game, it could be faster there. It could be faster at some procedural content like that."

I can't think of anything else that would fit.
Hopefully this post shows it can be something else, even if you don't agree. I certainly cannot take that one line as evidence of any superiority in a particular technique like SVO, and there's no evidence beyond that.

Also, if you look up "ray tracing parametric surfaces"

http://citeseerx.ist.psu.edu/showciting;jsessionid=06DA1282D637478ECEB614D1576078D6?cid=228055

You're seeing a whole bunch of papers ALL referencing some type of voxel cone tracing.
Really? How many of those have you read? Because just looking at the titles I can see very few that actually cover the topic of voxel cone tracing. I can't see any obvious paper talking about that, truth be told, with many talking about rendering NURBS/Bezier surfaces or CSG.

Fair enough, and I was going to post it in its own thread but someone else brought the discussion in here before I could. However the answer to the thread title, when it all comes down to it, might just be SVO cone tracing, so how off topic is it? In the end what it all comes down to is what the spec differences actually achieve in terms of graphics or software techniques, no?
I think we need to get to the point of understanding what the differences are, if any, before we can consider that! The known differences, ESRAM and move engines, probably aren't requirements for DX11.2, considering AMD's DX11.2 GPUs aren't reported to feature them (or they already have tile-capable DMA units, which I'm fairly confident is the case given a brief mention of as much in AMD presentations). And regarding ESRAM, I can't see it being a great boon to SVO given that SVO is based on small 64 KB tiles that should be a good fit for the texture caches. There may be a latency advantage in loading tiles from ESRAM instead of GDDR5? :???: That'll need some pro like Sebbbi to explain.
 
I'm not at all convinced anyone anywhere has said SVO cone tracing is accelerated. From your post, I see how you interpret the dev quote as meaning that, but I don't think that's what's meant at all.

It's accelerated by partially resident textures. That's not from the dev; that's just from looking up SVO cone tracing. Without PRTs, you're not likely to get it up and running on any current hardware for anything significant beyond a simple demo, let alone be able to implement octrees.

It doesn't, which is why I think he means ray tracing of parametric surfaces. Actually that's a volume, not a surface. SVO doesn't create a cone per se, nor use the parametric representation of a cone to form the sampling pattern of the trace.

What it comes down to, regarding that dev quote, is: did the dev mean 'rendering of parametric surfaces via use of raytracing' or did he mean 'rendering of arbitrary scenes by volumetric sampling'? Neither is what his words actually mean, but for me the former is definitely the more likely.

Firstly, we haven't any context for that quote, so we can't hold the dev to it. Secondly, it was probably just an off-the-cuff remark, not really thought through. If you listen closely to people, it's common for someone to use the wrong word yet everyone in the conversation understands what they mean. Transcribe that conversation verbatim and it'll read wrong without ever being wrong to the listeners.

Fair enough; as I said, it's a pretty crazy phrase that confuses and leaves things open to interpretation more than it clarifies. Let's just say he's likely talking about ray tracing curved surfaces. I still think it points to SVO cone tracing as the only plausible reason it would be faster, since we know any other type of ray tracing is simply out of the question, because the technique is heavily dependent on partially resident 3D textures.

I was envisaging a million procedurally textured marble marbles, or glass with some swirly feature inside. Every marble would be computed as a Perlin or whatever noise algorithm. Every point in the scene would be calculated as procedural (if very simple!) geometry and texture.

Hopefully this post shows it can be something else, even if you don't agree. I certainly cannot take that one line as evidence of any superiority in a particular technique like SVO, and there's no evidence beyond that.

Well, if I understood you correctly, you're still saying creating the marbles would be done procedurally. OK, but that still doesn't answer the ray tracing part. Creating marbles procedurally won't help you ray trace them afterwards, unless there's some other ray tracing technique you're referring to, that I don't know about, which benefits from having instanced objects. If you're suggesting he really did mean ray tracing, that's far more intensive, whether you're doing it on 1 million procedurally created marbles, or instanced marbles, or 1 million unique objects. It would require the same computational power. Ray tracing doesn't care much about how many objects, or what kind of objects, you have. Real ray tracing cares more about your resolution than your geometry. Unless there's something I'm completely ignorant about.

Really? How many of those have you read? Because just looking at the titles I can see very few that actually cover the topic of voxel cone tracing. I can't see any obvious paper talking about that, truth be told, with many talking about rendering NURBS/Bezier surfaces or CSG.

A few. Most of them point to voxel and texture octree implementations as a suggested alternative.

I think we need to get to the point of understanding what the differences are, if any, before we can consider that! The known differences, ESRAM and move engines, probably aren't requirements for DX11.2, considering AMD's DX11.2 GPUs aren't reported to feature them (or they already have tile-capable DMA units, which I'm fairly confident is the case given a brief mention of as much in AMD presentations). And regarding ESRAM, I can't see it being a great boon to SVO given that SVO is based on small 64 KB tiles that should be a good fit for the texture caches. There may be a latency advantage in loading tiles from ESRAM instead of GDDR5? :???: That'll need some pro like Sebbbi to explain.

Well no, we know they're not requirements. Clearly both GPUs support it, and we now know the PS4 is capable of it. The question at hand was how they would offer additional benefits.

The tiles can add up to a few megabytes in size, though. The way it works is it pulls different tiles from different textures and different mip-map levels, then stores the resulting pool of tiles in RAM. I believe the example given at MS's Build was a 3 GB texture compressed down to 16 MB of tiles. At the very least it looks like a coincidental fit for the eSRAM.
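A back-of-envelope sketch of those tiled-resources numbers (the texture size and working set here are my own round figures, not Microsoft's Build demo): a huge virtual texture is split into 64 KB tiles and only the tiles actually sampled stay resident.

```python
TILE_BYTES = 64 * 1024   # the standard tiled-resources tile size

def virtual_texture_tiles(width, height, bytes_per_texel=4):
    """Total tiles if the whole texture were resident (single mip, no padding)."""
    total_bytes = width * height * bytes_per_texel
    return total_bytes // TILE_BYTES, total_bytes

tiles, total = virtual_texture_tiles(32768, 32768)   # a ~4 GiB virtual texture
resident_tiles = 256                                 # assumed per-frame working set
print(f"virtual:  {total / 2**30:.1f} GiB in {tiles} tiles")
print(f"resident: {resident_tiles * TILE_BYTES / 2**20:.0f} MiB"
      f" -- roughly the scale that could sit in 32 MiB of ESRAM")
```

So the interesting question isn't capacity but whether low-latency access to that small resident pool actually buys anything over the texture caches.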
 
I translated the page Solarus pointed to about the PS4 SVO demonstration:

As an example of the possibilities of the PlayStation 4's graphics, Chris Ho presented his project. He combines a sparse-octree voxel representation with the Partially Resident Textures that GCN chips offer and that DirectX 11.2 exposes as Tiled Resources. By storing the data in 3D textures, the CPU-intensive generation of the octree can be skipped without giving up significant advantages. A live demonstration of the technology showed impressive lighting effects achieved without major frame-rate drops, which according to Ho still stand at around 38 fps for the current animation. The generation of the voxel skeleton, however, needs another 45 ms (corresponding to about 22 fps).

Hmm...

Interestingly enough, most of the SVO cone tracing implementations you see around the net are missing the octree as well, and here they're saying the octree generation is the CPU-intensive part.

Kind of hard to tell, but doesn't it seem like they're saying that it runs at 38 fps without the octree and at 22 fps with it? And something about needing another 45 ms... whatever the heck that might be.

If that simple demo is running at 22 fps with an octree, it's going to need a whole hell of a lot of optimization for a video game.
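The frame-time arithmetic in the translation is easier to follow as reciprocals, which is why "another 45 ms" lands at roughly 22 fps. (The combined figure is my own speculation that the two passes would run back to back; the slides may well overlap them.)

```python
# fps and milliseconds-per-frame are just reciprocals of each other
def ms_to_fps(ms):
    return 1000.0 / ms

def fps_to_ms(fps):
    return 1000.0 / fps

print(f"45 ms  -> {ms_to_fps(45):.1f} fps")   # ~22.2 fps, matching the article
print(f"38 fps -> {fps_to_ms(38):.1f} ms")    # ~26.3 ms for the lit pass
# If the ~26 ms render and the 45 ms voxelisation ran strictly back to back:
print(f"combined -> {ms_to_fps(fps_to_ms(38) + 45):.1f} fps")
```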
 

It's all in the PDF. http://www.gdcvault.com/play/1019252/PlayStation-Shading-Language-for

Lighting+Voxels.jpg


Results.jpg
 
Other than as a particular art-style choice, ray tracing for real-time rendering is generally not a practical approach for video games, simply because there are better and more efficient ways. Just look up Quake Wars: Ray Traced or Wolfenstein: Ray Traced. Impressive tech, but that's about it.
 


You must not have been keeping up with this. We're not discussing ray tracing, but rather SVO cone tracing.

But as far as ray tracing goes, there is no better approach.

In fact there's no option for video games other than ray tracing going forward. Rasterization becomes inefficient beyond a certain point, and actually more taxing than ray tracing. We're 2 years out from that point, tops.

2534026-6820803678-rt.pn.png

Red: Rasterized.
Green: Ray tracing.

Not to mention, even if you could figure out how to hack in the remaining ray tracing effects we don't have, like proper refraction and reflections, the amount of manpower and effort that goes into hacking rasterized graphics to do things such as reflections (and building reflection maps), refraction, and fluids is becoming too much for rasterized graphics to make sense anymore.

The scales are finally tipping, and rasterized graphics are actually the ones becoming too expensive, in both manpower and computer processing, to run.

We should have SVO cone tracing this generation, but from the looks of things there's a chance we might miss out on it because the manufacturers wanted to save $50. "For gamers, by gamers"... right?

"Hey gamers! We know you've been waiting for 25 years for this, and you can now have an X1/PS4 for $550/$450 with cone tracing or $500/$400 without. Which one do you want? Nevermind, we'll go ahead and make the decision for you guys and save you a buck."

If this turns out to be the case... un-freaking-believable! Time for a new competitor in the console industry, because the bean counters have clearly taken over.
 
We're 2 years out from that point, tops.

Funny how that chart image is dated as far back as 2008, and this is 2013 now. And what's the X value on that chart anyway? 100M? 1B? 10B? 100B?

Also, there are consumer PC parts that have 2x+ the computational power of the X1/PS4. Why hasn't it happened already?
 


You fail to realise that with rasterization LOD is used to keep the triangle count to reasonable levels, so the counts will not really increase with time from here onwards.

You also do not consider what the constant factor is for ray tracing vs rasterization; it has been discussed several times on this board that, for primary rays, it is unlikely that ray tracing will win over rasterization anytime soon.

For secondary rays, however, there are a lot of ideas that tie into ray tracing, photon mapping, etc. So some sort of hybrid approach, as we have now, seems to be the way for some time to come.

And of course bean counters are in charge, otherwise companies would not survive.
 
It's accelerated by partially resident textures. That's not from the dev; that's just from looking up SVO cone tracing. Without PRTs, you're not likely to get it up and running on any current hardware for anything significant beyond a simple demo, let alone be able to implement octrees.
Oh. I thought you were talking about XB1 being accelerated over PS4 given the original posit and reference to the dev quote. Yes, PRT improves SVO performance.

Well, if I understood you correctly, you're still saying creating the marbles would be done procedurally. OK, but that still doesn't answer the ray tracing part. Creating marbles procedurally won't help you ray trace them afterwards, unless there's some other ray tracing technique you're referring to, that I don't know about, which benefits from having instanced objects.
You don't create the object. ;) You don't have any object data persistent in RAM. You render the scene calculating each pixel from scratch every time. Cast a ray, calculate what marble it would hit, calculate the surface appearance for that pixel, move to the next one. You need to read only the marble positions in this case (although this is of course the ray-tracing problem of fast random memory reading).
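A sketch of that "no object data in RAM" idea (my own toy scene and names, not anyone's engine): an endless field of marbles whose positions are implied by a lattice, so the tracer derives which marble a ray would hit on the fly and stores no geometry at all.

```python
import math

RADIUS, SPACING = 0.4, 1.0   # assumed: marbles of radius 0.4 centred on integer points

def marble_centre(p):
    """Nearest lattice point to p: the marble is computed, never looked up."""
    return tuple(round(c / SPACING) * SPACING for c in p)

def trace(origin, direction, max_t=100.0, eps=1e-4):
    """Sphere-trace the implicit marble field; returns the hit point or None."""
    t = eps
    while t < max_t:
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = math.dist(p, marble_centre(p)) - RADIUS   # distance to nearest marble
        if dist < eps:
            return p                                     # on (or just inside) a surface
        t += max(dist, eps)                              # safe step: cannot tunnel
    return None

inv = 1.0 / math.sqrt(3)
hit = trace((0.5, 0.5, 0.5), (inv, -inv, -inv))   # aimed at the marble centred at (1, 0, 0)
miss = trace((0.5, 0.5, 0.5), (1.0, 0.0, 0.0))    # slips between the marbles
```

The per-ray cost here doesn't change whether the lattice implies a thousand marbles or a million, which is the "resolution, not geometry" point in a nutshell.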

Well no, we know they're not requirements. Clearly both GPus support it, and we know now, the PS4 is capable of it. The question at hand was how would they offer additional benefits.
I don't follow the question. The discussion started when you suggested XB1 could have an advantage over PS4 in supporting SVO via PRT, no? That was the thing having an advantage. So are we talking about what advantage XB1 can have over PS4 thanks to its HW design, or, recognising PS4 does PRT too, what advantage both consoles have over... consoles without PRT?

The tiles can add up to a few megabytes in size, though. The way it works is it pulls different tiles from different textures and different mip-map levels, then stores the resulting pool of tiles in RAM. I believe the example given at MS's Build was a 3 GB texture compressed down to 16 MB of tiles. At the very least it looks like a coincidental fit for the eSRAM.
Yes, that seems the one area X1 may have a benefit, depending on how tiles are accessed and whether they can be cached effectively or not. If they are accessed in a truly random fashion, low-latency access seems beneficial. I'm unable to answer that. We'd need to know what the latencies are between DDR/GDDR and ESRAM access, and how often the GPU is waiting on data. Oh, and how well the GPU copes with stalls by doing other work, because if it's still busy while waiting on texture tiles to be loaded from main RAM, the time taken to complete the task won't be representative of the work done by the GPU.
 
You fail to realise that with rasterization LOD is used to keep the triangle count to reasonable levels, so the counts will not really increase with time from here onwards.

You also do not consider what the constant factor is for ray tracing vs rasterization; it has been discussed several times on this board that, for primary rays, it is unlikely that ray tracing will win over rasterization anytime soon.

For secondary rays, however, there are a lot of ideas that tie into ray tracing, photon mapping, etc. So some sort of hybrid approach, as we have now, seems to be the way for some time to come.

And of course bean counters are in charge, otherwise companies would not survive.

With or without LOD, triangle counts have grown significantly, are growing every generation for any particular scene, and will only keep growing. They're going to have to grow a whole lot more to even allow for certain much-needed physics effects such as water, fluids, and plenty of others. You just can't do realistic, physically simulated water without millions of triangles, and there's nothing LOD can do about that. You need the triangles to do the physics and capture the fluidity accurately. You can't get by with an empty shell when it comes to motion and volume-based fluids and other materials.

And I would argue the benefit of a geometry increase would eliminate the need for LOD in a lot of games, or at least reduce the number of levels as we know them, and that's actually a great thing. I'm sure developers aren't too fond of recreating the same model and textures 7-15 times for every object in their game. And the higher the resolution, the bigger the increase in RAM and storage requirements for each LOD level. It's just a lot of unnecessary work that has become very expensive and will ultimately lose out to the efficiency and quality of ray tracing engines. Even adaptive LOD doesn't solve a major issue like the water-fluids problem.

We might have to take a small step back, and might not get all the niceties from the get go like multiple bounces on reflections, and 4+ depth on transparencies and sampled shadows, but that's just something we'll have to live with at first.

Don't get me wrong, photon mapping and SVO cone tracing are nice, and I'm sure we'll see them as well, but I see them more as transitional techniques to ray tracing.

It's around the corner. I will bet on seeing it in 2-3 years tops for high end PCs. And Brigade 3 looks like a good candidate for that first engine.
 
With or without LOD, triangle counts have grown significantly, are growing every generation for any particular scene, and will only keep growing. They're going to have to grow a whole lot more to even allow for certain much-needed physics effects such as water, fluids, and plenty of others. You just can't do realistic, physically simulated water without millions of triangles, and there's nothing LOD can do about that. You need the triangles to do the physics and capture the fluidity accurately. You can't get by with an empty shell when it comes to motion and volume-based fluids and other materials.
You wouldn't use triangles, or any objects, for fluid effects. You use a volume-based engine (voxels ;)) to calculate the fluid, and then either use a non-triangle-based renderer or map triangles to the surface.

As for real-time raytracing, there's an old discussion here worth reading, and of course news that Imagination has released raytracing hardware, but it's a discussion of its own for the 3D algorithms forum and not the console forum (unless one has reason to think the consoles will be raytracing, in which case it can gain a thread here).
 
With or without LOD, triangle counts have grown significantly, are growing every generation for any particular scene, and will only keep growing.

So, like, assume simple triangles in a strip (roughly one vertex per triangle), with xyz + normal + UV, each vert taking 32 B of memory... meaning 1M tris need 32 MB, and 1G tris = 32 GB... where are you going to store that amount of verts?
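That arithmetic, spelled out (32 B per vertex is the post's own figure: position + normal + UV as eight 32-bit floats; the strip-vs-soup factor is my addition):

```python
BYTES_PER_VERT = 8 * 4   # 3 position + 3 normal + 2 UV floats = 32 B per vertex

def mesh_bytes(triangles, verts_per_tri=1.0):
    """verts_per_tri is ~1 for long triangle strips, 3 for an unindexed soup."""
    return int(triangles * verts_per_tri * BYTES_PER_VERT)

print(f"1M tris, stripped:  {mesh_bytes(1_000_000) / 1e6:.0f} MB")
print(f"1G tris, stripped:  {mesh_bytes(1_000_000_000) / 1e9:.0f} GB")
print(f"1G tris, unindexed: {mesh_bytes(1_000_000_000, 3) / 1e9:.0f} GB")
```

So even with the most favourable striping assumption, a billion resident triangles is tens of gigabytes of vertex data, which is the point being made.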
 