NVidia Ada Speculation, Rumours and Discussion

That's an important point, no? Maybe the most important. We are just too geeky all day :D
The most important point will happen when a big-budget game requires RT for gameplay, the way most games require texture mapping these days and wouldn't really work on a GPU which doesn't support it. When there is no need to build a rasterization baseline for old platforms, a lot of things and approaches will change. We are already seeing this in UE5, but it's only a half step so far.
 
Sure. We don't have enough RT GPUs out there to make any RT-only games yet?
Currently, giving many games RT support at all is a huge step forward, which would not be possible without U engines.

There are more RT GPUs out there than PS5s for sure, probably more than all RT-capable consoles combined. Anyway, you seem to really dislike anything pc (as a developer?), maybe switch to the consoles and do your projects there?
Otherwise, just about everything hardware related is far ahead on the pc side of things.
 
Anyway, you seem to really dislike anything pc (as a developer?)
I really don't know why you keep depicting me as a console warrior. I can not even play with console controllers, so why should i worship those platforms?
Because i see their advantages? Like being designed for games at a sweet spot, and having more flexibility with RT which i would need? I'm sorry to notice those things.
Because they cost less money? Or because they define cross-platform games due to selling most of them? I'm sorry to notice this as well, and i'm sorry to be just fine with this obvious reality.
Because i criticize PC HW for increasing in cost, power consumption and size, with results not justifying this? I'm sorry this is my opinion.
Because i want less powerful PC HW, like iGPUs on console levels, to compete with their advantages and keep the PC platform securely alive this way? I'm sorry to put your vision of the glorious PC master race into danger, while my vision with the same goal remains intact.
So either you get it, or not. No need to agree, but i want you to understand my point, which last time i thought you did.
Otherwise, just about everything hardware related is far ahead on the pc side of things.
That's not my business or interest. It's just a tool to work on. No matter how fast the tool operates, i still (and only) need to find the faster algorithm, not the faster HW.
 
I really don't know why you keep depicting me as a console warrior. I can not even play with console controllers, so why should i worship those platforms?
Because i see their advantages? Like being designed for games at a sweet spot, and having more flexibility with RT which i would need? I'm sorry to notice those things.
Because they cost less money? Or because they define cross-platform games due to selling most of them? I'm sorry to notice this as well, and i'm sorry to be just fine with this obvious reality.
Because i criticize PC HW for increasing in cost, power consumption and size, with results not justifying this? I'm sorry this is my opinion.
Because i want less powerful PC HW, like iGPUs on console levels, to compete with their advantages and keep the PC platform securely alive this way? I'm sorry to put your vision of the glorious PC master race into danger, while my vision with the same goal remains intact.
So either you get it, or not. No need to agree, but i want you to understand my point, which last time i thought you did.

Those are your views and opinions on the pc gaming market. I'm not really seeing RT being better in any way on them (you're the first to say so, so far). Cost less money? It depends; games tend to be more expensive and you gotta pay to play online etc. Console hardware didn't really get cheaper either. Cross-platform games usually tend to be 'best' on pc, too. You don't need the highest end to match and/or beat the PS5, either. As well, going into the generation you usually can get some nice hardware at the lower end of the spectrum.

You're just in the wrong forum section imo; all i see you doing is complaining and downplaying anything that's not your typical console. But yeah, continue with this stuff, so will i from the other side.
 
I'm not really seeing RT being better in any way on them
Because you, as (presumably) an end user, only look at resulting performance from samples. And you see more performance -> more rays -> better image. Conclusion: PC has better HW and is the better platform.

To me, as a developer, it is like this:
Question: Can i implement an algorithm to scale geometry resolution so i have high res near the camera, but low res far from it?
PC rasterization: yes. PC DXR: no.
Console: yes, yes.
Conclusion: Only console currently has the necessary flexibility to reduce work by multiple orders of magnitude. Thus, at some point, the console becomes faster no matter what's the difference in HW power.

The remaining question becomes: Can we have a workload which is large enough so it can run faster on console than on PC, while still being realtime?

Can you follow, up to this point? Remember those Nanite claims of bazillions of triangles, but rendering only 2 million of them, because there is no need to render triangles smaller than a pixel. I'm talking about that same thing.
And now imagine we have two computers, both the same. One runs an algorithm to reduce the triangles to pixel resolution and renders 2 million triangles; the other one does not and renders the bazillions of triangles.
The first one will be almost a bazillion times faster. (Does that term exist? 'bazillions'? idk)
You agree, yes?
Now imagine we make this second computer 10 times faster. What will change?
Almost nothing. Still not fast enough.
You agree still?
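
For readers who like numbers, here is a toy back-of-envelope of that argument (every figure is made up; 'bazillion' picked as 1e11 and a 1080p screen as the pixel budget):

```cpp
// Toy illustration of the LOD argument above (all numbers are made up).
// Reducing geometry to roughly one triangle per pixel turns a "bazillion"
// triangle asset into ~2 million rendered triangles, a gap no realistic
// brute-force HW speedup can close.
#include <cstdio>

int main() {
    const double screenPixels = 1920.0 * 1080.0; // ~2 million pixels
    const double sourceTris   = 1.0e11;          // "bazillions" of source triangles
    const double renderedTris = screenPixels;    // ~1 triangle per pixel after reduction

    const double speedupFromLOD = sourceTris / renderedTris;
    std::printf("Work saved by LOD reduction: ~%.0fx\n", speedupFromLOD);

    const double gpuSpeedup = 10.0;              // the hypothetical 10x faster GPU
    std::printf("Remaining gap even with that GPU: ~%.0fx\n", speedupFromLOD / gpuSpeedup);
    return 0;
}
```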

Ok, now the next step, explaining why i've said 'you only look at resulting performance from samples'.
Imagine we are not yet done with the triangle reduction algorithm. What do we do? Ofc. we don't use bazillion triangle models at all. We use a model with 2 million triangles instead for our game, because our computer and software can display this model at a nice 60 fps.
This is one more sample for you.
But the guy working on the algorithm is almost done. He and his boss already look at a scene made from bazillion triangle models. And it looks awesome, and they rub their hands. Boss has dollar signs in his eyes, dev imagines applause at Siggraph. 'It works! Next game will have it!' they scream.
This is no sample for you. You don't know about it.

But i do. Do you see the difference? Can you imagine this changes perspective? I hope so.
Now think of this: You observe speedups from HW generations with factors of 1.5, or 2, or even 10 after adding new fixed function blocks. And it's awesome. You like that. That's progress!
Meanwhile, i aim for speedups with factors of hundreds or thousands, even more after HW scales up and the workload can be increased. And i like this too and this is my job, and it's fucking progress as well.

Neither perspective is a wrong topic for this forum. Because games are software running on hardware. Both are important. Conflicting interests on flexibility are the norm.
And believe me i care about hardware. I study its performance and how to max it out. (Which is low level optimization and won't give me such big speedups, but still matters after that.)
But you don't care about software. You just sit there and say: 'Oh, they will surely find a way to achieve those bazillion triangles even if current API prevents this, or whatever you talk about. Does not really matter. PC having most rays per pixel is what matters. Get this finally and keep civil, bad dude!'
Which proves your ignorance and incompetence about that SW / HW relationship, so maybe it's you who is on the wrong forum.
I do not care. I don't expect such competence from you. I know there is no marketing machinery hyping or explaining programming work to regular people. And opinions of end users matter, so it's input.

But if you don't understand my goals and issues which even a 100 times faster GPU could not solve, then just stop quoting me and drawing irrelevant and wrong conclusions. That's really annoying and i'm getting tired of all this nonsense.
 
That's really annoying and i'm getting tired of all this nonsense.

Others might think the same about your endless complaining about the pc platform. Again, you're offtopic; NV RTX4000 speculations have nothing to do with the pc being inferior to consoles. Which is nonsense anyway.
 
Because you, as (presumably) an end user, only look at resulting performance from samples. And you see more performance -> more rays -> better image. Conclusion: PC has better HW and is the better platform.

[...]
But if you don't understand my goals and issues which even a 100 times faster GPU could not solve, then just stop quoting me and drawing irrelevant and wrong conclusions. That's really annoying and i'm getting tired of all this nonsense.
Thanks for the insights!

A question though: You are talking about an impossible DXR-LoD system at what level or rather at what point in the rendering process?

I'm asking from a consumer point of view and with games out there using Raytracing already, I'm sure there must be some sort of LoD already, right? I can't wrap my head around how current Raytracing is supposed to work with full detail everywhere.

Maybe you can explain this a little further? If you don't want to derail the topic, my DMs are open as well. Or a mod could split this branch off, since I think this is an interesting topic all by itself. :)
 
Meh, will believe it when I see it. Nobody else seems to be talking about this console RT flexibility. Certainly nobody is using it.

Notice i did not provide an answer to the question:
Can we have a workload which is large enough so it can run faster on console than on PC, while still being realtime?
... for your sanity.

I only made a dumb and simple example to illustrate why improving APIs, lifting blackboxes, etc, might give a benefit not only to developers, but also to endusers. Because it increases the number of beans they can then count afterwards on whatever their favorite HW is.
That nobody talks about it does not mean it's not used. It's just not interesting to HW fetishists how the stuff works; they only care which HW gives the most fps.
I'm pretty sure console flexibility is used already, e.g. to stream BVH from disk instead of calculating it at runtime like the PC has to do.
Then there is 4A Games, who said they used custom traversal code on consoles. IDK for what - likely not for LOD like i would.
And finally, PC holds consoles back too. If you make a crossplatform game, it's not attractive to develop completely different solutions to a problem on each platform. Thus, extra console flexibility ends up mostly used for low level optimizations, but the true potential isn't utilized.

All this is just obvious and makes sense, no?
I would not need to repeat it a 100 times if you guys would not constantly claim i'm just talking bullshit.


I'm asking from a consumer point of view and with games out there using Raytracing already, I'm sure there must be some sort of LoD already, right?
Yes. We can use the same standard practice of discrete LODs with RT. Which means having multiple models for a character, each with fewer triangles than the last. At some point we just swap those models as they come closer or move farther away.
This works, and there are even tricks to hide the visible popping by rasterizing and tracing the model twice, but switching LODs per pixel in a stochastic way. (The free RT Gems 2 book has a chapter on this.)
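
To make that stochastic trick a bit more concrete, here is a minimal sketch of the idea (my own toy code, not the RT Gems 2 listing; the hash and function names are invented): inside a crossfade zone, each pixel or ray randomly picks LOD N or N+1, so the swap gets dithered away instead of popping.

```cpp
// Toy stochastic LOD transition (illustrative only).
#include <cstdint>

// Cheap per-pixel hash to get a repeatable "random" value in [0,1).
static float Hash01(uint32_t x, uint32_t y, uint32_t frame) {
    uint32_t h = x * 374761393u + y * 668265263u + frame * 2246822519u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return float(h ^ (h >> 16)) * (1.0f / 4294967296.0f);
}

// Returns which discrete LOD to use for this pixel/ray.
// 'distance' is in the same units as the per-LOD switch distances.
int SelectStochasticLOD(float distance,
                        const float* lodSwitchDist, int lodCount,
                        float transitionWidth,
                        uint32_t px, uint32_t py, uint32_t frame) {
    for (int lod = 0; lod < lodCount - 1; ++lod) {
        const float d = lodSwitchDist[lod];
        if (distance < d - transitionWidth) return lod;          // clearly before the switch
        if (distance < d + transitionWidth) {                    // inside the crossfade zone
            // Fraction of the way through the transition, 0..1.
            const float t = (distance - (d - transitionWidth)) / (2.0f * transitionWidth);
            return (Hash01(px, py, frame) < t) ? lod + 1 : lod;  // dithered pick
        }
    }
    return lodCount - 1;                                         // farther than all switch points
}
```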

Problems / limitations with this standard approach:
* To hide the popping we need to render twice, so we decrease perf instead of increasing it. Consequently, we make the transition zone small, so not everything needs to be rendered twice all the time. Visually that's a compromise. The transitions are still visible because they phase in and out. In contrast, mipped texture mapping does indeed always combine two mips of a texture, so a transition is never visible. We just can't afford to do the same correct solution with geometry.
* But the big problem is this: Discrete LOD only works for small models. Imagine terrain. We can not swap the geometric resolution of the whole terrain, because it is always both close and far. So we would need to divide the terrain into chunks, like 10x10 meter blocks. But if we have blocks at different resolutions, there will be cracks between them. So we need to stitch the vertices to close those gaps, either with extra triangles or by moving vertices, for example (Nanite has a brilliant way to avoid stitching, btw). It's a pretty difficult problem in many ways.
The complexity increases once we realize 10x10 blocks are not enough. At some distance even those become smaller than a pixel, so we still have not solved the problem. So what we really need is a hierarchy: 10x10, 20x20, 40x40 blocks and so forth (see the sketch below).
You can imagine the problem is hard to solve. And notice that our whole 'static' terrain becomes a dynamic model as the camera moves back and forth. Its surface constantly changes; detail goes up and down in different areas of the terrain model.
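
A toy sketch of such a chunk hierarchy (hypothetical code, numbers and names are mine): block size grows with distance so the projected detail stays roughly constant, and neighbouring blocks landing on different levels is exactly where the cracks and stitching come from.

```cpp
// Toy chunked-terrain LOD selection (illustrative only).
#include <algorithm>
#include <cmath>

// 0 = finest blocks (e.g. 10x10 m), 1 = 20x20, 2 = 40x40, and so on.
// Every doubling of distance beyond 'detailDistance' doubles the block size.
int TerrainBlockLOD(float distance, float detailDistance = 50.0f, int maxLevel = 8) {
    const float ratio = std::max(distance / detailDistance, 1.0f);
    const int level   = static_cast<int>(std::floor(std::log2(ratio)));
    return std::min(level, maxLevel);
}

// Block edge length in meters for a given level: 10, 20, 40, 80, ...
float BlockSizeForLevel(int level, float finestBlockSize = 10.0f) {
    return finestBlockSize * static_cast<float>(1 << level);
}
```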

Now the problem is: Although BVH is a hierarchy itself, we can not map this hierarchy to the hierarchy of detail, because we have no access to the BVH data structure at all.
All we could do is constantly rebuild the whole BVH for each block which changes detail. We can not even refit it, because it's different geometry. And we can not keep the BVHs of block detail levels which go out of sight in memory, in case the player comes back.
Rebuilding all BVH for the entire scene constantly over time is too expensive and not practical (or if it was, it would be a waste of HW power).
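
For reference, this is roughly what the rebuild/refit distinction looks like at the DXR API level (a sketch only; resource creation, scratch sizing and UAV barriers are omitted, and the helper function is my own): a refit (PERFORM_UPDATE) requires the triangle count and topology to be unchanged, which is exactly why a block that swaps to a different LOD mesh can only be rebuilt.

```cpp
// Sketch: rebuild vs. refit of one BLAS in DXR (barriers/allocation omitted).
#include <d3d12.h>

void BuildOrRefitBlas(ID3D12GraphicsCommandList4* cmdList,
                      const D3D12_RAYTRACING_GEOMETRY_DESC& geometry,
                      D3D12_GPU_VIRTUAL_ADDRESS blas,
                      D3D12_GPU_VIRTUAL_ADDRESS scratch,
                      bool refit)
{
    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type           = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    inputs.DescsLayout    = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs       = 1;
    inputs.pGeometryDescs = &geometry;
    // ALLOW_UPDATE enables later refits; PERFORM_UPDATE reuses the old BVH
    // topology and only refreshes bounds - impossible if the geometry changed.
    inputs.Flags = refit
        ? (D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_ALLOW_UPDATE |
           D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PERFORM_UPDATE)
        : D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_ALLOW_UPDATE;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC desc = {};
    desc.Inputs                           = inputs;
    desc.DestAccelerationStructureData    = blas;
    desc.ScratchAccelerationStructureData = scratch;
    desc.SourceAccelerationStructureData  = refit ? blas : 0; // previous BLAS when refitting
    cmdList->BuildRaytracingAccelerationStructure(&desc, 0, nullptr);
}
```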

As a result, DXR API restriction by blackboxing BVH prevents a solution of the continuous LOD problem, which is one of the key problems in computer graphics.
Nanite successfully solves the LOD problem only for the rasterized geometry.
For RT, they have to use low detail proxy meshes. Those proxy meshes have the same problems, now unsolved, but because their resolution is low and they change detail rarely or not at all, it works well enough to add an RT checkbox to the feature list.

Now you may ask: Is this some kind of a new problem? Never heard this is such an issue?
We need to go a bit back in history. There were many such continuous LOD algorithms in the early days. I remember the ROAM algorithm for adaptive geometry resolution on terrain; then there are 'clip maps', or the 'transvoxel algorithm', and many more. There also was the Messiah game which used such an algorithm for characters, which were very detailed for the time. But it always was complicated, and solutions were limited to heightmaps with no general topology, or processing characters on the CPU was expensive and needed an upload to the GPU every frame, while games like Doom3 achieved similar detail with just normal maps on low poly meshes.
And GPU power kept increasing, allowing a true solution to the problem to be postponed for decades. Discrete LODs became the standard and were good enough.

Until recently, when Epic showed us we are all fools by ignoring this for too long. :)

If the whole industry adopts this newer and better LOD solution or tries similar things like i do, some more guys should soon join my fight about lifting API restrictions.
If the industry sticks with good-enough solutions from the past, my request will remain a rare one.
 
Some devs have already talked about how pc dxr is less flexible right now than the dx version on xbox and what they can do on ps5.
Do the Turing and Ampere (and perhaps Lovelace) support that flexibility?
I wonder if the PC DXR will ever get that if only RDNA2/3 supports it.
 
Do the Turing and Ampere (and perhaps Lovelace) support that flexibility?
They do not support traversal shaders, because traversal is fixed function (and thus faster).

But any GPU stores BVH in video memory, and specifying this data structure / opening the black box / enabling dynamic geometry by offloading BVH generation to the developers (optional!) is possible.
Also no GPU has HW to generate BVH yet. It is all driver software. So using the requested flexibility would not leave parts of the HW underutilized.
 
They do not support traversal shaders, because traversal is fixed function (and thus faster).

Sure they do. But this is much slower and so unusable for gaming.
Do you know that nearly every professional software provider switched over to a hardware accelerated raytracing pipeline? Maybe they know more than other people...
 
Sure they do. But this is much slower and so unusable for gaming.
It is not supported by the HW traversal units - they are not programmable. (And i do not request to change this!)
Ofc. we can implement anything in software, but then it's the software which provides those features, and the question was specifically about HW support.
To my knowledge, NV HW can not support callbacks to compute during traversal, so implementing a traversal shader means bypassing HW acceleration completely, including intersection.

Edit:
Assuming our goal for traversal shaders would be stochastic LOD, which is the only big practical application i see, i think NV could implement it maybe this way:
Shorten ray length but preserve traversal state, callback to shader, continue traversal using preserved state if there was no change.
This way we do not restart the ray from the root node on each callback.
Might work, but we would need to discuss the many performance problems related to stochastic LOD. It's a super elegant and simple solution, but it's not very efficient.
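
Purely to visualize the idea (plain C++, not real DXR or any vendor API; every name here is invented), the 'shortened ray plus callback' loop would look something like this on the app side - with the caveat that, without preserved traversal state, each segment restarts from the BVH root, which is exactly the cost the suggestion above tries to avoid:

```cpp
// Conceptual sketch of segmented traversal with callbacks (all names invented).
struct Hit { bool valid = false; float t = 0.0f; int instance = -1; };
struct Ray { float origin[3]; float dir[3]; float tMin; float tMax; };

// Stand-in for a fixed-function trace limited to [ray.tMin, ray.tMax].
static Hit TraceSegment(const Ray& ray) { (void)ray; return Hit{}; }

// Stand-in for the "traversal shader": e.g. a stochastic LOD decision that
// either accepts the hit or asks for traversal to continue.
static bool CallbackAccepts(const Hit& hit) { return hit.valid; }

Hit TraceWithCallbacks(Ray ray, float segmentLength, int maxSegments) {
    for (int i = 0; i < maxSegments && ray.tMin < ray.tMax; ++i) {
        Ray segment = ray;
        const float segEnd = ray.tMin + segmentLength;
        segment.tMax = (segEnd < ray.tMax) ? segEnd : ray.tMax;

        const Hit hit = TraceSegment(segment);
        if (CallbackAccepts(hit))
            return hit;              // the callback decided this hit is final

        // Without preserved traversal state, the next segment restarts from
        // the BVH root - resuming from saved state would avoid this cost.
        ray.tMin = segment.tMax;
    }
    return Hit{};                    // no accepted hit within the ray range
}
```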
 
And how much faster is AMD's hardware when you start to modify the traversal?
On Turing+ traversal gets accelerated by the FP32+INT32/FP32 throughput. So a software solution is at least twice as fast as on Pascal per compute unit.
 
i mostly mean the cost on building BVH, which should be zero actually.
It's close to 0 for static geometry.

But beside that, it's not that we can do wonders with 1spp, and cranking up to 10spp still is no wonders.
Sure, but we can do wonders with 0.5 spp in primary visibility, with flexible RT sampling we can probably get away with even less samples for primary visibility.
Primary lighting and GI sampling can be vastly improved with importance sampling, light path reuse and guiding, caching, temporal accumulation and clever spatial filtering; we barely scratched the surface here in real games, especially on consoles where the most basic filtering is used.
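
As an aside for readers, the temporal accumulation part boils down to something as simple as an exponential moving average per pixel (a toy sketch; real implementations add reprojection, variance clamping and so on):

```cpp
// Toy temporal accumulation for a noisy low-spp GI signal (illustrative only).
struct Color { float r, g, b; };

// Blends this frame's noisy result into the running history.
// 'historyValid' is false on disocclusion (reprojection failed), which resets
// accumulation - the main source of visible noise in practice.
Color AccumulateGI(Color history, Color current, bool historyValid,
                   float blendFactor = 0.05f) // ~20 frames of effective history
{
    if (!historyValid)
        return current;
    Color out;
    out.r = history.r + (current.r - history.r) * blendFactor;
    out.g = history.g + (current.g - history.g) * blendFactor;
    out.b = history.b + (current.b - history.b) * blendFactor;
    return out;
}
```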

Thus my question if we really need monster GPUs.
Monster GPUs are fun to work and play with.

I know for example you are very wrong assuming compute GI has to be inaccurate or inefficient in comparison to RT GI. From what i see, it's the exact opposite of that
That's what I see in real games. Besides, the efficiency comes down to shading cache (doing multiple bounces per pixel with cache is cheap), not tracing, and RT with the same geometry as in primary pass will always provide the most accurate per pixel shading.

Because not any gaming system has capable RT power.
Resolution matters too; a system that is not performant in 4K can be very capable in 1080p.
4K requirements and high internal rendering resolution in current games choke consoles a lot, but 1080p to 4K reconstruction will become the new norm in a year or two with more advanced VSR and temporal upscaling techniques, so a lot of performance is still on the table.
Current consoles have a bare-bones RT implementation with ray-AABB and ray-triangle intersection instructions, but the new mid-life "Pro" consoles will be way more capable (pretty sure there will be lots of performance requests from devs).
Doing more advanced lighting, higher resolution effects or even path tracing on the Pro consoles would be a perfect point to differentiate in a multi-platform world.

What does it help if your artists no longer need to tweak lighting for the Enhanced version, when they still have to do it to support handheld stuff
Devs will do what they always do - drop old hw once the install base of new hardware is large enough (that's the most solvent audience anyway); nobody can support old hw forever.

PC rasterization: yes. PC DXR: no.
Console: yes, yes.
Let me correct it for you - Console: yes, probably/no/nobody knows since this has not been proven by anybody
There is a lot of overconfidence in the dev world, that's for sure, but a healthy portion of pragmatism would not do any harm here.

Conclusion: Only console currently has the necessary flexibility to reduce work by multiple orders of magnitude.
That's a false conclusion since there is no confirmation in the first place.

Thus, at some point, the console becomes faster no matter what's the difference in HW power.
Again that's a false conclusion since it doesn't even factor in logarithmic perf scaling with the number of triangles in RT.

One runs an algorithm to reduce the triangles to pixel resolution and renders 2 million triangles; the other one does not and renders the bazillions of triangles.
Even without any LODs, RT will never touch all the bazillions of triangles; in fact, it will touch about as many triangles as the model covers pixels on screen (probably a little more due to geometry overlap). That's the whole reason behind BVH and the logarithmic perf scaling.
The real difference is in memory footprints (larger for non compacted meshes in RT) and animated geometry skinning (which Nanite doesn't support anyway).
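
A quick back-of-envelope on that logarithmic scaling (made-up counts, roughly two triangles per leaf assumed): going from 2 million to 2 billion triangles only deepens the tree by about 50%, so per-ray cost barely moves, while the footprint and build costs grow with the full triangle count.

```cpp
// Back-of-envelope: per-ray BVH cost grows with tree depth ~ log2(N), not N.
#include <cmath>
#include <cstdio>

int main() {
    const double lodTris = 2.0e6;  // LOD-reduced model
    const double rawTris = 2.0e9;  // "bazillion" triangle source model
    const double depthLod = std::log2(lodTris / 2.0); // ~2 triangles per leaf
    const double depthRaw = std::log2(rawTris / 2.0);
    std::printf("BVH depth: %.1f vs %.1f levels (ratio ~%.2fx)\n",
                depthLod, depthRaw, depthRaw / depthLod);
    return 0;
}
```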

lifting blackboxes
Lifting the BVH black box requires solving many issues. Building BVH for different HW is akin to shader compilation for different HW - precompiling all kinds of BVH for different architectures for all in-game geometry would bloat the game client to the sky.
I can't imagine somebody would want to download a few hundred gigabytes for a single game, and that's just one of many related problems (what if there is some update to the BVH structure in the driver, do we need to recompile? it doesn't solve the really costly thing, which is animated geometry, etc.).
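
For what it's worth, a rough size estimate along those lines (every number here is an assumption picked for illustration, not a measured figure for any driver or HW):

```cpp
// Back-of-envelope for pre-baked BVH size (all numbers are assumptions).
#include <cstdio>

int main() {
    const double uniqueTriangles  = 500.0e6; // hypothetical total unique in-game geometry
    const double bytesPerTriangle = 80.0;    // assumed BLAS cost incl. nodes; varies per HW/driver
    const int    hwTargets        = 6;       // hypothetical number of architectures to pre-bake for

    const double perTargetGB = uniqueTriangles * bytesPerTriangle / 1.0e9;
    std::printf("~%.0f GB per target, ~%.0f GB for all targets\n",
                perTargetGB, perTargetGB * hwTargets);
    return 0;
}
```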

Rebuilding all BVH for the entire scene constantly over time is too expensive and not practical (or if it was, it would be a waste of HW power).
That's why nobody does this.

DXR API restriction by blackboxing BVH prevents a solution of the continuous LOD problem
It doesn't prevent classic discrete LODs however (which I should remind once again have been used since forever and are still used in 99.99999% of existing games), and with the logarithmic perf scaling, classic LODs so far do just fine with RT.
Continuous LOD is a rasterization solution to pixel-sized triangles; RT, on the other hand, doesn't care about triangle sizes, so continuity of LODs is not a requirement and classic LODs do just fine.

If the whole industry adopts this newer and better LOD solution or tries similar things like i do
That's a big if; so far there is a 0% adoption rate despite all the early talk and the work from Media Molecule in Dreams.

Until recently, when Epic showed us we are all fools by ignoring this for too long.
Epic wasn't the first with continuous LOD; it has been around for a while, in Dreams for example.

soon there should join some more guys my fight about lifting API restrictions
Pretty sure everyone is aware of the restrictions. Lifting API restrictions would certainly be nice, but these guys had better propose some workable solutions to the problems which will appear once the restrictions are lifted (this would certainly help more than fighting windmills).
 
Only console currently has the necessary flexibility to reduce work by multiple orders of magnitude. Thus, at some point, the console becomes faster no matter what's the difference in HW power.
Gonna need the receipts - as all internal documents I have seen and personal anecdotes from developers contradict this. The game implementations contradict it as well in terms of performance and visual quality.

Did you read the 4A blog on their implementation? It was a matter of finding tons of quality compromises, essentially.
 
And how much faster is AMD's hardware when you start to modify the traversal?
On Turing+ traversal gets accelerated by the FP32+INT32/FP32 throughput. So a software solution is at least twice as fast as on Pascal per compute unit.
AMD would benefit from still having its intersection HW accelerated, and NVidia's doubled FP32 likely can't beat that. Besides intersection, there is no need for FP math in traversal, which is mostly stack management and pointer chasing.
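
To make that concrete, a minimal software traversal loop looks roughly like this (my own sketch with an invented node layout; the two intersection tests are stubbed out):

```cpp
// Minimal software BVH traversal skeleton (illustrative; layout is invented).
#include <cstdint>

struct BvhNode {
    float   boundsMin[3], boundsMax[3];
    int32_t leftChild;  // index of first child (children stored consecutively), -1 if leaf
    int32_t firstTri;   // valid if leaf
    int32_t triCount;
};

// Stand-ins for the actual FP tests (slab test / triangle test), omitted for brevity.
static bool IntersectAabb(const BvhNode&, const float* /*origin*/, const float* /*invDir*/, float /*tMax*/) { return true; }
static bool IntersectTriangles(int, int, const float*, const float*, float& /*tHit*/) { return false; }

// 'tHit' must be initialized to the ray's tMax by the caller; fixed-size stack
// kept short for brevity (real code guards the depth).
bool TraceBvh(const BvhNode* nodes, const float* origin, const float* dir,
              const float* invDir, float& tHit)
{
    int  stack[64];
    int  stackSize = 0;
    int  node = 0;        // root
    bool hit  = false;

    for (;;) {
        if (IntersectAabb(nodes[node], origin, invDir, tHit)) {
            if (nodes[node].leftChild < 0) {
                // Leaf: the only place that really needs FP throughput.
                hit |= IntersectTriangles(nodes[node].firstTri, nodes[node].triCount,
                                          origin, dir, tHit);
            } else {
                // Inner node: push one child, descend into the other.
                stack[stackSize++] = nodes[node].leftChild + 1;
                node = nodes[node].leftChild;
                continue;
            }
        }
        if (stackSize == 0) break;  // stack empty -> done
        node = stack[--stackSize];
    }
    return hit;
}
```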
It's pointless anyway? Nobody wants to do compute RT on a Turing GPU? What would be the reason to need those traversal shaders at any cost?
We would need some guy who has actual ideas about traversal shaders... but that's not me.
 
If you make a crossplatform game, it's not attractive to develop completely different solutions to a problem on each platform. Thus, extra console flexibility ends up mostly used for low level optimizations, but the true potential isn't utilized.
Then it’s a good thing there are top tier devs making AAA console exclusives. Surely they will share their RT exploits, one day…

All this is just obvious and makes sense, no? I would not need to repeat it a 100 times if you guys would not constantly claim i'm just talking bullshit.

It’s obvious that DXR is inflexible yes. But you’re also claiming a number of unsubstantiated things. In the RT gems chapter on LOD they acknowledge that traversal shaders would enable the necessary dynamic BVH selection. However they also acknowledge that passing control from the HW intersection unit to a traversal shader would be a significant performance loss. So it’s far from obvious that flexibility would be a net win on today’s hardware.

Is there some RT workload that would be faster on today’s consoles than brute force hardware traversal? Maybe. But if the end result is completely unusable performance anyway then it’s kinda pointless. Also there is no indication that this is even true on current hardware either in theory or practice.
 
It's close to 0 for static geometry.
True if you generate BVH on level load.
If you have an open world, you stream in new stuff during gameplay, and the BVH generation cost is not zero. May be a problem, maybe not. Depends on game and content.
If you want continuous LOD, either for detail or efficiency reasons, the amount of static geometry becomes zero, and reusing the same geometry for RT becomes impossible. The proxy workaround is the only option, but that's a hack, not a solution. The problem we have finally solved for visible geometry just moves to the RT side and awaits a proper solution in a better future.

Sure, but we can do wonders with 0.5 spp in primary visibility, with flexible RT sampling we can probably get away with even less samples for primary visibility.
Keep up the good work :)

That's what I see in real games. Besides, the efficiency comes down to shading cache (doing multiple bounces per pixel with cache is cheap), not tracing, and RT with the same geometry as in primary pass will always provide the most accurate per pixel shading.
The irradiance cache is not the only killer optimization there is. Agree about per pixel accuracy, which is one big reason i still need RT as well.

Doing more advanced lighting, higher resolution effects or even path tracing on the Pro consoles would be a perfect point to differentiate in a multi-platform world.
Not so much for me. The compute system can scale down; RT is optional detail. So i can target low end with the same tech, and the lighting setup is consistent. Not so sure about mobile, though, but optimistic...

Continuous LOD is a rasterization solution to pixel-sized triangles; RT, on the other hand, doesn't care about triangle sizes, so continuity of LODs is not a requirement and classic LODs do just fine.
Acceptable if there is no better option available, but not optimal. The true benefit of LOD i see is more about scaling down again, not about insane detail as UE5 shows. If we get mobile RT, reducing detail may simply be necessary.

That's why nobody does this.
Because it's not possible due to the API limit, and only few see a need for it yet. But LOD is not the only case where we want to handle BVH ourselves. There are things like animated foliage too, or any case of geometry with dynamic topology.

That's a big if; so far there is a 0% adoption rate despite all the early talk and the work from Media Molecule in Dreams.
Yep, which is not really impressive. I'd hope for some more innovation than just adopting IHV hyped features. ;)

Pretty sure everyone is aware of the restrictions. Lifting API restrictions would certainly be nice, but these guys had better propose some workable solutions to the problems which will appear once the restrictions are lifted (this would certainly help more than fighting windmills).
Fighting windmills? Maybe. Stuck in the constant LOD middle age with some discrete LODs for characters? Maybe.
We'll see. Usually the better algorithm wins, and competition forces adoption over time. It may hit you too, at some point. I hope the issues are resolved by then.
 