Solution to Flat Surfaces?

Nesh

I've always wondered when we were going to see actual uneven surfaces with real texture in games, rather than flat surfaces with texture maps on them.

But today I saw this and wondered whether it can be usefully applied in games, or whether it is the next step brought by the next-generation consoles.
 
It's also not unlimited detail, as the narrator of the video repeatedly claims; that is of course impossible, as it would require at the very least unlimited memory and storage space (and quite possibly also unlimited processing power and data transfer rates when loading the stuff).

I'm assuming there's some sort of algorithmic fakery behind the "unlimited" aspect, perhaps fractal-based (in which case any additional detail would become repetitive and/or random-looking, and thus look unnatural); in any case it can't be unlimited, because computers actually have limits! ;)
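(Just to illustrate what I mean by fractal fakery, here's a toy sketch I threw together myself, nothing to do with whatever they actually use: a few octaves of hash-based value noise. You can zoom in as far as you like without storing anything extra, but the added detail is noise, not authored content.)

```cpp
// Illustration only: "endless" detail from a tiny deterministic function.
// A few octaves of hash-based value noise; zooming in just evaluates the
// same function at finer coordinates, so no extra storage is needed,
// but the extra detail is repetitive/random rather than authored.
#include <cmath>
#include <cstdint>
#include <cstdio>

static float hashNoise(int x, int y) {           // deterministic pseudo-random value in [0,1)
    uint32_t h = static_cast<uint32_t>(x) * 374761393u + static_cast<uint32_t>(y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return static_cast<float>(h & 0xFFFFFF) / 16777216.0f;
}

static float valueNoise(float x, float y) {      // bilinear interpolation of lattice values
    int xi = static_cast<int>(std::floor(x)), yi = static_cast<int>(std::floor(y));
    float fx = x - xi, fy = y - yi;
    float a = hashNoise(xi, yi),     b = hashNoise(xi + 1, yi);
    float c = hashNoise(xi, yi + 1), d = hashNoise(xi + 1, yi + 1);
    return (a * (1 - fx) + b * fx) * (1 - fy) + (c * (1 - fx) + d * fx) * fy;
}

// Fractal Brownian motion: each octave doubles frequency and halves amplitude.
static float fbm(float x, float y, int octaves) {
    float sum = 0.0f, amp = 0.5f, freq = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum += amp * valueNoise(x * freq, y * freq);
        freq *= 2.0f;
        amp *= 0.5f;
    }
    return sum;
}

int main() {
    // "Unlimited" zoom: same function, ever finer coordinates.
    for (float zoom = 1.0f; zoom <= 1024.0f; zoom *= 4.0f)
        std::printf("detail at zoom %6.0f: %f\n", zoom, fbm(0.37f * zoom, 0.59f * zoom, 8));
}
```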
 
The more I look at that, the more I remember that there's far more to a good-looking image than poly-count and geometry. Especially these days, it's more about surface shaders and lighting techniques. Just because your round object is actually round won't automatically make it look real. That's true in both games and visual effects. In terms of geometry, what you see in the movies isn't that much different from what you see in a lot of games these days.
 
I'm not surprised he didn't bring up Crysis and POM (parallax occlusion mapping), which does a pretty damn good job of taking care of the flatness of ground geometry.
 
He mainly focused on the trees in Crysis, but I think he said that the game did a good job of making the ground look like it had depth, though if you looked really closely you could see it wasn't really 3D.
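(For anyone who hasn't looked at how that trick works: parallax occlusion mapping marches the view ray through a height map in the pixel shader and shifts the texture lookup to where the ray hits, so a flat polygon looks bumpy until you check the silhouette or a grazing angle. A rough CPU-side sketch of the idea, my own code rather than anything from Crytek:)

```cpp
// Sketch of the core of parallax occlusion mapping (CPU-side pseudo-shader).
// heightField(u, v) is a placeholder sampler returning depth in [0,1] below the
// surface; the view direction is given in tangent space (z toward the viewer).
#include <cmath>
#include <cstdio>
#include <functional>

struct Vec2 { float x, y; };

// Marches the view ray in small steps through the height field and returns the
// shifted texture coordinate where the ray first dips below the surface.
Vec2 parallaxOcclusionUV(Vec2 uv,
                         float viewX, float viewY, float viewZ,
                         float heightScale,
                         const std::function<float(float, float)>& heightField)
{
    const int   steps     = 32;
    const float layerStep = 1.0f / steps;
    // Total UV shift at full depth; grazing angles (small viewZ) shift further.
    Vec2 delta = { viewX / viewZ * heightScale / steps,
                   viewY / viewZ * heightScale / steps };

    float layerDepth   = 0.0f;
    float surfaceDepth = heightField(uv.x, uv.y);
    while (layerDepth < surfaceDepth && layerDepth < 1.0f) {
        uv.x -= delta.x;                       // step the ray across the surface
        uv.y -= delta.y;
        layerDepth += layerStep;               // ...and downward into it
        surfaceDepth = heightField(uv.x, uv.y);
    }
    return uv;  // sample albedo/normal maps here for the "3D-looking" result
}

int main() {
    // Hypothetical bumpy height field just for the demo.
    auto bumps = [](float u, float v) { return 0.5f + 0.5f * std::sin(u * 40.0f) * std::sin(v * 40.0f); };
    Vec2 shifted = parallaxOcclusionUV({0.25f, 0.25f}, 0.4f, 0.2f, 0.9f, 0.05f, bumps);
    std::printf("shifted UV: (%f, %f)\n", shifted.x, shifted.y);
}
```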
 
Tessellation + displacement mapping does a decent job of adding actual geometric detail to what would otherwise be flat surfaces.
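(Roughly speaking, after the tessellator subdivides a patch, each new vertex is pushed along its normal by a value read from a height map. A minimal sketch of that displacement step, not any particular engine's domain shader:)

```cpp
// Minimal sketch of the displacement step that runs after tessellation
// (conceptually, the domain/evaluation shader stage). The tessellator has
// already generated extra vertices; this just pushes each one along its
// normal by the value sampled from a displacement (height) map.
#include <functional>

struct Vec3 { float x, y, z; };

Vec3 displaceVertex(Vec3 position, Vec3 normal,
                    float u, float v,                                    // patch UVs of the new vertex
                    float displacementScale,
                    const std::function<float(float, float)>& heightMap) // returns height in [0,1]
{
    float h = heightMap(u, v) * displacementScale;
    return { position.x + normal.x * h,
             position.y + normal.y * h,
             position.z + normal.z * h };
}
```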
 

Yeah, but which is the least performance-intensive solution? We see techniques like tessellation and displacement mapping as an evolution of older methods, enabled by newer, power-hungry hardware. But we haven't really looked much into performance- and energy-efficient solutions. We are hitting ceilings when it comes to fitting new solutions into energy-efficient consumer boxes, because those solutions are direct results of more powerful, energy-inefficient hardware.
And as we continue to develop solutions that stem only from more powerful hardware, these "ceilings" become more and more of a problem.
 
I was excited by tessellation, but really I see nothing great coming from either of the big actors on the market, only minor cosmetic improvements.
Clearly this tech looks to achieve impressive results. I remember when Bruce xx (sorry, I don't remember his name) opened a thread on this board: a lot of skepticism mixed with some legitimate concerns, and no enthusiasm at all. It's a bit like what the creator of the Atomontage engine says: there is a lot of conservatism in the market.
Usually the comments are like, "OK, it's kind of OK, but it won't work for this, this, and this, etc."
It's somehow related to the "next-gen" talk: OK, manufacturers could offer an order of magnitude more power, but supposedly they should wait because it's not enough to make enough of a difference. When will it be enough?
20 times? 30 times? A hundred?
I believe Sweeney is right: real-time 3D needs to use multiple techniques to provide the jump in quality even my mom would notice. I hope we will take the road of software rendering soon; GPUs are getting more flexible, CPU power is growing too, and developers should be able to rely on the most effective technique for the job.
It's unclear how this technique can be animated, but it could at least be used for some parts of the scenery with outstanding results. The same goes for voxels: they have trade-offs and take a lot of space, so let's use them only when and where it's relevant. All those guys who work on alternatives to polygons have teams that can be counted on one hand (often one finger); if an institutional actor were to jump in, one could expect a lot of improvements.
I don't buy the economic argument about why new techniques don't catch on; even the developer of this solution has tools to import models from ZBrush, and as he states, it's not as if LOD management, memory issues, etc. don't already consume a hell of a lot of time in any big project.

The graphics pipeline has to explode, IMHO. When you have a hammer, everything looks like a nail; in fact, people spend a lot of time shaping problems into nails, with more and more diminishing returns.
 
It's not the graphics pipeline, it's the art ... art took half a decade too long to get out of the low-poly mindset; who knows how long it will take to get out of the low-abstraction mindset. (I'm not saying it's easy to create template-based generators of highly abstract things like houses, but it's also not impossible ... if we want homes to be completely detailed right down to the cutlery in the drawers, then they can't be modeled or artist-placed right down to the cutlery in the drawers, not unless we want games with very small worlds.)
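(A toy example of what I mean by a template/generator, with completely made-up names and numbers: drawer contents are derived on demand from a seed instead of being stored or hand-placed, so the world can be detailed without anyone modelling the forks.)

```cpp
// Toy illustration of "template-based generation": nothing below is stored or
// artist-placed; the cutlery in a given drawer is derived on demand from the
// house ID and drawer index, so a huge world stays cheap to author and store.
// Entirely hypothetical names and values, just to make the idea concrete.
#include <cstdint>
#include <cstdio>

static uint32_t mix(uint32_t a, uint32_t b) {         // tiny deterministic hash
    uint32_t h = a * 2654435761u ^ b * 40503u;
    return h ^ (h >> 15);
}

// "Opens" a drawer: regenerates its contents from the seed every time.
void listDrawerContents(uint32_t houseId, uint32_t drawerIndex) {
    uint32_t seed  = mix(houseId, drawerIndex);
    int      count = 3 + static_cast<int>(seed % 5);   // 3..7 items per drawer
    const char* names[] = { "fork", "knife", "spoon", "corkscrew" };
    std::printf("house %u, drawer %u:\n", houseId, drawerIndex);
    for (int i = 0; i < count; ++i)
        std::printf("  %s\n", names[mix(seed, static_cast<uint32_t>(i)) % 4]);
}

int main() {
    listDrawerContents(1042, 3);   // same inputs always give the same contents
    listDrawerContents(1042, 3);
    listDrawerContents(7, 0);
}
```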
 
Well, whereas it seems the technique may allow artists to sculpt every detail of, for example, a house, I don't think anybody thinks that would be a proper use of the tech.
For me, it would come down to making the most of assets that have already been created, which, for some elements, a reasonable number of polygons plus displacement maps doesn't achieve.
To me, the problem is integrating various techniques into the pipeline. Say some scenery is generated with point clouds (indestructible), some with voxels, and the rest with polygons: the moving characters/objects have to interact with (and not intersect) all those elements properly, and I've no clue whether that would be trivial. But for some games you don't need that; think of the many games (most?) where most of the scenery is unreachable and indestructible.
To some extent it could be the same with voxels: even packed into proper data structures they take a lot of space, but if you use them only for some elements of the scenery, the problem may alleviate itself.
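(Some back-of-envelope numbers for "they take a lot of space", using my own assumed figures of 1 byte per voxel and surface-only occupancy:)

```cpp
// Back-of-envelope voxel memory cost (illustrative numbers only).
// A dense grid grows with the cube of resolution; a sparse structure that
// stores only occupied voxels (e.g. a thin surface shell) is far smaller,
// which is why restricting voxels to select scenery elements helps so much.
#include <cstdio>

int main() {
    const double bytesPerVoxel = 1.0;            // assumption: 1 byte of material/occupancy data
    for (long long res : {256LL, 1024LL, 4096LL}) {
        double denseBytes  = static_cast<double>(res) * res * res * bytesPerVoxel;
        // Assumption: only a surface shell a few voxels thick is occupied,
        // roughly proportional to res^2 instead of res^3.
        double sparseBytes = static_cast<double>(res) * res * 4.0 * bytesPerVoxel;
        std::printf("%5lld^3 grid: dense %9.2f MiB, surface-only ~%7.2f MiB\n",
                    res, denseBytes / (1024.0 * 1024.0), sparseBytes / (1024.0 * 1024.0));
    }
}
```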
 
Yeah, but which is the least performance-intensive solution? We see techniques like tessellation and displacement mapping as an evolution of older methods, enabled by newer, power-hungry hardware. But we haven't really looked much into performance- and energy-efficient solutions. We are hitting ceilings when it comes to fitting new solutions into energy-efficient consumer boxes, because those solutions are direct results of more powerful, energy-inefficient hardware.
And as we continue to develop solutions that stem only from more powerful hardware, these "ceilings" become more and more of a problem.
Least performance-intensive solution out of what options? Right now, tessellation + displacement mapping is the only practical means of getting that sort of geometric detail in real time on current hardware. It's certainly a much better option than just having the raw verts + triangles, since you can dynamically tessellate based on distance to the camera, which keeps you from wasting triangles where you don't need them. Plus you trade ALU for bandwidth, which makes it a lot more scalable (and power-friendly).
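(To make the distance-based part concrete, a hand-rolled sketch rather than any API's actual hull-shader code: the per-patch tessellation factor is just a function of distance to the camera, so the triangles go where the viewer can actually see them.)

```cpp
// Sketch of distance-based tessellation: the subdivision factor for a patch
// falls off with distance to the camera, so triangles are spent near the
// viewer and not wasted in the distance. Constants are made up for illustration.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float tessFactorForPatch(const Vec3& patchCenter, const Vec3& cameraPos,
                         float minFactor = 1.0f, float maxFactor = 64.0f,
                         float nearDist = 5.0f, float farDist = 200.0f)
{
    float dx = patchCenter.x - cameraPos.x;
    float dy = patchCenter.y - cameraPos.y;
    float dz = patchCenter.z - cameraPos.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

    // 1 at nearDist (or closer), 0 at farDist (or farther), linear in between.
    float t = std::clamp((farDist - dist) / (farDist - nearDist), 0.0f, 1.0f);
    return minFactor + t * (maxFactor - minFactor);
}

int main() {
    Vec3 camera{0.0f, 2.0f, 0.0f};
    for (float z : {5.0f, 25.0f, 100.0f, 400.0f})
        std::printf("patch at z=%5.0f -> tess factor %5.1f\n",
                    z, tessFactorForPatch({0.0f, 0.0f, z}, camera));
}
```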
 
lol - this guy again...
He needs to start substituting the word 'static' for the word 'unlimited'. :)
 
It might sound impressive, but the artwork does not impress me; show me something realistic to compare it against...
 
Forget the art, will ya... The guy's demonstrating his graphics rendering technique, not the actual graphics. He even says this in the video linked in the OP.
 
How similar is this to the proposed next-gen id engine with megavoxels?

AFAIK, with this sort of tech artists will make art the same way as now, and then it gets converted to voxels or points, like in this one.
 
No, this is different. Id's approach would mean every voxel in the world is unique, just as every texel is unique right now in Rage (more or less, anyway).

This approach uses separate objects instead, probably with a lot of instancing - re-use of the same asset over and over again. If there's enough variety and randomness, it's not as noticeable - you'll only get a subtle sense that the imagery is boring.
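(A crude way to picture that instancing, my own illustration rather than how this engine actually stores things: the heavy asset data lives once, and the scene is just a long list of lightweight placements pointing back at it.)

```cpp
// Crude picture of instancing: heavy asset data is stored once, and the scene
// is just a long list of lightweight placements referring back to it. Memory
// grows with the number of placements, not with the detail of the asset.
#include <cstdio>
#include <string>
#include <vector>

struct Asset {                       // stored once, however detailed it is
    std::string name;
    size_t dataBytes;                // stand-in for the point cloud / mesh payload
};

struct Instance {                    // per-placement cost: one index + a transform
    size_t assetIndex;
    float position[3];
    float rotationY;
    float scale;
};

int main() {
    std::vector<Asset> assets = { {"rock", 40'000'000}, {"tree", 120'000'000} };  // made-up sizes
    std::vector<Instance> scene;
    for (int i = 0; i < 10000; ++i)                       // 10k rocks, one payload
        scene.push_back({0, {i * 2.0f, 0.0f, i * 1.5f}, 0.7f * i, 1.0f});

    size_t assetBytes = 0;
    for (const Asset& a : assets) assetBytes += a.dataBytes;
    size_t instanceBytes = scene.size() * sizeof(Instance);
    std::printf("asset data: %zu MB, instance data: %zu KB\n",
                assetBytes / 1'000'000, instanceBytes / 1'000);
}
```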


I'm not sure id is going to go down the voxel path, though. HW advancements aren't geared that way, and it doesn't solve characters, proper dynamic lighting, destruction, and such. Just look at Battlefield 3 to see how well it manages without voxels; if high-quality tessellation or some other displacement-based tech becomes available, it'll get even better.
 
Got a question: I always thought tessellation and displacement mapping were the bridge to getting an in-game model looking as detailed as the high-res ZBrush source model. How true is that?
 