Unlimited Detail, octree traversals

I did not play Rage, but I assumed they had at least some rough normal and specular maps for muzzle flashes, the sun's specular highlight, etc. Did they get away without any of that?!
 
Why engineers don't fit in with corporations:

http://youtu.be/BKorP55Aqvg

I think this sketch summarises this infinite detail tech perfectly.

HAHAHA that's a nice one.

Reminds me of a manager recently spending an hour trying to sell WhiteBox encryption (basically, your password is embedded in the code that does the encryption/decryption itself) to a customer ...in order to protect the password that has to be sent to the wifi chip (you send the password to the chip, which uses it internally in the hardware).

The entire team spent the hour staggered, trying to convince him it was NOT possible, yet he triumphantly continued to sell it.

nice one, thank you :)
 
Well if you're going for that, why not generate the views straight from the photographs? I think there already are camera systems to capture a full 360 range in a single shot...

Funny you say that.
Nearly 10 years ago I worked on a game that did precisely that. Captured real sets with an automated 360 degree ceiling mounted camera rig (did multiple passes in different directions using wide angle lenses). Naturally, file size and streaming was the major bottleneck at runtime - and the capture and conversion process wasn't exactly trivial. The game existed to sell the tech, as it was 'more realistic than modern graphics' - which was only true when ignoring the huge limitations otherwise present.

https://www.youtube.com/watch?v=5uQMXJc9zTc

My god, it hasn't aged well; I forgot how cheesy the minigames were :mrgreen:
If you are wondering why the colors are a bit odd, the camera was a 12-bit raw camera intended for industrial use, which unfortunately meant it had a terrible color filter. Learnt a lot about imagery in that project.
 
I did not play Rage, but I assumed they had at least some rough normal and specular maps for muzzle flashes, the sun's specular highlight, etc. Did they get away without any of that?!
They didn't. The specular was also baked into the (quite low-res) textures. Dynamic objects had normal maps and dynamic lighting (specular, etc.), which often made them look out of place (shining bodies lying on a dark floor). It was good enough for last gen, especially as it made 60 fps possible. But the business case wasn't that good: 3 DVDs must have cost them quite a lot extra per unit.
 
Let me compare this to real-time ray tracing. It's possible to ray-trace quickly on current hardware if you only need primary rays. But primary rays alone don't bring anything fancy over standard rasterization; nobody would be excited about a ray tracer without any reflections or shadows. As soon as you add secondary rays to the mix for shadows and reflections, ray tracing performance plummets. There is no real-time ray tracer that can render complex scenes with dynamic lighting and secondary rays (reflections) at an acceptable quality and frame rate.
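To make the primary-ray point concrete, here is a toy Python raycaster (my own sketch, not anyone's production code): one ray per pixel against a single sphere. This answers only "what surface is visible here?", which is exactly the question rasterization already answers cheaply; every shadow or reflection would add at least one more ray query per pixel on top of this cost.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return distance to the nearest hit along a normalized ray, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c  # direction is normalized, so a == 1
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def render_primary(width, height, sphere):
    """One primary ray per pixel. This alone only answers visibility,
    the same thing rasterization already does cheaply."""
    center, radius = sphere
    hits = 0
    for y in range(height):
        for x in range(width):
            # Simple pinhole camera at the origin, looking down -z
            dx = (x + 0.5) / width * 2.0 - 1.0
            dy = (y + 0.5) / height * 2.0 - 1.0
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            d = (dx / length, dy / length, -1.0 / length)
            if ray_sphere((0.0, 0.0, 0.0), d, center, radius) is not None:
                hits += 1
    return hits

# A sphere in front of the camera covers some pixels; behind it, none.
print(render_primary(32, 32, ((0.0, 0.0, -5.0), 1.0)) > 0)
```

Adding one shadow ray per light and a handful of reflection/AO rays multiplies that inner loop several times over, and those extra rays are far less cache-coherent than the primary ones.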

That's interesting. Years ago there was a ray tracer that rendered a million or so sunflowers, which was really cool, though of no practical use for games. You get *some* complex stuff for free if you don't mind it coming at the expense of everything else.
 
So, apparently Dice has developed in-house photogrammetry tools and it will be used extensively for Star Wars Battlefront. I found this little writeup about The Vanishing Of Ethan Carter. http://www.theastronauts.com/2014/03/visual-revolution-vanishing-ethan-carter/

It seemed to be used to great effect in that game.

I'm curious to see what the setup is, and what limitations it has.


BTW, the only reason I responded in this thread was that there was already a bit of a discussion about photogrammetry. I'm not an "unlimited detail" believer.
 
There is one major issue with using photo scanning for environments: it's very hard to convert the data into assets that can be dynamically lit. The photographs capture all the lighting and surface response, including reflections and shadows. Shooting in overcast conditions can help a bit, but even a small change in the lighting during the capture session will inevitably mess up the results.
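One common way to tackle the baked-in lighting (my simplified sketch, not any studio's actual toolset) is to estimate the irradiance at each texel, for example by baking ambient occlusion from the reconstructed geometry under an assumed uniform overcast sky, and divide it out of the captured color to approximate the albedo:

```python
def delight(captured_rgb, irradiance, eps=1e-4):
    """Approximate albedo by dividing baked-in lighting out of a scanned color.
    `irradiance` is a scalar estimate of how much light reached this texel
    (e.g. ambient occlusion baked under an assumed overcast sky).
    This is a simplification; real pipelines add lots of manual cleanup."""
    return tuple(min(c / max(irradiance, eps), 1.0) for c in captured_rgb)

# A crevice texel that photographed dark mostly because it was occluded
# (irradiance 0.4), not because the material itself is dark:
albedo = delight((0.2, 0.18, 0.15), 0.4)
```

Dividing by estimated irradiance brightens occluded texels back toward their true surface color; texels shot in direct sunlight also carry a directional component and specular highlights, which are much harder to remove, hence the preference for overcast capture.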

One thing DICE might have developed is some image processing toolset that can remove the unnecessary information; maybe with the help of shooting in some special set-ups and using multiple photographs?
RAD built their own fabric scanner for The Order, with a computer-controlled array of LED lights generating data that was then cleaned up; it also gave them normal maps. But building something like that to work at the scale of entire buildings or landscapes is quite a challenge :)
 
So, I'm guessing the limitations are just assigning materials correctly, "baked" lighting, and "baked" shadows. Well, that, and reducing the textures and models appropriately for a game. It seems like it would be a good fit for some kind of virtual texturing system. I couldn't find any info on The Vanishing Of Ethan Carter's lighting and shadows to know how dynamic they are, but it is UE3 or UE4.

One thing DICE might have developed is some image processing toolset that can remove the unnecessary information; maybe with the help of shooting in some special set-ups and using multiple photographs?
RAD built their own fabric scanner for The Order, with a computer-controlled array of LED lights generating data that was then cleaned up; it also gave them normal maps. But building something like that to work at the scale of entire buildings or landscapes is quite a challenge :)

Somehow The Vanishing Of Ethan Carter used it to capture houses and buildings. That's my understanding anyway. Not sure how they got it to fit in with their lighting.
 
Two recent videos:



It looks like this thing keeps getting better.

What do you think?

EDIT: by the way, shouldn't this thread be moved to the PC section?
 
Considering the discussion was started and focused on "Is this a technology that shows promise for this and next-gen consoles?", no, it should not be in the PC section.
 
Well, I know, but the thread title is just the name of a technology that is being developed on and for PC. It's not that I care that much; the main purpose of my post was to share the videos and read your opinions. :)
 
"Real holodeck finally created" - already hyperbolic bollocks of the highest order that makes me angry. Holodeck tech is bollocks itself - no-one's going to be creating temporary 'matter' (forcefields or otherwise) any time...ever...so claiming to have to working for real now is just click-baiting.

As for their posturing about infinite atoms, going it alone, yada yada - they're not unique. Media Molecule are getting better results from non-triangular objects. Also, GPUs aren't mostly built for drawing triangles these days: ignore the ROPs and you have teraflops of compute. And he doesn't respond to the actual criticisms.

Basically, regardless of what achievements they are making (the holographic room tech is probably awesome and would look fabulous with a decent triangle-based engine!), the guy's attitude is so sucky and up himself that they piss everyone else off. He makes bollocksy comparisons such as claiming terrain these days is flat textures and a few flat grass textures - ummm, that was back when you started your Unlimited Detail nonsense. And they still haven't got a game using this tech after all their promises, and they still provide the ugliest, most repetitive graphics ever despite having 'Solid Scan' tech. And their 'we don't want an ugly dumb headset' ignores the fact that Joe Public can put a headset in their living room but can't fit a holoroom the same way.

Damn, this guy gets my goat!
 
^^^Hahaha! :D

By the way, when you mentioned MM, you were talking about Dreams, weren't you? I'm fascinated by its graphics; not top-notch, but the tech behind it is so impressive...
 
They are not clinical enough in their self-assessment and about the limits of the "unlimited". They bias everything in favor of their technology, which is sad, because they sabotage themselves. The implementation itself is nice, as nice as anything else developed in good faith.
 
Yeah, there's merit to what they're doing if only they didn't poop on everyone else. How much money would Epic or EA have made these past five years if instead of using triangles they pursued super-efficient voxel sets? It still has to be proven that their data model can animate fluidly. They liken it to Blizzard, simple games, but what about the best facial capture systems? SSS? Soooo many limitations to Euclideon's pursuits but they talk like it's a perfect solution. Utterly disrespectful to the rest of the entire industry.
 
Yeah, there's merit to what they're doing if only they didn't poop on everyone else. How much money would Epic or EA have made these past five years if instead of using triangles they pursued super-efficient voxel sets? It still has to be proven that their data model can animate fluidly. They liken it to Blizzard, simple games, but what about the best facial capture systems? SSS? Soooo many limitations to Euclideon's pursuits but they talk like it's a perfect solution. Utterly disrespectful to the rest of the entire industry.
Yeah, I get your point and I agree, but I can understand that they act this way because everybody shat on them (although they may be partially responsible for this, too, so it's a loop).
 
Yeah, I get your point and I agree, but I can understand that they act this way because everybody shat on them
His claims were bollocks and insulting. He's experiencing the reaction of ordinary people responding as they normally would when insulted. This isn't a case of medieval thinkers refusing to see the truth he's discovered. He's no Galileo or Darwin, nor a victim of closed-minded thinking.

In stark contrast, look at how Alex Evans talks about MM's work. He's constantly referring to peers and other similar developments, and being completely honest about what is possible and what isn't. He's also not attracting large investment from clueless Angels with claims of changing the whole world.
 
In stark contrast, look at how Alex Evans talks about MM's work. He's constantly referring to peers and other similar developments, and being completely honest about what is possible and what isn't. He's also not attracting large investment from clueless Angels with claims of changing the whole world.
He's a wonderful professional, no doubt about it.
 
"Real holodeck finally created" - already hyperbolic bollocks of the highest order that makes me angry. Holodeck tech is bollocks itself - no-one's going to be creating temporary 'matter' (forcefields or otherwise) any time...ever...so claiming to have to working for real now is just click-baiting.

As for their posturing about infinite atoms, going it alone, yada yada - they're not unique. Media Molecule are getting better results from non-triangular objects. Also, GPUs aren't mostly built for drawing triangles these days: ignore the ROPs and you have teraflops of compute. And he doesn't respond to the actual criticisms.

Basically, regardless of what achievements they are making (the holographic room tech is probably awesome and would look fabulous with a decent triangle-based engine!), the guy's attitude is so sucky and up himself that they piss everyone else off. He makes bollocksy comparisons such as claiming terrain these days is flat textures and a few flat grass textures - ummm, that was back when you started your Unlimited Detail nonsense. And they still haven't got a game using this tech after all their promises, and they still provide the ugliest, most repetitive graphics ever despite having 'Solid Scan' tech. And their 'we don't want an ugly dumb headset' ignores the fact that Joe Public can put a headset in their living room but can't fit a holoroom the same way.

Damn, this guy gets my goat!
The industry would certainly respect him more if his personality was closer to Alex Evans's (Media Molecule). I have spoken with Alex (online and in person). He is very friendly and open about their technology, and he respects and understands the strengths and weaknesses of the technologies they didn't end up using.

Euclideon is solely talking about rendering primary visibility ("primary rays"), nothing about shadows (sun light, local lights), reflections, or GI/AO. Primary rays are coherent (ray directions of neighboring pixels are very close), so finding the surface is relatively straightforward and results in coherent memory accesses. Shadow rays are also relatively coherent (slightly more discontinuities). But in order to generate realistic-looking scenes, you need shadows from each light source. Is Euclideon's tech fast enough to query multiple collision points per pixel to enable real-time lighting with high-quality shadows? So far we have only seen baked diffuse/ambient-only lighting, not even specular (angle-dependent lighting -> needs real-time evaluation). In addition to shadows, modern PBR lighting models need to query incoming light locally: you need to be able to quickly calculate how much occlusion (blocking geometry) there is in each direction at a point on the surface. Is Euclideon's data structure a good fit for this purpose?
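For intuition, here is a toy sparse-voxel-octree sketch in Python (my own illustration; the structure and names are assumptions, not Euclideon's actual code). The point is cost accounting: the primary-visibility query is one traversal per pixel, and every extra shadow, AO, or reflection sample repeats a query of the same order of cost.

```python
def build_octree(depth, solid):
    """Build a sparse octree over the unit cube. `solid(cx, cy, cz, half)`
    decides whether a leaf cell (given center and half-size) is occupied.
    Nodes are: True (solid leaf), None (empty), or a list of 8 children."""
    def build(cx, cy, cz, half, d):
        if d == 0:
            return True if solid(cx, cy, cz, half) else None
        q = half / 2.0
        kids = [build(cx + (q if i & 1 else -q),
                      cy + (q if i & 2 else -q),
                      cz + (q if i & 4 else -q), q, d - 1)
                for i in range(8)]
        return kids if any(k is not None for k in kids) else None
    return build(0.5, 0.5, 0.5, 0.5, depth)

def query(node, x, y, z):
    """Descend to the leaf containing point (x, y, z) in [0,1)^3."""
    cx = cy = cz = 0.5
    half = 0.5
    while isinstance(node, list):
        bx, by, bz = x >= cx, y >= cy, z >= cz
        node = node[bx + 2 * by + 4 * bz]
        half /= 2.0
        cx += half if bx else -half
        cy += half if by else -half
        cz += half if bz else -half
    return node is True

def first_hit(tree, origin, direction, step=1e-3):
    """Primary-visibility query by brute-force marching; every extra
    shadow/AO/reflection ray repeats this whole cost per pixel.
    (A production traversal would skip empty octants hierarchically.)"""
    t = 0.0
    while t < 2.0:
        p = [origin[k] + t * direction[k] for k in range(3)]
        if all(0.0 <= c < 1.0 for c in p) and query(tree, *p):
            return t
        t += step
    return None

# Sphere of radius 0.2 at the cube center, voxelized to depth 4:
tree = build_octree(4, lambda cx, cy, cz, h:
                    (cx - 0.5)**2 + (cy - 0.5)**2 + (cz - 0.5)**2 < 0.04)
print(first_hit(tree, (0.5, 0.5, 0.0), (0.0, 0.0, 1.0)))  # front of the sphere
```

Primary rays from adjacent pixels walk nearly identical node paths, so the memory traffic is coherent; secondary rays for local lights and GI start from scattered surface points in scattered directions, which is where this kind of structure gets expensive.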

So far they have only shown 100% baked lighting at 30 fps @ 1080p. In comparison, my new tech can render primary rays in 0.4 ms @ 1080p on a high-end PC (2500 fps). This is not something to brag about. Modern AAA games also need high-quality real-time PBR lighting (with high-quality AO) and high-quality soft shadows. Nobody knows how well Euclideon handles scenes with dynamic PBR lighting plus lots of moving shadow-casting light sources.

Reflection rays and ambient rays (and lighting rays in general) are highly random. Rasterized games use various hacks such as cubemaps to approximate these phenomena. Does Euclideon also need these hacks, or is their technique as flexible as ray tracing? The important question is: how fast are random (location & direction) ray queries from their data structure? If they can only do fast projection of the whole scene, then their technology has pretty much the same limitations as rasterization and requires similar hacks. Modern games are not purely rasterization based; people have been combining other techniques and data structures to sidestep the limitations of rasterization. It would be highly desirable for radically new techniques to solve these hard problems well. Solving the easiest problem of primary visibility ("primary rays") is not enough nowadays.
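As an example of the kind of hack meant here: rasterized games approximate a reflection ray by looking the reflected direction up in a prebaked cubemap instead of tracing into the scene. A minimal sketch of that lookup (one common face/UV convention of my choosing; conventions vary by API):

```python
def cubemap_face(direction):
    """Map a reflection direction to a cubemap face plus 2D texture coords,
    the classic rasterization-era stand-in for a true reflection ray."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:            # +/-X face dominates
        face = '+x' if x > 0 else '-x'
        u, v, m = (-z if x > 0 else z), -y, ax
    elif ay >= az:                       # +/-Y face dominates
        face = '+y' if y > 0 else '-y'
        u, v, m = x, (z if y > 0 else -z), ay
    else:                                # +/-Z face dominates
        face = '+z' if z > 0 else '-z'
        u, v, m = (x if z > 0 else -x), -y, az
    # Project onto the face and remap from [-1,1] to [0,1] texture space
    return face, (u / m * 0.5 + 0.5, v / m * 0.5 + 0.5)

print(cubemap_face((0.0, 0.0, 1.0)))  # center of the +z face
```

The lookup is O(1) and perfectly cache-friendly, but it returns prebaked surroundings from the probe's position rather than a true intersection, which is why it breaks down for dynamic objects and nearby geometry.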

I also find it amusing that they still compare their tech against Xbox 360 and PS3 games (20x+ faster GPUs are common nowadays). Also Bruce Dell seems to be completely unaware of modern mesh and texture streaming techniques. He believes polygon based games need to load everything up front and describes this as a big limitation. Apparently he has never heard about virtual texturing, even though I am guessing that their data streaming is probably almost identical (virtual texturing extends trivially to volumetric data). Virtual texturing allows "unlimited" texture resolution at fixed run-time memory and processing cost (HDD storage space is obviously never unlimited). I heard they like unlimited stuff :D
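The streaming idea is simple to sketch (a toy Python LRU page cache of my own, not id's or Euclideon's implementation): a huge virtual address space backed by a small fixed set of resident pages, with pages loaded on first touch and the least-recently-used page evicted when full. The same scheme extends from 2D texture tiles to volumetric bricks.

```python
from collections import OrderedDict

class VirtualTexture:
    """Toy indirection table + fixed-size physical page cache.
    Runtime memory stays constant no matter how large the virtual
    texture (or voxel volume) on disk is; pages stream in on demand."""
    def __init__(self, capacity, load_page):
        self.capacity = capacity       # physical pages resident at once
        self.load_page = load_page     # stand-in for streaming from disk
        self.cache = OrderedDict()     # virtual page id -> page data
        self.misses = 0

    def sample(self, page_id):
        if page_id in self.cache:
            self.cache.move_to_end(page_id)   # mark most-recently-used
            return self.cache[page_id]
        self.misses += 1
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)    # evict least-recently-used
        data = self.load_page(page_id)
        self.cache[page_id] = data
        return data

# "Unlimited" virtual address space, fixed physical footprint:
vt = VirtualTexture(capacity=4, load_page=lambda pid: f"page-{pid}")
for pid in [0, 1, 2, 3, 0, 1, 99, 0]:
    vt.sample(pid)
print(len(vt.cache))  # never exceeds 4 resident pages
```

This is why "unlimited" detail at fixed runtime cost is not unique to voxel engines: any tiled streaming scheme gives the same property, bounded only by storage.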
 