Unlimited Detail, octree traversals

Btw, if you don't think that gaming laptop (Bruce's words) is high-end, you're mistaking high-end for highest-end. That laptop has the top mobile CPU and 8GB of RAM. What's not high-end about it? Yes, it's not a desktop machine and it doesn't have a $999 CPU. It's just a high-end gaming laptop.

No kidding. My PC is a bit old but still capable, and that laptop beats the snot out of it (at least on the CPU and RAM side).
 
They've been at this for years, and their demos show all the limitations that the obvious solution would, so why do they deserve any benefit of the doubt?
If it's about "oh, we're not artists", pay a contractor; it's just not that expensive to put together a good-looking demo, and demonstrating a dataset that didn't exhibit all of the inherent problems of volume data would remove most of his technically competent critics.
 
As for the "he shouldn't provide details otherwise people will copy it", actually publishing something will establish prior art and allow him to apply for a software patent (ugg - I can't believe I'm writing this). Keeping it a secret is actually more dangerous because patents don't have to be performant: someone could use a clumsy unoptimised version of Bruce's algorithm, quickly patent it and then Bruce would have to pay license fees.
Curiousity got the better of me and, by the look of it, he repeatedly appears to apply for a patent but then lets it lapse (before it becomes public?).
 
There's a pretty common reason why patents are allowed to lapse... The examiner cites one or more documents that render the invention obvious or show it to have a lack of novelty.

If that is the case then there's a very high chance that the idea has already been patented, or at least disclosed well enough for anyone skilled in the art to know it. While technically it's also possible that there is an invention but he's just utterly crap at writing claims and has messed them up 5 times in 7 years, or that he simply forgot to respond to any of the previous 5 application deadlines, it's pretty difficult to move slower than the patent application process, and the cost of filing isn't trivial, so you'd think you'd put some effort in!

On the plus side it does mean that the examiners must have clearly pointed out an awful lot of relevant background for his previous applications and the fact he's still applying should mean that despite being shown that information the inventor believes there is still something novel in there. :) Alternatively a skeptic might suggest that constantly applying and then abandoning an application would also be a way of keeping an application alive so you could tell investors that you have a patent pending... Fingers crossed it's the first - but as time passes and the demos still look exactly like what you'd get with regular SVOs, the second category is becoming increasingly likely. Personally, if I were relying on a core invention for my business model I'd be filing a lot more than one patent on the subject! Especially after 8 years!
 
There's a pretty common reason why patents are allowed to lapse... The examiner cites one or more documents that render the invention obvious or show it to have a lack of novelty.

...it's pretty difficult to move slower than the patent application process, and the cost of filing isn't trivial, so you'd think you'd put some effort in!
...
On the plus side it does mean that the examiners must have clearly pointed out an awful lot of relevant background for his previous applications and the fact he's still applying should mean that despite being shown that information the inventor believes there is still something novel in there.
I followed the link and saw no explanation as to why the application lapsed, so we don't know if he spent any money or was just riding the initial phases. I dunno how it is in the land down under, but in the UK it's free to file a patent, and only £200 in fees over the first year of application for the two phases (I forget what they are). It is possible to file a UK patent for nothing and then use it as security once you land an investor. If you find no investor, the patent isn't published and you can reapply at a later date.

Edit: It's also illegal to claim you have a patent pending without the application having progressed far enough through the procedure. I forget which step that is, but just having filed a patent isn't enough to claim Patent Pending status, and investors could take him to task on that if he was using a non-granted patent as collateral.

Unless there's clarification, I'd say that's what Dell is doing here. He filed a patent, couldn't/didn't see it through, and so refiled. Evidently on the first attempt he changed his mind, or got confused by the system, as he applied twice within a few days - he's clearly finding his way! I don't imagine he has paid for the patent, been turned down, and been exposed to the prior art or similar as you suggest, and hence I don't think he has compared his work to others' and decided it still has merit.
 
hmm - you learn something new every day... I'd always assumed the filing costs were higher than that.

Tbh my experience has always just been that you file a patent and then at some point in the future you start getting examination reports back. I'd not considered that there might be a benefit in sitting in the limbo state between filing the document and peer review. The repeated filing did make me wonder if there might be some inventive concept he's determined to protect, but I guess if there's no requirement for applications to be examined there's potentially been no feedback.
 
John Carmack's opinion on the Unlimited Detail demo - now that's how you speak!!!
http://youtu.be/hapCuhAs1nA?t=17m15s
Great interview. Thanks for sharing. By the way, thumbs up to the interviewer: he seems like a really, really nice guy. He has a good vibe about him. He actually listens to what the interviewee says.

In contrast, there is the Eurogamer interviewer who is always nodding at the people he is interviewing while propping up his right arm with his left one. He is always nodding but never listening, and I just can't stand his interviews because of it, so I don't watch Eurogamer videos either.

There are very few things that I dislike more than people who don't listen.
If you listen to others you not only learn, you also help them and understand what they are actually trying to tell you. It's essential in real life (it also works great with your partner). -liolio style :smile:

On this new technology, well, I am not well versed in graphics, but it sounds okay to me. It doesn't seem very friendly or approachable for current-generation consoles, though.

I also agree with Laa-Yosh that a good artist would be of immense help to create a stunning presentation. The words "unlimited detail" resound within us because of Bruce Dell's insistence, but a great artist would turn words into realities. The graphics looked just okay, but not awe-inspiring in any way, because of the artistic direction, imho. If he has an artist already, well, he probably needs some help too. I am looking forward to seeing how this technology develops, and whether this promisingly talented young company can pull it off.

Polygon-based technology gets more and more powerful every day, while raytracing & voxels are still in their early stages of development. But I don't know, those terms are beyond my knowledge.

Anyway, I think Carmack's words in the interview say it all: truer words were never spoken.
 
To add to what Shifty said, it's also a matter of claims versus proof. OnLive actually does not work according to their original claims, that latency wouldn't be a problem (it's noticeable - some games more than others) and that they'd be streaming 720p (and above!) in real time: there are noticeable video compression artefacts. It's simply not possible to do what they claimed it would do.

Same here with Euclidean. The claims are pretty outrageous. The unlimited part is demonstrably false because there isn't any computer with an unlimited HDD to store such unlimited detail. The only way to do unlimited detail without requiring unlimited storage is fractals and similar mathematical procedural generation, which very few games have used despite it being the de facto strategy in 4K demos. Just because you can make a tech demo does not mean you can make a game engine.
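
To make the fractal/procedural point concrete, here's a minimal value-noise sketch (my own toy example, nothing to do with Euclidean's code): it returns detail at any coordinate and any zoom level with essentially zero stored data, which is the only sense in which "unlimited" is honest.

Code:
import math

def hash2(ix, iy, seed=1337):
    """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
    h = (ix * 374761393 + iy * 668265263 + seed * 2246822519) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def value_noise(x, y):
    """Smoothly interpolated lattice noise: defined for every real (x, y)."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)   # smoothstep weights
    top = hash2(ix, iy)     + sx * (hash2(ix + 1, iy)     - hash2(ix, iy))
    bot = hash2(ix, iy + 1) + sx * (hash2(ix + 1, iy + 1) - hash2(ix, iy + 1))
    return top + sy * (bot - top)

def fractal_detail(x, y, octaves=8):
    """Sum of noise octaves; add octaves as you zoom in to keep generating detail."""
    total, amp, freq = 0.0, 0.5, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        amp *= 0.5
        freq *= 2.0
    return total

# The whole "asset" is this code: sample it at metre scale or millimetre scale,
# the function always answers, and nothing is fetched from storage.
print(fractal_detail(12.5, 7.25, octaves=8))
print(fractal_detail(12.5003, 7.2501, octaves=20))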

....

Btw, if you don't think that gaming laptop (Bruce's words) is high-end, you're mistaking high-end for highest-end. That laptop has the top mobile CPU and 8GB of RAM. What's not high-end about it? Yes, it's not a desktop machine and it doesn't have a $999 CPU. It's just a high-end gaming laptop.

The lowest latency OnLive games have the same latency as some PS3 games have locally (150ms, same as Killzone 2). It's very hard to accuse them of not hitting their latency targets if your game is running in a server park hundreds of miles away and you can still get the same latency that a locally running PS3 AAA game gets. And I'm not even getting into the "how will they compress all the frames when the raw image data takes way too much bandwidth for this to work" part of the argument - although the arguments presented here regarding data volumes seem to be very much in the same vein.

Well, as for the unlimited part - that depends heavily on your definition of unlimited, does it not? Would you agree that if you can make 1 000 000+ identical 3D grains of sand for your game's terrain, that's quite "unlimited" compared to the minuscule number of polygons that we have today? (By the way, TES IV: Oblivion used procedural generation for their trees, if you want a AAA example to go with the awesome 4K demos.)

And finally, the hardware issue. A Core i7 2630QM scores a little less in the Passmark CPU test than the Core i7 950 from 2008. So while I completely agree that it is high-end for LAPTOPS it is not really "high-end" and definitely not "the highest end" when considered in the context of the modern gaming PC.

That very same goes for the amount of RAM as well. 8GB is the bare standard here in Estonia when building a new gaming rig, and many of the ones I build tend to include 16GB - not because it is desperately needed, but because DDR3 prices are so low that it would be a shame not to take advantage of them.

So maybe you're just confusing "new" with "highest-end" ;)
 
The lowest latency OnLive games have the same latency as some PS3 games have locally (150ms, same as Killzone 2). It's very hard to accuse them of not hitting their latency targets...
You can't justify their claims relative to performance comparisons, when they were very specific about what to expect:
"The round trip latency from pushing a button on a controller and it going up to the server and back down, and you seeing something change on screen should be less than 80 milliseconds.
"We usually see something between 35 and 40 milliseconds."
They have not hit 80ms latencies. Quite the contrary: latencies can get as high as 200ms, which feels extremely sluggish, and is exactly what the detractors were saying - these claims were impossible to hit. OnLive still made some amazing progress, but their claims were false. By that same token, a claim to offer unlimited detail is going to be a failure if the end result is finite, even if it achieves 2x the performance of polygonal engines. Dishonest claims don't get a free pass if some lesser example of progress is demonstrated. If I get millions in funding for a new compression algorithm that I claim compresses video to 50% the size of h.264 at the same quality, and in the end I only achieve an occasional 5% improvement, the investors might rightly be pissed that they spent so much on something that in the end wasn't as good as promised and wouldn't offer the dividends they expected.

- although the arguments presented here regarding data volumes seem to be very much in the same vein.
OnLive manage their data volumes via lossy compression. That's not an option with voxel datasets, unless you want macroblocking in your objects!

Well, as for the unlimited part - that depends heavily on your definition of unlimited.
The technical definition of 'unlimited' doesn't have any ambiguity. Dell's use seems to imply unique detail per pixel, but he doesn't describe to what LOD, so at some point you'll be zooming into the voxels at a distance where voxel density is less than one per pixel, and he'll have to interpolate data somehow. I'm not taking his definition as truly unlimited, but as a PR phrase to mean no visible data aliasing, which the data complexity tells us is impossible. It'd be equivalent to a movie where you can zoom 50x into any part of it and still get 1080p resolution. That would require 2500x as much storage per frame. We don't have the storage tech to stream data at that resolution and quality outside of massive servers. It certainly won't be coming from a BRD or local HDD! And whatever he's doing to get bigger datasets streamed can be done with the likes of megameshing, so at the end of the day chances are voxels aren't going to offer any advantage in that respect while still having the drawbacks (although they might be a better fit for an alternative rendering model).
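
For anyone who wants the arithmetic behind that 2500x figure, a quick sketch (nothing assumed beyond the 50x zoom and a 1080p frame):

Code:
# Back-of-the-envelope check on the 50x movie-zoom analogy above:
zoom = 50
base_w, base_h = 1920, 1080                      # native 1080p frame
full_w, full_h = base_w * zoom, base_h * zoom    # what you'd need to store
ratio = (full_w * full_h) / (base_w * base_h)
print(f"source frame: {full_w} x {full_h} pixels")
print(f"storage per frame: {ratio:.0f}x the original")   # 2500x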

...does it not? Would you agree that if you can make 1 000 000+ identical 3D grains of sand for your game's terrain, that's quite "unlimited" compared to the minuscule number of polygons that we have today?
Then by that token, imposters can achieve the same unlimited number of identical grains of sand, hence unlimited detail has been possible for years. And we can call an algorithmic texture 'unlimited detail'. I think Joe Public is going to be expecting 'unlimited variety' in terms of an end to repetitious content and low-res textures, rather than an unlimited number of a few objects repeated ad nauseam.

And finally, the hardware issue. A Core i7 2630QM scores a little less in the Passmark CPU test than the Core i7 950 from 2008. So while I completely agree that it is high-end for LAPTOPS it is not really "high-end" and definitely not "the highest end" when considered in the context of the modern gaming PC.
You should be considering it in terms of the current generation for which the games he's comparing to are designed. The current base standard for any game is well below 8GB of RAM and an i7. Most PCs running games aren't that hot. Consoles are well below that power standard. If you were to take the current consoles and scale them up to that sort of spec, the end results would utterly blitz the results UD is getting. Sure, you may not be able to zoom in to a tree at the millimetre level and still see detail, but a game designed from the ground up for an 8GB system would have incredible detail, along with working lighting and shadowing, at higher resolutions and framerates. So you aren't comparing like for like, and neither is Dell. He's not showing the best that's possible on the same hardware he's using and then demonstrating that he gets better results from it.
 
Well, as for the unlimited part - that depends heavily on your definition of unlimited, does it not? Would you agree that if you can make 1 000 000+ identical 3D grains of sand for your game's terrain, that's quite "unlimited" compared to the minuscule number of polygons that we have today? (By the way, TES IV: Oblivion used procedural generation for their trees, if you want a AAA example to go with the awesome 4K demos.)

There's only one definition of "unlimited" that matters, and there is no such thing as "quite unlimited" (similar to how things aren't "nearly infinite" or "almost perpetual"). There's only one reason to use the word "unlimited" in the way that it has been here (and the video presentation is stuffed with the use of the word), and it has nothing to do with giving an accurate idea of what your technology can achieve!
 
Unlimited *procedural* detail is still unlimited, in the very real sense that it remains detailed even if zoomed in, while the normal textures start "macroblocking". Just sayin'.

This unique detail hoopla is completely nuts; no artist would ever bother to make every object unique, or even a significant number of them, in an open-world 3D engine that can potentially handle millions of them. Think of the costs associated with that.

And Bruce Dell said that Euclidian's engine can be scaled down for whatever platform, by reducing the "atom" density appropriately. So I think consoles may well be able to run it. And if Euclidian is clever, they will also target smartphones - it shouldn't be that difficult considering that in a few years smartphones will have as much power as the X360/PS3.
 
Unlimited *procedural* detail is still unlimited, in the very real sense that it remains detailed even if zoomed in, while the normal textures start "macroblocking". Just sayin'.
Only they aren't using procedural content but laser-scanned models instead (which they splat into one scene a million times over so that it fits into the machine's memory).
 
This unique detail hoopla is completely nuts; no artist would ever bother to make every object unique, or even a significant number of them, in an open-world 3D engine that can potentially handle millions of them. Think of the costs associated with that.
I already mentioned that I doubt Dell was speaking literally; it was a broad term to convey the notion of very high resolution assets. 'Infinite' is another common term for large but definitely limited ranges of permutations. eg. a procedural texture engine cannot create infinite textures: given a limited texture size there'll be a finite number of possible texel values. 24 bits per pixel gives 2^24 possible values, so (2^24) to the power of however many pixels there are. Even if your texture is 1,000,000 by 1,000,000, the resulting astronomically large number of possible textures is still so vanishingly small next to true infinity that it counts as no textures at all!
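
To put a rough number on just how 'not infinite' that is, a quick sketch (the 1024x1024 texture size is just an example I picked):

Code:
import math

# Distinct 24-bit textures of a given size: (2^24) ** (width * height).
w, h, bits_per_texel = 1024, 1024, 24
exponent = w * h * bits_per_texel * math.log10(2)
print(f"a {w}x{h} 24-bit texture has ~10^{exponent:,.0f} possible variants")
# A number with roughly 7.6 million digits -- vast, but still finite.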

All these hyperbolic terms are there to mean "much more than you are used to", and bear no relation to the scientific basis of the terms.

And Bruce Dell said that Euclidian's engine can be scaled down for whatever platform, by reducing the "atom" density appropriately.
Every engine can be scaled down, and looks worse as a result. If the atom density gets too low, his engine is no better off than polygons. Instead of insufficient RAM to store high-poly models, leaving them rendered with wonky silhouettes or as flat grass planes, we'll have insufficient RAM to store high-resolution volumetric models, and they'll appear either as collections of cubes or as some form of hazy blobs.

I'd like to see what he can do with 512MB of RAM (on top of the OS). Of course we won't see squat, because he takes an "I'm hiding now" approach that avoids all challenges...
 
Shifty already covered OnLive's original claims and how you should be comparing Bruce's hardware spec versus his claims. WRT the other topics:

(By the way, TES IV: Oblivion used procedural generation for their trees, if you want a AAA example to go with the awesome 4K demos.)

Nope, it's completely different. Oblivion's SpeedTree used procedural generation to create assets to save development time; I'm talking about algorithmic procedural generation of assets to save storage. 4K and 64K demos use mathematical equations to generate textures and geometry because they can't store them without exceeding the arbitrary storage limit. Like all content generated in this way, the visuals are highly abstract, with only repeating patterns like bricks and wood planks being realistically possible.

SpeedTree has a dataset of created models and model parts and through pseudo-random methods creates a large enough pool of varied models so that modellers don't have to spend time creating each individual tree.

The examples you should be looking for are the likes of Spore and some indie games.

And finally, the hardware issue. A Core i7 2630QM scores a little less in the Passmark CPU test than the Core i7 950 from 2008. So while I completely agree that it is high-end for LAPTOPS it is not really "high-end" and definitely not "the highest end" when considered in the context of the modern gaming PC.

Laptop parts are always behind desktop parts; that hasn't prevented laptop gaming from gaining ever more ground as the platform PC gamers use to play. Again, the problem is your narrow definition of high-end.

That very same goes for the amount of RAM as well. 8GB is the bare standard here in Estonia when building a new gaming rig, and many of the ones I build tend to include 16GB - not because it is desperately needed, but because DDR3 prices are so low that it would be a shame not to take advantage of them.

Again we go back to how the scientific method works: what you just described is anecdotal evidence. Take a look at the Steam hardware survey, which is real, directed data you can draw trends from: over 75% of Steam users have between 1 and 4GB of RAM.

Just because you build computers with 16GB of it doesn't mean that's the current high-end. Arguments of price, etc. also don't factor in. You take a (big) sample of the population and study trends. Steam survey tells us an 8GB machine is already in the top 25%, and Steam doesn't distinguish between those with 6 or 8GB: it's quite possible 8GB is in the top 15%.

Otherwise your brand new 16GB rig you built last week is ultra low end! You want to know why? Because John Carmack paid $5000 and built a 192GB RAM system. Now that's high-end. That's called an outlier, and you should ignore it.

Anyway, all this is pointless because (again) Bruce's original claim that this is supposed to run on current-gen consoles means even that laptop is already too powerful. They have a guy who described himself as the memory packer guy. If an 8GB computer has problems maintaining 30fps at 1024x768, how do you think a console with its 512MB is going to fare at 720p? That's a 17% increase in resolution for a technology that, by Bruce's own admission, is resolution dependent.

For me, the most important thing to clear up is the vast landscape of duplicated objects that we see. If he were to prove we could have the same performance he has now with non-duplicated objects, I'd say the technology has merit! Maybe it wouldn't be viable for current-gen consoles, but give it another year or so and they'll be replaced anyway, so my problem isn't with the performance numbers we're seeing now: it's that it's running below 30fps with duplicated objects.

And Bruce Dell said that Euclidian's engine can be scaled down for whatever platform, by reducing the "atom" density appropriately.

Reducing detail along with it. Going by that interview video they're not drawing more "atoms" than they need to, so if you were to, say, halve the number of "atoms" you'd start to see, depending on how he's generating the polygons, either pixel blocks of double the size or blocky silhouettes, which brings us back to where we are now.

Another thing: to counter Notch's argument that this has been done before, he mentions how Atomontage isn't suited to large landscape scenes and then shows a tech demo where indeed only a small scene is shown. Atomontage's website, however, has many videos showing a large landscape, and it's not full of duplicated objects:

[Image: atomontage_vb01_hybrid0.jpg]


As an aside, Atomontage is going even deeper (if you'll pardon the pun), since it's storing all "atoms", even those that aren't directly visible but could become visible if you eroded/blew up the surface geometry.
 
As usual, Mr Carmack manages to take a contentious issue and really get right to the heart of the matter: that 'unlimited' detail through fractal generation (be it polygonal, SVOs, etc.) ultimately isn't a worthwhile pursuit.

And I think that's what a lot of people are getting hung up on: ignoring the horrible rhetoric in the Euclidian videos, the tech they are showing is impressive. It's not ground-breaking, but they have successfully got the public interested in alternatives to polygons (which is a good thing - it's sparking discussion).

I just wish they had done it in a way that wasn't so misleading (the [paraphrased] "Unlimited Detail on the Wii will look better than any PS3 game" sort of thing drives me mad). An open discussion of the pros/cons would have been far better. By being so bold in their claims, anyone who understands the limitations has found themselves fighting a losing battle with the general community (and, ironically, you risk pushing the discussion to the other extreme - I think Notch went too far in this way).


The thing is, I find projects like voxel-cone-tracing-based lighting - hybrid approaches - to be absolutely incredible, and a clear sign of the direction rendering will go. It doesn't need to be all of one or all of the other.

And the more I think about it, the more I question my gut reaction that Euclidian are wrong:

They are showing large scale instancing of geometry as a form of data compression. Pretty primitive, sure. But then again, that's exactly what we do with textures: large scale instancing (repeating). The argument that a similar system shouldn't be used for geometry is actually quite a tough one; what most people are arguing about is the grid-like repetition. That isn't an argument against reusing assets.

With a more flexible pipeline (warped geometry, rotated, projected, etc) you could go a long way mixing coarse polygonal world geometry with ultra high detail micro geometry and detailing using SVOs or other geometric representations.
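
To put a rough number on the 'instancing as compression' point, a quick sketch (the 2MB asset size and the million instances are my own assumed figures, not anything from the demo):

Code:
# Memory cost of instancing: one copy of the asset plus a small transform
# per placement, versus a full copy of the asset for every placement.
asset_bytes = 2 * 1024 * 1024          # assumed size of one detailed model
transform_bytes = 12 * 4               # 4x3 float32 matrix per instance
instances = 1_000_000

instanced = asset_bytes + instances * transform_bytes
duplicated = instances * asset_bytes
print(f"instanced:  {instanced / 2**20:8.1f} MiB")
print(f"duplicated: {duplicated / 2**30:8.1f} GiB")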

And the numbers don't look that bad either. Elsewhere I've read people saying 'they must have 500GB of data' - but I doubt that.
The best texture compression schemes we have are 2 bits per pixel, more commonly 4bpp and 8bpp.
The interesting thing is that the numbers aren't too different for voxels.

If you run the numbers on SVO data (albeit with simple shapes and no colour information) you find the results are interesting.

Consider a sphere with a radius of 1024 voxels. The surface area of the sphere is a bit over 13m voxels. Which maps to roughly 3630x3630 for a 2D texture. How much data would that be?

The intuitive answer is it'll be huge - but actually, storing that sort of information for raw geometry isn't that big. The best case (roughly) is each node in the SVO tree is 1 byte, 1 bit for each valid child. Taking that into account, 13m voxels at 1bpv is 1.64MB. Not enormous.

But obviously it's not 1bpv - a typical leaf node in the SVO will have 4 of the 8 bits set, so there's ~2x wasted data at the leaves - plus the actual tree takes up space. So you'd expect the size to be around 4MB at the absolute best.

Well, it turns out it's pretty close - my quick test case was just a tick over 6MB (with many redundant voxels, as it was based on intersection with a sphere).

The really funny thing is that it compresses (rar) down to 1.9MB. That's very close to 1bpv. At these rates, lossy voxel compression becomes very interesting - especially as a method to store additional data (after all - you'd be mad to store lossless textures today).

Obviously this ignores the difficult problem of traversing the data and rendering it - and applying non-geometric detail such as colour, but I think the important point is that the data size for SVOs is actually fairly small - and roughly related to the surface area of the primitive.
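
If anyone wants to poke at the numbers themselves, here's a minimal node-counting sketch along the same lines (not my actual test case; it assumes the 1-byte-child-mask-per-node storage model from above and only counts nodes that touch the sphere surface):

Code:
import math

def cube_vs_shell(cx, cy, cz, half, r):
    """Does the cube (centre, half-extent) straddle the sphere surface of radius r?"""
    d_min_sq = d_max_sq = 0.0
    for c in (cx, cy, cz):
        lo, hi = c - half, c + half
        near = lo if lo > 0 else (-hi if hi < 0 else 0.0)  # closest point on this axis
        far = max(abs(lo), abs(hi))                        # farthest corner on this axis
        d_min_sq += near * near
        d_max_sq += far * far
    return d_min_sq <= r * r <= d_max_sq

def count_svo(cx, cy, cz, half, r, depth, leaf_depth):
    """Count (internal nodes, leaf voxels) for the octree cells touching the surface."""
    if not cube_vs_shell(cx, cy, cz, half, r):
        return 0, 0
    if depth == leaf_depth:
        return 0, 1                      # a set bit in the parent's child mask
    q = half / 2.0
    internal, leaves = 1, 0              # this node stores one child-mask byte
    for dx in (-q, q):
        for dy in (-q, q):
            for dz in (-q, q):
                i, l = count_svo(cx + dx, cy + dy, cz + dz, q, r, depth + 1, leaf_depth)
                internal += i
                leaves += l
    return internal, leaves

R = 256                                  # pure Python is slow; R = 1024 takes minutes
LEAF_DEPTH = int(math.log2(2 * R))       # subdivide down to 1-voxel leaves
surface = 4 * math.pi * R * R
print(f"analytic surface ~ {surface / 1e6:.2f}M voxels "
      f"-> {surface / 8 / 1e6:.2f}MB at 1 bit per voxel")
internal, leaves = count_svo(0.0, 0.0, 0.0, R, R, 0, LEAF_DEPTH)
print(f"octree: {internal} child-mask bytes (~{internal / 1e6:.2f}MB), {leaves} leaf voxels")
# Child pointers and colour are ignored, matching the 'best case' above.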

It's all very interesting tech, and I honestly believe that once the use of geometric detailing is more general (for example, applied in a way similar to textures today) it will be exceptionally powerful and radically change how games look. It's not an easy problem to solve (obviously :) and I applaud anyone who is researching in the field.

So I'll sum up - I am impressed with what Euclidian have managed. I do believe that what they are demonstrating represents (to a limited extent) how games will render in the future. However, I don't believe it's a 100% voxel future (or equivalent); I believe it's a hybrid, and I wish they would really go for that. And while they are at it, cut the nonsense about reinventing the world.
 
They are showing large scale instancing of geometry as a form of data compression. Pretty primitive, sure. But then again, that's exactly what we do with textures: large scale instancing (repeating). The argument that a similar system shouldn't be used for geometry is actually quite a tough one; what most people are arguing about is the grid-like repetition. That isn't an argument against reusing assets.

The problem with the above is that we are moving away from tiling textures through virtualized approaches. Unique virtual texturing is such a huge leap for any art department that it's almost guaranteed to take over in at least AA games.
I'd like to remind people that other developers are looking into this tech as well; I recall repi mentioning that BF3's terrain system uses it, sebbi is using it in Trials 2, Lionhead has a working implementation with MegaMeshes, and so on.

This also makes the next step kinda obvious: unique geometry using some sort of virtualization has to be the way forward, instead of instanced approaches. Using simple geometry with displacement maps would make this method even easier, especially if a proper virtualized texture and streaming system is already in place.
 
They are showing large scale instancing of geometry as a form of data compression...what most people are arguing about is the grid-like repetition. That isn't an argument against reusing assets.
Not at all. However, their tech demos repeated assets in the same alignment. If that's a limitation of their compression tech needed to make this work, then the format is no good. We need arbitrary placement of repeated assets, same as we have with polygonal models.

And the numbers don't look that bad either. Elsewhere I've read people saying 'they must have 500GB of data' - but I doubt that...
Consider a sphere with a radius of 1024 voxels. The surface area of the sphere is a bit over 13m voxels. Which maps to roughly 3630x3630 for a 2D texture. How much data would that be?
...Taking that into account, 13m voxels at 1bpv is 1.64MB. Not enormous.

The really funny thing is that it compresses (rar) down to 1.9MB. That's very close to 1bpv. At these rates, lossy voxel compression becomes very interesting - especially as a method to store additional data (after all - you'd be mad to store lossless textures today).
And how much would that same sphere take up as a CSG sphere? A few bytes? At every resolution? ;) If it's a matter of data complexity, how does their unlimited detail compare to HOS models? There's just as valid a tech pursuit there, I think. Or how about large models tessellated according to detail maps?
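
To put a number on the CSG comparison, a trivial sketch (assumed layout: centre plus radius as four floats):

Code:
import struct

# A CSG sphere is just a centre and a radius; resolution never enters into it.
def pack_sphere(cx, cy, cz, r):
    return struct.pack("4f", cx, cy, cz, r)          # 16 bytes, at any zoom level

def inside(blob, x, y, z):
    cx, cy, cz, r = struct.unpack("4f", blob)
    return (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r

sphere = pack_sphere(0.0, 0.0, 0.0, 1024.0)
print(len(sphere), "bytes")                          # vs megabytes of voxel shell data
print(inside(sphere, 100.0, 200.0, 300.0))           # True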

There's a couple of counter-examples that people could throw up to challenge Euclidian. Another voxel engine using SVOs has already been mentioned. Another challenge would be to create model assets of similar detail using existing techniques and effective LOD. Okay, it'll possibly exhibit scaling artefacts as LOD levels are traversed, so UD has the advantage there. But I'm reasonably confident the same level of model quality can be achieved in a tech demo running on an 8GB laptop. And if the detail can be achieved with the current polygon method, we kinda lose the need for UD. The only other advantage they suggest is direct import of objects, but Laa-Yosh can write reams against that!
 