Shifty Geezer said: "Textures get around this nicely."

Well - no they don't, if you are looking for "infinite" (and unique) detail on them.
Btw, if you think that gaming laptop (Bruce's words) isn't high-end, you're mistaking high-end for highest-end. That laptop has the top mobile CPU and 8GB of RAM. What's not high-end about it? Yes, it's not a desktop machine and it doesn't have a $999 CPU. It's just a high-end gaming laptop.
Curiosity got the better of me and, by the look of it, he repeatedly appears to apply for a patent but then lets it lapse (before it becomes public?). As for the "he shouldn't provide details otherwise people will copy it" argument: actually publishing something would establish prior art and allow him to apply for a software patent (ugh - I can't believe I'm writing this). Keeping it a secret is actually more dangerous, because patents don't have to describe performant implementations: someone could take a clumsy, unoptimised version of Bruce's algorithm, quickly patent it, and then Bruce would have to pay licence fees.
I followed the link and saw no explanation as to why the application lapsed, so we don't know if he spent any money or was just riding the initial phases. I dunno how it is in the land Down Under, but in the UK it's free to file a patent, and only £200 in fees over the first year of application for the two phases (I forget what they are). It is possible to file a UK patent for nothing and then use it as security once you land an investor. If you find no investor, the patent isn't published and you can reapply at a later date.

There's a pretty common reason why patents are allowed to lapse... the examiner cites one or more documents that render the invention obvious or show it to lack novelty.
...it's pretty difficult to move slower than the patent application process, and the cost of filing isn't trivial, so you'd think you'd put some effort in!
...
On the plus side, it does mean that the examiners must have clearly pointed out an awful lot of relevant background for his previous applications, and the fact that he's still applying should mean that, despite being shown that information, the inventor believes there is still something novel in there.
Great interview. Thanks for sharing. By the way, thumbs up to the interviewer, he seems like a really, really nice guy. He has a good vibe about him. He is actually listening to what the interviewee, the other person, says.

John Carmack's opinion on the unlimited detail demo - now that's how you speak!!!
http://youtu.be/hapCuhAs1nA?t=17m15s
To add to what Shifty said, it's also a matter of claims versus proof. OnLive does not actually work according to their original claims: latency was supposed not to be a problem (it's noticeable - some games more than others), and they were supposed to be streaming 720p (and above!) in real time, yet there are noticeable video compression artefacts. It's impossible to do what they claimed it would do.
Same here with Euclideon. The claims are pretty outrageous. The unlimited part is demonstrably false, because there isn't any computer with an unlimited HDD to store such unlimited detail. The only way to do unlimited detail without requiring unlimited storage is to use fractals and similar mathematical procedural generation, which very few games have used, despite it being the de facto strategy in 4K demos. Just because you can make a tech demo does not mean you can make a game engine.
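To make the procedural point concrete, here is a minimal sketch (my own illustration, not anything Euclideon has shown) of fractal value noise in Python: detail at any coordinate and any zoom level is computed on demand from a seed instead of being stored, which is why procedural content needs no "unlimited HDD" - and also why it can't replace authored, unique assets. The names (value_noise, smooth_noise, fbm) are made up for the example.

[code]
import math, random

def value_noise(x, y, seed=0):
    """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
    rng = random.Random((x * 73856093) ^ (y * 19349663) ^ seed)
    return rng.random()

def smooth_noise(x, y, seed=0):
    """Bilinearly interpolated lattice noise at a real-valued coordinate."""
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    n00 = value_noise(x0,     y0,     seed)
    n10 = value_noise(x0 + 1, y0,     seed)
    n01 = value_noise(x0,     y0 + 1, seed)
    n11 = value_noise(x0 + 1, y0 + 1, seed)
    top = n00 + (n10 - n00) * fx
    bottom = n01 + (n11 - n01) * fx
    return top + (bottom - top) * fy

def fbm(x, y, octaves=6, seed=0):
    """Fractal detail: each octave adds finer features at half the amplitude."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * smooth_noise(x * frequency, y * frequency, seed)
        amplitude *= 0.5
        frequency *= 2.0
    return total

# Sample the "terrain" at any position and zoom - no stored dataset required.
print(fbm(12.375, 7.125), fbm(12.3751, 7.1251, octaves=12))
[/code]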
....
The lowest latency OnLive games have the same latency as some PS3 games have locally (150ms and Killzone 2). It's very hard to accuse them of not hitting their latency targets...

You can't justify their claims relative to performance comparisons when they were very specific about what to expect.
They have not hit 80ms latencies. Quite the contrary: latencies can get as high as 200ms, which feels extremely sluggish, and that is exactly what the detractors were saying - these claims were impossible to hit. OnLive still made some amazing progress, but their claims were false. By the same token, a claim to offer unlimited detail is going to be a failure if the end result is finite, even if it achieves 2x the performance of polygonal engines. Dishonest claims don't get a free pass just because some lesser degree of progress is demonstrated. If I got millions in funding for a new compression algorithm that I claimed would compress video to 50% the size of h.264 at the same bitrate and quality, and in the end I only achieved an occasional 5% improvement, the investors might be rightly pissed that they spent so much on something that wasn't as good as promised and wouldn't deliver the dividends they expected.

"The round trip latency from pushing a button on a controller and it going up to the server and back down, and you seeing something change on screen should be less than 80 milliseconds."
"We usually see something between 35 and 40 milliseconds."
OnLive manage their data volumes via lossy compression. That's not an option with voxel datasets, unless you want macroblocking in your objects! - although the arguments presented here regarding data volumes seem to be very much in the same vein.
Well, as for the unlimited part - that depends heavily on your definition of unlimited.

The technical definition of 'unlimited' doesn't have any ambiguity. Dell's use seems to imply unique detail per pixel, but he doesn't describe to what LOD, so at some point you'll be zooming into the voxels at a distance where voxel density is less than one per pixel, and he'll have to interpolate data somehow. I'm not taking his definition as truly unlimited, but as a PR phrase to mean no visible data aliasing, which the data complexity tells us is impossible. It'd be equivalent to a movie where you can zoom 50x into any part of it and still get 1080p resolution. That would require 2500x as much storage per frame. We don't have the storage tech to stream data at that resolution and quality outside of massive servers. It certainly won't be coming from a BRD or local HDD! And whatever he's doing to get larger datasets streamed can be done with the likes of megameshing, so at the end of the day chances are voxels aren't going to offer any advantage in that respect while still having the drawbacks (although they might be a better fit for an alternative rendering model).
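For what it's worth, the 50x zoom figure works out as follows - a back-of-the-envelope Python check, where the 3 bytes per pixel (uncompressed 24-bit colour) is my assumption:

[code]
width, height = 1920, 1080
bytes_per_pixel = 3            # uncompressed 24-bit colour (assumed)
zoom = 50                      # linear zoom factor per axis

base_frame   = width * height * bytes_per_pixel
zoomed_frame = (width * zoom) * (height * zoom) * bytes_per_pixel

print(base_frame / 1e6, "MB per frame at 1080p")        # ~6.2 MB
print(zoomed_frame / 1e9, "GB per frame at 50x zoom")   # ~15.6 GB
print(zoomed_frame // base_frame, "x more storage")     # 2500x (50^2)
[/code]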
...does it not? Would you agree that if you can make 1 000 000+ identical 3D grains of sand for your game's terrain, that's quite "unlimited" compared to the minuscule number of polygons that we have today?

Then by that token, impostors can achieve the same unlimited number of identical grains of sand, hence unlimited detail has been possible for years. And we can call an algorithmic texture 'unlimited detail'. I think Joe Public is going to be expecting 'unlimited variety' in terms of an end to repetitious content and low-res textures, rather than an unlimited number of a few objects repeated ad nauseam.
And finally, the hardware issue. A Core i7 2630QM scores a little less in the Passmark CPU test than the Core i7 950 from 2008. So while I completely agree that it is high-end for LAPTOPS it is not really "high-end" and definitely not "the highest end" when considered in the context of the modern gaming PC.

You should be considering it in terms of the current generation of hardware that the games he's comparing against were designed for. The current base standard for any game is well less than 8GB of RAM and an i7. Most PCs running games aren't that hot. Consoles are well below that power standard. If you were to take the current consoles and scale them up to that sort of spec, the end results would utterly blitz the results UD is getting. Sure, you may not be able to zoom in to a tree at the millimetre level and still see detail, but a game designed from the ground up for an 8GB system would have incredible detail, along with working lighting and shadowing, at higher resolutions and framerates. So you aren't comparing like for like, and neither is Dell. He's not showing what is the best possible on the same hardware as he's using, and then showing that he is getting better results from that same hardware.
Well, as for the unlimited part - that depends heavily on your definition of unlimited, does it not? Would you agree that if you can make 1 000 000+ identical 3D grains of sand for your game's terrain, that's quite "unlimited" compared to the minuscule number of polygons that we have today? (By the way, TES IV: Oblivion used procedural generation for their trees, if you want a AAA example to go with the awesome 4K demos.)
Unlimited *procedural* detail is still unlimited, in the very real sense that it remains detailed even if zoomed in, while the normal textures start "macroblocking". Just sayin'.

Only they aren't using procedural content but laser-scanned models instead (of which they splat a million into one scene to fit into the machine's memory).
This unique detail hoopla is completely nuts; no artist would ever bother to make every object unique, or even a significant number of them, in an open-world 3D engine that can potentially handle millions of them. Think of the costs associated with that.

I already mentioned that I doubt Dell was speaking literally, and that it was a broad term to convey the notion of very high resolution assets. 'Infinite' is another common term for large but definitely limited ranges of permutations. E.g. a procedural texture engine cannot create infinite textures: given a limited texture size, there'll be a finite number of possible texel values - 2^24 possible values for each pixel (24 bits), so 2^24 to the power of however many pixels. Even if your texture is 1,000,000 by 1,000,000, the resulting astronomically large number of possible textures is still so vanishingly small next to true infinity that it counts as no textures at all!
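To put a number on "astronomically large but still finite", here is a quick Python estimate of how many distinct 24-bit textures exist at that fixed size; it counts decimal digits, since the value itself is far too big to print:

[code]
import math

pixels = 1_000_000 * 1_000_000     # a 1,000,000 x 1,000,000 texture
bits_per_pixel = 24                # 2^24 possible values per texel

# Distinct textures = (2^24) ** pixels; take log10 to get its digit count.
digits = pixels * bits_per_pixel * math.log10(2)
print(f"about 10^{digits:.3e} possible textures")   # ~10^(7.2e12): huge, but finite
[/code]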
And Bruce Dell said that Euclideon's engine can be scaled down for whatever platform, by reducing the "atom" density appropriately.

Every engine can be scaled down, and looks worse as a result. If the atom density gets too low, his engine is no better off than polygons. Instead of having insufficient RAM to store high-poly models, leaving them rendered with wonky silhouettes or as flat grass planes, we'll have insufficient RAM to store high resolution volumetric models, and they'll appear either as collections of cubes or some form of hazy blobs.
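Scaling down by reducing atom density is commonly done in voxel engines by capping the traversal depth of a sparse voxel octree. Here is a generic sketch of that idea in Python; it is not a description of Euclideon's actual format, and the node layout and names are invented for the example:

[code]
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class OctreeNode:
    colour: tuple                                   # average colour of everything below
    children: Optional[List["OctreeNode"]] = None   # None = leaf

def sample(node, x, y, z, max_depth, depth=0):
    """Colour at normalised coords (x, y, z) in [0, 1), stopping at max_depth."""
    if node.children is None or depth == max_depth:
        return node.colour       # a lower max_depth means bigger, blockier "atoms"
    # Pick the child octant containing the point, then recurse into that sub-cube.
    octant = (int(x >= 0.5) << 2) | (int(y >= 0.5) << 1) | int(z >= 0.5)
    rescale = lambda c: (c - 0.5) * 2 if c >= 0.5 else c * 2
    return sample(node.children[octant], rescale(x), rescale(y), rescale(z),
                  max_depth, depth + 1)

# Tiny demo: a grey root whose eight children alternate between blue and red.
leaf = lambda c: OctreeNode(colour=c)
root = OctreeNode(colour=(128, 128, 128),
                  children=[leaf((255, 0, 0)) if i % 2 else leaf((0, 0, 255))
                            for i in range(8)])
print(sample(root, 0.75, 0.25, 0.25, max_depth=0))   # scaled down: grey everywhere
print(sample(root, 0.75, 0.25, 0.25, max_depth=1))   # full detail: per-child colour
[/code]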
And finally, the hardware issue. A Core i7 2630QM scores a little less in the Passmark CPU test than the Core i7 950 from 2008. So while I completely agree that it is high-end for LAPTOPS it is not really "high-end" and definitely not "the highest end" when considered in the context of the modern gaming PC.
The very same goes for the amount of RAM. 8GB is the bare standard here in Estonia when building a new gaming rig, and many of the ones I build tend to include 16GB - not because it is desperately needed, but because DDR3 prices are so low that it would be a shame not to take advantage of them.
And Bruce Dell said that Euclideon's engine can be scaled down for whatever platform, by reducing the "atom" density appropriately.
They are showing large scale instancing of geometry as a form of data compression. Pretty primitive, sure. But then again, that's exactly what we do with textures: large scale instancing (repeating). The argument that a similar system shouldn't be used for geometry is actually quite a tough one; what most people are arguing about is the grid-like repetition. That isn't an argument against reusing assets.
They are showing large scale instancing of geometry as a form of data compression... What most people are arguing about is the grid-like repetition. That isn't an argument against reusing assets.

Not at all. However, their tech demos repeated assets in the same alignment. If that's a limitation of their compression tech needed to make this work, then the format is no good. We need arbitrary placement of repeated assets, same as we have with polygonal models.
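A rough sketch of the distinction (my own illustration, generic to any engine, with invented names): instancing stores the asset once plus a small transform per copy, and nothing about that forces the copies onto a grid:

[code]
from dataclasses import dataclass
import random

@dataclass
class Instance:
    position: tuple        # (x, y, z)
    yaw_degrees: float     # rotation about the vertical axis
    scale: float

asset = "laser_scanned_rock"   # stored once, whatever the underlying representation

# Grid-like repetition (what the demos appear to show): fixed spacing, same orientation.
grid = [Instance((x * 5.0, 0.0, z * 5.0), 0.0, 1.0)
        for x in range(100) for z in range(100)]

# Arbitrary placement (what a game needs): varied position, rotation and scale.
rng = random.Random(42)
scattered = [Instance((rng.uniform(0, 500), 0.0, rng.uniform(0, 500)),
                      rng.uniform(0, 360), rng.uniform(0.5, 2.0))
             for _ in range(10_000)]

# Either way, the per-copy cost is a handful of floats, not another copy of the asset.
print(len(grid), len(scattered))
[/code]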
And the numbers don't look that bad either. Elsewhere I've read people saying 'they must have 500GB of data' - but I doubt that...

And how much would that same sphere take up as a CSG sphere? A few bytes? For every resolution?
Consider a sphere with a radius of 1024 voxels. The surface area of the sphere is a bit over 13M voxels, which maps to roughly 3630x3630 for a 2D texture. How much data would that be?
...Taking that into account, 13M voxels at 1 bit per voxel is about 1.64MB. Not enormous.
The really funny thing is that it compresses (rar) down to 1.9MB. That's very close to 1 bit per voxel. At these rates, lossy voxel compression becomes very interesting - especially as a method to store additional data (after all, you'd be mad to store lossless textures today).
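The arithmetic above checks out; here is the same sum in Python, alongside the CSG comparison from the earlier post (the 16-byte figure assumes centre plus radius stored as four 32-bit floats, which is my assumption, not anything stated above):

[code]
import math

radius_voxels = 1024
surface_voxels = 4 * math.pi * radius_voxels ** 2             # ~13.2 million
texture_edge = math.sqrt(surface_voxels)                      # ~3630 x 3630

bits_per_voxel = 1
shell_megabytes = surface_voxels * bits_per_voxel / 8 / 1e6   # ~1.65 MB

csg_bytes = 4 * 4   # centre (x, y, z) + radius as four 32-bit floats (assumed)

print(f"{surface_voxels / 1e6:.1f}M surface voxels, ~{texture_edge:.0f} x {texture_edge:.0f} as a 2D texture")
print(f"{shell_megabytes:.2f} MB at {bits_per_voxel} bit/voxel vs {csg_bytes} bytes as CSG")
[/code]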