Viability and implementation of SVO on next-gen consoles *spawn

Oh. I thought you were talking about XB1 being accelerated over PS4 given the original posit and reference to the dev quote. Yes, PRT improves SVO performance.
You don't create the object. ;) You don't have any object data persistent in RAM. You render the scene calculating each pixel from scratch every time. Cast a ray, calculate what marble it would hit, calculate the surface appearance for that pixel, move to the next one. You need to read only the marble positions in this case (although this is of course the ray-tracing problem of fast random memory reading).
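
If it helps, here's a rough sketch of that idea in code. It's purely illustrative (the marble placement formula, camera and counts are all made up, not anyone's actual renderer), but it shows a scene being rendered with nothing read from memory except what gets computed on the spot:

```cpp
// Ray trace procedurally placed spheres ("marbles"): no geometry is stored or
// read; every marble centre is recomputed from a formula for every ray.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Marble centre derived from an index by arithmetic: the "scene" is a formula.
static Vec3 marbleCentre(int i) {
    return { float(i % 10) * 2.0f - 9.0f, 0.0f, float(i / 10) * 2.0f + 4.0f };
}

// Standard ray/sphere test: solve |o + t*d - c|^2 = r^2 for the nearest t > 0.
static bool hitSphere(Vec3 o, Vec3 d, Vec3 c, float r, float& t) {
    Vec3 oc = sub(o, c);
    float b = dot(oc, d);
    float disc = b*b - (dot(oc, oc) - r*r);
    if (disc < 0.0f) return false;
    t = -b - std::sqrt(disc);
    return t > 0.0f;
}

int main() {
    const int W = 64, H = 32;                 // tiny framebuffer: all writes, no scene reads
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Vec3 o{0.0f, 0.0f, 0.0f};
            Vec3 d{ (x - W/2) / float(W), (H/2 - y) / float(W), 1.0f };
            float len = std::sqrt(dot(d, d));
            d = { d.x/len, d.y/len, d.z/len };
            float best = 1e30f; bool hit = false;
            for (int i = 0; i < 100; ++i) {   // test every marble, computed from scratch
                float t;
                if (hitSphere(o, d, marbleCentre(i), 0.8f, t) && t < best) { best = t; hit = true; }
            }
            std::putchar(hit ? '#' : '.');    // the only memory traffic that matters is this write
        }
        std::putchar('\n');
    }
    return 0;
}
```

A real renderer would shade the hit point too, but the shape of the workload is the point: lots of per-pixel computation and writes, almost no reads.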

I don't follow the question. The discussion started when you suggested XB1 could have an advantage over PS4 in supporting SVO via PRT, no? That was the thing having an advantage. So are we talking about what advantage XB1 can have over PS4 thanks to its HW design, or, recognising PS4 does PRT too, what advantage both consoles have over... consoles without PRT?

Yes, that seems to be the one area X1 may have a benefit, depending on how tiles are accessed and whether they can be cached effectively or not. If they are accessed in a truly random fashion, low-latency access seems beneficial. I'm unable to answer that. We'd need to know what the latencies are between DDR/GDDR and ESRAM access, and how often the GPU is waiting on data. Oh, and how well the GPU copes with stalls by doing other work, because if it's still busy while waiting on texture tiles to be loaded from main RAM, the time taken to complete the task won't be representative of the work done by the GPU.

Alright so it seems you're acknowledging both points now, but you're just not putting them together.

1+1 = 2

If PRT basically makes SVOGI possible, and the X1 has an advantage in PRTs due to its eSRAM or data move engines, then I'm not sure where you are still confused. If that turns out to be the case, it should be faster at cone tracing.

I, OTOH, am still confused about your ray tracing of marbles example. I obviously didn't mean you as in the artist. Seems to me you're still suggesting performing actual ray tracing in your example. So let me make it more clear: Creating marbles procedurally doesn't speed up ray tracing as far as I know, does it?

Seems to me like the above is still more plausible, in reference to SVO cone tracing, as opposed to the takeaway that ray tracing procedurally generated marbles speeds up the act of ray tracing. Why would it? That's the part I was confused about in your example, and still am.
 
so like, assume simple triangles in a strip (roughly one vert per tri with striping), each vert storing xyz+normal+UV and taking 32B of memory....meaning 1M tris need 32MB, 1G tris = 32GB....where are you going to store that amount of verts...?

In RAM.
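
For what it's worth, the back-of-the-envelope numbers above do check out, assuming striped triangles (roughly one new vertex per triangle) and decimal MB/GB. A quick sketch:

```cpp
// Sanity check of the vertex memory figures quoted above.
#include <cstdint>
#include <cstdio>

struct Vertex {
    float position[3];  // xyz: 12 bytes
    float normal[3];    // 12 bytes
    float uv[2];        // 8 bytes
};
static_assert(sizeof(Vertex) == 32, "expected a 32-byte vertex");

int main() {
    // With triangle strips, each triangle adds roughly one vertex.
    const std::uint64_t bytesPerTri = sizeof(Vertex);
    std::printf("1M tris ~ %llu MB\n",
                (unsigned long long)(1000000ULL * bytesPerTri / 1000000ULL));       // 32 MB
    std::printf("1G tris ~ %llu GB\n",
                (unsigned long long)(1000000000ULL * bytesPerTri / 1000000000ULL)); // 32 GB
    return 0;
}
```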

I didn't say ray tracing combined with physically accurate water was close. Just ray tracing. Ray tracing physically accurate water still has a long way to go.

My point was, there's just no getting around it, because sooner or later you're going to want that. We can't keep running away from geometry forever and hope pixel shaders make up for the lack of it with pretty textures and lighting. That works great for basic building architecture, and cars, and characters. Not so great when we start demolishing them into tiny bits and drenching them in gallons of water that you want to accurately run through the cracks, drip over everything and settle back into puddles. Can't fake that one. You need those polygons to pull it off.
 
If PRT basically makes SVOGI possible, and the X1 has an advantage in PRTs due to its eSRAM or data move engines, then I'm not sure where you are still confused. If that turns out to be the case, it should be faster at cone tracing.
Right. But you were saying XB1 had an advantage, not least because you didn't seem to recognise that PRT and SVO were already demonstrated on PS4, and you used the dev quote as proof of that. I see no advantage confirmed. There's a possibility, but that needs to be discussed.

It's good practice to identify a particular question to structure the conversation. I've been unclear on what the question is here. Is it "does XB1 have an advantage over PS4?" or "does PRT have an advantage for SVO over non-PRT hardware?" or what?

So let me make it more clear: Creating marbles procedurally doesn't speed up ray tracing as far as I know, does it?
It's not about speeding up raytracing. It's about being able to render the scene without needing to read in any (or very minimal) scene data. It is possible to procedurally calculate geometry (and textures and everything else) and render it on the fly. We can add a checkered, reflective floor and a cloudy sky to the scene and still not need to read any model data. This is how a renderer can need lots of writes and not much reading, explaining why the dev quote said, “Let’s say you are using procedural generation or raytracing via parametric surfaces – that is, using a lot of memory writes and not much texturing or ALU – Xbox One will be likely be faster.” You felt sure that reference to lots of memory writes must also mean lots of memory reads. I presented raytracing spheres as an example of raytracing parametric surfaces with memory writes and not memory reads, which fits that idea exactly. I'm not talking about anything other than that quote.

I'll add also that I wouldn't place any great value on a single unqualified developer sound-bite, regardless of what they say. It's one small pointer to understanding, but the dev may be wrong for a number of reasons, or may be misquoted, or may have supplied more info that explains the meat of their point which the article misses. One dev saying, "console X does ABC better than console Y" is no proof of that fact.
 
Right. But you were saying XB1 had an advantage, not least because you didn't seem to recognise that PRT and SVO were already demonstrated on PS4, and you used the dev quote as proof of that. I see no advantage confirmed. There's a possibility, but that needs to be discussed.

It's good practice to identify a particular question to structure the conversation. I've been unclear on what the question is here. Is it "does XB1 have an advantage over PS4?" or "does PRT have an advantage for SVO over non-PRT hardware?" or what?

It's not about speeding up raytracing. It's about being able to render the scene without needing to read in any (or very minimal) scene data. It is possible to procedurally calculate geometry (and textures and everything else) and render it on the fly. We can add a checkered, reflective floor and a cloudy sky to the scene and still not need to read any model data. This is how a renderer can need lots of writes and not much reading, explaining why the dev quote said, “Let’s say you are using procedural generation or raytracing via parametric surfaces – that is, using a lot of memory writes and not much texturing or ALU – Xbox One will be likely be faster.” You felt sure that reference to lots of memory writes must also mean lots of memory reads. I presented raytracing spheres as an example of raytracing parametric surfaces with memory writes and not memory reads, which fits that idea exactly. I'm not talking about anything other than that quote.

I'll add also that I wouldn't place any great value on a single unqualified developer sound-bite, regardless of what they say. It's one small pointer to understanding, but the dev may be wrong for a number of reasons, or may be misquoted, or may have supplied more info that explains the meat of their point which the article misses. One dev saying, "console X does ABC better than console Y" is no proof of that fact.

No, I didn't say it had. I said it's possible due to the eSRAM, the tile/untile feature, and the hardware's seeming customization towards PRTs. The logical conclusion based on how we know the technique works leads me to believe that. The developer's comment was irrelevant to that belief; rather, I thought it's possible he's referring to the same thing. I've been brainstorming this ever since I saw MS's build on PRTs and started reading up seriously on them and came across SVO cone tracing, so hearing this basically inspired me to come out with it.

So, to recap, and just to stay honest here:

We still need to know though, if in fact it has an advantage in PRTs. That's not confirmed.
I have no idea if the eSRAM can or would be used to store the tiles.
I have no idea if the data move engines improve the performance of PRTs, or what the PS4 has implemented to deal with it beyond the standard support of AMD GPUs.
Does it have the same tile/untile feature on the DMA?
Can the tiles be stored in ESRAM?

Those are the things I was trying to discuss here. Those are the answers that need to be known before I could go from "might have an advantage" to "has an advantage." I've never stated anything as a definitive conclusion, or fact.


As far as that developer's comment goes, in the vaguest way we can take it, it simply pointed to something about the X1 that makes it suitable for some sort of ray tracing. The most obvious explanation, to me, that has any sort of practical application, and would connect all of this together, is SVO cone tracing. Just because I can't see any other type of ray tracing being faster on the X1.

Now you presented the alternative suggestion, which, now that you clarified, I understand what you are saying. So you're under the belief it's more of an off-the-wall example of something it could do better if you wanted to attempt it. But it doesn't explain why ray tracing is in there at all. The only advantage in your example, then, is in the rendering of parametric surfaces. You're only talking about the procedural generation part. Ray tracing has no effect. So I still say that's not it. He would not have mentioned it at all. I would have said, as you put it now, rendering parametric surfaces. But he did use "ray tracing", right?
 
I have no idea if the eSRAM can or would be used to store the tiles.
Yes, it can. The ESRAM can be used for all data, including textures.
I have no idea if the data move engines improve the performance of PRTs, or what the PS4 has implemented to deal with it beyond the standard support of AMD GPUs.
Nor do I. ;) Maybe you'll get more info if you dig up Sebbbi's posts on virtual texturing?
Can the tiles be stored in ESRAM?
The tilemap could be. The individual tiles would be loaded in GPU.
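
To make the tilemap vs. tiles distinction concrete, here's a rough sketch of the kind of indirection involved in PRT/virtual texturing. The names, sizes and miss handling are all made up for illustration, not how either console actually implements PRT; the point is just that the tilemap is tiny compared to the tile data, which is why keeping it (rather than the tiles) in a small, fast pool is plausible:

```cpp
// Tilemap (small indirection table) maps virtual tile coordinates to whichever
// tiles are resident; the bulky tile data lives elsewhere and streams in on demand.
#include <cstdint>
#include <vector>

constexpr int kTileSize   = 128;  // texels per tile edge (illustrative)
constexpr int kVirtWidth  = 64;   // virtual texture size, in tiles
constexpr int kVirtHeight = 64;

struct TileMapEntry {
    bool          resident;   // is this tile currently loaded?
    std::uint16_t physIndex;  // index into the resident tile pool if so
};

struct VirtualTexture {
    std::vector<TileMapEntry> tilemap;                     // small, frequently consulted
    std::vector<std::vector<std::uint8_t>> residentTiles;  // the big data, elsewhere

    VirtualTexture() : tilemap(kVirtWidth * kVirtHeight, TileMapEntry{false, 0}) {}

    // Map a texel coordinate to tile data; a miss means the caller falls back
    // to a lower mip and requests the tile to be streamed in.
    const std::uint8_t* lookup(int texelX, int texelY) const {
        const int tx = texelX / kTileSize, ty = texelY / kTileSize;
        const TileMapEntry& e = tilemap[ty * kVirtWidth + tx];
        if (!e.resident) return nullptr;
        return residentTiles[e.physIndex].data();
    }
};

int main() {
    VirtualTexture vt;
    // Pretend one tile has been streamed into the pool.
    vt.residentTiles.push_back(std::vector<std::uint8_t>(kTileSize * kTileSize, 0));
    vt.tilemap[0] = TileMapEntry{true, 0};
    const std::uint8_t* hit  = vt.lookup(10, 10);     // resident tile
    const std::uint8_t* miss = vt.lookup(5000, 5000); // not resident: nullptr
    (void)hit; (void)miss;
    return 0;
}
```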

As far as that developer's comment goes, in the vaguest way we can take it, it simply pointed to something about the X1 that makes it suitable for some sort of ray tracing. The most obvious explanation, to me, that has any sort of practical application
It may not be a practical example. eg. A dev could say Cell was well suited for ray tracing, but that wouldn't point to RT being used in games on PS3.

and would connect all of this together, is SVO cone tracing. Just because I can't see any other type of ray tracing
It's not ray tracing. ;) It's spatial partitioning and approximations of the results from true ray tracing.

But it doesn't explain why ray tracing is in there at all. The only advantage in your example, then, is in the rendering of parametric surfaces. You're only talking about the procedural generation part. Ray tracing has no effect. So I still say that's not it. He would not have mentioned it at all. I would have said, as you put it now, rendering parametric surfaces. But he did use "ray tracing", right?
RT is an incredibly simple, compact form of rendering. It needs no interim buffers or components. The very opposite is probably deferred rendering that needs lots of passes and lots of buffers, with lots of reads. A CSG render can involve almost all computation and output.

There's another aspect of your posts and original discussion that I'm unsure of, which is that you suggest SVO produces the same results as raytracing. You link to fully raytraced examples in your opener to explain what RT is before talking about SVO. Are you looking at SVO as being an alternative rendering strategy to rasterisation, capable of perfect clarity reflections and physically correct lighting?
 
Yes, it can. The ESRAM can be used for all data, including textures.
Nor do I. ;) Maybe you'll get more info if you dig up Sebbbi's posts on virtual texturing?
The tilemap could be. The individual tiles would be loaded in GPU.

It may not be a practical example. eg. A dev could say Cell was well suited for ray tracing, but that wouldn't point to RT being used in games on PS3.

It's not ray tracing. ;) It's spatial partitioning and approximations of the results from true ray tracing.

RT is an incredibly simple, compact form of rendering. It needs no interim buffers or components. The very opposite is probably deferred rendering that needs lots of passes and lots of buffers, with lots of reads. A CSG render can involve almost all computation and output.

There's another aspect of your posts and original discussion that I'm unsure of, which is that you suggest SVO produces the same results as raytracing. You link to fully raytraced examples in your opener to explain what RT is before talking about SVO. Are you looking at SVO as being an alternative rendering strategy to rasterisation, capable of perfect clarity reflections and physically correct lighting?

I'll check his posts out when I have some time. Maybe he will chime in as well.

And yes, we know it's not RT per se, but as vernacular language goes... it's probably less far-fetched than either console being capable of doing anything practical as far as ray tracing goes. Guess we'll have to find out later what exactly he meant.

No, I was explaining the difference. Of course it's not the same thing, and no, I wouldn't expect the same accuracy or clarity. But it does give you GI, indirect illumination, transparency, reflections, etc. Everything we want and don't have.

I will say this though. On the topic of clarity, if we're going to be fair, are we talking about achievable real-time ray tracing? It's probably clearer than something like Brigade is right now, without needing a Titan to run. We have no idea exactly how many Titans, or equivalent cards, it will take to clear up that noise in Brigade. Heck, they seem to be pretty satisfied with it running with some minimal noise. Not sure I feel the same, but I would buy a game that runs on it to support them and the tech, if it wasn't too bad. Assuming I can afford the hardware.
 
how many pixels per second could Picasso paint?

Depends on the type of pixels. Dull or pretty ones? ;)

So let me make it more clear: Creating marbles procedurally doesn't speed up ray tracing as far as I know, does it?

Games are about art style and art people's ability to tweak stuff to look pretty. Procedural stuff doesn't lend itself to that; that's why it's not very viable for AAA development.

This and memory access. But w/o dedicated HW to do procedural stuff, you're trading a lot of compute power to do stuff like marble, or any noise for that matter. Where do you get your random data from? A texture? Cool, but your point was to minimize texture (or data in general) access. And then you've got to do a lot of postprocessing on the data itself, which takes cycles. A lot of them.
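
To illustrate the trade-off being described, here's a rough value-noise "marble" sketch. The hash and constants are arbitrary (not anything from a shipping shader), but it shows that the "random data" can come entirely from arithmetic rather than a texture read, at the cost of a pile of instructions per sample:

```cpp
// Pure-ALU "marble": an integer hash, trilinear value noise, and a sine warp.
// No texture lookups; the cost is all in the maths.
#include <cmath>
#include <cstdint>
#include <cstdio>

// Cheap integer hash to [0,1): the randomness comes from arithmetic, not a texture.
static float hash3(int x, int y, int z) {
    std::uint32_t h = std::uint32_t(x) * 374761393u + std::uint32_t(y) * 668265263u
                    + std::uint32_t(z) * 2147483647u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return float(h & 0xFFFFFFu) / float(0x1000000);
}

static float lerp(float a, float b, float t) { return a + (b - a) * t; }

// Trilinear value noise: 8 hash evaluations + 7 lerps per lookup.
static float valueNoise(float x, float y, float z) {
    int xi = int(std::floor(x)), yi = int(std::floor(y)), zi = int(std::floor(z));
    float xf = x - xi, yf = y - yi, zf = z - zi;
    float c00 = lerp(hash3(xi, yi, zi),     hash3(xi+1, yi, zi),     xf);
    float c10 = lerp(hash3(xi, yi+1, zi),   hash3(xi+1, yi+1, zi),   xf);
    float c01 = lerp(hash3(xi, yi, zi+1),   hash3(xi+1, yi, zi+1),   xf);
    float c11 = lerp(hash3(xi, yi+1, zi+1), hash3(xi+1, yi+1, zi+1), xf);
    return lerp(lerp(c00, c10, yf), lerp(c01, c11, yf), zf);
}

// Classic marble-ish pattern: sine of position perturbed by a few noise octaves.
static float marble(float x, float y, float z) {
    float n = valueNoise(x, y, z) + 0.5f * valueNoise(2*x, 2*y, 2*z)
            + 0.25f * valueNoise(4*x, 4*y, 4*z);
    return 0.5f + 0.5f * std::sin(4.0f * x + 6.0f * n);
}

int main() {
    // Print a small swatch: no data read, lots of math per sample.
    for (int y = 0; y < 16; ++y) {
        for (int x = 0; x < 48; ++x)
            std::putchar(" .:-=+*#"[int(marble(x * 0.15f, y * 0.15f, 0.0f) * 7.99f)]);
        std::putchar('\n');
    }
    return 0;
}
```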
 
Depends on the type of pixels. Dull or pretty ones? ;)



Games are about art style and art people's ability to tweak stuff to look pretty. Procedural stuff doesn't lend itself to that; that's why it's not very viable for AAA development.

This and memory access. But w/o dedicated HW to do procedural stuff, you're trading a lot of compute power to do stuff like marble, or any noise for that matter. Where do you get your random data from? A texture? Cool, but your point was to minimize texture (or data in general) access. And then you've got to do a lot of postprocessing on the data itself, which takes cycles. A lot of them.

The discussion wasn't really so much about procedurals, but I do want to address this because I still see it all over the net.

The idea that you can't use procedurals to do artsy stuff is something of an old myth. That was mainly the case when developers couldn't spare the polygon budget to model small objects and procedurals were plain, simple and in their infancy.

Take for example individual pieces of a character's attire. They would use textures or images, because they could draw the belt buckles, and the buttons, and add some touches to that 2D image. Or they would model them in 3D, but then unwrap the characters, turn the detail into 2D images which would later be skinned and the 3D detail brought back to life with some sort of bump mapping. The latter process is still commonplace today, but it has changed somewhat.

Now we are also beginning to display the 3D belt buckle in-game as an actual 3D object for the up-close LOD models, the models closest to the camera. What we need now is to be able to show the leather texture pattern to make it look realistic, and to be able to zoom in on something like that and maintain clarity and resolution. Now you would use procedurals for that texture. Static images would lose out if you have the processing to spare. And that goes for the remaining pieces of your attire.

There are also a lot of new tools for procedurals today that allow for things not possible before. You can actually use them as a brush and mix them with image textures or other procedurals to create new regular image textures, or draw directly on a character. So a portion of your character's pant leg might be textured with one procedural, another portion with another, and in between you can have a static image if you so wanted.

If you're creating a 2D side scroller, and you really want hand-drawn art, or a very cartoonish game, or something like Project Spark, then you're still going to rely on good old-fashioned 2D images for a lot of your art, but for 3D games things have changed a lot.

Whether it's dirt, whether it's cloth, whether it's leather or metal... now that we're at a point where we're displaying the material's actual granular texture, procedurals are going to play a big role and could soon enough unseat image textures as the most commonly used type of texture in games. And all the arty stuff will still be there, but it will use actual 3D geometry to sculpt it out and present the art style. A lot of what used to be done in a 2D paint program to create a 2D image that would then be skinned onto a character is starting to move over to actual 3D sculpting using actual polygons, painted with procedurals, which we now have the power to carry over to the game engine itself.

You can also animate them, just as you do GIFs, but they require far less space to do so, which is a big deal. Using animated procedurals with bump mapping or displacement, or even tessellation, and realistic lighting can lead to some really interesting looks and effects.
 
First of all: we were talking about pixels, not models. Marble and sky were the examples provided and for a very good reason - this discussion was about ray tracing (cone tracing, sphere tracing) cost and texture lookup expenses were brought into the mix. Second of all: adding variety to the scene by combining characters from a set of elements is not what I would consider to be a procedurally generated content. Building dungeon from shapes in a roguelike - sure. Randomly generated/mutated trees? Sure. Spawning characters from a set of n heads, m bodies and o accessories is no more procedural than rolling for the amount of loot in the chest. Sure, it's random. But there's little "procedure" (recipe, steps,...) involved. What middleware vendors sell as procedural today became more widely known as layered materials, detail maps and such. Epic and Ready at Dawn were recently presenting stuff very similar to what Allegorithmic used to sell as ProFX.
 
First of all: we were talking about pixels, not models. Marble and sky were the examples provided and for a very good reason - this discussion was about ray tracing (cone tracing, sphere tracing) cost and texture lookup expenses were brought into the mix. Second of all: adding variety to the scene by combining characters from a set of elements is not what I would consider to be a procedurally generated content. Building dungeon from shapes in a roguelike - sure. Randomly generated/mutated trees? Sure. Spawning characters from a set of n heads, m bodies and o accessories is no more procedural than rolling for the amount of loot in the chest. Sure, it's random. But there's little "procedure" (recipe, steps,...) involved. What middleware vendors sell as procedural today became more widely known as layered materials, detail maps and such. Epic and Ready at Dawn were recently presenting stuff very similar to what Allegorithmic used to sell as ProFX.

What in the world are you talking about? Neither would I, but I honestly have no idea where that argument ever entered into the mix. Unless you were answering someone else.

I was talking about procedural textures.
 
Take for example individual pieces of a character's attire. They would use textures or images, because they could draw the belt buckles, and the buttons, and add some touches to that 2D image. Or they would model them in 3D, but then unwrap the characters, turn the detail into 2D images which would later be skinned and the 3D detail brought back to life with some sort of bump mapping.
I read this as constructing characters from 3D pieces (belt buckles and stuff). This wasn't your intent? K, no problem. No need to get agitated. Bad form.
 
There's a lot of internet chatter that just quotes terms and phrases without understanding. There will be people who read Voxel Cone Ray Tracing and equate it to full-fledged ray-tracing because it sounds the same.

That's this thread
 