Wouldn't moving to a REYES-like system help PS3 a lot?

Simon F said:
IIRC, I've read that a typical (Pixar) movie RenderMan frame may also access hundreds of megabytes of texture data.

Can't see textures going away anytime in the next few years.

That is correct. We have several 8K texture pages for our characters right now, and each has at least color, displacement and specular layers... And that's not even for movie production. A pal of mine working at MPC in the UK recently had to work with hundreds (!) of 2K maps for a single object.

Pixar's RenderMan is actually heavily optimized for textures; it has automatic MIP-mapping and lazy loading, for example.

Also, just think about it... how else would you color vertices that are only generated at render time from a higher-order surface? The whole point of using subdivs or NURBS is that there's less data to pass through the pipeline, from modeling through UV mapping, skinning, occlusion culling and so on. Textures are a MUST.
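For the curious, a minimal RenderMan Shading Language sketch of what I mean (the shader and parameter names here are made up, not from any real production): the lookup is driven by the surface's own parametric (s,t), which every micropolygon vertex has no matter how finely the subdiv or NURBS gets diced, and texture() does the filtered, MIP-mapped access mentioned above.

surface paintedColor(
    string colormap = "";      /* hypothetical painted color map */
    float Ka = 1, Kd = 0.8; )
{
    /* s and t exist on every micropolygon vertex, however finely
       the higher-order surface was diced at render time */
    color Csurf = Cs;
    if (colormap != "")
        Csurf = color texture(colormap, s, t);  /* filtered, MIP-mapped lookup */

    normal Nf = faceforward(normalize(N), I);
    Oi = Os;
    Ci = Oi * Csurf * (Ka * ambient() + Kd * diffuse(Nf));
}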
 
And what has raytracing got to do with this anyway?? The big idea behind REYES is to throw away everything you don't see, which is the total opposite of raytracing... where you need to have the whole scene ready in case a ray gets bounced outside the camera frustum. That's why it's taking Pixar so long to add raytracing and all the required acceleration features and so on...

And this:
Guden, i think he was (quite rightly so) pointing out that 3D models used in games are *often* created using raytracers to begin with, then scaled back from hundreds of thousands of polygons to models that are usable in today's games.

I can't understand a word of this. What has raytracing, a rendering technique, got to do with modeling polygonal objects?? And how many games are using normal maps generated from high-res models today? DX2 and Far Cry - as Doom 3 isn't out yet... And what have high-res models got to do with raytracing, either?

Not to mention that REYES does not equal micropolygon tessellation and stochastic sampling; it just makes them easier to implement. Pixar has them patented though, so not too many other RenderMan renderers support these. And don't forget that RenderMan is the standard for scene description, and Pixar's PhotoRealistic RenderMan is only one app that uses that standard...

Edit: so, implementing a REYES architecture won't automatically enable good motion blur, AA and displacement either.


All in all I think most of you guys' knowledge is very lacking in these areas, but already you are going wild with speculation... try to read up on the subject.
www.renderman.org
 
I have to agree with the previous poster....
I just wonder what some of you think the Reyes architecture is, and how exactly that changes the way that content is authored. Just what is this massive paradigm shift?
 
london-boy said:
Guden, i think he was (quite rightly so) pointing out that 3D models used in games are *often* created using raytracers to begin with

Um, do you mean modellers create models in the modelling programs that are part of a 3D rendering package, perhaps? Because raytracing is a rendering technique and has nothing to do with modelling... You could feed POV-Ray a text file with a few parameters in it and get a couple of raytraced spheres on the screen as a result. That's raytracing, man. :)

then scaled back from hundreds of thousands of polygons to models that are usable in today's games.

I doubt too many game developers actually work this way, because letting the modelling program auto-LOD for you tends to be inefficient compared to building the model by hand to a certain polycount.

So, having a platform that lets you use your original raytraced model would be useful

...Yes, very useful, if the game you're making happens to be Diablo 2 or Killer Instinct. ;) Raytraced models make good sprites, but aren't of much use in 3D-animated games, heh.

and not a paradigm shift at all. There would be no re-training to do

Except a game isn't all models; environments would also be micropolys, and the engine has to be programmed and optimized with that in mind, etc...

It's not just the matter of modelling the characters and throwing them up on the screen.
 
london-boy said:
Yeah, my thoughts exactly. Sony's been heading towards a POLYGONS POLYGONS POLYGONS approach for quite a while; ultimately I think they will definitely push for a texture-less platform. We just have to see which platform it will be... ;)


I'm dying to see how the hell these bods in the labs will manage to cool these chips that are producing x number of polys / micropolys / sub-pixel displaced NURBS a second. Isn't this the biggest problem? LN2 cooling isn't exactly user-friendly :D
 
Guden Oden said:
...Yes, very useful, if the game you're making happens to be Diablo 2 or Killer Instinct. ;) Raytraced models make good sprites, but aren't of much use in 3D-animated games, heh.


With normal maps gradually becoming more widespread, at least on the PC games side of things, most devs are now using high-poly-count models to generate the maps from. You'll still need a lower-detail game model for the normal maps to go on, though (at least until we get systems capable of x billion/trillion sub-pixel polys / NURBS a second).

If anything, rather than a shift to a different rendering approach, the biggest shift I hope to see happen in the 3D industry, whether it's for CG or games, is when everyone uses one format for objects, one coordinate system for animation data, one file format for image data (hopefully OpenEXR?) and one for scene description / lighting / layout.

The world's drought problem would also be solved by all the tears of joy from developers and artists all over the globe (assuming they all agreed on it... but that's a bigger problem altogether :) )

...sorry, went OT,....rambling.... :oops:
 
Also, just think about it... how else would you color vertices that are only generated at render time from a higher-order surface?

RtColor Cs = { 1.0, 0.5, 0.25 };   /* example value */
RiColor(Cs);

In any case, without a texture you can define a surface color in your shader and control it programmatically (although textures are easier for "artists" to work with :p )
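A toy example of that in RSL (everything here is made up for illustration, not production code): the pattern comes straight from the shading point, so no texture file and no UVs are involved.

surface proceduralSpeckle(
    float Ka = 0.2, Kd = 0.8;
    color base = color(0.6, 0.5, 0.4);
    color fleck = color(0.25, 0.2, 0.2);
    float freq = 20; )
{
    /* color computed directly from the shading point P */
    float n = noise(P * freq);
    color Csurf = mix(base, fleck, smoothstep(0.4, 0.7, n));

    normal Nf = faceforward(normalize(N), I);
    Oi = Os;
    Ci = Oi * Csurf * (Ka * ambient() + Kd * diffuse(Nf));
}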

The big idea behind REYES is to throw away everything you don't see, which is the total opposite of raytracing... where you need to have the whole scene ready in case a ray gets bounced outside the camera frustum. That's why it's taking Pixar so long to add raytracing and all the required acceleration features and so on...

Well, for the longest time you'd use BMRT as a rayserver for PRMan instead... :p Anyway, these days most hybrid renderers (which is pretty much most of them, RiSpec or not) can switch between scanline and raytracing per object, usually with a declaration on the object or by switching the hider...

Edit: so, implementing a REYES architecture won't automatically enable good motion blur, AA and displacement either.

Hehe, otherwise AIR and RenderDotC would be doing A LOT better... :)

All in all I think most of you guys' knowledge is very lacking in these areas, but already you are going wild with speculation... try to read up on the subject.

Actually we use PRman, Mray, and Jig on pretty much a routine basis... :p
 
ERP said:
I have to agree with the previous poster....
I just wonder what some of you think the Reyes architecture is, and how exactly that changes the way that content is authored. Just what is this massive paradigm shift?

What I meant was micro-polygon based rendering (including the nice slice 'n' dice to tessellate everything into micro-polygons) and stochastic sampling, basically.

I like the fact that shading is done uniformly on all micro-polygons in one big (and often long) swoop, without having to go back and forth between per-vertex, per-pixel and sub-pixel levels; I like that, except between the eye and the front plane, clipping can be greatly simplified; and I like how much easier and more logically straightforward displacement mapping is.
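To illustrate that last point, a bare-bones RSL displacement shader (just a sketch; the names are invented) is little more than pushing each diced grid point along its normal before the hider ever sees it:

displacement simpleDisplace(
    string dispmap = "";    /* hypothetical painted displacement map */
    float Km = 0.1; )
{
    float amount = 0;
    if (dispmap != "")
        amount = float texture(dispmap, s, t);

    /* the micropolygon grid already exists, so true displacement is
       just moving each point and recomputing the normal */
    P += Km * amount * normalize(N);
    N = calculatenormal(P);
}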

I have had the chance to read Cook's and Carpenter's original REYES paper several times, and I have read some papers that compared OpenGL and REYES. Still, I am no expert (far from it), but I know that if I were one I would not come into a thread like this and look down upon the vulgus with near disgust on my face.

All in all I think most of you guys' knowledge is very lacking in these areas, but already you are going wild with speculation... try to read up on the subject.


Not to mention that REYES does not equal micropolygon tessellation and stochastic sampling; it just makes them easier to implement

Well, in the original paper by Cook and Carpenter describing the REYES pipeline, those two techniques are quite fundamental to the goals put forward by the paper's authors: no aliasing (noise instead, which the eye likes much more) and efficient, logical rendering.

When I said "what about a REYES-like architecture" I used the word architecture, which is a bit out of place in the context of what I was saying, but the idea was to take the ideas behind the rendering pipeline proposed for REYES by Cook and Carpenter and optimize them for real-time usage on a machine like PlayStation 3 (maybe fewer samples in the stochastic AA process? maybe a bigger size for the micro-polygons? maybe use of deferred T&L when displacement mapping is not used?).
 
When I said "what about a REYES-like architecture" I used the word architecture, which is a bit out of place in the context of what I was saying, but the idea was to take the ideas behind the rendering pipeline proposed for REYES by Cook and Carpenter and optimize them for real-time usage on a machine like PlayStation 3 (maybe fewer samples in the stochastic AA process? maybe a bigger size for the micro-polygons? maybe use of deferred T&L when displacement mapping is not used?).

Of course, on something architecturally like the BE, you're going to have to distribute the scene into buckets (like we messed with on the GScube). The only problem with that is that a pretty gnarly bucket can stall a particular node while the others idle...
 
ERP said:
I just wonder what some of you think the Reyes architecture is, and how exactly that changes the way that content is authored. Just what is this massive paradigm shift?
Does there need to be one? I would think it's a GOOD thing if you don't impose a paradigm shift on your content creators? :p

Moody said:
I'm dying to see how the hell these bods in the labs will manage to cool these chips that are producing x number of polys / micropolys / sub-pixel displaced NURBS a second. Isn't this the biggest problem?
I don't think so.
A micropoly rasterizer doesn't need a lot of the things a conventional one does (as pointed out in that other thread).
And the shader load before visibility optimizations shouldn't really be much different from doing it on pixels, so I don't really see a big problem there.
 
archie4oz said:
RtColor Cs = { 1.0, 0.5, 0.25 };   /* example value */
RiColor(Cs);

In any case, without a texture you can define a surface color in your shader and control it programmatically.


I guess you know well enough what I'm talking about... It's about creating color detail for complex models, like a human character, or badly worn metal plating on a combat vehicle, or cloth, or anything.
Replicating the detail and level of control in a bitmap texture is IMHO not possible in any fundamentally different way.
Coloring the millions of vertices themselves by assigning vertex color is almost the same as painting textures on a properly UV mapped model.

Procedural texturing might be a good and viable alternative in many cases, but it does not allow for the fine artistic control of a bitmap texture, and it's slower too. A face or a simple logo is already too much for procedurals. You can do a lot with them, as demonstrated in Pixar movies, but they aren't really suited to a photorealistic result. Most movie VFX require either hand-painted or photographed-and-processed textures, and I believe there's no reason to throw out a working method.
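One common compromise, sketched very roughly below (the names are invented; assume the usual RSL shadeops), is to layer the two: the painted map carries the detail and a cheap procedural only modulates it.

surface paintedPlusGrime(
    string colormap = "";       /* hypothetical hand-painted color map */
    float grimeFreq = 8, grimeAmount = 0.3;
    float Ka = 0.2, Kd = 0.8; )
{
    color Csurf = Cs;
    if (colormap != "")
        Csurf = color texture(colormap, s, t);   /* the painted detail */

    /* the procedural layer only darkens the map slightly;
       it isn't expected to carry fine detail itself */
    float grime = smoothstep(0.5, 0.8, float noise(P * grimeFreq));
    Csurf *= 1 - grimeAmount * grime;

    normal Nf = faceforward(normalize(N), I);
    Oi = Os;
    Ci = Oi * Csurf * (Ka * ambient() + Kd * diffuse(Nf));
}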


Well, for the longest time you'd use BMRT as a rayserver for PRMan instead... :p
Anyway, these days most hybrid renderers (which is pretty much most of them, RiSpec or not) can switch between scanline and raytracing per object, usually with a declaration on the object or by switching the hider...

We both know the problem I was talking about. Yes, it can be done, I haven't said otherwise, but it's usually slow as a snail, and it throws out all the optimization you were getting from using REYES.
Raytraced ambient occlusion, for example, is still not good enough in PRMan without the proper caching and optimization routines.
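To put "slow" in context, the brute-force version is roughly the sketch below (assuming the occlusion() shadeop of raytracing-capable RenderMan renderers; the shader name is made up). Every shaded grid point fires its own bundle of rays unless an irradiance cache is layered on top, and that's where the render time goes.

surface bruteForceAO(
    float samples = 256; )
{
    normal Nf = faceforward(normalize(N), I);
    /* each shaded micropolygon fires 'samples' occlusion rays into
       the scene; without caching this repeats for every grid point */
    float occ = occlusion(P, Nf, samples);
    Oi = Os;
    Ci = Oi * (1 - occ);
}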

Also, BMRT is no longer available because of the Pixar-Exluna lawsuit.


Actually we use PRman, Mray, and Jig on pretty much a routine basis... :p

I wasn't talking about you, actually... :)
 
Laa-Yosh said:
Replicating the detail and level of control in a bitmap texture is IMHO not possible in any fundamentally different way.
Very true... but it's possible to dramatically change the way a texture is represented on a surface, stretching the idea of texture maps to the point where mapping coordinates are not needed anymore. In fact you could just use a different vertex color for each vertex in a mesh with MANY triangles. There are a lot of papers (and I believe we are just in the infancy of this research field) that show how to represent, encode and compress scalar values on a 2D manifold. I bet at the next SIGGRAPH we'll see even more work along these lines... :)

Coloring the millions of vertices themselves by assigning vertex color is almost the same as painting textures on a properly UV mapped model.
That's the idea.

Also, BMRT is no longer available because of the Pixar-Exluna lawsuit.
The lawsuit was settled by NVIDIA, who bought Exluna, and Pixar.
http://film.nvidia.com/page/home

ciao,
Marco
 
nAo said:
Very true... but it's possible to dramatically change the way a texture is represented on a surface, stretching the idea of texture maps to the point where mapping coordinates are not needed anymore.

I do agree that the one field in desperate need of research and development is UV mapping. Tools are barely adequate...
However, I don't think there's room for radical changes in the approaches - 2D image editing is still a must-have tool, despite the many 3D paint applications (Deep Paint, BodyPaint, and to an extent ZBrush). Thus we need images to act as the textures, images that can be viewed and edited in 2D. The ability to use processed digital photos is a huge time saver - with 2-3 megapixel cameras, you can go out and get source material for practically everything. Various image filters are also frequently used tools, for example the high-pass filter to create tileable textures.
Then there are the data storage and management issues, like easy MIP mapping for level of detail, or the well-developed methods for dealing with texture aliasing.

If any new paradigm wants to replace bitmap textures, the benefits must be huge to compensate for the lack of these features. I'm obviously not qualified to make any predictions :) but I believe that the general concept of textures is as good as it gets for most cases; we only need better tools for dealing with them.

The lawsuit was settled by NVIDIA, who bought Exluna, and Pixar.

Part of that agreement was that Larry Gritz had to discontinue BMRT if I recall correctly - it is no longer available. That's probably the reason why he decided to completely leave the software rendering field.
 
Laa-Yosh said:
However, I don't think there's room for radical changes in the approaches - 2D image editing is still a must-have tool, despite the many 3D paint applications (Deep Paint, BodyPaint, and to an extent ZBrush). Thus we need images to act as the textures, images that can be viewed and edited in 2D.
I'm not advocating the end of textures as bitmaps, so I completely agree with you.
I'm just saying that, in the end, we can represent textures (as bitmaps with their mapping coordinates) in another, more efficient scheme.
More efficient means with less storage and with better memory access patterns.
So I'm not saying we have to change the art pipeline, but just to introduce a new step in that pipeline.

Part of that agreement was that Larry Gritz had to discontinue BMRT if I recall correctly - it is no longer available. That's probably the reason why he decided to completely leave the software rendering field.
What is Mr. Gritz working on now?

ciao,
Marco
 
nAo said:
So I'm not saying we have to change the art pipeline, but just to introduce a new step in that pipeline.

I'm not against that either... but I'd first focus research on better UV mapping, as content creation is already two steps behind the speed that productions require :)

What is Mr. Gritz working on now?

He'll be a speaker at the 3D Festival in Denmark this May. His presentation is called "hardware assisted production rendering" :)))
 