PS3: Phil Harrison addresses the real-time vs CGI issue

From: http://www.firingsquad.com/features/xbox_360_interview/page6.asp

ATI: Yeah I really think it’s just an accident because, well you know, last summer they had to change their plans. They found out that Cell didn’t work as well as they wanted to for graphics. Remember originally you had two or three Cell processors doing everything and then in August last year they had to take an NVIDIA PC chip. And as you know, all PC chips do this, and so it [dual HD display outputs] just came for free.

Jawed
 
Yeah Jaws, thanks for showing that picture. The Cell and RSX will work together to give developers the power to bring better graphics and physics to us gamers. I think it's here (and other reasons too) that the PS3 will outshine the X360. See, people keep asking, "If the X360 has three 3.2 GHz cores and its GPU has more pipes, then how can the PS3 be so much better?" And people who defend the X360 use that line, or something like it, all the time.

To me, Jaws just pointed out one big reason how. If the Cell processor is roughly double the X360's CPU, it kinda makes sense.
 
Titanio said:
Sub-Surface Scattering. It's been the latest thing in offline renders next to GI. It's where light passing through a material gets scattered around, picking up colour (being filtered). It's what gives the red glow in your hands when you place them over a bright light.
The two pictures I've found here illustrate SSS
http://www.news.cornell.edu/releases/Jan04/Marschner.award.ws.html
It's a majorly processor-intensive operation, taking multiple volumetric samples when done accurately. The RSX demo showed this effect, but I don't know how true it was, or whether it was just clever shading. I'm really keen to know what raycasting techniques are actually supported on RSX and what's just trickery.
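
To put a number on "majorly processor intensive", here's a minimal single-scattering sketch in Python. Everything in it (the material constants, the slab geometry) is a made-up illustration, not anything from the RSX demo; the point is that the cost is linear in the volumetric sample count per shading point:

```python
import math

# Made-up illustrative constants: absorption is lowest in red, so light
# that travels through the material comes back red-tinted, like a hand
# held over a bright light.
SIGMA_A = (0.3, 1.2, 1.6)   # absorption per colour channel, 1/mm
SIGMA_S = 2.0               # scattering coefficient, 1/mm

def single_scatter_reflectance(num_samples, max_depth=5.0, cos_view=0.8):
    """Estimate light that enters the surface, scatters once at some
    depth, and travels back out toward the viewer. Each volumetric
    sample is one scattering depth; more samples = less noise = more work."""
    step = max_depth / num_samples
    result = [0.0, 0.0, 0.0]
    for i in range(num_samples):
        depth = (i + 0.5) * step
        for c in range(3):
            sigma_t = SIGMA_A[c] + SIGMA_S
            attenuation_in = math.exp(-sigma_t * depth)              # way in
            attenuation_out = math.exp(-sigma_t * depth / cos_view)  # way out
            result[c] += SIGMA_S * attenuation_in * attenuation_out * step
    return result

print(single_scatter_reflectance(num_samples=256))
# A renderer pays this per shading point, per light, per frame.
```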
 
Jaws said:
CPU<=>GPU two-way comms has been expected since Hofstee's presentation last year...

Software rendering that is integrated with hardware acceleration... this is what we get with CELL<=>RSX

While plausible, let's play devil's advocate for a moment...

Was that slide before or after they realized they were going to do something other than the BE setup? From the looks of the RSX and what ATI is saying outright, the RSX is just a PC part and seems rather "last moment".

While I have no doubt that the Cell can aid the RSX (I think Phil Harrison even mentioned it could do vertex shading), and the CELL is sweet, the question is how much can we truly expect?
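
For what it's worth, "Cell doing vertex shading" presumably means the CPU transforming vertices into a buffer that the GPU then rasterises. A toy sketch of that division of labour (plain Python with a made-up mesh, no actual Cell or RSX API):

```python
import math

def make_rotation_y(angle):
    """4x4 row-major rotation about Y - the kind of matrix a vertex shader applies."""
    c, s = math.cos(angle), math.sin(angle)
    return [[ c, 0, s, 0],
            [ 0, 1, 0, 0],
            [-s, 0, c, 0],
            [ 0, 0, 0, 1]]

def transform_vertices(matrix, vertices):
    """CPU-side 'vertex shading': transform each vertex, then hand the
    resulting buffer to the GPU for rasterisation. Streaming loops like
    this are exactly the kind of work the SPEs were pitched for."""
    out = []
    for (x, y, z) in vertices:
        v = (x, y, z, 1.0)
        out.append(tuple(sum(matrix[r][c] * v[c] for c in range(4))
                         for r in range(3)))
    return out

mesh = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(transform_vertices(make_rotation_y(math.pi / 2), mesh))
```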
 
Do you really think it wise to ask ATI for info on nVIDIA's designs? :D

I understand their logic, and they may be right that it's just a PC part, but it's not the ideal source for such information.

As long as RSX and Cell can talk to each other across the 25 GB/s bandwidth, it's really down to the programmers to make something of it. E.g. Cell could raytrace a landscape in realtime and RSX could populate it in realtime.
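
To make the landscape example concrete, here's a toy heightfield ray-march in Python - the standard way to "raytrace" terrain, and embarrassingly parallel, so in principle each SPE could take a block of rays. The terrain function and step sizes are made up:

```python
import math

def height(x, z):
    # Made-up procedural terrain standing in for real data
    return 0.4 * math.sin(x) * math.cos(z)

def raymarch_terrain(origin, direction, max_dist=50.0, step=0.05):
    """March a ray forward until it drops below the heightfield;
    returns the hit point, or None if the ray escapes."""
    t = 0.0
    while t < max_dist:
        x = origin[0] + direction[0] * t
        y = origin[1] + direction[1] * t
        z = origin[2] + direction[2] * t
        if y <= height(x, z):
            return (x, y, z)
        t += step
    return None

# One ray looking slightly downward from above the terrain
print(raymarch_terrain(origin=(0.0, 2.0, 0.0),
                       direction=(0.6, -0.2, 0.77)))
```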
 
I know this is hardly a profound insight, but the X360/PS3 situation seems almost a carbon copy of GC/Xbox: a custom GPU by ATI (with embedded RAM, no less) vs an Nvidia design derived heavily from a PC part. While the custom part might have a cost advantage over the PC derivative, I don't think it's inherently better as far as performance and features are concerned.
 
I'd have thought the custom part would be more expensive than a PC part. nVIDIA's chip has an 'economy of scale' that they have mentioned, as they'll make it for PS3 and PCs. ATI's components will be fabbed for the 360 only.
 
Subsurface scattering requires raytracing and it's very slow when done correctly. If you don't take enough samples, it produces some quite ugly noise.
However, you can cache the results for a static mesh, and if you're careful, you can even do some small animations as well. There have been many papers and techdemos on this for a while now. I believe that the Molina head demo had such a precomputed SSS solution - its quality was amazing, good enough to go into a feature-film VFX shot.

I don't think that it was realtime. I mean the freaking thing's raytracing takes minutes to render on a dual P4/A64 config in video resolution (and even hours for a film res image), so it just can't get sped up a few thousand times, not even on Cell, to make it work in realtime.
 
Shifty Geezer said:
I'd have thought the custom part would be more expensive than a PC part. nVIDIA's chip has an 'economy of scale' that they have mentioned, as they'll make it for PS3 and PCs. ATI's components will be fabbed for the 360 only.

First of all, I am fairly sure that RSX is not a PC part, but rather a derivative. As such, they are likely to save on R&D but not on manufacturing. Second, I'd imagine that one of the design specifications MS issued was a cost target - something Sony would have much less flexibility with if they simply called Nvidia and asked them to adapt G70/75/80/whatever to their system. However, R500 does look pretty complex (core+RAM), so who knows.
 
I feel like I'm in a computer graphics design classroom right now. :oops:
I'm learning so much. I see why I became a member here.

On topic: the following quote comes from Steve Marschner, a professor of computer science at Cornell who co-developed the SSS technique (simulating the subsurface scattering of light in translucent materials) with two colleagues from Stanford.

Translucent materials range from marble to human skin. The method has been adopted by several commercial animation studios and is most notably seen in the character Gollum in "The Lord of the Rings" trilogy.
"One of the reasons [Gollum] looks as realistic as he does is because of the translucent skin," Marschner explains. The technique was also used in the last "Matrix" film. "One of the things that surprised us was how many materials are translucent," Marschner recalls. "Just about anything except metal, in fact." Among others are marble, cloth, paper, skin, milk, cheese, bread, meat, fruits, plants, fish, ocean water and snow. When light strikes such surfaces, some of it is reflected directly, but some also penetrates, is reflected underneath and returns, being scattered in the process. Computer methods that simulate only the reflection from the surface produce a hard, metallic effect.

Marschner noticed this while working on the Michelangelo Project at Stanford, which involves 3D scanning of the sculptures and architecture of Michelangelo. A computer rendering that assumes all light reflects from the surface makes a statue look like plaster, but adding translucency makes it look like marble. The research team's major contribution was to develop a mathematical method that allows simulation of translucency without requiring excessive computer processing time.

Now that completely explains this new technique. The question I have is: can or will the X360 do this technique called SSS?
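
For reference, the "mathematical method" the article describes is the dipole diffusion approximation from Jensen, Marschner, Levoy and Hanrahan's 2001 paper "A Practical Model for Subsurface Light Transport". Here is the published diffuse reflectance profile transcribed into Python (the material numbers in the example call are placeholders, not measured skin values):

```python
import math

def dipole_Rd(r, sigma_s_prime, sigma_a, eta=1.3):
    """Diffuse reflectance R_d(r) of the Jensen et al. 2001 dipole model:
    how much light re-emerges at distance r from where it entered."""
    sigma_t_prime = sigma_s_prime + sigma_a          # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime      # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)
    # Internal diffuse reflection due to the refractive index boundary
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)
    z_r = 1.0 / sigma_t_prime                        # real source depth
    z_v = z_r * (1.0 + 4.0 / 3.0 * A)                # virtual (mirror) source
    d_r = math.sqrt(r * r + z_r * z_r)
    d_v = math.sqrt(r * r + z_v * z_v)
    return (alpha_prime / (4.0 * math.pi)) * (
        z_r * (sigma_tr * d_r + 1.0) * math.exp(-sigma_tr * d_r) / d_r**3 +
        z_v * (sigma_tr * d_v + 1.0) * math.exp(-sigma_tr * d_v) / d_v**3)

# Reflectance falls off smoothly with distance - the soft glow of marble/skin
for r in (0.1, 0.5, 1.0, 2.0):
    print(r, dipole_Rd(r, sigma_s_prime=1.0, sigma_a=0.05))
```

The key point the article makes is visible in the formula: it replaces expensive volumetric sampling with a closed-form profile, which is why it cut processing time so dramatically.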
 
Acert93 said:
Jaws said:
CPU<=>GPU two-way comms has been expected since Hofstee's presentation last year...

Software rendering that is integrated with hardware acceleration... this is what we get with CELL<=>RSX

While plausible, let's play devil's advocate for a moment...

Was that slide before or after they realized they were going to do something other than the BE setup? From the looks of the RSX and what ATI is saying outright, the RSX is just a PC part and seems rather "last moment".

While I have no doubt that the Cell can aid the RSX (I think Phil Harrison even mentioned it could do vertex shading), and the CELL is sweet, the question is how much can we truly expect?

What's so last moment? Sorry, but I don't see the relevance to what I posted. Anyway, this stuff has been discussed in countless threads...

The FlexIO was designed to be, *gasp*, flexible in connecting CELL to other ICs... e.g. GPUs...
 
Full dynamic SSS in realtime won't be possible with this generation IMHO. It just takes far too many calculations.

Various fake solutions will be there, like ATI's lightmap-blurring trick lifted from the Matrix sequels' VFX. The Dark Sector techdemo seems to be using a similar technique, for example.
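
The trick in question is texture-space diffusion: render the lighting into the model's UV space, blur it there so light bleeds across the surface, then read the blurred map back when shading. A bare-bones sketch of just the blur step in Python/NumPy (kernel radius and pass count are made-up values; np.roll wraps at the edges, which is fine for a toy):

```python
import numpy as np

def blur_irradiance(irradiance_map, radius=2, passes=2):
    """Texture-space diffusion: box-blur the irradiance map so light
    'bleeds' across the surface, faking subsurface scattering. Far
    cheaper than volumetric sampling, at the cost of accuracy."""
    img = irradiance_map.astype(np.float64)
    width = 2 * radius + 1
    for _ in range(passes):
        # Separable box blur: horizontal pass, then vertical pass
        img = sum(np.roll(img, dx, axis=1)
                  for dx in range(-radius, radius + 1)) / width
        img = sum(np.roll(img, dy, axis=0)
                  for dy in range(-radius, radius + 1)) / width
    return img

# Toy 8x8 'lightmap' with one hot texel; the blur spreads it out smoothly
lightmap = np.zeros((8, 8))
lightmap[4, 4] = 1.0
print(blur_irradiance(lightmap).round(3))
```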
In fact, I'd be disappointed if developers did not deliver more realistic and varied materials this generation. Bumpy specular stuff has already been criticised to hell in Doom3 (no pun intended)...
 
To Laa-Yosh


I don't think that it was realtime. I mean the freaking thing's raytracing takes minutes to render on a dual P4/A64 config in video resolution (and even hours for a film res image), so it just can't get sped up a few thousand times, not even on Cell, to make it work in realtime.

He stated in the demo like 3 or 4 times that it was running in real-time. Watch it again. Matter of fact, click here to see -> http://media.ps3.ign.com/articles/6...le/615/615000/sonycon_demos_enviro_wmvlow.wmv

Full dynamic SSS in realtime won't be possible with this generation IMHO. It just takes far too many calculations.

Read this Laa-Yosh

The Doc Ock head - the Alfred Molina head - is actually more of a Cell demo than it is a graphics demo, because we're calculating hugely complicated light sources in real-time on the Cell, even to the point where we calculate the angle at which light enters the skin, the way that the light is then coloured by your blood, and the way that it is then reflected back out. It's something called transmission. Skin is hugely complicated - if I put my finger over a light, for example, you can see that the light is coming through my skin. We were simulating that - emulating, simulating, kind of a fine line - we were simulating that on the Doc Ock head demo.

That's from the horse's mouth, aka Phil Harrison.

Anyway, isn't this one of the main reasons Sony built the Cell, so that it can perform at this kind of level?
 
mckmas8808 said:
He stated in the demo like 3 or 4 times that it was running in real-time.
...
Read this Laa-Yosh
...
Anyway, isn't this one of the main reasons Sony built the Cell, so that it can perform at this kind of level?

Maybe you should read what I wrote again. The demonstration is obviously realtime - and amazingly good looking - but the SSS solution itself is IMHO not. They are using a lot of cached data, like Quake used precalculated lightmaps to imitate detailed shadows and lighting that was just not possible to render in realtime on that hardware. Now we have games like Doom3 and UE3 stuff where most of the lighting is realtime, and I'm sure that as hardware advances, we'll also see more and more complicated effects calculated in 1/60th of a second.

But I'd eat my hat if any hardware would be able to fully raytrace a subsurface scattered human head in realtime, in 1920*1080 resolution. Even Sony cannot beat the rules of mathematics.
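
A quick back-of-envelope supports that - every figure below is an assumption, but even charitable ones don't close the gap:

```python
# Rough feasibility arithmetic (every figure here is an assumption):
pixels = 1920 * 1080           # full HD frame
fps = 60
sss_rays_per_pixel = 100       # modest for brute-force subsurface sampling
flops_per_ray = 500            # traversal + shading, optimistic

required = pixels * fps * sss_rays_per_pixel * flops_per_ray
cell_peak = 200e9              # ~200 GFLOPS, roughly Cell's quoted SP peak

print(f"required: {required / 1e12:.1f} TFLOPS "
      f"vs Cell peak: {cell_peak / 1e12:.1f} TFLOPS")
# required: 6.2 TFLOPS vs ~0.2 TFLOPS - a ~30x shortfall even with
# charitable numbers, which is why cached/approximate approaches win.
```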
 
Then you and I will both be eating hats. If I see this in real-time I'm going to be really excited. I don't know the super-technical stuff about how and why; I'm just passing on what the person who actually had something to do with the project said himself. If he wants to lie to everybody for the next 18 months and then get fired for it, well. :?

I doubt Sony would let him just go around lying like that - first in front of hundreds of thousands of people on stage at a big event, then to a reporter who probably writes for thousands of people. I just don't think the man is lying. If he says they can do it in real-time for next-gen gaming, then I will believe him.
 
Jaws said:
What's so last moment?
The GPU maybe?

Sorry, but I don't see the relevance to what I posted. Anyway, this stuff has been discussed in countless threads...

The FlexIO was designed to be, *gasp*, flexible in connecting CELL to other ICs... e.g. GPUs...

So let me spell it out.

Regardless of the flexibility of the FlexIO and the CELL in general, the question is how well a desktop PC part will fit into this design. Are there limitations on the GPU side? That is relevant, even if we do not have an answer at this point.
 