A new Sony patent application...

And that leads to a question. Is programming a software renderer easier or harder than programming a hardware one? Edit: Or does it not matter at all?
 
The apparatus may further comprise a communication unit which receives image data rendered by subspace unit from an external distributed rendering processing device connected with the apparatus via a network, and the consolidation unit may consolidate the image data received from the external distributed rendering processing device with the image data generated by the rendering processing unit and generate the final output image data to be displayed.

They are certain distributed rendering is going to work?
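
If it does work, the consolidation step itself sounds conceptually simple: each brick comes back (locally or over the network) as a small image plus its placement on screen, and gets scaled into the final frame. A rough sketch, with all names made up and nearest-neighbour scaling instead of the filtering the patent actually describes:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical brick result: a small RGBA image plus where it lands on screen.
struct BrickImage {
    int width, height;                 // brick render resolution
    int screenX, screenY;              // top-left position in the output frame
    int screenW, screenH;              // size it should cover after scaling
    std::vector<uint32_t> pixels;      // width * height RGBA values
};

// Consolidate local and remote bricks into one output frame.
// (The patent talks about bilinear/bicubic filtering; nearest keeps the sketch short.)
void consolidate(const std::vector<BrickImage>& bricks,
                 std::vector<uint32_t>& frame, int frameW, int frameH)
{
    for (const BrickImage& b : bricks) {
        for (int y = 0; y < b.screenH; ++y) {
            for (int x = 0; x < b.screenW; ++x) {
                int dstX = b.screenX + x, dstY = b.screenY + y;
                if (dstX < 0 || dstY < 0 || dstX >= frameW || dstY >= frameH) continue;
                int srcX = x * b.width  / b.screenW;   // scale brick to its screen footprint
                int srcY = y * b.height / b.screenH;
                frame[dstY * frameW + dstX] = b.pixels[srcY * b.width + srcX];
            }
        }
    }
}
```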
 
Patent said:
Furthermore, as another example of rendering strategy, partial ray tracing may be adopted, in which a different hidden surface removal method is used for each brick, for example, for a brick in the close view, ray tracing, which requires complicated calculations, is applied locally and for other outside bricks, reflection mapping is used.

I won't have time to actually read the patent until tonight, but has this been implemented, commercially, anywhere else on this scale -- for gaming? Or am I interpreting this wrong and need to read more than the first and last paragraphs? ;) It sounds pretty damn neat...

EDIT: My spelling is getting worse… hard to believe, I know. The mind boggles.
 
[0100] When consolidating image data generated by division rendering into the final output image, as was described in FIG. 5, affine transformation processing such as enlarging and contracting is done for the image data of each brick. In this regard, in order to match the resolution of image data generated by division rendering with the resolution of consolidated image data, it is necessary to interpolate the pixel value of image data for each brick accordingly. To do this, interpolation methods such as bilinear interpolation, which approximates the pixel values using internal division, and bicubic interpolation, which approximates the variation of pixels using cubic spline function are used.

So each brick is rendered at different resolutions? Odd: that seems to cause more problems than just using the same res for each brick, based on the area covered. Am I missing something here?
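
For reference, the bilinear case the patent mentions just weights the four surrounding source pixels by the fractional position. A minimal single-channel sketch (my own naming, nothing from the patent):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Bilinear sample of a single-channel brick image at fractional coordinates (u, v),
// measured in source pixels. Assumes 0 <= u <= w-1 and 0 <= v <= h-1.
float sampleBilinear(const std::vector<float>& img, int w, int h, float u, float v)
{
    int x0 = static_cast<int>(std::floor(u)), y0 = static_cast<int>(std::floor(v));
    int x1 = std::min(x0 + 1, w - 1),         y1 = std::min(y0 + 1, h - 1);
    float fx = u - x0, fy = v - y0;

    float top    = img[y0 * w + x0] * (1 - fx) + img[y0 * w + x1] * fx;
    float bottom = img[y1 * w + x0] * (1 - fx) + img[y1 * w + x1] * fx;
    return top * (1 - fy) + bottom * fy;       // blend the two rows
}
```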
 
IST said:
And that leads to a question. Is programming a software renderer easier or harder than programming a hardware one? Edit: Or does it not matter at all?

There are tons of free software renderers available, and extensive literature on rendering algorithms of all kinds, so I would say it's much easier to implement a software renderer than to learn how to code around the limitations of a specific GPU.
 
Vince said:
Patent said:
Furthermore, as another example of rendering strategy, partial ray tracing may be adopted, in which a different hidden surface removal method is used for each brick, for example, for a brick in the close view, ray tracing, which requires complicated calculations, is applied locally and for other outside bricks, reflection mapping is used.

I won't have time to actually read the patent until tonight, but has this been implemented, commercially, anywhere else on this scale -- for gaming? Or am I interpreting this wrong and need to read more than the first and last paragraphs? ;) It sounds pretty damn neat...

It is describing a software renderer. To my knowledge, no hardware renderer does scan line or raytracing.
 
And we have PS3's block diagram.

PS3_BLOCKDIAGRAM.jpg
 
pcostabel said:
It is describing a software renderer. To my knowledge, no hardware renderer does scan line or raytracing.

I know about the renderer, but I was referring to the application of it to gaming. I've never heard of it being used, but I'm not familiar with the areas of high-end visualization for gaming/realtime-entertainment where one (IMHO) would expect it to have been seen by now.
 
pcostabel said:
IST said:
And that leads to a question. Is programming a software renderer easier or harder than programming a hardware one? Edit: Or does it not matter at all?

There are tons of free software renderers available, and extensive literature on rendering algorithms of all kinds, so I would say it's much easier to implement a software renderer than to learn how to code around the limitations of a specific GPU.


Thanks for the info.
 
[0111] Further, in the distributed rendering processing system for such a computer network, it is possible to define the network distance parameters based on the number of routing hops and communication latency between the central compute node and distributed compute node. Therefore, it is also possible to coordinate the distance from the viewpoint to the object to be rendered by divided space unit and the distance from the central compute node to distributed compute node and distribute rendering processing by divided space unit to the compute node. In other words, since the image data of an object located close to the viewpoint is frequently renewed according to the movement of the viewpoint or the object itself, rendering processing is done in a compute node at a close location on the network and rendering results are consolidated in the central node with a short latency. On the other hand, since the image data of an object located far from the viewpoint does not change much with the movement of the viewpoint or the object itself, rendering processing can be done in a compute node at a far location on the network and there is no problem even if the latency before the rendering results reach the central node is long.

They're talking about distributed rendering here. This concept of updating objects closer to the viewer more frequently than objects in the distance is quite interesting. I can see it working for online games, with distant characters being rendered on a remote CPU while the ones close by are rendered locally... you could do LOTR-style battle scenes with this approach.
This stuff could actually work!
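
If I read [0111] right, the mapping boils down to: sort bricks by distance from the viewpoint, sort nodes by latency, and pair them up so whatever needs frequent updates stays on the low-latency nodes. A toy sketch, everything here invented:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct Brick { int id; float viewDistance; };          // distance from the camera
struct Node  { int id; float latencyMs;    };          // round-trip latency to this node

// Pair near bricks with low-latency nodes and far bricks with distant nodes,
// in the spirit of [0111]. Assumes one brick per node for simplicity.
std::vector<std::pair<int, int>> assignBricks(std::vector<Brick> bricks,
                                              std::vector<Node> nodes)
{
    std::sort(bricks.begin(), bricks.end(),
              [](const Brick& a, const Brick& b) { return a.viewDistance < b.viewDistance; });
    std::sort(nodes.begin(), nodes.end(),
              [](const Node& a, const Node& b) { return a.latencyMs < b.latencyMs; });

    std::vector<std::pair<int, int>> assignment;       // (brick id, node id)
    for (size_t i = 0; i < bricks.size() && i < nodes.size(); ++i)
        assignment.push_back({bricks[i].id, nodes[i].id});
    return assignment;
}
```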
 
Vince said:
pcostabel said:
It is describing a software renderer. To my knowledge, no hardware renderer does scan line or raytracing.

I know about the renderer, but I was referring to the application of it to gaming. I've never heard of it being used, but I'm not familiar with the areas of high-end visualization for gaming/realtime-entertainment where one (IMHO) would expect it to have been seen by now.

Raytracing is way too slow to be used in games. Even feature films usually do not use raytracers, except for specific scenes. RenderMan can do raytracing on a per-object basis.
Scan line is the standard algorithm for software renderers: slower than z-buffer but less memory intensive. Since it is software only, it is never used in games.
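
Which is presumably why the patent only ray traces the near bricks. The per-brick selection it describes would amount to something like this (thresholds and names are mine, not the patent's):

```cpp
enum class HiddenSurfaceMethod { RayTrace, ScanLine, ReflectionMap };

// Pick a hidden-surface / shading strategy per brick by distance, in the spirit
// of the "partial ray tracing" paragraph: expensive ray tracing only up close,
// cheaper techniques further out. Thresholds are arbitrary.
HiddenSurfaceMethod chooseMethod(float brickDistance)
{
    if (brickDistance < 10.0f)  return HiddenSurfaceMethod::RayTrace;
    if (brickDistance < 100.0f) return HiddenSurfaceMethod::ScanLine;
    return HiddenSurfaceMethod::ReflectionMap;
}
```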
 
Vince said:
I won't have time to actually read the patent until tonight, but has this been implemented, commercially, anywhere else on this scale -- for gaming? Or am I interpreting this wrong and need to read more than the first and last paragraphs? ;) It sounds pretty damn neat...

EDIT: My spelling is getting worse… hard to believe, I know. The mind boggles.

A patent doesn't actually constitute an implementation either.
 
And soon you will play on a TV broadcasted show:

A method for coordinating an interactive computer game being played by a plurality of users with a broadcast television program, comprising: (a) simulating a virtual environment and displaying at least a portion of the virtual environment on the broadcast television program; (b) providing to each user a mechanism for creating an object by assigning to the object characteristics and parameters, the object thereafter operating autonomously; and (c) selecting at least one object created by a user and displaying the selected object in the virtual environment on the broadcast television program.
...
The problem with this system is that only those players who have purchased the client portion of the computer game can participate in the game. Further, it is not possible for others, such as a player's friends, to even view the game in progress unless they can view the game on a player's monitor. Thus, these games tend to be limited to a plurality of single players sitting in front of their computers.
...
In another embodiment, the virtual environment in the broadcast active region may constitute the entire show so that the television broadcast is an animated show with computer controlled characters. Alternatively, the television show may feature a section with live actors and a section comprising a totally animated portion. In still another embodiment, the animated portion of the television show consists of a display screen that appears with the live actors so that the actors can interact with the animated characters during the broadcast. Alternatively, the live actors can interact with the animated characters by means of conventional "blue screen" techniques.

Page 46

Very interesting. Not that it's something completely new, but as they mention it in the patent I'm sure that's one of their big plans to make this into something with substance.

Fredi
 
Mfa said:
September 6, 2002 (Japan) and September 2, 2003 (USA).
I'm pretty sure this is the first day this patent is online on the USPTO site.
They update their online patent applications database every Thursday.

Gotcha ;) Anyway, when all is said and done this does sound like something up SCEI's alley to try stuffing inside a game machine. And people were talking about Cell being ambitious?

pcostabel said:
So each brick is rendered at different resolutions? Odd: that seems to cause more problems than just using the same res for each brick, based on the area covered. Am I missing something here?

It's combining the approaches from image-based renderers (like Talisman) with the concept of fixed render layers. Scaling and other stuff would be necessary, for instance, for adjusting far imposters and reducing their update rate further.
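
Reducing the update rate of far imposters could be as simple as stretching the re-render interval with distance; something like this, purely illustrative:

```cpp
// Hypothetical imposter refresh policy: far-away bricks get re-rendered less often.
// frameCount is the current frame number; distance is in arbitrary world units.
bool shouldRefreshImposter(unsigned frameCount, float distance)
{
    // Every frame up close, every 2nd frame at mid range, every 8th frame far away.
    unsigned interval = distance < 20.0f ? 1 : (distance < 100.0f ? 2 : 8);
    return frameCount % interval == 0;
}
```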

Anyway, I think a lot of this is more academic than practical stuff - a bit over the top for rendering inside a game machine.
But I need some more time to read through it, preferably when my brain has had more than 2 hours of sleep.
 
[0137] FIGS. 25A-25C are explanatory diagrams of selection of a memory region when doing motion blurring processing for the moving object 348. In the figures, the direction of movement of the moving object 348 is indicated with an arrow. Ordinarily, as shown in FIG. 25A, the rendering area 340 is allocated. However, for doing rendering processing of a brick containing the moving object 348, the rendering processing unit 46 does rendering processing as shown in FIG. 25B by allocating the rendering area 342, which has been compressed in the direction of velocity vector of the moving object 348, to the memory and reducing the resolution in the direction of velocity vector. When consolidating into a final output image, the consolidation unit 48 elongates the rendering area 346 in the direction of velocity vector as shown in FIG. 25C. Since a high resolution is not necessary in the direction of velocity vector of the moving object 348, by selecting the rendering area 342, which has been compressed in the direction of velocity vector, for the brick containing the moving object 348, it is possible to reduce the calculation volume and memory volume for rendering processing of said brick and speed up rendering processing.

This is a very cool trick for implementing motion blur on non-deforming objects. I wonder why it is part of this patent, though.
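
As far as I can tell, the trick is just anisotropic resolution: shrink the brick's render target along the velocity direction, render, then stretch it back out at consolidation. Roughly (my own naming, not the patent's):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Compute a reduced brick resolution along the object's velocity direction,
// as in FIGS. 25A-25C: resolution is cut only along the motion axis.
// 'shrink' controls how aggressively we compress (values are illustrative).
Vec2 compressedBrickSize(Vec2 fullSize, Vec2 velocity, float shrink = 0.5f)
{
    float speed = std::sqrt(velocity.x * velocity.x + velocity.y * velocity.y);
    if (speed < 1.0f) return fullSize;                 // barely moving: keep full resolution

    // How much the motion aligns with each axis.
    float alongX = std::fabs(velocity.x) / speed;
    float alongY = std::fabs(velocity.y) / speed;

    // Keep full resolution across the motion, reduce it along the motion;
    // the consolidation step would stretch this back to fullSize.
    return { fullSize.x * (1.0f - shrink * alongX),
             fullSize.y * (1.0f - shrink * alongY) };
}
```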
 
Paul said:
And we have PS3's block diagram.

This image looks very suspect IMO. Either it is a gross simplification or entirely bogus.

Main memory hanging off the same bus as the connection to the graphics chip and everything else? Not bloody likely if the machine will use yellowstone/redwood tech; they're two different interfaces.

And besides, where is the sound hardware? It WILL have sound, right? ;)

No, this is not PS3. This isn't anything at all if you ask me. :)

The only purpose for that pic has to be as an illustration to stick those identification numbers on (102-126); I don't think it necessarily portrays an accurate representation of how those devices are connected together.
 
Just started skimming through this thread.

What insight have you guys gained about the PS3 Visualizer GPU from this patent, if anything?
 
Nice to see that SCE continues to push hardware support for post-processing / fullscreen cinematic effects. It worked out well last time, creating a quite recognizable "PS2 look". (Spare me the flickery/blurry textures comments...)
 