The apparatus may further comprise a communication unit which receives image data rendered by subspace unit from an external distributed rendering processing device connected with the apparatus via a network, and the consolidation unit may consolidate the image data received from the external distributed rendering processing device with the image data generated by the rendering processing unit and generate the final output image data to be displayed.
Patent said:
Furthermore, as another example of a rendering strategy, partial ray tracing may be adopted, in which a different hidden surface removal method is used for each brick. For example, ray tracing, which requires complicated calculations, is applied locally to a brick in the close view, while reflection mapping is used for the other, more distant bricks.
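To make that per-brick switch concrete, here is a minimal C++ sketch of picking a hidden surface removal method by camera distance. The Brick fields, threshold parameter, and function names are illustrative assumptions, not taken from the patent.

```cpp
// Hypothetical brick record; the field name is illustrative, not from the patent.
struct Brick {
    float distanceToViewpoint;  // distance from the camera to this subspace
};

enum class HiddenSurfaceMethod { RayTracing, ReflectionMapping };

// Apply the expensive method only to bricks in the close view, as the
// patent suggests; everything farther away falls back to reflection mapping.
HiddenSurfaceMethod chooseMethod(const Brick& brick, float nearThreshold) {
    return brick.distanceToViewpoint < nearThreshold
               ? HiddenSurfaceMethod::RayTracing         // exact but costly
               : HiddenSurfaceMethod::ReflectionMapping; // cheap approximation
}
```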
[0100] When consolidating image data generated by division rendering into the final output image, as was described in FIG. 5, affine transformation processing such as enlarging and contracting is applied to the image data of each brick. In this regard, in order to match the resolution of image data generated by division rendering with the resolution of the consolidated image data, it is necessary to interpolate the pixel values of the image data for each brick accordingly. To do this, interpolation methods such as bilinear interpolation, which approximates the pixel values using internal division, and bicubic interpolation, which approximates the variation of pixels using a cubic spline function, are used.
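As a concrete illustration of the bilinear case, here is a minimal C++ sketch that interpolates a pixel value by internal division between the four surrounding samples. The Image struct and function names are hypothetical, not from the patent, and coordinates are assumed to lie inside the source brick.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Minimal grayscale image; a real consolidator would handle RGBA, but the
// interpolation math is identical per channel.
struct Image {
    int w, h;
    std::vector<float> px;  // row-major pixel values
    float at(int x, int y) const { return px[y * w + x]; }
};

// Bilinear interpolation: approximate the value at a fractional position
// (fx, fy) by internal division between the four surrounding pixels.
// Coordinates are assumed to lie within [0, w-1] x [0, h-1].
float bilinear(const Image& img, float fx, float fy) {
    int x0 = static_cast<int>(std::floor(fx));
    int y0 = static_cast<int>(std::floor(fy));
    int x1 = std::min(x0 + 1, img.w - 1);
    int y1 = std::min(y0 + 1, img.h - 1);
    float tx = fx - x0;  // horizontal blend weight
    float ty = fy - y0;  // vertical blend weight
    float top    = img.at(x0, y0) * (1 - tx) + img.at(x1, y0) * tx;
    float bottom = img.at(x0, y1) * (1 - tx) + img.at(x1, y1) * tx;
    return top * (1 - ty) + bottom * ty;
}
```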
IST said:
And that leads to a question. Is programming a software renderer easier or harder than programming a hardware one? Edit: Or does it not matter at all?
Vince said:
Patent said:
Furthermore, as another example of a rendering strategy, partial ray tracing may be adopted, in which a different hidden surface removal method is used for each brick. For example, ray tracing, which requires complicated calculations, is applied locally to a brick in the close view, while reflection mapping is used for the other, more distant bricks.
I didn't have time to actually read the patent until tonight, but has this been implemented, commercially, anywhere else on this scale -- for gaming? Or am I interpreting this wrong and need to read more than the first and last paragraphs? It sounds pretty damn neat...
pcostabel said:
It is describing a software renderer. To my knowledge, no hardware renderer does scan line or raytracing.
pcostabel said:
IST said:
And that leads to a question. Is programming a software renderer easier or harder than programming a hardware one? Edit: Or does it not matter at all?
There are tons of free software renderers available, and extensive literature on rendering algorithms of all kinds, so I would say it's much easier to implement a software renderer than to learn how to code around the limitations of a specific GPU.
nAo said:
pcostabel said:
To my knowledge, no hardware renderer does scan line or raytracing.
HW raytracers do exist. A couple of days ago Simon Fenney cited this company...
[0111] Further, in the distributed rendering processing system for such a computer network, it is possible to define network distance parameters based on the number of routing hops and the communication latency between the central compute node and the distributed compute nodes. It is therefore also possible to match the distance from the viewpoint to the object to be rendered by divided space unit with the distance from the central compute node to a distributed compute node, and to distribute rendering processing by divided space unit to the compute nodes accordingly. In other words, since the image data of an object located close to the viewpoint is frequently updated according to the movement of the viewpoint or the object itself, its rendering processing is done in a compute node at a close location on the network, and the rendering results are consolidated in the central node with a short latency. On the other hand, since the image data of an object located far from the viewpoint does not change much with the movement of the viewpoint or the object itself, its rendering processing can be done in a compute node at a far location on the network, and there is no problem even if the latency before the rendering results reach the central node is long.
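As a rough illustration of that matching idea, the sketch below pairs bricks with compute nodes by sorting both lists by distance: near bricks go to low-latency nodes, far bricks to high-latency ones. All type and field names are invented for the example; the patent does not prescribe a concrete assignment algorithm.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Invented records for the example; the patent does not define these types.
struct NodeInfo { int id; float networkDistance; };  // e.g. hops + latency
struct BrickJob { int id; float viewDistance; };     // viewpoint-to-brick distance

// Sort both lists by distance and pair them index-by-index: near bricks
// (updated every frame) land on low-latency nodes, far bricks (rarely
// updated) tolerate distant, high-latency nodes.
std::vector<std::pair<BrickJob, NodeInfo>>
assignBricks(std::vector<BrickJob> bricks, std::vector<NodeInfo> nodes) {
    std::sort(bricks.begin(), bricks.end(),
              [](const BrickJob& a, const BrickJob& b) { return a.viewDistance < b.viewDistance; });
    std::sort(nodes.begin(), nodes.end(),
              [](const NodeInfo& a, const NodeInfo& b) { return a.networkDistance < b.networkDistance; });
    std::vector<std::pair<BrickJob, NodeInfo>> plan;
    for (std::size_t i = 0; i < bricks.size() && i < nodes.size(); ++i)
        plan.emplace_back(bricks[i], nodes[i]);
    return plan;
}
```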
Vince said:
pcostabel said:
It is describing a software renderer. To my knowledge, no hardware renderer does scan line or raytracing.
I know about the renderer, but I was referring to the application of it to gaming. I've never heard of it being used, but I'm not familiar with the areas of high-end visualization for gaming/realtime entertainment where one (IMHO) would expect to have seen it by now.
nAo said:
HW raytracers do exist. A couple of days ago Simon Fenney cited this company...
pcostabel said:
To my knowledge, no hardware renderer does scan line or raytracing.
Interesting... I wonder what they mean by accelerated raytracing. I doubt it is anywhere near real time.
Vince said:
I won't have time to actually read the patent until tonight, but has this been implemented, commercially, anywhere else on this scale -- for gaming? Or am I interpreting this wrong and need to read more than the first and last paragraphs? It sounds pretty damn neat...
EDIT: My spelling is getting worse… hard to believe, I know. The mind boggles.
A method for coordinating an interactive computer game being played by a plurality of users with a broadcast television program, comprising: (a) simulating a virtual environment and displaying at least a portion of the virtual environment on the broadcast television program; (b) providing to each user a mechanism for creating an object by assigning to the object characteristics and parameters, the object thereafter operating autonomously; and (c) selecting at least one object created by a user and displaying the selected object in the virtual environment on the broadcast television program.
...
The problem with this system is that only those players who have purchased the client portion of the computer game can participate in the game. Further, it is not possible for others, such as a player's friends, to even view the game in progress unless they can view the game on a player's monitor. Thus, these games tend to be limited to a plurality of single players sitting in front of their computers.
...
In another embodiment, the virtual environment in the broadcast active region may constitute the entire show so that the television broadcast is an animated show with computer controlled characters. Alternatively, the television show may feature a section with live actors and a section comprising a totally animated portion. In still another embodiment, the animated portion of the television show consists of a display screen that appears with the live actors so that the actors can interact with the animated characters during the broadcast. Alternatively, the live actors can interact with the animated characters by means of conventional "blue screen" techniques.
Mfa said:
September 6, 2002 (Japan) and September 2, 2003 (USA).
Gotcha. Anyway, when all is said and done, this does sound like something up SCEI's alley to try stuffing inside a game machine. And people were talking about Cell being ambitious?
I'm pretty sure this is the first day this patent is online on the USPTO site. They update their online patent applications database every Thursday.
pcostabel said:
So each brick is rendered at different resolutions? Odd, this causes more problems than just using the same res for each brick, based on the area covered. Am I missing something here?
It's combining the approaches from image-based renderers (like Talisman) with the concept of fixed render layers. Scaling and other stuff would be necessary, for instance, for adjusting far impostors and reducing their update rate more.
[0137] FIGS. 25A-25C are explanatory diagrams of the selection of a memory region when doing motion blurring processing for the moving object 348. In the figures, the direction of movement of the moving object 348 is indicated with an arrow. Ordinarily, as shown in FIG. 25A, the rendering area 340 is allocated. However, when doing rendering processing of a brick containing the moving object 348, the rendering processing unit 46 does rendering processing as shown in FIG. 25B by allocating to memory the rendering area 342, which has been compressed in the direction of the velocity vector of the moving object 348, thereby reducing the resolution in that direction. When consolidating into a final output image, the consolidation unit 48 elongates the rendering area 346 in the direction of the velocity vector, as shown in FIG. 25C. Since a high resolution is not necessary in the direction of the velocity vector of the moving object 348, selecting the rendering area 342, compressed in the direction of the velocity vector, for the brick containing the moving object 348 makes it possible to reduce the calculation volume and memory volume for rendering processing of said brick and to speed up rendering processing.
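Here is a minimal sketch of the buffer-sizing step, assuming a 2D screen-space velocity and a free compression factor (the patent gives no concrete numbers): the render target shrinks along the axis most aligned with the motion, and consolidation would later stretch it back to full size, e.g. with the bilinear filter sketched earlier.

```cpp
#include <cmath>

struct Extent { int w, h; };  // render-target size in pixels (hypothetical type)

// Shrink the render target along whichever axis aligns with the object's
// screen-space velocity; `factor` > 1 controls how aggressively resolution
// is dropped in the motion direction (the patent gives no concrete value).
Extent compressedRenderArea(Extent full, float velX, float velY, float factor) {
    float len = std::sqrt(velX * velX + velY * velY);
    if (len == 0.0f || factor <= 1.0f) return full;  // static object: keep full res
    float ax = std::fabs(velX) / len;  // alignment of motion with the x axis
    float ay = std::fabs(velY) / len;  // alignment of motion with the y axis
    Extent e;
    e.w = static_cast<int>(full.w * (1.0f - ax * (1.0f - 1.0f / factor)));
    e.h = static_cast<int>(full.h * (1.0f - ay * (1.0f - 1.0f / factor)));
    return e;
}
// At consolidation time the brick would be stretched back to `full`, which
// naturally smears detail along the motion direction.
```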
Paul said:
And we have PS3's block diagram.