Either it is a gross simplification or entirely bogus.
Quote:
Either it is a gross simplification or entirely bogus.
It's obviously a simplification; the diagram is PS3, it represents PS3. However, it is by no means final, and some of it doesn't make sense.
A patent doesn't have to be exact, after all.
Megadrive1988 said:just started skimming through this thread.
what insight have you guys gained about the PS3 Visualizer GPU with this patent, if anything?
[0110] In rendering processing of an object by divided space unit, the data locality is guaranteed. Mutual dependence relation of divided spaces can be handled by consolidating rendering data of each divided space using Z-merge rendering processing. Therefore, the final output image can be generated in the following way: distribute rendering processing by divided space unit to the compute nodes which are connected through the network and then consolidate the rendering data computed in each node in the central compute node. Such a computing network may be one in which the compute nodes are connected peer to peer by a broadband network. Also, it may be configured so that each compute node contributes to the entire network system as a cell and the operation systems of these cells work together so that it operates as one huge computer. The division rendering processing in the present invention is suitable for distributed processing using such a computer network and therefore, is capable of rendering processing which requires a large-scale computing resource.
[0111] Further, in the distributed rendering processing system for such a computer network, it is possible to define the network distance parameters based on the number of routing hops and communication latency between the central compute node and distributed compute node. Therefore, it is also possible to coordinate the distance from the viewpoint to the object to be rendered by divided space unit and the distance from the central compute node to distributed compute node and distribute rendering processing by divided space unit to the compute node. In other words, since the image data of an object located close to the viewpoint is frequently renewed according to the movement of the viewpoint or the object itself, rendering processing is done in a compute node at a close location on the network and rendering results are consolidated in the central node with a short latency. On the other hand, since the image data of an object located far from the viewpoint does not change much with the movement of the viewpoint or the object itself, rendering processing can be done in a compute node at a far location on the network and there is no problem even if the latency before the rendering results reach the central node is long.
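For what it's worth, here is a rough C++ sketch of how I read [0110]-[0111]: near divided spaces get assigned to compute nodes that are close on the network, far ones to distant nodes, and the central node Z-merges whatever buffers come back. All the type names and the sort-and-pair heuristic are invented for illustration; the patent only describes the idea, not an implementation.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical sketch of [0110]-[0111]: rendering jobs for divided spaces are
// handed to compute nodes whose network distance (hops/latency) roughly
// matches the space's distance from the viewpoint, and the results are
// consolidated with a Z-merge. All names here are invented for illustration.

struct RenderJob {
    int   spaceId;          // which divided space this job covers
    float viewDistance;     // distance from the viewpoint to that space
};

struct ComputeNode {
    int   nodeId;
    float networkDistance;  // e.g. weighted sum of hop count and latency
};

struct Pixel { float z; uint32_t color; };
using FrameBuffer = std::vector<Pixel>;

// Pair near spaces with near nodes and far spaces with far nodes, as
// paragraph [0111] suggests.
std::vector<std::pair<RenderJob, ComputeNode>>
assignJobs(std::vector<RenderJob> jobs, std::vector<ComputeNode> nodes) {
    std::sort(jobs.begin(), jobs.end(),
              [](const RenderJob& a, const RenderJob& b) { return a.viewDistance < b.viewDistance; });
    std::sort(nodes.begin(), nodes.end(),
              [](const ComputeNode& a, const ComputeNode& b) { return a.networkDistance < b.networkDistance; });
    std::vector<std::pair<RenderJob, ComputeNode>> assignment;
    for (size_t i = 0; i < jobs.size() && i < nodes.size(); ++i)
        assignment.push_back({jobs[i], nodes[i]});
    return assignment;
}

// Z-merge consolidation in the central node: keep whichever pixel is nearer.
void zMerge(FrameBuffer& dst, const FrameBuffer& src) {
    for (size_t i = 0; i < dst.size() && i < src.size(); ++i)
        if (src[i].z < dst[i].z) dst[i] = src[i];
}
```

A real scheduler would obviously have to rebalance as the viewpoint moves, but the near-to-near / far-to-far pairing is the whole point of [0111].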
V3 said:It's nice they're still considering networking.
each with 1 Pixel Engine
For example, a parallel rendering engine consisting of eight 4 MB area DRAMs may be operated with 4 channels.
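Taking that sentence at face value, the arithmetic works out as below; the grouping of banks onto channels is my assumption, the patent sentence only gives the raw figures.

```cpp
#include <cstdio>

int main() {
    // Figures quoted from the patent example; how eight banks map onto four
    // channels is an assumption for illustration.
    const int dramBanks  = 8;
    const int bankSizeMB = 4;
    const int channels   = 4;

    const int totalMemoryMB   = dramBanks * bankSizeMB;  // 32 MB of embedded DRAM
    const int banksPerChannel = dramBanks / channels;    // 2 banks per channel

    std::printf("%d MB embedded DRAM, %d banks per channel\n",
                totalMemoryMB, banksPerChannel);
    return 0;
}
```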
nAo said:I found the patent..but I can't read it! no time damn! still debugging my VU triangle clipper, so much work..so few time
If you have so little time, then how come you go patent scavenging?
Squeak said:If you have so little time, then how come you go patent scavenging? Not that I'm not glad you did.
If you work 13/14 hours a day, you need some time off....even at work time
[0071] The graphic processing block 120 includes a parallel rendering engine 122, image memory 126, and memory interface 124. The memory interface 124 connects the parallel rendering engine 122 and image memory 126. They are configured into one body as ASIC (Application Specific Integrated Circuit)-DRAM (Dynamic Random Access Memory).
well I suppose it is possible that each Pixel Engine is like a souped up Graphics Synthesizer, with 16 pipelines
so then with 4 Pixel Engines (1 per Visualizer PE) you'd have 64 pipes.
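If that guess were right, the raw numbers would look something like this; the clock speed is a purely assumed figure for illustration, nothing of the sort appears in the patent.

```cpp
#include <cstdio>

int main() {
    // Numbers from the speculation above; the clock is an assumed figure only.
    const int pixelEngines   = 4;     // 1 per Visualizer PE
    const int pipesPerEngine = 16;    // "souped up Graphics Synthesizer"
    const int totalPipes     = pixelEngines * pipesPerEngine;   // 64 pipes

    const double assumedClockGHz = 1.0;                          // pure assumption
    const double gpixelsPerSec   = totalPipes * assumedClockGHz; // 1 pixel/pipe/clock

    std::printf("%d pipes -> %.0f Gpixels/s at an assumed %.1f GHz\n",
                totalPipes, gpixelsPerSec, assumedClockGHz);
    return 0;
}
```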
Raytracing is way too slow to be used in games. Even feature films usually do not use raytracers, except for specific scenes. RenderMan can do raytracing on a per-object basis.
Scanline is the standard algorithm for software renderers, slower than z-buffer but less memory intensive. Since it is software only, it is never used in games.
Nexiss said:Raytracing is way too slow to be used in games. Even feature films usually do not use raytracers, except for specific scenes. RenderMan can do raytracing on a per-object basis.
Scanline is the standard algorithm for software renderers, slower than z-buffer but less memory intensive. Since it is software only, it is never used in games.
Raytracing isn't that slow...
www.realstorm.com
http://www.openrt.de/Gallery/Triple7/
If PS3 is anything like what some people here are hoping for, I'd love to try my hands at a raytracer on it.
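For anyone wondering what that would even involve, here is a minimal, purely illustrative ray-sphere intersection of the kind a toy raytracer evaluates per pixel; nothing PS3-specific, just the standard quadratic.

```cpp
#include <cmath>
#include <optional>

// Minimal sketch of a toy raytracer's inner test: every pixel independently
// shoots a ray and intersects it against the scene. Illustrative only.

struct Vec3 {
    float x, y, z;
    Vec3  operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

struct Ray    { Vec3 origin, dir; };        // dir assumed normalized
struct Sphere { Vec3 center; float radius; };

// Returns the distance along the ray to the nearest hit, if any.
std::optional<float> intersect(const Ray& r, const Sphere& s) {
    Vec3  oc   = r.origin - s.center;
    float b    = oc.dot(r.dir);
    float c    = oc.dot(oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return std::nullopt;   // ray misses the sphere
    float t = -b - std::sqrt(disc);
    if (t < 0.0f) return std::nullopt;      // sphere is behind the ray origin
    return t;
}
```

Per pixel you would run a test like this against every object (or an acceleration structure), which is exactly why the earlier post calls raytracing slow for games.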
David_South#1 said:Curious, can you view the image files?
I am able to read the patent, but not view the images?
(Yes, I do have .tiff support and can view other patents.)