A new Sony patent application...

Either it is a gross simplification or entirely bogus.

It's obviously a simplification: the diagram is PS3, it represents PS3. However, it is by no means final, and some of it doesn't make sense.

A patent doesn't have to be exact, after all.
 
Quote:
Either it is a gross simplification or entirely bogus.


It's obviously a simplification: the diagram is PS3, it represents PS3. However, it is by no means final, and some of it doesn't make sense.

A patent doesn't have to be exact, after all.

I just browsed through the patent, and that figure is not a simplification of the PS3 diagram.

It's a diagram of the Visualizer (remember the PE modified by adding a Pixel Engine from the other patent?). That's the one. Remember, the Visualizer can stand on its own in that patent.
 
Megadrive1988 said:
just started skimming through this thread.

what insight have you guys gained about the PS3 Visualizer GPU with this patent, if anything?

The patent basically describes the visualizer, mostly from a software point of view. Some hints that may or may not refer to the actual implementation in PS3:

* 4 processing units
* software rendering, or a mix of software and hardware rendering
* hardware support for Z-merging
* support for networking multiple visualizers, either locally or remotely
* ability to select different rendering methods for different objects (ray tracing, scan line, hardware(?) Z-buffer)
* designed with multiplayer online in mind (e.g. a character in the distance rendered on a remote machine and composited into the local frame buffer)

Overall, it is a feature-film approach to the videogame pipeline. Having worked in the movie industry for a while, I'd say Sony is trying to put an FX studio inside one console.
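The per-object method selection in that feature list could look something like this sketch. Everything here (the policy, the object attributes, the function names) is my own invention for illustration; the patent only says different methods can be chosen per object:

```python
from enum import Enum, auto

class RenderMethod(Enum):
    RAYTRACE = auto()  # highest quality, most expensive
    SCANLINE = auto()  # software scan-line conversion
    ZBUFFER = auto()   # (possibly hardware-assisted) Z-buffer

def pick_method(obj):
    """Choose a renderer per object, in the spirit of the patent.

    Hypothetical policy: reflective/refractive objects get ray tracing,
    very heavy geometry gets scan line, everything else Z-buffer.
    """
    if obj.get("reflective") or obj.get("refractive"):
        return RenderMethod.RAYTRACE
    if obj.get("triangle_count", 0) > 1_000_000:
        return RenderMethod.SCANLINE
    return RenderMethod.ZBUFFER

scene = [
    {"name": "mirror", "reflective": True},
    {"name": "terrain", "triangle_count": 5_000_000},
    {"name": "crate"},
]
for obj in scene:
    print(obj["name"], "->", pick_method(obj).name)
```

The interesting part is that the Z-merge hardware would then composite the outputs of all three paths into one frame.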
 
pcostabel, very interesting stuff.

Panajev, what do you think of this?


also, if a Visualizer has 4 processing units (which makes me instantly think of the 4 PEs, each with 1 Pixel Engine), is there any decent chance that Sony might use more than one of these in PS3? at least it's technically possible.

one Visualizer has 4 PUs, 16 APUs, 4 Pixel Engines, 4 Image Caches and 4 CRTCs.

now 4 of those in parallel would be really nice.
 
[0110] In rendering processing of an object by divided space unit, the data locality is guaranteed. Mutual dependence relation of divided spaces can be handled by consolidating rendering data of each divided space using Z-merge rendering processing. Therefore, the final output image can be generated in the following way: distribute rendering processing by divided space unit to the compute nodes which are connected through the network and then consolidate the rendering data computed in each node in the central compute node. Such a computing network may be one in which the compute nodes are connected peer to peer by a broadband network. Also, it may be configured so that each compute node contributes to the entire network system as a cell and the operation systems of these cells work together so that it operates as one huge computer. The division rendering processing in the present invention is suitable for distributed processing using such a computer network and therefore, is capable of rendering processing which requires a large-scale computing resource.

[0111] Further, in the distributed rendering processing system for such a computer network, it is possible to define the network distance parameters based on the number of routing hops and communication latency between the central compute node and distributed compute node. Therefore, it is also possible to coordinate the distance from the viewpoint to the object to be rendered by divided space unit and the distance from the central compute node to distributed compute node and distribute rendering processing by divided space unit to the compute node. In other words, since the image data of an object located close to the viewpoint is frequently renewed according to the movement of the viewpoint or the object itself, rendering processing is done in a compute node at a close location on the network and rendering results are consolidated in the central node with a short latency. On the other hand, since the image data of an object located far from the viewpoint does not change much with the movement of the viewpoint or the object itself, rendering processing can be done in a compute node at a far location on the network and there is no problem even if the latency before the rendering results reach the central node is long.
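The matching scheme in [0111] is easy to sketch: sort the subspaces/objects by distance from the viewpoint, sort the compute nodes by network latency, and pair them off. The node names and numbers below are made up for illustration; only the matching idea comes from the patent:

```python
def assign_nodes(objects, nodes):
    """Pair near objects with low-latency nodes and far objects with
    high-latency ones, per [0111]: near objects change often and need
    short round trips; far objects tolerate long latency."""
    objs = sorted(objects, key=lambda o: o["view_distance"])
    nds = sorted(nodes, key=lambda n: n["latency_ms"])
    return {o["name"]: n["name"] for o, n in zip(objs, nds)}

nodes = [
    {"name": "local", "latency_ms": 1},
    {"name": "lan-peer", "latency_ms": 10},
    {"name": "remote", "latency_ms": 120},
]
objects = [
    {"name": "distant-player", "view_distance": 900.0},
    {"name": "own-character", "view_distance": 2.0},
    {"name": "building", "view_distance": 60.0},
]
print(assign_nodes(objects, nodes))
# {'own-character': 'local', 'building': 'lan-peer', 'distant-player': 'remote'}
```

The multiplayer example from earlier in the thread (a remote machine rendering a distant character) falls straight out of this mapping.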

It's nice they're still considering networking. :D
 
each with 1 Pixel Engine

Just out of curiosity, why do you think it's one? After all, they call it a parallel rendering engine. What's parallel if there is only one each?

Anyway, they gave an example, maybe a hint as well :?:

For example, a parallel rendering engine consisting of 8 4 MB area DRAMs may be operated with 4 channels.

32 MB image cache ?
 
I found the patent.. but I can't read it! no time, damn! still debugging my VU triangle clipper, so much work.. so little time :devilish:
 
well I suppose it is possible that each Pixel Engine is like a souped up Graphics Synthesizer, with 16 pipelines :oops:

so then with 4 Pixel Engines (1 per Visualizer PE) you'd have 64 pipes.

if Sony goes with an 8 PE Visualizer, and each of the 8 Pixel Engines had 16 GS-style pipes, that's 128 pipes.

almost anything is possible with Cell being so modular / scalable. that's one of the beautiful things about it 8)
 
nAo said:
I found the patent.. but I can't read it! no time, damn! still debugging my VU triangle clipper, so much work.. so little time :devilish:
If you have so little time, then how come you go patent scavenging? :p
Not that I'm not glad you did.
 
Squeak said:
If you have so little time, then how come you go patent scavenging? :p Not that I'm not glad you did.
If you work 13/14 hours a day, you need some time off....even at work time :)

ciao,
Marco
 
[0071] The graphic processing block 120 includes a parallel rendering engine 122, image memory 126, and memory interface 124. The memory interface 124 connects the parallel rendering engine 122 and image memory 126. They are configured into one body as ASIC (Application Specific Integrated Circuit)-DRAM (Dynamic Random Access Memory).

1 PE of the Visualizer:

PU + DMAC + "Pixel Engine + 4 APUs" ( = parallel rendering engine ) + Image Cache ( part of the PE, not shared with other PEs if they are present in the Visualizer: they would have their own Image cache ) + shared DRAM ( shared between multiple PEs if you have multiple PEs ).
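That per-PE breakdown can be modeled as a quick sanity check against the totals quoted earlier in the thread (4 PUs, 16 APUs, 4 Pixel Engines, 4 Image Caches for a 4-PE Visualizer). The class names and structure here are my own shorthand, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class VisualizerPE:
    # One PE: PU + DMAC + (Pixel Engine + 4 APUs) + its own Image Cache.
    apus: int = 4
    pixel_engines: int = 1
    image_caches: int = 1  # per-PE, not shared; DRAM is the shared part

@dataclass
class Visualizer:
    pes: list = field(default_factory=lambda: [VisualizerPE() for _ in range(4)])

    def totals(self):
        return {
            "PUs": len(self.pes),  # one PU per PE
            "APUs": sum(pe.apus for pe in self.pes),
            "PixelEngines": sum(pe.pixel_engines for pe in self.pes),
            "ImageCaches": sum(pe.image_caches for pe in self.pes),
        }

print(Visualizer().totals())
# {'PUs': 4, 'APUs': 16, 'PixelEngines': 4, 'ImageCaches': 4}
```

Scaling the PE count to 8, as speculated below, doubles every figure except the shared DRAM.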
 
well I suppose it is possible that each Pixel Engine is like a souped up Graphics Synthesizer, with 16 pipelines

so then with 4 Pixel Engines (1 per Visualizer PE) you'd have 64 pipes.

Now the speculation looks more interesting, ain't it?
 
If you work 13/14 hours a day, you need some time off....even at work time


This is exactly why I'm dropping out of computer science and moving into a different field of math... do you enjoy the work, Marco (possibly a stupid question, I guess)?
 
Raytracing is way too slow to be used in games. Even feature films usually do not use raytracers, except for specific scenes. RenderMan can do raytracing on a per-object basis.
Scan line is the standard algorithm for software renderers, slower than Z-buffer but less memory intensive. Since it is software only, it is never used in games.

Raytracing isn't that slow...

www.realstorm.com
http://www.openrt.de/Gallery/Triple7/

If PS3 is anything like what some people here are hoping for, I'd love to try my hands at a raytracer on it.
 
Curious, can you view the image files?
I am able to read the patent, but I can't view the images.

(Yes, I do have .tiff support and can view other patents.)


EDIT: I had to reinstall Quicktime (Update).
Even then it only displays the images when QT is in the system tray.
I hope to simplify this later. But for now at least I can view them.

Thanks, Pcostabel (Two posts down.)
 
Nexiss said:
Raytracing is way too slow to be used in games. Even feature films usually do not use raytracers, except for specific scenes. RenderMan can do raytracing on a per-object basis.
Scan line is the standard algorithm for software renderers, slower than Z-buffer but less memory intensive. Since it is software only, it is never used in games.

Raytracing isn't that slow...

www.realstorm.com
http://www.openrt.de/Gallery/Triple7/

If PS3 is anything like what some people here are hoping for, I'd love to try my hands at a raytracer on it.

Actually, that demo proves my point. It's super noisy, because with ray tracing you need to supersample 4 or 16 times to get decent results, and it doesn't look any better than traditional hardware rendering.
I'm not saying you cannot use a raytracer in a game; I'm saying it's a waste of resources.
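To put rough numbers on the supersampling point: even counting only primary rays (no shadow or reflection rays), the ray budget grows linearly with the sample count. The resolution and frame rate below are my own hypothetical figures, not from the thread:

```python
def rays_per_second(width, height, fps, samples_per_pixel):
    """Primary rays per second needed to sustain a given resolution,
    frame rate, and supersampling factor. A floor, since secondary
    rays (shadows, reflections) are not counted."""
    return width * height * fps * samples_per_pixel

base = rays_per_second(640, 480, 60, 1)   # no AA
aa16 = rays_per_second(640, 480, 60, 16)  # 16x supersampling
print(f"{base:,} rays/s without AA, {aa16:,} rays/s with 16x AA")
# 18,432,000 rays/s without AA, 294,912,000 rays/s with 16x AA
```

Which is why a demo that skips the supersampling ends up noisy.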
 
David_South#1 said:
Curious, can you view the image files?

I am able to read the patent, but I can't view the images.

(Yes, I do have .tiff support and can view other patents.)

I had the same problem. Try to go to the link directly. Worked for me.
 