Dynamic Framebuffer for Antialiasing on PS3?

SUMMARY

In one of many possible embodiments, the present invention provides a polygon rendering system for receiving geometric data defining a polygon in an image being generated. The polygon rendering system renders the geometric data as pixel data. The pixel data defines pixels used to display the image. The system comprises a first memory buffer for storing the pixel data. It also comprises a second memory buffer for storing additional pixel data used to render edge pixels at a higher resolution than pixels that are not the edge pixels. Edge pixels are pixels that are located on an edge of the polygon in the image. The system also comprises a display controller for outputting the pixel data in the first memory buffer to output circuitry. The polygon rendering system identifies which of the pixels are the edge pixels and the display controller updates contents of the first buffer with data based on contents of the second buffer.

Another embodiment of the present invention provides that the additional pixel data is used to compute a color value for the edge pixels. The additional pixel data comprises a cluster of sub-pixel data for each of the edge pixels of the image. The sub-pixel data defines a number of sub-pixels. The sub-pixel data comprises a color value for each of the sub-pixels in the cluster. The display controller computes a color value for the edge pixels based on an average of the color values for each of the sub-pixels in each cluster that corresponds to the edge pixels and stores the computed color value in a field corresponding to the edge pixels in the first buffer.

Another embodiment of the present invention is that when the polygon rendering system renders pixel data defining an edge pixel, the display controller writes a memory offset value in a data field corresponding to the edge pixel in the first buffer. This memory offset value indicates an address of the additional pixel data corresponding to the edge pixel in the second buffer.

The present invention also encompasses the method of manufacturing and operating the polygon rendering system described above. For example, the present invention encompasses a method of rendering geometric data as pixel data. The geometric data defines a polygon in an image being generated. The pixel data defines pixels used to display the image. The method comprises identifying which of the pixels are edge pixels. Edge pixels are located on an edge of the polygon. The method also comprises allocating memory in a second memory buffer for storing additional pixel data used to render the edge pixels at a higher resolution than pixels that are not the edge pixels. This second buffer is in addition to a first memory buffer.

Another embodiment of the present invention provides a rasterizer unit that is configured to render geometric data as pixel data and identify an edge pixel located on an edge of the polygon. When the rasterizer unit identifies the edge pixel, the rasterizer unit outputs data that signals a display controller to allocate memory to a second memory buffer for storage of additional pixel data used to render the edge pixel at a higher resolution than pixels that are not the edge pixels. The second buffer is in addition to a first memory buffer for storing the pixel data.

Another embodiment of the present invention provides a display controller for controlling the first and second memory buffers.

Another embodiment of the present invention provides computer readable instructions on a medium for storing computer readable instructions wherein the instructions, when executed, cause a processor to perform the method described above.

Another embodiment of the present invention provides firmware stored on a memory unit of a polygon rendering system that causes the system to perform the method described above.

Additional advantages and novel features of the invention will be set forth in the description which follows or may be learned by those skilled in the art through reading these materials or practicing the invention. The advantages of the invention may be achieved through the means recited in the attached claims.

http://patft1.uspto.gov/netacgi/nph...8&f=G&l=50&d=PTXT&p=1&S1=sony&OS=sony&RS=sony
 
Would it be possible for one of the more educated members to break that down into noob terms for me, as I don't really understand it.


EDIT: where does it say it's for PS3 in that link?
 
EDIT: where does it say it's for PS3 in that link?
Patents don't talk about specific hardware, but ideas. This is an idea patented by Sony, meaning they can use it wherever they want. I can't think of many devices where it'd be appropriate outside of their consoles. But of course, a patent doesn't mean it'll ever see the light of day as a real implementation, though the thread title does raise the question of whether this will appear in RSX.
 
The patent isn't even from SCE specifically. This could relate to any Sony product that uses framebuffers and/or AA (or indeed, none at all) - it's not necessarily got anything to do with their game systems.
 
I could imagine this as a software-rendering AA algorithm.

The intrinsic problem with what's being patented is that to write an edge pixel over an existing edge pixel, the raster output stage has to read in the existing edge pixel's AA samples' Z. It can't read in those existing edge samples until it first reads the existing edge pixel's "AA sample address". In other words, it's a simple indirection (sketched in code a little further down):
  1. read pixel at location A to find out the address of the AA samples, B
  2. use B to find and read the AA samples
That's something that more conventional MSAA systems don't do, since the AA samples' location is guaranteed, by dint of the location of the pixel. In a conventional MSAA system, a full block of memory is assigned for MSAA samples, even if rendering consists of a screen-filling quad (hence no edges need AA!). Pixel number 33452 has its 4xAA samples at memory location 4x33452xS (where S is the size, in "memory locations", of a pixel sample's Z+colour data).
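
To make the indirection concrete, here's a rough C sketch. The struct layout, names and sizes are my own guesses, not anything taken from the patent:

#include <stdint.h>

#define AA_SAMPLES 4

/* Hypothetical per-sample data: 4 bytes of colour + 4 bytes of Z,
   matching the figures used further down. */
typedef struct { uint32_t colour; uint32_t z; } Sample;

/* Conventional MSAA: every pixel owns a fixed cluster of samples, so the
   sample address is a pure function of the pixel index (4 x 33452 x S). */
static inline Sample *msaa_samples(Sample *sample_buffer, uint32_t pixel_index)
{
    return &sample_buffer[pixel_index * AA_SAMPLES];
}

/* Patent-style framebuffer entry: an interior pixel carries its data
   directly, an edge pixel carries an offset into the second buffer. */
typedef struct {
    uint8_t  is_edge;   /* set when the pixel lies on a polygon edge        */
    uint32_t payload;   /* interior: colour; edge: cluster offset (step 1)  */
} PixelEntry;

/* Patent-style lookup: two dependent reads instead of one computed address. */
static inline Sample *edge_samples(const PixelEntry *first_buffer,
                                   Sample *second_buffer,
                                   uint32_t pixel_index)
{
    uint32_t offset = first_buffer[pixel_index].payload;   /* step 1: read A */
    return &second_buffer[offset * AA_SAMPLES];            /* step 2: read B */
}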

In the patent, memory is only consumed as pixels are found to be edge pixels, and memory addresses are assigned first-come, first-served. It's a complete mash-up, with the pixels being in a nice orderly grid that corresponds to the screen, but the AA samples being a jumble whose location has absolutely no meaning.
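
Again only as a sketch (assumed names, nothing from the patent text), the allocation amounts to a bump allocator over the second buffer:

#include <stdint.h>

/* Clusters in the second buffer are handed out in the order edge pixels
   are discovered, so a cluster's address has no relation to its pixel's
   screen position. */
static uint32_t next_free_cluster;   /* reset to 0 at the start of each frame */

static uint32_t alloc_edge_cluster(void)
{
    /* The returned offset is what gets written into the edge pixel's
       entry in the first buffer. */
    return next_free_cluster++;
}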

Anyway, I dare say this indirection looks ready-made for Cell-style DMA lists, which implies software AA to me. It would save bandwidth, e.g. with 4xAA, 4 bytes for colour and 4 bytes for Z:


if pixel is interior pixel then
    read 1 colour value + Z = 4+4 bytes (colour only needed if blending)
    write new pixel if Z test requires it (or result of blending) = 4+4 bytes
else
    read AA samples' Z = 4x4 bytes
    write new AA samples if Z tests (per sample) require = 4x(4+4) bytes
    (maximum: colour+Z being written per sample)
end if
With most pixels being interior pixels, this could be a nice bandwidth saving.
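
Putting rough numbers on it (the 5% edge-pixel fraction is just an illustrative guess, and this ignores whatever colour/Z compression a real GPU's ROPs would bring):

#include <stdio.h>

int main(void)
{
    /* Worst-case bytes per pixel from the figures above (4xAA, 4B colour, 4B Z). */
    const double interior_bytes = (4 + 4) + (4 + 4);    /* 16: read C+Z, write C+Z        */
    const double edge_bytes     = 4 * 4 + 4 * (4 + 4);  /* 48: read 4 Zs, write 4 x (C+Z) */
    const double edge_fraction  = 0.05;                 /* assumed: most pixels interior  */

    double patent       = (1.0 - edge_fraction) * interior_bytes
                        + edge_fraction * edge_bytes;
    double conventional = edge_bytes;  /* brute-force 4xAA pays the sample cost everywhere */

    printf("patent-style:      %.1f bytes/pixel\n", patent);        /* ~17.6 */
    printf("conventional 4xAA: %.1f bytes/pixel\n", conventional);  /* 48.0  */
    return 0;
}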
The cost of this algorithm is dramatically increased latency, though. The increased latency would then require the algorithm to cache significantly more pixel+sample data on-chip (in LS), otherwise there'd be no latency hiding. Memory latency could be extremely nasty on Cell (hundreds of cycles), and depending on the throughput of the raster output stage, this could lead to the cache consuming a huge chunk of LS. But, anyway, it's still a reasonably streamable algorithm.
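
For a feel of the LS cost, a back-of-envelope calculation (every number here is an assumption on my part):

#include <stdio.h>

int main(void)
{
    /* To hide memory latency you need roughly latency x throughput pixels in
       flight, each dragging its pixel entry and sample cluster along.        */
    const double latency_cycles   = 500.0;        /* "hundreds of cycles" to XDR */
    const double pixels_per_cycle = 2.0;          /* assumed raster output rate  */
    const double bytes_per_pixel  = (4 + 4)       /* pixel entry: colour + Z     */
                                  + 4 * (4 + 4);  /* 4 samples x (colour + Z)    */

    double in_flight = latency_cycles * pixels_per_cycle;     /* 1000 pixels */
    double ls_kb     = in_flight * bytes_per_pixel / 1024.0;  /* ~39 KB      */

    printf("pixels in flight:          %.0f\n", in_flight);
    printf("LS needed to hide latency: ~%.0f KB of the 256 KB per SPE\n", ls_kb);
    return 0;
}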

I'm not sure what the burst size in XDR is. In DDR you tile the memory organisation of AA samples so that the "bandwidth-heavy" conventional MSAA technique spreads the loads across all the channels (acting in parallel) of the memory system whilst also filling up the burst size of the memory. In other words, the bandwidth costs are ameliorated by the fact that it's cheaper to make one medium-sized access (to all four channels of memory) than two or more small accesses (to one channel at a time). Since DDR doesn't support a "small access" (one pixel), it's not easy to compare a Cell-XDR software AA algorithm's bandwidth savings against a GPU's ROP-GDDR3 hardware AA. In other words, the patent could well be a solution that's specific to Cell-XDR, and I'm guessing it's one that wouldn't perform well implemented as ROP-GDDR3.

Anyway, I only browsed key chunks of the patent, so apologies if I've missed something.

Jawed
 
This sounds exactly like Matrox's FAA implementation; how the hell did Sony get a patent on this?

Doesn't solve AA for intersecting tris.

Cheers
 
Titanio said:
The patent isn't even from SCE specifically. This could relate to any Sony product that uses framebuffers and/or AA (or indeed, none at all) - it's not necessarily got anything to do with their game systems.
I appreciate that, but where else are Sony likely to want edge antialiasing of rendered triangles? Are their new TVs going for a 3D rendered interface? ;)
 
Shifty Geezer said:
I appreciate that, but where else are Sony likely to want edge antialiasing of rendered triangles? Are their new TVs going for a 3D rendered interface? ;)

Couldn't it be used to remove jaggies from interlaced video?

Sort of a video scaler?
 
Says a guy who's been registered for all of a month...

And in this case he's linked to a patent. That's not making things up, unless he submitted that patent himself in Sony's name ;)
 
Shifty Geezer said:
Says a guy who's been registered for all of a month...

And in this case he's linked to a patent. That's not making things up, unless he submitted that patent himself in Sony's name ;)

Just another troll, or someone from here with a fun second login. Don't bother.
 
Gubbi said:
This sounds exactly like Matrox's FAA implementation; how the hell did Sony get a patent on this?

Doesn't solve AA for intersecting tris.

Cheers
That's what I got from reading the summary as well.
 