Anti-aliasing Without EDRAM in NVidia's PS3 GPU...

I'm definitely not bothering trying to decode any more patents. You're on your own Jaws!

Jawed
 
Jawed said:
I'm definitely not bothering trying to decode any more patents. You're on your own Jaws!

Jawed

LOL!

I never decode them, just enough to get a general gist! Usually the Abstract, Background and Brief Summary are more than enough!

Anyway, suffice to say, I think they'll make the 'right' decision, whether they use eDRAM or not... ;)
 
As it happens I respect the idea that TBDR and this patent go together. But my respect is founded on no understanding, :LOL:

From having spent a fair amount of time on the ATI patents related to R500, it seems to me it has the chance to be extremely efficient at frame buffer operations, blending, AA, Z/stencil because the whole design is a pipeline - random reads/writes against the frame buffer seem to be banished.

A brief glance at the patent, particularly the box labelled "Latency Buffer", implies to me that NVidia is also pipelining the blending/AA, so is tackling the problem head-on like ATI.

If anyone is prepared to give a précis of the NVidia patent, then feel free :D

Jawed
 
Jawed said:
From having spent a fair amount of time on the ATI patents related to R500, it seems to me it has the chance to be extremely efficient at frame buffer operations, blending, AA, Z/stencil because the whole design is a pipeline - random reads/writes against the frame buffer seem to be banished.
If so, does this leave out some rendering effects that would work on a non-linear reading of the buffer?
 
Shifty Geezer said:
Jawed said:
From having spent a fair amount of time on the ATI patents related to R500, it seems to me it has the chance to be extremely efficient at frame buffer operations, blending, AA, Z/stencil because the whole design is a pipeline - random reads/writes against the frame buffer seem to be banished.
If so, does this leave out some rendering effects that would work on a non-linear reading of the buffer?

I don't see why, but I'm not sure what effects you're referring to, and whether you're referring to effects based upon textures rather than the frame buffer.

But I'm still guessing, all my comments are based on a cobbled-together understanding of the patents!

Jawed
 
Jawed said:
As it happens I respect the idea that TBDR and this patent go together. But my respect is founded on no understanding, :LOL:
...

Hehe...times like this I have to resort to pretty pictures, without having to read the whole patent!

[Image: comparisonb.jpg]


Looking at the PowerVR section, for TBDR comparison's sake, and referring to the patents from the first page:

The first scanout filtering patent reduces the 'frame buffer' bandwidth further when decompressing/downsampling/blending and drawing...

The second colour compression patent reduces the internal GPU bandwidth with compression for SSAA/MSAA fragment tile shading, which also further reduces the 'framebuffer' bandwidth...

Of course, these bandwidth saving features are in addition to what you get with TBDR...
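
To put a rough (and purely illustrative) number on the colour compression point - assuming a 720p target, 4x MSAA, 32-bit colour and my own guesses at how many pixels straddle triangle edges, none of which come from the patent - a sketch like this shows why storing one colour for fully covered pixels cuts the traffic:

```python
# Rough back-of-envelope sketch (illustrative numbers only, not from the
# patent): how much colour traffic MSAA compression can save when most
# pixels are fully covered by one triangle and so carry one unique colour.

def compressed_bytes(pixels, samples, bytes_per_colour, edge_fraction):
    """Interior pixels store a single colour; edge pixels store every sample."""
    interior = pixels * (1.0 - edge_fraction) * bytes_per_colour
    edges = pixels * edge_fraction * samples * bytes_per_colour
    return interior + edges

pixels = 1280 * 720          # assumed 720p render target
samples = 4                  # assumed 4x MSAA
bpc = 4                      # 32-bit colour
uncompressed = pixels * samples * bpc

for edge_fraction in (0.05, 0.20, 0.50):   # guessed share of edge pixels
    c = compressed_bytes(pixels, samples, bpc, edge_fraction)
    print(f"{edge_fraction:4.0%} edge pixels: "
          f"{c / 2**20:5.1f} MB vs {uncompressed / 2**20:.1f} MB uncompressed "
          f"({uncompressed / c:.1f}x less traffic)")
```

The exact ratio obviously depends on scene content, but the trend is the point: the fewer pixels that need all their samples stored, the closer the compressed traffic gets to the non-AA case.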

[Image: ddrvpvrmem.jpg]


And this wouldn't be complete without the golden oldie graphs circa 2000! :p

http://www.eurogamer.net/article.php?article_id=488
 
I was wondering:
What kind of trade-off is the removal of an external z-buffer?
I'm thinking of FX tricks that are based on reading back the z-buffer.
What are the pros of an external z-buffer?
 
_phil_ said:
I was wondering:
What kind of trade-off is the removal of an external z-buffer?
I'm thinking of FX tricks that are based on reading back the z-buffer.
Reading back how? If you mean with software then you'll see the system grind to a halt.
 
trinibwoy said:
If TBDR is so efficient why haven't the major IHV's taken it upon themselves to implement it?
This is the biggest mystery of 3D graphics history ;)
 
trinibwoy said:
If TBDR is so efficient why haven't the major IHV's taken it upon themselves to implement it?

Could be a few reasons:

Lack of patents.

Millions if not billions invested into traditional renderers.

No imagination?
 
Why would a TBDR even have an over-sampled back-buffer in main memory? It seems to me that a TBDR would downsample completed tiles before writing them to memory, meaning that the TBDR would have one or more front buffers but no back-buffer in main memory - saving space and bandwidth.

If I understand things correctly, an IMR has one or more oversampled back-buffers as well as a front-buffer in main memory. Scan-out involves memory bandwidth for reading the entire front buffer, and down-sampling involves memory bandwidth for reading the entire oversampled back-buffer and writing the entire front-buffer. The method described in the patent goes directly from over-sampled back-buffer to scan-out, saving the space as well as the read/write bandwidth for the front-buffer. AFAICS this has nothing to do with TBDR...
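
If that reading is right, a rough back-of-envelope (my own assumed resolution, AA level and refresh rate, not figures from the patent) shows where the saving comes from:

```python
# Back-of-envelope sketch (assumed numbers) of the saving from filtering at
# scan-out on an IMR: downsampling during scan-out removes the separately
# resolved front buffer and all the traffic to and from it.

width, height = 1280, 720     # assumed 720p
samples = 4                   # assumed 4x AA
bpp = 4                       # 32-bit colour
fps = 60                      # assumed frames resolved/scanned per second

oversampled = width * height * samples * bpp   # back-buffer colour data
resolved = width * height * bpp                # front-buffer colour data

# Conventional IMR: resolve pass (read back buffer, write front buffer),
# then scan-out reads the front buffer.
conventional = oversampled + resolved + resolved

# Patent-style: scan-out reads the over-sampled back buffer and filters on
# the fly, so the front buffer and its read/write traffic disappear.
filter_on_scanout = oversampled

for name, per_frame in (("conventional", conventional),
                        ("filter on scan-out", filter_on_scanout)):
    print(f"{name:>20}: {per_frame / 2**20:5.1f} MB/frame, "
          f"{per_frame * fps / 2**30:4.2f} GB/s at {fps} Hz")
print(f"front-buffer space saved: {resolved / 2**20:.1f} MB")
```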
 
nAo said:
trinibwoy said:
If TBDR is so efficient why haven't the major IHV's taken it upon themselves to implement it?
This is the biggest mystery of 3D graphics history ;)

Or perhaps there's more to it than those graphs...?

Particularly when taken in a current context, "Traditional/Conventional" on those diagrams should perhaps more accurately read "Prehistoric" :)
 
trinibwoy said:
If TBDR is so efficient why haven't the major IHV's taken it upon themselves to implement it?
Because of the ISVs.

I don't think NVIDIA would have a tiler in the works, or that Sony would go for it... but I think it would be a lot better than ending up with direct rendering without eDRAM. I think Sony is making a huge mistake: they are squandering what Cell gives them with a graphics architecture which, in a console environment, compares poorly with the competition.

This isn't the same situation as with the poorly conceived GS. ATI's eDRAM chip is unlikely to be lacking features...
 
API, developer inertia, compatibility. On the PC, direct mode renderers are what the APIs and software are designed for; when your competition sets the terms you compete on, things are not always easy.

Although the complete inability of the designers of tilers to execute, because they have all been IP-only companies, hasn't helped. It isn't a good model for this market... fine for SoCs, but not for PC graphics chips.
 
My guess is the PS3 GPU has its own XDR RAM controller and 128~256MB of dedicated RAM (frame-buffer and textures). The pluses I can see are lower worst-case frame-buffer latency (eDRAM overflow probably isn't so pretty, but developers can render in tiles to avoid this), and more predictable/lower texturing latency. That could be significant for dependent texture reads...
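
For what it's worth, a quick sanity check on the "render in tiles" point, with assumed numbers (720p, 4x AA, 32-bit colour and Z, and a hypothetical 10 MB on-chip buffer; none of these figures come from the thread):

```python
# Quick sanity check (assumed, illustrative numbers): does an over-sampled
# 720p frame buffer fit in a small on-chip buffer, and if not, how many
# tiles would a developer need to render in to stay inside it?
import math

width, height = 1280, 720
samples = 4                       # assumed 4x AA
bytes_per_sample = 4 + 4          # 32-bit colour + 32-bit Z/stencil per sample
edram_bytes = 10 * 2**20          # hypothetical 10 MB on-chip buffer

fb_bytes = width * height * samples * bytes_per_sample
tiles = math.ceil(fb_bytes / edram_bytes)

print(f"Over-sampled frame buffer: {fb_bytes / 2**20:.1f} MB")
msg = "yes" if tiles == 1 else f"no, it needs {tiles} tiles"
print(f"Fits in a 10 MB buffer? {msg}")
```

Under those assumptions the over-sampled target is several times larger than the on-chip buffer, which is exactly the overflow/tiling situation being described.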
 