ddes said:
Laa-Yosh said:
Up until now, there weren't proper methods to handle a frame buffer that couldn't fit into the EDRAM you could reasonably put into a GPU.
Since current-gen consoles only had to support resolutions up to 640×480 at 32-bit, their framebuffer was small enough to even spare some EDRAM for texture memory.
Note that the X360's maximum resolution is 1280x720. At that resolution you can fit 32-bit front and back buffers and a 24-bit Z-buffer into the EDRAM.
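As a quick sanity check of that claim (assuming the commonly cited 10 MB EDRAM figure for the X360 and a tightly packed 24-bit Z-buffer), the arithmetic works out like this:

```python
# Back-of-the-envelope check: does a 1280x720 frame buffer set fit in
# 10 MB of EDRAM? Assumes the 24-bit Z-buffer is packed at 3 bytes/pixel.
WIDTH, HEIGHT = 1280, 720
pixels = WIDTH * HEIGHT

front = pixels * 4   # 32-bit front buffer
back = pixels * 4    # 32-bit back buffer
zbuf = pixels * 3    # 24-bit Z-buffer, tightly packed

total_mb = (front + back + zbuf) / (1024 * 1024)
print(f"{total_mb:.2f} MB")  # -> 9.67 MB, just under 10 MB
```

If the Z-buffer is stored as 32 bits per pixel instead (24-bit depth plus 8-bit stencil padding), the total rises to about 10.5 MB, so the packing assumption matters.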
Laa-Yosh said:
Bitboys worked on several iterations of EDRAM-based GPUs, but their attempts never reached the market. As far as I know, the main reason was that they couldn't manufacture a chip with enough EDRAM. So it was a matter of timing - how soon would we have a manufacturing process that could give us enough on-die memory for the currently used screen resolutions?
I do remember that there was a silicon prototype with 12 MB of memory on-chip?
Two different revisions, in fact...
EDIT:
Okay, so the deal with the Bitboys eDRAM system was that it didn't have to hold the whole back buffer in eDRAM at once. The scene was split into tiles, and only the tile currently being rendered needed to fit in eDRAM. With Matrix Anti-Aliasing enabled, the AA was applied during the eDRAM -> back buffer transfer. (The guy who worked on the rasterizer implementation of this chip is one of the regulars here, but so far he has decided not to show this side of his talents. So I'm not going to tell you who he is; it's up to him if he ever decides to.)
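The resolve-on-write-out idea can be sketched as follows. This is a minimal illustration only: the actual Matrix AA sample pattern isn't public, so a plain 2x2 box-filter downsample stands in for it, and all names and sizes here are made up.

```python
# Sketch: a supersampled tile lives in eDRAM; the AA resolve (here a 2x2
# box filter) happens while copying the tile out to the back buffer.
SS = 2                    # samples per pixel axis (illustrative)
TILE_W, TILE_H = 32, 32   # tile size chosen so SS*SS samples fit in eDRAM

def resolve_tile(tile_samples):
    """Average SS*SS samples per pixel during the eDRAM -> back buffer copy.

    tile_samples is (TILE_H*SS) rows x (TILE_W*SS) columns of sample values.
    Returns the resolved TILE_H x TILE_W pixel tile.
    """
    out = [[0] * TILE_W for _ in range(TILE_H)]
    for y in range(TILE_H):
        for x in range(TILE_W):
            acc = 0
            for sy in range(SS):
                for sx in range(SS):
                    acc += tile_samples[y * SS + sy][x * SS + sx]
            out[y][x] = acc // (SS * SS)
    return out
```

The point is that the filtering costs no extra memory traffic: every sample is read exactly once, on the transfer that had to happen anyway.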
The images above show two different revisions of the chip codenamed AXE. It had a DX8 feature set (VS 1.0 and PS 1.1) with 4 pipelines and 2 TMUs per pipe. Planned clocks were 175 MHz core / 175 MHz memory. If everything had gone as planned, AXE would have been released as Avalanche 3D for Christmas 2001. The chip was capable of working in dual mode as well, so Avalanche Dual would have had around 46 GB/s of memory bandwidth and 8 DX8 pipelines.
After this, and before moving to the handheld/PDA side, the Boys had another project called Hammer, which had some interesting things coming. It had eDRAM too, but only 4 MB, and it incorporated their own occlusion culling technology. All the technology meant for Hammer became licensable after the project died, and someone was interested in at least the occlusion culling part, because all material relating to it was removed from their website soon after being added there. The only thing I heard was that it was removed because a customer wanted it to vanish; so far I have no idea who that customer was.
So, do we need eDRAM that fits the whole frame buffer? No, I don't think so. All new cards already work on pixel quads for several reasons, and ATI even has higher-level Super Tiling that is used for big offline multicore rendering solutions. As long as we can divide screen space into smaller parts, what's the reason for keeping the frame buffer as one big rendering space? Every time the renderer finishes a tile, it takes a little while before the next one starts, and there's no traffic on the external memory bus during that time, so you could basically use that time to move the finished tile from eDRAM to the frame buffer.
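That overlap argument can be illustrated with a toy timing model (all numbers here are made up for illustration, not taken from any real chip): with two tile buffers in eDRAM, the copy of a finished tile hides under the render of the next tile, so only the final copy adds to the total time.

```python
# Toy timing model: rendering N tiles with and without overlapping the
# eDRAM -> frame buffer copy with the next tile's rendering.
RENDER_TIME = 10  # arbitrary units per tile
DMA_TIME = 3      # tile copy-out; assumed <= RENDER_TIME so it fully hides

def total_time(num_tiles, overlap):
    if overlap:
        # Each copy runs during the next tile's render; only the last
        # tile's copy is exposed at the end.
        return num_tiles * RENDER_TIME + DMA_TIME
    # Serial version: render, then copy, for every tile.
    return num_tiles * (RENDER_TIME + DMA_TIME)

print(total_time(8, overlap=False))  # -> 104
print(total_time(8, overlap=True))   # -> 83
```

The hiding only works while the copy is no slower than a tile's render, which is exactly the bandwidth argument for keeping tiles small.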