xbdestroya said: Well, PS2 *has* a dedicated GPU, it's just that the GPU has no T&L abilities, and thus the EE gets co-opted.
dukmahsik said: Really, I thought it just had a rasterizer and not a full-on GPU.
xbdestroya said: I see where you're coming from. I'd still term it the system's 'GPU,' even if it doesn't really match the modern description of what a GPU is. We can just go with the older parlance of 'graphics chip' if you like, though.
Its lack of hardware T&L is really the demarcation line for it in terms of being 'ancient' vs 'modern.'
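For anyone unclear on what 'no hardware T&L' means in practice: the GS only rasterizes vertices that have already been transformed and lit, so a loop like the sketch below has to run on the EE/VUs every frame. This is a minimal illustration in plain C, not PS2 SDK code; the types and the trivial lighting are made up for the example.

typedef struct { float x, y, z, w; } Vec4;
typedef struct { float m[4][4]; } Mat4;

static Vec4 mat4_mul_vec4(const Mat4 *m, Vec4 v) {
    Vec4 r;
    r.x = m->m[0][0]*v.x + m->m[0][1]*v.y + m->m[0][2]*v.z + m->m[0][3]*v.w;
    r.y = m->m[1][0]*v.x + m->m[1][1]*v.y + m->m[1][2]*v.z + m->m[1][3]*v.w;
    r.z = m->m[2][0]*v.x + m->m[2][1]*v.y + m->m[2][2]*v.z + m->m[2][3]*v.w;
    r.w = m->m[3][0]*v.x + m->m[3][1]*v.y + m->m[3][2]*v.z + m->m[3][3]*v.w;
    return r;
}

/* The work a hardware T&L unit would do, done on the CPU/vector units instead:
   transform every vertex to clip space, compute a simple diffuse term,
   then hand the finished vertices off to the rasterizer. */
void transform_and_light(const Mat4 *mvp, const Vec4 *in, Vec4 *out,
                         const float *normal_z, float *diffuse, int count) {
    for (int i = 0; i < count; i++) {
        out[i] = mat4_mul_vec4(mvp, in[i]);
        /* trivial "lighting": clamp N.L against a fixed light along +Z */
        diffuse[i] = normal_z[i] > 0.0f ? normal_z[i] : 0.0f;
    }
}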
DiGuru said: So, by using EDRAM combined with the ROPs, they actually save lots of die space (transistors), and increase the total bandwidth quite a bit.
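Rough numbers on the bandwidth side (from memory, so treat them as a sanity check rather than gospel): because the ROPs sit on the daughter die next to the eDRAM, the worst-case backbuffer traffic never has to cross an external bus. With 4xAA, alpha blending and Z read/write, each pixel can touch about 4 samples x (4 B colour read + 4 B colour write + 4 B Z read + 4 B Z write) = 64 B, and at 8 pixels/clock x 500 MHz that works out to roughly 256 GB/s inside the daughter die - versus ~32 GB/s on the link from the shader core and ~22.4 GB/s to the GDDR3.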
Vysez said: At best, we could discuss the amount of eDRAM MS/ATi chose, seeing how some launch game engines seem not to be compatible with the tiling needed for AA, for instance.
scooby_dooby said: An interesting side note is that UE3.0 was NOT built with predicated tiling in mind, meaning all the UE3-based games won't be able to make use of the eDRAM as efficiently as possible.
So...when's UE4 coming out?
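For anyone wondering what 'built with predicated tiling in mind' actually asks of an engine: the render target lives in the 10 MB of eDRAM a tile at a time, so the engine submits its scene once, the GPU replays it per tile, and each tile gets resolved out to main memory. A toy sketch of that flow in C - the function names are invented stand-ins for illustration, not the 360 SDK:

#include <stdio.h>

typedef struct { int x0, y0, x1, y1; } Rect;

/* Stand-ins for the real GPU calls; names are made up for the example. */
static void set_tile_viewport(Rect r) {
    printf("render region (%d,%d)-(%d,%d) into eDRAM\n", r.x0, r.y0, r.x1, r.y1);
}
static void replay_scene_commands(void) {
    printf("  replay the scene command buffer (off-tile draws predicated away)\n");
}
static void resolve_tile(Rect r) {
    printf("  resolve tile (%d,%d)-(%d,%d) out to GDDR3\n", r.x0, r.y0, r.x1, r.y1);
}

int main(void) {
    /* 1280x720 with 4xAA colour + Z won't fit in 10 MB, so split into 3 bands. */
    const int width = 1280, height = 720, tiles = 3;
    for (int t = 0; t < tiles; t++) {
        Rect r = { 0, t * height / tiles, width, (t + 1) * height / tiles };
        set_tile_viewport(r);
        replay_scene_commands();
        resolve_tile(r);
    }
    return 0;
}

The catch is that everything submitted between those steps has to be replayable several times per frame without side effects, which is presumably the sort of assumption scooby_dooby means UE3.0 wasn't written around.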
one said:How about the space of the bus that connects eDRAM and a logic part?
All the same, Xenos' eDRAM had to be big enough to hold the framebuffer + Z-stencil and room for some reasonable size tiles. Doing that for dual 1080p would mean at least 32 MB even without leaving some room for the tiles to do their work. Xenos' daughter die circumvents this with its tiling scheme - certainly I'm sure Kutaragi would have loved to implement some eDRAM if it had been at all practical.
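Quick arithmetic behind that 32 MB figure (assuming 32-bit colour and 32-bit Z/stencil, no AA): 1920 x 1080 x 4 B ≈ 8.3 MB for colour plus ≈ 8.3 MB for Z/stencil is ≈ 16.6 MB per 1080p target, so two of them already puts you around 33 MB before leaving any working room for tiles.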
while SRAM uses four transistors (a flip-flop). So it takes four times as much room.
4-6 depending on the particular implementation, although SRAM cells are definitely about as dense an IC as you get. Also, it should be noted that the 4-6 transistors are for single-ported memories. If you want to have multiple read/write ports, you basically have to multiply it out.
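To put the die-space point in numbers: 10 MB is about 83.9 Mbit. As 1T1C eDRAM that's on the order of 84 M transistors (plus the capacitors); the same capacity as single-ported SRAM would be roughly 335 M transistors at 4T per cell, or around 500 M at 6T, before you even talk about extra ports.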
But it is faster, and doesn't need to be refreshed regularly (although that's transparent and handled by the memory modules themselves nowadays).
Yeah, it's transparent, but that refresh is a contributor to the latencies of DRAM.
It limits the available RAM space for the frame buffer, but it increases the things that can be done without penalty, like AA.
I'm still on the fence about the "zero penalty" AA... Not quite ready to buy that yet, especially when the enabling/disabling of AA marks the enabling/disabling of predicated tiling. There are just too many balls dropped with every piece of hardware in every one of the next-gen consoles. Even my own reservations aside, there has to be a reason why nobody is doing it, and instead resorting to substitutes.
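The AA/tiling coupling is easy to see from the sizes (assuming 32-bit colour, 32-bit Z/stencil, and one stored sample per MSAA sample): 1280 x 720 x 8 B ≈ 7 MB with no AA, which fits in the 10 MB; at 2xAA it's ≈ 14.1 MB and at 4xAA ≈ 28.1 MB, so the moment AA goes on, the render target has to be split across 2 or 3 tiles.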
xbdestroya said: Another way of phrasing it might be: among next-gen multi-console engines, how easy/costly will it be to include predicated tiling for the 360 version of the engine, development-wise?
scooby_dooby said: Ya, that's a great question, and it applies also to the SPEs in CELL. How many 3rd parties will spend the time to code everything specifically for the SPEs, or will they just use the baseline 1 PPE?
My guess would be that you'll see devs support one platform as their base, extract as much power as possible from that platform, then try and "make it work" on the other console when porting over.
Bill said: People here don't seem to be getting it.
I think EDRAM is a DISADVANTAGE, plain and simple.
Why it's not in PS3 is probably because they were SMARTER and used their resources BETTER.
I've seen nothing to convince me of the value of EDRAM. Bandwidth is not a problem at 720p, and motion blur is not worth 1/3 of the GPU.