Now I really don't understand anymore.
...
I really got it *all* wrong? Wow.
I think he was replying to PiNkY not you.
Edit: What do I know, you were wrong after all!
Here's what's wrong: it's not about buffers, it's about memory. You used figures for writing and reading data to and from memory, not internal buffers:

So the only point where I'm not clear on whether buffers are used, or whether it would be a write directly to memory, was the discussion of Cell-to-GDDR3 traffic. I wasn't sure (nor probably clear enough) how Cell would communicate most efficiently with RSX, whether this impacts the Cell-to-GDDR3 bandwidth, or whether there are separate channels for pipelining, say, textures to RSX without Cell accessing GDDR3 directly, and without RSX having to copy those textures from XDR on its own, which would be more efficient.
- 4 GB/s from Cell to RSX. This seems to me most useful for streaming in textures and vertex data. It would be more efficient to stream this directly into the appropriate RSX buffers.
The main advantage of Cell being able to write at 4 GB/s to RSX would therefore seem to be if the above is indeed the case, i.e. Cell can stream data into certain RSX buffers that do not directly tax the GDDR3.

In fact, your step 0 is:
0. Cell pre-processes vertex data (animations, decompression, etc.) and textures (decompression, or conversion to the compressed format that RSX likes; maybe generating textures from scratch, or modifying them to darken them or add shadow, etc.) and sends them to RSX (Cell reads from XDR, perhaps writes to XDR, then writes to RSX).
I'm pretty sure those two consoles are very powerful, but that doesn't mean all you have to do is stare at them and ask for an auto-made AAA game... There are more than a million things that can degrade your performance...

Reading comprehension FTW. That article said NOTHING about RSX having "more" shader power. It just said that devs are struggling to get shader performance out of Xenos, but that should change as they get more familiar with the architecture. Revolutionary new shader technology says a big "Well, durr!!!" to that idea.
The general rule is: make it simple! So if in your scheme Cell has already written data to XDR memory, there's really no need to read it again and write it again to GDDR3 when the GPU can read it (remember those slides?) where it already is.

This is therefore also, as I already indicated, where my biggest questions are. When I said in step 0 that Cell sends information to RSX, I simply don't know whether certain buffers come into play, like the ones Barbarian mentioned, or whether Cell writes this data to GDDR3 memory directly, or whether the buffers are even just special addresses in GDDR3 memory space; I really have no clue how this works.
There was a LONG discussion about just that many moons ago.
http://www.beyond3d.com/forum/showthread.php?t=31255
The conclusion was basically that it would not be an issue, since the few useful scenarios where you would actually want Cell to read from DDR would not require more than 16 MB/s.
The general rule is: make it simple! So if in your scheme Cell has already written data to XDR memory, there's really no need to read it again and write it again to GDDR3 when the GPU can read it (remember those slides?) where it already is.
Marco
Think video decoding, software rendering and so on and so forth.
Um, it's meant to be a launch game, is it not? I kind of doubt it's only 35% done if that's the case; it'd be pretty impossible to finish it in time. If it's REALLY only 35% done, it'll be a 2008 title for sure, and that might be optimistic considering how long they've been working on it!