deathkiller
Newcomer
Sorry for talking like I knew something, I will go back to read only mode...
On RSX, like all NVidia GPUs of that generation, the frontbuffer retains its AA data for its entire lifetime, with the RSX display hardware resolving the AA data to produce the picture on screen with every refresh of the screen (e.g. 60Hz), regardless of the frame render rate (say 30fps). A 720p 4xMSAA framebuffer is ~30MB. A fully resolved framebuffer is (taking a stab at this) ~3.5MB? Someone feel free to correct my math.
That is incorrect, as the auto-resolving of AA buffers on output is/was a function of the display controller on nVidia cards, not of the GPU itself. Front buffers on RSX have to be of the real display resolution (e.g. 1280x720) rather than any AA-adjusted variant, which means applications are responsible for the resolve via a full-screen pass.
Cheers,
Dean
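For anyone checking the buffer-size maths a couple of posts up, here is a quick back-of-the-envelope version. It is only a sketch, and it assumes 4 bytes of colour plus 4 bytes of depth/stencil per sample, which is one common layout:

[code]
/* Rough buffer-size check for 1280x720 with 4xMSAA.
   Assumes 4B colour + 4B depth/stencil per sample. */
#include <stdio.h>

int main(void)
{
    const double MiB    = 1024.0 * 1024.0;
    const double pixels = 1280.0 * 720.0;   /* 921,600 pixels */

    /* full multisampled buffer: 4 samples x 8 bytes each */
    printf("4xMSAA buffer: %.1f MB\n", pixels * 4 * 8 / MiB);  /* ~28.1 */
    /* resolved front buffer: colour only, 1 sample */
    printf("resolved     : %.1f MB\n", pixels * 4 / MiB);      /* ~3.5  */
    return 0;
}
[/code]

Which lands in the same ballpark as the ~30MB / ~3.5MB figures quoted above.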
When post-process effects are used (as in pretty much every game nowadays), isn't this advantage gone, since the eDRAM-processed scene has to be written back to the GDDR3 over a rather slow bus?
You completely forgot the point: EDRAM is not there to free up memory but to provide FREE AA, and if I am not mistaken it can also provide free HDR.
It appears to me that the architectures of the two consoles are very different, and that in comparison the PS3 is the least understood, not just because of time but also because of the radical nature of its design.
Yes on his mixing and mashing of space & performance (I will let a dev answer most of his questions), but the point of the eDRAM is not to give free AA. The goal of the eDRAM is to remove the backbuffer, which is a large bandwidth client, from the main system memory. The backbuffer is fairly small, but consumes a disproportionate amount of bandwidth for its footprint, so the eDRAM basically isolates a lot of the ROP activity. That is the purpose of eDRAM.

Some of the benefits are that the eDRAM provides just enough bandwidth for 4xMSAA as well as 4 Gigapixels of fillrate, so there is no bandwidth crunch and no contention. Likewise, the ROPs take this into consideration and can do single-cycle 4xMSAA, so there is no computational bottleneck either. Of course, at 720p with 4xMSAA there is the hurdle of tiled rendering, which does have a performance hit; how big depends on many variables.
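To put rough numbers on how heavy a bandwidth client the backbuffer is, here is a sketch of the peak ROP traffic. The byte counts assume worst-case blending and depth testing (colour read+write plus Z read+write per sample), and the 8-ROP / 500MHz figures are the commonly quoted Xenos ones:

[code]
/* Peak ROP bandwidth on a Xenos-like part:
   8 ROPs @ 500 MHz with single-cycle 4xMSAA.
   Assumes colour r/w + Z r/w at 4 bytes each per sample. */
#include <stdio.h>

int main(void)
{
    const double fill    = 8 * 500e6;       /* 4 Gpixels/s    */
    const double samples = fill * 4;        /* 16 Gsamples/s  */
    const double bytes   = 4 + 4 + 4 + 4;   /* C r/w + Z r/w  */

    printf("fillrate   : %.0f Gpixels/s\n", fill / 1e9);
    printf("ROP traffic: %.0f GB/s\n", samples * bytes / 1e9);  /* 256 */
    return 0;
}
[/code]

That 256GB/s is the internal eDRAM bandwidth figure usually quoted, which is what "just enough bandwidth" means in practice: the ROPs can never outrun the memory they talk to.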
...
...
Of the GPUs and CPUs in both consoles, I think many would agree with me that RSX is the least radical design; it is probably also the best known in regards to what it can do, what works well and what doesn't. Just my 2 cents on that :smile:
deathkiller said: The only visible result of this is the "flag algoritm" (Motorstorm, GTHD and Heavenly Sword flags look mostly the same)
Where did you get this?! Good fantasy! Our flag code was written internally.
IGN said: The game has also been spiced up with some nice effects, like an impressive glare as you emerge from the track's tunnel, and those waving flags that PS3 developers seem to love (see MotorStorm and Heavenly Sword).
From a component-only perspective this may be the case, but from the view of the console architecture as a whole I would disagree. It's analogous to viewing the GS in the PS2 as a crippled GPU compared to the XGPU (or any modern PC GPU of the last 6 or so years), when it was designed to work with the EE.
Regarding the RSX being the best known, I often wonder if this is truly the case, or if there is a bit more to it than we are led to believe.
Traditionally, the GPU handled all the heavy lifting when it comes to graphics, but if the SPU(s) can reduce the load by carrying out things like backface and/or occlusion culling more effectively, then the RSX has room for more exploitation. The same applies to the Xenon/Xenos relationship.
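As a purely illustrative example of the kind of per-triangle work that could be moved off the GPU (the names and layout here are mine, not from any real engine), the core of a backface cull is small enough that an SPU could chew through batches of triangles DMA'd into its local store:

[code]
/* Hypothetical triangle backface test, the sort of per-triangle
   work an SPU job could run before the GPU ever sees the data.
   A triangle is backfacing when its face normal points away
   from the eye. */
typedef struct { float x, y, z; } vec3;

static int is_backfacing(vec3 a, vec3 b, vec3 c, vec3 eye)
{
    /* face normal = (b - a) x (c - a) */
    vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };
    vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };
    vec3 n  = { e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x };
    /* view vector from the eye to the triangle */
    vec3 v  = { a.x - eye.x, a.y - eye.y, a.z - eye.z };
    return n.x * v.x + n.y * v.y + n.z * v.z >= 0.0f;
}
[/code]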
Performance penalties will really depend on how the engine deals with tiling; that as-low-as-5% hit for 3 tiles may hold up in some engines, but depending on how it scales it could become pretty extreme in others. In other words, probably never any 4xMSAA, though 2x would certainly be possible.
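For reference, the tile counts fall straight out of the 10MB eDRAM size. A sketch, again assuming 4 bytes colour + 4 bytes Z per sample:

[code]
/* Predicated-tiling tile counts at 1280x720 in 10MB of eDRAM.
   Assumes 4B colour + 4B Z per sample. */
#include <stdio.h>

int main(void)
{
    const long edram  = 10L * 1024 * 1024;
    const long pixels = 1280L * 720L;
    int s;
    for (s = 1; s <= 4; s *= 2) {
        long bytes_per_pixel = (long)s * (4 + 4);
        long pixels_per_tile = edram / bytes_per_pixel;
        long tiles = (pixels + pixels_per_tile - 1) / pixels_per_tile;
        printf("%dxMSAA: %ld tile(s)\n", s, tiles);  /* 1, 2, 3 */
    }
    return 0;
}
[/code]

So 2xMSAA needs 2 tiles and 4xMSAA needs 3, which is where that 3-tile figure comes from.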
Does Xenos support 3Dc or 3Dc+?
The PS3 isn't that radical though in regards to general architecture. CPU with memory (XDR System memory) and GPU with memory (GDDR3 VRAM) with some FlexIO voodoo connecting them. Sounds a lot like a PC, especially when you consider that the PPE is a PPC chip.
Where the PS3 is radical is in the CELL SPEs; instead of multiple traditional CPU cores like the PPE, it has 1 PPE and 7 asymmetric processors that are simpler (e.g. no branch prediction and no L2 cache) and require some extra elbow grease (DMAs to system memory), but have an insanely fast local store (memory) that makes them little monsters when you can fit within its confines (and even better SIMD). No doubt, that is radical -- both in terms of departure and in coolness.
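To make the "extra elbow grease" concrete, the SPE-side pattern looks roughly like this (a sketch using the Cell SDK's spu_mfcio.h intrinsics; the buffer size and function name are illustrative):

[code]
/* SPE side: pull a chunk of main memory into local store,
   wait for the transfer tag, then work on it.  Sketch only. */
#include <spu_mfcio.h>

#define CHUNK 16384                        /* bytes, DMA-aligned */
static char ls_buf[CHUNK] __attribute__((aligned(128)));

void process_chunk(unsigned long long ea)  /* effective address in XDR */
{
    const unsigned tag = 1;

    mfc_get(ls_buf, ea, CHUNK, tag, 0, 0); /* async DMA: XDR -> LS */
    mfc_write_tag_mask(1 << tag);
    mfc_read_tag_status_all();             /* block until DMA done */

    /* ... SIMD work on ls_buf at local-store speed ... */
}
[/code]

In practice you'd double-buffer: kick off the DMA for chunk N+1 before working on chunk N, so the transfer hides behind the compute.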
Yes, it is G80. A driver update will resolve the confusion... hello Brimstone!
...
I would argue that PS3 devs will get a chance to take a hack at this before Xbox devs: first because RSX is a pretty well-known quantity, and second because the SPEs open up a lot of doors. I think devs will be playing with Xenos for a while (ditto PS3 devs with SPEs) and use the CPU for other stuff... all that insignificant stuff like game code