The Most Detailed Tech Information on the Xbox 360 Yet

jimpo said:
Something I don't really understand....

- Xbox2 has insanely wide bandwidth to 10MB of EDRAM. This will give them free 4xAA in HDTV resolutions + other really incredible performance benefits
- PS2 had 4MB of EDRAM with insanely wide (by 1999 standards) bandwidth. Having any kind of AA in PS2 games seemed to be a major pain in the ass for developers. Also, judging by the games, I fail to see what kind of killer features, compared to Xbox1 technology, PS2's EDRAM offered? (just a man-of-the-street's view of that, my technical knowledge in the subject is real weak)


So how come Xbox2's EDRAM seems to have much bigger benefits than PS2's EDRAM?

I think it has less to do with the fact that it has eDRAM, and more to do with the logic surrounding the eDRAM, which has all the Z, stencil and alpha calculations required for AA built in. PS2 didn't have this; AFAIK nothing has had this, which is why it's causing such a stir.

It is the combination of that logic, and the eDRAM being big enough to hold frames at 720p, that gives Xbox 360 4x AA 'for free'.

Having said all that, I'm as much a man-on-the-street as you; this is just what I have picked up from here and the HardOCP article on the GPU.
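
Out of interest, here's a toy C sketch of the kind of per-sample work that AA logic sitting next to the eDRAM would be doing: a depth test and a colour write for each covered sample. Everything in it (names, the 4-sample layout) is invented for illustration and not taken from the actual hardware:

```c
/* Toy sketch of per-sample AA work: depth test and colour write
 * for every covered sample of a pixel. Names and layout invented. */
#include <stdint.h>
#include <stdio.h>

#define SAMPLES 4

typedef struct {
    uint32_t color[SAMPLES]; /* one colour slot per sample */
    float    depth[SAMPLES]; /* one Z value per sample     */
} PixelSamples;

/* coverage is a bitmask of the samples the triangle covers. */
static void rop_write(PixelSamples *px, uint32_t shaded_color,
                      float z, unsigned coverage)
{
    for (int s = 0; s < SAMPLES; s++) {
        if (!(coverage & (1u << s)))
            continue;                /* sample not covered     */
        if (z >= px->depth[s])
            continue;                /* fails the depth test   */
        px->depth[s] = z;            /* per-sample Z update    */
        px->color[s] = shaded_color; /* one colour, all samples */
    }
}

int main(void)
{
    PixelSamples px = { {0}, {1.f, 1.f, 1.f, 1.f} };
    rop_write(&px, 0xFF0000FFu, 0.5f, 0x7u); /* covers samples 0-2 */
    printf("sample 3 untouched: 0x%08X\n", (unsigned)px.color[3]);
    return 0;
}
```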
 
jimpo said:
Something I don't really understand....

- Xbox2 has insanely wide bandwidth to 10MB of EDRAM. This will give them free 4xAA in HDTV resolutions + other really incredible performance benefits
- PS2 had 4MB of EDRAM with insanely wide (by 1999 standards) bandwidth. Having any kind of AA in PS2 games seemed to be a major pain in the ass for developers. Also, judging by the games, I fail to see what kind of killer features, compared to Xbox1 technology, PS2's EDRAM offered? (just a man-of-the-street's view of that, my technical knowledge in the subject is real weak)

So how come Xbox2's EDRAM seems to have much bigger benefits than PS2's EDRAM?

Because ATI sized it appropriately for their target resolution and level of AA, and added logic to allow the GPU to better exploit it. On the matter of size, the PS2's eDRAM was just way too small to hold a supersampled back buffer necessary for anti-aliasing and still leave enough room for cached textures and the front buffer. The X360's eDRAM is only used to compose the back buffer, so 10MB is effectively disproportionately larger than the PS2's 4MB.

The front buffer is in system memory because it doesn't require much bandwidth at all (what a waste to have it in eDRAM on the PS2). Texture caching is probably done on the GPU, with only destination texels sent to the eDRAM. The eDRAM might also support deferred texturing to reduce bandwidth requirements further.
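
Some back-of-envelope C to put numbers on that argument. The PS2 figures assume a 640x448, 32-bit, double-buffered frame plus a 32-bit Z buffer (real games used a variety of formats), so treat them as illustrative only:

```c
/* Illustrative buffer sizes: PS2 holds front + back + Z in 4 MB of
 * eDRAM; X360 holds only back buffer + Z in its 10 MB. */
#include <stdio.h>

int main(void)
{
    const double MB = 1024.0 * 1024.0;

    /* PS2: front + back + Z all live in the 4 MB of eDRAM. */
    double ps2 = 640.0 * 448 * 4 * 3;
    printf("PS2 front+back+Z: %.2f MB of 4 MB (%.2f MB left for textures)\n",
           ps2 / MB, 4.0 - ps2 / MB);

    /* X360: only the back buffer + Z in the 10 MB of eDRAM (no AA). */
    double x360 = 1280.0 * 720 * (4.0 + 4.0);
    printf("X360 720p back+Z: %.2f MB of 10 MB\n", x360 / MB);
    return 0;
}
```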
 
jimpo said:
Something I don't really understand....

- Xbox2 has insanely wide bandwidth to 10MB of EDRAM. This will give them free 4xAA in HDTV resolutions + other really incredible performance benefits
- PS2 had 4MB of EDRAM with insanely wide (by 1999 standards) bandwidth. Having any kind of AA in PS2 games seemed to be a major pain in the ass for developers. Also, judging by the games, I fail to see what kind of killer features, compared to Xbox1 technology, PS2's EDRAM offered? (just a man-of-the-street's view of that, my technical knowledge in the subject is real weak)

So how come Xbox2's EDRAM seems to have much bigger benefits than PS2's EDRAM?

[Edited to remove points that the previous responses made even more clearly than I did. :D ]

The PS2 EDRAM / GPU was not designed very well. It had some very ambitious and powerful features (the bandwidth) that were emasculated by poor design decisions in the rest of the system. It's reasonable to expect that the Xbox 360 team will learn from the PS2 design, and not repeat those mistakes.
 
Was PS2 the first gaming application of eDRAM? Were its limitations a result of the ideas not really having been tried elsewhere, or of bad design (Sony or Toshiba, wasn't it?) that didn't learn from others' applications?
 
How much memory is needed to store a 1280 x 720 buffer for multisampling AA? Is it just 1280 x 720 x 4 bytes? So for supersampling you need 4x more memory?
 
Shifty Geezer said:
Was PS2 the first gaming application of eDRAM? Were its limitations a result of the ideas not really having been tried elsewhere, or of bad design (Sony or Toshiba, wasn't it?) that didn't learn from others' applications?

Well, it depends on how elastic your definition of EDRAM is. :) Some people might say that the defining characteristic of EDRAM is that it's a particular RAM cell process, and that the whole back buffer has to fit into the EDRAM at once. By that definition PS2 was the first system with EDRAM.

But there is a continuum, with different designs embedding different amounts of frame buffer near the GPU. For example, the Atari 2600 had a double-buffered 20-bit, half-scan-line buffer. The Dreamcast had a "tile", which was a little 64 x 64 pixel frame buffer (or was it 32 x 32? I don't remember), and so on.
 
In the KK interview someone translated, KK says Microsoft upped resolution and performance but didn't extend the nature of gaming. So I think the "Xbox 1.5" charge relates more to a difference in vision about computer entertainment than to talking trash about performance and architectural superiority:

Honda: What kind of power does an Entertainment Computer need?

Kutaragi: The approach of improving computing performance as much as possible and combining it with a GPU is just connecting a PC to a TV and redesigning the packaging of the case; that is not new as a concept. XBOX was such a game console in its first generation too. But we want to create a future that was previously impossible, by expanding computer entertainment more than ever before. At the press conference we showed the demo of ducks driven by physics simulation; it's all about generating a virtual world in PS3.

For us it's very acceptable that Microsoft invests in this area. But just upping the output resolution of a conventional game console and improving its graphics power doesn't expand the world of today's game consoles. That is only an XBOX 1.5 rather than a next-generation XBOX. Rather than replacing what existing game console vendors have been doing with high-performance hardware, I want them to find a totally new field through their own originality. If they do so, we can expand the world of computer entertainment together.

Now whether games developers and publishers share KK's vision and exploit the capabilities of his PS3 remains to be seen. They are more likely to extend graphics performance rather than extend the notion of what gaming could be.
 
JF_Aidan_Pryde said:
How much memory is needed to store a 1280 x 720 buffer for multisampling AA? Is it just 1280 x 720 x 4 bytes? So for supersampling you need 4x more memory?

All known implementations of MSAA and SSAA require the same amount of frame buffer memory. The big difference is that MSAA requires fewer GPU calculations, because the pixel shaders are only run once, not 4 times.

In MSAA the color for one pixel is replicated to all 4 samples before they are written into the frame buffer.

You could imagine a different sampling scheme that stored 1 color for every 4 samples, but you'd get funny artifacts when triangles overlapped, due to the correlated mask problem. The color of "behind" triangles would bleed through the seams between a mesh of "in front" triangles.

Each sample takes 8 bytes (4 for color, 4 for Z). So 1280 x 720 x 4 x 8 = 28.125 MB.

How can Microsoft fit that into a 10 MB EDRAM buffer?

1) They could use banding, where they draw the picture 3 times, in 3 parts. (Revenge of tile-based rendering! Xbox 360 is the new Dreamcast!)

2) They could use some secret-sauce compression scheme.

3) They could render smaller frame buffers and scale it up. (This is done all the time on PS2 and Xbox 1 games, and nobody notices.)

4) They could use 2x MSAA instead of 4x. (Still need 15MB or so.)

5) Some combination of two or more of the above.
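
For anyone who wants to check the arithmetic behind those options, a quick C sketch at the 8 bytes per sample used above (the band split and example resolution are assumptions, not known facts about the hardware):

```c
/* Quick arithmetic for the options above, at 8 bytes per sample
 * (colour + Z), per the post. Band and resolution choices are
 * just examples. */
#include <stdio.h>

static double mb(double w, double h, int samples)
{
    return w * h * samples * 8 / (1024.0 * 1024.0);
}

int main(void)
{
    printf("720p, 4x MSAA   : %6.3f MB\n", mb(1280, 720, 4)); /* 28.125 */
    printf("  one of 3 bands: %6.3f MB\n", mb(1280, 240, 4)); /*  9.375 */
    printf("720p, 2x MSAA   : %6.3f MB\n", mb(1280, 720, 2)); /* 14.063 */
    printf("640x480, 4x MSAA: %6.3f MB\n", mb(640, 480, 4));  /*  9.375 */
    return 0;
}
```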
 
wco81 said:
Honda: What kind of power does an Entertainment Computer need?

Kutaragi: The approach of improving computing performance as much as possible and combining it with a GPU is just connecting a PC to a TV and redesigning the packaging of the case; that is not new as a concept. XBOX was such a game console in its first generation too. But we want to create a future that was previously impossible, by expanding computer entertainment more than ever before. At the press conference we showed the demo of ducks driven by physics simulation; it's all about generating a virtual world in PS3.

Now whether games developers and publishers share KK's vision and exploit the capabilities of his PS3 remains to be seen. They are more likely to extend graphics performance rather than extend the notion of what gaming could be.

So he is saying that their CPU is superior for physics/simulation (which seems to be done on the Xbox GPU by "fluid reality"), and that this leads to innovation. Or does it depend on the developers, so MS's devs can do the same, i.e. innovate? But they are the ones who talked about more fillrate, and MS about new games/services.

Anyway, I think he is right: whoever gets more innovative uses from their software may get the best games, ergo probably more success.
 
All known implementations of MSAA and SSAA require the same amount of frame buffer memory.
Not all of them. 3Dlabs' Superscene AA uses a combination of a preallocated, fixed size buffer, with room for 2, 4, 8, or 16 samples per-pixel (customisable in the driver applet), and spills any additional samples that need to be stored into a dynamically allocated buffer. There's been some speculation, based on a few ATI patents, among other things, that the 360 chip may do something similar, possibly spilling additional samples to main memory (with some clever buffering scheme to mask any latency issues).

There's a nice description of the 3Dlabs system in this B3D article.
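
As a rough illustration of that spill idea (a sketch of the concept, not 3Dlabs' or ATI's actual design), here's a minimal C version where each pixel keeps a couple of samples in fixed slots and overflows the rest into a lazily grown buffer:

```c
/* Sketch of fixed-plus-spill AA sample storage. All names, sizes
 * and the per-pixel overflow layout are invented for illustration. */
#include <stdint.h>
#include <stdlib.h>

#define FIXED_SAMPLES 2

typedef struct { uint32_t color; float depth; } Sample;

typedef struct {
    Sample  fixed[FIXED_SAMPLES]; /* preallocated fast path           */
    int     used;                 /* total samples stored so far      */
    Sample *spill;                /* overflow storage, NULL if unused */
    int     spill_cap;
} Pixel;

/* Store one more sample, spilling once the fixed slots are full. */
static void add_sample(Pixel *px, Sample s)
{
    if (px->used < FIXED_SAMPLES) {
        px->fixed[px->used++] = s;
        return;
    }
    int extra = px->used - FIXED_SAMPLES;
    if (px->spill == NULL || extra == px->spill_cap) {
        px->spill_cap = px->spill_cap ? px->spill_cap * 2 : FIXED_SAMPLES;
        px->spill = realloc(px->spill, px->spill_cap * sizeof *px->spill);
    }
    px->spill[extra] = s;
    px->used++;
}

int main(void)
{
    Pixel px = { .used = 0 };
    for (int i = 0; i < 8; i++)   /* 8x AA: 2 fixed + 6 spilled */
        add_sample(&px, (Sample){ (uint32_t)i, 0.5f });
    free(px.spill);
    return 0;
}
```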
 
Composing the image with off-chip memory is slow. Old, conventional systems.

Composing it with high bandwidth embedded framebuffers is fast, but the framebuffers take up a lot of the limited amount of eDRAM that can be used. PS2.

Composing it with just the critical high bandwidth embedded backbuffer is fast, and the backbuffer takes up only some of the limited amount of embedded memory. X360.

Composing it with high bandwidth internal buffers is fast, and the framebuffer requirements aren't raised at all. Tile-based display list rendering.
 
Xmas said:
DaveBaumann said:
I've already mentioned that it tiles.
Always, or only under certain circumstances?


Developer's choice, but given 720p won't fit in the EDRAM with AA, it might as well be always.

Interesting observation of the day: 640x480 with 4x AA just fits in the EDRAM.
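
Checking that observation, at the 8 bytes per sample (32-bit colour + 32-bit Z) assumed earlier in the thread:

```c
/* Sanity check: 640x480 4xAA fits in 10 MB; 720p 4xAA needs tiling. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double MB    = 1024.0 * 1024.0;
    const double edram = 10.0 * MB;
    double vga = 640.0 * 480 * 4 * 8;  /* 640x480, 4x AA */
    double hd  = 1280.0 * 720 * 4 * 8; /* 720p,    4x AA */

    printf("640x480 4xAA: %6.3f MB -> fits in 10 MB\n", vga / MB);
    printf("720p    4xAA: %6.3f MB -> needs %d tiles\n",
           hd / MB, (int)ceil(hd / edram));
    return 0;
}
```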
 
Hm, your answers seem to imply ATI doesn't use the sample compression scheme proposed in the patent, i.e. storing additional AA samples in another location, but instead allocating the memory for all samples. Otherwise you wouldn't have to tile for 720p.
I wonder whether this really is a good idea.
 
It doesn't compress. The EDRAM always stores the upsampled data, but (like PowerVR, but on a larger basis) the tiled element is completely rendered and downsampled prior to it going to system RAM for display (once all the rest of the frame is rendered).
 