XBox2 specs (including graphics part) on Xbit

I don't know why (and this may be in the wrong thread), but this seems like an early revision? We have the 3 CPUs there, like a lot of people in the know have been hinting, but I never in my wildest dreams (exciting life I lead...) expected 3.5GHz clock speeds from the CPUs.

Three 3.5GHz CPUs and a 500MHz, 8-pipeline GPU with shared PS and VS units = unbalanced?
I see the eDRAM is used primarily as a frame buffer and Z-buffer... is this correct?

Just thinking out loud, but I'm very surprised to see such a clock speed on the CPUs alone.
 
As for raw fillrate, I expect nothing less than 16 textured pixels per clock. Anything less would be disappointing.

I was hoping that R500 or R600 would go to 32 pipelines. There is never enough raw fillrate, even at lower resolutions.

The Xbox has a 932 Mpixels/sec fillrate (4 pipes at 233MHz).

I expect at least a 10x increase with Xbox 2.

Either by more pipes, a higher core clock, or both combined. Rough math below.
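As a quick sanity check on that 10x figure (the Xbox 2 pipe count and clock here are just the rumored numbers, and the alternatives are hypothetical):

```python
# Back-of-the-envelope fillrate math; all Xbox 2 figures are rumored/illustrative.
xbox_fill = 4 * 233            # NV2A: 4 pipes @ 233 MHz = 932 Mpixels/s

xb2_fill = 8 * 500             # rumored: 8 pipes @ 500 MHz = 4000 Mpixels/s
print(xb2_fill / xbox_fill)    # ~4.3x -- well short of 10x

target = 10 * xbox_fill        # 9320 Mpixels/s
print(target / 16)             # 16 pipes would need ~583 MHz
print(target / 32)             # 32 pipes would need ~291 MHz
```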

3 shader units per pipe sounds nice.


I believe this block diagram is nothing more than one of the following:

a.) an old diagram from 2002-2003
b.) a diagram of an Xbox 2 intended for fall 2004
c.) a current or soon upcoming development kit


Remember, the initial specs going around for the original Xbox in late 1999/early 2000 (before the March 2000 reveal of the Xbox) were much weaker than the final hardware.


As others have said, three 3.5GHz CPUs coupled with an 8-pipe, 500MHz graphics processor seems very unbalanced. If MS can get 3 CPUs into Xbox 2, why not 2 graphics processors? Or at least one very, very fast and powerful one.
 
Well, that's a 500MHz GPU with 48 ALU ops per cycle = 24 GSIMDops/sec. A 3.5GHz CPU would typically only be able to issue about 1 SIMD op per cycle, so all three of them together (~10.5 GSIMDops/sec) can't match the GPU's raw vector processing throughput.
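To put numbers on that (the per-cycle issue rates here are assumptions, not confirmed specs):

```python
# Rough vector-throughput tally; issue rates are assumptions.
gpu_ops = 500e6 * 48           # 500 MHz x 48 ALU ops/clock = 24e9 ops/s
cpu_ops = 3 * 3.5e9 * 1        # 3 CPUs x 3.5 GHz x ~1 SIMD op/clock = 10.5e9 ops/s
print(gpu_ops / cpu_ops)       # ~2.3x in the GPU's favor
```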

It's not unbalanced. It just doesn't have enough eDRAM, IMHO. I'm a little disappointed it won't support FSAA or HDR at HDTV resolutions.
 
DemoCoder said:
Natoma, even 1080i still needs to be supported, and it still requires you to render a 1920x1080 framebuffer; you just downconvert it to 1920x540 to output a field.

With only 10MB of eDRAM, FSAA at HDTV resolutions is not supportable. 1280*720*8 bytes/pixel*4x FSAA = 29MB (yes, compression can be used, but you cannot depend on a guaranteed compression ratio; you have to allocate a buffer the size of the worst case). If you use a 64-bit FB for HDR, it's worse: 1280*720*12*4x = 44MB. Of course, it's all even worse for 1080i.
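Redoing that arithmetic explicitly (worst case, i.e. no compression; per-sample sizes as above):

```python
# Worst-case framebuffer sizes: width * height * bytes_per_sample * samples.
def fb_mb(width, height, bytes_per_sample, samples):
    return width * height * bytes_per_sample * samples / 1e6

print(fb_mb(1280, 720, 8, 4))    # ~29.5 MB: 32-bit color + 32-bit Z, 4x FSAA
print(fb_mb(1280, 720, 12, 4))   # ~44.2 MB: 64-bit HDR color + 32-bit Z, 4x
print(fb_mb(1920, 1080, 8, 4))   # ~66.4 MB: the 1080 case is worse still
```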

I'm not betting on AA this gen unfortunately. :(
 
Yeah, I hope this diagram is old. They should have just enough eDRAM to support at least a 1280x720 HDR 4xMSAA framebuffer, minimum.
 
Megadrive1988 said:
I'd like 48 MB of eDRAM and 512 MB main memory.

I am sure MS wants to lose as little as possible on XB2 hardware initially, and perhaps later in its life make some money on hardware AND software.

Edit: lose... loose? What's an 'o' between friends.
 
Chalnoth said:
I was trying to state that anything that is currently in development may well differ from that plan. As an example, depending upon engineering concerns, the eDRAM may be reduced or dropped altogether.

Why are you being such a party-pooper, Chal?
Oh, right. You are the resident Nvidia fan after all and the XB2 GPU will be supplied by ATi, so that attitude from you would be expected... :rolleyes: ;)
 
DemoCoder said:
They should have just enough eDRAM to support at least a 1280x720 HDR 4xMSAA framebuffer, minimum.

I wonder why this is so. Is the design that logic-heavy? AFAIK, unless IBM is fabbing, TSMC doesn't offer a 65nm embedded option until mid-2006 with CLN65HS. Perhaps this is a 90nm design, which would appear to be more in line with what I'd expect to see after the eDRAM cell sizes I used in the Broadband Engine thread. Some variable is missing or wrong; this is bizarre. Where's Dave... heh
 
DaveBaumann said:
The frame buffer supports tiling, i.e. some can exist in eDRAM, some in system/graphics RAM.

Putting parts of a FB in different memories is part of virtualizing memory (just like virtual memory support on Intel/AMD, where when you run out of memory things get paged out to the hard drive). Tiling is arranging memory to not be linear but "tiled", to reduce cache misses. All modern graphics hardware already supports tiling. No current NVIDIA or ATI hardware supports tile-boundary virtual paging.
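For anyone unclear on the distinction, here is a toy sketch of a tiled (non-linear) layout; the 32x32 tile size and row-major tile order are made up for illustration, and real hardware swizzles differ:

```python
# Toy tiled-address computation: pixels within a tile are contiguous,
# so neighboring (x, y) accesses hit the same tile and miss less often.
TILE_W = TILE_H = 32

def tiled_offset(x, y, surface_width):
    tiles_per_row = surface_width // TILE_W
    tile_index = (y // TILE_H) * tiles_per_row + (x // TILE_W)
    within = (y % TILE_H) * TILE_W + (x % TILE_W)
    return tile_index * TILE_W * TILE_H + within
```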
 
DemoCoder said:
With only 10MB of eDRAM, FSAA at HDTV resolutions is not supportable. 1280*720*8 bytes/pixel*4x FSAA = 29MB (yes, compression can be used, but you cannot depend on a guaranteed compression ratio; you have to allocate a buffer the size of the worst case). If you use a 64-bit FB for HDR, it's worse: 1280*720*12*4x = 44MB. Of course, it's all even worse for 1080i.

From the diagram, it certainly seems that the GPU uses both the console's main memory and the embedded memory together (a 33 GB/s read stream and a 22 GB/s write stream out of the GPU to CPU/main memory). If this is true, you could use the embedded DRAM as the primary buffer in the case of AA, if you have confidence that the majority of pixels will compress. You would then add additional buffers in main memory to hold the extra samples for those pixels that did not compress, as sketched below.
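A hypothetical accounting of that scheme (the 4:1 typical compression ratio is an assumption purely for illustration):

```python
# Split a 4x MSAA buffer between eDRAM (typical, compressed case) and a
# main-memory spill area sized for samples that fail to compress.
def split_mb(width, height, bytes_per_sample, samples, typical_ratio):
    full = width * height * bytes_per_sample * samples / 1e6
    in_edram = full / typical_ratio   # expected footprint if most pixels compress
    spill = full - in_edram           # worst-case overflow kept in main memory
    return in_edram, spill

print(split_mb(1280, 720, 8, 4, 4.0))  # (~7.4 MB in eDRAM, ~22.1 MB spill)
```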


Aaron Spink
speaking for myself inc.
 
umh...
DemoCoder and Natoma...

if I read that diagram right, is there any reason why the back buffer should be in eDRAM? In fact, the text does not mention it as a part kept in eDRAM. Also, the 3D core is capable of outputting 22.4 GB/s to the north bridge, which in turn has a link to main memory with the same amount of bandwidth.

So, to me, it looks like the MSAA buffers, stencil, and Z-buffer are the only ones kept in eDRAM; the back and front buffers look to be in main memory. (Also, the video scaler and video/audio output being linked to the north bridge instead of the 3D core could indicate that they get data from main memory instead of eDRAM.)


All of this makes it look like it could be possible to support FSAA in all HDTV modes. It certainly would not be as fast as having enough dedicated eDRAM, but it could work this way. 22.4 GB/s should be enough to cover the extra bandwidth that 8 32-bit pixel writes per clock would cause (quick check below).
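Checking that claim with the rumored clock (the figures are the same rumored ones from the diagram):

```python
# Color write bandwidth if the back buffer lives in main memory.
write_bw = 8 * 4 * 500e6       # 8 pixels/clock x 4 bytes x 500 MHz
print(write_bw / 1e9)          # 16.0 GB/s, within the 22.4 GB/s link
```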


I might be wrong though...
 
The MSAA buffer *is* the backbuffer.

Anyway, "spilling" the backbuffer into main memory seems to defeat most of the purpose of keeping it in eDRAM. Keeping Z in eDRAM would make sense, since it needs to be read and written to many times. But if you're doing alpha blending, once you spill to main memory, you're performance is effectively bottlenecked by the read/write rate of main memory. When rendering, sure, you'd have a huge FB bandwidth to write to, but then, that FB has to be flushed to main memory at a certain point, and you'll have a stall, since it can't be flushed faster than the pipelines are filling up the edram.

"spilling" buffers only really gains you big boosts if you have a TBDR.
 
DemoCoder: oops, you're right. The MSAA buffer is the back buffer. (It's 1:22 am here; time for bed for me...)

In any case, the front buffer is most likely in main memory, right? At least I don't see any reason why they would otherwise connect the video refresh to the north bridge instead of giving it its own path via the 3D core...
 