Xbox 2 hardware overview leaked?

Um, when did you tell us this exactly? From what I've heard, the chip going in there certainly isn't R400 based. Saying it's using some tech from R600 just confirms that.
 
So basically Xenon graphics are from the R400, R500, R600 family, and not from the R300, R420, R520 family.... is that what you mean, Dave?
 
Newb question: how does one determine if there is enough pixel fillrate?

e.g. 640x480 x frames-per-second (wanted) :?:
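My rough attempt at the arithmetic, as a sketch (the overdraw factor and the GPU fillrate here are assumed numbers I made up, not anything from the leak):

```c
#include <stdio.h>

int main(void)
{
    /* Required fillrate = width * height * fps * overdraw.
     * Overdraw (how many times the average pixel gets shaded per frame)
     * is an assumed value; real scenes vary a lot per game. */
    const double width    = 640.0;
    const double height   = 480.0;
    const double fps      = 60.0;    /* desired frame rate */
    const double overdraw = 3.0;     /* assumption */
    const double gpu_fill = 4.0e9;   /* hypothetical 4 Gpixel/s GPU */

    double required = width * height * fps * overdraw;
    printf("required: %.1f Mpixels/s\n", required / 1e6);
    printf("headroom: %.1fx\n", gpu_fill / required);
    return 0;
}
```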
 
More or less. My understanding at the moment on the PC front is that R520 will be their next-generation SM3.0 part, but will use the R300 architectural platform as its basis. They will adopt the unified shader architecture for DirectX Next in R600, and this will use what they are developing for Xenon as the basis, but obviously with further developments in order to round off the DX Next specifications. Basically, whatever features / functionality Xenon has will never completely come to the PC front; however, the architectural platform of Xenon will be used for DX Next (and potentially onwards) parts. I'd guess that if you used Xenon to the full you'd need a DX Next part to port it intact (graphically) across to the PC.

Development of the shader capabilities isn't really that much of an issue; what's really being focused on is the shader instruction scheduler. To get the best from a unified architecture you want as few bubbles in those ALUs as possible, so the more intelligent the scheduler is, the more efficient the architecture will be.
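To make the "bubbles" point concrete, a toy sketch of the idea (my own illustration; the structure and numbers are invented, not ATI's actual design):

```c
#include <stdio.h>

/* Toy model of a unified shader scheduler: one shared pool of ALUs serves
 * both vertex and pixel work, and the scheduler issues whichever kind is
 * pending, so the ALUs idle (a "bubble") only when nothing at all is ready. */
typedef struct { int vertex_jobs; int pixel_jobs; } WorkQueues;

static int issue_one(WorkQueues *q)
{
    if (q->vertex_jobs == 0 && q->pixel_jobs == 0)
        return 0;                         /* bubble: nothing ready */
    if (q->vertex_jobs >= q->pixel_jobs)  /* drain the deeper queue */
        q->vertex_jobs--;
    else
        q->pixel_jobs--;
    return 1;
}

int main(void)
{
    WorkQueues q = { 40, 160 };           /* a vertex-light, pixel-heavy frame */
    int cycles = 0, busy = 0;
    while (q.vertex_jobs || q.pixel_jobs) {
        busy += issue_one(&q);
        cycles++;
    }
    printf("ALU busy %d of %d cycles\n", busy, cycles);
    return 0;
}
```

With a fixed split of vertex and pixel units, the 40/160 mix above would leave one side starved while the other queues up; a unified pool with a smart scheduler keeps everything busy.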
 
Can a few things be cleared up for a non-tech person like me? Fake or not, does this system utilize the virtual video memory features of DX Next? I found that to be a rather interesting feature based on the article written on this site. And can someone explain or provide real-world examples of the beneficial effects of having the shaders able to directly access memory, textures, etc.? I mean, all this time I just assumed GPUs were already capable of that. Also, when the system block was initially leaked, the biggest concern (coming from DemoCoder, I believe) was that there wasn't enough eDRAM to support HDTV resolutions. With the statements made in this overview (720p native), is that concern still valid? Finally, there was a statement to the effect that the peak vertex rate of 500M/sec was attainable with non-trivial shaders. So does that mean we could see close to that high a number in-game with a decent amount of effects?
 
tngregoire said:
Also, when the system block was initially leaked, the biggest concern (coming from DemoCoder, I believe) was that there wasn't enough eDRAM to support HDTV resolutions. With the statements made in this overview (720p native), is that concern still valid?

In the paper it states

Caveats
In some cases, sizes, speeds, and other details of the Xenon console have not been finalized. Values not yet finalized are identified with a "+" sign, indicating that the numbers may be larger than indicated here. At the time of this writing, the final console is many months from entering production. Based on our experience with Xbox, it's likely that some of this information will change slightly for the final console.

...


Xenon is designed for high-definition output. Included directly on the GPU die is 10+ MB of fast embedded dynamic RAM (EDRAM). A 720p frame buffer fits very nicely here. Larger frame buffers are also possible because of hardware-accelerated partitioning and predicated rendering that has little cost other than additional vertex processing. Along with the extremely fast EDRAM, the GPU also includes hardware instructions for alpha blending, z-test, and antialiasing.


So the 10 MB EDRAM total isn't finalized yet. I'd guess that 10 MB is at 90 nm, and it could go higher if TSMC can get 65 nm up and running with good yields by the end of next year.
 
tngregoire said:
Can someone explain or provide real-world examples of the beneficial effects of having the shaders able to directly access memory, textures, etc.? ... Also, when the system block was initially leaked, the biggest concern was that there wasn't enough eDRAM to support HDTV resolutions. With the statements made in this overview (720p native), is that concern still valid?
10MB should be enough to support 1280x720p; you only need about 7.3MB for both the color and Z buffer at that resolution.
Giving the ALUs direct access to memory makes programming easier, and advanced programming features become possible: shader length no longer needs to be limited, you can use as many registers as you like, and even creating arrays and objects will be possible. I really hope the HLSL in DirectX Next can support pointers...

Edit: the required framebuffer size calculation above assumes no AA. When using 4xAA, the number grows to about 30MB??!! :?
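For anyone who wants to check the numbers, a quick sketch (assuming 32-bit color plus 32-bit Z per sample; the tile count is my own extrapolation from the 10 MB figure and the partitioning the doc mentions):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double MB     = 1e6;            /* decimal MB, matching the post above */
    const double pixels = 1280.0 * 720.0; /* 720p */
    const double bpp    = 4 + 4;          /* 32-bit color + 32-bit Z per sample */
    const double edram  = 10.0;           /* MB of EDRAM per the leaked doc */

    for (int samples = 1; samples <= 4; samples *= 2) {
        double size  = pixels * bpp * samples / MB;
        int    tiles = (int)ceil(size / edram);  /* partitions needed to fit */
        printf("%dxAA: %5.2f MB -> %d EDRAM tile(s)\n", samples, size, tiles);
    }
    return 0;
}
```

That gives 7.37 MB at 1xAA (fits), 14.75 MB at 2xAA (two tiles), and 29.49 MB at 4xAA (three tiles), which is presumably where the partitioning and predicated rendering come in.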
 
DaveBaumann said:
They will adopt the unified shader architecture for DirectX Next in R600, and this will use what they are developing for Xenon as the basis, but obviously with further developments in order to round off the DX Next specifications.
Right... However, the original way it was put out was that the Xbox2 chip would be based off R600 and that development was not seriously underway yet. At least that's the way I understood what was being said.

Which is what I disagreed with. The RX5A chip (or whatever its nomenclature is now) is a predecessor to R600, not a result of R600. It's been in development for quite a while.
 
Does anyone have an idea why they let the GPU read directly from the CPU's L2? What is it used for? It seems to me that the only explanation is that the CPU and GPU can cooperate on the rendering work in some situations.
 
Xbox 2 graphics development has probably been underway since early-to-mid 2002 and will end sooner than Revolution's development, which means Revolution could possibly have a better graphics processor, unless Nintendo is going the extremely cheap route.
 
991060 said:
Does anyone have an idea why they let the GPU read directly from the CPU's L2? What is it used for? It seems to me that the only explanation is that the CPU and GPU can cooperate on the rendering work in some situations.
I think there might be a few surprises in the Xbox2 CPU. :)
 
991060 said:
Does anyone have an idea why they let the GPU read directly from the CPU's L2? What is it used for?

1: The CPU builds display lists in L2 for the GPU to consume; much faster than writing them out to main RAM for the GPU to DMA back from, and it also avoids burning main memory bandwidth on trivial transactions. AGP fast writes were supposed to fill a similar function (allowing fast CPU access to the GPU), but only with relatively small transactions, which may have kept efficiency low.

2: CPU does vertex shading, leaving all of the GPU to do pixel shading. GPU reads back finished transformed meshes and then does the actual drawing. This is an evolution of scenario 1. :)
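A minimal sketch of scenario 1, assuming a simple shared ring buffer the GPU reads straight out of cache (the command format, names, and sizes are all invented for illustration; nothing here is from the leaked doc):

```c
#include <stdint.h>

/* Scenario 1 sketch: the CPU appends command words to a ring buffer kept
 * hot in L2, and the GPU fetches them straight out of the cache instead of
 * the CPU flushing display lists to main RAM first. A real implementation
 * would also need memory barriers around the index updates. */
#define RING_WORDS 4096

typedef struct {
    volatile uint32_t write;       /* CPU-owned write index */
    volatile uint32_t read;        /* GPU-owned read index  */
    uint32_t cmds[RING_WORDS];     /* packed command words  */
} CmdRing;

/* CPU side: append one command word, waiting if the GPU has fallen behind. */
static void ring_put(CmdRing *r, uint32_t cmd)
{
    uint32_t next = (r->write + 1) % RING_WORDS;
    while (next == r->read)
        ;                          /* ring full: spin until the GPU catches up */
    r->cmds[r->write] = cmd;
    r->write = next;               /* publish: GPU can now fetch from L2 */
}

int main(void)
{
    static CmdRing ring = { 0, 0, { 0 } };
    ring_put(&ring, 0xDEADBEEFu);  /* e.g. one packed draw command */
    return 0;
}
```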
 
GwymWeepa said:
Alright, so guys, do you think this proposed xbox2 would be an impressive set-up?


And how would it look compared to Unreal 3? Could this maybe do 12,000 polys a character, or should we expect just Unreal 3 level? How do you guys think these stats will change A.I.?
 
I'd hope Xenon could run a game like Unreal 3 with a higher polygon count (30,000 ~ 50,000 poly models) and at 60fps. Should be possible, since good Xenon games will be optimized specifically for its hardware.
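A quick sanity check against the 500M vertices/sec peak figure mentioned earlier in the thread (the usable fraction of peak and the cache-reuse assumption are just my guesses):

```c
#include <stdio.h>

int main(void)
{
    const double peak_vps = 500e6;   /* peak vertex rate from the doc */
    const double fps      = 60.0;
    const double usable   = 0.25;    /* assumed fraction of peak hit in practice */
    const double polys    = 40000.0; /* middle of the 30k~50k model estimate */

    /* Assume roughly one vertex per triangle with good vertex-cache reuse. */
    double verts_per_frame = peak_vps * usable / fps;
    printf("budget: %.2fM verts/frame\n", verts_per_frame / 1e6);
    printf("models: ~%.0f characters at %.0fk polys each\n",
           verts_per_frame / polys, polys / 1000.0);
    return 0;
}
```

Even at a quarter of peak that's about 2M vertices per frame at 60fps, so a few dozen 40k-poly characters doesn't look crazy on paper.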
 
2: CPU does vertex shading, leaving all of the GPU to do pixel shading. GPU reads back finished transformed meshes and then does the actual drawing. This is an evolution of scenario 1.

Somewhat like what the PS2 does, with the EE handling the vertex shading / polygon & lighting computations and sending them to the GS, which does the drawing.
 
Megadrive1988 said:
I'd hope Xenon could run a game like Unreal 3 with a higher polygon count (30,000 ~ 50,000 poly models) and at 60fps. Should be possible, since good Xenon games will be optimized specifically for its hardware.

Although that does seem rather high; if that did happen, I would take 20,000 in each model. Then it would be like the last 5 minutes in Halo (without the drop in framerate). With all the things on screen at once, that was fun. Can we expect virtual displacement mapping to be commonplace next gen?
 
Megadrive1988 said:
I'd hope Xenon could run a game like Unreal 3 with a higher polygon count (30,000 ~ 50,000 poly models)

When would you actually be close enough to anything to appreciate that kind of detail level?

Also, few things are more annoying than characters that are more detailed than the world they live in.
 
GwymWeepa said:
Alright, so guys, do you think this proposed xbox2 would be an impressive set-up?
It seems quite memory-subsystem-limited to me.
Impressive is in the eye of the beholder, and will depend a lot on its competition.
 
Wunderchu said:
the Xbox Next CPU may be manufactured using a 65nm process, which should help a lot with the cooling issue ...

IBM is having problems with 90nm for the 970. That is why the Power Mac G5 has hit only 2.5 GHz, instead of the 3 GHz promised a year ago.
 