My email interview about Xenos with Michael Doggett

Status
Not open for further replies.
Question 1: Looking at R520 and R500 (Xenos), which one, in layman's terms, is more powerful?

Answer: These two chips aren't designed for the same use. R520 is a high-end PC graphics chip, while Xenos (it's not R500) is designed for a home console. You might consider Xenos more powerful since it has 48 shaders, while R520 effectively has 16 (x2 ALU) pixel shaders plus 8 vertex shaders. But with the R520's higher clock speed, they come out close in terms of raw shader ALU throughput. When comparing memory bandwidth, though, Xenos is more powerful, due to the high bandwidth to its EDRAM chip.
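Doggett's "comes out close in raw shader ALU" claim can be checked with back-of-envelope arithmetic, using only the unit counts and clocks quoted in this thread and treating each ALU as one op per clock (a simplification that ignores co-issue and vector width):

```python
# ALU counts and clocks as quoted in the thread -- not official spec sheets.
xenos_alus, xenos_mhz = 48, 500
r520_alus, r520_mhz = 16 * 2 + 8, 625  # 16 pixel pipes x2 ALUs + 8 vertex shaders

# Treat one ALU-clock as one unit of shader work.
xenos_throughput = xenos_alus * xenos_mhz  # 24000 M ALU-cycles/s
r520_throughput = r520_alus * r520_mhz     # 25000 M ALU-cycles/s

print(f"Xenos: {xenos_throughput} M ALU-cycles/s")
print(f"R520:  {r520_throughput} M ALU-cycles/s")
```

With those numbers R520 actually edges slightly ahead, which matches the "close" verdict above.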
---------------------------------------------------------------------------------------------------
Question 2: Looking at the specs of R520, Xenos, and RSX (the PS3 GPU, using the G70 architecture with a higher clock (550 MHz) than the 7800, lower memory bandwidth, a FlexIO connection, and the possibility of TurboCache), which of these cards will output better graphics as time goes on (R520's life ending with R580, I assume)?

Answer: They are all very evenly matched. In the end, Xenos has two advantages. It has a unified shader architecture, so it can use all 48 shaders for vertex operations. It also has much greater framebuffer bandwidth than either of the others.

------------------------------------------------------------------------------------------------------

Question 3: ATI has said the unified architecture is revolutionary. How revolutionary is it? Will it make games on the Xbox 360 look better than the Nvidia card?

Answer: It is revolutionary in that the vertex and pixel shaders are unified. This is a major change in graphics hardware of a kind we haven't seen since the introduction of the vertex shader, and that was just an add-on, not a major change like this. It means Xbox games can use up to 48 vertex shaders, whereas the PS3 can only ever have a maximum of 8. That is a big change for developers once they start to use it.


------------------------------------------------------------------------------------------------------

Question 4: ATI has talked of efficiency, with current PC cards being 50-60% efficient while the Xbox 360 is more than 80% efficient. What does that mean, specifically?

Answer: I'm not sure what these efficiency numbers refer to. They could be shader or memory bandwidth efficiency. This is certainly something that we are always attempting to improve upon.

------------------------------------------------------------------------------------------------------

Question 5: I have seen that to make HDR work well you need to reduce AA, but I thought AA was virtually free when ATI first announced this card in the Xbox 360. And another question: how is it that AA had to be reduced with full HDR in Kameo, but HDR and AA are running at full capacity in Project Gotham Racing 3? Is this true?


Answer: The 360 has different HDR formats. The 64-bit-per-pixel formats take up more memory in the EDRAM, so if you are at the limit of memory you will have to reduce the number of AA samples. But if you use the 32-bit HDR formats, then you can run at 4x MSAA. I don't know the details for Kameo and PGR3, but there are many modes that the 360 can run in.
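The memory pressure described here can be sketched numerically. A rough model, assuming a 1280x720 target, 10 MB of EDRAM, a 4-byte depth/stencil buffer, and color and depth stored per MSAA sample (illustrative figures; the exact Xenos layout may differ):

```python
# Rough EDRAM budget check; figures are illustrative, not official.
EDRAM_BYTES = 10 * 1024 * 1024

def framebuffer_bytes(width, height, color_bytes, depth_bytes, msaa):
    """Bytes needed if every MSAA sample stores color and depth."""
    return width * height * (color_bytes + depth_bytes) * msaa

for color_bytes, label in [(4, "32-bit HDR"), (8, "64-bit HDR")]:
    for msaa in (1, 2, 4):
        size = framebuffer_bytes(1280, 720, color_bytes, 4, msaa)
        verdict = "fits" if size <= EDRAM_BYTES else "exceeds EDRAM (tile or cut AA)"
        print(f"{label}, {msaa}x MSAA: {size / 2**20:5.1f} MB -> {verdict}")
```

In this model the 64-bit formats exceed the 10 MB budget even without MSAA, which is why the AA sample count is the first thing to give.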

-------------------------------------------------------------------------------------------------------

Question 6: At the recent chip manufacturers' convention I got hold of information from ATI that there is Direct3D compression between the GPU and memory, which means it virtually acts as if the GPU-memory bandwidth is doubled from 22.4 GB/s.

Answer: Assuming you are referring to texture compression, then the 360 has
support for several texture compression formats including all the
DirectX compressed texture formats.

-------------------------------------------------------------------------------------------------------


I wanted to ask all the pressing questions here. I hope I did a good job with those questions. Discuss.
 
This seems awfully similar to the fake Ageia email that was posted a few months ago... I'm actually starting to wonder if all the "Doggett" emails are fake.
 
Awesome.

I still wonder about the EDRAM. More bandwidth is good for what, if we're not bandwidth-limited at 720p?

But the shader power is interesting. Exactly as I had tabbed it: R520 has 40 ALUs at 625 MHz, Xenos 48 at 500. So I figured they were close.
And I'd ask him again: if EDRAM makes a card more powerful, as he states, why is it not in the PC parts?
 
> "It means xbox games can use up to 48 vertex shaders, where as PS3 can only ever have a maximum of 8. That is a big change for developers once they start to use it."

It sort of implies the question: if all your shader units are doing vertex work, what is doing the pixel work? So how is it a benefit, when at some point you've got to do some pixel work?

That's not a realistic answer or a realistic comparison.

A better answer would be that you can have a better ratio of vertex to pixel work for any given scene, based on the workload. Maybe that is too complex an answer for the average person.

> "In the recent Chip manufacturs convention I got hold of information
from ATI that there is Direct3d Compression between the GPU and memory
which means it virtually acts as if the GPU-Memory Bandwidth is
doubled from 22.4 GB/s"

I'm surprised you asked this question, considering the D3D compression ratio is 6:1 but you assume 2:1, and RSX also uses DXT compression, so any multiplier on the 360's bandwidth would also have to be applied on the PS3 side. Might as well compare ACTUAL bandwidth figures then. If you're going to double the 360's bandwidth, then double the PS3's main bandwidth while you're at it.

> "It is revolutionary in that vertex and pixel shader are unified."

I don't understand how it's revolutionary, as all it provides is better balancing of loads for POSSIBLE efficiency gains depending on the scene being rendered. It's evolutionary, not revolutionary.
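The load-ratio argument can be illustrated with a toy model. The unit counts below are the thread's figures (an 8-vertex/24-pixel fixed split versus a 48-unit unified pool); the workloads are invented, and real hardware scheduling is far messier than this:

```python
# Toy load-balancing model; unit counts from the thread, workloads invented.
FIXED_VS, FIXED_PS = 8, 24  # fixed split (thread's RSX-style figures)
UNIFIED_POOL = 48           # unified pool (thread's Xenos figure)

def fixed_time(vertex_work, pixel_work):
    # With a fixed split, the busier stage becomes the bottleneck.
    return max(vertex_work / FIXED_VS, pixel_work / FIXED_PS)

def unified_time(vertex_work, pixel_work):
    # A unified pool can, ideally, divide itself to match the workload.
    return (vertex_work + pixel_work) / UNIFIED_POOL

for v, p in [(10, 90), (50, 50), (90, 10)]:  # (vertex, pixel) work units
    print(f"v={v:2d} p={p:2d}: fixed={fixed_time(v, p):5.2f} "
          f"unified={unified_time(v, p):5.2f}")
```

The unified pool's time depends only on the total work, while the fixed split degrades badly when a scene is vertex-heavy: exactly the "better ratio" point, not raw revolution.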
 
Edge said:
> "I'm surprised you asked this question, considering D3D compression ratio is 6:1, but you assume 2:1, and RSX also uses DXD compression, so any multiplier of the 360 bandwidth would have to also be done on the PS3 side. Might as well compare ACTUAL bandwidth figures then."

Yes, but most PS3 games will use OpenGL, while the DX9.0L layer, which allows more texture compression formats such as the one posted in the chip slides a few months ago, is not available on the PS3.

Edit: I don't think he would know much about the DX9.0L layer, as it's provided by Microsoft for the console's card, not by ATI.
 
> "yes but Most of Ps3 games will be OpenGL while the DX9.0L Layer is not available to PS3 which allows more texture compressions such as the one posted in the Chip slides a few months ago."

Pakpassion, Sony's using an Nvidia GPU, which uses S3TC compression, the exact same compression as D3D compression, since D3D compression comes from S3TC. The maximum ratio for that compression is 6:1.
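The 6:1 maximum falls straight out of the S3TC/DXT1 block layout, which is public: each 4x4 block of 24-bit RGB texels is encoded as two 16-bit endpoint colors plus sixteen 2-bit palette indices:

```python
# DXT1/S3TC block arithmetic (opaque RGB case).
texels = 4 * 4
uncompressed = texels * 3         # 48 bytes of raw 24-bit RGB
endpoints = 2 * 2                 # two RGB565 colors, 2 bytes each
indices = texels * 2 // 8         # sixteen 2-bit palette indices, 4 bytes
compressed = endpoints + indices  # 8 bytes per block
print(f"{uncompressed} -> {compressed} bytes, {uncompressed // compressed}:1")
```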
 
Edge said:
> "Sony's using Nvidia GPU, which uses S3TC compression, the exact same compression as D3D compression, as D3D compression comes from S3TC. The ratio for that compression is 6:1 maximum."

That's common knowledge in these parts.
 
Edge said:
> "Sony's using Nvidia GPU, which uses S3TC compression, the exact same compression as D3D compression, as D3D compression comes from S3TC."

I don't know exactly, but I don't think it's the same as the compression they were talking about in the chip forum slides. That's specific to D3D and might be specific to the X360 version of it.
 
The reason EDRAM isn't on the PC is the lack of developer support that would ensue from requiring a tiling-optimised engine for a single video card.

If the 360 is very successful, and porting to the PC is simplified by XNA tools, we may see ATI take the risk of releasing a PC card with EDRAM.

But I wouldn't count on it.
 
If tiling made a card more powerful, then an EDRAM PC card would be made.

MS doesn't own the patent on EDRAM. That's ridiculous.
The real question is: would ATI add 100 million transistors to R520 for little gain (EDRAM)?

I have yet to see what EDRAM actually DOES.

Well, well, it holds stuff!

(But the stuff works fine without it anyway.)
 
SirTendeth said:
> "The question of why EDRAM isnt on the PC is because of a lack of developer support that would insue due to the requirement of a Tiling optimised engine, for one video card."

yes, super-tiling makes little sense on a platform with no fixed target resolution - there you'd be better off with micro-tiling a la PowerVR - that's much more transparent to your rendering code than Xenos' super-tiling. on a console, OTOH, super-tiling could be a big win, if the amount of EDRAM is actually sufficient for most use scenarios. we'll see.
 
Another interesting thing he said in answering Question 3: "That is a big change for developers once they start to use it." Does that mean they haven't started utilising it properly, or is he just generalising?
 
Small clarification: Microsoft owns patents on one specific implementation of EDRAM.

Both Nintendo and Sony have already used their own versions, in the GC and PS2.

As far as I know, though, EDRAM is expensive, and therefore it will most likely only be integrated in small quantities. That makes fitting an entire frame into the buffer improbable, if not impossible, thus requiring a tiling mechanism, and that makes compatibility in the PC market more difficult for the developer.

Hence the lack of title support, which is what I think is the main reason we don't see EDRAM on the PC.
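The tiling requirement can be put into numbers. A sketch assuming 10 MB of EDRAM and 4 bytes each of color and depth per sample (illustrative figures, not an exact Xenos memory layout):

```python
import math

# Assumed EDRAM capacity for this sketch.
EDRAM_BYTES = 10 * 1024 * 1024

def tiles_needed(width, height, bytes_per_sample, msaa):
    """Minimum number of EDRAM-sized tiles to cover the render target."""
    target_bytes = width * height * bytes_per_sample * msaa
    return math.ceil(target_bytes / EDRAM_BYTES)

print(tiles_needed(1280, 720, 8, 1))  # a plain 720p frame fits in one tile
print(tiles_needed(1280, 720, 8, 4))  # 4x MSAA at 720p needs three tiles
```

On the PC there is no single target resolution to size the EDRAM against, so the tile count (and the engine work to handle it) varies per game and per setting, which is the compatibility problem described here.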
 
It seems like EDRAM is for high-bandwidth needs: high res, high AA, etc.

Gee, exactly the wrong environment for a console.
 
Bill said:
It seems like EDRAM is for high bandwidth needs. High res, high AA, etc.

Gee, exactly the wrong enviroment for a console.

but isn't the Xbox 360 hi-res compared to the Xbox? 480 vs 1024+?
And hi AA? 2x vs 4x MSAA?
 
Are you sure it didn't work out for the PS2 or GC?
And those were almost exclusively 480p.

Sony and Microsoft are pretending that this gen's baseline starts at 720p, and I would consider that to be high-res. But that's just me.
 