New ATI Console interview

He's (deliberately?) misrepresenting the architecture. It's 32 GB/s INTO the eDRAM (IIRC : 32 in, 16 out). There's 256 GB/s between the memory and logic on that same die. Ordinarily people don't talk about the bandwidth between a processor's local storage and logic circuits but MS saw fit to include it in this instance, though they didn't mention bandwidth of XeCPU cores and L1 or L2 caches which would also be stupidly high numbers relative to system bandwidth and provide more 'oomphtastic' numbers.
 
Alstrong said:
a.k.a. no eDRAM? :)
As long as ATI does not produce the exact same chip, which belongs to MS, they can do what they want. For example, ATI could produce a chip with the same unified shader architecture as Xenos, and with eDRAM too, if they wanted.
It's the Xenos chip as a whole that belongs to MS, not each and every part of its architecture.
PC-Engine said:
You know this for a fact? So MS owns the eDRAM, MEMEXPORT, Tiling, etc. technologies?
I know for a fact that MS owns the whole Xenos IP and its implementation rights, but I don't know the details about what MS IPs have, or have not, been integrated in Xenos.
 
Shifty Geezer said:
He's (deliberately?) misrepresenting the architecture. It's 32 GB/s INTO the eDRAM (IIRC : 32 in, 16 out). There's 256 GB/s between the memory and logic on that same die. Ordinarily people don't talk about the bandwidth between a processor's local storage and logic circuits but MS saw fit to include it in this instance, though they didn't mention bandwidth of XeCPU cores and L1 or L2 caches which would also be stupidly high numbers relative to system bandwidth and provide more 'oomphtastic' numbers.

Isn't it hard not to at least try to factor this speed into the equation though?

I mean, the amount of savings (BW-wise) MAY be substantial over existing architectures, correct?

I can't fault them for trying to bring it into the conversation.

Of course I'm just a layman. :)
 
Shifty Geezer said:
He's (deliberately?) misrepresenting the architecture. It's 32 GB/s INTO the eDRAM (IIRC : 32 in, 16 out). There's 256 GB/s between the memory and logic on that same die. Ordinarily people don't talk about the bandwidth between a processor's local storage and logic circuits but MS saw fit to include it in this instance, though they didn't mention bandwidth of XeCPU cores and L1 or L2 caches which would also be stupidly high numbers relative to system bandwidth and provide more 'oomphtastic' numbers.
But isn't that a little different in this case? Traditionally, when we talk about console GPU bandwidth, we're talking about the bandwidth between the storage (i.e. VRAM) and the logic circuits (i.e. the GPU).

My question is: what bandwidth does the PS3 have to work with for AA, Z ops, and stencil, and what bandwidth does Xenos have for those same ops?
 
Richard Huddy:
First and foremost we have a "unified shader architecture". No other console or PC chip can boast this. And what, in short, it means is that the hardware is always able to run at 100% efficiency.

All previous hardware has separate vertex and pixel shaders. That means that previous hardware just had to hope that the vertices and pixels came in at just about the right ratios. If you got too many pixels then the vertex engines would be idle, and if you got too many vertices then the pixel engines would starve instead. It's not uncommon for one part of the chip to be starved of work for a large majority of the time - and that's obviously inefficient.

With a unified architecture we have hardware that automatically moulds itself to the task required and simply does whatever needs to be done. That all means that the Xbox 360 runs at 100% efficiency all the time, whereas previous hardware usually runs at somewhere between 50% and 70% efficiency. And that should mean that, clock for clock, the Xbox graphics chip is close to twice as efficient as previous graphics hardware.

Let's be real here: what if Xenos/C1 is really only 90%, or let's be generous and say 95%, efficient, and other non-unified graphics hardware is 70% efficient? Then Xbox 360 graphics are nowhere near "close to twice as efficient as previous graphics hardware". A significant improvement, yes, but let's not stretch things too much!
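The efficiency arithmetic is easy to check. A minimal sketch - the 90-95% figure is this post's hypothetical and Huddy's 50-70% range is from the interview; none of these are measured specs:

```python
# Clock-for-clock speedup of a unified architecture over a split one,
# given assumed utilisation (efficiency) figures for each.
def speedup(unified_eff, split_eff):
    return unified_eff / split_eff

# Huddy's best case: 100% vs 50% -> 2.0x ("close to twice")
print(speedup(1.00, 0.50))  # 2.0
# This post's scenario: 90% vs 70% -> ~1.29x, well short of 2x
print(round(speedup(0.90, 0.70), 2))  # 1.29
```

So his "close to twice" only holds against the worst end of his own 50-70% range.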
 
Alstrong said:
Acert93 said:
Since when did the system get 6 cores?

He's referring to the "hyperthreaded cores" i.e. 2 threads per core.

Yes, but a separate thread is not going to run at full speed. For example, if the XeCPU had 3 cores and each core ran 1 hardware thread instead of 2, it would still have a floating point performance rating of 115 GFLOPS. Extra threads do not automatically double performance as he is implying. Far from it.
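The 115 GFLOPS point is simple back-of-envelope arithmetic. This sketch assumes the commonly quoted XeCPU breakdown of 3 cores x 3.2 GHz x 12 FLOPs/cycle (the per-cycle figure is an assumption, but the logic holds for any value):

```python
# Peak FLOPS depends on execution units and clock, not hardware thread count.
def peak_gflops(cores, clock_ghz, flops_per_cycle, hw_threads_per_core=2):
    # hw_threads_per_core is deliberately ignored: SMT threads share the
    # same execution units, so extra threads add no peak FLOPS.
    return cores * clock_ghz * flops_per_cycle

one_thread = peak_gflops(3, 3.2, 12, hw_threads_per_core=1)
two_threads = peak_gflops(3, 3.2, 12, hw_threads_per_core=2)
print(one_thread, two_threads)  # identical: ~115.2 GFLOPS either way
```

Counting "6 cores" changes nothing in this calculation.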

I think we would all agree this is the kind of bull that is just annoying. The only reason I see this quote as non-inflammatory is because it is an MS-centric discussion of MS products. At least there is no "Xbox 1.5" or "We have 3 cores, they have 1 and 7 other things" :LOL:
 
Megadrive1988 said:
Let's be real here: what if Xenos/C1 is really only 90%, or let's be generous and say 95%, efficient, and other non-unified graphics hardware is 70% efficient? Then Xbox 360 graphics are nowhere near "close to twice as efficient as previous graphics hardware". A significant improvement, yes, but let's not stretch things too much!

He does say 50% to 70%. 90-95% is close to twice as efficient as the worst case.


Acert93 said:
Yes, but a separate thread is not going to run at full speed....

Just telling you how he probably came to that calculation. Don't shoot the messenger. ;)
 
but let's not stretch things too much!
You can say that about the whole interview.

Does anyone have an answer to this question: what bandwidth does the PS3 have to work with for AA, Z ops, and stencil, and what bandwidth does Xenos have for those same ops?
 
Richard Huddy:
I guess there are a few numbers which are just astonishing. First of all we have 48 unified shaders all running in parallel. That gives us roughly three times the graphics power of existing high end PC hardware. In this context 48 is a huge number! The original Xbox had just one vertex shader and four pixel shader pipes.

I cannot agree with this.



* the original Xbox had *two* vertex shaders and four pixel pipelines

* as discussed many many times already, the 48 unified shader pipelines is not directly comparable to the amount of pipelines in current PC graphics hardware.

* he is totally NOT counting the 6 vertex shaders in current high-end PC GPUs/VPUs - the ones in his own company's VPUs, and in Nvidia's GPUs. He totally excluded vertex shaders in PC graphics processors while INcluding all of Xenos' "pipelines" to make Xenos "look bigger" and more powerful.


* it is not 48 pipes vs 16 pipes. It would be more like 48 pipes vs 22 pipes - and even though 48 vs 22 is the correct count of total "pipelines", it still isn't a fair comparison. But Richard did not even make that comparison; he made an even worse one.

what is UP with making every other V/GPU sound smaller (even their own PC ones) and Xenos bigger? o_O



The very *best*, most generous light one might be able to put Xenos in (it's an awesome chip, I am not saying it isn't, but since Huddy is talking about raw power we've got to bring things back down to earth) is that Xenos is equivalent to a 32 pixel pipeline GPU - not the 48 pixel pipeline GPU Huddy is basically describing.

With that said, Xenos *cannot* crank out 32 pixels per clock like a true 32 pixel pipeline GPU should be able to do. Xenos does not have the raw fillrate of a 32 pixel pipe (much less a 48 pixel pipe) GPU: it cranks out 8 pixels per clock. That's a huge disparity. Now, in Xenos' favor, what makes it better than a typical 8 pixel pipeline GPU is the FSAA applied without loss of that fillrate.
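Raw fillrate follows directly from pixels-per-clock and clock rate; a quick sketch (the 500 MHz Xenos clock is the commonly quoted figure, used here as an assumption):

```python
# Fillrate = pixels written per clock x clock rate. A "48 pipes" count says
# nothing about this; Xenos' ROPs output 8 pixels per clock.
def fillrate_gpixels_per_s(pixels_per_clock, clock_mhz):
    return pixels_per_clock * clock_mhz / 1000.0

print(fillrate_gpixels_per_s(8, 500))   # Xenos: 4.0 Gpixels/s
print(fillrate_gpixels_per_s(32, 500))  # a true 32-pipe part at the same clock: 16.0
```

Same clock, four times the fillrate gap - which is exactly the disparity described above.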
 
Who is this guy (the one interviewed in the article)? And why does it seem like he doesn't know what he's talking about? Is this one of those articles made to pump up the volume on the 360? I don't think it is, because he uses terms that a regular console gamer wouldn't understand...

Maybe he's mentally ill?
 
ralexand said:
Does anyone have an answer to this question of what is the bandwidth the ps3 is working with when it comes to AA,Z ops, and stencil and what's the same bandwidth that the xenos has for those ops?

Xenos has 256GB/s between its eDRAM and the logic on the same die. That much isn't strictly necessary, of course, to do the things you're talking about - not by a long shot.

I've asked multiple times for the answer to your question also, but no one seems to want to venture any guesses. I suppose it varies from game to game and even from frame to frame - probably from gpu to gpu too. I don't think it's a static figure.
 
The more I hear from him, the less credible he gets... But his interviews are targeted toward the general public, so I understand that he is playing nothing more than a cheerleader for MS. Even though he makes absolutely stupid comments such as "20GHz of processing power", the general public will probably buy them.
 
Hyperthreading does give a large benefit in performance, especially when software is coded specifically to use it. Hyperthreading can provide over a 30% increase in performance.
 
ecliptic said:
Hyperthreading does give a large benefit in performance, especially when software is coded specifically to use it. Hyperthreading can provide over a 30% increase in performance.

That's a little different from suggesting a 100% improvement, as in the interview, though. One core with two threads @ X GHz != one core with one thread @ 2X GHz.
 
Really, I haven't seen any comments worse than what KK has said, so what's the problem? If it's good enough for Sony to do, why isn't it good enough for MS?
 
Titanio said:
ralexand said:
Does anyone have an answer to this question of what is the bandwidth the ps3 is working with when it comes to AA,Z ops, and stencil and what's the same bandwidth that the xenos has for those ops?

Xenos has 256GB/s between its eDRAM and the logic on the same die. That much isn't strictly necessary, of course, to do the things you're talking about - not by a long shot.

I've asked multiple times for the answer to your question also, but no one seems to want to venture any guesses. I suppose it varies from game to game and even from frame to frame - probably from gpu to gpu too. I don't think it's a static figure.
I'm not saying it's necessary, I'm saying that it's there. I don't understand why bandwidth wouldn't be static, though?
 
The maximum available bandwidth for RSX's buffers is going to be the entirety of its bandwidth - 50-ish GB/s. Unless they have some local storage thing going on.

ralexand - there was a lot of debate on what these bandwidth figures are and what they're 'worth'. I went looking but couldn't find anything specific, though if you revisit any thread regarding 'Major Nelson' you'll probably meet the same arguments.

In essence (at least, my argument :D ), the on-die bandwidth is irrelevant as a figure. MS could just as easily have claimed infinite bandwidth, as it is the data availability for the logic on chip, which never has to wait. The eDRAM does serve a bandwidth-saving purpose, and a claim of, say, '50 GB/s saved from main bandwidth' would be a fair statement (as long as the BW saved was calculated fairly). Throwing it out as part of a system aggregate bandwidth figure is baloney, and so is the idea that this is bandwidth between the GPU main die and the eDRAM+logic daughter die.
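A '50 GB/s saved' style figure could, in principle, be estimated from the frame-buffer traffic the eDRAM keeps off the main bus. A rough sketch, where every input (bytes per sample, overdraw, AA samples, frame rate, read/write mix) is an illustrative assumption, not a measured number:

```python
# Back-of-envelope: frame-buffer traffic kept off main memory by the eDRAM.
def fb_traffic_gb_per_s(width, height, bytes_per_px, overdraw,
                        aa_samples, fps, accesses_per_px=2):
    # accesses_per_px=2 models one read plus one write per touched sample.
    samples = width * height * aa_samples
    return samples * bytes_per_px * overdraw * accesses_per_px * fps / 1e9

# 720p, 8 bytes/sample (colour + Z), 3x overdraw, 4x AA, 60 fps
print(round(fb_traffic_gb_per_s(1280, 720, 8, 3, 4, 60), 1))  # ~10.6 GB/s
```

Which also shows why the figure varies game to game and frame to frame: every one of those inputs changes with the workload.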
 
jvd said:
Really , I haven't seen any comments worse than kk has said so whats the problem ? If its good enough for sony to do , why isn't it good enough for ms ?
Has KK ever stated a technical statistic that is totally false, like "256 GB/s between GPU and eDRAM"? He comes out with poetic nonsense, but I don't know that he's ever used totally false figures (save perhaps the 2 teraflop system performance for PS3 that reared its ugly head at E3).
 
Titanio said:
ecliptic said:
Hyperthreading does give a large benefit in performance, especially when software is coded specifically to use it. Hyperthreading can provide over 30% increase in performance.

That's a little different than suggesting a 100% improvement as in the interview though. One core with two threads @ XGhz != 1 core with one thread @ 2*XGhz.

Most of the reason hyperthreading isn't nearly as effective as it could be is the simple lack of support for it on the software side of the computer industry. I always wanted to see just what kind of technological benefits it could have had if it were heavily supported.
 