New ATI Console interview

We will let this slide for now.

Nightz, in the future put all PR and interviews relating to a specific company and console in the respective PR thread.
 
Richard Huddy
Then we have a total of 2 Terabits per second of bandwidth (you can also write this as "256 Gigabytes per second of bandwidth") into the intelligent video memory. To give you a sense of the scale of this, it's roughly 50 times as much as was available in the first Xbox, and it's equivalent to being able to copy roughly 50 DVDs across the memory bus every second. Every now and then I have to look at this number again to remind myself of just how huge it is!

okay, we can give him this. nothing outright disagreeable here. sure, it is FLUFFY PR marketing, but Jen-Hsun made the same comparison with the bandwidth between Cell and RSX: 7 DVDs per second through the ~35 GB/sec Cell<->RSX link.

Also, the "50 times as much (graphics bandwidth) as Xbox" comment he made is basically the SAME comparison that Sony did with PlayStation 2 vs Dreamcast in 1999, when Sony said PS2 can handle nearly 50 times as much 3D image data as Dreamcast. I believe Sony derived that from the GS's 48 GB/sec bandwidth compared to Dreamcast's 0.8 GB/sec (800 MB/sec) bandwidth between the PowerVR2DC and memory.
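For what it's worth, the marketing numbers above do roughly check out. A quick back-of-the-envelope sanity check (my assumptions, not from the interview: a ~4.7 GB single-layer DVD, the commonly cited 6.4 GB/s unified memory bandwidth of the first Xbox, and the 48 GB/s GS vs 0.8 GB/s Dreamcast figures mentioned in the post):

```python
# Sanity-check the bandwidth comparisons in the interview and the post above.
# Assumptions: single-layer DVD ~= 4.7 GB; original Xbox unified memory
# bandwidth ~= 6.4 GB/s; PS2 GS eDRAM ~= 48 GB/s; Dreamcast ~= 0.8 GB/s.

# 2 "terabits" here is binary-ish: 2048 gigabits / 8 bits-per-byte = 256 GB/s.
gb_per_sec = 2 * 1024 / 8                  # 256.0

dvd_gb = 4.7
dvds_per_sec = gb_per_sec / dvd_gb         # ~54 DVDs per second ("roughly 50")

xbox_ratio = gb_per_sec / 6.4              # ~40x the first Xbox ("roughly 50x")
ps2_dc_ratio = 48 / 0.8                    # 60x, Sony's "nearly 50x" claim

print(gb_per_sec, round(dvds_per_sec), round(xbox_ratio), round(ps2_dc_ratio))
```

So "roughly 50 DVDs per second" and "roughly 50 times the first Xbox" are both in the right ballpark, in the same loose way Sony's 1999 comparison was.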
 
Megadrive1988 said:
nothing outright disagreeable here
Then we have a total of 256 Gigabytes per second of bandwidth into the intelligent video memory.
He says 'into' the intelligent video memory, not 'inside'. The bandwidth 'into' the intelligent video memory is 32 GB/s. I find that outright disagreeable :p
 
Shifty Geezer said:
In essence (at least, my argument :D ) the on-die bandwidth is irrelevant as a figure.

I beg to differ. When the RAM itself has onboard processing power specifically made to relieve a lot of the processing pressure off the main GPU, and bandwidth off the main bus, it is very relevant.

Shifty Geezer said:
Has KK ever said a technical statistic that is totally false, like "256 GB/s between GPU and eDRAM"? He comes out with poetic nonsense, but I don't know that he's ever used totally false figures (save perhaps the 2 teraflop system performance for PS3 that raised its ugly head at E3)

For one, you are falsely quoting him, obviously to make some kind of exaggerated point.

The real quote:

"Then we have a total of 2 Terabits per second of bandwidth (you can also write this as “256 Gigabytes per second of bandwidth”) into the intelligent video memory."

Shifty Geezer said:
Megadrive1988 said:
nothing outright disagreeable here
Then we have a total of 256 Gigabytes per second of bandwidth into the intelligent video memory.
He says 'into' the intelligent video memory, not 'inside'. The bandwidth 'into' the intelligent video memory is 32 GB/s. I find that outright disagreeable :p

Obviously the EDRAM is more than simply RAM. He never gives a "from" to go with that "into". So you are putting words into his mouth.
 
Shifty Geezer said:
Megadrive1988 said:
nothing outright disagreeable here
Then we have a total of 256 Gigabytes per second of bandwidth into the intelligent video memory.
He says 'into' the intelligent video memory, not 'inside'. The bandwidth 'into' the intelligent video memory is 32 GB/s. I find that outright disagreeable :p

ok, I obviously missed that part then :)
 
ecliptic said:
I beg to differ. When the RAM itself has onboard processing power specifically made to relieve a lot of the processing pressure off the main GPU, and bandwidth off the main bus, it is very relevant.
The moment the eDRAM was given logic, it became a processor. I have never said it's irrelevant either. It's a smart, capable design. Kudos to ATi.

For one, you are falsely quoting him, obviously to make some kind of exaggerated point.
Huh? Maybe I didn't quote verbatim, but the talk (not just here, but also in other MS-produced material) has been of 256 GB/s into the eDRAM. This point has appeared in much discussion all over the internet and confused a great many people, me included :D

The real quote:

Then we have a total of...256 Gigabytes per second of bandwidth...into the intelligent video memory.
Obviously the EDRAM is more than simply RAM.
The eDRAM is that 'intelligent video memory', is it not? The piece of silicon that possesses both storage and logic circuits. If he meant just the eDRAM (and not the logic attached to it) then it's not intelligent.

He is obviously talking about data passing into the daughter die, which he terms 'intelligent video memory'. I can't see that it can be read any other way. If he meant bandwidth internal to that chip, why didn't he use the term 'inside'? That's misleading. Unless he's talking about 256 GB/s into the 'intelligent video memory' from elsewhere, but where would that be?
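To make the 'into' vs 'inside' distinction concrete, here is a minimal sketch using the bandwidth figures as published in the Beyond3D Xenos article (the link labels are mine, just for illustration):

```python
# Xenos bandwidth figures as published (Beyond3D article);
# link labels here are just illustrative names, not official terms.
links_gb_s = {
    "GDDR3 main memory <-> parent GPU": 22.4,
    "parent GPU -> daughter die ('into')": 32.0,
    "eDRAM <-> ROP logic on daughter die ('inside')": 256.0,
}

into = links_gb_s["parent GPU -> daughter die ('into')"]
inside = links_gb_s["eDRAM <-> ROP logic on daughter die ('inside')"]
print(f"The on-die figure is {inside / into:.0f}x the inter-die link.")  # 8x
```

That 8x gap is exactly why quoting 256 GB/s as bandwidth 'into' the daughter die is misleading.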
 
Shifty Geezer said:
Then we have a total of...256 Gigabytes per second of bandwidth...into the intelligent video memory.
Obviously the EDRAM is more than simply RAM.
The eDRAM is that 'intelligent video memory', is it not? The piece of silicon that possesses both storage and logic circuits. If he meant just the eDRAM (and not the logic attached to it) then it's not intelligent.

He is obviously talking about data passing into the daughter die, which he terms 'intelligent video memory'. I can't see that it can be read any other way. If he meant bandwidth internal to that chip, why didn't he use the term 'inside'? That's misleading. Unless he's talking about 256 GB/s into the 'intelligent video memory' from elsewhere, but where would that be?

There is obviously stuff we don't know about the GPU that has not been, and may not be, revealed. It could just as easily be misspoken. It is not like he kept going on about the bandwidth, nor did he get specific.

Maybe it actually does have 256 GB/s of bandwidth to the GPU and other reports are false. We have heard it before, and a lot of times it's brushed off because other reports state differently.
 
Given the depth of Dave's Xenos article, I can't see any reason to believe there's more to Xenos than what we already know. If you haven't seen it already, view the Beyond3D article on Xenos direct from Dave's interview with ATi...

http://www.beyond3d.com/articles/xenos/

I don't think anyone can accept that ATi and/or MS are hiding some other 'intelligent video memory' that has 256 GB/s of bandwidth into it from elsewhere in the system. So as you say, the only other fair suggestion, if it's not PR BS, is that it's 'misspoken', but that's quite a lot of people from the MS camp who have been misspeaking on this same concept for a few months on and off now. That's why I put it down to PR nonsense, deliberately misleading the public with meaningless numbers (which they all do - well, Sony does. Nintendo seem better behaved).
 
Shifty Geezer said:
Given the depth of Dave's Xenos article, I can't see any reason to believe there's more to Xenos than what we already know. If you haven't seen it already, view the Beyond3D article on Xenos direct from Dave's interview with ATi...

http://www.beyond3d.com/articles/xenos/

I don't think anyone can accept that ATi and/or MS are hiding some other 'intelligent video memory' that has 256 GB/s of bandwidth into it from elsewhere in the system. So as you say, the only other fair suggestion, if it's not PR BS, is that it's 'misspoken', but that's quite a lot of people from the MS camp who have been misspeaking on this same concept for a few months on and off now. That's why I put it down to PR nonsense, deliberately misleading the public with meaningless numbers (which they all do - well, Sony does. Nintendo seem better behaved).

didn't you read the first page of the article? A lot of technical details are still hidden from the public
 
Specifics, such as ATi's prioritization/scheduling algorithms, yes. Fundamental aspects to the hardware - no. When talking about 'intelligent video memory' the ATi spokesman has to be talking about the daughter die. I'll be totally and utterly gobsmacked if otherwise. :oops:
 
Just to point out another thing here:

On Xbox 360 I’ve been involved in helping developers understand how the hardware works and how to drive it. I’ve had the pleasure of presenting at some of the Microsoft developer events (called “XFests”) which they hold every few months in both Seattle and London. When I’m there I present pure technical subject matter with no marketing slant thrown in. Xbox developers tend to be a very smart crowd, so you have to be on your toes at these events, but the atmosphere is terrific – people are really excited about the feature set and awesome power of Xbox 360.

Can this mean that we will have first-gen games that are much more optimal (in using the HW) than we thought?
I mean, a first gen more optimal (to the HW) than, for example, Xbox or GC.
 
Shifty Geezer said:
Specifics, such as ATi's prioritization/scheduling algorithms, yes. Fundamental aspects to the hardware - no. When talking about 'intelligent video memory' the ATi spokesman has to be talking about the daughter die. I'll be totally and utterly gobsmacked if otherwise. :oops:

Nothing but assumptions on your part.
 
Shifty Geezer said:
Given the depth of Dave's Xenos article, I can't see any reason to believe there's more to Xenos than what we already know.

absolutely not true. there were things purposely edited out of the published version of Dave's article (given the sensitivity of these issues) that we don't know about.

There probably are also things about Xenos/C1 which we are not aware of, that were not whatsoever in Dave's article or part of Dave's discussion with the ATI architects, even to begin with.
 
i did not intend to post again in this thread (as i'm trying to stay out of PR threads), but it's sad to see b3d regulars falling for such senile PR bullshit.. each and every performance number given in this interview is outright PR. i can see fanboys falling for that, but come on, b3d regulars, you know better than that.
 
Megadrive1988 said:
Shifty Geezer said:
Given the depth of Dave's Xenos article, I can't see any reason to believe there's more to Xenos than what we already know.

absolutely not true. there were things purposely edited out of the published version of Dave's article (given the sensitivity of these issues) that we don't know about.

There probably are also things about Xenos/C1 which we are not aware of, that were not whatsoever in Dave's article or part of Dave's discussion with the ATI architects, even to begin with.

Yeah, I mean, ATI talked about "Fluid Reality", however it was not mentioned in Dave's article (I think....)
 
darkblu said:
i did not intend to post again in this thread (as i'm trying to stay out of PR threads), but it's sad to see b3d regulars falling for such senile PR bullshit.. each and every performance number given in this interview is outright PR. i can see fanboys falling for that, but come on, b3d regulars, you know better than that.
This needs to be flashed on the browser every time someone enters this place.
 
Shifty Geezer said:
jvd said:
Really, I haven't seen any comments worse than what KK has said, so what's the problem? If it's good enough for Sony to do, why isn't it good enough for MS?
Has KK ever said a technical statistic that is totally false, like "256 GB/s between GPU and eDRAM"? He comes out with poetic nonsense, but I don't know that he's ever used totally false figures (save perhaps the 2 teraflop system performance for PS3 that raised its ugly head at E3)

I understood his comment to mean 256 GB/s between the logic and the memory on the daughter die itself.

I can see how someone who has not read Dave's Xenos article (or seen the diagram) may not get that at all, though, and think that they indeed mean it is between the GPU and eDRAM. (Not at all implying that you do not understand it, Shifty.)

but... again, I understood his comment to mean 256 GB/s between the logic and the memory on the daughter die itself. :) ;)
 
Acert93 said:
Alstrong said:
Acert93 said:
Since when did the system get 6 cores?

He's referring to the "hyperthreaded cores" i.e. 2 threads per core.

Yes, but a separate thread is not going to run at full speed. e.g. if the XeCPU had 3 cores and each core ran 1 HW thread instead of 2, it would still have a floating point performance rating of 115 GFLOPs. Extra threads do not automatically double performance as he is indicating. Far from it.

I'm really glad someone else has finally bothered to step up to pushing this issue, at last! :D It seems like so many here on this board are content to simply treat "hardware threads" as functionally equivalent to fully operational, additional cores. It seems to me that you don't just get an additional 100% or even 30% of resources out of thin air. It would be 30% resources recovered while the 1st thread is running at greater than 30% losses, imo. It's a zero-sum game, theoretically (and likely less than 100% utilization of total resources, in real practice). The theoretical max should still be 1 thread amazingly getting 100% hits on the pipeline (and logically 0% on the 2nd thread), via uber perfect code. 2 threads are simply in contention for each others resources, but the trick is they are picking up stray resources that would have been otherwise lost (and unrecoverable) by the other. Naturally, code is imperfect to varied degrees (but hopefully approaching ideal cases at times), so the multithreading helps to recoup some of those losses to imperfect code.

We need to stomp out this "extra thread = free core" misconception, or next generation we will be blessed with marketing gimmickry such as the "8 hardware thread CPU" that magically turns a single core into an "army of 8". I mean, isn't this literally as bad as adding up the megahertz on an n-processor system and saying, "Aha, this will be like a 20 GHz CPU"? ;) I take that back - it's got to be worse than that, eh?

I don't recall the detailed breakdown of the "115 GFLOPs" figure, but if the aggregate involved an "SMT" multiplier, maybe the figure needs to be revised? ;)
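For reference, the commonly quoted breakdown of the 115 GFLOPs figure involves no SMT multiplier at all; it is a per-core peak. A sketch (the ~12 single-precision flops/cycle/core is the usually cited VMX peak; treat it as an assumption):

```python
# Back-of-envelope for the XeCPU peak FLOPs claim.
# Assumption: ~12 single-precision flops/cycle/core is the commonly
# quoted peak for the VMX units; note it is a per-CORE figure.
cores = 3
clock_ghz = 3.2
flops_per_cycle_per_core = 12

peak_gflops = cores * clock_ghz * flops_per_cycle_per_core
print(peak_gflops)  # ~115.2

# Doubling hardware THREADS does not change this number: a second SMT
# thread can only soak up issue slots the first thread leaves idle, so
# the theoretical peak stays at ~115.2 GFLOPs either way.
threads_per_core = 2
peak_with_smt = cores * clock_ghz * flops_per_cycle_per_core
assert peak_with_smt == peak_gflops
```

So the 115 GFLOPs number itself is thread-count-agnostic; the "6 cores" framing is the part that is pure marketing.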
 