Official: ATI in XBox Next

Paul said:
People can get banned for cursing but not banned for trolling and ruining topics? Hmmm.

And no, I'm not getting "very bad with it". I barely curse EVER, and when I do it's not aimed at someone, i.e. "You're a ****head".

People get banned for trolling too. It's just that what you consider trolling and what I consider trolling are two different things.
 
Face it, dude. Nothing in the PC world is going to come near Cell in the next five years or so if it turns out to be ANYTHING like the patent we've been discussing. As far as rasterizer power goes - who can say? We totally lack specs on the new graphics chip for PS3. It's safe to say, though, that it's not going to be fillrate that holds the PS3 back, even if PC chips can out-fill it in the near term. At the resolutions PS3 will be working with, whatever fillrate it's given will most likely be as monstrously overkill as the PS2's GS is right now.

What are you basing this 5 year claim on? Perhaps you're not aware, but in the MPU industry there has been a trend: brainiac CPUs -- those that target more work per clock than their speed-demon counterparts -- have a tendency of not meeting their targets.

Additionally, you're basing far too much on flops; they aren't everything.
 
People get banned for trolling too. It's just that what you consider trolling and what I consider trolling are two different things.

Chap anyone? This guy has ruined so many topics and created so many garbage troll topics it's not even funny. He has been warned so many times and he is still here.

I have no problem with him as a person (we both like Xbox), but he does ruin a lot of topics.
 
To put things in Perspective...

In a keynote session, Tadashi Watanabe, vice president of high-performance computing at NEC Corp., detailed the company's Earth Simulator, currently the world's most powerful supercomputer. The custom-built system stirred controversy when it was announced in April 2002 as having a 40 teraflop peak performance using 5,120 vector processors, far beyond the capabilities of even the fastest systems in the U.S. Those systems rely on clusters of many more off-the-shelf scalar processors.

Watanabe said the system cost $400 million to build over a five-year project life and incurs about $15 million in annual operations costs. It consumes about eight megawatts of power and is housed in a custom three-story 3,250 square meter building in Tokyo where one floor is dedicated to some 83,200 cables that link its 320 computer cabinets and 128 interconnect systems.

The system is measured at 87.5 percent efficiency giving 35 teraflops on the Linpack benchmark, but actually delivers from 14 to 26 teraflops or 38 to 66 percent efficiency in real-world applications, Watanabe reported. That's still far beyond efficiency ratings as low as 15 percent for many of today's supercomputers, he said.

Theoretical... real...

That's still far beyond efficiency ratings as low as 15 percent for many of today's supercomputers, he said.

Is that for real?;)

Hmmm... Doth this also apply to... render farms?

PS IF yes at 15% efficiency.... that would make the patent perf about equal to... a render farm containing 1,024 Intel 2.8GHz Xeon processors.... If I'm not mistaken.... hmmm... who recently got one of those... ;)

edited
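For anyone who wants to sanity-check the figures above, here's a rough back-of-the-envelope sketch. The per-Xeon peak (about 5.6 GFLOPS, i.e. 2.8GHz times two double-precision flops per cycle through SSE2) and the 15% render-farm efficiency are illustrative assumptions on my part, not official specs:

```python
# Rough sanity check of the numbers above (illustrative assumptions only).

def efficiency(sustained_tflops, peak_tflops):
    """Sustained performance as a fraction of theoretical peak."""
    return sustained_tflops / peak_tflops

# Earth Simulator, per the quoted article: 40 TFLOPS theoretical peak
print(efficiency(35.0, 40.0))                          # 0.875 -> the 87.5% Linpack figure
print(efficiency(14.0, 40.0), efficiency(26.0, 40.0))  # ~0.35 to ~0.65 in real-world apps

# Hypothetical render farm: 1,024 Xeons at 2.8GHz, assuming ~2 double-precision
# flops per cycle via SSE2 (~5.6 GFLOPS peak per chip)
farm_peak_gflops = 1024 * 2.8 * 2       # roughly 5,700 GFLOPS peak
print(farm_peak_gflops * 0.15)          # ~860 GFLOPS at 15% efficiency,
                                        # in the same ballpark as the patent's 1 TFLOP
```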
 
zidane1strife said:
To put things in Perspective...

In a keynote session, Tadashi Watanabe, vice president of high-performance computing at NEC Corp., detailed the company's Earth Simulator, currently the world's most powerful supercomputer. The custom-built system stirred controversy when it was announced in April 2002 as having a 40 teraflop peak performance using 5,120 vector processors, far beyond the capabilities of even the fastest systems in the U.S. Those systems rely on clusters of many more off-the-shelf scalar processors.

Watanabe said the system cost $400 million to build over a five-year project life and incurs about $15 million in annual operations costs. It consumes about eight megawatts of power and is housed in a custom three-story 3,250 square meter building in Tokyo where one floor is dedicated to some 83,200 cables that link its 320 computer cabinets and 128 interconnect systems.

The system is measured at 87.5 percent efficiency giving 35 teraflops on the Linpack benchmark, but actually delivers from 14 to 26 teraflops or 38 to 66 percent efficiency in real-world applications, Watanabe reported. That's still far beyond efficiency ratings as low as 15 percent for many of today's supercomputers, he said.

Theoretical... real...

That's still far beyond efficiency ratings as low as 15 percent for many of today's supercomputers, he said.

Is that for real?;)

Hmmm... Doth this also apply to... render farms?

PS IF yes at 15% efficiency.... that would make the patent perf about equal to... a render farm containing 1,024 Intel 2.8GHz Xeon processors.... If I'm not mistaken.... hmmm... who recently got one of those... ;)

edited

Yes, that's for real, and IBM's efficiency ratings for their supercomputers hover around 55%, so their real-world performance would be even less. :oops:
 
Yes, that's for real, and IBM's efficiency ratings for their supercomputers hover around 55%, so their real-world performance would be less than 10%, going down to like 2%.

Wow!!!

PS
Err, my calcs were for single Xeons... So actually, if those 1,024 are individual Xeons... the patent perf is near the perf of the 1,024 2.8GHz Xeon render farm at 15% efficiency...

and if your perf estimate applies to render farms... :oops: :oops: :oops:

ed

Edited 2
 
Listen. Cell is scalable. Cell can be 1 TFLOP. Cell will be in PS3. Most have jumped to the conclusion that PS3 = 1 TFLOP. Now, they can be right or they can be wrong. The only problem is that there is no proof either way. The Cell chip in the PS3 may only be 500 GFLOPS. It could be 1.5 TFLOPS. Sony has never stated the clock speed or the number of APUs in the chip that will be used in the PS3.

On the Xbox 2 side, all we know is that ATI will be in it, which is even less than what we know of the PS3.

On the Nintendo side, we know that ATI will be in it. Same as the Xbox 2.


So we really know nothing about the next gen.


What I do know is that Xbox 2 performance will be close to PS3 performance. The PCs of a year later will offer greater performance than the Xbox 2. So, using that logic, the PS3 will be less than the PC hardware of a year later.


A nice, well-balanced post.

And I have to mostly agree. We just can't say exactly how much FLOP processing power the PS3 will have. It could be as low as 256 GFLOPS, which would severely disappoint those of us expecting TFLOP(s), or as high as 6+ TFLOPS, which sounds incredible, but remember that a Sony or Toshiba exec did say Cell would have teraflopS of processing power.

Then, we know next to nothing about the GPU, the GS3/Visualizer, other than that it will have CPU cores and APUs like the Cell CPU, but with some of the APUs replaced by Pixel Engines, Image Caches and other things normally associated with a graphics processor. But still, we just know so little about it. The same goes for the other next-gen consoles. All we know is that ATI will be in both the Nintendo console and the next Xbox. We know maybe 5-10% of what PS3 or any next-gen console will do. We are looking at it through soda straws. I cannot wait until we get a bigger picture.
 
Saem said:
What are you basing this 5 year claim on?

Common sense.

Have you checked recently how many Gflops an ordinary x86 CPU delivers? It's NOT many. Even a 10GHz tejas chip is going to look LAUGHABLE in comparison to Cell, even if Cell "only" manages 1/4 Tflop.

Then consider cost of 10GHz tejas chip, not to mention rest of comparable PC system...

Perhaps you're not aware, but in the MPU industry there has been a trend: brainiac CPUs -- those that target more work per clock than their speed-demon counterparts -- have a tendency of not meeting their targets.

Trend? What trend? I don't believe one can speak in such broad, general terms; each architecture is unique. Deadmeat babbled some stupid shit about 'not even POWER5 reaches 4GHz' or some such when talking about Cell. Well, DUH, POWER5 wasn't targeted to reach 4GHz. It's as simple as that.

Each architecture is unique. You can't take chip X and use it as a rule for how chip Y will scale, especially if chip Y is done by an entirely different design team with different philosophies, manufactured on a very different process (Cell will be two generations away from current chips, after all), etc.

Additionally, you're basing far too much on flops; they aren't everything.

FYI, Cell does as many ops as it does flops. ;)
 
Correction: it should be 20-40% efficiency in real-world applications for IBM's supercomputers.

Oh well...

Anyway, it is incredible, so incredible if this goes for render farms, I mean I just can't believe how incredible this is ;) (hint hint)

Let's use some pixie dust, to see what... argh matey, is going on ... maybe in 2004... we'll see

edited
 
Grall said:
Saem said:
What are you basing this 5 year claim on?

Common sense.

Have you checked recently how many Gflops an ordinary x86 CPU delivers? It's NOT many. Even a 10GHz tejas chip is going to look LAUGHABLE in comparison to Cell, even if Cell "only" manages 1/4 Tflop.

Then consider cost of 10GHz tejas chip, not to mention rest of comparable PC system...

This post is nonsensical. Ever heard of a GPU?
 
hmm..hmm...EE was like supposedly 2-4X better than a P2/P3, but then we know what happened with a 733 mobile celeron... ;)

Is it just me, or has this ATI-Xbox topic once again... well, you know... :)
 
hmm..hmm...EE was like supposedly 2-4X better than a P2/P3, but then we know what happened with a 733 mobile celeron...

It is, floating point wise.

And if you're now insisting that that 733 Cele is better than the EE...
 
Who knows, it might be better in some other ways. Furthermore, it is clocked almost 2X faster. ;)

BUT, nope, that was NOT the point I was insisting on. More like: "megapowered!" CPUs are just one part of the puzzle.
 
Clockspeed means nothing, especially since both chips are using different architectures.

Ask Faf if you want more on the XCPU vs EE thing; he works with it. But the EE is more powerful because it needs to be; the Xbox has the big beast (xGPU).
 
Trend? What trend? I don't believe one can speak in such broad, general terms; each architecture is unique.

...

You can't take chip X and use it as a rule for how chip Y will scale, especially if chip Y is done by an entirely different design team with different philosophies, manufactured on a very different process (Cell will be two generations away from current chips, after all), etc.

I see you didn't comprehend what I'm saying. I'm talking about MPUs. Not a specific one, just MPUs, in particular high-performance parts. MPUs are designed with various architectural philosophies; some of these are formulated based on the typical workload that'll be encountered by the MPU. Now, whenever companies have decided to take more of a brainiac route than a speed-demon route, they've tended NOT to meet their targets. This is a fairly well-established trend, AFAIK.

Common sense.

Have you checked recently how many Gflops an ordinary x86 CPU delivers? It's NOT many. Even a 10GHz tejas chip is going to look LAUGHABLE in comparison to Cell, even if Cell "only" manages 1/4 Tflop.

Then consider cost of 10GHz tejas chip, not to mention rest of comparable PC system...

SSEx is evolving with each iteration and one can't be sure what's going to happen to the number of flops generated. For all we know Tejas could bring in some significant boosts to the FPU -- if deemed necessary. Additionally, there seem to be some rather interesting performance figures from the Pentium M, where its FPU seems significantly faster -- 50% IIRC -- than the PIII's on a clock-for-clock basis. I wonder if we'll see any of that in Prescott, considering how long Banias has been in development. The only caveat there is the fact that Banias isn't going for an aggressive clock rate like its desktop counterparts. Not saying that Intel will start pumping out flop monsters; I don't think the x86 market really needs it. But then again, the x86 market currently is trying to justify insane computing power for the average user -- they're succeeding of course.

In any case, the 5 year figure just seems like something you've pulled out of your ass. There is a possibility it's true, but if one followed one's common sense to its conclusion, one would know that if there were any perceived threats, Intel is more than capable of stepping up.
 
Saem said:
I see you didn't comprehend what I'm saying. I'm talking about MPUs. Not a specific one, just MPUs, in particular high-performance parts. MPUs are designed with various architectural philosophies; some of these are formulated based on the typical workload that'll be encountered by the MPU. Now, whenever companies have decided to take more of a brainiac route than a speed-demon route, they've tended NOT to meet their targets. This is a fairly well-established trend, AFAIK.

Just a word of advice: the whole concept of trends and underlying dynamics doesn't seem to be comprehended here very well. While I happen to agree with the above comment (if viewed as a statistical/probabilistic whole, as it was intended) and see where and how you've come to this conclusion, others will not be able to see beyond the singularity of specific cases.

Also, several Japanese news organizations are just now reporting that a 1.7-1.9GHz Pentium M will be the XBox Next's CPU. But take that as you will.
 
Just a word of advice: the whole concept of trends and underlying dynamics doesn't seem to be comprehended here very well. While I happen to agree with the above comment (if viewed as a statistical/probabilistic whole, as it was intended) and see where and how you've come to this conclusion, others will not be able to see beyond the singularity of specific cases.

Thank you.

Also, several Japanese news organizations are just now reporting that a 1.7-1.9GHz Pentium M will be the XBox Next's CPU. But take that as you will.

I wouldn't be entirely surprised if that were the CPU. The only thing I could really wish for is a faster bus to better feed it -- trying to stay within reason, or rather within what Intel will humor.

I wonder if MS licensing IP (at least on the GPU front) is an attempt to build "customised" versions of existing things. Of course, I doubt Intel would oblige MS in that regard, ditto for AMD -- mostly due to cross-licensing and the effort necessary. On the GPU side, ATI seems to have obliged. The only problem is that they have to stick with x86 compatibility.

I think what's happened here is that the S/T/I entity has gotten together enough money to be able to compete with x86 on an R&D level -- something which others weren't really able to do from end to end for their products. Here is where all the ickiness of the x86 architecture will be very apparent. Not to say that it wasn't apparent before, but it was "hidden" by incredibly good engineering; however, the two now seem to be on a more equal footing in terms of the R&D efforts backing them up, so that advantage has likely evaporated.

I'm pretty sure Intel sees this as a threat -- outwardly they're not really geared towards ubiquitous computing. Ubiquity is what the market seems to desire; I wonder what Intel has waiting in the wings to answer this -- I'm sure they have something. The only thing is, it's probably going to have to carry x86 baggage if they wish to use the Xbox to launch into the market. And if it doesn't, then what are they going to do to launch into this market? Or will they merely attempt to squash it, like many other threats that have arisen in the past to destroy/disrupt the current "PC" model? Fun stuff to think about.

Now, for those who don't agree with my first statement in the previous paragraph: I do know that Intel is going on about central machines in a home that serve the whole household, with the house basically being governed by them. I don't really think that's a good solution. The machines being put forth still don't possess enough computing power, at least the ones that were used for demonstration. Either that or I'm a performance whore.
 
Just a word of advice: the whole concept of trends and underlying dynamics doesn't seem to be comprehended here very well. While I happen to agree with the above comment (if viewed as a statistical/probabilistic whole, as it was intended) and see where and how you've come to this conclusion, others will not be able to see beyond the singularity of specific cases.

:rolleyes:

Yes, and you only gave us trends that related to a discussion that you weren't apparently talking about! ;)
 
nonamer said:
This post is nonsensical. Ever heard of a GPU?

No, YOUR post is nonsensical. When discussing family sedans on a forum, do you often feel an urge to interject with words similar to, 'this post is nonsensical. Ever heard of a tractor?'

You don't think PS3 will have a GPU as well or something? We were discussing CPUs here, not GPUs.

Before accusing others of being nonsensical, try and make sense yourself first. o_O

Saem said:
I see you didn't comprehend what I'm saying.

Uh, excuse me, but :LOL: :LOL: :LOL:... Yes, I did.

I'm talking about MPUs. Not a specific one, just MPUs, in particular high-performance parts.

Yes? So what. Like I said, you can't take one idea, or a concept of an idea ('MPUs in general') and apply it to something specific to try and judge the general clock speed potential/performance of that specific part.

Reasoning that A: Cell is brainy, and B: brainy parts usually don't live up to their potential, hence C: Cell won't live up to its potential, is stretching things too far, since we know nothing of the specifics of Cell yet, including targeted clock speed. All we've got is that patent, which states the chip would have a 'preferred' performance of 1 TFLOP, and doing the math from the figures presented in that patent gives a figure of 4GHz. That's just a PREFERRED figure though; it's not set in stone. It could just be FUD from Sony to send ants down the pants of its competitors, making them scramble to try and match the performance.
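For what it's worth, here's roughly where that 4GHz figure comes from, assuming one common reading of the patent (4 PEs per Broadband Engine, 8 APUs per PE, 4 FP units per APU, each retiring a multiply-add per cycle). Those unit counts are my assumption and could easily be off:

```python
# Back-solving the clock speed implied by the patent's "preferred" 1 TFLOP,
# under the unit counts assumed above.

PES_PER_BE     = 4   # processing elements per Broadband Engine (assumed)
APUS_PER_PE    = 8
FPUS_PER_APU   = 4
FLOPS_PER_FMAC = 2   # one fused multiply-add counts as 2 flops

flops_per_cycle = PES_PER_BE * APUS_PER_PE * FPUS_PER_APU * FLOPS_PER_FMAC  # 256
clock_ghz = 1000.0 / flops_per_cycle   # 1 TFLOP = 1,000 GFLOPS
print(clock_ghz)                       # ~3.9 GHz, i.e. roughly the quoted 4GHz
```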

Now, whenever companies have decided to take more of a brainiac route than a speed-demon route, they've tended NOT to meet their targets. This is a fairly well-established trend, AFAIK.

Except Cell isn't your typical MPU, so your trend goes out the window at square one, and even if you argue that you weren't thinking of Cell when you said that, well, then what's the point of your argument? We could just as well start discussing the weather or something. You have to set a statement into a scenario, or else it becomes MEANINGLESS.

"Lots of garlic often ruins a dish." True or false? Without knowing the specifics (like what you're cooking), it's impossible to tell! Some dishes are made with lots of garlic. Hence, set your statements into a scenario please, before you accuse me of not understanding them.

To resume my side of the argument, hehe, only Cell's performance can judge Cell's performance. Like I said, you can't predict one architecture's performance by looking at other, entirely unrelated architectures.

AMD's Athlon, for example, launched at 600MHz, faster than any then-available PC CPU. It reached 1GHz before Intel did with the P3. It was arguably more brainiac than the P3. However, these chips are very general in nature; Cell is not. It has a much more precisely specified architecture and instruction set. Each PU is probably not very brainy at all; there just happen to be a lot of them on one chip.

Cell will also not be required to scale up in clock speed as time passes, unlike a PC (or almost any other) CPU, which would be expected to. All that's required is that it can be manufactured with reasonable economics at the target speed from the outset. Any further scaling headroom created from there on is simply gravy on top, which can be put into further increasing yields and reducing power instead of improving clock speed...

I can't really see how you can say I didn't understand, when I understood perfectly. ;)

SSEx is evolving with each iteration and one can't be sure what's going to happen to the number of flops generated. For all we know Tejas could bring in some significant boosts to the FPU -- if deemed necessary.

Dude, it's NOT going to be given such a boost. If Cell has 32 PUs and each PU has four FMAC units (which I believe is the case in the patent, though I could be wrong; I'm not crazy enough to learn it word for word)... Well, let's just say you'd need quite a bit more than just a "significant boost" to match that. Also, don't forget bandwidth. Just adding SSE units isn't going to do you much good if you can't feed them.

Intel is scaling up the bus speed of the P4 (or P5, as Tejas will probably be called), but it's not going to be close to the aggregate sum afforded by Cell, with its multiple local storages and eDRAM. We're talking about hundreds of GB/s in total here...! The P4 bus is set to scale to 400MHz (1600MHz effective considering quad data rate signalling), and that works out to no more than about 12.8GB/s, I think. Quite a bit less, wouldn't you say? It's less than just the Cell eDRAM bus interface actually, even at quite a bit less than the 4GHz clock speed preferred in the patent.
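A quick sketch of where a figure like that comes from, assuming the usual 64-bit-wide, quad-pumped P4 front-side bus:

```python
# Peak FSB bandwidth for a 64-bit (8-byte) wide, quad-pumped bus.

def fsb_bandwidth_gb_s(base_mhz, pumps=4, bytes_wide=8):
    return base_mhz * 1e6 * pumps * bytes_wide / 1e9

print(fsb_bandwidth_gb_s(100))   # 3.2 GB/s  (the original 400MT/s P4 bus)
print(fsb_bandwidth_gb_s(200))   # 6.4 GB/s  (800MT/s)
print(fsb_bandwidth_gb_s(400))   # 12.8 GB/s (1600MT/s, the figure above)
```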

Additionally, there seem to be some rather interesting performance figures from the Pentium M, where its FPU seems significantly faster -- 50% IIRC -- than the PIII's on a clock-for-clock basis.

Except... the P3 FPU is RATHER SLOW in comparison even to SSE2! Being 50% faster than that is no great achievement. P3 x87 performance is in the low single-digit GFLOPS range, which is no threat to anything.

But then again, the x86 market currently is trying to justify insane computing power for the average user -- they're succeeding of course.

Oh, just you wait for Longhorn... Microsoft is sure to find a good use for ALL of your MHzs with its bloated code... ;)

In any case, the 5 year figure just seems like something you've pulled out of your ass.

Do you TRULY foresee a 250+ GFLOP x86 CPU within the next few years (say five, but I could probably just as well say five years AFTER PS3 has been released and still be safe)? Because I quite frankly DO NOT. No matter how much magic Intel does with Tejas, it's not going to come close to that level of computing power. Not even CLOSE. Even if they did add an insane number of SSE3 units, the chip would be starved of bandwidth and perform horribly relative to the chip real estate all those execution units take up.
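To put some hedged numbers on that: assuming a P4-style core that can retire at most four single-precision flops per cycle through SSE (a generous ceiling of my own choosing, ignoring bandwidth entirely), the peak works out to something like this:

```python
# Ceiling estimate for x86 SSE throughput, assuming at most 4 single-precision
# flops per cycle (generous, and ignoring bandwidth limits entirely).

def x86_peak_gflops(clock_ghz, flops_per_cycle=4):
    return clock_ghz * flops_per_cycle

print(x86_peak_gflops(3.2))    # ~13 GFLOPS for a 3.2GHz P4
print(x86_peak_gflops(10.0))   # ~40 GFLOPS even for a hypothetical 10GHz Tejas
# Either way, nowhere near 250+ GFLOPS, never mind 1 TFLOP.
```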

You gotta engineer a monster chip from the ground up for it to become a real monster; you can't just tack on more units and sort of hope for the best. Well, you COULD, I guess, but Intel isn't stupid.

There is a possibility it's true, but if one followed one's common sense to its conclusion, one would know that if there were any perceived threats, Intel is more than capable of stepping up.

Intel doesn't see Cell as a competitor, since Cell doesn't run x86 code. Hence they won't step up.

Possibly they'd create a derivative of Itanic to match or exceed Cell's FPU performance within the next five years; that would make sense, since Itanic is already geared towards heavy flop-work. However, Itanic is not an x86 chip, so it still doesn't count. It would still be quite hard work for them though; Itanic is far from 250 GFLOPS performance, much less 1 TFLOP, and the same basic design hurdles would have to be conquered with Itanic as with Tejas to make it into such a beast, including solving the bandwidth issue.

Besides, I don't want to think of the horrific die size such a chip would have. With 6MB of SRAM on it, Itanic is already a monstrously large processor; now tack on several hundred million transistors' worth of FPU units... ;)


*G*
 