Anandtech dashes cpus from ps3 and xbox360

Acert93 said:
therealskywolf said:
6x the processing power, well were we expecting much more? 6x is a lot.

50% faster. I guess I skipped over that word in the very last sentence. If you reread the entire post/exchange it is pretty clear though. He is saying the Xbox CPU is 50% faster than a 360 core, and the 360 CPU (3 cores) is 2x as fast as the Xbox CPU. NOT 6x.

Well buddy, then that's BS. Simple as that.
 
Titanio said:
dukmahsik said:
Shifty Geezer said:
Also, for those that think Anand are on the case this time around, look at HS running ATM with only a single thread and n hundred enemies on screen with their own AIs. Forgive me for being naive, but that looks a little bit better than 2x the performance of XB...

or kameo with 1000 trolls or 99 nights with 2000+ on screen characters!

Kind of a side note but..

Though these examples may prove a point relative to Xbox performance, I'm not sure if they prove as much wrt expectations of next-gen CPU performance. Doing something like that is possibly more limited by the GPU than the CPU - individually those characters aren't going to be very intelligent, and in terms of physical modelling, with such a busy scene you could probably introduce wider tolerances of less-than-perfect collision detection/response to compensate for the numbers. Note also that, wrt the trolls in Kameo at least, they're not very geometrically complex, probably to help with speeding up collision detection and so forth. You could also implement a scheme to scale CPU work depending on a troll's distance from the camera/main character - those far away may not be up to very much at all, or have much if any physical modelling, while those closer may be modelled more fully to maintain the player's illusion. In other words, 1000 trolls in a scene like that isn't necessarily the equivalent of 1000 main AI characters in even last-gen games, or certainly many current PC games.
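To make that concrete, here's a minimal C++ sketch of the kind of distance-based scaling being described above. The Troll struct, the update methods and the radii are all invented for illustration - they're not from Kameo or any real engine - but they show how per-agent CPU cost can fall off with distance from the camera.

Code:
#include <vector>

struct Vec3 { float x, y, z; };

static float DistSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Hypothetical crowd agent with three tiers of simulation cost.
struct Troll {
    Vec3 position;
    void FullUpdate(float dt)    { /* full AI + proper collision response */ }
    void CheapUpdate(float dt)   { /* simple steering, coarse collision */ }
    void DormantUpdate(float dt) { /* just follow a path, no physics */ }
};

// Per-frame update: spend the CPU budget on the agents nearest the camera.
void UpdateCrowd(std::vector<Troll>& trolls, const Vec3& camera, float dt) {
    const float nearSq = 20.0f * 20.0f;  // full-simulation radius (made up)
    const float midSq  = 60.0f * 60.0f;  // reduced-simulation radius (made up)
    for (Troll& t : trolls) {
        const float d = DistSq(t.position, camera);
        if (d < nearSq)     t.FullUpdate(dt);
        else if (d < midSq) t.CheapUpdate(dt);
        else                t.DormantUpdate(dt);
    }
}

With tiers like that, 1000 on-screen trolls cost nowhere near 1000x a fully simulated main character, which is the point about those crowds not telling us much about peak CPU performance.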
Let's not forget the fact that Kameo wasn't running on the "worthless" xcpu. :D
 
While finishing up the article, it became really clear to me that this is propaganda.

And as for the statement that the XeCPU only has 2x the performance, that surely is BS. Why? Because the 360 is going to emulate Xbox games. Xbox games are not natively multithreaded. It is hardly believable that 3 cores which, running together, supposedly perform at only 2x a single core in real-world performance could emulate that single-core processor.

Emulation is not cheap on performance, especially if you are trying to make the application run as well as it did on the original platform. It is not a matter of just getting Xbox code up and running, but of making it run as well as it did in its native environment. That means games running at 30-60fps.
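To put a rough picture on that overhead, here's a toy instruction interpreter in C++. The opcodes and structures are invented - this has nothing to do with the actual 360 backwards-compatibility layer, which presumably recompiles rather than purely interprets - but it shows why each emulated guest instruction tends to cost several host instructions (fetch, decode, dispatch) before any useful work happens.

Code:
#include <cstdint>
#include <vector>

// Invented guest instruction set, purely for illustration.
enum Op : std::uint8_t { OP_ADD, OP_LOAD, OP_STORE, OP_HALT };

struct Inst { Op op; std::uint8_t dst, src; };

void Interpret(const std::vector<Inst>& program,
               std::uint32_t regs[16], std::uint32_t* mem) {
    std::size_t pc = 0;
    for (;;) {
        const Inst& i = program[pc++];   // fetch: a host memory access
        switch (i.op) {                  // decode + dispatch: a host branch
            case OP_ADD:   regs[i.dst] += regs[i.src];      break;
            case OP_LOAD:  regs[i.dst]  = mem[regs[i.src]]; break;
            case OP_STORE: mem[regs[i.dst]] = regs[i.src];  break;
            case OP_HALT:  return;
        }
        // Every guest instruction burns host cycles on this loop itself,
        // which is why "2x the old CPU" is an uncomfortable margin for
        // running old single-threaded code at its original frame rate.
    }
}

Even a dynamic recompiler only shrinks that per-instruction tax; it doesn't remove the cost of faithfully reproducing the original CPU's memory model, timing and quirks.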

V3, I am still interested in where you get your real-world estimates. But based on the emulation fact I am having a hard time swallowing what Anand is saying.

It is obvious he is trumpeting the PC (which is fine) but it does appear his contacts are all PC developers, and at that probably not very good ones if they are having the trouble they claim.
 
Let's not forget the fact that Kameo wasn't running on the "worthless" xcpu.

Don't forget that Rare expects to go from 3k-4k up to possibly 9k on that worthless CPU.

So I don't know how valid these claims really are. I think it will just take a lot more work to get the power out of Waternoose and Cell than it would a P4 or an Athlon 64.
 
I don't know about you guys, but I'm getting all warm and fuzzy seeing certain PS3 and X360 B3D contingents come together to defend the common good. :)
 
Acert93 said:
Btw, WB jvd. Hope your surgery went well man :D
All went good, just woke up like half an hour ago and I'm lying in a hospital bed with nothing to do but read the web and comics haha
 
I read the article and thought it was pretty good.

Since the CPUs are scaled back, wouldn't that make the overall system performance "CPU bound"? Meaning there won't be enough bandwidth to feed the powerful GPUs?
 
seismologist said:
I read the article and thought it was pretty good.

Since the CPUs are scaled back, wouldn't that make the overall system performance "CPU bound"? Meaning there won't be enough bandwidth to feed the powerful GPUs?

Well, Nvidia have said they've never had a CPU to feed a graphics chip in the way Cell can, and have apparently made concrete modifications to accommodate that - something I doubt they'd do if there was no truth to it and they were simply saying it for the sake of Cell's image (i.e. lying).

I think we can agree that this article isn't doing fair justice to either CPU in its own right. Whilst I won't dispute the specifics of what the developers told him, Anand's own commentary and spin seemed quite eager to reach a conclusion favourable to PCs, so..
 
liverkick said:
I don't know about you guys, but I'm getting all warm and fuzzy seeing certain PS3 and X360 B3D contingents come together to defend the common good. :)

Screw that!

If someone had gone with an AMD64 with a PPU/Vector Processor to shore up Floating Point performance I would be on Anand's side ;)

Seriously though, Intel/AMD have some excellent chips at GP code and do a lot of things very well. Not the best console chips due to IP issues, legacy support, heat/power consumption, and other reasons. But overall you cannot slam on a PC CPU too much. Some of Anand's points are very good, like Intel/AMD not tearing apart their CPUs and instead scaling back the frequency.

Of course you can do this on a chip that you sell at a profit. Transistor budget and die space are a little more dicey when you are not making a profit on the chip, so fewer transistors and more frequency is the price we pay for MS/Sony selling consoles they lose money on.

All tradeoffs. None of the consoles were ever going to compete head to head with a modern full-featured desktop chip in GP code.
 
wireframe said:
For one, the Cell's total area is dominated by the array of SPEs (8, with one disabled for a total of 7 for the PS3 design). In fact, about 2/3rds of the entire die is consumed by the SPEs and their related logic.

That's the weird thing. From reading this article you'd think it would be better to have 8 PPEs and no SPEs. (They look about the same size.)

There must be a reason why Sony didn't do this. What do the SPEs add over the PPE?
 
Microsoft was sold on the peak theoretical performance of the Xenon CPU.

IBM’s pitch to Microsoft was based on the peak theoretical floating point performance-per-dollar that the Xenon CPU would offer, and given Microsoft’s focus on cost savings with the Xbox 360, they took the bait.

Any AnandTech reader should know for a fact that these numbers are meaningless

Anand would have us believe that those responsible for the X360 design lack the competence of a typical AnandTech reader.

He also likes to make incredibly sweeping comments like:
Those that are simply porting current-generation games over will have no problems running at 1080p, but anyone working on a truly next-generation title won't have the fill rate necessary to render at 1080p.

Funny, because DeanoC says:

Then there are all the unsourced, unquoted developer comments mixed with his own speculation. While the gist of the article may be spot on, I'd say it was very poorly written. It would've been better if he just gave us anonymous developer quotes rather than throw his own crap in there.
 
seismologist said:
That's the weird thing. From reading this article you'd think it would be better to have 8 PPEs and no SPEs. (They look about the same size.)

The PPE is much bigger than the SPE, at least looking at the second revision.

If all the SPEs can do is act as a PPU, as Anand suggests..well I won't mind. But it will be a little more generally useful than that anyway, I think.
 
Developers have just recently received more final Xbox 360 hardware, and gauging performance of the actual Xenos GPU compared to the R420 based solutions in the G5 development kits will take some time. Since the original dev kits offered significantly lower performance, developers will need a bit of time to figure out what realistic limits the Xenos GPU will have.

Err, he is saying that an Athlon 64 or a P4 would be faster than the X360, yet dual G5s offer significantly lower performance?

Something isn't adding up, and as he said, the devs have only recently gotten more final X360 hardware. So how can they really make any claims? The same goes for Cell; the devs only recently got the actual hardware.
 
jvd said:
Developers have just recently received more final Xbox 360 hardware, and gauging performance of the actual Xenos GPU compared to the R420 based solutions in the G5 development kits will take some time. Since the original dev kits offered significantly lower performance, developers will need a bit of time to figure out what realistic limits the Xenos GPU will have.

Err, he is saying that an Athlon 64 or a P4 would be faster than the X360, yet dual G5s offer significantly lower performance?

Something isn't adding up, and as he said, the devs have only recently gotten more final X360 hardware. So how can they really make any claims? The same goes for Cell; the devs only recently got the actual hardware.

I was going to bring that up too, but it's in the GPU section so I let it slide.
 
I also believe he says that Waternoose is smaller than a 90nm Prescott? Anyone know the size of that?
 
Psssh! C'mon, I gave it a couple of pages ago. It's 134mm^2.

(and that's with 2 megs of cache)
 
Acert93 said:
liverkick said:
I don't know about you guys, but I'm getting all warm and fuzzy seeing certain PS3 and X360 B3D contingents come together to defend the common good. :)
Seriously though, Intel/AMD have some excellent chips at GP code and do a lot of things very well.

All tradeoffs. None of the consoles were ever going to compete head to head with a modern full-featured desktop chip in GP code.
Do we NEED GP code though? Remembering the Cell duck demo, is there any demo that runs on an Athlon 64, say, with 100+ objects colliding with mesh-based collision on a fluid-dynamic base? Is there any demo of a couple of Athlons (dual system) raytracing in realtime a landscape created on the fly, procedurally, from a couple of source maps? I was mightily impressed with those demos. Am I really that out of touch with what modern processors can achieve, or is the advantage Athlon and Pentium has in the Database and Spreadsheet General Purpose arena just not that applicable in FUN applications? ;)

Until someone lists where GP functionality is needed and where FP is needed (considering new implementations too), it's crazy to say a lack of GP is a terrible thing. In what way precisely (in gaming) would a weak GP performer be a problem? Anand didn't give any examples (like poly counts, skeletal animation, physics, AI, procedural synthesis...). The only aspect of the article I could agree with was the difficulty of programmers having to learn new ways to program. But if they've managed it with PS2 they can manage it with anything!
 
Am I really that out of touch with what modern processors can achieve, or is the advantage Athlon and Pentium has in the Database and Spreadsheet General Purpose arena just not that applicable in FUN applications?

Somebody needs to upgrade to office 2k3, cause I'm having a blast! :LOL:
 