New Xbox 360 reviews tonight! (Kameo, Madden, PGR3, COD2)

Guilty Bystander said:
Lacks shader power due to EDRAM???
Seriously m8, when you're going to say things like that, it's only going to make you look dumb. Just don't say anything, then.

For a given GPU size, there is always a choice in how the transistors are used. The eDRAM unit is a significant portion of the Xenos GPU. Alternative uses of the eDRAM transistors might be more ALUs, cache, ROPs, etc. Therefore he was correct: given the decision for a GPU of a certain size and cost, the choice to include a large eDRAM unit was a choice for fewer shaders.

However, the graphics quality is currently the best available, and that is what matters to graphics fans.
 
Just an update:


Kameo
IGN- 8.4
http://xbox360.ign.com/articles/666/666667p1.html
1UP - 7.0
http://xbox360.1up.com/do/reviewPage?cId=3145712&did=1
GameSpot - 8.7
http://www.gamespot.com/xbox360/action/kam...wer/review.html

----------------------------------------------------------------------------------------------

Madden NFL 06
IGN - 8.0
http://xbox360.ign.com/articles/666/666658p1.html

----------------------------------------------------------------------------------------------

PGR3
IGN - 8.8
http://xbox360.ign.com/articles/667/667076p1.html
1UP - 10
http://www.1up.com/do/reviewPage?cId=3145713&did=1

---------------------------------------------------------------------------------------------

Amped 3
1UP - 7.0
http://www.1up.com/do/reviewPage?cId=3145673&did=1

---------------------------------------------------------------------------------------------

NBA 2K6
IGN - 7.8
http://xbox360.ign.com/articles/666/666970p1.html
1UP - 7.0
http://www.1up.com/do/reviewPage?cId=3145672&did=1
GameSpot - 8.3
http://www.gamespot.com/xbox360/sports/nba2k6/review.html

---------------------------------------------------------------------------------------------

FIFA 06
1UP - 7.0
http://xbox.1up.com/do/reviewPage?cId=3145714&did=1

--------------------------------------------------------------------------------------------

Call of Duty 2
Gamespot - 8.8
http://www.gamespot.com/xbox360/action/cal...ty2/review.html

--------------------------------------------------------------------------------------------

NHL 2K6
IGN - 7.5
http://xbox360.ign.com/articles/667/667300p1.html

--------------------------------------------------------------------------------------------

GUN
IGN - 7.9
http://xbox360.ign.com/articles/667/667179p1.html

--------------------------------------------------------------------------------------------

Tony Hawk's American Wasteland
IGN - 8.3
http://xbox360.ign.com/articles/667/667130p1.html
 
ihamoitc2005 said:
For a given GPU size, there is always a choice in how the transistors are used. The eDRAM unit is a significant portion of the Xenos GPU. Alternative uses of the eDRAM transistors might be more ALUs, cache, ROPs, etc. Therefore he was correct: given the decision for a GPU of a certain size and cost, the choice to include a large eDRAM unit was a choice for fewer shaders.

They would have needed to use a considerable number of transistors for a more advanced 256-bit memory controller. Logic transistors also use far more power than eDRAM transistors, so with more logic transistors they would have had to clock the chip lower to fit the same power envelope.
 
While the eDRAM inclusion could technically limit other things (assuming a total transistor budget!), I don't believe that including the eDRAM caused any other features to be removed, or shader power to be limited beyond what ATI would have designed regardless.

The choice to go for eDRAM was one that reduced the overall size of the main GPU die. For one, if there were no eDRAM, there'd be no daughter die and thus the main core would be at least 252-257 million transistors. This assumes that the ROPs would function perfectly in that situation. But the inclusion of the eDRAM, IMO, is exactly the same kind of decision as the new ring bus (/memory controller) in the X1000s (I haven't looked to see whether RV515/530 get the same advantage as R520).

Looking at this: despite the 7800 having 24 pipes @ 600MHz compared to the X1800XT with 16 pipes @ ~630MHz, the latter still has a very small lead. And this should be attributable to either the AA or the AF. If it's the AF, then it's probably due simply to the decoupled texture units (in G70, does every 16xAF filtered texture fetch block that pixel pipe's shaders for 16-32 cycles? Or just the first ALU?). But if it's the AA, then it's not because of bandwidth, because the overclocked GTX512 has significantly more BW than the stock Sapphire X1800XT (~20% more!). Which means it's down to the efficiency of the memory accesses between the ROPs and memory. Since the 8 Xenos ROPs sit right on the daughter die, and with a large amount of BW at that, their memory accesses should be extremely efficient, no? Latency would be low or extremely predictable (what should the typical latency of the eDRAM be?), etc. So, either way, it was a good decision, IMO.

But this is a tangent. What would have happened if this didn't exist? For one, what if the memory controller had to increase in size to deal with the read/write/modifies of the ROPs? If it didn't, then the bottleneck of the system might end up being framebuffer ops rather than shading or texturing! So, we're looking at 260+ million transistors? And we haven't added any extra shaders! Then, to add another array costs how many million transistors? If they're 1.5m per ALU, then we have 280+ million, and if they're 2.5m each, then 300+m transistors?
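To put rough numbers on that, here's a back-of-envelope sketch only, built from the estimates in this post: a ~232m main die implied by the 252-257m and 20-25m figures above, an assumed handful of extra millions for a beefier 256-bit memory controller, and 16 ALUs per array (in line with Xenos' three arrays of 16). None of these are official figures.

```python
# Back-of-envelope transistor budget for a hypothetical eDRAM-less Xenos.
# All figures are the rough estimates from the post above, not official numbers.
MAIN_DIE = 232e6            # implied: 252-257m "no-daughter-die" core minus the ROPs
ROP_LOGIC = (20e6, 25e6)    # the 8 ROPs currently sitting on the daughter die
EXTRA_MC = 8e6              # assumed cost of a beefier 256-bit memory controller

no_edram = [MAIN_DIE + r for r in ROP_LOGIC]        # ~252-257m
with_mc = [t + EXTRA_MC for t in no_edram]          # the "260+ million" point

# Adding a fourth shader array of 16 ALUs at 1.5m or 2.5m transistors per ALU:
for per_alu in (1.5e6, 2.5e6):
    lo, hi = (t + 16 * per_alu for t in with_mc)
    print(f"{per_alu / 1e6:.1f}m per ALU -> ~{lo / 1e6:.0f}-{hi / 1e6:.0f}m transistors")
```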

Then, to deal with the extra array, we probably have to make increases elsewhere in the GPU to make sure it's properly fed (even if games do approach a 1:3 filtered texture : shader op ratio! and then increase it to 1:4!). On the other hand, we could increase the number of filtered texture fetch units. Of course, then we run into an even larger problem. Now we have, at most, the same RAM on a double-wide bus, probably more than half of which will be consumed by the framebuffer, leaving us with the same bandwidth as now, or less, for 50% or 100% more filtered texture fetch units and the CPU to share. I wouldn't count on it. So now the unit isn't weighted properly for an increasing shader : texture ratio anyway, it's starving for bandwidth, and it's probably in the 300m transistor ballpark... and all on one single die. If that wasn't already a yield/heat issue for MS, then I don't think we'd have two dies right now.

And ROPs, I think, are out of the question. 20-25m of the transistors on the daughter die are the 8 ROPs (is there anything else there?), so we've still increased the size of the main die by 40-50m transistors, to 270 or 280 million. And what gain would there be? Do we expect to be dealing with 4 Gpixels/s? It handles up to its max of 4xAA without a fillrate hit, so... running at 60fps... is 720p going to benefit at all from 8 Gpixels/s over increased efficiency, bandwidth, etc.?
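And a quick fillrate sanity check on that ROP question, assuming a 500MHz core clock (which is consistent with the 4 Gpixels/s figure above); overdraw, blending and AA resolve obviously complicate the real picture:

```python
# Peak pixel fillrate vs. what a 720p/60 target actually needs for final output.
CLOCK_HZ = 500e6                       # assumed Xenos core clock
for rops in (8, 16):
    print(f"{rops} ROPs -> {rops * CLOCK_HZ / 1e9:.0f} Gpixels/s peak")

output_rate = 1280 * 720 * 60          # final framebuffer pixels per second at 720p/60
print(f"720p @ 60fps output: {output_rate / 1e6:.1f} Mpixels/s")
```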

So, that leaves cache. But is there some reason ATI would be working on a total transistor budget, as opposed to a per-die transistor budget? 332-337m transistors + a 128-bit bus was chosen over fewer transistors (possibly; at least 80 million to spread out between ALUs, ROPs, cache, or other logic) and a 256-bit bus. Cache would probably help, but how much does it already have and how much does it need? At 6 transistors a bit, 256KB costs a mere ~12m transistors, which I don't think would be that big a deal to add to Xenos, if it were necessary. It might be costly further down the line, when they combine the dies, but for now it almost wouldn't make that much of a difference as far as yields go (of course, concerning all of this, I'm going off extremely limited knowledge). Also, going back to the old Anand article, in the section addressing the "modeling engine," I think there's mention of ATI wanting to use the vertex cache to feed/store results for some heavy math (I believe the claim was related to raytracing/global illumination, working on HOS, and finally physics/GPGPU stuff). Are they going to cut corners on cache if it seems like they're dedicating plenty to this vertex cache?
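For what it's worth, the ~12m cache figure checks out if you assume standard 6T SRAM cells and ignore tag/decode overhead:

```python
# 256KB of cache at 6 transistors per bit (6T SRAM cell), data array only.
bits = 256 * 1024 * 8
print(f"{bits * 6 / 1e6:.1f}m transistors")   # ~12.6m
```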

So, at the end of the day, Bill, I don't think all the doom-and-gloomin' is really necessary. If they didn't have the eDRAM, I honestly don't think that performance would have shot up at all. They would probably have increased ROP complexity (in addition to moving it to the main die) and probably the memory controller along with it, and this would have forced a doubling of the bus. This doesn't seem like something MS would see as favorable (especially later down the line?), and it might have prompted cuts elsewhere in the system as a result. And then the system wouldn't have nearly as good framebuffer efficiency/headroom. FP16 HDR would probably be out of the question, along with tons of alpha blending and such. But, eh...

I think that MS gave ATI a pretty clean slate and had them design from the ground up without too many restrictions (mostly along the lines of yield/heat issues and costs later on down the line, as with the bus width)... and besides, this chip is probably going to be the biggest help for their R600 later on, in the form of validation for their architecture and a basis for games actually making use of all those features (through 360->PC ports, which MS seems pretty keen on). I think that rules out the cache as a possible shortcoming, given how integral it seems to be to their GPGPU/physics stuff and DX10 stuff. And I think they truly believe the 3:1 ratio of shaders to filtered textures is what games will approach (but not necessarily exceed... by much) in the next few years. So I kinda doubt that they'd mess with that ratio all that much.

Hm, I do have a headache, I'm tired, and that's based on a mountain of assumptions, "ifs," bad comparisons, and so on and so forth. I would still stick with shading power not going up, especially considering all the gains that it would lose by ditching the eDRAM daughter die, however. So, let the corrections come. :cool:
 
ihamoitc2005 said:
For a given GPU size, there is always a choice in how the transistors are used. The eDRAM unit is a significant portion of the Xenos GPU. Alternative uses of the eDRAM transistors might be more ALUs, cache, ROPs, etc. Therefore he was correct: given the decision for a GPU of a certain size and cost, the choice to include a large eDRAM unit was a choice for fewer shaders.

Your whole theory falls apart when you realize the eDRAM is a separate chip and not part of the GPU die.
 
Oh good Lord, please drop a meteorite on anyone still discussing stuff that's been discussed over and over and over and over to death...
 
And you base this on the power of Cell, I guess? The unproven, theoretical peak power of Cell? Let's wait and see how these two CPUs compare in the real world!

I don't see anything else in the PS3 that could possibly be considered unbelievably powerful. RSX? Not unless we're all in for a huge surprise.

Sure, Cell is not powerful and is on par with Xenon, because that's what most people out there are thinking. Well, dream on!

Cell will be leaps ahead of Xenon.
I know this because developers say it is.

Also, every developer is basing their opinions on the beta development kits, which only have a 2.4GHz Cell and a 7800GTX GPU with 256MB of XDR.
Developers comparing the 360 with the PS3 are comparing this PS3 beta kit (which doesn't have the final Cell or the RSX) with the 360 final kit (which does have Xenon and Xenos).
No developers out there have even used the final Cell or the RSX, both of which are more powerful.
The RSX really remains a mystery until nVidia or Sony finally speak about it. People keep assuming it's a higher-clocked GTX or GTX 512MB core (without the 512MB, obviously), but what if it isn't, and is something much more advanced?
What if it really has NV5x abilities, or has many more pixel pipelines and lets the SPEs handle the vertex shading?

Till nVidia or Sony tells us, we won't know.

Also, on Cell: we all know it will have almost twice the floating-point performance of Xenon (218 GFLOPS vs 115 GFLOPS), and IBM has said Cell is the most efficient CPU ever, which in practice would widen the gap between Cell and Xenon even further.
This will probably make Cell twice as powerful in raw performance.

Factor 5 was, and still is, so excited about Cell that they went from planning to be a multiplatform developer back to being an exclusive developer, but for Sony this time.
Konami, Capcom, EA, Epic and lots of other developers keep stating that Cell is so much more powerful than anything out there, or coming out in the near future.

How can you guys keep thinking Cell and Xenon are on par while no knowledgeable people seem to think so???
 
Guilty Bystander said:
How can you guys keep thinking Cell and Xenon are on par while no knowledgeable people seem to think so???

Who said that?? I don't think anyone in their right mind ever said Cell and Xenon are on par. They might have said Xenos is better than or on par with RSX, or that the power switch at the back is 8X faster than the PS3's... But Cell and Xenon are not on par and never will be.
 
In the console GAMES section, in a thread entitled Xbox 360 Reviews, people are talking hardware and console v console...

*sigh*
 
Unconfirmed, but apparently an unnamed Dutch magazine has given PD0 a 7/10. The mag won't be out till next week, so I'm sure we'll have other reviews in the meantime.
 
jvd said:
http://boards.gamefaqs.com/gfaqs/genmessage.php?board=516505&topic=24495734&page=0


Here is a player review of Kameo tearing IGN and other reviewers a new rear end hole


The main point of posting this, though, is that it confirms what I have said. The game is longer than claimed in the reviews; they barely touched the side quests (if they even looked at them), nor did they take the time to really upgrade anything.

I was wondering about this. There are over 40 upgradeable moves, which means either the side missions are extremely short or the main mission is extremely short for it to come out at anything less than 15-20+ hours.

I'm a total completionist, and I'll play this on hard so I'm looking for 25-30 hours of gameplay.
 
Guilty Bystander said:
Also, on Cell: we all know it will have almost twice the floating-point performance of Xenon (218 GFLOPS vs 115 GFLOPS), and IBM has said Cell is the most efficient CPU ever, which in practice would widen the gap between Cell and Xenon even further.

Oh...well..if IBM said so it must be true!

You ignore my entire point which is CELL has not been tested in the real world. What if it only ever reaches 20% efficiency? Not very powerful then, is it?

You're just a victim of the hype machine, sorry to say it. Maybe after you're done playing PS3, Cell will cook you supper and wash your car... and if you're really lucky... maybe a little lovin' when the lights go down... mm, that would be hot.
 
Oooh ooh Lemme try!!!


No one has utilized Xenon or Xenos properly yet. For the most part they coded these launch games on Xenon as if it were a single-core SMT chip, which some developers have experience with. For the most part they coded for Xenos as if it were a traditional renderer and did not use the strengths of the eDRAM, because there wasn't enough time.

As far as the Cell kit comparison: to be equivalent in floats to Xenon, it would only have to be rated at roughly 1.7 GHz. 2.4 GHz is 75% of its float power, and 75% of 218 is ~164 GFLOPS, which is clearly still ahead of Xenon's theoretical peak by a respectable margin - no mystery here, just plain math. The difference won't be the chip internals but rather the final speed.
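A minimal sketch of that math, taking the peak figures quoted in this thread at face value (theoretical peaks only, not measured throughput):

```python
# Scale Cell's quoted 218 GFLOPS peak (at 3.2 GHz) down to the 2.4 GHz beta kits,
# and find the clock at which it would merely match Xenon's quoted 115 GFLOPS.
CELL_PEAK, CELL_CLK = 218.0, 3.2   # GFLOPS, GHz (figures from this thread)
XENON_PEAK = 115.0                 # GFLOPS

beta_kit = CELL_PEAK * 2.4 / CELL_CLK
print(f"2.4 GHz beta-kit Cell: ~{beta_kit:.1f} GFLOPS")                  # ~163.5, the "~164" above

break_even = CELL_CLK * XENON_PEAK / CELL_PEAK
print(f"Cell clock needed just to match Xenon: ~{break_even:.1f} GHz")   # ~1.7
```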

The 7800 included is suggested to be very close to the RSX (those who know won't tell, those who would tell don't know), so it's all speculation... but even so, it's probable that the 7800 competes favorably with Xenos when Xenos is used as a traditional renderer, yet it is almost guaranteed to lose out against a properly programmed Xenos. RSX may have additions/improvements that close that gap.
 
scooby_dooby said:
Oh...well..if IBM said so it must be true!

You ignore my entire point which is CELL has not been tested in the real world. What if it only ever reaches 20% efficiency? Not very powerful then, is it?

Horribly OT, but come on now, scooby. IBM has exhibited applications where the performance gap suggested by the floating-point figures has materialised, and then some. In some cases the gap has been much wider still (because some tasks also benefit greatly from Cell's memory setup, which pushes performance even higher). Cell has customers beyond STI now too, who are using it in applications like Mercury Systems' ray-tracing apps, etc. - I doubt they'd be investing in this if it were a great pretender.

Hmm, I wish there was an option to fork threads of discussion out of a thread into a new one! I shouldn't have replied to this, but oh well.
 
Ya, it ran really well in one controlled physics simulation. That's what you're referring to? And the one company that is using Cell for facial recognition (Cell is built for image processing/decoding; what does this have to do with games?). This also proves the point that IBM are salesmen right now; they are SELLING this chip.

I want to see it in-game, doing game stuff, before I'll bow down and hail it as "unbelievably powerful". That's fair. The EE was supposed to be a supercomputer, for crying out loud.
 
scooby_dooby said:
Ya, it ran really well in one controlled physics simulation. That's what you're referring to? And the one company that is using Cell for facial recognition (Cell is built for image processing/decoding; what does this have to do with games?)

I'm referring to the various apps IBM showed - the terrain renderer, the cloth simulation, the server-side physics, the FFTs, etc. The Mercury Systems OpenRT ray-tracer and so forth (and their presentation on Cell could be interesting for you if you're looking for independent commentary). That Toshiba app, if you wish. I'm taking issue with the point that it "hasn't been tested in the real world".

And yeah, image processing has nothing to do with games :p And I guess neither do any of these other things! /sarcasm But you're right, we've yet to see a finished game running on Cell or a PS3. Still, to expect that it isn't all that powerful flies in the face of what's suggested by what we've seen so far, and to hope so is just a little weird (I'm sure you're not HOPING it's not that powerful, right?).

PM me if you wish to discuss this further, this thread is going way OT.

To try and bring it back on topic: further to my mention of a potential Dutch PD0 review, it seems like we may not get PD0 reviews till launch? Matt at IGN seems to be suggesting MS is purposely holding PD0 back from reviewers till the last moment (with a negative implication).
 
No, I'm not hoping it does badly, just skeptical of non-real-world peak performance claims, especially when they come from the company selling the product. Enough of that, though.

Has any reviewer even confirmed having a retail version of PD0? Seems like it's still in development or something...
 
scooby_dooby said:
Has any reviewer even confirmed having a retail version of PD0? Seems like it's still in development or something...

Well, it's been announced as a launch day title, and apparently it is due in stores tomorrow or Friday, or at least that's what I've heard. Apparently it's the only X360 game not yet with reviewers, or online reviewers at least.

I suppose reviewers could always go out and buy it tomorrow or Friday if it really hits then. Assuming they've had decent time with near-final builds already, they could probably finish and review it for the weekend maybe.
 