MS: "Xbox 360 More Powerful than PS3"

As far as I know, there's also a FlexIO bus between the GPU and Cell, which could allow RSX to fetch data from main memory as well. This would increase the total bandwidth to ~40-45 GB/sec, but I'm not sure how much of that could effectively be used by the GPU, or how you would split your memory accesses in two. I wouldn't really expect first-gen games to make much use of this feature, though.
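Just to show where that figure comes from, here's the back-of-the-envelope math as a quick Python sketch. The GDDR3 clock (700 MHz, 1.4 GHz effective) and the ~20 GB/sec FlexIO read path toward RSX are the commonly quoted numbers, not confirmed specs, so treat it as illustrative only:

# Rough aggregate-bandwidth estimate for RSX, using commonly quoted (unconfirmed) figures.

gddr3_bus_bits  = 128        # local GDDR3 bus width
gddr3_data_rate = 1.4e9      # effective transfers/sec (700 MHz DDR)
gddr3_bw = gddr3_bus_bits / 8 * gddr3_data_rate          # bytes/sec

xdr_bw        = 25.6e9       # Cell's XDR pool (3.2 GHz on a 64-bit bus)
flexio_to_rsx = 20.0e9       # assumed usable FlexIO read bandwidth toward RSX

usable_xdr = min(xdr_bw, flexio_to_rsx)                  # capped by the narrower link
print(f"GDDR3 local:      {gddr3_bw / 1e9:.1f} GB/s")
print(f"XDR over FlexIO: ~{usable_xdr / 1e9:.1f} GB/s at best")
print(f"Aggregate:       ~{(gddr3_bw + usable_xdr) / 1e9:.1f} GB/s")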

FlexIO is nice and could be used for a lot of things, though textures most likely aren't one of them. The second pool of RAM is going to be used by the Cell chip, and it's going to be using most of that 20-something GB/sec of bandwidth for its own needs.

Now, what would be interesting is Cell creating textures with its SPUs and feeding them to RSX.
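Just to make the idea concrete, this is the kind of thing I mean, written as a toy Python sketch rather than actual SPU/RSX code (the pattern is made up; the point is that the texels get computed instead of fetched from memory):

import math

def procedural_tile(size=64, scale=8.0):
    """Build a size x size greyscale tile on the fly (values 0-255)."""
    texels = []
    for y in range(size):
        row = []
        for x in range(size):
            # Cheap interference pattern standing in for real noise/detail.
            v = math.sin(x / scale) * math.cos(y / scale)
            row.append(int((v * 0.5 + 0.5) * 255))
        texels.append(row)
    return texels

tile = procedural_tile()
print(len(tile), "x", len(tile[0]), "texels computed, none fetched from a texture pool")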


It will be interesting to see what the PS3 is actually capable of and where its limitations come into play. Both systems will be largely untapped with first-gen games. However, we have one company that was showing games that didn't look great because they were ports from last gen, and one company showing tech demos that are most likely out of reach for some time, if ever.
 
Alpha_Spartan said:
.. I get the sense that most people believe that the PS3 is more powerful by an order of magnitude or two over the Xbox 360

Whenever I see the word "magnitude" used in this context, I think of a logarithmic scale. As such, I don't think there will be a ten (10) or one hundred (100) fold increase in power. Of course, defining "power" is often left to the marketing/PR departments.
 
ROG27 said:
is that, regardless of what the hypothetical performance is, what Sony has shown thus far (real-time) seems to be far ahead of the curve of what Microsoft currently has to offer. This seemed to be flip-flopped last generation, as is to be expected. Simply enough, one of the best indicators of a competing platform's performance/power is the date on which it hits the streets. Some might argue this logic... but let's be serious here. If something comes out 6 months to a year after something else, it's going to have a clear technological advantage.

Again, a console is a price/performance trade-off piece of machinery. Sony is opting for the higher-end this time around. Microsoft had to make trade-offs to get their system out the door on time.

The problem is: when are these things that Sony is showing us coming out?

What MS has shown us is here and now. They had AI and levels and boards and multiplayer.

Sony has shown us uncontrolled demos that are scripted.

The question really is what's actually going to be out for both platforms when, say, MGS4 comes out.
 
Phil said:
So in other words, your conclusion on RSX is based on how PC games - which we all know how optimized they are - perform on a GPU that may or may not be all that similar?

My conclusion is based on the fact that even LAST-generation games seem bandwidth-limited with a 128-bit bus on G70. I don't see how this isn't a potential bottleneck on the PS3.

Phil, it seems to me you are unable to discuss any downside at all regarding the PS3. Instead of trying to attack me, why don't you just add your own 2 cents to the topic?

What do you think of the PS3's available bandwidth? Do you think this is going to be a problem based on what we've already seen with current-gen games? Why don't you think the 256 GB/s internal bandwidth within Xenos will give it an advantage? Surely you would rather have a 256-bit bus? Don't you see that as a possible bottleneck? Explain.

As for what I mean by the real world: typically 4xAA takes anywhere from a 0%-30% hit on most games nowadays, but on Xenos, using the eDRAM, it will potentially take a much smaller hit. The eDRAM allows more 'effective' bandwidth that doesn't really appear in the paper specs; that's what I mean by real-world: effective bandwidth.
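Here's a rough sketch of the framebuffer traffic I'm talking about, with made-up but plausible assumptions (720p at 60fps, 4 bytes colour + 4 bytes Z per sample, average overdraw of 3, no compression):

width, height    = 1280, 720
bytes_per_sample = 4 + 4       # colour + Z
overdraw         = 3           # assumed average overdraw
fps              = 60

def framebuffer_traffic(msaa_samples):
    samples = width * height * msaa_samples * overdraw
    return samples * bytes_per_sample * 2     # read (blend/Z test) + write

for s in (1, 4):
    print(f"{s}xAA: ~{framebuffer_traffic(s) * fps / 1e9:.1f} GB/s of framebuffer traffic")

On RSX that extra 4xAA traffic competes with textures and vertices on the 22.4 GB/s GDDR3 bus (less in practice thanks to colour/Z compression); on Xenos it stays inside the eDRAM, which is why the hit should be much smaller.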
 
Laa-Yosh said:
Data and bandwidth are the same on every system. If the RSX has no internal memory, then it has to work with an external frame buffer through its main bus, and thus its capabilities will depend on the available bandwidth. I don't see why we couldn't draw conclusions from the available specifications...

So you're claiming that "if" it doesn't have any internal memory, its main bus is the definitive factor, regardless of all the other internals we have absolutely no idea about?
Woot, guess I just learned something new today about comparisons based on incomplete specs...
 
Phil said:
So you're claiming that "if" it doesn't have any internal memory, its main bus is the definitive factor, regardless of all the other internals we have absolutely no idea about?
Woot, guess I just learned something new today about comparisons based on incomplete specs...

What don't we know about RSX really? The extent to which it's customized to interoperate with CELL? That's about it. We've got solid realistic assumptions/knowledge for most everything else.

What's the big problem in comparing memory bandwidth?

We can give everything the caveat that there "may" be additions to RSX that will change the BW comparisons if that makes you feel better. I think we all understand that though, and it doesn't mean we can't compare what we know, and what we think is probable.
 
Hardknock said:
I expect PS3 to be able to push more geometry, have better lighting due to higher precision HDR and the help of Cell, better physics, sharper textures (due to less compression needed on Blu-ray discs) and to be harder to develop for.
Xenos can handle much heavier geometry loads (especially for skinned characters) than RSX. The question is whether Cell can sustain a high transform rate for non-trivial vertex shaders. I think it can, but not by a lot, and I'm sure people would rather use Cell for other things. For physics, I think you're right, but it won't have a big visual impact. Coding ability is 10 times more important (though Sony will probably have the advantage there anyway).

For HDR, I'm pretty sure Xenos fully supports FP16 in addition to FP10, as that's what's written in Dave's article. There's no precision advantage here. Sharper textures are highly unlikely, because there's less DDR memory to store them in at runtime. It would be silly to waste transistors in RSX to absorb the additional latency of texturing from XDR.

A more valid conclusion would be that PS3 can have more varied textures from scene to scene.
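On the FP10 vs. FP16 point, here's the footprint arithmetic that makes FP10 attractive on Xenos (a rough sketch assuming 720p and a 4-byte depth/stencil buffer; anything over the 10 MB of eDRAM forces tiling):

width, height = 1280, 720
z_bytes       = 4                                   # 24-bit depth + 8-bit stencil

for name, colour_bytes in (("FP10 (32-bit colour)", 4), ("FP16 (64-bit colour)", 8)):
    for msaa in (1, 2, 4):
        mb = width * height * msaa * (colour_bytes + z_bytes) / (1024 * 1024)
        verdict = "fits" if mb <= 10 else "needs tiling"
        print(f"{name}, {msaa}xAA: {mb:5.1f} MB -> {verdict}")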
 
ROG27 said:
is that, regardless of what the hypothetical performance is, what Sony has shown thus far (real-time) seems to be far ahead of the curve of what Microsoft currently has to offer. This seemed to be flip-flopped last generation, as is to be expected. Simply enough, one of the best indicators of a competing platform's performance/power is the date on which it hits the streets. Some might argue this logic... but let's be serious here. If something comes out 6 months to a year after something else, it's going to have a clear technological advantage.

Again, a console is a price/performance trade-off piece of machinery. Sony is opting for the higher-end this time around. Microsoft had to make trade-offs to get their system out the door on time.

Actually, there have been several instances in the PC world (both CPU and GPU) where, 6 months later, you couldn't buy anything that was an order of magnitude more powerful than what you had (e.g. the ATI 9700 Pro).

Until someone picks up a controller on a PS3, devkit or otherwise, and plays a game that looks as good as GoW (which was running on 1 core with 1 month on final hardware), I won't believe the PS3 is an order of magnitude more powerful than the 360. Otherwise you may as well compare what we've seen on the PS3 to the ATI Toy demo...

Personally, I think we'll see things way more impressive than anything we've seen yet on both consoles, but ultimately we won't see a big difference between them in the long run.
 
This silly talk of comparing the PS3's 'GPU' is just like five years ago, when the Xbox/x86 gamer crowd loved to talk about the PS2 only having 4 MB of VRAM, since they had never had any experience with any graphics architecture outside of the standard x86 desktop system.
 
jvd said:
The problem is: when are these things that Sony is showing us coming out?

What MS has shown us is here and now. They had AI and levels and boards and multiplayer.

Sony has shown us uncontrolled demos that are scripted.

The question really is what's actually going to be out for both platforms when, say, MGS4 comes out.

Everyone should really be asking themselves what is going to be the defining feature which makes Next-gen gaming "Next-gen". I personally think the most important factor which will add a whole new level of immersiveness to gameplay is realistic physics. We've had realistic, pretty pictures on the screen for some time now. We just haven't seen them move well.

What system will make its pretty graphics move better? That is the real question here.

And that system (whichever it will be...I have some idea from what I've seen so far) will be considered the more powerful, more "Next-gen" system by the masses.
 
scooby_dooby said:
My conclusion is based on the fact that even LAST-generation games seem bandwidth-limited with a 128-bit bus on G70. I don't see how this isn't a potential bottleneck on the PS3.

So I suppose - applying your logic - PS2 is hands-down the most powerful platform because it yields a bandwidth advantage of what? 20 to 1?

scooby_dooby said:
Phil, it seems to me you are unable to discuss any downside at all regarding the PS3. Instead of trying to attack me, why don't you just add your own 2 cents to the topic?

I already did, in this very thread.

scooby_dooby said:
What do you think of the PS3's available bandwidth? Do you think this is going to be a problem based on what we've already seen with current-gen games? Why don't you think the 256 GB/s internal bandwidth within Xenos will give it an advantage?

Maybe because I have avoided baseless comparisons, since there isn't anything to compare yet? Apologies if I'm not willing to sink to the same nonsense of handing imaginary "hands down" advantages to a single component of one platform over another platform whose internals we really have no idea about (CELL aside)?

scooby_dooby said:
As for what I mean by the real world: typically 4xAA takes anywhere from a 0%-30% hit on most games nowadays, but on Xenos, using the eDRAM, it will potentially take a much smaller hit. The eDRAM allows more 'effective' bandwidth that doesn't really appear in the paper specs; that's what I mean by real-world: effective bandwidth.

So yeah, PS2 is the most powerful! And by a factor of 20, even! Not to mention that the Microsoft execs' claim (which this topic is about) was using exactly your definition of real-world performance. Now let me ask you: don't you think it's just a little bit immature to conclude anything based on some random spec of each GPU? We obviously know quite a bit about Xenos, while - I hate to put it this way - we still know rather little about the internals of RSX. I really suggest you hold off on wasting your breath, because there isn't anything presented in this thread that adds anything meaningful to what has already been discussed.
 
Phil said:
So you're claiming that "if" it doesn't have any internal memory, its main bus is the definitive factor, regardless of all the other internals we have absolutely no idea about?
NVidia has been in the 3D graphics business a long time. If there was a magic bullet for bandwidth, they'd use it. A 256-bit bus requires a lot of i/o connections on the die, so it limits how much you can shrink it, and that's why they chose 128-bit. If bandwidth wasn't an issue, both ATI and NVidia would spend their budget by releasing faster chips with smaller buses. To minimize bandwidth usage, they already devote tons of die space for Z-compression, colour compression during MSAA, long pipes to absorb latency, and fancy memory controllers to reorder requests into bursts so that the memory access is most efficient.
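To illustrate what that last one buys them, here's a toy Python sketch of grouping reads by DRAM page so the row-activate cost is paid once per page instead of once per request (the timing numbers are made up, and a real memory controller is far more sophisticated):

from collections import defaultdict

PAGE_SIZE  = 2048    # bytes per DRAM row (assumed)
ROW_OPEN   = 10      # cycles to activate a row (assumed)
BURST_READ = 2       # cycles per read once the row is open (assumed)

# In arrival order every access below lands on a different page than the
# one before it, so a naive controller re-opens a row for each request.
requests = [0, 4096, 64, 8192, 128, 4160, 8256, 192]

naive = len(requests) * (ROW_OPEN + BURST_READ)

by_page = defaultdict(list)
for addr in requests:
    by_page[addr // PAGE_SIZE].append(addr)
reordered = sum(ROW_OPEN + BURST_READ * len(group) for group in by_page.values())

print(f"in arrival order: {naive} cycles, grouped into bursts: {reordered} cycles")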

The fact that NVidia used exotic memory for their 512MB GTX says worlds. They increased the bandwidth per cycle compared to the regular GTX, and it has 2.4 times the bandwidth per clock of RSX.
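For anyone who wants to check that 2.4x figure, the arithmetic is simple (assuming the commonly quoted 700 MHz / 1.4 GHz effective memory clock for RSX, and roughly 550 MHz cores on both parts, so the per-clock ratio equals the raw ratio):

def mem_bw_gbs(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9     # GB/s

gtx512 = mem_bw_gbs(256, 1700)    # 7800 GTX 512: 256-bit bus, 1.7 GHz effective GDDR3
rsx    = mem_bw_gbs(128, 1400)    # RSX: 128-bit bus, 1.4 GHz effective GDDR3 (assumed)

print(f"GTX 512: {gtx512:.1f} GB/s, RSX: {rsx:.1f} GB/s, ratio: {gtx512 / rsx:.1f}x")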

Now, I wouldn't say it's the definitive factor, because the closed system allows developers to tune their workload to be as bandwidth light as possible. But it's a very big factor indeed.
 
expletive said:
Actually, there have been several instances in the PC world (both CPU and GPU) where, 6 months later, you couldn't buy anything that was an order of magnitude more powerful than what you had (e.g. the ATI 9700 Pro).

Until someone picks up a controller on a PS3, devkit or otherwise, and plays a game that looks as good as GoW (which was running on 1 core with 1 month on final hardware), I won't believe the PS3 is an order of magnitude more powerful than the 360. Otherwise you may as well compare what we've seen on the PS3 to the ATI Toy demo...

Personally, I think we'll see things way more impressive than anything we've seen yet on both consoles, but ultimately we won't see a big difference between them in the long run.

Silly boy, GoW would run smoothly on both platforms (although, initially, it will be at least a timed exclusive for the Xbox 360). The reason it looks pretty is the excessive use of normal mapping. The geometry present is actually quite low. With graphics, you can only fake so much before it starts degrading the immersive qualities of the game, such as physics and animation. Note that all Unreal Engine 3-based games that look good will have blobby, generic-looking characters that lack small, detailed moving parts, free-flowing hair, and accessories.
 
ROG27 said:
Everyone should really be asking themselves what is going to be the defining feature which makes Next-gen gaming "Next-gen". I personally think the most important factor which will add a whole new level of immersiveness to gameplay is realistic physics. We've had realistic, pretty pictures on the screen for some time now. We just haven't seen them move well.

What system will make its pretty graphics move better? That is the real question here.

And that system (whichever it will be...I have some idea from what I've seen so far) will be considered the more powerful, more "Next-gen" system by the masses.

Realistic physics are only going to help so much. It's the animation that needs a lot of work and that depends more on the artistic talents of the developer than any FLOP rating on the hardware.
 
Geez, if the Wikipedia numbers are accurate, then it doesn't seem that good for Xenos. Sure, it's got lots of bandwidth for a tiny 10 MB and is losing a third of the core for it, but I don't really know if they can even use all that bandwidth for anything...
Meh, I don't know. :)

Btw, if you've read the interview in PSM with Mark Rein on Unreal Tournament 2007 for PS3, I think it's gonna kick some major GoW butt. :p
 
ROG27 said:
Silly boy,

Is this really necessary?

ROG27 said:
GoW would run smoothly on both platforms (although, initially, it will be at least a timed exclusive for the Xbox 360). The reason it looks pretty is the excessive use of normal mapping. The geometry present is actually quite low. With graphics, you can only fake so much before it starts degrading the immersive qualities of the game, such as physics and animation. Note that all Unreal Engine 3-based games that look good will have blobby, generic-looking characters that lack small, detailed moving parts, free-flowing hair, and accessories.

It still doesn't change the fact that we haven't seen anything playable on the PS3 that looks as good as GoW, regardless of why it looks pretty. I don't doubt that we will soon, though.

I agree it would run smoothly on both consoles, and I stated as much in my post. I think everything this gen will pretty much look the same on both consoles, with some REALLY nice-looking exclusives on both sides.

Anyway, my response was mostly directed at your statement that releasing 6 months later should provide an order of magnitude more power, and, for the examples stated, I don't believe that's the case.
 
ROG27 said:
Silly boy, GoW would run smoothly on both platforms (although, initially, it will be at least a timed exclusive for the Xbox 360).

Don't hold your breath for a PS3 version; MS is funding the development and publishing Gears of War. That does not equal "timed exclusive".
 
I think it's kind of a false assumption that XDR will be completely soaked up by the CPU (when everything seems to point to trying to avoid memory accesses altogether, thus leaving quite a bit of that bandwidth untouched in many cases -- possibly).

It's going to be a tricky juggling act and will vary quite a bit from game to game, but I think there is plenty of room for doing sneaky things to get around the GDDR3 being on a 128-bit bus. However, I think everyone can agree that it's seemingly the most apparent bottleneck of the PS3, though the nature of a closed box makes it sort of difficult to know for sure what kind of issues will arise (especially if the developer is sneaky enough). The Xbox 360 and PS3 seem to have bottlenecks in different places, which will obviously lead to different strengths.

I don't know where any possible discussion could get us; we don't really have anything to compare it to that would actually lead to a valid discussion -- comparing stuff to a PC and cutting the memory bandwidth in half is worse information than no information.
 
jvd said:
FlexIO is nice and could be used for a lot of things, though textures most likely aren't one of them. The second pool of RAM is going to be used by the Cell chip, and it's going to be using most of that 20-something GB/sec of bandwidth for its own needs.

Now, what would be interesting is Cell creating textures with its SPUs and feeding them to RSX.

It will be interesting to see what the PS3 is actually capable of and where its limitations come into play. Both systems will be largely untapped with first-gen games. However, we have one company that was showing games that didn't look great because they were ports from last gen, and one company showing tech demos that are most likely out of reach for some time, if ever.

Actually, Cell can access the XDR RAM separately from the link between RSX and itself. I assume RSX hangs off FlexIO (as no other theory makes sense) while the XDR is accessed through Cell's XDR memory controller.

I don't think there's any reason to assume textures cannot come over from the XDR memory. If Cell can procedurally generate textures - which most accept it can - and send them directly over to RSX via the direct link they have, I see no reason why textures residing in the XDR memory pool cannot flow over Cell's EIB just as procedurally generated textures would.

Procedurally generated textures sent to RSX shouldn't eat into Cell's bandwidth to the XDR, but of course textures fetched from XDR would.

I look at bandwidth for the GPUs as sort of a wash, barring some new revelations. RSX has 40+ GB/s in aggregate bandwidth, while Xenos's bandwidth is kinda tricky to nail down. Some of Xenos's 22.4 GB/s of bandwidth to main memory will be consumed by the needs of Xenon, but then it has 32 GB/s to its back buffer in the eDRAM, which should handle AA and other bandwidth-intensive tasks. Xenon could consume up to 10+ GB/s of the bandwidth to main memory, theoretically at least. It's hard to nail down... at least for me, but I think it's gonna be something of a wash as far as the GPUs are concerned.
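Here's that accounting written out as a rough sketch; the Xenon share is the same guess as above, the FlexIO read figure is the usual unconfirmed number, and the 256 GB/s figure lives entirely inside the eDRAM daughter die where the AA/Z/alpha work happens:

rsx_gddr3        = 22.4   # GB/s, local 128-bit GDDR3
rsx_flexio_read  = 20.0   # GB/s, assumed usable read bandwidth toward XDR via Cell

xenos_main       = 22.4   # GB/s, unified GDDR3, shared with the Xenon CPU
xenon_share      = 10.0   # GB/s, guessed CPU consumption
xenos_edram_link = 32.0   # GB/s, GPU parent die -> eDRAM daughter die

print(f"RSX aggregate:           ~{rsx_gddr3 + rsx_flexio_read:.1f} GB/s")
print(f"Xenos main memory (GPU): ~{xenos_main - xenon_share:.1f} GB/s after the CPU's share")
print(f"Xenos framebuffer path:   {xenos_edram_link:.1f} GB/s to eDRAM, AA resolved on the daughter die")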
 