MS: "Xbox 360 More Powerful than PS3"

scooby_dooby said:
Ok, so let me get this straight. When Sony shows tech demos, your comment is:
"what Sony has shown thus far (real-time) seems to be far ahead of the curve of what Microsoft currently has to offer."

But when MS shows a jaw dropping REALTIME GOW cut scene you say:
"Silly boy, GoW would run smoothly on both platforms"

Do you see the hypocrisy? Don't you see how you're a "silly boy" because all the Sony "realtime" demos could also be done on the 360!?

Or do you think that the realtime engines devs have created for the PS3 six months before launch are already superior to what will be done over the entire lifespan of the 360?

You make absolutely no sense. Everything we've seen thus far could probably be done on either system.

Ahhh....before you jump to conclusions, my fine, furry friend, you should realize that once again I have not implied whatsoever that the Unreal 3 engine in any way, shape, or form is pushing the PS3 hardware...while, contrary to popular belief, the 360 is chugging along at a sub-par framerate with loads more time for optimization. Why I mentioned GoW being able to run flawlessly on the PS3 is because Expletive tried to argue that things on the 360 will match the quality of things we will see on the PS3.

I'm pointing out that the Unreal 3 engine is not the pinnacle of what is achievable, but rather the bare minimum of what we should see in the next-gen (a multi-platform/unoptimized engine). I think games like MGS4, Motorstorm and Killzone will surprise you when they come out because of how close they come to their projected targets. This, in itself, is no insignificant leap from what is "here and now" on the XBOX 360.
 
Fafalada said:
IMO* Cell should spank a conventional CPU silly when it comes to matrix solvers.

*While I have not written an entire physics engine by myself (frankly I don't think anyone in this day and age can really do that if you want a reasonably featured thing produced in a reasonable time), I've spent enough time on lower-level stuff like matrix solvers that I have some idea how they tend to be affected by the underlying CPU architecture.
I know, hence my reference to the IBM performance numbers. My point is a good physics engine doesn't use the same type of matrix solvers. To speed things up you do estimates, dealing with only a few elements at a time. Some things are solved faster by guessing and iterating, some by Gauss-Jordan reduction.

Like I said earlier, the cost of solving a dense matrix grows with the cube of its dimension. That certainly isn't how you want your game engine to scale.
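To make that concrete, here's a rough sketch of the "guess and iterate" style (purely illustrative, not from any real engine): each Gauss-Seidel sweep over A*x = b costs on the order of n^2 (or just the number of non-zeros when the matrix is sparse), and a handful of sweeps is usually "good enough" for a game, versus the roughly n^3 work of a full Gauss-Jordan elimination.

#include <vector>
#include <cstddef>

// Illustrative Gauss-Seidel sweeps for A*x = b (dense storage for brevity).
// A real engine would iterate over a sparse / per-constraint layout instead,
// but the point stands: each sweep is ~n^2 work (or ~nnz when sparse),
// versus ~n^3 for a full direct solve.
void gauss_seidel(const std::vector<std::vector<double>>& A,
                  const std::vector<double>& b,
                  std::vector<double>& x,   // initial guess in, refined estimate out
                  int sweeps)
{
    const std::size_t n = b.size();
    for (int s = 0; s < sweeps; ++s) {
        for (std::size_t i = 0; i < n; ++i) {
            double sum = b[i];
            for (std::size_t j = 0; j < n; ++j)
                if (j != i) sum -= A[i][j] * x[j];
            x[i] = sum / A[i][i];   // assumes A[i][i] != 0 (diagonally dominant in practice)
        }
    }
}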
 
Fafalada said:
IMO* Cell should spank a conventional CPU silly when it comes to matrix solvers.

*While I have not written an entire physics engine by myself (frankly I don't think anyone in this day and age can really do that if you want a reasonably featured thing produced in a reasonable time), I've spent enough time on lower-level stuff like matrix solvers that I have some idea how they tend to be affected by the underlying CPU architecture.

That said, my reasoning for above comment isn't limited to FP spec.

Thank you.

Someone who actually knows what they're talking about.

Physics will be a differentiating feature this generation. It might even be (coupled with animation and simulation) the definitive feature of what makes next-gen "Next-gen". Having the CELL in the PS3 is a huge asset for Sony IMO.

Everyone knows that from a GPU standpoint, NVIDIA and ATI are a wash (neck and neck in that race).

(Nostradamus-esque prediction for the generation after next: AI will be the focus)
 
I asked once on the Bizarre forums about the eDRAM issue and a nice developer (Roger) wrote this:

lysander wrote:
Umha, great writing. Could Roger share some info on the utilisation of the 360 GPU's power, the benefits of unified shaders, and the separate smart logic on the eDRAM core?
Say what?

Specific questions hmmm... they all sound a lot more technical than I'm used to on the forum... I'd more expect "will teh game pwn GT4. lol. lmao. I'm so not buying this game, I'm so annoyed about something I read on a forum somewhere that I just thought I'd post 50 times to tell you I'm not buying the game. I haven't actually seen the game or been able to form my own opinion either."

So... in no particular order

1) Separate smart logic... well... the beauty of this is that it takes away the bandwidth drain on main memory. On PCs, when you start ramping up rendering resolutions and stuff, you find that you very quickly become bandwidth limited to the gfx memory by fill rate alone. The EDRAM means that it's got its own super-fast bandwidth that doesn't share the same bus as main memory, meaning that vertex and texture fetching, for example, aren't slowed down by the pixel filling, plus the main memory bus that the CPU is using isn't swamped by the GPU. The GPU to EDRAM connection is so fast you can basically forget in most cases about how long it takes the GPU to write pixels to the screen. Things like alpha blending and stuff like that become virtually free too.

2) Unified shader benefits... hmmm... one of the great things is the fact that based on how complex a VS or PS is, you can shift around the ALU allocation policy to basically put more grunt where it's needed (e.g. post-processing effects are pixel shader heavy, so you'll want to divert more processing goodness to that than, say, worry about the 4 vertices that are used to render the full screen quad). PS and VS are basically the same now, both can access textures, and it's really opening up the door on new things that can be done. We didn't really have the time to push a lot of this as far as we'd like for launch.

3) Utilisation of GPU power... tricky one... the thing most people forget now when talking about "engines" is that they go on about "how many triangles" it can push, etc... well... yeah... you can push a lot... but who cares? Why does it matter if you can push say 2 million/sec and I can only do 100k/sec... surely it only matters if your 2 million actually look better... the shift has gone now from polygons to shaders... polygon numbers are becoming meaningless... the true overheads and time to be spent/saved come in how complex your shaders are. This will almost inevitably come down to the pixel shaders, as they fire much more often, and it's a balance between how many textures you fetch and how complex the work you do on them is. It's easy to brute force an algorithm and get it to look nice, but maybe there's a better way to do the same thing and make it run in half the time. That's not to say the overhead of polygons is negligible these days, however the cost of rendering a primitive on the GPU side is more than likely mainly in the shaders. There's some other techniques we need to investigate in the future that were too risky for us to play with in our timeframe, so we may find new ways to eke more out of everything.

If you were more after how much we're utilising the GPU... that thing's running hot... all the time...

Happy?

-roger
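To put some rough numbers behind Roger's first point (these are my own back-of-the-envelope assumptions, not his), here's how quickly raw fill rate alone can chew through a main-memory bus:

#include <cstdio>

// Assumed figures only: 1280x720 target, 4-byte colour + 4-byte depth per pixel,
// both read and written, 60 frames/sec, average overdraw of 3x. An order-of-
// magnitude illustration, not a measurement of any real game.
int main()
{
    const double pixels   = 1280.0 * 720.0;
    const double bytes_rw = (4 + 4) * 2.0;   // colour + depth, read and write
    const double overdraw = 3.0;             // assumed average overdraw
    const double fps      = 60.0;

    const double gb_per_sec = pixels * bytes_rw * overdraw * fps / 1e9;
    std::printf("Approx. framebuffer traffic: %.1f GB/s\n", gb_per_sec);
    // ~2.7 GB/s for a single plain pass under these assumptions; alpha blending,
    // MSAA and multiple passes multiply that further, and it's exactly this
    // traffic the eDRAM keeps off the main GDDR3 bus.
    return 0;
}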
 
jvd said:
It goes XDR - Cell - FlexIO - RSX - GDDR. So the fastest RSX can access the XDR RAM is the speed at which XDR is attached to Cell, which is 20-something GB/s. Doesn't matter what the FlexIO runs at, you're limited by the bandwidth from the RAM, which is going to be hit up by the Cell chip already.

I may have misunderstood, but you stated that Cell's needs would eat into the bandwidth between RSX and itself. Cell can access XDR and consume bandwidth there without touching its bandwidth between itself and RSX. So it seems incorrect to say that Cell will be consuming most of the bandwidth between itself and RSX for its own needs, so that sending textures over the link between them may suffer for this reason. If I misunderstand what you say then I apologize.

Also, the bandwidth FlexIO allows for the link between RSX and Cell is what I had in mind. I don't disagree that one can only get 20GB/s into RSX from this link. However, you can still also send 15GB/s from RSX to Cell. As far as how fast Cell can access the GDDR3 RAM, I don't disagree 15GB/s is the max you could get over FlexIO, but this is in addition to the 25.6GB/s Cell has to the XDR memory pool. In any case Cell has more raw bandwidth available to it than Xenon. As far as RSX goes, I still hold that it could pretty much end up a wash with what Xenos has.

I never said it couldn't; it will only get a fraction of the bandwidth that the GDDR pool gives, because you're limited by the bandwidth coming out of the XDR RAM, and then you have to subtract the bandwidth Cell is going to be using when accessing that memory. Then you might even have a speed reduction again in the FlexIO if you're using that to transfer many procedurally generated textures and other post-process effects.

Cell doesn't need bandwidth to the GDDR3 memory unless data intended for Cell is larger than the XDR memory pool. You would be right, I suppose, if Cell accessed the framebuffer in the GDDR3 memory though. As for RSX eating into Cell's bandwidth from the XDR pool if textures or data came from XDR, I addressed that and I don't disagree. You covered something I didn't think of with the case that textures are both coming from XDR and being procedurally generated by Cell. Well, RSX still has up to 20GB/s coming to it from the Cell<->RSX link on top of the bandwidth from the GDDR3 pool. Cell would be losing bandwidth from the XDR pool for Cell-exclusive needs, but the procedurally generated textures aren't taking bandwidth from Cell since they are a result of a Cell task. At least that's how I look at it. I do assume most of the data needed for Cell-exclusive tasks should reside in the XDR memory pool.

Some of RSX's bandwidth will be taken by the Cell chip. Both the XDR bandwidth and FlexIO bandwidth will be used by Cell and RSX when transferring any data the two need to share or give to each other, before you get to texture bandwidth.

I don't think I agree with that. Cell and RSX can share data without RSX having to touch XDR memory, in that Cell can procedurally generate data for RSX, or RSX can likewise send data to Cell. This of course is only one case to consider.
 
I may have misunderstood, but you stated that Cell's needs would eat into the bandwidth between RSX and itself. Cell can access XDR and consume bandwidth there without touching its bandwidth between itself and RSX. So it seems incorrect to say that Cell will be consuming most of the bandwidth between itself and RSX for its own needs, so that sending textures over the link between them may suffer for this reason. If I misunderstand what you say then I apologize.

Yes, Cell can access the RAM without using FlexIO, of course. However, if Cell is accessing XDR then that is X amount of bandwidth less that RSX can use when accessing the XDR RAM.

You get to places where you can lose speed. First, the XDR RAM can only send data as fast as its bus allows. Then you have to factor in the Cell chip accessing that RAM too.

Then Cell is going to be sending things over FlexIO to RSX. Procedural textures and anything else you can think of (even post-process effects, as it's RSX that will output the final frame) will be going over the FlexIO. So if these things use say 30GB/s of bandwidth out of 45 of the FlexIO, then RSX can only access 15GB/s of the XDR RAM bandwidth. Less if Cell is using the XDR bandwidth also.

That is what I'm talking about, sorry if I'm not clear enough.

Cell doesn't need bandwidth to the GDDR3 memory unless data intended for Cell is larger than the XDR memory pool. You would be right, I suppose, if Cell accessed the framebuffer in the GDDR3 memory though. As for RSX eating into Cell's bandwidth from the XDR pool if textures or data came from XDR, I addressed that and I don't disagree. You covered something I didn't think of with the case that textures are both coming from XDR and being procedurally generated by Cell. Well, RSX still has up to 20GB/s coming to it from the Cell<->RSX link on top of the bandwidth from the GDDR3 pool. Cell would be losing bandwidth from the XDR pool for Cell-exclusive needs, but the procedurally generated textures aren't taking bandwidth from Cell since they are a result of a Cell task. At least that's how I look at it. I do assume most of the data needed for Cell-exclusive tasks should reside in the XDR memory pool.
I meant XDR RAM.

But yes, if the Cell chip needed to access, say, the GDDR RAM to do post-processing on the framebuffer, it may eat up that RAM's bandwidth.

The PS3 setup is very interesting.

I think it would have been best if they had FlexIO to 128 MB of RAM for Cell and then a 256-bit bus to 256 MB of RAM for RSX.

I don't think I agree with that. Cell and RSX can share data without RSX having to touch XDR memory, in that Cell can procedurally generate data for RSX, or RSX can likewise send data to Cell. This of course is only one case to consider.

They can, but it will eat up the FlexIO bus, which eats into RSX's bandwidth to the XDR RAM.

To use the full 512 MB of RAM they are going to need to use both the GDDR and XDR buses. Cell is going to be eating up one of them, and it may even eat up a lot of the FlexIO bus by sending data to RSX.

Devs are going to have to balance that, and I'd say it will be a while before they get the hang of it.


Both systems have their positive points, and I've said from the start I expect them both to put out images very close to each other.

Personally I made my choice: Xbox 360 + Rev. It's easy for me though, as my sister will buy a PS3 when it drops in price.
 
This is what I'm talking about: not much ******ism, just opinions, facts and speculation. What I first loved this forum for anyway. I sure hope Sony reads forums like these, and boosts this and that here and there. But remember, the materials for the technology of 2020 exist today. Dual-threading like what Intel is working on was like the next logical step in CPUs; then you have Cell.

The eDRAM in the X360 is what worries me, because it is one of the main reasons PS2 games still look so good. So I hope the Sony bigwigs give Ken his way and rid the PS3 of any weaknesses. They would do well to analyze the strengths of the X360. But hell, I'll say it again: if certain demos on PS3 are real, and are proven real, won't that drastically change the face of this thread? We'll wait and see.
 
jvd said:
Yes, Cell can access the RAM without using FlexIO, of course. However, if Cell is accessing XDR then that is X amount of bandwidth less that RSX can use when accessing the XDR RAM.

Don't disagree if we are referring to the aggregate bandwidth RSX can have between both the GDDR3 and XDR memory pools.

You get to places where you can lose speed. First, the XDR RAM can only send data as fast as its bus allows. Then you have to factor in the Cell chip accessing that RAM too.

I don't disagree with this either. I'm only contrasting this to the situation with Xenon, where it has 10.8GB/s between it and Xenos that it must use to communicate data to Xenos and/or access main memory. (Thanks for the clarification, Jaws.) Cell has 25.6GB/s to XDR and 20GB/s write / 15GB/s read between it and RSX to either communicate data to RSX or access GDDR3 memory. I'm only saying that on the face of things Cell has more raw bandwidth available to it.

There are a lot of hypothetical situations we could come up with, for sure, but at least to me Cell has the advantage here, barring extreme cases where both Cell and Xenon would likely starve for data.

Then Cell is going to be sending things over FlexIO to RSX. Procedural textures and anything else you can think of (even post-process effects, as it's RSX that will output the final frame) will be going over the FlexIO. So if these things use say 30GB/s of bandwidth out of 45 of the FlexIO, then RSX can only access 15GB/s of the XDR RAM bandwidth. Less if Cell is using the XDR bandwidth also.

That is what I'm talking about, sorry if I'm not clear enough.

Well... FlexIO only allows for 20GB/s from Cell to RSX and 15GB/s from RSX to Cell in the PS3, so I don't think 30 or 45 GB/s can be consumed there. At a maximum RSX can only consume 20GB/s from the XDR memory pool, leaving 5.6GB/s for Cell-exclusive needs, since Cell only has 25.6GB/s to/from the XDR pool... a situation most likely avoided, for sure... or that should be avoided. Cell could still read 15GB/s from the GDDR3 memory pool over the link between itself and RSX... nutzo if you consider you have this bandwidth lying around from the GDDR3 pool and yet you still set up RSX to have to snag 20GB/s from XDR. RSX could consume all the bandwidth to the GDDR3 pool, effectively making the 15GB/s from RSX to Cell useless in this case... again, it's a pretty bad case.

If we consider that Xenos consumed all the bandwidth to the GDDR3 memory, then Xenon flat-out starves for data... although having only 5.6GB/s still available to Cell in our "blue screen" case isn't really all that much to be happy about either, given how badly resources have been managed there. Xenon's communication with Xenos can never exceed 10.8GB/s, and in this case Xenon-exclusive tasks flat-out starve for data. Contrast that again with the Cell<->RSX link, which can use up to 20GB/s and 15GB/s, which is significantly more bandwidth, and Cell still doesn't flat-out starve for data... although the situation is likely dire.
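To spell out the budget I keep juggling above, here's a rough sketch using the figures I'm assuming (25.6GB/s XDR, 22.4GB/s GDDR3, 20GB/s Cell-to-RSX and 15GB/s RSX-to-Cell over FlexIO). Nothing here is measured; it's just the worst case written out:

#include <cstdio>
#include <algorithm>

// Assumed PS3 bandwidth figures from this thread, in GB/s. Worst-case sketch only.
int main()
{
    const double xdr_total   = 25.6;  // Cell <-> XDR
    const double gddr3_total = 22.4;  // RSX <-> GDDR3
    const double cell_to_rsx = 20.0;  // FlexIO, Cell -> RSX direction
    const double rsx_to_cell = 15.0;  // FlexIO, RSX -> Cell direction

    // Worst case discussed above: RSX pulls the full 20GB/s of XDR traffic it can
    // receive over FlexIO, so Cell-exclusive work is left with whatever remains.
    const double rsx_from_xdr  = std::min(cell_to_rsx, xdr_total);  // 20.0
    const double cell_leftover = xdr_total - rsx_from_xdr;          // 5.6

    std::printf("RSX streaming from XDR:         %.1f GB/s\n", rsx_from_xdr);
    std::printf("Left for Cell-only XDR traffic: %.1f GB/s\n", cell_leftover);
    std::printf("RSX aggregate ceiling:          %.1f GB/s (GDDR3 %.1f + link %.1f)\n",
                gddr3_total + cell_to_rsx, gddr3_total, cell_to_rsx);
    std::printf("Cell read-back path from GDDR3: up to %.1f GB/s\n", rsx_to_cell);
    return 0;
}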

I meant XDR RAM.

But yes, if the Cell chip needed to access, say, the GDDR RAM to do post-processing on the framebuffer, it may eat up that RAM's bandwidth.

Agreed. It could eat up 20GB/s, or even the available 22.4GB/s, but then I'd again have to wonder why a developer would use monkeys to code up their game engine.

The PS3 setup is very interesting.

Are you daft man...seriously are you friggin retarded or something? Shhhhhhhh!!! Somebody's gonna hear you...they'll do bad things to you....horrible despicable things. MOMMY!

I think it would have been best if they had FlexIO to 128 MB of RAM for Cell and then a 256-bit bus to 256 MB of RAM for RSX.

I don't know...I get the feeling devs would rather have more RAM than less RAM, even if the smaller pool is faster.


They can, but it will eat up the FlexIO bus, which eats into RSX's bandwidth to the XDR RAM.

True. However, if RSX is still getting 20GB/s from Cell directly and 22.4GB/s from the GDDR3 memory pool, it has really lost nothing in the maximum amount of data that can come into it, i.e. bandwidth. My point is that RSX only loses bandwidth if its aggregate bandwidth is not taken advantage of. If it gets 20GB/s from Cell directly, or from data that resides in the XDR memory pool, or a mixture thereof, it is still getting 20GB/s over the link, and thus it cannot lose bandwidth unless less than 20GB/s is actually sent to it. The same reasoning does not apply to Cell, because RSX uses all data towards graphics, or rather RSX-only tasks, whereas Cell uses data both for RSX-assisting tasks and tasks which are Cell's domain alone.

To use the full 512 MB of RAM they are going to need to use both the GDDR and XDR buses. Cell is going to be eating up one of them, and it may even eat up a lot of the FlexIO bus by sending data to RSX.

I don't disagree that in order to fully use the memory space available, bandwidth will be consumed throughout the system. As you can tell, I'm reluctant to say that Cell will be "eating up" the bandwidth somewhere. I will agree Cell will be "eating into" the bandwidth somewhere... IMO most likely everywhere at some point, or maybe on average.

Devs are going to have to balance that, and I'd say it will be a while before they get the hang of it.

I agree, but...geez I sound like a broken record....

Anyway, while I don't doubt that developers will have to get the hang of things, I don't see it as particularly hard to identify the "boneheaded" things one could do with managing memory resources and bandwidth, and avoid them... such as the nasty scenario described earlier where a dev has RSX nearly sucking the bandwidth to the XDR dry yet has bandwidth to spare in the GDDR3 pool... this seems a glaringly obvious situation one should avoid. I hope it is, at least :(

Both systems have their positive points, and I've said from the start I expect them both to put out images very close to each other.

Personally I made my choice: Xbox 360 + Rev. It's easy for me though, as my sister will buy a PS3 when it drops in price.

I've got an X360 as of right now and I'm going to get a PS3 when it launches. I'm up in the air on the Revolution... never in my life would I have thought that possible about a Nintendo console, but it's true.
 
Mintmaster said:
I never said it wouldn't, but we're not talking about no physics vs. awesome physics. My point is 2 or 3 times the FP power (doubtful) will not have a significant impact on physics, especially in terms of what it brings you in a game.

In the mainstream physics realm, anything less than 10x more power isn't really noticeable.

I've actually coded a physics engine. Yes, there's a lot of FP code in there, but really impressive physics (which I haven't achieved yet) needs to navigate and maintain large spatial data structures, solve sparse matrices, and know how to leave things alone that aren't going to change. It's not simply a matter of crunching numbers.

You're not kidding, there are some significant programming issues in physics. There are a lot of algorithms that would run 4-8x faster if hardware could support efficient scatter and gather of single FP (SP or DP) quantities into packed data structures. The problem is that the basic memory technologies are moving in the other direction.
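To illustrate the kind of scatter/gather I mean, here's a toy example (made-up structures, not production code): a constraint solver constantly has to pull scattered scalars into packed, SIMD-friendly blocks before it can crunch them, and that repacking is exactly where current memory systems hurt.

#include <vector>
#include <cstdint>
#include <cstddef>

// Bodies live in one big array, but a constraint touches an arbitrary pair of
// them. Before any packed/SIMD math can happen, the solver has to GATHER
// scattered floats into a contiguous block (and SCATTER results back later).
// Hardware gather/scatter of single floats would make this cheap; without it,
// it's a string of dependent, cache-unfriendly loads.
struct Body       { float px, py, pz, invMass; };
struct Constraint { std::uint32_t a, b; };

void gather_positions(const std::vector<Body>& bodies,
                      const std::vector<Constraint>& constraints,
                      std::vector<float>& packed)   // 6 floats per constraint
{
    packed.resize(constraints.size() * 6);
    for (std::size_t i = 0; i < constraints.size(); ++i) {
        const Body& A = bodies[constraints[i].a];   // random access #1
        const Body& B = bodies[constraints[i].b];   // random access #2
        float* out = &packed[i * 6];
        out[0] = A.px; out[1] = A.py; out[2] = A.pz;
        out[3] = B.px; out[4] = B.py; out[5] = B.pz;
    }
}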
 
Fafalada said:
IMO* Cell should spank a conventional CPU silly when it comes to matrix solvers.

I believe that the specific comment referred to sparse matrices, which are a different beast altogether from simple packed matrices. For packed matrices, all that matters is memory bandwidth.
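If anyone wants to see the distinction in code terms, here's a minimal sketch (mine, purely illustrative): a dense, packed matrix-vector multiply streams memory linearly and is essentially bandwidth-bound, while a CSR-style sparse multiply adds an indirect, data-dependent access pattern that behaves very differently depending on the CPU architecture.

#include <vector>
#include <cstddef>

// Dense ("packed") mat-vec: straight streaming reads, bandwidth-bound.
// y must be sized to n before the call.
void dense_matvec(const std::vector<double>& A, const std::vector<double>& x,
                  std::vector<double>& y, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i) {
        double sum = 0.0;
        for (std::size_t j = 0; j < n; ++j)
            sum += A[i * n + j] * x[j];
        y[i] = sum;
    }
}

// CSR sparse mat-vec: x[col[k]] is an indirect gather whose cost depends
// heavily on locality and on how the CPU copes with irregular loads.
// y must be sized to the row count before the call.
void csr_matvec(const std::vector<double>& val,
                const std::vector<std::size_t>& col,
                const std::vector<std::size_t>& rowStart,   // size = rows + 1
                const std::vector<double>& x,
                std::vector<double>& y)
{
    for (std::size_t i = 0; i + 1 < rowStart.size(); ++i) {
        double sum = 0.0;
        for (std::size_t k = rowStart[i]; k < rowStart[i + 1]; ++k)
            sum += val[k] * x[col[k]];
        y[i] = sum;
    }
}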

Aaron Spink
speaking for myself inc.
 
Don't disagree if we are referring to the aggregate bandwidth RSX can have between both the GDDR3 and XDR memory pools.

Aye. That is what I'm talking about. You can't say RSX has X amount of bandwidth because XDR + GDDR bandwidth is Y amount, because in practice Cell is always going to need some Z amount of that bandwidth.

I don't disagree with this either. I'm only contrasting this to the situation with Xenon, where it has 10.8GB/s between it and Xenos that it must use to communicate data to Xenos and/or access main memory. (Thanks for the clarification, Jaws.) Cell has 25.6GB/s to XDR and 20GB/s write / 15GB/s read between it and RSX to either communicate data to RSX or access GDDR3 memory. I'm only saying that on the face of things Cell has more raw bandwidth available to it.
Right, but my point is that with the X360 we always know that Xenos has X amount of bandwidth to main RAM, since we know Xenon can only ever use 10.8GB/s of bandwidth. So while the PS3 has that variable of what Cell needs, the Xbox doesn't, as we know the max the CPU can access. So Xenos to RAM is what, 22GB/s? Minus 10.8GB/s, that is 11.2GB/s it can access, plus the eDRAM's 32GB/s, giving us about 43GB/s of bandwidth to the chip, and that can actually increase.

With the PS3 we can't say that, because we don't know what Cell needs, and I'm sure it will vary between engines and even frames.


Get my drift?

Well... FlexIO only allows for 20GB/s from Cell to RSX and 15GB/s from RSX to Cell in the PS3, so I don't think 30 or 45 GB/s can be consumed there. At a maximum RSX can only consume 20GB/s from the XDR memory pool, leaving 5.6GB/s for Cell-exclusive needs, since Cell only has 25.6GB/s to/from the XDR pool... a situation most likely avoided, for sure... or that should be avoided. Cell could still read 15GB/s from the GDDR3 memory pool over the link between itself and RSX... nutzo if you consider you have this bandwidth lying around from the GDDR3 pool and yet you still set up RSX to have to snag 20GB/s from XDR. RSX could consume all the bandwidth to the GDDR3 pool, effectively making the 15GB/s from RSX to Cell useless in this case... again, it's a pretty bad case.

Is it 20GB/s? Sorry, not up on the specs that much. But anyway, whatever Cell needs in terms of bandwidth to send procedural textures, or geometry, or to do post-process effects on the frame is going to eat that up, thus limiting what RSX can actually access from XDR, let alone what Cell needs to be accessing from the XDR.

So as I've said, it can vary from engine to engine or frame to frame. Devs are going to have to mess with that for a while.


If we consider that Xenos consumed all the bandwidth to the GDDR3 memory, then Xenon flat-out starves for data... although having only 5.6GB/s still available to Cell in our "blue screen" case isn't really all that much to be happy about either, given how badly resources have been managed there. Xenon's communication with Xenos can never exceed 10.8GB/s, and in this case Xenon-exclusive tasks flat-out starve for data. Contrast that again with the Cell<->RSX link, which can use up to 20GB/s and 15GB/s, which is significantly more bandwidth, and Cell still doesn't flat-out starve for data... although the situation is likely dire.
That is true. However, Xenos has that nice fast eDRAM it can access too. Will it need more than 10 or so GB/s of bandwidth for textures?

It also has that link from the Xenon cache to itself, which is another 10 GB/s or so over which Xenos can access geometry or textures.

These systems have many quirks, and as the members here teach me more stuff I begin to like both setups very much.

Agreed. It could eat up 20Gb/s or the available 22.4Gb/s but then I'd again have to wonder why a developer would use monkeys to code up their game engine.

It's not just 20GB/s, I'm talking about 10 or 12GB/s. What I'm really thinking about now is the FlexIO. You're saying it's 20GB/s one way, which means if we're doing post-processing effects on the final image and sending procedural textures, Cell is going to hit that very hard. Which means even if it's only using say 5GB/s of bandwidth on the XDR RAM, it may be using 10GB/s on the FlexIO, thus limiting the XDR RAM to communicating with RSX at only 10GB/s.

I am correct in thinking that, right?

Are you daft man...seriously are you friggin retarded or something? Shhhhhhhh!!! Somebody's gonna hear you...they'll do bad things to you....horrible despicable things. MOMMY!

Haha, let them do their worst!!!! Bwahahaha.

Seriously, I just like tech. I just don't buy Sony because I've never had a positive experience with them.


I don't know...I get the feeling devs would rather have more RAM than less RAM, even if the smaller pool is faster.
Maybe. I just think that the accessing of the second pool of RAM is going to cause some headaches that could have been avoided. Also, if my assumptions above are correct, there may be some points where they can't use the power of Cell or they'd cut themselves off from a RAM pool.

True. However, if RSX is still getting 20GB/s from Cell directly and 22.4GB/s from the GDDR3 memory pool, it has really lost nothing in the maximum amount of data that can come into it, i.e. bandwidth. My point is that RSX only loses bandwidth if its aggregate bandwidth is not taken advantage of. If it gets 20GB/s from Cell directly, or from data that resides in the XDR memory pool, or a mixture thereof, it is still getting 20GB/s over the link, and thus it cannot lose bandwidth unless less than 20GB/s is actually sent to it. The same reasoning does not apply to Cell, because RSX uses all data towards graphics, or rather RSX-only tasks, whereas Cell uses data both for RSX-assisting tasks and tasks which are Cell's domain alone.

No, it can't "lose bandwidth", however it will lose 256 MB of RAM.

I don't disagree that in order to fully use the memory space available, bandwidth will be consumed throughout the system. As you can tell, I'm reluctant to say that Cell will be "eating up" the bandwidth somewhere. I will agree Cell will be "eating into" the bandwidth somewhere... IMO most likely everywhere at some point, or maybe on average.

Right, and I have a feeling that extra 256 MB is going to be wasted sitting there.

I've got an X360 as of right now and I'm going to get a PS3 when it launches. I'm up in the air on the Revolution... never in my life would I have thought that possible about a Nintendo console, but it's true.
Eh, as I said, never had luck with Sony products.
 
System Memory Configuration & Bus System (Breakdown):

PS3: Memory Configurations & Bus Systems
256MB XDR System Ram: 25.6GB/s
256MB GDDR3 VRAM: 22.4GB/s
CELL to RSX: 35GB/s (20GB/s write + 15GB/s read)
CELL EIB: 300GB/s peak
CELL FlexIO Bus Bandwidth: 76.8 GB/s (44.8 GB/s outbound, 32 GB/s inbound)
RSX to Memory: 48GB/s Effective (can access both XDR & VRAM simultaneously)
South Bridge: 5GB/s (2.5GB/s upstream + 2.5GB/s downstream)


Xbox 360: Memory Configurations & Bus Systems
512MB GDDR3 Unified Ram Design (Xenos to Ram): 22.4 GB/s
Xenon Internal Bus: 1.5GB/s
Xenon to Xenos: 21.6 GB/s (10.8GB/s read + 10.8GB/s write)
Xenos to EDRAM: 32GB/s
10MB EDRAM Internal Logic: 256 GB/s
South Bridge: 1GB/s (500MB/s upstream + 500MB/s downstream)
 
Mintmaster said:
I never said it wouldn't, but we're not talking about no physics vs. awesome physics. My point is 2 or 3 times the FP power (doubtful) will not have a significant impact on physics, especially in terms of what it brings you in a game.

Tell that to AGEIA, for example. It's not just about FP power. I think you'll see a fairly significant difference in what these machines can do with physics and simulation. That said, depending on the game you could have more than 3x the FP power available versus X360.

Data structures for collision detection etc. can be made to suit SPEs. There's potential for good parallelisation of that task too.
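As a rough sketch of the sort of parallelisation I mean (a toy example using plain threads rather than actual SPE jobs, so treat the details as illustrative only): a broad-phase overlap pass over a flat array of bounding boxes splits into independent chunks that each worker can churn through from its own block of data.

#include <vector>
#include <thread>
#include <utility>
#include <cstddef>

// Flat, SPE-friendly layout: one AABB per object, no pointers to chase.
struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

static bool overlaps(const AABB& a, const AABB& b)
{
    return a.minX <= b.maxX && b.minX <= a.maxX &&
           a.minY <= b.maxY && b.minY <= a.maxY &&
           a.minZ <= b.maxZ && b.minZ <= a.maxZ;
}

// Each worker tests its own subset of 'i' indices against the rest and writes
// into its own result vector, so there is no shared mutable state. On Cell,
// the slice of boxes a job reads would be DMA'd into local store first.
void broad_phase(const std::vector<AABB>& boxes, unsigned workers,
                 std::vector<std::vector<std::pair<std::size_t, std::size_t>>>& out)
{
    out.clear();
    out.resize(workers);
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&boxes, &out, w, workers] {
            for (std::size_t i = w; i < boxes.size(); i += workers)
                for (std::size_t j = i + 1; j < boxes.size(); ++j)
                    if (overlaps(boxes[i], boxes[j]))
                        out[w].emplace_back(i, j);
        });
    }
    for (auto& t : pool) t.join();
}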
 
Titanio said:
Data structures for collision detection etc. can be made to suit SPEs. There's potential for good parallelisation of that task too.
That's true, but sometimes SPE-friendly data structures are not enough; there are problems that are inherently difficult to run efficiently on an SPE due to poor locality of data (I'm not only talking about physics..)

ciao,
Marco
 
jvd said:
The video shown was not running at 30fps+ as shown; it was sped up, and he admitted it.


Although he has said that the game is at playable framerates now, we haven't seen anyone playing it (or any new shots, actually) to prove it. This is not a knock against DeanoC, as I respect him. But we haven't seen anything so far.


Well, my point was not that it was running at that framerate, but that it was considered plausible and not impossible. So a game with 1080p, 60fps, FP16 HDR, depth of field, motion blur, heavy particle effects, cloth animation/physics, Havok physics, thousands of AI-controlled characters, and impressive geometry/textures/effects is considered plausible. With all the talk of RSX bandwidth starvation/scarcity, you would think it would naturally choke with all of that going on at the same time at such a high framerate; that is, that all of that at the same time would not be possible, or would be nigh impossible, due to bandwidth constraints.

It was one of the things that worried me, that there was going to have to be an obligatory, unavoidable, substantial trade-off between resolution, HDR, particles, motion blur, depth of field, framerate and the like due to technical constraints. But it seems getting the whole shebang is possible; that is, it is within the realm of the plausible, not nearly or totally impossible.
 
Nerve-Damage said:
System Memory Configuration & Bus System (Breakdown):

PS3: Memory Configurations & Bus Systems
256MB XDR System Ram: 25.6GB/s
256MB GDDR3 VRAM: 22.4GB/s
CELL to RSX: 35GB/s (20GB/s write + 15GB/s read)
CELL EIB: 300GB/s peak
CELL FlexIO Bus Bandwidth: 76.8 GB/s (44.8 GB/s outbound, 32 GB/s inbound)
RSX to Memory: 48GB/s Effective (can access both XDR & VRAM simultaneously)
South Bridge: 5GB/s (2.5GB/s upstream + 2.5GB/s downstream)


Xbox 360: Memory Configurations & Bus Systems
512MB GDDR3 Unified Ram Design (Xenos to Ram): 22.4 GB/s
Xenon Internal Bus: 1.5GB/s
Xenon to Xenos: 21.6 GB/s (10.8GB/s read + 10.8GB/s write)
Xenos to EDRAM: 32GB/s
10MB EDRAM Internal Logic: 256 GB/s
South Bridge: 1GB/s (500MB/s upstream + 500MB/s downstream)


Quick question: anyone know if the FlexIO interface is serial control or parallel control? How about the Xenon-Xenos interface?
 
zidane1strife said:
Well, my point was not that it was running at that framerate, but that it was considered plausible and not impossible. So a game with 1080p, 60fps, FP16 HDR, depth of field, motion blur, heavy particle effects, cloth animation/physics, Havok physics, thousands of AI-controlled characters, and impressive geometry/textures/effects is considered plausible. With all the talk of RSX bandwidth starvation/scarcity, you would think it would naturally choke with all of that going on at the same time at such a high framerate; that is, that all of that at the same time would not be possible, or would be nigh impossible, due to bandwidth constraints.
Well that was a long time ago... We constantly adapt as we learn what makes this thing tick...

We are definitely not using FP16 HDR anymore; Marco implemented a cool method to get the same results using INT8. Faster and with MSAA. Winner :D

1080p? could still happen but I reckon 720p will be the standard but we will see. Just can't see us burning precious memory, fillrate and bandwidth for something only a few people can use...
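Roughly speaking (and deliberately glossing over what Marco actually did), the general idea behind an INT8 HDR target is to spend the four 8-bit channels on something smarter than plain RGB. A purely illustrative sketch of one common shared-scale approach, storing a log-encoded brightness alongside the normalised colour:

#include <cstdint>
#include <cmath>
#include <algorithm>

// Illustrative only: pack an HDR RGB colour into four 8-bit channels by storing
// the colour normalised to its largest component plus a log2-encoded scale.
// One well-known family of tricks for "HDR in an 8-bit target"; it is not
// necessarily anything like the method used in the actual engine.
struct RGBA8 { std::uint8_t r, g, b, a; };

static std::uint8_t to_u8(float v)
{
    return static_cast<std::uint8_t>(std::clamp(v, 0.0f, 1.0f) * 255.0f + 0.5f);
}

RGBA8 encode_hdr(float r, float g, float b)
{
    const float m = std::max({r, g, b, 1e-6f});          // brightest component
    const float logM = (std::log2(m) + 8.0f) / 16.0f;    // assume range 2^-8 .. 2^8
    return { to_u8(r / m), to_u8(g / m), to_u8(b / m), to_u8(logM) };
}

void decode_hdr(RGBA8 p, float& r, float& g, float& b)
{
    const float m = std::exp2((p.a / 255.0f) * 16.0f - 8.0f);
    r = (p.r / 255.0f) * m;
    g = (p.g / 255.0f) * m;
    b = (p.b / 255.0f) * m;
}

Filtering and blending a target encoded like this takes extra care, which is part of why it's a clever trick rather than a free lunch.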
 
DeanoC said:
Well that was a long time ago... We constantly adapt as we learn what makes this thing tick...

We are definitely not using FP16 HDR anymore; Marco implemented a cool method to get the same results using INT8. Faster and with MSAA. Winner :D

1080p? could still happen but I reckon 720p will be the standard but we will see. Just can't see us burning precious memory, fillrate and bandwidth for something only a few people can use...

Has the team also started to use the SPEs now? Because I remember you wrote that they only used the main core.
 
jvd said:
Err ?

Last gen we had 3 systems

1) PS2: this got the biggest budgets, however it was really the weakest of the systems and the most difficult to program for. It was also released a year or so (don't remember exactly) before the Xbox.

2) GameCube: this system debuted at $200, not the $300 of the other two, and its graphics fall between the Xbox and PS2. Some games like RE4 come very close to top-of-the-line Xbox graphics. Bear in mind that the Xbox was rumored to cost just over $400 to make and the GameCube just over $200, so a system for about half the price was keeping up visually.

3) Xbox: this system was the most expensive to make, costing around $400, and came a year after the PS2. It has the best graphics, but the graphics aren't so far away from the PS2's that people jumped off the PS2 boat.


The best of the games from all 3 systems aren't too far away from each other.

This gen we have the Xbox 360 costing over $400 to make according to some, and it's out what, 6-8 months before the PS3, which most likely has an equal budget for the system.


So at most you're most likely going to see something like the best of the GameCube games vs. the best of the Xbox games, which aren't very far away from each other.

When I think about the visuals, of course both the X360 and PS3 will be able to output the same.
I rather tend to think that if we have a great AAA title on both systems and, just for example, one outputs it at a sluggish 30fps with dips down to 10-15 while the other outputs a constant 30fps, then that's a definite improvement of the visuals, and I know which version I would pick.
 
DeanoC said:
Well that was a long time ago... We constantly adapt as we learn what makes this thing tick...

We are definitely not using FP16 HDR anymore; Marco implemented a cool method to get the same results using INT8. Faster and with MSAA. Winner :D

1080p? could still happen but I reckon 720p will be the standard but we will see. Just can't see us burning precious memory, fillrate and bandwidth for something only a few people can use...

Apologies in advance for this n00b question:
What is INT8? Is this something that would be available on Rev and 360 as well?


Do you think this gen will be 720p and then migrate to 1080p later (on the PS3, since it's the only one that will support the res), or now that you've had more time with the hardware, do you feel that 720p will be the sweet spot of resolution, framerate and effects throughout the generation? (i.e. in the decision between resolution and more graphic effects, more effects will always win...)

(I'm trying to think of the roadmap of 1080p displays as well, not just the consoles, and wonder when there will be a 'critical mass' amount of content for them. AFAIK, most of today's 1080p displays only accept a 1080i signal anyway.)
 