MS: "Xbox 360 More Powerful than PS3"

Jaws said:
Yep, you're wrong.

Agree to disagree there!! ;)



I've shown you TWO block diagrams from E3 2005.

You showed me two diagrams with the RSX to Cell, the CPU to the South Bridge, and the I/O devices to the South Bridge. But that doesn't show the data rates between the South Bridge and the I/O devices.


Again, the I/O devices DO NOT connect directly to CELL. They connect to the SB. It's the SB that connects to CELL via FlexIO. The 6 USB devices connect to the SB over their own buses. Your links have confused what FlexIO is: it's the equivalent of PCI-E. The PCI-E bus isn't going to replace USB, and neither is FlexIO; they work together. This is basic stuff.

Everyone is wrong except you? I got it!! ;)



Let's use another angle: COMMON SENSE.

I say 40 GB/sec. You say 76 GB/sec.

76-40 = 36 GB/sec

There's no way that the HDD, network, Blu-ray, USB devices etc. are going to need MORE bandwidth (36 GB/sec) than CELL-RSX (35 GB/sec). I'm repeating myself now, so you can believe what you want if that makes you happy...

Who are you to judge bandwidth waste? Are you basing your opinion on PC standards and how they interact with each other? Consoles are built with high-bandwidth needs in mind; it's not about what today's standards may be, it's the future applications they're worried about.

So by your logic, data rates between I/O devices and the South Bridge would never change. Meaning 5-10 years later we'll still be using the same data transfer rates between the I/O devices and the South Bridge (or another chip).

Anyhow, come January or February everything will be settled...
 
Nerve-Damage said:
Agree to disagree there!! ;)

You showed me two diagrams with the RSX to Cell, the CPU to the South Bridge, and the I/O devices to the South Bridge. But that doesn't show the data rates between the South Bridge and the I/O devices.

Everyone is wrong except you? I got it!! ;)

You're as confused as they are because you've misinterpreted what FlexIO is and what device buses are.

[Block diagram: kaigai_6a.gif, CELL/RSX/SB with the FlexIO bandwidths marked]


http://pc.watch.impress.co.jp/docs/2005/0701/kaigai195.htm

The FlexIO bandwidths are clearly marked in the diagram above.

Nerve-Damage said:
Who are you to judge bandwidth waste? Are you basing your opinion on PC standards and how they interact with each other? Consoles are built with high-bandwidth needs in mind; it's not about what today's standards may be, it's the future applications they're worried about.

So by your logic, data rates between I/O devices and the South Bridge would never change. Meaning 5-10 years later we'll still be using the same data transfer rates between the I/O devices and the South Bridge (or another chip).

Anyhow, come January or February everything will be settled...

Simple logic:

What is the data transfer rate of an HDD?
What is the data transfer rate of USB?
What is the data transfer rate of Ethernet?
What is the data transfer rate of Blu-ray?

Add them up and they will be NOWHERE NEAR 36 GB/sec! That's why they go through the SB on its 2.5+2.5 GB/sec link. Heck, the X360's SB bandwidth is 1 GB/sec! You are way off...

If SONY allocated 36 GB/sec to I/O and ONLY 35 GB/sec to CELL-RSX, they should be SHOT!
 
Wow, Nerve-Damage, you're just full of misinformation today.

Nerve-Damage said:
In the end the PS3 has a higher tolerance for real-world performance than the Xbox 360, whether you choose to accept the truth or not.
You want an example of how 32GB/s of access to eDRAM slaughters 22.4GB/s to GDDR3? Here you go:

Suppose you want to do alpha blending (fog, smoke, fire, windows, grass, plasma gun are a few examples), and you have 4xAA enabled. XB360 sends 4 bytes of data to the eDRAM, and that's it. RSX must read the colour at each sample, blend and write back. That's 32 bytes of data. Do this 1,000,000 times per frame @ 60fps. This task occupies 0.75% of Xenos' bandwidth to the eDRAM. It occupies 8.6% of RSX's bandwidth.

True, RSX has colour compression so that number will be, at the very best, 2.2%. Even then Xenos will perform at 2.9 times the speed. There's a simple reason eDRAM is used: Much better performance in the most demanding real-world tasks.
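For anyone who wants to check the arithmetic, here is the sum as a quick Python sketch (the byte counts are the assumptions stated above; the best case assumes 4:1 colour compression, which lands near the quoted 2.2%):

# Alpha blending 1,000,000 pixels per frame at 60 fps, with 4xAA.
blends = 1_000_000 * 60                  # blended pixels per second

# Xenos: the blend happens inside the eDRAM module, so only the 4-byte
# source colour crosses the 32 GB/s interface.
xenos_pct = (4 * blends) / 32e9 * 100    # 0.75% of eDRAM bandwidth

# RSX: read 4 samples (16 B) and write 4 samples (16 B) over 22.4 GB/s GDDR3.
rsx_pct = (32 * blends) / 22.4e9 * 100   # ~8.6% of GDDR3 bandwidth

rsx_best = rsx_pct / 4                   # best case, assuming 4:1 colour compression
print(f"Xenos {xenos_pct:.2f}%, RSX {rsx_pct:.1f}%, RSX best {rsx_best:.1f}%")
print(f"Xenos advantage in the best case: {rsx_best / xenos_pct:.1f}x")  # ~2.9x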

Nerve-Damage said:
mckmas8808 said:
So, "Nerve-Damage", you're telling me that the RSX can read from system memory while at the same time the CELL reads from the same system memory?
Do I believe it?

Yes!

It's really no different than Nvidia's upcoming Turbo Memory/Caching System... IMO.
Yeah right. Okay, depending on how you define "while", then yes, RSX can access the XDR "while" Cell is. But this fact remains: Whatever bandwidth RSX uses from the XDR, Cell can only use the remainder.

If you want to talk about real-world tolerances, then what happens when bandwidth demands fluctuate from scene to scene? A very well balanced game will be GPU limited sometimes and CPU limited others. If bandwidth is the cause of these limitations, then you can't change the bandwidth distribution on the fly to help the weakest link on PS3. You can only decide where to store textures beforehand (unless you send textures back and forth, which consumes extra bandwidth of both XDR and GDDR3).
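The contention point in numbers, as a trivial sketch (25.6 GB/sec is XDR's peak; the RSX shares are just example values):

# XDR is a single pool: whatever RSX pulls across FlexIO, Cell loses.
XDR_PEAK = 25.6                          # GB/s
for rsx_use in (0.0, 5.0, 10.0):         # hypothetical RSX demands on XDR
    print(f"RSX takes {rsx_use:4.1f} GB/s of XDR, Cell keeps {XDR_PEAK - rsx_use:4.1f} GB/s")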


But, as I always say on these forums, it will always come down to developer skills. I think Sony has the upper hand there, and if any game looks substantially better on one platform than the other, good coding will be the reason, not hardware.
 
Jaws said:
You're as confused as they are because you've misinterpreted what FlexIO is and what device buses are.


Number one, when did these diagrams become official?

Secondly, they still do not show the transfer rates between the I/O devices and the South Bridge.

So in your opinion everyone else (my links) is wrong and you're right.

If this will make you feel better…you win!! ;)
 
raxeland said:
I'm confused. Is INT8 considered HDR lighting? I thought RGB was INT8.
A data format is a data format. It isn't 'considered' anything; what data you can actually store in it is what matters.

It's like comparing S3TC and 4-bit palette formats and concluding they are one and the same because they are the same size per pixel (4 bits).
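Faf's size comparison, worked out (a sketch; the point is that equal size says nothing about what the bits can represent):

# S3TC/DXT1: a 4x4 block holds two RGB565 endpoint colours plus a
# 2-bit index per pixel, 64 bits for 16 pixels.
dxt1_bpp = (16 + 16 + 16 * 2) / 16       # 4.0 bits per pixel

# 4-bit palette: each pixel is an index into a 16-entry colour table.
palette_bpp = 4                          # also 4 bits per pixel

# Same footprint, very different content: DXT1 picks fresh endpoints
# per block (any RGB565 colours, plus interpolants), while the palette
# caps the whole image at 16 fixed colours.
print(dxt1_bpp, palette_bpp)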
 
Nerve-Damage said:
I'm not sure this will answer your question. The FlexIO within the PS3 uses unidirectional signal channels, meaning the read and write channels are separate. In theory this gives lower latency and less chance of error between reads and writes. As far as I know the Xbox 360 still uses the common bidirectional bus system found in today's PCI-Express-based PCs.

Well, dual unidirectional has little to do with latency really; the primary thing is that it allows much higher-performance electricals.

PCI-Express is also dual unidirectional, just like FlexIO. They both belong to the general category of high-speed serial interfaces. PCI-Express is a fully serial interface using 8b10b encoding (the same as SATA, Ethernet, Infiniband, etc.) with inline command and control (i.e. command and control are sent over the same physical wires as the actual data).

The X360, I believe, uses the same signalling technology as the 970 and Power4/5 processors, which, though I could certainly be wrong, involves a clock-forwarded data path with separate control.

The main point of all this is that in the case of the FlexIO interface, and possibly in the case of the X360 interface, the quoted bus bandwidths are partially overstated if they don't already have a control derating factor applied.
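As a concrete example of the derating Aaron describes, here is the 8b10b case with PCI-E 1.x numbers (FlexIO's own control overhead isn't public, so this is only an illustration):

# 8b10b puts 10 bits on the wire for every 8 payload bits, so 20% of
# the raw signalling rate is encoding overhead before any inline
# command/control cost is counted.
def payload_rate(raw_gbps):
    return raw_gbps * 8 / 10

pcie_lane_raw = 2.5                      # Gbit/s per PCI-E 1.x lane
print(payload_rate(pcie_lane_raw))       # 2.0 Gbit/s, i.e. 250 MB/s per lane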

Aaron Spink
speaking for myself inc.
 
Just a quick question about this whole debate on the limitation of using FP16 HDR vs INT8 RGB, with the code optimization on Heavenly Sword.


Is this because they're having trouble getting the game running fast on the alpha kit, or because they're using the PS3 reference tool (the final PS3 dev kit) and FP16 HDR is hogging all the FlexIO bandwidth in the final machine while keeping FSAA on / 60fps / 1080p?

Thanks. I think this is the main plight of the hardware superiorists questioning here.

An answer would be greatly appreciated
 
Nerve-Damage said:
The FlexIO provides 12 channels at 6.4 GB/s per channel, hence the 76.8 GB/s of peak bandwidth. Besides connecting the CPU to the GPU and the CPU to the South Bridge, the FlexIO also provides individual channels between the CPU and the Blu-ray drive, the Blu-ray drive and the South Bridge, the hard drive and the CPU, and the hard drive and the South Bridge, with the other channels split between the South Bridge and other I/O devices such as Bluetooth, etc. Yes, the Cell has a physical connection (a FlexIO channel) between it and the Blu-ray drive and hard drive (for data-streaming purposes, versus going through the South Bridge). You don't have to believe me, but know that I know it's true.

Oh, of all the unmitigated BS. So Sony is going to design a fully custom BR drive using a FlexIO interface? And a custom hard drive with a FlexIO interface? Hehe, can I have some of what you're smoking? Both the BR drive and the HD will use either a PATA or SATA interface connected to the southbridge.

Aaron Spink
 
NFactor said:
Thanks. I think this is the main plight of the hardware superiorists questioning here.
No, the main plight of hardware superiorists is that they assume FP16 is the ideal-quality HDR solution available at present (in the absence of FP32 being reasonable).
 
Nerve-Damage said:
Secondly, they still do not show the transfer rates between the I/O devices and the South Bridge.

The IO devices on the SB will have at MOST 460 MB/s of bandwidth. This is assuming the following:

150MB/s to/from BR drive: SATA
150MB/s to/from HD: SATA
100MB/s to/from enet
50MB/s to/from USB
10MB/s to/from wireless

Realistically, the HD will have peak performance of 50-60 MB/s
Realistically, the BR drive will have peak performance of 10-16 MB/s
Realistically, the enet will have peak performance of 10-40 MB/s
Realistically, the USB will have peak performance of 20-40 MB/s
Realistically, the wireless will have peak performance of 4-5 MB/s

This is grade school stuff.
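Spelling out the sum (a sketch of the figures above; even the interface peaks total well under half a GB/sec):

# Peak interface rates of everything hanging off the southbridge (MB/s).
peaks = {"BR (SATA)": 150, "HDD (SATA)": 150, "enet": 100, "USB": 50, "wireless": 10}
total = sum(peaks.values())
print(f"{total} MB/s peak, i.e. {total / 1000:.2f} GB/s, vs the claimed 36 GB/s")
# 460 MB/s. Realistic device throughput (midpoints of the ranges above,
# 55 + 13 + 25 + 30 + 4.5) comes to only ~128 MB/s.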

So in your opinion everyone else (my links) is wrong and you're right.
Yes, marketing and PR, and data not relevant to the topic at hand.

Aaron Spink
speaking for myself inc.
 
Fafalada said:
No, the main plight of hardware superiorists is that they assume FP16 is the ideal-quality HDR solution available at present (in the absence of FP32 being reasonable).

Reasonable or not, I wouldn't know unless there was actual confirmation from a Ninja Theory developer on here stating that the exclusion of FP16 HDR was because of optimization of this code on the alpha kit, or on the final dev kit.

That's an answer to the question I'd like.

I mean, if Deano had FP16 running on the alpha kit back at E3, at EGDC, and at TGS, and adding color precision and art assets in turn slowed down optimization on the alpha kit now, I could understand. But the final dev kits are on the doorstep here, or in their possession just now.

Is their current optimization based on final hardware, or on code optimization for the alpha PS3, which has a significant shortage of input/output bandwidth between the Cell and RSX?
 
ARRGGHHH More details below.

First off: FP16 HDR runs perfectly fine; we render everything in RGB colour space into an FP16 buffer, then run a tonemapping algo to bring it down to LDR for display on a monitor/TV. The 'normal' way of doing HDR. It all runs at the speed you would expect and is quite playable.
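(Deano doesn't say which tonemapping operator they use; as an illustration of the FP16-to-LDR step he describes, here is the common Reinhard global operator in Python. A sketch, not Ninja Theory's code:)

# Reinhard's global tonemap: one standard way to squash HDR luminance,
# which can be arbitrarily large, into the [0,1) range a TV can show.
def reinhard(lum):
    return lum / (1.0 + lum)

for lum in (0.18, 1.0, 10.0, 1000.0):
    print(f"scene luminance {lum:>7} -> display {reinhard(lum):.3f}")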

But RGB space is shit for lighting calculations; it's simply the wrong place. Why? Originally RGB colour space was defined on the range [0,1] for each channel, with 1 being the strongest pure colour *POSSIBLE* in that channel. So RGB<0,1,0> is the purest green possible. RGB was designed (a long, long time ago) as an absolute colour space. But even a trivial look tells you that as you move to simple HDR (allowing values above 1) it's a vast waste of space. What exactly does the colour RGB<0,1000,0> mean? Something that's 1000x the purest green?

The reason is that you haven't separated hue (colour) from luminosity. When we talk about HDR we're not talking about more colour range but more luminosity. So we change the colour space to one where luminosity can go very high but the colour range keeps the same range as before.

So what we do (Marco will have to give the details) is tack an RGB->colour-space converter onto the end of each pixel shader (it's about 5 instructions, I think). This colour space is much more quantisable, so it looks virtually the same packed into an INT8 framebuffer as into an RGB FP16 one. We still have the same range of luminosity as FP16 and the same colour fidelity, but we save bandwidth (and other things) by spending a few shader instructions. It's also handy when it comes to tonemapping, as that involves calculating the scene's luminosity.
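(Deano leaves the details to Marco, so the colour space isn't named here. One published encoding in the same spirit is LogLuv: log luminance plus two chromaticity channels. The Python below sketches that idea with made-up 8-bit quantisation; it is an illustration, not Ninja Theory's format:)

import math

def rgb_to_xyz(r, g, b):
    # Linear RGB (Rec.709 primaries) to CIE XYZ; Y is luminance.
    return (0.4124 * r + 0.3576 * g + 0.1805 * b,
            0.2126 * r + 0.7152 * g + 0.0722 * b,
            0.0193 * r + 0.1192 * g + 0.9505 * b)

def encode(r, g, b):
    x, y, z = rgb_to_xyz(r, g, b)
    d = x + 15 * y + 3 * z or 1e-9
    u, v = 4 * x / d, 9 * y / d          # CIE u'v': hue only, no brightness
    # Log2 luminance covering ~32 stops (real LogLuv spends 16 bits here).
    le = (math.log2(max(y, 2 ** -16)) + 16) / 32
    return tuple(round(c * 255) for c in (le, u, v))

def decode(le8, u8, v8):
    le, u, v = le8 / 255, u8 / 255, v8 / 255
    y = 2 ** (le * 32 - 16)
    x = y * 9 * u / (4 * v)
    z = y * (12 - 3 * u - 20 * v) / (4 * v)
    return (3.2406 * x - 1.5372 * y - 0.4986 * z,   # XYZ back to linear RGB
            -0.9689 * x + 1.8758 * y + 0.0415 * z,
            0.0557 * x - 0.2040 * y + 1.0570 * z)

# A bright HDR orange round-trips through 8-bit storage with only small
# error: hue is held by u'v', the huge luminance range by the log.
print(decode(*encode(500.0, 250.0, 50.0)))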

It's got nothing to do with the current speed of FP16 rendering; it's because we worked out how to do HDR better. FP16 rendering is slower on ALL hardware versus INT8 rendering (more memory accesses and having to process floats).
It's clever software beating hardware; you'd probably want to use this on PC, X360 (it's much better than FP10), PS3, Rev etc. It's simply a better HDR method... It beats FP16 HDR in almost all cases, so as I've said, why wouldn't you use it?

Ironically for the X360 ******s it's even more relevant on X360... X360 sucks at FP16 HDR, particularly because of the tiling (64-bit framebuffers use twice as many tiles). Swap to our colour space and the X360 gets FP16 HDR quality without the loss in speed it suffers if you use real FP16 HDR.
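(Rough numbers for the tiling point, a sketch assuming 720p with 4xAA, 32-bit Z per sample and the 10 MB of eDRAM; the colour portion of the framebuffer doubles with FP16, and the tile count grows with it, with exact counts depending on what is stored per sample:)

import math

EDRAM = 10 * 1024 * 1024                 # bytes of eDRAM
samples = 1280 * 720 * 4                 # 720p at 4xAA

def tiles(colour_bytes_per_sample):
    footprint = samples * (colour_bytes_per_sample + 4)  # plus 32-bit Z/stencil
    return math.ceil(footprint / EDRAM)

print("INT8 colour:", tiles(4), "tiles")  # 3 tiles
print("FP16 colour:", tiles(8), "tiles")  # 5 tiles: more geometry resubmitted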

So just to reiterate (in condensed form, for cutting and pasting on various forums...):
We DON'T use ARGB8 HDR; we use a custom colour-space HDR that has the quality of FP16 HDR but takes half the space. This is a win on every platform in the world and has nothing to do with PS3 capability.
 
Nfactor said:
Reasonable or not, I wouldn't know unless there was actual confirmation from a Ninja Theory developer on here stating that the exclusion of FP16 HDR was because of optimization of this code on the alpha kit, or on the final dev kit.

That's an answer to the question I'd like.

I mean, if Deano had FP16 running on the alpha kit back at E3, at EGDC, and at TGS, and adding color precision and art assets in turn slowed down optimization on the alpha kit now, I could understand. But the final dev kits are on the doorstep here, or in their possession just now.

Is their current optimization based on final hardware, or on code optimization for the alpha PS3, which has a significant shortage of input/output bandwidth between the Cell and RSX?
It's got nothing to do with alpha or final or even the same hardware. It's simply that half-sized framebuffers are faster on all systems... Surely this is obvious if you know how GPUs work. It applies equally to Xenos, RSX, G70, G80, R520 etc. And what's it got to do with Cell? It's purely a graphics operation; we don't use the CPU for HDR, the whole thing is entirely done by the GPU.
 
Thanks for the further explanation!

Maybe it's the wrong way to talk about this, but can you say how many samples you're using for AA?

Does bandwidth consumption go up with this method as it does with "normal" MSAA?

A couple of other questions you maybe don't want to answer but would be really cool if you could :p :

Remember the outdoor scenes from the E3 trailer - how are they looking now, or at what framerate? In other words, how "generally" playable is the game now, indoor and outdoor?

Have there been any other visual improvements we can look forward to?
 
DeanoC said:
It's got nothing to do with alpha or final or even the same hardware. It's simply that half-sized framebuffers are faster on all systems... Surely this is obvious if you know how GPUs work. It applies equally to Xenos, RSX, G70, G80, R520 etc. And what's it got to do with Cell? It's purely a graphics operation; we don't use the CPU for HDR, the whole thing is entirely done by the GPU.


So does this mean that with INT8 enabled, running through the RSX, it's impossible to keep the same level of effects at 1080p as at 720p, at the same target framerate?
 
Titanio said:
Thanks for the further explanation!

Maybe it's the wrong way to talk about this, but can you say how many samples you're using for AA?

Does bandwidth consumption go up with this method as it does with "normal" MSAA?
We don't know yet, as we still have lots of things to explore. We've got a speed increase with this, so we can have more AA than before.
We are now running at 720p with a 'good' amount of AA (because we believe this will be the normal way the game is played). Just to clarify the 1080p quote that also seems to have been misunderstood: the game will support, and run well at, all the resolutions Sony say we must support, but we believe most people will be seeing it at 720p, so that's the res we mainly use in development.

Titanio said:
A couple of other questions you maybe don't want to answer but would be really cool if you could :p :

Remember the outdoor scenes from the E3 trailer - how are they looking now, or at what framerate? In other words, how "generally" playable is the game now, indoor and outdoor?

Have there been any other visual improvements we can look forward to?
Given how even the smallest comments I make tend to end up as huge forum threads around the net, you can forgive me for not making any specific comments beyond: things are going quite well, thanks :D
 
DeanoC said:
Given how even the smallest comments I make tend to end up as huge forum threads around the net, you can forgive me for not making any specific comments beyond: things are going quite well, thanks :D
Sounds like DeanoC is a master of flame baiting. :lol
 
DeanoC said:
Ironically for the X360 ******s it's even more relevant on X360... X360 sucks at FP16 HDR, particularly because of the tiling (64-bit framebuffers use twice as many tiles).
"FP16 HDR" was never on the cards for Xenos because there's no blending support (and no MSAA) on an FP16 render target.

What is annoying is how the idea of FP16 with blending and AA has been promulgated, and it's only very recently that we've had confirmation that this is not the case; the Xenos article needs to be changed:

http://www.beyond3d.com/articles/xenos/index.php?p=04

The ROP's are fully orthogonal so Multi-Sampling can operate with all pixel formats supported.

is not true:

http://www.beyond3d.com/forum/showpost.php?p=636956&postcount=179

Jawed
 