MS: "Xbox 360 More Powerful than PS3"

jvd said:
See the double standards at work
Sorry to bust into your argument guys, but I don't see how this is a double standard.

On Heavenly Sword, you've got a new method that gives you the same results as FP16, if not better, allowing you to save resources. I don't see how you can make this out to be a bad thing. And truly, it doesn't matter whether methods like this are done on the PS3 or the Xbox: if the game looks the same or better while using another method that saves resources, why not use it? You'd be dumb not to. It's not about whether the console can use a specific method or not; I think it's more that developers are trying to find better and more efficient ways to do the same thing, and in the process boosting other parts of the game with the saved resources. (I think a good example of this is 3Dc by ATI.)

I think the Microsoft deal with the "free" AA is a completely different kind of situation.
 
jvd said:
Still doesn't change the fact that it's not floating point, does it?
What's the problem with that? Does it have to be FP to be cool?
I didn't see you here Faf standing up for the eDRAM on Xenos even when it wasn't being used by developers for its intended purpose
You really don't know Faf... and btw, developers on Xenos HAVE to use the eDRAM for its intended purpose... otherwise you don't get much to see on your television :)
To me it's the double standard on this forum that I've pointed out.
There's no double standard here, since Faf is not going around praising PS3 and bashing X360 (or vice versa); in fact, if you pay attention to what he's been posting in recent weeks you'll get some clue about him and what he thinks about PS3/X360.

ciao,
Marco
 
JVD... stop ignoring what he said.

"First off: FP16 HDR runs perfectly fine; we render everything in RGB colourspace into a FP16 buffer, then run a tonemapping algo to bring it down to LDR for display on a monitor/TV. The 'normal' way of HDR. It all runs at the speed you would expect and is quite playable."
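The pipeline DeanoC describes - light everything into a wide-range buffer, then tonemap down for display - can be sketched in a few lines. The Reinhard-style operator below is my own choice for illustration, not necessarily the curve Heavenly Sword actually uses:

```python
# Rough sketch of render-to-FP16-then-tonemap, as described above.
# The Reinhard-style operator is an illustrative assumption only.

def tonemap_reinhard(hdr):
    """Compress an unbounded HDR intensity into [0, 1) for an LDR display."""
    return hdr / (1.0 + hdr)

# An HDR buffer can hold values far above 1.0 (bright lights, sky, etc.);
# tonemapping squeezes the whole range into something a TV can show.
hdr_pixels = [0.25, 1.0, 4.0, 16.0]
ldr_pixels = [tonemap_reinhard(p) for p in hdr_pixels]
print(ldr_pixels)  # every value now lies in [0, 1)
```

Any curve with that general shape would do; the point is just that the wide-range buffer holds the scene, and a final pass maps it to the display.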
 
Of course he doesn't mention whether you're going to have precision problems later on. I remember one dev talking about precision problems with FP16 and how FP10 would show errors quicker. So it stands to reason that INT8 will show errors even sooner.

This assumes that the colorspace is RGB using the same rules as the built-in HDR solutions on the cards (which they have said they aren't doing), that the math is done at INT8 width in the GPU itself, and a few other things that haven't even been touched on, such as what the colorspace actually /is/. If the proper components are separated out, lighting will be a breeze, and will actually have no problems with precision. RGB is one of the reasons why precision is such an issue, if I remember my classes correctly.

Of course he has NDAs, so he can't go into further detail. But it certainly comes across from his posts as a limitation of the PS3 that had to be worked around.

Doesn't quite sound entirely that way from this post: http://www.beyond3d.com/forum/showpost.php?p=648657&postcount=194

Sounds a little more like it was already in a pretty good stage, but that the issue of RGB lighting created a need. This need led to a solution which kills two birds with one stone: they get a bandwidth improvement to use for other areas, and they get a better colorspace for their lighting calculations.

They can be free to correct me on this though.

Still, I agree with them: the RGB colorspace is the WORST for lighting math, as RGB has no direct connection to a light source. We have developed a lot of math to approximate how a light would work in RGB space, because RGB is faster to display on a computer, even though something like HSV or YUV is a better choice when you are attempting to map something to reality. As long as the calculations themselves are done with reasonable accuracy by the shaders, does it matter what the storage format is in the framebuffer?
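For a concrete feel of why a hue/brightness split helps, here's a small sketch using Python's stdlib colorsys: dimming a light in HSV touches only the V channel, while in RGB all three channels have to move together.

```python
import colorsys  # stdlib RGB<->HSV conversions

# HSV separates "what color" (H, S) from "how bright" (V), so
# intensity math stays out of the chromatic channels entirely.
r, g, b = 0.8, 0.4, 0.2
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Halve the brightness by scaling V alone; hue and saturation are untouched.
dim_r, dim_g, dim_b = colorsys.hsv_to_rgb(h, s, v * 0.5)
print(dim_r, dim_g, dim_b)  # approximately (0.4, 0.2, 0.1): each channel halved
```

The equivalent operation expressed directly in RGB happens to be a per-channel multiply too, but anything more complex (exposure curves, light falloff) quickly becomes simpler when luminance lives in its own channel.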
 
The issue with non-RGB color spaces in the frame buffer is that standard blending modes tend to do odd things, and they aren't programmable.

Whatever solution you choose, you pretty much have to deal correctly with additive and multiplicative blending.
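The pitfall ERP points at can be shown in a few lines: if the framebuffer stores a nonlinear encoding of intensity (log-luminance here, purely as an illustration), the hardware's additive blend of the stored values no longer corresponds to adding light.

```python
import math

# Hypothetical framebuffer channel that stores log(luminance)
# instead of luminance - purely illustrative.
def encode(lum):
    return math.log(lum)

def decode(stored):
    return math.exp(stored)

a, b = 2.0, 3.0
correct = a + b  # additive blending should sum the actual intensities: 5.0

# What fixed-function additive blending would do: add the *stored* values.
naive = decode(encode(a) + encode(b))
print(correct, naive)  # naive decodes to ~a * b (6.0), not a + b (5.0)
```

Adding logarithms is multiplying the underlying values, which is exactly why nAo notes later that some encodings get multiplicative blending "for free" while losing additive blending.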
 
jvd said:
I've never seen him say anything about the lack of something being used as a good thing for the Xbox 360, or a workaround being used as a good thing for the Xbox 360.
What you still don't get is that we're not using a 32-bit buffer due to the lack of something; we're using it because it's good, and one would use it on other platforms as well, X360 included, as its quality is way better than FP10.

DeanoC explained exactly what the trade-offs are for this, and it's purely playing to the PS3's strengths (shader power) and away from its weakness (bandwidth).
I would remind you that on Xenos you could have the same problem, since FP16 doesn't come for free AFAIK.
Using INT8 is going to show precision problems much sooner than FP10 or FP16, or even FP32.
As I already explained, that's not the case.
He does say it has some issues you'd expect, and I'm going to assume they're precision issues. Unless he cares to correct me.
Wrong assumption.
 
jvd said:
Still doesn't change the fact that it's not floating point, does it?
You're like someone arguing that S3TC == CLUT because they are both 4-bit.
It's a packed storage format - it doesn't really fit the definition of a standard datatype - it doesn't need to.

I didn't see you here Faf standing up for the eDRAM on Xenos even when it wasn't being used by developers for its intended purpose
You really can't go more than one sentence without trying to turn the dialogue back into platform wars, can you?
Unless you missed it - this method isn't exclusive to PS3.

I've never seen him say anything about the lack of something being used as a good thing for the Xbox 360
My argument here was that the method BEING used is a good thing - you're the only one at the moment trying to pass this approach off as something NOT being used.
 
This assumes that the colorspace is RGB using the same rules as the built-in HDR solutions on the cards (which they have said they aren't doing), that the math is done at INT8 width in the GPU itself, and a few other things that haven't even been touched on, such as what the colorspace actually /is/. If the proper components are separated out, lighting will be a breeze, and will actually have no problems with precision. RGB is one of the reasons why precision is such an issue, if I remember my classes correctly.

You will still have only INT8 precision, nothing greater. So you will still get errors even if it was previously stored somewhere else at higher precision.

He says "Well to be fair its not the holy grail, it has some issues as you'd expect", and you also convert the color space once when moving between the two formats.

Sounds a little more like it was already in a pretty good stage, but that the issue of RGB lighting created a need. This need led to a solution which kills two birds with one stone: they get a bandwidth improvement to use for other areas, and they get a better colorspace for their lighting calculations.
Sounds to me from his later posts that this was done purely for bandwidth reasons, as he himself notes the trade-off (shader for bandwidth).

Still, I agree with them: the RGB colorspace is the WORST for lighting math, as RGB has no direct connection to a light source. We have developed a lot of math to approximate how a light would work in RGB space, because RGB is faster to display on a computer, even though something like HSV or YUV is a better choice when you are attempting to map something to reality. As long as the calculations themselves are done with reasonable accuracy by the shaders, does it matter what the storage format is in the framebuffer?

But are they ?

If this was such a simple thing, why hasn't it come up before? As he himself said, this is not the holy grail.
 
ERP said:
Whatever solution you choose, you pretty much have to deal correctly with additive and multiplicative blending.
Finally someone making a good point :)
 
ERP said:
The issue with non-RGB color spaces in the frame buffer is that standard blending modes tend to do odd things, and they aren't programmable.

Whatever solution you choose, you pretty much have to deal correctly with additive and multiplicative blending.

This blending issue I think was mentioned on page 8 by DeanoC.
 
nAo said:
There's no double standard here, since Faf is not going around praising PS3 and bashing X360 (or vice versa); in fact, if you pay attention to what he's been posting in recent weeks you'll get some clue about him and what he thinks about PS3/X360.

So Faf, what DO you think about PS3/360? :)
 
jvd said:
You will still have only INT8 precision, nothing greater. So you will still get errors even if it was previously stored somewhere else at higher precision.

Precision beyond 8 bits per channel is overrated in a lot of fields, which have been using a 24-bit packed format and getting accurate results. The problem is that RGB was chosen as the pixel format early on for a couple of reasons, and getting a switch to a more capable format now is pretty hard. HDR is an answer to RGB's weaknesses in the lighting arena while keeping the RGB colorspace.
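To make the precision point on both sides concrete, here's a toy sketch of what one 8-bit channel can and can't do: a single store is accurate to half a quantization step, but repeated read-modify-write passes let rounding error accumulate. The pass count and gain below are arbitrary illustrative numbers.

```python
# One channel of an INT8 framebuffer: 256 levels over [0, 1].
def quantize8(x):
    """Round a [0, 1] value to the nearest of the 256 representable levels."""
    return round(x * 255) / 255

# A single store is never off by more than half a quantization step...
step = 1 / 255
assert abs(quantize8(0.3) - 0.3) <= step / 2

# ...but chaining read-modify-write passes lets the rounding accumulate.
value = 0.5
for _ in range(10):                   # ten small brightening passes (arbitrary)
    value = quantize8(value * 1.03)

exact = 0.5 * 1.03 ** 10
print(value, exact)  # compare the chained quantized result with the exact one
```

This is the crux of both arguments: whether 8 bits per component is "enough" depends entirely on how many lossy round-trips the values make through the buffer, which in turn depends on the colorspace and where the math is done.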

He says "Well to be fair its not the holy grail, it has some issues as you'd expect", and you also convert the color space once when moving between the two formats.

Sounds to me from his later posts that this was done purely for bandwidth reasons, as he himself notes the trade-off (shader for bandwidth).

It is a tradeoff, but does he state that they did it because they needed to make the tradeoff, or because they felt it was a good one to make for gains they could make elsewhere?

But are they ?

If this was such a simple thing, why hasn't it come up before? As he himself said, this is not the holy grail.

It isn't a holy grail when everything else is RGB and you are attempting to use a format more suited to accurate lighting. To give you an idea, the effects that HDR can bring aren't entirely new; a lot of the framework required for them is available without growing the data space. HSV is a great example: if you have tinkered with 3D modeling apps, at least some of the ones I have worked with use HSV for the colorspace rather than RGB. To do highly accurate lighting, they still only need a 32-bit space per pixel: HSV (8 bits each) plus an alpha channel. Remember, 24 bits is enough to describe more colors than the human eye can see, and the trick is to keep the precision of each channel at least that high.

When you enter the realm of RGB lighting, though, I agree that you need more precision, as you are heavily manipulating a colorspace to do something it doesn't normally represent. The math there isn't easy. But we need to know what the colorspace actually is before anyone can claim arbitrarily that more precision is needed.
 
nAo said:
the color space is a subset of CIE-Luv color space with some tweaking

Sweet. :)

While I can see it giving definite benefits while running within CIE Luv, you still run into the downside of RGB before and after, correct? That the colors entering the colorspace are only a slice of the colorspace, and as you leave the colorspace, things kinda get munged going back to RGB?
 
nAo said:
the color space is a subset of CIE-Luv color space with some tweaking

Don't you get odd results with linear interpolation of disparate colors in that space? Notably lerping through a third color?

I'd assume you use 16 bits for L and 8 for U and V, but that doesn't work at all with additive or multiplicative blending. Although I could imagine authoring content that could work.
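For illustration only, here's ERP's guessed layout as a toy pack/unpack: 16 bits of luminance and 8 bits each of U and V in one 32-bit word. This is his assumption, not a confirmed description of the actual format.

```python
# Toy 32-bit pixel with ERP's guessed layout: L in the top 16 bits,
# U and V in 8 bits each below it. Purely illustrative.

def pack_luv(l16, u8, v8):
    """Pack 16-bit luminance and 8-bit u, v into one 32-bit word."""
    assert 0 <= l16 < (1 << 16) and 0 <= u8 < 256 and 0 <= v8 < 256
    return (l16 << 16) | (u8 << 8) | v8

def unpack_luv(pixel):
    """Recover (L, u, v) from the packed word."""
    return (pixel >> 16) & 0xFFFF, (pixel >> 8) & 0xFF, pixel & 0xFF

p = pack_luv(40000, 120, 200)
print(unpack_luv(p))  # round-trips to (40000, 120, 200)
```

The packing itself is trivial; ERP's point is that once the components are packed like this, fixed-function blend units operating on the raw 8-bit lanes no longer do anything meaningful.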
 
ERP said:
Don't you get odd results with linear interpolation of disparate colors in that space? Notably lerping through a third color?

I'd assume you use 16 bits for L and 8 for U and V, but that doesn't work at all with additive or multiplicative blending. Although I could imagine authoring content that could work.

I'd guess there wouldn't be that many problems, as the interpolators are likely in the shader running at FP32 - only the final pixel written to the screen would need to be converted.
Alpha blending may show some visual inconsistencies though - but they shouldn't be noticeable for smoke 'n stuff.
 
Sorry for the discontinuity of the thread, I've had to remove some useless noise that came in here.

Be civil when discussing; it's becoming tiring to delete all those messages, and the thread might just end up locked to avoid further trouble.

Not sure how relevant this is, but it sounds interesting :
http://www.anyhere.com/gward/pixformat/cieluvf1.html
 
ERP said:
Don't you get odd results with linear interpolation of disparate colors in that space? Notably lerping through a third color?

I'd assume you use 16 bits for L and 8 for U and V, but that doesn't work at all with additive or multiplicative blending. Although I could imagine authoring content that could work.
If one uses a Luv-like color space there's no hope of correct blending in the general case, even with pre-authored content, due to the fact that CIE chromaticity is a (non-trivial) function of luminance, so the answer to your question is that additive and multiplicative blending modes don't work out of the box.
In my 'quest' for a good FP16 replacement that could also be used with hw 'fixed function' blending (c'mon NVIDIA and ATI... give me non-cacheable texture fetches in a fragment shader ;) ) I evaluated different color spaces, and I found only one tricky color format (RGBE) that can support hw multiplicative blending as-is, but additive blending is another beast.
I also tried to make some use of imaginary exponentials, hyperbolic trigonometric functions and quaternions to represent colors... a complete failure, lol :)
It's worth noticing that all the color spaces one can derive from RGB via a linear transform preserve only the additive and lerp blending modes, not multiplicative... but unfortunately all the interesting linear transforms one can use also map to negative color components, and we can't store a negative value as-is in an INT8 buffer.
We can fix that by applying an affine transform, but then all the preserved blending modes except one (lerp!) are not preserved anymore.
That's why a tweaked fixed-range YUV color representation can support hw lerp blending out of the box.
In the end, if you want a correct solution to the blending problem, it is to reuse the frame buffer as a texture and blend in a pixel shader.
Lerp blending on CIE Luv obviously can't be correct, but in the end, if you try to lerp between CIE Luv colors, you get pleasant results most of the time.
Since bilinear filtering is a two-pass lerp blend, tweaking the color representation to make it work well (problems arise with carry propagation between the most and least significant bits of luminance) gives you very good results! (I store the logarithm of luminance because I must support a very high range, so linear blending between logarithms doesn't give a linear interpolation of luminance; on a small range with a fixed-point luminance representation it should be possible to obtain even better results.)
At this time I've only used these funky color buffers for render targets, and I use them as textures while resolving a multisampled or supersampled buffer with simple bilinear filtering, gently provided by the TMUs :)
I haven't tried it yet, but this format would probably do a good job representing HDR textures too; that's why I'm storing the most significant bits of luminance in the alpha channel, as it would compress nicely with DXT5 (8-bit-per-pixel HDR textures that don't require extra filtering in a shader would be very nice indeed ;) )
At some point in the future I would like to do some more work on the 24-bit version, because I think it can be improved to the point of being usable.

ciao,
Marco
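nAo's remark that lerping stored log-luminance doesn't linearly interpolate luminance can be checked in a couple of lines (the endpoint values are arbitrary):

```python
import math

def lerp(a, b, t):
    """Standard linear interpolation between a and b."""
    return a + (b - a) * t

# Two luminance endpoints spanning a wide range (arbitrary values).
l0, l1 = 1.0, 100.0

linear_mid = lerp(l0, l1, 0.5)  # midpoint in linear luminance: 50.5

# Lerping the stored logarithms instead interpolates geometrically.
log_mid = math.exp(lerp(math.log(l0), math.log(l1), 0.5))
print(linear_mid, log_mid)  # log_mid is the geometric mean, ~10.0
```

The geometric interpolation is "wrong" in radiometric terms, but since perceived brightness is roughly logarithmic it often looks fine, which matches nAo's observation that the results are pleasant despite being incorrect.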
 