TheChefO said:
Screenspace vector-based effect? Would you be so kind as to explain, or point me in a direction to find info on this?
I can do both. I'm cool just like that.
Well, I can explain it simply, given that the papers and presentations linked in this post go into the gritty details already.
MB is obtained in real time by rendering the frame you want to see motion blurred first, and then an offset buffer, or velocity buffer, second. This offset buffer is calculated on a per-vertex basis (according to the movement direction) and then stored in screenspace (as RGB, like a normal map, if you will). Finally, you apply a blur to the pixels of your original frame based on the vector (movement) information stored in the offset buffer.
And as they say, a picture is worth a thousand words, so here are some pictures to illustrate that:
You render the frame you want to see motion blurred, in this case a tunnel:
You calculate the velocity information and then buffer it, storing it as screenspace RGB information, which gives you something like this (without the pointing arrows, of course):
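If it helps to see it in code, here's a minimal sketch of that velocity pass in plain C standing in for shader code (the names, like encode_velocity, are mine, not something from the papers below): you transform each vertex with both the current and the previous frame's matrices, and the screenspace difference, remapped from [-1,1] to [0,1] exactly like a normal map, is what you write out:

```c
#include <stdint.h>

typedef struct { float x, y; } vec2;

/* Clamp a float to [0,1]. */
static float saturate(float v) { return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); }

/* Velocity = where the vertex is now minus where it was last frame,
 * both positions already projected to screenspace. */
static vec2 screen_velocity(vec2 pos_now, vec2 pos_prev)
{
    vec2 v = { pos_now.x - pos_prev.x, pos_now.y - pos_prev.y };
    return v;
}

/* Pack a screenspace velocity (roughly [-1,1] per axis) into an RGB
 * pixel the way a normal map packs a direction: remap [-1,1] to [0,1],
 * then quantize to 8 bits per channel. */
static void encode_velocity(vec2 vel, uint8_t rgb[3])
{
    rgb[0] = (uint8_t)(saturate(vel.x * 0.5f + 0.5f) * 255.0f); /* R: horizontal motion */
    rgb[1] = (uint8_t)(saturate(vel.y * 0.5f + 0.5f) * 255.0f); /* G: vertical motion   */
    rgb[2] = 128;                                               /* B: unused, mid-grey  */
}
```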
And finally you apply your blur effect (akin to the ones you have in Photoshop, matrix convolution and the like) to the frame. The amount of blur applied per pixel, relative to the other pixels of the frame, is determined by your velocity buffer:
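And the blur pass itself, with the same disclaimer: a hedged C sketch reusing the vec2 type from above, where sample_frame and sample_velocity are assumed stand-ins for texture fetches, not real API calls. Each output pixel averages a few taps of the original frame stepped along its own velocity vector, so the bigger the velocity, the longer the smear:

```c
#define NUM_SAMPLES 8

typedef struct { float r, g, b; } color;

/* Assumed helpers standing in for texture fetches: bilinear reads of
 * the rendered frame and of the velocity buffer, addressed by [0,1] UVs. */
color sample_frame(float u, float v);
vec2  sample_velocity(float u, float v);

/* One output pixel: average NUM_SAMPLES taps of the original frame,
 * stepped backwards along this pixel's own velocity. */
color motion_blur_pixel(float u, float v)
{
    vec2  vel = sample_velocity(u, v);
    color acc = { 0.0f, 0.0f, 0.0f };

    for (int i = 0; i < NUM_SAMPLES; ++i) {
        float t = (float)i / (float)(NUM_SAMPLES - 1);       /* 0 .. 1 along the vector */
        color c = sample_frame(u - vel.x * t, v - vel.y * t);
        acc.r += c.r; acc.g += c.g; acc.b += c.b;
    }
    acc.r /= NUM_SAMPLES; acc.g /= NUM_SAMPLES; acc.b /= NUM_SAMPLES;
    return acc;
}
```

In a real implementation you'd tune the sample count against the maximum blur length, but the principle is just that loop.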
Pictures shamelessly stolen from an ATI presentation and uploaded to xs.to.
I hope this explanation was of some use to you, or to anyone else.
And here are the promised paper and presentations:
An academic paper on a real-time implementation:
http://www.dtc.umn.edu/~ashesh/our_papers/motion-blur.pdf
A sample article from the excellent ShaderX book series, covering an implementation by the ATI guys:
http://www.ati.com/developer/ShaderX2_MotionBlurUsingGeometryAndShadingDistortion.pdf
An ATI presentation that sums up the technique in simpler terms:
http://www.ati.com/developer/gdc/GDC2005_BringingHollywoodToRealtime.pdf
You'll find other sources on the subject in the reference sections of these papers.
spdistro said:
Valve, who are clearly superior developers to you guys, no pun intended, can achieve the same effect with AA on 360.
How charming and baseless at the same time.
May I suggest you not be rude to folks just because you can't address the correct technical points they make?
spdistro said:
I believe when you have a unified architecture with unused features like MEMEXPORT, and an API which is in between DX9 and DX10 and still in its infancy compared to NVIDIA's already established GPU-related API, it would be easier for devs on NVIDIA hardware to show their strengths early, and the ones with ATI's Xenos their strengths later.
That has strictly nothing to do with achieving full-screen anti-aliasing - on a 720p render target - comparable in quality to SSAA and MSAA on C1/Xenos without having recourse to tiling. Nothing.
Actually, it barely makes any sense.
spdistro said:
I believe you are wrong on this one.
Where is your technical explanation behind this belief of yours? Is it just blind faith that pushes you to believe Valve could obtain AA as good as RGMSAA while having a smaller performance hit than simply tiling the framebuffer into three parts?
spdistro said:
Just like you guys developed a system of not wasting system resources and got a form of AA with faked HDR at the same time, keeping the visuals intact.
Firstly, the form of AA Heavenly Sword uses is hardware-supported and called MSAA. Secondly, what is your definition of High Dynamic Range imaging and rendering, exactly?
Because as far as I know, NAO32 has the capability to store image information above and below [0, 1.0], just like other HDR implementations do.
You can implement HDR in real time in different manners; using an FP16 RGBA render target is just one of them. FP10 render targets and NAO32 (using a different colour space) are just different implementations, not "fake" and/or incorrect ones. Otherwise you might just as well consider any format but one using 4096 bits per pixel, or something, to be a "fake" one.
There are only differences in maximum precision and implementation limitations between the various HDRR formats available. Nothing to do with true or fake.
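To make that concrete, here's roughly what a LogLuv-style packing in the spirit of NAO32 looks like, as a C sketch of the publicly discussed variant of the idea (the matrix and the log2 split follow the versions circulating among developers; this is not Ninja Theory's actual code):

```c
#include <math.h>

typedef struct { float x, y, z, w; } vec4; /* one 8888 pixel, each channel in [0,1] */

/* RGB -> (X', Y, Z') transform used by the LogLuv encoding. */
static const float M[3][3] = {
    { 0.2209f, 0.3390f, 0.4184f },
    { 0.1138f, 0.6780f, 0.7319f },
    { 0.0102f, 0.1130f, 0.2969f },
};

static vec4 logluv_encode(float r, float g, float b)
{
    /* Row vector times matrix, as in the shader versions of this. */
    float Xp = r * M[0][0] + g * M[1][0] + b * M[2][0];
    float Y  = r * M[0][1] + g * M[1][1] + b * M[2][1];
    float Zp = r * M[0][2] + g * M[1][2] + b * M[2][2];

    if (Xp < 1e-6f) Xp = 1e-6f;   /* avoid division by zero and log(0) */
    if (Y  < 1e-6f) Y  = 1e-6f;
    if (Zp < 1e-6f) Zp = 1e-6f;

    vec4 px;
    px.x = Xp / Zp;               /* chromaticity, stays in [0,1] */
    px.y = Y  / Zp;

    /* Log2 luminance biased into roughly [0,255]: Y can be far above
     * 1.0 (or far below) and still fit. Split into an integer part and
     * a fractional part, one 8-bit channel each. */
    float Le = 2.0f * log2f(Y) + 127.0f;
    px.z = floorf(Le) / 255.0f;   /* high bits of the luminance */
    px.w = Le - floorf(Le);       /* low bits of the luminance  */
    return px;
}
```

The chromaticity channels stay in [0,1], but the 16 bits of logarithmic luminance cover an enormous range, which is exactly why calling such a format "fake" HDR misses the point.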
spdistro said:
That's why I believe they will use MEMEXPORT. You can't use tiling when using MEMEXPORT, and you can't use MEMEXPORT when using tiling, so I believe, taking into account the concept and features of MEMEXPORT, Valve could use this implementation to solve the AA problem without using AA.
So, simply put, you like the term MEMEXPORT as a buzzword and you're trying to shoehorn it into the discussion, no matter how little relevance it has?