_xxx_ said:
geo said: when he must know that ATI has it on tap for R520
Now where does this come from?
Jawed said:
Geo, I'm curious, how come you're so confident that ATI has floating point blending and AA on R520? What have I missed?...
I can sort of imagine FP10 with AA, but FP16 with AA?
I suppose it could be a way of using all that spare memory bandwidth in the new architecture...
Jawed
Yeah, WTF is he on about??? Maybe he's talking about bloom?

Xmas said:
Sometimes I get this feeling that Mr Kirk completely switched to the PR department. That, or he really doesn't know what he's talking about.
According to NVIDIA it's fine to use FP16 for colour calculations, so what's the difference between FP16 in SM2 and SM3?

"It seems like an obvious thing to say, but the quality of HDR done in Shader Model 2 is less than in 3. The human visual system has quite a bit of dynamic range. When you build a picture through components, as graphics engines do, you get round-off errors as you create each object. Using partial precision in SM2 exacerbates that problem and is less pleasing to the eye."
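The round-off point Kirk is making is easy to demonstrate: when you accumulate many small contributions at half precision, the running total eventually stalls once its representable spacing exceeds the increment. A minimal sketch (not any shader's actual code), using Python's stdlib `struct` `'e'` format to emulate FP16 rounding after each blend:

```python
import struct

def fp16(x):
    """Round a Python float to the nearest IEEE binary16 (half-precision) value."""
    return struct.unpack('e', struct.pack('e', x))[0]

step = 2.0 ** -10          # a small per-object contribution, exactly representable
total_fp16 = 0.0
total_fp32 = 0.0
for _ in range(4096):
    total_fp16 = fp16(total_fp16 + step)  # blend rounded to half precision each time
    total_fp32 += step                    # full precision for reference

print(total_fp16)  # stalls at 2.0: above 2.0 the FP16 spacing is 2x the step
print(total_fp32)  # 4.0
```

The half-precision total stops growing at 2.0 because each further addition rounds back down, while the full-precision total reaches the true 4.0.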
Xmas said:
Sometimes I get this feeling that Mr Kirk completely switched to the PR department. That, or he really doesn't know what he's talking about.

"But with HDR, you render individual components from a scene and then composite them into a final buffer. It's more like the way films work, where objects on the screen are rendered separately and then composited together. Because they're rendered separately, it's hard to apply FSAA (note the full-screen prefix, not composited-image AA! -Ed). So traditional AA doesn't make sense here."
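For what it's worth, there is a genuine HDR-and-AA wrinkle that doesn't need the compositing argument: it matters whether multisamples are resolved (averaged) before or after tone mapping, because averaging in linear HDR space and then compressing gives a different edge colour than compressing each sample first. A toy sketch, with a simple Reinhard-style operator assumed purely for illustration:

```python
# Two MSAA samples straddling an edge: one bright HDR sample, one dim one.
samples = [16.0, 0.1]

def tonemap(x):
    """Simple Reinhard-style compression of an HDR value into [0, 1)."""
    return x / (1.0 + x)

# Hardware-style resolve first, tone map the averaged HDR value afterwards:
resolve_then_map = tonemap(sum(samples) / len(samples))
# Tone map each sample first, then average the display-space values:
map_then_resolve = sum(tonemap(s) for s in samples) / len(samples)

print(resolve_then_map)  # close to fully bright: the HDR sample dominates the average
print(map_then_resolve)  # a mid-grey edge, which is usually what you actually want
```

The averaged-then-mapped pixel comes out near white, while mapping each sample first gives a mid-grey, so a naive HDR resolve can visibly undo the edge smoothing AA was supposed to provide.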
That's right, because as we all know, it's impossible for a GPU with a 128-bit memory bus to beat the crap out of one with a 256-bit memory bus. Notice that this is Half-Life 2 we're looking at, and not Doom 3.

Last time it was how the 128-bit bus was still just peachy...
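For reference, the raw gap being joked about is just bus width times effective transfer rate. A rough sketch of the arithmetic, with the retail memory clocks assumed from memory of the era's specs (treat the numbers as approximate):

```python
def bandwidth_gb_s(bus_bits, effective_mt_s):
    """Peak memory bandwidth in GB/s: bytes per transfer times transfers per second."""
    return bus_bits / 8 * effective_mt_s * 1e6 / 1e9

# Assumed clocks: 6600 GT = 500 MHz GDDR3 (1000 MT/s effective) on a 128-bit bus;
# 9800 Pro = 340 MHz DDR (680 MT/s effective) on a 256-bit bus.
print(bandwidth_gb_s(128, 1000))  # 16.0
print(bandwidth_gb_s(256, 680))   # 21.76
```

So the 256-bit card still has more raw bandwidth on paper; the point of the sarcasm is that bandwidth alone doesn't decide the benchmark.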
geo said:
Do I have the wrong end of the stick then? I thot we'd moved this to the "consensus" column.
Not too bad. On a benchmark that is known to favor ATI GPUs, the 6600GT (with a 128-bit memory bus) is within 15% of the 9800 Pro.

And vice-versa:
trinibwoy said:Still sticking up for the 128-bit crowd pretty well - even with AA
http://www.xbitlabs.com/articles/video/display/2004-27gpu2_8.html
And what Bob said too....