"in all fairness the vast majority of GPU architectures out there are VLIW, G8x is an exception"

Well, why use VLIW then? I see where that could be good for GPGPU, but games? Does it make it very complex to optimize for...
Because GPUs have been that way for donkey's years?
"FarCry uses FP16 blending, not sure about FP16 filtering. R5xx doesn't support FP16 filtering, but it runs FarCry with HDR enabled without any special patch (which would e.g. do FP16 filtering through a shader). I'm not 100% sure, but every article stated that FP16 blending was required - not a single mention of FP16 filtering."

Far Cry does take advantage of FP16 filtering if it's available. When it's not available, it does the filtering in the shaders.
http://www.anandtech.com/video/showdoc.aspx?i=3029&p=7
This article says that even the HD 2900 XT doesn't have the hardware to do well in DirectX 10, and also that geometry shaders don't perform well on the GPU.
True?
Well, I thought that R600 was a lot more powerful than G80 at GS...
I can hardly call any of those "patched" game engines even a near-DX10 title. Well, maybe CoJ is a notch closer to the heavens in this respect. And it is sad to see a site like Anand's draw such a fundamental conclusion on that basis.
It's a lot more powerful at amplification, because what the G80 is doing ATM in that scenario is quite... umm... horrible. Whether that is merely a driver bug or a feature of the G80 is unknown ATM. There are more uses for the GS than amplification. And one must also factor in how much GS work is in a given scene... because ATi can be 1000x faster at it, but in a scene where only 1% of the workload is GS work, and the remaining 99% is slower on the R600, the advantage is useless.
ATI's chips, including R4x0, have a notable advantage in Oblivion that I've always been curious about. NV40 really bit the dust hard in that game, for some reason. G70 wasn't hugely better, either.
But real DX10 games will use the GS intensively, right? So doesn't that mean R600 could keep up with those games a lot better than G80?
PS: I know that real DX10 games will take a lot of time to come.
GS will obviously get used within DX10, but I'd wager that geometry isn't going to equate to any more than 10% of the full GPU load for a given scene. So the point still stands -- you can be 1000% faster at a certain task, but if that task is only a small portion of the entire workload, it's not going to get you very far.
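The arithmetic behind this point is essentially Amdahl's law. A quick sketch (the 10% / 10x / 1000x figures are just the illustrative numbers from the thread, not measurements):

```python
# Amdahl's-law style estimate: overall frame speedup when only a
# fraction of the work is GS and the GS alone is much faster.

def overall_speedup(gs_fraction, gs_speedup, rest_speedup=1.0):
    """Time-weighted speedup for a frame split into GS work and the rest."""
    return 1.0 / (gs_fraction / gs_speedup + (1.0 - gs_fraction) / rest_speedup)

# 10% of the frame is GS work, and one chip is 10x faster at it:
print(round(overall_speedup(0.10, 10.0), 2))    # -> 1.1

# Even a 1000x GS advantage barely moves the needle on the whole frame:
print(round(overall_speedup(0.10, 1000.0), 2))  # -> 1.11
```

So a huge GS-only delta translates to roughly a 10% faster frame at best, which is the "it's not going to get you very far" point above.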
The R600 spanks the G80 seriously in the geometry shader. The GS is essentially useless on G80 as performance drops exponentially as you increase workload. We've measured R600 to be up to 50 times faster in some cases.
50x is the worst-case scenario for G80. The G80 is sort of saved by the upper bound on GS output: if you could output more than 1024 scalars, chances are the gap would be even bigger. Essentially, the deal is that if you do things very DX9-style, the G80 can keep up, but for DX10-style rendering - like if you output more than just the input primitive - performance starts to drop off at an amazing rate. You don't need much amplification for it to become really bad. In real-world cases it might not be 50x, but you'll probably see much larger deltas than you're used to. Like, you'd see maybe a 2-3x gap in dynamic branching in the previous generation in the best case, but for the GS you'll probably find real-world cases where the delta is more like 5-10x.
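For reference, that 1024-scalar cap is the D3D10 limit on how much one GS invocation may emit, so the achievable amplification depends on how fat the output vertex is. A small sketch (the attribute layouts are made-up examples, not anything from the thread):

```python
# D3D10 caps geometry shader output at 1024 scalar components per
# invocation, so max amplification = 1024 / scalars-per-vertex.

MAX_GS_OUTPUT_SCALARS = 1024

def max_output_vertices(scalars_per_vertex):
    """How many vertices one GS invocation can emit for a given layout."""
    return MAX_GS_OUTPUT_SCALARS // scalars_per_vertex

# Hypothetical slim vertex: position (4) + normal (3) + texcoord (2) = 9:
print(max_output_vertices(9))   # -> 113

# Hypothetical fat vertex with 32 scalars of attributes:
print(max_output_vertices(32))  # -> 32
```

Which is why "outputting more than just the input primitive" hits the limit (and, on G80, the performance cliff) so quickly.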
Yes, and I'm not contesting what Humus wrote. The HD 2900's geometry shading capabilities are far superior to the G8x line's.

I said that based on what Humus said here:
Even if the major pack of developers is willing to exploit the full GS potential at some point, the fact that one of the two major IHVs' products has a rather "questionable" performing DX10 path [GS], and that this particular IHV happens to have a significant market presence leadership, means it won't be odd if next-to-no-one bets on heavy usage of GS amplification, out of plain marketing considerations...
"You are a bad prophet, aren't you?"

I mean, if you write a ton of GS stuff into your app and 85% of the hardware it runs on will run it at unacceptable speeds, then why even bother?