[pic]
Here's the full picture. The sections they use are kind of misleading; heck, in two areas ATI is actually faster (using old drivers, and doubtfully the updated Unigine engine).
Yeah, a total of 30 seconds out of 260 secs.
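Quick back-of-envelope on that, just using the numbers quoted here (nothing official):

```python
# Rough arithmetic: how much of the full run is the highlighted section?
section_secs = 30      # the stretch they chose to show
full_run_secs = 260    # length of the whole benchmark run, as quoted above
print(f"{section_secs / full_run_secs:.1%} of the run")  # ~11.5%
```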
They don't want to spoil their launch. AMD has every option: faster cards, a lower price, or both. But without information about nVidia's line-up it's a bit difficult for them. AMD did the same with the RV770.
A price drop can actually be done quickly.
The 5870 at $400 can come down to $350, the 5850 can go down to $250, and the 5830 can hit $200, if ATI feels the need to drop prices.
I'm sure ATI already has, or soon will have, access to a GTX 480 or 470, and is messing with clocks on the thing and figuring out the best way to move forward.
A hardware response, unless already planned, is much harder than a price drop. A price drop can be done in 1-2 days.
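Just to put those cuts in perspective, a quick sketch using the only pair of prices actually given above (the 5870 going from $400 to $350); the helper is purely illustrative, not anything from ATI:

```python
# Hypothetical illustration of how big such a cut is in relative terms.
def cut_percent(old_price: float, new_price: float) -> float:
    """Price cut as a percentage of the old price."""
    return (old_price - new_price) / old_price * 100

print(f"5870: $400 -> $350 is a {cut_percent(400, 350):.1f}% cut")  # 12.5%
```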
Of course, the point being that unless a game uses EXTREME amounts of tessellation (remember all the whines from the greenies who said the Unigine engine didn't reflect real games and was only an extreme representation), Cypress looks like it's able to keep pace with, or possibly pass, the six-month-late, 2x-as-big Fermi. Then again, wasn't the 480 supposed to be challenging the 5900?
Not only that, but they seem to be operating under the assumption that real games will produce anywhere near the sort of workload that Unigine does in that segment. If they aren't able to do better than that in regular old DX9/10, then I have to say the new caches, higher bandwidth and doubled shader count haven't been put to good use at all. Could the entire architecture be fatally bottlenecked by the scarce texturing resources?
Not exactly; let's look at some of Kyle's (HardOCP) graphs.
Oops, wrong link.
Just a sec, let me find it again.
Why is it that, even though the G80 has pretty much more of everything over the X1900XT, it still doesn't beat it all the time in this timed game walkthrough?
16X TR SSAA vs. 2X ADAA?
That's why it was the wrong link; I had two of them open and closed the wrong one.
Your question remains. This is not apples to apples either.
2X MSAA / 16X AF or 4X TR SSAA @ 2048x1536
vs.
no AA / 4X HQ AF @ 2048x1536
And you wonder why the X1950XT wins in parts of the game?
I think the point is that for that particular benchmark, nVidia has a vastly better minimum framerate: when ATI's framerate plummets, nVidia's keeps on chugging. Now, whether or not the benchmark is honest is a separate issue, but I think that this benchmark does show nVidia's new hardware in an extremely positive light.
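To illustrate what that means in practice, here's a tiny made-up example (the frametimes below are invented purely to show the idea, not data from any review): a card can have the higher average and still feel worse if its minimums tank.

```python
# Toy comparison: similar averages, very different minimum framerates.
# Frametimes are in milliseconds and entirely hypothetical.
def fps_stats(frametimes_ms):
    fps = [1000.0 / ft for ft in frametimes_ms]
    return sum(fps) / len(fps), min(fps)

card_a = [16, 17, 16, 60, 55, 16, 17]   # plummets in the heavy section
card_b = [20, 21, 20, 25, 24, 20, 21]   # lower peaks, but steady

for name, trace in (("card A", card_a), ("card B", card_b)):
    avg, low = fps_stats(trace)
    print(f"{name}: avg {avg:.0f} fps, min {low:.0f} fps")
```

Card A comes out slightly ahead on average but drops to roughly a third of card B's minimum, which is exactly the kind of gap that benchmark segment highlights.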
At least AMD now needs to work out something with the tessellation for the 6k series to beat them in that useless demo. The worst part will be if the GTX 480 barely wins by 10-15% in real games; then the 6k Radeons won't need much of a speed bump.
A third player in this business wouldn't hurt these days.
Did Gigabyte already give away their secret to that? It looks more and more like nVidia wants a real kick-ass DX11 card. I'm really interested in their compute speed.
So, "DX11 doesn't matter"?
To me, what we saw in all those white papers was actually the GTX 470 benchmarks. If we look at the Unigine benchmarks in the white paper, it showed 1.5 to 1.8 times faster, but in this latest one it looks to be 1.5 to 2.0 times faster, possibly a little more, in those same 60 seconds.