Actually, I think you're confusing this with reviews where the card was included later on and where games like NfS Carbon or Gothic 3 were benchmarked. In those games, for example, Nvidia's tightly integrated texturing showed its dark side, so that made the X1800 look way better in comparison.

No, he's probably thinking of the supposed memory controller tweaks that brought big wins in Doom 3 and a few other OpenGL titles, IIRC.

I think Silent_Buddha is right on this point. It wasn't only the OpenGL driver, which boosted 4x MSAA performance by tens of percent; even non-MSAA performance improved slightly. I also remember AoE3 and one other game that scored desperately but whose performance was boosted really significantly. Looking at my notes, every second Catalyst release at that time brought some (measurable) performance improvement for the X1 series.
Perhaps he means that a combination of command processing and juggling the two geometry engines (called "graphics engines" for some reason I can't fathom) is the root of a lot of the inefficiency. The per-frame time really is the killer here. We're waiting to hear from Dave whether the command processor is what he meant by the front end, but that would be a real shame.
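To make the "per-frame time is the killer" point concrete, here is a minimal sketch of how a fixed per-frame front-end cost hurts far more at high frame rates than at low ones. The 1 ms overhead and the render times are made-up numbers for illustration, not measured Cayman figures:

```python
# Toy model: frame_time = fixed per-frame front-end cost + actual rendering work.
# The numbers below are illustrative assumptions, not measured figures.

def fps_with_overhead(render_ms: float, overhead_ms: float) -> float:
    """Frames per second when a fixed per-frame cost is added to the render time."""
    return 1000.0 / (render_ms + overhead_ms)

for render_ms in (33.3, 16.7, 8.3, 4.2):  # ~30, 60, 120, 240 fps of pure GPU work
    base = 1000.0 / render_ms
    capped = fps_with_overhead(render_ms, overhead_ms=1.0)  # assume 1 ms front-end cost
    print(f"{base:6.0f} fps of work -> {capped:6.0f} fps with 1 ms/frame overhead "
          f"({100 * (1 - capped / base):4.1f}% lost)")
```

The same fixed cost that is invisible at 30 fps eats a large chunk of the frame budget at 240 fps, which is why a slow command processor would show up most in the cheap shadow/reflection passes.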
So the overheads in scheduling for these two geometry engines lead to ~90% throughput on "standard untessellated" geometry?

By contrast, Cayman has the ability to set up and rasterize two triangles per clock cycle. I'm not sure it quite tracks with what you're seeing in the simplified diagram above, but Cayman has two copies of the logic block that does triangle setup, backface culling, and geometry subdivision for tessellation. Load-balancing logic distributes DirectX tiles between these two vertex engines, and the processed tiles are then fed into one of Cayman's two 12-SIMD shader blocks. Interestingly, neither vertex engine is tied to a single shader block, nor vice versa. Future variants of this architecture could have a single vertex engine and dual shader blocks, or the reverse.
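As a rough illustration of the kind of load balancing described above, here is a toy model of distributing tiles of work across two independent setup/raster engines. The queueing policy, tile costs, and "least-loaded engine" rule are my own assumptions for the sketch, not AMD's actual scheme:

```python
# Toy illustration: balancing tiles of rasterization work across two engines.
# Tile costs and the scheduling policy are assumptions for illustration only.

from collections import deque

NUM_ENGINES = 2
engines = [deque() for _ in range(NUM_ENGINES)]

def submit_tile(tile_id: int, cost: int) -> None:
    """Send a tile to whichever engine currently has the least queued work."""
    target = min(engines, key=lambda q: sum(c for _, c in q))
    target.append((tile_id, cost))

# Feed in 16 tiles with uneven amounts of work (e.g. triangle counts per tile).
for tile_id, cost in enumerate([5, 1, 8, 2, 7, 3, 4, 6, 2, 9, 1, 5, 3, 7, 2, 4]):
    submit_tile(tile_id, cost)

for i, q in enumerate(engines):
    print(f"engine {i}: {len(q)} tiles, {sum(c for _, c in q)} units of work")
```

Even a simple balancer like this rarely keeps both engines perfectly busy, which is one plausible source of the ~90% figure being debated here.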
How good is Z compression for shadow buffers? To be clear, I expect that some of the per-frame time is raster related, as shadow/reflection maps aren't 100% geometry limited, but if much of the rest is command-processor limited then that would be quite baffling.
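For intuition on why shadow-map depth can compress well, here is a toy sketch of plane-based Z compression, which is a common approach in general (this is not ATI's documented scheme): a tile whose depths all come from one planar triangle can be stored as a single anchor value plus two gradients.

```python
# Toy plane-based Z compression: an 8x8 tile of depths lying on a single plane
# can be stored as (anchor, ddx, ddy) instead of 64 raw values.
# Purely illustrative; real hardware schemes differ in format and fallback rules.

TILE = 8

def make_planar_tile(z0: float, ddx: float, ddy: float):
    """Depths produced by rasterizing one planar triangle across the whole tile."""
    return [[z0 + x * ddx + y * ddy for x in range(TILE)] for y in range(TILE)]

def compress(tile):
    """Try to represent the tile as a plane; fall back to raw storage otherwise."""
    z0 = tile[0][0]
    ddx = tile[0][1] - tile[0][0]
    ddy = tile[1][0] - tile[0][0]
    ok = all(abs(tile[y][x] - (z0 + x * ddx + y * ddy)) < 1e-9
             for y in range(TILE) for x in range(TILE))
    return ("plane", z0, ddx, ddy) if ok else ("raw", tile)

mode, *params = compress(make_planar_tile(0.25, 0.001, 0.002))
print(mode, "-> stored as", len(params), "values instead of", TILE * TILE)
```

Shadow passes draw depth-only geometry, so many tiles are covered by a single triangle and hit the cheap "plane" case, which is why one would expect Z compression to work well there.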
http://www.techreport.com/articles.x/20126/2
I suppose a fruitful comparison could be made with GTX460 running at 880MHz. Or with Cayman down-clocked to GTX460's speed.
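Putting rough numbers on that clock-matched comparison: the sketch below assumes 2 triangles/clock for Cayman (as quoted earlier in this thread) and 1 triangle/clock per GPC for GF104's two GPCs, with the GTX 460's reference clock taken as 675 MHz. The GF104 per-GPC rate is my assumption, not something stated in the thread.

```python
# Back-of-the-envelope peak setup rates for the clock-matched comparison above.
# Triangles/clock for GF104 and the clocks used are assumptions for illustration.

def peak_tris_per_sec(tris_per_clock: float, clock_mhz: float) -> float:
    return tris_per_clock * clock_mhz * 1e6

cases = [
    ("Cayman  @ 880 MHz", peak_tris_per_sec(2, 880)),  # HD 6970 stock clock
    ("GTX 460 @ 880 MHz", peak_tris_per_sec(2, 880)),  # overclocked to match
    ("GTX 460 @ 675 MHz", peak_tris_per_sec(2, 675)),  # reference clock
]

for name, rate in cases:
    print(f"{name}: {rate / 1e9:.2f} Gtris/s theoretical peak")
```

On paper the two end up identical per clock, which is exactly why a clock-matched (or clock-normalized) test would be interesting.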
The big advance of Nvidia's Fermi architecture was that they split the geometry processing up into multiple chunks, 8 in the GF100/GF110, two in all others.
GTX 460 is much more castrated than just half of GF100 in geometry. It's more like 1/4.

Why do you say it's like 1/4?
So? All I see is the GTX 460 being pretty much exactly half as fast. Factor in the lower clock and it would actually do a tiny bit better per clock (and it also has one of its SMs, hence a PolyMorph engine too, disabled).
Check out normal, moderate and extreme in Heaven 2.1 in the second picture: http://www.pcgameshardware.de/aid,8...klasse-Grafikkarten/Grafikkarte/Test/?page=12
No idea where Charlie got the idea that GF100/GF110 have 8 GPCs, but they don't. The magic number is 4 (for setup/raster), while GF104 has 2 GPCs... If you're talking about the PolyMorph engines, that's 16 vs. 8 for GF100/GF110 vs. GF104. Looks very much like half to me...

Also, Charlie wrote this in the 6970 article: http://www.semiaccurate.com/2010/12/14/look-amds-new-cayman6900-architecture/
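Just to put numbers on the half-vs-quarter argument, here is a small sketch using the unit counts mentioned in this thread (4 vs. 2 GPCs for setup/raster, 16 vs. 8 PolyMorph engines, one SM disabled on the GTX 460). The assumption that throughput scales linearly with unit count is mine and ignores load balancing and clock differences:

```python
# Rough scaling ratios for the "half vs. quarter" geometry argument, using the unit
# counts mentioned in this thread. Assumes throughput scales linearly with unit count.

def ratio(small: float, big: float) -> float:
    return small / big

print("setup/raster (GPCs), GF104 vs GF100:  ", ratio(2, 4))   # 2 vs 4 GPCs
print("PolyMorph engines,   GF104 vs GF100:  ", ratio(8, 16))  # 8 vs 16 (full chips)
# Shipping cards: GTX 460 has one SM (and its PolyMorph engine) disabled -> 7 of 8;
# GTX 480 ships with 15 of 16 SMs enabled.
print("PolyMorph engines, GTX 460 vs GTX 480:", ratio(7, 15))
```

By these counts alone the GTX 460 lands around half, not a quarter, of GF100's geometry resources, which is the point being argued above.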
Do you guys think the 500-700 MB framebuffer advantage will come into play for a future lineup of "true" DX11 games, at least before Cayman's VLIW4 shaders, ROPs and tessellator engines run out of steam? Just how did AMD price the 2 GB Eyefinity 5870 at so much more than the vanilla 5870? And here both 5870-class GPUs come with 2 GB! That is good, right?

The Eyefinity 5870 was a totally niche product for a tiny niche market. You don't expect that at the same price as the mainstream cards... Plus it needed a different PCB.
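For a feel of when 1 GB vs. 2 GB of framebuffer starts to matter, here is a rough estimate of just the main color and depth targets at a few resolutions. The formats, the 4x MSAA setting, and the triple-wide resolution are assumptions for illustration; real games add textures, geometry buffers, and extra render targets on top of this:

```python
# Rough VRAM estimate for the primary render targets (no compression assumed).
# Formats and resolutions are illustrative assumptions.

def rt_bytes(width: int, height: int, msaa: int,
             color_bpp: int = 4, depth_bpp: int = 4) -> int:
    """Color + depth/stencil storage for one MSAA render target."""
    return width * height * msaa * (color_bpp + depth_bpp)

for w, h, msaa in [(1920, 1080, 4), (2560, 1600, 4), (5760, 1080, 4)]:
    mb = rt_bytes(w, h, msaa) / (1024 ** 2)
    print(f"{w}x{h} with {msaa}x MSAA: ~{mb:.0f} MB for color+Z alone")
```

At single-monitor resolutions the render targets themselves are modest; it is high-resolution textures and multi-monitor setups that push a 1 GB card over the edge first.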
Ah, ok. You're talking about individual titles. Then that may be true.
I was thinking more of a broader increase of performance, and that didn't happen unfortunately.
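On "individual titles" vs. a "broader increase": a common way to summarize a driver-to-driver comparison across a whole suite is the geometric mean of the per-game speedups, which keeps one outlier title from dominating the average. A quick sketch; the game names and fps figures are invented purely to show the calculation:

```python
# Summarizing a driver comparison across many games via the geometric mean of speedups.
# Game names and fps figures below are made up for illustration.

from math import prod

old_fps = {"Game A": 62.0, "Game B": 45.0, "Game C": 88.0, "Game D": 30.0}
new_fps = {"Game A": 63.0, "Game B": 61.0, "Game C": 89.0, "Game D": 30.5}

speedups = [new_fps[g] / old_fps[g] for g in old_fps]
geomean = prod(speedups) ** (1 / len(speedups))

for g in old_fps:
    print(f"{g}: {100 * (new_fps[g] / old_fps[g] - 1):+.1f}%")
print(f"Overall (geometric mean): {100 * (geomean - 1):+.1f}%")
```

One title jumping 35% can coexist with an overall gain of under 10%, which is exactly the "individual titles" vs. "broad increase" distinction being made here.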
I am confused... how much will updated drivers help Cayman's "new" architecture?

On one hand, you have people saying they will, while other benchers don't think so...

On another hand, AMD driver release notes usually claim 10-40% improvements with every new Catalyst, yet the driver-comparison tests I read yield virtually no big fps gains in games... but Cayman is really "new", bro.

On yet another hand, I remember reading about new GPUs that went from totally average at launch to pretty good gains over older GPUs, whether from aging well or just new games optimizing for them. As an example, I thought I was pretty happy with a 4870 1GB after reading the 5850 launch reviews, and behold, I've since found that the 5850 has been performing at a much higher level than the 4870 1GB in so many new games.

Should I place my faith in the Cayman architecture and anticipate that new games will make better use of it than current ones do, more than any driver updates ever will?

I wish sites would come back and do retrospective reviews of old GPUs... with GPU tech slowing down, it makes even more sense. I no longer see the 4870 1GB in many Cayman reviews.

On the final hand... how much longer until DX12? Will 28nm GPUs come with DX12?
But I don't like the low resolution you're testing at.
Blame 3DMark; they want money to test at higher resolutions.