Jawed
Legend
Quote:
Jawed: Maybe I didn't describe my idea properly. Imagine two systems:
1. mainstream GPU, resolution e.g. 1440*900
2. high-end GPU, resolution e.g. 1920*1200
both systems will achieve 60FPS under these settings. Isn't the load of triangle setup identical in both situations? If so, what's the benefit of having twice as fast setup for the high-end GPU?

The high-end GPU (it should be 2560x1600 for an enthusiast-class system, or 3x the width of that using an Eyefinity resolution if you really want to be pedantic) should be rendering more triangles.
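To put rough numbers on the resolution argument, here's a back-of-the-envelope sketch; the 0.5 triangles-per-pixel density and the 60 fps target are assumptions for illustration, not measured figures:

```cpp
#include <cstdio>

int main()
{
    const double fps     = 60.0;
    const double density = 0.5;  // assumed triangles per pixel after adaptive tessellation

    const double mainstreamPixels = 1440.0 *  900.0;  // ~1.3 Mpixels
    const double enthusiastPixels = 2560.0 * 1600.0;  // ~4.1 Mpixels

    // Triangles that must be set up per second to hold the frame rate
    // at a constant triangles-per-pixel density.
    printf("mainstream: %.0f Mtris/s\n", mainstreamPixels * density * fps / 1e6);
    printf("enthusiast: %.0f Mtris/s\n", enthusiastPixels * density * fps / 1e6);
    // Prints roughly 39 vs 123 Mtris/s: ~3.2x more setup work at the
    // higher resolution, at the same frame rate and the same density.
    return 0;
}
```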
Further, because ALU capability is cut down by an even larger margin on lower-end GPUs (e.g. 20:1 or more comparing Cypress with Cedar), it's likely that tessellation will be "turned down" on those GPUs if HS/DS is a substantial cost, i.e. the density of triangles per pixel will be lower on the slower GPU than on the faster one.
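For the "turned down" case, here's a hedged sketch of the sort of screen-space adaptive factor a hull shader might compute, written as host-side C++ for illustration; targetEdgePixels is a hypothetical quality knob that a slower GPU would set higher (fewer, larger triangles), and none of this is taken from an actual engine:

```cpp
#include <algorithm>
#include <cmath>

// Tessellation factor for one patch edge, driven by its projected size.
float EdgeTessFactor(float edgeLengthWorld, float distanceToEye,
                     float verticalResolution, float fovY,
                     float targetEdgePixels)
{
    // Approximate projected edge length in pixels at this distance.
    float pixelsPerWorldUnit =
        verticalResolution / (2.0f * distanceToEye * std::tan(fovY * 0.5f));
    float edgePixels = edgeLengthWorld * pixelsPerWorldUnit;

    // One subdivision per targetEdgePixels, clamped to D3D11's 1..64 range.
    // Raising targetEdgePixels lowers triangle density per pixel; raising
    // the resolution raises it, which is the resolution-scaling effect
    // discussed in this thread.
    return std::min(64.0f, std::max(1.0f, edgePixels / targetEdgePixels));
}
```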
(I'm still unclear, by the way, on whether HS and DS will be mostly ALU-limited or TEX-limited... It's also hard to determine what proportion of frame rendering time will be spent on VS/HS/DS/GS versus PS.)
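On the "hard to determine" point: one way to at least get per-stage workload numbers on D3D11 hardware is a pipeline statistics query. The sketch below assumes an existing ID3D11Device/ID3D11DeviceContext and a caller-supplied callback that issues the tessellated draws (both hypothetical), and it reports invocation counts rather than time, so it's only a rough proxy for where the frame time actually goes:

```cpp
#include <d3d11.h>
#include <cstdio>

// Counts per-stage shader invocations for whatever draws the callback issues.
void CountStageInvocations(ID3D11Device* device, ID3D11DeviceContext* ctx,
                           void (*issueDraws)(ID3D11DeviceContext*))
{
    D3D11_QUERY_DESC desc = {};
    desc.Query = D3D11_QUERY_PIPELINE_STATISTICS;

    ID3D11Query* query = nullptr;
    if (FAILED(device->CreateQuery(&desc, &query)))
        return;

    ctx->Begin(query);
    issueDraws(ctx);              // the tessellated draw calls under test
    ctx->End(query);
    ctx->Flush();                 // make sure the GPU actually sees the work

    D3D11_QUERY_DATA_PIPELINE_STATISTICS stats = {};
    while (ctx->GetData(query, &stats, sizeof(stats), 0) == S_FALSE)
        ;                         // busy-wait; fine for an offline experiment

    printf("VS %llu  HS %llu  DS %llu  GS %llu  PS %llu invocations\n",
           (unsigned long long)stats.VSInvocations,
           (unsigned long long)stats.HSInvocations,
           (unsigned long long)stats.DSInvocations,
           (unsigned long long)stats.GSInvocations,
           (unsigned long long)stats.PSInvocations);

    query->Release();
}
```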
The problem we have is that we can't tell where the gaming bottleneck will lie for a highly tessellated game on an enthusiast rig. (And AMD will always make the excuse that Hemlock is the enthusiast card, which scales setup/rasterisation by almost 2x, "if the engine is coded properly for AFR".)
But we can be sure the enthusiast rig will tend to render more triangles simply because of its higher resolution, in games whose tessellation adapts to resolution.
Also, we can't tell to what extent developer noobness with tessellation will lead to unintended consequences (let alone driver noobness).
Jawed