From my limited knowledge from an introductory class on IC design and optimization: yield depends on redundancy (optimized by the designers), process defect density (rather high on a new process), and die size (I may be forgetting something, sorry).
Redundancy is of course a design choice, so it will differ between ATI and NVIDIA, as do their die sizes. But the yield loss from defect density and die size is not linear: it grows exponentially as the die gets bigger (in most cases a Poisson distribution of defects over the wafer area is assumed, I think).
So basically: the bigger the die, the bigger the chance it catches a process defect, and if that defect lands in logic without redundancy you have a dead die. A die that is 50% bigger can have more than twice the yield loss because of that exponential relationship, and may even call for a different redundancy strategy.
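To make the "more than twice worse" claim concrete, here is a toy sketch using the simple Poisson yield model Y = exp(-D*A), where D is defect density and A is die area. The numbers for D and A are made up purely for illustration (a high defect density, as on an immature process); real models add clustering terms and redundancy effects.

```python
import math

def poisson_yield(defect_density, die_area):
    """Probability a die has zero defects under a Poisson model: exp(-D * A)."""
    return math.exp(-defect_density * die_area)

# Hypothetical numbers for illustration only:
D = 0.5   # defects per cm^2 (deliberately high, like a new process)
A = 4.0   # die area in cm^2

small = poisson_yield(D, A)        # exp(-2) ~ 0.135
big   = poisson_yield(D, 1.5 * A)  # exp(-3) ~ 0.050 for a 50% larger die

print(f"small die yield:      {small:.3f}")
print(f"50% bigger die yield: {big:.3f}")
print(f"ratio: {small / big:.2f}x worse")  # ~2.72x, i.e. more than twice
```

Note the ratio between the two yields is exp(0.5 * D * A), so whether a 50% larger die is "more than twice worse" depends on how high D*A already is; on a mature low-defect process the penalty is much smaller.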