Looking at computerbase, real-world results seem to be different. Comparing the HD4890 (16 ROPs) and the HD5850 (32 ROPs):
2560*1600: HD5850 is 48% faster
2560*1600 + AA 4x / AF 16x: HD5850 is 32% faster
2560*1600 + AA 8x / AF 16x: HD5850 is 22% faster
1920*1200: HD5850 is 43% faster
1920*1200 + AA 4x / AF 16x: HD5850 is 31% faster
1920*1200 + AA 8x / AF 16x: HD5850 is 27% faster
It doesn't seem that the HD5850 is able to fully exploit the advantage of having twice as many ROPs as the HD48xx.
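For what it's worth, here is a quick back-of-the-envelope sketch (Python, purely illustrative, using only the relative numbers quoted above): if these scenes were purely ROP-limited you'd expect something close to a 2x gain, so the fraction of that gain actually realised gives a rough feel for how far each setting is from a ROP bottleneck.

```python
# Rough sketch: how far does the observed HD5850 speedup fall short of the
# 2x theoretical ROP advantage over the HD4890?
# (Speedups below are the computerbase relative results quoted above.)

ROP_RATIO = 32 / 16  # HD5850 vs HD4890 ROP count

observed = {
    "2560x1600":              1.48,
    "2560x1600, 4xAA/16xAF":  1.32,
    "2560x1600, 8xAA/16xAF":  1.22,
    "1920x1200":              1.43,
    "1920x1200, 4xAA/16xAF":  1.31,
    "1920x1200, 8xAA/16xAF":  1.27,
}

for setting, speedup in observed.items():
    # Fraction of a hypothetical pure ROP-limited gain actually realised.
    fraction = (speedup - 1.0) / (ROP_RATIO - 1.0)
    print(f"{setting:24s} {speedup:.2f}x observed, "
          f"{fraction:.0%} of a pure 2x ROP-limited gain")
```

Even in the best case (2560*1600, no AA) only about half of such a hypothetical pure ROP-limited gain shows up, which is the point above.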
Is the glass half full or half empty? You could turn it around and, from the very same numbers, argue that the 5850 shows clear benefits - the size of which is going to depend on the particular application, settings and scene.
I keep seeing statements on these forums that GPU such and such "is XX limited" as if it were some universal truth. In reality, different applications, settings and scenes are going to have different requirements and bottlenecks - which is as it should be. GPUs strive to strike a reasonable balance between resources, taking both usage patterns and cost into account.
Since settings play such a big role, I'd say the take-home message is that reviewers need to test a wide variety of settings as well as applications, and that customers need to pay attention to the tests that target their particular set of needs and wants. (Regrettably, I find that most reviews test applications I don't use, at settings I wouldn't use.)