In the past, nVidia has gone to some lengths to avoid putting an extra power connector on their cards, perhaps in part to make their technology more palatable to OEMs. The original GeForce 256 was at the very edge of compliance with the AGP spec (it really should have had an extra power connector), and the GeForce 4 4x00 cards suck every bit of current they can get out of the bus.
It's not unreasonable to suppose that when they planned the NV30, nVidia thought they would get market-crushing performance by staying within the AGP spec at .13 microns, and only later learned that the performance they envisioned during the chip's early planning stages wasn't so far out of reach for their competitors. You wouldn't say their design was limited by target power consumption so much as by their original target clock speed, which was chosen on the basis of many factors, power consumption among them.