Well... Maybe... But there is something to it. We do have the history of x64 tessellation.
If you take The Witcher 3 for example, there is barely any difference between x8 and x64 tessellation, and x16 was pretty much identical to x64. Yet HairWorks was "optimized" for x64 tessellation. AMD was weaker at tessellation at the time, so their performance dropped a lot, even though such huge performance drops were not necessary on either vendor's cards. Would it be too conspiracy-y to say that this was done on purpose, especially since there is no visual difference between x16 and x64 and nVidia had an advantage over AMD in this specific tech...?
nVidia is always in these controversies. What's funny is that one of Buildzoid's first comments about the 6800 series was him wondering whether nVidia would figure out a way to overtax or overflow the Infinity Cache so that performance on these 6800 cards would crash.
I like all the open standards AMD goes for, and how they publish a lot of stuff for free for developers. Ultimately it does seem to work against them in certain cases, like how TressFX was hijacked and rebranded as PureHair, for example.