My opinion
Open source benchmarks can be exploited in the same way.
It isn't being open source that would address this; what would address it is an Open Source benchmark having its source material and camera paths varied and adapted, and that same avenue (increasing variance) can be used for 3dmark 03 or any other closed source benchmark as well (and has been in this case).
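To make "increasing variance" concrete, here is a minimal sketch in Python (the CameraKey type and jitter_path function are made up for illustration; no benchmark exposes exactly this) of varying the camera path per run. The workload stays essentially the same, but anything precomputed against the exact rail, like static clip planes, stops lining up.

[code]
import random
from dataclasses import dataclass

@dataclass
class CameraKey:
    t: float  # time along the rail, in seconds
    x: float
    y: float
    z: float

def jitter_path(path, seed, amount=0.25):
    """Nudge every keyframe by a small, per-run offset. The rendering load is
    essentially unchanged, but a cheat that precomputes work against the
    exact fixed path no longer lines up with what is actually drawn."""
    rng = random.Random(seed)
    return [CameraKey(k.t,
                      k.x + rng.uniform(-amount, amount),
                      k.y + rng.uniform(-amount, amount),
                      k.z + rng.uniform(-amount, amount))
            for k in path]

# Two runs with different seeds follow slightly different rails.
rail = [CameraKey(0.0, 0.0, 1.7, 0.0), CameraKey(1.0, 0.5, 1.7, -2.0)]
run_a = jitter_path(rail, seed=1)
run_b = jitter_path(rail, seed=2)
[/code]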
Open source benchmarks have other advantages too, like assuring that shader code can be swapped freely and independently analyzed, but despite that, IHVs could still custom replace specific common shader code...the flaw isn't the code itself, but simply the code being used often (being Open Source doesn't magically prevent that).
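To illustrate why "the code being used often" is the real flaw, here is a rough sketch of the substitution mechanism (Python standing in for the idea, not any real driver API; the shader strings are invented): the driver fingerprints the shader text it is handed, and if it recognises it, hands back a canned replacement instead. Open or closed source, the swap works the same way, because the trigger is the exact text everyone runs.

[code]
import hashlib

# The shader text a popular benchmark ships with -- open source or not,
# the driver sees this exact string at compile time. (Invented example.)
BENCHMARK_WATER_SHADER = "ps_2_0 ; ...water shader as shipped by the benchmark..."

# The IHV's hand-tuned variant for its own architecture. (Invented example.)
HAND_TUNED_REPLACEMENT = "ps_2_0 ; ...reordered / reduced-precision variant..."

# Replacements keyed by a fingerprint of the original shader text.
REPLACEMENTS = {
    hashlib.sha1(BENCHMARK_WATER_SHADER.encode()).hexdigest(): HAND_TUNED_REPLACEMENT,
}

def driver_compile(shader_source):
    """Pretend driver entry point: if the submitted shader is recognised,
    hand back the canned replacement instead. The application never learns
    the swap happened; it only sees faster frames."""
    key = hashlib.sha1(shader_source.encode()).hexdigest()
    return REPLACEMENTS.get(key, shader_source)
[/code]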
Stating "Open Source" as a solution to this issue is a red herring, fostered by nVidia's smear campaign against 3dmark 03. (Smear campaign is not just an emotionally laden term, it is based on the record of the steps nVidia has taken).
Open source benchmarks have disadvantages too: having the source code available facilitates case-specific optimization.
Please consider...you have an open source benchmark, distributed with shader files and source material:
You change the source material and shader files, and performance changes significantly for one card.
Not every reviewer is going to make their own custom modifications to the shader files and try new source material (jump the rail), but some will, and will then share the info (as happened with 3dmark03, btw).
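That "jump the rail" style check looks roughly like this (a sketch only: it assumes the benchmark ships its shaders as editable text files and that some run_benchmark() harness exists to launch a run and report a score; neither is real 3dmark 03 tooling):

[code]
from pathlib import Path

def run_benchmark(shader_dir):
    """Stub: a real harness would launch the benchmark with these shaders
    and parse the score it reports. Placeholder value only."""
    return 0.0

def perturb_shader(text):
    """Make a semantically harmless edit (renaming a temp register, assuming
    r9 is unused) so a fingerprint of the exact text no longer matches."""
    return text.replace("r7", "r9")

def jump_the_rail(shader_dir):
    baseline = run_benchmark(shader_dir)        # unmodified shaders
    for f in Path(shader_dir).glob("*.psh"):    # hypothetical file extension
        f.write_text(perturb_shader(f.read_text()))
    perturbed = run_benchmark(shader_dir)       # same math, different text
    # A large drop on one card suggests the "optimization" keyed off the exact
    # shader text, not anything the card is genuinely faster at.
    print("baseline %.1f vs perturbed %.1f" % (baseline, perturbed))
[/code]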
Now to answer the questions: Was it cheating? Maybe, but maybe it was just an optimization...with the amount of variance allowed, this is not necessarily clear, and depends on the person making the variation. It is a good thing the Open Source benchmark allowed you to vary things to explore the issue, anyway...but don't let that distract you from the fact that 3dmark 03 obviously already has sufficient variance to expose this issue, and it, or future 3dmarks, could add more.
The difference between closed source and Open Source (at least with respect to what Open Source can uniquely do here) is that closed source narrows down access to variance (bad, but changeable) and narrows the variance possibilities (good, because what remains is easier and more reviewers are capable of doing it, and therefore more likely to decide to do it; bad, because it makes cheating easier...but that last bad point is also changeable).
You change the source code.
This is unique to Open Source, but it has unique hurdles...to make it count, you have to change the source code in a way that defeats the engine-specific recognition techniques IHVs have already developed to survive version changes, in which case what you are really doing is making a new benchmark.
The good thing that will always be offered by Open Source is that it makes building this new benchmark much easier than making one from scratch...but you didn't get this benefit from the benchmark being Open Source, you got it from someone going through the effort of making a new benchmark, and from that benchmark actually being used for reviews. More accessible variance controls (of sufficient sophistication) actually achieve this end better, because then more people are able to check for themselves.
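One reason accessible checks matter: some of the recognition IHVs rely on is shallow enough to probe without touching source at all. Here is a rough sketch of the old rename-the-executable check (the decoy name and the run_and_score() harness are assumptions, not real tooling): if the driver keys its behaviour to the application's name, the score moves without a single line of code changing.

[code]
import shutil
import subprocess
from pathlib import Path

def run_and_score(exe):
    """Stub: launch the benchmark binary and parse the score it reports.
    Placeholder value only."""
    subprocess.run([str(exe)], check=True)
    return 0.0

def rename_test(original_exe):
    """Copy the benchmark under a decoy name and compare results."""
    original = Path(original_exe)
    decoy = original.with_name("decoy_app.exe")  # hypothetical decoy name
    shutil.copy2(original, decoy)
    a = run_and_score(original)
    b = run_and_score(decoy)
    # A meaningful gap between the two points at name-based application
    # detection -- recognition that survives any amount of source editing.
    print("original %.1f vs renamed %.1f" % (a, b))
[/code]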
What is desirable about Open Source benchmarking is as an addition to closed source and (to some degree) more narrowly defined benchmarking...in that arrangement the advantages of open source benchmarking are not lost, while the advantages that a specific closed source benchmark might offer (keep in mind that the allegations of 3dmark's unsuitability are an nVidia proposition) are retained.
An Open Source benchmark doesn't offer significant advantages merely by being Open Source...it first has to offer the toolset and engineering effort to succeed as a benchmark. Or rather, it does if it is replacing a closed source benchmark...otherwise, you can spend time covering the things the closed source benchmark doesn't do, or doing them a different way for contrast.
Don't forget that nVidia's attack on 3dmark 03 would have stated the same things if it were Open Source. The difference is that it would have been easier for them to cheat, and then, depending on the effort outlined above (which 3dmark 03 also allowed), less likely for them to get away with it. Since they didn't get away with it, and specifically because of the functionality 3dmark 03 already offers, I propose that the focus shouldn't be on replacing 3dmark 03, but on making it harder for them to get away with it in the future: by making the tools used to expose this available to a wider selection of people (did B3D pay a bunch of money to become a beta member, or was that perhaps a direct first step to accomplish this? I think the latter, currently), by increasing the usability of offering this variance, and by increasing the range of variance allowed.
Oh, and having more benchmarks in addition...an Open Source one seems like a good idea.