In an ideal scenario where you have a good formal specification of what your block of code or block of logic does, verifying RTL may be as easy as verifying software. However, that's not usually the case, neither for RTL nor for software. Given the cost of bugs in hardware, you would expect better formal specifications to be used for hardware design than for software development.
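As a minimal sketch of what "verification against a spec" means in practice, here is a hypothetical example: a 4-bit saturating adder (both the spec and the implementation are made up for illustration) checked exhaustively, which is only feasible because the state space is tiny.

```python
# Hypothetical example: an executable "spec" for a small hardware block
# (a 4-bit saturating adder), checked exhaustively against a model of
# the implementation. Both functions are illustrative, not a real design.
WIDTH = 4
MAX = (1 << WIDTH) - 1

def spec_sat_add(a, b):
    # The specification: the mathematical intent, no implementation detail.
    return min(a + b, MAX)

def impl_sat_add(a, b):
    # A stand-in for the RTL behavior as it might be modeled in a simulator.
    s = (a + b) & MAX
    overflow = (a + b) > MAX
    return MAX if overflow else s

# With a precise spec and a tiny state space, verification is exhaustive:
mismatches = [(a, b) for a in range(MAX + 1) for b in range(MAX + 1)
              if spec_sat_add(a, b) != impl_sat_add(a, b)]
print(len(mismatches))  # → 0
```

The point is that this only works when the spec exists and the input space is small; neither holds for a real CPU.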
In any case, if you have a big, complex piece of hardware with a large number of interconnected blocks, however good the specification is you will have all kinds of bugs. Some may be quite obscure. And in a very complex piece of hardware, like a CPU, the number of test cases required to fully validate it is incredibly large. Can you test all the programs that can be written in x86? There can be millions of possible combinations that could trigger that obscure bug.
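A quick back-of-the-envelope calculation shows how hopeless exhaustive testing is even for a single datapath, never mind whole programs (the 10^9 tests/second rate is an optimistic assumption for illustration):

```python
# One 64-bit ALU operation with two operands has 2**128 input pairs.
inputs = 2 ** (64 + 64)

# At an (optimistic, assumed) rate of 10**9 tests per second:
seconds = inputs / 10**9
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.2e} years")  # on the order of 1e22 years
```

That is far longer than the age of the universe, for one instruction with two operands; the space of all x86 programs is unimaginably larger.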
Poor validation of an RTL design, for whatever reason (bad specification, bad validation tools or test sets), is much more dangerous and expensive than poor validation of software. The price of fixing hardware (even just metal-layer changes) is orders of magnitude higher than the price of fixing software. How many patches can MS release in a year? How many steppings of a given CPU can Intel produce in a year, and at what price?
There is also the problem of how fast you can validate RTL versus how fast you can validate software. If you are essentially emulating the logic gates in software, running the same amount of testing on a piece of RTL code is orders of magnitude slower than running it on a piece of software. FPGA emulation can be somewhat faster, if available, but it has problems of its own. The only thing that can reach the speed available for software testing is the actual hardware. But discovering bugs on the actual hardware is very expensive (consider how much it costs to create the masks and produce the chips for validation), and debugging silicon can be considerably more difficult than debugging software.
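To get a feel for the gate-emulation overhead, here is a toy sketch (real RTL simulators are vastly more sophisticated; this just contrasts a native add with the same add simulated one gate evaluation at a time):

```python
import time

def full_adder(a, b, cin):
    # One-bit full adder expressed gate by gate (XOR, AND, OR).
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, width=32):
    # Simulate a 32-bit ripple-carry adder, evaluating each bit's gates in turn.
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

N = 10_000
t0 = time.perf_counter()
for i in range(N):
    _ = (i + 12345) & 0xFFFFFFFF      # native add
t1 = time.perf_counter()
for i in range(N):
    _ = ripple_add(i, 12345)          # same add, simulated gate by gate
t2 = time.perf_counter()
print(f"gate-level simulation is roughly {(t2 - t1) / (t1 - t0):.0f}x slower here")
```

Even this crude model is dozens of times slower than native execution, and a real design has millions of gates plus timing and event scheduling on top.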