They're not entirely made up, but they're not exactly correct either. The numbers are extrapolated in the GPU's favor: each vendor estimated the raw processing power a group of generic machines would need to render at the same quality as its GPU. So Nvidia is saying you would need brute-force processing ability of 1.8 teraflops to match its GPU, and ATI is saying closer to 885 gigaflops for theirs. That doesn't actually give you that amount of power, though. You can't, for example, use all of those hundreds of "virtual gigaflops" for simulation, or really anything else. It's not actual performance, only equivalent rendering performance.
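To make that distinction concrete, here's a rough back-of-the-envelope sketch in Python. The per-box flops figure and the box count are made-up placeholders, not the vendors' actual methodology; only the 1.8 teraflop result comes from the numbers above.

    # Hypothetical illustration of how an "equivalent rendering flops"
    # figure gets extrapolated. All inputs here are placeholders.

    GENERIC_BOX_FLOPS = 50e9  # assume a generic box does 50 real gigaflops

    def equivalent_rendering_flops(boxes_needed):
        # Marketing-style number: how much raw floating point would a pile
        # of generic boxes need to render the same scene in software?
        return GENERIC_BOX_FLOPS * boxes_needed

    # If the vendor decides it would take 36 generic boxes to match the
    # GPU's rendered output, they quote 1.8 "teraflops" -- equivalent
    # rendering performance, not usable general-purpose compute.
    print(equivalent_rendering_flops(36) / 1e12)  # -> 1.8 (teraflops)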
That doesn't mean the GPUs are complete slouches by any measure, though. In terms of actual, general-purpose floating point ability, they should both come close to the Xbox 360 GPU's real performance.
Finally, the complete numbers: the 1 teraflop and 2 teraflop figures are also estimates, and they combine the CPUs' real floating point power with the GPUs' "virtual" floating point power.
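If you want to see how those headline totals are put together, here's a minimal sketch. The GPU figures are the ones quoted above; the CPU figures are assumed placeholders chosen only so the totals work out, not official specs.

    # Hypothetical breakdown of the combined "system teraflops" numbers.
    # GPU values come from the post above; CPU values are assumptions.

    gpu_virtual = {"nvidia": 1.8e12, "ati": 885e9}  # "equivalent rendering" flops
    cpu_real    = {"nvidia": 0.2e12, "ati": 115e9}  # assumed real CPU flops

    for vendor in gpu_virtual:
        total = cpu_real[vendor] + gpu_virtual[vendor]
        print(vendor, total / 1e12, "teraflops (real CPU + virtual GPU)")
    # -> nvidia 2.0 ..., ati 1.0 ...

The point of the sketch is just that the headline total mixes two different kinds of numbers, so it isn't a figure you could ever measure on real workloads.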
Hope that helps some.
Later