I bet it can barely match the performance a 20nm Maxwell can deliver.
Just ignore the raw Gflops nonsense. Apart from a few LAPACK routines where the two were comparable, for the more general and more flexible tasks my co-workers and I have tested so far, MIC delivers roughly half the performance of a K20X, despite the roughly equal rated Gflops of the two products.
Furthermore, despite the marketing nonsense, it actually takes more effort to optimize code on MIC than on the GPU, and the code is less portable.
The main problems with MIC are:
1. SIMD vs SIMT: the CUDA programming interface is thread-level, whereas on MIC it is a very fat vector level, which means you have to write assembly-like vector-op code that could end up not being portable to future generations of MIC. And of course you can forget template-style abstractions for high productivity; you more or less have to stick with C and an assembly-like programming style (see the first sketch after this list).
2. The GPU has a programmable L1 cache and a large register file that many general tasks can take advantage of, whereas MIC has no such features, nor do they intend to implement them in future versions, according to the Intel technical staff I've met (see the second sketch below).
3. It seems to me that Nvidia's hardware can simply handle massive parallel workloads better and make better use of its memory bandwidth, for whatever reason.
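To make point 1 concrete, here is a toy SAXPY-style sketch of my own (the function names are made up, and the MIC side uses AVX-512-style intrinsic names as a stand-in; the actual KNC-era intrinsics differ slightly, and alignment/remainder handling is omitted). The contrast is the point: the CUDA version is scalar-looking per-thread code, the MIC version spells out the 512-bit vectors by hand.

```cuda
#include <immintrin.h>   // for the MIC-side intrinsics (host code)

// CUDA (SIMT): each thread writes plain scalar code for one element;
// the hardware groups threads into warps behind your back.
__global__ void saxpy_gpu(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// MIC (SIMD): you manage the 512-bit vectors yourself with intrinsics.
// Assumes n is a multiple of 16 and the pointers are 64-byte aligned.
void saxpy_mic(int n, float a, const float *x, float *y)
{
    __m512 va = _mm512_set1_ps(a);                 // broadcast a into all 16 lanes
    for (int i = 0; i < n; i += 16) {              // 16 floats per 512-bit register
        __m512 vx = _mm512_load_ps(x + i);
        __m512 vy = _mm512_load_ps(y + i);
        _mm512_store_ps(y + i, _mm512_fmadd_ps(va, vx, vy));  // y = a*x + y
    }
}
```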
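And for point 2, this is the kind of pattern the programmable L1 (shared memory, in CUDA terms) enables: explicitly staging a block's slice of data on-chip and reusing it several times. A minimal sketch of mine, assuming a block size of 256 and n a multiple of it:

```cuda
// Toy 1D 3-point averaging stencil using shared memory as a managed cache.
__global__ void stencil3(const float *in, float *out, int n)
{
    __shared__ float tile[256 + 2];                // block slice + one halo cell each side
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    int lid = threadIdx.x + 1;                     // shift by one for the left halo

    tile[lid] = in[gid];                           // stage this block's slice on-chip
    if (threadIdx.x == 0)                          // left halo, clamped at the array edge
        tile[0] = (gid > 0) ? in[gid - 1] : in[gid];
    if (threadIdx.x == blockDim.x - 1)             // right halo, clamped at the array edge
        tile[lid + 1] = (gid + 1 < n) ? in[gid + 1] : in[gid];
    __syncthreads();                               // whole tile now visible to the block

    // Each input element is read from fast on-chip memory up to three times.
    out[gid] = (tile[lid - 1] + tile[lid] + tile[lid + 1]) / 3.0f;
}
```

On MIC you just hope the regular L1/L2 caches catch this reuse; on the GPU you can guarantee it.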
If the rumored Maxwell specs are true, I don't think Nvidia needs to worry about Intel's MIC in the near future.