The reason GPUs became interesting is that they offered 1-2 orders of magnitude greater compute density, performance per watt, or performance per dollar (or combinations thereof).
None of those advantages will hold once Knights Landing is in the wild. And it will run the code that everyone has been running, while happily supporting compute paradigms beyond crappy old MPI and OpenMP.
You can freely choose your desired mix of task- and data-parallel kernels within a single architecture, memory hierarchy, execution model, instruction set and clustering topology. It's a flat, sane landscape.
Yes, you're right: KNL runs the code you've already been running. However, it runs that code slower than a Haswell (HSW) Xeon does. Have you ever used a Xeon Phi?
Perhaps counterintuitively, there are far more applications in the wild that use CUDA or OpenCL than use KNL's vector instructions effectively. KNL is several years late to the GPU compute party, and it has the same drawback as GPUs: if you want the improved compute density, you have to rewrite your code in a non-trivial way. Because it's so late, it has to compete against the far more mature GPU compute ecosystem, with a significant performance/W and performance/$ disadvantage compared to either NVIDIA or AMD GPUs, and without a gaming market to justify Intel's significant R&D costs.
I do computationally dense simulations for a living. We buy large numbers of GPUs, and we couldn't get our work done without them. We write very little CUDA or OpenCL code - it's not necessary to get the job done, thanks to the libraries that already exist. I'd buy Xeon Phi if it made my applications run faster, but so far it's been a big disappointment. KNL looks better than KNC in many ways, but adoption will be slow, because the important libraries don't support KNL, and writing code to make Xeon Phi perform well is much more difficult than writing efficient GPU code. After years of broken promises (Larrabee! Knights Ferry! Knights Corner! Knights Landing!), Intel has a lot to prove with respect to Xeon Phi performance on things that matter.