They say they intend to use it for training. The question, then, is how flexible it will be. If it's too fixed-function, it risks being left behind, since the most effective ways of training a network are still very much an area of active research, as are variants in the structure of the networks themselves.
Deep learning networks are currently very rudimentary input -> output processes, but it seems likely that the next step will be some sort of stateful network, that is, networks with memory. We have some very basic types right now, but getting them to full effectiveness may require tweaking the training algorithms, and if Nervana isn't flexible enough to allow that, it will be useless in that field.
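To make "network with memory" concrete, here's a minimal sketch of a vanilla recurrent cell in NumPy: the output at each step depends not just on the current input but on a hidden state carried over from previous steps. All names and sizes here are illustrative, not anything specific to Nervana's hardware.

```python
import numpy as np

# Minimal vanilla RNN cell: the hidden state h is the "memory"
# that persists across time steps.
rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8

W_x = rng.standard_normal((n_hidden, n_in)) * 0.1      # input weights
W_h = rng.standard_normal((n_hidden, n_hidden)) * 0.1  # recurrent weights
b = np.zeros(n_hidden)

def step(x, h):
    """One time step: new state mixes current input with prior state."""
    return np.tanh(W_x @ x + W_h @ h + b)

# Run a short input sequence; h accumulates information over time.
h = np.zeros(n_hidden)
for t in range(5):
    x = rng.standard_normal(n_in)
    h = step(x, h)
```

The recurrent term `W_h @ h` is exactly the part a purely feed-forward, fixed-function accelerator struggles with: training it (e.g. by backpropagation through time) requires unrolling the state across steps rather than a single input -> output pass.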