Sir Eric Demers @ Rage3D

I don't really take it as CUDA bashing at all; the first part of what he says makes complete sense. CUDA is very much NV's language at this point. No company in its right mind would base a major product on a competitor's platform. It would be far too easy for certain "quirks" to exist in CUDA that make it work really well on NV hardware and really badly on ATI hardware. CUDA support would also be a moving target: every few months NV releases new features, so ATI would always be months behind.

As to CUDA being G80-centric, I don't see that argument. They say they're working on OpenCL, and if you look at some example OpenCL code, you'll notice it looks almost identical to CUDA. If they can support OpenCL, they could support CUDA. It just makes no sense for them to.
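To illustrate the point (my own sketch, not from the interview): the same vector-add kernel written in CUDA C and in OpenCL C differs mostly in the kernel/address-space qualifiers and in how the work-item index is obtained; the body is identical.

```c
// CUDA C version: __global__ marks a kernel; the index comes from
// the built-in blockIdx/blockDim/threadIdx variables.
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

// OpenCL C version: __kernel marks a kernel; the index comes from
// the get_global_id() built-in. Everything else is the same.
__kernel void vec_add(__global const float *a, __global const float *b,
                      __global float *c, int n)
{
    int i = get_global_id(0);
    if (i < n)
        c[i] = a[i] + b[i];
}
```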
 
Well, it seems he doesn't like the proprietary nature of the platform, or to put it another way -- it's "theirs", not "ours", at least. ;)

And what about that comment on superscalar order? It sounds like VLIW is mutually exclusive with that description, at least from an R600 point of view?!
 
At work, so I haven't read it yet, but I glanced at it quickly and one thing struck me: who picked that typography? It's god-awful to read!
 
The micro-stuttering explanation was interesting...

I wonder how Nvidia is dealing with micro-stutter. Similar to AMD/ATI, perhaps?
 