Offload to what? How do you accelerate AI calculations?
What kind of special dedicated hardware does AI need exactly?
Likely it must do branching fast++, but beyond that...?
That'll help, but you'll need to stick to more traditional cores like the P-M and Athlon, and not go to the extremely simplified, massively multi-core chips like Niagara and Cell, IMHO.
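To make that concrete, here's a hypothetical C sketch (the struct and function names are mine, purely for illustration) of what typical game-AI code looks like: every step is a data-dependent branch plus a dependent pointer load, which rewards the big branch predictors, caches, and out-of-order engines of cores like the P-M and Athlon far more than the raw throughput of simple in-order cores.

/* Hypothetical decision-tree walk, typical of branchy AI code. */
#include <stddef.h>

struct DecisionNode {
    int (*test)(const void *agent);     /* predicate on agent state   */
    struct DecisionNode *on_true;       /* child if predicate holds   */
    struct DecisionNode *on_false;      /* child if it doesn't        */
    int action;                         /* leaf action; -1 if internal */
};

/* Walk the tree until a leaf is reached and return its action.
 * Each iteration is an unpredictable branch plus a dependent load,
 * so a simplified in-order core stalls where an OOO core keeps going. */
int decide(const struct DecisionNode *node, const void *agent)
{
    while (node->action < 0)
        node = node->test(agent) ? node->on_true : node->on_false;
    return node->action;
}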
I suppose I meant a way to hide those latencies, much like current processors do with OOOE, data prefetching, large caches, etc.
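As a minimal sketch of the prefetching part (my example, not from the thread; the struct and the lookahead distance of 8 are made-up assumptions), GCC exposes software prefetching via the __builtin_prefetch intrinsic, which lets you request data a few iterations early so the memory latency is overlapped with useful work:

/* Hide memory latency while walking an array of pointers to agents. */
struct Agent { float threat; /* ... more per-agent state ... */ };

float total_threat(struct Agent **agents, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++) {
        /* Request the record a few iterations ahead so it is
         * (hopefully) in cache by the time we touch it. */
        if (i + 8 < n)
            __builtin_prefetch(agents[i + 8], 0 /* read */, 1);
        sum += agents[i]->threat;
    }
    return sum;
}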
Essentially, processors like the P-M and Athlon tend to make rather excellent AI processors to begin with. They have pretty much all the traits listed so far: low-latency random memory access, fast branching/prediction, data forwarding, short pipelines, and they will shortly be multi-core.
Yup, that's kinda what I was thinking with #3. Now that we're even getting multi-core Real Soon Now (TM), coming up with a dedicated AI accelerator just sounds like lots of work for very little extra performance and/or functionality.