Intel talks Itanium architecture refreshes and road map up to 32nm

Itanium, Intel's only high-end CPU entirely based on a RISC architecture and targeted toward servers, was the centre of a talk Diane Bryant, Vice President and general manager of Intel's Server Platforms Group, had with reporters during a press conference held in San Francisco last week, reports EE Times.

[Image: roadmap-itanium.jpg (Itanium roadmap)]


Read the full news item
 
I think Intel labeled it as a RISC-replacement.

It's certainly very different from what are, or were, known as RISC architectures.
 
It's not actually RISC?
It's VLIW. Three (independent) instructions plus a few bits of extra info are packed into a 128-bit bundle, and the processor fetches and decodes these bundles as single units rather than individual instructions.
It's a great approach for cheap, low-power hardware that must run media codecs well: lots of throughput for relatively little management-logic overhead, and hence die size. It will remain a mystery to me why Intel chose to go down that path for a big-iron CPU of all things.
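For concreteness, here's a minimal C sketch of that 128-bit bundle format, going by the published IA-64 encoding (a 5-bit template plus three 41-bit instruction slots); the struct and helper names are purely illustrative:

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the IA-64 bundle layout: 128 bits holding a 5-bit template
 * plus three 41-bit instruction slots.  The template is the "few bits of
 * extra info": it tells the hardware which execution-unit types the slots
 * map to and where instruction-group stops fall. */
typedef struct {
    uint64_t lo;   /* bits   0..63 of the bundle */
    uint64_t hi;   /* bits 64..127 of the bundle */
} bundle_t;

/* Extract an arbitrary bit field [pos, pos+len) from the 128-bit bundle. */
static uint64_t bits(bundle_t b, unsigned pos, unsigned len)
{
    uint64_t v;
    if (pos >= 64)
        v = b.hi >> (pos - 64);
    else if (pos + len <= 64)
        v = b.lo >> pos;
    else
        v = (b.lo >> pos) | (b.hi << (64 - pos));
    return v & ((len == 64) ? ~0ULL : ((1ULL << len) - 1));
}

int main(void)
{
    bundle_t b = { 0x0123456789abcdefULL, 0xfedcba9876543210ULL };

    printf("template: 0x%02llx\n",  (unsigned long long)bits(b, 0, 5));
    printf("slot 0:   0x%011llx\n", (unsigned long long)bits(b, 5, 41));
    printf("slot 1:   0x%011llx\n", (unsigned long long)bits(b, 46, 41));
    printf("slot 2:   0x%011llx\n", (unsigned long long)bits(b, 87, 41));
    return 0;
}
```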
 
To expand on that: EPIC is VLIW redux.

Its bundle scheme still permits a level of hardware instruction dispersal and dependency checking.
It is better at abstracting away the innards of the chip. Early VLIWs had instructions that had to map directly to functional units, and some didn't require decoding at all.
A few VLIWs even gave up on dependency checking, so any errors were completely out of the hands of the hardware.

IA-64 at least abstracts the basic low-level implementation details. A number of execution units were added in the transition from Merced to McKinley, but code that ran on Merced still ran on McKinley.
It may not have run optimally, but it still ran.

For the first VLIWs, a change like that would more than likely have broken compatibility.

IA-64 also adds a lot of instructions for speculation and predication, in order to accomplish in software what OoO does in hardware.
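As a rough sketch of the predication side of that, here's the kind of if-conversion an EPIC compiler performs, expressed in plain C (the clamp example and function names are made up; real IA-64 code would use predicate registers set by a compare rather than a C conditional):

```c
#include <stdint.h>

/* Branchy form: control flow depends on data the compiler cannot
 * predict, so a mispredicted branch wastes the wide issue slots. */
int64_t clamp_branchy(int64_t x, int64_t limit)
{
    if (x > limit)
        return limit;
    return x;
}

/* If-converted form: both outcomes are computed and a condition selects
 * the result, so there is no branch to guess.  On IA-64 the compare would
 * set a predicate register and the competing instructions would each
 * execute (or be squashed) under that predicate. */
int64_t clamp_predicated(int64_t x, int64_t limit)
{
    int64_t over = (x > limit);   /* compare producing the "predicate"  */
    return over ? limit : x;      /* select in place of the branch      */
}
```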


Intel's major error seems to have been buying into the EPIC proponents' prediction that OoO improvements would become intractable back in the 90's.
That prediction was at least a decade premature.
 
I think they did not take into account how branch probability is a dynamic problem and hence just cannot be solved at compile time anywhere near the quality of a learning hardware predictor. No matter how good your compile-time heuristics are for modelling one specific execution flow, it's back to square one as soon as your data composition changes.
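As a toy illustration of that point (nothing here is specific to Itanium's actual predictor), compare a textbook 2-bit saturating counter with a fixed "always taken" compile-time hint once the data composition flips mid-run:

```c
#include <stdio.h>

/* Minimal sketch of why a learning hardware predictor beats a fixed
 * compile-time guess: the 2-bit saturating counter adapts to the data it
 * actually sees, while a static hint is chosen once and never changes. */
enum { SN = 0, WN = 1, WT = 2, ST = 3 };   /* strongly/weakly not-taken/taken */

static int counter = WT;                    /* predictor state for one branch */

static int  predict(void)      { return counter >= WT; }
static void update(int taken)
{
    if (taken  && counter < ST) counter++;
    if (!taken && counter > SN) counter--;
}

int main(void)
{
    /* Two phases of "data composition": mostly taken, then mostly not taken.
     * A static hint gets one whole phase wrong; the counter re-learns the
     * new behaviour within a couple of iterations. */
    int outcomes[] = {1,1,1,1,1,1, 0,0,0,0,0,0};
    int hits_dynamic = 0, hits_static = 0;

    for (unsigned i = 0; i < sizeof outcomes / sizeof *outcomes; i++) {
        hits_dynamic += (predict() == outcomes[i]);
        hits_static  += (1 == outcomes[i]);   /* static hint: always taken */
        update(outcomes[i]);
    }
    printf("dynamic hits: %d, static hits: %d (of 12)\n",
           hits_dynamic, hits_static);
    return 0;
}
```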
 
Itanium's dynamic branch predictor is just fine.

It's the architecture's intolerance for unpredictable latency and a number of implementation flubs that have hit it hardest.

edit:
I think I misread your statement.
If you mean the difficulty in extracting ILP that is often conditional on program flow, then that is a roadblock.
 