25 chips "that shook the world"

In many ways the Intel of today owes much to that very step back that the Opteron forced Intel to take.

I'm not sure if we should see that as an advantage. It means we're stuck with the inefficient x86 architecture for many more years.

Had that not happened, it's quite possible we'd still see Intel working on refining the Netburst architecture, and quite possibly there would be no Core 2 Duo or Nehalem as we know them today.

Most probably not. Intel's roadmap for x86 ended at Netburst. There was never a 64-bit model on there. The plan was to make Pentium 4 the last x86, and keep the platform 32-bit only. Then migrate to Itanium for 64-bit and get rid of all the legacy that is x86 and its software.

IMO - it was a far larger shake-up of the world of computing in general than the PPro.

Pentium Pro is when x86 stopped being CISC and started being 'RISC'.
 
It looked right because they were the only ones doing it.
But they couldn't really get an advantage for 1-2 processor systems. And even 5 years down the line they didn't manage to get it off the ground.

Eh, from everything I've read the Opteron was a huge success in 4P, 8P, and larger configurations, especially when compared to the best Xeons that Intel could muster at the time.

Enough so that even large server companies (Sun Microsystems, for example) migrated many of their multi-processor servers from SPARC (or other RISC-based processors) to Opterons.

Regards,
SB
 
Eh, from everything I've read the Opteron was a huge success in 4P, 8P, and larger configurations, especially when compared to the best Xeons that Intel could muster at the time.

Except I was talking about 1-2p systems.

Enough so that even large server companies (Sun Microsystems, for example) migrated many of their multi-processor servers from SPARC to Opterons.

I think that has more to do with the Pentium Pro (of which the Opteron is a derivative; it uses the same approach of pre-decoding x86 into micro-ops and executing them out of order), and with the scale of the x86 market in general keeping the cost per unit down.
Opteron just happened to be the best x86 server CPU at the time, but x86 was already eating into server sales long before Opteron, and even during Opteron's heyday, Xeon was the second-most popular server CPU.
 
I'm not sure if we should see that as an advantage. It means we're stuck with the inefficient x86 architecture for many more years.

There are more issues with the Itanic that have not been mentioned. The Itanium ISA attempts to improve speed through implicit parallelism, using VLIW (Very Long Instruction Word). That would be fine if anyone could produce a decent compiler for the Itanium, which has not happened and probably will not happen. Compiling for a VLIW architecture requires the compiler to know which branches will generally be taken and which will not. Compilers are better than humans at producing fast assembly code in nearly all cases, but they are not clairvoyant: they do not have the data the program will run on, so they cannot know how the code will actually execute. Without knowing that, optimizing a program for the Itanium is hard on the compiler.

Also, keep in mind that x86 still seems to be pushed heavily by Intel, but unless you want to run Windows 7 on your smartphone with a screen so small it is unusable, the job will usually be done by an ARM-based product instead, and not one from nVidia. Surprisingly, the only reason to use an Atom processor is to run Windows 7, painfully slowly.
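Coming back to the compiler point above, here's a minimal C sketch of what I mean (my own illustration, not anything from an Itanium manual): which side of the branch below dominates depends entirely on the data fed in at runtime, something a static VLIW/EPIC compiler cannot see without profile feedback, while an out-of-order x86 core with a branch predictor simply adapts as it runs.

```c
#include <stddef.h>

/* Hypothetical example: which side of this branch dominates depends
 * entirely on the input data. A static (VLIW/EPIC-style) compiler has
 * to guess when it schedules instructions around the branch; hardware
 * branch prediction plus out-of-order execution learns it at runtime. */
double score(const double *samples, size_t n, double threshold)
{
    double acc = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (samples[i] > threshold)          /* taken 1% or 99% of the time?  */
            acc += samples[i] * samples[i];  /* "hot" path for some inputs    */
        else
            acc -= samples[i];               /* "hot" path for other inputs   */
    }
    return acc;
}
```

Profile-guided compilation can recover some of this, but only for workloads that resemble the training run.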

Most probably not. Intel's roadmap for x86 ended at Netburst. There was never a 64-bit model on there. The plan was to make Pentium 4 the last x86, and keep the platform 32-bit only. Then migrate to Itanium for 64-bit and get rid of all the legacy that is x86 and its software.

You forgot the Intel processor codenamed Tejas; it was why Socket 775 was initially referred to by some as Socket T. IIRC it would have stretched the instruction pipeline to even more ridiculous lengths to increase the CPU frequency. Tejas was also expected to be power hungry and to dissipate enough heat that it became the primary driver behind Intel's creation of the (now dead) BTX form factor. Whenever Intel is on some computing standards board, you can expect that Intel more or less runs the show (ramming awful standards through) or that it will be a dead standard, e.g. USB-IF, PCI-SIG and the 1394 Trade Association (mostly dead on the PC, but still alive and kicking on the 787, the F-35, and the Space Shuttle).

Pentium Pro is when x86 stopped being CISC and started being 'RISC'.

Not quite. x86 processors still use variable-length encoding (VLE) for their instructions, and those instructions still end up getting cracked into smaller (RISC-like) micro-ops. The use of two-operand instructions is unlike actual RISC ISAs, which use three operands. x86 still has 30-year-old baggage and cruft that needs to be supported; 16-bit unreal mode, anyone? The low number of registers (eight in 32-bit mode) and their strange usage restrictions are also not very RISC-like. As for the Pentium Pro, it lopped off a few 16-bit registers, making the mixed 32-bit and 16-bit code in Windows 95 run like molasses. The PPro then added features like a superscalar design and out-of-order execution; that does not make it a RISC design, it just means it shares features with RISC processors, but that does not make it RISC.
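As a rough illustration of the two-operand point (my own sketch in C, with illustrative assembly in the comments rather than verbatim compiler output): the same statement maps to a destructive two-operand form on x86, where the destination register is also a source, but to a non-destructive three-operand form on a typical RISC.

```c
/* A trivial function to show the operand-format difference.
 * The assembly in the comments is illustrative, not actual compiler output. */
int sum3(int a, int b, int c)
{
    int t = a + b;   /* x86:  mov eax, a        RISC:  add t, a, b           */
                     /*       add eax, b        (one instruction, three      */
                     /*       (destination is    operands, nothing gets      */
                     /*        also a source)    overwritten)                */
    return t + c;    /* x86:  add eax, c        RISC:  add result, t, c      */
}
```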

That was way too much nerdrage on my part, sorry to everyone but Scali.
 
There are more issues with the Itanic that have not been mentioned.

I said sticking to x86 was a bad thing, not that going to Itanium was a good thing.
But at the very least it would get us to move away from a lot of the x86 legacy, and also make it easier to move towards other architectures than Itanium (the move from x86 to Itanium would mean that software developers have to put more thought into making their products cross-platform in general, much like how it is already common in the *nix world).

You forgot the Intel processor codenamed Tejas; it was why Socket 775 was initially referred to by some as Socket T.

As far as I recall, Tejas was just another iteration of the Netburst architecture, and was still 32-bit. Hence I didn't forget it.

Not quite. x86 processors still use variable-length encoding (VLE) for their instructions

Obviously, since they are still x86 processors, they still use the same x86 instruction encoding, which is and always will be variable length.
I have to admit, I chuckled.

That was way too much nerdrage on my part, sorry to everyone but Scali.

Indeed, you argued some pretty useless points. Your argument was basically that they're still running x86 code, which is still CISC. Thank you, Captain Obvious :)
That was my entire point. You want to get away from x86 so you can get rid of the cruft and all the extra complexity in translating x86 to something a modern backend wants to execute.

And minus points for your reading comprehension.
I never said x86 is RISC, I said it is 'RISC'. You go on a rant because you misinterpreted that. Only an idiot would think that x86 processors no longer run x86 code :)

Oh, and you had some factual errors as well. PPro wasn't the first superscalar x86 processor; the Pentium was already superscalar.
 
Eh, what's the difference?

Well, the quotes :)
I think it's pretty obvious that x86 is a CISC instruction set and as such no x86-compatible chip can ever be purely RISC.
But the Pentium Pro adopted RISC-like features in its backend and decoupled the x86 CISC decoding frontend from the actual execution. So 'RISC' as in RISC-like (I think some people call it a CRISC architecture, Complex Reduced Instruction Set Computer).
This allowed Intel to get much more competitive with actual RISC processors as others have also mentioned before.
In that sense it 'shook the world' because x86 processors were slowly starting to eat into the workstation/server market.
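For what it's worth, here's a tiny C sketch of that decoupling idea (purely conceptual, nothing like the real PPro decoder): a complex x86-style read-modify-write instruction gets cracked by the frontend into simple load/ALU/store micro-ops, and only those micro-ops ever reach the out-of-order backend.

```c
#include <stdio.h>

/* Conceptual sketch only: crack an x86-style "add [mem], reg"
 * (read-modify-write) into RISC-like micro-ops for the backend.
 * Real decoders are far more involved; this only shows the idea. */
typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

typedef struct {
    uop_kind kind;
    int      dst;   /* internal (renamed) register number, -1 = memory */
    int      src;
} uop;

/* Frontend: one CISC instruction in, three micro-ops out. */
static int crack_add_mem_reg(int reg, uop *out)
{
    out[0] = (uop){ UOP_LOAD,  100, -1  };  /* load temp100 from memory */
    out[1] = (uop){ UOP_ADD,   100, reg };  /* temp100 += reg           */
    out[2] = (uop){ UOP_STORE, -1,  100 };  /* store temp100 to memory  */
    return 3;
}

int main(void)
{
    uop buf[3];
    int n = crack_add_mem_reg(1, buf);      /* crack "add [mem], r1" */
    for (int i = 0; i < n; i++)
        printf("uop %d: kind=%d dst=%d src=%d\n", i, buf[i].kind, buf[i].dst, buf[i].src);
    return 0;
}
```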
 
Well, the quotes :)
I think it's pretty obvious that x86 is a CISC instruction set and as such no x86-compatible chip can ever be purely RISC.
But the Pentium Pro adopted RISC-like features in its backend and decoupled the x86 CISC decoding frontend from the actual execution. So 'RISC' as in RISC-like (I think some people call it a CRISC architecture, Complex Reduced Instruction Set Computer).
This allowed Intel to get much more competitive with actual RISC processors as others have also mentioned before.
Yes, but a large register set, fixed-length encoding, a three-operand instruction format, and any kind of orthogonality are all missing from the x86 ISA. With that you miss the entire point of a RISC ISA, and thus no x86 processor can actually be termed "RISC". The features in the backend of the PPro still don't make it a "RISC" processor. In fact, a superscalar, out-of-order design is not necessarily a RISC feature at all; it is something many RISC processors share with the PPro, but not something one must have to be a RISC processor. The POWER6, Cell, and Xbox 360 are POWER processors that are scalar and in-order, and I would find it hard to call those processors CISC.

To get a good idea of what I mean, try coding something simple in MIPS assembly using SPIM, or an equivalent simulator for POWER, and then try the same task in an x86 simulator. The two will look nothing alike, especially if you use the x87 FPU with double-precision floats. Mmm, a register stack with push and pop.

In that sense it 'shook the world' because x86 processors were slowly starting to eat into the workstation/server market.

That had actually been happening since before the release of the 386, which was one of many processors that had been massively overhyped by Intel and failed to deliver for servers. It too supposedly had support for multiple processors. The push with the PPro was to get corporate bean counters to switch to Windows NT 4.0 Server, or later to Windows 2000 Server, which were the only reasons to use a PPro server. Well, running worms like "Nimda" from the internet was another reason to own a PPro, just not a very good one.
 
Yes, but a large register set, fixed-length encoding, a three-operand instruction format, and any kind of orthogonality are all missing from the x86 ISA. With that you miss the entire point of a RISC ISA, and thus no x86 processor can actually be termed "RISC".

That's what I said:
"I think it's pretty obvious that x86 is a CISC instructionset and as such no x86-compatible chip can ever be purely RISC."

In fact, a superscalar, out-of-order design is not necessarily a RISC feature at all; it is something many RISC processors share with the PPro, but not something one must have to be a RISC processor.

I never claimed they were.

The POWER6, Cell, and Xbox 360 are POWER processors that are scalar and in-order, and I would find it hard to call those processors CISC. To get a good idea of what I mean, try coding something simple in MIPS assembly using SPIM, or an equivalent simulator for POWER, and then try the same task in an x86 simulator. The two will look nothing alike, especially if you use the x87 FPU with double-precision floats. Mmm, a register stack with push and pop.

You look like an idiot discussing these obvious things as if you're the only one who knows them.
 
So I stand somewhat corrected. (Sure, I'm aware that SPARC is alive, but relatively speaking it's restricted to more of a niche.)

Also, keep in mind that x86 still seems to be pushed heavily by Intel, but unless you want to run Windows 7 on your smartphone with a screen so small it is unusable, the job will usually be done by an ARM-based product instead, and not one from nVidia. Surprisingly, the only reason to use an Atom processor is to run Windows 7, painfully slowly.

What about running closed-source games and pretty much every piece of standard desktop software not included in Debian packages, on a 10" laptop, without spending ages recompiling stuff and without fucking around to get it to boot? (Assuming you have to for a reason and find yourself in a PC environment, with only tools made for the PC.)

There will be ARM laptop computers with some merit if they're to be used for media playback, notes, presentations, the web, etc., with long battery life. But it might just be simpler to use a PC laptop instead.

And as for your point about smartphones: perhaps they are indeed pretty useless. They'll be awkward, with terrible text input and a tiny screen, no matter what kind of CPU they're running.
Sure, stay away from that bling-bling crap unless you have a real need for it, for whatever unusual reason.
 
There are more issues with the Itanic that have not been mentioned. The Itanium ISA attempts to improve speed through implicit parallelism, using VLIW (Very Long Instruction Word).
The parallelism in EPIC is explicitly pointed out by the code through the template and stop bits, and by how the ISA defines valid instruction packets as not being rife with dependences.
Implicit parallelism is what x86 chips derive as they analyse the instructions they load and check for dependences that IA-64 would have spelled out.

Whatever other issues there are with Itanium, I wouldn't dispute that, from a hardware point of view, having explicit indicators can be pretty handy.
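As a rough sketch of that difference (my own example; the ";;" in the comments just stands in for where an IA-64 stop bit would fall): the first two statements below are independent and could sit in one instruction group, while the third depends on both, so IA-64 code would mark that boundary itself. An out-of-order x86 core finds the same structure only by scanning the decoded instructions for register dependences at runtime.

```c
/* Illustrative only: where the instruction-level parallelism lives.
 * On IA-64/EPIC the compiler groups the independent operations and
 * places a stop (";;") before the dependent one; on x86 the hardware
 * rediscovers the same dependence graph every time it decodes the code. */
long combine(long a, long b, long c, long d)
{
    long x = a + b;   /* independent of y: same instruction group      */
    long y = c * d;   /* independent of x: same instruction group  ;;  */
    return x ^ y;     /* depends on both: must come after the stop     */
}
```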
 
What about running closed-source games and pretty much every piece of standard desktop software not included in Debian packages, on a 10" laptop, without spending ages recompiling stuff and without fucking around to get it to boot? (Assuming you have to for a reason and find yourself in a PC environment, with only tools made for the PC.)

I have a question along these lines. I've been looking at a netbook that uses an ARM processor but runs Windows CE. Does this mean it will run standard Windows apps?
 
I have a question along these lines. I've been looking at a netbook that uses an ARM processor but runs Windows CE. Does this mean it will run standard Windows apps?

It will only run applications compiled for ARM (or .NET stuff) as far as I know.
 
Which netbook is that?
It's part of the choice: if you have the option of an x86 netbook with 2h30 of battery life or an ARM netbook with 8h of battery life, the latter can be an excellent choice. You can install a command-line Debian Lenny and apt-get the LXDE desktop; then you get an excellent software environment supported on most non-x86 platforms.

I would miss running Warcraft III, Counter-Strike and other games (I would want to use a netbook for a bit of LAN gaming, with or without an external display).

You can still run DOSBox, or x86 Windows under QEMU, maybe Bochs if you really need to (ARM host support for QEMU is under development). Expect DOS apps that need 386-level hardware to run well, and major slowness for an emulated Windows.

A promising CPU architecture is the Loongson 3, from the line of Loongson (or Godson) Chinese-designed, MIPS-like CPUs.
http://en.wikipedia.org/wiki/Loongson
There's a Loongson 2 netbook already (standard netbook hardware, except for the CPU and chipset): a 64-bit CPU built on 90 nm, pretty advanced (out-of-order execution, like the ARM Cortex-A8, VIA Nano and most x86 CPUs; unlike other ARM cores and Intel's Atom).

Loongson 3 is a 65 nm quad-core version, soon to be available and slated for 10 W power consumption. It also features hardware-assisted x86 emulation (which reminds me of Itanium), said to deliver about 70% of native performance. It's mainly meant for Chinese servers and supercomputers, and probably multi-user computers.
But imagine that on a netbook :).

Software-wise, the x86 emulation is to work with QEMU, and some Linux-Windows kernel hybrid is being developed using the work of Wine and ReactOS. Pretty insane and promising!
http://en.wikipedia.org/wiki/Linux_Unified_Kernel
 
Great read, thanks.

6502 FTW! :p

[image: Apple_II_in_Terminator_6502.jpg]
 
Well, the PPro never gained much traction, and the Pentium was just an evolution of the x86 architecture. Had it not been for Intel wanting to differentiate itself from the competition (by trademarking the Pentium name), it would have been called the 586.

Regards,
SB
 