25 chips "that shook the world"

Discussion in 'PC Industry' started by Simon F, May 8, 2009.

  1. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    I'm not sure if we should see that as an advantage. It means we're stuck with the inefficient x86 architecture for many more years.

    Most probably not. Intel's roadmap for x86 ended at Netburst. There was never a 64-bit model on there. The plan was to make Pentium 4 the last x86, and keep the platform 32-bit only. Then migrate to Itanium for 64-bit and get rid of all the legacy that is x86 and its software.

    Pentium Pro is when x86 stopped being CISC and started being 'RISC'.
     
  2. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,426
    Likes Received:
    10,320
    Eh, from everything I've read the Opteron was a huge success in 4P, 8P, and larger configurations, especially when compared to the best Xeons that Intel could muster at the time.

    Enough so that even large server companies (Sun Microsystems, for example) migrated many of their multi-processor servers from SPARC (or other RISC-based processors) to Opterons.

    Regards,
    SB
     
  3. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Except I was talking about 1-2p systems.

    I think that has more to do with the Pentium Pro (of which the Opteron is a derivative, using the same approach of pre-decoding x86 into micro-ops and executing them out of order), and with the scale of the x86 market in general keeping the cost per unit down.
    Opteron just happened to be the best x86 server CPU at the time, but x86 was already eating into server sales long before Opteron, and even during Opteron's heyday, Xeon was the second most popular server CPU.
     
  4. MeltedRabbit

    Newcomer

    Joined:
    Jun 4, 2003
    Messages:
    13
    Likes Received:
    0
    There are more issues with the Itanic that have not been mentioned. The Itanium ISA attempts to improve speed by the use of implicit parallelism via VLIW (Very Long Instruction Word) encoding. This would be fine if anyone could produce a decent compiler for the Itanium, which has not happened and probably will not happen. Compiling for a VLIW architecture requires the compiler to know which branches will generally be taken and which will not. Compilers are better than humans at producing fast assembly code in nearly all cases, but compilers are not clairvoyant: they do not have the data the program will run on, so they cannot know how the code will actually execute. Without that knowledge, optimizing a program for the Itanium is very hard on the compiler.

    Also, keep in mind that x86 still seems to be pushed heavily by Intel, but unless you want to run Windows 7 on a smartphone with a screen so small it is unusable, a smartphone will usually be built around an ARM-based product (and not one from nVidia). Surprisingly, the only reason to use an Atom processor is to run Windows 7, painfully slowly.
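    The static-scheduling burden described above can be illustrated with a toy bundler. This is only a sketch under made-up assumptions (a fixed three-slot bundle, instructions as (destination, sources) pairs; real IA-64 uses templates and stop bits and is far more involved): the compiler must prove independence at compile time and pad with nops wherever it cannot.

```python
# Toy VLIW bundler: packs independent ops into fixed-width bundles.
# Hypothetical format, for illustration only -- not real IA-64 encoding.
BUNDLE_WIDTH = 3

def bundle(instrs):
    """instrs: list of (dest, [sources]) register ops.
    Greedily packs ops that don't depend on earlier ops in the
    current bundle; flushes (nop-padded) when a dependence appears."""
    bundles, current, written = [], [], set()
    for dest, srcs in instrs:
        dependent = dest in written or any(s in written for s in srcs)
        if dependent or len(current) == BUNDLE_WIDTH:
            current += ["nop"] * (BUNDLE_WIDTH - len(current))
            bundles.append(current)
            current, written = [], set()
        current.append((dest, srcs))
        written.add(dest)
    if current:
        current += ["nop"] * (BUNDLE_WIDTH - len(current))
        bundles.append(current)
    return bundles

# r3 depends on r1 and r2, so it cannot share their bundle:
prog = [("r1", ["a", "b"]), ("r2", ["c", "d"]), ("r3", ["r1", "r2"])]
for b in bundle(prog):
    print(b)
```

    If a branch sits between such ops, the compiler cannot know at compile time which path will execute, which is exactly why profile data (or clairvoyance) matters so much for a VLIW compiler.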

    You forgot the Intel processor codenamed Tejas; it was why Socket 775 was initially referred to by some as Socket T. IIRC it would stretch the instruction pipeline to even more ridiculous lengths to increase the CPU frequency. Tejas was also expected to be power hungry and to dissipate enough heat that it became the primary driver behind Intel's creation of the (now dead) BTX form factor.

    Whenever Intel is on some computing standards board, you can expect either that Intel is more or less running the show (ramming awful standards through) or that it will be a dead standard, e.g. USB-IF, PCI-SIG and the 1394 TA (FireWire is mostly dead on the PC, but still alive and kicking on the 787, the F-35, and the Space Shuttle).

    Not quite: x86 processors still use variable-length encoding (VLE) for their instructions, and the instructions still end up getting cracked into smaller (RISC-like) micro-ops. The use of only two-operand instructions is unlike any actual RISC ISA, which use three operands. x86 still has 30-year-old baggage and cruft that needs to be supported; 16-bit unreal mode, anyone? The low number of registers, eight in 32-bit mode, with their strange usage restrictions, is also not very RISC-like. As for the Pentium Pro, its poor handling of 16-bit register accesses made the mixed 32-bit and 16-bit code in Windows 95 run like molasses. Then the PPro added features like a superscalar design and out-of-order execution, but this does not make it a RISC design; it makes it share features with RISC processors, and that alone does not make it RISC.

    That was way too much nerdrage on my part; sorry to everyone but Scali.
     
  5. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    I said sticking to x86 was a bad thing, not that going to Itanium was a good thing.
    But at the very least it would get us to move away from a lot of the x86 legacy, and also make it easier to move towards architectures other than Itanium (the move from x86 to Itanium would mean that software developers have to put more thought into making their products cross-platform in general, much as is already common in the *nix world).

    As far as I recall, Tejas was just another iteration of the Netburst architecture, and was still 32-bit. Hence I didn't forget it.

    Obviously, since they are still x86 processors, they still use the same x86 instruction encoding, which is and always will be variable-length.
    I have to admit, I chuckled.

    Indeed, you argued some pretty useless points. Your argument was basically that they're still running x86 code, which is still CISC. Thank you, Captain Obvious :)
    That was my entire point: you want to get away from x86 so you can get rid of the cruft and all the extra complexity of translating x86 into something a modern backend wants to execute.

    And minus points for your reading comprehension.
    I never said x86 is RISC; I said it is 'RISC'. You went on a rant because you misinterpreted that. Only an idiot would think that x86 processors no longer run x86 code :)

    Oh, and you had some factual errors as well. The PPro wasn't the first superscalar x86 processor; the Pentium already was superscalar.
     
    #45 Scali, May 20, 2009
    Last edited by a moderator: May 20, 2009
  6. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    Eh, what's the difference?
     
  7. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Well, the quotes :)
    I think it's pretty obvious that x86 is a CISC instruction set, and as such no x86-compatible chip can ever be purely RISC.
    But the Pentium Pro adopted RISC-like features in its backend and decoupled the x86 CISC decoding frontend from the actual execution. So 'RISC' as in RISC-like (I think some people call it a CRISC architecture: Complex Reduced Instruction Set Computer).
    This allowed Intel to become much more competitive with actual RISC processors, as others have mentioned before.
    In that sense it 'shook the world', because x86 processors slowly started eating into the workstation/server market.
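    As a rough illustration of that decoupling, here is a toy "cracker" that splits a CISC-style read-modify-write instruction into load/ALU/store micro-ops. The instruction syntax and micro-op names are invented for this sketch; the real Pentium Pro decoder is of course vastly more complex.

```python
def crack(instruction):
    """Split one x86-style instruction string into RISC-like micro-ops.
    Toy model: 'op dst, src' where a [bracketed] destination is memory."""
    op, dst, src = instruction.split(maxsplit=2)
    if dst.startswith("["):          # memory destination: load/modify/store
        addr = dst.strip("[],")
        return [f"load  tmp0, {addr}",
                f"{op}   tmp0, tmp0, {src}",   # two-operand x86 becomes a
                f"store {addr}, tmp0"]         # three-operand micro-op
    reg = dst.rstrip(",")            # register destination: single ALU micro-op
    return [f"{op}   {reg}, {reg}, {src}"]

# a read-modify-write CISC instruction cracks into three micro-ops:
for uop in crack("add [counter], eax"):
    print(uop)
```

    Once cracked, the simple micro-ops can flow through a RISC-style out-of-order backend, which is the whole point of the decoupled frontend.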
     
  8. MeltedRabbit

    Newcomer

    Joined:
    Jun 4, 2003
    Messages:
    13
    Likes Received:
    0
    Yes, but the x86 ISA lacks the large register file, the fixed-length encoding, the three-operand instruction format, and any kind of orthogonality; without those you miss the entire point of a RISC ISA, and thus no x86 processor can actually be termed "RISC". The features in the backend of the PPro still don't make it a "RISC" processor. In fact, a superscalar, out-of-order design is not necessarily a RISC feature at all; it is a feature many RISC processors share with the PPro, but not one a processor must have in order to be RISC. The POWER6, Cell, and the Xbox 360's CPU are POWER-family processors that are in-order, and I would find it hard to call those processors CISC. To get a good idea of what I mean, try coding something simple in MIPS assembly using SPIM (or an equivalent simulator for POWER), and then try the same task in an x86 simulator. Your code will look nothing alike, especially if you use the x87 FPU with double-precision floats. Mmm, a register stack with push and pop.
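    That register stack can be sketched in a few lines. This is a toy model with method names mimicking x87 mnemonics (fld, faddp, fmulp); it only shows the stack discipline, not real x87 semantics (no eight-register limit, no precision control).

```python
class X87Stack:
    """Toy model of a stack-based FPU: arithmetic works on the top
    of the stack instead of on named three-operand registers."""
    def __init__(self):
        self.st = []            # st[-1] plays the role of ST(0)

    def fld(self, value):       # push a value onto the stack
        self.st.append(value)

    def faddp(self):            # ST(1) += ST(0), then pop
        top = self.st.pop()
        self.st[-1] += top

    def fmulp(self):            # ST(1) *= ST(0), then pop
        top = self.st.pop()
        self.st[-1] *= top

# evaluate (2.0 + 3.0) * 4.0 the stack-machine way
fpu = X87Stack()
fpu.fld(2.0); fpu.fld(3.0); fpu.faddp()
fpu.fld(4.0); fpu.fmulp()
print(fpu.st[-1])   # -> 20.0
```

    Compare this push/pop choreography with a flat RISC register file, where the same expression is two three-operand instructions with no stack bookkeeping at all.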

    That had actually been happening since before the release of the 386, which was one of many processors massively overhyped by Intel that failed to deliver for servers. It too supposedly had support for multiple processors. The push with the PPro was to get corporate bean counters to switch to Windows NT 4.0 Server, and later to Windows 2000 Server, the only reasons to use a PPro server. Well, running worms like Nimda from the internet was another reason to own a PPro, just not a very good one.
     
  9. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    That's what I said:
    "I think it's pretty obvious that x86 is a CISC instructionset and as such no x86-compatible chip can ever be purely RISC."

    I never claimed they were.

    You look like an idiot discussing these obvious things as if you're the only one who knows them.
     
  10. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    so I stand somewhat corrected. (Sure, I'm aware that SPARC is alive, but it's relatively restricted to a niche.)

    What about running closed-source games, and pretty much every standard desktop application not included in Debian packages, on a 10" laptop, without spending ages recompiling stuff and without fucking around to get it to boot? (Assuming you have to for a reason and find yourself in a PC environment, with only tools made for the PC.)

    There will be ARM laptops with some merit if they're used for media playback, notes, presentations, web, etc., with long battery life. But it might just be simpler to use a PC laptop instead.

    And as for your point about smartphones: perhaps they are indeed pretty useless. They'll be awkward, with terrible text input and a tiny screen, no matter what kind of CPU they're running on.
    Sure, stay away from that bling-bling crap unless you have a real need for it, for whatever unusual reason.
     
  11. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    The parallelism in EPIC is explicitly pointed out by the code through the template and stop bits, and by how the ISA defines valid instruction packets as not being rife with dependences.
    Implicit parallelism is what x86 chips derive by analysing the instructions they load and checking for the dependences that IA-64 would have spelled out.

    Whatever other issues there are with Itanium, I wouldn't dispute that from a hardware point of view, having explicit indicators can be pretty handy.
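    The explicit-versus-implicit distinction can be made concrete with a small sketch. The instruction format here is invented ((destination, sources) pairs) and real IA-64 grouping rules are richer, but the idea is the same: the compiler marks group boundaries with stops, so the hardware never has to rediscover the dependences itself.

```python
def place_stops(instrs):
    """instrs: list of (dest, [sources]) register ops.
    Returns the indices after which a 'stop' must be placed, because a
    later instruction reads or rewrites a register written earlier in
    the same (parallel) group -- the analysis an EPIC compiler does
    statically, and an out-of-order x86 redoes in hardware every cycle."""
    stops, written = [], set()
    for i, (dest, srcs) in enumerate(instrs):
        if dest in written or any(s in written for s in srcs):
            stops.append(i - 1)   # close the previous group
            written = set()
        written.add(dest)
    return stops

prog = [("r1", ["r2", "r3"]),   # r1 = r2 + r3
        ("r4", ["r5", "r6"]),   # independent: same group as above
        ("r7", ["r1", "r4"])]   # reads r1 and r4: needs a stop first
print(place_stops(prog))        # -> [1]
```

    Shipping the stop markers in the binary trades compiler effort (and code that must be rescheduled for new pipelines) for simpler issue hardware.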
     
  12. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    I have a question along these lines: I've been looking at a netbook that uses an ARM processor but runs Windows CE. Does this mean it will run standard Windows apps?
     
  13. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    It will only run applications compiled for ARM (or .NET stuff) as far as I know.
     
  14. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
  15. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    Which netbook is that?
    It's part of the choice: if you have the option of an x86 netbook with 2h30 of battery life or an ARM netbook with 8h, the latter can be an excellent choice. You can install a command-line Debian Lenny and apt-get the LXDE desktop; then you get an excellent software environment supported on most non-x86 platforms.

    I would miss running Warcraft III, Counter-Strike and other games (I would want to use a netbook for a bit of LAN gaming, with or without an external display).

    You can still run DOSBox, or x86 Windows under QEMU, maybe Bochs if you really need to (ARM host support for QEMU is under development). Expect DOS apps that need 386-level hardware to run well, and major slowness for an emulated Windows.

    A promising CPU architecture is the Loongson 3, from the line of Loongson (or Godson) Chinese-designed, MIPS-like CPUs.
    http://en.wikipedia.org/wiki/Loongson
    There's a Loongson 2 netbook already (standard netbook hardware, except for the CPU and chipset): a 64-bit CPU built on 90nm, and pretty advanced (out-of-order execution, like the VIA Nano and most x86 CPUs; unlike the ARM Cortex-A8 and the Intel Atom).

    Loongson 3 is a 65nm quad-core version, soon to be available and slated for about 10W power consumption. It also features hardware-assisted x86 emulation (which reminds me of Itanium), said to deliver about 70% of native performance. It's mainly meant for Chinese servers and supercomputers, and probably multi-user computers.
    But imagine that on a netbook :).

    Software-wise, the x86 emulation is supposed to work with QEMU, and a Linux/Windows kernel hybrid is being developed using the work of Wine and ReactOS. Pretty insane and promising!
    http://en.wikipedia.org/wiki/Linux_Unified_Kernel
     
  16. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    #56 Davros, May 22, 2009
    Last edited by a moderator: May 22, 2009
  17. spacemonkey

    Newcomer

    Joined:
    Jul 16, 2008
    Messages:
    163
    Likes Received:
    0
    Great read, thanks.

    6502 FTW! :razz:

     
    #57 spacemonkey, Jun 23, 2009
    Last edited by a moderator: Jun 23, 2009
  18. spacemonkey

    Newcomer

    Joined:
    Jul 16, 2008
    Messages:
    163
    Likes Received:
    0
    On the subject of home-grown non-American chips - whatever happened to the Russian "Elbrus" chip? They showed some silicon running x86 code last year (video at the bottom of this page http://www.espacial.org/miscelaneas/computacion/elbrus_mcst1.htm), but I haven't heard anything since. Did it ever make it into production?
     
  19. Mobius1aic

    Mobius1aic Quo vadis?
    Veteran

    Joined:
    Oct 30, 2007
    Messages:
    1,715
    Likes Received:
    293
    Great article, thanks for posting! :wink:
     
  20. Bludd

    Bludd Experiencing A Significant Gravitas Shortfall
    Veteran

    Joined:
    Oct 26, 2003
    Messages:
    3,794
    Likes Received:
    1,479
    Location:
    Funny, It Worked Last Time...
     