IBM PowerPC-based x86 CPU?

Discussion in 'Console Technology' started by AzBat, Oct 12, 2004.

  1. ERP

    ERP
    Veteran

    Joined:
    Feb 11, 2002
    Messages:
    3,669
    Likes Received:
    49
    Location:
    Redmond, WA

    Yeah, well, the definition of RISC has been bent a bit over the years, mostly by Motorola, who've even gone to the extent of combining instructions together with a boolean flag to reduce the actual instruction count.

    "RISC" processors generally follow these rules to one extent or another:

    Large register file
    Orthogonality of registers
    All instructions the same size (i.e. simple instruction decode rules)

    Interestingly, if you were to call the zero page a register file, the 6502 gets really close to these ;)

    RISC is a bit like OOP: it's no longer considered to be the panacea it was once thought to be.
     
  2. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
    Oh great code master, please teach me teh ways of teh 68000 so that I may pass my course. 8)

    :wink:
     
  3. Farid

    Farid Artist formely known as Vysez
    Veteran Subscriber

    Joined:
    Mar 22, 2004
    Messages:
    3,844
    Likes Received:
    108
    Location:
    Paris, France
    Then it will be "emulation", hence it will run slower than a "pure" x86 architecture. So no threat to Intel's Money Hat.

    Why? Programming an XMP and an Xbox Live interface (something they must do anyway, since they had/have to provide an optimized SDK for Xenon Live, hence the interface won't require tons of work) is trivial.

    And about BC: if they plan to incorporate BC without Nvidia's approval, then crap will hit the fan. Either Nvidia sues MS, or Xenon BC won't work with games that used Nvidia "extensions" (term used loosely); in both cases it will be bad for MS. :)
     
  4. Fox5

    Veteran

    Joined:
    Mar 22, 2002
    Messages:
    3,674
    Likes Received:
    5
    Yeah, but the little-endians were far more advanced in technology than the big-endians, thus showing their culture, despite being a bureaucracy, could accomplish far more. Sort of how the Pentium 4s and Athlon 64s can kick a G5's butt, and the only modern high-end x86 processor the G5 stands up to is the now-defunct Athlon XP. The Athlon XP and G5 may do well in completely different categories, but I think if you average out the ones where the G5 destroys the XP and the XP destroys the G5, you come out about equal. (You were talking about Gulliver's Travels, right?)

    BTW, I thought IBM dropped out of the x86 race because they just couldn't make processors as fast or as cheap as Intel, AMD, and Cyrix, not because they suddenly decided x86 was no good. IBM may have the cheap thing down by now, but I don't think they have fast yet. (But perhaps cheap + cheap = dual cores cheaper than what AMD and Intel can do.)

    How's x86-64?

    Why not? (BTW, AFAIK Quake 1 and 2 were programmed in C; did Carmack ever use C++?)
     
  5. ERP

    ERP
    Veteran

    Joined:
    Feb 11, 2002
    Messages:
    3,669
    Likes Received:
    49
    Location:
    Redmond, WA
    If you went to school in the late 80s, RISC was king and OO programming was going to solve all of our coding problems. You'd just buy a bunch of components, derive new functionality, plug them together, and you'd have your application.

    Now it's pretty much accepted that OOP is just a tool and that deep hierarchies are generally a bad thing from a code maintenance standpoint. Abstraction is only good if it provides functionality or simplifies the code.

    Basically anything taken to an extreme is bad, and in the late 80s both RISC and OOP were taken that way... The one-instruction machine, anyone? (The instruction is RMove.)

    In the real world, using the best of a variety of design ideas usually provides a better overall solution than trying to make one idea the solution for all problems.
     
  6. Fox5

    Veteran

    Joined:
    Mar 22, 2002
    Messages:
    3,674
    Likes Received:
    5
    Well, I'm in high school right now, and the teachers who went to school in the 80s talk about how BASIC was all that was taught.
    And the only programming languages my school has taught have been OOP: C++ (which has been dropped), Java, and VB.NET are the three, and we always get a textbook that goes on and on about the virtues of OOP. I've only had Java and VB.NET, but I do know that the FrontPage-like interface we get to use for the VB.NET compiler makes it much easier to create the crappy time-wasting programs we had to make in Java.

    BTW, have you used Java 3D? I did some stuff with it for my final project in the AP class, but it was so confusing; it seemed to have very little in common with the rest of Java. (It's just an OpenGL/Direct3D wrapper, isn't it?) I hate BranchGroups...
     
  7. ERP

    ERP
    Veteran

    Joined:
    Feb 11, 2002
    Messages:
    3,669
    Likes Received:
    49
    Location:
    Redmond, WA
    Most modern languages provide OOP tools, but design philosophy using those tools is now significantly different. In the bad old days deep hierarchies were the norm and inheritance was often preferred over containment. These days the reverse is generally true.

    My personal view on code design is basically KISS (Keep it simple stupid), don't oversolve a problem, don't obfuscate with abstractions, make it readable and easy to debug. You'll probably spend more time debugging it than writing it anyway.

    I went to school circa 88/89 in England and we were taught primarily Pascal and Modula, and later Ada. I can't imagine an English university teaching BASIC.
     
  8. Fox5

    Veteran

    Joined:
    Mar 22, 2002
    Messages:
    3,674
    Likes Received:
    5
    I think my teachers were graduating in the early 80s, so they probably would have started college in the late 70s.
     
  9. one

    one Unruly Member
    Veteran

    Joined:
    Jul 26, 2004
    Messages:
    4,838
    Likes Received:
    167
    Location:
    Minato-ku, Tokyo
    You can do OOP in C with layered structures and vtables (of course you have to define them and assemble them to look like OOP classes yourself, sometimes using dirty macros to make them more readable). Using C, you can maintain portability and can go without OOP overhead if necessary. But who cares about OOP overhead today? Unless you're building cross-platform tools, OOP is pretty standard. If you don't like Java, things like the D programming language may be for you...

    As for IBM's x86-compatible processor, I couldn't find it mentioned anywhere other than the said column. If it's specifically meant for Xbox 2, why are the devkits Macs running NT for PPC? A Wintel PC would be enough for a devkit if Xbox 2 were x86. If it's for the value market, how will IBM compete against VIA on price? Or is it a power-efficient chip like Transmeta's or the Pentium M?
     
  10. BobbleHead

    Newcomer

    Joined:
    Sep 24, 2002
    Messages:
    58
    Likes Received:
    2
    615!

    Sounds like a revival of the PowerPC 615... maybe they'll actually release it this time.
     
  11. Dio

    Dio
    Veteran

    Joined:
    Jul 1, 2002
    Messages:
    1,758
    Likes Received:
    8
    Location:
    UK
    Well, no English university would have taught BASIC between 1985 and about 1995 because 99% of the class would have known it better than the lecturers anyway.

    C, and even worse C++, are absolute nightmares to teach, which is why universities have always preferred to use more restrictive forms of both. For C++ that generally became Java.
     
  12. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,661
    Likes Received:
    1,114
    That is clearly a programmer's view. It's next to impossible to build wide superscalar implementations of the 68K family. The reason is that each instruction has an extension field to indicate whether the instruction holds additional fields (which themselves can have an extension), thus forcing sequential decoding of each individual instruction and making superscalar decoding really hard.

    x86 is much more RISC in this respect: you can tell the length of the instruction by looking at the very first byte. Every instruction can have a prefix, so in reality you have to look at two bytes, but that is still a lot simpler than 68K.

    Motorola realized this themselves and made the cut down 68K Coldfire cores.

    Cheers
    Gubbi
     
  13. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,661
    Likes Received:
    1,114
    And perhaps the most important one: RISCs are all load/store machines (at most one memory operation per instruction).

    Cheers
    Gubbi
     
  14. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,661
    Likes Received:
    1,114
    Put to the extreme, yes.

    But what the RISC wave did was look at the ISA, compiler and microarchitecture as an integral whole. They analyzed CISC code (VAX and S/370) and found that compilers hardly ever used the complex instructions and almost exclusively (like 98%) used a simple "RISC-like" subset of instructions.

    They also found that in some cases functions synthesized from the simple instructions ran faster than the microcoded complex ones on the same architecture.

    They looked at all this and then systematically weeded out everything that didn't improve performance across a wide variety of workloads. That is the real legacy of the RISC wave in the 80s, and it carried over to all newer ISAs (and implementations).

    Cheers
    Gubbi
     
  15. arjan de lumens

    Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,274
    Likes Received:
    50
    Location:
    gjethus, Norway
    Umm, no. I spent some time a few years ago looking at x86 instruction length determination, and x86 is MUCH worse than you make it out to be. For most (but not all!) instructions, there is a byte called 'ModR/M' that needs to be examined before you can say anything meaningful about instruction length (some addressing modes require an integer value after the rest of the instruction, others don't, and the ModR/M byte determines the addressing mode); in 32-bit mode, depending on the addressing mode, there is often also a 'SIB' byte that needs to be examined as well. Usually the ModR/M byte is the second byte and the SIB byte, when it exists, is the third byte of the instruction, but exceptions exist, such as the large number of instructions beginning with the byte '0F' (practically all MMX/SSE/3DNow! instructions), where ModR/M is usually the third byte and SIB the fourth byte. It is also entirely possible to have an instruction with as many as 4 or 5 prefix bytes before all the opcode bytes, all of which have to be checked sequentially, and some of which can change the meaning of the ModR/M byte or the size of integer arguments. So in the worst case, x86 requires you to check about 8 or 9 bytes before you can say anything conclusive about the length of the instruction you are trying to decode.
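    To make the ModR/M problem concrete, here is a deliberately incomplete length decoder covering just two 32-bit-mode instructions (NOP and MOV r/m32, r32). Even for this tiny hand-picked subset, the length depends on bits inside the second byte; a real decoder would also have to handle prefixes, the 0F escape map, immediates, and the SIB base=5 disp32 corner case, all of which this sketch ignores.

```c
#include <stddef.h>
#include <stdint.h>

/* Return the encoded length of a couple of 32-bit-mode x86
 * instructions, or 0 if the opcode is outside our toy subset. */
static size_t x86_len(const uint8_t *p)
{
    uint8_t op = p[0];

    /* 0x90 = NOP: a rare case where the first byte really is enough. */
    if (op == 0x90)
        return 1;

    /* 0x89 = MOV r/m32, r32: length depends on the ModR/M byte. */
    if (op == 0x89) {
        uint8_t modrm = p[1];
        uint8_t mod = modrm >> 6;       /* addressing-mode bits */
        uint8_t rm  = modrm & 7;        /* register/memory field */
        size_t len = 2;                 /* opcode + ModR/M */

        if (mod != 3 && rm == 4)
            len += 1;                   /* SIB byte follows */
        if (mod == 1)
            len += 1;                   /* 8-bit displacement */
        else if (mod == 2 || (mod == 0 && rm == 5))
            len += 4;                   /* 32-bit displacement */
        /* NOTE: ignores the SIB base==5 disp32 case. */
        return len;
    }

    return 0;                           /* not handled here */
}
```

    So "89 C0" (mov eax, eax) is 2 bytes, "89 45 08" (mov [ebp+8], eax) is 3, and "89 04 24" (mov [esp], eax) needs a SIB byte and is also 3; none of this is knowable from the 0x89 alone.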

    The only reason a modern x86 processor can decode more than one instruction per clock is that it stores loads and loads of predecode information alongside its L1 I-cache to help it find out where each instruction starts.
     
  16. arjan de lumens

    Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,274
    Likes Received:
    50
    Location:
    gjethus, Norway
    Both ARM and PowerPC have instructions that can do multiple memory load/stores in one instruction; what makes them load/store machines is more that you cannot do both arithmetic AND load/store on the same value within the same instruction.
     
  17. Inane_Dork

    Inane_Dork Rebmem Roines
    Veteran

    Joined:
    Sep 14, 2004
    Messages:
    1,987
    Likes Received:
    46
    I'm pretty sure ERP is already acquainted with the basic history and design goals of RISC chips.
     
  18. darkblu

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,642
    Likes Received:
    22
    Superscalar decoding has little to do with superscalar execution. Take, for example, the PPro/P6 (and every x86 architecture since): you have a single decoder that handles the task pretty much the same way as its forefathers, decoding the IA-32 madness sequentially, but instead of producing moderate amounts of decoded ops it outputs insane amounts of uops. Superscalarity takes place only after those get reshuffled, regrouped and register-renamed, and eventually sent out to the multiple uop execution ports.
     