Official: ATI in XBox Next

x86 got ahead of its competitors partly because of the amount of money invested, yes, but it was competing with them on a level playing field ... they were all superscalar SISD processors (with some SIMD units bolted on). The advantage other single-stream superscalar processors could get from a better ISA has become more and more limited over time; with an ever greater percentage of transistors dedicated to book-keeping tasks just to keep the small number of actual computational resources busy, it became easier and easier to hide a little decoder overhead (especially with an I-cache advantage to ease the pain).

x86(-64) is simply not that bad an ISA anymore for its application. Yeah, it is crufty ... who cares? Superscalar processing is one big pile of cruft.
 
Saem said:
I see you didn't comprehend what I'm saying. I'm talking about MPUs. Not a specific one, just MPUs, in particular high-performance parts. MPUs are designed with various architecture philosophies, some of which are formulated based on the typical workload that'll be encountered by the MPU. Now, whenever companies have decided to take more of a brainiac route than a speed-demon route, they've tended NOT to meet their targets. This is a fairly well established trend, AFAIK.

Hmm. I would say that you are claiming a trend from a very small sample indeed and using very loose terminology to go with it. Are the Power4/Power5 "brainiac"? Is the Itanic? Is even the last iteration of the Alpha (the classic "speed" example) all that clear cut anymore? To then go on and claim that "brainiacs" have been less successful, and from that make predictions about PS3 performance... - sorry, but this is very weak argumentation.

From my viewpoint, a new architecture such as this has two major advantages compared to what we have in the x86 world.
1. It is formulated without the restrictions imposed by 70s/80s lithographic techniques, and is targeted at a different problem space than clerical office work.
2. It is formulated with efficient and flexible multiprocessing in mind.

Most of the discussion I've seen here so far has focussed on the benefits that point 1 can give. The quote above from Saem is a good example.

I would argue that an equally relevant (although at this stage still not useable) basis for predictions is to look at multiprocessing trends, architectures, and efficiencies for different calculational tasks. I would further claim that with time, point 2 will grow in importance. And if I were really to go out on a limb, I'd say that this is also where the PS3 can initially run into some difficulties both due to less than ideal programming tools, and less than ideal programmers.

Differences in multiprocessing architectures have not been very relevant in consumer space. Cost has been prohibitive for cheap devices and computers, and the dual-processor machines that have made inroads at the higher end of PC space have all been shared-bus SMP boxes - a model that worked reasonably well for the relatively inexpensive light multiuser use where it was introduced way back, and that has been used in PCs because it's dirt cheap to implement and the scaling limitations have been irrelevant.
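To put a rough number on why those scaling limitations stop being irrelevant once you go past a couple of processors, here is a generic back-of-the-envelope sketch (plain Amdahl's law, not specific to any of the machines above; the parallel fractions are made-up illustration values):

[code]
# Hedged illustration: Amdahl's law speedup for n processors,
# given a fraction p of the work that can run in parallel.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):        # assumed parallel fractions (illustrative only)
    for n in (2, 8, 64):          # processor counts
        print(f"p={p:.2f}  n={n:2d}  speedup={speedup(p, n):5.2f}")

# At p=0.5 a dual CPU gets ~1.3x and 64 CPUs barely reach 2x, which is why a
# cheap shared-bus dual was "good enough" for PCs while large-scale
# multiprocessing needs a very different hardware and software design.
[/code]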

We are at a very interesting point in time, because single-chip multiprocessing is becoming viable. So far, the implementations we have seen have grafted two (more or less previous-generation) CPU cores onto one die. The programming models, OS support et cetera have remained unchanged. It would be much more interesting, and potentially much more rewarding, to go back to the drawing board and design a new processor with on-chip multiprocessing as a basic design parameter. Plus, have a software environment that is built around this new architecture.
And that is what is being done here.

Making PC-space-inertia-based predictions is probably valid for PCs (and thus the XBox) for the foreseeable future. For the Cell and its descendants, however, it would probably make more sense to look at supercomputing development: why the ever faster single-processor vector machines first went to coarse-grained parallelism, and later were largely given up on in favour of various flavours of parallel-processor machines. There have been a couple of decades' worth of hardware/software/tool development aimed at making massively parallel systems more widely applicable.

Applying PC-space inertia arguments to the PS3 is not likely to be relevant other than as far as programmer training and habits are concerned. But a programmer who isn't salivating at the prospect of setting his intellectual teeth into something as exciting as this first step on a new road should probably give serious thought to whether he shouldn't consider a career selling hot dogs instead.

All IMHO.

Entropy
 
Long long ago... toy story...
The RenderFarm is one of the most powerful rendering engines ever assembled, comprising 87 dual-processor and 30 four-processor SPARCstation 20s and an 8-processor SPARCserver 1000. The RenderFarm has the aggregate performance of 16 billion instructions per second -- its total of 300 processors represents the equivalent of approximately 300 Cray 1 supercomputers.
Sun's SPARCstation 20 Model HS14MP delivers rendering performance more than three times that of a Silicon Graphics Indigo2® Extreme
Sun is the vendor with a clear advantage in all four areas. Sun's SPARCstation 20, Ultra 1, and Ultra 2 systems provide the only desktop multiprocessing solution available. In terms of rendering power per dollar, Sun systems costs less than half that of the competition. Each can scale up to four processors, ranging from the single or dual-processor 75 MHz SuperSPARC-II to the dual or quad-processor 100 MHz HyperSPARC in the SPARCstation 20, and one to two processors in the Sun Ultra 1 and Ultra 2 systems. This provides a four-fold range of performance in the same package - and Sun's modular architecture and pricing makes it economical to add new, faster processors as needed. Furthermore, Sun's multithreaded Solaris operating environment allows seamless integration with existing networks and systems.

Then it is said that came the rings...

Weta old quote...
reported that the facility will also need to expand its render farm from 400 processors to 700.
More recent quote...
"And since we are replacing a lot of workstations or adding second workstations to a lot of artists' desks, I think we're going to go from about 800 processors to about 1400-1500 processors overall at Weta, if you include desktop and central resources."
Weta Digital technical head Scott Houston compares a quadrupling of processing power between the first and second Lord of the Rings films.
It quadrupled...
The artists work at IBM Intellistation workstations, powered by dual 2.2GHz Intel Xeon processors
As said in the previous posts, the REAL-world perf. of Pixar's new 1024 2.8 GHz Xeon processors most likely hovers around the perf. of the patent...
In essence, what is described in the patent could have more REAL-world perf than that which was used for The Two Towers... if I'm reading correctly...
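Rough numbers behind that hunch (a back-of-the-envelope sketch; the 4 single-precision FLOPs/cycle peak per Xeon and the ~10% sustained efficiency are assumptions, not published figures):

[code]
# Hedged sketch: peak vs. "real world" throughput of a 1024-CPU 2.8 GHz Xeon farm.
cpus = 1024
clock_hz = 2.8e9
flops_per_cycle = 4     # assumed single-precision SSE peak per Xeon (not a published figure)
efficiency = 0.10       # assumed sustained fraction of peak for renderer-style code

peak = cpus * clock_hz * flops_per_cycle
sustained = peak * efficiency
print(f"peak      ~{peak / 1e12:.1f} TFLOPS")       # ~11.5 TFLOPS
print(f"sustained ~{sustained / 1e12:.1f} TFLOPS")  # ~1.1 TFLOPS, i.e. in the
                                                    # neighbourhood of the patent's 1 TFLOPS
[/code]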

And then came a King...
Weta Digital uses Linux in its render farm of over 2000 CPUs.
Maybe they've increased some more since the time of this quote, but fact is I doubt they tossed out their old stuff... Thus what is described in the patent will not be that far off the real world perf of this...

Now the question is....

It is said that dedicated silicon exceeds the performance of a general-purpose processor by several orders of magnitude in some graphics-related tasks/ops... Do these render farms do the entire process on general-purpose CPUs?.....

IF yes, you do realize what I'm implying... ;)

Edited
 
Grall:

Except, Cell isn't your typical MPU, so your trend goes out the window at square one, and even if you argue that you weren't thinking of Cell when you said that, well then what's the point of your argument? We could just as well start discussing the weather or something. You have to apply your example to a scenario, or else it becomes MEANINGLESS.

You did NOT comprehend what I'm saying, stop trying to claim that you are. What you fail to understand is the level of abstraction I'm talking about. This doesn't allow you to even begin to comprehend the trend I'm talking about. Additionally, the sample size is rather large, over a long period of time which has had large shifts in computing philosophies encompassed within it. This isn't MY observation pulled out of my ass; this is something which many people in the know (especially engineers in the field) will either say themselves or agree with.

Cell will also not be required to scale up in clock speed as time passes, unlike a PC (or almost any other) CPU would be expected to. All that's required is that it can be manufactured with reasonable economics at target speed from the outset. Any further scaling headroom created from there on is simply gravy on top which can be put into further increasing yields and reducing power instead of improving clock speed...

WHAT?!?!?!?! That's crazy. Intel and AMD don't blow so much money on design just to get the CPU out at a speed and then go, "Yay gravy!", when it scales. The scaling is part of the engineering. They want it not only to hit the clock speed but to sustain at least a certain amount of scaling. They'd lose insane amounts of money otherwise. You're obviously not aware of how things are carried out in the PC MPU space and especially so in the high-performance MPU space. That gravy garbage isn't what happens. I can't believe one of the people in the know didn't tear you a new one over that -- Vince ;). Maybe they ignore your posts. ;)

Additionally, your PIII vs Athlon example is way off. The PIII was the last breath of the PPro architecture. People were surprised it scaled as it did. The Athlon didn't have a peer because, unlike the PIII, it had not shown its legs in the 0.18u Cu process.

General:

Brainiac and speed demon are peer-relative terms; if you don't understand these terms, go do the necessary research.

Regardless, I'm not talking about specific instances, I'm on a higher level of abstraction; if you can't see it, too bad. Nonetheless, that's been the case, and it makes sense why it is the case.

Now, all this doesn't mean CELL will "flop" horribly for those that have trouble thinking in anything but binary, all this means is that CELL is likely going to have trouble meeting its target in typical workloads.

x86 got ahead of its competitors partly because of the amount of money invested, yes, but it was competing with them on a level playing field ... they were all superscalar SISD processors (with some SIMD units bolted on). The advantage other single-stream superscalar processors could get from a better ISA has become more and more limited over time; with an ever greater percentage of transistors dedicated to book-keeping tasks just to keep the small number of actual computational resources busy, it became easier and easier to hide a little decoder overhead (especially with an I-cache advantage to ease the pain).

I don't see how it was a level playing field. Intel had significant control of the market, more money and more installed resources when it comes to production et al. The times when they didn't, they found one way or another to get ahead and spanked the slow-moving, we're-better-than-you behemoths. Exactly why NT never went to other architectures for long.

x86(-64) is simply not that bad an ISA anymore for its application. Yeah, it is crufty ... who cares? Superscalar processing is one big pile of cruft.

Out of curiosity, what would you rather see if not superscalar?
 
Saem said:
General:
Brainiac and speed demon are peer-relative terms; if you don't understand these terms, go do the necessary research.
Are you talking to me here?
Brainiac vs Speed Demon are hardly scientific terms. They were used in some circles to denote either a focus on extracting the maximum level of instruction-level parallelism, or on optimising the pipeline for maximum clock. Never really more than terms for indicating tendencies towards these extremes, they were pretty useless to begin with, and these days all general-purpose CPUs are relatively similar in their OOO-ness and all the rest that was used to determine these CPU design tendencies. The reason I used the Itanium above was that it falls outside these terms. The Cell approach does the same - the degree of "brainiacness" of a particular computational unit is largely irrelevant. These terms deserve to be buried and forgotten in the old RealWorldTech forum archives.

Regardless, I'm not talking about specific instances, I'm on a higher level of abstraction; if you can't see it, too bad. Nonetheless, that's been the case, and it makes sense why it is the case.
????
Now, all this doesn't mean CELL will "flop" horribly for those that have trouble thinking in anything but binary, all this means is that CELL is likely going to have trouble meeting its target in typical workloads.
Perhaps you'd like to specify what exactly that target is, that you predict it will fail to meet? You can throw in an explanation of what you mean by "typical workloads" while you're at it.

Entropy
 
Perhaps you'd like to specify what exactly that target is, that you predict it will fail to meet? You can throw in an explanation of what you mean by "typical workloads" while you're at it.

1TFLOP/Ops and games.

Are you talking to me here?

Nope, sorry, for someone else.
 
Saem said:
You did NOT comprehend what I'm saying, stop trying to claim that you are.

Yes, I *did* understand, as far as you actually bothered to explain your theories anyway. As you refused - and still refuse - to actually apply it to a practical example, I did that work for you. Sorry if I tangented off in a different direction than you, but really, you gave me no other choice.

Now next step is up to you. Do your homework buddy, I won't accept any more sneering remarks from you that I did not understand unless you actually SHOW what it is you believe I did not understand!

SHOW EXACTLY how you think I am not understanding you. Unless you bother to do that, this discussion will go nowhere.

What you fail to understand is the level of abstraction I'm talking about. This doesn't allow you to even begin to comprehend the trend I'm talking about.

And still you fail to explain yourself. :D

It's easy to blow people off with big words like 'level of abstraction' and by telling people to 'do the necessary research' (which just shows you're either bad at explaining what you really mean or you're plain lazy if you expect people to have to RESEARCH to find the point of your posts), but just as a reminder here buddy, the reason we have discussion boards is actually to DISCUSS things.

Sitting there with your nose up in the air, saying in a snotty manner that people don't comprehend, is neither polite nor particularly constructive. It doesn't further any kind of discussion, it's just annoying and irritating. I kindly suggest you try harder to explain yourself in the future...

Additionally, the sample size is rather large, over a long period of time which has had large shifts in computing philosophies encompassed within it. This isn't MY observation pulled out of my ass; this is something which many people in the know (especially engineers in the field) will either say themselves or agree with.

Your trends not pulled out of your ass (or so you say) notwithstanding, what I want to know is how you intend to actually APPLY this trend to anything SPECIFIC. It's all well and good hearing you say brainy microprocessors tend to miss their target; even if we take you on face value and assume this to be the truth, then SO WHAT?! Where does this take us in the context of our discussion?!

Unless you're talking about a specific CPU, your trend is meaningless, pointless AND irrelevant. It's just words without value unless you actually go a step further and start being specific. Of course, unless you're talking about an existing CPU, you'd just be speculating!

Again, taking Cell as an example, neither you nor anyone else here knows if it will end up being a "brainy" architecture or not, and even if it IS, none of us know whether it is in danger of missing its target because of it being "brainy", whatever that target may be! As far as I know, there hasn't even been a tapeout of the completed chip yet, much less first silicon, so how could anyone discuss any possible problems with its design with any reasonable level of confidence? (This rules out Deadmeat's speculation, of course. *snicker*)

Cell will also not be required to scale up in clock speed as time passes, unlike a PC (or almost any other) CPU would be expected to.
WHAT?!?!?!?! That's crazy. Intel and AMD don't blow so much money on design just to get the CPU out at a speed and then go, "Yay gravy!", when it scales.

Difference is of course, Cell will be used in fixed hardware. It will run at X MHz when it launches, and it will STILL run at X MHz when it is taken out of production a number of years later. Heck, the Cell architecture even incorporates a mechanism to insert dummy instructions to ensure code runs at the 'designed' speed when executed on faster hardware than the software was programmed for...

Whatever device you buy that contains a Cell processor, you will not be able to buy a faster replacement chip and plop in there like you can with a PC; this will be especially true for PS3. Hence, no need for the chip to scale up in MHz as time passes. Yield better and suck less power, sure, but wringing more speed out of the chips would be pointless. After all, PS2 EE still runs at just under 300MHz despite having been in production for years now and having gone through like five silicon revisions, getting smaller and cooler each successive time...

You're obviously not aware of how things are carried out in the PC MPU space

Except, Cell - which is the chip I was talking about - isn't aimed at the PC space, something I clearly stated. Try to keep up, will you? I won't have to repeat myself as much that way. ;)

I can't believe one of the people in the know didn't tear you a new one over that -- Vince ;).

Ha ha. Well, maybe THEY actually read my post properly... :rolleyes:

Additionally, your PIII vs Athlon example is way off. The PIII was the last breath of the PPro architecture. People were surprised it scaled as it did.

Maybe they shouldn't have been, considering the engineering resources Intel sunk into re-engineering the chip and tweaking it to get rid of critical speed paths etc. PPro and its derivatives went from - I think - 133 to 1333MHz, unless you want to count the mobile bananas core also (which borrows elements of netburst and is undoubtedly tweaked on the inside to scale better), then it's like 1600-ish MHz at the moment. Athlon started at 500MHz and is now at 2400 or such, right? Not that much of a difference in absolute numbers (and to AMD's advantage actually, lol), especially considering AMD's considerably smaller level of resources...
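Just to make the comparison explicit, here is the arithmetic using the figures quoted above (the clock numbers are the rough ones from this post, not verified specs):

[code]
# Hedged sketch: clock scaling of the P6/PPro line vs. the Athlon,
# using the approximate figures quoted in the post above.
ppro_start, ppro_end = 133, 1333   # MHz, the "I think" figures from the post
k7_start, k7_end = 500, 2400       # MHz, rough Athlon figures from the post

for name, lo, hi in (("PPro/P6", ppro_start, ppro_end), ("Athlon", k7_start, k7_end)):
    print(f"{name}: +{hi - lo} MHz absolute, {hi / lo:.1f}x relative")

# PPro/P6: +1200 MHz absolute, 10.0x relative
# Athlon:  +1900 MHz absolute,  4.8x relative
# The absolute gain is actually larger for AMD, even though the ratio is smaller.
[/code]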

I think my example works just fine! :LOL:

The Athlon didn't have a peer because, unlike the PIII, it had not shown its legs in the 0.18u Cu process.

Sorry, I don't quite follow you. I believe AMD reached .18 before Intel, and unlike Intel, it used copper interconnects long before its rival switched over. Saying it didn't show its legs seems strange to me, since that's exactly what it seems to be doing! :) In reality, it was the P3 not showing its legs for a while, but of course Intel decided to dump the P3 for the (initially) far worse performing P4, so that battle became rather moot really...

all this means is that CELL is likely going to have trouble meeting its target in typical workloads.

...Which is just your baseless speculation, nothing more.

Your words are nothing but FUD at this point in time. :)

*G*
 
zidane1strife said:
Long long ago... toy story...
The RenderFarm is one of the most powerful rendering engines ever assembled, comprising 87 dual-processor and 30 four-processor SPARCstation 20s and an 8-processor SPARCserver 1000. The RenderFarm has the aggregate performance of 16 billion instructions per second -- its total of 300 processors represents the equivalent of approximately 300 Cray 1 supercomputers.
Sun's SPARCstation 20 Model HS14MP delivers rendering performance more than three times that of a Silicon Graphics Indigo2® Extreme
Sun is the vendor with a clear advantage in all four areas. Sun's SPARCstation 20, Ultra 1, and Ultra 2 systems provide the only desktop multiprocessing solution available. In terms of rendering power per dollar, Sun systems costs less than half that of the competition. Each can scale up to four processors, ranging from the single or dual-processor 75 MHz SuperSPARC-II to the dual or quad-processor 100 MHz HyperSPARC in the SPARCstation 20, and one to two processors in the Sun Ultra 1 and Ultra 2 systems. This provides a four-fold range of performance in the same package - and Sun's modular architecture and pricing makes it economical to add new, faster processors as needed. Furthermore, Sun's multithreaded Solaris operating environment allows seamless integration with existing networks and systems.

Then it is said that came the rings...

Weta old quote...
reported that the facility will also need to expand its render farm from 400 processors to 700.
More recent quote...
"And since we are replacing a lot of workstations or adding second workstations to a lot of artists' desks, I think we're going to go from about 800 processors to about 1400-1500 processors overall at Weta, if you include desktop and central resources."
Weta Digital technical head Scott Houston compares a quadrupling of processing power between the first and second Lord of the Rings films.
It quadrupled...
The artists work at IBM Intellistation workstations, powered by dual 2.2GHz Intel Xeon processors
As said in the previous posts, the REAL-world perf. of Pixar's new 1024 2.8 GHz Xeon processors most likely hovers around the perf. of the patent...
In essence, what is described in the patent could have more REAL-world perf than that which was used for The Two Towers... if I'm reading correctly...

And then came a King...
Weta Digital uses Linux in its render farm of over 2000 CPUs.
Maybe they've increased some more since the time of this quote, but fact is I doubt they tossed out their old stuff... Thus what is described in the patent will not be that far off the real world perf of this...

Now the question is....

It is said that dedicated silicon exceeds the performance of a general-purpose processor by several orders of magnitude in some graphics-related tasks/ops... Do these render farms do the entire process on general-purpose CPUs?.....

IF yes, you do realize what I'm implying... ;)

Edited

I see what you're implying. But you're wrong. These render farms work on one frame at a time. The frames use massive amounts of RAM and hard-drive space to be stored. There is no AI or physics to worry about.

The PS3 (which, even if the Cell chip is capable of 1 TFLOPS, will never reach that) has very, very limited RAM space compared to these render farms, and has many things it needs to do. It also has to draw 60 fps, whereas the render farms can use days to draw one frame. So no, the PS3 will not come close to Lord of the Rings. We may see Toy Story graphics with the next gen, but even then I doubt it.
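The frame-time gap alone is worth putting a number on (a rough sketch; the one-hour-per-frame figure is taken from later in this thread and is itself hearsay):

[code]
# Hedged sketch: realtime frame budget vs. offline render time per frame.
realtime_budget_s = 1.0 / 60.0    # 60 fps target
offline_frame_s = 60 * 60         # assumed ~1 hour per offline frame (hearsay figure)

ratio = offline_frame_s / realtime_budget_s
print(f"realtime budget: {realtime_budget_s * 1000:.1f} ms per frame")
print(f"offline render:  {offline_frame_s} s per frame")
print(f"gap: ~{ratio:,.0f}x")     # ~216,000x, i.e. more than five orders of magnitude
[/code]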
 
Saem said:
Out of curiosity, what would you rather see if not superscalar?

Scalar and (V)LIW execution combined with SIMD where necessary; on bang for buck (MIPS per area), superscalar architectures cannot compete.
 
MfA:
Hasn't VLIW proven itself by now to be a pretty impractical concept really?

Itanic is a monster of a chip that is expensive in both $ and watts, while Crusoe is just plain lame from a performance standpoint.

In comparison, standard CPUs seem far more well-balanced if given equivalent resources in die area and transistors...

*G*
 
We may see Toy Story graphics with the next gen, but even then I doubt it.

Well... approx. 100 75-100 MHz Sun cr@po CPUs, once the low % real-world perf is taken into account, is probably way, way, way below 1 TFLOPS...

I'd guess (I've not read about their perf., this is a wild shot) they're probably capable of a few GFLOPS in the real world...
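For what it's worth, a crude sanity check of that guess, using the ~300 processors and 75-100 MHz clocks from the Sun quote (the one-FLOP-per-cycle peak and the sustained efficiency are pure assumptions):

[code]
# Hedged sketch: ballpark FLOPS of the Toy Story era SPARC render farm.
processors = 300          # from the Sun quote above
clock_hz = 100e6          # upper end of the quoted 75-100 MHz range
flops_per_cycle = 1       # assumed peak per processor
efficiency = 0.15         # assumed sustained fraction of peak

peak = processors * clock_hz * flops_per_cycle
print(f"peak      ~{peak / 1e9:.0f} GFLOPS")              # ~30 GFLOPS
print(f"sustained ~{peak * efficiency / 1e9:.1f} GFLOPS") # a few GFLOPS, consistent
                                                          # with the guess above
[/code]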

The question still remains... I wonder...

PS: I've heard that for one of the LotR films they usually took something like an hour per frame....

60 min / 4 = 15 min = 900 sec... if the answer to my question is yes.... and they're indeed slowed down several orders of magnitude due to not using dedicated silicon... this could be brought into realtime...
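Spelling that chain out (a sketch; "several orders of magnitude" is taken here as three, which is an assumption):

[code]
# Hedged sketch: the per-frame time implied by the chain of reasoning above.
hour_per_frame_s = 60 * 60        # ~1 hour per offline frame, as heard
quadrupled = hour_per_frame_s / 4 # after quadrupling the farm: 900 s per frame
dedicated_gain = 1000             # "several orders of magnitude", assumed to mean three

per_frame_s = quadrupled / dedicated_gain
print(f"~{per_frame_s:.1f} s per frame")  # ~0.9 s: within shouting distance of
                                          # interactive rates, though still well
                                          # short of the 1/60 s a game needs
[/code]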

edited
 
VLIW loses any advantage it has once you try to scale to wider-issue execution than what the instruction set was designed for. In the embedded realm VLIW has always done fine.

The Crusoe is ok for what it does ... there is simply not a lot of demand for what it does.
 
Saem said:
Are you talking to me here?

Nope, sorry, for someone else.

Apologies for my tone then.
I would tend to agree that the TFlops number is probably "guaranteed not to be exceeded". On the other hand, I would argue that this matters little. Development tools and environment will be very important, and will be extremely interesting to see, if such a look will even be realistically possible outside development houses.

Entropy
 
There is no AI or physics to worry about.

I think MASSIVE does require AI and physics... no?
Also, all that particle/fluid/hair/etc. movement likely requires physics calc... no?

So did these studios also use GPUs? I've not heard about 'em... so... hmmmm....

PS: I divided by 4 since they quadrupled the previous perf...
Another quote, for the Return of the King...
Weta is concerned that the 32-bit P3 and P4 processors may no longer be sufficient. As a result, they are considering moving to 64-bit processors.

One guy over there (the site with the news, just in case...) wrote this comment....
A 3D environment literally is one that could use 8-way, 64-way CPUs and shared memory: many CPUs working with the same cache would do much better, benefiting from the prior calculations done by other CPUs, than a single CPU running the whole CGI scene by its lonesome.
... I wonder...

edited

edited 2
 
So did these studios also use GPUs? I've not heard about 'em... so... hmmmm....

ATI and NVIDIA are used more and more frequently in studios these days. In fact, ATI had a hand in the rendering on LotR:TTT.
 
Oh ok, but what about the "Fellowship"

Another ancient quote...
Then there’s the rendering system…. The renderer alone is run 24 hours a day rendering out the 1000 odd shots that were required for TTT. It consists of 192 Dual Pentium 1 GHz and 448 Dual 2.2 GHz processors. A total of 1280 processors running at approximately 2,355 GHz…. Mmmmm…..
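The arithmetic in that quote checks out, for what it's worth (just re-running the quoted numbers):

[code]
# Sanity check of the quoted Weta farm figures.
dual_1ghz_boxes = 192
dual_2_2ghz_boxes = 448

cpus = 2 * (dual_1ghz_boxes + dual_2_2ghz_boxes)
aggregate_ghz = 2 * dual_1ghz_boxes * 1.0 + 2 * dual_2_2ghz_boxes * 2.2
print(f"{cpus} processors, ~{aggregate_ghz:.0f} GHz aggregate")  # 1280 processors, ~2355 GHz
[/code]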

And this comment (dunno if it's true)....

They interviewed one of the guys at Pixar last year who said that from Toy Story to Monsters Inc. the rendering power had gone up a few hundred fold

A few hundred fold... and that was last year, prior to moving to a new render farm that should be FAR more powerful... and near the patent....
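Taking the earlier few-GFLOPS guess for the Toy Story farm at face value (which it may well not deserve), the multiplication goes like this:

[code]
# Hedged sketch: "a few hundred fold" on top of "a few GFLOPS".
toy_story_gflops = 3     # assumed, from the rough guess earlier in the thread
growth = 300             # "a few hundred fold", assumed to be ~300

monsters_inc_gflops = toy_story_gflops * growth
print(f"~{monsters_inc_gflops / 1000:.1f} TFLOPS")  # ~0.9 TFLOPS: roughly the
                                                    # ballpark of the patent's claim
[/code]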

And there is this other comment...

Massive was described to us as "AI on steroids" and by all accounts it is.
 
The PS3 (which, even if the Cell chip is capable of 1 TFLOPS, will never reach that) has very, very limited RAM space compared to these render farms, and has many things it needs to do. It also has to draw 60 fps, whereas the render farms can use days to draw one frame. So no, the PS3 will not come close to Lord of the Rings. We may see Toy Story graphics with the next gen, but even then I doubt it.

Dude, if all we see with PS3-Xbox2-N5 is realtime graphics on par with the CG cut scenes in games, I'll be very satisfied. Look at the Soul Calibur 2 opening CG or the Tekken 4 arcade CG. If that is done in realtime on the next consoles in 2-3 years, WHOA!
 
Yes, I *did* understand, as far as you actually bothered to explain your theories anyway. As you refused - and still refuse - to actually apply it to a practical example, I did that work for you. Sorry if I tangented off in a different direction than you, but really, you gave me no other choice.

Now next step is up to you. Do your homework buddy, I won't accept any more sneering remarks from you that I did not understand unless you actually SHOW what it is you believe I did not understand!

SHOW EXACTLY how you think I am not understanding you. Unless you bother to do that, this discussion will go nowhere.

This is where you don't understand. As I've said before, this is a trend. There aren't really any HARD numbers, only trends that manifest themselves. That's how I put it the first time and the second, and you didn't get it.

Your trends not pulled out of your ass (or so you say) notwithstanding, what I want to know is how you intend to actually APPLY this trend to anything SPECIFIC. It's all well and good hearing you say brainy microprocessors tend to miss their target; even if we take you on face value and assume this to be the truth, then SO WHAT?! Where does this take us in the context of our discussion?!

I'll reiterate, seeing as this is a general trend, there are no specific cases. This is just the trend that emerges.

My point in all this is that CELL will likely have trouble hitting its target. By how much, well that can't be established without more information.

Imagine this: someone comes up to you and says, hey, who do you think is going to win the game tonight, A or B? Now let's say you don't know much about A or B except their playing philosophies. Additionally, looking at the tendency (READ: trend) of other match-ups with similar philosophies, you find that A's philosophy is more successful. Thus, you'd likely choose A.

Again, taking Cell as an example, neither you nor anyone else here knows if it will end up being a "brainy" architecture or not, and even if it IS, none of us know whether it is in danger of missing its target because of it being "brainy", whatever that target may be! As far as I know, there hasn't even been a tapeout of the completed chip yet, much less first silicon, so how could anyone discuss any possible problems with its design with any reasonable level of confidence? (This rules out Deadmeat's speculation, of course. *snicker*)

I think it's more of a brainy approach in that they're going for TLP for the typical workload -- games. Entropy raises a good point, that this is a very "fresh" architecture. From top to bottom really: they'll have a new CPU, a new OS and, to some extent, a new coding philosophy, so it'll be interesting to see which languages and/or which incarnations of languages end up taking root. If they're not old skool, then there is a lot of promise, but if they are, then there are issues. I don't think programmers are a huge issue here; they'll just have to adjust to new ways of thinking, but they're not really as "unarmed" as Entropy implies them to be -- hope I read that right.

Sorry, I don't quite follow you. I believe AMD reached .18 before Intel, and unlike Intel, it used copper interconnects long before its rival switched over. Saying it didn't show its legs seems strange to me, since that's exactly what it seems to be doing! In reality, it was the P3 not showing its legs for a while, but of course Intel decided to dump the P3 for the (initially) far worse performing P4, so that battle became rather moot really...

Yeah, poorly worded on my part. The point is that the K7 was a fresh architecture while the PIII was just the PPro on its last breath.

Scalar and (V)LIW execution combined with SIMD where necessary; on bang for buck (MIPS per area), superscalar architectures cannot compete.

I'm a big fan of (V)LIW; I couldn't quite say why, but it has something to do with the predictable instruction bundle. I just like symmetry, I suppose. As for the Transmeta approach, well, it seems Intel is doing something similar with its x86 emulator that'll be out for Itaniums.
 