Sandy Bridge

Once upon a time, supposedly faster/easier-to-design/more-efficient/cheaper RISC CPUs (from various vendors) were meant to wipe x86 CPUs off the face of the planet.
 
And then x86 turned into RISC CPUs with instruction decoders :)
And despite the infamous overhead, x86 CPUs were/are still very competitive.
I am not saying that what happened in the past tells us exactly what will happen in the future (it's mostly about power consumption now...), but at the same time I am reading some predictions that don't sound particularly profound.
 
I'd expect this to be configurable in some ways. There's no point for the display controller to go through the L3, and probably not for texture requests either (I certainly still expect the GPU itself to have small, local caches). But it might be useful for, say, a compressed Z buffer.
It still adds more complexity. For example, how do you avoid the L3 for RAMDAC accesses? Do you use multiple memory controllers on the ASIC, a bypass or what?
 
I'd bet on the quad-core and eight-core communist 64-bit MIPS with low-level x86 emulation :).
Though they're intended for servers, low-power multi-user computers, and supercomputers rather than high-end desktops.

Good luck with that. First, their designs aren't even on par with EV6 levels of performance. Second, I have little faith they can pull off a level and speed of emulation that is competitive.

If it isn't an x86, it doesn't stand a chance. I used to think otherwise, but I have come around over the years. The problem is you can't run the software. And if you can't run the software, you are nothing but expensive sand. And no, being able to run a port of Linux doesn't work, even for the server market. People have tried, and failed, with that model. The ecosystem around x86 is just too big at this point. It's like someone trying to take out iTunes, but an even harder task.
 
And despite the infamous overhead, x86 CPUs were/are still very competitive.
I am not saying that what happened in the past tells us exactly what will happen in the future (it's mostly about power consumption now...),
Exactly.
A lot of old wisdom needs to be reevaluated.
In all honesty, the shift has been gradual, and will continue for some time, but the assumption that Moore's Law will solve your problems has been leaking at the seams for a while now, and it is likely to grow worse. Depending on your outlook, this can be seen as a huge problem or as a time of change and possibilities. Intel, to their credit, seem to be well aware of the situation but haven't really been able to translate that into marketable products with high ASPs. (It could be argued that pushing complexity and allowing galloping power draws is what will ultimately lead to the demise of the entire add-in graphics industry.)
 
Good luck with that. First, their designs aren't even on par with EV6 levels of performance. Second, I have little faith they can pull off a level and speed of emulation that is competitive.

If it isn't an x86, it doesn't stand a chance. I used to think otherwise, but I have come around over the years. The problem is you can't run the software. And if you can't run the software, you are nothing but expensive sand. And no, being able to run a port of Linux doesn't work, even for the server market. People have tried, and failed, with that model. The ecosystem around x86 is just too big at this point. It's like someone trying to take out iTunes, but an even harder task.

I strongly agree and disagree. :)
IF you want to run Microsoft Windows, then AMD and Intel have very refined solutions for that, and more abstracted emulation probably won't be cost/performance effective.
However, Microsoft Windows isn't necessary for arguably the vast majority of people. And once you drop Windows, you can drop x86 and avoid paying both the Microsoft tax and the rather high power and cost of x86. By now it should be clear that x86 won't be supplanted from the high end, but rather from the low. For most people, computers are increasingly getting to the point of "good enough" - and the computer industry is holding its breath waiting for the bottom to fall out of the market. Netbooks are an indication of where it's heading, and their remarkable market share grab only serves to underline the writing on the wall.
 
Good luck with that. First, their designs aren't even on par with EV6 levels of performance. Second, I have little faith they can pull off a level and speed of emulation that is competitive.

If it isn't an x86, it doesn't stand a chance. I used to think otherwise, but I have come around over the years. The problem is you can't run the software. And if you can't run the software, you are nothing but expensive sand. And no, being able to run a port of Linux doesn't work, even for the server market. People have tried, and failed, with that model. The ecosystem around x86 is just too big at this point. It's like someone trying to take out iTunes, but an even harder task.

And yet you failed at your own point by using a weird dead architecture as a performance comparison :LOL:. I would have liked "VIA C3", "586 clone", "Atom", or "K7".

Anyway, an unsubstantiated claim put it at "up to 70% of native performance". Nothing competitive, sure, but presumably your Linux kernel, your desktop, and most of your software would run MIPS-native, while hardware-assisted emulation ran your x86 or Windows stuff.
Performance and watt targets are pretty much unrelated to Sandy Bridge, but I answered that quad-core ARM comment and don't really know why.
 
I'd bet on the quad-core and eight-core communist 64-bit MIPS with low-level x86 emulation :smile:.
Gosh, no. But ARM does play a role in this ASP erosion via netbooks, although certainly not the only role. And this isn't only about ARM stealing volume, but also Intel being unexpectedly aggressive pricing-wise with Pineview on one hand, and allowing further penetration of Moorestown into netbooks on the other. The indirect effect of this, combined with fearful consumer sentiment, will be polarization hurting mid-range notebooks, where most of the BoM goes to Intel right now.

There really are many different small factors I think are relevant to this prediction, both in the short term and the mid term; don't focus too much on the above. But the most important single factor (I don't think it's quite as big as all the others combined, though) is definitely the macroeconomy, especially its micro-level consequences. For example, I'd expect corporate PC volumes *and* ASPs to go down even deeper than anyone seems to be forecasting right now. This also hurts Intel indirectly because that's where their percentage of the desktop BoM is the highest.

In addition to ASP reductions, I expect this to hurt wafer starts - and since Intel has its own fabs and seems very aggressive on 32nm, that's just screaming looming overcapacity. Which will hurt margins too, and eventually ASPs through basic supply-and-demand of the overall CPU market. AMD would suffer from the latter, although obviously not much from the former since it's GF that takes the hit for overcapacity now - so I wouldn't be more worried about them than usual.
 
It still adds more complexity. For example, how do you avoid the L3 for RAMDAC accesses? Do you use multiple memory controllers on the ASIC, a bypass or what?
If this simply uses the existing mechanism (implemented in every x86 CPU) of tagging regions of memory as uncacheable, it won't even add complexity. And I wouldn't be surprised if each part of the GPU making memory requests could explicitly ask to bypass the (L3) cache either; that would only add a tiny bit of complexity (CPUs have instructions for bypassing the cache too).
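
As an aside, here is a minimal sketch of the CPU-side facility I'm alluding to (non-temporal stores), purely for illustration; it is not a claim about how Sandy Bridge actually wires its GPU or display controller to the L3. The function name and buffer parameters are hypothetical, and it assumes SSE2 with a 16-byte-aligned buffer:

```c
/* Illustrative sketch only: streaming stores that bypass the cache
 * hierarchy. Assumes SSE2 and a 16-byte-aligned buffer; the function
 * and its parameters are hypothetical. */
#include <emmintrin.h>  /* _mm_set1_epi8, _mm_stream_si128, _mm_sfence */
#include <stddef.h>
#include <stdint.h>

static void fill_buffer_uncached(uint8_t *buf, size_t size, uint8_t value)
{
    __m128i v = _mm_set1_epi8((char)value);
    for (size_t i = 0; i + 16 <= size; i += 16) {
        /* Non-temporal store: the data goes to memory through the
         * write-combining buffers instead of allocating L1/L2/L3 lines. */
        _mm_stream_si128((__m128i *)(buf + i), v);
    }
    _mm_sfence();  /* order the streamed stores before later writes */
}
```

The region-based mechanism (marking whole address ranges uncacheable or write-combining) lives in the MTRRs/PAT and is set up by the OS or driver rather than in application code, so the GPU blocks could plausibly reuse the same memory-type machinery.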
 
And despite the infamous overhead, x86 CPUs were/are still very competitive.
All else being equal, they would in most cases not be performance competitive.
The initial qualifier is the one that made all the difference.
ISA (barring truly bad performance bugbears: x87...) is a second-order consideration in the absence of other constraints.
Economies of scale, manufacturing prowess, engineering capability, industry inertia, and business model helped significantly.
Chip to chip, I wouldn't say x86 was the undisputed leader until most of its competitors had thrown in the towel for reasons unrelated to technical superiority.

Things might still be interesting in the future.
Power consumption, for example, is a strong pressure resisting full generalization, and that pressure might become incredibly acute in a few years, even compared to the obsession it has become right now.
We're seeing promised improvements per transition in the realm of 20% for the same level of circuit performance, and doubled transistor budgets.
While there should be some technologies in the pipeline that should help further, I imagine there are VLSI designers being kept up at night worrying about seeing a cumulative power improvement of 50% over the same time span it takes to increase transistor count by 16x.
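
To make the arithmetic behind that worry explicit (my own back-of-the-envelope reading of the figures above, not roadmap data): a 16x transistor increase is four doublings, and ~20% power improvement per transition compounds to roughly half the power per transistor over those generations, so a chip that actually switches all of those transistors draws on the order of 8x the power.

\[
16\times = 2^{4} \;\Rightarrow\; 4 \text{ generations}, \qquad
0.8^{4} \approx 0.41 \;\text{(a cumulative improvement of roughly 50--60\%)},
\]
\[
\text{power at full activity} \;\approx\; 16 \times 0.5 \;=\; 8\times .
\]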
 
I strongly agree and disagree. :)
IF you want to run Microsoft Windows, then AMD and Intel have very refined solutions for that, and more abstracted emulation probably won't be cost/performance effective.
However, Microsoft Windows isn't necessary for arguably the vast majority of people.

Two things that I think are worth noting:

First, Microsoft has the overwhelming majority of the operating system installed base at the consumer level. Further, what Microsoft isn't touching, Apple is -- using x86 hardware to do it. So with Apple and Microsoft both depending on x86 architectures, how does this qualify as not necessary for "arguably the vast majority of people"? Or did I just somehow misunderstand what you were saying? :oops:

Second, and maybe to your point, the OS really isn't the stumbling block: it's the software. So even if we could immediately throw the Microsoft and Apple OSes under the bus, what about every piece of software those operating systems run? Sure, many of the massive computing platforms with discrete software might be ported (or are already written for MIPS), but the overwhelming majority of software is, again, x86-based.

So, I think x86 really is far more entrenched than you may be giving it credit for.
 
All else being equal, they would in most cases not be performance competitive.
The initial qualifier is the one that made all the difference.
ISA (barring truly bad performance bugbears: x87...) is a second-order consideration in the absence of other constraints.
Economies of scale, manufacturing prowess, engineering capability, industry inertia, and business model helped significantly.
Chip to chip, I wouldn't say x86 was the undisputed leader until most of its competitors had thrown in the towel for reasons unrelated to technical superiority.
I wouldn't say it either, and in fact I haven't said it. I wrote that x86 chips were competitive, which is quite different from being undisputed leaders.
Moreover, the "all else being equal" argument (which has been used over and over again) is flawed, because companies don't make decisions in a vacuum. It just doesn't make sense to take a complex system, change the boundary conditions, and expect it to still evolve in the same way.
 
I wouldn't say it either, and in fact I haven't said it. I wrote that x86 chips were competitive, which is quite different from being undisputed leaders.
Moreover, the "all else being equal" argument (which has been used over and over again) is flawed, because companies don't make decisions in a vacuum. It just doesn't make sense to take a complex system, change the boundary conditions, and expect it to still evolve in the same way.

I think it's worth noting just how much it took to make x86 competitive.
The "all else being even remotely equal" point is something I'd consider valid enough given how massively the deck had to be stacked to arrive at rough parity.

It is one heck of an x86 penalty: best process, best and largest design teams, largest software base, largest sales volume (partly by historical accident), faster design cycles, world-class manufacturing, and it was merely "competitive".

The biggest factors had nothing to do with the chips at all; their relative competitiveness as chips was of lesser importance.
I'm not sure how loosely we'd have to use the term "competitive" for much of the time span x86 competed with RISCs.

edit:
As an aside, I'm not sure how loosely we'd have to use the term "competed" for much of that history. The big face-off didn't happen until much of the macroeconomic and market conditions had decided the outcome for most of the players.
 
No un-cached memory in your universe?
Please, don't be facetious. You'll still end up having the CPU and GPU fight for access to the L3, even if memory is tagged as uncacheable. If it's not going through the L3 at all, then the GPU needs a second path to memory that bypasses the L3.
 
Please, don't be facetious. You'll still end up having the CPU and GPU fight for access to the L3, even if memory is tagged as uncacheable. If it's not going through the L3 at all, then the GPU needs a second path to memory that bypasses the L3.
Who's being facetious? You made up a model in your head of how that thing should work and now you are complaining about the fact that it sucks!? Perhaps if you had a better idea in the first place you wouldn't need to come up with ridiculous arguments such as the RAMDAC having to trash the L3 cache just to display an image.
 
I think it's worth noting just how much it took to make x86 competitive.
We all know where this sort of argument converges; one just needs to pick some random thread from Usenet written 15 or more years ago. I'd say reality, which disproved them all, is more compelling than fictitious universes.
 
Please, don't be facetious. You'll still end up having the CPU and GPU fight for access to the L3, even if memory is tagged as uncacheable. If it's not going through the L3 at all, then the GPU needs a second path to memory that bypasses the L3.

IGPs have to fight with the CPU for memory access anyway (on-die or not). That's why some AMD chipsets have sideport memory.
 
We all know where this sort of argument converges; one just needs to pick some random thread from Usenet written 15 or more years ago. I'd say reality, which disproved them all, is more compelling than fictitious universes.

Perhaps we're just debating what we mean by "competitive", and what exactly was being competitive.

x86 chips for a long time were not competitive from a performance standpoint.
They would have been relatively competitive from a SPECint standpoint by the PII-PIII era. This is pretty late in the game, around the turn of the millennium.
Floating point would take longer to reach competitiveness with the leading RISCs, and would happen in part because those competing RISC lines would soon be dropping like flies.

Price-performance had much to do with the business models of many RISC houses, and early decisions to not go down-market.

This and a number of other outside factors completely tangential to x86 (Apple's decision to stop clones, IBM's decision to go with the Intel chip that allowed for a reduced-width bus, the PC revolution, the gutting of the RISC workstation market at the advent of 3D graphics cards) had much to do with the commercial success of x86, which preceded it being competitive.
 