> And then x86 turned into RISC CPUs with instruction decoders.
And despite the infamous overhead x86 CPUs were/are still very competitive.
> I'd expect this to be configurable in some ways. There's no point for the display controller to go through L3, and probably none for texture requests either (I certainly still expect the GPU itself to have small, local caches). But it might be useful for, say, a compressed Z-buffer.
It still adds more complexity. For example, how do you avoid the L3 for RAMDAC accesses? Do you use multiple memory controllers on the ASIC, a bypass, or what?
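For a sense of scale, a back-of-the-envelope on the scanout stream (the resolution and refresh rate below are just illustrative numbers, not from the thread):

    1920 x 1200 pixels/frame x 4 bytes/pixel  = 9,216,000 bytes/frame
    9,216,000 bytes/frame x 60 frames/s      ~= 553 MB/s of scanout reads

That traffic recurs every frame and has zero reuse, so letting it allocate lines in a shared L3 gains nothing and evicts data the cores would actually hit again.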
> I'd bet on the quad-core and eight-core communist 64bit MIPS with low-level x86 emulation.
I thought they're intended for servers, low-power multi-user computers and supercomputers rather than high-end desktops.
> And despite the infamous overhead x86 CPUs were/are still very competitive.
Exactly. I am not saying that what happened in the past tells us exactly what will happen in the future (it's mostly about power consumption now..).
Good luck with that. First, their designs aren't even on par with EV6 levels of performance. Second, I have little faith they can pull off a level and speed of emulation that is competitive.
If it isn't an x86 it doesn't stand a chance. I used to think otherwise, but have come around over the years. The problem is you can't run the software. And if you can't run the software, you are nothing but expensive sand. And no, being able to run a port of Linux doesn't work, even for the server market. People have tried, and failed, with that model. The ecosystem around x86 is just too big at this point. It's like someone trying to take out iTunes, but an even harder task.
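Since the thread keeps coming back to emulation speed, a toy decode-and-dispatch loop shows where the overhead lives. Everything here (types, opcodes, encoding) is invented for illustration; real x86 decode with prefixes, ModRM and SIB bytes is far messier, and dynamic binary translation only amortizes this cost rather than removing it:

    /* Toy interpreter: every guest instruction pays for a fetch, a
       decode and a dispatch branch before any useful work happens. */
    #include <stdint.h>

    typedef struct { uint64_t regs[16]; uint64_t rip; uint8_t *mem; } GuestCpu;

    enum { OP_ADD, OP_MOV, OP_HALT };   /* hypothetical pre-decoded opcodes */

    void interp(GuestCpu *cpu)
    {
        for (;;) {
            uint8_t op  = cpu->mem[cpu->rip++];   /* fetch */
            uint8_t dst = cpu->mem[cpu->rip++];   /* decode (wildly simplified) */
            uint8_t src = cpu->mem[cpu->rip++];
            switch (op) {                         /* dispatch */
            case OP_ADD: cpu->regs[dst & 15] += cpu->regs[src & 15]; break;
            case OP_MOV: cpu->regs[dst & 15]  = cpu->regs[src & 15]; break;
            case OP_HALT: return;
            }
        }
    }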
Gosh, no. But ARM does play a role in this ASP erosion via netbooks, although certainly not the only role. And this isn't only about ARM stealing volume, but also about Intel being unexpectedly aggressive pricing-wise with Pineview on one hand, and allowing further penetration of Moorestown into netbooks on the other. The indirect effect of this, combined with fearful consumer sentiment, will be a polarization hurting mid-range notebooks, where most of the BoM goes to Intel right now. I'd bet on the quad-core and eight-core communist 64bit MIPS with low-level x86 emulation :smile:.
> It still adds more complexity. For example, how do you avoid the L3 for RAMDAC accesses? Do you use multiple memory controllers on the ASIC, a bypass, or what?
If this simply uses the existing method (implemented in every x86 CPU) of tagging regions of memory as uncacheable, it won't even add complexity. And I wouldn't be surprised if each part of the GPU doing memory requests could explicitly ask to bypass the (L3) cache either; that would only add a tiny bit of complexity (CPUs have instructions for bypassing the cache too).
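To make that last parenthetical concrete, here is a minimal user-space sketch using SSE2 non-temporal stores, which write to memory without allocating lines at any cache level. The "surface" buffer is just a stand-in; a real scanout surface would come from the driver:

    /* Sketch: non-temporal stores bypass the cache hierarchy.
       The destination must be 16-byte aligned. */
    #include <emmintrin.h>   /* SSE2: _mm_stream_si128, _mm_sfence */
    #include <stddef.h>
    #include <stdint.h>

    void fill_nt(void *surface, size_t bytes, uint32_t pixel)
    {
        __m128i v = _mm_set1_epi32((int)pixel);
        __m128i *p = (__m128i *)surface;
        for (size_t i = 0; i < bytes / sizeof(__m128i); ++i)
            _mm_stream_si128(p + i, v);   /* store without polluting L1/L2/L3 */
        _mm_sfence();                     /* order NT stores vs. later writes */
    }

The uncacheable/write-combining memory types give the same effect for ordinary accesses, just enforced by page attributes instead of by the instruction.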
> And despite the infamous overhead x86 CPUs were/are still very competitive.
All else being equal, they would in most cases not be performance competitive.
I strongly agree and disagree.
IF you want to run Microsoft Windows, then AMD and Intel have very refined solutions for that, and more abstracted emulation probably won't be cost/performance effective.
However, Microsoft Windows isn't necessary for arguably the vast majority of people.
The initial qualifier is the one that made all the difference.
ISA (barring truly bad performance bugbears: x87...) is a second-order consideration in the absence of other constraints.
Economies of scale, manufacturing prowess, engineering capability, industry inertia, and business model helped significantly.
Chip to chip, I wouldn't say x86 was the undisputed leader until most of its competitors had thrown in the towel for reasons unrelated to technical superiority.
I wouldn't say it either, and in fact I haven't said it. I wrote that x86 chips were competitive, which is quite different from being undisputed leaders.
Moreover the "all else being equal" argument (which has been used over and over again) is a wrong one because companies don't take decisions in a vacuum. It just doesn't make sense to take a complex system, change the boundary conditions and expect it to still evolve in the same way.
> No un-cached memory in your universe?
Please, don't be facetious. You'll still end up having the CPU and GPU fight for access to the L3, even if memory is tagged as uncacheable. If it's not going through the L3 at all, then the GPU needs a second way to access memory, bypassing the L3.
> Please, don't be facetious. You'll still end up having the CPU and GPU fight for access to the L3, even if memory is tagged as uncacheable. If it's not going through the L3 at all, then the GPU needs a second way to access memory, bypassing the L3.
Who's being facetious? You made up a model in your head of how that thing should work and now you are complaining that it sucks!? Perhaps if you had a better idea in the first place you wouldn't need to come up with ridiculous arguments such as the RAMDAC having to trash the L3 cache just to display an image.
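For what the "tagged as uncacheable" mechanism looks like in practice, a hedged Linux-driver-style sketch; the physical address and size are made-up placeholders, and note this only covers the CPU side of the argument, since the display engine's own reads are routed by the on-chip fabric:

    /* Sketch: mapping a scanout buffer write-combining on Linux, so CPU
       accesses never allocate lines in any cache level.
       FB_PHYS and FB_SIZE are hypothetical placeholders. */
    #include <linux/errno.h>
    #include <linux/io.h>

    #define FB_PHYS 0xC0000000UL    /* hypothetical framebuffer address */
    #define FB_SIZE (8UL << 20)     /* hypothetical 8 MiB aperture */

    static void __iomem *fb;

    static int map_scanout(void)
    {
        fb = ioremap_wc(FB_PHYS, FB_SIZE);  /* write-combining, not cached */
        if (!fb)
            return -ENOMEM;
        iowrite32(0, fb);                   /* CPU write bypasses L1/L2/L3 */
        return 0;
    }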
> I think it's worth noting just how much it took to make x86 competitive.
We all know where these sorts of arguments converge; one just needs to pick some random thread from Usenet written 15 or more years ago. I'd say reality, which disproved them all, is more compelling than fictitious universes.