The Question No-One Dares To Ask!

pjbliverpool

And in all likelihood no one can answer, and I completely understand the circumstances around that. But for anyone who is in a position to answer...

Tell us: given five years, which is what you will realistically get, which do you think you can get the better results out of (in gaming terms): a CPU like Xenon, or a dual-core x86 (e.g. the X2 or the Core 2)?
 
So, you're basically asking which architecture is better for gaming, the PowerPC architecture, or the x86 architecture.

Basically, the PowerPC architecture is theoretically superior in its instruction set. Practically, though, vastly more R&D has been spent on improving the performance of the x86 architecture, and so x86 ends up winning. The PowerPC (or a similar RISC) architecture might come out ahead if it ever made inroads into the PC market, but since that seems incredibly unlikely to ever happen, it seems very unlikely that the PowerPC architecture will ever outperform x86.
 
digitalwanderer said:
In 5 years holistic neuron clusters will probably be all the rage for gaming. :???:
Well, personally I'm betting on genetic computing. Although they do have a quantum computer at my university that can factor 15, so you never know.

Seriously, I'd like to see an enhanced instruction decoder that can load conversion tables on demand. This could be made part of the context-switching operation. It would allow CPUs to run x86, RISC, MIPS or whatever other instruction set they want on the fly, with full interoperability. Then maybe we'd see a better instruction set emerge than anything around today.
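To make the idea concrete, here's a toy sketch of that decoder in Python. Everything here is invented for illustration (the table contents, the `Core` class, the opcode values): the point is just that the opcode-to-operation mapping is per-process state loaded at context-switch time, so the same core front-end can decode different guest instruction sets.

```python
# Toy model of a decoder whose conversion table is swapped on context switch.
# All names and encodings are illustrative, not real hardware interfaces.

NATIVE_OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
}

# Per-ISA conversion tables: guest opcode -> native operation name.
X86_TABLE  = {0x01: "add", 0x29: "sub"}
MIPS_TABLE = {0x20: "add", 0x22: "sub"}

class Core:
    def __init__(self):
        self.table = None  # currently loaded conversion table

    def context_switch(self, process_table):
        # Loading the conversion table is part of the context switch.
        self.table = process_table

    def execute(self, opcode, a, b):
        # Decode via the loaded table, then run the native operation.
        return NATIVE_OPS[self.table[opcode]](a, b)

core = Core()
core.context_switch(X86_TABLE)
r1 = core.execute(0x01, 2, 3)   # "x86" add -> 5
core.context_switch(MIPS_TABLE)
r2 = core.execute(0x22, 5, 2)   # "MIPS" sub -> 3
```

In a real machine the table would of course map to micro-ops rather than Python lambdas, and as the reply below points out, the decoder is only a small part of what an ISA defines.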
 
Chalnoth said:
So, you're basically asking which architecture is better for gaming, the PowerPC architecture, or the x86 architecture.

I actually took it more as a question about a smaller in-order CPU with more cores, like Xenon and Cell, versus a larger OOE CPU with fewer cores, like the current A64 and Core 2, rather than an ISA question.

And over five years I'm leaning towards a few OOE cores outperforming (in most tasks) many in-order cores. Though I wouldn't count out asymmetrical processing, where you have some combination of the two on a single die.
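The in-order vs out-of-order trade-off can be shown with a tiny scheduling model. This is my own toy simulator, not anything from the thread: both cores issue one instruction per cycle, but the in-order core must stall on the oldest stalled instruction, while the OOO core may issue any instruction whose inputs are ready.

```python
# Each instruction is (tuple_of_dependency_indices, latency_in_cycles).
# A 1-wide in-order core vs a 1-wide out-of-order core on the same trace.

def run(trace, ooo):
    done = {}                       # index -> cycle its result is ready
    pending = list(range(len(trace)))
    cycle = 0
    while pending:
        # An instruction is ready when all its inputs have completed.
        ready = [i for i in pending
                 if all(d in done and done[d] <= cycle
                        for d in trace[i][0])]
        if ooo:
            pick = ready[0] if ready else None        # oldest ready wins
        else:
            # In-order: only the oldest pending instruction may issue,
            # so a stalled instruction blocks everything behind it.
            pick = pending[0] if pending[0] in ready else None
        if pick is not None:
            done[pick] = cycle + trace[pick][1]
            pending.remove(pick)
        cycle += 1
    return max(done.values())

# Two independent load->add chains; loads take 3 cycles, adds take 1.
trace = [((), 3), ((0,), 1), ((), 3), ((2,), 1)]
inorder_cycles = run(trace, ooo=False)   # 8: the second load waits
ooo_cycles     = run(trace, ooo=True)    # 5: it issues under the stall
```

The OOO core hides the first load's latency by starting the second load early, which is exactly the kind of stall a Xenon-style in-order core pushes back onto the compiler and the programmer.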
 
DudeMiester said:
Seriously, I'd like to see an enhanced instruction decoder that can load conversion tables on demand. This could be made part of the context-switching operation. It would allow CPUs to run x86, RISC, MIPS or whatever other instruction set they want on the fly, with full interoperability. Then maybe we'd see a better instruction set emerge than anything around today.

Isn't that something like Transmeta's processors, except significantly more streamlined and tweaked?
 
DudeMiester said:
Seriously, I'd like to see an enhanced instruction decoder that can load conversion tables on demand. This could be made part of the context-switching operation. It would allow CPUs to run x86, RISC, MIPS or whatever other instruction set they want on the fly, with full interoperability. Then maybe we'd see a better instruction set emerge than anything around today.

Actually it's quite impossible, because an ISA is not just a bunch of instructions. An ISA covers things such as memory management, privilege models, I/O models, and more. For example, the page table format is completely different on different ISAs. Not to mention that some ISAs have features others do not (such as x86's segmentation).
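The page-table point is worth spelling out. The 32-bit x86 PTE is a word with fixed bit positions walked by a radix tree, while classic PowerPC uses an entirely different hashed-page-table model, so a "universal" MMU would have to emulate one with the other. Here's a minimal decode of the x86 format; the function name is mine, but the bit positions shown are the documented ones:

```python
# 32-bit x86 page-table entry layout (low bits shown; others omitted).
PTE_PRESENT = 1 << 0        # bit 0: page is mapped
PTE_RW      = 1 << 1        # bit 1: writable
PTE_USER    = 1 << 2        # bit 2: user-mode accessible
PTE_FRAME   = 0xFFFFF000    # bits 12-31: physical frame address

def decode_x86_pte(pte):
    """Unpack the fields a hardware page walk reads from one entry."""
    return {
        "present":  bool(pte & PTE_PRESENT),
        "writable": bool(pte & PTE_RW),
        "user":     bool(pte & PTE_USER),
        "frame":    pte & PTE_FRAME,
    }

# Frame 0xABC000, present + writable + user (0x7 in the low bits).
entry = decode_x86_pte(0x00ABC007)
```

A PowerPC MMU never sees anything shaped like this: it hashes the virtual address to find a group of candidate PTEs, which is exactly the kind of baked-in divergence a swappable instruction decoder can't paper over.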
 
That's an easy question: three crappy cores versus two powerful cores, and the PC wins. Carmack stated a year ago that a Xenon core was about half the performance of a modern PC core.

And like Killer-Kris, I don't see it as an ISA question. On both sides you have weak cores (VIA C3 and C7, Xenon, Cell's PPE) and good ones (G5, K8, Intel Core, and even NetBurst...).
 
Actually it's quite impossible, because an ISA is not just a bunch of instructions. An ISA covers things such as memory management, privilege models, I/O models, and more. For example, the page table format is completely different on different ISAs. Not to mention that some ISAs have features others do not (such as x86's segmentation).

Well, I'm sure you could design something sufficiently flexible to do it, but I suppose by that point you'd have lost a lot of the performance you'd get by having it all fixed-function. Anyway, it was just wishful thinking to begin with.
 
It simply depends on what you want to do with it. On the surface, for general-purpose processing, the OOO x86 cores will probably win. For specifics like gaming, a bunch of specialized cores could run rings around them.

If you only look at the basic x86 and PPC cores, x86 probably wins if the playing field stays the same. If you also look at things like Cell, it simply depends on the application, the development tools, and the developers.

For games, Cell wins hands down, if they get the development tools and developer support right. Otherwise, x86 will likely stay king of the hill, closely followed by things like Xenon.

For servers, something like Sun's Niagara processor is likely on par with something like Cell, depending on the application.

And if you only want to compare PPC with x86 on a level playing field, the first wins when you have many independent threads/processes for the same transistor budget; the latter probably wins in all other cases. So, PPC can only win if the development commitment stays high and it gets major mass production and developer interest, like being used in the current and next generation of major game consoles. ;)



Looking at the (current) evolution of development platforms, using a virtual machine seems to be the only route left. Even for C++ and games, you need that common platform and library to make your game cross-platform, and the penalty in execution speed is shrinking fast and is already almost negligible in most cases.

Not that there is any generic (office) application left that demands programming down to the metal. Not even games. Not even for the speed, if you really wanted it. Middleware would be a much better investment, which brings you back to the common platform.

The only things that are machine specific should (and will) be handled by the OS and middleware or virtual machine as far as developers are concerned. If not now, then next generation for sure.

So, there is no real reason why your next PC couldn't be powered by a PPC or Cell processor. You might have the choice to pick whatever you fancy. And running benchmarks would be very interesting, to say the least.

:D

In the end, it's an interesting race. We'll have to see. There is no clear winner at the moment.
 
A better question (in order to answer the initial question) might be: which virtual machine, OS and common library/middleware (development platform, in short) is going to win the battle? While Windows/DX/.NET is king of the hill for now, a more open platform based on Linux/OpenGL/open source is gaining ground very fast.

Depending on the market/application you're looking at, it's a close call. The large majority of sales are in the PC/Windows sphere for now, but that's also about the only market left with such domination. And the open platforms (including .NET, depending) don't really have a favored processor architecture.
 
So, you're basically asking which architecture is better for gaming, the PowerPC architecture, or the x86 architecture.

Basically, the PowerPC architecture is theoretically superior in its instruction set. Practically, though, vastly more R&D has been spent on improving the performance of the x86 architecture, and so x86 ends up winning. The PowerPC (or a similar RISC) architecture might come out ahead if it ever made inroads into the PC market, but since that seems incredibly unlikely to ever happen, it seems very unlikely that the PowerPC architecture will ever outperform x86.

Isn't PowerPC supposed to have some serious performance issues when context switching, though?

Anyhow, at this point the instruction set contributes little to a CPU's performance, especially with programming models as similar as PowerPC's and x86's. Without a major philosophy shift (such as Cell), I think it comes down to who puts more R&D into the chip rather than the ISA.
 
It simply depends on what you want to do with it. Seen from the surface, for general purpose processing, the OOO X86 cores will probably win. For specifics like gaming, a bunch of specialized cores could run rings around it.

From what I'm hearing at the moment, though, Cell's power is being utilised on the graphics side of things, as opposed to the tasks a PC CPU would be expected to perform in a gaming machine.

No doubt Cell is much better than even a quad-core Conroe when it comes to processing geometry and rendering, but assuming the x86 leaves all that to the GPU (which of course it would), is Cell still better at what's left? What is left? Physics, AI, scripting, game control... anything else?
 
From what I'm hearing at the moment, though, Cell's power is being utilised on the graphics side of things, as opposed to the tasks a PC CPU would be expected to perform in a gaming machine.

No doubt Cell is much better than even a quad-core Conroe when it comes to processing geometry and rendering, but assuming the x86 leaves all that to the GPU (which of course it would), is Cell still better at what's left? What is left? Physics, AI, scripting, game control... anything else?

I'd say physics is likely to be moved to the GPU as well, since it's mostly a graphical thing anyway.
 
Sure, but this is more a function of what software developers want than what the hardware is capable of. That doesn't mean the calculations are all that close to what is usually done in graphics hardware.
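Both sides of that exchange can be illustrated with a toy example of my own (none of this code is from the thread). The integration step below is an independent per-particle map, structurally like per-vertex shading, which is why it moves to a GPU easily; collision resolution, by contrast, branches per pair over irregular data and is a poorer fit:

```python
# Explicit Euler step for a set of 2D particles under gravity.
# Each particle updates independently of every other particle,
# so the whole loop is a data-parallel map (GPU-friendly).

def integrate(positions, velocities, gravity, dt):
    new_vel = [(vx, vy + gravity * dt)
               for (vx, vy) in velocities]
    new_pos = [(x + vx * dt, y + vy * dt)
               for (x, y), (vx, vy) in zip(positions, new_vel)]
    return new_pos, new_vel

# One particle at (0, 10) moving right at 1 unit/s, g = -10, dt = 0.1.
pos, vel = integrate([(0.0, 10.0)], [(1.0, 0.0)], -10.0, 0.1)
```

The part that doesn't look like graphics is what happens after: resolving which of N particles touch which others is scattered, branchy work, which is the poster's point that physics as a whole is not simply a shading workload.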
 