Is multiple-core technology really needed for next-gen?

This is not a knock on Sony's Cell processor; I'm just as excited as anyone to see what they accomplish with this new technology. It's more about comments made by Iwata and Yamauchi. I would like to know what part the CPU plays in rendering what we see in games. This generation seemed to be headed down the path of creating GPUs that reduce the need to task the CPU (I don't know much technical lingo, so please bear with me). The truth is, I would like to know what advantages a multi-core CPU has over the technology that's used today in most CPUs.
 
"Needed?" No, not really. It's not "needed" in PC's yet either.

What might it potentially be able to bring about...? Now that's a different question. :)
 
I'm no expert, but one would think that multi-core CPU / multi-CPU machines are better at multitasking.
If Sony is going for a game console/PVR/web/multimedia all-in-one machine, a multi-core CPU might give better performance.
 
It's not that they're 'better at multitasking'. It's that multitasking allows them to exploit their power more fully.

Multiple CPUs are a cheaper way of achieving very high computing power than a single CPU of equivalent total power. The problem is that to make use of this power you need either multiple applications running at once (i.e. very much not a console scenario) or a carefully constructed application.

Unfortunately, constructing these applications is a long-standing problem, and so far substantially unsolved for general programs. A major issue is communication between the different cores: it's astoundingly difficult to design a communication method that works well, and this communication is probably as important as (maybe more important than) the raw horsepower of the CPUs. There are far fewer rumours around about comms than about CPU cores (to use a car analogy: there may be a great engine, but nobody knows much about the drivetrain).

That said, there are many algorithms used in games that might be well suited to multiprocessing if well implemented on a system with good comms.
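
To make that concrete, here's a minimal sketch (modern C++, purely illustrative; the Particle type and integrate function are invented for the example) of the happy case - a data-parallel job where each core works on its own slice and the only communication is the final join:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical particle state: each element can be updated independently,
// so the work splits cleanly across cores.
struct Particle { float x, y, z, vx, vy, vz; };

void integrate(std::vector<Particle>& ps, float dt, unsigned cores)
{
    std::vector<std::thread> workers;
    const std::size_t chunk = ps.size() / cores;
    for (unsigned c = 0; c < cores; ++c) {
        const std::size_t begin = c * chunk;
        const std::size_t end   = (c + 1 == cores) ? ps.size() : begin + chunk;
        workers.emplace_back([&ps, dt, begin, end] {
            for (std::size_t i = begin; i < end; ++i) {
                ps[i].x += ps[i].vx * dt;  // each worker touches only its
                ps[i].y += ps[i].vy * dt;  // own slice, so no locking is
                ps[i].z += ps[i].vz * dt;  // needed during the update
            }
        });
    }
    for (auto& w : workers) w.join();  // the join is the only 'comms' here
}
```

The moment the slices need each other's results mid-update, though, the comms design starts to dominate - which is exactly the unsolved part.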

If the future consoles do go multi-CPU (even, to some extent, 'just' hyperthreading or equivalent) it will be great news for rocket-science console coders. Whether it will be great news for console gamers remains to be seen.
 
I think it comes down to whether a super-beefed-up single-core CPU can keep up with a multi-core one.

I "feel" the multi core choice would be more "efficient" and elegant. But that's probably just personal bias...

In the end some people will throw Hz at the problem, and others will get around it in other ways.
As long as the chip gets the job done, I'm not really concerned. Although I feel in the long run we're gonna have to move to multi-core, because Hz can only push you so far and will eventually "top out" at a certain number, at which point all we will be able to do is add more cores to gain extra performance...

So maybe the answer at the moment is uncertain, but I'm almost 100% sure that it's the way to go in the future.
 
london-boy said:
I "feel" the multi core choice would be more "efficient" and elegant. But that's probably just personal bias...

I disagree with this; actually, efficiency goes down when you add more CPUs. The performance difference between a one-processor system and a two-processor system is quite large. The difference between a 16-processor system and a 32-processor system is much smaller, relatively speaking. The more CPUs you add, the more efficiency drops. Of course, how the exact numbers work out is dependent on the application, but it's true as a general rule.

As long as the chip gets the job done, I'm not really concerned. Although I feel in the long run we're gonna have to move to multi-core, because Hz can only push you so far and will eventually "top out" at a certain number, at which point all we will be able to do is add more cores to gain extra performance...

So maybe the answer at the moment is uncertain, but I'm almost 100% sure that it's the way to go in the future.

Both process technology and clock scaling will "hit a wall" eventually. At the moment, packing more and more silicon into a smaller space and continually raising clock speeds is an effective way to get more performance. However, process advances will slow down moving into the future, eventually stopping altogether as the physical limit of the materials used is reached. The future is absolutely uncertain beyond that point.
 
london-boy was probably referring to cost-throughput efficiency, which is different from cycle-throughput efficiency. In that case it's dependent upon the type of applications that will be run on the device and how easy they are to parallelize.
 
nobie said:
I disagree with this; actually, efficiency goes down when you add more CPUs. The performance difference between a one-processor system and a two-processor system is quite large. The difference between a 16-processor system and a 32-processor system is much smaller, relatively speaking. The more CPUs you add, the more efficiency drops. Of course, how the exact numbers work out is dependent on the application, but it's true as a general rule.

I disagree with your comment. Studies by IBM have shown that "actual efficiency", as measured in performance versus area, favors multiple reductionist cores over a single large uniprocessor. This is beyond debate, really, when you consider that everyone in the industry has agreed that we've passed the point of diminishing returns wrt devoting logic to a single thread. The proportional differential between 1:2 Processing Elements and 16:32 is exactly the same; the difference is in the programming models applied, the tasks being computed, and the mentality of the coders. This concept is something I've heard Jaron Lanier talk about, during which he described the mentality of "lock-in" wrt our IT-based technological thinking, and I can't help but agree - although it's probably a species-dependent phenomenon.
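
Purely to illustrate the performance-versus-area argument - every number below is invented for the example, not taken from the IBM studies:

```cpp
#include <cstdio>

int main()
{
    // Invented numbers: one big out-of-order core vs. small reduced cores.
    // Single-thread logic hits diminishing returns, so quadrupling a core's
    // area buys far less than quadrupling the number of small cores.
    const double big_core_area   = 4.0;  // area units
    const double big_core_perf   = 2.0;  // relative throughput
    const double small_core_area = 1.0;
    const double small_core_perf = 1.0;

    // Same silicon budget, two ways to spend it:
    const double budget = 4.0;
    double big_total   = (budget / big_core_area)   * big_core_perf;   // 2.0
    double small_total = (budget / small_core_area) * small_core_perf; // 4.0

    std::printf("one big core:  %.1f (perf per area %.2f)\n",
                big_total, big_core_perf / big_core_area);
    std::printf("small cores:   %.1f (perf per area %.2f)\n",
                small_total, small_core_perf / small_core_area);
}
```

That, of course, assumes the workload can keep all the small cores busy - which is the whole debate.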

Multiple CPUs are a cheaper way of achieving very high computing power than a single CPU of equivalent total power. The problem is that to make use of this power you need either multiple applications running at once (i.e. very much not a console scenario) or a carefully constructed application.

There isn't much demand for parallel processing in consoles whose main output is 3D visualisation? Remind me again why your company is producing a part for the XBox2. I find comments like this kinda... awkward... as you're taking the PC mentality that the "GPU" is distinct from a "CPU" and allowed to be as parallel as possible, but never stopping to think that in a closed box it really doesn't matter. Just because corporate politics, technology barriers, legacy, etc. prevent a PC from exploiting said parallelism on both ICs doesn't mean a console inherits those limits; a console is intrinsically pure.

If your mentality, shaped by the idiosyncrasies of MS/ATI, leads you to be willing to concede that the potential logic area of a "CPU" doesn't have to be fully utilized because of your big, bad-ass "GPU," then you're going to get your ass handed to you in a console environment by someone who isn't factoring in such artificial barriers. And with that, I shall step off my soap-box and get to work. :)

Speaking of which, didn't one of Microsoft's initial requests speak of Vertex Shader type constructs on the "CPU"?!? Also, I have a long day ahead, so my response won't come for a bit; I'm sorry.
 
Aren't we already using multiple cores, albeit massively different designs, when we run a 'traditional' CPU and a GPU at the same time to construct a real-time simulation? Would a better approach (for now) be to design processors per function, as we have for graphics? i.e. a generic unit (traditional CPU), a physics unit, an AI unit, etc.

It seems a better balance could be achieved from that perspective than with many copies of the same CPU. Then again, developers probably like that kind of control. It just seems really complicated from a design/implementation perspective. But I'm hardly an expert. :LOL:
 
WRT efficiency:
It is "generally" true that efficiency falls as the number of processors increase due to increasing latency. Running hpl on varying number of compute nodes shows this easily. However that is quite hardware and application specific. The Earth Simulator was created to solve certain classes of problems, and the software that run on it are specifically written for the hardware. As such it managed to acheive 70% efficiency for certain applications. An impossible number for something so hugh, but true. A SSI cluster with very good interconnect may hit nearly 80% efficiency with hpl.

(For reference, running hpl on a single processor clocks 70-80% efficiency; I can't recall exactly.)
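
To show what that percentage actually measures, here's the arithmetic as a trivial snippet (the machine below is made up; the numbers are placeholders, not real hpl results):

```cpp
#include <cstdio>

int main()
{
    // Made-up machine, purely to show what the percentage means:
    const double nodes         = 100.0;  // compute nodes
    const double peak_per_node = 10.0;   // theoretical peak GFLOPS, each
    const double measured      = 700.0;  // GFLOPS actually sustained by hpl

    const double peak = nodes * peak_per_node;  // 1000 GFLOPS theoretical
    std::printf("efficiency = %.0f%%\n", 100.0 * measured / peak);  // 70%
}
```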
 
It's not -absolutely- needed, I suppose, but it is needed in order to see a massive increase in processing power. Clock speed alone won't get us enough of a performance increase. We need the combined effect of multiple cores with increases in clock speed to see an improvement of an order of magnitude.

The same thing can be said of GPUs. You need multiple processing units (pipes, vertex units, etc.) combined with clock speed increases (and memory increases) to achieve enough of a leap in performance.
 
Vince said:
There isn't much demand for parallel processing in consoles whose main output is 3D visualisation?
Please read my posts more carefully before attacking me. I clearly did not say that.
 
Dio said:
Vince said:
There isn't much demand for parallel processing in consoles whose main output is 3D visualisation?
Please read my posts more carefully before attacking me. I clearly did not say that.

Dio, he is not attacking you... we Italians are very temperamental and loud, you have to excuse us. ;)
 
FWIW I don't think it's "needed".

I think it presents some interesting opportunities from a technology perspective, but I also don't expect to see those emerge quickly.

The fact is most games are not multithread-friendly, and it's going to take developers time to start using multicore systems effectively.

As long as the individual cores aren't crippled in the same way the MIPS processor on the PS2 is (i.e. no L2 cache, no double-precision math support, etc.), and as long as someone has put some thought into resolving the multiple views of memory and the potential communication issues, then to me it's just more resources to use, and you have to like that.
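
As a rough illustration of what "some thought" buys you, here's a bare-bones sketch of the kind of job queue that makes an engine multithread-friendly (minimal modern C++; the JobQueue name and structure are hypothetical, not any real engine's API):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A bare-bones job queue: worker threads pull independent tasks, so the
// cores stay busy without the game code caring which core runs what.
class JobQueue {
public:
    explicit JobQueue(unsigned cores) {
        for (unsigned i = 0; i < cores; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~JobQueue() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void push(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();  // run outside the lock; the lock is the 'comms' cost
        }
    }
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::vector<std::thread> workers_;
};
```

The lock around the queue is exactly the communication cost people keep mentioning; a real engine would work hard to amortise it.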

GFLOPS aren't everything; good integer performance is still very important to your average game.
 
Panajev2001a said:
Dio, he is not attacking you... we Italians are very temperamental and loud, you have to excuse us. ;)
Well, if it's not an attack, it's an unnecessarily vigorous defence. I don't see the need, myself.
 
Dio said:
Panajev2001a said:
Dio, he is not attacking you... we Italians are very temperamental and loud, you have to excuse us. ;)
Well, if it's not an attack, it's an unnecessarily vigorous defence. I don't see the need, myself.

I know, I know... still, Vince is not a bad poster (quite the contrary, IMHO), although he sometimes uses the "reply-to-usual-Deadmeat-rant" tone with everyone, which is a bit too inflammatory IMHO.
 
Especially when I was arguing that multiprocessing is an effective way to make MORE raw, universal horsepower available, IF you have coders who are wise enough to make use of it and a sensible comms architecture.
 
ERP said:
GFLOPS aren't everything; good integer performance is still very important to your average game.
Very true - but in this case it's worth noting that the APUs themselves are supposed to be rated for equal integer and float throughput. The other question is how well that throughput can be fed, but then both ratings would be equally affected by that too.

Anyway, on topic, I pretty much agree with what ERP said. Personally I also rather look forward to the concept as long as it's going to be a consistent design without stupid oversights.
 
Vince said:
nobie said:
I disagree with this; actually, efficiency goes down when you add more CPUs. The performance difference between a one-processor system and a two-processor system is quite large. The difference between a 16-processor system and a 32-processor system is much smaller, relatively speaking. The more CPUs you add, the more efficiency drops. Of course, how the exact numbers work out is dependent on the application, but it's true as a general rule.

I disagree with your comment. Studies by IBM have shown that "actual efficiency", as measured in performance versus area, favors multiple reductionist cores over a single large uniprocessor. This is beyond debate, really, when you consider that everyone in the industry has agreed that we've passed the point of diminishing returns wrt devoting logic to a single thread.

The issue isn't single threads vs. multiple threads. The issue is whether clock speeds will hit a wall and adding processors will be the only way to increase performance. I'm specifically referencing Amdahl's law, which actually comes from IBM.

Efficiency is usually defined as the speedup per processor. The more processors you have, the smaller the speedup gained by adding additional processors. I'm not saying multi-core CPUs are a bad idea; I'm saying their efficiency drops dramatically as you scale upwards. For now, multi-threading has great performance potential, but at some point it will hit a wall. It's likely that clock increases can continue beyond that point.
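
Since Amdahl's law is just arithmetic, the drop-off is easy to see directly: speedup on n processors with parallel fraction p is 1 / ((1 - p) + p / n), and efficiency is that speedup divided by n. A quick sketch (the 90% parallel fraction is an assumed example, not a measured figure):

```cpp
#include <cstdio>

int main()
{
    const double p = 0.90;  // assumed: 90% of the work parallelizes
    const int ns[] = {1, 2, 16, 32};
    for (int n : ns) {
        // Amdahl's law: the serial 10% caps the whole run, so the speedup
        // gained per added processor shrinks as n grows.
        double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("n=%2d  speedup=%5.2fx  efficiency=%3.0f%%\n",
                    n, speedup, 100.0 * speedup / n);
    }
}
```

Going from 1 to 2 processors nearly doubles throughput, but going from 16 to 32 adds only about 20% more. That's the drop-off I mean.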
 