Inq says quad core useless for games

http://www.theinquirer.net/default.aspx?article=34916

What's scary about this to me is, if so, we're basically out of CPU performance increases for games for the foreseeable future. And that day will come for GPUs as well (once lithographic limits are reached).

Core Duo will help for a while though, I suppose.

GPUs keep doubling merrily along, though, so I guess they just program the games not to scale on the CPU, but keep stressing the GPU more and more?
 
<Insert anti-Inq rant here>

The same could be said about dual core until very recently. (Or maybe dual core still isn't used; I haven't paid attention, since I'm still running an Athlon XP.)

Few brand-new technical innovations get used right away. It takes time to figure out how to use new hardware, and for it to become common enough to make supporting it worthwhile. Nothing new about that, but I suppose the Inq has to find something to write about. ;)
 
At the moment, games are mostly designed to run on one, maybe two processors. Because of this, games don't take full advantage of >2 cores.

As >2 cores become more mainstream, games will get more support for them. Eventually, I would assume, games and other software will support a dynamic number of cores, so the same program will run happily on 1 core or 1024 cores.
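
Just to illustrate what I mean by a "dynamic number of cores", here's a minimal C++ sketch of the idea (using today's standard threads for brevity; none of this is from any actual game):

    #include <iostream>
    #include <thread>
    #include <vector>

    int main() {
        // Ask the OS how many hardware threads are available; may return 0 if unknown.
        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 1; // fall back to a single worker

        // Spin up one worker per core: the same binary adapts to 1 core or 1024.
        std::vector<std::thread> workers;
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back([] { /* per-core chunk of game work */ });
        for (auto& t : workers) t.join();

        std::cout << "ran " << n << " workers\n";
    }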
 
I don't think quad-core is useless for PC games, but I do think it's somewhat limited. Any sane person can look at the physics effects they're demonstrating in that latest Alan Wake video and see that the effects on display are incredibly basic and generic. Object interaction looks practically scripted, but it's adequate. Didn't I read somewhere that a CPU core is incapable of rendering detailed cloth/fabric simulations at acceptable speeds?

I'm all for multi-core gaming, but it's going to take a shitload more effort to code for effectively (and convincingly) than a PhysX solution. Regardless of how people feel about it, there's no denying that PhysX is far more tailored for pure physics simulation, and in that regard is far superior visually to what a CPU core can do.

Though in time things may change. I don't know, I'm no magical time man thing. :devilish:
 
I think the headline ought to be re-emphasized to say, "Inq says quad core useless for games".

They're as utterly clueless and incompetent over there as always, truly the tech equivalent of the scamming-and-lying-through-their-teeth British tabloid press. Any thread with posts referring to, quoting, or linking to the Inq ought to be instantly nuked, methinks, to sanitize the board and prevent the spread of fallout and pollution.
 
They may be incompetent, but they are an excellent collection of rumours, which B3D certainly can't criticise them for, as that is largely what this forum is; and more than that, they are usually quite funny. I mean, the whole bunny suit thing at IDF? That's classic! :LOL:
 
As of right now, they're pretty much 100% correct.

As for the future? They're wrong; devs will find uses for the extra cores they're given. But personally, I don't see any drastic improvements in games coming from an increase in CPU power, at least in the near future.
 
It's not the hardware that isn't "scalable", it's the current way games are created and the tools used, and that's going to change. This is on the heads of the devs and those who create the APIs, not Intel or AMD; they're doing their job, and pretty damn well, I might add.

No consumer product tech, current or past, can be scaled for speed around a single thread without hitting a limit, be it speed, heat, power, or all three. If they kept speeding up the K8 or Pentium 4 architectures as the only way to meet the demands of devs, it wouldn't take long to see we're screwed.

I do think devs had better get on board and start scaling with the tech; some games show some serious issues and really need to be built to use stuff as it becomes available. It's not fun buying a game and having to wait 8-12 months before you can play it with full effects without skipping around, because the engine was built for technology that was a dead end. Maybe we'll actually see some processors geared specifically towards gamers soon, so they begin to scale within an architecture like GPUs do, instead of chasing raw MHz for two years or longer ;).

Really, all I get out of that little write-up is that they asked some devs "so what do you think of quad-core" and they simply said "can't use it right now, don't care", because their games in development were underway even before dual core became common. You can't just tack on support and use 2 or 4x the resources; it has to be designed in pretty much from day zero. It's not exactly a surprise or news, and it fits what the Inq usually writes up perfectly.

That said, we'll see octo-core (granted, two dies) before 2008 and Nehalem, so like I said, hopefully devs starting to design games are planning for this, and we should see some really nice things over the next few years.
 
So who's going to spend the millions of dollars it will take to rewrite libraries to take advantage of multi-threading?
 
I think the key point here is not that it's impossible to get games to scale, it's that it's costly in terms of programmer time. This is partly because you have to train programmers to think in a different way. You have to do a lot of R&D to determine the best way of distributing the work between threads. And then when you write multi-threaded / parallel code you have a whole load of potential race conditions and deadlocks which you don't have to contend with in serial code, and which may only jump up and bite you one debugging run in a thousand, which makes them very time-consuming to debug. It's the effect that all this extra programmer time has on game development budgets that I think is the key issue, not the pure technicals.
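
To make the "one run in a thousand" point concrete, here's a hypothetical C++ sketch of the classic bug class: two threads hammer an unprotected counter, and whether updates get lost depends entirely on scheduling luck.

    #include <iostream>
    #include <thread>

    int counter = 0; // shared and unsynchronized: this is the race

    void bump() {
        for (int i = 0; i < 100000; ++i)
            ++counter; // read-modify-write; not atomic, so increments can be lost
    }

    int main() {
        std::thread a(bump), b(bump);
        a.join();
        b.join();
        // Often prints 200000, sometimes less, and the result varies from
        // run to run; that nondeterminism is what makes these bugs so
        // expensive to find.
        std::cout << counter << '\n';
    }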

My feeling is that multi-core will accelerate the trend of increasing use of middleware and licensing of off-the-shelf game engines. The engine developers can justify spending a lot of time on R&D for multi-threading because the cost gets amortized over multiple titles. Likewise Havok and their ilk. But I think that multi-core could well be the nail in the coffin of the in-house developed engine. Games will become more and more like boiler-plate slapped on increasingly less customised versions of a small number of standard game engines.
 
Maybe short term. As multi-core platforms become dominant, I'd expect it to become a core part of computer science curricula. So hopefully one day all competent developers coming out of college will already have multi-core ingrained into their thinking.
 
You can thank the consoles for that to happen sooner rather than later. ;)
 
Having "multi-core" on your CV doesn't make multi-threaded / parallel code easier to debug (well, it does in that it makes it go from intractable to hard :p), and it doesn't make intrinsically hard to parallelise algorithms easier to parallelise. Current tools and programming languages aren't up to the job, and they're going to have to change big-time if heavily multi-threaded/parallel programming is going to become as easy as designing a GUI in VisualBASIC.
 
Having "multi-core" on your CV doesn't make multi-threaded / parallel code easier to debug (well, it does in that it makes it go from intractable to hard :p), and it doesn't make intrinsically hard to parallelise algorithms easier to parallelise. Current tools and programming languages aren't up to the job, and they're going to have to change big-time if heavily multi-threaded/parallel programming is going to become as easy as designing a GUI in VisualBASIC.

Actually, threading in Visual Basic is easier than designing a GUI, provided you pick the right things to thread. ;) With two lines of code I can start a subroutine on a different thread, and that works great for a lot of things, like loading or saving in the background, fetching stuff from a database in the background, and so on.

However, I think this kind of in-object threading may not scale to multi-core very well. I'm getting a Core 2 Duo at work in the coming weeks, though, so I can do some testing.
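
For comparison, the same fire-and-forget pattern in C++ is nearly as short. A sketch only; saveGame is a made-up stand-in for whatever you'd push to the background:

    #include <future>
    #include <string>

    std::string saveGame() {
        // ... write the game state to disk ...
        return "ok";
    }

    int main() {
        // The "two lines": launch the work on another thread, keep going.
        auto pending = std::async(std::launch::async, saveGame);
        // ... main/UI thread stays responsive here ...
        pending.get(); // block only when the result is actually needed
    }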
 
No one. But how about compilers?

Auto-multithreading has been a holy grail of compiler writing for ages, but the results so far are, IMO, thoroughly unimpressive (and this is certainly not for lack of incentive or time; the problem has been around for several decades, and a good solution would certainly help Intel/AMD/IBM/whoever push their multicore processors). There are some compilers that can do limited auto-multithreading on simple, highly data-parallel loops if you spike your code with enough hints, but this requires a great deal of cooperation between programmer and compiler and still exposes only the simplest and most obvious parallelism opportunities.
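
For anyone who hasn't seen the "hint-spiked" style in question, OpenMP is the usual example: the programmer marks a loop as safe to split, and the compiler supplies the threads. A sketch in C++ (build with something like -fopenmp):

    // Works only because the iterations are independent of each other;
    // the pragma is the "hint", the compiler does the threading.
    void scale(float* v, int n, float k) {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            v[i] *= k;
    }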
 