Talk of ~30 core consoles scares me somewhat.
While great from a theoretical viewpoint, I don't really see the enormous benefit to developers. In today's environment, only the absolute top-level developers with the right backing, timeframe and a publisher who doesn't care so much about ROI would see meaningful benefits...
The concern I have is: hypothetically, had the PS3 had 16 SPUs, or 32... what benefit would have been seen in the majority of games that came out this year? Developers seem (at least from my viewpoint) to be struggling with problems like those imposed by optical disks and having systems with hard limits - not the slightly fuzzy boundaries of the PC world (e.g. virtual memory).
Pushing those limits outwards helps today, no doubt, but does it help developers in the future? You only push the hurdle higher.
The industry is expanding and the technology is going crazy. Finding a programmer who is honest is bloody hard as it is; finding one who is competent is a challenge. Finding one who can cope with the technology, adapt to it, learn with it and ultimately exploit it... well, hell. What university teaches multithreading, let alone on such a vast scale? In my opinion, the pool of people who have the ability to exploit these theoretical systems will only shrink relative to the size of the industry - and yet that is where everyone expects these systems to focus: on the elite hardcore programmers.
Put me on the fence because I don't like it.
The console makers need to focus on making their systems easy to exploit if the industry is to continue to expand. Hell, most developers already walk a knife edge as it is - just look at the peril Lionhead were in until the MS buyout.
So. My thoughts. I'll take the 360 as my reference, since it is the system I am most known to champion (I suppose) - although I'm still technically a casual observer, so to speak.
Soften the hard limits. Currently, it's something like:
Code:
EDRAM <== GPU <~~> Memory
           |
          CPU <-> slow HDD
Why can't it be something more akin to:
Code:
[CPU/GPU] <===> gihugeous fast cache (like edram) <---> Main memory <---> Flash <---> slow HDD
Yeah, I know - not too well thought out, but it's theoretical. Let the machine deal with the hardware. Let developers know they can expect lightning-fast performance for their most recent ~64MB, good performance for 2GB (RAM), OK for the next 2GB (say, flash, or something), then utter disaster for everything beyond that (basically, just like a PC). The point is to let the machine do what machines do best - logical ordering and structuring - and don't make a human deal with memory.
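To make that concrete, here's a rough sketch of the kind of developer-facing contract I mean. The API is purely hypothetical (none of these types exist anywhere): the developer states intent - how hot the data is - and the machine decides which physical tier it actually lands in.
Code:
// Hypothetical sketch only - these types are made up to illustrate the idea.
// The developer declares how "hot" the data is; the runtime decides whether it
// lives in the EDRAM-like cache, main RAM, flash, or spills to the HDD.
public enum AccessHint
{
    Hot,        // touched every frame  -> runtime tries to keep it in the fast cache
    Warm,       // touched regularly    -> main memory
    Cold,       // touched rarely       -> flash is fine
    Archive     // streamed on demand   -> HDD / optical
}

public static class ManagedStorage
{
    // Placement (and any later migration between tiers) is entirely the
    // runtime's problem, not the developer's. Here we just allocate.
    public static byte[] Allocate(int sizeInBytes, AccessHint hint)
    {
        return new byte[sizeInBytes];
    }
}

class Example
{
    static void Main()
    {
        // Ask for what you need, say how you'll use it, then stop thinking about it.
        var frameScratch = ManagedStorage.Allocate(8 * 1024 * 1024, AccessHint.Hot);
        var levelData    = ManagedStorage.Allocate(256 * 1024 * 1024, AccessHint.Cold);
    }
}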
Now. Then comes the programming.
For 90% of the computing world, C++ is dead, let alone assembly. As crazy as I possibly sound, the next Xbox must be designed with .NET as a first-class citizen, perhaps above C++.
If you are a game company, hiring is a huge risk as it is. Getting someone familiar with C++ is hard enough, let alone in such a resource-restricted environment where a memory leak or memory corruption in pre-allocated resources spells doom. Once again, let the machine do it. .NET is stupidly efficient at what it does, and at the low level it does a better job of targeting the hardware.
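A trivial example of what I mean by "let the machine do it" (nothing exotic, just illustration): in a managed language, an out-of-bounds write into your carefully pre-allocated pool is a loud exception at the offending line, not a silent scribble over someone else's data that surfaces as a heisenbug weeks later.
Code:
using System;

class BoundsDemo
{
    static void Main()
    {
        // Illustration only: a pre-allocated scratch buffer, like you'd carve
        // out of a fixed console memory budget.
        byte[] vertexScratch = new byte[64 * 1024];

        try
        {
            // Off-by-one: in native C++ this would quietly stomp whatever lives
            // just past the buffer. The managed runtime bounds-checks the write
            // and throws right here instead.
            for (int i = 0; i <= vertexScratch.Length; i++)   // note the buggy <=
                vertexScratch[i] = 0xFF;
        }
        catch (IndexOutOfRangeException ex)
        {
            Console.WriteLine("Caught the off-by-one immediately: " + ex.Message);
        }
    }
}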
For multicore, I don't have an obvious answer, but I do have suggestions. Having cores of differing performance is OK in my book, provided there are no hoops to jump through to use them. The other thing is GPGPU: layer this into the language and treat the GPU as an extra set of cores. Treat them as a subset of the primary cores, not an entirely different system. Obviously you shouldn't do networking on a GPU core, but that doesn't mean you should segregate them. With a smart language, IDE and compiler, the developer should know this the instant they try. I cannot stress that enough: a *smart* IDE and language/compiler. You read about things like load-hit-store performance optimization (is that the one?) and think 'how on earth would you describe this to an intern, let alone a normal developer?'...
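Something like the following is the shape of what I'm after - a completely made-up API, not anything that exists. The developer writes one kind of task, flags whether it's allowed near the GPU-style cores, and the scheduler (not the programmer) decides where it runs.
Code:
// Hypothetical unified scheduler sketch. The point: GPU-ish cores are just
// another place the same task can run, not a separate programming model.
using System;
using System.Threading;

[Flags]
enum CoreAffinity
{
    GeneralPurpose = 1,                          // the big PPC-style cores
    Throughput     = 2,                          // GPU-style SIMD cores
    Any            = GeneralPurpose | Throughput
}

static class UnifiedScheduler
{
    // A real implementation would inspect the work (no I/O, no locks,
    // data-parallel) before letting it near a Throughput core - and a smart
    // compiler/IDE would flag misuse at build time, not at runtime.
    public static void Run(Action work, CoreAffinity allowedCores)
    {
        ThreadPool.QueueUserWorkItem(_ => work());
    }
}

class SchedulerDemo
{
    static void Main()
    {
        // Skinning / particle update: embarrassingly parallel, let it go anywhere.
        UnifiedScheduler.Run(() => Console.WriteLine("skinning pass"), CoreAffinity.Any);

        // Networking: obviously pin it to the general-purpose cores.
        UnifiedScheduler.Run(() => Console.WriteLine("net tick"), CoreAffinity.GeneralPurpose);

        Thread.Sleep(100);   // crude wait so the demo prints before exiting
    }
}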
A similar story can be told of optical media: a horrific limitation that has to have huge thought and investment put in to avoid a disaster. The PS3 allowing parts of the game to be copied to the HDD must certainly ease the burden somewhat, but to call it an ugly hack is to give it credit. Honestly, something needs to change radically here, and for the life of me I can't see it (for physically distributed games). Some form of cheap (read-only?) flash would be a good start, a la the DS (and you could save your game to the disk), but it's hardly ideal, and the scale isn't there yet.
Whatever the solution, it needs to be smart. Let the machine handle it, make it fast, and make it good.
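What "let the machine handle it" might look like from the game's side - purely a sketch, with made-up mount points and no claim that any platform works this way: the game asks for an asset by name, and whether it comes from a HDD install cache or the slow optical drive is the platform's problem.
Code:
// Sketch only: transparent media tiering. Paths and the AssetStore type are
// invented for illustration.
using System.IO;

static class AssetStore
{
    const string HddCacheRoot = @"cache:\";   // hypothetical HDD install cache mount
    const string OpticalRoot  = @"game:\";    // hypothetical optical drive mount

    public static Stream Open(string assetName)
    {
        // Fast path: the asset has already been cached to the HDD.
        string cached = Path.Combine(HddCacheRoot, assetName);
        if (File.Exists(cached))
            return File.OpenRead(cached);

        // Slow path: read from optical. A real system would also queue a
        // background copy so the next request is served from the HDD.
        return File.OpenRead(Path.Combine(OpticalRoot, assetName));
    }
}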
And as you may have noticed, I haven't mentioned graphics. As much of a graphics geek as I am (in the programmer sense), I don't see challenging hurdles there. The problems at the moment are all API and design problems, not hardware. Investment in tools and software will (imo) give much, much better results here than investment in hardware. Why do we *still* not have high-level filtering functions (that are crazy optimized for the system)? Logically simple things like calculating a tonemapping constant are hard for lots of programmers, and even shipping games get it badly wrong (I'm looking at you, R6:Vegas...) - and let's not get into efficiency here. Look at XNA: a perfect opportunity, yet still painfully low level in places (for no obvious benefit); the heritage of DirectX shows through. (Hence my current hobby project, making a shader plugin for XNA.)
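For the curious, the "tonemapping constant" I mean is usually just the log-average luminance of the frame - the key value in Reinhard-style operators - and conceptually it's only a handful of lines, which is exactly why it's silly the API doesn't just hand it to you. Rough CPU-side version below; a real engine would obviously do this reduction on the GPU over a downsampled luminance buffer.
Code:
// Log-average (geometric mean) luminance of a frame, as used to scale exposure
// in Reinhard-style tonemapping. Illustrative CPU version only.
using System;

static class Tonemap
{
    public static float LogAverageLuminance(float[] r, float[] g, float[] b)
    {
        const float delta = 1e-4f;   // avoids log(0) on pure black pixels
        double sum = 0.0;
        for (int i = 0; i < r.Length; i++)
        {
            // Rec. 709 luminance weights
            float lum = 0.2126f * r[i] + 0.7152f * g[i] + 0.0722f * b[i];
            sum += Math.Log(delta + lum);
        }
        return (float)Math.Exp(sum / r.Length);
    }
}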
sigh.
OK, I've probably gone on long enough. Hopefully my point is clear, even if my words are somewhat muddled.