Look at most of the games released for the PC today: they are console ports. Why would this trend reverse?
Because we won't see a new console before 2012.
Back in 2006, PC game development was a total mess. There were a lot of people with Shader Model 1.x cards, Shader Model 2.0 cards, Shader Model 3.0 cards, and Shader Model 4.0 was ready for launch. To top it off, each card had its own set of capabilities, and there were a gazillion driver versions. So you had the choice between creating a game that targeted older specifications but looked antique on the day of release, writing a nice-looking game that nobody could run, or spending two or three times the development budget to make it run okayish on both older and newer hardware. Suffice it to say that graphics hardware underwent revolutionary but turbulent changes, which was hell for software developers.
So when consoles like the PS3 appeared, it was heaven. They have one fixed specification, and compared to the average PC of that time they were mind-bogglingly fast. So everyone and his brother embraced these platforms. Since it takes about three years to develop a game, we're still seeing a lot of games appear on consoles first and then get ported to the PC.
However, time is standing still for the PS3, while the PC has moved on. The hardware landscape is a lot less turbulent nowadays, and PCs keep getting much more powerful. There won't be a new console for another three years or so, so by then the PC will have reclaimed its position as the dominant, innovative gaming platform.
Will history repeat itself when the new consoles arrive? Likely not. Microsoft has understood the importance of having hardware conform to a minimum profile to provide a stable platform for developing games.
Well, at least we know that *today* a game doesn't need complex pixel shaders to sell well.
That's true for casual gaming consoles like the Wii, but absolutely not for the PS3 or Xbox 360. And either way, today's situation is actually irrelevant. Back when Mario was still rendered in 2D, it was also quite true that you didn't need 3D to sell well; there was simply no competition offering affordable 3D rendering. But times have changed. So it's silly to think that things won't evolve further for the casual gaming consoles.
That's exactly why I mentioned the Wii. It uses standard off the shelf components which have single-digit cost and consume very little power. And those components are also very simple from a design POV, more than an order of magnitude simpler than any of the current x86 designs if we want to talk about fixed costs.
Oh, absolutely! Today. You can keep talking about that all you like and be 100% right. But that's not what this thread is about.
The cost of designing custom hardware is going up, and that's reflected in the off-the-shelf components as well. So for instance, instead of having a complex chip for sound processing, it already makes a lot of sense to do the computations on the CPU and have a tiny DAC for sound output. In theory that's less efficient, but because you're not constantly using all the advanced features, you make better use of the silicon and you have no additional cost. The same thing is likely to happen with graphics and other workloads. Generic chips are cheaper than specialized ones. And an important bonus is that you can use the available processing power any way you like. If you have, say, six specialized chips, you're kinda forced to use the architecture the way it was meant to be used - at design time.
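To make the sound example concrete, here's a minimal sketch (hypothetical function, plain C++) of what software mixing boils down to: sum the source channels on the CPU, saturate, and hand the result to a dumb DAC.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: mix several mono 16-bit source channels into one
// output buffer entirely on the CPU; the hardware downstream only needs
// a simple DAC, not a dedicated sound processor.
std::vector<int16_t> mix(const std::vector<std::vector<int16_t>>& sources,
                         std::size_t frames)
{
    std::vector<int16_t> out(frames, 0);
    for (std::size_t i = 0; i < frames; ++i) {
        int32_t acc = 0;                          // widen to avoid overflow
        for (const auto& src : sources)
            acc += (i < src.size()) ? src[i] : 0;
        if (acc >  32767) acc =  32767;           // saturate to the 16-bit range
        if (acc < -32768) acc = -32768;
        out[i] = static_cast<int16_t>(acc);
    }
    return out;
}
```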
Think back to non-unified vertex and pixel processing. If you didn't use enough vertices you weren't getting full utilization and the geometry looked angular. If you used too many, it became a serious bottleneck and the framerate would drop. So apart from artwork and gameplay every game was identical; everyone strived for the same balance between vertex and pixel processing workload. It severely limited the developer's creativity by offering only one right way to use the hardware. The same thing is currently still true of graphics and other workloads. You're either CPU limited or GPU limited. Always. If you have an awesome idea that requires a lot of CPU time but not a lot of GPU time, your game will be slow and leave silicon unused. So it would be useful to somehow unify the two and use the available performance for whatever it's needed the most.
Like I said before though, there's a tipping point for everything. Today it's still more efficient to have a specialized chip for graphics and just live with the bottlenecks and limitations. But because of technological progress, workload variation and design cost the tipping point is moving. Sound processing has all but completely moved to the CPU these days...
You may argue that the sound processing workload has remained constant, so as dedicated chips got smaller and the CPU got more powerful it made sense to make the shift. But actually sound processing underwent major improvements in sample rate, resolution, output channels, filtering, effects, source channels, etc., and still the shift happened. So there is no reason it won't happen for graphics. We're already seeing systems with more GFLOPS available on the CPU than on the IGP. What people consider adequate graphics is evolving more slowly than CPU performance.
Comparing to a volunteer-based effort isn't very fair, and hardware development costs aren't anywhere near as high as you place them.
Why would it not be fair? Their goal is to offer graphics free from bugs in closed drivers. If it's such a great idea to achieve that by designing open hardware, then why, after five years, are they still nowhere near offering a better solution than software rendering, let alone closed hardware rendering? Heck, swShader was a volunteer-based effort that achieved more five years ago than what they have today. And while they're making progress, it's a race they're unlikely to ever win. Why? Because the cost of developing a custom chip from scratch is prohibitively high. Not just for them but for everyone. That's why the industry went from custom designs to off-the-shelf components in the first place. And the next step is moving toward generic processors and implementing what you need in software.
So it isn't very surprising that lately there have been some posts on the OGP mailing list suggesting to use a CPU or DSP, instead of an FPGA or ASIC...
Low-power 3D IP, both fixed-function and programmable, is available from multiple vendors, has drastically higher performance/W than a software-based approach, and is already shipping in millions of handhelds. If it were as costly as you depict it, this wouldn't have happened.
Absolutely. But once again this is more about the future than the present. Handheld architectures have been evolving towards fewer, more generic chips, and will continue to do so, to cut costs and extend capabilities. Think about the iPhone. It is capable of running all the applications in the vast App Store thanks to a relatively powerful CPU. And its fixed-function GPU just got replaced by a programmable one.
Intel graphics hardware isn't exactly the cream of the crop, is it? Their processors, on the other hand, surely are...
So? Clearly it doesn't matter that much for Intel's sales. Graphics is becoming just another task, like sound, that a system not primarily meant for gaming merely has to run adequately in order to sell. So once CPUs get powerful enough to take over that task, and we're clearly getting there, it's a waste to have a second chip dedicated to graphics.
They were also projecting the release of a 10 GHz P4, for that matter. As I said above, I take long-term predictions in our market with a grain of salt.
But CPUs still got faster. Exactly how we get there is far less relevant. In fact, the number of cores has become a new parameter that allows the design to be optimized better. We could easily have had 10 GHz Pentium 4's by now, but a 3 GHz Core i7 is much more powerful.
Yes and as you might have guessed the process improvements apply to CPUs just as they apply to more or less dedicated hardware.
Which hasn't kept sound processing a market dominated by dedicated hardware...
While in theory it helps all hardware equally, there are other forces at play that dictate a move towards integration, unification, generic processing and software solutions.
And in the same timeframe graphics cards made much more progress, both in performance and in features, as far as graphics goes.
The high-end market achieved this "progress" by throwing more silicon at it and burning more Watts. That's not going to happen for Wii or other low-end systems. Like I said, we're starting to see ever more systems where the CPU delivers more GFLOPS than the IGP. There's only a handful of reasons left why the IGP wins at graphics.
I wouldn't call current GPUs rigid, and besides, they are getting more flexible by the day. Anyway, offering more or less dedicated hardware for certain tasks makes perfect sense, and that's exactly why there's a market for it.
And as they become more flexible they get more CPU-like. So it gradually makes less sense to use dedicated hardware for it. And a CPU with gather support (which is a generic operation) will be even better equipped to run graphics. The convergence is undeniable. So only two things can happen: either the convergence stops at some point, or they get close enough to no longer require a GPU.
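To illustrate what gather buys you, here's a hedged sketch (assuming AVX2, which offers such an instruction; the function itself is hypothetical) of fetching eight texels at arbitrary computed positions in one instruction, which is exactly the kind of memory access texture sampling needs:

```cpp
#include <immintrin.h>

// Sketch (assumes AVX2, compile with -mavx2): gather eight texels from
// arbitrary positions in a texture stored as a flat float array. The
// per-lane indices can come from any address computation.
__m256 gather_texels(const float* texture, int width,
                     __m256i x, __m256i y)
{
    // index = y * width + x, computed per lane
    __m256i idx = _mm256_add_epi32(
        _mm256_mullo_epi32(y, _mm256_set1_epi32(width)), x);
    // One instruction fetches all eight non-contiguous values.
    return _mm256_i32gather_ps(texture, idx, sizeof(float));
}
```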
As long as graphics evolves, the workload gets more generic. The TEX:ALU ratio hasn't stopped dropping while other forms of memory access increase, so at some point it no longer makes sense to have dedicated texture sampling. This may happen ten or twenty years from now, I don't know, but the benefits of dedicated hardware are only slowing it down, not stopping it.
I doubt it considering that the most innovative thing that has been done in the console market is stuffing accelerometers inside the controllers of a machine which has significantly _less_ computational power than its competitors.
Again you're not looking at a long enough timeframe. Things have changed dramatically since the time when Mario was a sprite. I obviously don't know exactly what innovation will appear in the next few decades, but I do know that it will be dramatically different from today and generic processing helps spur the creativity of software developers.
That looks like a very nice tool but nearly 100% of the 'multimedia' stuff you find on the internet is based on Flash which is:
- single threaded
- doesn't use any kind of SIMD acceleration (or at least didn't the last time I disassembled their linux plug-in)
- sucks big time from a performance POV anyway and could use a lot of straightforward optimization before going into SIMD/threading
Is it going away? I'd really hope so but I fear not.
What were you looking at specifically? ActionScript itself is compiled to intermediate code which then gets interpreted or JIT-compiled. So that's obviously not using SIMD or multi-core. However, if you include for instance a video it can use a highly optimized video codec.
Yes, but only to software that actually _needs_ the performance in the first place and that's exactly what I have written in my post. The rest of the software industry will move along as it already did with MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, etc...
Again, you have to look beyond the application-level software. More often than not it uses libraries and frameworks that do use SIMD to some extent. So while a developer may think he doesn't need to bother with SIMD, it might be a vital part of the components he uses.
And even if you look specifically at applications that truly don't use SIMD at any level, that fraction is actually not relevant for the system architecture. The fact that some software doesn't make use of it doesn't make it unnecessary. Intel invests in AVX for the software that does make use of it, at various levels.
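Coming back to the library point: a minimal sketch (hypothetical routine, plain SSE intrinsics) of how an ordinary application call ends up running SIMD code its developer never wrote.

```cpp
#include <xmmintrin.h>  // SSE
#include <cstddef>

// Hypothetical library routine: the application just calls sum(), unaware
// that the implementation processes four floats per iteration with SSE.
float sum(const float* data, std::size_t n)
{
    __m128 acc = _mm_setzero_ps();
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)                  // vectorized main loop
        acc = _mm_add_ps(acc, _mm_loadu_ps(data + i));

    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float total = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; ++i)                          // scalar tail
        total += data[i];
    return total;
}
```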
While development trends may change you are ignoring the fact that in many sectors the market leaders are those who wrote the most craptastic possible software. Flash is a monument to this unfortunate state and I don't see any serious competitors stealing thunder from it yet.
You've got to be kidding. Flash is getting some serious competition from Microsoft Silverlight, and Google is working hard trying to make absolutely anything and everything run in a browser. Sure, Flash still has the biggest market share (which it earned through innovation), but if it doesn't keep up with technological advancements it will lose ground very quickly.
I wouldn't count on it and besides from a performance POV it can already be done on today's processors.
Please. Speech recognition on today's consumer systems is terrible. And it's not because of a lack of algorithms but primarily because of a lack of computing power. It's no coincidence that the word error rate of HMM-based speech recognition is improving at the rate of Moore's law. The best software available today is barely running in real-time on heavy workstations, and still doesn't perform well with speaker-independent conversational voice. But progress is steady so it's only a matter of time before it becomes viable on consumer systems. If putting accelerometers into controllers is innovative because it increases the interaction with the machine, then voice recognition will unleash a revolution.
It depends very much on the code and the language you use. With C/C++, unless you very carefully place your const and restrict qualifiers, it might just not work, because the compiler is completely unable to figure out whether all those accesses alias or not. Depending on the language, there's more to automatic loop vectorization than having scatter/gather instructions.
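For instance, take a minimal sketch (hypothetical function, relying on the common but non-standard __restrict extension): without those qualifiers the compiler has to assume the output can overlap the inputs, so it can't safely vectorize the loop.

```cpp
#include <cstddef>

// Without the (non-standard but widely supported) __restrict qualifiers,
// the compiler must assume 'dst' might overlap 'a' or 'b', so each store
// could change a later load and the loop cannot safely be vectorized.
void saxpy(float* __restrict dst,
           const float* __restrict a,
           const float* __restrict b,
           float scale, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = scale * a[i] + b[i];
}
```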
That's only a minor bump in the road. First and foremost we need scatter/gather instructions before software developers can even use them at all!
Anyway, C++, just like any other language, isn't static. We've seen many revisions and there are more to come. There have been proposals to adopt Fortran aliasing rules and to use an 'unrestrict' keyword for explicitly allowing pointer aliasing where practical. Compiler switches can ensure backward compatibility, and warnings can guide developers to write things the way they intended. C++ and pointers are for developers who understand the complexities anyway. And note that a lot of CRT functions have undefined behavior for overlapping memory, so those wouldn't be broken anyway.
So it's not like we don't have any solutions to the problem. A somewhat similar thing happened when Hyper-Threading appeared. Suddenly a fraction of software deadlocked and people blamed Intel for it. But nowadays everyone's aware of the pitfalls and has accepted the need to write well-behaved code. So programmers can adapt to hardware changes, and scatter/gather is no different.
That's certainly true, but we're not there yet and we won't get there very soon IMHO. And even when we do get there (we've got the appropriate languages, compilers, and libraries), you still have to convince people to go there and use them.
I can settle for "not very soon". It's absolutely not something that's going to happen overnight. There's a gigantic amount of legacy code to rewrite. But just like for instance the move to object-oriented programming has been slow, that never meant it wouldn't succeed.
While your point is valid, I'd have pointed to another language instead; having seen SystemC in action for hardware simulation, I can tell you that's not something you want to use for performance-critical code unless you are very patient.
That's exactly why I mentioned it's currently an abstraction on top of C++. It merely gives us a peek at the ideas that could be useful to create real languages that natively support a high degree of concurrency.
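For what it's worth, a minimal sketch (assuming the standard SystemC library) of the flavor of it: concurrent processes are declared inside modules and the simulation kernel schedules them, all within plain C++.

```cpp
#include <systemc.h>

// Minimal sketch (assumes the SystemC library): concurrency is expressed
// declaratively; the kernel re-runs the process whenever an input changes.
SC_MODULE(Adder) {
    sc_in<int>  a, b;
    sc_out<int> sum;

    void compute() { sum.write(a.read() + b.read()); }

    SC_CTOR(Adder) {
        SC_METHOD(compute);      // register compute() as a concurrent process
        sensitive << a << b;     // trigger it on changes of a or b
    }
};
```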