I'm not sure about that; look at what DICE did with FB2 on the PS3. If the market share is there, some developers are willing to go through some pain. I actually think DICE said as much in one of their papers.

The console makers would need to get together and agree to go just as far ahead of the curve as the others, or find someone willing to pay the money to implement the changes.
I think the issue is more about risk from a competitive pov: the risk that early showings underperform, which, looking at the money involved, I agree is a massive hurdle.
Indeed, especially as we are speaking about big money. But it is a case where market dynamics hold back progress. I mean, Sebbbi, Andrew Lauritzen and I guess plenty of others are toying with software rendering now; possibly out of intellectual curiosity for the former, and the same applies to the latter, who is also a researcher, so well...

The direction the money is going seems to favor being able to make use of what we already have or will soon have, which leverages existing investments and expertise.
It seems that some pretty effective languages are available now; actually, I wonder at this point if the issue is more the hardware, as there is no proper hardware. There is also no market if no console manufacturer is interested in jumping forward. In the software rendering thread, people seem to agree that it won't be before 5 or 6 years that things could change, and it is only a "could", as market dynamics and the choices made by the big actors in the field will set the tide.
Without consoles (at least one) there is no market for such a device.
The thing is, the investment to make that happen would be massive. From what I get from the discussion going on in the aforementioned thread, you need sane single-thread performance, quite possibly 4 hardware threads, solid throughput from the SIMD units, quite some cache and, icing on the cake, a lot of on-chip bandwidth to cope with the requirements of the various stages of 3D rendering. Quite an extensive list of requirements that calls for an engineering jewel/marvel.
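To make the SIMD-throughput point a bit more concrete, here is a minimal sketch (my own illustration, not something taken from the thread) of the kind of inner loop a software rasterizer lives in: evaluating triangle edge functions across a small strip of pixels. It is written so a vectorizing compiler can spread the per-pixel work across SIMD lanes, and the per-pixel buffers this feeds (coverage, depth, shading inputs) are exactly the sort of thing that eats cache and on-chip bandwidth.

[code]
// Hypothetical sketch of a software-rasterizer inner loop, for illustration only.
#include <cstdint>

struct Edge { float a, b, c; };          // edge function: a*x + b*y + c

// Test an 8x1 strip of pixels at row y, starting at column x0, against a triangle.
// Returns a coverage bitmask, one bit per pixel.
inline uint32_t coverage8(const Edge e[3], float x0, float y)
{
    uint32_t mask = 0;
    for (int i = 0; i < 8; ++i)          // 8 "SIMD lanes" worth of pixels
    {
        float x = x0 + float(i);
        bool inside = (e[0].a * x + e[0].b * y + e[0].c >= 0.0f) &&
                      (e[1].a * x + e[1].b * y + e[1].c >= 0.0f) &&
                      (e[2].a * x + e[2].b * y + e[2].c >= 0.0f);
        mask |= uint32_t(inside) << i;
    }
    return mask;
}
[/code]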
Well, I can't dispute your pov; I stated that based on Keldor's comments on the matter (same thread as above).

ISA concerns are generally secondary to things like implementation and design, unless something is truly difficult to implement well, like x87 floating point instructions.
That is an interesting thought. I would bet that Intel is working on a Larrabee replacement; I wonder what they will come up with.

A single SIMD unit would have made Haswell worse at existing workloads. The inflexibility of a single vector pipeline could conceivably make it worse overall for most loads that get by with shorter vectors.
Speaking of consoles and what is possible now, I wonder, as some stated (in the same thread again), whether 'well rounded' throughput CPU cores (by well rounded I mean cores that don't give up on anything but peak SIMD performance à la Larrabee), backed up by a tiny IGP, could have been doable.
Especially looking at what DICE did on the PS3, it may have lessened the pressure on the CPU.
Honestly, I won't go further, as I can't contribute anything sensible to the aforementioned topic, and you and others have had a really interesting discussion on the matter. For some reason I think that ultimately CPUs are superior, and that if you have dedicated units (like graphics cards today, or the video engines in GPUs and Intel processors, or sound cards, or whatever accelerators you may find in, for example, a PowerEN), those devices should bring great bang for the buck (being both power and area efficient).
There is something I don't like: graphics workloads are getting more complicated and GPUs are trying to deal with more general-purpose tasks, while in the meantime CPUs (whether widely available or not) have also improved their throughput and still have quite some room left. To me it doesn't look "efficient": you have two types of resources (on which you spend quite some silicon, both of which burn power, etc.) that "conceptually" fight for the same workloads (whereas it should be easier to avoid workloads fighting for the same resources).
From a software pov it looks like a constant headache to have both things work together: they have different strengths, load balancing should prove hard, you have to optimize code for two different architectures (part of that can be hidden, but it is still there, on the shoulders of the driver teams), etc.
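As a hedged illustration of the load-balancing headache (my own sketch, not anything from actual driver code), this is the sort of feedback loop you end up writing when work is split between two devices with different strengths: the split ratio has to be re-tuned continuously from measured timings, and damped so that noisy measurements don't make it oscillate.

[code]
// Hypothetical CPU/GPU work-split heuristic, for illustration only.
#include <algorithm>

struct FrameTimings { double cpu_ms; double gpu_ms; };   // measured each frame

// Fraction of the work (0..1) handed to the CPU; the rest goes to the GPU.
// Nudges the split toward whichever side finished earlier last frame.
double rebalance(double cpu_share, const FrameTimings& t)
{
    // Per-unit-of-work cost on each device, from last frame's measurements.
    double cpu_cost = t.cpu_ms / std::max(cpu_share, 1e-3);
    double gpu_cost = t.gpu_ms / std::max(1.0 - cpu_share, 1e-3);

    // Ideal share so both sides finish at the same time, damped to avoid
    // oscillating when the measurements are noisy.
    double ideal  = gpu_cost / (cpu_cost + gpu_cost);
    double damped = 0.75 * cpu_share + 0.25 * ideal;
    return std::clamp(damped, 0.05, 0.95);
}
[/code]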
To me it looks like quite a dreadful situation at this point; it is not the same as, say, questioning the validity of having video processing units.
I think the bulk of the computations should be moved to CPU cores (possibly cores still to be designed).
At the same time, I don't really agree with Nick: I don't see the future (at least not any time soon) consisting of plenty of massive cores (like Haswell and its successors). Though to me that doesn't conflict with the idea of having the bulk of the computations done on CPU cores.
I kind of have the reverse position: I would more easily question how many of those cores are needed in the personal realm. Looking at what a 360 achieves with a pretty slow CPU, and at the tasks run by the average user, I would think that if customers need many CPU cores, they don't need many "big cores". For example, if I look at how Flash is accelerated by the GPU, it looks like quite an effort on the software side; actually, if they have this working on a GPU, the result would be greater on many "well rounded CPU cores".
I do agree with you when you answered Nick that we do not need 16+ (I think the number was 24) Haswell-type cores, though I'm not sure that discards the need for more CPU cores, nor that GPUs should completely disappear anytime soon; but they could focus on the stuff they are massively faster at.
Edit: for example, looking at that post makes me really wonder about the extent to which one should invest silicon in the GPU (not discard it altogether).