2) A Bobcat or Llano would be much cheaper than getting a GPU and then another chip.
I'm comparing Llano to the Wii and to the Xbox 360. Compared to those, it's very capable and very cheap, and would allow Nintendo to go the Wii route again.
Of course, if you wanted to build a powerhouse console that blows Llano out of the water, you could. But it would be much more expensive.
That is false. The x86 "tax" on a CPU will not make it cheaper. A Bobcat CPU is pathetic even compared to the current 360 and PS3 CPUs.
3dilettante, thanks for your insights (in regard to AVX units, I meant 256 bits wide).

Per-clock performance is one part of the overall equation. Neither Atom nor Bobcat can get near the clock speed of Xenon, so even a modest per-clock advantage does not mean there is an overall improvement.
There are also situations where OoOE does not yield significant performance gains, such as when there are memory latencies too large to hide, or the code is scheduled well enough that minimal reordering is necessary.
In areas of code with low ILP like some long chains of pointer-chasing, clock speed and the memory subsystem become dominant, and other parts of the architecture do not increase performance.
Without a redesign, Bobcat is not capable of scaling its clock close enough to Xenon to make up for the large clock disparity, at least not at acceptable voltages.
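The clock-versus-IPC argument above reduces to a simple product. A minimal sketch of that arithmetic, where every IPC and clock figure is a made-up placeholder rather than a measured value:

```python
# Naive throughput model: performance ~ IPC x clock (GHz).
# All numbers below are hypothetical placeholders, not measurements.

def throughput(ipc, clock_ghz):
    """Instructions retired per nanosecond under this simple model."""
    return ipc * clock_ghz

# A modest per-clock (IPC) edge cannot offset a large clock deficit:
narrow_low_clock = throughput(ipc=1.2, clock_ghz=1.6)  # Bobcat-like stand-in
wide_high_clock = throughput(ipc=1.0, clock_ghz=3.2)   # Xenon-like stand-in
print(narrow_low_clock < wide_high_clock)  # True
```

Under this toy model a ~20% per-clock advantage is swamped by a 2x clock gap, which is the shape of the argument being made above.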
That might require some additional engineering. AMD does not have an uncore like Sandy Bridge, nor does it seem to have one planned for a while. It tends to use a crossbar that takes up a fair amount of space and is less scalable to higher client counts. The modularized Bulldozer keeps the crossbar client count lower by making the pairs of cores share an interface.
The average IPC for a number of workloads on Athlon was around 1 per cycle, at least for some desktop media applications when AnandTech looked into it a while ago.
There are low periods and burst periods. If a core is narrow, and if it is fighting for shared resources, those burst periods take longer. The probability of contention, particularly under load, rises because the bursts take longer to get through.
There are limits to what can be gained by adding more threads. The more thread-level parallelism is exploited, the more serial components dominate. Console single-threaded performance is not yet so high that it can be ignored, much less made worse.
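The point about serial components dominating is Amdahl's law. A quick sketch, using an arbitrary 90%-parallel workload as the example:

```python
def amdahl_speedup(parallel_fraction, n_threads):
    """Overall speedup when only part of the work scales with thread count."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_threads)

# Even with 90% of the work parallelized, returns diminish quickly,
# saturating near 10x no matter how many threads are added:
for n in (2, 4, 8, 1000000):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

This is why the remaining serial portion, i.e. single-threaded performance, cannot simply be traded away for more cores.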
If you are saying that a Bobcat redesigned to be more like Bulldozer, with more resources to burn, would benefit more from being put into a module, then I have not said anything to the contrary.
AVX has 8-wide SIMD (256 bits across eight 32-bit lanes), not 256-wide.
Fermi and Cayman have 16-wide units, but due to their batch sizes, their minimum granularity is 32 and 64, respectively.
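To put numbers on the granularity point: with a fixed batch size, a partially filled batch still issues in full, so effective lane utilization drops. A toy calculation, where the 20-item workload size is an arbitrary example:

```python
import math

def utilization(active_lanes, batch_size):
    """Fraction of lanes doing useful work; a partial batch still issues whole."""
    batches = math.ceil(active_lanes / batch_size)
    return active_lanes / (batches * batch_size)

# 20 active work-items on a batch-32 design vs a batch-64 design:
print(utilization(20, 32))  # 0.625  -> 62.5% of lanes busy
print(utilization(20, 64))  # 0.3125 -> 31.25% of lanes busy
```

The larger the minimum batch, the more a small or divergent workload pays for its unused lanes.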
The cores running the MLAA algorithm are most likely larger than the silicon found in the ROPs.
SRAA in particular uses higher-resolution buffers, which would take longer to build if the ROPs were scaled back.
I'm under the impression that Bobcat would shrink to the 28nm TSMC (GF?) process. 32nm is the SOI high-performance process at GF.
The original hope for Xenon was that it was going to be OoO, but the time-to-market was so short that IBM told Microsoft they would not have the time to properly design and validate the part.

So I would bet on pretty narrow OoO CPUs, 2- or 3-issue, with a potent 4-wide SIMD, dynamic clock speeds, and other power-containment features.
The question is which form of multi-threading: SMT or CMT? CMT looks like a good idea, and one you may not fully appreciate just by comparing AMD's Bulldozer performance to its Intel counterparts.
As I understand it, with CMT you can either save power and die space, or reinvest those gains in a better front end and various architectural improvements.
What is your opinion on the matter?
Wasn't the Xbox 360 meant to have 256 MB of RAM until Epic told them to up it to 512?
That cost them $1 billion.
If that was true and they hadn't launched with 512 MB, then the PS3 would truly have been the more powerful console.
I can't wait to see what both companies come up with, but what will probably push me towards MS is the number of XBLA games I have. I would like to think they will be compatible, and hopefully get some sort of enhancement to 1080p and 60 fps.
What worries me is that they no longer have the staff who thought up the achievements and friends-list ideas. They appear to be a lot less of the company that launched the Xbox 360.
The demo was created by a 12-person team in about two months, Epic Games President Michael Capps said during a Thursday press briefing at GDC. “Samaritan” was running on a custom-built system using off-the-shelf PC parts, Epic said, including three Nvidia GTX 580 graphics cards, which cost about $500 each.
Very few if any consumers today would be able to run the demo, but with the rapid advancement of technology, it’s not unlikely that we’ll be playing something that looks like this before too long.
“We have a pretty good idea of what’s coming next,” said Capps, who added that Epic is already in preproduction on a brand new title for the next generation of game machines.
Emulating Cell would be nigh impossible on anything other than a similar architecture, including the instruction set.