End of the line for fast consumer CPUs means what for gaming?

Odd, as in my experience it's the opposite, and it would be, considering the APIs the PC is always forced to go through. Console *ports* generally have very low CPU usage (my Athlon X2 215 is very rarely the limiting factor, and those ports usually get significantly better framerates than their native counterparts), but that just seems to be down to the level of optimization the console versions required.

CPU hogs are far more often games designed for the PC from the outset (Metro, Stalker, and RTSes, for example).

I'm not sure where you're disagreeing with me, though?
 
> But is that a stable 30 fps with vsync on, even in hectic battles?
I am pretty sure the FPS would dip in hectic battles. I got these figures from a forum collecting frame rates for E350 games. Their list had several other (AAA) console ports reaching around 30 fps on the E350, but it doesn't say how the fps was measured (average or minimum, with or without vsync).
Running the 360 game with no cap / vsync off is the only way to get a reasonable comparison to the frame rate figures that PC gamers and hardware websites almost exclusively use. But only developers can do that.
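Since the list doesn't say whether those figures are averages or minimums, here's a minimal C++ sketch (hypothetical frame times, not data from that list) showing how far apart the two numbers can be for the same run:

```cpp
// Compute "average fps" vs "minimum fps" from one frame-time trace.
// The 66.7 ms spikes stand in for hectic battles.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> frameMs = {33.3, 33.3, 33.3, 66.7,
                                   33.3, 66.7, 33.3, 33.3};

    double totalMs = 0.0;
    for (double ms : frameMs) totalMs += ms;
    double avgFps = 1000.0 * frameMs.size() / totalMs;   // ~24 fps

    double worstMs = *std::max_element(frameMs.begin(), frameMs.end());
    double minFps = 1000.0 / worstMs;                    // 15 fps

    std::printf("average: %.1f fps, minimum: %.1f fps\n", avgFps, minFps);
    return 0;
}
```

A site reporting the average would call that run ~24 fps; one reporting the minimum would call it 15, so the two methods aren't comparable.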
Or you could force vsync on for PC as well.
 
Makes me wonder, given the kind of low-level code you guys write for console graphics, whether a highly modified Haswell design (fewer CPU cores, more EUs) would be a good fit for the next gen, especially if Intel could give you a large last-level cache shared between the CPU cores and EUs, or some kind of eDRAM. I was listening to Tom Piazza's presentation from IDF and he said Ivy Bridge is bringing a 20x improvement in some GPGPU workloads (he said this in the context of the addition of scatter/gather), and Haswell is adding 256-bit AVX for integer operations. At 22nm the chip could be fairly compact once you remove the consumer PC features like the power management, the PCI-E controllers, and more of the uncore.
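For a rough sense of what 256-bit integer AVX means in practice, here's an illustrative C++ sketch (made-up data, not console code) using the AVX2 intrinsics Haswell is slated to add: eight 32-bit integer lanes per instruction, plus a gather that loads from eight independent indices in one go:

```cpp
// Illustrative AVX2 usage: a 256-bit integer add plus a gather.
// Build with a compiler flag enabling AVX2, e.g. -mavx2.
#include <immintrin.h>
#include <cstdio>

int main() {
    int table[16];
    for (int i = 0; i < 16; ++i) table[i] = i * 10;

    // Eight 32-bit integer adds in a single instruction.
    __m256i a = _mm256_set1_epi32(100);
    __m256i b = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    __m256i sum = _mm256_add_epi32(a, b);

    // Gather table[idx[0..7]] in one intrinsic (scale = 4 bytes per int).
    __m256i idx = _mm256_setr_epi32(15, 3, 7, 1, 8, 0, 12, 5);
    __m256i gathered = _mm256_i32gather_epi32(table, idx, 4);

    int out[8];
    _mm256_storeu_si256(reinterpret_cast<__m256i*>(out),
                        _mm256_add_epi32(sum, gathered));
    for (int i = 0; i < 8; ++i) std::printf("%d ", out[i]);
    std::printf("\n");
    return 0;
}
```

(The gather half of this is the CPU vector ISA side; the scatter/gather Piazza mentioned was in the GPGPU context, so treat it only as an analogy.)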
 
> I am pretty sure the FPS would dip in hectic battles. I got these figures from a forum collecting frame rates for E350 games. Their list had several other (AAA) console ports reaching around 30 fps on the E350, but it doesn't say how the fps was measured (average or minimum, with or without vsync).

I just get the feeling that PC figures will be as high as possible (triple buffering or no vsync), since no vsync is the normal way to benchmark PC games, and no vsync or vsync with triple buffering is the best way to actually play them (particularly at lower frame rates).
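To make the distinction concrete, here's a loose D3D11/DXGI sketch (illustrative only, not any particular game's code): the SyncInterval argument to Present() toggles vsync, and asking for an extra back buffer is the usual way to approximate triple buffering:

```cpp
// Sketch: swap chain setup and the vsync toggle in D3D11/DXGI.
// Assumes a device/window already exist; error handling omitted.
#include <d3d11.h>
#include <dxgi.h>

void FillSwapChainDesc(DXGI_SWAP_CHAIN_DESC& desc, HWND hwnd) {
    desc = {};
    desc.BufferCount = 2;                 // extra back buffer ~ triple buffering
    desc.BufferDesc.Width = 1280;         // 720p, as in the E350 figures
    desc.BufferDesc.Height = 720;
    desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.OutputWindow = hwnd;
    desc.SampleDesc.Count = 1;
    desc.Windowed = TRUE;
}

void PresentFrame(IDXGISwapChain* swapChain, bool vsync) {
    // SyncInterval 1 = wait for vblank (capped, how you'd play);
    // SyncInterval 0 = present immediately (uncapped, how sites benchmark).
    swapChain->Present(vsync ? 1 : 0, 0);
}
```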

The Bobcat APUs seem pretty impressive. Wikipedia is showing a Z-01 Bobcat with a 1 GHz CPU / 276 MHz GPU coming in at 5.9 W.

http://en.wikipedia.org/wiki/List_of_AMD_Fusion_microprocessors#.22Desna.22_.2840nm.29

Who knows what it might come down to at 28nm next year. The idea of something like that in a tablet, able to run the full catalogue of x86 Windows software, is pretty exciting, and more exciting than Ivy Bridge, because it'll actually have an impact on what people can do with their devices.

> Or you could force vsync on for PC as well.

Yeah, that'd give you a much better idea too (with the exception of areas where the PC version might hit 60 fps while the console stays capped at 30 fps).
 
> I just get the feeling that PC figures will be as high as possible (triple buffering or no vsync), since no vsync is the normal way to benchmark PC games, and no vsync or vsync with triple buffering is the best way to actually play them (particularly at lower frame rates).
Agreed. And there are many games in that list where the E350 only manages 15-20 fps at 720p. But it's hard to say whether that is down to the quality of the PC port, or to the game genuinely depending on certain CPU/GPU functionality that the E350 lacks (compared to the consoles). I don't believe any games are specially optimized for the Bobcat architecture, or for low-bandwidth integrated APU GPUs, but if these chips become popular in Windows 8 tablets, maybe that will change. The competing Intel HD 3000 GPU would likely receive more love as well if Ultrabooks and tablets reach the level of popularity Intel is hoping for (over 50% of the total PC market).
> The Bobcat APUs seem pretty impressive. Wikipedia is showing a Z-01 Bobcat with a 1 GHz CPU / 276 MHz GPU coming in at 5.9 W.
Bobcat seems to be a really solid CPU architecture. It fares really well in Agner Fog's analysis (a pretty interesting read, btw): www.agner.org/optimize/microarchitecture.pdf
 
Not much meat, but here are the first hints about the performance of the first ARMv8 processor:
http://www.anandtech.com/show/5098/applied-micros-xgene-the-first-armv8-soc
It's interesting because it's possibly the first high-performance ARM processor, too.

EDIT
A presentation about it (slides accessible through the comments).

EDIT 2: a bit OT, though.

It would be interesting if it pans out. While the per-core performance is half that of a slow Sandy Bridge, it claims to get there at a low power level.
Presumably, the core would be compact, since they want to jam a lot of cores in there.

The caveats loom large: this is all estimated, and it seems their proof point is an FPGA implementation, not actual silicon. Until real chips exist and are benchmarked, the performance and power numbers are suspect.

I am not certain what process node their best case is supposed to be on. It would be very good if these projections were for 40/45nm, prior to the significant power savings of a node jump, but I suspect not.

The power numbers are core + IO/PCH. These days a significant fraction of overall power can come from everything but the cores, and there are significant power advantages to an SoC that Sandy Bridge cannot exploit. That may make the core-power comparison unfair.
 
I agree it's still too early to draw any conclusions.
I expect the cores to be compact too. On slide #32 they show a die, but the image quality is terrible.
Actually, I wonder whether it's a single core or a module; a module includes two cores and a shared L2 (at least, that's what it looks like).
 
I can't access the presentation without registering, but how can they show a die shot when their proof of concept is an FPGA?
 