PS4 & XBone emulated?

Yes, although I think there might be another similar one floating around.
The socket doesn't change, and the memory tech is DDR3.
Nothing changes but the core and a drop in the power ceiling needed to compete with the console APUs.

Then again, we still don't know what the difference is between Socket FM2 and FM2+. We know FM2+ is backwards compatible with FM2 CPUs, but FM2 boards are not forward compatible with FM2+ CPUs. There has to be more functionality to FM2+.
You don't think FM2+ could leave some headroom for Hypermemory or a full-on GDDR5 memory controller?


For the desktop? Start with "if", first.

So thinking this through:
- AMD has the IP, time and manpower investment to build iGPUs that are much superior to anything we'll ever see from Intel - from hardware to driver development to developer relations. They have already built two x86 SoCs with such iGPUs throughout 2011/2012 (PS4 and Xbone).
- AMD has been investing a lot in optimizing and promoting their transition to HSA in their APUs.
- AMD has already invested a lot in solving the memory-bandwidth limitation for high-performance iGPUs (PS4 and Xbone) in two different ways (GDDR5 in one, a large cache in the other).

- And then, AMD would rather quit the PC business than take the measures necessary to stay competitive, most of which have already been adopted in mass production?


From what I've seen on AnandTech, AMD seems to be gathering their old "dream team". If such a team reaches that decision, I'll be very disappointed.
 
They've given up on the high end already. I'd say there is little room in any market (save for dedicated GPUs, they are pretty good at that) for a company that executes as poorly as AMD, but I would love to be proven wrong.
 
Then again, we still don't know what the difference is between Socket FM2 and FM2+. We know FM2+ is backwards compatible with FM2 CPUs, but FM2 boards are not forward compatible with FM2+ CPUs. There has to be more functionality to FM2+.
Kaveri is on FM2+. The primary difference appears to be PCIe 3.0.
There may be some electrical changes, but nothing so different as to break backwards compatibility.
The number of pins differs by two.

You don't think FM2+ could leave some headroom for Hypermemory or a full-on GDDR5 memory controller?

Neither is entirely up to the socket, although it depends on how it allocates its pinout. At least with Kaveri and with backwards compatibility we see that we are not getting either, so the pins aren't doing things that much differently from FM2.

So thinking this through:
- AMD has the IP, time and manpower investment to build iGPUs that are much superior to anything we'll ever see from Intel - from hardware to driver development to developer relations. They have already built two x86 SoCs with such iGPUs throughout 2011/2012 (PS4 and Xbone).
Per their investor PR, the nice thing about semi-custom is that AMD doesn't invest the money necessary to engineer those products.

- AMD has already invested a lot in solving the memory-bandwidth limitation for high-performance iGPUs (PS4 and Xbone) in two different ways (GDDR5 in one, a large cache in the other).
The GDDR5 leverages technology that AMD has had for years. I'm actually curious how different the controllers are for the DDR3 in Durango versus the GDDR5 in Orbis, since there are GPUs that freely use either.
The volumes and cost for cheap or large-volume OEMs are uncertain.

Large on-die memory has historically not been a good fit for the more varied resolutions of the PC market, or for a platform that lacks a fixed software stack able to keep track of it.
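For scale, the two approaches can be put side by side with back-of-envelope peak-bandwidth arithmetic. The bus widths and transfer rates below are the widely reported figures for the two consoles; the helper function is just illustration:

```python
# Peak memory bandwidth from bus width and transfer rate (illustrative).
def peak_bandwidth_gbs(bus_bits, transfers_per_sec):
    """GB/s = bus width in bytes x transfers per second."""
    return bus_bits / 8 * transfers_per_sec / 1e9

# Orbis (PS4): 256-bit GDDR5 at 5.5 GT/s -> ~176 GB/s to everything.
orbis = peak_bandwidth_gbs(256, 5.5e9)

# Durango (Xbox One): 256-bit DDR3-2133 -> ~68 GB/s to main memory,
# with the 32 MB on-die ESRAM making up the difference for hot data.
durango_ddr3 = peak_bandwidth_gbs(256, 2.133e9)

print(f"Orbis GDDR5:  {orbis:.0f} GB/s")
print(f"Durango DDR3: {durango_ddr3:.0f} GB/s")
```

Either way the point stands: both designs had to buy iGPU bandwidth somewhere, and neither solution carries over trivially to a socketed desktop part.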


- And then, AMD would rather quit the PC business than take the measures necessary to stay competitive, most of which have already been adopted in mass production?
Both are mass-produced, but the consoles are not in the same order of magnitude of volume as AMD's non-console chips.
PC volumes, even with AMD's weaker share, are likely to exceed the lifetime production of a console in about a year or two. (edit: The upper bound probably depends more on how you count the cat family's product mix.)
Perhaps more critical is that non-custom APUs don't have an outside party footing the bill for their development and production, and AMD has to manage inventory and channel risks.
 
Backwards compatibility via emulation/virtualisation has only really been supported by Microsoft and Sony during the early life of a new console generation. And it's usually when last-gen titles help bolster a new console (when there is less new content available).

With this generation being so 'PC like', back-porting existing PC titles to the consoles seems like a much easier step (rather than emulating or virtualising PowerPC- or Cell-optimised titles).
 
Porting the PC versions of last-gen games to new consoles might not be such a walk in the park either. No PC games are made to run on such weak CPUs, especially ones with such low single-thread performance. There hasn't been a PC CPU with such low single thread perf in a long, long, long time.
 
Porting the PC versions of last-gen games to new consoles might not be such a walk in the park either. No PC games are made to run on such weak CPUs, especially ones with such low single-thread performance. There hasn't been a PC CPU with such low single thread perf in a long, long, long time.

Indeed, there might be significant work spent optimising a PC-centric code base for the new consoles.

But I still maintain this would be much less work than emulating the esoteric features of Cell/PowerPC or R500/RSX!
 
Yes definitely. Anyhow I don't see anybody taking the time to do that with any old games. Folks busy making new games and stuff :)
 
Yes definitely. Anyhow I don't see anybody taking the time to do that with any old games. Folks busy making new games and stuff :)

Well, they are apparently doing it with the 2013 Tomb Raider game. :) PS4/Xbox One will basically be getting the PC version with some graphical enhancements/changes.

Regards,
SB
 
Porting the PC versions of last-gen games to new consoles might not be such a walk in the park either. No PC games are made to run on such weak CPUs, especially ones with such low single-thread performance. There hasn't been a PC CPU with such low single thread perf in a long, long, long time.


Some of the best looking games of this past year seem to be almost immune to high CPU clocks and will behave excellently as long as you have a quad-core.
Check out the CPU performance benchmarks from Tomb Raider, Metro Last Light and Battlefield 4, for example.

I think this might be proof that most games that require very high clocks or an Intel CPU to play decently are just running too much code that was originally written for the 3.2GHz CPUs in the last gen.

My point is: those CPUs aren't weak. It's just that we've been running such poorly optimized code on our PC CPUs for so long that we now think a 3.5GHz+ CPU is needed to play recent games.
It's probably not, since both Sony and Microsoft chose 1.6GHz Jaguars for the new consoles.
That said, porting PC versions of last-gen games may not be a big problem at all.
 
Hardly. These CPUs are so weak that the CPUs of the last generation of consoles are equal to or even better than them.

The combination of low clocks, low IPC, and the console OS reserving two of the total eight cores leaves a much weaker x86 CPU than the previous PowerPC monsters or desktop x86 CPUs.

The only advantage they have over last-gen is their OoOE, which really means less hand-holding for the code. However, the complexity of game simulations will suffer and will not be at the level expected from a next-gen title.
 
Weaker? I think that's a bit of an exaggeration. The 360 had 3 cores at 3.2GHz. Assuming only 6 cores per game at 1.6GHz (1.75GHz for Xbox One), and ignoring that the OS was using those 3 cores as well last gen, that you had to use one of them for sound on the 360 but not on the One (it has a dedicated chip), etc., that at the very least evens out. Sure, PowerPC had VMX, but AMD has AVX, and I've only read that it was more powerful, not less. So far we've seen plenty of explanations from various sources of why these modern x86 cores are quite a bit more efficient clock for clock. So I'd be interested to hear why you think that's false.

Even just for pure GFLOPS, which are probably less relevant now than ever on CPU thanks to the tight CPU/GPU integration, six AMD cores should be able to match 3 PowerPC cores.
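The "six AMD cores vs. three PowerPC cores" claim can be sanity-checked with naive peak-FLOPS arithmetic. The 8-FLOPs-per-cycle figures below are assumptions for the sketch (a 4-wide FMA through VMX128 on Xenon; a 4-wide multiply plus a 4-wide add through Jaguar's two 128-bit FP pipes), and they ignore the dot-product tricks behind the old 360 marketing numbers:

```python
def peak_gflops(cores, ghz, flops_per_cycle):
    """Naive peak single-precision GFLOPS: cores x clock x FLOPs per cycle."""
    return cores * ghz * flops_per_cycle

# Xenon: 3 cores at 3.2 GHz, assumed 8 SP FLOPs/cycle (4-wide VMX128 FMA).
xenon = peak_gflops(3, 3.2, 8)

# Jaguar: 6 game-visible cores at 1.6 GHz, assumed 8 SP FLOPs/cycle
# (4-wide mul + 4-wide add on the two 128-bit pipes).
jaguar = peak_gflops(6, 1.6, 8)

print(f"Xenon:  {xenon:.1f} GFLOPS")   # ~76.8
print(f"Jaguar: {jaguar:.1f} GFLOPS")  # ~76.8
```

Under these assumptions the naive peaks come out identical, which is all the "six should match three" claim needs; real throughput obviously depends on the memory system and how well either architecture's vector units actually get fed.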
 
Hardly. These CPUs are so weak that the CPUs of the last generation of consoles are equal to or even better than them.
The combination of low clocks, low IPC, and the console OS reserving two of the total eight cores leaves a much weaker x86 CPU than the previous PowerPC monsters or desktop x86 CPUs.

From what I've heard, the same (single-threaded) code is running faster out of the box on the Jaguar cores than the old hand-tooled version on the 3.2GHz PowerPC cores. So the old simulation/whatever code that took a whole core before can still run single-threaded.
The IPC isn't that low at all; from the Kabini tests it's around Core 2 level.
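That trade-off is simple arithmetic: per-thread throughput is roughly IPC times clock, so a higher-IPC core can retire as much work at half the frequency. The IPC numbers below are placeholders chosen to illustrate the relationship, not measurements:

```python
def per_thread_perf(ipc, ghz):
    """Very rough per-thread throughput: instructions retired per nanosecond."""
    return ipc * ghz

# Assumed, illustrative IPC figures: in-order Xenon threads are usually put
# well below 1, while out-of-order Jaguar benches near Core 2 territory.
xenon_thread = per_thread_perf(0.5, 3.2)
jaguar_thread = per_thread_perf(1.0, 1.6)

# With a 2x IPC advantage, the 1.6 GHz Jaguar thread matches the 3.2 GHz
# Xenon thread, consistent with the out-of-the-box port reports.
```

Whether Jaguar's real-world advantage is actually 2x per clock is exactly what the thread is arguing about; the sketch only shows why "half the clock" doesn't automatically mean "half the speed".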
 
Weaker? I think that's a bit of an exaggeration. The 360 had 3 cores at 3.2GHz. Assuming only 6 cores per game at 1.6GHz (1.75GHz for Xbox One), and ignoring that the OS was using those 3 cores as well last gen, that you had to use one of them for sound on the 360 but not on the One (it has a dedicated chip), etc., that at the very least evens out. Sure, PowerPC had VMX, but AMD has AVX, and I've only read that it was more powerful, not less. So far we've seen plenty of explanations from various sources of why these modern x86 cores are quite a bit more efficient clock for clock. So I'd be interested to hear why you think that's false.

Even just for pure GFLOPS, which are probably less relevant now than ever on CPU thanks to the tight CPU/GPU integration, six AMD cores should be able to match 3 PowerPC cores.

The X360 has 3 cores with 6 threads, and its OS was much less of a performance hog than the XO's. All of that, and we didn't even touch on the PS3's Cell.

Besides, merely matching an 8-year-old CPU is not that big of a deal. Once again developers will focus more on the rendering side than the simulation side, and tradeoffs will be made. We expect next-gen titles to have complex mechanics, bigger physics simulations, more stuff happening on the screen at once... well, we won't be having any of that, not more than what was achieved on the PC anyway.

In fact this CPU point will be the downfall of this so-called next-gen. It turns out it is not so next-gen at all; it's just playing catch-up with two-year-old PCs, burdened by resolution and complex OS requirements. Some don't expect it to last that long because of that.

From what I've heard, the same (single-threaded) code is running faster out of the box on the Jaguar cores than the old hand-tooled version on the 3.2GHz PowerPC cores. So the old simulation/whatever code that took a whole core before can still run single-threaded.
I think this statement lacks some key details. I seriously doubt single-threaded code could run faster on a 1.75GHz core than on a 3.2GHz core. Obviously the code is limited elsewhere here; possibly it is not as hand-tooled as it could be.

The IPC isn't that low at all; from the Kabini tests it's around Core 2 level.
Obviously less than Core 2, but nevertheless, my point exactly.
 
So if CPU is so important to physics, then can you explain why Black Flag can do far better oceans, or Housemarque can do far more collisions and particles in Resogun? Housemarque, who we can consider one of the kings and champions of the Cell SPEs, but who have clearly expressed that they far prefer working with CUs?
 
Resogun is the prime example of the workload balance shifting toward the beefier GPU: most physics in that game are GPU-accelerated, and the oceans in AC4 are done through complex shaders (also a GPU problem).
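The reason particle physics shifts so naturally to the CUs is the shape of the work, not raw speed. A toy sketch (illustrative only, nothing to do with Housemarque's actual code) shows it: every particle gets the same arithmetic with no cross-particle dependencies, which is exactly what maps cleanly onto GPU compute units.

```python
# Illustrative data-parallel particle step: identical, independent math
# per particle, the kind of loop a GPU compute shader eats for breakfast.
GRAVITY = -9.8  # m/s^2, toy straight-down gravity

def step_particles(particles, dt):
    """particles: list of (height, vertical_velocity) pairs; returns next state."""
    out = []
    for y, vy in particles:
        vy = vy + GRAVITY * dt  # same update for every particle, no branching
        out.append((y + vy * dt, vy))
    return out

# 10,000 independent particles advanced by one 60 Hz frame.
particles = [(float(i), 0.0) for i in range(10_000)]
particles = step_particles(particles, dt=1 / 60)
```

AI, gameplay logic, and hit detection rarely have this structure, which is why they stay on the CPU and why the CPU argument below still matters.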

Games have two sides to them: rendering and simulation. The latter doesn't just involve physics, but also AI, gameplay mechanics, animations, draw distance, the number of objects on screen, even HUD and hit calculations. You can compensate by shifting physics to the GPU side, to a degree. What about the rest?

We can't escape the fact that these low-power CPUs are the bottleneck for both PS4 and XO; next-gen game quality will suffer because of that.
 
Sorry, I don't know; it was one of those "hey Dav, have a look at this, isn't this cool" moments at a friend's house.

There are Windows versions of Halo 1 and Halo 2.

My point is: those CPUs aren't weak. It's just that we've been running such poorly optimized code on our PC CPUs for so long that we now think a 3.5GHz+ CPU is needed to play recent games.
It's probably not, since both Sony and Microsoft chose 1.6GHz Jaguars for the new consoles.
That said, porting PC versions of last-gen games may not be a big problem at all.

As a PC CPU it's weak; you are talking about going back 10 years in time in terms of per-core performance for many things. A 1.6GHz Jaguar is viable as a light-use laptop/tablet/HTPC CPU at best, not a gaming PC.

Many PC-exclusive games also scale poorly with more cores and demand high per-core performance.
 