New consoles coming with low-clocked AMD x86. Can we now save money on our PC CPUs?

Title says it all.

Now that developers aren't dealing with PowerPC anymore, x86 PCs won't have to "emulate" instructions specific to the high-clocked PowerPC chips. Instead, ports should be fairly easy and perform well on current PC CPUs. As far as we know, the exact same code could be used for the CPU in both the PC and console versions, right?

On top of that, at least the PS4 seems to be using pretty standard AMD CPUs which, in the desktop space, are a lot cheaper than equivalent Intel CPUs.


So what will it take to get CPU performance similar to the eight Jaguar cores at 1.6GHz?
For example, would a 2-module/4-core Trinity APU clocked at 3.8GHz - which sells for peanuts nowadays - have similar performance?
(I'm assuming Jaguar doesn't beat Piledriver in IPC...)

Or will we still need to pay 2x more for an Intel CPU just because of the operating system overhead?

Could the CPU requirements for a top-end PC experience actually be lower this generation?
 
I think the requirements won't be lower; they'll go up. It all depends on how GPGPU compute is used and how the CPU interacts with those calculations.

Assume a hypothetical scenario for the PS4 APU, where one core issues a massive GPGPU computation, waits for the result, and then branches based on that result while all the other cores are busy with something else.

If you tried doing something like that in a standard PC architecture, there would be a lot of latency from moving the data from system memory to GPU memory and retrieving the results back to the CPU. Even though each component (CPU/GPU) is stronger in a PC, the framework that connects them sucks.
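
A minimal sketch of that round trip on a discrete-GPU PC, using CUDA just for illustration (the kernel and all names here are hypothetical placeholders, not anyone's actual engine code):

```cpp
// Build with: nvcc roundtrip.cu -o roundtrip
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Stand-in for the "massive GPGPU computation" in the scenario above.
__global__ void compute(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];  // placeholder work
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float* h_in  = (float*)malloc(bytes);
    float* h_out = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);

    // Trip 1: push the inputs across PCIe into GPU memory.
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    compute<<<(n + 255) / 256, 256>>>(d_in, d_out, n);

    // Trip 2: this blocks the CPU core until the kernel finishes AND the
    // results have crossed PCIe back into system memory.
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    // Only now can the CPU branch on the result. On a unified-memory APU,
    // both copies (and much of the stall) simply wouldn't exist.
    if (h_out[0] > 0.5f) printf("branch A\n");
    else                 printf("branch B\n");

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}
```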

So the way I see it, we're either going to need APUs that are the equivalent of what's in the PS4/720 with really fast memory (I doubt there's anything like that for consumers in the next 2-3 years), or CPUs that can perform the equivalent flops of a console's GPGPU calculation (maybe even in a single thread). I think upgrading to Haswell-level (32 single-precision flops per cycle per core) will be necessary.
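
For reference, the 32 comes straight from Haswell's two 256-bit FMA ports:

$$2~\text{FMA ports} \times 8~\text{SP lanes per 256-bit vector} \times 2~\text{flops per FMA (mul+add)} = 32~\text{flops/cycle/core}$$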
 
It'll be interesting to see if it helps AMD out by obscuring their single-threaded performance deficit.
 
It just means that multi-threaded performance will become even more important.

But that is the direction PC games would have to take eventually anyway. Crytek, for example, are already heavily leveraging a high number of threads. And even then, with the level of effects (physics, etc.) in use, it becomes CPU-bound in many cases.
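
For illustration, the basic shape of fanning a workload out across all available cores; a minimal C++11 sketch with made-up names (update_slice etc.), not Crytek's actual job system:

```cpp
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

// Per-object work for one slice of the array; placeholder payload.
void update_slice(std::vector<float>& data, size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i)
        data[i] *= 1.01f;
}

int main() {
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4;  // hardware_concurrency may report 0

    std::vector<float> objects(100000, 1.0f);
    std::vector<std::thread> pool;

    // Hand each worker a contiguous slice; the last one takes the remainder.
    const size_t chunk = objects.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        size_t begin = w * chunk;
        size_t end   = (w == workers - 1) ? objects.size() : begin + chunk;
        pool.emplace_back(update_slice, std::ref(objects), begin, end);
    }
    for (auto& t : pool) t.join();

    printf("updated %zu objects on %u threads\n", objects.size(), workers);
    return 0;
}
```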

So, for console ports you likely won't need a bleeding-edge CPU, but you'll still need a higher-performing CPU than what's featured in the consoles, simply because a full-blown OS not only needs to run every kind of software imaginable but also has to work with just about every piece of hardware imaginable, all without letting any of that hardware or software take down the OS. Hence, there's a level of hardware and software abstraction that isn't required on consoles. Consoles can afford a leaner OS without as much hardware abstraction; desktops require something more robust, with an eye towards protecting the user's data and security.

Hence, you need faster hardware to overcome the levels of abstraction required on the desktop.

Regards,
SB
 
Are we sure that Windows 8 is much heavier than the consoles'?
There seems to be a lot going on in the background in these consoles. Constant video recording, always-on video+voice chat, etc.
 
But nothing compared to what is running on a Windows box, especially a desktop one.

Anyway, if the AMD APU works well in the PS4, then surely we can get small-form-factor desktops and laptops with those chips included?
 
Most of the time background apps are entirely idle, just sucking up RAM. On Windows the games are dealing with DirectX / HAL / drivers and such though.

I'm not so sure I'd want to own a Jaguar, given how important single-thread performance is to most applications. It would need to be a product that leveraged its low-power strengths or something like that. Maybe a tablet.
 
Well, the Jaguar CPUs will have much higher IPC than anything Cell/Xenon could put out... Plus, they support AVX1, so if you don't have a CPU that supports it (which describes quite a few CPUs currently being used by gamers, actually!), you might be fucked hardcore.

Thus, we'll need to buy roughly 2x more powerful CPUs for a little while yet. ;)
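
As a minimal sanity check, assuming a recent GCC or Clang (__builtin_cpu_supports is their built-in; MSVC would need a manual CPUID query instead):

```cpp
#include <cstdio>

int main() {
    // Plenty of CPUs gamers still run (Core 2, Phenom II, Nehalem-era
    // Core i5/i7) predate AVX, so a console port can't just assume it.
    if (__builtin_cpu_supports("avx"))
        printf("AVX available: the AVX code path is usable\n");
    else
        printf("No AVX: an SSE fallback (or a new CPU) is needed\n");
    return 0;
}
```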
 
So the way I see it, we're either going to need APU's that are the equivalent of what's in the PS4/720 with really fast memory. (I doubt there's anything like that for consumers in the next 2-3 years.) Or CPU's that can perform the equivalent flops of the a consoles GPGPU calculation (maybe even in a single thread). I think upgrading to Haswell-level (32 single precision flops per cycle per core) will be necessary.

Or an APU-type CPU that can handle those GPGPU requests on-die while the graphics rendering is handled by a discrete GPU. AMD suggests this usage model in at least one of its HSA presentations.
 
Or an APU-type CPU that can handle those GPGPU requests on-die while the graphics rendering is handled by a discrete GPU. AMD suggests this usage model in at least one of its HSA presentations.

Good point, hadn't thought of that. "Regular" PC DDR3 bandwidth may be sufficient for that type of computation.
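
Rough numbers for scale (standard JEDEC figures; "typical desktop" is my assumption): dual-channel DDR3-1600 gives

$$2~\text{channels} \times 64~\text{bit} \times 1600~\text{MT/s} \div 8 = 25.6~\text{GB/s},$$

versus the 176 GB/s of GDDR5 quoted for the PS4. Plenty for modest kernels, an order of magnitude short for bandwidth-bound ones.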
 
The question is whether GPGPU-style compute takes off in the console space, how quickly it does so relative to PC upgrade cycles, and which types of processors would be best suited to the kinds of workloads found on consoles. Which would be better: strong PC processors, or strong APUs with HSA?
 
Could next-gen console ports/cross-platform games leverage an APU + discrete combo, using the APU's GPU (exclusively?) for compute?

Right now nobody is going to target such a setup (and Hybrid CrossFire is pretty "meh"), so doing APU + GPU would mostly just seem to waste resources, relegating the APUs to the low end (and not creating much additional value for AMD on the discrete side). With heterogeneous architectures supposed to arrive in the PC space around the same time as the new consoles, I wonder what could feasibly be extracted from something akin to an "A10-7800K" (or whatever) Kaveri + a GCN2 Cape Verde replacement.

Edit: Note to self. Refresh old open browser window before replying.
 
Well, the Jaguar CPUs will have much higher IPC than anything Cell/Xenon could put out... Plus, they support AVX1, so if you don't have a CPU that supports it (which describes quite a few CPUs currently being used by gamers, actually!), you might be fucked hardcore.

Thus, we'll need to buy roughly 2x more powerful CPUs for a little while yet. ;)

I don't know if developers will really use 256-bit AVX... it might be an advantage if it saves decode rate - I'm not sure whether it needs two decode cycles or both decoders or what (in which case it's just a fetch advantage, and on Jaguar that's not a huge deal). But AVX128 is pretty attractive too vs. SSE4, and for hand-spun ASM it's probably just as hard to convert.
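
To make the AVX128-vs-AVX256 distinction concrete, a toy sketch (build with, e.g., g++ -mavx; nothing here is from real game code). With -mavx, even the 128-bit intrinsics compile to VEX-encoded three-operand instructions, which is the "AVX128" win over SSE4; Jaguar's FPU is 128 bits wide, so it splits each 256-bit op into two halves anyway:

```cpp
#include <cstdio>
#include <immintrin.h>

int main() {
    alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(32) float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    alignas(32) float r[8];

    // AVX128: two 128-bit ops cover all eight floats. Under -mavx these
    // become VEX-encoded vaddps xmm instructions (three-operand form).
    __m128 lo = _mm_add_ps(_mm_load_ps(a),     _mm_load_ps(b));
    __m128 hi = _mm_add_ps(_mm_load_ps(a + 4), _mm_load_ps(b + 4));
    _mm_store_ps(r,     lo);
    _mm_store_ps(r + 4, hi);
    printf("AVX128: %f ... %f\n", r[0], r[7]);

    // AVX256: the same eight floats in one 256-bit instruction -- fewer
    // instructions to fetch/decode, but double-pumped on Jaguar's FPU.
    __m256 full = _mm256_add_ps(_mm256_load_ps(a), _mm256_load_ps(b));
    _mm256_store_ps(r, full);
    printf("AVX256: %f ... %f\n", r[0], r[7]);
    return 0;
}
```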

What really sucks is that Intel could have made all their desktop and laptop processors AVX-compatible for a couple of generations now, but haven't. I hope they change this policy with Haswell, but I'm not holding my breath.
 
Intel segmenting its product lines by supported ISA is basically them biting themselves in the ass. Hardcore. It just holds back the spread of those ISAs...

Also, I figure it's probably more down to the compilers the games use than to hand-coded ASM or intrinsics this gen. It was already fairly rare last gen (I think the most high-profile user of that stuff was Sebbi in the Trials games?) to do it by hand...
 
Intel segmenting its product lines by supported ISA is basically them biting themselves in the ass. Hardcore. It just holds back the spread of those ISAs...

Also, I figure it's probably more down to the compilers the games use than to hand-coded ASM or intrinsics this gen. It was already fairly rare last gen (I think the most high-profile user of that stuff was Sebbi in the Trials games?) to do it by hand...

On that note, I don't think PC games this gen were taking a performance hit from being non-PPC... at worst, if they used intrinsics, those would have been converted to x86 equivalents at compile time, if not rewritten entirely. If PC games had higher CPU requirements, it's more likely because of extra API abstraction and lowest-common-denominator decisions regarding graphics, which will persist into the next generation. I definitely don't see requirements going down.

GPU requirements for games should go up quite a lot. If console games do end up with stronger GPGPU requirements than typical PCs can accommodate, then I expect some of that work to be ported to CPU code for the PC versions.
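
What that porting usually amounts to, roughly (hypothetical example; kernel_body is a placeholder name): the per-element kernel survives, and the GPU's thread grid becomes a plain loop that the compiler can auto-vectorize and that can be split across cores like any other job:

```cpp
#include <cstdio>
#include <vector>

// Placeholder for whatever per-element work the console did on the GPU.
static inline float kernel_body(float x) { return x * x; }

int main() {
    std::vector<float> in(1 << 20, 1.5f), out(in.size());

    // GPU version: one logical thread per element.
    // CPU version: one loop over the same elements.
    for (size_t i = 0; i < in.size(); ++i)
        out[i] = kernel_body(in[i]);

    printf("%f\n", out[0]);
    return 0;
}
```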
 
Instead of continually blindly pondering the ramifications of GPGPU taking over in consoles, I'd like to hear just what applications it has outside of the superficial performance-sapping extras we've seen on PC.
 
I think that if it gets used on consoles in any significant way, it'll likely be for offloading stuff that would normally run on the CPU... Some games might do stuff like Forward+, though.
 
Instead of continually blindly pondering the ramifications of GPGPU taking over in consoles, I'd like to hear just what applications it has outside of the superficial performance-sapping extras we've seen on PC.

On PC, not much. It's limited by having to shuffle all the data over PCIe; hence, it can't be anything that affects gameplay as long as it has to go over PCIe.

You'll have to look to the next-gen consoles to see GPGPU physics take off. Perhaps someday, if Intel embraces GPU compute the way AMD has and makes its integrated graphics more compute-capable, we'll see it make some headway in the PC space. But as long as it has to go over the PCIe bus, it's going to be mostly a non-starter on PC.
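
For scale, the published per-direction link rates:

$$\text{PCIe 2.0 x16}: 16 \times 0.5~\text{GB/s} = 8~\text{GB/s}, \qquad \text{PCIe 3.0 x16}: 16 \times {\approx}0.985~\text{GB/s} \approx 15.75~\text{GB/s},$$

and beyond bandwidth, every round trip adds latency on the order of microseconds, versus roughly 100ns for the CPU reaching into its own DRAM.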

Although TressFX in the new Tomb Raider could potentially be very interesting. It's still basically non-interactive stuff, though.

Regards,
SB
 
What about the fact that GPUs are limited in what they're fast at? Branchy code is bad for them, among other things. For example, some physics simulation hasn't been feasible because of this, and H.264 encoders on GPUs only do the minimum because parts of the pipeline wouldn't map well. Obviously the latest GPUs are better, but still...

Plus, whenever I've used PhysX, it's not as if there's no performance impact. Lol. It's more like a performance implosion.
 
Isn't this derailing a bit?

What I would like to know is which desktop CPU would be the equivalent of an 8-core Jaguar at 1.6GHz.
Then, try to evaluate how much extra performance would be needed to run Windows on top of it, and see how much a CPU hitting that performance target would cost.
And this would be a CPU inside an APU, of course, because non-APUs will disappear from the consumer market in a couple of years.
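
As a crude back-of-envelope, and it's a loud assumption that Jaguar and Piledriver per-clock throughput are even in the same ballpark:

$$\text{8-core Jaguar}: 8 \times 1.6~\text{GHz} = 12.8~\text{core-GHz}, \qquad \text{Trinity}: 4 \times 3.8~\text{GHz} = 15.2~\text{core-GHz},$$

so a cheap quad at ~4GHz doesn't look absurd on paper, provided the workload spreads across fewer, faster cores as happily as across eight slow ones.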

Then we could see how powerful the iGPU in the desktop APU (HSA architecture) would need to be to keep up with the GPGPU operations in the PS4's APU. Of course, I don't think it needs to match 18 GCN CUs @ 800MHz, because the PS4 is never going to use all of them at the same time for compute.
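
For reference, those 18 CUs at 800MHz are where the oft-quoted PS4 figure comes from:

$$18~\text{CUs} \times 64~\text{ALUs} \times 2~\text{flops (FMA)} \times 0.8~\text{GHz} \approx 1.84~\text{TFLOPS (SP)},$$

and only a slice of that would realistically be reserved for compute in any given frame.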

Then the PC will need a powerful discrete GPU for graphics, but that last part we already have on the market, and for cheap, too.
 