Predict: The Next Generation Console Tech

I thought you guys might be interested in this article. It's a PC review that tests pretty much every GPU released over the last five years, which is cool because it shows how far graphics cards have come since the PS3 and 360 were still relatively new machines.

There are actually quite a few surprises in there: a 6570 trades blows with an 8800 GT, and some of the older DX10 cards like the AMD HD 4890 and Nvidia GTX 275 still put up a decent fight.

A 7850 in a console might not be so bad after all :)

http://uk.hardware.info/reviews/357...cards-64-gpus-tested-from-the-last-five-years
 
Arthur Gies from polygon.com has been making some very interesting posts on NeoGAF.


Originally Posted by aegies:
The next Xbox and PlayStation are not "GPU-centric." There are pretty significant things happening with their processor architectures. And at least for Durango, it's not using off the shelf kit, contrary to what many GAF posters have insisted. Even more troublesome for the Wii U, they have much more dedicated GPGPU capabilities which aren't happening on the dedicated GPUs they're packing.
Originally Posted by aegies:
I am almost one hundred percent positive that Microsoft at least will own the design to their silicon. They did with the 360 after getting burned on the Xbox. Also, the chipset in Durango is not off-the-shelf kit.
Originally Posted by aegies:
I've spent the last year talking to publishers and developers off the record. They've been waiting for Durango in particular. There are things they want to do that they haven't been able to pull off, and many feel that the shrinkage in the market is due to hardware stagnation. The big guys and popular developers that haven't already gone iOS are hitching their wagons to Durango in a big, big way. Which terrifies them, by the way. They are not in any way sure that it's going to turn things around, business wise.

I'm not saying the next Playstation isn't part of the equation, but they've seemed much less clear as to what it will even be. Granted, I haven't talked to everyone, obviously.
Originally Posted by aegies:
And I'm telling you that I'm of the opinion that Durango launch games will look better than just about anything that ever comes out for the Wii U from a technical perspective (I can't account for differences in taste). And your estimation of the power difference between the next Xbox and the Wii U is considerably lower than the estimates of the tech inside of Microsoft's next console that I've heard. It's not going to be a quad-core system, and the GPU will in all likelihood run circles around the GPU in the Wii U.

I know that if Durango is under-powered this will look stupid in a year, but I wouldn't be posting about it unless I was very, very sure.
 
If we go by Treyarch's Call of Duty titles:

World at War minimum CPU requirement:
Pentium 4 @ 3 GHz

Black Ops 2 minimum CPU requirement:
Core 2 Duo @ 2.66 GHz

Essentially the CPU requirements have doubled in the intervening time period.

More like tripled or quadrupled; the P4 was that bad. I think the primary difference is coming to grips with load-hit-store (LHS) stalls (I know of real code that well over doubled its performance when it was reoptimized to avoid them, and early on people didn't have the tools to detect them or even the expertise to know they were bad), better multithreading, and better SIMD use.
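As a rough illustration (my own sketch, not code from the thread), the classic LHS trap is forcing a value through memory when the compiler can't keep it in a register, for example accumulating through a pointer that might alias the input:

Code:
// Schematic C++ sketch of a load-hit-store pattern (invented for illustration).
// Because 'in' and 'sum' may alias, the compiler must store *sum and reload it
// every iteration; on an in-order PPC core like Xenon each reload hits the
// store that was just issued, stalling the pipeline for dozens of cycles.
void accumulate_slow(const float* in, float* sum, int n) {
    for (int i = 0; i < n; ++i)
        *sum += in[i];          // store to memory, then load it right back
}

// Keeping the accumulator in a local variable (i.e. a register) removes the
// round trip through memory.
void accumulate_fast(const float* in, float* sum, int n) {
    float s = *sum;
    for (int i = 0; i < n; ++i)
        s += in[i];
    *sum = s;
}

Hoisting the value manually like this (or giving the compiler aliasing hints, where available) is the kind of reoptimization that the "well over doubled its performance" comment is talking about.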

0.2 IPC was still a good number for well-optimized code in 2010, the last time I had anything to do with Xenon. I really doubt you can do much better on any real loads, simply because even though you have plenty of processing power, you just can't feed it. Normal Xenon code has so many ALU ops "in reserve" that you can measure the performance of almost any code by the number of memory ops it does. You just won't ever use all the ALU capacity.
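To put the "ALU ops in reserve" point in concrete terms, here is a schematic example of my own (not the poster's): when a loop's throughput is set by feeding data rather than by arithmetic, piling extra ALU work into it costs roughly nothing, which is why counting memory ops is the useful estimate.

Code:
#include <cstddef>

// Both loops perform one load per iteration. On a core that is limited by
// feeding data rather than by ALU throughput, the extra arithmetic in the
// second version largely overlaps with the loads, so both run at roughly
// the speed dictated by their memory ops.
float sum_plain(const float* a, std::size_t n) {
    float s = 0.f;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];
    return s;
}

float sum_with_extra_alu(const float* a, std::size_t n) {
    float s = 0.f;
    for (std::size_t i = 0; i < n; ++i) {
        float x = a[i];
        s += x * x * 0.5f + x;   // extra FLOPs ride along "in reserve"
    }
    return s;
}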

The AMD number you'll have to take on faith ;)

AMD actually claims 1.1 IPC on Jaguar "for some real code". IPC is of course very dependent on the type of workload you are running, so you cannot do much of anything with declared numbers like that. Based on experience with Bobcat, and on the released information on Jaguar, I don't think it will be that far off the mark. Bobcat is an extremely well-balanced chip: there are no parts of the pipeline that are overly narrow, and no parts that are really wide without support from the other parts. Basically, I think that if you optimize code to the point we're seeing in top console titles, you should be able to get well above 1 IPC.

Also, since x86 can have memory operands, 1 IPC for Jaguar and 0.2 IPC for a Xenon thread cannot be directly compared.

Code:
add [eax], ebx   ; load, add, and store folded into one instruction
add eax, 4       ; advance the pointer
This would be four instructions on a PPC processor (a separate load, add, store, and pointer increment), but it's two instructions that can both be executed in the same cycle on x86. (Of course, the latency of the memory ops is more than one cycle, but if this were part of a tight unrolled loop, it would run at a throughput of one iteration per cycle.)
 
The mystery deepens. Are we sure that AMD can provide a console that is CPU-centric?

Maybe it's not a fully off-the-shelf AMD x86 processor. I think it will be customised by AMD's and Sony's R&D, and maybe that's enough for the PS4 compared to the PS3; something like 10x the CPU performance.
 
AMD launched some new Piledriver-based (32 nm) Opterons today. Obviously I wouldn't expect a server chip in a console, but I think these are decent data points as far as clocks, cores, and TDP go.

Opteron 3380: 8 cores at 2.6 GHz (Turbo 3.6 GHz), 8 MB L3, TDP: 65 W
Opteron 3329EE: 4 cores at 1.9 GHz (Turbo 2.5 GHz), TDP: 25 W

For the console, my expectation is a 25-35 W TDP for the CPU. Eight simpler cores at 1.6-2 GHz should fit within that budget.
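As a back-of-envelope check (all numbers below are my own rough assumptions, not figures from AMD or from this thread), small Jaguar/Bobcat-class cores are usually put at low single-digit watts each at these clocks, which is what makes eight of them fit that kind of budget:

Code:
#include <cstdio>

int main() {
    // Assumed per-core power for a small Jaguar-class core at ~1.6-2.0 GHz.
    // These are rough guesses for the estimate, not published figures.
    const double watts_per_core_low  = 1.5;
    const double watts_per_core_high = 3.0;
    const int cores = 8;
    std::printf("8-core CPU estimate: %.0f-%.0f W\n",
                cores * watts_per_core_low,
                cores * watts_per_core_high);   // ~12-24 W, inside a 25-35 W budget
    return 0;
}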
 
Ray tracing hardware, if featured, would presumably be used in combination with conventional rasterising. It could be used in non-graphics tasks too, like AI and physics. The only HW ray tracer I know of is what Imagination are working on, but I don't know if there's working RT silicon yet.
 
For some parts of it, yes. Some parts of physics processing fit the OpenCL model really well, some don't. I think an ideal implementation for next-gen is a hybrid one, where part of the processing is offloaded to the GPU and part is handled on the CPU.

That's not really such a good idea on the PC today, because moving data between the CPU and GPU is often more expensive than doing the processing itself. Just one more thing where HSA would shine...

I agree, that sounds very interesting, and ideal if the GPU and CPU are on the same die. Today you can do, e.g., some cloth physics on the GPU that only affects the looks and doesn't affect the game world. It's like having magnificent flags floating very realistically in the wind, but when you walk up to them you walk right through them.
Now, the CPU could work with data coming out of the GPU and vice versa. I'll leave all the complexity and hard questions about how to make it work and enforce some gameplay-friendly constraints to the game devs.
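To make the split concrete, here's a minimal sketch (names and numbers invented for illustration, not from the thread): gameplay-affecting bodies stay on the CPU, while the visual-only cloth update is an independent, data-parallel loop that could be moved to a GPU compute kernel. On today's PCs the read-back cost is why such results stay cosmetic; shared memory is what would let the CPU consume them for gameplay.

Code:
#include <vector>

struct Body       { float pos, vel; };   // gameplay-relevant rigid body (1D for brevity)
struct ClothPoint { float pos, vel; };   // visual-only cloth vertex

// CPU side: gameplay code must be able to query these positions immediately.
void simulate_bodies(std::vector<Body>& bodies, float dt) {
    for (auto& b : bodies) {
        b.vel += -9.8f * dt;                           // gravity
        b.pos += b.vel * dt;
        if (b.pos < 0.f) { b.pos = 0.f; b.vel = 0.f; } // crude ground collision
    }
}

// GPU candidate: every point is independent and nothing but the renderer reads
// the result, so it maps cleanly onto a compute kernel. On a discrete GPU,
// copying results back for gameplay would often cost more than the simulation
// itself; with CPU and GPU sharing memory on one die, that read-back gets cheap.
void simulate_cloth(std::vector<ClothPoint>& cloth, float wind, float dt) {
    for (auto& p : cloth) {
        p.vel += (wind - p.pos) * dt;                  // crude spring toward the wind
        p.pos += p.vel * dt;
    }
}

int main() {
    std::vector<Body>       bodies(16,  {10.f, 0.f});
    std::vector<ClothPoint> cloth(1024, {0.f, 0.f});
    const float dt = 1.f / 60.f;
    for (int frame = 0; frame < 60; ++frame) {
        simulate_bodies(bodies, dt);     // authoritative, affects the game world
        simulate_cloth(cloth, 1.f, dt);  // cosmetic, safe to offload
    }
    return 0;
}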
 