Predict: The Next Generation Console Tech

Tunafish made a comment earlier about modern x86 pushing OoO execution "past the knee", i.e. investing too much silicon and power overall to reach high IPC.
It got me wondering about the odds of Sony (if x86) using the upcoming Jaguar cores (the replacement for Bobcat).
Within ~300 sq.mm (let's say a Cypress-sized chip) they could pack some of those as well as more GPU power than Kaveri has to offer.

I would say that 6-8 cores plus between 12 and 16 SIMDs (à la Barts) could fit on a reasonably sized chip.

Then there is the bus width. 256 bits is doable and should easily allow for at least one shrink.
192 bits might be safer if they hope to shrink the chip twice (depends on the expected lifespan).
In the case of a 256-bit bus, going with (way cheaper) DDR3 could be an option, granting 60GB/s worth of bandwidth and buying the system (for cheap) quite some extra RAM.
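
For reference, the 60GB/s is simple arithmetic, assuming something like DDR3-1866 on the 256-bit bus (the exact speed grade is an assumption):

```c
#include <stdio.h>

int main(void)
{
    /* Peak bandwidth = bus width (bytes) x transfer rate. */
    const double bus_bits   = 256.0;
    const double mt_per_sec = 1866e6;   /* DDR3-1866, assumed */

    double gbps = (bus_bits / 8.0) * mt_per_sec / 1e9;
    printf("256-bit DDR3-1866 peak: %.1f GB/s\n", gbps);  /* ~59.7 */
    return 0;
}
```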
------------------

It must be true, it's in print and everything!
Anyone actually want a 16 PPE(U?)/128 SPE processor?

You thought PS3 was difficult to program - you ain't seen nothing yet!

OK, my firm 99.9% prediction - it will not be called PS4. (yeah Shifty, I know, wrong thread :p)
The name is already known: it's the "new" PlayStation
------------->[] :LOL:


Sorry, I couldn't pass that one up
 
Or drop the SPEs altogether and just use a CPU sporting AVX2. 4 cores at 3GHz would offer a huge amount of vector processing capability, making the addition of SPEs unnecessary except for backwards compatibility.

you would "only" need six SPE.
for your vectors units, wouldn't you need "scatter/gather" for them to be tasty?
AVX2 seems to be another usual round of, "last time, we promised our new simd instructions were bad ass and revolutionary. but they weren't quite complete. this time, they're even wider, and will be useful for real!" :p

just kidding.
/edit : I've checked, and it seems AVX2 includes gather, which has the potential to make it useful indeed!
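
For the curious, here's a minimal sketch of what that gather buys you, using the AVX2 intrinsic (needs a compiler with immintrin.h and -mavx2; the table and indices are made up for illustration):

```c
#include <immintrin.h>
#include <stdio.h>

/* Load 8 floats from non-contiguous indices in one instruction
   instead of 8 scalar loads. */
int main(void)
{
    float table[16];
    for (int i = 0; i < 16; i++) table[i] = (float)i;

    /* Eight arbitrary (scattered) indices into the table. */
    __m256i idx = _mm256_setr_epi32(0, 3, 5, 7, 8, 10, 13, 15);

    /* scale = 4 because the elements are 4-byte floats. */
    __m256 v = _mm256_i32gather_ps(table, idx, 4);

    float out[8];
    _mm256_storeu_ps(out, v);
    for (int i = 0; i < 8; i++) printf("%.0f ", out[i]);
    printf("\n");   /* prints: 0 3 5 7 8 10 13 15 */
    return 0;
}
```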
 
It got me wondering about the odds of Sony (if x86) using the upcoming Jaguar cores (the replacement for Bobcat).

Right there I want to say "no", because these cores are made to operate at sub-2GHz frequencies (short pipeline). So two Bulldozer modules, even if a bit lousy, would trounce your 8 Bobcat cores. You can't compare CPUs by frequency alone, but Bulldozer will run the clock twice as fast :).

A 256-bit bus, why not; it's of course expensive but you would just decide to stomach the expense. AMD manages to sell the Radeon 6850 at mad low prices, cards a bit above 100 euros.
 
This made me laugh :LOL:

[attached image: fake next-gen PlayStation spec sheet]


For those with bad eyesight:

CPU - 22nm Cell @ 3.2GHz with 16 PPEs and 128 SPEs
GPU - 22nm Custom Nvidia GPU @ 2GHz based on Kepler GK104x2
RAM - 10GB XDR2 + 10GB GDDR6

What do you guys reckon? 600-700W power consumption at full load?

Other little things like mandatory 1080p/30fps for all games....

EDIT : Changed GDDR6 amount :oops:

Wow, thanks, good find! But I believe the guy who created this image mistakenly wrote PS4 where he meant PS5; those are the real PS5 specifications. If RAM increases tenfold between generations, then those specifications must be for PS5. PS4: 2GB; PS5: 2GB*10 = 20GB.

I think this could indeed be a generational leap - between PS4 and PS5 :LOL:
 
Anyone actually want a 16 PPE(U?)/128 SPE processor?
Me! That thing could do over 3 TFLOPS. 16 PPEs would be easy to code for, and the SPEs could be very powerful for graphics work.

The current Cell is supposed to be less than 20 watts at 45nm; if it can be 7 watts at 22nm, they would have a reasonable 112 watts... of course the chip would be ridiculously huge, cost a fortune, and would yield 0.0001%. But yes, I'd want one :D
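
Back-of-envelope on those numbers, using the standard 25.6 GFLOPS-per-SPE figure (4-wide single-precision FMA at 3.2GHz); the 7W Cell at 22nm is of course just the guess above:

```c
#include <stdio.h>

int main(void)
{
    const double ghz = 3.2;
    const int lanes  = 4, fma = 2;   /* flops per SPE per cycle = 8 */
    const int spes   = 128;
    const int cells  = 16;           /* 16x a 1 PPE + 8 SPE Cell    */

    printf("SPE total: %.2f TFLOPS\n",
           spes * lanes * fma * ghz / 1000.0);        /* ~3.28 */
    printf("Power    : %d W (16 x 7W)\n", cells * 7);
    return 0;
}
```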
 
you would "only" need six SPE.
for your vectors units, wouldn't you need "scatter/gather" for them to be tasty?
AVX2 seems to be another usual round of, "last time, we promised our new simd instructions were bad ass and revolutionary. but they weren't quite complete. this time, they're even wider, and will be useful for real!" :p

just kidding.
/edit : I've checked, and it seems AVX2 includes gather, which has the potential to make it useful indeed!

Plus, it sounds like it'll have twice the peak throughput of AVX, making 4 cores the rough equivalent of 16 SPEs for vectorised code at the same clock speed.
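
Quick sanity check on that equivalence, assuming a Haswell-like core can issue two 256-bit FMAs per cycle (that part is an assumption on my part):

```c
#include <stdio.h>

/* Peak single-precision flops per cycle for each design. */
int main(void)
{
    const int avx2_core = 2 /*FMA ports*/ * 8 /*fp32 lanes*/ * 2 /*mul+add*/;
    const int spe       = 4 /*fp32 lanes*/ * 2 /*mul+add*/;

    printf("4 AVX2 cores: %d flops/cycle\n", 4 * avx2_core);  /* 128 */
    printf("16 SPEs     : %d flops/cycle\n", 16 * spe);       /* 128 */
    return 0;
}
```

So at equal clocks the two really do come out even, on paper at least.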
 
Me! That thing could do over 3 TFLOPS. 16 PPEs would be easy to code for, and the SPEs could be very powerful for graphics work.

The current Cell is supposed to be less than 20 watts at 45nm; if it can be 7 watts at 22nm, they would have a reasonable 112 watts... of course the chip would be ridiculously huge, cost a fortune, and would yield 0.0001%. But yes, I'd want one :D

I don't think it would have anything like that low a power draw. It would be a bigger chip than the GTX 680 while running at around 3x the clock speed, and the 680's TDP is 195W. Or take the 7970 for comparison's sake: about 15% bigger transistor-wise, running at well under 1/3 the clock rate, and it has a TDP of 250W.

So I don't think 250W would be unreasonable to expect for such a chip, which would leave about 100W over for a GPU even if we allocate a generous 400W for the whole console. But for all those raw FLOPS, that monster Cell wouldn't be close to as efficient as something like a 680 for graphics work, so you'd be better off getting an even more powerful GPU for, say, 250W and spending your 100W CPU budget on something like a six-core Haswell. That would have a lot more general computing performance and still a healthy SIMD capability of around 0.6 TFLOPS, which would complement the say 4 TFLOPS of the GPU nicely.

And it would be a hell of a lot easier to programme ;)
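
For what it's worth, here's where that ~0.6 TFLOPS Haswell figure comes from - assuming a ~3.2GHz clock and AVX2 with two 256-bit FMA units per core (both of those are my assumptions):

```c
#include <stdio.h>

/* Peak SP throughput = cores x flops/cycle x clock. */
int main(void)
{
    const double ghz = 3.2;                  /* assumed clock      */
    const int cores = 6, flops_cycle = 32;   /* 2x 256-bit FMA/cyc */
    printf("%.2f TFLOPS\n", cores * flops_cycle * ghz / 1000.0); /* ~0.61 */
    return 0;
}
```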
 
Actually, let's imagine someone does a crappy console with no R&D.
On the CPU side, a Core i3 Ivy Bridge or its successor. On the GPU side, a GK107. Straight PCIe 3.0 between them.

Memory configuration: 4GB DDR3 + 4GB DDR3.
Now that's something to play with :). The GPU is memory bandwidth starved, but it could load things from the CPU memory pool, to steal bandwidth from it. (Wow, it's just an old-fashioned thing, as on the Intel i740 and GeForce 6200 TurboCache.)

Problem solved: the console is small, runs cool and has gobs of memory.
When not running a game you can shut the GK107 down and underclock the hell out of the Intel CPU.

Two GPUs? Check.
Most advanced GPGPU architecture? Check.
Specialized hardware? Check.
Loses the pissing contest? Check... though I find it has something to brag about, compared to a monstrous console with 2GB of RAM.
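
As a rough idea of how much the GPU could actually steal from the CPU pool, here's the PCIe 3.0 peak, assuming a full x16 link (my assumption):

```c
#include <stdio.h>

int main(void)
{
    const double gt_per_lane = 8.0;            /* PCIe 3.0: 8 GT/s/lane */
    const double encoding    = 128.0 / 130.0;  /* 128b/130b overhead    */
    const int    lanes       = 16;

    double gbps = gt_per_lane * encoding / 8.0 * lanes;
    printf("PCIe 3.0 x16: ~%.1f GB/s each way\n", gbps);  /* ~15.8 */
    return 0;
}
```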
 
Actually, let's imagine someone does a crappy console with no R&D.
On the CPU side, a Core i3 Ivy Bridge or its successor. On the GPU side, a GK107. Straight PCIe 3.0 between them.

Memory configuration: 4GB DDR3 + 4GB DDR3.
Now that's something to play with :). The GPU is memory bandwidth starved, but it could load things from the CPU memory pool, to steal bandwidth from it. (Wow, it's just an old-fashioned thing, as on the Intel i740 and GeForce 6200 TurboCache.)

Problem solved: the console is small, runs cool and has gobs of memory.
When not running a game you can shut the GK107 down and underclock the hell out of the Intel CPU.

That would actually be a pretty good console. I mean, depending on what GK107 turns out to be, almost great.

And you really meant DDR3 to the GPU? That could be a problem.

A really simple console would just be an Ivy Bridge with HD 4000, or a step up to an AMD A10-5800K APU. It would not be a horrible performer either, and likely easily more powerful than the Wii U.

[attached image: trinitydesk.jpg]
 
Right there I want to say "no", because these cores are made to operate at sub-2GHz frequencies (short pipeline). So two Bulldozer modules, even if a bit lousy, would trounce your 8 Bobcat cores. You can't compare CPUs by frequency alone, but Bulldozer will run the clock twice as fast :).

A 256-bit bus, why not; it's of course expensive but you would just decide to stomach the expense. AMD manages to sell the Radeon 6850 at mad low prices, cards a bit above 100 euros.
Why would sub-2GHz be an issue? If it can keep its pipeline busy, a sub-2GHz chip would trounce the 3.2GHz PPU or 360 CPU which, as mentioned in a presentation linked somewhere else on this site (and which I am too lazy to find), appears to be able to sustain not much more than 0.2 IPC. Low power, decent performance, low heat - what's not to like :)
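
To put numbers on it: effective throughput is clock times sustained IPC. The 0.2 IPC is from that presentation; the 1.0 IPC for the low-clocked out-of-order core is purely illustrative:

```c
#include <stdio.h>

int main(void)
{
    printf("PPU @ 3.2GHz, 0.2 IPC: %.2f G instr/s\n", 3.2 * 0.2); /* 0.64 */
    printf("OoO @ 1.6GHz, 1.0 IPC: %.2f G instr/s\n", 1.6 * 1.0); /* 1.60 */
    return 0;
}
```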
 
That would actually be a pretty good console. I mean, depending on what GK107 turns out to be, almost great.
My optimistic bets are on a Vishera derivative + a few SPEs + FPGA + Tahiti slim + 4GB of unified high-bandwidth memory stacked in between.

As I mentioned earlier in the Southern Islands thread, my hope is that the curiously big die size of Tahiti is partly due to some prototype interconnect that allows the chip to be stacked together with other chips in an APU/HSA-like fashion (and they'd need something at least comparable to the final thing to put into the dev kits by now).

A Tahiti chip slightly slimmed down for gaming/multimedia purposes should end up somewhere around 200-220mm² @22nm - which should be a pretty good size to be used in the launch systems.

As for memory size: 8GB would be great - but I just don't see that happening given they'll need some VERY impressive bandwidth numbers to feed the entire thing. That being said, I'll take 4GB of insane bandwidth RAM over 8GB of more "traditional RAM" any time - especially if it's unified.
 
Actually, let's imagine someone does a crappy console with no R&D.
On the CPU side, a Core i3 Ivy Bridge or its successor. On the GPU side, a GK107. Straight PCIe 3.0 between them.

Memory configuration: 4GB DDR3 + 4GB DDR3.
Now that's something to play with :). The GPU is memory bandwidth starved, but it could load things from the CPU memory pool, to steal bandwidth from it. (Wow, it's just an old-fashioned thing, as on the Intel i740 and GeForce 6200 TurboCache.)

Problem solved: the console is small, runs cool and has gobs of memory.
When not running a game you can shut the GK107 down and underclock the hell out of the Intel CPU.

Two GPUs? Check.
Most advanced GPGPU architecture? Check.
Specialized hardware? Check.
Loses the pissing contest? Check... though I find it has something to brag about, compared to a monstrous console with 2GB of RAM.

Terrible choice tbh...

You need, at the bare minimum, a quad core..
 
Me! That thing could do over 3 TFLOPS. 16 PPEs would be easy to code for, and the SPEs could be very powerful for graphics work.

The current Cell is supposed to be less than 20 watts at 45nm; if it can be 7 watts at 22nm, they would have a reasonable 112 watts... of course the chip would be ridiculously huge, cost a fortune, and would yield 0.0001%. But yes, I'd want one :D

Me too...

& probably pretty much every studio whose got a scalable job-scheduler-based engine already built to support PS3 (i.e. probably most of them nowadays)

Easy-to-program doesn't come into it when you've already invested in the technology to leverage a specific architectural hardware design, however esoteric it may be.

It would just be a case of recompiling against a new platform SDK, fixing up your HAL layers for HID, command buffer generation, shader patching, system info & config interrogation etc., & switching your #define in the job scheduler from MAX_NUM_CORES = 5 to 127 or something...
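
Something like this minimal sketch (all the names are hypothetical, and plain pthreads stand in for whatever the platform SDK exposes) - the engine only ever sees the worker count as one constant, which is why the port is mostly a recompile:

```c
#include <pthread.h>
#include <stdio.h>

#define MAX_NUM_CORES 127   /* was 5 on PS3-class hardware */

/* Hypothetical job type; a real scheduler would pop these off
   a lock-free queue inside each worker. */
typedef struct { void (*run)(void *); void *arg; } job_t;

static void *worker(void *id)
{
    printf("worker %ld up\n", (long)id);
    return NULL;
}

int main(void)
{
    pthread_t pool[MAX_NUM_CORES];
    for (long i = 0; i < MAX_NUM_CORES; i++)
        pthread_create(&pool[i], NULL, worker, (void *)i);
    for (long i = 0; i < MAX_NUM_CORES; i++)
        pthread_join(pool[i], NULL);
    return 0;
}
```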
 
Why would sub-2GHz be an issue? If it can keep its pipeline busy, a sub-2GHz chip would trounce the 3.2GHz PPU or 360 CPU which, as mentioned in a presentation linked somewhere else on this site (and which I am too lazy to find), appears to be able to sustain not much more than 0.2 IPC. Low power, decent performance, low heat - what's not to like :)

http://www.microsoft.com/en-us/download/details.aspx?id=3539 Slide 43

(Mind you, it's nearly 4 yrs old... not that I'm expecting huge gains in average, but still. :p)
 
I will repeat my position; I still believe I was absolutely right: whatever we see in PS4 or Xbox Next will be highly defined by budget constraints from Sony and Microsoft (how much they want to spend and lose per unit sold) and not at all by technical problems (heat, noise etc.; those can easily be solved with a few more tens of dollars per console for a better cooling solution and a better case). Anyway, for those who are still skeptical about the technical feasibility of having a 7970 GPU in PS4 or Xbox Next, please look at this link, I can't say it better:
http://www.eurogamer.net/articles/df-hardware-radeon-7970m-alienware-m17x-r4-review


"The Pitcairn core is fairly small, occupying 212mm2 of area. Compare that with the 240mm2 of the RSX in the launch version of the PlayStation 3 and the 180mm2 of the Xbox 360's original 90nm Xenos GPU and we have a ballpark match. Of more interest is power consumption: at full tilt, the 7970M sucks up around 65 watts of power. That's not going to be especially good news for a laptop running on battery power alone, but considering that the launch versions of the Xbox 360 and PS3 both consumed around 200W in total, again we see an eminently suitable match.

But times have moved on from 2005 - Microsoft won't need to invest so much in gaining market share next time around, while Sony simply can't afford to lose hundreds of dollars per unit as it did with PlayStation 3 back in the day. While a Pitcairn core is viable for a next-gen console, the fact remains that it's an expensive, high-end piece of kit and there's no guarantee that the raw power of this remarkable GPU will end up in the new console designs. But if it were, we can reasonably expect next-gen games to outperform what we have shown in this article - the advantages of a fixed architecture in a console mean that developers target the specific strengths of the GPU, inevitably producing results that punch above their weight compared to the same hardware running on PC."
 
As for memory size: 8GB would be great - but I just don't see that happening given they'll need some VERY impressive bandwidth numbers to feed the entire thing. That being said, I'll take 4GB of insane bandwidth RAM over 8GB of more "traditional RAM" any time - especially if it's unified.

Yes and no. I believe if you use the memory for variety in textures, a huge environment, lots of sound and ample caching, the memory is useful even if slow.
I did not play GTA IV, but I remember previous games where, if you drive a garbage truck, every other vehicle on the road is a garbage truck. If you had a putative PlayStation 2 with a boatload of RAM, this funny nonsense wouldn't happen.

With big buffers for rendering, very big shadowmaps etc. and data-intensive calculations, yes, you need a lot of bandwidth.
This is where the Cell SPEs were actually special: each one had its own very fast storage.
 
I will repeat my position; I still believe I was absolutely right: whatever we see in PS4 or Xbox Next will be highly defined by budget constraints from Sony and Microsoft (how much they want to spend and lose per unit sold) and not at all by technical problems (heat, noise etc.; those can easily be solved with a few more tens of dollars per console for a better cooling solution and a better case).

A technical problem you can't afford to solve is a constraint, and a larger case or more noise is a tradeoff.

And heat can't be "solved" - there's a point where something is generating too much heat for people to want it under their TV or in their kids' bedroom or in their cramped AV stand or wherever.

Anyway, for those who are still skeptical about the technical feasibility of having a 7970 GPU in PS4 or Xbox Next, please look at this link, I can't say it better:
http://www.eurogamer.net/articles/df-hardware-radeon-7970m-alienware-m17x-r4-review

That's a 7970M. There are some technical problems even a few tens of dollars can't solve.
 
I will repeat my position; I still believe I was absolutely right: whatever we see in PS4 or Xbox Next will be highly defined by budget constraints from Sony and Microsoft (how much they want to spend and lose per unit sold) and not at all by technical problems (heat, noise etc.; those can easily be solved with a few more tens of dollars per console for a better cooling solution and a better case). Anyway, for those who are still skeptical about the technical feasibility of having a 7970 GPU in PS4 or Xbox Next, please look at this link, I can't say it better:
http://www.eurogamer.net/articles/df-hardware-radeon-7970m-alienware-m17x-r4-review


"The Pitcairn core is fairly small, occupying 212mm2 of area. Compare that with the 240mm2 of the RSX in the launch version of the PlayStation 3 and the 180mm2 of the Xbox 360's original 90nm Xenos GPU and we have a ballpark match. Of more interest is power consumption: at full tilt, the 7970M sucks up around 65 watts of power. That's not going to be especially good news for a laptop running on battery power alone, but considering that the launch versions of the Xbox 360 and PS3 both consumed around 200W in total, again we see an eminently suitable match.

But times have moved on from 2005 - Microsoft won't need to invest so much in gaining market share next time around, while Sony simply can't afford to lose hundreds of dollars per unit as it did with PlayStation 3 back in the day. While a Pitcairn core is viable for a next-gen console, the fact remains that it's an expensive, high-end piece of kit and there's no guarantee that the raw power of this remarkable GPU will end up in the new console designs. But if it were, we can reasonably expect next-gen games to outperform what we have shown in this article - the advantages of a fixed architecture in a console mean that developers target the specific strengths of the GPU, inevitably producing results that punch above their weight compared to the same hardware running on PC."

Won't happen.... Pure and simple...
 