Predict: The Next Generation Console Tech

As said in that other thread, that MCM looks like a test module or prototype of some sort; I'd guess it's one of those devices they run through the fab to check how well everything's performing.

That it'd be a real, early version of a new console CPU seems very unlikely; as noted in the other thread, the chips aren't even glued on. You couldn't run that MCM in a system without risking pulling off the dice along with the heatsink when taking it apart again... Also, the asymmetrical nature of those four chips - two quite big ones, two really small ones - why? What would the purpose be for that setup? We're NOT going to see multiple physical CPU dies, or Crossfire'd GPUs, in a console...
 
You are reading it wrong. 4P refers to 4 cores, so this would have two units of 4 cores, both of them connected to their own 2MB pool of cache, for a total of 8 cores and 4MB cache.

... which would be pretty pathetic. One Bobcat core at 1.6GHz is roughly 3-4 times faster than one Xenon thread at 3.2GHz in integer work, so unless Jaguar is a real step up, that's realistically less than 5 times the power of Xenon, spread across 8 threads.
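
For what it's worth, here's the back-of-envelope math behind that "less than 5x" figure - a quick sketch, assuming the 3-4x per-thread estimate holds and Xenon's six hardware threads scale roughly linearly (which is generous to Xenon):

```python
# Rough check on the "less than 5x Xenon" claim above.
# Assumptions (mine, not hard data): one Bobcat/Jaguar core ~3.5x the integer
# throughput of one Xenon hardware thread; Xenon = 3 cores x 2-way SMT.

bobcat_vs_xenon_thread = 3.5   # midpoint of the "3-4x" estimate
jaguar_cores = 8
xenon_threads = 6              # assuming all 6 threads scale ~linearly

jaguar_total = jaguar_cores * bobcat_vs_xenon_thread   # in "Xenon thread" units
xenon_total = xenon_threads * 1.0

print(jaguar_total / xenon_total)   # ~4.7, i.e. "less than 5x" Xenon overall
```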

It would be pretty low power and low cost -- I wonder which I would prefer if PS4 and XBnext had roughly the same silicon budget, where Sony spends it on a beefier CPU and MS goes for a better GPU.

Crossplatform games would almost certainly look better on the system with the better GPU, because it's harder to scale visuals up or down to match the CPU.

Another thing I hear vague implications of (it came up in one of steamvar's posts too) is that the 720 GPU is more "custom".

I wonder if that's just EDRAM or what.

Hmm yeah I remember that guy but had forgotten about him. Anyways so I combed his posts from that time, and he also claims (I guess) the Next Box is using Jaguar cores, here http://www.neogaf.com/forum/showpost.php?p=37365190&postcount=1269

Which goes against the "powerful CPU" next box rumors.

That cannot be true, though. There are way too many different sources implying Durango is faster than Orbis, with the gap being as wide as the one between a high-end machine and a mid-range one. That's a lot. And the same for the GPU: Durango seems to have a slower one, so then the CPU must be considerably faster.
At this point, MS is working on some really custom solution, and this may be a reason why we are getting different information. Devkits may still be based on consumer hardware, and not all the developers have access to the target specs.
 
Somebody check me on this thought. It could well be stupid and/or impossible. For the Xbox 720.

Following assumptions: as MS, you want Kinect 2 in every household as fast as reasonably possible. Sort of a home version of "Siri" and the iPhone. You can make commercials of someone asking "Xbox - Hatfields and McCoys TV show" and the reply is something like "You have that available on Netflix in standard definition, or available for sale or rent on Xbox Live in high definition."

You have just introduced a payment plan alternative to high initial cost hardware.

You need to have both something cheap enough to sell as a souped-up set-top box and something that can act as a high-end game machine, without either creating an absurdly high initial price or putting yourself too far in the red for each unit sold. The payment plan option may help with this, but I doubt it could entirely eliminate it.

We had rumors of "weird" and "an SoC and a dedicated GPU", as well as older stuff like "forwards compatibility".

I'm not going after specs here in the usual sense. Total RAM is unlikely to be set yet; the presence of eDRAM is probably somewhat dependent on the above as well.

What if you actually did make two versions, the only difference being the presence of the extra "GPU", whose only purpose was to enable going from 720p to 1080p native? Everything else is precisely the same. We haven't really talked about a scaler on here, but that worked well for MS this time around. Could you do this though? You'd have something maybe $50 (more?) cheaper to sell as a lower-end unit, and something to actually offer those who are screaming for native 1080p. Possible? Worth it?
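
Rough pixel math on what that extra "GPU" would have to cover, assuming everything else really is identical:

```python
# 1080p vs 720p pixel counts - the extra SKU needs ~2.25x the fill/shading
# throughput to render the same scene at native 1080p.
pixels_720p  = 1280 * 720    #   921,600
pixels_1080p = 1920 * 1080   # 2,073,600

print(pixels_1080p / pixels_720p)   # 2.25
```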
 
Just because a spectrum band is open for use in the US doesn't mean it's open for use in Japan, Europe, etc., and vice versa.
Therefore I don't think they'll use anything outside the "globally accepted" spectrum.

For example, the 802.11a band is prohibited from use in Taiwan because it's actually military spectrum (at least legally, as I recall).

Now, Nintendo could just change the hardware for each country, but that would just be a logistical disaster.

I don't know about the 802.11a restrictions; it's at least free for use in Europe in addition to the US.
Other candidates would be an Ultra-Wideband standard or Wireless USB, which operates from 3.1 to 10.6GHz. Yes, there's some logistical disaster pending, but you could just include hardware that can deal with any suitable spectrum band and restrict it with firmware.
 
1.8 TFLOPS is still not enough for 1080p Samaritan; it needs 2.5T, right? I hope they can push it to a 7870 for the final spec and 4GB of GDDR5. I thought they wanted "the best and not the cheapest hardware" - prove to us you really mean it, Sony :).
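
For reference, here's the peak-FLOPS math on a desktop 7870 (standard public specs, no insider info), which is roughly where that ~2.5T Samaritan figure lands:

```python
# HD 7870 (Pitcairn XT): 1280 stream processors, 1.0 GHz, 2 FLOPs (FMA) per SP per clock.
sps = 1280
clock_ghz = 1.0
tflops = sps * 2 * clock_ghz / 1000.0

print(tflops)   # 2.56 TFLOPS, vs the rumored 1.8 TFLOPS figure
```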

To add to the discussion, over a month ago a poster said this in a different PS4 thread.

Clipped Jaguar Rumor

I mentioned this in one of the threads on NeoGAF that I wonder if they did switch to Jaguar cores, would they have done so to save room so they could fit more shaders on the die. If the design is built around the HSA idea offloading as much work as possible with OpenCL (or whatever Sony's GPGPU equivalent) then it might make lots of sense to sacrifice CPU performance if it significantly increases the GPU flops (maybe as high as 2.5 TFlops?).
 
That's going to be an interesting battle. Sony and MS won't only have to think about game performance, but multitasking and media performance too.
They should build the hardware around that idea. They're sure going to need lots of RAM and multicore CPUs that can take both gaming and other tasks simultaneously without annoying hiccups.
I am holding my Android phone, and I see the console future having similar multifunctional capabilities.
I am pretty sure MS is going to create an OS close to what we see today, or expect to see in the future, on Windows mobile phones: one that will mimic many smartphone functionalities and offer seamless transitions from one app to the next.
 
I think Vita's OS is a good illustration of the kind of multitasking and media features we can expect. I think we'll finally get real "sleep" capabilities, in addition to things like live streaming of gameplay to the internet. The rumored on-board flash memory in PS4 should be good for saving a game state, so you can stop playing without finding a save point and load up exactly where you were without having to relaunch the whole game.

That 2nd spec sheet for PS4 mentions media encode/decode hardware (with my pet theory being a block of SPUs for video compression and decompression, audio, image processing AND backwards compatibility).
 
What if you actually did make two versions, the only difference being the presence of the extra "GPU", whose only purpose was to enable going from 720p to 1080p native? Everything else is precisely the same... Could you do this though?
If the GPU is architecturally identical with just more shader units, it should definitely work. There'd have to be a business discussion though, and some API aspects forced on developers. As I believe that'll happen anyway (indeed we may even see a family of hardware that's upgraded every couple of years, although current rumours suggest we are still on for a conventional 5+ year console cycle), it's a business decision to be mulled over by the powers-that-be. It's not really a topic for the next-gen tech though. ;)
 
Everything I have heard about Durango is pretty-to-very encouraging except... :( Things are still in flux, but the ball is definitely moving (see SE's, Epic's and Crytek's new next-gen engine videos) and we should see final silicon tape-out and samples by the fall.
 
Steamroller is also targeting 28nm, so a hop to that node wouldn't make a difference when choosing between Jaguar and Steamroller.

One other wrinkle is that only Steamroller APUs mention HSA application support, which if Sony intends to use the GPU for general FP applications would be a necessity. That's not a hard restriction if a semi-custom design is in progress.

So if I understand you correctly it's plausible, but seems unnecessary?

Bg, can you please point out those "weird" things you've been hearing on Durango's part? It seems like we know more about PS4 than the next Xbox.

That's interesting; by 'weird' do you mean that Durango has the more exotic architecture of the two this time around?

Or does 'weird' mean that the general tone of rumours that Durango is more powerful (and lherre's claim that Durango = high-range PC, PS4 = mid-range and Wii U = low-range) isn't true, and there's no clear-cut winner at the moment between the two consoles?

First to add context to lherre's claim, that came from an analogy I was using to describe the consoles next gen. Don't take that as saying "Xbox 3's power is equal to a high-end PC". I was saying if you considered Xbox 3 and PS4 "this (high end in this case)" Wii U would be "that". That's when he began to chime in. The PS4 being "mid-range" was my adjustment to how he referred to Wii U. Though that was also before I began to find out actual info. So if he's basing Xbox 3's "power" on what I think he is, I don't agree with his view. But I need more info to make a proper assessment.

And some have hit on what I mean by weird. In the view of what most of us would consider to be a "normal, balanced" modern system, so far it doesn't sound like it. Though customizations could end up being included as well.

To take a turn in the discussion away from GPUs and CPUs...

Does anybody have an idea how the Wii U streams the video output to the Wii U GamePad?
From currently leaked information, the Wii U GamePad seems to have Wi-Fi and NFC, as well as Bluetooth. Nothing else seems to be specified.

NFC wouldn't seem to work in any capacity for these purposes, so that's out of the window. Bluetooth would seem to have bandwidth problems, so it would seem to me that the most direct way of doing the video feed would be through Wi-Fi.
Control inputs take up very little bandwidth, so I'll leave them at that.
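
A quick sanity check on the numbers, assuming a roughly 854x480, 60fps, 24bpp feed (my guess at the panel, not a confirmed spec): raw video is far beyond practical 802.11n throughput, which is why some form of hardware video compression on both ends would seem unavoidable.

```python
# Rough bandwidth check on streaming the GamePad's video over Wi-Fi.
width, height, fps, bpp = 854, 480, 60, 24   # assumed feed, not confirmed specs

raw_mbps = width * height * fps * bpp / 1e6
print(raw_mbps)                  # ~590 Mbps uncompressed - too much for real-world 802.11n

compressed_mbps = raw_mbps / 20  # assuming ~20:1 hardware video compression
print(compressed_mbps)           # ~30 Mbps - comfortably within practical Wi-Fi throughput
```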

I haven't seen too much on this subject in the forums, and it would be interesting to see the feasibility of SmartGlass on Xbox, or PSV + PS3, in the future.

Currently PSV + PS3 is the closest thing that's not Nintendo that has shown similar capabilities, but there seem to be reports of input lag in Remote Play. I'm not currently at home so I couldn't test this out for myself, but I'll do it later in the day.

Given the relatively low latencies of Wi-Fi, it couldn't (or shouldn't) be the connection that we should be worrying about.
CPU-intensive encoding issues perhaps? And how does the Wii U get around that?

The patent showed that both the controller and the console have hardware codecs for the stream. The transmission method used is still not known, though.

Which goes against the "powerful CPU" next box rumors.

It's probably going to depend on what we consider as "powerful". Though it does seem that MS has switched to x86 leaving AMD as the only logical option.

I mentioned this in one of the threads on NeoGAF that I wonder if they did switch to Jaguar cores, would they have done so to save room so they could fit more shaders on the die. If the design is built around the HSA idea offloading as much work as possible with OpenCL (or whatever Sony's GPGPU equivalent) then it might make lots of sense to sacrifice CPU performance if it significantly increases the GPU flops (maybe as high as 2.5 TFlops?).

It's definitely plausible. But after hearing Epic's requirements for UE4, it may not be necessary.
 
Yes, PS4 doesn't have a dedicated GPU according to these rumors. It should be much more optimized than Llano and Trinity, but they may have only one 256-bit memory controller with concurrent access to the unified memory. GDDR5 should give about 200 GB/s worth of bandwidth. Your specs are unfeasible, both cost- and power-consumption-wise. :D
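
For reference, the bandwidth arithmetic behind that ~200 GB/s figure, assuming roughly 6 Gbps effective per pin of GDDR5 (in line with current desktop parts):

```python
# Peak bandwidth of a 256-bit GDDR5 interface.
bus_width_bits = 256
effective_gbps_per_pin = 6.0   # assumption, based on current GDDR5 speeds

bandwidth_gbs = bus_width_bits / 8 * effective_gbps_per_pin
print(bandwidth_gbs)           # 192 GB/s, i.e. "about 200 GB/s"
```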

Well, it won't be feasible if they stuck the current 7970 chip into PS4. But I believe AMD will actually make some refinements moving from Southern Islands to Sea Islands GCN, so 7970 performance could be had at much lower power and cost, i.e. better efficiency. And they can customise it some more for PS4.

PS4 needs Fusion and a discrete GPU for something like 7970 Crossfire-equivalent performance. I doubt the 7970 will still be enthusiast-class by the time PS4 launches, most likely in 2014.

I mentioned this in one of the threads on NeoGAF that I wonder if they did switch to Jaguar cores, would they have done so to save room so they could fit more shaders on the die. If the design is built around the HSA idea offloading as much work as possible with OpenCL (or whatever Sony's GPGPU equivalent) then it might make lots of sense to sacrifice CPU performance if it significantly increases the GPU flops (maybe as high as 2.5 TFlops?).

Or Jaguar cores are sufficient for consoles. I always thought Bulldozer cores in a console would be a waste and overkill. Looking at AMD's roadmap, the part with Jaguar cores will be paired with the next-gen GCN Sea Islands. Hopefully they can bring in more efficiency like NV did from Fermi to small Kepler.
 
Or Jaguar cores are sufficient for consoles. I always thought Bulldozer cores in a console would be a waste and overkill. Looking at AMD's roadmap, the part with Jaguar cores will be paired with the next-gen GCN Sea Islands. Hopefully they can bring in more efficiency like NV did from Fermi to small Kepler.


Jaguar is the replacement for Bobcat.
Bobcat is very, very slow. Jaguar is unlikely to be that much faster.

GPUs are only good at problems that can be made parallel and don't care much about latency; any problem that can't be parallelized would be forced to run on a very slow CPU.
It would also make it very hard and time-consuming to rewrite code that was made with a rival console (with a much stronger CPU) or a PC in mind; it would be like the early PS3 CELL problems but ten times worse.
 
So Hexus has their interpretation on the rumored Liverpool APU and a discrete HD7970.
http://hexus.net/gaming/news/hardwa...bis-rumoured-feature-next-gen-amd-fusion-apu/
Liverpool is likely to be one of the firm's first APUs to feature support for heterogeneous computing and, as such, the PlayStation 4 is expected to utilise a unified memory structure, with reports suggesting that the console currently features 2GB of RAM in the specs, though Sony is mulling over the idea of 4GB following developer requests.

Though AMD APU technology is progressing at an impressive rate, on-die GPUs still aren't powerful enough to drive a high-end console and so, according to psx-sense sources, Sony's next-gen console is expected to include a discrete Radeon HD 7970 GPU.
If that's the case, then wouldn't that be way overkill? The APU alone is 1.8T; add a discrete, albeit modified, 7970 and that should be another ~3T on top of it. Am I seeing things here?
 
That sounds a bit overkill, but it also smells of intellectual laziness.
The HD 7970 uses an enormous bus and has bandwidth to spare; it's a big chip.
The rumors say the system has a single memory pool,
so the HD 7970 would have to act like Xenos did, as the northbridge for the APU. So it's clearly not an HD 7970, as it can't act as a northbridge.
Then you have the APU, with plenty of shaders that are going to need a lot of bandwidth too. I would say you would need a link providing 3x or more what PCI Express provides (100GB/s+) to make efficient use of something that close to Pitcairn (which is fed by a 256-bit bus).

The whole thing just doesn't make much sense IMO.
We are talking about two big chips (both most likely above 350 sq.mm) and a pretty enormous TDP.
A 384-bit bus, a crazy-fast link between both chips. And with all that expensive hardware, only 2GB of RAM? On a 384-bit bus?
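
To put rough numbers on that bandwidth argument (using public desktop specs, not leaks):

```python
# Pitcairn's local memory bandwidth vs what a PCIe-style chip-to-chip link provides.
# HD 7870 local memory: 256-bit bus @ 4.8 Gbps effective GDDR5.
pitcairn_gbs = 256 / 8 * 4.8
print(pitcairn_gbs)            # 153.6 GB/s to its own GDDR5

pcie3_x16_aggregate = 32.0     # ~16 GB/s per direction, ~32 GB/s total
print(pitcairn_gbs / pcie3_x16_aggregate)   # ~4.8x - an APU feeding off a remote pool
                                            # over PCIe would be starved by comparison
```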
 
I can't sleep...
What would make more sense to me is something like this.

An APU may indeed not provide the power Sony wants within a given silicon budget and power budget.

Sony is not pleased by the power profile of either Trinity or even the upcoming Kaveri desktop parts, and what they achieve within that power profile.

The Jaguar cores (Bobcat's replacement) don't suck too much, and within Sony's silicon and power budget they are a better fit.

Sony doesn't want to go with chips that are either too big or too hot.

Sony views Pitcairn for what it is: impressive with regard to its size, power profile and performance.
They decided to go with something that (judging from the number of SIMDs) could be named an HD 7870 LE or HD 7860 (or the HD 8xxx equivalent).

I would bet it's going to be clocked conservatively (below 800MHz), with AMD's power-tuning tech then adjusting the GPU to match a defined TDP. 100 watts is the maximum.

800MHz may be the max speed, as it matches the rumored FLOPS counts (and it makes sense to communicate on peak figures).

The chip is going to be tweaked to act as the northbridge (a la Xenos).

The chip will support a 256-bit bus connected to fast GDDR5: 2GB of RAM, with Sony evaluating the possibility of moving to 4GB.

The APU is going to be around the same size as the GPU, ~200 sq.mm.

Moving from Steamroller cores to Jaguars, I would think Sony would go with more than 4 cores.
I could see the CPU core and CU counts being anywhere between 4 and 8.
I could see 6 CPU cores and 6 CUs as being "sane" (though I may favor 8 CPU cores and 4 SIMDs).
I can see the CPU cores' max clock speed being 2.4GHz and the GPU's being 800MHz; then turbo does its job to keep the chip within the TDP.

The APU is connected to the GPU/northbridge using PCIe 3.0 and has 32GB/s of bandwidth to play with (a bit less with the coherency traffic).

Both chips use TSMC's 28nm process.
I could see the TDP of both chips being set at ~80 watts.

The 7970 is used in dev kits; it's convenient, with lots of headroom in performance, and multiple ACEs may help to emulate the APU running in parallel with the GPU, for example, etc.

Overall the system embarks 24 CUs operating, in the best case scenario (i.e. when the TDP allows), at 800MHz. The peak FLOPS figure would be just below 2.5 TFLOPS for the system as a whole.
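
For reference, that peak figure is just the standard GCN math; a quick sketch, assuming 64 ALUs per CU and the 800MHz best-case clock:

```python
# Checking the "just below 2.5 TFLOPS" figure for the combined 24 GCN CUs.
cus = 24                 # APU CUs + GPU CUs combined, per the speculation above
alus_per_cu = 64         # GCN: 4 x 16-wide SIMDs per CU
clock_ghz = 0.8

tflops = cus * alus_per_cu * 2 * clock_ghz / 1000.0
print(tflops)            # ~2.46 TFLOPS peak for the system as a whole
```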
 
Overall the system embarks 24 CUs operating, in the best case scenario (i.e. when the TDP allows), at 800MHz. The peak FLOPS figure would be just below 2.5 TFLOPS for the system as a whole.
I'm definitely fine with this if it turns out to be true. I also see Sony upping the RAM to 4GB of GDDR5 in the final kit, since even two of their own first-party studios are begging for it. As long as they nail down a Pitcairn-level GPU instead of that 7670 crap, I would be content with PS4.
 
I'm definitely fine with this if it turns out to be true. I also see Sony upping the RAM to 4GB of GDDR5 in the final kit, since even two of their own first-party studios are begging for it. As long as they nail down a Pitcairn-level GPU instead of that 7670 crap, I would be content with PS4.
Source please?..
 
And some have hit on what I mean by weird. In the view of what most of us would consider to be a "normal, balanced" modern system, so far it doesn't sound like it. Though customizations could end up being included as well.

People are saying weird because the Durango seems really impressive, except for the GPU which bottlenecks it. As you know.

But the Durango GPU is not final, so it's too early to say IMO. I don't think MS is going to make any big mistakes there, such as just putting out something uncompetitive.

All-AMD machines everywhere, wow. And people say AMD is dead...

My theory that it's built around Kinect still sounds really good to me. I don't see any other reason for gobs and gobs of RAM if your GPU is, supposedly, not great at this point. It's clear from bkillian's post that it's a pretty obvious theory: they needed lots of RAM to do lots of joint tracking and such, the fidelity of Kinect may to some extent depend on the amount of RAM, and they may be going for a Kinect 2.0 with quite high fidelity.

Building around Kinect won't be a bad thing though, depending. It's not like we won't enjoy lots of RAM and a powerful CPU. We just need to make sure the GPU can hold its end of the bargain.


Overall the system embarks 24 CUs operating, in the best case scenario (i.e. when the TDP allows), at 800MHz. The peak FLOPS figure would be just below 2.5 TFLOPS for the system as a whole.

There's no APU and there's no 7970. It's CPUs and an 1152-SP GPU.
 