Predict: The Next Generation Console Tech

I am looking forward to the next generation and then looking back over this thread to see how far off/close the predictions were.

Did anybody predict the PS3/X360/Wii before it was leaked/released?
 
I am looking forward to the next generation and then looking back over this thread to see how far off/close the predictions were.

Did anybody predict the PS3/X360/Wii before it was leaked/released?

I doubt many got the Wii correct, at least before devs said it was 1.5 times the GC.

Going from what we have heard, I assume the Wii U will be something between a 4650 and a 5670. If they go with a larger chip, 640 SPs, then I assume the clocks will be far lower to keep the power/heat requirements in check. 1GB of RAM. I don't know enough about IBM's CPUs to make an accurate guess.
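
To put that 4650-to-5670 window in rough numbers, here's some back-of-the-envelope math. The 2 FLOPs per ALU per clock is the usual multiply-add figure for these AMD parts; the 450MHz downclock for a 640 SP chip is just an illustrative pick, not anything leaked.

Code:
# Peak single-precision throughput = ALUs x 2 FLOPs (MADD) x clock.
def gflops(alus, clock_mhz):
    return alus * 2 * clock_mhz / 1000.0

print(gflops(320, 600))  # HD 4650: 384 GFLOPS (the floor of the guess)
print(gflops(400, 775))  # HD 5670: 620 GFLOPS (the ceiling)
print(gflops(640, 450))  # a 640 SP chip clocked down: 576 GFLOPS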

Sony are a very different company to the one that released the PS3. They are worth about a quarter of what they were and have lost billions across all divisions over the last four years. IMO they can't afford to launch another console that has the potential to cost them billions in the short term, as they certainly aren't guaranteed profits in years 3-5. A Barts core on 28/32nm in mid-2013 would be the basis for an affordable machine with enough power to port anything the next-gen Xbox will play.

Microsoft are very hard to predict as they could go either way. If they bundle Kinect then they will either need to take a sizeable loss or also go with affordable specs. Microsoft can clearly afford to sell at a loss, but will they deem it worthwhile? Microsoft started out on this journey over 10 years ago to control the living room, but things are moving on and we could see iOS/Android TVs in the next 12 months. Either way, I think Microsoft will probably play aggressively and put Sony in a difficult position (especially with the strong yen). I expect them to have a sizeable spec advantage on paper, but I doubt it will have a huge impact on games. Something similar to a 6950 on 28/32nm.
 
I am looking forward to the next generation and then looking back over this thread to see how far off/close the predictions were.

Did anybody predict the PS3/X360/Wii before it was leaked/released?

This is an abbreviated prediction I made a couple weeks ago.

Xbox3

- 6-core "Xenon" 3.5GHz
- 16-20 Compute Units (1024-1280 ALUs) 800+MHz
- 2GB GDDR5


PS4

- 4 or 6-core (AMD-based) 3.2GHz
- 16-20 Compute Units (1024-1280 ALUs) 800+MHz
- 2GB GDDR5


Wii U

- 3-core (POWER7-based) 3.5GHz
- 640-800 ALUs (don't know which architecture) 600-800MHz
- 1.5GB GDDR5, 32MB 1T-SRAM


I don't think there will be too much of a difference between Xbox3 and PS4. Also, while I have them at 2GB of memory, I see them going with 4GB if there is an increase in GDDR5 density.
 
He's basing that on a VLIW4 7950, not the GCN part we now know it to be. A Pitcairn Pro will have 20 CUs, or 1280 ALUs (the Pitcairn XT will have 24 CUs/1536 ALUs, and the Pitcairn LE will likely be a downclocked Pro). Now, unless the 7850 is just a downclocked 7950, which we have zero reason to believe, that would mean a 16 CU/1024 ALU or 12 CU/768 ALU GCN part, or maybe an 1152-ALU VLIW one. The chances of a console going with a full set of CUs on a GCN chip without any harvesting are highly unlikely. That means your most likely console AMD GPU will have 20 CUs/1280 ALUs, and likely lower clocked (800MHz) too.
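
For anyone following along, the CU-to-ALU conversion above is just multiplication; 64 ALUs per GCN CU (4 SIMDs x 16 lanes) is the known figure:

Code:
# GCN: each Compute Unit is 4 SIMDs x 16 lanes = 64 ALUs.
ALUS_PER_CU = 64

for cus in (12, 16, 20, 24):
    print(cus, "CUs =", cus * ALUS_PER_CU, "ALUs")
# 12 CUs = 768, 16 CUs = 1024, 20 CUs = 1280 (Pitcairn Pro),
# 24 CUs = 1536 (Pitcairn XT)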

I thought VLIW4 was GCN?

I think anything less than a Pitcairn XT (1408 ALUs) is too low if they launch on 28nm lithography. Pitcairn XT has a 245mm^2 die, whereas Xenos & daughter die is ~260mm^2.

At 20nm you have room to fit a Tahiti XT core (2048 ALUs) plus a pool of EDRAM in a 260mm^2 area, the same as the 90nm Zephyr.

Forgo the EDRAM (which I think will happen) and you could put a bit more into the GPU. To do this, perhaps there will be a completely custom part, which is what I'm hoping for.

The above speculation doesn't take into account the relevancy of the CPU. If the CPU will be less relevant than before, then it certainly won't be 176mm^2 like it was originally. You could put a bit more into the GPU still.
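
Rough math on the shrink, treating die area as scaling with the square of the feature size (real-world shrinks come in worse than this ideal, so read it as an optimistic bound; the ~365mm^2 Tahiti XT figure is the public one):

Code:
# Ideal die-area scaling between process nodes.
def scaled_area(area_mm2, node_from_nm, node_to_nm):
    return area_mm2 * (node_to_nm / node_from_nm) ** 2

tahiti_at_20nm = scaled_area(365, 28, 20)
print(round(tahiti_at_20nm))        # ~186mm^2, ideally
print(round(260 - tahiti_at_20nm))  # ~74mm^2 left of a Zephyr-sized budget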

[Attached image: leaked AMD GPU spec table]
 
bgassassin said:
This is an abbreviated prediction I made a couple weeks ago.

Xbox3

- 6-core "Xenon" 3.5GHz
- 16-20 Compute Units (1024-1280 ALUs) 800+MHz
- 2GB GDDR5

PS4

- 4 or 6-core (AMD-based) 3.2GHz
- 16-20 Compute Units (1024-1280 ALUs) 800+MHz
- 2GB GDDR5

Wii U

- 3-core (POWER7-based) 3.5GHz
- 640-800 ALUs (don't know which architecture) 600-800MHz
- 1.5GB GDDR5, 32MB 1T-SRAM

I don't think there will be too much of a difference between Xbox3 and PS4. Also, while I have them at 2GB of memory, I see them going with 4GB if there is an increase in GDDR5 density.

Haven't various devs suggested there would be many more cores than this gen? And would it be feasible to have 2GB of DDR3, or another cheaper type of memory, alongside the GDDR5?
 
You mean illegitimate? Perhaps, but here is the source.

I knew the source. There are some known errors in those tables, like the die size for the Cape Verde chip, and other stuff that doesn't make much sense. It's unlikely that even AMD has all those specs set in stone for all the models, and the number of different models is absurd for such a small release window. The site itself doesn't seem very credible. The 6970 is known, and the 6950 info is likely true or very close to it; after that it's more or less guesswork from that site, IMO.

It's better not to use that "info" as a basis for any speculation. Just wait a few more weeks for accurate info on more AMD cards.
 
It's also worth considering that current high-end GPUs still have a number of fixed-function units that could be removed or reduced. It can't be done in the PC world yet, because it would lead to a performance catastrophe with current and older games, but in a console it's possible. Remove the ROPs and rasterize within the shader core, and further reduce the TMU:shader ratio.
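
To make the ROP-less idea concrete, here's a toy sketch of what "rasterize within the shader core" means: coverage testing and blending done in ordinary ALU code instead of fixed-function hardware. Plain Python standing in for a compute kernel; a real GPU would run the per-pixel work in parallel. The point isn't speed, it's that nothing here needs dedicated silicon, which a console could commit to.

Code:
# Toy software rasterizer: the coverage test is the rasterizer's job,
# the blend at the end is the ROP's job. Both are plain ALU code here.

def edge(ax, ay, bx, by, px, py):
    # Signed area of (a, b, p); its sign says which side of edge a->b
    # the point p falls on.
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def raster_triangle(fb, tri, color, alpha):
    (x0, y0), (x1, y1), (x2, y2) = tri
    # Bounding box, clamped to the framebuffer.
    xmin, xmax = max(min(x0, x1, x2), 0), min(max(x0, x1, x2), len(fb[0]) - 1)
    ymin, ymax = max(min(y0, y1, y2), 0), min(max(y0, y1, y2), len(fb) - 1)
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            w0 = edge(x0, y0, x1, y1, x, y)
            w1 = edge(x1, y1, x2, y2, x, y)
            w2 = edge(x2, y2, x0, y0, x, y)
            # Coverage: inside if all edge tests agree (either winding).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                # "ROP" stage: src-over blend with the existing pixel.
                fb[y][x] = tuple(int(alpha * c + (1 - alpha) * d)
                                 for c, d in zip(color, fb[y][x]))

fb = [[(0, 0, 0)] * 64 for _ in range(64)]
raster_triangle(fb, [(5, 5), (60, 10), (20, 55)], (255, 128, 0), 0.5)
print(sum(p != (0, 0, 0) for row in fb for p in row), "pixels covered")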
 
It's also worth considering that current high-end GPUs still have a number of fixed-function units that could be removed or reduced. It can't be done in the PC world yet, because it would lead to a performance catastrophe with current and older games, but in a console it's possible. Remove the ROPs and rasterize within the shader core, and further reduce the TMU:shader ratio.

Speaking of GCN, do you mean, for example, including the rasterizer directly in the compute units and reducing the TMUs to one or two per CU? Are TMUs really overdimensioned?
 
It's also worth considering that current high-end GPUs still have a number of fixed-function units that could be removed or reduced. It can't be done in the PC world yet, because it would lead to a performance catastrophe with current and older games, but in a console it's possible. Remove the ROPs and rasterize within the shader core, and further reduce the TMU:shader ratio.

GCN in its current form would not be the one to do this. The graphics export path gets its own bus to the ROPs, which saves a lot of traffic over the read/write cache. The rasterization component is not significant in size, but a CU or a quad of CUs is.
The gains from changing the TMU count may be marginal. The texture path sits on the general memory path to the L1, so a lot of the hardware they use is going to stay in place regardless.
 
The above speculation doesn't take into account the relevancy of the CPU. If the CPU will be less relevant than before, then it certainly won't be 176mm^2 like it was originally. You could put a bit more into the GPU still.

Someone can correct me if I'm wrong, but from what I understand you don't want the CPU to be a bottleneck at lower resolutions that don't push the GPU the way higher resolutions would. So I don't know if I would say it will be less relevant.

Haven't various devs suggested there would be many more cores than this gen? And would it be feasible to have 2GB of DDR3, or another cheaper type of memory, alongside the GDDR5?

The ones I remember off the top of my head were someone from Epic talking about the scalability of UE4 and being ready for when 20-core CPUs are available, and someone from DICE saying they knew how to program for multi-CPU/GPU setups. The DICE one came off to me as just a non-answer to avoid breaking any NDAs.

EDIT: As for the memory I would assume they could if they wanted, but I get the feeling none of them want a split pool of memory. My opinion of course.
 
GCN in its current form would not be the one to do this. The graphics export path gets its own bus to the ROPs, which saves a lot of traffic over the read/write cache. The rasterization component is not significant in size, but a CU or a quad of CUs is.
The gains from changing the TMU count may be marginal. The texture path sits on the general memory path to the L1, so a lot of the hardware they use is going to stay in place regardless.

And would things like the UVD interface and PCIe make any difference in transistor count?
 
There's no disclosure of the exact area and transistor counts, but the UVD block in Llano is not very big. There's going to be some kind of interface connecting the GPU to the rest of the system regardless; as far as the high-end chips go, its contribution is dwarfed by the rest of the GPU.
 
I agree with the above post.
I think MS will offer a higher-spec console and place Sony in a bad position.
Personally, the better-performing multiplats sealed the deal for me this gen.
MS knows the money is in exclusive DLC, and what better than to have that DLC and multiplat titles performing even better than what we have at the moment, or even DLC that couldn't run on weaker hardware?
 
Had a disturbing/silly thought a couple of days ago.

IBM stated that the Wii U has a nice amount of eDRAM on its CPU.
What if it has a GPU similar to Xenos and the daughter die were moved onto the CPU?

It certainly would allow some silly things.. ;)
 
Let's wait and see how it goes, but another IMO: I hope for something like 64MB of eDRAM (shared between CPU and GPU) at 512GB/s, almost like the POWER7's L3 cache*, and no more than 1.5GB of RAM (GDDR3 or 1T-SRAM) at 32GB/s.

*http://www.7-cpu.com/cpu/Power7.html
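
For what it's worth, both of those bandwidth figures fall out of bus width times effective data rate; the configurations below are just illustrative examples that hit the numbers, not anything confirmed:

Code:
# Peak bandwidth (GB/s) = bus width in bytes x effective data rate (GT/s).
def bandwidth_gbs(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

print(bandwidth_gbs(128, 2.0))   # 32 GB/s: e.g. 128-bit GDDR3 at 2.0GT/s
print(bandwidth_gbs(2048, 2.0))  # 512 GB/s: only a very wide on-die
                                 # eDRAM bus gets you numbers like this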

From what I've heard the former may be exactly that, but the amount so far seems to be 32MB. The latter also seems to be the same amount (1.5GB), but it wouldn't be 1T-SRAM. We've had discussions on whether it will be DDR3, GDDR3, or GDDR5. I'm expecting GDDR5, but I don't discount the possibility of the other two ending up in there.


Something that also intrigues me is how people would take it if GDDR5 densities don't increase, preventing MS and Sony from reaching the 4GB that I see people expecting. I have a tough time believing they would go with a split pool to reach that amount. Has anyone heard anything about 4Gbit GDDR5 being made, or discussed as being made?
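
On the density question, the capacity math works like this, assuming a 256-bit bus purely for illustration (GDDR5 chips have 32-bit interfaces, and clamshell mode, which is part of the GDDR5 spec, hangs two chips off each 32-bit channel):

Code:
# Pool size (GB) = number of chips x density per chip.
def pool_gb(bus_bits, gbit_per_chip, clamshell=False):
    chips = bus_bits // 32 * (2 if clamshell else 1)
    return chips * gbit_per_chip / 8.0

print(pool_gb(256, 2))                  # 2.0 GB with today's 2Gbit chips
print(pool_gb(256, 4))                  # 4.0 GB needs 4Gbit parts...
print(pool_gb(256, 2, clamshell=True))  # ...or 16 chips in clamshell mode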
 
How much more expensive is GDDR5 compared to DDR3? At my part-time job we sell 4GB of DDR3 for 20 euros, and this store isn't exactly the cheapest place to buy parts. I suppose a console builder won't even be paying 10 euros for 4GB if they buy directly from whoever is producing the memory. If GDDR5 is much more expensive, could we see separate memory pools again? 1GB of GDDR5 and 4GB of DDR3?
 