Predict: The Next Generation Console Tech

Discussion in 'Console Technology' started by Acert93, Jun 12, 2006.

Thread Status:
Not open for further replies.
  1. JPT

    JPT
    Veteran

    Joined:
    Apr 15, 2007
    Messages:
    2,505
    Likes Received:
    943
    Location:
    Oslo, Norway
    I am looking forward to the next generation, and to then looking back over this thread to see how far off/close the predictions were.

    Did anybody predict the PS3/X360/Wii before they were leaked/released?
     
  2. Medu

    Newcomer

    Joined:
    Dec 27, 2011
    Messages:
    5
    Likes Received:
    0
    I doubt many got the Wii correct, at least not before devs said it was 1.5 times the GC.

    Going from what we have heard, I assume the Wii U will be something between a 4650 and a 5670. If they go with a larger chip, 640 SPs, then I assume the clocks will be far lower to keep the power/heat requirements in check. 1GB of RAM. I don't know enough about IBM's CPUs to make an accurate guess.

    Sony are a very different company to the one that released the PS3. They are worth about 1/4 of what they were and have lost billions across all divisions over the last 4 years. IMO they can't afford to launch another console that has the potential to cost them billions in the short-term as they certainly aren't guaranteed profits in years 3-5. A Barts core on 28/32nm in mid 2013 would be the basis for an affordable machine that would have enough power to port anything that the next gen Xbox will play.

    Microsoft are very hard to predict as they could go either way. If they bundle Kinect then they will either need to take a sizeable loss or also go with affordable specs. Microsoft can clearly afford to sell at a loss, but will they deem it worthwhile? Microsoft started out on this journey over 10 years ago to control the living room, but things are moving on and we could see iOS/Android TVs in the next 12 months. Either way I think that Microsoft will probably play aggressively and put Sony in a difficult position (especially with the strong yen). I expect them to have a sizeable spec advantage on paper, but I doubt it will have a huge impact on games. Something similar to a 6950 on 28/32nm.
     
  3. bgassassin

    Regular

    Joined:
    Aug 12, 2011
    Messages:
    507
    Likes Received:
    0
    This is an abbreviated prediction I made a couple weeks ago.

    Xbox3

    - 6-core "Xenon" 3.5GHz
    - 16-20 Compute Units (1024-1280 ALUs) 800+MHz
    - 2GB GDDR5


    PS4

    - 4 or 6-core (AMD-based) 3.2GHz
    - 16-20 Compute Units (1024-1280 ALUs) 800+MHz
    - 2GB GDDR5


    Wii U

    - 3-core (POWER7-based) 3.5GHz
    - 640-800 ALUs (don't know which architecture) 600-800MHz
    - 1.5GB GDDR5, 32MB 1T-SRAM


    I don't think there will be too much of a difference between Xbox3 and PS4. Also while I have them at 2GB of memory, I see them going with 4GB if there is an increase in GDDR5 density.
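As a rough sanity check on spec lists like the one above, peak shader throughput can be estimated from ALU count and clock. This is a minimal sketch assuming an AMD-style architecture with 2 floating-point ops (one FMA) per ALU per clock; the function name and the 2-ops-per-clock figure are my own illustrative assumptions, not anything from the post.

```python
# Back-of-envelope peak-FLOPS estimate for a predicted GPU spec.
# Assumes 2 FP ops (one fused multiply-add) per ALU per clock,
# which is typical of AMD shader architectures of this era.

def peak_gflops(alus: int, clock_mhz: float, ops_per_clock: int = 2) -> float:
    """Theoretical peak single-precision GFLOPS."""
    return alus * ops_per_clock * clock_mhz / 1000.0

# The 16-20 CU (1024-1280 ALU) range at 800 MHz predicted above:
low = peak_gflops(1024, 800)    # ~1638 GFLOPS
high = peak_gflops(1280, 800)   # ~2048 GFLOPS
print(f"{low:.0f}-{high:.0f} GFLOPS peak")
```

So the predicted Xbox3/PS4 specs would land in the roughly 1.6-2.0 TFLOPS range on paper, real-world throughput would of course be lower.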
     
  4. IllusionistK

    Newcomer

    Joined:
    Nov 8, 2011
    Messages:
    54
    Likes Received:
    0
    I thought VLIW4 was GCN?

    I think anything less than a Pitcairn XT (1408 ALUs) is too low if they launch on 28nm lithography. Pitcairn XT has a 245mm^2 die whereas Xenos & Daughter is ~260mm^2.

    At 20nm you have room to fit a Tahiti XT core (2048 ALUs) + a pool of eDRAM in a 260mm^2 area, the same as the 90nm Zephyr.

    Forgo the eDRAM (which I think will happen) and you could put a bit more into the GPU. In order to do this, perhaps there will be a completely custom part, which is what I'm hoping for.

    The above speculation doesn't take into account the relevancy of the CPU. If the CPU will be less relevant than before, then it certainly won't be 176mm^2 like it was originally. You could put a bit more into the GPU still.
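The die-area reasoning above can be sketched numerically. This assumes ideal scaling, where area shrinks with the square of the node ratio; real shrinks fall well short of that, so treat the results as optimistic lower bounds. The ~365mm^2 Tahiti XT figure is the commonly cited die size, and the function here is purely illustrative.

```python
# Ideal process-shrink area scaling: area scales with (to_nm / from_nm)^2.
# Real-world shrinks are less than ideal, so these are optimistic bounds.

def scaled_area(area_mm2: float, from_nm: float, to_nm: float) -> float:
    """Die area after an idealized shrink from one node to another."""
    return area_mm2 * (to_nm / from_nm) ** 2

# Tahiti XT is roughly 365 mm^2 at 28nm; ideally shrunk to 20nm:
tahiti_20nm = scaled_area(365, 28, 20)   # ~186 mm^2
print(f"Tahiti at 20nm: ~{tahiti_20nm:.0f} mm^2")
print(f"Left over in a 260 mm^2 budget: ~{260 - tahiti_20nm:.0f} mm^2")
```

Under those (generous) assumptions, a 20nm Tahiti-class core would leave roughly 70mm^2 of a Zephyr-sized 260mm^2 budget for eDRAM, which is the headroom the post is pointing at.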

    [Image: table of rumored AMD GPU specs]
     
  5. DuckThor Evil

    Legend

    Joined:
    Jul 9, 2004
    Messages:
    5,995
    Likes Received:
    1,062
    Location:
    Finland
    That table is fake.
     
  6. Arwin

    Arwin Now Officially a Top 10 Poster
    Moderator Legend

    Joined:
    May 17, 2006
    Messages:
    18,762
    Likes Received:
    2,639
    Location:
    Maastricht, The Netherlands
    Haven't various devs suggested there would be many more cores than this gen? And could it be feasible to have 2GB of DDR3 or another cheaper type of memory alongside the GDDR5?
     
  7. IllusionistK

    Newcomer

    Joined:
    Nov 8, 2011
    Messages:
    54
    Likes Received:
    0
    You mean illegitimate? Perhaps, but here is the source.
     
  8. DuckThor Evil

    Legend

    Joined:
    Jul 9, 2004
    Messages:
    5,995
    Likes Received:
    1,062
    Location:
    Finland
    I knew the source. There are some known errors in those tables, like the die size for the Cape Verde chip, and other stuff that doesn't make much sense. It's unlikely that even AMD has all those specs set in stone for all the models, and the number of different models is absurd for such a small release window. The site itself doesn't seem very credible. The 6970 is known, and the 6950 info is likely very close to true or true; after that it's more or less guesswork from that site, IMO.

    It's better not to use that "info" as a basis for any speculation. Just wait a few more weeks for accurate info on more AMD cards.
     
  9. MarkoIt

    Regular

    Joined:
    Mar 1, 2007
    Messages:
    392
    Likes Received:
    0
    It's also worth considering that current high-end GPUs still have a number of fixed-function units that could be removed or reduced. It can't be done in the PC world yet, because it would lead to a performance catastrophe with current and "old" games, but in a console it's possible. Remove the ROPs and rasterize within the shader core, and further reduce the TMU:shader ratio.
     
  10. stiftl

    Newcomer

    Joined:
    Jun 24, 2006
    Messages:
    118
    Likes Received:
    11
    Speaking of GCN, do you mean, for example, including the rasterizer directly in the compute units and reducing the TMUs to 1 or 2 per CU? Are TMUs really overprovisioned?
     
    #8970 stiftl, Dec 27, 2011
    Last edited by a moderator: Dec 27, 2011
  11. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    GCN in its current form would not be the one to do this. The graphics export path gets its own bus to the ROPs, which saves a lot of traffic over the read/write cache. The rasterization component is not significant in size, but a CU or a quad of CUs is.
    Changing the TMU count may be marginal. The texture path sits on the general memory path to the L1, so a lot of the hardware used by the TMUs is going to stay in place regardless.
     
  12. bgassassin

    Regular

    Joined:
    Aug 12, 2011
    Messages:
    507
    Likes Received:
    0
    Someone can correct me if I'm wrong, but from what I understand you don't want the CPU to be a bottleneck at lower resolutions that don't push the GPU like higher resolutions would. So I don't know if I would say it will be less relevant.

    The ones I remember off the top of my head were someone from Epic talking about the scalability of UE4 and being ready for when 20-core CPUs are available, and someone from DICE saying they knew how to program for multi-CPU/GPU setups. The DICE one came off to me as just a non-answer to avoid breaking any NDAs.

    EDIT: As for the memory I would assume they could if they wanted, but I get the feeling none of them want a split pool of memory. My opinion of course.
     
    #8972 bgassassin, Dec 27, 2011
    Last edited by a moderator: Dec 28, 2011
  13. Heinrich4

    Regular

    Joined:
    Aug 11, 2005
    Messages:
    596
    Likes Received:
    9
    Location:
    Rio de Janeiro,Brazil
    And would things like the UVD interface and PCIe make any difference in transistor count?
     
  14. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    There's no disclosure of the exact area and transistor counts, but the UVD block in Llano is not very big. There's going to be some kind of interface connecting the GPU to the rest of the system, but as far as the high-end chips go, its contribution is dwarfed by the rest of the GPU.
     
  15. Barso

    Newcomer

    Joined:
    Nov 24, 2008
    Messages:
    67
    Likes Received:
    0
    I agree with the above post.
    I think MS will offer a higher-spec console and place Sony in a bad position.
    Personally, the better-performing multi-plats sealed the deal for me this gen.
    MS knows that the money is in exclusive DLC, and what better than to have that DLC and multi-plat titles performing even better than they do at the moment, or even DLC that couldn't run on weaker hardware.
     
  16. jlippo

    Veteran

    Joined:
    Oct 7, 2004
    Messages:
    1,744
    Likes Received:
    1,090
    Location:
    Finland
    Had a disturbing/silly thought a couple of days ago.

    IBM stated that the Wii U has a nice amount of eDRAM on its CPU.
    What if it has a GPU similar to Xenos and the daughter die were moved onto the CPU?

    It certainly would allow some silly things.. ;)
     
  17. Heinrich4

    Regular

    Joined:
    Aug 11, 2005
    Messages:
    596
    Likes Received:
    9
    Location:
    Rio de Janeiro,Brazil
    Let's wait and see how it goes, but another IMO: I hope for something like 64MB of eDRAM (shared between CPU and GPU) at 512GB/sec, almost like the POWER7 L3 cache*, and no more than 1.5GB of RAM (GDDR3 or 1T-SRAM) at 32GB/sec.

    *http://www.7-cpu.com/cpu/Power7.html
     
  18. bgassassin

    Regular

    Joined:
    Aug 12, 2011
    Messages:
    507
    Likes Received:
    0
    From what I've heard the former may be exactly that, but the amount so far seems to be 32MB. The latter also seems to be the same amount (1.5GB), but it wouldn't be 1T-SRAM. We've had discussions on whether it will be DDR3, GDDR3, or GDDR5. I'm expecting GDDR5, but I don't discount the possibility of the other two ending up in there.


    Something that also intrigues me: how would people take it if GDDR5 densities don't increase, preventing MS and Sony from reaching the 4GB that I see people expecting? I have a tough time believing they would go with a split pool to reach that amount. Has anyone heard anything about 4Gbit GDDR5 being made, or discussed?
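Why density is the sticking point: total capacity follows directly from bus width and chip density. A minimal sketch, assuming standard 32-bit-wide GDDR5 devices with one chip per channel (clamshell mode, two chips per channel, is ignored here for simplicity); the function and parameter names are illustrative.

```python
# Total GDDR5 pool size from bus width and per-chip density.
# Assumes standard 32-bit-wide GDDR5 chips, one chip per 32-bit channel.

def pool_size_gb(bus_width_bits: int, chip_density_gbit: int,
                 chip_width_bits: int = 32) -> float:
    """Total memory in GB for one chip per channel."""
    n_chips = bus_width_bits // chip_width_bits
    return n_chips * chip_density_gbit / 8  # Gbit -> GB

# A 256-bit bus with the 2Gbit chips shipping in 2011/2012:
print(pool_size_gb(256, 2))   # 8 chips -> 2.0 GB
# The same bus needs 4Gbit chips to reach 4GB:
print(pool_size_gb(256, 4))   # 8 chips -> 4.0 GB
```

In other words, on a 256-bit bus with 2Gbit parts you are capped at 2GB unless you widen the bus, double up chips per channel, or wait for 4Gbit density, which is exactly the question being asked.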
     
  19. tongue_of_colicab

    Veteran

    Joined:
    Oct 7, 2004
    Messages:
    3,773
    Likes Received:
    960
    Location:
    Japan
    How much more expensive is GDDR5 compared to DDR3? At my part-time job we sell 4GB of DDR3 for 20 euros, and this store isn't exactly the cheapest place to buy parts. I suppose a console builder won't even be paying 10 euros for 4GB if they buy directly from whoever is producing the memory. If GDDR5 is much more expensive, could we see separate memory pools again? 1GB of GDDR5 and 4GB of DDR3?
     
  20. Tahir2

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,978
    Likes Received:
    86
    Location:
    Earth
    What do you think is better, 2GB DDR3 (sys) with 2GB GDDR5 (gfx) or 4GB DDR3 (sys) with 1GB GDDR5 (gfx)?
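The bandwidth side of that trade-off is easy to put numbers on: peak bandwidth is the effective data rate times the bus width. A minimal sketch, where the clocks and bus widths chosen are illustrative assumptions of my own, not leaked specs.

```python
# Peak memory bandwidth: data_rate (GT/s) * bus_width (bits) / 8 -> GB/s.
# The specific rates and bus widths below are illustrative, not leaks.

def bandwidth_gbs(data_rate_gtps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s."""
    return data_rate_gtps * bus_width_bits / 8

ddr3 = bandwidth_gbs(1.6, 128)    # DDR3-1600 on a 128-bit bus
gddr5 = bandwidth_gbs(5.0, 256)   # 5 GT/s GDDR5 on a 256-bit bus
print(f"DDR3 pool:  {ddr3:.1f} GB/s")    # 25.6 GB/s
print(f"GDDR5 pool: {gddr5:.1f} GB/s")   # 160.0 GB/s
```

Either split gives the GPU its bandwidth from the GDDR5 pool, so the real question in the 2GB/2GB versus 4GB/1GB choice is how much data needs to live in the fast pool, not the aggregate capacity.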
     


  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.