Predict: The Next Generation Console Tech

Discussion in 'Console Technology' started by Acert93, Jun 12, 2006.

Thread Status:
Not open for further replies.
  1. Love_In_Rio

    Veteran

    Joined:
    Apr 21, 2004
    Messages:
    1,627
    Likes Received:
    226
    If the Xbox 720's CPU is 16 ARM cores... I wouldn't call that a beast versus a 4-core Steamroller CPU...
     
    #12481 Love_In_Rio, Jun 18, 2012
    Last edited by a moderator: Jun 18, 2012
  2. HEYDOL

    Newcomer

    Joined:
    Jun 12, 2012
    Messages:
    4
    Likes Received:
    0
    MS is still pushing too hard on casual gaming. These specs are too weak; if they are real, Sony will win next generation hands down.
     
  3. DoctorFouad

    Newcomer

    Joined:
    Sep 30, 2011
    Messages:
    195
    Likes Received:
    0
    I think if we want to get an idea about Sony's PS4, we should look at the PS Vita. If we look at the PS Vita, we can conclude that the PS4 will be a very powerful console by any standard.

    But rumors are talking about 4 GB for the next Xbox vs. 2 GB of RAM (surely higher bandwidth) for the PS4. Hopefully Sony will choose 3 GB or 4 GB for its PS4.
     
  4. HEYDOL

    Newcomer

    Joined:
    Jun 12, 2012
    Messages:
    4
    Likes Received:
    0
    What's better: 2 GB of GDDR5, or 4 GB of shared DDR3/DDR4?
     
  5. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
    This was true when the Xbox 360 and PS3 were released (in 2005), but things have changed since. Most big developers have moved away from architectures based on a one thread, one task paradigm (separate graphics thread, physics thread, game logic thread, etc.). Big developers now use (or are rapidly migrating to) scalable data-driven architectures that process small, fine-grained tasks/items (*). These systems scale almost linearly to larger core/thread counts (with no, or only very small, changes required to code).

    I personally prefer good throughput. If more throughput can be achieved by including a higher number of less powerful cores (within the same designated TDP and cost), then that's my preferred choice. Memory subsystems are pretty much the limiting factor nowadays. Lower-clocked cores have lower observed memory latencies (in CPU cycles) assuming a similar memory subsystem (similar memory latency in microseconds). And two cores clocked at half speed also consume less power than one running at double the speed. Of course, if we scale the core count up drastically (64, 128, 256+ cores), maintaining cache coherency between the cores will become a huge cost.

    (*) Basically all performance-critical processing done in games is applied to a huge number of separate entities. You have X objects in the game world: you need to determine which of the X objects are currently visible, determine collisions of X objects, simulate physics for X objects, render X objects to screen, animate X objects, simulate AI for X objects, animate/render X particles, check X triggers, perform X ray casts, etc. Often X is in the range [100, 10000]. There's plenty of potential parallelism. One core, two cores, sixteen cores, 32 cores... it doesn't matter much if your engine is fully designed to exploit data-based parallelism.
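    [Editor's note] The fine-grained, data-driven task idea above can be sketched as a toy Python illustration (not engine code; the per-entity kernel, entity data, and chunk size are invented, and Python threads only model the structure, since a real engine would use a native job system):

```python
# Toy sketch of data-driven task parallelism: one small kernel applied to
# many independent entities, split into fine-grained chunks for a worker
# pool. Entity data and chunk size are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

def update_entity(entity):
    pos, vel = entity
    return pos + vel  # tiny per-entity kernel, e.g. integrate position

# X entities, in the typical range of [100, 10000] mentioned above
entities = [(float(i), 0.5) for i in range(10000)]

def parallel_update(entities, workers=4, chunk=256):
    # The same code runs unchanged whether `workers` is 1, 4, or 32.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(update_entity, entities, chunksize=chunk))

updated = parallel_update(entities)
```

    The key property is that the worker count appears only as a tuning parameter; the game code itself does not change as cores are added, which is the near-linear scaling the post describes.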
     
  6. french toast

    Veteran

    Joined:
    Jan 5, 2012
    Messages:
    1,667
    Likes Received:
    9
    Location:
    Leicestershire - England
    Agreed, 6 GB of unified DDR4 on a 256-bit bus would do the trick nicely, and would also enable some nice cost and efficiency improvements down the road compared to current technologies that will likely be obsolete.

    It would be a nice way to keep the TDP in check and help sell the console as having next-gen technology.

    The presentation looks legit; why else would Microsoft pay lawyers to take it down if it wasn't? If it was way off target and the real console was much better, you would think they would leave it up as a decoy.

    It's real alright, but it's probably just one of 20 such presentations put forward by some in-house design think tank. It probably contains an outline of the future vision, but we shouldn't take it as fact.

    The 64 ALU figure does look to be a typo; after all, they refer to Xenos as being 48 ALUs, and it wasn't: it was 48 Vec5 unified shaders, i.e. 240 ALUs. I bet this was done by a team with sketchy knowledge of hardware components, mixing up shaders and ALU counts, while also taking into consideration future AMD Fusion architecture and linking two GPUs together or something, with one replicating Xenos. Of course, things have moved on a great deal since mid-2010. This seems to have been going the way of the (at the time) successful, profit-making Wii; with the financial markets in the doldrums, I bet Microsoft has changed things around.


    There's definitely something in that presentation that's hitting a sore spot, though, else they wouldn't have paid to remove it.
     
  7. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,245
    Likes Received:
    4,465
    Location:
    Finland
    french toast, are you seriously suggesting DDR4 is coming in 1.5 Gbit or 3 Gbit chips? Or are you suggesting some sort of mix-and-match like NVIDIA did with different-sized memory modules (which IIRC wasn't such a great success in benchmarks)?
     
  8. tunafish

    Regular

    Joined:
    Aug 19, 2011
    Messages:
    627
    Likes Received:
    414
    Against 4 GB of DDR3, 2 GB of GDDR5 is just much better. DDR4 blurs it a bit, because the fastest available DDR4 would start to get reasonably close to "slow" GDDR5 by the time of release.

    The big problem with data-driven design is interactions. Making a lot of things do something in parallel is very easy when they don't need to talk to each other, but when you want to look at other things in the world to decide something, you need some kind of synchronization, and locking just kills you. You can make it work with a more functional design (double buffer the world!), but that is kind of incompatible with the OO design paradigm. I'm personally a huge proponent of FP, but I can't really see a lot of the programming world making the switch.
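    [Editor's note] The "double buffer the world" idea can be sketched as a minimal Python toy (the proximity rule and entity data are invented; the point is that each tick reads only the previous snapshot, so no locking is needed even if the per-entity work were parallelized):

```python
# Double-buffered world update: each tick reads from an immutable snapshot
# and writes a fresh state, so entities can freely inspect other entities'
# previous state without any locks.

class World:
    def __init__(self, positions, velocities):
        self.positions = positions
        self.velocities = velocities

def step(read, dt=1.0):
    """Produce the next world state from the read-only snapshot `read`."""
    new_pos = [p + v * dt for p, v in zip(read.positions, read.velocities)]
    # Toy "interaction": an entity halves its speed if any other entity
    # was within distance 1.0 in the previous snapshot.
    new_vel = []
    for i, v in enumerate(read.velocities):
        near = any(abs(read.positions[i] - read.positions[j]) < 1.0
                   for j in range(len(read.positions)) if j != i)
        new_vel.append(v * 0.5 if near else v)
    return World(new_pos, new_vel)

world = World(positions=[0.0, 0.5, 10.0], velocities=[1.0, 1.0, 1.0])
world = step(world)  # entities 0 and 1 start within 1.0 of each other
```

    Because `step` never mutates the snapshot it reads, every entity can inspect every other entity's previous state concurrently; the cost is keeping two copies of the world alive at once.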
     
    #12488 tunafish, Jun 18, 2012
    Last edited by a moderator: Jun 18, 2012
  9. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,641
    Likes Received:
    64
    There are ways to make the issue of syncs pretty straightforward and easy that generally haven't been done yet, because the actual market for them hasn't existed. Pretty much all of the current sync primitives date to bus-based multiprocessors, where they made sense because you could just step on the bus and boom, it's done. In this day of interconnection networks, a lot of the bus-based primitives are actually harder to implement than more advanced and more valuable primitives. Given modern distributed memory controller architectures, it probably also makes sense to put an ALU in the MC path as well.
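    [Editor's note] As a rough illustration of the kind of primitive meant here, the sketch below uses fetch-and-add to hand out work without a shared queue (Python sketch only; the hardware atomic is emulated with a lock, and the chunk size and workload are invented):

```python
# Fetch-and-add style work distribution: a single atomic counter hands out
# index ranges to workers, replacing a heavier lock-protected work queue.
import threading

class FetchAddCounter:
    """Emulates a hardware fetch-and-add primitive with a lock."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def fetch_add(self, n):
        with self._lock:
            old = self._value
            self._value += n
            return old

def worker(counter, items, results, chunk=64):
    while True:
        start = counter.fetch_add(chunk)  # claim the next chunk of indices
        if start >= len(items):
            return
        for i in range(start, min(start + chunk, len(items))):
            results[i] = items[i] * 2  # per-item work

items = list(range(1000))
results = [0] * len(items)
counter = FetchAddCounter()
threads = [threading.Thread(target=worker, args=(counter, items, results))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    On real hardware, a single fetch-and-add executed near the memory controller would replace the lock entirely, which is the kind of cheap interconnect-level primitive being argued for.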
     
  10. DieH@rd

    Legend

    Joined:
    Sep 20, 2006
    Messages:
    6,387
    Likes Received:
    2,411
    I'm okay with that... but memory will be lower, and the CPU will most probably be 2 modules / 4 threads.
     
  11. french toast

    Veteran

    Joined:
    Jan 5, 2012
    Messages:
    1,667
    Likes Received:
    9
    Location:
    Leicestershire - England
    To be honest, I just threw a random RAM number out there, lol. I don't know; I'm kind of thinking we are going to need 8 GB of RAM, but that isn't going to happen. However, with the obvious move to the cloud coming in the next few years, you will only need to kit the machine out for 3-4 years, so perhaps 4 GB would do it.
     
  12. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,641
    Likes Received:
    64
    A lot of people are way underselling memory capacity. There are a lot of things you can do with more memory.
     
  13. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,791
    Likes Received:
    1,596
    I was thinking more along the lines of: they might see "our competitor is shooting for a profitable launch at just 299??? Forget 4 GB of RAM, 2 GB it is... we're gonna need to keep those costs low..."
     
  14. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,791
    Likes Received:
    1,596
    Xenos was always referred to as 48 ALUs: http://www.gamespot.com/images/6095043/rumor-control-son-of-dreamcast-and-xbox-next-specs/1/ , http://www.beyond3d.com/content/articles/4/2

    That's why 64 ALUs in the other GPU doesn't seem to make any sense, unless you figure MS is basically using the ancient Xenos design again and just beefing it up. That terminology isn't really even used for GPUs anymore; it'd be SPs...
     
  15. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,791
    Likes Received:
    1,596
    Latest speculation surrounds possible Jaguar cores on both.

    I read the document as "ARM or x86, haven't decided yet".

    Think of small x86 Jaguar cores replacing the role of ARM cores, in this speculation.
     
  16. HEYDOL

    Newcomer

    Joined:
    Jun 12, 2012
    Messages:
    4
    Likes Received:
    0
  17. babybumb

    Regular

    Joined:
    Dec 9, 2011
    Messages:
    609
    Likes Received:
    24
  18. Kb-Smoker

    Regular

    Joined:
    Aug 26, 2005
    Messages:
    614
    Likes Received:
    1
  19. anexanhume

    Veteran

    Joined:
    Dec 5, 2011
    Messages:
    2,078
    Likes Received:
    1,535
    Does everyone agree that x86 cores are a superior path forward over PPC if Xenos is integrated to maintain BC?
     
  20. onQ

    onQ
    Veteran

    Joined:
    Mar 4, 2010
    Messages:
    1,540
    Likes Received:
    56
    [image]

    Could we be looking right at the Xbox 3 and not know it?



    At first I thought this was fake because I figured there was no way in hell a tablet has these specs, but looking at it again, it's not a tablet; the tablet is just a part of it, like the Wii U.
     
