Predict: The Next Generation Console Tech

If the Xbox 720's CPU is 16 ARM cores... I wouldn't call that a beast versus a 4-core Steamroller CPU...
 
MS is still pushing too hard on casual gaming. These specs are too weak; if they're real, Sony will win next generation hands down.
 
If the Xbox 720's CPU is 16 ARM cores... I wouldn't call that a beast versus a 4-core Steamroller CPU...

I think if we want to get an idea about Sony's PS4, we should look at the PS Vita. Judging by the Vita, we can conclude that the PS4 will be a very powerful console by any standard.

But rumors are talking about 4 GB for the next Xbox vs. 2 GB of RAM (surely higher bandwidth) for the PS4. Hopefully Sony will choose 3 GB or 4 GB for its PS4.
 
I think if we want to get an idea about Sony's PS4, we should look at the PS Vita. Judging by the Vita, we can conclude that the PS4 will be a very powerful console by any standard.

But rumors are talking about 4 GB for the next Xbox vs. 2 GB of RAM (surely higher bandwidth) for the PS4. Hopefully Sony will choose 3 GB or 4 GB for its PS4.

What's better: 2 GB of GDDR5, or 4 GB of shared DDR3/DDR4?
 
Nearly every dev out there would much prefer fewer, fatter cores, though.
Can't see them wanting more than 4 cores / 8 threads.
This was true when the Xbox 360 and PS3 were released (in 2005 and 2006), but things have changed since. Most big developers have moved away from architectures based on a one-thread-one-task paradigm (separate graphics thread, physics thread, game logic thread, etc.). Big developers now use (or are rapidly migrating to) scalable data-driven architectures that process small, fine-grained tasks/items (*). These systems scale almost linearly to larger core/thread counts (with no changes, or only very small changes, to the code).

I personally prefer good throughput. If more throughput can be achieved by including a higher number of less powerful cores (at the same designated TDP and cost), then that's my preferred choice. Memory subsystems are pretty much the limiting factor nowadays. Lower-clocked cores have lower observed memory latencies (in CPU cycles), assuming a similar memory subsystem (similar memory latency in microseconds). And two cores clocked at half speed also consume less power than one running at double the speed. Of course, if we scale the core count up drastically (64, 128, 256+ cores), maintaining cache coherency between the cores will become a huge cost.

(*) Basically all performance-critical processing done in games is done on a huge number of separate entities. You have X objects in the game world: you need to determine which of the X objects are currently visible, determine collisions for X objects, simulate physics for X objects, render X objects to screen, animate X objects, simulate AI for X objects, animate/render X particles, check X triggers, perform X ray casts, etc. Often X is in the range [100, 10000]. There's plenty of potential parallelism. One core, two cores, sixteen cores, 32 cores... it doesn't matter much if your engine is fully designed to exploit data-based parallelism.
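
To make that concrete, here's a minimal sketch of the fine-grained pattern described above: workers pull small batches of entities from a shared atomic cursor, so the same code runs unchanged on 2 or 32 cores. The Entity type, update_entity, and the batch size are illustrative stand-ins, not taken from any real engine.

```cpp
// Minimal sketch of a fine-grained, data-parallel update. Workers grab
// small batches from one atomic cursor until the work runs out.
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

struct Entity { float x, y, vx, vy; };

void update_entity(Entity& e, float dt) {
    e.x += e.vx * dt;  // stand-in for physics/animation/AI work
    e.y += e.vy * dt;
}

void parallel_update(std::vector<Entity>& entities, float dt, unsigned workers) {
    std::atomic<std::size_t> next{0};
    const std::size_t batch = 64;  // fine-grained batches keep all cores fed
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&] {
            for (;;) {
                std::size_t begin = next.fetch_add(batch);
                if (begin >= entities.size()) return;
                std::size_t end = std::min(begin + batch, entities.size());
                for (std::size_t i = begin; i < end; ++i)
                    update_entity(entities[i], dt);
            }
        });
    for (auto& t : pool) t.join();
}
```

The point of the design is that no task is welded to a dedicated thread; throughput scales with whatever core count the hardware happens to offer.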
 
Agreed, 6 GB of unified DDR4 on a 256-bit bus would do the trick nicely, and would also enable some nice cost and efficiency improvements down the road compared to current technologies that will likely be obsolete by then.

It would be a nice way to keep the TDP in check and help sell the console as having next-gen technology.

The presentation looks legit; why else would Microsoft pay lawyers to take it down? If it were way off target and the real console was much better, you'd think they would leave it up as a decoy.

It's real alright, but it's probably just one of 20 such presentations put forward by some in-house design think tank. It probably has an outline of the future vision, but we shouldn't take it as fact.

The 64 ALU figure does look like a typo; after all, they refer to Xenos as having 48 ALUs. It didn't: it had 48 vec5 unified shaders, i.e. 240 ALUs. I bet this was done by a team with sketchy knowledge of hardware components, getting shader and ALU counts mixed up, while also taking into consideration future AMD Fusion architecture and linking two GPUs together or something, with one replicating Xenos. Of course, things have moved on a great deal since mid-2010. As this seems to be going the way of the (at the time) successful, profit-making Wii, and with the financial markets in the doldrums, I bet Microsoft has changed things around.


There's definitely something in that presentation that's hitting a sore spot, though, or else they wouldn't have paid to remove it.
 
french toast, are you seriously trying to suggest DDR4 is coming in 1.5 Gbit or 3 Gbit chips? Or are you suggesting some sort of mixing and matching like nVidia did with different-sized memory modules (which IIRC wasn't such a great success in benchmarks)?
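
For reference, a back-of-envelope sketch of where those odd chip densities come from, assuming a uniform array of standard x8 or x16 DDR devices on the 256-bit bus proposed above (the layout is an assumption for illustration, not a leaked spec):

```cpp
// A uniform memory array needs (bus_width / chip_width) devices, and
// total capacity = devices * per-chip density. The 6 GB target comes
// from the post being replied to; x8/x16 are the common DDR widths.
#include <cstdio>

int main() {
    const int bus_bits = 256;
    const double total_gbit = 6 * 8.0;  // 6 GB expressed in gigabits
    const int widths[] = {8, 16};       // x8 and x16 DDR devices
    for (int chip_bits : widths) {
        int devices = bus_bits / chip_bits;
        printf("x%-2d: %2d devices -> %.1f Gbit each\n",
               chip_bits, devices, total_gbit / devices);
    }
    // x8 : 32 devices of 1.5 Gbit each
    // x16: 16 devices of 3.0 Gbit each
    // Neither is a standard power-of-two DRAM density, hence the question.
}
```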
 
What's better: 2 GB of GDDR5, or 4 GB of shared DDR3/DDR4?

Against 4 GB of DDR3, 2 GB of GDDR5 is just much better. DDR4 blurs it a bit, because the fastest available DDR4 would start to get reasonably close to "slow" GDDR5 by the time of release.
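
A rough sketch of the bandwidth math behind that claim; the bus widths and per-pin rates below are illustrative round numbers for the era, not leaked specs:

```cpp
// Peak theoretical bandwidth = (bus_width / 8) bytes * effective data
// rate per pin. Figures are illustrative only.
#include <cstdio>

double peak_gb_per_s(int bus_bits, double gbps_per_pin) {
    return bus_bits / 8.0 * gbps_per_pin;
}

int main() {
    printf("DDR3-1600, 256-bit:    %.0f GB/s\n", peak_gb_per_s(256, 1.6));
    printf("GDDR5 5 Gbps, 256-bit: %.0f GB/s\n", peak_gb_per_s(256, 5.0));
    // ~51 GB/s vs ~160 GB/s: at the same bus width, the GDDR5 option's
    // capacity deficit buys roughly 3x the bandwidth.
}
```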

data-driven architectures

The big problem with data-driven design is interactions. Making a lot of things do something in parallel is very easy when they don't need to talk to each other, but when you want to be able to look at other things in the world to decide something, you need some kind of synchronization, and locking just kills you. You can make it work with a more functional design (double buffer the world!), but that is kind of incompatible with the OO design paradigm. I'm personally a huge proponent of FP, but I can't really see a lot of the programming world making the switch.
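
A minimal sketch of the "double buffer the world" idea, assuming each system reads an immutable previous-frame snapshot and writes only its own slot of the next frame; the Agent type and step function are illustrative:

```cpp
// Every update reads the read-only previous frame and writes exactly one
// slot of the next frame, so the loop needs no locks and is trivially
// parallel. Types and the step logic are illustrative stand-ins.
#include <cstddef>
#include <utility>
#include <vector>

struct Agent { float x, y; };

// Pure step: agent i's next state is a function of the whole previous frame.
Agent step(const std::vector<Agent>& prev, std::size_t i) {
    Agent next = prev[i];
    // ...may freely inspect any other agent in prev, no synchronization...
    return next;
}

void tick(std::vector<Agent>& front, std::vector<Agent>& back) {
    back.resize(front.size());
    for (std::size_t i = 0; i < front.size(); ++i)  // safe to parallelize
        back[i] = step(front, i);
    std::swap(front, back);  // the freshly written frame becomes current
}
```

The usual costs are the memory for two copies of mutable state and a one-frame delay before entities observe each other's changes.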
 
The big problem with data-driven design is interactions. Making a lot of things do something in parallel is very easy when they don't need to talk to each other, but when you want to be able to look at other things in the world to decide something, you need some kind of synchronization, and locking just kills you. You can make it work with a more functional design (double buffer the world!), but that is kind of incompatible with the OO design paradigm. I'm personally a huge proponent of FP, but I can't really see a lot of the programming world making the switch.

There are ways to make the issue of syncs pretty straightforward and easy that generally haven't been done yet, because the actual market for them hasn't existed. Pretty much all of the current sync primitives date to bus-based multiprocessors, where they made sense because you could just step on the bus and, boom, it was done. In this day of interconnection networks, a lot of the bus-based primitives are actually harder to do than more advanced and more valuable primitives. Given modern distributed memory controller architectures, it probably also makes sense to put an ALU in the MC path as well.
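
As a software-level illustration of that contrast (a sketch only; the memory-controller-side ALU is the post's hardware speculation and isn't expressible in portable code): a bus-era lock-based counter versus a single atomic read-modify-write, the kind of primitive such an MC-side ALU could execute in place without bouncing a cache line between cores:

```cpp
// The lock version migrates both the lock and the data between cores;
// fetch_add is one atomic read-modify-write, a natural candidate for
// remote execution near the memory controller.
#include <atomic>
#include <mutex>

std::mutex counter_mutex;
long locked_counter = 0;
std::atomic<long> atomic_counter{0};

void hit_locked() {  // bus-era style: acquire, modify, release
    std::lock_guard<std::mutex> guard(counter_mutex);
    ++locked_counter;
}

void hit_atomic() {  // one RMW operation, no lock traffic
    atomic_counter.fetch_add(1, std::memory_order_relaxed);
}
```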
 
french toast, are you seriously trying to suggest DDR4 is coming in 1.5 Gbit or 3 Gbit chips? Or are you suggesting some sort of mixing and matching like nVidia did with different-sized memory modules (which IIRC wasn't such a great success in benchmarks)?

To be honest, I just threw a random RAM number out there, lol. I don't know; I'm kind of thinking we're going to need 8 GB of RAM, but that isn't going to happen. However, with the obvious move to the cloud coming in the next few years, you'll only need to kit the machine out for 3-4 years... so perhaps 4 GB would do it.
 
Against 4 GB of DDR3, 2 GB of GDDR5 is just much better. DDR4 blurs it a bit, because the fastest available DDR4 would start to get reasonably close to "slow" GDDR5 by the time of release.

A lot of people are way underselling memory capacity. There are a lot of things you can do with more memory.
 
Well, up to this point I think the leaks give Sony a pretty good idea of the kind of performance to expect in a next-gen Xbox. It seems like Sony can take this as an advantage to finalize their PS4.

I was thinking more along the lines of: they might see "our competitor is shooting for a profitable launch at just 299? Forget 4 GB of RAM, 2 GB it is... we're gonna need to keep those costs low..."
 
The 64 ALU figure does look like a typo; after all, they refer to Xenos as having 48 ALUs. It didn't: it had 48 vec5 unified shaders, i.e. 240 ALUs. I bet this was done by a team with sketchy knowledge of hardware components, getting shader and ALU counts mixed up.

Xenos was always referred to as 48 ALUs. http://www.gamespot.com/images/6095043/rumor-control-son-of-dreamcast-and-xbox-next-specs/1/ , http://www.beyond3d.com/content/articles/4/2

That's why 64 ALUs in the other GPU doesn't seem to make any sense, unless you figure MS is basically using the ancient Xenos design again and just beefing it up. That terminology isn't really even used for GPUs anymore; it'd be SPs...
 
If the Xbox 720's CPU is 16 ARM cores... I wouldn't call that a beast versus a 4-core Steamroller CPU...

Latest speculation surrounds possible Jaguar cores on both.

I read the document as "ARM or x86, haven't decided yet".

Think of small x86 Jaguar cores replacing the role of the ARM cores in this speculation.
 
[Image: microsoftxboxsurface_techspecs3.jpg]


Could we be looking right at the Xbox 3 and not know it?



At first I thought this was just fake, because I was thinking there's no way in hell a tablet has these specs. But looking at it again, it's not a tablet; the tablet is just a part of it, like the Wii U.
 