Predict: The Next Generation Console Tech

It is mating season for the singing moles of Redmond, fresh off their migration back from Mountain View, and they are singing about the XBox Next chip. What’s more, they have a name and a date.

The moles are cooing the name softly while they think up new ways to transition Microsoft’s business model from monopoly abuse to patent trolling. The name they are singing in their tunnels sounds like “Obed”, but the spelling might be a bit off due to echoing in the tunnels.

Obed, it seems, is an SoC: CPU + GPU and of course eDRAM. It sounds an awful lot like an evolutionary version of the current XBox 360 chip. Some say it is an x86/Bulldozer part, but everything we have been hearing for a long time says that the chip is going to be a PPC variant. In any case, the GPU is definitely made by AMD/ATI, and IBM has a big hand in the SoC design.

The moles all say that production is set for late 2012, possibly the early days of 2013, basically once the moles get settled into Mountain View for the winter. They will give the thumbs up or down on silicon based on parts they get back in Q1 of 2012, if all goes well that is. That puts production of the XBox Next in the late spring or early summer of 2013, just in time for singing mole mating season. Nothing gets a sow's attention like a new SoC. S|A


http://semiaccurate.com/2011/08/15/n...name-and-date/

Hmm..
 
This has been attempted at various times, and it can very easily end badly.
It's not a different approach if we recall what RISC vs CISC was about, or things like the attempts to market an architecture that could run Java bytecode; some attempts more modestly targeted accelerating parts of it.
Attempts at closing the semantic gap between machine language and higher level code have been made for decades.

The problem with baking language primitives into the ISA is that the more generally capable the instruction, the less likely it is to be a good fit for any specific instance of its use.
Silicon is not very flexible, and the hardware and instruction set have to include a significant number of provisions for any variation or behavior that may arise.

If the basis of the hardware were more configurable, it might allow an FPGA-type solution to elide unnecessary parts of the implementation.
Straightline performance and power for many typical loads do not favor FPGAs as they exist right now, since they use scads of silicon and internal interconnect to be configurable.

I remember reading about the mysterious Intel iAPX 432; fascinating, but the implementation was quite a mess (and, tragically, there was low-hanging fruit that could have made it perform much better).

Wow, it even includes support for garbage collection, and that was in 1981!

wikipedia said:
Software running on the 432 does not need to explicitly deallocate objects that are no longer needed, and in fact no method is provided to do so. Instead, the microcode implements part of the marking portion of Edsger Dijkstra's on-the-fly parallel garbage collection algorithm (a mark-sweep style collector). The entries in the system object table contain the bits used to mark each object as being white, black, or grey as needed by the collector.

The iMAX-432 operating system includes the software portion of the garbage collector.
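
Out of curiosity, here is what that mark phase boils down to: a minimal, stop-the-world tri-colour marking sketch in Python (a toy for illustration only, not the 432's actual object-table format or the concurrent on-the-fly variant the quote describes).

```python
# Toy tri-colour (white/grey/black) mark-sweep, the same idea the 432's
# microcode implements in hardware. Stop-the-world version for clarity.

WHITE, GREY, BLACK = 0, 1, 2  # unvisited / queued for scanning / fully scanned

class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []          # outgoing references to other Objs
        self.colour = WHITE

def mark(roots):
    """Colour every object reachable from the roots black."""
    grey = list(roots)
    for o in grey:
        o.colour = GREY
    while grey:
        o = grey.pop()
        for child in o.refs:
            if child.colour == WHITE:
                child.colour = GREY
                grey.append(child)
        o.colour = BLACK

def sweep(heap):
    """Keep everything marked black (resetting it), drop what stayed white."""
    live = []
    for o in heap:
        if o.colour == BLACK:
            o.colour = WHITE    # reset for the next collection cycle
            live.append(o)
    return live

# a and b are reachable from the root; c is garbage
a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs.append(b)
heap = [a, b, c]
mark([a])
print([o.name for o in sweep(heap)])   # ['a', 'b']
```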

As for your reconfigurability idea, it makes me think of the Transmeta Crusoe.
 
The moles are cooing the name softly while they think up new ways to transition Microsoft’s business model from monopoly abuse to patent trolling.

I guess Charlie likes MS as much as he does Nvidia :LOL:

The name they are singing in their tunnels sounds like “Obed”, but the spelling might be a bit off due to echoing in the tunnels.

Obed, it seems, is an SoC: CPU + GPU and of course eDRAM. It sounds an awful lot like an evolutionary version of the current XBox 360 chip. Some say it is an x86/Bulldozer part, but everything we have been hearing for a long time says that the chip is going to be a PPC variant. In any case, the GPU is definitely made by AMD/ATI, and IBM has a big hand in the SoC design.

This passage doesn't make too much sense, besides being inconsistent. Why would IBM have a heavy hand in the design if it were possibly an all-AMD part? (That part only makes sense if the CPU is IBM.)

It all sounds somewhat reasonable, I suppose. I don't like the whole SoC idea, as it implies a less powerful chip, but then again, with next-gen consoles presumably working within a 200 watt TDP envelope that precludes a 300 watt discrete GPU, it may not hurt at all. I would hope it is one beefy SoC compared to all that came before, and if it includes any Bulldozer it certainly would be.

It's another rumor to file away, I guess. It corresponds somewhat with the HardOCP report (although that one swore it was a Fusion, and thus all-AMD, part).

The timeline suggests late 2013, which I do think is possible. I do not think the reports of a 2012 launch can be right, because of the existence of Halo 4.

PS: the last several times I have clicked any SemiAccurate links (most recently several weeks ago), I get a 404. Is the site dead or is it a problem on my end?
 
The timeline suggests late 2013, which I do think is possible.

Late 2013 is good, unless we hit a new deep recession worldwide, in which case all bets are off (again).
 
This passage doesn't make too much sense, besides being inconsistent. Why would IBM have a heavy hand in the design if it were possibly an all-AMD part? (That part only makes sense if the CPU is IBM.)


I agree, but there are rumors (on forums) that IBM is involved, and coupled with the CGPU/APU paradigm and the rise of SoCs in cell phones/tablets these days, the most likely outcome is an SoC with a PPC CPU + an AMD GPU.
 
Perhaps you could then expand on what exactly the "point" is?

The point had nothing to do with latency or framerate.
I was simply saying having a large number of flops does not a good CPU for games make.
A good CPU for games should efficiently run game code without placing an undue burden on development, because anything that makes it difficult to produce good code cuts down on the amount of iteration that can take place in a game, and indirectly impacts quality. If a processor is esoteric in design, there had better be a big payoff to justify it.

The meta point is that you can't look at parts of a game in isolation; a game isn't a collection of technologies and assets, it's a whole. And you have to understand the development process when you're looking at designing "good" hardware, as much as the pure technology. Increasing the burden on engineering has to have a payoff or it's simply not worthwhile. You have to be somewhat pragmatic when it comes to development on large teams.

We used to discuss whether using a GC'd language for all of the non-critical code would result in better or worse performance overall. On the face of it, looking at the pure technology side, you would say worse, but you have to understand that on a team of >20 engineers, perhaps 3-6 are experienced hardcore low-level coders, and if you can free them from having to fix the random memory writes and null pointer dereferences that get introduced, they can spend that time optimizing or improving other areas of the game. It could end up being a net performance win.

When game teams were 1 or 5 engineers, and games were a couple of hundred thousand lines of code, it wasn't an issue.
A lot of game technology today is about data driving and allowing artists/designers to iterate; it's widely accepted that more iterations in those fields produce higher quality. We do a piss poor job of doing the same for engineers. In the 80's I could assemble a game (all of it) in under 10 seconds; I worked on one game not so long ago that had a worst-case link time of 20 minutes.

I personally like banging bits, and I like playing with new and obscure processor designs; on my personal projects I still look at the disassembly the compiler produces for "critical" code sections, and if I don't like it I rewrite the offending functions in assembler. On large game teams I'm more pragmatic: I'm looking at what produces the biggest win for the game given deadlines and personnel.

If you're designing processors for games, you should look at game code and figure out what needs improvement. I have no proof of this, because I haven't seen the majority of game code written across PS3/X360, but I don't think what most game code is crying out for is increased computation density. Certainly there are aspects of code where computation density is a big win, but I think most of those lend themselves to GPGPU-like computation models.

As a total aside, I'm very old school when it comes to framerate: to me, if it's not 60Hz it's the wrong set of compromises. However, I've seen a couple of studies recently on the 60/30 debate and how they affect review scores and sales. The short version is they don't; for the most part, people buying/reviewing games don't care.
 
As a total aside, I'm very old school when it comes to framerate: to me, if it's not 60Hz it's the wrong set of compromises. However, I've seen a couple of studies recently on the 60/30 debate and how they affect review scores and sales. The short version is they don't; for the most part, people buying/reviewing games don't care.

Could you please point/link to any of these studies? I have never seen a good one.
 
We used to discuss whether using a GC'd language for all of the non-critical code would result in better or worse performance overall. On the face of it, looking at the pure technology side, you would say worse, but you have to understand that on a team of >20 engineers, perhaps 3-6 are experienced hardcore low-level coders, and if you can free them from having to fix the random memory writes and null pointer dereferences that get introduced, they can spend that time optimizing or improving other areas of the game. It could end up being a net performance win.

You can still get memory leaks and null pointers using a GC. Also, when working with fixed memory, like on a console, you always have to think about memory allocation. Just because you are using a GC doesn't mean you can allocate a gig of memory when you only have a limited amount available.
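
Right; here's a contrived Python sketch of both failure modes (made-up names, nothing console-specific): the collector can only reclaim what is unreachable, so a forgotten cache still leaks, and dereferencing a null/None still blows up at runtime.

```python
# A GC only reclaims what nothing references any more, so a forgotten
# container still "leaks", and None dereferences still crash at runtime.

_cache = {}                            # module-level, lives for the whole run

def load_asset(name):
    data = bytearray(1024 * 1024)      # pretend this is a 1 MB asset
    _cache[name] = data                # oops: never evicted, so the GC can
    return data                        # never reclaim any of it

def update(player):
    return player.position             # the managed-language version of a
                                       # null pointer dereference if player is None

for i in range(64):
    load_asset(f"asset_{i}")           # ~64 MB held live despite the GC

try:
    update(None)
except AttributeError as e:
    print("still crashes:", e)         # 'NoneType' object has no attribute 'position'
```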
 
This isn't going to happen. My bet is on 2GB. 4 GB is a pretty unlikely option IMO but it might happen. 8GB = zero chance.

My bet is on 2GB as well. No chance of crossing into 64-bit address space - too much cost and porting hassle - Sony already tried that with PS3 and we all know what happened.
 
This isn't going to happen. My bet is on 2GB. 4 GB is a pretty unlikely option IMO but it might happen. 8GB = zero chance.

IBM re-engineered the Cell memory controller for DDR2; Sony could do the same and go with 8 GB of DDR3. It should match the current XDR implementation in terms of bandwidth.

If they go with PVR TBDR, maybe they can get away with using DDR3 for video memory too; another 8 GB of DDR3 there wouldn't be too expensive in the 2013-14 time frame.
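
The back-of-envelope numbers do line up, at least on a wide enough bus; quick math below (the DDR3-1600 speed grade and the 128-bit width are my assumptions, not anything from the rumour):

```python
# Back-of-envelope: PS3's XDR main memory vs. plain DDR3 on a wider bus.
# DDR3-1600 and the 128-bit width are assumptions for illustration.

def bandwidth_gb_s(transfers_per_sec, bus_width_bits):
    return transfers_per_sec * (bus_width_bits / 8) / 1e9

xdr_ps3   = bandwidth_gb_s(3.2e9,  64)    # 3.2 GHz effective, 64-bit -> 25.6 GB/s
ddr3_1600 = bandwidth_gb_s(1.6e9, 128)    # DDR3-1600 on 128-bit      -> 25.6 GB/s

print(f"PS3 XDR:        {xdr_ps3:.1f} GB/s")
print(f"DDR3-1600 x128: {ddr3_1600:.1f} GB/s")
```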
 
IBM re-engineered the Cell memory controller for DDR2; Sony could do the same and go with 8 GB of DDR3. It should match the current XDR implementation in terms of bandwidth.

If they go with PVR TBDR, maybe they can get away with using DDR3 for video memory too; another 8 GB of DDR3 there wouldn't be too expensive in the 2013-14 time frame.

Please, god of gamers, listen to this guy ;) :oops:

This is why my dream next-gen game machine (dream mode here, and thinking of games at 720p...) begins with a PowerVR Series 6 with 24/32 cores, etc. PowerVR's TBDR doesn't need much bandwidth, and 8 GB of DDR3 RAM on a 128-bit interface with 8Gbit modules might be possible at low cost (quick math on that below).



...etc. (Shifty's idea: include an A15 as the "PPU" + 6/8 SPUs)
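
For the 8 GB / 128-bit / 8Gbit claim above, the chip count works out neatly, assuming x16 parts (my assumption, purely illustrative):

```python
# Capacity/bus-width sanity check for "8 GB DDR3 on 128 bits with 8 Gbit modules".
# The x16 chip organisation is an assumption for the sake of the arithmetic.

chip_capacity_gbit = 8
chip_width_bits    = 16                        # x16 DDR3 parts
chips              = 8

total_gb  = chips * chip_capacity_gbit / 8     # 8 x 8 Gbit = 8 GB
bus_width = chips * chip_width_bits            # 8 x 16 bits = 128-bit

print(f"{total_gb:.0f} GB on a {bus_width}-bit bus")
```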
 