Predict: The Next Generation Console Tech

On the other hand, AMD and even more so Intel (because they are less beholden to short-term gain) are invested in the success of PCs for consumer use. Consoles are in a way destructive to their livelihoods; that has to play a part in the pricing.

That's a good point.

With the new Fusion and Sandy Bridge products coming from AMD and Intel, they actually have a decent way to combat consoles by providing a lower bound on graphics performance for developers to target.

We may just see a revival of the PC as a gaming platform.

Cheers
 
That's a good point.

With the new Fusion and Sandy Bridge products coming from AMD and Intel, they actually have a decent way to combat consoles by providing a lower bound on graphics performance for developers to target.

We may just see a revival of the PC as a gaming platform.

Cheers

Though it will take some time until they have reached sufficient market penetration, and I am not so sure the gaming and multimedia segment will keep driving the development of the x86 platform for long. The Atom does a pretty decent job today and the upcoming models even more so. It is definitely good enough for most office computing needs, which have long been x86's home turf and volume driver.

Browser games coupled with social networks are where the mass market is among consumers, and high-end graphics are not a prerequisite for that.

Basically what I am trying to say is that Atom will probably take market share away from mainstream x86 parts, so the PC market will become more heterogeneous and blurred compared to today. Tablet gaming will likely grow a lot in the coming years.
 
That's a good point.

With the new Fusion and Sandy Bridge products coming from AMD and Intel, they actually have a decent way to combat consoles by providing a lower bound on graphics performance for developers to target.

We may just see a revival of the PC as a gaming platform.

Cheers

I wonder how much the console market would interfere with the PC market. I think people who buy those low-end PCs would be kind of like the tablet/mobile phone "gamers". It's a nice extra, and they use it, but it probably makes the market broader rather than eating into high-end PC/console/handheld sales, because those offer a far better experience.

So depending on how much customization needs to be done, I suppose supplying a console CPU would be a nice way to sell another ~50 million CPUs without spending much time and effort.
 
Whether low-end PCs push PC gaming and the sales of higher-end PCs is irrelevant to the question of whether better consoles (cheaper or higher performance) hurt sales of higher-end PCs.
 
I think you'll see the same push for IP ownership by the platform holders with the next-gen consoles as you do with the PS3/360. They want to be able to control manufacturing and die shrinks; controlling platform cost late in the life cycle has, at least historically, been essential to the model. It's the reason the Xbox was so much of a money drain: it wasn't the cost at launch, it was the cost in years 4 and 5.

That pretty much rules out Intel and I'd guess AMD, although MS and Sony probably don't have any real choice when it comes to GPUs.

So we're left with some PowerPC variant, some ARM variant, or something obscure. Although at this point perhaps it's not practical to go any way except Intel/AMD.
 
I imagine a good contract could be obtained from AMD, who basically need the money! They're not in a strong position to secure a console contract and charge top dollar for years. If they are approached, it'll be with a clear understanding of passing on price reductions, and it'd still be in AMD's interest to accept the deal. Given they can offer both parts, CPU and GPU, they could get more per system and the console company could pay less than if they sourced two independent processors. So I can definitely see AMD being a strong option.
 
AMD (or maybe even Intel) may be in a position come the next cycle where they feel a thin margin on sales to MS/Sony might be justified by volumes that would soak up otherwise unused capacity. AMD's capacity is of course a little less known and a little more fluid on a forward-looking basis, but clearly both companies are used to pricing an identical chip across the spectrum in order to maximize the profits available from capacity on hand. There's of course the point, and the history, that neither Sony nor MS would want to be 'trapped' at 2012's "great" pricing come 2015. Maybe both sides could compromise and come to some sort of rolling/reviewed reduction at agreed-upon milestones.

Personally I'd love something semi-exotic again, and I'm just generally "wait and see" until we get any sort of real news beyond the PS4 architecture rumors of last year.
 
Given that it's pretty hard to get capacity allocation in fabs and AMD doesn't own any fabs themselves, what could they do, since they cannot license x86 IP without Intel's consent? Why would MS/Sony want to go through AMD when they can just contact GloFo or other foundries themselves to get the best price/capacity allocation they can get?
 
Price/capacity would be maximized, sure, but we're only considering Intel and AMD here to begin with because of potential architectural/ecosystem advantages; I don't think pricing was ever the #1 driver of this line of thinking.
 
Given that it's pretty hard to get capacity allocation in fabs and AMD doesn't own any fabs themselves, what could they do, since they cannot license x86 IP without Intel's consent? Why would MS/Sony want to go through AMD when they can just contact GloFo or other foundries themselves to get the best price/capacity allocation they can get?
AMD can put a firewall between the divisions that negotiate with GloFo if necessary ... there need be no difference between the prices Microsoft could negotiate themselves and what Microsoft could negotiate through AMD.

(Although in reality Microsoft would get together with AMD and hash it out and split the difference rather than bidding against each other with GloFo ... so actually it would be cheaper for Microsoft.)
 
A dual SDXC reader could both replace the optical medium and extend the storage capability of the console, with a fast integrated 32 GB SSD as base storage and as a loading cache.

$15 for the card reader and $30-40 for a 32 GB SSD seems realistic.

It would be less expensive than a Blu-ray drive plus hard drive combo, the console could be thinner and lighter, and no optical drive means no scratched discs and much less noise. Games could use 8-16 GB SDHC cards in the beginning and move to 32/64 GB SDXC cards when more space is needed (these will be considerably cheaper by the end of next generation). Transfer rates should also be higher: the fastest Blu-ray readers peak around 54 MB/s, whereas SDHC UHS-I and SDXC UHS-I/II peak around 300 MB/s (with real-world performance probably around 100 MB/s).
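
A back-of-envelope sketch of what those transfer rates mean for load times, using the figures quoted above (the 4 GB asset size is purely illustrative, and the rates are this post's assumptions, not measurements):

```python
# Rough load-time comparison for a hypothetical 4 GB streaming read.
# Rates are the figures quoted in the post above (assumed, not measured).
ASSET_MB = 4 * 1024  # 4 GB of game data, purely illustrative

rates_mb_s = {
    "Blu-ray (peak)": 54,
    "SD UHS-I (real-world estimate)": 100,
    "SD UHS-I/II (peak)": 300,
}

for name, rate in rates_mb_s.items():
    print(f"{name}: ~{ASSET_MB / rate:.0f} s to read 4 GB")
```

Even at the conservative real-world estimate, the cards would roughly halve a full-read load versus a peak-rate Blu-ray drive (~41 s vs ~76 s).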

As for AMD, I agree: they are the best choice for MS. They could offer the processor, the Z-RAM (instead of eDRAM) and the GPU, and already make a deal to turn it into a SoC when viable.

From a processing-power perspective, developers are asking for at least a 16-thread processor. Bulldozer could provide at most 8 threads (4 modules) within 200 mm^2 at 32nm. The POWER7 architecture looks more interesting from this POV: 4 cores can provide up to 16 threads in a reasonable die size at 32nm (probably less than 200 mm^2, considering that a full POWER7 chip today is 570 mm^2 with 8 cores + 32 MB eDRAM at 45nm).
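
A minimal scaling sketch behind that die-size estimate, assuming ideal (45/32)^2 area scaling and that dropping half the cores and eDRAM halves the die; both are optimistic simplifications, so treat the result as a lower bound:

```python
# Back-of-envelope die-area scaling for the POWER7 estimate above.
full_chip_45nm = 570.0     # mm^2: 8 cores + 32 MB eDRAM at 45 nm
scale = (45 / 32) ** 2     # ~1.98x ideal density gain at 32 nm

full_chip_32nm = full_chip_45nm / scale
half_chip_32nm = full_chip_32nm / 2   # 4 cores + ~16 MB eDRAM

print(f"8-core POWER7-class chip at 32 nm: ~{full_chip_32nm:.0f} mm^2")
print(f"4-core variant at 32 nm: ~{half_chip_32nm:.0f} mm^2")
```

That lands the 4-core variant well under 200 mm^2, with some margin left for the fact that I/O and analog portions shrink less well than logic.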
 
As for AMD, I agree: they are the best choice for MS. They could offer the processor, the Z-RAM (instead of eDRAM) and the GPU, and already make a deal to turn it into a SoC when viable.
There is no Z-RAM for AMD, there will never be Z-RAM for AMD.
AMD abandoned the tech some time ago.

I am generally skeptical of using Bulldozer as we know it in the console space.
In the case of x86, Bobcat would seem to be more likely since it is designed to be synthesizable and portable to foundry processes.
Bulldozer is a server architecture that is ill-suited for consoles. The chip is too big, the cache subsystem is a poor fit, and the acceptable yields for a server/enthusiast desktop CPU are unacceptable for a low-margin volume console CPU.
Redesigning it to meet a wholly different segment would cost serious cash.

Why AMD would do this for a few bucks a chip, and why MS would pay hundreds of millions to get a far diminished Bulldozer-lite does not compute for me.
 
A hard drive is necessary for MMOs.

Not with a custom SDXC setup.


A 64 GB SDXC setup: four 16 GB devices in RAID 0, or eight 8 GB ones. Read speed would hit 60 MB/s per device, which is around laptop hard drive speed.

If the MMORPG is only 32 GB, that leaves you 32 GB for updates and patches.
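
A quick sketch of the aggregate bandwidth that striping implies, assuming the 60 MB/s per-card figure above and ideal RAID 0 scaling (real controllers lose some of this to overhead):

```python
# Ideal RAID 0 reads: stripes come off all cards in parallel, so the
# aggregate rate is roughly per-card rate times card count.
PER_CARD_MB_S = 60  # the poster's assumed per-card read speed

for cards, size_gb in ((4, 16), (8, 8)):
    total = cards * PER_CARD_MB_S
    print(f"{cards} x {size_gb} GB cards: ~{total} MB/s aggregate read")
```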
 
There is no Z-RAM for AMD, there will never be Z-RAM for AMD.
AMD abandoned the tech some time ago.

I am generally skeptical of using Bulldozer as we know it in the console space.
In the case of x86, Bobcat would seem to be more likely since it is designed to be synthesizable and portable to foundry processes.
Bulldozer is a server architecture that is ill-suited for consoles. The chip is too big, the cache subsystem is a poor fit, and the acceptable yields for a server/enthusiast desktop CPU are unacceptable for a low-margin volume console CPU.
Redesigning it to meet a wholly different segment would cost serious cash.

Why AMD would do this for a few bucks a chip, and why MS would pay hundreds of millions to get a far diminished Bulldozer-lite does not compute for me.

Bobcat is very low-powered, though. Sure, you can put 12 or 16 of them in a very small chip, but will developers be able to split the game code into so many threads? And at that point, wouldn't it be better to go with a 12- or 16-core Bulldozer?
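
The thread-scaling worry can be made concrete with Amdahl's law; the parallel fractions below are illustrative assumptions, not measurements of any real engine:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the workload and n is the core count.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical parallel fractions for game code.
for p in (0.7, 0.9, 0.95):
    print(f"p = {p}: 16 cores -> {amdahl_speedup(p, 16):.1f}x speedup")
```

Even at 90% parallel code, 16 small cores only buy about 6.4x, which is the usual case for fewer, faster cores.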
 
There is no Z-RAM for AMD, there will never be Z-RAM for AMD.
AMD abandoned the tech some time ago.

I am generally skeptical of using Bulldozer as we know it in the console space.
In the case of x86, Bobcat would seem to be more likely since it is designed to be synthesizable and portable to foundry processes.
Bulldozer is a server architecture that is ill-suited for consoles. The chip is too big, the cache subsystem is a poor fit, and the acceptable yields for a server/enthusiast desktop CPU are unacceptable for a low-margin volume console CPU.
Redesigning it to meet a wholly different segment would cost serious cash.

Why AMD would do this for a few bucks a chip, and why MS would pay hundreds of millions to get a far diminished Bulldozer-lite does not compute for me.

What about T-RAM?
http://en.wikipedia.org/wiki/T-RAM

And yes, I agree that Bulldozer doesn't seem the right choice. At 40nm a Bobcat core takes 10 mm^2 with 512 KB of L2. At 28nm (where they can also use a bulk process) a 16-24 core processor could be feasible within 150-200 mm^2. AMD could also recycle the architecture for other many-core processors in different markets (cloud computing, datacenters).
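
A minimal sketch of that area estimate, again assuming ideal 40 nm to 28 nm scaling and ignoring uncore, interconnect and I/O (both generous assumptions):

```python
# Bobcat core + 512 KB L2 is ~10 mm^2 at 40 nm (figure from the post).
core_40nm = 10.0
scale = (40 / 28) ** 2          # ~2.04x ideal density gain at 28 nm

core_28nm = core_40nm / scale   # ~4.9 mm^2 per core
for cores in (16, 24):
    print(f"{cores} cores: ~{cores * core_28nm:.0f} mm^2 before uncore/IO")
```

At roughly 78 mm^2 for 16 cores and 118 mm^2 for 24, the 150-200 mm^2 budget leaves room for interconnect, memory controllers and the like.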
 
Bobcat is very low-powered, though. Sure, you can put 12 or 16 of them in a very small chip, but will developers be able to split the game code into so many threads? And at that point, wouldn't it be better to go with a 12- or 16-core Bulldozer?

That requires an MCM of two chips, with a likely combined die area of ~600 mm^2.
AMD, if it is lucky, will be selling those MCMs in the server market at prices up to two orders of magnitude higher than it would get as a console component.
Microsoft would balk at even paying the production cost, and AMD would laugh at selling a G34-socket product for 30 bucks.



As for T-RAM, not much info exists on plans to integrate it in an actual AMD product.
 
There is no Z-RAM for AMD, there will never be Z-RAM for AMD.
AMD abandoned the tech some time ago.

I am generally skeptical of using Bulldozer as we know it in the console space.
In the case of x86, Bobcat would seem to be more likely since it is designed to be synthesizable and portable to foundry processes.
Bulldozer is a server architecture that is ill-suited for consoles. The chip is too big, the cache subsystem is a poor fit, and the acceptable yields for a server/enthusiast desktop CPU are unacceptable for a low-margin volume console CPU.
Redesigning it to meet a wholly different segment would cost serious cash.

Why AMD would do this for a few bucks a chip, and why MS would pay hundreds of millions to get a far diminished Bulldozer-lite does not compute for me.
Do you think it would make sense to "bulldozerize" Bobcat cores? It's something that has bothered me since we got more info on the Bobcat family. When we look at the specs, we see that one Bobcat core @1.2 GHz consumes as much power as the two-core version running @1 GHz.

As I see it, AMD should have passed on the single-core Bobcat and gone with modules, as with Bulldozer.
I can't help but believe that AMD could have achieved something even more impressive than Bobcat.
Is there a reason for AMD to pass on the option? Or did they simply not have the resources for the project? (Bulldozer, Llano, Bobcat, GPUs, etc.: they are spread thin... too thin. I don't know how Bulldozer or Llano will fare, but I have a gut feeling that AMD should have focused on only one project, and that it should have been Bobcat.)
"Bulldozerizing" Bobcat would bring a lot of advantages for a low-power device, or am I misled (happens often)? The chip would be 50% bigger than a single-core version but would offer 80% of the performance of a two-core system (see the sketch after this post). Actually it could be even better, as AMD could have gone with a single 128-bit-wide SIMD unit. Single-threaded performance would still be the best in town relative to power consumption. Either way, AMD could have invested the saved die space in more cache or GPU SIMDs, or in improving things here and there.
I'm curious to see where AMD is heading with their Bobcat 2.0 :)

For a console I could see the SIMD pumped up (256 bits wide instead of 128). Do you think it would be worth it to go with more than two modules?
I know some people here want a big, huge system, but as I see it such a chip, while tiny and power-efficient (even clocked @1.6 GHz), would beat the crap out of today's console CPUs. (When I look at benchmarks of Atom vs. other x86 CPUs, and on top of that consider the Pentium 4 vs. more efficient x86 chips... I feel like Xenon has to suck pretty badly; still, the 360 pushes what I consider acceptable graphics.)
For me a working Fusion chip for a console now (so @40nm) would be:
two modules, 256 KB of L2 per module, 3 MB of L3 shared by the CPU and GPU (AMD should borrow Intel's SNB "uncore"), a "kurt class" GPU, and a 128-bit bus to 2 GB of GDDR5.
@32/28nm I'm not sure "adding more" would be the best way to go; making things better sounds like the wiser move to me.
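
A minimal perf-per-area sketch of the module trade-off mentioned above, taking the poster's assumed figures at face value (a module is 50% bigger than one core and delivers 80% of two cores' performance):

```python
# Normalize to one Bobcat core: area = 1.0, performance = 1.0.
# The dual-core and module figures are the post's assumptions,
# including ideal linear scaling for two independent cores.
single = {"area": 1.0, "perf": 1.0}
dual = {"area": 2.0, "perf": 2.0}                    # two independent cores
module = {"area": 1.5, "perf": 0.8 * dual["perf"]}   # "bulldozerized" pair

for name, chip in (("single", single), ("dual", dual), ("module", module)):
    print(f"{name}: perf/area = {chip['perf'] / chip['area']:.2f}")
```

On those assumptions the module lands at ~1.07 perf/area versus 1.00 for either alternative, which is the whole argument for sharing front-end/FPU hardware between a pair of cores.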
 
Bobcat's CPU performance is somewhat below an Athlon Neo dual-core.
When the 360 came out, its 3.2 GHz tri-core was equated (roughly) with ~2 GHz desktop CPUs.

If the rough equivalence holds true, current Bobcats are already too slow, as the Athlon Neo at 1.3GHz would lag Xenon significantly.
A module of Bobcat cores would be less acceptable. The cores are already minuscule, and since they are already narrow they don't have as much hardware to share in the front end or FPU. A 2-wide front end is going to have far better utilization on its own than the 4-wide front end Bulldozer shares between cores.

The question is what would happen if the TDP limits were relaxed, and how flexibly the core can be redesigned since its design philosophy is to exploit more automated methods and generic circuit implementation.

My concern at this point is that generic watered-down x86 cores do not offer much over the relatively unimpressive console cores, so what's the compelling reason?
 