Predict: The Next Generation Console Tech

I think you're right, but I also think that the definition of "gaming PC" is growing fast. That's thanks to browser based games and things like the Sims, and also the big jump in integrated graphics. The fastest Llano models are console beaters, and even the better i5 integrated stuff will let you enjoy a pretty good game of Portal 2.

Llano would be a game changer if AMD could make the damn thing, and Ivy Bridge graphics should be approaching console level and will be force fed to almost everyone. Combined with the gaming push that MS are expected to make with Windows 8, I think (hope) that the idea of a "gaming PC" as something with a discrete graphics card will be changed permanently!

The fastest Llano doesn't have the memory bandwidth (within reason) to be a console beater, especially when today's games are not as well optimized as they could be. What is remarkable is that lower end Llano and even Sandy Bridge (or better yet, Ivy Bridge) at least get the most mainstream computer users, even those uninterested in PC gaming, in the door if they really want to delve into the possibility of playing games on their laptop or whatnot. That, gentlemen, is very significant. It's only too bad that it's taken this long for IGPs to reach this level. If very fast APUs really take off and the demand for more graphics horsepower becomes significant, we might see the mainstream implementation of triple or quad channel memory setups. Two 64-bit channels, even at 32+ GB/s with DDR3-2xxx, are not enough, so either a new RAM solution, faster speeds, or more channels will become necessary. Even at the expense of power consumption, it may be necessary for higher end laptops to do this if the full capabilities of the APU are to be realized. You just can't add a graphics card to a laptop.
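Rough back-of-the-envelope math for that bandwidth figure (assuming two 64-bit channels of DDR3-2133; just a sketch of where the 32+ GB/s comes from):

```python
# Dual-channel DDR3 bandwidth, back of the envelope (DDR3-2133 assumed)
channels = 2
bytes_per_transfer = 64 // 8      # 64-bit channel width
transfers_per_sec = 2133e6        # DDR3-2133 -> 2133 MT/s

bandwidth_gb_s = channels * bytes_per_transfer * transfers_per_sec / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")   # ~34.1 GB/s, i.e. the "32+ GB/s" above
```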
 
Well, what's making me really pause about this 4770 belief is that the 360 Slim already consumes 90W at load, so... it'd have to be some sort of dark nega-energy magic to put an 80W GPU inside the WiiU. That's not even considering the CPU!

I've mentioned it moons ago, but something like a 400 ALU part (e.g. 5670, 60W @ load) would make a lot more sense. Even downclocking it to 500MHz, you would still get something that can go far beyond Xenos' capabilities whilst still being thermally viable when you still have the supposed 360-like CPU to deal with.

And speaking of the CPU in relation to the "+50% on paper" comment, there will likely be larger caches (it'd be really really hard to not exceed 1MB L2 total *cough*) as well as OOOE/quad-threads (assuming any truth to Power 7 inheritance) that push up the system's capabilities and power consumption.

Agreed on all counts ...

My only point with the prior posts WRT Wuu GPU power was that the 4770 is the most we can expect from Wuu.

As far as the reasoning behind not using a newer dx11 gpu, perhaps it was cheaper to license the older 4xxx series, thus Nintendo (being the cheap bastards they are!) will likely look in that direction.

The 4770 would likely be cut back considerably, or have its clocks scaled back, to fit into Wuu's smaller TDP though.
 
Yes, and IBM is closer to AMD than Intel in terms of manufacturing prowess.
Manufacturing is important but it is not the only factor; design matters. Bulldozer has been a disaster, and other designs could probably be far more competitive on the same manufacturing process. The question then is how good IBM's latest cores are compared to Intel's, and which, if any, are viable for consoles in modified form.
This is why I don't see Cell in a next gen console. Cell's capabilities would overlap with a GCN or Fermi GPU. Go with a smallish OoO CPU with predictable performance like the i3-2100 (or as close as IBM/AMD can get to it) and a larger, very flexible GPU. Something based on Kepler or GCN.
The problem is the GPU is likely to encounter thermal issues if you try to make it too big and powerful. Either you embed the CPU with it in an SoC, or, if you've got a separate CPU, why not make the most of this separate area? Here's what someone reported on another board:
OK, I tried to measure FLOPS directly on a 580 and I got 1354 GFLOPS in single precision and 198 GFLOPS in double precision (counting each FMA as two instructions.) The programming guide is misleading.
We're not likely to get a 580 GPU according to some, so slash that number; you'd likely get less than 1 TFLOPS. Yet conservative estimates of scaling something like Cell would yield over 1 TFLOPS single precision, with very flexible general purpose computing elements, for next gen. Clearly we want the GPU to be doing as many graphics tasks as possible, so would it be better to couple the separate CPU chip with more GPU hardware, or simply to scale existing CPU designs? How flexible and easy to program are the latest GPUs? How much of that figure can you get on realistic workloads? How do they fare at things like game physics compared to SPUs? I personally think that a two chip design will have more chances of being high performance than a single SoC design; that would usually be one CPU and one GPU. I don't know if it's possible to do the redesigns, but the jump from EE to Cell was 30+x in single precision FLOPS. If another such jump were possible for a next gen CPU by redesigning the architecture, it would put it at 6 TFLOPS, above any console viable GPU. Even without such redesigns, conservative scaling gets around 1 TFLOPS, which is quite competitive.
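For reference, a quick sanity check on those numbers, using the commonly quoted unit counts and clocks and counting each FMA as two FLOPs (ballpark only):

```python
# Quick sanity checks on the FLOPS figures above (theoretical peaks, FMA = 2 FLOPs)

# GTX 580: 512 CUDA cores at a ~1544 MHz shader clock
gtx580_peak_gflops = 512 * 2 * 1.544       # ~1581; the ~1354 measured above is ~86% of peak

# EE -> Cell jump: Emotion Engine ~6.2 GFLOPS, each SPE ~25.6 GFLOPS at 3.2 GHz
cell_spe_gflops = 8 * 25.6                 # ~205 GFLOPS from the SPEs alone
ee_to_cell_jump = cell_spe_gflops / 6.2    # ~33x, the "30+x" jump mentioned above

# "Conservative scaling": roughly 5x a PS3-era Cell lands near the 1 TFLOPS figure
scaled_cell_gflops = cell_spe_gflops * 5   # ~1024 GFLOPS

print(gtx580_peak_gflops, ee_to_cell_jump, scaled_cell_gflops)
```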
 
Valid points steampoweredgod, but I still do not expect to see Cell in a next gen console. Haven't heard anything about Cell development in a long time.

BTW I wouldn't even consider Bulldozer at all in the discussion. Like you said it is clearly a disaster.
 
http://wiiudaily.com/2011/12/wii-u-has-quad-core-3ghz-cpu-768-mb-of-ram/


* 768 MB of DRAM “embedded” with the CPU, and shared between CPU and GPU

This would be completely impossible right? I mean, 768 MB of EDRAM?

Also lherre has said a bunch of times it's a tri-core CPU in Wii U, for what it's worth.

A 4770 is worlds away from RSX or Xenos. 80-90W TDP for just the GPU in that chassis is a bit much don't you think? That's approaching original 90nm current gen consumptions.

While I agree Wii U won't have anything as powerful as a 4770, this type of reasoning would also preclude xb720/ps4 from having a GPU better than a 4770, which I can't agree with at all...
 


Steampoweredgod, you are obviously new here as you are not allowed on beyond3d to speak of using a cpu unless it's as a paperweight or maybe a coaster.

Also, any mention of a cpu in a post must also include gpgpu, like I just did.

Welcome.
 
While I agree Wii U won't have anything as powerful as a 4770, this type of reasoning would also preclude xb720/ps4 from having a GPU better than a 4770, which I can't agree with at all...

Pretty sure 4770 isn't built on 28nm, nor is it being shoved into so tight a space as WiiU unless you think next gen is being made tiny. Did I miss the memo on what the next gen chassis will be? I'm not sure what the problem is. Or you may have just forgotten how much power PC parts are pushing compared to previous years. *shrug*

edit: 192W 5870 in a console? Even if scaling were perfect (towards 28nm), that's pushing limits @ 100W, so you better hope there's a beefy cooling system in place.
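For what it's worth, that ~100W comes from assuming power falls in line with die area on an ideal 40nm to 28nm shrink, which real processes don't actually achieve:

```python
# "192W 5870 -> ~100W" under perfectly ideal 40nm -> 28nm scaling
# (power assumed to fall with die area; real shrinks do considerably worse)
tdp_5870_w = 192
area_scale = (28 / 40) ** 2        # ~0.49

print(tdp_5870_w * area_scale)     # ~94W, i.e. "pushing limits @ 100W"
```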

Anyways, if you want to do some rudimentary, naive scaling to fit similar silicon budgets (as the start of the gen), you can probably make a case for a Barts-level/size GPU, but that's ignoring any sorts of customizations to tailor it towards MS's goals for DX11+ and what devs want more (I could see them skimping a bit on ROPs to focus more on shading). Thought I covered this months ago a couple hundred pages back. I was speculating on node scaling for possible avenues.
 
http://wiiudaily.com/2011/12/wii-u-has-quad-core-3ghz-cpu-768-mb-of-ram/

* Quad Core, 3 GHz PowerPC-based 45nm CPU, very similar to the Xbox 360 chip.
* 768 MB of DRAM “embedded” with the CPU, and shared between CPU and GPU
* Unknown, 40nm ATI-based GPU

It's interesting they claim Nintendo have been testing two types of devkits.
Here's hoping they ship the one with the above specs ... (even though I bet they don't mean EDRAM by embedded, that amount would be impossible)
 
Possibly on package like mobile GPUs? Probably just someone incorrectly using the word somewhere between the source and the report.
 
Well, perhaps they mean embedded in the Xenos sense, ie the daughter die with 10MB eDRAM. I suppose two 384MB 1T-SRAM chips could be on the same package as the CPU and/or GPU, kinda like the Wii is now.
 
It's interesting they claim Nintendo have been testing two types of devkits.
Here's hoping they ship the one with the above specs ... (even though I bet they don't mean EDRAM by embedded, that amount would be impossible)

I tend to think that whoever made up these numbers had eDRAM in mind. Which makes this whole thing unreasonable, to say the least.

If I were the one leaking data like this (which I wouldn't but whatever) I would make god damn sure that it doesn't contain absurd stuff like "memory embedded with CPU" and 768MB next to it. This is so unreasonable I have to take this rumor as a fabrication by some fanboy who has no knowledge whatsoever about the technology.

There's even a "fix" to this rumor on GAF which claims that it was supposed to be 3/4GB of RAM + 96MB of eDRAM. Now it makes sense. Kinda. What do you need 96MB for? 1080p x 32bpp = 7,91MB. Let's say you need 4RTs for g-buffer, plus 2 extra RTs for double buffering. That's less than 50MB. At 128bpp you need 31,7MB per RT so 96MB can hardly store 3 of those. So either you can mix RTs of different sizes or this is a questionable design decision. Or it's for parallel TV + pad rendering in which case perhaps you need that much. But seeing that "fix" comes from GAF it may very well be BS all along. ;S
 
Keep in mind, that does fly in the face of the +50% comment by a ton. A 4770 is worlds away from RSX or Xenos. 80-90W TDP for just the GPU in that chassis is a bit much don't you think? That's approaching original 90nm current gen consumptions.

That was a pretty vague comment from some analyst so I'm not putting much stock in it.

Even if the GPU itself was basically a 4770 that doesn't mean it has to be clocked at the same speed as the full 4770. A 550MHz 4770 GPU (just the GPU, not the entire card) would be more like 45-50W and still 3-4x as powerful as RSX.
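A rough sketch of how that 3-4x falls out, assuming RV740's 640 stream processors and an RSX programmable-shader peak somewhere in the commonly quoted 180-230 GFLOPS range (so ballpark only):

```python
# Rough check of the "3-4x RSX" claim (FMA counted as 2 FLOPs)
downclocked_4770_gflops = 640 * 2 * 0.550    # ~704 GFLOPS at 550 MHz

for rsx_gflops in (180, 230):
    print(f"vs RSX @ {rsx_gflops} GFLOPS: {downclocked_4770_gflops / rsx_gflops:.1f}x")
# ~3.9x and ~3.1x -> the "3-4x" ballpark
```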
 
Link please?
We were discussing in the context of the site everyone's laughing at now, which was what the rumours around E3 were pointing towards for the dev kits then.

Just because one of the points listed sounds like complete bullshit (768MB of embedded) doesn't mean everything else is, especially in light of translation errors and misunderstanding.

Also even if the GPU itself was basically a 4770 that doesn't mean it has to be clocked at the same speed as the full 4770. A 550MHz 4770 GPU (just the GPU, not the entire card) would be more like 45-50W and still 3-4x as powerful as RSX.
This is true, though I am also trying to consider cost in addition to clocks vs power. rv740 is ~140mm^2 against the ~104mm^2 of the 5670. And we likely have to add some pool of fast ram on top of that. 5670-class is already sporting quite a bit more hardware than current gen, so I don't see why it wouldn't be a good option for them.

It should be obvious, but lowering clock speed also has a direct impact to geometry throughput.
 
I actually misunderstood your first comment, edited once I realised what you meant.

I didn't realise the 5670 was running at 775MHz, that would be a nice power increase over RSX/Xenos even with only 400 SPUs. Though I don't like the fact it only has 8 ROPs.

Also from what I've heard dev kits were using a 640 SPU part, at least back in May/June.
 
Charlie has a new rumor



Exclusive: XBox Next chip just taped out
http://semiaccurate.com/2011/12/05/exclusive-xbox-next-chip-just-taped-out/
Remember when we told you about the upcoming Xbox Next chip that was quite imminent? It looks like SemiAccurate’s moles were right on target, and some sources are now telling us that it just taped out.

Yeah, basically, the chip is ‘done’, and first silicon likely went in to the oven in the last two weeks. If this is true, Microsoft should have silicon back in time to give the families of XBox systems engineers a miserable holiday season, their loved ones will be doing breakneck bring-up work on Xbox Next.

In any case, we hear the PoR is still for mass production in December 2012, but that could change quite a bit depending on bugs, foundries, software, and devs. If that date holds, add 3 months for first mass production silicon out of whoever ends up making them, a few weeks for system production, and likely a few months for stockpiling launch quantities. This means late spring or early summer 2013 for a launch.

...more @ the link
 
Or... more likely the SoC of 360. His August article, which he even references here, discusses that.

That would easily fit in with the 360 set top box rumour along with an announcement at CES (where a set top box for movies/cable services is a "CE Device" and would actually make sense at CES). 360S is already an SoP with modest power consumption (90W).

IBM has their own edram tech, and they were already involved with the 360S. The last part would be moving the edram to a common node for the final SoC design/integration, which is possibly still 45nm (since the daughter die was still on 65nm with unchanged dimensions from prior revisions).
 