Predict: The Next Generation Console Tech

There's a bit of a boo-boo:

Right now the only fit from AMD's line would be the Juniper core, as found in the Radeon HD 5770. However, we'd still have around 20 per cent die-space spare and who knows what kind of customisations and extra power AMD could factor into that. So what does this give us in comparison to the present day Xenos? An HD 5770 gives us 960 stream processors, 12 texture mapping units or 48 texels per clock.
He meant to write that the 5770 +20% would give 960SPs/12TMUs (Juniper is 800/10).
 
I find it disturbing that he didn't once mention power consumption of the chips...

I was wondering what can be done with ED-RAM aside from simply sticking the ROPS there. Would it be efficient if they also mated the tessellator to the ED-RAM or even put stream processors there as well to take advantage of the extra bandwidth for post processing?
 
There's a bit of a boo-boo:

He meant to write that the 5770 +20% would give 960SPs/12TMUs (Juniper is 800/10).

Still disappointing. While it may be realistic, Xenos was close to top of the line in 2005. I'd rather not see Xbox 3 go with a mid-range GPU, though times may indeed be changing.

Also, 2GB of RAM would be paltry; expect 4-8GB.

Also, regarding the 850MHz Juniper clock: I expect the first thing they'll do is downclock it to save a lot of heat and power. While Xenos was comparable to PC GPUs of 2005, it was clocked lower, at 500MHz, while the 7900GTX ran up to 650MHz. So if it featured a Juniper per the article, I'd expect a 700MHz or even 600MHz clock.
 
Still disappointing. While it may be realistic, Xenos was close to top of the line in 2005. I'd rather not see Xbox 3 go with a mid-range GPU, though times may indeed be changing.

Yeah well, power consumption doesn't scale linearly... Plus there's the issue of what a 28nm part would look like instead by the time 2014 hits. A 5870/6970 pretty much consumes the same as or more than the entire 360 @ 90nm, but we don't know what will happen at the next node for AMD.

Also, 2GB of RAM would be paltry; expect 4-8GB.
We all want nice things. ;) As I've mentioned a while back, GDDR5 is 32-bit I/O per chip. They'll have to strike a deal to manufacture 8Gb chips in order to hit 4GB with just 4 chips. And I presume we do want the GPU to be able to scale down in size over its lifetime.

Alternatively, they go with the fastest speed DDR3 (I don't know if DDR4 would be ready in time or what its specs entail) and have other ways of mitigating the need for framebuffer bandwidth (larger on-die caches or um... TBDR or back to edram for example).
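To put rough numbers on the chip-count trade-off, here's a back-of-the-envelope sketch. The 8Gb density is the hypothetical part mentioned above, not something shipping today, and the x16 clamshell mode is just the option from the GDDR5 spec:

```python
# Back-of-the-envelope GDDR5 configuration maths.
# Each GDDR5 chip presents a 32-bit interface (16-bit per chip in clamshell mode).
def gddr5_config(chip_density_gbit, num_chips, clamshell=False):
    capacity_gb = chip_density_gbit * num_chips / 8        # Gbit -> GByte
    bus_width = (16 if clamshell else 32) * num_chips      # bits
    return capacity_gb, bus_width

print(gddr5_config(8, 4))                  # hypothetical 8Gb parts: 4.0 GB on a 128-bit bus
print(gddr5_config(2, 8))                  # 2Gb parts: 2.0 GB on a 256-bit bus
print(gddr5_config(4, 8, clamshell=True))  # 4Gb parts in clamshell: 4.0 GB on a 128-bit bus
```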

Also, regarding the 850MHz Juniper clock: I expect the first thing they'll do is downclock it to save a lot of heat and power. While Xenos was comparable to PC GPUs of 2005, it was clocked lower, at 500MHz, while the 7900GTX ran up to 650MHz. So if it featured a Juniper per the article, I'd expect a 700MHz or even 600MHz clock.
Well, as AlphaWolf mentioned, Juniper sits at around 95W at load, and I presume that's including the power consumption from the RAM too. This is all at 40nm as well, and again, I find it hard to believe they'd use that in 2014.

It sort of makes the speculation pointless, but "an interesting exercise might be to scale up the cost structure of the existing architecture." :p

----------------

Just for reference (and in case it's not immediately obvious), while 6790 does have the same hardware specs as Juniper, the 6790 is just a gimped Barts, so the die size is still pretty huge (255mm^2) and meant to be attached to a 256-bit bus.


-------------

hm....
http://techreport.com/articles.x/17747/3
And here's the Radeon HD 5750, the 5770's little brother. For this mission, Juniper has had one of its SIMD cores clipped, along with the corresponding texture unit. Clock speeds are de-tuned, too, with the GPU at 700MHz and memory at 1150MHz. Thanks to the changes, the 5750 tops out at 86W of power draw and is rated for just 16W at idle.

Considering that the 5750 has a disabled SIMD and a 150MHz lower clock, and it still consumes just under 90W... well... it's really not just clock speeds, or at least, that's not what you should be considering.

Voltage is the main factor in the active power consumption (and well, the drive current). The maturity of the manufacturing process is kind of the key here.
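As a first-order illustration of why voltage dominates, here's a simple sketch of the standard dynamic-power relation; the voltage and clock figures are made up for illustration, not measured values:

```python
# First-order dynamic power: P ~ alpha * C * V^2 * f.
# Leakage is ignored; the reference figures below are illustrative only.
def scaled_power(p_ref_watts, v_ref, f_ref_mhz, v_new, f_new_mhz):
    return p_ref_watts * (v_new / v_ref) ** 2 * (f_new_mhz / f_ref_mhz)

# Dropping a ~95W part from 850MHz to 700MHz with no voltage change:
print(scaled_power(95, 1.10, 850, 1.10, 700))  # ~78W - clock alone buys relatively little
# The same downclock combined with a drop from 1.10V to 1.00V:
print(scaled_power(95, 1.10, 850, 1.00, 700))  # ~65W - the squared voltage term does the work
```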
 
We all want nice things. As I've mentioned a while back, GDDR5 is 32-bit I/O per chip. They'll have to strike a deal to manufacture 8Gb chips in order to hit 4GB with just 4 chips. And I presume we do want the GPU to be able to scale down in size over its lifetime.

So you think PS4 and Xb3 will be limited to only 2GB RAM?

I'd like to hope they'll do better than that. If not, what's the point of even trying to outdo Wii2?

Edit: Perhaps if MS uses EDRAM again, they can go 8GB? Sony might be stuck with 2GB for PS4 though without EDRAM. Regardless, I hope a solution is found and I assume so...

If superphones are at 1GB RAM now, 2GB will just be too paltry to be expected to last 2014-2022!
 
I think DDR4 would be a better solution than GDDR5. Then again, there is 16-bit (clamshell) mode for GDDR5... 8x4Gb would give you the 4GB.

-----------

8 chips for the lifespan of the console unless they increase density (stacking was mentioned for DDR4). I'm not too keen on the idea of 16 chips in the console - that's a lot of real-estate on the motherboard as well as a lot of wire tracing and a decent chunk of power consumption altogether.
 
Edit: Perhaps if MS uses EDRAM again, they can go 8GB? Sony might be stuck with 2GB for PS4 though without EDRAM. Regardless, I hope a solution is found and I assume so...

I'm a little confused by this statement. How would using EDRAM mean that MS could go to 8GB of RAM? I think you misspoke here somehow.
 
Power-draw-wise the speculation makes sense, but I don't think we'll see much from a 2009 GPU in a console launching in 2013-2014.

Edit: I haven't read the article though...
 
Power-draw-wise the speculation makes sense, but I don't think we'll see much from a 2009 GPU in a console launching in 2013-2014.

Edit: I haven't read the article though...

Well, you're right about that. It's sort of a stepping stone speculation really because we haven't seen 28nm GPUs yet. Looking at 55nm, RV770 was 256mm^2 @956M transistors. Fast-forward to 40nm, and we got Juniper at 166mm^2 @1.04B transistors - half the memory bus width, +100MHz core clock, about half the power I think...

Taking the additional transistors into account, the scaling was fairly close to the ideal. Another node jump (using similar ratios here) would put a 28nm variant at around 100mm^2. It's a naive approximation when further considering the changes they made with Cayman, but 28nm with Cypress-level specs should put it back closer to and still under 200mm^2 (ballpark).
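For what it's worth, here's the naive optical-shrink arithmetic behind those ballparks, assuming ideal scaling (which real designs never quite hit, so actual figures land somewhat higher) and a Cypress die size of roughly 334mm²:

```python
# Ideal area shrink between process nodes: area scales with the square of
# the feature-size ratio. Real chips shrink less than this, so treat the
# results as optimistic lower bounds.
def ideal_shrink(area_mm2, node_from_nm, node_to_nm):
    return area_mm2 * (node_to_nm / node_from_nm) ** 2

print(ideal_shrink(256, 55, 40))  # RV770 at 40nm: ~135 mm^2 (Juniper actually landed at 166)
print(ideal_shrink(166, 40, 28))  # Juniper-class at 28nm: ~81 mm^2, hence the ~100 mm^2 ballpark
print(ideal_shrink(334, 40, 28))  # Cypress-class at 28nm: ~164 mm^2, i.e. still under 200 mm^2
```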

--------

On another note, I'm sure everyone would love to have 1080p, and 32 ROPs would be great for raw throughput, but in the end devs are, as always, going to compete with one another, and it's just a simple fact that they can do more with 720p; being sub-HD res doesn't stop Call of Duty or Halo from selling. They'll pick a resolution that's convenient and go from there - 1024x600 + 2xMSAA was convenient for CoD's renderer, and 1152x720 is convenient with the mini-G-buffer renderers on 360. It's pretty clear that the future is compute shading anyway, and devoting more towards that is a no-brainer for a piece of hardware that'll be lasting 5+ years. I think it's likely that we'll see higher core clocks anyway (triangle setup is naturally tied to them as well), so even doubling the ROPs from the current gen would go a long way to making it a non-issue. They could also make each ROP more robust, e.g. full-rate fp16 or just insane z-rates...
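Just to put numbers on the 720p-versus-1080p trade-off, here's the simple pixel-count arithmetic for the render targets mentioned above:

```python
# Pixel counts for the render targets mentioned above, relative to 1280x720.
resolutions = {
    "1024x600 (CoD)": (1024, 600),
    "1152x720":       (1152, 720),
    "1280x720":       (1280, 720),
    "1920x1080":      (1920, 1080),
}
base = 1280 * 720
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.2f} MPix ({pixels / base:.2f}x of 720p)")
# 1080p carries 2.25x the pixels of 720p, so fill rate and per-pixel shading
# budgets have to stretch accordingly.
```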

hmmmmmm........
 
The technical specs will be dictated by economics.

A BOM of $500 seems fair for a $400-$450 launch price; I don't believe either Sony or MS will abandon the loss-leading model. This gives us a cost breakdown very similar to last gen:
Around 450 mm² die area in total for CPU and GPU.
4GB RAM
Optical drive
$30-$50 worth of permanent storage

Both MS and Sony had reliability problems related to heat/cooling. Microsoft had tons of heat-related RRODs, and Sony had failures related to their neat low-noise centrifugal cooling system. Since quality cooling solutions cost money, I expect both to aim at a slightly lower power consumption of 120-150W.

Speed will be primarily dictated by power consumption, secondly by yield. Power scales roughly with the cube of operating frequency (voltage tends to track clock), so lowering speed by about 20% halves power consumption; this gives a lot of room for hitting the right power envelope.
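A quick sketch of that cube-law rule of thumb; it assumes voltage scales with frequency and ignores leakage, so it's an idealisation rather than a hard law:

```python
# Cube-law rule of thumb: if voltage scales with frequency, P ~ f^3.
# Leakage and voltage floors break this in practice, but it shows how much
# headroom a modest downclock buys.
def relative_power(freq_scale):
    return freq_scale ** 3

print(relative_power(0.80))  # ~0.51 -> a ~20% downclock roughly halves power
print(relative_power(0.74))  # ~0.41 -> a 26% downclock cuts power by ~60%
```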

Cheers
 
Both MS and Sony had reliability problems related to heat/cooling. Microsoft had tons of heat-related RRODs, and Sony had failures related to their neat low-noise centrifugal cooling system. Since quality cooling solutions cost money, I expect both to aim at a slightly lower power consumption of 120-150W.

Given the Falcon revision, 150W would be a pretty decent compromise for similar cooling hardware. The GPU heatsink in the launch unit was just awful though.

I could see them focusing a lot more on the GPU side rather than the CPU. This is keeping in mind that there is the PC platform; it's probably safe to say that the uptake of octo-core or octo-threads is pretty slow there, so I don't expect them to go with more threads or execution units than that. A 28nm CPU should be pretty forgiving no matter what design they choose.

I suppose Sony could throw more SPEs in there, but I'm finding it hard to justify it if the best use appears to be graphics-related (lighting, post-fx, MLAA...). Even MLAA in PhyreEngine has switched from a pure SPU implementation to just edge detection on SPU, rendering on GPU. Adding a few more beefier, "normal" cores would be nicer for multiplatform.
 
Both MS and Sony had reliability problems related to heat/cooling. Microsoft had tons of heat-related RRODs, and Sony had failures related to their neat low-noise centrifugal cooling system. Since quality cooling solutions cost money, I expect both to aim at a slightly lower power consumption of 120-150W.
I agree. Indeed, Sony were aiming for 65nm at launch this gen, so they never intended to run as hot and expensive as they ended up at launch. MS, launching early, must have known they'd be on 90nm, but perhaps they underestimated the heat issues given the new lead-free solder? I doubt they'll repeat that mistake and run hot, so both should be looking at a more easily managed heat output. Bigger chips and lower clocks seem the order of the day.

Any word on what XDR2 is achieving in the real world? I'm still hoping Sony go that route with a massive pool of unified RAM, assuming it actually works out as cost effective.
 
You can make a good cooling solution quite cheaply. Something like the GTX 580 is specced at 300W, and when it uses 200W it's pretty quiet, at least when compared to the launch 360 :)
 