Predict: The Next Generation Console Tech

I have to say we agree on something :) I have no idea of the speed differences between GDDR5 and DDR4... nor the costs... but that might be the cheapest way to get 4GB with decent bandwidth on a slim bus.

No way we are getting 8GB in a console.

8GB of DDR3/4 should cost LESS than 4 GB of GDDR5.

I have a question: why bother with SIMD engines on CPUs? Wouldn't it be far better to allocate that space to the GPU?

No, not really. GPUs still cannot run a large portion of code at all efficiently, even some incredibly simple parallel code.
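A toy illustration of that point, as a sketch in C (the workload is hypothetical, picked only because it is embarrassingly parallel yet GPU-hostile): every element below is independent, but each one runs a data-dependent loop of wildly varying length, so on a wide-SIMD GPU most lanes in a wavefront sit idle waiting for the slowest lane, while a conventional CPU core just churns through it.

```c
#include <stdint.h>
#include <stdio.h>

/* Collatz step count: trivially parallel across elements, but each
 * element's loop runs a wildly different number of iterations. On a
 * wide-SIMD GPU every lane in a wavefront waits for the slowest lane,
 * so most of the machine idles; a CPU core handles it just fine. */
static uint32_t collatz_steps(uint64_t n)
{
    uint32_t steps = 0;
    while (n != 1) {
        n = (n & 1) ? 3 * n + 1 : n / 2;
        steps++;
    }
    return steps;
}

int main(void)
{
    enum { N = 1 << 16 };
    static uint32_t out[N];

    /* Fully independent iterations, yet divergence ruins GPU efficiency. */
    for (uint64_t i = 0; i < N; i++)
        out[i] = collatz_steps(i + 1);

    printf("steps(27) = %u\n", out[26]); /* prints 111 */
    return 0;
}
```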
 
I agree 8GB seems pretty unrealistic. 16x the current generation is too high given the rest of the rumoured specs. 8x seems reasonable though, so I'm betting on 4GB of reasonably fast GDDR5 + some eDRAM. 2GB of very fast GDDR5 isn't out of the question either, possibly still with eDRAM.

2GB of memory and the manufacturer can kiss their butt goodbye. Also, GDDR5 is extremely expensive and basically at EOL. 8GB of DDR4 + eDRAM/packaged wide DRAM is pretty cost effective and actually has the potential to scale to lower costs over time. 4GB of clamshell GDDR5 will only go up in cost over time.

Also, it is important to remember that both the Xbox 360 and PS3 were under-specced when they came out and were on the wrong side of a technology transition. They both had ridiculously low amounts of memory even at launch. Something both the devs and MS/Sony have been banging their heads against for 8 years now.
 
Right, so it is worth having SIMD engines. And to the guy who says consoles had low RAM: no comment... just think back to when the Xbox 360 was released, the graphics cards around at the time, and the price of those cards compared to the price of the console...

I think 8GB of DDR4 unified on a 256-bit bus is too good to be true; however, that would be properly next generation alright.


Right, so we have established that SIMD engines work well; the question is whether we should use lots of smaller, lower-clocked cores with 128-256-bit SIMD, or 3-4 much larger, higher-clocked cores?
 
Right, so it is worth having SIMD engines. And to the guy who says consoles had low RAM: no comment... just think back to when the Xbox 360 was released, the graphics cards around at the time, and the price of those cards compared to the price of the console...

The ATI X1800 and NV 7800 both had 512MB versions.

Add in the typical 1GB-2GB that standard gaming rigs had in 2005 and then yes, the Xbox 360 was on the lower end. The PS3 in 2006 more so.

My 6800GT had 256MB in 2004, and I also had 2GB in my system. It is harder to compare launch pricing on various GPUs because the enthusiast models carry a heavy premium, while models one step lower with 10% less performance can cost over $100 less. There was also a trend toward massive price drops a month or two after launch -- even though the IHV, the OEM and the retailer were all still making a profit.

I think 8GB of DDR4 unified on a 256-bit bus is too good to be true; however, that would be properly next generation alright.

Why would picking a memory technology at the beginning of its lifetime (i.e. costs will come down quickly over its lifetime) be such a crazy idea versus picking EOL products like GDDR5 and DDR3, whose costs will only go up over time? Add in DDR4's power advantage and... ? I don't see the problem.

Right, so we have established that SIMD engines work well; the question is whether we should use lots of smaller, lower-clocked cores with 128-256-bit SIMD, or 3-4 much larger, higher-clocked cores?

Smaller cores may clock higher and larger more advanced cores may clock lower.

And having SIMD is good; the question is whether you go with 6 cores with robust SIMD units, or 4 cores with solid SIMD and shift all that real estate to the GPU. All the talk about Xenon: yeah, it stunk in a lot of ways, but according to the same developers dissing it, well-vectorized code on it can compete head to head with modern processors. So a solid vector unit is a must; the question is how much space you dedicate to the CPU and what you get out of it.
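To make "a solid vector unit" concrete, here is a minimal hedged sketch of the kind of loop SIMD engines eat for breakfast: the same multiply-add applied to four floats per instruction using 128-bit SSE intrinsics. Purely an illustration of the idea, not anyone's actual console code.

```c
#include <stdio.h>
#include <xmmintrin.h>  /* SSE: 128-bit registers, 4 x float */

/* y[i] = a * x[i] + y[i] -- one 128-bit operation covers 4 elements,
 * so a well-fed SIMD unit does 4x the scalar work per instruction. */
static void saxpy_sse(int n, float a, const float *x, float *y)
{
    __m128 va = _mm_set1_ps(a);
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        vy = _mm_add_ps(_mm_mul_ps(va, vx), vy);
        _mm_storeu_ps(y + i, vy);
    }
    for (; i < n; i++)  /* scalar tail for leftover elements */
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    float x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float y[8] = {0};
    saxpy_sse(8, 2.0f, x, y);
    printf("y[7] = %.1f\n", y[7]); /* prints 16.0 */
    return 0;
}
```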
 
Right, so it is worth having SIMD engines. And to the guy who says consoles had low RAM: no comment... just think back to when the Xbox 360 was released, the graphics cards around at the time, and the price of those cards compared to the price of the console...

Graphics cards had between 256 and 512MB of memory and systems had between 512MB and 2GB, quickly transitioning to 512MB-1GB for graphics cards and 1-4GB of system memory.

The average computer today ships with around 6-8GB of memory and 1-2GB of VRAM. For a realistic 8-year lifespan, the consoles absolutely need 4GB minimum and are much better off with 8GB. New game engines are primarily targeting 64-bit and between 4-8GB of memory, and that is only going to increase over time. Given the rate at which low-cost PCs are increasing graphics performance, consoles need fairly reasonable graphics performance and around 8GB of memory if they don't want to see their lifetime cut significantly.

I think 8GB of DDR4 unified on a 256-bit bus is too good to be true; however, that would be properly next generation alright.

Basically, I think without 8GB of memory, the consoles this generation will have a significantly shortened lifespan.


Right, so we have established that SIMD engines work well; the question is whether we should use lots of smaller, lower-clocked cores with 128-256-bit SIMD, or 3-4 much larger, higher-clocked cores?

Part of the issue is that the programming models for lots of small cores are still very immature. And ideally, you would want a combination: 2 large cores and 6-8 small cores all running the same ISA.
 
@acert... yes, gaming rigs had 2GB of DDR RAM, but that was system RAM, barely used in gaming at the time. A high-end, very expensive graphics card had 512MB of GDDR3 by the time the Xbox 360 was announced in mid 2005... and those cards were in short supply and cost more than the box itself... Someone mentioned a few pages back that the RAM alone cost $60 or so, which is a crazy price. The system was well balanced, and the proof is that the £250 Xbox 360's games looked just as good as, if not better than, a £2000 PC's for its time.

The Xbox sold at a loss, it was that advanced for its time and price point, although you can't expect a little white box selling for £250 to stay ahead of the PC world for too long... that's unreasonable.

You have to put things into perspective.
 
GDDR3 is old (it's what's on the 6800GT), yet new graphics cards are still shipping with it; currently the only cheap Kepler card uses it.
I'm sure there will be no problem getting GDDR5 for the next 8 years, considering there's no actual replacement for it: no GDDR6 is on the horizon. DDR4 is the only contender, but it won't really be on the mass market until 2014 or 2015.

DDR4 on the next-gen consoles? It sure is tempting, but it could be underspecced and in short supply, with the slowest DDR4 only matching the speed of the fastest DDR3.
A combination could be good, but I'm thinking 64-bit DDR3 or DDR4 + 128-bit GDDR5.
 
Right, so we have established that SIMD engines work well; the question is whether we should use lots of smaller, lower-clocked cores with 128-256-bit SIMD, or 3-4 much larger, higher-clocked cores?

And I can see no way 8 Jaguar cores will be better than 4 Steamroller cores. The issues with a Bobcat-based design are that it currently has only 64-bit ALUs per core and it only decodes 2 macro-ops per cycle; the cache system would need expanding out to 8 cores, and the memory controller is single channel. There is so much stuff to change, and for what?

Just look at the difference between the E-350 (@ 17 watts) and Trinity @ 35 watts (can't find 17-watt benchmarks). We can predict the 17-watt Trinity's performance from the CPU clock differences: 2.3 GHz base / 3.2 GHz turbo for the 35-watt part vs 2.1 / 2.6 for the 17-watt part. So at worst it has 81% of the clock rate and 80% of the memory bandwidth (1666 vs 1333); see the quick arithmetic check at the end of this post.

You can then look at http://www.anandtech.com/bench/Product/600?vs=346 for the 35-watt Trinity vs the 17-watt E-350.

You're looking at the E-350 having around 30-35% of the performance of Trinity at 17 watts.
That's a mighty long gap to bridge, and it will only grow bigger with Steamroller.
And that's without even thinking of any additions Sony might want to the core.
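A quick arithmetic check of the scaling estimate above, taking the quoted clocks and memory speeds as given:

```c
#include <stdio.h>

int main(void)
{
    /* Figures quoted in the post above; treat them as assumptions. */
    double turbo_35w = 3.2, turbo_17w = 2.6;   /* GHz  */
    double mem_35w = 1666.0, mem_17w = 1333.0; /* MT/s */

    printf("clock ratio: %.0f%%\n", 100.0 * turbo_17w / turbo_35w); /* ~81% */
    printf("mem ratio:   %.0f%%\n", 100.0 * mem_17w / mem_35w);     /* ~80% */
    return 0;
}
```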
 
And I can see no way 8 Jaguar cores will be better than 4 Steamroller cores. The issues with a Bobcat-based design are that it currently has only 64-bit ALUs per core and it only decodes 2 macro-ops per cycle; the cache system would need expanding out to 8 cores, and the memory controller is single channel. There is so much stuff to change, and for what?

Well, for a start you could dedicate more die area to the GPU. You could also write off an entire defective core without losing 25% or 50% of your dual-module CPU. Bobcat and its successors are designed to be easily modifiable, so a 128-bit ALU and a dual-channel memory controller might not be off the cards; it's unlikely a Steamroller-based PS4 SoC would use dual-channel DDR3, so changes would have to be made anyway.
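A hedged sketch of why being able to write off one core matters for yield; the per-core defect probability below is entirely made up for illustration:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double p = 0.05;     /* hypothetical chance any one core is defective */
    double q = 1.0 - p;

    /* 4-core die: all 4 cores must work. */
    double good_4of4 = pow(q, 4);

    /* 8-core die where one defective core can be written off:
     * usable if at least 7 of the 8 cores are good. */
    double good_7of8 = pow(q, 8) + 8.0 * p * pow(q, 7);

    printf("4/4 cores good:   %.1f%%\n", 100.0 * good_4of4); /* ~81.5% */
    printf(">=7/8 cores good: %.1f%%\n", 100.0 * good_7of8); /* ~94.3% */
    return 0;
}
```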

Just look at the difference between the E-350 (@ 17 watts) and Trinity @ 35 watts (can't find 17-watt benchmarks). We can predict the 17-watt Trinity's performance from the CPU clock differences: 2.3 GHz base / 3.2 GHz turbo for the 35-watt part vs 2.1 / 2.6 for the 17-watt part. So at worst it has 81% of the clock rate and 80% of the memory bandwidth (1666 vs 1333).

You can then look at http://www.anandtech.com/bench/Product/600?vs=346 for the 35-watt Trinity vs the 17-watt E-350.

They're different generations of processor on different manufacturing processes and targeted at different segments of the market. With the same generation of core on the same manufacturing process Jaguar cores may offer increased throughput per watt or per mm^2. Or maybe not. I don't know, but Trinity vs Bobcat certainly won't give you the full picture, IMO.
 
Well, for a start you could dedicate more die area to the GPU. You could also write off an entire defective core without losing 25% or 50% of your dual-module CPU. Bobcat and its successors are designed to be easily modifiable, so a 128-bit ALU and a dual-channel memory controller might not be off the cards; it's unlikely a Steamroller-based PS4 SoC would use dual-channel DDR3, so changes would have to be made anyway.
You mean dual-channel DDR4 :LOL:

If this is going to be another 6-10-odd-year generation, initial yields are only a small part of the equation. Also, once you've added all the hardware for your 128/256-bit FP unit, you've made your core a whole lot bigger as well.

The plus side for AMD is that if the PS4 chip has all the features of the Steamroller APU SoC, then whenever defects stop a die from being a full PS4 it can be sold as a dual-core APU.



They're different generations of processor on different manufacturing processes and targeted at different segments of the market. With the same generation of core on the same manufacturing process Jaguar cores may offer increased throughput per watt or per mm^2. Or maybe not. I don't know, but Trinity vs Bobcat certainly won't give you the full picture, IMO.

Bobcat has very, very poor performance scaling and power scaling with clock. Bobcat is awesome if you want a 10-watt device; from there it just goes downhill. Bobcat's design target was 1 to 10 watts (it's in its Hot Chips presentation). Given that Trinity does so well at 17 watts, why would AMD move Jaguar's target TDP range up? If anything it will go down so it can hit tablets better.
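The "poor power scaling with clock" point follows from the usual dynamic-power relation P ≈ C·V²·f: pushing frequency generally also means raising voltage, so power grows much faster than performance. A rough sketch with hypothetical voltage/frequency operating points:

```c
#include <stdio.h>

int main(void)
{
    /* Made-up operating points, just to show the shape of the curve. */
    double f_lo = 1.6, v_lo = 1.00;  /* GHz, volts */
    double f_hi = 2.4, v_hi = 1.25;

    /* P ~ C * V^2 * f, so with C fixed: ratio = (V2/V1)^2 * (f2/f1). */
    double perf_gain  = f_hi / f_lo;
    double power_gain = (v_hi / v_lo) * (v_hi / v_lo) * (f_hi / f_lo);

    printf("performance: %.2fx\n", perf_gain);  /* 1.50x */
    printf("power:       %.2fx\n", power_gain); /* ~2.34x */
    return 0;
}
```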



edit: this thread has a million views!!!!!!!!! :oops: :runaway:
 
2GB of memory and the manufacturer can kiss their butt goodbye. Also, GDDR5 is extremely expensive and basically at EOL. 8GB of DDR4 + eDRAM/packaged wide DRAM is pretty cost effective and actually has the potential to scale to lower costs over time. 4GB of clamshell GDDR5 will only go up in cost over time.

How are you going to match the performance of 5 GHz GDDR5 on a 256-bit bus with DDR4, though, without moving to a 512-bit bus?

Also, it is important to remember that both the Xbox 360 and PS3 were under-specced when they came out and were on the wrong side of a technology transition. They both had ridiculously low amounts of memory even at launch.

Not really. DDR1 and DDR2 even on a 256-bit bus wouldn't have offered sufficient bandwidth to match the GDDR3 in the Xbox 360. And MS were so keen to avoid a 256-bit memory bus that they added another chip to the GPU.
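A quick sanity check of that bandwidth claim, assuming DDR2-667 (a common speed around the 360's launch) against the 360's 1400 MT/s GDDR3 on its 128-bit bus; even a 256-bit DDR2 bus lands at best in the same ballpark:

```c
#include <stdio.h>

/* bandwidth in GB/s = transfer rate (MT/s) * bus width (bits) / 8 / 1000 */
static double bw_gbs(double mts, int bus_bits)
{
    return mts * bus_bits / 8.0 / 1000.0;
}

int main(void)
{
    printf("DDR2-667   @ 256-bit: %.1f GB/s\n", bw_gbs(667, 256));  /* ~21.3 */
    printf("GDDR3-1400 @ 128-bit: %.1f GB/s\n", bw_gbs(1400, 128)); /* ~22.4 */
    return 0;
}
```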

Developers on this forum have repeatedly talked about bandwidth limitations hampering their efforts on the 360. Reducing bandwidth further, while also adding the wider main memory bus that MS were so keen to avoid, might have been a net loss for the platform (almost certainly from MS's pov).

Something both the devs and MS/Sony have been banging their heads against for 8 years now.

Console developers bang their heads against everything: memory quantity, memory bandwidth, memory latency, everything related to optical drives, everything related to processing power. Basically everything. Memory quantity doesn't appear to have hampered the 360's market performance, whereas hardware losses were starting to make people question MS's entire presence in the console market.

Graphics cards had between 256 and 512MB of memory and systems had between 512MB and 2GB, quickly transitioning to 512MB-1GB for graphics cards and 1-4GB of system memory.

When the 360 launched in late 2005 there was one card with 512 MB of ram, and it was an Nvidia marquee card that you couldn't find for love nor money in the shops. The actual top end cards (x1800XT and 7800 GTX) had 256MB.

In late 2006 Nvidia released the super top end 8800GTX with 768 MB of ram. Most "enthusiast" gamers were grubbing around with the 320, 384 or 640MB 8800 models though. The X1900XT had 512MB.

I don't recall seeing consumer GPUs with 1GB of ram until 2007. The 2900 XT 1GB edition with its 512-bit memory bus (lol) springs to mind first (because it was both costly and a bit rubbish).
 
How are you going to match the performance of 5 GHz GDDR5 on a 256-bit bus with DDR4, though, without moving to a 512-bit bus?

256-bit DDR4 + embedded DRAM/MCM'd DRAM. With either G-spec'd DDR3 or DDR4 you can reach ~3 Gbps per pin, which on a 256-bit bus gives you ~100 GB/s of bandwidth. That alone is honestly a pretty decent chunk of bandwidth, especially if the front/back buffers are being handled by an embedded or Wide I/O DRAM.
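The arithmetic behind the ~100 GB/s figure, taking 3.2 Gbps as a representative per-pin speed and the 5 Gbps GDDR5 setup from the question for comparison:

```c
#include <stdio.h>

/* GB/s = per-pin data rate (Gbps) * bus width (bits) / 8 bits-per-byte */
static double bw_gbs(double gbps_per_pin, int bus_bits)
{
    return gbps_per_pin * bus_bits / 8.0;
}

int main(void)
{
    printf("DDR3/4 @ 3.2 Gbps, 256-bit: %.0f GB/s\n", bw_gbs(3.2, 256)); /* ~102 */
    printf("GDDR5  @ 5.0 Gbps, 256-bit: %.0f GB/s\n", bw_gbs(5.0, 256)); /* 160  */
    return 0;
}
```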


Developers on this forum have repeatedly talked about bandwidth limitations hampering their efforts on the 360. Reducing bandwidth further, while also adding the wider main memory bus that MS were so keen to avoid, might have been a net loss for the platform (almost certainly from MS's pov).

Well, then it is a good thing they can get a lot of bandwidth this gen using pretty much off-the-shelf memory, and get a large memory capacity as well.



Console developers bang their heads against everything: memory quantity, memory bandwidth, memory latency, everything related to optical drives, everything related to processing power. Basically everything. Memory quantity doesn't appear to have hampered the 360's market performance, whereas hardware losses were starting to make people question MS's entire presence in the console market.

Except that pretty much all games shipped for the 360 and PS3 in the last several years have had to severely cut down their assets because the consoles simply don't have the capacity. Don't fill up all the memory? Pre-stream in the next chunk of assets.



When the 360 launched in late 2005 there was one card with 512 MB of ram, and it was an Nvidia marquee card that you couldn't find for love nor money in the shops. The actual top end cards (x1800XT and 7800 GTX) had 256MB.

Which changed basically 3 months after they launched. As I've stated before, both the 360 and PS3 were designed on the wrong side of a technology transition in a lot of different respects. Prior console generations stayed pretty cutting edge for quite a while, whereas the 360 and PS3 became bargain basement incredibly quickly, technology-wise.
 
Has nobody here considered the new HMC RAM...?! I think both DDR4 and HMC are not here yet, BUT HMC is much, much faster (and in the long run probably cheaper)...

Stacked/Wide I/O DRAM makes sense from a framebuffer perspective, but it doesn't really have the capacity required to be an overall solution.
 
Stacked/Wide I/O DRAM makes sense from a framebuffer perspective, but it doesn't really have the capacity required to be an overall solution.

How do you explain the fact that Microsoft joined the HMC consortium...??

To me it's very indicative... but it also indicates the new console will come in 2015.
 
Repeating the same HMC stuff over and over again does not add any value to the discussion. Please stop it with these repetitive posts.
 