Predict: The Next Generation Console Tech

Why is EDRAM a bad choice? From everything in this thread, a single pool would require a bus that would be expensive and very difficult to shrink.

eDRAM is also very expensive, takes up a lot of die area, and consumes a lot of power.
Which is why all the implementations you see of it are in small quantities, practically for framebuffer use only.
 
IIRC, I've seen a lot of devs here say that they'd prefer to be able to do whatever they want with any EDRAM in the future. Like a scratchpad.
 
Still, 90nm gave us 10MB of eDRAM; couldn't 28nm or even 22nm give us 64-128MB of eDRAM? That should be enough to help with bandwidth problems.
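As a rough sanity check on that intuition, here's a minimal sketch assuming ideal area scaling with the square of the feature size (real eDRAM cells scale worse than this, so treat it as an upper bound):

```python
# Rough eDRAM capacity scaling with process node, assuming ideal area
# scaling (capacity grows with the square of the linear shrink).
# Real eDRAM cells scale worse than this, so treat it as an upper bound.

def scaled_capacity_mb(base_mb, base_node_nm, target_node_nm):
    shrink = base_node_nm / target_node_nm   # linear shrink factor
    return base_mb * shrink ** 2             # area (capacity) factor

base_mb = 10  # the 90nm daughter die held 10MB
for node in (45, 32, 28, 22):
    print(f"{node}nm: ~{scaled_capacity_mb(base_mb, 90, node):.0f} MB in the same area")

# 28nm: ~103 MB, 22nm: ~167 MB -- so 64-128MB is plausible on paper,
# before yield, cost, and sharing a die/process with GPU logic are considered.
```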
 
IIRC, I've seen a lot of devs here say that they'd prefer to be able to do whatever they want with any EDRAM in the future. Like a scratchpad.
It depends on the cost of the eDRAM vs. the cost of a fast system bus. eDRAM isn't amazingly cost-effective, as evidenced by the fact it isn't ubiquitous! Before, when DRAM speed wasn't fast enough, it was necessary for fast memory, but with the option of hundreds of GBs a second of main memory, that GPU die real-estate could be spent on processing logic, or left out for a cheaper GPU. Personally I feel that if eDRAM was going to offer that much, we'd see it in integrated mobile GPUs; in theory it'd offer a large speed boost for any laptop running from main memory. In terms of the consoles we have, eDRAM didn't make PS2 better than XB, nor XB360 better than PS3, such that it's an obvious choice. Given the trade-off I'd rather have a unified, flexible, fast pool of RAM for everything, keeping development simple and having fewer limitations. The only reason not to, IMO, is the cost of a 256-bit bus, which is what forces the sorts of workarounds these consoles use - split memory or eDRAM.
 

But didn't the eDRAM help the PS2 do some great stuff? Without it, wouldn't there have been a larger performance discrepancy from the Xbox?
One of the best examples I can remember were the crazy particle effects in ZOE2. I think Ratchet & Clank and Jak & Daxter used it to achieve those large and detailed worlds too.
 
Yes, but the gains in some places are losses in others. In PS2's era there was no other way to get that BW, but the cost was shader capability; PS2 had to just draw lots of stuff over and over. Sony got by far the best benefit from eDRAM of any console, but they left it behind for this generation. Yes, PS3 is BW-limited, not helped by split pools, but it can still hold its own against the eDRAM-enabled console. XB360 on the flipside has a limited eDRAM capacity that has put some pressure on devs regarding tiling, which has negatively impacted other rendering strategies. So both systems have their faults. eDRAM isn't a cure-all solution but a specific problem solver. If the whole reason for eDRAM is to overcome BW bottlenecks, then once main RAM can provide enough bandwidth, eDRAM loses its relevance. That's what we're looking at now, especially by 2014 I think. 200 GB/s seems a likely minimum given current PC GPU speeds. Resolution likely isn't going to be higher than we have now - maybe full 1080p. Advanced AA techniques might mean no more than 4xMSAA is needed, with better AA coming from processing subsamples in more ingenious ways. Overall I don't think next-gen consoles will be BW-limited, so eDRAM seems an unnecessary complication.
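For context on that tiling pressure, a back-of-the-envelope render-target calculation (assuming the common 32-bit colour plus 32-bit depth/stencil per sample) shows why 10MB forces tiling at 720p with MSAA:

```python
# Back-of-the-envelope render-target size, to show why 10MB of eDRAM
# forces tiling at 720p with MSAA. Assumes 4 bytes of colour plus
# 4 bytes of depth/stencil per sample (the common case).

def framebuffer_mb(width, height, msaa_samples, bytes_per_sample=8):
    return width * height * msaa_samples * bytes_per_sample / (1024 ** 2)

EDRAM_MB = 10
for samples in (1, 2, 4):
    size = framebuffer_mb(1280, 720, samples)
    tiles = -(-size // EDRAM_MB)   # ceiling division against the eDRAM budget
    print(f"720p, {samples}x MSAA: ~{size:.1f} MB -> {int(tiles)} tile(s)")

# 1x: ~7 MB (fits), 2x: ~14 MB (2 tiles), 4x: ~28 MB (3 tiles).
# At 1080p, or with fatter render targets, the pressure only gets worse.
```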

Who is even using eDRAM these days? IBM is, on their server processors, as a massive cache. I can't think of any GPUs using it. Even with their experience of Xenos, ATi haven't spun off an eDRAM-enabled GPU line. The reason it has survived in consoles is BW and cost concerns, but I think looking forward, the bullet must be bitten and a large pool of fast RAM selected over a mix of cheaper solutions.
 
Actually, a 128-bit 7GHz GDDR5 setup would reach only 112GB/s, far from the 200GB/s minimum.
Even the planned successor to GDDR5 will probably not be able to push that bandwidth on a 128-bit bus.
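For reference, that figure is just bus width times effective data rate; a quick sketch of the arithmetic (the 7 GT/s effective rate and the comparison points are illustrative assumptions):

```python
# Peak theoretical bandwidth = (bus width in bytes) x (effective data rate).
# GDDR5 is quad-pumped, so "7GHz" here means 7 GT/s effective per pin,
# not the actual command clock.

def peak_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

print(peak_bandwidth_gbs(128, 7.0))   # 112.0 GB/s -- the figure above
print(peak_bandwidth_gbs(256, 7.0))   # 224.0 GB/s -- a 256-bit bus clears 200 GB/s
print(peak_bandwidth_gbs(128, 12.5))  # 200.0 GB/s -- what a 128-bit bus would need per pin
```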
 
Please forgive me, I am not an engineer, so my understanding, years of reading here notwithstanding, is probably not what it should be. Perhaps I have misread the thread, but my understanding was that currently there is no foreseeable (in the timeframe one is looking at) solution to the bandwidth problem. Not on a 128-bit bus.

If your choices are:
1. 256-bit bus -> way too expensive, from what I have read repeatedly here.
2. Split RAM pool -> Again, perhaps I have misread the thread, but split RAM pools seemed to be something that was heavily disliked.
3. EDRAM -> downside dependent on the size.

Would some of this choice be based on a CPU- vs. GPU-centric design? It simply appears, from a layman's perspective, that a large enough chunk of EDRAM is preferable to a split RAM pool. Options 2 and 3 seem to be the only realistic solutions for the $/watt/timeframe.
 
Sadly, I don't believe that there is an easy solution to the bandwidth problem.

eDRAM like in Xenos doesn't seem a good fit for modern techniques. It can't be accessed by anything aside from the RBEs. Devs would instead want a scratchpad memory available to the GPU as a whole (ALUs, RBEs, TUs if it makes sense). To make that work, the eDRAM has to be on the same piece of silicon as the rest of the GPU. This has multiple implications: first on the die size available to the GPU (even more so if you consider a Fusion-like design), and secondly it sets restrictions on the lithography process you can use. All together, quite a bother.

A 256-bit bus? My feeling is that it's unlikely but possible if a manufacturer goes with a Fusion-style chip (one bus, but a wide one, can be an option); it's next to impossible if there are to be multiple chips (one 128-bit bus for the CPU + a 256-bit one for the GPU sounds like way too much cost).

Clearly the best that can happen in this regard is to move to hardware that makes better use of internal bandwidth and data locality (so more storage on chip, but not as one big unified scratchpad memory pool). So more of a mobile GPU approach.
 
Bandwidth isn't really a problem to solve.
It's a constraint; any one of the manufacturers could design a system with a single memory pool and a 128-bit bus.
Devs will work with what they get; it's the nature of console development.
Does having the flexibility to use any piece of memory for any resource outweigh the reduced bandwidth?
It's a tough call.

IME manufacturers tend to massively overcompensate for the most complained-about feature of the previous generation's console.
 
I honestly don't see a 192-bit (1.5/3/6GB) or 256-bit (2/4/8GB) UMA bus with GDDR5 as being that far-fetched for a console in 2012-2014.

The developer-friendliness of a system that's not bandwidth-bottlenecked for graphics and/or CPU could be well worth the extra cost.
Graphics cards with a 256-bit bus and GDDR5 memory are selling right now for a little over 125€ (HD 6850), so it's not really prohibitive.
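Those 1.5/3/6GB and 2/4/8GB brackets fall straight out of the chip count per bus (one 32-bit GDDR5 device per 32 bits of bus) and the chip densities; a small illustrative sketch, assuming no clamshell mode:

```python
# Capacity as a function of bus width and GDDR5 chip density, assuming
# one 32-bit GDDR5 device per 32 bits of bus and no clamshell mode.
# 8Gb GDDR5 parts are an assumption about future chips, not shipping ones.

GDDR5_DEVICE_WIDTH = 32  # bits per chip

def capacity_gb(bus_width_bits, chip_density_gbit):
    chips = bus_width_bits // GDDR5_DEVICE_WIDTH
    return chips * chip_density_gbit / 8  # Gbit -> GByte

for bus in (192, 256):
    options = [capacity_gb(bus, d) for d in (2, 4, 8)]
    print(f"{bus}-bit bus: {options} GB with 2/4/8Gb chips")

# 192-bit: [1.5, 3.0, 6.0] GB; 256-bit: [2.0, 4.0, 8.0] GB -- matching
# the brackets quoted above.
```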

For Microsoft, though, backwards compatibility with the X360 would be a pain in the ass.
I guess if they launch their next console in ~2014, they can probably have a UMA architecture with ~300GB/s, giving them enough headroom to allow eDRAM emulation.
 
The problem is that higher bus width comes with a cost that doesn't go away. You're essentially increasing the minimum size of the part. If they do decide to do it, it's going to come at the cost of something else.
 
Keeping in mind the upcoming shrink of GDDR5 chips, what is the maximum amount of RAM that could be tied to a 128-bit bus?

Alphawolf, I agree that a 256-bit bus would come at the expense of something else. I believe it could be an option (one manufacturers will try to avoid) if they go with single-chip systems. If they have a massive 300+ mm² single chip, it gives them some room to shrink before the bus size becomes a bother. Going with 256 bits would also allow them to use the lowest-grade GDDR5 of the time.
From my POV a 256-bit bus coupled with low-grade GDDR5 is the best compromise in performance and flexibility, but manufacturers might think otherwise.
 
Keeping in mind the upcoming shrink of GDDR5 chips, what is the maximum amount of RAM that could be tied to a 128-bit bus?

AFAIK all GDDR5 modules are 32-bit; however, they can be operated in clamshell mode. This means at this point the maximum on a 128-bit bus is 8 × 2Gb = 2GB of RAM. Based on this, I cannot see consoles with a 128-bit bus having more than 4GB of RAM for the next generation.

One thing you should remember is that GDDR5 requires, IIRC, ~30% more pins than GDDR3. So a 128-bit GDDR5 bus is around the equivalent of a 166-bit GDDR3 interface.
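A quick sketch of both of those numbers, taking the ~30% pin overhead as the poster's recollection rather than a datasheet figure, and assuming 32-bit GDDR5 devices at 2Gb density:

```python
# Maximum GDDR5 capacity on a 128-bit bus, plus the rough GDDR3
# pin-equivalence. Assumes 32-bit GDDR5 devices and 2Gb density
# (the largest shipping at the time); the 30% pin overhead is the
# recollection quoted above, not a datasheet number.

def max_capacity_gb(bus_width_bits, chip_density_gbit, clamshell=False):
    chips = bus_width_bits // 32 * (2 if clamshell else 1)
    return chips * chip_density_gbit / 8  # Gbit -> GByte

print(max_capacity_gb(128, 2))                  # 1.0 GB with 4 chips
print(max_capacity_gb(128, 2, clamshell=True))  # 2.0 GB with 8 chips, as above
print(max_capacity_gb(128, 4, clamshell=True))  # 4.0 GB once 4Gb parts arrive

print(128 * 1.3)  # ~166 -- a 128-bit GDDR5 bus costs roughly as many
                  # pins as a 166-bit GDDR3 interface
```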
 
http://kotaku.com/#!5794000/microso...or-new-consoles-leaving-nintendo-in-the-clear

A little timeline info/rumor, to be taken with salt. It claims MS and Sony are looking to 2014...

Hey, that's only three years, not so bad. At least it's something semi-concrete.

It also claims the MS board has not decided on next-gen specs yet, and is unsure whether to go with a profitable-at-launch or money-losing-at-launch hardware strategy.

Personally I expect something in the middle: money-losing, but not as bad as 360/PS3, something more reasonable. If they can avoid the RROD mess again, that will save them a ton of money this time around anyway, so they should factor that in too, IMO. Meaning even if they're losing just as much on the hardware as the 360 at launch, they should still end up way better off.
 
I think 4Gb chips are more of a demand issue than a supply one. Samsung just started production of 4Gb LPDDR2 for customers. If Sony or Microsoft were to approach a memory supplier and say "we want 4Gb chips and we will buy them by the million", I think it would get done.

Obviously no one will go as deep in the hole as the PS3 was at launch, and I could see Sony and Microsoft waiting until 2014 if the Wii 2, or whatever it is called, is just a marginally upgraded current-gen system, assuming it doesn't have some awesome new feature they are worried about.
 