Predict: The Next Generation Console Tech

function, thanks for catching my mistake regarding months. I was at work and other stuff demanded attention as well. So it's an 11x reduction over 39 months, i.e. 3.25 years.


As for hoho's post .... Ooooh boy. Here goes.

Depends how well your streaming system works. Also, just adding extra RAM won't help nearly as much for IQ if you don't pair it up with higher bandwidth and that costs an arm and a leg compared to just using bigger memory chips.

Let's be honest here - streaming in textures mid-game is why most multiplatform games look so goddamn ugly and it is the sole reason behind texture pop-in. Almost every Unreal Engine 3 game suffers from it regardless of platform, and badly. Rage will suffer from it as well, as Carmack himself admitted.

The amount of unique data you can see on screen per frame is pretty much only limited by VRAM bandwidth. Capacity is nice too, but not nearly as important or useful, at least provided you have enough to get the basics working (= as much as the competition).

What use is a large amount of unique data on screen if it comes in with visible lag because some low-speed optical drive or HDD has to stream it there? Unique data is useful when you don't have to stream it in; then you can have a frame with no pop-in, provided you can fit all the necessary data into the framebuffer, right?
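To put rough numbers on that "visible lag", here's a quick back-of-the-envelope sketch. All the read speeds and the batch size are assumptions for illustration, not measured console figures.

```python
# Rough sketch: how long does a streamed texture batch take to arrive from
# different sources, compared with a 30 fps frame budget? All figures below
# are assumed for illustration, not measured console specs.

FRAME_BUDGET_MS = 1000.0 / 30.0          # ~33 ms per frame at 30 fps

sources_mb_per_s = {                     # assumed sustained read speeds
    "optical drive": 10.0,
    "HDD":           50.0,
    "PC main RAM":   5000.0,
}

texture_batch_mb = 16.0                  # assumed size of one streamed batch

for source, speed in sources_mb_per_s.items():
    load_ms = texture_batch_mb / speed * 1000.0
    frames = load_ms / FRAME_BUDGET_MS
    print(f"{source:14s}: {load_ms:7.1f} ms  (~{frames:.0f} frame(s) before the data is usable)")
```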

True, but that will NEVER happen in a console world where you have clearly defined hardware. How many games had significantly worse IQ on PS2 than on XB, considering the latter had tons more RAM? How big a part of the IQ difference between PS3 and XB360 was caused by lack of RAM* and not by differently allocated performance**?

*) IIRC the PS3 reserves more RAM for the OS, and it also has two memory banks instead of one unified pool.
**) The PS3 having a generally weaker GPU and a stronger CPU.

You're already seeing it. This whole generation has been about optimizing PS3 to somehow cope with multiplatform titles that it has not been the lead platform for. If you have the time, look up Digital Foundry's platform comparisons.

PS3 loses on IQ frequently, and oftentimes in texture quality (see Bioshock), although RSX is much better at texturing than Xenos (13,200 MTex/s vs 8,000 MTex/s). I can only chalk it up to the PS3's maximum of 256MB of frame buffer vs the X360's <384MB of unified RAM, a 50% difference.

Again, let's assume MS or Sony had doubled the amount of RAM their machine had vs the competitor. Do you think they could have had significantly higher IQ as a result and not been bottlenecked by having to stream twice as much data through their GPU?

This was addressed before, but it's quite funny how THE most powerful platform, the PC, hasn't been the lead platform for pretty much the entire lifespan of the latest-gen consoles :)

What I do know is that if not for the 256MB limitation of its framebuffer, the PS3 would have handed the X360 its ass in texture quality, having a wild 65% advantage in texturing power!
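Quick sanity check on the two percentages being thrown around here, using the figures quoted in this thread (not independently verified numbers):

```python
# Texel-rate and memory figures as quoted in this thread.
rsx_mtex, xenos_mtex = 13200, 8000
print(f"RSX texel-rate advantage: {(rsx_mtex / xenos_mtex - 1) * 100:.0f}%")   # -> 65%

ps3_vram_mb, x360_graphics_mb = 256, 384   # 384MB is the rough X360 figure used above
print(f"X360 memory advantage:    {(x360_graphics_mb / ps3_vram_mb - 1) * 100:.0f}%")  # -> 50%
```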

Texture resolution is only a tiny part of IQ. Shading is much more important and bigger textures don't help you much there.

No amount of shading will reduce the blurriness of textures. It will only make them shinier.

You were using a deeply flawed example. Consoles are not comparable to PCs, and as has been repeated several times already, simply having more RAM without increasing bandwidth and processing power will not give you higher IQ. The only reason your 320MB vs 640MB example works is that no one expects any random PC GPU to handle anything thrown at it.

hoho, is that really you? :) What, so now console GPU and memory subsystems are not by and large the same exact technology used in graphics cards?? I know you know better than that.

The only difference between consoles and PCs is that once consoles run out of VRAM they go to the HDD or, worse, the optical drive, while on the PC there is main RAM to hold that data, which is orders of magnitude faster than any HDD. Yet even that does not help.

So please tell me how you can fit 363MB of data into a 320MB buffer without reducing image quality (and that includes the pop-in textures that come with streaming)?

While the price of the chip is dropping, the cost of providing an N-bit-wide channel to it is pretty much constant and overall costs significantly more than the memory chip itself (more complex motherboard, memory controller, power usage, ...). So again, all you would get is more RAM at the same bandwidth, meaning fewer loading screens and less need for streaming, but not much better IQ.

A game completely without any texture pop-in or, worse, geometry pop-in, and with a normal draw distance?? Unimaginable luxury for the next gen?

What I don't understand is how you, who I know have lots of experience with 3D games, haven't seen enough to know that an HD 4850 with 1GB of VRAM today runs games much, much better than the original 512MB model, while having the exact same bandwidth and computational power?

How nice of you to bring up a clearly undercut price point. For the next ~22 months the cheapest RAM was still significantly more expensive than that DDR2.
Also, in the same list there was a 512MB SDR stick for $39, or 1GB for $78, on sale in August 2003. That doesn't quite match your nice prediction of price reduction :)

Memory industry prices operate on volume. The more volume, the cheaper they are. Once a memory type falls out of volume, its price will go straight up. That's not something new.

Do you think that 100 million plus another 100 million units, times 4-8GB, is enough volume to maybe keep the prices down over a 10-year cycle, factoring in that PC graphics cards will use the same memory type for a significant portion of that period?
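For scale, a rough tally of what that volume means (the unit counts and per-console RAM are just the assumptions from this post):

```python
consoles = 100e6 + 100e6                  # two platforms at ~100 million units each
for gb_per_console in (4, 8):
    total_gb = consoles * gb_per_console
    print(f"{gb_per_console} GB per console -> {total_gb / 1e9:.1f} billion GB of DRAM over the cycle")
```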

Find me a game that is GPU computing-limited* on a GTX 560 and compare the 1GB vs 2GB cards. I'd love to see the IQ and performance difference between them when you crank up the texture resolutions.

*) because that's what really limits game IQ and that's what gets bottlenecked first on a console. Anecdotal "evidence" from a game that doesn't have any kind of half-decent streaming is not an example of what happens in the real world on consoles.

Find me a game on PS3 that can bottleneck the texturing capability of the RSX. You can't, because the PS3's 256MB frame buffer cannot hold the needed amount of textures to do that in the first place.

These are the sad realities of the soon-to-be-last-gen.
 
You've missed out on a lot of discussions here, but suffice to say, you're looking at the wrong culprit. Storage space is the problem.

You're right, I totally overlooked it in my little post.
I favor a standard HDD; a 1TB one would be likely with increasing densities, so you get virtually unlimited storage space. Transfer rates are over 100MB/s this time, too.

A game could do a quick install (one or a few GB) with game code, menus, main character assets etc., then have a "cache everything that's read from Blu-ray forever" scheme.
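A minimal sketch of what that "cache forever" scheme could look like, assuming a plain filesystem on the HDD; the paths and the helper are hypothetical, not any real console API:

```python
import os
import shutil

CACHE_DIR = "/hdd/cache"                      # hypothetical HDD cache location

def read_asset(disc_path: str) -> bytes:
    """Serve an asset from the HDD cache, copying it there permanently the
    first time it is read from the optical disc."""
    cached = os.path.join(CACHE_DIR, disc_path.strip("/").replace("/", "_"))
    if not os.path.exists(cached):
        os.makedirs(CACHE_DIR, exist_ok=True)
        shutil.copyfile(disc_path, cached)    # slow first read from Blu-ray
    with open(cached, "rb") as f:             # fast subsequent reads from HDD
        return f.read()
```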
 
Overall, keeping cost in mind, I would favor 2GB of cheap GDDR5 on a 256-bit-wide bus.

For the 2012/14 timeframe, I'd expect that to cost about the same at release as 4GB of (cheap) GDDR5 on a 128-bit bus + ~20MB of eDRAM, with the 128-bit solution roughly halving its cost in the first 2 years, while the 256-bit one doesn't go down in price *at all, ever*.

Memory controllers take up static space on a die and don't shrink; eDRAM shrinks. 256 bits would always require at least 8 memory chips, while at 128 bits you can switch to 4 a few years after launch. Needing traces for a 256-bit bus will cost more for most of the lifetime of the device (if these platforms are seriously going to last until 2022, they will likely integrate the RAM on package before then).
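The chip-count point follows directly from the fact that each GDDR5 device exposes a 32-bit interface; the per-chip densities below are just illustrative assumptions:

```python
CHIP_BUS_BITS = 32                          # standard GDDR5 device interface width

for bus_bits in (128, 192, 256):
    min_chips = bus_bits // CHIP_BUS_BITS
    capacities = [min_chips * gbit / 8 for gbit in (2, 4)]   # 2 and 4 Gbit devices
    print(f"{bus_bits}-bit bus: at least {min_chips} chips "
          f"-> {capacities[0]:.1f} GB or {capacities[1]:.1f} GB minimum configurations")
```

That's also why the 192-bit option mentioned further down lands naturally on 1.5GB or 3GB.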

I'd expect the performance of the 4GB 128-bit + eDRAM setup to be better than the 2GB version. In any case, if they want a 256-bit bus, they might as well up the RAM to 8GB, simply because that wouldn't cost them much more in the long run.
 
For the 2012/14 timeframe, I'd expect that to cost about the same at release as 4GB of (cheap) GDDR5 on a 128-bit bus + ~20MB of eDRAM, with the 128-bit solution roughly halving its cost in the first 2 years, while the 256-bit one doesn't go down in price *at all, ever*.

Memory controllers take up static space on a die and don't shrink; eDRAM shrinks. 256 bits would always require at least 8 memory chips, while at 128 bits you can switch to 4 a few years after launch. Needing traces for a 256-bit bus will cost more for most of the lifetime of the device (if these platforms are seriously going to last until 2022, they will likely integrate the RAM on package before then).

I'd expect the performance of the 4GB 128-bit + eDRAM setup to be better than the 2GB version. In any case, if they want a 256-bit bus, they might as well up the RAM to 8GB, simply because that wouldn't cost them much more in the long run.
So you think more RAM plus another bus of some form plus another chip is that much cheaper than a wider bus? I strongly question that.
As for shrinking: shrinking an existing design usually doesn't get you the same benefit as designing something from scratch on a new process; you typically end up with something that's 70% of the original area at best (for Xenos the factors were 0.85 and 0.77). MS will supposedly go with an SoC; to pack decent power it's likely to be north of 300mm², and even after a shrink you'll still have something big enough to fit your memory controllers. Even in a best-case scenario (a 210mm² chip shrunk from 300mm²), if that's not enough it won't be by much; they could optimize further for power, build in extra redundancy, etc. But the most likely scenario is that there won't be any problem at all (Nvidia fit a 256-bit bus on the GTS 250 / G92b, which has a die size of 230mm²).
The cost of the wide bus may stay fixed, but that's true for a lot of the parts you would include anyway. The bus from the SoC to the eDRAM adds complexity and it isn't going anywhere, the eDRAM will shrink at a lower rate than the rest of the system, and there is the extra cooling to take into account.
As for eDRAM, 20MB is not enough: even a lightweight G-buffer like the ones used by Crytek or sebbbi won't fit if they render anywhere close to 1080p. Their G-buffer memory footprint is ~50% bigger than that of a forward renderer at the same resolution; at 1080p a forward renderer already needs around 15MB, add the extra ~7MB and it no longer fits in 20MB. And I don't feel they can go with a tinier G-buffer.
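Rough arithmetic behind those figures (the bytes-per-pixel values are illustrative assumptions; real engines vary):

```python
width, height = 1920, 1080
px = width * height
mb = lambda bytes_per_pixel: px * bytes_per_pixel / 2**20

forward  = mb(4 + 4)     # 32-bit colour + 32-bit depth  -> ~15.8 MB
extra_rt = mb(4)         # one additional thin G-buffer target -> ~7.9 MB
print(f"forward @1080p: {forward:.1f} MB; with the extra target: {forward + extra_rt:.1f} MB "
      f"(vs a 20 MB eDRAM budget)")
```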
Then, if you have eDRAM, main RAM will be DDR3; bandwidth won't even be twice what was available to Xenos (22GB/s), and the cost of resolving to main RAM will increase. Not to mention all the restrictions that would place on developers.

If they want to add more than 2GB because it will cost the same in two years, well, good for them, but I thought we were speaking of saving money; they may also want to spend less, use smaller memory chips, run cooler, etc.

Another thing about lasting until 2022: how do you know? This gen lasting so long doesn't imply that the next will follow the same trend. This gen went pretty wrong, with huge losses for both MS and Sony, and it was a huge jump for many developers too (multi-threading plus poor serial performance). It won't be the same for the next gen no matter the word on the street. Publishers were saying that three years ago; now they have DirectX 11 parts at hand and PCs with plenty of cores/threads and plenty of memory. In short, I don't expect the explosion of production costs to happen, nor do I expect the publishers and manufacturers to pursue this path.
Honestly, a 256-bit bus might be the most elegant solution from both a software and a hardware point of view. It would be a super straightforward system, devs would get plenty out of it pretty quickly, and with a more reasonable BOM, online service subscriptions, sales of various accessories, and no RRoD-like incident, the manufacturers might make more money, faster. On top of that, there might not be a Wii this gen to steal some of the core-segment sales; in short, there may be nothing preventing the manufacturers from moving to the next generation earlier, and that doesn't mean EOL-ing the existing model. Taking it the other way around, if they plan to last ten years, how is RAM relevant to sales assuming customers have no choice? Did the Wii have what it takes even to ship in 2006 and, more critically, to last until 2011? The answer to the first is no, and it obviously shows now, but we're not speaking of hardware in the same ballpark: what would a "Wii" be now? A sluggish dual core backed by a DirectX 9a-class GPU at best, with a handful of MB of eDRAM and 256MB of main RAM.

The past is interesting as a reference, but something that happens once may not happen again, since there could have been special circumstances behind it; if it happens multiple times, well, that's another matter. Widening the bus is that kind of thing, otherwise we'd still be stuck at the Xbox's 64-bit-wide bus, I guess. Bus width has grown with almost every console generation; now it's suddenly taboo and can't even be considered as an option? I don't agree with that.
 
You're right, I totally overlooked it in my little post.
I favor a standard HDD; a 1TB one would be likely with increasing densities, so you get virtually unlimited storage space. Transfer rates are over 100MB/s this time, too.

A game could do a quick install (one or a few GB) with game code, menus, main character assets etc., then have a "cache everything that's read from Blu-ray forever" scheme.


An optical drive using General Electric's holographic storage technology could store hundreds of gigabytes of data per disc and have data rates a bit greater than 100MB/s. It was just announced that GE is almost ready to start licensing the technology. They have 500GB discs and are now working towards 1TB.

The extra capacity is important because a RAGE-engine (id Tech 5) game wouldn't have to compress its data as much, so the texture resolution would be higher. RAGE has 100GB of data for just the textures alone.
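A rough way to see the capacity argument: how hard a ~100GB texture set has to be compressed to fit on different media (the disc sizes are the usual nominal figures, the rest is just arithmetic):

```python
raw_textures_gb = 100.0                       # figure quoted for RAGE's texture data

for name, disc_gb in [("DVD-9", 8.5), ("BD-25", 25.0), ("BD-50", 50.0), ("holographic", 500.0)]:
    ratio = raw_textures_gb / disc_gb
    print(f"{name:12s} ({disc_gb:5.1f} GB): needs >= {ratio:4.1f}:1 compression for the full set")
```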
 
Actually, the exact inner workings of Rage aren't really known. We've heard all kinds of figures and mechanisms but - at least that's my impression - it's really hard to tell exactly how much actual texture memory is used at runtime, how big the compressed dataset is, how big the original one is, and what resolution and compression differences are there between the various platforms. Even the same content changes between various trailer releases, making it kinda hard to tell what's what for now.

So I'd wait for DigitalFoundry's analysis before drawing any solid conclusions.
 
So you think more RAM plus another bus of some form plus another chip is that much cheaper than a wider bus? I strongly question that. <snip>
Then, if you have eDRAM, main RAM will be DDR3; bandwidth won't even be twice what was available to Xenos (22GB/s), and the cost of resolving to main RAM will increase. Not to mention all the restrictions that would place on developers. <snip>

If they want to add more than 2GB because it will cost the same in two years, well, good for them, but I thought we were speaking of saving money; they may also want to spend less, use smaller memory chips, run cooler, etc. <snip>


Honestly, a 256-bit bus might be the most elegant solution from both a software and a hardware point of view. It would be a super straightforward system, devs would get plenty out of it pretty quickly, and with a more reasonable BOM, online service subscriptions, sales of various accessories, and no RRoD-like incident, the manufacturers might make more money, faster. <snip> ...can't even be considered as an option? I don't agree with that.

Hi Liolio

A few questions and comments if I may:

1. What other bus do you mean? There would be one 128-bit bus and an internal bus for the eDRAM, a bit like Xenos/Xenon now.

2. Why would the RAM be DDR3 with eDRAM? Is there a technical reason why it can't be eDRAM plus GDDR5, like tunafish said?

3. A 256-bit bus may increase the BOM over adding eDRAM and a 128-bit bus; we don't know the exact figures at this time. RRoD and other overheating issues weren't necessarily caused by having eDRAM in the system, and they can still occur without eDRAM and with just a 256-bit bus, so that point seems irrelevant. The PS3 has its own YLOD, for instance, without eDRAM.

I would aim for 40MB of eDRAM and a 128-bit bus with fast GDDR5 memory.
 
Oi Gubbi, I would estimate half that cost for Arcade units, simply because you don't need much more than 4GB for your basic SKU.

I think next gen is going to be much more digital-distribution-centric than this gen. Microsoft want to have a decent chunk of mass storage in their console because that way they can sell more stuff to people: XBLA titles, old 360 games, complete new games and DLC.

As for caching: my idea wasn't really to cache a complete Blu-ray disc, "install"-style, but rather to have it cache recently loaded data (from HDD or Blu-ray). I want at least 50GB of flash because that way I get to cache all the data from a Blu-ray. The load times in games like Mass Effect and Dragon Age drive me nuts.

The caching implementation could be made more intelligent than the current 360 one, which reserves 3x2.5GB for caching. It could simply use all available Flash.
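A minimal sketch of that "use all available flash" idea, as a simple LRU cache sized to whatever is free; the sizes and the interface are hypothetical, not any real console SDK:

```python
from collections import OrderedDict

class FlashCache:
    """LRU cache of recently loaded assets, bounded only by free flash space."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes       # e.g. all flash currently free
        self.used = 0
        self.entries = OrderedDict()         # path -> size, oldest first

    def put(self, path: str, size: int) -> None:
        if path in self.entries:
            self.entries.move_to_end(path)   # refresh on re-use
            return
        while self.used + size > self.capacity and self.entries:
            _, evicted_size = self.entries.popitem(last=False)   # drop least recently used
            self.used -= evicted_size
        self.entries[path] = size
        self.used += size

    def contains(self, path: str) -> bool:
        if path in self.entries:
            self.entries.move_to_end(path)
            return True
        return False
```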

Cheers
 
It could be a 192-bit bus with GDDR5, too; you'd have a nice 3GB of memory with that.
In terms of watts you can't really afford an extremely powerful GPU, even at 28nm. I imagine about Radeon 6850 level as the upper boundary.

If it's big eDRAM with 2GB of 128-bit GDDR5, it will be worse in some aspects and much better in others; it won't suck at all.
I'm mostly wary of developers once again doing games without AA; using just a big framebuffer rather than eDRAM + DRAM would allow them to use memory as they see fit and employ any unorthodox technique.

Just one pool of memory, down from two on the X360 and eight on the PS3 ( :) ). I don't know how relevant this is.
 
I think next gen is going to be much more digital-distribution-centric than this gen. Microsoft want to have a decent chunk of mass storage in their console because that way they can sell more stuff to people: XBLA titles, old 360 games, complete new games and DLC.

As for caching: my idea wasn't really to cache a complete Blu-ray disc, "install"-style, but rather to have it cache recently loaded data (from HDD or Blu-ray). I want at least 50GB of flash because that way I get to cache all the data from a Blu-ray. The load times in games like Mass Effect and Dragon Age drive me nuts.

The caching implementation could be made more intelligent than the current 360 one, which reserves 3x2.5GB for caching. It could simply use all available Flash.

Cheers

This could be done as a luxury option: HDD for the base model, SSD (perhaps as a seamless cache, and perhaps 120GB in size) on a high-end SKU.
 
liolio said:
Is any data available on the seek times of these holographic discs?

I've not actually seen a real number, but I've heard they are much faster than traditional optical media. I'm not really sure how GE is doing it, but past iterations of HVD did not spin, allowing them to be made in a variety of formats with fast seek times and sizes ranging from 30GB to multiple terabytes.


It could be a 192-bit bus with GDDR5, too; you'd have a nice 3GB of memory with that.
In terms of watts you can't really afford an extremely powerful GPU, even at 28nm. I imagine about Radeon 6850 level as the upper boundary.

Laptops are shipping with 6850s now (at slightly lower clocks); you don't think they can improve on that by 2013/2014? I'm not expecting a retail part or anything, it's just that that level doesn't seem like an extreme upper boundary to me. A 75W GPU could fit in a 200-ish-watt box.
 
The point is, when they were designing the hardware, they weren't thinking about what might happen. They did the best they could on their budget, and trusted the devs to figure out how to make the best use of it.

So the idea that the hw engineers should try to guess what devs want to do 10 years from now and accommodate that is completely backwards...

I respect your opinion, and honestly every post of yours is a lesson for me, but I don't entirely agree here. We have the case of Epic directly influencing the design of the Xbox 360 by asking for extra RAM for UE3.0, and even further back there was a lot of research for the PS1 into what developers wanted at the time and beyond (tools, the GTE). So I think the console manufacturer (I never mean the engineers exclusively, but the first-party devs helping too = the console manufacturer) needs to listen to developers like Crytek, Carmack of id, or Epic again, with their estimates of 10x the performance and the need for more RAM (Crytek wants 8GB). The manufacturer must hear what developers want and need and then go a little further, as MS did in 2005 with the Xbox 360, equipping it with the best in terms of graphics processing and opening a new paradigm (there was already talk of a 48-ALU unified-shader GPU back in September 2003).
 
Hi Liolio

A few questions and comments if I may:

1. What other bus do you mean? There would be one 128-bit bus and an internal bus for the eDRAM, a bit like Xenos/Xenon now.
Well, I speak of a bus because, assuming more than 20MB of eDRAM and an SoC bigger than Xenos was at launch, I assume the eDRAM chip would not be on the same package as the SoC/main chip.
I also thought of heat dissipation: the SoC, even with power-efficient CPU cores and a conservatively clocked Barts-class GPU (a newer architecture, but just to give a reference), will pump out its fair share of power and heat. I'm not sure it's a good idea to put anything other than the SoC on the package; I'm not an engineer, but I feel that breaking the symmetry of the package may affect how well the heat is dissipated. The eDRAM chip may also draw extra power or be forced to run at a higher temperature than one would want. Especially after the RRoD incident, I'm not sure it's a good idea to take risks on the matter, but to be clear, this is speculation based on the little I know; I may be wrong. By the way, that's one thing I considered while... considering (my vocabulary sucks :LOL:) a wider bus: if you really have a single chip that you can cool properly in a pretty straightforward fashion, why start breaking the simplicity of the design? It can't be free. The same is true from a software POV: one chip, one unified pool of RAM, your average developer's (not said in a mean fashion) sweet dream, so why start polluting the design's simplicity? Even if it means spending a bit more? (Actually, as I stated, I'm not confident it would cost more.) It's IMHO the lesser bullet to swallow.
Looking forward, it helps BC for future systems: once you're there, with one chip and one pool of RAM, evolution should be straightforward; it basically also sets the foundation of your future architecture.
2. Why would the RAM be DDR3 with eDRAM? Is there a technical reason why it can't be eDRAM plus GDDR5, like tunafish said?
Well, DDR3 is cheaper and there are bigger memory chips available (which may no longer be true in 2013). I read that DDR3 requires fewer pins than GDDR5 (or is it that the traces/buses on the motherboard are simpler with GDDR5, or the other way around? :| ). DDR3 also offers lower latencies, which would help the CPU. Overall, if you move most of the bandwidth-intensive operations onto the smart eDRAM chip, it could be a better and more economical choice to go with DDR3.
3. A 256-bit bus may increase the BOM over adding eDRAM and a 128-bit bus; we don't know the exact figures at this time. RRoD and other overheating issues weren't necessarily caused by having eDRAM in the system, and they can still occur without eDRAM and with just a 256-bit bus, so that point seems irrelevant. The PS3 has its own YLOD, for instance, without eDRAM.
No, they were not connected, but I wasn't claiming they were, or I wasn't clear enough. As I explained above, I'm not sure we can expect the smart eDRAM chip to be on the same package. So in my mind there would be another chip on the board (the eDRAM), which may require passive cooling. Actually, in my mind, if you really only had one chip to cool plus the surrounding memory chips, it would be easier to design a functional cooling system.
I would aim for 40MB of eDRAM and a 128-bit bus with fast GDDR5 memory.
40MB sounds nice; you may fit most G-buffers @1080p (without AA) plus some extra render targets, and if the SoC can read from the eDRAM, you pay the price of copying data to main RAM only right before the frame buffer is sent to the display :)
But as I said, being able to read from the eDRAM implies a more complex chip, a more complex memory controller between the SoC and the eDRAM, etc.; it ain't free.
If you have GDDR5, the cost of resolving to main RAM may actually remain constant versus its cost on the 360, so it may be possible to leave the eDRAM not accessible from the SoC. It will still have a cost.
Really, overall, with the impressive use of bandwidth modern RBEs are making, I'm less and less confident eDRAM would be a win versus how "lean" the system would be with just a wider bus.
I'm leaving aside the more technical considerations I can't address, like "aren't the RBEs, which are linked to the GPU L2 in modern GPUs, used for atomic operations, which developers seem to use more and more; wouldn't it be a problem to move them onto a separate chip?", because I really have no clue, not at all :LOL: Or things like "the SoC's CPU and GPU may see a flat coherent memory space; what are the pros and cons of eDRAM, taking this advantage (if it is one) into account?", because I really don't know either.
Bus width has grown throughout console history, so I believe it's a debatable point; it ain't free, but nothing is, and I'm not sure the mantra behind eDRAM use (limited space with high bandwidth) mixes well with current rendering practices, which are all about beefy G-buffers, multiple render targets, etc., which "trade" bandwidth for memory space (a gross simplification :LOL: ).
 
because we have the case of Epic directly influencing the design of the Xbox 360 by asking for extra RAM
Honestly I think this story is well and truly overplayed.
MS most certainly did not add extra RAM to 360 to appease Epic. If I were speculating as to what the exact cause was, I'd make note of the fact that they made the decision at about the time the first PS3 devkits were circulating through 3rd party devs.

MS do listen to what developers say, and so does every other manufacturer, but at the end of the day they have a budget and they have to make trade-offs: more RAM might mean a weaker CPU or GPU, or whatever.

A case in point: there was a support guy at MS who, about a year prior to the 360 launch, sent out an email asking devs if they would like more eDRAM. Of course everyone responded with yes, and of course MS couldn't accommodate it because of cost; the support engineer had simply never understood what it would cost per unit to do.
 
Bottom line..

Sony has always brought out a state-of-the-art console and handheld (for its time).

They've done that with the PS1....
I'm not sure PS1 was really state of the art. Sony used some clever tricks to squeeze decent 3D from affordable hardware, resulting in wobbly vectors!

They've done that with the PS2....They've done that with the PSP....They've done that with the PS3...
PS3's GPU was a generation outdated at launch.

The CPU power is pretty much always approx 10x, and the memory upgrades are almost always 16x (give or take a few).
PS1 > PS2 = 8x. PS2 > PS3 = 16x. That's only two samples, which is nowhere near enough to identify an 'almost always' rule. Even adding the PSP-to-Vita transition into the mix, that's 2 out of 3 that happened to have a 16x increase in RAM. That's not a basis for any sound extrapolation!

There is literally zero sign that says Sony is going to break their pattern and in fact, the VITA shows that pattern still holds.

I may not be the most technically sound guy here, but damn me to hell if I can't recognize a pattern that's been laid down over and over and over and over and over.
It hasn't, and patterns in human choices can change because they aren't necessarily founded on the same needs. What was necessary for the PS2 > PS3 RAM increase happened to be 16x. If the next generation will be well served by an 8x increase (same as PS1 > PS2), then that could be chosen. If next-gen requires 16GB of RAM, Sony will go with a 32x increase. Or maybe they'll pick 6GB for a 12x increase. They're not going to pick a 16x increase in RAM just because they've done that a couple of times before. ;)
 
I've moved the nature of the next generation talk to its own new thread here. This thread is now solely for talking about the possible hardware solutions, rather than what exactly the target is going to be (tablet, online streaming box a la OnLive, simple console, gaming PC, etc.). Feel free to speculate a hardware design for a particular gaming concept here (256 MBs RAM for an OnLive box, AMD Bulldozer for a tablet, multicore x86 for a console), but keep the discussion as to why someone should build a particular type of console to the other thread.
 
I'm not sure PS1 was really state of the art. Sony used some clever tricks to squeeze decent 3D from affordable hardware, resulting in wobbly vectors!

This is one case where it really was state of the art. I remember seeing a demo of a preproduction devkit at some show or other.
It was the most impressive graphics I'd seen at the time short of RE's.
Yes, it didn't do perspective correction, but I had an unreleased demo board from S3 in my office at the time, which was later described by many people as a graphics decelerator.
It wasn't until 3DFX and Rendition parts were released later that PCs caught up with and passed the PS1.
PS2 was interesting, but PCs and the PC graphics card vendors already owned the graphics space by then.
 
I bought the original Playstation after watching a Toshinden and Ridge Racer demo at my exchange the day it came out.

It was pretty incredible for its time--I owned a Saturn at the time, but I had to get it.
 