Predict: The Next Generation Console Tech

Status
Not open for further replies.
I see megatextures as an example of "more RAM is better." Rage is relatively low-res because of memory limitations, whereas the computing load is cheap.
More memory means higher-res textures. Imagine an Xbox 360 with 2GB of RAM and a fast Blu-ray drive, everything else equal, and all textures at 4x the resolution. Lossless high-quality sound effects and music; a bit less texture popping; increased eDRAM and the game running at 4x FSAA.

You get the very same game, but with much better quality.

I would like 4GB of RAM and a 1TB+ hard drive. That way all streaming is from the hard disk, not from Blu-ray to flash and then flash to memory :) (but your point stands somewhat, as content would come much faster, almost seamlessly, from the SSD)
 
I think one of my edits got lost. Think about this:

If adding more memory, after all the changes, costs an average of $5 per console, then the cost over the cycle would be something like $5 × 100 million = $500 million. That's 500 million dollars taken away from profits. BOM really starts to matter at that scale of manufacturing. $5 is a random number, and I expect the real figure to be much lower... Also, if the console ends up being hugely popular, the number is closer to 150 million sold than 100 million (PS2 numbers).

In the end I'm sure Sony and Microsoft will build the best possible box given their respective goals. It's just not trivial to add a huge amount of really fast memory when, in console scope, it's possible to optimize streaming very well, get brilliant results with less, and use the money elsewhere more productively, or just pocket the winnings. If I had to choose between 2GB of RAM plus a 64GB fast SSD, and 4GB of RAM with an optional slow 2.5" HDD (with game developers having to rely on streaming from optical media because of the low-end SKU), I would rather take the former. Optimizing engines and streaming from that fast SSD will give a really nice boost to megatextures and megamodels.

Edit: I also wouldn't be surprised if plenty of next-gen games still run at 720p or some other sub-1080p resolution that scales nicely to 1080p. I'm guessing devs will want to push fancier pixels rather than more pixels. 720p requires almost an order of magnitude less memory than on the PC side, where resolutions on high-end rigs are starting to get ridiculous.

:???:

No offense....but I think I'm done.
 
Bottom line..

Sony has always brought out a state-of-the-art console and handheld (for its time).

They've done that with the PS1... the PS2... the PSP... the PS3... and now they're getting ready to do it again with the Vita.


The CPU power increase is pretty much always approximately 10x, and the memory upgrades are almost always 16x (give or take).

There is literally zero sign that Sony is going to break their pattern; in fact, the Vita shows that the pattern still holds.


I may not be the most technically sound guy here, but damn me to hell if I can't recognize a pattern that's been laid down over and over and over again.

2 to 4 gigs is not even worth the time to upgrade. That is NOT going to happen... Sony knows better.
 
You should think about streaming for a while and stop just spouting RAM, RAM, RAM.

Think about a scenario where a console has 4GB of RAM plus a 4x BD-ROM drive. This device can read about 20MB/s from the disc, assuming no seeks are needed. Assume another console has 2GB of RAM and 64GB of flash memory that can read at 400MB/s. After 10 minutes of gaming, the console with 4GB of memory has shown the user 4GB + 0.02 × 60 × 10 = 16GB of content (or less). On the other hand, the 2GB RAM console has shown the user 2GB + 0.4 × 60 × 10 = 242GB of content (or less). The 2GB console could of course also use the optical drive to stream, making the difference between the 2GB and 4GB units even bigger. The random access an SSD provides would probably make a big difference on top of that, which I'm not even trying to quantify.
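The arithmetic above can be sketched directly. The RAM-plus-streamed-bytes model and the read rates come from the post itself (with GB rounded as 1000 MB, the same way the post does it):

```python
# Sketch of the streaming arithmetic above. The model is the post's own:
# the unique content a console can show is bounded by its RAM capacity
# plus whatever it can stream in the time available (GB = 1000 MB here).
def content_shown_gb(ram_gb, read_mb_s, minutes):
    return ram_gb + (read_mb_s / 1000) * 60 * minutes

optical = content_shown_gb(ram_gb=4, read_mb_s=20, minutes=10)   # 4x BD-ROM
flash = content_shown_gb(ram_gb=2, read_mb_s=400, minutes=10)    # fast flash

print(optical)  # 16.0 (GB)
print(flash)    # 242.0 (GB)
```

It's an upper bound, of course; real streaming loses throughput to seeks and redundant loads.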

It's not about the amount of RAM; it's about the unique content in each frame and how to get that content to the display. Of course streaming doesn't work exactly like this, but you should get the idea of why streaming is in and loading a whole level at once into huge RAM is out (when you can trust there is a good streaming architecture underneath, which is never the case in the PC world and definitely not true on current consoles with their base-SKU optical drives).

Next you can of course say that you want both a fast SSD and a ridiculous amount of it, plus a lot of memory, and so on... but in the end the package has to be cheap and reliable, which means compromises need to be made. Comparing to PCs and expecting consoles to beat PCs is just unrealistic. The console BOM needs to be around $500 or less, including Kinect v2 or Move v2. A high-end PC can have a 1000W power supply, SLI or even quad GPUs, multiple CPU cores, and so on. Where consoles shine is efficiency, asset optimization, and bang for buck. When you know what you are developing for, everything can be optimized so much better than in the PC world, where there is abstraction on top of abstraction and a huge variety of hardware the game needs to run on.

This gen still has plenty of games running at ridiculously low resolutions like 640p. Why would you expect everything next gen to be 1080p (or higher) when fancier graphics and effects can be achieved by rendering at a lower resolution? Scaling is trivial for good material. 960×1080 might just be a resolution we start to see a lot next gen (especially in 3D-enabled games). If I may borrow your console cycles and lessons learned: devs tend to push effects to the point that resolution gets lowered to crank in the fancy pixels. Often framerate suffers and lag is introduced while doing this (engine optimizations create dependencies between frames, so one or more frames of lag pass before an action by the user appears on screen).
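To give a rough sense of the memory side of the resolution argument, here's a sketch of render-target sizes for the resolutions mentioned. The 8 bytes per pixel (32-bit color plus 32-bit depth/stencil) is my assumption; real setups with MSAA or a G-buffer are much larger:

```python
# Rough render-target footprint per resolution, assuming a 32-bit color
# buffer plus a 32-bit depth/stencil buffer (8 bytes/pixel total).
def fb_mb(width, height, bytes_per_pixel=8):
    return width * height * bytes_per_pixel / (1024 * 1024)

for w, h in [(1280, 720), (960, 1080), (1920, 1080)]:
    print(f"{w}x{h}: {fb_mb(w, h):.1f} MB")
# 720p and 960x1080 both land around 7-8 MB; full 1080p roughly doubles it
```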
 
Another example I can give you is an (American) football game. Let's assume we have a super-high-res model for each player at 100MB per player. Each team has 11 players, so that makes 2 × 11 × 100MB = 2.2GB of high-res models for extreme closeups. Our architecture without streaming cannot fetch even a single model in time for a closeup scene (5 seconds per model load, again assuming that 20MB/s speed). On the other hand, the SSD-enabled console takes only 0.25 seconds to load a model, which makes it completely feasible to load models on demand. Suddenly we see that the 2GB RAM system uses less RAM for the same scene thanks to dynamic asset loading. And better yet, the SSD system can use similar LOD for referees, the audience, extreme audience closeups (like faces), and so on.
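The same load-time arithmetic, with the post's illustrative 100MB-per-player figure:

```python
# Player-model example from the post: total data for both teams, and how
# long one 100MB model takes to load from each storage tier.
MODEL_MB = 100
PLAYERS = 2 * 11  # two teams of 11 players

total_gb = PLAYERS * MODEL_MB / 1000
print(total_gb)  # 2.2 (GB of high-res models)

for name, mb_per_s in [("optical @ 20 MB/s", 20), ("flash @ 400 MB/s", 400)]:
    print(f"{name}: {MODEL_MB / mb_per_s} s per model")
# optical: 5.0 s -> too slow to fetch a model for a sudden close-up
# flash: 0.25 s -> on-demand loading becomes feasible
```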

Next, imagine a game like Oblivion or Grand Theft Auto and think about what streaming relying on a fast SSD could do. Then ask whether you really need 8GB of main RAM next gen, or whether you actually need a balanced architecture, proper streaming, and games really optimized to use the hardware to the max.

In the PC world this is easy: the soccer game would just claim a "high quality mode" requiring more RAM, and it wouldn't even think about doing it differently. Consoles, on the other hand, would not demand more RAM; they would find a balance, such as the streaming solution I'm talking about (which is used by games like Rage). Consoles can make leaps; the PC world moves fairly slowly because of the need to support old hardware and a variety of new hardware.
 
I see megatextures as an example of "more RAM is better." Rage is relatively low-res because of memory limitations,

You've missed out on a lot of discussions here, but suffice it to say, you're looking at the wrong culprit. Storage space is the problem.
 
And so we come back again and again to very high density RAM being needed to keep the number of chips down...

Let me know when the roadmap indicates high-speed 8Gbit GDDR5, so we don't need 16 chips just for 8GB of RAM (and the accompanying mainboard tracing and spacing). It's certainly a viable path for future cost savings when they hit 16Gbit GDDR5, right?
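The chip-count concern is easy to put in numbers; the densities below (Gbit per die) are illustrative, not a roadmap:

```python
import math

# How many DRAM chips a given capacity needs at a given per-die density.
def chips_needed(capacity_gb, density_gbit):
    return math.ceil(capacity_gb * 8 / density_gbit)

print(chips_needed(8, 4))   # 16 chips of 4 Gbit for 8GB
print(chips_needed(8, 8))   # 8 chips if 8 Gbit parts arrive
print(chips_needed(8, 16))  # 4 chips at 16 Gbit
```

Every halving of the chip count also simplifies the board routing, which is the real cost driver the post is pointing at.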

Oh, and don't forget that the XDK would be expected to have buckets more RAM too. But hey, it might just be like the 360, whose mobo simply didn't have the space for another 8 × 512Mbit chips, so devs were stuck developing with 512MB instead of 1GB (when 1Gbit chips finally came out in quantity).
 
See, and that's really where the problem is.

And I don't mean to be disrespectful, but many of you seem to have nearsighted vision, where all you can think about is today.....what's being done and needed for today.

2 to 4 gigs would be great for a machine of today... but it would be absurdly low for the long haul. Ten years from now, when the machine(s) would still be going, they'd be starved for more memory.

Again....today isn't where the visions should be coming from, especially since a PS4 won't even get here until 2014.

Again... no offense to any of you, but to suggest 2 to 4 gigs is gonna be good enough for a "do it all" machine that needs to last until 2022... that's fairly absurd to me.

These things aren't upgradable PCs. They need to last 8 years now.


I totally agree here. Talking about the graphics department, next-gen consoles need to be imagined for the period from 2014 until 2022; i.e., a console manufacturer has to anticipate, or bet as far ahead as possible on, what may be needed in graphics-processing terms, and still leave vacant processing headroom for things that have not yet been invented (which calls for a general-purpose CPU or a very flexible GPU, and more RAM). After all, who would have imagined in 2005/2006 that deferred rendering/shading would be the current paradigm, or that Cell or the Xenos GPU would be doing many post-processing filters and even AA (MLAA and FXAA)?

About more RAM per se... if Sony and MS still believe in making their next-gen consoles entertainment hubs, with Skype, streaming video, various interconnections, etc., to compete with smartphones, iPads, Androids, and so on, more RAM is even more necessary, even counting on the many streams that can be served through a well-synchronized media drive and HDD/SSD storage.
 
Lots of good points from manux I think, there's only one thing I could think of adding:

... On pc world this is easy, the soccer game would just claim "high quality mode" requiring more ram and it wouldn't even think about doing it differently...

And then someone would make a hi-res texture mod for the "high quality mode" and recommend 4x more ram. News of this would spread to console forums. :D

I totally agree here. Talking about the graphics department, next-gen consoles need to be imagined for the period from 2014 until 2022; i.e., a console manufacturer has to anticipate, or bet as far ahead as possible on, what may be needed in graphics-processing terms, and still leave vacant processing headroom for things that have not yet been invented (which calls for a general-purpose CPU or a very flexible GPU, and more RAM). After all, who would have imagined in 2005/2006 that deferred rendering/shading would be the current paradigm, or that Cell or the Xenos GPU would be doing many post-processing filters and even AA (MLAA and FXAA)?

You could spend an infinite amount of money making a console that could run everything you haven't thought of, at 1080p, in ten years time!

A console has to make money or there's no point in it existing and the different elements of it need to be balanced. The only sensible approach is to make an educated guess as to where game software is going and build the best system you can within your budget.
 
Yeah, neither the X360 nor the PS3 was designed to support deferred rendering; it was on the developers to make it work on the existing hardware.
 
If 4GB is required nowadays for super-casual uses, it says a lot about the shitty nature of the OS one would use on a console.
 
Thing is, quite a bit may change next gen. For example, Microsoft has finally woken up and realized they need to unify all their platforms and make them interoperable. So on the Xbox 720, will they perhaps want the user to be able to pop up a fully functional Windows app store while playing any game? That's going to take some RAM. They bought Skype; will they want fully functional Skype video calling available while people play games? Gonna need some RAM for that. Maybe there will be some new killer use for Kinect that they want running all the time in the background; might need some RAM for that. It would be cool to have a music service running in the background streaming your tunes; hmm, might need some RAM for that as well. Who knows at this point what they will come up with... but the point is that the next boxes may be about more than just playing games. I'd agree that 4GB would be enough for just playing games, but as do-it-all boxes it may not be enough.
But all these other functions aren't going to consume multiple gigabytes unless they're really crappily implemented. I can't see any engineer saying, "4GB is going to be plenty to run our games, but let's chuck in another 4GB just in case." ;) Surely it'd be more likely that there's a 200MB RAM reservation for OS tasks, and a chunk of the flash buffer or whatever reserved for caching OS content?

All through this thread we've been considering choices based on logical progressions of the console space. Now all of a sudden we're having to entertain what-if scenarios? What if Sony decides the PS4 is also going to operate as a 3D editing workstation? Better give it 16GB. :p

If there's going to be a case made for 8GB over 4GB on account of supporting OS tasks, I'd like to see more involved reasons than "it might". We know from this gen, and from an understanding of software that's not sitting on a massive OS, that multitasked services are possible in small amounts of RAM. The logical extrapolation is that, same again next gen, there'll be a small amount of total system resources reserved for OS tasks. I don't see a reason to add extra gigabytes of RAM, an extra CPU, or a second HDD to support a load of PC tasks when that's an unnecessary burden on a console that's trying to be cheap!

And that's really a topic for another thread. Identify what the nature of the console is going to be in one discussion, so a technical discussion can identify the hardware requirements of that platform.
 
You could spend an infinite amount of money making a console that could run everything you haven't thought of, at 1080p, in ten years time!

A console has to make money or there's no point in it existing and the different elements of it need to be balanced. The only sensible approach is to make an educated guess as to where game software is going and build the best system you can within your budget.

I partially agree, but if the PS3 had come with a Cell whose SPUs had the original patent's 128KB of local store rather than 256KB, and especially if MS had come with an R420/480 rather than Xenos/R500/C1 (anticipating the unified-shader architecture paradigm: scalar units, tessellation, etc.), those would have been less effective technologies for lasting 10 years.

Sony and MS don't have to repeat the Wii formula of making money on hardware from day one (I know you didn't mention the Wii...). I think the console manufacturers must repeat the same pattern of big bets as before (not to mention Sony, whose Blu-ray drive costs almost killed the PS3): MS's BOM was about US$560 (the PS2 in 2000 had a BOM of almost US$480-500) and reached break-even in 18-24 months at the earliest; until then, the manufacturer's profit (royalties, licensing, etc.) came from game sales to balance that.

In the end I worry about next-gen consoles, because smartphones and tablets are growing rapidly and may very soon take a considerable share of the game market, even more so if next-gen consoles are not substantially flexible, powerful, and differentiated.
 
Yeah, neither the X360 nor the PS3 was designed to support deferred rendering; it was on the developers to make it work on the existing hardware.

Agreed, but the PS3 (a Cell processor with improvements over the original August 2002 patent) and the Xbox 360 (Xenos/R500/C1) are flexible and powerful enough to let developers work with the deferred-rendering paradigm.
 
And actually, you are all assuming a 10-year-plus cycle for next-generation consoles.
Maybe they will go with cheaper hardware and aim for a 5-year cycle, which is probably what Nintendo is going to do. A Trinity-based console with 2GB of GDDR5 might allow break-even at a $299 launch price, and it would still be considerably faster than current-generation consoles. With a 5-year cycle they could follow the evolution of technology much better than with a 10-year one.
 
I still think something like a two-tiered memory system would be doable: 4GB of main RAM for the GPU and CPU, accompanied by some slower RAM for caching or swapping. That would do away with a lot of load-time and streaming issues, and it wouldn't be nearly as expensive as an SSD, while being faster AND more reliable.

Games like Uncharted and God of War already use the HDD as an intermediate streaming cache. But that's with every PS3 having an HDD. Since the hardware makers will probably want to go a cheaper route than putting an HDD in every system, I think a slow RAM bank could do the trick, especially since optical media won't be nearly fast enough to fill up 4GB reliably quickly.
 
I've been wondering lately about Charlie's claims about the next Xbox, which mostly boil down to: it will be a SoC.
People don't seem turned on by the idea of a 128-bit bus connecting 2GB of GDDR5 and offering 60+ GB/s of bandwidth. I also read that people (including devs) want eDRAM again. So let's assume Charlie's claim is true and that the SoC is produced on a 32nm IBM/GF process.
I believe that 8 POWER A2 cores, some L2, and a low-clocked Barts-like GPU is the best-case scenario.
Such a SoC with 2GB of RAM would be competitive from an economic (cost) POV, but after adding the mobo, flash memory, various chips, an HDD (depending on SKU), and peripherals such as a pad, Kinect, etc., even a $499 high-end SKU is likely to lose money. Inflation or not, I don't think they can go higher in price.
I don't believe the cost of adding eDRAM somewhere in the design would be offset by switching to DDR3 RAM, but let's assume they are willing to lose a bit more to extend the system's potential and possibly (it's not automatic) its lifetime. How to make the most of the investment in eDRAM?
Honestly, I don't believe a "360-like" implementation would cut it: you get neat benefits, but you lose a lot of the performance when you resolve/copy your render target to main RAM. The amount of data we're speaking about will most likely double; with DDR3 the bandwidth to main RAM won't, which implies the link between the SoC and the eDRAM will have to be faster. Alternatively, you allow the SoC to read directly from eDRAM; I assume this costs both silicon and even more bandwidth, yet it looks like the most efficient and convenient option to me. So eDRAM would mostly act as a limited amount of really fast VRAM (which it doesn't now; it doesn't act the way VRAM does). If you have efficiency in mind, you may want enough eDRAM to fit various render targets: your frame buffer, a G-buffer, etc.
You may also want to send the framebuffer to the RAMDAC straight from the eDRAM (why copy to main RAM and lose performance on your eDRAM investment?).
Overall, eDRAM can be an option, but if a manufacturer wants to make the most of its investment, it had better spend a significant amount of silicon on this piece of hardware. Say the SoC I describe above is ~300mm²; the manufacturer removes the RBEs (moving them to the eDRAM die) and still adds a fast link to the eDRAM chip. Overall they would have to downgrade the chip, IMHO; to give a picture, move from 8 cores and 12 SIMD arrays to 6 cores and 8 SIMD arrays, possibly with higher clocks. Still, to have a convenient amount of "really smart" eDRAM they would need to invest in a big piece of silicon (big as in 200mm²+) plus other costs: the communication link between the chips, and possibly some cooling for the second chip. A second chip, as well as the bus connecting it to the SoC, also has an impact on the mobo layout. Compared to my initial system this would be costlier; even a "360-like" implementation would be costlier than the first offering, I believe significantly so.
Honestly, I've thought a bit about it since it looks like a "most wanted feature" for some, but assuming deferred renderers are catching on, along with rendering at 1080p and the use of more and more render targets, I can't see the investment in eDRAM being a tiny one. I've reached the conclusion (many may disagree) that the cheapest way to solve the bandwidth problem is a wider bus. It clearly has a cost, but it's a known quantity and has benefits on many accounts, from a software POV as well as mobo layout. I hear the wolves howling, but bus sizes have increased with every generation of hardware, and we're at a point where the size of the data involved in rendering is starting to rule eDRAM out as an option.

Overall, keeping cost in mind, I would favor 2GB of cheap GDDR5 on a 256-bit bus over any option including more RAM but lower bandwidth, or including some form of eDRAM; I'm not sure the trade-offs are worth it. A 256-bit bus sounds like an absolute taboo here, but looking at the system as a whole (even including software), I'm not sure the taboo is legit. It's the simplest and most elegant solution, IMHO. It has a "fixed cost", but the same is true for the connection between the hypothetical SoC and the hypothetical "smart eDRAM" chip. The smart eDRAM may end up needing some cooling, even passive, etc. Then you have the cost of supporting the platform as a software platform, and the cost for publishers.
I'm starting to really question this "256-bit bus is not an option" mantra; it could actually be a win-win situation.
And for those scared that shrinking the SoC would eventually make a 256-bit bus no longer fit: I believe that won't happen anytime soon. Assuming a chip north of 300mm² at 32nm, it would still be beefy enough at 22nm; as for 16nm, I wonder when IBM/GF will get there, and more importantly when it will become more interesting from an economic POV than a well-worn 22nm process. IMHO, next-gen systems may see only one shrink. 16nm should be when IBM/GF catch up with Intel on tri-gate transistors; once that matures, it could be the perfect time to launch a new system (I feel that will be a long while). By the way, the argument "because this gen lasted X years, next gen will have to last X years" is, let's say, a stretch. It depends on many things, and the same many things that made this gen long are in no way constants.
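The bus comparison can be sketched with the usual peak-bandwidth formula; the 4 Gbps per-pin data rate below is an assumed mid-range GDDR5 speed of the era, not a quoted spec:

```python
# Peak bandwidth = (bus width / 8) bytes per transfer * per-pin data rate.
def bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(128, 4.0))  # 64.0 GB/s, the "60+ GB/s" 128-bit case
print(bandwidth_gb_s(256, 4.0))  # 128.0 GB/s with the same DRAM on 256-bit
```

Doubling the bus width doubles bandwidth without needing faster (pricier) DRAM, which is the win-win being argued for.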

EDIT
Actually, with a system enjoying 115+ GB/s worth of bandwidth, and assuming much more bandwidth-efficient RBEs, 360 emulation might become possible.
 
And so we come back again and again to very high density RAM being needed to keep the number of chips down...

Demand for higher-capacity memories has resulted in die stacking of DDR2 and DDR3 dies. If there is demand for high-capacity GDDR5, I'm sure we will see the same.

I'm not advocating 8GB or more. I'm just saying I don't believe the capacity of individual DRAM dies will be an issue.

Cheers
 
Demand for higher-capacity memories has resulted in die stacking of DDR2 and DDR3 dies. If there is demand for high-capacity GDDR5, I'm sure we will see the same.

I'm not advocating 8GB or more. I'm just saying I don't believe the capacity of individual DRAM dies will be an issue.

So are current 4Gbit DDR3 chips dual-die, die-stacked parts? If so, is a lack of demand the only reason there's no 4Gbit GDDR5 available?

It looks like "single die" 4 Gbit chips are more or less here (http://www.memphis.ag/index.php?id=...ws]=29&cHash=0f28b130c33788d6ff412d785377352c). Is this what will allow 4 Gbit GDDR5 to appear?
 