Predict: The Next Generation Console Tech

but how does the data get to flash? o_o

Do you mean to exchange optical media for flash cartridges? Too expensive, maybe?
Do you mean that data will be downloaded from Live!/PSN onto the SSD inside a console? Expensive and too constrained.

Surely you cannot mean that the data will be loaded from disc to flash to RAM, thus negating all possible upsides of a flash cache, since loading straight to RAM and keeping it there would be faster (if the system has enough RAM to hold all the data, ofc)?
He's talking about the principle of a hybrid HDD, only with the flash on the system so it works with both HDD and optical. It can be used to cache significant amounts of data very cheaply and solve some of the access issues, boosting streaming performance; it should drop significantly in price over the life of the console, and could even offer entry-level storage for a cheap HDD-free console with enough capacity to allow some download content (think 32 GBs flash with 16 GBs for caching and 16 GBs for download content). There's a whole other thread on the possibilities of incorporating flash.
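
To make that concrete, here's a minimal sketch (Python, with hypothetical sizes and asset names purely for illustration) of the read-through caching such a flash tier could do: assets already in flash are served from there, everything else is fetched from the optical drive/HDD and written into flash, with least-recently-used eviction once the cache fills up.

[code]
from collections import OrderedDict

# Hypothetical split: 16 GB of a 32 GB flash part reserved for caching.
FLASH_CACHE_BYTES = 16 * 1024**3

class FlashAssetCache:
    """Read-through asset cache: flash sits between optical/HDD and RAM."""

    def __init__(self, capacity=FLASH_CACHE_BYTES):
        self.capacity = capacity
        self.used = 0
        self.entries = OrderedDict()  # asset name -> size, kept in LRU order

    def read(self, name, size, slow_read):
        """Return the asset, filling the flash cache on a miss."""
        if name in self.entries:
            self.entries.move_to_end(name)          # mark as recently used
            return "<%s streamed from flash>" % name
        data = slow_read(name)                      # hit the optical drive/HDD
        while self.entries and self.used + size > self.capacity:
            _, evicted_size = self.entries.popitem(last=False)  # evict LRU entry
            self.used -= evicted_size
        self.entries[name] = size
        self.used += size
        return data

# Usage: the first read comes off the disc, repeat reads come from flash.
cache = FlashAssetCache()
from_disc = lambda n: "<%s streamed from optical disc>" % n
print(cache.read("city_block_07.tex", 256 * 1024**2, from_disc))
print(cache.read("city_block_07.tex", 256 * 1024**2, from_disc))
[/code]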
 
First I should probably note that I'm trying to say that increasing the amount of RAM to Nx that of the competition and magically getting better image quality thanks to higher texture resolution makes absolutely no sense. At least assuming that the consoles have roughly equal performance in terms of GPU computing power and memory bandwidth.
Ho Ho, regarding your comment: I don't think a GPU with a texturing capability of ~250 000 MTex/s (a mid-high range GPU in 2014, roughly extrapolating from 84 480 MTex/s for the 6970 in 2010) will prove unable to render the quality of textures that could fit in 8GB alongside other content.

In fact, if we were to extrapolate that Xenos is powerful enough to render 256MB worth of textures without issues @ 8000 MTex/s, then mathematics tells us that to render 8GB worth of textures at a similar speed we would, in fact, need 256 000 MTex/s of texture fillrate. Which should be completely normal by 2014.
You missed my point. My point was that devs will be pretty close to capping the memory bandwidth and/or GPU throughput no matter how much RAM the system has, and thus there is a limit to how much stuff you can stream through the GPU per frame. You can keep more stuff in there but it won't improve image quality. If you want to improve quality you WILL have to increase bandwidth and/or computing power of the GPU to be able to stream the data through, but if you are already doing that then your console will have better visuals thanks to beefier hardware, not due to more RAM.

Another thing to consider is that with bigger textures all the caches will become less effective as well and you will need to have higher overall bandwidth. If you increase texture size 4x you will need >4x the bandwidth unless you also increase cache sizes 4x.

Mind telling me, if the amount of RAM was as important as you say it is, then why do we see GPUs with ludicrous amounts of vRAM being pretty much as fast as their counterparts with the same GPU but smaller amounts of vRAM? Do you seriously think that you can pump up the texture sizes on the bigger-RAM parts without needing considerably more HW resources to actually show it on screen at the same FPS?


I have nothing against having gobs of RAM in a console and it would definitely help with lessening the need for loading screens. Just don't try to make up stuff about it magically improving IQ :)
 
Mind telling me, if the amount of RAM was as important as you say it is, then why do we see GPUs with ludicrous amounts of vRAM being pretty much as fast as their counterparts with the same GPU but smaller amounts of vRAM? Do you seriously think that you can pump up the texture sizes on the bigger-RAM parts without needing considerably more HW resources to actually show it on screen at the same FPS?


I have nothing against having gobs of RAM in a console and it would definitely help with lessening the need for loading screens. Just don't try to make up stuff about it magically improving IQ :)

So you say that in theory there should be no performance or IQ difference between an 8800 GTS 320MB and an 8800 GTS 640MB? :D

Well, in 2006 when they launched there practically wasn't. But now it's 2011, 5 years later, and Deus Ex: Human Revolution will run out of VRAM even at 1280x720 when you try to play at high detail.

Basically, whenever there's a game that cannot fit its data into the VRAM of the 8800 GTS, performance turns from playable at Full HD to unplayable at 1280x720.

So what would you do to still get the game running on the 320MB version? Easy, you reduce IQ.

Thus, compared to that, the 640MB version can suddenly boast "magically improved" IQ in the form of sharper textures...
 
So what would you do to still get the game running on the 320MB version? Easy, you reduce IQ.

Thus, compared to that, the 640MB version can suddenly boast "magically improved" IQ in the form of sharper textures...
... and the 320MB version will have a big part of its memory bandwidth sitting idle. In a console world that would mean devs were lazy and/or made a mistake when figuring out how to balance the load between bandwidth and computing. In reality that shouldn't happen. So it's an awful example that doesn't prove your point one bit.
 
The whole platform is built around a budget which can't be ignored and, prior to this RAMathon, hasn't been. Every discussion this thread has had to date regarding feasible GPUs and CPUs and storage mediums etc. has been with an eye to producing a balanced system, without anyone suggesting more CPU or GPU will hands-down win the next console-war battle. Indeed, console-warring hasn't been a factor at all, and we've just discussed reasonable probabilities of particular architectures. The whole marketing and services discussion hasn't needed to be raised because everyone who has contributed to this thread over the past several years has already understood that. ;)

4 GBs of fast RAM is a nice fit for the next-gen boxes based on speculation about how much content devs want to make, overall system design and costs, how much actual difference 8 GBs would make versus 4 GBs considering a very probable 1080p maximum resolution, how much processing power there'll be to drive those pixels and use that RAM, etc. 8 GBs is an option, but it'd come with compromises elsewhere to the platform. 2 GBs is a cheaper option that'd save money that could be spent elsewhere. There's also the possibility of less RAM and more eDRAM, with eDRAM being a significant cost. You can repeat all you like that an 8 GB console would be better than a 4 GB console, but that won't change those options.


2 to 4 GB is not good for a high(er) performance console/multimedia device that "does everything" over the next 10 years.
 
MS has shown pretty huge changes and feature additions since the 360 launched. Are you multitasking a ton on a console? There are computers for that purpose. What multimedia features require even 1GB of RAM at a given time?
 
... and the 320MB version will have a big part of its memory bandwidth sitting idle. In a console world that would mean devs were lazy and/or made a mistake when figuring out how to balance the load between bandwidth and computing. In reality that shouldn't happen. So it's an awful example that doesn't prove your point one bit.

You do understand that memory bandwidth can be as wide as you want, but once you have to go after your texture data in the slower system RAM (or, in the case of consoles, the HDD/optical drive), performance drops to unplayable?

And how does computing and bandwidth tie into this? As long as we are dealing with texture sizes less than 320MB, the 8800 GTS 320MB computes just fine and posts exactly as good numbers as its bigger brother.

This is what I've tried to explain from the start - bandwidth and computing are very, very nice but once you hit the physical barrier of not having enough VRAM you basically have to choose between reducing image quality or having unplayable fps.

Therefore it makes a lot of sense not to skimp on having twice the RAM for something like 1/30th of the cost of the whole box if you want it to last!
 
2 to 4 GB is not good for a high(er) performance console/multimedia device that "does everything" over the next 10 years.
PS3 already 'does everything' with 512 MBs. Tablets and mobile devices can do everything in 512 MBs. What changes are going to happen over the coming years that'll see 4 GBs struggle to be enough? As mentioned previously, typical PC use takes only a fraction of the 3 or 4 GBs available even when running web browsers with multiple tabs, media playing, and productivity software concurrently, and you can always use virtual memory on the HDD for all those applications you're not actually using in your multitasking environment. And finally, consoles are CE devices, so they don't face the challenges of HD stereoscopic video editing or 30-megapixel image Photoshopping or whatever niche functions actually exhaust a PC.

Erick's protestations about lack of graphical power are already pushing the limits of plausible arguments - the idea that even 2GBs of RAM is going to see a multimedia, multifunction device struggle is plain taking the biscuit! :mrgreen:
 
This is what I've tried to explain from the start - bandwidth and computing are very, very nice but once you hit the physical barrier of not having enough VRAM you basically have to choose between reducing image quality or having unplayable fps.

Huge RAM is nice, but once you hit the barrier of not being able to texture from memory due to low bandwidth or GPU power, you basically have to leave memory that you've paid for, for the purpose of graphics, sitting unused.
 
About memory: apparently everyone assumes that only id is going to utilize virtual texturing in the next generation. Because if an engine supports that, I don't really see the need for large piles of memory.
It certainly is a big undertaking, as is evident from the long development cycle of Rage, but once id starts to release titles with this tech, others will soon have to follow. I think the general audience will consider the richness of the Rage game world a lot more important than the "slight" differences in Crysis 2 detail levels. Sure, they'll need an HDD for the caching and probably DVDs won't cut it for content delivery, but that's already evident with other titles as well.

Yeah, various frame, G- and other buffers can eat up a lot of RAM, but gigabytes?? We're not trying to run something at 2560*1600 with 8x SSAA. Geometry data shouldn't be as big either; we're already at the point of many pixel-sized triangles, so either tessellation and displacement have to come in, or something completely different (but I don't expect a sudden shift to voxels...)

Sound and voice can't really get any further IMHO, current consoles do absolutely fine, what else would we need?

Yeah you can use RAM for other things - caches, animations, other kinds of game data. But most of this is tiny compared to the textures and if you can remove that limitation with virtualized memory space then I don't see why 8 gigs would be necessary, especially when you consider all the other hardware aspects that could be improved in all current consoles.
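
For what it's worth, the residency bookkeeping at the heart of virtual texturing is small enough to sketch (Python; the page size and budget are hypothetical numbers, and this is just the general idea, not id's actual implementation): only the pages the current view actually references stay resident, so the memory footprint tracks the screen rather than the total size of the art assets.

[code]
# Hypothetical figures: 128 KB texture pages, a 256 MB residency budget.
PAGE_BYTES = 128 * 1024
BUDGET_PAGES = (256 * 1024**2) // PAGE_BYTES

def update_residency(visible_pages, resident, frame):
    """Keep only recently seen texture pages resident.

    visible_pages: set of page ids the feedback pass says this frame needs
    resident:      dict of page id -> last frame it was seen
    frame:         current frame number
    Returns the page ids that must be streamed in for this frame.
    """
    to_stream = [p for p in visible_pages if p not in resident]
    for p in visible_pages:
        resident[p] = frame
    # If over budget, evict the pages that haven't been seen for longest.
    if len(resident) > BUDGET_PAGES:
        for p, _ in sorted(resident.items(), key=lambda kv: kv[1]):
            if len(resident) <= BUDGET_PAGES:
                break
            if p not in visible_pages:
                del resident[p]
    return to_stream

# Usage: frame-to-frame coherence means only a handful of new pages per frame.
resident = {}
print(update_residency({1, 2, 3, 4}, resident, frame=0))  # four pages to stream
print(update_residency({2, 3, 4, 5}, resident, frame=1))  # only page 5 is new
[/code]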
 
Therefore it makes a lot of sense not to skimp on having twice the RAM for something like 1/30th of the cost of the whole box if you want it to last!
That 1/30th of the cost (and where'd you get 1/30th of the cost - the RAM on PS360 was ~20% of BOM, and it'd be some years before the extra RAM became a negligible cost) may net you only marginal improvements in games, and quite possibly not even enough for Joe Gamer to notice. A game's visuals are affected by model complexity, variety of models, texture quality, variety of textures, asset pop-in, lighting quality, shaders, shadow quality, transparency performance, antialiasing, anisotropic filtering, framerate and tearing, and whatever other techniques are being developed. More RAM only helps with a few of those, which can also be addressed by other techniques (displacement, megatextures/megameshes). More RAM won't provide an advantage in most games if the rival is offering less RAM targeted by multiplatform devs, so that COD and FIFA and Madden players don't see any advantage in buying the larger-RAM console. A console company could go all out on funding an extravagant tour-de-force using a console's full RAM, but that'll only push a console's market success so much.
 
By the way, arguing that 320 MB isn't enough for DX is kinda silly. It's built for a platform with 512MB of RAM, so it makes sense that the textures won't fit into VRAM on a PC, and the slow PCIe bus would cause texture thrashing. Lower the texture detail and it'll run fine.
 
You do understand that memory bandwidth can be as wide as you want, but once you have to go after your texture data in the slower system RAM (or, in the case of consoles, the HDD/optical drive), performance drops to unplayable?

Just as a ballpark figure, how many console games texture direct from the optical drive when they run out of memory?
 
I feel like I'm hitting a brick wall here.

Shifty, do you understand that nowadays the most powerful platform is the lead platform for most devs? That just comes with the fact that you can scale down detail as necessary on the lower specced platforms while you cannot scale detail up if your original target was a lower specced box.

It completely negates the multiplatform point - if one platform is more powerful, it will be the lead platform.

Laa-Yosh, the whole idea of having enough RAM is so you can push the need to reduce IQ as far forward into the future as possible. Obviously the platform which is forced to reduce IQ first will suffer greatly at this disadvantage.

On the previous page someone said that bandwidth is more important than RAM size, again. You'd think that I put that dog to rest with the 8800 GTS 320MB vs 8800 GTS 640MB example.

Yes, you need to find a balance, but does 8GB of GDDR5 at 224GB/s sound bandwidth starved? I don't think so.
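
(For reference, 224 GB/s is the kind of figure you'd get from, say, a 256-bit GDDR5 interface at 7 Gbps per pin - an illustrative configuration, not a claim about any actual next-gen part: 256 bits / 8 = 32 bytes per transfer, and 32 B x 7 GT/s = 224 GB/s.)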
 
Here's a timely piece interviewing devs asking what they want next gen. A couple talk about graphics, and the last explicitly requests lots and lots of RAM, but the rest are more to do with connectivity, accessibility, non-graphics advances, and content creation. I dare say my idea of a Brave New World of a portable tablet-based console that plugs into a TV for use as a conventional console or playing media would see far more success with lower specs than a standard powerful console. But to me it's clear that the console designers aren't just looking at specs any more, and are looking at the broader services and experiences. Saving money on hardware to invest in the other aspects of the console experience (and the wider network, with Live on PC and mobile, and SonyNet doing whatever Sony finally do) is definitely something on their plate.

This isn't a thread to discuss overall strategies, and I'm just highlighting with these developers that the hardware performance target isn't really the aim here. We have been and should be talking about families of tech, and general configurations, rather than nailing down clockspeeds, core counts, and the business outcomes of particular options.
 
I feel like I'm hitting a brick wall here.

Shifty, do you understand that nowadays the most powerful platform is the lead platform for most devs? That just comes with the fact that you can scale down detail as necessary on the lower specced platforms while you cannot scale detail up if your original target was a lower specced box.
1) Have you read these console forums at all over the past few years?
2) Console development isn't like PC development. Resources are managed extremely closely (depending on developer). You can't just grab a load of assets and scale them, all done.
3) I've already mentioned the business argument to the contrary, where publishers don't always want to release a stronger game when they can. They aim for parity. Take that one up with Joker454 if you disagree. ;)

It completely negates the multiplatform point - if one platform is more powerful, it will be the lead platform.
Pffft. I guess you didn't pay any attention to last gen either! PS2 was the lead platform because it was the popular one. XB and GC got ports, and sometimes cheap-and-nasty ones to boot. It was very rare for the more powerful platform to be the lead because assets could be scaled down so easily.

Obviously the platform which is forced to reduce IQ first will suffer greatly at this disadvantage.
As evidenced by...?
 
I don't mean strictly doubling the RAM, simply having more RAM than what you expect your competitors to ship with.
Back in PS360 times, between 512MB and 1GB of RAM the load times might have been longer but not significantly, and the difference the customers would have perceived would have been greater than the trade-off.

As things stand I think the 360's extra available memory has been an advantage against the PS3, but it's hard to say how much. I think the 360's more advanced graphics chip has been a bigger competitive advantage (allowing higher frame rates and resolutions), and that extra budget directed towards more eDRAM and GPU shaders may have had the greatest immediate impact.

Now if we're talking 4GB vs 8GB, populating the RAM becomes more and more problematic, and doubling already considerable loading times may not be a good idea. So I agree with you; some pages ago I was actually considering 2GB backed with enough flash storage a better tradeoff than 4GB without it. Obviously both would be better, and 8GB with a really good SSD would be even greater, but I completely agree that economic considerations won't let this happen.

Yeah, there needs to be some intermediate storage to overcome optical disk issues. I'm interested to see what approach Nintendo take to loading with the WiiU. Maybe they'll just tough it out straight from optical...

I still get nostalgic for carts btw! :D
 
I feel like I'm hitting a brick wall here.

You built it, it's only fair you should hit it!

Shifty, do you understand that nowadays the most powerful platform is the lead platform for most devs? That just comes with the fact that you can scale down detail as necessary on the lower specced platforms while you cannot scale detail up if your original target was a lower specced box.

Scaling down things like models and core game logic is not necessarily that easy.

It completely negates the multiplatform point - if one platform is more powerful, it will be the lead platform.

That is definitely not always true.

Laa-Yosh, the whole idea of having enough RAM is so you can push the need to reduce IQ as far forward into the future as possible. Obviously the platform which is forced to reduce IQ first will suffer greatly at this disadvantage.

That'll be why the Xbox hammered the PS2 then, and why early on the Dreamcast killed the PS2 off entirely!

On the previous page someone said that bandwidth is more important than RAM size, again.

No-one said this.

You'd think that I put that dog to rest with the 8800 GTS 320MB vs 8800 GTS 640MB example.

Please explain how a 4GB Llano system with 1866 memory beats an 8GB Llano system with 1333 memory.

Yes, you need to find a balance, but does 8GB of GDDR5 at 224GB/s sound bandwidth starved? I don't think so.

Who told you this is what the next gen systems will have?
 
What about a scenario where megatextures and megamodels become standard and performance is more about how many unique "items" can be streamed per frame rather than how much the engine can prefill RAM and reuse the same assets?

To me it would make more sense to use some of the BOM towards making streaming work as well as possible rather than increasing RAM over 2GB. In this scenario I would rather have fairly fast flash for caching/game installs, and proper buses and I/O chips which enable streaming from both flash and optical media at the same time. Similarly, rather than an overly large amount of RAM, I would prefer a smaller amount of fast memory, enabling devs to actually use the memory in significant ways.
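
As a rough sketch of what that buys (Python threads standing in for dedicated I/O channels; device names and throughput figures are purely illustrative): requests that are already in the flash cache get serviced on one channel while cold data comes off the optical drive on the other, so the two sources add up instead of taking turns.

[code]
import queue
import threading

# Illustrative sustained throughput only (MB/s); not real hardware figures.
DEVICE_SPEED = {"flash": 200, "optical": 18}

def io_worker(device, requests, results):
    """Drain one device's request queue, reporting simulated transfer times."""
    while True:
        item = requests.get()
        if item is None:                      # sentinel: shut this channel down
            break
        name, size_mb = item
        results.put((device, name, size_mb / DEVICE_SPEED[device]))

flash_q, optical_q, done = queue.Queue(), queue.Queue(), queue.Queue()
workers = [threading.Thread(target=io_worker, args=(dev, q, done))
           for dev, q in (("flash", flash_q), ("optical", optical_q))]
for w in workers:
    w.start()

# Route each request to flash if it's cached there, otherwise to optical.
flash_cached = {"hero.tex"}
for name, size_mb in [("hero.tex", 64), ("level_02.geo", 256)]:
    (flash_q if name in flash_cached else optical_q).put((name, size_mb))

for q in (flash_q, optical_q):
    q.put(None)
for w in workers:
    w.join()
while not done.empty():
    print(done.get())   # (device, asset, seconds) for each completed transfer
[/code]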
 
Laa-Yosh, the whole idea of having enough RAM is so you can push the need to reduce IQ as far forward into the future as possible. Obviously the platform which is forced to reduce IQ first will suffer greatly at this disadvantage.

4GB is a LOT of memory and developers should be using it in clever ways instead of brute force methods. Virtual texturing, streaming, and so on. We're already seeing way more content on the current consoles than what would be possible if the devs hadn't done some serious work.

Loading all your textures into VRAM at the same time is wasting resources at this level. Faster background storage would allow (1) faster loading in general and (2) similar visuals through the utilization of advanced data management, so it's a better investment altogether.

On the previous page someone said that bandwidth is more important than RAM size, again. You'd think that I put that dog to rest with the 8800 GTS 320MB vs 8800 GTS 640MB example.

You didn't, because it's an extreme case. Again, a game developed for 512MB memory obviously won't be able to fit all its runtime texture data into almost half as much VRAM.
And on the PC you can't compensate for that in any way, even if it might be possible to do some level of streaming on a closed platform.
 