Which is better, more RAM or a stronger GPU?

This question is probably too open-ended to get anywhere, but it's something I've often wondered about.

Let's say, using the PS3 as an example, you could either have a 50% GPU clock increase (no additional capabilities, and let's say somehow a 50% bandwidth increase as well to avoid a bottleneck), OR 256 MB of additional VRAM, giving the machine 768 MB of total RAM (a 50% increase). The same question can be applied to the 360, by the way; it doesn't matter.
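To put rough numbers on the two options, here's a back-of-the-envelope sketch; the baseline clock and bandwidth figures are just the commonly cited RSX specs, used purely for illustration:

```python
# Back-of-the-envelope comparison of the two hypothetical PS3 upgrades.
# Baseline numbers are the commonly cited RSX figures, for illustration only.
baseline = {"gpu_clock_mhz": 500, "vram_bw_gbs": 22.4,
            "vram_mb": 256, "main_ram_mb": 256}

# Option 1: 50% faster GPU clock and bandwidth, memory unchanged.
option_gpu = dict(baseline,
                  gpu_clock_mhz=baseline["gpu_clock_mhz"] * 1.5,  # 750 MHz
                  vram_bw_gbs=baseline["vram_bw_gbs"] * 1.5)      # 33.6 GB/s

# Option 2: GPU unchanged, 256 MB of extra VRAM.
option_ram = dict(baseline, vram_mb=baseline["vram_mb"] + 256)

for name, cfg in (("GPU bump", option_gpu), ("RAM bump", option_ram)):
    total = cfg["vram_mb"] + cfg["main_ram_mb"]
    print(f"{name}: {cfg['gpu_clock_mhz']:.0f} MHz GPU, {total} MB total RAM")
# GPU bump: 750 MHz GPU, 512 MB total RAM
# RAM bump: 500 MHz GPU, 768 MB total RAM (the 50% RAM increase)
```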

Another specific way to frame the question: I was thinking about how the Lindbergh arcade hardware would fare as a home console. The Lindbergh has a big RAM edge (1 GB system memory + 256 MB for the GPU), whereas its 6800-class GPU is a generation behind the current consoles. So ignoring the CPU etc., it's basically the RAM vs GPU argument again: how would Lindbergh games look versus the current home consoles, were it a heavily supported home console?

The whole Lindbergh thing may just confuse the issue unnecessarily, so focus on the first question.
 
As always, it's a balancing act. In the current consoles, more VRAM would probably give people the preferred improvement in texture quality and perhaps image quality (I don't know if most games are bandwidth limited or GPU limited as regards AA and AF). However, if you have 16 GB of RAM paired with a Ti4200, versus a GTX 295 with 4 MB of RAM, both are going to fail to give the results of a mid-range GPU with a moderate amount of VRAM. Because RAM is normally a premium component on consoles, it tends to be the area where they are weakest, and ordinarily you'd get better results by adding more RAM to the current designs instead of upping the GPU. But Wii shows the alternative can still happen ;)
 
Something tells me that if the current consoles had at least a gig of RAM that things would be a lot different with respect to load times, texture quality, and lots of other stuff.

512 MB is actually very gimpy for the quality of graphics the machines are supposed to run. That amount, shared for everything, is really unfortunate. But consoles have always been crippled with respect to RAM capacity.

I'm surprised that the console makers don't think more proactively about RAM capacity, considering how without fail RAM prices drop dramatically throughout the life of the machines. I suppose it's also possible that they are much more interested in seeing hardware profits and/or winning price wars.
 
Well, it wasn't particularly crippled for the timeframe in which the machines were designed and released. At the time most video cards didn't have more than 256 megs of memory; 512 meg cards were just starting to gain momentum (other than enthusiast-class cards). System memory is a bit deceiving, as a console doesn't require a lot of memory to support an OS that can do everything, as well as drivers and applications that are always running.

So 256 megs for system memory was adequate, although perhaps a little low; 512 would have been a bit better.

Compared to current-day machines, however, the memory on both counts is looking more and more anemic.

But as to the question posed...

More GPU power would allow more advanced shading techniques with fewer compromises, while more memory would allow more textures, and just more data in general, to be loaded and utilized.

I'm not sure which of those I would prefer. More art assets would be nice; however, we've certainly not hit a wall with regard to what can be done with shaders.

Although I'd probably lean more towards memory if it ensured that devs ALWAYS used AA/AF.
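As a rough illustration of what the memory route buys on the texture side, here's some back-of-the-envelope arithmetic; the 1024x1024 DXT5 texture is just an assumed example, not tied to any particular game:

```python
# Rough arithmetic: how many textures an extra 256 MB could hold, assuming
# 1024x1024 DXT5 textures with a full mip chain (purely illustrative numbers).
bytes_per_texel = 1.0                 # DXT5/BC3 compresses to 1 byte per texel
base = 1024 * 1024 * bytes_per_texel  # top mip level
with_mips = base * 4 / 3              # a full mip chain adds roughly one third
extra_bytes = 256 * 1024 * 1024
count = int(extra_bytes // with_mips)
print(f"{with_mips / 2**20:.2f} MB per texture -> ~{count} extra textures")
# ~1.33 MB each -> roughly 190 additional 1024x1024 DXT5 textures
```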

Regards,
SB
 
The current optical drives would spend about 5 minutes loading up 1 GB of RAM, so I don't know how much use that would be. Okay, so load once at the beginning and then keep streaming, but still... the entire point of a console is to be able to turn it on and start playing immediately. And we already have to endure at least 2 minutes with every single game, on just 512 MB...
 
The current optical drives would spend about 5 minutes loading up 1 GB of RAM, so I don't know how much use that would be. Okay, so load once at the beginning and then keep streaming, but still... the entire point of a console is to be able to turn it on and start playing immediately. And we already have to endure at least 2 minutes with every single game, on just 512 MB...

It might improve the annoying caching to HDD that many games are doing, though... That stuff is almost like good ol' swapping to disk on PCs.

How about all that texture pop-in that's frequent with UE3 games? Mass Effect was a mess with that. Load more into RAM, in the background, and leave it there.

You are right, though, that the optical drives are a major disadvantage when it comes to getting data into RAM. Consoles are the only place I know of where optical drives are still used to stream data; PCs haven't done that since the early days. It's hard for me to think of anything more annoying than listening to the 360's DVD drive running at max speed continuously to stream data.
 
Well, I am a GPU whore, so I'd take faster video hardware :) Look at it this way: with the same RAM and faster video hardware, you'd have today's games looking exactly the same, except at 60 fps with no screen tearing.
 
Well, I am a GPU whore, so I'd take faster video hardware :) Look at it this way: with the same RAM and faster video hardware, you'd have today's games looking exactly the same, except at 60 fps with no screen tearing.

Hey Joker, how far could a console manufacturer go with faster hardware without screwing up the games? Say Microsoft decided that next year they wanted a DirectX 11.5 class GPU instead of Xenos, would the games still work flawlessly? Also, how far could it go before it runs into another bottleneck? The games couldn't just render at twice the frame rate without becoming CPU limited, or am I missing something?

Wouldn't it be easier to render at a higher resolution than at a faster frame rate, seeing as rendering resolution is almost completely in the GPU's domain?

Lastly, how much could they change/fiddle with without actually breaking the games?
 
Hey Joker, how far could a console manufacturer go with faster hardware without screwing up the games? Say Microsoft decided that next year they wanted a DirectX 11.5 class GPU instead of Xenos, would the games still work flawlessly? Also, how far could it go before it runs into another bottleneck? The games couldn't just render at twice the frame rate without becoming CPU limited, or am I missing something?

You usually can't mess with console hardware without breaking stuff. I think the only "safe" bet would be to just jack up cpu/gpu clock speeds. In theory they could use newer hardware and try to make it compatible with the back catalog of games, but you just never know if every game would work right with all the kooky things devs do.

Without any code re-writes you would indeed hit cpu limits if you just slapped a newer gpu in there, although I still think most games are gpu limited so you'd see a real nice performance/visual jump just from a better gpu (like no tearing and better AA). If you were brave though you could make your game multi console aware, and shuffle some cpu work to the gpu on the newer console version. Yeah...not likely :)


Wouldn't it be easier to render at a higher resolution than at a faster frame rate, seeing as rendering resolution is almost completely in the GPU's domain?

There are some issues with just bumping up the res. For example, some post process buffers are sized to match and/or be a multiple of the main frame buffer to achieve the desired effect. So if you tried bumping the res you might affect the visual results unless you also possibly bumped the size of some post process buffers, but then that affects memory. Or, bumping res would affect tiling calculations in the 360's case, etc. In any case resolution is overrated, there's way better places to spend cycles :)
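On the tiling point specifically, here's a rough sketch of why resolution changes the tile count on the 360. The 10 MB eDRAM size is real; the 8 bytes per sample (32-bit colour plus 32-bit depth) is the usual assumption, and the actual hardware tile-alignment rules are ignored:

```python
# Rough Xenos tiling arithmetic: how many eDRAM tiles a render target needs,
# assuming 32-bit colour + 32-bit depth per sample (8 bytes per sample).
import math

EDRAM_BYTES = 10 * 1024 * 1024   # 10 MB of eDRAM on the 360

def tiles_needed(width, height, msaa_samples=1, bytes_per_sample=8):
    size = width * height * msaa_samples * bytes_per_sample
    return math.ceil(size / EDRAM_BYTES)

print(tiles_needed(1280, 720))        # 1 tile  (720p, no AA)
print(tiles_needed(1280, 720, 2))     # 2 tiles (720p, 2xAA)
print(tiles_needed(1280, 720, 4))     # 3 tiles (720p, 4xAA)
print(tiles_needed(1920, 1080, 2))    # 4 tiles (1080p, 2xAA)
```

So bumping the resolution alone can change how many tiling passes a game has to make, which is why it isn't a free tweak.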
 
You usually can't mess with console hardware without breaking stuff. I think the only "safe" bet would be to just jack up cpu/gpu clock speeds. In theory they could use newer hardware and try to make it compatible with the back catalog of games, but you just never know if every game would work right with all the kooky things devs do.

Without any code re-writes you would indeed hit cpu limits if you just slapped a newer gpu in there, although I still think most games are gpu limited so you'd see a real nice performance/visual jump just from a better gpu (like no tearing and better AA). If you were brave though you could make your game multi console aware, and shuffle some cpu work to the gpu on the newer console version. Yeah...not likely :)

There are some issues with just bumping up the res. For example, some post process buffers are sized to match and/or be a multiple of the main frame buffer to achieve the desired effect. So if you tried bumping the res you might affect the visual results unless you also possibly bumped the size of some post process buffers, but then that affects memory. Or, bumping res would affect tiling calculations in the 360's case, etc. In any case resolution is overrated, there's way better places to spend cycles :)

Thanks! So essentially, if the console maker decided to bump the clock speeds and bandwidth by, say, 10% across the board and 20% on the GPU, it wouldn't break anything and would perhaps get the best-case response out of the hardware, yielding more consistent frame rates and a better overall user experience? Then I for one advocate that they do just that! I would rebuy my Xbox 360 console just to make all the games run perfectly, with fewer frame-rate hitches and no screen tearing.
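The hoped-for "more consistent frame rate" is really just frame-time budget arithmetic under vsync; a minimal sketch with a made-up 18 ms frame time:

```python
# Why a modest GPU speedup can smooth out the frame rate under vsync:
# a frame that misses the ~16.7 ms budget is held until the next refresh.
# The 18 ms example frame time is made up purely for illustration.
import math

VSYNC_60HZ_MS = 1000 / 60                      # ~16.7 ms budget at 60 Hz

def displayed_interval_ms(gpu_frame_ms):
    """With vsync on, the frame is shown at the next 60 Hz refresh boundary."""
    return math.ceil(gpu_frame_ms / VSYNC_60HZ_MS) * VSYNC_60HZ_MS

slow_frame = 18.0                              # just over budget
fast_frame = slow_frame / 1.2                  # 20% faster GPU -> 15 ms

print(f"{displayed_interval_ms(slow_frame):.1f} ms")   # 33.3 ms, i.e. 30 fps
print(f"{displayed_interval_ms(fast_frame):.1f} ms")   # 16.7 ms, i.e. 60 fps
```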
 
The current optical drives would spend about 5 minutes loading up 1 GB of RAM, so I don't know how much use that would be. Okay, so load once at the beginning and then keep streaming, but still... the entire point of a console is to be able to turn it on and start playing immediately. And we already have to endure at least 2 minutes with every single game, on just 512 MB...

I more or less agree with your reasoning but how did you come up with 5 minutes?

1000 MB / 20 MB/s = 50 sec
1000 MB / 10 MB/s = 100 sec

I hate loading screens but I don't recall seeing one as bad as 2 minutes either.
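For reference, that arithmetic in a couple of lines; the 10 and 20 MB/s figures are only ballpark sustained throughputs for the current optical drives:

```python
# Time to fill a given amount of RAM from the optical drive,
# assuming ballpark sustained read speeds (MB/s values are rough guesses).
def fill_time_sec(ram_mb, drive_mb_per_sec):
    return ram_mb / drive_mb_per_sec

for ram in (512, 1000):
    for speed in (10, 20):
        print(f"{ram} MB at {speed} MB/s: {fill_time_sec(ram, speed):.0f} s")
# 1000 MB comes out at 50-100 s, i.e. roughly 1-2 minutes rather than 5.
```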
 
I more or less agree with your reasoning but how did you come up with 5 minutes?

1000 MB / 20 MB/s = 50 sec
1000 MB / 10 MB/s = 100 sec

I hate loading screens but I don't recall seeing one as bad as 2 minutes either.

I think he's talking about the time it takes to actually 'play' once you hit that 'play' button. With consoles it's fairly significant, with all the movies etc. they play to hide the loading.
 
Thanks! So essentially, if the console maker decided to bump the clock speeds and bandwidth by, say, 10% across the board and 20% on the GPU, it wouldn't break anything and would perhaps get the best-case response out of the hardware, yielding more consistent frame rates and a better overall user experience? Then I for one advocate that they do just that! I would rebuy my Xbox 360 console just to make all the games run perfectly, with fewer frame-rate hitches and no screen tearing.

Yeah, I think bumping the clock would be the cheapest route, in addition to being 100% BC. The only other option for perfect BC would be to simply double the GPU hardware, similar to an SLI setup, and just disable one GPU when operating in BC mode; of course, this option would be quite expensive. SEGA did this when they released NAOMI 2, which was 100% BC with NAOMI 1.
 
Yeah, I think bumping the clock would be the cheapest route, in addition to being 100% BC. The only other option for perfect BC would be to simply double the GPU hardware, similar to an SLI setup, and just disable one GPU when operating in BC mode; of course, this option would be quite expensive. SEGA did this when they released NAOMI 2, which was 100% BC with NAOMI 1.

It probably wouldn't be worth it, unless of course they were pad limited for a shrink and had to/could chuck something in there to space out the die. What would you say the chances of that are for, say, the 45nm node?

But yeah, I would definitely be interested in an Xbox 360 with slightly faster internals! :D
 
I think he's talking about the time it takes to actually 'play' once you hit that 'play' button. With consoles it's fairly significant, with all the movies etc. they play to hide the loading.

Exactly. You usually get 3-4 title screens and such, then a main menu to look around, and altogether it takes at least 2 minutes to start playing any game.
 
Exactly. You usually get 3-4 title screens and such, then a main menu to look around, and altogether it takes at least 2 minutes to start playing any game.

The BOM cost for 2 GB of flash is US$4 today. By the time they introduce consoles with more memory it should be much lower, perhaps 8 GB for the same price. I do hope flash caches (with clever management) for optical disc drives and HDDs will become a reality sometime in the near future.

Imagine having startup times < 5 seconds for your games the second time you start them. :D
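A minimal sketch of what that "clever management" might look like: a read-through cache sitting in front of the optical drive. Every path, block size and the read_from_disc() helper here are hypothetical, not any real console API:

```python
# Minimal read-through cache sketch: serve reads from flash if we've seen the
# block before, otherwise read from the (slow) optical drive and cache it.
# All paths, sizes and read_from_disc() are hypothetical placeholders.
import os

CACHE_DIR = "/flash/cache"          # hypothetical flash mount point
BLOCK_SIZE = 2 * 1024 * 1024        # 2 MB blocks, arbitrary choice

def read_from_disc(block_id):
    """Placeholder for a slow optical-drive read of one block."""
    raise NotImplementedError

def read_block(block_id):
    path = os.path.join(CACHE_DIR, f"{block_id}.bin")
    if os.path.exists(path):                 # cache hit: fast flash read
        with open(path, "rb") as f:
            return f.read()
    data = read_from_disc(block_id)          # cache miss: slow optical read
    with open(path, "wb") as f:              # populate the cache for next boot
        f.write(data)
    return data
```

The second time a game boots, most of its startup reads would come from flash instead of the disc, which is where the fast-startup hope comes from.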
 
I wonder if any of the console devs still use clock speeds for game timing, though. In older machines, they did. That means if the clocks were changed, gameplay would run too fast; everything would go into turbo mode and break. They'd have to add a way to switch the system from "legacy mode" to "faster mode" a la Wii.

The companies will never split their market up like this though. I'm not sure there would be a tangible benefit to them. People are going to buy the hardware regardless of whether it gets improved or not.

The games would almost certainly need to be patched to actually do anything besides run faster. Antialiasing and vsync won't magically enable. Good luck getting companies to patch their old catalog.
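To illustrate the timing concern, here's a toy sketch (not any actual console SDK code) contrasting frame-counted timing, which speeds up with the hardware, against delta-time timing, which doesn't:

```python
# Toy illustration of why clock/frame-based timing breaks on faster hardware.
# Frame-counted timing assumes a fixed tick rate; delta-time timing measures
# real elapsed time and is immune to clock bumps.
import time

SPEED = 5.0                          # game units per second

def update_frame_counted(position, assumed_dt=1 / 60):
    # Assumes every frame takes exactly 1/60 s. On hardware running 20%
    # faster, frames arrive sooner and the game plays 20% too fast.
    return position + SPEED * assumed_dt

def update_delta_time(position, last_time):
    # Uses the real elapsed time, so the speed of the hardware doesn't matter.
    now = time.monotonic()
    position += SPEED * (now - last_time)
    return position, now
```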
 
I wonder if any of the console devs still use clock speeds for game timing, though. In older machines, they did. That means if the clocks were changed, gameplay would run too fast; everything would go into turbo mode and break. They'd have to add a way to switch the system from "legacy mode" to "faster mode" a la Wii.

The companies will never split their market up like this though. I'm not sure there would be a tangible benefit to them. People are going to buy the hardware regardless of whether it gets improved or not.

The games would almost certainly need to be patched to actually do anything besides run faster. Antialiasing and vsync won't magically enable. Good luck getting companies to patch their old catalog.

I think the thought process is merely to smooth out the frame rate (not increase it) and reduce screen tearing.
 
Thanks! So essentially, if the console maker decided to bump the clock speeds and bandwidth by, say, 10% across the board and 20% on the GPU, it wouldn't break anything[...]

Well, in theory. In practice, that's a really nice way to find previously undetected race conditions. Besides, you would have to deploy a whole bunch of new devkits, QA would have to test on two platforms... fun.
 