128 MB better in the long run

rAvEN^Rd

Newcomer
Someone just argued in another forum that a Ti4200 with 128 MB would be better than 64 MB in the long run. He argued that no games today would use 128 MB of memory, but that they would in the future. Some of you helped me understand how video memory works last week, and based on that, the argument appears to be flawed.

Here's how I figure:
Games today can already use more than 64 MB of memory if you just run at a high enough resolution with a demanding form of FSAA (1600x1200x32 with 4 samples, for example). I think it's true that people will eventually need 128 MB and that they will run their games (or whatever) at higher resolutions, but they won't do it on a Ti4200, since that card simply won't deliver acceptable framerates at those settings.
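Rough math backs this up. Here's a back-of-the-envelope sketch, assuming 4x FSAA is implemented as 2x2 supersampling (the actual buffer layout depends on the driver, so treat these as ballpark figures):

```python
# Framebuffer memory at 1600x1200x32 with 4x FSAA, assuming
# supersampling (render target is 4x the pixel count).
width, height = 1600, 1200
bytes_per_pixel = 4          # 32-bit color
samples = 4                  # 4x FSAA

ss_pixels = width * height * samples
back_buffer = ss_pixels * bytes_per_pixel
depth_buffer = ss_pixels * bytes_per_pixel   # 24-bit depth + 8-bit stencil
front_buffer = width * height * bytes_per_pixel  # downsampled scanout

total_mb = (back_buffer + depth_buffer + front_buffer) / (1024 ** 2)
print(f"Framebuffer alone: ~{total_mb:.1f} MB")  # ~65.9 MB
```

That's nearly the whole 64 MB card consumed before a single texture is loaded, which is exactly why nobody would run a Ti4200 at those settings anyway.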

My conclusion is that you should get 128 MB of memory if you intend to run high resolutions with FSAA. If you don't want that, you might as well go with the 64 MB version, since it has faster memory.

Does my reasoning make sense, or did I misunderstand what I was told about memory?
 
Your reasoning makes reasonable sense. Also, how often do you upgrade? Is there a fair chance you will get an R300/NV30 part by, say, next spring? If the answer is yes, you might as well get the faster, cheaper 64 MB version now.
 
Isn't most of the extra memory used to hold textures? So if games start appearing with a lot more texture data, you'll need that extra memory for performance.
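To put rough numbers on that, here's a sketch assuming uncompressed 32-bit textures with full mipmap chains (mipmaps add about 1/3 overhead; compression like DXTC shrinks all of these considerably):

```python
# Back-of-the-envelope texture memory for a single texture.
def texture_mb(width, height, bytes_per_texel=4, mipmaps=True):
    base = width * height * bytes_per_texel
    total = base * 4 / 3 if mipmaps else base   # mipmaps ~= +1/3
    return total / (1024 ** 2)

print(texture_mb(256, 256))    # ~0.33 MB
print(texture_mb(512, 512))    # ~1.33 MB
print(texture_mb(1024, 1024))  # ~5.33 MB -- a few dozen of these
                               # and 64 MB fills up fast
```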
 
The Ti4200 doesn't really have enough fillrate to run the highest resolutions with FSAA at good performance anyway, so it's of little consequence.

The fact remains that most (if not all) of the 64 MB variants have faster memory, leading to higher performance in the vast majority of situations. Not until games are developed with DX8 hardware as a minimum specification (probably about three years away yet) will 128 MB really be a significant benefit.
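For reference, assuming the commonly quoted effective memory clocks for the two variants (500 MHz for the 64 MB card, 444 MHz for the 128 MB card, both on a 128-bit bus), the bandwidth gap looks like this:

```python
# Memory bandwidth comparison between the two Ti4200 variants,
# assuming the commonly quoted effective DDR clocks.
def bandwidth_gbs(effective_mhz, bus_bits=128):
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

print(bandwidth_gbs(500))  # 64 MB version:  8.0 GB/s
print(bandwidth_gbs(444))  # 128 MB version: ~7.1 GB/s
```

Roughly a 12% bandwidth advantage for the 64 MB card, which is why it wins in most of today's games.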
 
I've been wondering a lot about this myself. Before, the only textures a game developer had to worry about were the graphic artwork that gets slapped onto the model. Now that games feature fancy shading effects, there are more textures to handle, such as specularity maps, bump maps, normal maps, etc. Do these new kinds of maps consume proportionately more of the memory traditionally reserved for conventional texture maps, or are they generally "lightweight" enough not to be a significant contributor to VRAM use?

Take this Doom 3 game, for instance. It's been said that it will ideally run with about 90 MB of textures. Is that relatively large number due to the game really using 90 MB of graphic artwork, or does the artwork itself account for a more modest amount, with the additional specularity, bump, and normal maps producing this "bulky" figure?

I'm a layman at best on this kind of topic, so please excuse me if I've used incorrect terminology or concepts in my comments above.
 
For one, I think bump maps often call for even more precision than standard texture maps. Anyway, yes, bump maps can eat up tons of space, though I'm not certain you need that many of them. I would imagine each surface in DOOM 3 holds a surface texture and a bump map, with the bump map in particular being referenced multiple times to produce the lighting effects.
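A quick sketch of how those extra maps stack up, assuming a hypothetical material where every map is stored at the same size and format (real engines mix resolutions, formats, and compression, so this is only illustrative):

```python
# How per-surface map sets multiply memory use. The four-map set
# (diffuse + specular + normal + gloss) is a hypothetical example,
# not DOOM 3's actual material format.
MAPS_PER_MATERIAL = 4

def material_mb(width, height, bytes_per_texel=4, maps=MAPS_PER_MATERIAL):
    base = width * height * bytes_per_texel * 4 / 3   # with mipmaps
    return maps * base / (1024 ** 2)

print(material_mb(512, 512))  # ~5.3 MB per material vs ~1.3 MB for
                              # a single conventional texture
```

So the extra maps do scale memory use roughly in proportion to how many of them each surface carries, which would go a long way toward explaining a 90 MB texture budget.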
 