256MB Graphics Cards

When do you think 256MB cards will be necessary?

  • Next 6 months
  • Next 12 Months
  • When DoomIII Ships!!
  • 256MB?? This is getting silly...

  • Total voters
    133

Dave Baumann

Just wondering what your thoughts on this are, given that at this time it seems as though there won't be any 256MB NV30s.
 
When I can afford one?

At the moment I don't think they are necessary... most systems don't have that much RAM!

OK... how about when I can get twice the performance of current top-end technology for half the price :)
 
I voted 12 months. Hopefully the new PCI Express will help bandwidth, but it still delivers only about 8 GB/s vs. 19.6 GB/s for the 9700's onboard RAM, so I expect onboard memory to keep increasing...
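For a rough sense of those figures, here's a back-of-the-envelope sketch. The lane rate and memory clock are assumptions (PCI Express x16 at ~250 MB/s per lane per direction, a 9700 Pro with a 256-bit bus at 310 MHz DDR), not numbers taken from the post:

```python
# Rough bandwidth comparison (assumed figures, see note above).

pcie_x16_total = 16 * 250e6 * 2        # 16 lanes, both directions combined: ~8 GB/s
r9700_local = (256 // 8) * 310e6 * 2   # 32 bytes per transfer, DDR: ~19.8 GB/s

print(f"PCIe x16:   {pcie_x16_total / 1e9:.1f} GB/s")
print(f"9700 local: {r9700_local / 1e9:.1f} GB/s")
```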

Interesting you would post that now :D
 
256MB will be "necessary" about a year after it becomes a high end standard. And by "necessary" I mean "to run 3DMark05." :p
 
Aren't we getting to the point where, even with full FSAA, almost half the 256 MBs of memory would be completely unused most of the time? I think this is jumping the gun, just a little bit...
 
Nagorak said:
Aren't we getting to the point where, even with full FSAA, almost half the 256 MBs of memory would be completely unused most of the time? I think this is jumping the gun, just a little bit...

6xFSAA at 1600x1200 uses how much memory?
(on a 9700)?

Even 4x uses enough to cause slowdowns (AGP texturing) in some maps of UT2K3 at that res.
 
I was considering this after reading a thread discussion about high-end graphics moving to integrated chipsets (I don't think so, given the lack of significance of AGP speed and the scheduled arrival of higher-speed bus specifications, but maybe some developers have some exciting ideas that just aren't possible currently).

What struck me as an interesting question, given that monitor evolution seems to have shifted to quality/space improvements rather than increasing size (and therefore increasing resolution), is whether we're near a peak, and what level that peak will be. Even if AA continues to be inefficient in its memory usage, what would make consumer 3D cards go past 512MB at all? Texture detail is going to be added by shaders, not just by more and more textures, and even an application we have that does push textures (UT2003) has trouble utilizing even 256MB.

Hmm...maybe 3d textures will take off? How widely could they be used? Could that do it?
 
If ATI ever issues a card or drivers that offer the option of pure RGSS or a hybrid, wouldn't 256 megs be very useful? Also, if Vogel is reading this especially: doesn't UT2k3 "automatically" (IIRC the .ini file can already be hacked) offer the option for ultra-high-res textures ONLY if it detects a video card with 256 megs of memory? AFAIK the highest texture setting(s) available in the game offer nothing over the next-highest setting(s) on cards with 128 megs... there have been several threads on this here and elsewhere in the past.
 
Althornin said:
Nagorak said:
Aren't we getting to the point where, even with full FSAA, almost half the 256 MBs of memory would be completely unused most of the time? I think this is jumping the gun, just a little bit...

6xFSAA at 1600x1200 uses how much memory?
(on a 9700)?

Even 4x uses enough to cause slowdows (AGP texturing) in some maps of UT2K3 at that res.

1600 * 1200 * 4 * 3 * 6 = ~131 MBs?
And that's not even counting triple buffering.

Anyways, as you say, it's already needed in certain situations.
 
I think they need to start fixing the whole CPU/motherboard side of things. Graphics cards have had an astounding rise over the past few years, and I guess CPU speed has as well, but the interfaces (i.e. AGP 8x, or PCI Express) still aren't up to par, nor are hard drives. Anyway, 256MB seems overkill to me; I mean, we just started using 128MB.
 
1600 * 1200 * 4 * 3 * 6 = ~131 MBs?

As discussed in the thread about GFfx's 2xMSAA, which blends the two sub-sample framebuffers in a post-filter, most FSAA implementations (including GFfx at >2x) blend the sub-sample backbuffers in the GPU and then store them in a single frontbuffer. (Or "middlebuffer" if you're triple buffering? What's the correct term for this?)

So, with triple buffered 6xMSAA, that'd be 1600 * 1200 * 4 * (6 + 2) = ~59 MB. (I think. Can you get away with reusing any of the buffers and still have triple buffering?)

Any game that doesn't need AGP texturing to render in 1600 * 1200 no AA on a 64 MB card, also won't need AGP texturing to render in 1600 * 1200 6xAA on a 128 MB card. The problem with some of the 3DMark03 tests is that they barely fit on a 128 MB card with no AA, which is a new situation.

edit - Ante P: you wrote that the calculation I quoted above is *not* counting triple buffering? Then what's the factor of 3 for? (Apologies in advance if I'm missing something stupid...)
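A minimal sketch of that colour-buffer arithmetic, under the same assumptions as the post above (one multisampled back buffer plus two resolved single-sample buffers, 1600x1200, 32-bit colour):

```python
# Colour-buffer memory for triple-buffered 6x MSAA at 1600x1200:
# one back buffer storing colour per sample, plus two resolved
# single-sample buffers for display.

width, height, bytes_per_pixel = 1600, 1200, 4   # 32-bit colour
samples, resolved_buffers = 6, 2

colour_bytes = width * height * bytes_per_pixel * (samples + resolved_buffers)
print(f"{colour_bytes / 2**20:.0f} MB")          # ~59 MB
```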
 
Tahir said:
Yeah I hear from my sources 256MB is gonna be an option on R350.

No, really. :idea:

Yeah - the 6x mode is apparently about as useable as the current 4x mode on the R300. 256MB RAM will be beneficial in certain situations. "Ultra high-end" AIBs only though.

MuFu.
 
Dave H said:
So, with triple buffered 6xMSAA, that'd be 1600 * 1200 * 4 * (6 + 2) = ~59 MB. (I think. Can you get away with reusing any of the buffers and still have triple buffering?)
That calculation doesn't seem to count the memory needed for the Z-buffer. With a 32-bit multisampled Z-buffer, it becomes
1600 * 1200 * ( 6*8 + 2*4 ) = ~103 MBytes.
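The same sketch with the per-sample Z-buffer included; z_bytes = 4 reproduces the 32-bit figure above, while 3 reproduces the ~92 MB number for a 24-bit Z-buffer raised in the next post (following the thread's own arithmetic rather than any particular hardware's actual Z storage format):

```python
# Colour + multisampled Z memory at 1600x1200, 6x MSAA, triple buffered.
# Only the multisampled back buffer carries Z; the resolved buffers are colour only.

width, height = 1600, 1200
colour_bytes, z_bytes = 4, 4      # 32-bit colour, 32-bit Z (use 3 for 24-bit Z)
samples, resolved = 6, 2

total = width * height * (samples * (colour_bytes + z_bytes) + resolved * colour_bytes)
print(f"{total / 2**20:.0f} MB")  # ~103 MB with z_bytes = 4; ~92 MB with 3
```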
 
I think texture compression (TC) is very common now, and it's largely used by game developers to pack in higher-resolution textures. So TC is not helping reduce the memory requirements of video cards.
 
MuFu said:
Yeah - the 6x mode is apparently about as fast as the current 4x mode on the R300. 256MB RAM will be beneficial in certain situations. "Ultra high-end" AIBs only though.
Those numbers sound so excessive as to be fake. I'm excited. :)
 
That calculation doesn't seem to count the memory needed for the Z-buffer. With a 32-bit multisampled Z-buffer, it becomes
1600 * 1200 * ( 6*8 + 2*4 ) = ~103 MBytes.

Thanks for the correction. Aren't most Z-buffers still 24-bit, though? (That'd get you to ~92MB.)

OT but related: how exactly do framebuffer and Z compression work such that they save bandwidth yet don't save memory usage? Conversely, wouldn't hierarchical Z require more memory (in the same way that mipmaps take up more memory but reduce bandwidth utilization)?
 
Dave H said:
That calculation doesn't seem to count the memory needed for the Z-buffer. With a 32-bit multisampled Z-buffer, it becomes
1600 * 1200 * ( 6*8 + 2*4 ) = ~103 MBytes.

Thanks for the correction. Aren't most Z-buffers still 24-bit, though? (That'd get you to ~92MB.)

OT but related: how exactly do framebuffer and Z compression work such that they save bandwidth yet don't save memory usage? Conversely, wouldn't hierarchical Z require more memory (in the same way that mipmaps take up more memory but reduce bandwidth utilization)?

RE: your OT - I'll let someone with more experience answer. I have the idea, I think, but I don't want to sound the fool, plus I can't think of how to say it.

Still, even 92 MB leaves only about 36 MB for textures on a 128 MB card.

UT2k3 certainly uses more than this, and the amount needed is only gonna go up....
 