Does the nVidia architecture still give you the 16bit edge?

mockingbird

Newcomer
I remember back in the day I hated my Radeon, because one of the follies of the Radeon architecture was that if you lowered the color depth to 16bpp, you would still get the same speed as playing in 32bpp.

ATI touted this as a "feature". With nVidia cards, you could get a huge speed boost in demanding games.

Is this still the case?

I'm planning for a new system in the future, which will probably be a Kuma + AM3, and was wondering what my GPU options would be. Too bad; I really had faith in Trident's "surprise", which turned out to be bullshit. Looks like it's still ATI and nVidia.

ATI has always had shoddy product support. The original Rage Theatre would actually "deinterlace" the video from the video-in port by just removing one of the fields altogether. And DSCALER wouldn't help, because this was hardwired :???:

OTOH I hate nVidia because they've turned from the underdog into the evil narcissistic emperors of the GPU world. I'll probably still go with an ATI card, but it would just be nice to know.

Also, in anticipation of Carmack's Rage, is ATI's OpenGL performance still on par with nVidia's?

Thanks
 
Do games still offer the option to use 16-bit depth? Today, with all the HDR craze, most use 64-bit internally anyway :p
 
If you are referring to the ALU processing precision for the shaders, G71 was the last generation where 16-bit components were faster (or 32-bit was slower, depending on your PoV). Since G80 and the mandatory requirements at the API level regarding this issue, it's all 32-bit territory, and hence there's no need to spend time and resources on this kind of optimization.
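
(Just to illustrate the precision side of that old trade-off, here's a minimal NumPy sketch comparing 16-bit and 32-bit float components. It isn't tied to any particular GPU; the values are arbitrary examples.)

Code:
import numpy as np

# Rough illustration of the 16-bit vs 32-bit shader component trade-off:
# fp16 carries roughly 3 decimal digits and tops out near 65504, fp32 roughly 7.
values = np.array([1.0001, 100.001, 1000.01, 65000.0], dtype=np.float64)

as_fp16 = values.astype(np.float16).astype(np.float64)
as_fp32 = values.astype(np.float32).astype(np.float64)

for v, h, s in zip(values, as_fp16, as_fp32):
    print(f"value={v:<10} fp16={h:<10} fp32={s:<12} fp16 rel. err={(h - v) / v:+.1e}")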
 
I still play some Quake3 based games like Jedi Academy and Jedi Outcast.

But it looks like I'm way behind ;) ;) ;) Oh well, what can I say, money's been a hard thing to come by lately.
 
Uhm, you do realize that with a modern high-end GPU, FPS in Quake3 will be capped at 999, right? I don't think 16-bit support for these games makes any sense whatsoever anymore.
 
I would disagree with this.

My brother has a Radeon X1450 on his C2D laptop, and it's not powerful enough to drive Jedi Academy at the native LCD resolution of 1280x800.

I'm referring to OpenGL 1.5 and DirectX 5-8.1 games.

If you are referring to the ALU processing precision for the shaders, G71 was the last generation where 16-bit components were faster (or 32-bit was slower, depending on your PoV). Since G80 and the mandatory requirements at the API level regarding this issue, it's all 32-bit territory, and hence there's no need to spend time and resources on this kind of optimization.
 
What kind of GPU are you planning to get? It might be better if you tell us your budget and preferences so someone can make a possible configuration for you.
 
1. ATI has always had shoddy product support. The original Rage Theatre would actually "deinterlace" the video from the video-in port by just removing one of the fields altogether. And DSCALER wouldn't help, because this was hardwired :???:

2. OTOH I hate nVidia because they've turned from the underdog into the evil narcissistic emperors of the GPU world. I'll probably still go with an ATI card, but it would just be nice to know.

3. Also, in anticipation of Carmack's Rage, is ATI's OpenGL performance still on par with nVidia's?

Thanks


As no one else has tackled these points yet, I will.
1. Still living in 2000, are ya?
2. Stupid reason for not going with the best available, but whatever floats your boat.
3. For the most part, ATI has improved from the Q3A days.
 
My brother has a Radeon X1450 on his C2D laptop
X1450 is pretty damn far from 'modern high-end'. And the bottlenecks are elsewhere nowadays, so it feels pretty unlikely that 16bpp would give you a significant performance boost anyway (although I'm sure it would give some - just not much).

Let's be realistic though: nowadays, if you want to play recent games, you either buy something on the level of the HD3850/8800GS as a strict minimum, or you don't and only play old stuff. It's a sad situation to be in, but whadya want. The perf/$ at that pricepoint (>$150) is incredibly higher than in the low-end, anyway. Just to give you an order of magnitude to think about: I would estimate a HD3850 to be 6 to 10 times faster than a Mobility Radeon X1450.
 
I remember back in the day I hated my Radeon, because one of the follies of the Radeon architecture was that if you lowered the color depth to 16bpp, you would still get the same speed as playing in 32bpp.

ATI touted this as a "feature". With nVidia cards, you could get a huge speed boost in demanding games.
No, you've got that backwards. Even then, both GeForce and Radeon cards had the same performance with 32bpp and 16bpp internally. So technically, the nVidia cards (TNT through GeForce2) didn't get faster with 16bpp; they got a lot slower with 32bpp. The reason a GeForce2 got much slower with 32bpp was that (a) it was faster to begin with (it could output 4 pixels per clock versus the Radeon's 2) and (b) it didn't have any bandwidth-saving features. Thus it was completely and utterly limited by memory bandwidth in 32bpp mode, in contrast to the Radeon (the HyperZ features it had were probably actually a bit overkill for the raw performance it had...).
But anyway, that's certainly not relevant any longer. Both vendors have bandwidth-saving methods, and I bet those aren't optimized for 16bpp...
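
(A rough back-of-the-envelope sketch of that bandwidth argument, in Python. The fill rates are just pixels-per-clock times core clock, the memory bandwidth figures are approximate, and the per-pixel byte count ignores texture traffic and HyperZ, so take the exact numbers with a grain of salt.)

Code:
# Why a GeForce2-class part hit the memory bandwidth wall at 32bpp but not 16bpp.
# Per-pixel traffic here is only color write + Z read + Z write, no textures.

def required_bandwidth_gb_s(fillrate_mpix, bytes_per_pixel):
    """Memory traffic needed to sustain the raw fill rate, in GB/s."""
    return fillrate_mpix * 1e6 * bytes_per_pixel / 1e9

cards = {
    # name: (pixels per clock, core MHz, approx. memory bandwidth in GB/s)
    "GeForce2 GTS": (4, 200, 5.3),
    "Radeon DDR":   (2, 183, 5.9),   # plus HyperZ, which cuts Z traffic further
}

for depth, bytes_px in [("16bpp", 2 + 2 + 2), ("32bpp", 4 + 4 + 4)]:
    print(f"--- {depth} ({bytes_px} bytes/pixel of framebuffer traffic) ---")
    for name, (ppc, mhz, mem_bw) in cards.items():
        fill = ppc * mhz                                   # raw fill rate, Mpixels/s
        need = required_bandwidth_gb_s(fill, bytes_px)
        print(f"{name:12s} fill {fill:4d} Mpix/s needs ~{need:4.1f} GB/s, has ~{mem_bw} GB/s")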
 
Thanks McZak, I was just gonna post about that bit too. The "speed boost with 16bpp" bit cracked me up pretty hard as a rewrite of history.

Also gotta agree ATi had crap support back in the day, but they totally turned that around years ago and have been excellent for a long time now.

Double-also: my C2D laptop has an X1400, and while I think it's a great card for viddy/web cruising on a 17" laptop, it is by no means a gaming card... not by a long stretch!
 
Both G80/G9x and R600/RV6x0 look horrible at 16-bit color depth. They don't dither at all and you get loads of banding as a result.

The crazed fan(atic)s of the Thief games and System Shock 2 are "up in arms" over this issue because their treasured games look terrible on the new cards. Local hero (and Oblivion hero), Timeslip, has written a hack to force D3D to render Dark Engine-powered games at 32-bit depth. It's pretty buggy though. Crashes Shock 2 when the inventory comes up.
http://www.ttlg.com/forums/showthread.php?t=113501

This is what Quake 3 looks like on G80 if you set the game to 16-bit color depth. :)
[screenshot: quake3 2007-03-18 17-32-44-90.jpg]

System Shock 2 with 16X AA and 16X AF, in addition to loads of banding!
[screenshot: SHOCK2 2007-03-16 01-28-36-04.jpg]

So if you really want to play a game that only supports 16-bit color, you'd better stick to the GF 7900 and X1950 and down. I believe the NV folks have admitted that they are aware of this limitation of the GPUs, but I doubt they consider it a major issue.
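
(For anyone curious what "no dithering at 16-bit" actually means, here's a small NumPy sketch: quantize a smooth gradient to the 6-bit green channel of RGB565, once by plain truncation and once with a simple 4x4 ordered dither. It's only an illustration of the banding effect, not how any particular GPU implements it.)

Code:
import numpy as np

# 4x4 Bayer threshold matrix, normalized to [0, 1)
bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

height, width = 64, 256
gradient = np.tile(np.linspace(0.0, 1.0, width), (height, 1))  # smooth 0..1 ramp

levels = 2**6 - 1  # 6-bit green channel of RGB565

# Plain truncation: large flat steps -> visible bands
banded = np.floor(gradient * levels) / levels

# Ordered dither: add a sub-LSB threshold pattern before quantizing
threshold = np.tile(bayer4, (height // 4, width // 4))
dithered = np.floor(gradient * levels + threshold) / levels

# The eye averages nearby pixels, so compare the error of 4x4 block averages:
# the dithered image's local averages track the gradient much more closely.
def block_mean_error(img):
    blocks = img.reshape(height // 4, 4, width // 4, 4).mean(axis=(1, 3))
    target = gradient.reshape(height // 4, 4, width // 4, 4).mean(axis=(1, 3))
    return np.abs(blocks - target).mean()

print("4x4 block-average error, plain quantization:", block_mean_error(banded))
print("4x4 block-average error, dithered:          ", block_mean_error(dithered))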
 
Why do I feel as though I stumbled into a thread written by Rip Van Winkle?

I half expected someone to come chiming in about a Voodoo card.
 
Why do I feel as though I stumbled into a thread written by Rip Van Winkle?

I half expected someone to come chiming in about a Voodoo card.

I always preferred the dithering on 3Dfx cards to the nVidia cards back in the 16-bit days. ;)
 
16-bit mode is very useful on a GeForce 6100, where you're awfully bandwidth limited, even (or especially) in older games, and you can still use 32-bit compressed textures. For the same reason you may want to run 16bpp on IGPs and laptop GPUs.

With a future (or current) midrange card though, you're looking at an HD 3850/70, 9600GT or RV770: fast RAM on a 256-bit bus. From the 9700 Pro level onwards, 16bpp is pretty much irrelevant (and on my Ti4200 it was mostly useful for running 8xS AA in some older games).
 
Come to think of it, I may actually have been affected by this. I installed Might & Magic 6 recently, due to a nostalgia thread, and it looked awful being rendered by my 8800 GTS. I don't remember it looking quite that bad. Maybe I need to put it on lower end hardware LOL
 
I always preferred the dithering on 3Dfx cards to the nVidia cards back in the 16-bit days. ;)

And on the Voodoo5 you have a driver option to force 32-bit rendering, even though you never really needed it. The 16-bit-to-22-bit filtering is great (improved on the V5), and most of the time 16-bit looks good in old games, especially if 32-bit isn't even supported.
 
Both G80/G9x and R600/RV6x0 look horrible at 16-bit color depth. They don't dither at all and you get loads of banding as a result.

The crazed fan(atic)s of the Thief games and System Shock 2 are "up in arms" over this issue because their treasured games look terrible on the new cards. Local hero (and Oblivion hero), Timeslip, has written a hack to force D3D to render Dark Engine-powered games at 32-bit depth. It's pretty buggy though. Crashes Shock 2 when the inventory comes up.
http://www.ttlg.com/forums/showthread.php?t=113501

This is what Quake 3 looks like on G80 if you set the game to 16-bit color depth. :)
[screenshot: quake3 2007-03-18 17-32-44-90.jpg]

System Shock 2 with 16X AA and 16X AF, in addition to loads of banding!
[screenshot: SHOCK2 2007-03-16 01-28-36-04.jpg]

So if you really want to play a game that only supports 16-bit color, you'd better stick to the GF 7900 and X1950 and down. I believe the NV folks have admitted that they are aware of this limitation of the GPUs, but I doubt they consider it a major issue.

Interestingly revealing. Video card manufacturers getting a little ahead of themselves. It's like Abit when they tried to dump PS/2 from their "IT7" mobos. (Yeah, yeah, I know. I'm living in the past.)

The immediate solution that comes to mind is to use a supplementary PCI card for older games. Of course, future motherboards will not have PCI slots, and to my knowledge there are no PCIe x8 or x4 video cards; I don't envision any in the future.

So, in retrospect, I'm glad I brought this topic up, even in the face of some well-intentioned derisive heckling. Want to play older games? Your new computer is less effective at this task than a Tualatin or Coppermine with a GeForce2 GTS...

No, you've got that backwards. Even then, both GeForce and Radeon cards had the same performance with 32bpp and 16bpp internally. So technically, the nVidia cards (TNT through GeForce2) didn't get faster with 16bpp; they got a lot slower with 32bpp. The reason a GeForce2 got much slower with 32bpp was that (a) it was faster to begin with (it could output 4 pixels per clock versus the Radeon's 2) and (b) it didn't have any bandwidth-saving features. Thus it was completely and utterly limited by memory bandwidth in 32bpp mode, in contrast to the Radeon (the HyperZ features it had were probably actually a bit overkill for the raw performance it had...).
But anyway, that's certainly not relevant any longer. Both vendors have bandwidth-saving methods, and I bet those aren't optimized for 16bpp...

Hmmm. So the GeForce was superior all along. The GeForce2 MX (admittedly released significantly later than the Radeon 7200) crushed the Radeon 7200 in 16-bit, was almost on par with it in 32-bit, and cost less than half. Yeah yeah, I know, ancient news.

Boy, ATI was overpriced from the start. Too bad 3dfx was bankrupted by some unknown entity.
 