ALU:TMU vs. Unified Shaders

Hello All,

I had a few quick questions. I found it interesting that the x1900 has a 3:1 ALU:TMU ratio, and I was wondering when a midrange $200 card would have the same (2:1 or 3:1). Then I thought about the whole unified shader Vista thing and wondered if we have time for another round of midrange cards with a non-1:1 ratio before everything goes unified.

1] Will anything have unified shaders by Christmas 2006?

2] Will everything have unified shaders by Christmas 2006?

3] Will we see a split, with budget cards still being a simple 1:1, midrange cards 3:1 (or other), and high-end cards being unified (assuming nVidia goes unified)?

I am just wondering if a midrange 8-pipe 3:1 128-bit card would be a better idea than a more expensive high-end 16-pipe 1:1 256-bit card. My first thought would be that while the 16-pipe card would be faster in newer games by brute force, the 8-pipe 3:1 card might be able to close the gap enough that the extra cost of the high-end card isn't justified.

Just thinking, and I normally only lurk in this forum, so go easy on me. =)
Dr. Ffreeze

drffreeze.net
 
Dr. Ffreeze said:
Hello All,

I had a few quick questions. I found it interesting that the x1900 has a 3:1 ALU:TMU ratio, and I was wondering when a midrange $200 card would have the same (2:1 or 3:1).
The Radeon x1600 (Pro and XT) has a 3:1 ratio of ALUs to texture sampling units.
Dr. Ffreeze said:
Then I thought about the whole unified shader Vista thing and wondered if we have time for another round of midrange cards with a non-1:1 ratio before everything goes unified.
I don't think the existence or non-existence of Vista has anything to do with unified shading architectures. At the hardware level, the engineers can do whatever they want, and a "classic" architecture certainly won't make it impossible to support a more unified software model, and vice versa.

Going unified at the hardware level will:
1) enable instant proper vertex texturing support
2) enable load balancing between vertex and fragment/pixel bottlenecks

These are both much better reasons for that shift than Vista. As with all things in life, it comes at a complexity cost, though, which makes this an interesting question nevertheless, IMO.
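The load-balancing point can be illustrated with a toy model. In the sketch below, all unit counts and workload numbers are made up for illustration; the only point is that with dedicated vertex and pixel units, the busier stage sets the frame time while the other stage's units sit idle, whereas a unified pool keeps everything busy.

```python
# Toy model of dedicated vs. unified shader units.
# All numbers are hypothetical, purely for illustration.

def fixed_split_time(vertex_work, pixel_work, vertex_units=8, pixel_units=24):
    """With dedicated units, the slower (more loaded) stage dominates frame time."""
    return max(vertex_work / vertex_units, pixel_work / pixel_units)

def unified_time(vertex_work, pixel_work, total_units=32):
    """A unified pool can throw every unit at whatever work remains."""
    return (vertex_work + pixel_work) / total_units

# A vertex-heavy frame: the fixed split leaves most pixel units idle.
print(fixed_split_time(160, 80))  # 20.0 -- vertex stage is the bottleneck
print(unified_time(160, 80))      # 7.5  -- same total units, no idle hardware
```

Same total unit count in both cases; the unified pool simply never strands capacity on the under-loaded stage.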
Dr. Ffreeze said:
1] Will anything have unified shaders by Christmas 2006?

2] Will everything have unified shaders by Christmas 2006?

3] Will we see a split, with budget cards still being a simple 1:1, midrange cards 3:1 (or other), and high-end cards being unified (assuming nVidia goes unified)?
1) Possibly. I don't know anything about that, but if it happens, it'll more likely be a high-end part, IMO.

2) No.

3) We have that split now, as mentioned above, except that none of it is a unified shader architecture. I wouldn't expect the midrange to make such a dramatic architectural shift in one year.

Dr. Ffreeze said:
I am just wondering if a midrange 8-pipe 3:1 128-bit card would be a better idea than a more expensive high-end 16-pipe 1:1 256-bit card. My first thought would be that while the 16-pipe card would be faster in newer games by brute force, the 8-pipe 3:1 card might be able to close the gap enough that the extra cost of the high-end card isn't justified.
Good point. You might want to read this:
http://www.beyond3d.com/reviews/ati/r580/int/

Basically, ATI have decided that a 4-pipe card with a 3:1 ratio is a better idea than an 8-pipe card with 1:1.
ATI may or may not release a doubled x1600 ("x1700" is the expected name; that would be a 3:1 ALU:TEX ratio with 8 "pixel pipelines" and 4+ "vertex pipelines") later in the year.
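The tradeoff being weighed here can be sketched as back-of-envelope arithmetic. In the toy model below, all unit counts and per-frame op counts are invented for illustration (they are not real hardware specs), and per-unit throughput and clocks are assumed equal on both cards:

```python
# Hypothetical comparison: an "8-pipe 3:1" card (8 TMUs, 24 ALUs) vs. a
# "16-pipe 1:1" card (16 TMUs, 16 ALUs). Numbers are illustrative only.

def frame_time(alu_ops, tex_ops, alus, tmus):
    """Whichever unit type saturates first sets the frame time."""
    return max(alu_ops / alus, tex_ops / tmus)

# Texture-bound workload (older game): the 16-pipe card's raw TMU count wins.
print(frame_time(alu_ops=100, tex_ops=100, alus=24, tmus=8))    # 12.5
print(frame_time(alu_ops=100, tex_ops=100, alus=16, tmus=16))   # 6.25

# Shader-heavy workload (newer game, 4 ALU ops per texture fetch): it reverses.
print(frame_time(alu_ops=400, tex_ops=100, alus=24, tmus=8))    # ~16.7
print(frame_time(alu_ops=400, tex_ops=100, alus=16, tmus=16))   # 25.0
```

So which card is "better" depends entirely on the ALU-to-texture mix of the games you run, which is exactly why the cheaper 3:1 part can close the gap on shader-heavy titles.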

I'm a bit foggy on what's brewing at NVIDIA currently. I'm slightly out of the loop.
 