Most Significant Graphics Technology for 2005

Sinistar said:
Chalnoth said:
SmartShaders are a novelty toy, nothing more.
Sort of like PS 3.0 on a 6800. ;)
Smartshaders are these little programs that can be turned on by the end-user to apply post-processing to a frame. They have no merit to a game developer, since the game developer can do this anyway. PS 3.0, on the other hand, allows more flexibility, allowing for higher performance on a wider variety of shaders than PS 2.x. More freedom for the developer is vastly different from a novelty toy that end-users turn on to make their games look funny.
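To make the distinction concrete: a post-processing filter is just a function applied to every pixel of the finished frame, which is exactly why a developer can do it in the renderer without any driver-level feature. A minimal Python sketch (the framebuffer layout and the invert filter are illustrative, not the actual SmartShader mechanism):

```python
# Toy model of a post-process pass: the framebuffer is a list of
# (r, g, b) tuples with 0-255 channel values, and the "shader" is
# just a function mapped over every pixel of the finished frame.
def invert_pass(framebuffer):
    """Apply a per-pixel invert filter, as a post-process effect would."""
    return [(255 - r, 255 - g, 255 - b) for (r, g, b) in framebuffer]

frame = [(0, 128, 255), (10, 20, 30)]
print(invert_pass(frame))  # -> [(255, 127, 0), (245, 235, 225)]
```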
 
digi said:
It's not so much a specific technology IMHO, but rather the fact that the two top IHVs have such tight competition this year.

THAT is the most significant 3D-thingy for me this year, and I feel it's just been a win/win year for the consumer.

I would have to disagree with this. While there may be more competition between the two which ultimately helps improve performance, there is also a downside. Just look at the prices, as much as $700 for an X800 XT PE and around $650 for a 6800 Ultra. And that's if you can find them.

Chalnoth said:
Smartshaders are these little programs that can be turned on by the end-user to apply post-processing to a frame. They have no merit to a game developer, since the game developer can do this anyway. PS 3.0, on the other hand, allows more flexibility, allowing for higher performance on a wider variety of shaders than PS 2.x. More freedom for the developer is vastly different from a novelty toy that end-users turn on to make their games look funny.

Chalnoth, I thought we had gone over this SM3 ordeal enough already. Increasing the instruction limit and making things a little easier for developers does not constitute a very special or worthwhile feature, especially considering the cost to implement it. Obviously, since nVidia has already implemented it, they have no choice, but I doubt we'll see any low-end cards with the feature for some time, at least not until 6800U performance becomes the mid-to-low standard.
 
Because I am usually wrong, I want to make sure that a few things DO happen.

In 2005:
- There's not going to be a new manufacturing process.
- There's not going to be a commercial hardware raytracer for the desktop market.
- There's not going to be a capable 3rd player in the high end.
- There's not going to be a noticeable drop in card prices in Europe.
- There's not going to be any surprises whatsoever.

I think that's enough for a start. ;) Let's see on the 3rd of Jan. 2006 how wrong I actually was. ;)
 
making it a little easier for developers does not constitute a ... worthwhile feature.

Ease of development is a worthwhile feature. The harder something is to develop for, the longer it takes for games to be developed, and the more it costs for those games to be developed and debugged.

Many of the effects done in DX8 and DX9 can still be done on older register-combiner-style pipelines; the major difference between doing it on DX7 and DX8 is the ease of PS 1.0 syntax versus a pile of Set* calls, and perhaps the ability to collapse a pass. Likewise, PS 2.0 added some instructions and longer programs, but the bulk of shaders written today can still be accomplished on 1.4 and earlier, some via multipass, which isn't always a huge performance loss and is more of a developer headache than anything.
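The "collapsing a pass" point can be sketched numerically. With fixed-function blending only, combining a base texture with a light map and an additive glow term takes two render passes; a longer shader does the same math in one. A hedged Python sketch (per-channel floats in [0, 1]; the terms and names are made up for the example):

```python
# Two fixed-function blend passes vs. one shader pass doing the same math.
def two_pass(base, lightmap, glow):
    pass1 = base * lightmap          # pass 1: modulate blend into framebuffer
    pass2 = min(pass1 + glow, 1.0)   # pass 2: additive blend, clamped
    return pass2

def one_pass(base, lightmap, glow):
    # Same arithmetic in a single shader invocation: no second geometry
    # submission and no framebuffer read-modify-write between passes.
    return min(base * lightmap + glow, 1.0)

# The visual result is identical; only the number of passes differs.
assert two_pass(0.5, 0.8, 0.2) == one_pass(0.5, 0.8, 0.2)
```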


PS2.0 introduced more orthogonality into the instruction set, more temporary registers, the HLSL shading language, an effects framework, and other features which, more than anything, make DX9 more developer-friendly.

Take HDR in SM3.0 for example. You can do HDR blending with fragment programs and lots of multipass, but it's a major pain in the ass, especially if you want it to run well. Such a pain in the ass that most developers will avoid it, even if the performance hit were smaller. Having HW support, more than anything, makes it easier to implement.
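Why emulated HDR blending is such a pain can be sketched in a few lines. Without float-format blending, each additive light pass has to ping-pong: render, bind the previous accumulation buffer as a texture, add in the shader, write to a second buffer, swap. Native FP16 blending just accumulates. A toy Python model (the buffer bookkeeping is illustrative, not any specific API):

```python
# Emulated HDR accumulation: every light needs a full extra pass with
# a buffer swap, because the framebuffer can't blend float formats.
def accumulate_ping_pong(contributions):
    buf_a, buf_b = 0.0, 0.0
    for i, c in enumerate(contributions):
        # Each iteration: bind the previous buffer, add, write the other one.
        if i % 2 == 0:
            buf_b = buf_a + c
        else:
            buf_a = buf_b + c
    return buf_a if len(contributions) % 2 == 0 else buf_b

def accumulate_hw_blend(contributions):
    # Native FP16 blending: the GPU adds each pass directly, no readbacks
    # or buffer swaps, so the developer just issues the passes.
    return sum(contributions)

lights = [1.5, 0.75, 2.0]   # unclamped values above 1.0 are the point of HDR
assert accumulate_ping_pong(lights) == accumulate_hw_blend(lights)
```

The results are identical; the difference is all the extra buffer management and bandwidth the emulated path drags in.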

The PS2 is a great example of this. A bitch to program for, but it had enough fillrate that if you spent the extra effort, you could accomplish effects which DX8 HW could do with shaders.
 
ANova said:
Chalnoth, I thought we had gone over this SM3 ordeal enough already. Increasing the instruction limit and making things a little easier for developers does not constitute a very special or worthwhile feature, especially considering the cost to implement it. Obviously, since nVidia has already implemented it, they have no choice, but I doubt we'll see any low-end cards with the feature for some time, at least not until 6800U performance becomes the mid-to-low standard.
Well, increasing the possible program length is secondary to the improvements to the instruction set. Dynamic branching, predication, gradients, no dependent texture read limitation, no limitation on number of texture instructions, and more interpolated registers are also benefits of PS 3.0 over PS 2.0 and PS 2.0b (the additional interpolated registers are what allow FarCry to do its lighting in fewer passes with SM3 hardware).
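Dynamic branching is the easiest of those wins to illustrate: pixels outside a light's radius can skip the lighting math entirely, where PS 2.0-style straight-line code evaluates it everywhere and masks the result. A hedged Python sketch (the cost units, 1 for the branch test and 20 for full lighting, are made-up numbers, not measured figures):

```python
# Toy cost model for straight-line vs. dynamically branched shading.
def shade_ps2_style(pixels, radius):
    cost = 0
    for dist in pixels:
        cost += 20            # lighting evaluated for every pixel,
    return cost               # results outside the radius just get masked

def shade_ps3_style(pixels, radius):
    cost = 0
    for dist in pixels:
        cost += 1             # dynamic branch: cheap per-pixel test
        if dist <= radius:
            cost += 20        # full lighting only where it matters
    return cost

pixels = [0.5, 3.0, 7.0, 0.2]   # distances from the light
assert shade_ps3_style(pixels, 1.0) < shade_ps2_style(pixels, 1.0)
```

Real hardware branches at coarser granularity than single pixels, so the win shrinks when lit and unlit pixels are interleaved, but the shape of the saving is the same.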

And what, pray tell, do you think the cost is? There are far too many differences between the NV4x and R4xx parts to really compare them in transistor counts on the basis of PS 2.0 vs. PS 3.0 functionality only. And as for price to the consumer? Well, it seems to me that nVidia's products are faring pretty well on price vs. performance at the moment.
 
I'm not arguing that SM3 isn't a step up from SM2 (hmm, deja vu), but it certainly isn't anything special either. Yeah, Far Cry implemented it and we saw the results: a 5 fps typical gain. Let's jump for joy! The plain fact is SM3 won't come in handy until shaders become quite complex, at which point even current high-end cards will run too slowly to be playable. It was more of a checkbox feature for nVidia to tout (as it was the only real advantage, if you can call it that, the NV40 had over the R420).

And what, pray tell, do you think the cost is? There are far too many differences between the NV4x and R4xx parts to really compare them in transistor counts on the basis of PS 2.0 vs. PS 3.0 functionality only.

Not really; my concern is the cost for ATI to implement SM3, which requires quite a transistor increase due to the requirement for FP32. And as the benefits are few and far between, the cost is thus high.

And as for price to the consumer? Well, it seems to me that nVidia's products are faring pretty well on price vs. performance at the moment.

Only in relation to the 6800 GT, which stands to change once the X800 XL releases. :p
 
ANova said:
digi said:
It's not so much a specific technology IMHO, but rather the fact that the two top IHVs have such tight competition this year.

THAT is the most significant 3D-thingy for me this year, and I feel it's just been a win/win year for the consumer.

I would have to disagree with this. While there may be more competition between the two which ultimately helps improve performance, there is also a downside. Just look at the prices, as much as $700 for an X800 XT PE and around $650 for a 6800 Ultra. And that's if you can find them.
Personally I think both the X800 XT PE & 6800 Ultra were "reviewer edition" cards that were never intended to be available in quantity.

That's why I was personally a bit miffed that ATi didn't initially come out with an X800 XT, just an X800 Pro with 16 pipes....I felt they needed a card that enthusiasts could actually get and afford. (Then again, the X800 Pro VIVOs pretty much filled the bill with that one when you figure in their mod success rate. ;) )
 
ANova said:
Not really; my concern is the cost for ATI to implement SM3, which requires quite a transistor increase due to the requirement for FP32. And as the benefits are few and far between, the cost is thus high.
Really? How do you define high cost? If you're talking about the cost to the consumer, it's going to be relatively small no matter how you slice it:

1. The total increase in die area from implementing SM3 isn't going to be more than about 20% (the approximate difference in die area between the R420 and NV40; and the NV40 goes further than simply implementing SM3).

2. For chips, particularly early in their lifetime, the primary cost to the consumer is not the manufacturing cost, but rather the R&D cost.

3. The final product that the consumer purchases is a video card, not a chip. The total cost of the video card is a fair bit higher than that of the chip itself, as it also includes things like expensive RAM (on high-end boards) and, typically these days, a rather complex board design.

How, after all, do you think that nVidia is able to produce their GeForce 6800 products at competitive prices to ATI? The cost to the consumer of the larger die area is probably only a few percent, and the greater efficiency of the core may make this cost even less in the future, when one considers price vs. performance.
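The arithmetic behind "only a few percent" is worth spelling out. As a back-of-the-envelope sketch, with entirely assumed numbers (the 25% chip share of board cost is illustrative, not an actual BOM figure; the 20% comes from the die-area comparison above):

```python
# Rough cost-share arithmetic: a die-area increase only hits the consumer
# in proportion to the chip's share of total board cost. Both inputs are
# illustrative assumptions, not real bill-of-materials data.
chip_share_of_board = 0.25   # assumed: GPU chip as a fraction of board cost
die_area_increase = 0.20     # approximate R420 vs. NV40 die-area difference

board_cost_increase = chip_share_of_board * die_area_increase
print(f"{board_cost_increase:.0%}")   # -> 5%
```

This ignores yield effects (cost doesn't scale perfectly linearly with area), but it shows how a 20% bigger die can translate into only a few percent at the board level.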

Only in relation to the 6800 GT, which stands to change once the X800 XL releases. :p
Well, it looked to me like the 6600 series also offers (typically) higher performance at a similar price to the X700 series. And the 6200 series also offers similar price/performance ratios as ATI's X600 series. As for the X800 XL, judging its price vs. performance won't matter a whole lot until it reaches the market. We don't yet know how nVidia is going to respond to this product, so I don't think things will change much.

So, I don't think there is any basis for stating that the cost of SM3 is too high, considering the cost to the consumer appears to be slim to none.
 
Chalnoth said:
So, I don't think there is any basis for stating that the cost of SM3 is too high, considering the cost to the consumer appears to be slim to none.

Who said anything about cost to the consumer?

Consumer PRICE is driven by supply and demand. The low-end parts will always be priced similarly (or even drift downward as time goes on).

The problem is, there is a COST threshold for the producers. Every transistor that's "wasted" on FP32 / SM 3.0 in the low end is a transistor that COULD have been used for something else: deeper pipes, better AA, faster clocks, cooler operation, or whatever.

And if nothing else, just more profits, giving more R&D money for more worthwhile things.

Again, this is not to say that SM 3.0 is not an improvement over 2.0. It's just not some earth-shattering thing...and certainly no revolution in the low-end market.
 
Chalnoth said:
So, I don't think there is any basis in stating that the cost of SM3 is too high, considering the cost to the consumer appears to be slim to none.
Unless nV is taking a hit with their margins in order to maintain market share. If so, it is not a position any company wishes to maintain.

Edit: Joe beat me to it and was more detailed (verbose ;) ) as well.
 
Joe DeFuria said:
Again, this is not to say that SM 3.0 is not an improvement over 2.0. It's just not some earth-shattering thing...and certainly no revolution in the low-end market.
Except every improvement in the low end is important. The low-end market largely determines how much work developers will put into supporting certain technologies. As an example, as Carmack recently stated in his blog, he is finding new uses for the gradient instructions, but is having to find workarounds due to the lack of ATI support.

No, technology advancement in the low-end is far more important even than advancement in the high-end.
 
nelg said:
Unless nV is taking a hit with their margins in order to maintain market share. If so, it is not a position any company wishes to maintain.
nVidia's financials have been improving since the release of the NV40. The third-quarter results will be released tomorrow, and are expected to be quite good.
 
Chalnoth said:
Joe DeFuria said:
Again, this is not to say that SM 3.0 is not an improvement over 2.0. It's just not some earth-shattering thing...and certainly no revolution in the low-end market.
Except every improvement in the low end is important.

Obviously.

It's just that some improvements are more important than others. Is it more important to have basic DX9 support that's as fast as possible in the low end, or DX9++++ at reduced speeds?
 
Chalnoth said:
nVidia's financials have been improving since the release of the NV40. The third-quarter results will be released tomorrow, and are expected to be quite good.

Huh? There is nothing being released tomorrow. NVDA's fourth quarter (and fiscal year) ends on the last week of January and they won't report until at least two weeks later.
 
Another big deal of 2005 is that we should see the last of the GeForce FX line come and go.

The same goes for the R200 line.

Both are very good things.
 
kemosabe said:
Huh? There is nothing being released tomorrow. NVDA's fourth quarter (and fiscal year) ends on the last week of January and they won't report until at least two weeks later.
Yeah, sorry, I read the date wrong. It was November 4.

Regardless, the financial results were good.
 
Joe DeFuria said:
It's just that some improvements are more important than others. Is it more important to have as fast basic DX9 support as possible in the low end, or DX9 ++++ at reduced speeds?
How much reduced? I claim that the speed reduction from the additional feature support will be small to nonexistent. In fact, as games start to use the more advanced features offered in PS/VS 3.0, performance can only increase. Hell, it should be clear that the NV4x is already faster than ATI's offerings on a clock-for-clock basis in the majority of benchmarks, and that's before making use of PS/VS 3.0.
 
jvd said:
Another big deal of 2005 is that we should see the last of the GeForce FX line come and go.

The same goes for the R200 line.

Both are very good things.

I 2nd that!
 
Chalnoth said:
How much reduced? I claim that the speed reduction from the additional feature support will be small to nonexistent.

And I claim that the additional usefulness of PS/VS 3.0 in low-end parts will be small or non-existent compared to PS 2.0 parts.

So there.
 