SM 3.0, yet again.

AndrewM said:
ANova said:
I know I won't, at least not unless ATI's solution manages to show something actually worthwhile pertaining to SM3's use, which nvidia certainly hasn't done yet.

That is exactly what DemoCoder was talking about.

:)

No, not quite.
 
Humus said:
A factor of 8 is quite stretching it. If you're limited by vertex fetch, then you can in some cases get near that.
...
First of all, the visibility pass is very cheap, it's more or less just transform. Compared to the lighting shader it's short. In my demo it's maybe half the instructions of the lighting shader.
Ah, yeah, forgot about that. But still, the fact remains that such a scenario, when limited by geometry, will be slower when done in multipass. Memory bandwidth limitations will, of course, also help to make the branching path faster.
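The geometry-limited argument can be put in rough arithmetic. Below is a minimal cost sketch in Python; the linear cost model and every constant in it are illustrative assumptions, not measurements from any hardware:

```python
# Illustrative cost model (assumed numbers, arbitrary "cycle" units) for
# the two approaches discussed above: a multipass early-out (cheap
# visibility pass, then lighting) versus single-pass dynamic branching.

def multipass_cost(pixels, vertices, lit_fraction,
                   vis_cost=10, light_cost=40, vertex_cost=5):
    # Pass 1: visibility pass -- all geometry, every pixel runs the cheap shader.
    # Pass 2: lighting -- all geometry again, only surviving pixels run lighting.
    pass1 = vertices * vertex_cost + pixels * vis_cost
    pass2 = vertices * vertex_cost + pixels * lit_fraction * light_cost
    return pass1 + pass2

def branching_cost(pixels, vertices, lit_fraction,
                   branch_cost=4, light_cost=40, vertex_cost=5):
    # Single pass: geometry is processed once; every pixel pays the branch
    # test, and only lit pixels pay the full lighting shader.
    return (vertices * vertex_cost
            + pixels * branch_cost
            + pixels * lit_fraction * light_cost)
```

Plugging in a geometry-heavy scene (many vertices relative to pixels) makes the duplicated vertex work dominate, so the multipass path loses even though its per-pixel visibility shader is cheap, which is the point being made above.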
 
nAo said:
Ailuros said:
Would additional logic actually help in the end or will we see better results with future hardware and future APIs?
The problem with the current VT implementation can be solved (as it has already been 'solved', or at least alleviated, in the PS!) by spending more transistors ;)

My question was targeting a WGF2.0 unified-shader scenario (and why not even with a Geometry Shader in mind).

Had NV40 devoted more transistors to VT, would it have made a significant difference overall, or would there have been other possible bottlenecks in the meantime that require more sophisticated hardware and APIs in the end? I mean, wouldn't you need a GS or PPP combined with VS3.0 for real "advanced HOS" in the end (as an example)? Isn't NV40's VS3.0 texture fetch restricted to point sampling only?

Why would any IHV spend more transistors than necessary in the end, beyond those needed for feature X to be supported for the time being? (And yes, obviously this relates to first SM3.0 implementations.) Of course it's not ideal, yet last year around this time it was either very basic VT support or nothing.
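On the point-sampling question: since a vertex texture fetch that only point-samples returns the nearest texel, any filtering has to be rebuilt in the shader from multiple fetches. A sketch of that idea in plain Python standing in for shader code (the list-of-lists "texture" and the function names are purely illustrative):

```python
import math

def point_sample(tex, x, y):
    # Nearest-texel fetch with clamped coordinates -- all that a
    # point-sampling-only vertex texture fetch gives you.
    h, w = len(tex), len(tex[0])
    xi = min(max(int(x), 0), w - 1)
    yi = min(max(int(y), 0), h - 1)
    return tex[yi][xi]

def bilinear(tex, x, y):
    # Reconstruct bilinear filtering manually from four point samples
    # plus two lerps, as a shader would have to.
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    s00 = point_sample(tex, x0,     y0)
    s10 = point_sample(tex, x0 + 1, y0)
    s01 = point_sample(tex, x0,     y0 + 1)
    s11 = point_sample(tex, x0 + 1, y0 + 1)
    top = s00 * (1 - fx) + s10 * fx
    bot = s01 * (1 - fx) + s11 * fx
    return top * (1 - fy) + bot * fy
```

The cost of the workaround is the four fetches per filtered sample, which is part of why point-sampling-only VT is a real limitation rather than a cosmetic one.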
 
There's no need to spend more transistors on a feature that wouldn't be widely used within a chip's lifespan. It's a question of balance. I'm sure NV could make a very good VT and significantly better branching - at the cost of SM2 performance, for example. Somehow I'm sure you wouldn't be very happy about that ;)
 
DemoCoder said:
Yep, full reverse spin I bet. Once ATI has SM3.0, I think all the SM3.0 naysaying will disappear, and all of a sudden a whole crop of "only possible with SM3.0" scenarios will appear. And people in the past who were exclaiming no big deal between SM2.0b and SM3.0 will suddenly be at the head of the bandwagon, especially if ATI's performs better. For example, if ATI's dynamic branching performs better, then dynamic branching support will suddenly be an Achilles heel, despite the fact that previously, it wasn't, and the real life scenarios where it was used were few and far between. Now, such support will be seen as *crucial*.

Sort of like how nVIDIA fans downplayed the impact AA had on image quality, arguing that higher resolutions were better? How they argued that FP24 held no advantages over FP16? How PS 1.3 was just as good as 1.4? Or how any video board that couldn't draw its power entirely from the graphics port was somehow flawed in design? How multi-chip or -board designs were bad?

Writing a post asserting that f@nboys will twist things to favor their preferred company is hardly ground-breaking. The fact that you only complain about one set as being guilty of this behavior when it's a pattern of conduct that can be ascribed to the more zealous fans of any company is a noteworthy pattern in and of itself.
 
Well, I was under the impression that what you just stated, John, has been argued in these forums so much more commonly than the reverse. So, personally, I didn't see it worth noting (at least, not to those who frequent the forums).
 
I complain because a lot of the spin-shifting comes from people formerly or currently associated with B3D itself, not run-of-the-mill zealots, and also because the bias here is much stronger against nV than ATI.
 
DemoCoder said:
I complain because a lot of the spin-shifting comes from people formerly or currently associated with B3D itself, not run-of-the-mill zealots, and also because the bias here is much stronger against nV than ATI.

Have you stopped to think why that may be? Beyond just the zealot/f@nboy excuse.
 
Spin-shifting comes from partisanship. "What was once bad, now becomes good" does not come from objective evaluation of the facts, if that's what you're hinting at. If something were objectively bad or true, it would remain so.

For example, if dynamic branching really doesn't yield any truly interesting visible results, then whether or not ATI has it, or runs it fast, is irrelevant to the truth of that statement, since no one has produced a demo of dynamic branching *even one that runs dog slow on NV* that shows something that blows away SM2.0b. The fact that ATI would have it and run it fast would not alter the fact that no known kick-ass uber-shaders exist for SM3.0 that won't run acceptably with the "execute both branches, and select result" approach of SM2.0.

So when SM3.0 exists on ATI, dynamic branches won't suddenly become the cat's meow, since we currently don't know of very many algorithms today that require SM3.0 *regardless of the hardware's efficient implementation of them*.

If dynamic branching wasn't that useful yesterday and 5 months ago, it won't suddenly be mega useful when ATI ships their part, unless, today, you can show me a shader that will make all the difference in the world.
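For concreteness, the "execute both branches, and select result" fallback referred to above can be sketched like this in plain Python standing in for shader code (`lit_color` and the predicate are made-up illustrations of an expensive lighting path behind a condition):

```python
def lit_color(n_dot_l):
    # Stand-in for an expensive lighting computation.
    return 0.5 + 0.5 * n_dot_l

def shade_flattened(n_dot_l):
    # SM2.0 style: both sides are always evaluated (as with HLSL's
    # cmp/lerp); a 0/1 predicate merely selects which result survives.
    expensive = lit_color(n_dot_l)        # always paid for
    cheap = 0.0                           # unlit path
    pred = 1.0 if n_dot_l > 0.0 else 0.0  # the "cmp" predicate
    return pred * expensive + (1.0 - pred) * cheap

def shade_branched(n_dot_l):
    # SM3.0 style: a dynamic branch skips the expensive path entirely.
    if n_dot_l > 0.0:
        return lit_color(n_dot_l)
    return 0.0
```

Both functions return identical results for every input; the difference is purely cost, which is why the argument above turns on whether the skipped work is ever large enough to matter.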

As for why biased B3D staff? If a company treats you nice, feeds you all kinds of private, NDA info, you don't think that has an effect? I think NVidia probably treats many sites at arm's length through their PR department, and ATI has many more engineering folk hanging out at B3D and PM'ing and emailing B3D members. So with NVidia, you feel like you are being held at a distance, but ATI feels more "folksy" and "friendly".

But, regardless of that, how you are treated personally should not influence how you evaluate the truth of statements about hardware or cause you to shift your values when convenient.
 
Democoder, don't worry mate, as soon as R520 hits the streets, I'm sure Humus will come along and write a sweet demo with options for dynamic branching that will demonstrate just how great ATI's implementation will be.

Hang on, why don't you write such a demo? What does your name actually mean?

Jawed
 
DemoCoder said:
So when SM3.0 exists on ATI, dynamic branches won't suddenly become the cat's meow, since we currently don't know of very many algorithms today that require SM3.0 *regardless of the hardware's efficient implementation of them*.
True, but SM3.0 still might be the cat's meow when ATi releases it... there might actually be games that take advantage of it out by then. ;)
 
DemoCoder said:
I complain because a lot of the spin-shifting comes from people formerly or currently associated with B3D itself, not run-of-the-mill zealots, and also because the bias here is much stronger against nV than ATI.

I don’t recall anyone from Beyond3D talking about SM3.0 in relation to ATI recently. Not sure how there has been any spin-shifting there. When ATI release SM3.0 we’ll question them on why they have done it now and not before, why they felt it wasn’t worthwhile a year ago but is now, etc., etc., and then we’ll be reporting on what ATI tells us. We've already asked their CEO some fairly hard questions on why they haven't done it when NVIDIA did.

DemoCoder said:
For example, if dynamic branching really doesn't yield any truly interesting visible results, then whether or not ATI has it, or runs it fast, is irrelevant to the truth of that statement, since no one has produced a demo of dynamic branching *even one that runs dog slow on NV* that shows something that blows away SM2.0b. The fact that ATI would have it and run it fast would not alter the fact that no known kick-ass uber-shaders exist for SM3.0 that won't run acceptably with the "execute both branches, and select result" approach of SM2.0.

And our articles only make quantitative statements on things that we can test; if we can’t test it we won’t give a conclusion on it. Of course we will report on what the IHV tells us about it, stating “according to xxx…”. I can’t really see how this relates.

DemoCoder said:
As for why biased B3D staff? If a company treats you nice, feeds you all kinds of private, NDA info, you don't think that has an effect? I think NVidia probably treats many sites at arms length through their PR department, and ATI has many more engineering folk hanging out at B3D and PM'ing and emailing B3D members. So with NVidia, you feel like you are being held at distance, but ATI feels more "folksy" and "friendly"

You’re wrong, actually – none of the engineers will proffer pre-release information; if you care to notice, they don’t post that frequently anymore (more posts are coming from behind another firewall at the moment!).

Anyway, clearly you have issues with the site the way it is right now and, well, that's your issue. I believe Reverend has already commented on your posts of late, and I’m sick and tired of seeing the same thing from you post after post. You have a simple choice: either you can choose to contribute to the forum in a constructive manner, arguing the case without constantly taking swipes at us, which we know you can, or you don’t, in which case I will take that decision out of your hands.

John Reynolds said:
Writing a post asserting that f@nboys will twist things to favor their preferred company is hardly ground-breaking. The fact that you only complain about one set as being guilty of this behavior when it's a pattern of conduct that can be ascribed to the more zealous fans of any company is a noteworthy pattern in and of itself.
 
DaveBaumann said:
I don’t recall anyone from Beyond3D talking about SM3.0 in relation to ATI recently.
Well there is this one fella who has been mercilessly torturing us with his sig....


;)
 
DemoCoder said:
If a company treats you nice, feeds you all kinds of private, NDA info, you don't think that has an effect?

This is coming from a person who attends nVidia sponsored events, relates to us some information from talking directly with some of their engineers, etc. I don't recall you doing the same with ATI.

In other words, perhaps you should consider if your "view" of B3D stems from the fact that you are "folksy and friendly" with nVidia, rather than the other way around?
 
digitalwanderer said:
DemoCoder said:
So when SM3.0 exists on ATI, dynamic branches won't suddenly become the cat's meow, since we currently don't know of very many algorithms today that requirement SM3.0 *regardless of the hardware's efficient implementation of them*
True, but SM3.0 still might be the cats meow when ATi releases it....there might actually be games that take advantage of it out by then. ;)
IMHO, pushing SM3 doesn't really make sense for gamers or developers given the marketshare of ATI's DX9 products in the machines of game players. Although people populating these forums may change cards as often as their underwear (open for interpretation :)), Valve's statistics emphatically indicate that this is far from the norm, even among FPS players.

No matter how tech-happy the developer, some grounding in marketplace realities is in order.

ATI doesn't want to appear to fall behind technologically, so of course they have to move to SM3, and once they do, they will beat the "New! Shiny!" drum, just like nVidia. After all, they want to sell kit, and inducing angst in the more impressionable consumers is a way to achieve that. However, until ATI has transitioned to a complete SM3 line-up, they can't really use SM3 in their marketing, or they will help sell nVidia's cards in the market niches where they don't have any SM3 representation - not good.
 
Entropy said:
IMHO, pushing SM3 doesn't really make sense for gamers or developers given the marketshare of ATI's DX9 products in the machines of game players.

The real emphasis for 3.0 right now is on developers with next-gen console kits. Once these games are released and ported to the PC, it'll be interesting to see whether or not the developers add 2.0 paths (and the performance difference they may show).
 
Joe DeFuria said:
This is coming from a person who attends nVidia sponsored events, relates to us some information from talking directly with some of their engineers, etc. I don't recall you doing the same with ATI.

#1 I attend both kinds of events, when they are convenient for me to do so. One of the first events I ever attended was ATI Shader Day, which I wrote about, because ATI was handing out free R9700 PROs when they were not even available in volume in stores yet, and before DX9 was out. I complimented ATI heavily and lavished praise on the R300 then. ATI gave me early alphas of DX9 drivers, and access to DX9 betas well before public MSDN testing started. If there's any payola or people I should be thanking, it should be ATI.

#2 If ATI engineers send me some info, then I will post it. I only post info I have. But I don't use the fact that NV sent me some info to beat down the R3xx+, and I have never talked down the R3xx and above architectures.

Moreover, the info NV sent me was similar to what Colorless did for 3dfx and the V6000 mip-map trick. I suggested a way to implement gamma-correct AA efficiently on the NV4x, OpenGL guy was saying it wouldn't work, and I was told by NV privately that it, in fact, does. I did not use this info to bash the R300.

In other words, perhaps you should consider if your "view" of B3D stems from the fact that you are "folksy and friendly" with nVidia, rather than the other way around?

Folksy and friendly? I had a small conversation on AA and that's it. No communication otherwise. I don't receive review hardware from them. I don't get inside information about upcoming HW. Nvidia engineers on B3D are certainly much more stealthy than ATI's, whose engineers openly reveal themselves on the forums and sometimes openly comment on their competitors' products.

The only early hardware I ever got was from ATI. :)
 
DaveBaumann said:
And our articles only make quantitative statements on things that we can test; if we can’t test it we won’t give a conclusion on it. Of course we will report on what the IHV tells us about it, stating “according to xxx…”. I can’t really see how this relates.

You don't need hardware to make an argument as to whether dynamic branching is worthwhile or not. This can be made on purely theoretical grounds. Assume dynamic branching runs with no performance hit period, now show me an interesting shader that uses low-cost dynamic branches which cannot be done on SM2.0 efficiently.

Up until now, the commentary on SM3.0 has been "anything you can do in SM3.0 you can do in SM2.0", which is mostly true. The question to this date is, does SM3.0 perform better? And the answer is, except for some pathological cases, SM2.0 "branching" suffices. No one has been able to demonstrate a really important "must have" shader for which low-cost dynamic branching would clearly differentiate NV4x from SM2.0 products.

So far, SM3.0 has been more about ease of development, and less about performance.


As for getting private info, I think you yourself said, back when we were debating the Video Shader a year or so ago, that the reason why you seem to "defend" inaccurate information about ATI HW more is because you get more of it. And your .sig itself is ATI-related: first about how ATI doesn't have SM3.0, and then crossing it out. Why even have this sarcastic, somewhat ATI-defending .sig if you didn't have advance info? Martox similarly has an anti-NV-biased .sig. Am I imagining this? Should I not be concerned that moderators are running around with the equivalent of partisan bumper stickers?

If you run a media/journal oriented site, expect criticism. I think it is fair game to criticize both the content of the reviews/posts for technical mistakes as well as potential editorial bias. The New York Times doesn't seem to have a problem with people questioning their editorial policy and in fact, includes letters to the editor doing just that.
 