Another Richard Huddy Interview

Re: 3D cards

Proforma said:
ATI is holding the industry back from supporting SM 3.0, and that's a fact. I don't think it can get any clearer than that.

No, it is not. No one would make SM3 games for this generation anyway; the combined volume of the NV4x and R4xx would not be enough (even if the R4xx had supported SM3).

It takes time to develop games (something the people claiming that ATI is holding back the industry do not seem to realize); no game that really takes advantage of SM3 will be finished this generation anyway. The only SM3 support we would see this generation would come from quick recompiles of the SM2 shaders.

As long as developers have a platform for developing their SM3 path, the industry will not be held back, and they have that platform in the NV40.
 
chavvdarrr said:
Let's sum it up:
3Dc is the best 3D feature ever
SM3.0 is evil (ATI presentation, anybody? :p )
SM2.0b is the best 3D feature ever
fp32 is evil
fp16 is evil
SM2.0a is evil
brilinear is evil when used by the "enemy"
brilinear is the best 3D feature ever when used by "us"
If the enemy tells us how to benchmark, it's evil
If "my company" tells us how to benchmark, it's great
Supporting some standards only partially is OK if it's by "my" company
If "my" company sells a slow budget card, it's OK
If the "enemy" sells a slow budget card, it's bad.
Pointing to any problem connected to "my" company is bad
Pointing to any problem connected to the "enemy" company is good.
etc
etc
etc

None of those claims has been posted in this thread. You have to learn to see through your own bias so that you are able to read what people actually write instead of what you think they write.
 
chavvdarrr said:
"Another Richard Huddy Interview"
http://www.oc-zone.com/modules.php?name=Articles&rop=conteudo&id=65
Fact is, 3Dc is NOT the best thing after bread, no matter what the ATi marketing department says. Agree?

I don't think anyone is claiming 3Dc is the best thing since sliced bread. I've not seen it here, and Huddy only comments on it because it's a new feature in the X800, which is something you would expect.

3Dc is just another fairly small feature (in terms of gates on the chip), but one that could be fairly useful to developers. It appears that ATI took it upon themselves to do some normal map compression research, probably because they were getting developer feedback concerning normal map memory requirements; that research has already been commented on and utilised by id in Doom3, and 3Dc is an extension of it that gives developers another option. That's all.
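For what it's worth, the clever part of two-channel normal compression is small enough to sketch: only the X and Y components of the unit normal are stored, and Z is rebuilt on the fly from the unit-length constraint. A minimal illustration in C (just the maths; in practice this runs per pixel in the shader, and the exact [0,1] to [-1,1] remapping is up to the developer):

```c
#include <math.h>

/* Sketch of the decode step for a 3Dc-style two-channel normal map:
 * only X and Y are stored (compressed), and Z is rebuilt from
 * x^2 + y^2 + z^2 = 1.  Inputs are assumed already remapped to [-1,1]. */
typedef struct { float x, y, z; } vec3;

vec3 reconstruct_normal(float x, float y)
{
    vec3 n;
    n.x = x;
    n.y = y;
    /* Clamp guards against compression error pushing x^2 + y^2 past 1. */
    float zz = 1.0f - x * x - y * y;
    n.z = sqrtf(zz > 0.0f ? zz : 0.0f);
    return n;
}
```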
 
If the current generation of ATI cards supported SM3.0 would it be more likely that more developers would use these features? If the answer is "yes" then Proforma's point is essentially correct. If ATI releases a SM3 capable card in the next six months or so (or whenever the next product cycle is due) then there won't be any significant problem at all. However, if they leave it for a year or so then I think it will have an impact on developers' decisions. The larger the market the more likely a feature will be utilised - this is common sense.
 
What kind of SM3 feature can you think of that developers might want right now?

Instancing is already available with ATI.
Longer shaders are already available.

Dynamic branching? I don't think so. Both ATI and NVidia have indicated in the past that in most cases it gives an enormous performance hit, while the beneficial situations are few. Of course, at the moment the NVidia PR says otherwise, but we have yet to see proof... 8)
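The reason the hit can be so big is worth spelling out: the hardware shades pixels in SIMD batches, and if the pixels within a batch disagree about which way to branch, the batch generally ends up paying for both sides. A toy model of that (batch size and costs below are made up, purely for illustration):

```c
#include <stdio.h>
#include <stdbool.h>

#define BATCH 16  /* pixels shaded in lock-step; the real size varies by chip */

/* Toy model of why per-pixel branching can hurt on SIMD hardware:
 * if any pixel in the batch takes the branch and any other skips it,
 * the whole batch pays for BOTH paths (the untaken side is masked out). */
int batch_cost(const bool take_branch[BATCH], int cost_taken, int cost_skipped)
{
    bool any_taken = false, any_skipped = false;
    for (int i = 0; i < BATCH; ++i) {
        if (take_branch[i]) any_taken = true;
        else                any_skipped = true;
    }
    if (any_taken && any_skipped)                   /* divergent batch */
        return cost_taken + cost_skipped;
    return any_taken ? cost_taken : cost_skipped;   /* coherent batch */
}

int main(void)
{
    bool coherent[BATCH]  = { false };   /* every pixel skips the branch */
    bool divergent[BATCH] = { false };
    divergent[0] = true;                 /* one pixel disagrees */

    printf("coherent batch cost : %d\n", batch_cost(coherent, 100, 10));
    printf("divergent batch cost: %d\n", batch_cost(divergent, 100, 10));
    return 0;
}
```

Which is why branching only really pays off when it is coherent over large screen regions, i.e. exactly those few beneficial situations.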

Real displacement maps? They cost too much performance, and it's not really clear if they're better than just building higher-polygon models in the first place. All the displacement map talk for Far Cry and UE3 has been about 'virtual' displacement maps, which are simply smart bump maps...
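To show why 'virtual' displacement is really just a smarter bump map: no geometry ever moves, the texture lookup is simply nudged along the view direction by an amount read from a height map. A rough sketch of the simple offset variant (the scale and bias are arbitrary example parameters, not any particular engine's values):

```c
/* "Virtual displacement" (parallax) sketch: shift the texture coordinate
 * toward the eye in proportion to the stored height.  The surface itself
 * is still flat; only the lookup position changes. */
typedef struct { float u, v; } uv2;
typedef struct { float x, y; } dir2;   /* view vector projected into tangent space */

uv2 parallax_offset(uv2 uv, dir2 view, float height, float scale, float bias)
{
    float h = height * scale + bias;   /* remap the stored height value */
    uv2 shifted;
    shifted.u = uv.u + h * view.x;     /* nudge the lookup along the view */
    shifted.v = uv.v + h * view.y;
    return shifted;
}
```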

I don't think SM3.0 adds many features that are immediately useful.


Seems to me the shader standards are moving ahead faster than the hardware being developed, and much faster than the hardware that gamers actually own. We're talking about SM3.0 features, while new games like D3 and HL2 are still very much busy staying compatible with DX7- and DX8-class hardware...

If your engine really NEEDS features that are only available in SM3, then you're going to have big problems selling the game... Even if the X800 did support SM3, the user base would simply be too small. Unless you can produce a game that is so uncannily good that you can convince all gamers to upgrade their systems. :D
 
IMO you can break up "feature adoption" into two extremes with some fuzziness in between: the first extreme being "early adopters" and the second being "engine integrators".

The early adopters are those picking up the technology soon after it's available – but because of development timescales and hardware availability, the features used are probably there to add cool extras, and are mainly going to be ones that can be integrated into the game and engine framework very quickly. Alternatively, the engine integrators are looking a lot further out and aren't basing their requirements on the immediate need, but on the spread of hardware, and the basis for that, over a period of time.

Even if there is only some hardware available, the early adopters are going to adopt it no matter what, since it's a cool selling point for those with the hardware, and these days the adoption may see them getting an injection of cash from the IHV with the feature anyway. Conversely, the engine integrators are taking a longer-term view and looking at the likely spread of hardware in the future, what the stable basis is, and even console capabilities.

Take a look at Doom3, for instance: this engine has the most integrated lighting scheme of any engine to date, and yet its primary core features are based on capabilities that any DX8-class board can handle, and even some DX7 boards – they have since added some further elements for DX9-class boards. Unreal Engine 3, which isn't slated to arrive until 2006, will utilise Shader 3, because by the time it's out that's what a lot of boards will use, and consoles should be up to that level – however, its basis currently is, and probably still will be, Shader 2.0, because that's where its development started and that is going to be an acceptable minimum level of support thanks to Intel's 915 platform.

So, IMO, the early adopters will mostly adopt the features regardless of the number of vendors supporting them, and the engine integrators are looking at a wider scope than the immediate need. The fuzziness in the middle is probably where the biggest impacts will be.
 
Diplo said:
If the current generation of ATI cards supported SM3.0 would it be more likely that more developers would use these features? If the answer is "yes" then Proforma's point is essentially correct.

I think the answer is no; software developed with SM3 in mind is not targeted at this generation of hardware anyway.

If ATI releases a SM3 capable card in the next six months or so (or whenever the next product cycle is due) then there won't be any significant problem at all.

The problem is if they do not release any low-end or mainstream SM3 parts for a long time. That would have a significant influence on the volume of SM3-capable cards in the long run; the lack of SM3 in the current cards has not.

However, if they leave it for a year or so then I think it will have an impact on developers' decisions. The larger the market the more likely a feature will be utilised - this is common sense.

Yes - as long as a critical mass is reached.
 
Tim said:
I think the answer is no; software developed with SM3 in mind is not targeted at this generation of hardware anyway.
I don't think that is necessarily true. Yes, there are no games that require SM3 (and probably won't be for a couple of years or more), but there are games targeting this generation of technology that could benefit from SM3 support (and, indeed, some definitely will be supporting it). The 'Far Cry' patch has shown that in certain circumstances (multiple lights, etc.) SM3 can do in one pass what requires two or more in SM2.x. Will there be games released in the next year that would benefit from SM3 optimisation? I think almost undoubtedly.
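To make the saving concrete, here is a rough sketch of the two pass structures. The helper names are hypothetical placeholders (stubbed out with printfs so it runs), not anyone's actual engine code:

```c
#include <stdio.h>

/* Hypothetical stand-ins for the real renderer/API calls. */
static void bind_light_constants(int i)     { printf("  bind light %d\n", i); }
static void bind_all_light_constants(int n) { printf("  bind %d lights\n", n); }
static void set_additive_blend(int on)      { printf("  additive blend: %d\n", on); }
static void draw_geometry(void)             { printf("  draw geometry\n"); }

/* SM2-style path: one full pass over the geometry per light,
 * accumulating the results with additive blending. */
static void draw_scene_sm2(int num_lights)
{
    for (int i = 0; i < num_lights; ++i) {
        bind_light_constants(i);
        set_additive_blend(i > 0);   /* first pass writes, later passes add */
        draw_geometry();             /* vertex + pixel cost paid every time */
    }
}

/* SM3-style path: the geometry is submitted once and the pixel shader
 * is assumed to loop over all the lights itself. */
static void draw_scene_sm3(int num_lights)
{
    bind_all_light_constants(num_lights);
    draw_geometry();
}

int main(void)
{
    printf("SM2-style, 3 lights:\n"); draw_scene_sm2(3);
    printf("SM3-style, 3 lights:\n"); draw_scene_sm3(3);
    return 0;
}
```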

The question is whether developers working on a game due for release in the next 12 months will add SM3 support for those cases where it can either improve rendering speed or add extra bells'n'whistles. It seems rather obvious that the probability of this would increase if ATI supported SM3. If that is the case then it's not illogical to make the statement that ATI are holding things back (it's a rather confrontational way of wording things, but it's nevertheless a pretty logical assumption). It would be good if ATI fans reminded themselves that SM3 is not an Nvidia feature but a DirectX feature, and one that developers seem to be unanimously in favour of. The faster standards are adopted, the faster they get implemented.

My point is that ATI not supporting SM3 this time is no big deal. However, if they skip another generation then it will be a problem. Remember, today's high-end video cards become tomorrow's bottom line. At the current time there are a number of games out that require some form of PS support to work. This requirement could only be imposed once a large percentage of the target audience was likely to own such hardware. If ATI skip a couple of generations of SM3 support now, this will affect game engines due to be released in the next 2 years or more, since they will be less able to insist on SM3 as a requirement. I think it all hinges on what ATI do for their next hardware...
 
Diplo..... just how many games, after more than 2 years, require DX9 v2.0? Where were you when nVidia refused to make DX8.1 compliant cards? And where were you in the last 2 years complaining about how lousy DX9 v2.0 was on all nVidia cards until the 6800 series? The difference between 2.0 and 2.0c is minimal, compared to the difference between DX8 and DX9. In fact, I'd even say the difference between DX8 and DX8.1 was more than the difference between DX9.0a, b or c. And no one has proven, anywhere, that DX9.0c can do anything that DX9.0b can't do......

Fact of the matter is that, long before DX9.0c is mainstream, there will be DX10......
 
martrox said:
The difference between 2.0 and 2.0c is minimal, compared to the difference between DX8 and DX9.

The comparison here is between SM 2.0 and SM 3.0. There are some very large and obvious differences between the two shader models (the specification of full precision, maximum instruction limits, branching, etc.). Moving from R3xx-type DirectX 9.0 hardware to a GPU that supports DirectX 9.0c is not trivial, and will in many ways require a brand new architecture (given the need to change native precision, maximum instruction limits, etc.).

Fact of the matter is that, long before DX9.0c is mainstream, there will be DX10......

NV will apparently have a top-to-bottom lineup of DX9.0c hardware available this year, so this hardly seems to be true.
 
martrox said:
Diplo..... just how many games, after more than 2 years, require DX9 v2.0?
None that I'm aware of. However, technology doesn't progress at a linear rate; it's more like an exponential curve. The point is largely irrelevant anyway - what you need to ask is whether the point where SM2 becomes a requirement would come sooner or later if one of the major video card manufacturers failed to adopt it.
Where were you when Nvidia refused to make DX8.1 compliant cards?
Ummm, let me think.... I was probably at home or at work. Perhaps you could give me a specific date and I'll try and check my diary?
And where were you in the last 2 years complaining about how lousy DX9 v2.0 was on all nVidia cards until the 6800 series?
The FX cards were indeed lousy. But what has that got to do with anything at all? What Nvidia did two years ago is totally irrelevant to what happens now. If Nvidia screwed up two years ago, does that now make it OK for ATI to possibly do the same? (I don't think they have screwed up, btw, I just hope they adopt SM3 sooner rather than later).

Y'know, this is why I try to avoid posting in this forum much anymore: you have to deal with these kinds of knee-jerk responses. You can't say anything without some IHV fan jumping down your throat and attacking you for having an honest opinion. Whether I criticised Nvidia 2 years ago has nothing to do with this argument and doesn't change a thing. Why do people always have to drag this down to petty ATI vs Nvidia arguments? All I care about is whether features get adopted that make games better. That is the bottom line. I don't care who does it, I don't care how they do it, and I don't care what manufacturer X did 2 years ago when it has no relevance to this. Understood?
 
Well, for those moaning that ATi is stifling development, consider what the alternative could have been.

If ATi had put their efforts into a Shader 3 part they might not have been able to do PCI-E parts, in which case the OEMs would have had to adopt more Intel integrated graphics or nVIDIA's bridged parts. What would you rather see happen: OEMs using, and selling lots of, parts that support Shader 2 reasonably well, as they do with ATi's X series, or even more integrated graphics and nVIDIA FX-class parts being pushed on end users?

Which scenario, the one we have at the moment or this alternative, would stifle development and features in games more?
 
Diplo said:
That is the bottom line. I don't care who does it, I don't care how they do it, and I don't care what manufacturer X did 2 years ago when it has no relevance to this. Understood?

Not to pick on you, nor is this directed at you; I just used your post as an example. But... it does not have any relevance to the issue, you're right on that. However, it sure as heck makes you LOOK biased towards NV. Saying the past does not matter only makes it worse. The past matters plenty, as it points out whether you're really in it for the sake of development or for another reason....
 
Richard Huddy said:
John Carmack of id Software, the creators of Doom3, also noted that NVIDIA's driver seems fast, but he also noticed that if he made small changes to his code then performance would fall dramatically. What this implies is that NVIDIA are performing "shader substitution" again.

Now, what did Carmack actually say?
On the other hand, the Nvidia drivers have been tuned for Doom's primary light/surface interaction fragment program, and innocuous code changes can "fall off the fast path" and cause significant performance impacts, especially on NV30 class cards.

That is classic PR: taking things out of context and quoting selectively to make it seem like your position is valid. Huddy makes it seem that it was an issue not especially on the NV3x but on all NV cards, whereas that is clearly not what Carmack said.
 
Carmack also said:

but it should be noted that some of the ATI [X800] cards did show a performance drop when colored mip levels were enabled, implying some fudging of the texture filtering. This has been a chronic issue for years, and almost all vendors have been guilty of it at one time or another. I hate the idea of drivers analyzing texture data and changing parameters, but it doesn't visibly impact the quality of the game unless you know exactly what to look for on a specific texture.
 
Sxotty said:
Now, what did Carmack actually say?
On the other hand, the Nvidia drivers have been tuned for Doom's primary light/surface interaction fragment program, and innocuous code changes can "fall off the fast path" and cause significant performance impacts, especially on NV30 class cards.

That is classic PR: taking things out of context and quoting selectively to make it seem like your position is valid. Huddy makes it seem that it was an issue not especially on the NV3x but on all NV cards, whereas that is clearly not what Carmack said.

My emphasis:

On the other hand, the Nvidia drivers have been tuned for Doom's primary light/surface interaction fragment program, and innocuous code changes can "fall off the fast path" and cause significant performance impacts, especially on NV30 class cards.

Yes, Carmack says this happens on all "Nvidia drivers", but the effect is especially bad on NV30-class hardware. In previous comments, Carmack has said that all his NV optimisations have now been replicated in the drivers.

This has been gone through in several other threads. I don't know how some people can have such problems reading a simple English sentence.
 
jb said:
But... it does not have any relevance to the issue, you're right on that. However, it sure as heck makes you LOOK biased towards NV.
Well, that's what I hate. If you say something as innocuous as "SM3 looks good" you immediately get labeled an Nvidia fan or an ATI hater and have to deal with all the crap that goes with it.

Saying the past does not matter only makes it worse. The past matters plenty, as it points out whether you're really in it for the sake of development or for another reason....

No, what I did or didn't say two years ago has zero relevance to anything. I don't need to justify my comments by trying to dig up posts I made two years ago criticising Nvidia. I'm not getting into these childish arguments about loyalty or whatever. The forum has a search facility, feel free to use it (not that I was posting here two years ago, anyway). As it happened, I recommended to all my friends that they not buy FX cards and get a 9800 Pro instead. Before that I recommended the GF4 (but not the MX). Whatever happens to be best at the time, in my view. Really, though, this is supposed to be a serious forum for rational debate, not one based on paranoia about people's supposed loyalty.
 
Sure, we can argue semantics all day, but it does seem obvious that Richard [Fuddy] likes to spin things in favor of ATI and against NV. Not surprising, as most marketers act this way. However, Richard's disdain for NVIDIA is very obvious in seemingly every interview he gives. I don't see this high degree of bitterness coming across in interviews from the people at NV such as Vivoli.
 