"An interview with Richard Huddy & Kevin Strange

Well, has anyone had any conversation with him about his "Developer Relations role" and the leaked presentation?

And as far as transistor budgets go - why does anyone here care about that? If a company wants to pack more in, what's wrong with that? If performance is good, gamers should be happy. Obviously the FX series sucked. They probably didn't intend for it to suck, that would be my guess. So if performance is good on the rest of the 6000 series, will we still be criticizing NVIDIA for having more transistors?
 
Voltron said:
Well, has anyone had any conversation with him about his "Developer Relations role" and the leaked presentation?

And as far as transistor budgets go - why does anyone here care about that? If a company wants to pack more in, what's wrong with that? If performance is good, gamers should be happy. Obviously the FX series sucked. They probably didn't intend for it to suck, that would be my guess. So if performance is good on the rest of the 6000 series, will we still be criticizing NVIDIA for having more transistors?

Because you can only fit so many transistors on a chip. The more you use for checkbox features, the less you have for useful features. Look at NV40 - it's bigger, runs hotter, needs more power, costs more to make, has worse yields. By the time anyone is using those extra 60 million SM3.0 transistors, the NV40 will be superseded, and given Nvidia's past performance on first generation features, it won't be fast enough to be usable.

Maybe if it was fast and good enough, Nvidia wouldn't be trying to bribe developers to code to SM3.0 at the exclusion of SM2.0 - the benefits and speed of SM3.0 on NV40 would speak for themselves, instead of having to be subsidised with lots of cash.
 
Every transistor costs money. Last I checked, nobody wanted to pay for things they don't use.
 
Bouncing Zabaglione Bros. said:
Because you can only fit so many transistors on a chip. The more you use for checkbox features, the less you have for useful features.

There seem to be a lot of people lately claiming SM3.0 is only a checkbox feature. Without any evidence, AFAIK.

Look at NV40 - it's bigger, runs hotter, needs more power, costs more to make, has worse yields.

Running hotter seems to be debatable. Didn't one review state that it was actually the opposite? (Though it requires more power.)

By the time anyone is using those extra 60 million SM3.0 transistors, the NV40 will be superseded, and given Nvidia's past performance on first generation features, it won't be fast enough to be usable.

And that's a fact?
 
Bjorn said:
There seem to be a lot of people lately claiming SM3.0 is only a checkbox feature. Without any evidence, AFAIK.

Well, let's see:

1) Small gap compared to PS1.4 vs. PS2.0
2) Some of the uses for it are not theoretically useful in graphics due to the speed required
3) It can't be used right now regardless because nothing supports it yet

This is definitely a checkbox feature for now. It has yet to be backed up with anything. And let's face it, PS2.0 was a checkbox feature for a while too. Hence why we have MX cards out there still.
 
this is pretty laughable. both companies would like to have the best performing product at every price point. assuming all things are equal, that product would have higher prices, regardless of transistor budget. that's kind of the way of business - brand preferences and all that aside - nobody is going to pay more for less.

and from a cost standpoint - are transistors the only cost a company incurs? is that the only way to measure costs?

could we take a poll of how many people in these forums have a financial interest in either company?
 
Voltron said:
this is pretty laughable. both companies would like to have the best performing product at every price point. assuming all things are equal, that product would have higher prices, regardless of transistor budget. that's kind of the way of business - brand preferences and all that aside - nobody is going to pay more for less.

People pay more for less all the time. That's often what brand loyalty and marketing are all about - it attaches a value to the manufacturer of a product, rather than the product itself.

Voltron said:
and from a cost standpoint - are transistors the only cost a company incurs? is that the only way to measure costs?

No, but it's pretty important from a customer's point of view when it comes to what you are getting in your chip. You are paying money for a feature that may not be used, may not be usable, and will make the performance of the rest of your product suffer.

Of course there are other things, like product support. However, if you are a NV3x owner and just finding out that Nvidia is paying developers to orphan your card, you're probably not too impressed with that either.

Voltron said:
could we take a poll of how many people in these forums have a financial interest in either company?

Ahh - divert the argument and imply that people who disagree with you have some kind of financial interest in disagreeing with what you say. For the record, I have never had any financial involvement in either Nvidia or ATI - what about you?
 
Voltron said:
and from a cost standpoint - are transistors the only cost a company incurs? is that the only way to measure costs?
Pretty much. The unit cost of manufacture is proportional to die size (which isn't proportional to transistor count, but it's a more easily understood term than silicon area so the two get used interchangeably). This isn't a linear proportionality though - as the chip gets larger it costs more to add more area.
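To put rough numbers on that, here's a toy Python sketch using the standard dies-per-wafer approximation and a simple Poisson yield model. The wafer cost, wafer size and defect density below are invented purely for illustration - they are not real ATI or Nvidia figures.

```python
import math

def cost_per_good_die(die_area_mm2, wafer_cost=3000.0,
                      wafer_diameter_mm=300.0, defect_density_per_cm2=0.5):
    """Toy model of the cost of one *working* die as die area grows.

    All numbers are invented for illustration. Uses the standard
    dies-per-wafer approximation and a simple Poisson yield model.
    """
    wafer_area = math.pi * (wafer_diameter_mm / 2.0) ** 2
    # Dies per wafer: gross area divided by die area, minus an edge-loss term.
    dies_per_wafer = (wafer_area / die_area_mm2
                      - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))
    # Poisson yield: probability that a die contains zero random defects.
    yield_fraction = math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)
    return wafer_cost / (dies_per_wafer * yield_fraction)

for area in (150, 200, 250, 300):
    print(f"{area} mm^2 -> ${cost_per_good_die(area):.2f} per good die")
```

With these invented inputs, doubling the die from 150 mm^2 to 300 mm^2 roughly quadruples the cost per working die rather than doubling it - fewer dies fit on the wafer and a smaller fraction of them yield.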

As to the latter, I work for ATI, that's well known around here.
 
sorry, i didn't mean to imply that people need to individually state their interests. i was just curious, in general, how many people on these forums have financial interests - with a real poll.

anyhow, i have had interests in both companies at various times (nvda now).

well, in addition to die size, there are process, yield, and inventory management. i think R&D is also a pretty important cost.

this thing is getting way, way off topic. sorry.
 
Voltron said:
sorry, i didn't mean to imply that people need to individually state their interests. i was just curious, in general, how many people on these forums have financial interests - with a real poll.

anyhow, i have had interests in both companies at various times (nvda now).

well, in addition to die size, there are process, yield, and inventory management. i think R&D is also a pretty important cost.

this thing is getting way, way off topic. sorry.
wow Voltron, I think you may have just averted a flame war.. thank you :)




(BTW, I'm not being sarcastic)
 
Voltron said:
well, in addition to die size, there are process, yield, and inventory management. i think R&D is also a pretty important cost.
Yield is a function of transistor count and die size. Process merely scales the cost, so doesn't change the situation that more die == more expensive. The others aren't quite relevant to the discussion because they would be there whatever happened.

The original point I made was 'We could spend more die on more features but they aren't free, and we don't think you want to spend that money right now for something you will never use'. We make these calls designing every chip.

All companies are trying to make sure they get that judgement call right. I think ATI have done very well on that score since the 8500.
 
Eronarn said:
1) Small gap compared to PS1.4 vs. PS2.0
It may be smaller, but still big IMO.

2) Some of the uses for it are not theoretically useful in graphics due to the speed required
You mean practically? That some of the features PS3.0 introduces are relatively slow on NV40 doesn't mean they are not useful for real-time graphics. You can have shaders that need a few dozen cycles to execute and still get playable framerates (rough numbers at the end of this post).

3) It can't be used right now regardless because nothing supports it yet
This might change soon.

This is definitely a checkbox feature for now. It has yet to be backed up with anything. And let's face it, PS2.0 was a checkbox feature for a while too.
And it is still very usable on the first card that supported it.
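On the "few dozen cycles and still playable" point above, here's the back-of-the-envelope arithmetic. The figures are hypothetical and deliberately simplified - e.g. a 16-pipeline part at roughly 400 MHz, assuming each pipeline retires one shader cycle per clock and ignoring texturing stalls, overdraw and bandwidth:

```python
# Back-of-the-envelope shader budget (simplified: assumes each pipeline
# retires one shader cycle per clock; ignores overdraw, texture stalls, etc.)
pipes = 16                # e.g. a 16-pipeline part
clock_hz = 400e6          # ~400 MHz core clock
width, height = 1600, 1200
target_fps = 60

cycle_budget = (pipes * clock_hz) / (width * height * target_fps)
print(f"~{cycle_budget:.0f} shader cycles per pixel at {width}x{height}, {target_fps} fps")
```

That works out to roughly 55 cycles per pixel, so on this simplified budget shaders of a few dozen cycles do fit at playable framerates.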
 
Xmas said:
Eronarn said:
1) Small gap compared to PS1.4 vs. PS2.0
It may be smaller, but still big IMO.

In the future, yes, but many people feel it's not going to be immediately useful.

Xmas said:
2) Some of the uses for it are not theoretically useful in graphics due to the speed required
You mean practically? That some of the features PS3.0 introduces are relatively slow on NV40 doesn't mean they are not useful for real-time graphics. You can have shaders that need a few dozen cycles to execute and still get playable framerates.

Yes, thanks for catching me there. But basically, what I meant was that there are features that, while they will be useful in the future, are not powerful enough to be used on current graphics cards. For example, the ultra-long shader in the Nalu demo would be completely unrealistic to run in an actual game, even if the technique is very advanced and innovative.

Xmas said:
3) It can't be used right now regardless because nothing supports it yet
This might change soon.

See below.


Xmas said:
This is definitely a checkbox feature for now. It has yet to be backed up with anything. And let's face it, PS2.0 was a checkbox feature for a while too.
And it is still very usable on the first card that supported it.

As bolded. It doesn't do anything right now. It can't be considered more than a checkbox feature until it does. If your computer had support for a nuclear generator as a power source, but you couldn't obtain one on the open market for two more years, it would be a checkbox feature, because it doesn't actually serve any purpose in use.
 
I would like to add something to this.

I have been burned many times in the past with regards to looking towards the future.

I bought a GeForce 3 because they showed Doom 3 running on it and said this would be the card for Doom 3.

We all know now that Doom 3 will run like shit on a GeForce 3.

I bought a 9700 Pro expecting to play Doom 3 and Half-Life 2 on it along with all the other DX9 games.

I know I wouldn't want to run Half-Life 2 or Doom 3 on it anymore, though it would probably do a good enough job. Just not good enough for me.

Luckily for me the 9700 Pro was the fastest card on the market for most games at the time and continued to hold up with newer games.

This time I saw how fast the X800 XT was, and it made 6xAA/16xAF playable with more performance to be gained from the new memory controller.

It plays everything faster than my 6800 GT clocked to Ultra speeds.

All games for the next two years will still have the SM2.0 code paths that developers will continue to devote most of their time to, as that is the highest installed base (just like all the recent DX9 games had strong PS 1.1-1.4 support).

By the time the first games come out that will make me want SM3.0, there will be much faster hardware than the 6800s, as SM3.0 will not be the main dev platform until Unreal Engine 3 games come out in 2006.

By then, just like with the 9700 Pro and the GeForce 3 before it, I will be upgrading, and so will 90% of us on these forums.

As it stands I would recommend ATI up to the $200 price point. I would put Nvidia at the $300-400 point and ATI at the $500 point.

If you don't agree that is cool with me. But the X800 XT is the fastest card on the market right now.
 
nggalai said:
zeckensack said:
That's hardly 3DCenter's fault, and neither is it your fault -- maybe it's my fault though :D
Just because you suggested the pencil question? No way! :D

93,
-Sascha.rb
I didn't think you'd go for it verbatim :D
The remaining part of the question that you skipped should have made it clear that it wasn't all that serious, formally. The question was a raw diamond waiting to be polished :D
The local audience thought this question was effectively crap and shouldn't have been posed ... but at the core it was a sincere question.
I like to believe that a part of the devrel responsibility is providing a communication channel to the driver developers. And because driver developers are busy with very delicate work, I thought there was a real need to insert a "crap filter" between the people bothering devrel (who can, presumably, just take it), and them.

Like ... the driver programmers sit in their own wing of the building, there's heavy carpet everywhere to dampen any noise, and there are "do not disturb" signs on all doors, 24/7. Devrel occasionally and quietly slip envelopes under the doors that contain the new milestones. And maybe I'm just crazy ;)
Anyway, that was the kind of thing I was getting at with the question: how does devrel integrate into the company culture, and how does external communication propagate through the company.
 
Bouncing Zabaglione Bros. said:
That depends on whether you believe Nvidia's SM3 and SLI is just a marketing tickbox or if it is actually usable. Given what's come out over the last few days about Nvidia trying to pay developers to not use SM2.0 even on their own SM2.0 cards, it's just as likely that SM3.0 is just a marketing checkbox for Nvidia, just as SM2.0 was for its last generation.

I think the new results from Far Cry mean you can now officially change your mind... if you want to?
 
whql said:
Regardless of their roots, complex chips don't fall out of trees. In the past 3 months ATI have made 4 in the desktop space alone - three of them use a process Nvidia still haven't figured out, and another marks the first time its process has been used in a graphics chip; in that time Nvidia have struggled to bring their single SM3.0 part to market. SLI is no biggie either - ATI had massively scalable parts long before!

Now that is spin! You're not Alastair Campbell, are you?
 
dizietsma said:
Bouncing Zabaglione Bros. said:
That depends on whether you believe Nvidia's SM3 and SLI is just a marketing tickbox or if it is actually usable. Given what's come out over the last few days about Nvidia trying to pay developers to not use SM2.0 even on their own SM2.0 cards, it's just as likely that SM3.0 is just a marketing checkbox for Nvidia, just as SM2.0 was for its last generation.

I think the new results from Far Cry mean you can now officially change your mind... if you want to?

Depends on whether you read the test right.


From Anand, all at 1600x1200 4xAA/8xAF:

6800 Ultra, SM2.0: 61 fps
6800 Ultra, SM3.0: 63 fps
+2 fps, nice

man go river:
SM2.0: 58.6
SM3.0: 60
+1.4 fps, nice


Nvidia demos sent to Anand:

Research:
SM2.0: 47.1
SM3.0: 59.3
+12.2

Regulator:
SM2.0: 51.4
SM3.0: 54.6
+3.2 fps

Training:
SM2.0: 52.7
SM3.0: 56
+3.3 fps

Volcano:
SM2.0: 50.9
SM3.0: 61.8
+10.9

The conclusion is that it's the patch and new drivers that make up most of the performance gain.

Best-case benchmarks from Nvidia show minor to major improvements. The two from Anand show minor improvements.
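For what it's worth, here's a quick sketch (plain Python, just re-using the numbers quoted above; the short demo labels are mine) that turns those deltas into percentages, which makes the minor-vs-major split clearer:

```python
# Far Cry results quoted above (SM2.0 fps, SM3.0 fps) at 1600x1200 4xAA/8xAF
results = {
    "anand demo":   (61.0, 63.0),
    "man go river": (58.6, 60.0),
    "research":     (47.1, 59.3),
    "regulator":    (51.4, 54.6),
    "training":     (52.7, 56.0),
    "volcano":      (50.9, 61.8),
}

for demo, (sm2, sm3) in results.items():
    gain = 100.0 * (sm3 - sm2) / sm2
    print(f"{demo:12s}: {sm3 - sm2:+5.1f} fps ({gain:+.1f}%)")
```

The two Anand demos come out around +2-3%, while the Nvidia-supplied Research and Volcano demos land in the +20-26% range - consistent with the point that the best-case gains show up in Nvidia's own demos.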
 