"ATI is smug but Nvidia's the bug in the rug"

Status
Not open for further replies.
dizietsma said:
I'm slightly confused with the SM3.0 bit for Ati. Can 2.0b do the same job as 3.0 and if so why waste the millions of transistors needed to get it ?

Or is it marketing again ?

Pretty much, yes, certainly for the life of the current generation of cards. That's why ATI decided not to spend a third more transistors on it at this time, when skipping it means they can clock their cards higher and therefore beat 95 percent of what SM3.0 can do by using SM2.0b.

As for marketing, that seems to be what Nvidia really excels at. SM3.0 is something their competitors don't have yet, so of course it is being billed as the best thing ever, even though that doesn't seem to be the case so far.
 
Re: I made some of the same observations and was labeled <

Proforma said:
I am not an Nvidia <bleep> like a few people think I am, but dude, last time it was Nvidia's problem and this time it's ATI's, IMO.

And that's because you, and many others, are only looking at things from the perspective of the enthusiast - ATI isn't playing the enthusiast game at the moment; NVIDIA are because they have to.

ATI and NVIDIA have purposely focused on different segments of the market in the immediate term: ATI on the OEMs and NVIDIA on the enthusiasts.

ATI are coming off a period where their "brand" stock is very high and their architecture wasn't in urgent need of replacement, so they put their engineering effort into transitioning their entire line of chips to PCI Express. Making four new chips to be released within a one- or two-month period is no mean feat, but in order to do that they have traded off the architectural cycle.

Meanwhile, NVIDIA took the alternative route - the FX line saw their brand stock sink and they lost the enthusiasts, which (according to their business model) ultimately drives sales through to the lower end (as it has in recent months). NVIDIA's priority was to transition to a new architecture in order to right some of the issues they had previously.

The problem is that you can't do both - introduce an entire new architecture and transition your line to PCI Express. NVIDIA thought they could with the PCX range, but with ATI offering native parts, which have performance and cost advantages, ATI picked up all the Tier-1 OEM business for the initial transition. This is what ATI is aiming at, and it's no coincidence their guidance for the coming quarter is at record levels for either ATI or NVIDIA.

Of course, both routes have their pitfalls. ATI doesn't look too great with the enthusiasts, and they have longer-term issues when NVIDIA does bring native PCIe NV4x to the market; they may not gain any more slots with the OEMs, or may even lose some - the key for them is to time things correctly and ensure the Shader 3.0 part comes in good time. Alternatively, NVIDIA have a die-size disadvantage, which puts their costs up - it will be key to see whether the NV40 die size is matched by both next cycle, or whether NVIDIA falls back a little to previous high-end die sizes.
 
Re: I made some of the same observations and was labeled <

Proforma said:
Well, when there are three or more monsters the framerate takes a huge dive, and it also slows down drastically in some areas, where it's almost slow motion. I am towards the end now; I am out of Hell and back.

And on other cards the framerate improves when you have more monsters on the screen? If you are having serious bottom end framerate issues you might look to things other than your graphics card.
 
Re: I made some of the same observations and was labeled <

AlphaWolf said:
Proforma said:
Well, when there are three or more monsters the framerate takes a huge dive, and it also slows down drastically in some areas, where it's almost slow motion. I am towards the end now; I am out of Hell and back.

And on other cards the framerate improves when you have more monsters on the screen? If you are having serious bottom end framerate issues you might look to things other than your graphics card.

Valid point, actually - IIRC the skinning is done on the CPU, which is going to impact performance with more characters on screen.
 
Fodder said:
"Depending on the exact situation" DX6 can do the same job as DX9.

That is true. But I'd say that DX6, with no pixel shading, can seldom do the job of PS2.0 - only in the trivial case where there is no shader output. However, it's far more challenging to find examples of sensible PS3.0 shaders that can't be done effectively in PS2.0. Of course, there are some pathological cases, but are they of any importance? That's the question, isn't it? And I believe we haven't yet seen the answer to that one.
 
Re: I made some of the same observations and was labeled <

AlphaWolf said:
Proforma said:
Well, when there are three or more monsters the framerate takes a huge dive, and it also slows down drastically in some areas, where it's almost slow motion. I am towards the end now; I am out of Hell and back.

And on other cards the framerate improves when you have more monsters on the screen? If you are having serious bottom end framerate issues you might look to things other than your graphics card.

My System specs are as follows:
Pentium 4, 2 GHz
512 MB of DDR RAM
Windows XP Professional (Service Pack 2)
DirectX 9.0c
Sound Blaster Audigy 2 (with the latest drivers)
ATI Radeon 9800 with 128 MB of RAM
Catalyst 4.8

All options set for performance in the control panel for DX and OGL.
Completely fresh system, wiped and formatted with NTFS.

This system is hardly top of the line, yet I can run everything else fine; when it comes to Doom 3 it's a real issue. It slows down with a few demons on the screen at times, and at other times it's fine.
 
OEMS and Enthusiasts are both important

And that's because you, and many others, are only looking at things from the perspective of the enthusiast - ATI isn't playing the enthusiast game at the moment; NVIDIA are because they have to.

Nvidia has some things going on in the OEM department as well. That's important too, and probably the MOST important of all.

However, as I have been arguing for a long time, programmers like myself have to deal with hardware issues when building programs for hardware other than our own. Just like when the GeForce MX didn't have shaders: that caused a lot of issues, because you can't use that card for that purpose, yet a lot of people bought it thinking they had a nice system, when the card supports shaders poorly, if at all.

Same with ATI: they keep trying to work around this by adding a few features to their drivers to make SM 3.0 look insignificant, and it's not - it's not an Nvidia feature but a DX 9.0c feature which is already available.

When I look for instancing, I look for SM 3.0; that will skip over ATI's little hack, and now I will have to change my program and my programming standards to put in a hack that should not be there in the first place.

I want ATI to also focus on the enthusiast market, as that's where new features start before coming down to everyone at low prices. With ATI's OEM cards they go against the very standards that Microsoft is trying to create (talking about features such as instancing).

As a programmer, I have to say that buying an ATI card right now is not future-proof, and with the way ATI is hacking on SM 3.0 features I can't always be responsible for their driver hacks and issues. Therefore ATI's latest hardware is no good for me.
 
Doom 3 issues

kyleb said:
lol, your cpu is what is slowing you down, not your ati card.

Well, I don't have any problems with Far Cry or any Unreal 2004 game, and Doom 3's graphics aren't really good enough to make it this slow.

It's not exactly like I have a 486 trying to run Doom 3 here. What's the latest Pentium 4, 3.60 GHz? It's not like CPUs are at 6 GHz or something, and like I said, the graphics, physics and gameplay aren't exactly out of this world. Unreal 3, Half-Life 2 and Far Cry look much better - I have not tried Unreal 3 yet, but still, come on.

I am waiting on a Dual Core 64-bit CPU and PCI Express so I can get a new system.
 
Re: OEMS and Enthusiasts are both important

Proforma said:
And that's because you, and many others, are only looking at things from the perspective of the enthusiast - ATI isn't playing the enthusiast game at the moment; NVIDIA are because they have to.
Nvidia has some things going on in the OEM department as well. That's important too, and probably the MOST important of all.
And you're in a position to state such? Do you work for NVIDIA? Didn't think so.
However, as I have been arguing for a long time, programmers like myself have to deal with hardware issues when building programs for hardware other than our own. Just like when the GeForce MX didn't have shaders: that caused a lot of issues, because you can't use that card for that purpose, yet a lot of people bought it thinking they had a nice system, when the card supports shaders poorly, if at all.

Same with ATI: they keep trying to work around this by adding a few features to their drivers to make SM 3.0 look insignificant, and it's not - it's not an Nvidia feature but a DX 9.0c feature which is already available.

When I look for instancing, I look for SM 3.0; that will skip over ATI's little hack, and now I will have to change my program and my programming standards to put in a hack that should not be there in the first place.
What do you know of ATI's instancing support? What do you know of the HW and SW required for it? What do you know of NVIDIA's instancing support? What do you know of the HW and SW required for it? I'll answer for you: Nothing!
I want ATI to also focus on the enthusiast market, as that's where new features start before coming down to everyone at low prices. With ATI's OEM cards they go against the very standards that Microsoft is trying to create (talking about features such as instancing).
You're talking out of your *ss here.
As a programmer, I have to say that buying an ATI card right now is not future-proof, and with the way ATI is hacking on SM 3.0 features I can't always be responsible for their driver hacks and issues. Therefore ATI's latest hardware is no good for me.
If you need SM 3.0 features, that's fine. But faulting ATI for finding a way to expose features they support is just plain stupid.

-FUDie
 
Re: OEMS and Enthusiasts are both important

Proforma said:
Same with ATI: they keep trying to work around this by adding a few features to their drivers to make SM 3.0 look insignificant, and it's not - it's not an Nvidia feature but a DX 9.0c feature which is already available.

When I look for instancing, I look for SM 3.0; that will skip over ATI's little hack, and now I will have to change my program and my programming standards to put in a hack that should not be there in the first place.

That's a somewhat backward way of looking at it. This isn't something designed to make SM3.0 look insignificant - it's something they can do with their hardware that can provide a benefit. There's very little difference between instancing and centroid sampling, other than the fact that MS removed the requirement for PS3.0 with centroid sampling but not the link between VS3.0 and instancing. ATI's instancing does what DX requires and provides similar benefits, so why not use it? This is, in fact, the opposite of your example with the MX.

I want ATI to also focus on the enthusiasts market as well as thats where the new features will come down to everyone at low prices, it will start there and then move down.

And they are, with the X800, they are - however their view of what is a sensible, affordable die size differs from NVIDIA's at the moment. We'll see whether ATI moves up to NVIDIA's die size or NVIDIA drops back a little.

With ATI's OEM cards they go against the very standards that Microsoft is trying to create (talking about features such as Instancing).

Aren't you overblowing this somewhat? They have offered one feature that isn't presently exposed in a nice manner in DX - lots of programmers circumvent DX caps and enable features by device ID; you can call ATI's instancing in the same manner.
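The caps-versus-device-ID trade-off being argued here can be sketched roughly as follows. This is a hedged, generic illustration in Python, not real DirectX code: the flag names, the whitelist, and the IDs are all invented for the example, and a real renderer would query the runtime instead.

```python
# Generic feature-detection pattern: prefer a formal caps query, and fall
# back to a device-ID whitelist only when the runtime does not expose a
# caps bit for the feature. All names and IDs below are illustrative,
# not real D3D caps bits or real PCI IDs.

# Hypothetical caps flag, standing in for a D3D caps bit.
CAPS_INSTANCING = 0x2

# Hypothetical whitelist of (vendor_id, device_id) pairs known to support
# instancing despite not advertising a caps bit for it.
INSTANCING_WHITELIST = {
    (0x1002, 0x4E48),  # illustrative "ATI R3xx-class" entry
}

def supports_instancing(caps: int, vendor_id: int, device_id: int) -> bool:
    """Prefer the caps bit; fall back to the device-ID whitelist."""
    if caps & CAPS_INSTANCING:
        return True
    return (vendor_id, device_id) in INSTANCING_WHITELIST
```

The complaint in the thread is essentially that the second branch has to exist at all: a whitelist has to be maintained per device, while a caps bit is reported by the driver itself.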
 
Re: Doom 3 issues

Proforma said:
kyleb said:
lol, your cpu is what is slowing you down, not your ati card.

Well I don't have any problems with FARCry or any unreal 2004 game and Doom 3's graphics arn't really that good to make it this slow.

As I said before - vertex skinning and the shadow vertex calculation in Doom 3 are done on the CPU, not the graphics core. The more characters you have on screen, the more vertex processing the CPU is doing (both character skinning and shadow extrusion for the extra objects).
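The scaling argument can be made concrete with a toy sketch of CPU skinning. This is a generic illustration, not Doom 3's actual code: the "bones" here are translation-only to keep it short, where real skinning blends full bone matrices, but the point stands that the CPU's per-frame work grows with characters times vertices.

```python
# Toy CPU linear-blend skinning: each vertex is blended across its bones,
# so total CPU work per frame is proportional to (characters * vertices).

def skin_vertex(pos, bone_offsets, weights):
    # Weighted blend of per-bone transformed positions. Translation-only
    # "bones" keep the sketch short; real skinning uses full matrices.
    x, y, z = 0.0, 0.0, 0.0
    for (ox, oy, oz), w in zip(bone_offsets, weights):
        x += w * (pos[0] + ox)
        y += w * (pos[1] + oy)
        z += w * (pos[2] + oz)
    return (x, y, z)

def skin_scene(characters):
    # Each character is a list of (pos, bone_offsets, weights) vertices.
    # Returns the number of vertices skinned, i.e. the CPU work done.
    work = 0
    for mesh in characters:
        for pos, bones, weights in mesh:
            skin_vertex(pos, bones, weights)
            work += 1
    return work
```

Doubling the characters on screen doubles the calls into `skin_vertex`, which is consistent with the framerate diving as more monsters appear.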
 
Re: Doom 3 issues

DaveBaumann said:
Proforma said:
kyleb said:
lol, your cpu is what is slowing you down, not your ati card.

Well, I don't have any problems with Far Cry or any Unreal 2004 game, and Doom 3's graphics aren't really good enough to make it this slow.
As I said before - vertex skinning and the vertex calculation in Doom 3 are done on the CPU, not the graphics core. The more characters you have on screen, the more vertex processing the CPU is doing (both character skinning and shadow extrusion for the extra objects).
C'mon, Dave, he's a programmer, certainly he must know what he is talking about! :LOL:

-FUDie
 
Re: OEMS and Enthusiasts are both important

That's a somewhat backward way of looking at it. This isn't something designed to make SM3.0 look insignificant - it's something they can do with their hardware that can provide a benefit. There's very little difference between instancing and centroid sampling, other than the fact that MS removed the requirement for PS3.0 with centroid sampling but not the link between VS3.0 and instancing. ATI's instancing does what DX requires and provides similar benefits, so why not use it? This is, in fact, the opposite of your example with the MX.

Can I have a link on this (that MS removed the requirement for PS3.0)?

I have not seen proof of this.

I wish I could see the world like you see it. I see this as yet more FUD from ATI's marketing department to make it seem like they have SM 3.0 features, along with Humus's branching demo - and he just happens to work for ATI.


I want ATI to also focus on the enthusiasts market as well as thats where the new features will come down to everyone at low prices, it will start there and then move down.

And they are, with the X800, they are - however their view of what is a sensible, affordable die size differs from NVIDIA's at the moment. We'll see whether ATI moves up to NVIDIA's die size or NVIDIA drops back a little.

Aren't you over blowing this somewhat? They have offered one feature that isn't presently exposed in a nice manner in DX - lots of programmers circumvent DX Caps and enable features by device ID; you can call ATI's instancing in the same manner.

Well, it matters to me, as I have to travel through a minefield to find out if a card supports instancing without having to depend on device IDs.

It makes DX caps useless, and I depend on them to deliver the goods. I don't want to have to depend on device IDs; I want something more dynamic that isn't a b*tch to code for.

I take it that the lot of you don't program for a living and only play games. Well, as a programmer this all matters to me: if I want features to run on different video cards and I can't rely on a DX standard, I might as well call it a day.
 
Re: Doom 3 issues

FUDie said:
DaveBaumann said:
Proforma said:
kyleb said:
lol, your cpu is what is slowing you down, not your ati card.

Well, I don't have any problems with Far Cry or any Unreal 2004 game, and Doom 3's graphics aren't really good enough to make it this slow.
As I said before - vertex skinning and the vertex calculation in Doom 3 are done on the CPU, not the graphics core. The more characters you have on screen, the more vertex processing the CPU is doing (both character skinning and shadow extrusion for the extra objects).
C'mon, Dave, he's a programmer, certainly he must know what he is talking about! :LOL:

-FUDie

Well, I don't know everything, but you don't either.
Must be called FUD-ie for a reason. Duh.

I understand about vertex skinning being on the CPU instead of on the GPU, where it should be. I got that part; the issue is that I have had all this on the CPU before and it never went as slowly as Doom 3. Maybe it's because of the shadows. I will disable the shadows and see if that's the problem, but then you might as well be playing Quake 3. :(
 
I wish I could see the world like you see it. I see this as yet more FUD from ATI's marketing department to make it seem like they have SM 3.0 features, along with Humus's branching demo - and he just happens to work for ATI.

Well, they support instancing, which is an SM3.0 feature. So it's not FUD from ATI.

If they do instancing, they do it. Judging from the benchmarks, they do it faster than Nvidia does. So how are they spreading FUD?

The problem lies in MS tying instancing to SM3.0, even though all of ATI's DX9 cores do it, all the way down to the R300.
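For anyone following along, what instancing actually buys is worth spelling out: one draw call carrying N per-instance transforms, rather than N separate calls for the same mesh. The toy Python sketch below only counts submissions; the class and method names are invented for illustration and don't correspond to any real graphics API.

```python
# Toy illustration of the draw-call savings from instancing: a "device"
# that just counts submissions. Real APIs submit GPU command packets;
# here we only model the per-call overhead the CPU pays.

class DrawCallCounter:
    def __init__(self):
        self.calls = 0

    def draw(self, mesh, transform):
        self.calls += 1  # one submission per object

    def draw_instanced(self, mesh, transforms):
        self.calls += 1  # one submission for the whole batch

def render(device, mesh, transforms, use_instancing):
    # Same scene either way; only the number of submissions differs.
    if use_instancing:
        device.draw_instanced(mesh, transforms)
    else:
        for t in transforms:
            device.draw(mesh, t)
```

With, say, 500 rocks, the non-instanced path pays the per-call overhead 500 times while the instanced path pays it once, which is why the feature matters regardless of which shader model it is nominally tied to.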
 
Can I have a link on this (that MS removed the requirement for PS3.0)?

I have not seen proof of this.

Take a look at the DX caps list under DX9.0c - there is now a caps bit for it; this wasn't the case initially.

I wish I could see the world like you see it. I see this as yet another FUD from ATI's marketing Department to make it seem like they have SM 3.0 features, along with Humus's branching demo who just happens to work for ATI.

Well, that was Humus's demo, not ATI's - he is not part of their marketing, nor was it released by ATI.

However, ATI's instancing offers exactly what DX requires (other than VS3.0!) and provides the same benefit - and that is offered to every user from the 9500 to the X800. What's the issue if it provides the right benefits? Isn't it doing ATI users more of a disservice not to offer those benefits? It's not as though they just enabled this as a selling point for the X800; they enabled it for all boards capable of supporting it.

Well, it matters to me, as I have to travel through a minefield to find out if a card supports instancing without having to depend on device IDs.

Well, you don't have to - there is a method for checking; however, other developers do use device IDs.

Well, as a programmer this all matters to me: if I want features to run on different video cards and I can't rely on a DX standard, I might as well call it a day.

Then, as a programmer, shouldn't you be requesting that MS offer a caps bit for it, as they have with centroid sampling?
 
Re: OEMS and Enthusiasts are both important

Proforma said:
I take it that the lot of you don't program for a living and only play games. Well, as a programmer this all matters to me: if I want features to run on different video cards and I can't rely on a DX standard, I might as well call it a day.

I take it this is a bit of an exaggeration. Seriously, if, as a coder, having to do a workaround - to get a fairly significant improvement on hardware which is quite widespread - is enough to make you call it a day, I can't say I'm too impressed with your coding skills. If, on the other hand, you are simply saying you wish you didn't have to do this, then yeah, I can see your point.

But then again, you could say the same thing for partial precision on NV hardware - with ATI you don't need to bother with PP, you don't need to go through each shader and work out whether PP would degrade the quality too much; just set everything to full precision.
 