Another Anand tease...

GetStuff said:
ATI would have good reason to hold AnandTech suspect over his objectivity.

The reason is not good enough for ATi to stop running ads on his site? :D

No, you're right, but I guess their idea is to sway him away from nVidia rather than abandon him outright. These sites like AnandTech remind me of the people who go around to restaurants and write up critiques. Those guys get catered to; I only wish I could manage something of the sort. ;)
 
Geek_2002 said:
First, I would be surprised if there is a .13 micron NV30 part from nVidia that isn't absolutely packed full of hardware and driver issues. Second, there has been little to nothing in terms of rumors regarding the part. Third, it could very well be another party altogether whom we aren't considering.

Why must production on a new die process result in myriad hardware and driver issues? That's just silly...

Regardless, a few things to consider:

1. We already know a fair amount about what the NV30 will be providing: Increased programmability, more performance (esp. with aniso/FSAA), 64-bit floating-point color.

2. nVidia has pulled surprise product launches before. I think their best was the GeForce2 GTS launch.

Anyway, expect an official announcement in late August.
 
Chalnoth said:
Geek_2002 said:
First, I would be surprised if there is a .13 micron NV30 part from nVidia that isn't absolutely packed full of hardware and driver issues. Second, there has been little to nothing in terms of rumors regarding the part. Third, it could very well be another party altogether whom we aren't considering.

Chalnoth said:
Why must production on a new die process result in myriad hardware and driver issues? That's just silly...

Well, if there are going to be problems, this is likely the time period in which they'll show up. In fact, it is "silly" IMHO to dismiss the likelihood that there will be problems in the beginning, not the other way around, Chalnoth.

Chalnoth said:
Regardless, a few things to consider:

1. We already know a fair amount about what the NV30 will be providing: Increased programmability, more performance (esp. with aniso/FSAA), 64-bit floating-point color.

2. nVidia has pulled surprise product launches before. I think their best was the GeForce2 GTS launch.

Anyway, expect an official announcement in late August.

Yeah, but I wouldn't dare compare the 120-million-transistor NV30 to the 25-million-transistor GeForce2 GTS. Further, the more advanced a chip is, the more technical difficulties it seems logical to expect. At any rate, this is all speculation; I am not posting in this thread as if these were hard facts, so whatever, we will have to wait and see what the outcome is, and all the blithering we are doing here will either be relegated to false rumor or cleared as good observations.
 
Nagorak said:
By the way, has anyone noticed how ATi is tearing 3rd parties away from Nvidia left and right? If I owned Nvidia stock I'd sell it, and not just because the NV30 is supposedly delayed; losing all those clients has to be hurting them. And, unlike ATi, they don't have their own board manufacturing to fall back on.

That would be an issue if people were buying cards based solely on the manufacturer, but in reality the chip the card is based on is the deciding factor. I have yet to see a person outright say "I want an ASUS card" or "I am buying MSI". People first choose the chip they want and then choose among the manufacturers making cards based on that chip.

You are much better off looking at market share rather than the number of manufacturers.
 
Geek_2002 said:
Yeah, but I wouldn't dare compare the 120-million-transistor NV30 to the 25-million-transistor GeForce2 GTS. Further, the more advanced a chip is, the more technical difficulties it seems logical to expect. At any rate, this is all speculation; I am not posting in this thread as if these were hard facts, so whatever, we will have to wait and see what the outcome is, and all the blithering we are doing here will either be relegated to false rumor or cleared as good observations.

Well, I guess I did misunderstand you when you were talking about hardware issues...as those are absolutely inevitable early in development. It just sounded to me like you were talking about a launch part.

Anyway, regardless of how much more complex the NV30 is, nVidia has far more experience and engineering/financing muscle now than ever before (and significantly more than ATI...). Yes, there is a very real possibility that the NV30 won't make the late August/Early September timeframe, but we don't know for sure just yet...

If ATI does launch their R300 tomorrow, then I think two things are pretty much assured:

1. It will be very good compared to today's video cards.
2. It won't be able to stand up against the NV30 (from a certain perspective...it may give good price/performance by comparison...but I doubt its features will be close to the NV30's).
 
Chalnoth said:
Geek_2002 said:
Yeah, but I wouldn't dare compare the 120-million-transistor NV30 to the 25-million-transistor GeForce2 GTS. Further, the more advanced a chip is, the more technical difficulties it seems logical to expect. At any rate, this is all speculation; I am not posting in this thread as if these were hard facts, so whatever, we will have to wait and see what the outcome is, and all the blithering we are doing here will either be relegated to false rumor or cleared as good observations.

Well, I guess I did misunderstand you when you were talking about hardware issues...as those are absolutely inevitable early in development. It just sounded to me like you were talking about a launch part.

Anyway, regardless of how much more complex the NV30 is, nVidia has far more experience and engineering/financing muscle now than ever before (and significantly more than ATI...). Yes, there is a very real possibility that the NV30 won't make the late August/Early September timeframe, but we don't know for sure just yet...

If ATI does launch their R300 tomorrow, then I think two things are pretty much assured:

1. It will be very good compared to today's video cards.
2. It won't be able to stand up against the NV30 (from a certain perspective...it may give good price/performance by comparison...but I doubt its features will be close to the NV30's).

Err, I am thinking you are underestimating ATI. I believe they are one of the largest technology companies in Canada. Further, I also think you are underestimating their engineering talent. For the most part I agree with what you say, but I don't understand why everyone always assumes that nVidia will always be better than ATI. Anyhow, time will tell.
 
Logic dictates that I cannot state that nVidia will always be better than ATI.

Still, I can see enough flaws in the Radeon 8500's design, flaws that are not present in the older GeForce3 design, that I don't currently have much expectation for ATI's engineering talent in comparison to nVidia's.

Examples:

1. Incomplete anisotropic implementation.
2. Smoothvision that never materialized.
3. Less performance per clock, despite having second-generation memory bandwidth savings tech.
4. Inferior MIP map selection algorithm (resulting in much texture aliasing).

...and I'm sure I could think of a few more if I sat here for a while. Anyway, my point is, it still really looks to me like ATI is just too far behind nVidia to truly usurp them in high-end 3D consumer graphics. I've also always been more impressed by nVidia's 3D graphics implementations (in terms of correct rendering and supported features).
 
Chalnoth said:
Logic dictates that I cannot state that nVidia will always be better than ATI.

Still, I can see enough flaws in the Radeon 8500's design, flaws that are not present in the older GeForce3 design, that I don't currently have much expectation for ATI's engineering talent in comparison to nVidia's.

Examples:

1. Incomplete anisotropic implementation.
2. Smoothvision that never materialized.
3. Less performance per clock, despite having second-generation memory bandwidth savings tech.
4. Inferior MIP map selection algorithm (resulting in much texture aliasing).

...and I'm sure I could think of a few more if I sat here for a while. Anyway, my point is, it still really looks to me like ATI is just too far behind nVidia to truly usurp them in high-end 3D consumer graphics. I've also always been more impressed by nVidia's 3D graphics implementations (in terms of correct rendering and supported features).

Here goes the pissing match, sigh.

1. Nvidia uses incomplete AA
2. Nvidia still only fully supports DX8
3. Nvidia, instead of developing a new core for the GeForce4 Ti series, uses the old GeForce3 architecture overclocked, resulting in a hot-running GPU. (BAD..)
4. Nvidia still has higher-order surfaces disabled.
5. Nvidia does not support PS 1.4 as of yet.
6. Nvidia has no support in the Det drivers for AF under D3D.
7. Nvidia has failed to give end users a boost in performance with their Det drivers for some time now, resulting in the Radeon 8500 nearly performing as well as the GeForce4 Ti 4600 in many benches.
8. Nvidia's GeForce4 Ti series of cards are still more expensive than the Radeon 8500, but the disparity in performance has narrowed substantially.

I could probably think of a few more negatives. For the most part I think that GeForce4 Ti technology is old and there is really little in it that is new. IMHO, nVidia is behind technologically speaking; a few more FPS does not really equate to better technology, particularly where the GeForce4 Ti series is concerned. I mean, everyone knows that the special cooling unit attached to the cards keeps the GPU from burning up, right?

PS : I am not going to get into this with you Chalnoth because I know it will never end. So I am not going to play the game today. I don't think either of us would sway the other anyhow.

EDIT : I know you insist on having the last word.
 
Geek_2002 said:
1. Nvidia uses incomplete AA

Result of the implementation, not any engineering failure.

2. Nvidia still only fully supports DX8

???
All we have right now is DX8!

3. Nvidia, instead of developing a new core for the GeForce4 Ti series, uses the old GeForce3 architecture overclocked, resulting in a hot-running GPU. (BAD..)

The 4Ti architecture is significantly improved from the GF3. nVidia was waiting for the .13 micron process to implement their next-gen tech.

4. Nvidia still has higher-order surfaces disabled.

Only in Direct3D, but it is of little consequence, as no games currently use the tech (and likely won't until the GF3 is the min spec for games, which may be another two years). The curved surfaces work perfectly under OpenGL (and are used in a number of nVidia demos).

5. Nvidia does not support PS 1.4 as of yet.

This is not an engineering failure. (design decision)

6. Nvidia has no support in the Det drivers for AF under D3D.

Not an engineering failure. (driver GUI)

7. Nvidia has failed to give end users a boost in performance with their Det drivers for some time now, resulting in the Radeon 8500 nearly performing as well as the GeForce4 Ti 4600 in many benches.

Again, not an engineering failure. (Already optimized...and you're exaggerating...)

8. Nvidia's GeForce4 Ti series of cards are still more expensive than the Radeon 8500, but the disparity in performance has narrowed substantially.

Not an engineering failure once again (price depends on market conditions).

Please show me one engineering failure in an nVidia graphics card (a feature never implemented properly, for example).
 
This statement alone tells you who the candidates are...

1. ATI
2. nVidia
3. ST Micro
4. Matrox

It's safe to scratch Matrox off the list. And while you're at it, eliminate ST Micro as well. That leaves 2.

He doesn't necessarily have to mean STM; after all, both STM and IMGTEC are companies behind the Kyro II (which appears in that article), so he could just mean IMGTEC.

I'm not saying that I think he does mean them of course, I have no idea who he means.
 
Damn, I got sucked in again; I gotta get a life.


Geek_2002 said:
1. Nvidia uses incomplete AA

Chalnoth said:
Result of the implementation, not any engineering failure.

Well, it may be a failure; ideally they would do proper AA, and they just can't manage a good implementation yet. The implementation would have been decided on after the engineering team failed to do proper AA.

2. Nvidia still only fully supports DX8

Chalnoth said:
???
All we have right now is DX8!

We have DX8.1 with PS 1.4, which is unsupported by nVidia.

3. Nvidia, instead of developing a new core for the GeForce4 Ti series, uses the old GeForce3 architecture overclocked, resulting in a hot-running GPU. (BAD..)

Chalnoth said:
The 4Ti architecture is significantly improved from the GF3. nVidia was waiting for the .13 micron process to implement their next-gen tech.

I don't see where they changed much at all; the card has basically the same features. Nothing really new at all, except that it runs considerably hotter/faster than the GeForce3.

4. Nvidia still has higher-order surfaces disabled.

Chalnoth said:
Only in Direct3D, but it is of little consequence, as no games currently use the tech (and likely won't until the GF3 is the min spec for games, which may be another two years). The curved surfaces work perfectly under OpenGL (and are used in a number of nVidia demos).

Yeah that is why they disabled the feature. It works poorly at best.

5. Nvidia does not support PS 1.4 as of yet.

Chalnoth said:
This is not an engineering failure. (design decision)

Hrm, not so sure; it could be a "design decision," but it could also be a failure on engineering's part. BTW, do you sit in on nVidia executive meetings?

6. Nvidia has no support in the Det drivers for AF under D3D.

Chalnoth said:
Not an engineering failure. (driver GUI).

Yeah, granted, but one has to wonder why they chose not to support it in their Det drivers.

7. Nvidia has failed to give end users a boost in performance with their Det drivers for some time now, resulting in the Radeon 8500 nearly performing as well as the GeForce4 Ti 4600 in many benches.

Chalnoth said:
Again, not an engineering failure. (Already optimized...and you're exaggerating...)

Am I? I saw this posted in another forum and thought, OMG, nVidia has to do something about this soon.

[Benchmark images: quake3_normal.jpg, rtcw_normal.jpg, jedi2_normal.jpg, oc_perf_2.jpg]


As much as I hate to post bench pics they do help to prove a point. There are others I could post as well.

8. Nvidia's GeForce4 Ti series of cards are still more expensive than the Radeon 8500, but the disparity in performance has narrowed substantially.

Chalnoth said:
Not an engineering failure once again (price depends on market conditions).

Please show me one engineering failure in an nVidia graphics card (a feature never implemented properly, for example).

Well, you could argue that the cooling apparatus on the cards is jacking up the price tag on GeForce4 Ti cards, and that this would be an engineering failure to a degree, as the GeForce4 Ti cards need the extra cooling to keep from burning through the PCB. ;)

You assume that it is some sort of executive decision in many cases, but in reality these decisions are a result of what they can and can't do from an engineering standpoint. It is good in a way that they work this way, as no one gets a broken feature (*cough* nVidia's AF *cough*). But it really doesn't prove that they are more technologically advanced at all.
 
Doomtrooper, some will probably say that the use of 16-bit texel color interpolation is also a mere design decision and not a hardware flaw. It was simply the way that nVidia found to decompress textures in hardware. Just a thought :LOL:
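For anyone who hasn't followed the S3TC saga, here's a rough sketch of what's being described; the values are made up and the helpers (rgb565_to_rgb888, interpolants_888, interpolants_565) are my own toy functions, not nVidia's actual decoder logic. A DXT1 block stores two RGB565 endpoints and derives two blended colours between them; if the blend happens while the colours are still 5:6:5 instead of after expanding to 8 bits per channel, the low-order bits are lost, which is the kind of banding people complain about in Quake 3 skies:

```python
# Hypothetical illustration only -- not vendor code. DXT1/S3TC stores two RGB565
# endpoints per 4x4 block and derives two in-between colours (2/3-1/3 blends).
# The question is *where* the blend happens: at 8 bits per channel, or while the
# values are still 5:6:5 (which throws away low-order bits and causes banding).

def rgb565_to_rgb888(c):
    """Expand a packed 16-bit 5:6:5 colour to 8 bits per channel."""
    r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def interpolants_888(c0, c1):
    """Expand the endpoints first, then blend at 8-bit precision."""
    a, b = rgb565_to_rgb888(c0), rgb565_to_rgb888(c1)
    two_thirds = tuple((2 * x + y) // 3 for x, y in zip(a, b))
    one_third = tuple((x + 2 * y) // 3 for x, y in zip(a, b))
    return two_thirds, one_third

def interpolants_565(c0, c1):
    """Blend while still in 5:6:5, then expand: results snap back to the 565 grid."""
    a = ((c0 >> 11) & 0x1F, (c0 >> 5) & 0x3F, c0 & 0x1F)
    b = ((c1 >> 11) & 0x1F, (c1 >> 5) & 0x3F, c1 & 0x1F)
    pack = lambda rgb: (rgb[0] << 11) | (rgb[1] << 5) | rgb[2]
    two_thirds = pack(tuple((2 * x + y) // 3 for x, y in zip(a, b)))
    one_third = pack(tuple((x + 2 * y) // 3 for x, y in zip(a, b)))
    return rgb565_to_rgb888(two_thirds), rgb565_to_rgb888(one_third)

# Two nearby sky-blue shades (made-up values), like a compressed sky gradient:
c0 = (100 >> 3) << 11 | (150 >> 2) << 5 | (250 >> 3)
c1 = (110 >> 3) << 11 | (160 >> 2) << 5 | (255 >> 3)
print(interpolants_888(c0, c1))  # keeps intermediate shades between the endpoints
print(interpolants_565(c0, c1))  # quantised back to the coarse 565 steps
```

Whether that trade-off counts as a design decision or a hardware flaw is, of course, exactly the argument going on here.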
 
A design decision... yes it was intentional, but it was a mistake ;)

Anyways... so what have we learnt today?

Nada, nowt, zilch

....
 
ATI's aniso has never made my eyes bleed...

The fact is, 90% of the time ATI's aniso works exactly as intended, with more detail and less speed hit than the competition. The same can't be said about nVidia's texture compression. I don't consider myself biased towards either company (I'm running a Geforce right now) but the reality of this issue is just inescapable.

IMO, sometimes shortcuts/approximations work well, other times not:

[Image: q3_1_geforce_comp.png]
 
Oh great, this thread is degenerating into one of those "your hardware is broken" pissing contests. The funny thing is, if you support a card then these problems are the result of implementation, but if you don't, you get to say it's broken. It's just ridiculous; you can go 'round and 'round about that and it proves nothing (except in the mind of the person making the stupid claims). Let me just say that Nvidia products are no more and no less "broken" than ATi cards and leave it at that.

I'm sorry, Chalnoth, but what you're saying is just wrong. There has been no information whatsoever to base relative speed claims on. The information you are citing is basically "Nvidia has been better in the past," which is a debatable point, but one which is also pointless to debate. Regardless, I'm sorry, but that just doesn't prove anything.

The other things being cited are the following:

1) Nvidia is hyping the NV30 as being *cough*revolutionary*cough*.

Everyone seems to hate me for saying this, but damnit man, they've said this about every card they've EVER released. If hype were a proven fact, the GF3 would be rendering FF in full cinematic beauty and Pixar would be buying them up like hotcakes. I don't see that happening, do you?

2) ATi does less work per clock cycle than Nvidia (or some such nonsense).

They are completely different architectures, so obviously they do things differently. In fact this makes no difference, as proven by good ol' Intel. Although P4s do less per clock cycle, they are also capable of running at much higher clocks and thus end up faster than the Athlon (albeit not by much). Per-clock efficiency and clock speed don't matter in isolation; only their product does (a quick back-of-the-envelope sketch follows this list). If you had a card that was 300% as efficient as the GF3 but only ran at 1 MHz, it'd still be a PoS. Meanwhile, if a card were released that ran at 900 MHz and was only slightly faster than a GF3, it'd still be better regardless of its "inefficiency". Yeah, maybe if you had that GF3 running at 900 MHz it'd be faster... too bad there's no chance of ever getting it to run that fast. By the way, as spurious as it is, this argument can only be applied to the R200 generation of ATi hardware. As I recall, the R100 was way more efficient per clock than the GF2.

3) Nv30 will have more transistors than the R300.

Umm...both chips are estimated to be around 120 million transistors, so I don't know what the basis for this is. By the way, the similar transistor count leads me to believe that both chips will perform fairly similarly, rather than the NV30 being so much more advanced, as you're suggesting.

4) If the R300 is really good at release, Nvidia will "know what to do" to get their chip running faster for release.

Yeah, just like that, they'll wave their magic wand and the NV30 will run waaay faster, right? Just like 3DFX knew "what to do" to get the V5 line competitive? It isn't like they can just pull out a screwdriver or scalpel and tweak the core; to achieve any major changes they'd have to go completely back to the drawing board, and that would set them back about...2 years? I agree Nvidia can just speed-bin their chips by selling only those that clock really well and grinding the rest up into sand, but they'd also lose insane amounts of money that way.

5) The NV30 is coming out later, so it must be better.

Riiiight!
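To put point 2 in numbers (completely made-up figures, just to illustrate the arithmetic, and `effective_perf` is my own toy helper, not anything from a real benchmark): effective performance is work-per-clock times clock rate, so neither number means anything on its own.

```python
# Made-up numbers purely to illustrate the argument: what matters is the product
# of work-per-clock and clock rate, not either figure in isolation.
def effective_perf(work_per_clock, clock_mhz):
    """Relative throughput in arbitrary work units per microsecond."""
    return work_per_clock * clock_mhz

baseline = effective_perf(1.0, 200)              # a GF3-like baseline (hypothetical)
efficient_but_slow = effective_perf(3.0, 1)      # "300% as efficient", but 1 MHz
fast_but_inefficient = effective_perf(0.5, 900)  # half the work per clock at 900 MHz

print(baseline, efficient_but_slow, fast_but_inefficient)  # 200.0 3.0 450.0
```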

Anyway the point of this post is not to say the NV30 is going to suck, or even that the R300 is going to be better. The point is: no one honestly knows yet!

The people saying the NV30 is going to be better because Nvidia always has better parts are the same sort of people who claimed that 3DFX would never be dethroned by the competition. Look where 3DFX is now!

The NV30 does have one thing going for it, I'll grant you that: the 0.13 micron process. That's literally the only thing we have to indicate any real advantage for the NV30, and the only valid argument at this point that I'll accept to suggest the NV30 will be faster. And there may be something to it (although it will depend on whether ATi releases a 0.13 version of the R300 later on). I'm just sick of people acting like it's set in stone that the NV30 will blow the R300 away. Let's just sit back and wait and see.

The only thing that's been proven thus far, IMO, is the amazing ability of Nvidia's marketing to pull the wool over people's eyes as the competition's release date approaches.
 