ATI spreading FUD?

What, you don't see the irony in a company scared about incompatibilities in their competitor's product when they have had a lackluster record themselves on application compatibility?

You may call it cheerleading. I call it a good laugh on a Monday morning :)
 
DemoCoder said:
It's FUD, pure and simple, and it will be interesting to see how those playing up PCI-E differentiation on this BBS will react when the VS3.0/PS3.0 differentiations start coming out.

Well, it isn't FUD to state that nVidia's custom support for "AGP x16" is not equal to support for PCIex16, is it? It might be close, certainly, but still no cigar as it's not the same thing...;) It seems to me that here ATi is only pointing out the obvious.

Also, regardless of the efficacy of nV40's PS3.0 implementation, or the lack thereof, that does nothing to alter the fact that "AGP x16" support does not equal PCIex16 support.
 
Chalnoth said:
Why does this have any relevance? The "AGP x16" will only be active when the bridge chip is in use. The data transfer mode is there to make better use of the bridge than nVidia's older graphics processors will be able to. It was described as being able to emulate the bi-directional mode of PCI Express by very rapidly switching between sending and receiving data.

Doesn't matter how "rapidly" it switches, if it switches it isn't bi-directional in the sense of doing both directions at the same time, right? I.e., "emulation" is...well "emulation", isn't it?...;) Close, but no cigar. I mean, nVidia isn't pretending the two are the same, so there's really no point in advancing that perspective, is there?

It's FUD because of the "might."

I said "might" because it "might not" cause any problems. You'd feel more comfortable if I'd said that it positively would cause problems? Whether it causes problems at all is an entirely separate issue from the fact that AGP x16 does not equal PCIex16. Seems pretty straightforward and uncomplicated.
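The switching-versus-simultaneous distinction is easy to put in numbers. Here's a toy model (my own illustration, with arbitrary units, not anything from nVidia's or ATi's materials) of how long it takes to move data both ways over a half-duplex link that time-slices directions versus a full-duplex link with a dedicated path each way:

```python
# Toy model of half-duplex (switched) vs full-duplex (simultaneous)
# transfer time. Bandwidth units are arbitrary; the numbers are
# illustrative, not real AGP/PCIe figures.

def half_duplex_time(down, up, bandwidth):
    # One shared channel: traffic in the two directions must take turns,
    # so total time is the sum of both transfers (switching cost ignored).
    return (down + up) / bandwidth

def full_duplex_time(down, up, bandwidth_per_direction):
    # Dedicated path per direction: both transfers run concurrently,
    # so total time is set by whichever direction is busier.
    return max(down, up) / bandwidth_per_direction

# Equal traffic both ways: the switched link takes twice as long,
# no matter how fast it flips between directions.
print(half_duplex_time(2, 2, 4))  # 1.0
print(full_duplex_time(2, 2, 4))  # 0.5
```

Note the gap only appears when both directions are busy at once; with purely one-way traffic (the common case for graphics) the two models give the same answer, which is why the practical impact is debatable.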

I saw a good argument on one of the previews (forget which one, unfortunately):
By using a bridge chip, nVidia will have a much easier time clearing chip inventory. On the other hand, if ATI misreads the market, and sells more of one type of R42x than they planned, they will have a much harder time getting rid of the inventory of the other type.

That's a fine argument from nVidia's point of view--economically--and I've said as much myself. However relevant the economics may or may not be to nVidia, it doesn't change the fact that AGP x16 does not equal PCIex16. I really don't think early adopters of PCIe mboards are going to settle for a half-way emulation of PCIex16 support in a 3d card, do you? Might as well just stick with AGP x8 instead of doing that. The thing is that early, native production of PCIex16 pcbs to support its reference designs might be an economic problem for ATi, but it surely won't be a problem for the customers who buy those products, right? As to whether supporting PCIe immediately helps ATi economically, that's a separate issue from the obvious benefit to PCIe consumers in choosing native support over a bridge-chip emulation.

Yes, with the rest of the NV4x lineup. They will use the bridge chip too, however, to operate on the AGP bus.

I think all nVidia reference designs will eventually be offered in native PCIex16 pcb versions, and that nVidia will eventually use the bridge chip for the purposes of bridging PCIex16 pcbs to operate on AGP x8. This is how I understand ATi intends to use a bridge, which I think is the right way to do it. Of course, my opinion is that bridging AGP x8 pcbs up to support "AGP x16" instead of PCIe is the wrong way to do it. I think nVidia is looking at costs of the manufacturing of PCIe pcbs initially, without an appreciation for the fact that people buying PCIe mboards initially and shelling out $500 for a 3d card are apt to be very picky about what they buy, and such distinctions are likely to be relevant to them.

What people do will, I think, depend on how well nV40 compares to what ATi will be offering. Also, availability between the competing products will be a key issue as well. Assuming both products are available at similar prices and quantities, and at roughly the same times, then if nV40 is perceived as being the better product the lack of native PCIe support won't matter at all. But if the R4x0 products are perceived nearly the same, then the distinction could become critical. And if R4x0 is clearly perceived as superior, then lack of native PCIex16 support is just one more nail in the nV40 coffin...;)

Also, if nVidia isn't sure as to when yields of nV40 will reach critical mass such that the product can be retailed in the mass markets by its board OEMs, then it could be that nVidia just isn't terribly concerned with doing a native PCIe pcb for nV40 at this time because it knows that when yields mature to the point of being able to sell it they'll have a native PCIe pcb ready to go and the issue will be entirely moot. In short, it could be that by the time the nV40U gets to market it will be native PCIe, and we'll all have had this discussion for just about no reason at all...;)
 
Unknown Soldier said:
erm.. no.

The card still only has one slot.. and the Mobo only has one AGP 8x slot.

US

Huh? The "AGP 16x" link is contained between the GPU and the HSI bridge; the card plugs into the motherboard via PCI Express.
 
Oh please. FUD? Either you are a Nvidiot fanboi or woefully ignorant of PCIE.

Clocking the AGP interface at 16x instead of 8x doesn't even come close to the fact that PCIE provides for both directions to run at full speed AT THE SAME TIME. AGP can only have one direction running at any instant. How is ATI pointing this out FUD?

I dunno if ATI also points out the potential for lower performance due to higher latency in using an external bridge, but any systems engineer or architect can tell you the exposure is there. There. Did I just spread FUD? Or did I point out a real shortcoming of an approach?

And what is the excuse for being willfully ignorant that more parts means higher failure rates? nV's HSI adds another component (and heat source) to the board.

Sounds like a fanboi flame post to me. But in case you aren't, I suggest you go to the Intel site and read
http://developer.intel.com/technology/pciexpress/downloads/3rdGenWhitePaper.pdf
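To make that latency exposure concrete, here's a hypothetical sketch; the bridge delay and bandwidth figures are invented for illustration (nVidia hasn't published HSI latency numbers), but they show why a fixed per-hop delay hurts small transfers proportionally more than large ones:

```python
# Hypothetical illustration of bridge latency. All numbers are made up;
# the point is the shape of the effect, not the magnitudes.

def transfer_time_us(payload_bytes, bw_bytes_per_us, extra_hop_us=0.0):
    # Fixed per-transaction hop delay plus time on the wire.
    return extra_hop_us + payload_bytes / bw_bytes_per_us

BW = 4000.0   # ~4 GB/s expressed as bytes per microsecond
HOP = 0.5     # assumed extra bridge latency per transaction, in us

small = 4 * 1024          # a small command-buffer read
large = 4 * 1024 * 1024   # a large bulk upload

for size in (small, large):
    native = transfer_time_us(size, BW)
    bridged = transfer_time_us(size, BW, HOP)
    # Ratio of bridged to native transfer time for this payload size
    print(size, round(bridged / native, 3))
```

Under these assumed numbers the small transfer eats roughly 50% overhead while the bulk transfer barely notices the hop, which is why "the exposure is there" but the real-world harm depends entirely on the workload mix.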
 
Which reminds me.

Since nV likes to position itself as the technology leader (and by implication suggest ATI is not), why did nV take a retro, uncommitted approach to PCIE? (for all of their next-gen products as far as I can tell)

A true leader would have integrated the PCIE bridge and provided an external reverse bridge to go back to AGP. Or put both interfaces on a single IC. Not do what nV did (a really half-hearted "commitment" IMO).
 
Scarlet said:
Which reminds me.

Since nV likes to position itself as the technology leader (and by implication suggest ATI is not), why did nV take a retro, uncommitted approach to PCIE? (for all of their next-gen products as far as I can tell)

A true leader would have integrated the PCIE bridge and provided an external reverse bridge to go back to AGP. Or put both interfaces on a single IC. Not do what nV did (a really half-hearted "commitment" IMO).
A bridge chip may very well sell better to the average joe, if marketed right. Like now, they are calling it the "High Speed interconnect." Joe sixpack will look at it and think "humm... it's a high speed thingie, how come that ATI card doesn't have one -- maybe that ATI card isn't high speed."
 
thatdude90210 said:
Scarlet said:
Which reminds me.

Since nV likes to position itself as the technology leader (and by implication suggest ATI is not), why did nV take a retro, uncommitted approach to PCIE? (for all of their next-gen products as far as I can tell)

A true leader would have integrated the PCIE bridge and provided an external reverse bridge to go back to AGP. Or put both interfaces on a single IC. Not do what nV did (a really half-hearted "commitment" IMO).
A bridge chip may very well sell better to the average joe, if marketed right. Like now, they are calling it the "High Speed interconnect." Joe sixpack will look at it and think "humm... it's a high speed thingie, how come that ATI card doesn't have one -- maybe that ATI card isn't high speed."

Good point. I hadn't thought of that. I can see it now: "Hey Bubba, this Nvidia card has PCIE and a special High Speed Interconnect, buy it instead of the ATI card, which ain't got no HSI."

Not exactly good for the consumer though. Guess I shouldn't be looking for any graphics marketing jobs anytime soon. I'm just not devious enough.
 
WaltC said:
Doesn't matter how "rapidly" it switches, if it switches it isn't bi-directional in the sense of doing both directions at the same time, right? I.e., "emulation" is...well "emulation", isn't it?...;) Close, but no cigar. I mean, nVidia isn't pretending the two are the same, so there's really no point in advancing that perspective, is there?
You don't understand.

The AGP portion of the bus rapidly switches.

The PCI Express portion can operate in bi-directional mode.

In essence, this means that the card will be able to make use of the full bandwidth that would be offered by an AGP 16x implementation, even in cases where bidirectional data is called for.

Another way to think of this is that the NV40, when PCI Express variants are released, will act exactly like a native PCI Express graphics card with somewhat lower bandwidth.

But I really fail to see why you seem to consider it such a big deal (and why compare it to PS 3.0 support???). Bus changes in the past have not resulted in anything resembling major graphics performance changes. I don't expect the bridged NV40 to show any less performance improvement from moving to PCI Express than the R423 will over the R420 (assuming same clocks, memory, etc.).

I said "might" because it "might not" cause any problems. You'd feel more comfortable if I'd said that it positively would cause problems?
No, the point is you don't know, and neither does ATI. That "might" goes right into the definition of FUD: Fear, Uncertainty, Doubt.
 
Scarlet said:
Since nV likes to position itself as the technology leader (and by implication suggest ATI is not), why did nV take a retro, uncommitted approach to PCIE? (for all of their next-gen products as far as I can tell)
Um, they are doing this. All of the rest of the NV4x lineup will be PCI-Express native graphics chips. The NV40, on the other hand, is set to be available quite a while before any PCI Express motherboards, so most people will want to purchase an AGP variant.

By the time PCI Express motherboards are actually available, the NV45's should be just about ready to replace the NV40's on the high end.
 
Chalnoth said:
The PCI Express portion can operate in bi-directional mode.

Terminology police: AGP is "bi-directional" (one interface, both directions); PCI-E is unidirectional (it has two paths that each go their own direction - one downstream, one up).

In essence, this means that the card will be able to make use of the full bandwidth that would be offered by an AGP 16x implementation, even in cases where bidirectional data is called for.

An AGP16X implementation will not be able to match the PCI-E upstream bandwidth. AGP handles upstream transfers at 2x PCI rates (266 MB/s), or 1 GB/s in AGP texturing mode.

As for the AGP bidirectional switching, there has usually been an overhead in doing this. It will be interesting to see how much it is once the chipset side is removed.

There may also be cases where the upstream data is required (such as video streaming to a file) and may have priority at the AGP end.
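Putting the numbers above side by side (the PCIe figure uses the published x16 rate of 250 MB/s per lane per direction; treat all of these as nominal peaks, not measured throughput):

```python
# Nominal upstream (card-to-system) bandwidth comparison, using the
# figures cited above plus the PCIe spec's per-lane rate. Peak numbers
# only; real sustained throughput will be lower in every case.

MB = 10**6
agp_2x_pci_upstream = 266 * MB          # AGP writes back at 2x PCI rates
agp_texturing_upstream = 1000 * MB      # ~1 GB/s in AGP texturing mode
pcie_x16_per_direction = 16 * 250 * MB  # 250 MB/s per lane, 16 lanes

print(pcie_x16_per_direction / agp_2x_pci_upstream)     # roughly 15x
print(pcie_x16_per_direction / agp_texturing_upstream)  # 4x
```

On paper, then, the upstream gap is anywhere from 4x to roughly 15x depending on which AGP mode you compare against, though how often a graphics workload actually saturates the upstream path is another question.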
 
Since nV likes to position itself as the technology leader (and by implication suggest ATI is not), why did nV take a retro, uncommitted approach to PCIE? (for all of their next-gen products as far as I can tell)


From an economic standpoint it makes sense. For starters, when are the PCIE MBs going to make their debut? I heard Intel's chipset is due out at the end of the month, which puts you at June availability at the earliest. Nothing really ramps until July/August. Market share of PCIE will remain low through 2004, so why spend the money to make cards that only work on an interface that will represent such a small market share for the year?

As for whether or not there will be a real-world difference in performance, I'll make a guess and say no. I would honestly be surprised if we have games designed to take advantage of PCIE in the next 18 months.

So from an economic POV it makes sense to have a bridge if it will work fine. It will ensure every 6800 U that is made will be compatible with both standards. While it may not provide the absolute best performance, I don't think it will be far off.
 
Nothing really ramps until July/August. Market share of PCIE will remain low through 2004, so why spend the money to make cards that only work on an interface that will represent such a small market share for the year?

Retail = small
OEM = Large.

Once Intel releases its PCI-E motherboards there will be LOTS of OEM slots open.
 
DaveBaumann said:
Chalnoth said:
The PCI Express portion can operate in bi-directional mode.

Terminology police: AGP is "bi-directional" (one interface, both directions); PCI-E is unidirectional (it has two paths that each go their own direction - one downstream, one up).


Terminology police Internal Affairs Unit: AGP is half-duplex. PCI-E is Full Duplex.
 
Yes, but it is confusing to say "X is unidirectional" which, according to standard definitions, would imply that X can operate in only one direction. I could not find Intel documentation saying "PCI Express is unidirectional". Instead, they say, "PCI Express has two unidirectional links."

For example, ATI's own fuddery asserts NVidia's PCI Express implementation "provides only unidirectional bandwidth", thus implying ATI PCI Express provides bidirectional bandwidth.

These are confusing usages of the terms unidirectional and bidirectional. Thus, the term "two unidirectional links" or "full duplex" would be more precise.
 