ATI spreading FUD?

Evildeus said:
I suppose it could induce issues, but that depends on how the NV bridge works: how does PCI-E software affect NV's bridge, and will PCI-E software take the NV bridge into account? etc.

It's not a question of how it works, but what PCI-E delivers. If you write an application expecting unidirectional transfer rates of 4GB/s, both upstream and downstream simultaneously, as defined by the SIG, then at the very least you are going to run into upstream bandwidth issues, as the bridge physically cannot provide the full upstream bandwidth (at that utilisation you are also likely to run into the bidirectional transfer issues of AGP, but that needs to be tested). If someone looks at the PEG16X specifications and writes an application with those requirements in mind, to be utilised fully, then they are going to run into issues.

Inherently, any bridged solution that relies on AGP at the back end is not providing the full PEG16X feature set, unless it can fully solve the upstream bandwidth and bidirectional transfer issues at high bandwidth utilisation.
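
To put rough numbers on it, here's a quick back-of-the-envelope sketch (Python; these are the commonly quoted peak figures rather than measurements, the AGP upstream rate in particular is only approximate, and a bridge vendor can of course clock its internal AGP link faster than standard 8X):

[code]
# Back-of-the-envelope peak-bandwidth comparison (quoted peak figures,
# not measurements; the AGP upstream number is approximate).

PCIE_LANE_GBPS = 0.25               # 2.5Gbit/s per lane, 8b/10b coded -> 250MB/s
peg16x_down = 16 * PCIE_LANE_GBPS   # 4.0 GB/s downstream...
peg16x_up = 16 * PCIE_LANE_GBPS     # ...plus 4.0 GB/s upstream, simultaneously

agp8x_down = 2.1    # GB/s, peak downstream
agp8x_up = 0.266    # GB/s, roughly PCI-write speed upstream (approximate)

# An app written against the PEG16X spec may assume both directions at once.
print(f"PEG16X: {peg16x_down} GB/s down + {peg16x_up} GB/s up, full duplex")
print(f"AGP8X:  {agp8x_down} GB/s down, ~{agp8x_up} GB/s up, half duplex")
print(f"Upstream shortfall behind an AGP back end: {peg16x_up - agp8x_up:.2f} GB/s")
[/code]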
 
DaveBaumann said:
If someone looks at the PEG16X specifications and writes an application with those requirements in mind, to be utilised fully, then they are going to run into issues.

Wouldn't the "only" problem be that it runs slower than expected?

And realistically, how many applications will use PCI-Express features during the lifetime of the PCX cards?

Inherently, any bridged solution that relies on AGP at the back end is not providing the full PEG16X feature set, unless it can fully solve the upstream bandwidth and bidirectional transfer issues at high bandwidth utilisation.

That's true. And I'm guessing that Nvidia won't add a disclaimer about that fact either. Though judging by the specs of the cards, the people who buy them aren't likely to notice any of that.
 
DemoCoder said:
The peak speed difference isn't FUD, but IMHO, talk about potential failures and software compatibility is. I could just as easily point out that AGP is a well-understood interface, and that there is *more* potential for buggy implementations of PCI-E, as it is new and immature; this applies to both bridge chips and native implementations.
With regards to the potential failures, I definitely think they have a point. If you have a bridge chip, you have another point at which failure can occur. If I were someone who owned an NV card with a bridge chip and the bridge chip ended up frying itself after, say, 6 months, I would be pretty peeved, as I would have a working graphics chip that can't interface with my comp. Granted, this is pretty hypothetical, but it does show that there is merit to the failure part of the statement.
 
Bjorn said:
Wouldn't the "only" problem be that it runs slower than expected?

Who knows. Running slower is likely to be the most commonly manifested issue, but if your app needs to be realtime and you are expecting 4GB/s upstream, then you'll have to use something other than the bridged solutions out there. If you have apps that rely on the unidirectional bandwidth in both directions at once and run into bidirectional issues from the AGP end, you could start getting glitches and stalls.

And realistically, how many applications will use PCI-Express features during the lifetime of the PCX cards?

What's the lifetime of a bridged solution? NVIDIA may not be selling them that long (for bridging AGP 3D cores, at least), but for the user who thinks he's just bought a full PCI-E system, how long is it before he is expected to upgrade again?

Inherently, any bridged solution that relies on AGP at the back end is not providing the full PEG16X feature set, unless it can fully solve the upstream bandwidth and bidirectional transfer issues at high bandwidth utilisation.

That's true.

Take that further - there are no specifications for bridged solutions, so the more types there are out there, the more compatibility becomes an issue, because you'll have numerous parts out there with vague specifications and abilities that could float somewhere between standard AGP and PEG16X.
 
Bridged solutions are for customers buying computers from Dell, Circuit City, and Best Buy. These customers will see no difference, and no programmer in their right mind would write a program for this audience requiring a full PCI-Express card in the next few years. Compatibility is one of the most required aspects for this market. Look at high-end games such as Far Cry: it doesn't even require a DVD drive, as it's still on CDs in the US market. (In my opinion, if you can run Far Cry on your computer then you most likely have a DVD drive.)

People who will use programs requiring a non-bridged solution will buy a non-bridged chip, and by that time both vendors will have a solution out, so they won't be swayed by the statements of either vendor.

The people who will be swayed are the people buying from Dell, Circuit City, Best Buy etc... And it will be because some salesperson sees the statement and tells the customer that the one computer with such a solution is prone to failure, speed issues, and compatibility problems. These customers know no better and will take that advice, seeing huge problems, not the minuscule differences.

I'm not saying ATI is doing something wrong or unethical, but I am saying it will end up becoming that way for certain customers, who will be getting a product they don't understand under false pretenses. The problem is the salespeople, but ATI should know better and understand that fact.
 
ATI have a native PCIe solution, and are using it to differentiate their product. They consider native solutions to be better for the reasons they have outlined. Nvidia are using a bridge chip and claiming that there is no difference from a native solution, thus negating ATI's "native advantage".

You could look at this as Nvidia spreading FUD in order to negate ATI's advantage of a native solution because Nvidia don't have one. ATI is fighting back by explaining why they think native PCIe is better and bridged solutions are worse. Yet Nvidia insists that a bridge chip has the same advantages. Who is right, and who is spreading FUD about the other's solutions?

Personally, if what is claimed is true, I don't think it can be considered FUD. It's up to you as a customer to decide whether what any company claims about anything is actually true, or just designed to create fear, uncertainty or doubt amongst potential customers of a rival's products. So the question is: is what ATI says about the advantages of a native PCIe solution versus a bridge chip true or not? Is it ATI or Nvidia manipulating the facts?

Here's a related thought: Nvidia is implying very strongly (through screenshots and presentations) that without PS 3.0 you won't get good visuals. Nvidia is doing this because they know ATI won't have PS 3.0 this generation. Is it true that you'll get worse visuals without PS 3.0, or is it FUD?
 
My impression regarding the bridge chip was that Nvidia needs it, first, because they don't have a solid native solution yet and, second, to clear out inventories. But IIRC ATi has also stated that they will have a bridged solution. (Please correct me if I am wrong here.) If so, they are spreading "FUD" wrt their own products as well.
 
Sabastian said:
My impression regarding the bridge chip was that Nvidia needs it, first, because they don't have a solid native solution yet and, second, to clear out inventories. But IIRC ATi has also stated that they will have a bridged solution. (Please correct me if I am wrong here.) If so, they are spreading "FUD" wrt their own products as well.

AFAIK, ATi will only use the bridge chip for going from PCI Express to AGP (motherboards). This won't create the same problems, since it's going to a bus with fewer capabilities and it's going to be easy to identify. For bridged solutions, developers might also need to check the ID of the chip to verify whether it's a native solution or not. Assuming that a non-native solution could create potential problems, that is.
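
As an illustration of the kind of check that could mean, here's a minimal sketch (Python, Linux-only, reading PCI config space via sysfs; the device address is hypothetical, reading the full 256 bytes generally needs root, and whether a given bridged card actually advertises these standard capabilities the way you'd expect is exactly what would need verifying):

[code]
CAP_AGP, CAP_PCIE = 0x02, 0x10     # standard PCI capability IDs

def capabilities(cfg: bytes):
    """Yield capability IDs from a raw 256-byte PCI config-space dump."""
    if not cfg[0x06] & 0x10:       # Status register bit 4: capability list present
        return
    ptr = cfg[0x34] & 0xFC         # Capabilities pointer
    while ptr:
        yield cfg[ptr]             # capability ID
        ptr = cfg[ptr + 1] & 0xFC  # next-capability pointer

# Hypothetical device address for the graphics card; adjust for your system.
with open("/sys/bus/pci/devices/0000:01:00.0/config", "rb") as f:
    cfg = f.read(256)
if len(cfg) < 256:
    raise SystemExit("need root to read the full config space")

vendor = cfg[0] | cfg[1] << 8
device = cfg[2] | cfg[3] << 8
caps = set(capabilities(cfg))
print(f"{vendor:04x}:{device:04x}  PCIe cap: {CAP_PCIE in caps}  AGP cap: {CAP_AGP in caps}")
[/code]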
 
Sabastian said:
But IIRC ATi has also stated that they will have a bridged solution. (Please correct me if I am wrong here.) If so, they are spreading "FUD" wrt their own products as well.

They will have a bridge for retail market sales to make their PCI-E native cores compatible with AGP systems (as NVIDIA will turn their bridge around once they have fully transitioned their line to PCI-E native).
 
Anyone recall the FUD re: images purportedly indicating an "on-die bridge chip" that one IHV was courteous enough to point out...?
 
Ahh, thanks for that (Dave, Bjorn). So ATi will not bridge native AGP solutions to PCI-E, but rather PCI-E to AGP. That allows them to hold the "moral high ground", so to speak. I am sure they will capitalize on that as much as possible; of course, I would expect the same from any company in this business. In which case I can only imagine the "FUD" Nvidia would incite if the tables were turned.
 
Sabastian said:
In which case I can only imagine the "FUD" Nvidia would incite if the tables were turned.

I don't have any doubts that we would see the same FUD from Nvidia if the tables were turned.
 
DaveBaumann said:
It's not a question of how it works, but what PCI-E delivers. If you write an application expecting unidirectional transfer rates of 4GB/s, both upstream and downstream simultaneously, as defined by the SIG, then at the very least you are going to run into upstream bandwidth issues, as the bridge physically cannot provide the full upstream bandwidth (at that utilisation you are also likely to run into the bidirectional transfer issues of AGP, but that needs to be tested). If someone looks at the PEG16X specifications and writes an application with those requirements in mind, to be utilised fully, then they are going to run into issues.

Inherently, any bridged solution that relies on AGP at the back end is not providing the full PEG16X feature set, unless it can fully solve the upstream bandwidth and bidirectional transfer issues at high bandwidth utilisation.

I don't mean to be argumentative for argumentation's sake, but *any* application with the requirement "4GB/s sustained full-duplex throughput" needs to *seriously* evaluate the consumer-PC 'state of affairs' before embarking on such development. A dual-channel (128-bit) PC3200 main-memory system (Athlon FX, Pentium4/800FSB) can deliver *at best* 6.4GB/s of bandwidth (single-ported RAM, so half-duplex). Peripherals (hard drive), CPU, and refresh overhead will all take their 'cut of the bandwidth pie.'

For a consumer-PC application that *requires* 4GB/sec full-duplex traffic for proper operation, the whole bridged-PCIe-X16 vs native-PCIe-X16 issue is the least of your worries. The weakest link will be the rest of the PC.
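
To make the arithmetic concrete, a minimal sketch using peak figures only (a worst-case budget, assuming all the PCI-E traffic crosses main memory):

[code]
# Worst-case budget: peak dual-channel PC3200 bandwidth vs. an app that
# wants 4GB/s each way over PCI-E, assuming all of it crosses main memory.

bus_width_bytes = 128 // 8       # dual channel = 128-bit path = 16 bytes
transfer_rate_mt = 400           # PC3200 is DDR400: 400 MT/s
mem_peak_gbs = bus_width_bytes * transfer_rate_mt / 1000   # 6.4 GB/s

pcie_traffic_gbs = 4.0 + 4.0     # upstream + downstream

print(f"Main-memory peak:      {mem_peak_gbs:.1f} GB/s (single-ported, half duplex)")
print(f"PCI-E traffic desired: {pcie_traffic_gbs:.1f} GB/s aggregate")
# Before the CPU, refresh, and peripherals take their cut, the RAM is
# already oversubscribed; the bus interface isn't the weakest link.
[/code]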
 
Bjorn said:
http://www.ati.com/products/PCIexpress/index.html

Other graphics companies have cards that are compatible with PCI Express, but they are still only AGP cards that are "bridged" by a second chip to be physically compatible with PCI Express slots on the motherboard. This architecture can only work at AGP speeds, and is more vulnerable to failure, performance bottlenecks and incompatibility with software applications.

According to Nvidia, the bridged cards can work at twice the speed of standard 8X AGP. And why would a bridged card automatically be more vulnerable to failure...?

Bjorn said:
That's true, but "marketing departments" and "having proof of something" are two phrases that usually don't combine that well.


It isn't FUD, or anything close to it. It's marketing 101, standard F&B (features & benefits) marketing. As others have said, it's a point of differentiation.

How else are they going to compete? By not comparing themselves to the competition?

EDIT - quotes
 
A dual-channel (128-bit) PC3200 main-memory system (Athlon FX, Pentium4/800FSB) can deliver *at best* 6.4GB/s of bandwidth (single-ported RAM, so half-duplex).

Bear in mind that PCI-E systems will have support for DDR and DDR2, so that can bring the memory bandwidth up. But it's not necessarily the case that every piece of data that flows through the various system busses flows through system RAM as well.
 
DaveBaumann said:
A dual-channel (128-bit) PC3200 main-memory system (Athlon FX, Pentium4/800FSB) can deliver *at best* 6.4GB/s of bandwidth (single-ported RAM, so half-duplex).

Bear in mind that PCI-E systems will have support for DDR and DDR2, so that can bring the memory bandwidth up. But it's not necessarily the case that every piece of data that flows through the various system busses flows through system RAM as well.


Intel may be making the move towards DDR2 sooner rather than later, but not AMD. That means that if you want an early PCI-E system with DDR2 you're going to have to buy a Prescott - something that many people flat out won't do. AMD probably won't offer DDR2 support until 2005 (at least that's what I've read - http://www.anandtech.com/showdoc.html?i=2006). So I don't really think that the benefits of first generation DDR2 will be very wide-reaching.
 
DaveBaumann said:
It's not a question of how it works, but what PCI-E delivers. If you write an application expecting unidirectional transfer rates of 4GB/s, both upstream and downstream simultaneously
I thought that even AGP parts don't take advantage of all the bandwidth, so let me doubt that any consumer application will do so. Even if that kind of application exists, the first normal consequence is, as Bjorn said, slower performance (1%?). Moreover, we still don't know whether future applications will take the NV solution into account or not. If that's the case, where's the issue?

PS: How does the application know whether the bus is PCI-E or AGP? (I don't know.)

Inherently, any bridged solution that relies on AGP at the back end is not providing the full PEG16X feature set, unless it can fully solve the upstream bandwidth and bidirectional transfer issues at high bandwidth utilisation.
I agree.
 
Baalthazaar said:
Intel may be making the move towards DDR2 sooner rather than later, but not AMD. That means that if you want an early PCI-E system with DDR2 you're going to have to buy a Prescott - something that many people flat out won't do. AMD probably won't offer DDR2 support until 2005 (at least that's what I've read - http://www.anandtech.com/showdoc.html?i=2006). So I don't really think that the benefits of first generation DDR2 will be very wide-reaching.

Though I wonder why Intel is so desperate to move to DDR2, because from what I've seen it's going to be slower than DDR for quite some time because of the extra latency. And more expensive.
 
DaveBaumann said:
Take that further - there are no specifications for bridged solutions, so the more types there are out there, the more compatibility becomes an issue, because you'll have numerous parts out there with vague specifications and abilities that could float somewhere between standard AGP and PEG16X.

I'd expect compatibility to be a non-issue, because it's an internal interface that should be transparent (with respect to compatibility) to the user. Don't get me wrong, I prefer a native PCI Express solution to a bridge. But personally I can't see nVidia stumbling here.
 