ATI spreading FUD?

It is FUD w/o a doubt, but that does not make it evil.

For all we actually know, the failure rate of a native PCI-Express implementation could be higher than that of one with a bridge chip. After all, AGP cards have been around quite a while and people are fairly competent at making them.
 
tEd said:
the truth sometimes can be FUD ;)

There is no doubt about that.

edit: I find it funny when companies say "our competitor"; that way it can always be ambiguous... "No, we meant Matrox, I swear" :)
 
Bjorn said:
http://www.ati.com/products/PCIexpress/index.html

Other graphics companies have cards that are compatible with PCI Express, but they are still only AGP cards that are "bridged" by a second chip to be physically compatible with PCI Express slots on the motherboard. This architecture can only work at AGP speeds, and is more vulnerable to failure, performance bottlenecks and incompatibility with software applications.

According to NVidia, the bridged cards can work at twice the speed of standard 8X AGP. And why would a bridged card automatically be more vulnerable to failure...?

There is no "AGP x16" standard from Intel. The bridge-chip implementation nVidia speaks of here is common to only nVidia--motherboards will support AGPx8 or PCIex16; you won't see any "AGP x16" motherboard support out there (core logic chipsets, etc.) What I believe ATi is simply saying here is that the fact that nVidia's bridge-chip implementation won't be either AGP x8 or PCIex16, but a custom "in-between mode," *might* prove problematic in certain situations. I don't think that information would classify as FUD, because it is accurate. By the same token it's far too early to know whether this custom AGPx16 implementation on top of PCIex16 motherboard logic might ever cause any problems. Evidently nVidia doesn't think that it will.

Still, that doesn't classify ATi's comments as whimsical or "straw man." What if for some reason the core logic in a PCIeX16 motherboard decides under certain conditions to do bi-directional traffic, because the motherboard logic is "fooled" by the bridge chip into configuring itself for PCIex16 operation--but the "AGP x16" bridged graphics card cannot handle it (because it's actually running in a uni-directional, AGP-only mode)? I don't know if this might be a valid happenstance, but it certainly seems *possible* I would think, at least theoretically.

I think as well, that ATi wants to make the point that it will be directly supporting PCIex16 natively in its PCIex16 pcbs (without a need for any pseudo-support mode), and as such that's an entirely reasonable distinction for ATi to make. If you are going to try and make the case that nVidia's "AGP x16" is "just as good" as PCIex16, then why doesn't nVidia simply follow ATi and do a native PCIex16 pcb itself and forego "AGP x16" completely as ATi has done? I think that it's a reasonable distinction to make, myself, and would expect that nVidia would be making it if the situation was reversed. Indeed, I fully expect nVidia to eventually natively support PCIex16 just as ATi is doing initially, so the "AGP x16" bridge chip is but a temporary solution and therefore becomes harder to ultimately justify, doesn't it?...;)
 
Bridge chip is just another thing that can fail. I see absolutely no reason that it would cause compatibility issues (except for reduced speed in some software) since it will appear to the motherboard chipset to be a normal PEG card. It will not function internally as a PEG card, which should have no ramifications on compatibility with motherboards at all.

Then again, I wonder about latency.
 
With bi-directional PCI Express you could finally implement virtual graphics memory. You could write pages out to system RAM while loading new pages into graphics card RAM at the same time, without having to worry about bandwidth tied up by other devices on the bus.
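Just to illustrate the idea, here's a toy sketch of the bookkeeping in C. It isn't any real driver API; the names, page counts, and the memcpy stand-ins for the two DMA directions are all made up:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Toy model of virtual graphics memory: a tiny "video RAM" backed by a
 * larger "system RAM" pool. The two memcpy calls stand in for DMA
 * transfers that a full-duplex PCIe x16 link could run in both directions
 * at once, instead of taking turns the way a half-duplex AGP bus must. */

#define PAGE_SIZE  4096
#define VRAM_PAGES 4        /* deliberately tiny */
#define SYS_PAGES  16

static uint8_t vram[VRAM_PAGES][PAGE_SIZE];
static uint8_t sysram[SYS_PAGES][PAGE_SIZE];
static int     resident[VRAM_PAGES];   /* which system page each VRAM slot holds */

/* Bring system page 'want' into VRAM slot 'slot', evicting its current page. */
static void page_swap(int slot, int want)
{
    int victim = resident[slot];

    /* Upstream: write the evicted page back out to system RAM...        */
    memcpy(sysram[victim], vram[slot], PAGE_SIZE);
    /* Downstream: ...while the wanted page streams into video RAM.      */
    /* (Concurrent on full-duplex PCIe; strictly sequential on AGP.)     */
    memcpy(vram[slot], sysram[want], PAGE_SIZE);

    resident[slot] = want;
    printf("slot %d: evicted page %d, loaded page %d\n", slot, victim, want);
}

int main(void)
{
    for (int i = 0; i < VRAM_PAGES; i++)
        resident[i] = i;       /* start with system pages 0..3 resident */

    page_swap(2, 9);           /* working set changes as the scene does */
    page_swap(0, 12);
    return 0;
}

The win isn't the copying itself, it's that the write-back and the fill no longer have to take turns on one half-duplex pipe.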

Say, didn't ATI say there was something big coming in their drivers in a couple of months? ;)
 
The Baron said:
Bridge chip is just another thing that can fail. I see absolutely no reason that it would cause compatibility issues (except for reduced speed in some software) since it will appear to the motherboard chipset to be a normal PEG card. It will not function internally as a PEG card, which should have no ramifications on compatibility with motherboards at all.

Then again, I wonder about latency.

If so, does XP carry PEG support? ;)

I would think it would require a driver if it did recognize it as a PEG card.
 
Hopefully, they've totally revamped their interface for controlling IQ. Right now, it sucks.
 
micron said:
The Baron said:
Hopefully, they've totally revamped their interface for controlling IQ. Right now, it sucks.
I thought I was the only one who believed that...
Nope. It's the most totally unintuitive, clumsy, and all-around stupid way to manage IQ. They seem to do it a HELL of a lot better on their Mac stuff--why can't we get some of that (supersampling not included)?
 
Any new standard is bound to cause incompatibility. There is always leeway in implementation; that's why some mainboard chipsets perform better than others. (Remember VIA and AGP?)

Claims that consumers are going to get bitten because developers will target applications and games at full-duplex 4 GB/s communication are sophistry in action. People supporting NVidia make similar claims with respect to PS3.0, but we all know that PS3.0, like PCI-E, will be optional, and it will be a while before apps that seriously put those features to good use are widespread.

It's FUD, pure and simple, and it will be interesting to see how those playing up PCI-E differentiation on this BBS will react when the VS3.0/PS3.0 differentiations start coming out. I'm sure NVidia's PR on how SM3.0 will make some games look better or run faster will be quickly dismissed as FUD. It's the battle of the 1% applications. On ATI's side, we have HDTV editing and PCI-E. On NVDA's side, we have SM3.0 and Hi-Def video encoding.

None are required, all are cool to have.
 
Kombatant said:
First thing that comes to mind is that a bridge chip is.. well.. a bridge, to translate AGP to PCI Express. I sincerely doubt that would enhance performance, since all it does is "connect" the two interfaces. Your speed can only be as fast as your source is. And if your source is AGP, you don't get faster than that.

I think you should've rather said "Your speed can only be as fast as your slowest link.. which is AGP 8x."

US
 
Heck, how many games out there or in the works use PS 2.0... I haven't read any web page or review of the 6800 that even mentions games that fully use PS 2.0. I bet we won't care until R500 is out on PS 3.0...
 
Unknown Soldier said:
Kombatant said:
First thing that comes to mind is that a bridge chip is.. well.. a bridge, to translate AGP to PCI Express. I sincerely doubt that would enhance performance, since all it does is "connect" the two interfaces. Your speed can only be as fast as your source is. And if your source is AGP, you don't get faster than that.

I think you should've rather said "Your speed can only be as fast as your slowest link.. which is AGP 8x."

US

I could also have said that English is not my first language, but I thought my location already gave that away 8)
 
hehe :) Just trying to help. :)

Glad to see you took it so positively. :D

Yep, even with the HSI I can't see how an AGP 8x slot can become AGP 16x, even if the card does have a good HSI chip. The mobo's slot is still 8X and would also be a limiting factor.
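
To put some rough numbers on that "slowest link" point, here's a quick back-of-the-envelope in C. These are just the usual published peak figures, and the "AGP 16x" rate is only nVidia's claimed doubled bridge link, not a slot standard anywhere:

#include <stdio.h>

/* Rough peak bandwidth figures in MB/s. The "AGP 16x" number is simply
 * the claimed doubling of AGP 8x across the on-card bridge link; no
 * motherboard slot actually runs at that rate. */
#define AGP8X_MBPS    2133   /* 66 MHz x 8 x 4 bytes, half-duplex        */
#define AGP16X_MBPS   4266   /* claimed bridge-link rate, half-duplex    */
#define PCIE_X16_MBPS 4000   /* ~250 MB/s per lane, per direction, x16   */

static int slowest(int a, int b) { return a < b ? a : b; }

int main(void)
{
    /* Bridged card in a PCIe x16 slot: GPU <-> bridge <-> chipset.       */
    printf("bridged card, PCIe slot, AGP 8x core link : %d MB/s\n",
           slowest(AGP8X_MBPS, PCIE_X16_MBPS));
    printf("bridged card, PCIe slot, 'AGP 16x' link   : %d MB/s\n",
           slowest(AGP16X_MBPS, PCIE_X16_MBPS));

    /* The same silicon behind an AGP 8x motherboard slot: the slot caps it. */
    printf("bridged card, AGP 8x slot                 : %d MB/s\n",
           slowest(AGP16X_MBPS, AGP8X_MBPS));

    /* Native PCIe x16 card: full rate, in each direction at once.        */
    printf("native PCIe x16 card (per direction)      : %d MB/s\n",
           PCIE_X16_MBPS);
    return 0;
}

So the doubled link only means something when the bridge sits on a PCI Express board; put the same silicon behind an 8x slot and the slot is the slowest link again.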
 
WaltC said:
There is no "AGP x16" standard from Intel. The bridge-chip implementation nVidia speaks of here is common to only nVidia--motherboards will support AGPx8 or PCIex16; you won't see any "AGP x16" motherboard support out there (core logic chipsets, etc.)
Why does this have any relevance? The "AGP x16" mode will only be active when the bridge chip is in use. The data transfer mode is there to make better use of the bridge than nVidia's older graphics processors will be able to. It was described as being able to emulate the bi-directional mode of PCI Express by very rapidly switching between sending and receiving data.
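
Nobody outside nVidia has spelled out exactly how that switching works, so treat this as nothing more than a crude model; the raw rate is the claimed doubled bridge link and the 5% turnaround cost is invented. It just shows what an emulated bi-directional mode can and can't buy compared to a genuinely full-duplex link:

#include <stdio.h>

/* Crude model: a half-duplex link shares its raw rate between the two
 * directions by rapidly alternating, while a full-duplex link gives the
 * full rate to each direction at the same time. The turnaround overhead
 * is an invented 5%, just to show the switching itself isn't free.      */
int main(void)
{
    const double raw_mbps   = 4266.0;  /* claimed "AGP 16x" bridge link   */
    const double pcie_mbps  = 4000.0;  /* PCIe x16, per direction         */
    const double turnaround = 0.05;    /* assumed direction-switch cost   */

    /* With equal upstream and downstream demand, each direction gets a
     * bit less than half the raw rate of the shared link.                */
    double emulated_each_way = raw_mbps * (1.0 - turnaround) / 2.0;

    printf("emulated bi-directional, per direction: %.0f MB/s\n", emulated_each_way);
    printf("native PCIe x16, per direction        : %.0f MB/s\n", pcie_mbps);
    return 0;
}

When the traffic is mostly one-way, as graphics traffic usually is, the alternating link looks nearly as good; the gap only opens up when both directions are busy at once.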

What I believe ATi is simply saying here is that the fact that nVidia's bridge-chip implementation won't be either AGP x8 or PCIex16, but a custom "in-between mode," *might* prove problematic in certain situations.
Now that is FUD, plain and simple.

I don't think that information would classify as FUD, because it is accurate.
It's FUD because of the "might."

If you are going to try and make the case that nVidia's "AGP x16" is "just as good" as PCIex16, then why doesn't nVidia simply follow ATi and do a native PCIex16 pcb itself and forego "AGP x16" completely as ATi has done?
I saw a good argument on one of the previews (forget which one, unfortunately):
By using a bridge chip, nVidia will have a much easier time clearing chip inventory. On the other hand, if ATI misreads the market, and sells more of one type of R42x than they planned, they will have a much harder time getting rid of the inventory of the other type.

Indeed, I fully expect nVidia to eventually natively support PCIex16 just as ATi is doing initially, so the "AGP x16" bridge chip is but a temporary solution and therefore becomes harder to ultimately justify, doesn't it?...;)
Yes, with the rest of the NV4x lineup. They will use the bridge chip too, however, to operate on the AGP bus.
 
pax said:
Heck, how many games out there or in the works use PS 2.0... I haven't read any web page or review of the 6800 that even mentions games that fully use PS 2.0. I bet we won't care until R500 is out on PS 3.0...

Yah, this is a riot. I've stayed away from the 3D web sites for quite a while because of the disgusting partisanship from both sides. I finally come back to read up on the next gen and see all the reviews, with not much in the way of DX9 still to this day.

/wave, cya next review cycle. Hopefully DX9 will actually matter then, let alone PCI-E!

Edit: No FUD to be found on these boards or in this industry! Where would someone get that idea?
 