Don't get fooled by .... nvidia attacking Ati

radar1200gs said:
Personal attacks aren't supposed to be made in these forums. I thought that you, as a moderator would be the last person needing to be reminded of that...

It wasn't a personal attack, it was a personal observation..... if it had been an attack, I'd have replaced "unintelligent" with "friggin stupid"...... which I didn't!..... ;)

Guess my point is that you only show up here to regurgitate misleading information and to try to attack some very knowledgeable people. You never add to a discussion. While you are right that I really shouldn't do it, and as such I apologize....... I just get tired of your baseless abusive comments. If B3D treated you the same way your beloved nVnews treats non-nVidia supporters, you would have been banned here long ago..... :rolleyes:

Of course, then where would so many here get their sigs? :LOL:
 
radar1200gs said:
Desperately awaiting a loosened-up PCI Express spec are we, Dave? If the card can't meet the specs, the specs will just have to meet the card...

Like the Inquirer points out it will be fascinating to see how various first gen PCI Express cards perform on future chipsets when they are no longer the latest and greatest cards. I have no doubt about who I think will have no problems running flawlessly and who will run into trouble and require all kinds of workarounds.

Of course you could have pretty much predicted this sort of outcome just by watching how ATi and nVidia coped with the introduction of AGP 8X...
Did ya read anything I wrote Radar or did you just come into the thread blindly swinging? :rolleyes:

BTW - just a selective quote from the tail-end of me latest e-mail exchange with the author, having linked him to me story on ATi's reply:

As for taking PR at face value, you should know better. This one was REALLY well researched, trust me there.
Methinks he is implying that I should trust his word over that of Patti's, which I would do.....IF I WERE AN IDIOTIC LUNATIC WITH DELUSIONS!!!!!!

Good gads does this guy's stuff annoy me. It's just total BS that he always says "but I have seen proof of this, you gotta trust me on it". :rolleyes:
 
DW, the quote from ATI doesn't really, in my mind, show that everything is complete BS; it seems very much like damage control. Anyway, we will see. I don't much care, but I am curious about how it will play out in the end.
I really do not feel that she cleared up much for you, except that perhaps it makes you happy she responded to you. She said nothing has finished testing yet, but they think it will very soon. Perhaps it will all be peachy, but saying that they expect it doesn't make it so. In any case, if you have great trust in them, then that is well and good, but without that she really did not present any evidence to show that the Inq article is unfounded.

Yes, she did specifically say: "Because the circumstances of the testing are unknown, we will not comment on the actual test results except to say that they are not valid." So she is going far enough to say the tests are not valid, but why didn't she say "ATI PCIe cards are compliant with the electrical standards"? If they actually are, wouldn't that be a good way to refute the article? You see, thanks to PR speak, saying that the results are not valid is not equivalent to saying that the conclusions are not valid.
Patricia Mikula, an ATI spokeswoman:
"The author is suggesting that more white space around the red diamond in the eye diagrams is an indication of compliance. This is not correct. A product such as the GeForce 6600 (used as an example in the document) could pass the electrical test and not be deemed compliant."


That is from xbit and ATI's refutation. Of course, you notice it doesn't actually say anything relevant. It states that it (the 6600) could pass the electrical tests, which means that the picture actually does show whether it is passing the electrical test. Of course it could fail at another point, such as if there was no displayed output. That is like saying that even if you are not speeding you could still get a ticket - well, um, yes... but not speeding definitely helps you avoid a ticket compared to someone who is speeding.
 
DaveBaumann said:
Well, a crossbar usually indicates that more than one input can go to more than one output (as in the case of NV40); however, with everything set up in "quads" there is only one output in NV43, which, IMO, negates the need for an actual crossbar. I'd guess it's more than likely just a FIFO there.
I'm missing something here, as NV40 appears to be just NV43 writ large, to me (4 quads -> 8 ROPs vs. 2 quads -> 4 ROPs). Both GPUs appear to have the same proportion of pipes and ROPs, and the pipe:ROP proportion relative to the (equally proportioned) memory bus width is the same for both.

You lost me with "more than one input" to "more than one output." I suspect I just missed something in your 6600GT review, so I'll give it another read.
 
The first thing that you need to consider is that everything is done in quads – the pixel shaders operate on 4 pixels, and those 4 pixels will then be passed to the ROP groups, which also work on 4 pixels. So what we actually have with the likes of R420 and NV40 are 4 quad pipelines.

Here the first letters (A-D) are the fragment processor quads, and the later letters (W-Z) the ROP quads.

Code:
A B C D

W X Y Z

With R420 there is a direct relationship – the output of 4 values from fragment quad A will always go to W, B always to X, C to Y, etc. The "fragment crossbar" in NV40 is a switch – it means that the output from A could end up being processed by W, X, Y or Z, and likewise with B, C, or D – the output from any will just get processed by the first available ROP quad. This is what I mean by multiple inputs that can go to any one of many outputs (but bear in mind that we are always dealing with quads of pixels).

Now, NV43 is like this:

Code:
A B

 Z

Now, while there are two inputs, there is only one output, which negates the need for a "crossbar" - like I said, I should imagine there is just a FIFO between the fragment pipes and the ROPs.

The "crossbar" term is not the important thing, though. The real consideration is that there is no direct correlation between the ROP quads and fragment quads in NV4x, which is unlike R3x0, and it can lead to design decisions like these (which ATI can't make at the moment) and also gives other benefits (I think this is why it can do 2x Z/stencil, as it can bypass the fragment pipeline, which can only accept one quad at a time).
 
Long time lurker... First post. 8)

I've been a professional EE in the semi industry for quite a few years now, and I'm suspicious about those eye diagrams provided in the Inq story (and evidently in some mysterious white paper - anyone got a link to that?). IMHO, those eye diagrams are not what they claim.

1) Eye diagrams are usually taken over many thousands of cycles; these only look like they have a few hundred, as evidenced by the clear outlying traces.
2) For some reason, not all the traces start at the same point in time and/or voltage. I've never seen a scope or analyzer that would trigger/record like that. To me, those lines look like they were drawn with MacPaint or something.
3) The eye diamond shows a height of approx +/- 250mV = 500mV p-p, which would about match the PCIE spec. What the Inq diagram does not show is that this eye also has a max voltage spec. Specifically, the PCIE spec has two eye diagrams: one for "de-emphasized bits", which specifies 566mV >= Vp-p >= 505mV; the other, for "transition bits", specifies Vp-p >= 800mV. Thus the second graph showing the "good" eye would be failing terribly, because it has a 700mV swing, and there is no way to tell if it's a de-emphasized bit or a transition bit.
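The voltage argument in point 3 can be sketched as a quick check - note these limits are just the numbers quoted in this post, not pulled from the actual PCIe spec document:

```python
def eye_voltage_ok(vpp_mv, bit_type):
    """Return True if a peak-to-peak eye voltage meets the quoted PCIE limit."""
    if bit_type == "de-emphasized":
        # quoted window: 505 mV <= Vp-p <= 566 mV
        return 505 <= vpp_mv <= 566
    if bit_type == "transition":
        # quoted minimum: Vp-p >= 800 mV
        return vpp_mv >= 800
    raise ValueError("unknown bit type: %r" % bit_type)

# The ~700 mV swing in the "good" eye fails under either interpretation:
print(eye_voltage_ok(700, "de-emphasized"))  # False (above the 566 mV cap)
print(eye_voltage_ok(700, "transition"))     # False (below the 800 mV floor)
```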

A quick Google provides an excellent link with a diagram of what a real PCIE eye diagram looks like.
Tektronix PCIE testing software


Finally, just to be snippy, the author clearly has no idea what "common mode" actually means, and seems to be pretty weak on the concept of jitter as well. Not to mention the silliness of posting graph results and interpretation without any kind of experiment description. Just kind of pisses me off (enough to finally get off my lazy a$$ and post here).
 
Thanks much for the explanation, Dave! I didn't think of grouping the ROPs, like the pipes, in quads. Interesting point about the decoupled quads allowing for double z/s, but did the same "independent quads" apply to the NV30 and its double z/s ability? It doesn't seem so, as the NV30/35, at least, appeared to have only one quad...?

Welcome, fritz. Internet forums are nothing if not snippy, but at least you've contributed some protein along with that spice. :)
 
Well, so much for my brilliant first post. :(

I did a bit more Googling and came across an instruction manual describing the use of an Agilent scope for PCIE compliance testing with the "SIGTEST" software, evidently supplied by the PCI-SIG itself. The eye diagrams appear to be the exact same style as those posted in the Inq (see page 45 in the pdf below). In referring to this manual, what I do not yet understand is why the "transition bit" target is only ~500mV p-p when the PCIE spec states that the minimum eye is 800mV p-p. So I'm now confused. :? Sorry if I confused you too. I still don't know if the test was valid or not, but my initial objections are invalid.

(But, IMHO, the SIG software looks terrible compared to the signal integrity displays I'm used to seeing. The LeCroy or the Tektronix look much better.)

SIGTEST software manual pdf
 
The 4-ROP limitation of the NV43 is somewhat of a red herring, since you only bump up against it if you have a nearly do-nothing pixel shader. Given that future games are heavy on pixel shader usage, it's more important how many pixel shader pipes you have, since it is very unlikely you will be ready to write out one pixel per clock. Although ROPs are still important for AA and stencil shadow/shadow map scenarios, it is somewhat misleading to focus on the 4 ROPs as some kind of crippling or tragic flaw.

In a future where shader units may be divorced from ROPs and available to be pooled and recombined in different configurations, it's even less relevant. Both shader power and ROPs should be increased, but I expect which card has better balance depends on the scenario: shader-limited games need fewer ROPs, but shadow-limited, untextured, unshaded fillrate games will be limited by ROPs. It's hard to build a card that is simultaneously optimized for Half-Life 2 and Doom3/UE3.
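The balance argument above is really just a min() of two rates. A back-of-envelope sketch, with purely illustrative quad counts and shader lengths (not measured figures for any real chip):

```python
def pixels_per_clock(shader_quads, rop_quads, shader_cycles_per_pixel):
    """Peak pixel output is capped by whichever stage is the bottleneck."""
    shader_rate = shader_quads * 4 / shader_cycles_per_pixel  # pixels/clock
    rop_rate = rop_quads * 4                                  # pixels/clock
    return min(shader_rate, rop_rate)

# NV43-like layout (2 fragment quads, 1 ROP quad):
print(pixels_per_clock(2, 1, 1))  # trivial 1-cycle shader: ROP-limited at 4
print(pixels_per_clock(2, 1, 4))  # 4-cycle shader: shader-limited at 2
```

As soon as the shader takes more than two cycles per pixel, the single ROP quad stops being the bottleneck - which is the "red herring" point.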
 
DC, if that post was aimed at me, I certainly don't see the NV43's 4 ROPs as a limitation, and the benchmarks make that plain. Props to nV for a neat bit of engineering, especially if it lets them allocate more transistors to the parts of the pipeline that need it.

Even if it wasn't aimed specifically at me, thanks for the info.
 
radar1200gs said:
DW - I don't need to read what commentators are saying - the oscilloscope readouts tell me everything I need to know.

Actually, Radar, you should have substituted "nVidia" for "the oscilloscope readouts"...... then many here would believe you...... ;)
 
martrox said:
radar1200gs said:
DW - I don't need to read what commentators are saying - the oscilloscope readouts tell me everything I need to know.

Actually, Radar, you should have substituted "nVidia" for "the oscilloscope readouts"...... then many here would believe you...... ;)

Why is that Martrox???

I didn't need nVidia to tell me ATi was lying their arse off about DDR-2 on the R300.

I didn't need nVidia to tell me ATi was having problems with AGP 8x early on.

I could go on, but I'm sure you are starting to get the picture...

I certainly don't need nVidia to tell me how to interpret the readouts.

(Could someone fix the posts above, and make the forum work properly when you log in while replying, while they are at it?)
 
radar1200gs said:
I didn't need nVidia to tell me ATi was lying their arse off about DDR-2 on the R300.

9800 shipped with a DDR-2 version (256MB). There weren't any major changes in the memory type handling between R300 and R350.
 
DaveBaumann said:
radar1200gs said:
I didn't need nVidia to tell me ATi was lying their arse off about DDR-2 on the R300.

9800 shipped with a DDR-2 version (256MB). There weren't any major changes in the memory type handling between R300 and R350.

I'm talking about the 9700 and the Tech TV fiasco where ATi ran DDR-II memory in DDR-1 compatibility mode, Dave.
 
And BTW, the DDR-2 ATI put on the 9800 Pro - was it not the DDR-2 stock ATI got at low cost from Samsung because nVidia was unable to fulfill its orders?
 
radar1200gs said:
I'm talking about the 9700 and the Tech TV fiasco where ATi ran DDR-II memory in DDR-1 compatibility mode, Dave.

Why would it need to run in compatibility mode if the design integrates a working DDR-2 bus?
 
radar1200gs said:
.

I could go on, but I'm sure you are starting to get the picture...

Yes, you keep making stuff up to attack ATi and protect nVidia, ignoring the facts that don't fit your agenda.

I certainly don't need nVidia to tell me how to interpret the readouts.

All facts lead to the same conclusion in your world: nVidia good, ATi evil. And if the facts show something else, you make stuff up - we are all getting the picture.
 