Don't get fooled by .... nvidia attacking Ati

radar1200gs said:
So :LOL: if they were wrong, why hasn't ATi taken legal action against them?
EE Times is not a consumer publication, so any misleading articles published therein would have minimal impact on sales. Amongst the readership of EE Times, those whose reading of misleading or incorrect information would have a material effect on ATI's sales would most likely have been contacted by ATI directly.

Even a successful lawsuit against them could be more detrimental than ignoring the mistake. In every conflict, irrespective of the conclusion, there are those who will disagree. Such a lawsuit would certainly make the mainstream business news, and even if only a small portion of those readers did not trust the conclusion, the fallout would be much greater than ignoring it in the first place.
 
radar1200gs said:
My point at the time was that ATi was a liar.
You still have yet to provide proof of this.
(this was before GF-FX was widely available and nVidia regrettably lied about the number of pixel pipelines. It's a shame they used the word pixel instead of shader).
Completely irrelevant to the discussion at hand.

-FUDie
 
radar1200gs said:
My point at the time was that ATi was a liar.
Well, they *did* release a card that worked with DDR-II. Did they specifically say it would run in DDR-II mode, or just that their GPUs could work with DDR-II?
 
Don't bother. To borrow a saying from my area:
"You can't make a donkey drink if it isn't thirsty."
Radar fits the bill perfectly :)
 
FUDie said:
radar1200gs said:
My point at the time was that ATi was a liar.
You still have yet to provide proof of this.
(this was before GF-FX was widely available and nVidia regrettably lied about the number of pixel pipelines. It's a shame they used the word pixel instead of shader).
Completely irrelevant to the discussion at hand.

-FUDie

God you fanATics crack me up. At least nVidia supporters can admit nVidia lied.

But don't mind me:

QUACK was "an unfortunate accident"

ATi had a fully functioning DDR-II card before nVidia

There are no Trilinear cheats in ATi's drivers

and the sun shines out of ATi's corporate arse.
 
radar1200gs said:
Gibber, gibber, gibber!

I've completely lost track of the point of this argument.

Just let radar think whatever he likes about ATI/NV - nobody is going to change his mind, whatever evidence/lack thereof is produced. :)
 
radar1200gs said:
Given the above, if the EE-Times article was nothing but FUD (LOL!), why didn't ATi sue them or force a retraction of the article? If I were ATi I most certainly would have...

Suing a publication is always your last port of call, since it inevitably brings worse publicity to you than to anyone else.

However, OK, we have EE Times quoting "sources", but we have the benefit of hindsight now, so let's take a look.

The sources claim that "The part was not designed for use with DDR-II". Well, clearly, from an architectural standpoint it was, since the 9800 shipped with DDR2 RAM - I've yet to hear anyone claim that the shipping 256MB PROs are running in compatibility mode.

Now, before you say "Yes, but 9800 was a different chip with architectural changes", that's both correct and incorrect - R350 did have opportunistic tweaks and changes, but nothing massive from an architectural point of view; the main benefits came from moving to a slightly different process. It's also pretty easy to tell that there were no major architectural overhauls to the memory bus between R300 and R350, since the issues with DDR2 were already apparent when it was being respun, and rather than add support for DDR2 they would have sorted out the major issue with R300's memory bus, which was getting it to operate at high frequencies - to this day the highest memory speed an R300 design has shipped at is 365MHz, and faster DDR(1) RAM was already available by the time it was released. If they were going to make changes to the memory bus, adding DDR2 support would not have been the priority; getting the bus to operate above the ~300-365MHz range would have been.

I'd say it's fairly easy to surmise that the ability to run in DDR2 mode was already in place; the issue was that the bus doesn't really operate at speeds where DDR2 is beneficial, as it couldn't actually reach the maximum frequencies of DDR(1) at the time (the reason for using DDR2 on the 256MB PRO was opportunistic in terms of cost, plus probably a small element of PR coup).
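For what it's worth, here's a quick back-of-the-envelope sketch of the bandwidth arithmetic (assuming R300's 256-bit bus; the 500MHz figure is just an illustrative DDR2-era clock, not any shipping part):

Code:
# Effective memory bandwidth: bus width (bytes) x clock x transfers/clock.
# DDR and DDR2 both transfer twice per clock; clocks below are base memory
# clocks, not the doubled "effective" marketing numbers.
def bandwidth_gb_s(bus_bits: int, clock_mhz: float, transfers_per_clock: int = 2) -> float:
    return bus_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

print(bandwidth_gb_s(256, 365))  # ~23.4 GB/s at the 365MHz ceiling mentioned above
print(bandwidth_gb_s(256, 500))  # ~32.0 GB/s if the bus could reach DDR2-class clocks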

radar1200gs said:
(this was before GF-FX was widely available and nVidia regrettably lied about the number of pixel pipelines. It's a shame they used the word pixel instead of shader).

With NV3x (GF-FX) there is a one-to-one relation between pixel and shader pipelines - there were never more than 4 shader pipelines in any of the NV3x designs.
 
Radar, are you really that desperate to bump your post count?
You make countless posts that contain 2-3 lines of blah blah blah: ATI is evil, NVidia is good; ATI only does bad, NVidia only does good. Most people here see the pattern of your posts. It's not necessary for you to post anymore - it's already known that if you were to post, it would be like all your other posts: ATI evil, NVidia good.
 
DaveBaumann said:
radar1200gs said:
Given the above, if the EE-Times article was nothing but FUD (LOL!), why didn't ATi sue them or force a retraction of the article? If I were ATi I most certainly would have...

Suing a publication is always your last port of call, since it inevitably brings worse publicity to you than to anyone else.

However, OK, we have EE Times quoting "sources", but we have the benefit of hindsight now, so let's take a look.

The sources claim that "The part was not designed for use with DDR-II". Well, clearly, from an architectural standpoint it was, since the 9800 shipped with DDR2 RAM - I've yet to hear anyone claim that the shipping 256MB PROs are running in compatibility mode.

Now, before you say "Yes, but 9800 was a different chip with architectural changes", that's both correct and incorrect - R350 did have opportunistic tweaks and changes, but nothing massive from an architectural point of view; the main benefits came from moving to a slightly different process. It's also pretty easy to tell that there were no major architectural overhauls to the memory bus between R300 and R350, since the issues with DDR2 were already apparent when it was being respun, and rather than add support for DDR2 they would have sorted out the major issue with R300's memory bus, which was getting it to operate at high frequencies - to this day the highest memory speed an R300 design has shipped at is 365MHz, and faster DDR(1) RAM was already available by the time it was released. If they were going to make changes to the memory bus, adding DDR2 support would not have been the priority; getting the bus to operate above the ~300-365MHz range would have been.

I'd say it's fairly easy to surmise that the ability to run in DDR2 mode was already in place; the issue was that the bus doesn't really operate at speeds where DDR2 is beneficial, as it couldn't actually reach the maximum frequencies of DDR(1) at the time (the reason for using DDR2 on the 256MB PRO was opportunistic in terms of cost, plus probably a small element of PR coup).

radar1200gs said:
(this was before GF-FX was widely available and nVidia regrettably lied about the number of pixel pipelines. It's a shame they used the word pixel instead of shader).

With NV3x (GF-FX) there is a one-to-one relation between pixel and shader pipelines - there were never more than 4 shader pipelines in any of the NV3x designs.

With shaders and NV3x you are forgetting that partial precision allows you the equivalent of 8 shader pipes (with the proviso that they are SIMD: 2 FP16 virtual pipes in 1 FP32 real pipe, and only the 4 real pipes can execute different (instruction-wise) shaders). Of course, only 4 results from the shaders will be output as pixels in any one cycle.

About the memory speeds, that could be why ATi ran the memory in DDR-1 compatibility mode; I'm not certain (I'd have to reread the DDR-2 specs, but I'm certain anything under 400MHz is considered to be DDR-1).

Of course a lawsuit would bring negative publicity. Just another reason why intelligent companies avoid presentations/announcements etc that could open them up to the possibility of a lawsuit in the first place.
 
Unit01 said:
Radar, are you really that desperate to bump your post count?
You make countless posts that contain 2-3 lines of blah blah blah: ATI is evil, NVidia is good; ATI only does bad, NVidia only does good. Most people here see the pattern of your posts. It's not necessary for you to post anymore - it's already known that if you were to post, it would be like all your other posts: ATI evil, NVidia good.

Trust me when I tell you: I'm not here to win a popularity contest or bump post counts or anything remotely like that...
 
With shaders and NV3x you are forgetting that partial precision allows you the equivalent of 8 shader pipes (with the proviso that they are SIMD: 2 FP16 virtual pipes in 1 FP32 real pipe, and only the 4 real pipes can execute different (instruction-wise) shaders). Of course, only 4 results from the shaders will be output as pixels in any one cycle.

Again, you would be incorrect. The ALUs themselves can only operate on a single fragment (pixel) per cycle; they could not increase that by using FP16. A single FP16 instruction fundamentally executes at the same speed as FP32 on NV3x - the performance improvements primarily come from register space, which allowed NV3x to have more pixels in flight when using FP16 registers than FP32. I believe there was actually one instruction on NV35 that would take two cycles to execute in FP32 rather than FP16.

However, that makes no difference, since instructions don't necessarily equate to fragments (pixels) - you are often executing many instructions on a single pixel. The fragment pipeline could only work on a single quad of pixels in parallel at any time; the only time that NV30/35 operated on two quads was in Z/stencil mode, which bypasses the fragment shader.
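To illustrate the distinction, here's a toy occupancy model of my own (the sizes are made up, not NV3x's documented internals; only the ratio matters): halving the per-fragment register footprint doubles the fragments in flight, while the quad executed per cycle stays fixed.

Code:
# Toy model: a fixed register file is shared by all fragments in flight.
REGISTER_FILE_BYTES = 8192   # hypothetical register file size
QUAD_SIZE = 4                # pixels executed in parallel per cycle

def fragments_in_flight(regs_per_fragment: int, bytes_per_reg: int) -> int:
    """How many fragments fit in the register file at once."""
    return REGISTER_FILE_BYTES // (regs_per_fragment * bytes_per_reg)

fp32 = fragments_in_flight(regs_per_fragment=4, bytes_per_reg=16)  # 4 FP32 vec4 temps
fp16 = fragments_in_flight(regs_per_fragment=4, bytes_per_reg=8)   # same temps at FP16

print(f"FP32: {fp32} in flight, {QUAD_SIZE} per cycle")  # 128 in flight
print(f"FP16: {fp16} in flight, {QUAD_SIZE} per cycle")  # 256 in flight - deeper, not wider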

About the memory speeds, that could be why ATi ran the memory in DDR-1 compatibility mode; I'm not certain (I'd have to reread the DDR-2 specs, but I'm certain anything under 400MHz is considered to be DDR-1).

AFAIK it's the termination and signalling that's important (but I've not read the specs).
 
What do you think registers and register space are, Dave? By splitting an FP32 register in half you double the register space. But you can still only output 4 pixels per cycle.
 
radar1200gs said:
What do you think registers and register space are, Dave? By splitting an FP32 register in half you double the register space. But you can still only output 4 pixels per cycle.

It's a pipeline - the size of the register requirements affects the number of pixels in flight within that pipeline, not the number of pixels that are executed in parallel. Only a single quad of pixels is ever executed in parallel.
 
Yes, I know that. There are only 4 ROPs, each tied to a 32-bit main pipe, so only 4 pixels can be produced per cycle.

However, with FP16, 8 shader results can be computed per cycle.

NV3x then outputs the entire FP32 register to the ROP (if you are using full precision), or the top or bottom half for FP16 (I don't know if the chip can choose which half to send per cycle - I'm guessing it's something that was probably intended but never worked correctly).
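The storage half of that claim is easy to demonstrate (a minimal sketch using NumPy's float16, which follows the same IEEE half-precision format; nothing here is specific to NV3x):

Code:
import numpy as np

# Two FP16 values occupy exactly the storage of one FP32 register slot.
two_halves = np.array([1.5, -2.25], dtype=np.float16)
one_full = np.array([1.5], dtype=np.float32)
assert two_halves.nbytes == one_full.nbytes == 4  # 2 x 2 bytes == 1 x 4 bytes

# Reinterpreting the bytes shows the two 16-bit patterns packed side by side.
print(two_halves.view(np.uint16))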

This setup is good for math-intensive shader scenarios, but obviously not so good if you want maximum pixel throughput.
 
radar1200gs said:
Yes, I know that. There are only 4 ROPs, each tied to a 32-bit main pipe, so only 4 pixels can be produced per cycle.

Clearly you didn't understand what's being written.

However, with FP16, 8 shader results can be computed per cycle.

No, per ALU, only 4 instructions can be executed, in either FP16 or FP32.

The register space issue affects the number of pixels being executed serially.
 
radar1200gs said:
About the memory speeds, that could be why ATi ran the memory in DDR-1 compatibility mode; I'm not certain (I'd have to reread the DDR-2 specs, but I'm certain anything under 400MHz is considered to be DDR-1).

Of course a lawsuit would bring negative publicity. Just another reason why intelligent companies avoid presentations/announcements etc that could open them up to the possibility of a lawsuit in the first place.

Don't know if you have time, but do you think you could provide a link to the DDR-II specs?
 