ATI - Full Tri Performance Hit

Pete said:
Yep, because that's exactly what I believe happened. I believe ATi had engineers dedicated to too many other projects (Xbox 2, Gamecube 2) and was sitting on an already-class-leading architecture, so they stuck with SM2.0 for two reasons: one, b/c they didn't have enough resources to release a SM3.0 part on time, and two, b/c they wanted to cut the legs out from under SM3.0 by limiting their parts to 2.0, thus forcing the market to aim for the lowest common denominator, thus rendering SM3.0 mostly moot for this generation.

I'm not sure why my speculation is less valid than yours, though. You're sure ATi didn't see any benefit to SM3.0? Did they see a benefit to hiding trylinear, or to releasing the 8500 only to see it eclipsed by the GF4 in a matter of weeks? ATI is not all-powerful, and I believe their hand was somewhat forced, and they played it as best they could ATM (which is pretty well, considering they were in the position of power WRT mindshare). The fact is that we have gone from ATi being a tech generation ahead in features with the 8500, to nV being a tech generation ahead with the 6800. Whether things will play out the same way as 8500->9700 and GF4->GF FX is hard to say, particularly with the seemingly increasing process limitations. But I don't think you can say ATi had the power to implement SM3.0 and chose not to solely based on profit margins. They'd be setting themselves up to make even more money if they'd kept the tech lead for two generations in a row, rather than ceding it to nVidia after just one generation clearly on top.

You're just speculating. We don't know if they were actually low on manpower or not. A cost/benefit decision seems much more likely at this stage. I also don't agree with Nvidia being a tech generation ahead with the introduction of the 6800. As I said, SM3 is more of a small update to SM2 than anything else imo. We won't be seeing anything truly spectacular from it like we have with SM2 and will see with SM4.

Radar1200gs said:
Why would you want to? Like most other people I've spent a lot of time, money and effort getting as far away from low refresh rates as it's possible to get. I have no intention of returning anytime soon.

While this may not apply to graphics-intensive games like Far Cry, it is a good option to have for most other games. Call of Duty, for instance, gets around 250 fps on an X800, which is more than enough to suit TAA's requirements, even with high refresh rates.

(Almost) everything ATi brought to the table with R420, pixel shader and vertex shader wise, and more (considerably more on the vertex side) when it comes to flow control and looping. Isn't it funny how nVidia picked up some of ATi's OpenGL extensions with no fuss, yet ATi can't bring themselves to just use SM2.0a; no, they have to have 2.0b, just to be different from nVidia.

ATI did a better job of implementing it with realtime performance in mind. Nvidia's idea was just to support content creation, in which framerates don't matter. And maybe ATI didn't use SM2a because the hardware is incapable of supporting all the features?

FP16 is good enough for George Lucas and ILM. We still aren't anywhere near cinema-quality games yet. And the consumer certainly hasn't benefited from the high full-precision requirements in DX9; in fact, this requirement has arguably delayed the massive uptake of DX9-featured games by two years.

Better precision at playable rates did not delay DX9 uptake; you can blame Nvidia and its NV3x for that.

Don't worry, ATi will be seeing the benefits of SM3.0 all too clearly real soon now...

Yeah, you keep telling yourself that.
 
You're just speculating. We don't know if they were actually low on manpower or not. A cost/benefit decision seems much more likely at this stage.

We have already been given some insight into why ATI was not able to implement SM 3.0 this generation. ATI's CEO Dave Orton mentioned that an SM 3.0 part would require a significantly larger die than the current R4xx cards have, and ATI was unsure how producible such a design would be on current processes. ATI also did not expect NV to move all the way up to 16 pipelines from 4, and did not expect NV to totally rearchitect their pipelines, but rather expected NV to expand on internal processing. Finally, ATI obviously dedicated significant resources to projects like XBOX2. So logically, it appears that a combination of uncertainty about producing an SM 3.0 part, misconceptions about where NV was headed with their NV4x design, and possibly some issues with resource allocation are the reasons why ATI did not release an SM 3.0 part at this time. Trust me, if they could have released a quality SM 3.0 part at this time, they would have.

I also don't agree with Nvidia being a tech generation ahead with the introduction of the 6800. As I said, SM3 is more of a small update to SM2 than anything else imo. We won't be seeing anything truly spectacular from it like we have with SM2 and will see with SM4.

The reality is that ATI is still studying the NV40 architecture, trying to learn and understand more about it. With features like a superscalar architecture, FP16/FP32, full support for SM 3.0, FP16 texture filtering and frame buffer blending, and a dedicated on-chip video processor, the NV40 has a general featureset that the entire industry is moving towards. The NV40 is clearly the more forward-looking architecture, and only the most hardened of fanboys would argue against that notion.

Better precision at playable rates did not delay DX9 uptake; you can blame Nvidia and its NV3x for that.

I'd say it is wishful thinking to claim that the NV3x delayed DX9 uptake, considering that the majority of graphics cards sold today are integrated Intel graphics processors. In fact, the NV3x was the first top-to-bottom family of cards with DX9 support, irrespective of the performance limitations. ATI had DX9 support with their midrange and high-end Radeon cards, but their most affordable value cards certainly did not have DX9 support. So ultimately, it was only the people who bought midrange and high-end cards from NV (due to the somewhat low performance of the FX 52xx series) and ATI (due to the lack of DX9 support in the value segment) who were able to take advantage of DX9 anyway.

Yeah, you keep telling yourself that.

There isn't too much to say, really. Developers are embracing this new technology as we speak. SM 3.0 not only adds efficiency with respect to performance but also efficiency with respect to coding. Most people would consider that to be a good thing, and a step in the right direction.
 
MuD said:
Reverend said:
Gosh, your explanations sound like those two words are official 3D terminologies...

Nope, just trying to help. Anyone can surf to EB and look it up themselves, if they don't believe me. :)

The good Reverend's point is that these terms are not official 3D terms.

You do realize this I hope.
 
jimmyjames123 said:
I'd say it is wishful thinking to claim that the NV3x delayed DX9 uptake, considering that the majority of graphics cards sold today are integrated Intel graphics processors. In fact, the NV3x was the first top-to-bottom family of cards with DX9 support, irrespective of the performance limitations. ATI had DX9 support with their midrange and high-end Radeon cards, but their most affordable value cards certainly did not have DX9 support. So ultimately, it was only the people who bought midrange and high-end cards from NV (due to the somewhat low performance of the FX 52xx series) and ATI (due to the lack of DX9 support in the value segment) who were able to take advantage of DX9 anyway.
It would be fair to say that, prior to the NV40, 90% of the DX9-capable GPUs sold by Nvidia did nothing to promote the uptake/use of DX9.

There isn't too much to say, really. Developers are embracing this new technology as we speak. SM 3.0 not only adds efficiency with respect to performance but also efficiency with respect to coding. Most people would consider that to be a good thing, and a step in the right direction.
You are speaking theoretically of course? Or do you have some SM3.0 benchmarks that show its potential?
 
ANova said:
You're just speculating. We don't know if they were actually low on manpower or not. A cost/benefit decision seems much more likely at this stage. I also don't agree with Nvidia being a tech generation ahead with the introduction of the 6800. As I said, SM3 is more of a small update to SM2 than anything else imo. We won't be seeing anything truly spectacular from it like we have with SM2 and will see with SM4.

What is SM4.0 then? I mean, I hope it'll support SM4.0 and that SM4.0 is THE SM model, but I don't know anything about it. And the only developer I've heard talking about it didn't seem to agree with you.

And I would go as far as saying that they've already admitted that it has to do with resources:

http://www.techreport.com/etc/2004q2/nalasco/index.x?pg=2

TR: We've heard that ATI started work on a chip code-named R400, and then decided to change direction to develop the R420. Why the mid-course correction, and what will become of the remains of the R400 project?

Dave Nalasco: When we generate our roadmaps, we're always looking multiple years ahead, and, you know, circumstances obviously are going to change over that course of time. If you look at the development cycle for a new architecture, you're talking in the vicinity of a couple of years. One of the things that happened in our case is that we had these additional design wins or partnerships that we've developed with Nintendo and Microsoft, and that obviously requires some re-thinking of how the resources in the company are allocated to address that. So I think that's what you're really kind of seeing is that we had to make sure that we were able to continue with the roadmap that we had promised to keep producing for our desktop chips while also meeting these new demands, and we're confident that we're going to be able to do that.
 
Pete said:
Quasar, you mean double the bandwidth, right? Or did you change the default clocks (4P@500/300 vs. 16P@525/560MHz = 4:1 core, 2:1 mem)? I figured improvements in HyperZ and bandwidth compression contributed to the X800XT's 32% advantage at 16x12 AF and 43% advantage at 16x12 AA+AF.

No, I meant quadruple.
The R9600 has a 128-bit bus (256-bit logical) and the X800 has a 256-bit bus (512-bit logical).
I know it's not "really" quadrupled, but it's quite close.
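For what it's worth, here's a quick back-of-the-envelope check (a small Python sketch; the 300 MHz and 560 MHz DDR memory clocks are the ones Pete quoted above, the rest is just arithmetic):

```python
def bandwidth_gb_s(bus_bits, mem_clock_mhz, ddr_factor=2):
    # bytes/s = (bus width in bytes) * (memory clock) * (2 transfers per clock for DDR)
    return bus_bits / 8 * mem_clock_mhz * 1e6 * ddr_factor / 1e9

r9600  = bandwidth_gb_s(128, 300)     # ~9.6 GB/s
x800xt = bandwidth_gb_s(256, 560)     # ~35.8 GB/s
print(r9600, x800xt, x800xt / r9600)  # ratio ~3.7: not quite quadrupled, but close
```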
 
FUDie said:
Demirug said:
But this will not stop me from asking who gave you your information about how the detection works. I ask this because I want to know who has lied to you.

Perhaps you or somebody else will now ask where I take this insolence from? The answer is stupidly simple: I was the person who wrote the special benchmark program that analyzed ATI's "TRYlinear". The results of many different test runs allowed my coworker and me to show the "optimisations" even with colored mipmaps that have harsh transitions.

The only thing that "TRYlinear" really tries to do is hide itself from detection. But as I said and showed before, it even fails at that.
So because you wrote some "special benchmark program" you are somehow more qualified to speak about what ATI is doing than ATI themselves? That's rich!
But as always, I am open to new aspects of this story.
It sure doesn't sound like you are open at all. You've already made up your mind, judging by your posts.

-FUDie

At least Demirug knows what he's talking about when it comes to 3D development and the capabilities/inabilities of 3D chips. (You might remember the CineFX-Inside article over at 3DCenter...)

PLUS he's not bound by corporate policy as to what he may disclose and what not - or do you really think ANY IHV would tell you the whole and uncompromised truth about its core technologies?
Certainly not, right?
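For readers wondering what that kind of colored-mipmap analysis can look like in principle, here's a rough, purely illustrative Python sketch (it is not Demirug's program; the sampled scanline below is made up): texture a receding surface with mip levels in solid, distinct colours, then measure how wide the blend band between two adjacent levels is. Genuine trilinear filtering blends across the whole LOD interval, while a reduced "brilinear"-style filter collapses the blend into a narrow strip around the mip transition.

```python
import numpy as np

def blend_fraction(scanline_rgb, color_a, color_b):
    """Per-pixel blend weight between two solid mip colours (0 = pure A, 1 = pure B)."""
    px = np.asarray(scanline_rgb, dtype=float)
    a = np.asarray(color_a, dtype=float)
    b = np.asarray(color_b, dtype=float)
    # project each pixel colour onto the line segment a -> b
    return np.clip((px - a) @ (b - a) / np.dot(b - a, b - a), 0.0, 1.0)

def transition_width(weights, lo=0.05, hi=0.95):
    """Number of pixels over which the blend goes from ~pure A to ~pure B."""
    mixing = np.flatnonzero((weights > lo) & (weights < hi))
    return 0 if mixing.size == 0 else int(mixing[-1] - mixing[0] + 1)

# Made-up scanline across 64 pixels: a smooth red (mip 0) to green (mip 1) ramp,
# as genuine trilinear filtering would produce.  A "brilinear"-style optimisation
# would show a much narrower mixing band instead.
t = np.linspace(0.0, 1.0, 64)
full_tri = np.outer(1 - t, [255, 0, 0]) + np.outer(t, [0, 255, 0])
print(transition_width(blend_fraction(full_tri, [255, 0, 0], [0, 255, 0])))  # wide band
```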
 
ANova said:
Quasar said:
You say R300 was a big step ahead of NV30. Performance-wise you're correct. Feature-wise, one could argue about which features are more important than others.

I'm curious, what features do you think the NV30 had over the R300?
I don't think you really are, but there are quite a lot of them, none of which is very debatable. Additionally, as you hint at in one of your following posts ("ATI did a better job of implementing it with realtime performance in mind."), you're not really after features, but after realtime usability.
A topic on which I already expanded earlier, as far as my opinion is concerned. :)

ANova said:
Quasar said:
Additionally, these two were also designed under different circumstances and different ideas as to what DirectX9 would eventually become. nV was proposing FP16 as "full precision" (with FP32 as an added bonus for the pro user doing scientific stuff in offline rendering, hence the "speed" penalty), with a fall-back option to INT12 as what is known as partial precision. ATi finally got their idea of FP24 as "full precision" cemented in DX9, and as such the nV30 was forced to use the FP32 intended for offline rendering to render every DX9 shader that did not include pp hints.

Nvidia has no one to blame but themselves for this. They assumed MS would accept their proposal for DX9 no questions asked; well... they assumed wrong. Besides, FP24 is better than FP16, so the consumer was the beneficiary.

Of course nVidia is to blame for this, did I say otherwise? It only explains why the situation with R300/nV30 is different from the one we have today between NV40/R420.

BTW, FP32 is better than FP16. If you're so concerned about consumers' welfare, maybe MS should have implemented this as the minimum requirement for DirectX9 full precision?

Wait, that would mean we'd only have had one vendor with DX9 chips... bad idea! ;)
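For reference, a rough comparison of the precision of the formats being argued about (a small Python sketch using the nominal bit layouts: FP16 = s10e5, FP24 = s16e7 as in R300-era DX9 full precision, FP32 = IEEE s23e8):

```python
# mantissa bits per format
formats = {"FP16": 10, "FP24": 16, "FP32": 23}
for name, mantissa_bits in formats.items():
    ulp = 2.0 ** -mantissa_bits  # spacing between representable values just above 1.0
    print(f"{name}: ~{mantissa_bits + 1} significant bits, ulp(1.0) = {ulp:.2e}")
# FP16: ~11 significant bits, ulp(1.0) = 9.77e-04
# FP24: ~17 significant bits, ulp(1.0) = 1.53e-05
# FP32: ~24 significant bits, ulp(1.0) = 1.19e-07
```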

ANova said:
Quasar said:
But, to an extent, the same can be said about nVidia. Of course not in comparison with the R420 line of chips, but compared to the FX5800U and FX5950, the 6800U as a whole uses less power in 3D applications, despite having almost doubled the transistor count, added SM3, RGAA, FP texture filtering, tone mapping via RAMDAC, and quadrupled the number of pipelines, etc.

From what I've seen, the 6800 Ultra's power consumption is a fair amount higher than the 5950's. Why do you think it needs two molex connectors? The 5950 doesn't.

From what I've measured, what you've seen might not be the whole truth.
Speaking of Molexes... do you think the FX5200U, albeit manufactured on the same process and sporting ~66% of the GF4 Ti4600's transistor count, needs more power at almost the same clock speeds?
Maybe nVidia just wanted to take the safer path this time?


ANova said:
Pete said:
You make it sound as if ATI was unable to implement it, which isn't the case. ATI simply didn't see any benefit, as the costs outweighed the advantages. It also would have affected ATI's yields, as we are seeing with Nvidia.
ATi told us that if they had implemented it, the resulting chip would have been horribly slow. I don't know, but I guess they were referring to SM3.0 performance only (hopefully...).
 
ANova said:
As I said, SM3 is more of a small update to SM2 than anything else imo. We won't be seeing anything truly spectacular from it like we have with SM2 and will see with SM4.

Could you share your vision of SM4 with us? According to an article I've read on B3D, its main feature is getting rid of some limits imposed by SM3.0 and unifying the shader units.

What kind of truly spectacular things/effects do you expect?
 
Heh, adding 60 million transistors doesn't sound like a worthwhile update to a graphics card. That could also be the reason why the NV40s are still not available to buy and the X800s are.

As for SM 3.0, I suspect that if all of its features were used the NV4x would be very, very slow. So Nvidia is going to pull one of those moves where they only use the features that speed them up, so they can claim they are running SM 3.0 when really they are only using certain features that speed them up.

As for DX9: if the NV3x was as fast as the R3x0 series, we'd have many more DX9 games on the market already. It's the simple fact that even the $500 Nvidia cards of the last two years run DX9 shaders slower than the $200 cards from ATI that explains why we don't see a ton of DX9 games out.

Say what you will, but that's the truth.
I think some people just have blinders on.

I'd love to see all the SM 3.0 support Nvidia gets when 90% of the cards they sell this year won't be able to run SM 2.0 shaders at speed.

Wonder how much they have to spend to get the SM 3.0 support.

Perhaps in Far Cry the new patch will block out resolutions for ATI users. Or maybe in Doom 3, when it comes out and runs faster on ATI's hardware, Nvidia will pay to get the benchmark taken out. :oops:

Oh well. At least you will have the "better" card, huh.
 
jvd said:
As for SM 3.0, I suspect that if all of its features were used the NV4x would be very, very slow. So Nvidia is going to pull one of those moves where they only use the features that speed them up, so they can claim they are running SM 3.0 when really they are only using certain features that speed them up.
As a matter of fact, there are games out there that easily bring all available (and announced) cards to their knees.
What exactly do you want to prove with your undeniable point that extensive or even exhaustive use of 3.0 shaders will slow the nV40 down quite a bit?
Analogously, you could apply the same to R300: a full screen, say 1024x768, of mathematically challenging 96-instruction shaders (speaking of using all features of the relevant shader model) would not yield a playable framerate on the R9700 Pro either, don't you think?
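To put a rough number on that, here's a back-of-the-envelope estimate (my assumptions, not figures from the thread: one arithmetic instruction per pixel pipe per clock, no texture fetches, no overdraw, R9700 Pro with 8 pixel pipes at 325 MHz):

```python
pixels       = 1024 * 768         # ~786k pixels, every one of them shaded
instructions = pixels * 96        # ~75.5 million shader instructions per frame
throughput   = 8 * 325e6          # 8 pixel pipes at 325 MHz, 1 instruction/pipe/clock (best case)
print(throughput / instructions)  # ~34 fps upper bound, before textures, AA, overdraw...
```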

jvd said:
As for DX9: if the NV3x was as fast as the R3x0 series, we'd have many more DX9 games on the market already. It's the simple fact that even the $500 Nvidia cards of the last two years run DX9 shaders slower than the $200 cards from ATI that explains why we don't see a ton of DX9 games out.
As for DX9: if ATi had not continued, over the last 12 months or so, to sell DX8 cards in their low end, we'd have a higher percentage of DX9 cards built into PCs. And that's what convinces publishers to approve the widespread use of DX9 in their respective game titles.
If some cards are too slow for it, then the respective IHV will likely feel this in its market share quite soon.

jvd said:
Say what you will, but that's the truth.
I think some people just have blinders on.
Sure. You are GOD (or the Pope, if you happen to be Catholic) and therefore the only one able to know everything.
(speaking of single-mindedness...)

jvd said:
I'd love to see all the SM 3.0 support Nvidia gets when 90% of the cards they sell this year won't be able to run SM 2.0 shaders at speed.

I'd love to see that too - might be a motivation for them to do better next time :)
What happens when motivation is lacking, you can see perfectly well with the nV30 and R420 (in their respective ways...).

jvd said:
Wonder how much they have to spend to get the SM 3.0 support.
Wonder how much the others have to spend to avoid SM3.0 support. ;)

jvd said:
Perhaps in Far Cry the new patch will block out resolutions for ATI users. Or maybe in Doom 3, when it comes out and runs faster on ATI's hardware, Nvidia will pay to get the benchmark taken out. :oops:
Oh well. At least you will have the "better" card, huh.

Sorry, but your whole posting sounds to me as if you're lacking reasonable arguments right now. You might have noticed that I tried to reply to some of your statements in this posting in the same manner.

Do you really think bashing each other('s favorite IHV) leads us anywhere?
 
As a matter of fact, there are games out there that easily bring all available (and announced) cards to their knees.
What exactly do you want to prove with your undeniable point that extensive or even exhaustive use of 3.0 shaders will slow the nV40 down quite a bit?
Analogously, you could apply the same to R300: a full screen, say 1024x768, of mathematically challenging 96-instruction shaders (speaking of using all features of the relevant shader model) would not yield a playable framerate on the R9700 Pro either, don't you think?
The difference is that the 9700 Pro was the first card that had advanced features that were usable.

Which is why the 9700 Pro in my PC can run Far Cry at 1024x768 with 2x FSAA and everything turned up as high as it can go.

My 5800 Ultra can't when forced to run the standard DX9 path; it's lucky to get 30-40 fps at 640x480. Both cards cost me the same money. Hell, the 5800 was newer than the 9700 Pro.

Yes, I'm sure that EQ2, Half-Life 2 and Doom 3 will slow down the R9700 Pro a lot. But the card is two years old, and it will still be faster than any other card released at around the same time, even if those were running reduced shaders and precision.

Will the same be said about the 6800 Ultras?

As for DX9: if ATi had not continued, over the last 12 months or so, to sell DX8 cards in their low end, we'd have a higher percentage of DX9 cards built into PCs. And that's what convinces publishers to approve the widespread use of DX9 in their respective game titles.
If some cards are too slow for it, then the respective IHV will likely feel this in its market share quite soon.

That's a laugh. ATI sold cards to consumers that could actually run the features billed to them at decent speeds and resolutions. I.e., a 9200 will run PS 1.4 shaders at 1024x768 at 40-60 fps.

If they had released a sub-$100 DX9 card, I highly doubt it would have performed DX9 tasks at more than 800x600 (look at the 9600 SE).

Meanwhile, even the 5900 Ultra running full-precision tasks in DX9 benchmarks (like Half-Life 2) would be slower than the 9600 Pros from ATI. Not to mention that the fabled sub-$100 DX9 GPU from Nvidia was getting 7 fps.

Are you telling me that if ATI had put out a DX9 card capable of 7 fps we would suddenly have a huge uptake of DX9?

I'm telling you that if Nvidia had produced cards as capable of DX9 features as ATI's, with the same ease of programming, then there would be many more DX9 titles out there.

Sure. You are GOD (or the Pope, if you happen to be Catholic) and therefore the only one able to know everything.
(speaking of single-mindedness...)

No, I just have hindsight. I can look back and see everything that has happened.

In DX8, Nvidia lagged behind in features but sold many more cards, and thus DX8.1 support never happened. In DX9, Nvidia lagged behind in usable features, and because of that support was slowed down, as Nvidia was still popular.

Now ATI is behind in features, but because of the R3x0 series it is more popular than before. Nvidia has the new tech, but because ATI doesn't, and because ATI has sold the most DX9-capable parts, SM 2.0 will be the dev platform of choice.


Wonder how much the others have to spend to avoid SM3.0 support.
Very little, I'd say, considering that most of the cards sold in the last two years, and going on through this year, are only capable of SM 2.0 support. All they have to say is: hey look, you can program for SM 3.0 and have it work on the 6800s, which still aren't out and will account for little sales-wise this year; or you can program for the vast amount of R3x0s that have been sold and will be sold this year, plus the small amount of R420s sold this year.


Sorry, but your whole posting sounds to me as if you're lacking reasonable arguments right now. You might have noticed that I tried to reply to some of your statements in this posting in the same manner.

How so? In Tiger Woods, some resolutions were blocked out for ATI users. When the game was forced to see the cards as Nvidia cards, suddenly those resolutions were opened up and ran much faster on ATI's cards. In Tomb Raider: AOD (Angel of Darkness, right?) the Radeons were much faster in the game than the NV3x, and so a patch came out that took out the benchmarking mode.

Both were "The Way It's Meant To Be Played" titles.

Why would this not be the case in this generation if the X800s remain faster than the NV40s?

Do you really think bashing each other('s favorite IHV) leads us anywhere?

No, bashing gets us nowhere. Neither does witch hunting.

However, by pointing out the past we can get a good idea of what's going to happen in the future.
 
Thank you for proving my point. :)

edit:
You don't want me to take your post again and un-"prove" it, paragraph for paragraph, do you?
 
Quasar said:
Thank you for proving my point. :)

edit:
You don't want me to take your post again and un-"prove" it, paragraph for paragraph, do you?
Go for it if you think you can.
 
jvd said:
Quasar said:
Thank you for proving my point. :)

edit:
You don't want me to take your post again and un-"prove" it, paragraph for paragraph, do you?
Go for it if you think you can.

I'll try to keep it short:

jvd said:
Which is why the 9700 Pro in my PC can run Far Cry at 1024x768 with 2x FSAA and everything turned up as high as it can go.
Far Cry's shaders are nowhere in the vicinity of 96 instruction slots, nor are they generally applied full-screen. One posting ago you were talking about "As for SM 3.0, I suspect that if all of its features were used the NV4x would be very, very slow."
Far Cry does not fulfill this criterion, so your comparison is pointless.


That's a laugh. ATI sold cards to consumers that could actually run the features billed to them at decent speeds and resolutions. I.e., a 9200 will run PS 1.4 shaders at 1024x768 at 40-60 fps. [...]
Well, as seen in at least one review on the web (see the following pages as well; Max Payne 2 is the only game that fulfills your assumption), this is not true.


No, I just have hindsight. I can look back and see everything that has happened. [...]
As I've said, YOUR opinion, and accepted as such, but since you cannot prove it, your statement of "Say what you will, but that's the truth" is just that - your opinion, and maybe quite far from the truth...


All they have to say is: hey look, you can program for SM 3.0 and have it work on the 6800s, which still aren't out and will account for little sales-wise this year; or you can program for the vast amount of R3x0s that have been sold and will be sold this year, plus the small amount of R420s sold this year.
Wrong. You can program for both SM3.0 and SM2.0 with only an additional option selected in the HLSL compiler, so there's literally no additional effort on the side of the developer to use some features of SM3.
This was different with SM2.0/SM2.0a, and vastly different with PS1.1-1.3 vs. PS1.4, because the latter had a completely different approach to programming and required massive effort to take advantage of.
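As an illustration of what "only an additional option" means in practice, here's a minimal sketch assuming the DirectX SDK's offline compiler fxc.exe is available on the path (the shader file name and entry point are hypothetical); the only thing that changes between the two builds is the target profile:

```python
import subprocess

HLSL_SOURCE = "lighting.hlsl"               # hypothetical shader source
for profile in ("ps_2_0", "ps_3_0"):
    subprocess.run(
        ["fxc", "/T", profile,              # target profile is the only difference
         "/E", "main",                      # entry point (hypothetical)
         "/Fo", f"lighting_{profile}.pso",  # compiled output
         HLSL_SOURCE],
        check=True,
    )
```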

My remark was aimed at your post in general (as indicated by "your whole posting"), not with respect to that particular paragraph.

No, bashing gets us nowhere. Neither does witch hunting.
However, by pointing out the past we can get a good idea of what's going to happen in the future.

Partially true. But just pointing out the past is only a part of the story. You also have to understand the past and apply it to the current situation. :)

Thank you for your patience, EOD for me, as it seems pointless to go on and on disproving all your points one by one and my RL is more important to me. :)
 
Far Cry's shaders are nowhere in the vicinity of 96 instruction slots, nor are they generally applied full-screen. One posting ago you were talking about "As for SM 3.0, I suspect that if all of its features were used the NV4x would be very, very slow."
Far Cry does not fulfill this criterion, so your comparison is pointless.
And how about other DX9 games coming out, like Half-Life 2? Are they all not using 96 instruction slots or applying them full-screen?

Which, btw, is not what I was talking about. I was talking about all the features of SM 3.0, i.e. dynamic branching and some of the other things that will slow down the shaders.

Well, as seen in at least one review on the web (see the following pages as well; Max Payne 2 is the only game that fulfills your assumption), this is not true.
How many more times are we going to have to see Halo used as an example?

Not to mention that the 9200 falls between the 5200 and the 5600 in terms of performance, both of which are more expensive than the 9200.

As I've said, YOUR opinion, and accepted as such, but since you cannot prove it, your statement of "Say what you will, but that's the truth" is just that - your opinion, and maybe quite far from the truth...
So you're saying that the fact that Nvidia did not have a video card comparable to the Radeon 9700 Pro had nothing to do with the slow adoption of DX9 features?

That the reason was the 9200?

Can you back this up?

Wrong. You can program for both SM3.0 and SM2.0 with only an additional option selected in the HLSL compiler, so there's literally no additional effort on the side of the developer to use some features of SM3.
This was different with SM2.0/SM2.0a, and vastly different with PS1.1-1.3 vs. PS1.4, because the latter had a completely different approach to programming and required massive effort to take advantage of.

Right, some features of SM3. What improvements will these features actually give us? If you listen to Crytek, it is only speed improvements, not IQ. Since the 6800s are already slower than the X800s in shaders, the speed increase is hardly going to matter.

Which goes right back to my other point about what Nvidia is going to do (using only some features and claiming it's SM3.0, but never turning on the options that will slow it down).

Partially true. But just pointing out the past is only a part of the story. You also have to understand the past and apply it to the current situation.

Right, there is a lot you didn't address, like Nvidia using money to have developers cast ATI in a bad light. They did it before, hence my point about a Doom 3 patch taking away a benchmarking mode when ATI is ahead, or EverQuest 2 locking out resolutions for ATI users. These are all things Nvidia has done with other games and will do again.
 
I really wouldn't venture too far down the "nVidia paying devs money to show them in a good light and ATI in a bad light" path if I were you.

You might just force me to bring up the topic of Valve, Gabe Newell, Half-Life 2, 5 million dollars and a shady day event.
 
Stop it, Quasar, it's absolutely pointless to discuss anything with jvd.

He's not interested in exchanging arguments and opinions; he just wants to shove his point of view up everyone's ass.

And he's absolutely neither interested in nor capable of understanding facts or opinions that don't happen to support his own point.
 
radar1200gs said:
I really wouldn't venture too far down the "nVidia paying devs money to show them in a good light and ATI in a bad light" path if I were you.

You might just force me to bring up the topic of Valve, Gabe Newell, Half-Life 2, 5 million dollars and a shady day event.

Right. Because if ATI had paid Valve to cast Nvidia in a bad light, there wouldn't be any mixed-mode paths for the NV3x.

He's not interested in exchanging arguments and opinions; he just wants to shove his point of view up everyone's ass.

And he's absolutely neither interested in nor capable of understanding facts or opinions that don't happen to support his own point.

:rolleyes: Right. I'm incapable of understanding facts or opinions. That's right. The only problem is he has yet to point out any facts that don't support my own opinion.

I have tried my best to be respectful of others. While I could be calling many people nvidiots, and be right in that regard, I have not.

You can at least show me enough respect not to insult me.

If you do not like what I am saying then please, by all means, don't read it. Don't insult me though. That I won't stand for.
 