NVIDIA doesn't recommend AMD CPUs


Makes some sense, because AMD CPUs are clearly holding the high-end G80 parts back compared to the Core 2 Duo/Extreme.
With a weaker CPU in a review, the performance difference over the old DX9 counterparts could be perceived as smaller than it actually is with a more capable chip.
Maybe with their next CPU microarchitecture the roles will be reversed again, and AMD will get back on the review kit recommendations.
 
I guess they're a bit jaded.

Charlie seems a bit jaded himself:

Overall, it looks like a really boring card, meant to satisfy a price point. Until it can make working Vista drivers and stop promising phantom features, it will be hard for me personally to recommend anything it makes.

By that definition every single card is boring, since they're all versions of the same architecture meant to satisfy a price point. I guess he's just bitter about the state of Vista drivers, but I have no idea how that translates into a condemnation of a specific card.
 
The press kit from Nvidia on this one looks really...
Still, I see no reason why he's so upset. :LOL:
 
Nvidia has always recommended what makes their GPUs look best.
 
Nvidia has always recommended what makes their GPUs look best.

Yes, but hardware changes fast. What would happen when K8L comes and takes the performance crown soundly? Would that be represented on their retail boxes? I'm thinking no.

Nvidia recommends what is best for their business and their business partners.
 
What if I said that, at one point in the recent past, AMD-ATi did not recommend that we use an AMD processor when submitting a review machine with an X1900XT 256MB in it?

:oops:
 
Makes some sense, because AMD CPUs are clearly holding the high-end G80 parts back compared to the Core 2 Duo/Extreme.
With a weaker CPU in a review, the performance difference over the old DX9 counterparts could be perceived as smaller than it actually is with a more capable chip.
Maybe with their next CPU microarchitecture the roles will be reversed again, and AMD will get back on the review kit recommendations.
So you think a 2.8GHz A64/Opteron would hold back a G80 at high res with plenty of FSAA?
With 16x FSAA available and high res, the G80 can quite easily be the bottleneck with even something like a 2.4GHz A64.
 
So you think a 2.8GHz A64/Opteron would hold back a G80 at high res with plenty of FSAA?
With 16x FSAA available and high res, the G80 can quite easily be the bottleneck with even something like a 2.4GHz A64.

A large number of benchmarks would disagree with you.
 
http://www.firingsquad.com/hardware/geforce_8800_gtx_gts_amd_cpu_scaling/page11.asp
That's only with 4x FSAA.
As I said, I'm talking about using 8x or even 16x FSAA, which obviously stresses the GPU more.

You just made me choke; there aren't even any Intel numbers there. How can you compare otherwise? Those numbers also clearly show scaling with processor speed, with a C2D generally offering faster performance at its low end than AMD offers at the very top. You'd also want to give your card the absolute best numbers possible, so it's very easy and logical to see why Nvidia wouldn't recommend AMD CPUs.
 
You just made me choke; there aren't even any Intel numbers there. How can you compare otherwise? Those numbers also clearly show scaling with processor speed, with a C2D generally offering faster performance at its low end than AMD offers at the very top. You'd also want to give your card the absolute best numbers possible, so it's very easy and logical to see why Nvidia wouldn't recommend AMD CPUs.
That's because it's an article on scaling for AMD chips ;)
I'm pretty sure that's what I was talking about.
There's very little scaling with CPU speed in Oblivion; in Quake 4 there's tons.
What do you think happens to those numbers when you throw in 16x FSAA?
You think it's gonna scale with the CPU even more?
Just so you know, I'm using 1600x1200 and 1920x1200, since those are resolutions someone with a G80 would play at.
Now, if you're arguing the G80 is clearly CPU-limited, please don't do so with games that are clearly CPU-limited.
I think Oblivion is a good example of how "next gen" games will behave, with the GPU largely being the bottleneck.
Source-engine games, not so much.
 
That's because it's an article on scaling for AMD chips ;)
I'm pretty sure that's what I was talking about.
There's very little scaling with CPU speed in Oblivion; in Quake 4 there's tons.
What do you think happens to those numbers when you throw in 16x FSAA?
You think it's gonna scale with the CPU even more?
Just so you know, I'm using 1600x1200 and 1920x1200, since those are resolutions someone with a G80 would play at.
Now, if you're arguing the G80 is clearly CPU-limited, please don't do so with games that are clearly CPU-limited.
I think Oblivion is a good example of how "next gen" games will behave, with the GPU largely being the bottleneck.
Source-engine games, not so much.

http://enthusiast.hardocp.com/article.html?art=MTI2MiwxLCxoZW50aHVzaWFzdA==

Of course, arguing with you is pointless, but those numbers clearly show why you'd want the fastest CPU possible, even well beyond a 2.8GHz processor from AMD. Intel often has an average FPS lead of 20+ in them. Your comment was "So you think a 2.8GHz A64/Opteron would hold back a G80 at high res with plenty of FSAA?" and it's clearly the case that a 2.8GHz Athlon 64 can and will hold back an extreme graphics system.
 
http://enthusiast.hardocp.com/article.html?art=MTI2MiwxLCxoZW50aHVzaWFzdA==

Of course, arguing with you is pointless, but those numbers clearly show why you'd want the fastest CPU possible, even well beyond a 2.8GHz processor from AMD. Intel often has an average FPS lead of 20+ in them. Your comment was "So you think a 2.8GHz A64/Opteron would hold back a G80 at high res with plenty of FSAA?" and it's clearly the case that a 2.8GHz Athlon 64 can and will hold back an extreme graphics system.
Of course you want the fastest CPU with a fucking SLI 8800 system :LOL:
Jesus Christ, you're fucking dense :LOL:
No doubt the C2D will beat the living crap out of an A64 CPU when you're CPU-limited, but on a single card at high res with FSAA it's not as clear-cut.
"Next gen" games like Oblivion clearly favor a faster GPU over a faster CPU, while something like the Source or Doom 3 engine clearly loves fast CPUs.
In any case, for real-world gaming, unless you have an FPS counter on you won't be able to perceive a real difference when the framerates we're dealing with are so high.
 
Of course you want the fastest CPU with a fucking SLI 8800 system :LOL:
Jesus Christ, you're fucking dense :LOL:
No doubt the C2D will beat the living crap out of an A64 CPU when you're CPU-limited, but on a single card at high res with FSAA it's not as clear-cut.
"Next gen" games like Oblivion clearly favor a faster GPU over a faster CPU, while something like the Source or Doom 3 engine clearly loves fast CPUs.
In any case, for real-world gaming, unless you have an FPS counter on you won't be able to perceive a real difference when the framerates we're dealing with are so high.

Sigh...
 
So are you at least going to explain your use of an article on SLI CPU scaling to convey the 8800 as being CPU-limited?
No one here was suggesting that SLI 8800s didn't need all the CPU help they could get, yet you chose to show benchmarks of an SLI 8800 in a feeble attempt to prove me wrong :rolleyes:
 
Yes, but hardware changes fast. What would happen when K8L comes and takes the performance crown soundly? Would that be represented on their retail boxes? I'm thinking no.

Nvidia recommends what is best for their business and their business partners.


Nvidia recommended AMD when AMD had the performance crown; they did it back in the A64 reign. They make motherboards for both CPU vendors, so there's no reason for them not to go with whatever CPU makes them look best. There is a performance difference between an A64 and an Intel Core 2 Duo with 8800 hardware, especially with SLI. It's in Nvidia's best interest to always show their hardware on the best systems available.
 