NVIDIA doesn't recommend AMD CPUs

So are you at least going to explain your use of an article on SLI CPU scaling to convey the 8800 as being CPU limited?
No one here was suggesting that SLI 8800s didn't need all the CPU help they could get, yet you chose to show benchmarks of an SLI 8800 setup in a feeble attempt to prove me wrong :rolleyes:

You proved yourself wrong with your very first link. The 8800 does scale with a faster CPU, as your own charts showed. You noted that it was not much, but the Core 2 Duo line is clearly a far superior processor to anything AMD currently offers, so it is perfectly logical that Nvidia would prefer you use the fastest processors possible to benchmark their cards; it makes the cards look better. What is so hard for you to understand about that is beyond me. The link I showed was to prove that a faster CPU offers much greater headroom, that at some point your CPU does indeed bottleneck you, and that this occurs even with just a single 8800. Any benchmark could show you this: I could run my Core 2 Duo at stock and then at its current speed (1.8GHz to 2.8GHz), even in Oblivion, and with just my simple X1950 Pro you'd see nice scaling. I've run the tests many times, across many games; I know the benefits of scaling up my processor and that having a faster one improves game performance. As much as a new graphics card? No, but that is not the point of this at all, which you clearly seem to miss.
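To put rough numbers on that kind of stock-vs-overclocked comparison, here's a minimal sketch (the games are real, but the FPS figures are made up purely for illustration) of how the per-game scaling could be tallied:

[code]
# Minimal sketch: quantify CPU scaling by comparing average FPS for the same
# benchmark run at a stock clock and at an overclocked one.
runs = {
    # game: (avg FPS at 1.8GHz stock, avg FPS at 2.8GHz overclock) -- made-up values
    "Oblivion": (42.0, 51.0),
    "Quake 4":  (78.0, 104.0),
    "F.E.A.R.": (60.0, 66.0),
}

for game, (stock_fps, oc_fps) in runs.items():
    gain_pct = (oc_fps - stock_fps) / stock_fps * 100.0
    print(f"{game:10s}: {stock_fps:6.1f} -> {oc_fps:6.1f} fps  (+{gain_pct:.1f}%)")
[/code]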

Thanks for the rep BTW, I was oh so deserving of it.
 
I have no doubt that the C2D is an awesome CPU; I simply don't agree that you need a C2D to really utilize a G80 effectively in a single-card config.
That minimal scaling was with 4x FSAA as well, and I use 8x at a minimum.
I also brought up real-world use in the last post, not benchmarking, so you have to keep that in mind as well. I don't care about seeing an additional 3 or 5 fps; I want uber scaling, something that happens in Quake 4 with A64 scaling even at high res. Now, if every game were like that I wouldn't be arguing ;)
As for your 1950 Pro, what res and settings are you using in Oblivion?

Btw, no problem mate, showing me an SLI scaling article was the last straw ;)
It was vulgar of you to do so; it seemed desperate :p
Besides, at least maybe now you'll neg rep me, as I haven't been repped for many months; I'm "horny" if you will ;)
Though I haven't been that active here either.
 

All those smileys and nonsense gave me a headache.

Clearly, from the low-end E4300 up to the ultra high-end QX6700 and X6800, the Core 2 architecture is superior to anything comparable from AMD's camp, and even to at least one market segment above that.
If the AMD64 designs weren't holding the 8800 GTX/GTS back, then you would see progressively diminishing returns with each new CPU speed step, on both the Intel and the AMD chips.
But what we see is that even at the bottom of the barrel, simply having a Core 2 makes an ever more tangible difference.
And the more demanding the resolution/AA/AF setting, the more we see that difference, which is the opposite of the previously described situation.
Not to mention that Core 2 can do this while consuming less power.
I'm not saying that it isn't possible that DX10 titles will alleviate this CPU limitation somewhat, but for now we have to stick with what we have, and that's DX9 software.

So, if AMD wants to get back on track and recapture the overall performance lead (which they deservedly held for 3 or 4 years), they will need to address these issues, which are undeniable to anyone with a shred of common sense.
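If anyone wants to sanity-check the "diminishing returns" argument against a review's numbers, here's a minimal sketch (the CPU names are real, the FPS values are invented) that just computes the marginal gain from each CPU step with the card and settings held fixed:

[code]
# Minimal sketch: marginal FPS gain per CPU step with a single 8800 and fixed settings.
cpu_steps = [
    # (CPU, avg FPS) -- invented numbers, for illustration only
    ("Athlon 64 3800+", 55.0),
    ("Athlon 64 FX-62", 62.0),
    ("Core 2 E4300",    70.0),
    ("Core 2 X6800",    81.0),
]

for (prev_cpu, prev_fps), (cpu, fps) in zip(cpu_steps, cpu_steps[1:]):
    print(f"{prev_cpu} -> {cpu}: +{fps - prev_fps:.1f} fps")

# Gains that stay roughly constant (or grow) from step to step suggest the card
# is still CPU-limited; gains that shrink toward zero mean the GPU has become
# the bottleneck.
[/code]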
 
Yes, but hardware changes fast. What would happen when K8L comes along and takes the performance crown soundly? Would that be represented on their retail boxes? I'm thinking no.

Nvidia recommends what is best for their business and their business partners.
Big difference between what goes in a press kit and what goes on a retail box.
 
IIRC the only real games that showed massive CPU scaling were Quake 4 and the Source engine games; btw, check out the minimum framerates.
I've never denied that the C2D is clearly a superior chip; however, unless you have a low-end A64/Opteron there won't be a big difference in modern games.
Now if you want to make a big deal about a 3 fps gain with a C2D, go ahead, but don't expect me to join in saying you clearly need a C2D with a G80.

From the 3800+ to the 4600+ there is no scaling going on at 1600x1200; it's only the two highest speed grades that give you 4-5 fps more, for some odd reason.
Here are some C2D X6800 numbers from their initial review, for good measure.
 
This is nothing. The real comedy starts when AMD doesn't recommend their own CPUs when it's time for the R600 reviews. :LOL:
 

tahir2 said:
What if I said at one point in the recent past AMD-ATi did not recommend that we use an AMD processor when submitting a review machine with an X1900XT 256MB in it?

Has already happened. ;) Undoubtedly will happen again.
 

Well, "Barcelona's" ES' work on Asus QuadFX boards, so why not hand out a review kit with it ? :D

That way, they would limit early R600 testing to AMD processors, giving us a taste of what's to come from both business units, leaving Intel out of the equation (momentarily...), and perhaps just keeping quiet about the Nvidia chipset until the in-house product emerges.

In fact, Nvidia did this to some extent by having plenty of 8800s reviewed together with the simultaneously unveiled NF680i SLI chipset.
 
Yeah, I think you guys are arguing over the very obvious. You can make almost any modern game either CPU-bound or GPU-bound. Is there anyone who doesn't know about this? ;)

But if I had to choose a C2D + G70 or an A64 + G80 for gaming, I'd probably pick the latter. I believe that's what Radeonic2 is trying to say.
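For what it's worth, here's a toy model of why that's true: the frame time is roughly set by whichever of the CPU or GPU takes longer per frame, so pushing the resolution up mostly loads the GPU and moves the bottleneck. The timings below are made up just to illustrate the idea:

[code]
def fps(cpu_ms: float, gpu_ms_per_mpixel: float, width: int, height: int) -> float:
    """Toy frame-rate model: the slower of the two stages sets the frame time."""
    gpu_ms = gpu_ms_per_mpixel * (width * height) / 1e6
    return 1000.0 / max(cpu_ms, gpu_ms)

# Made-up costs: 12 ms of CPU work per frame, 8 ms of GPU work per megapixel.
for width, height in [(1024, 768), (1600, 1200), (2048, 1536)]:
    rate = fps(cpu_ms=12.0, gpu_ms_per_mpixel=8.0, width=width, height=height)
    print(f"{width}x{height}: {rate:.0f} fps")
# At the low res the CPU dominates (a faster CPU would help); at the high res
# the GPU dominates (a faster CPU would barely matter).
[/code]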
 

The thread was not about picking a C2D + G70 or an A64 + G80. It was about benchmarking: would you rather have a C2D + G80 or an A64 + G80? Rather clearly, you would want the C2D.
 
Gotta get that extra 2 fps, right?
You forgot about Poland... er, about the Opteron btw :p
Very nice chip.
2.8GHz for ~$150 when overclocked.
 

You honestly think Nvidia cares about which solution is the cheapest when recommending what processor you run their card with for articles and benchmarks? They do not; they care about the best performance the card can show. Of course they are going to recommend the best-performing processor to go with their card.
 