Q9650 and GTX 480?

mito

Greetings,

I'm starting to plan a SLI upgrade, perhaps in the next few months.

My CPU is the brave Q9650, overclocked to 3.75 GHz.

Can it handle two GTX 480s in SLI?

Should I really consider the i7?

I'm planning it for future games, everything maxed out @ 1920x1080.

Right now I'm running two GTX 280s in SLI.
In Just Cause 2 with all details maxed out, the framerate sometimes drops to 35 fps.

Thanks.
 
Where are you going to get two 480s? ;)
 
Your CPU will most certainly be a bottleneck, but I doubt there's a CPU out there that wouldn't be a bottleneck for GTX 480 SLI. You can likely overclock further if you wish; I run a Q9550 @ 4 GHz 24/7.
 
Your CPU will most certainly be a bottleneck
Only if you're running 3DMark and want the highest possible scores. Games today don't tend to be CPU-limited in the framerate region of the display devices typically in use right now.

With a faster CPU he could easily get max framerates in the 200-300 region in most any game but Crysis (and now Metro 2033 or whatsitscalled), but to our eyes that would look no better than 60 fps, because that's all the flat panel shows. Chances are it'd actually look worse, due to extreme horizontal tearing...

The reason I would get SLI'd 480s would be to bump the graphics settings in games up to max, and that adds no CPU load at all (unless the game does a lot of software-based physics, and there are like zero such titles).
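
To put rough numbers on that, here's a toy back-of-the-envelope model (the frame times are made up for illustration, not measurements): the framerate is set by whichever of the CPU or GPU takes longer per frame, and graphics settings mostly stretch only the GPU side.

# Toy frame-time model -- hypothetical numbers, purely illustrative.
# Assumes the CPU and GPU pipeline frames, so the slower one sets the pace.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 6.0        # per-frame CPU cost (game logic, draw calls) -- made up
gpu_low_ms = 4.0    # per-frame GPU cost at low settings -- made up
gpu_max_ms = 14.0   # per-frame GPU cost at max settings + AA -- made up

print(f"Low settings: ~{fps(cpu_ms, gpu_low_ms):.0f} fps (CPU is the limit)")
print(f"Max settings: ~{fps(cpu_ms, gpu_max_ms):.0f} fps (GPU is the limit)")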
 
That is what I was thinking.

Even though my CPU isn't a brand-new i7, it's still a powerful quad-core running at 3.75 GHz. Right?
 
If you're going to get dual cards, I think your processor is still going to bottleneck you.

Here's Crysis Warhead scaling with dual 5870s across a range of CPU platforms

Here's Far Cry 2 (which nobody plays, but makes a good benchmark ;)

Here's Stalker CoP

World in Conflict

Company of Heroes

Supreme Commander

This review isn't a compendium of all knowledge, but it gives you something to ponder. The Cliff's Notes version: CPU limitations in games are bigger than you might think, even on your overclocked Core 2 Quad. In fact, from the data above it looks like the Core 2 Quads are hit even harder than the Core 2 Duos in most games. I'm assuming that's due to the architectural limitation of each pair of cores having to talk to the other pair over Intel's FSB, rather than through a shared L3 cache as on the newer i3/i5/i7 architecture.

Oddly enough, for gamers, it looks like the i5 with its integrated PCIe controller is actually better than the i7 in terms of raw performance for multi-card setups.
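
A cheap way to check where you personally stand before spending money (my own rule of thumb, not something from that review): drop the resolution or settings well below what you normally run and watch the framerate; if it barely rises, the CPU is already the wall. The readings below are placeholders, plug in your own.

# Quick CPU-vs-GPU limit check -- placeholder readings, not benchmark data.
def cpu_limited(fps_normal, fps_low_settings, tolerance=0.10):
    # If unloading the GPU barely raises fps, the CPU is the limiter.
    return (fps_low_settings - fps_normal) / fps_normal < tolerance

print(cpu_limited(fps_normal=58.0, fps_low_settings=62.0))   # True  -> CPU-bound
print(cpu_limited(fps_normal=35.0, fps_low_settings=70.0))   # False -> GPU-bound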
 
While I agree with the premise that i7 outperforms C2Q, particularly when using multiple GPUs, something is severely wrong with the benchmarks in question. I am positive my Q9550 @ 4GHz outperforms my E8400 @ 4GHz in everything I do, including every game you've listed except for COH and Stalker COP, which I don't have. All of the other games on the list I have installed on both machines.
 
This review isn't a compendium of all knowledge, but it gives you something to ponder. The Cliff's Notes version: CPU limitations in games are bigger than you might think, even on your overclocked Core 2 Quad. In fact, from the data above it looks like the Core 2 Quads are hit even harder than the Core 2 Duos in most games. I'm assuming that's due to the architectural limitation of each pair of cores having to talk to the other pair over Intel's FSB, rather than through a shared L3 cache as on the newer i3/i5/i7 architecture.

^^^ I hate you!


:D
 
Also, Anandtech's benches on i7 socket 1156 vs. socket 1366 showed the 1366 processor to be superior in dual-card situations, thanks to its full x16 bus to each card, except where the socket 1156 i7's higher Turbo modes tipped the balance in favor of the newer hardware iteration.

I don't see why an integrated PCIe controller would matter one bit for games, which are virtually exclusively one-way in data transfers (host -> GPU), so slightly less latency to the GPU should not matter.
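
For scale, the bandwidth arithmetic (the per-lane figure is the standard PCIe 2.0 number; the per-frame upload size is a made-up example):

# Rough PCIe bandwidth check. PCIe 2.0 is ~500 MB/s per lane per direction.
# The per-frame host->GPU upload size is hypothetical, just for illustration.
PCIE2_MB_PER_LANE = 500

def link_bandwidth_mb(lanes):
    return lanes * PCIE2_MB_PER_LANE

upload_mb_per_frame = 20      # hypothetical per-frame traffic
fps = 60
needed = upload_mb_per_frame * fps   # MB/s actually required

for lanes in (16, 8):
    avail = link_bandwidth_mb(lanes)
    print(f"x{lanes}: {avail} MB/s available, {needed} MB/s needed "
          f"-> {avail / needed:.1f}x headroom")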
 