Wow, price vs. perf right now in PCs is nice...

Absolutely false. Microsoft's new "open" XML format cannot be read by the previous version of Office without an add-on, and support outside MS products is sparse at best (less than beta quality).

Their Outlook database is not XML or open in any way. MS's XML format is not approved (it failed the standards ballot).

And tell me, when you take that graphics-intensive MS Word doc in "open" XML over to Office 2003 or OOo, how does it look? :)

Since you ask: Office 2003 didn't support these XML document types -- go figure, I can't imagine why it won't open them. Oh, but don't worry about that whole backwards-compatibility bit built straight into Office 2007 that allows down-level saves. It's much more impressive to shoot down "the Microsoft man" than to even hint that they might be doing something right, or even headed in the right direction.

I'm arguing about you acting like an asshole when someone makes a friendly conversational suggestion based on an unwillingness to do extensive research on your behalf.
Funny, let's review your little reply to Shaidar and Skrying (I wasn't even part of that discussion until a bit later):
Okay, so Shaidar and Sky, tell me the power draw on this:
Q6600, 2 SATA drives (7200 rpm), 4 gigs of RAM, a 320MB 8800 GTS and a DVD-RW...

Also, please elaborate on the longevity of PSUs at 60/80/90% capacity and the potential for component damage upon failure.

So, here you are stretching your materials-scientist arms with a bit of your own chest-thumping, and yet you were still proven wrong by Sky and Shaidar both. What's more interesting is that you threw ME under the bus by saying I was talking smack to you. Wanna see my first reply back to you? Here it is as a friendly reminder:
ME said:
Indeed, I'm FAR from worried about the PSU in this thing. He will not be overclocking in any way, shape or form -- he's not an enthusiast at that level. He likes his video games, and for a guy better than 50 years old, I think it's cool that he still plays Halo / COD / Crysis / et al.

There are obviously places we could trade off a bit, or hell, we could slap in another $50 (still wouldn't break $1100) and have an even bigger and still high-quality power supply if need be.

That was about as friendly as I needed to be, since you had already long since been proven wrong. You decided to come back and try to recover some face or something, which really doesn't matter in the least.

So, here's how it's gonna go: you're going to keep replying, because I keep replying. And you'll still have nothing more to add to this ENTIRE thread from this point forward, because you'll be hung up on how Sky and Shaidar proved your entire point wrong and how you're now taking it out on me.

So here's the solution: meet my very first ignore, ever.
 
Jeez man, you're right, I will reply, and when you click "view post" because you're tempted, here's what you'll see...

Shaidar and Sky had knowledge I wanted. I relied on PSU calculators, and they showed me those are a joke. I asked what my system would draw because I have multiple systems and PSUs and actually wanted to know, since my only other options were a PSU calculator (a joke) or an ammeter (no thanks). I wanted to know about duty cycles because I killed a PSU. It was a good conversation with nobody taking any swipes until you stepped in with your first rude post saying (amongst other things):
Let's take a tally: we have people who don't understand the reality of power draw, who don't understand the reality and economics of "future proofing" a piece of PC hardware, and who can't really comprehend why a five-year-old OS isn't the perfect choice for a brand-new PC?
Nobody proved me wrong, because the only thing I ever said was that the PSU might be too wimpy based on what I gathered off the net. Once informed, I never stood my ground against anything other than you and your attitude/rudeness. I was wrong, was shown to be wrong, and pursued more information -- the logical thing to do when you are learning.

You, on the other hand, were rude.

If your implication is that I was rude asking Shaidar or Sky for info, you're simply mistaken. They're smart folks and have helped me (Shaidar in this case) with getting my folding on the right track. I figured they could help me figure out why my 520 died.

Anyway, thanks for the ignore. I hope nobody else ever asks you for a modicum of politeness.

Ciao.
 
Buffering doesn't eliminate the latency penalty for synchronizing out-of-phase clock domains. If a clock cycle is incomplete in the DRAM clock domain when the FSB clock is ready (that is, with mismatched base clocks), it is not safe to pull data from any buffer that is receiving data, because the DRAM bus may not have settled to its final state.

Whatever margin there is between the clock domains shows up on the receiving side either as a wasted duty cycle (or more, if there is a wide disparity) if it is the faster domain, or as a slight increase in overall latency because of the extra buffering.
It's a small penalty, but it is one that exists in most cases regardless.
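To put a rough number on that penalty, here's a minimal sketch (my own illustration, not anything from this thread) of the worst case when data crosses the boundary through a two-stage synchronizing buffer like the one described above; the clock figure is an assumption for the example:

```python
# Hypothetical worst case for a clock-domain crossing: the data just misses a
# receiving-side edge (almost one full receiver period of waiting) and then spends
# two receiver cycles in the synchronizing buffer before it can be used.

def crossing_penalty_ns(receiver_mhz: float, buffer_stages: int = 2) -> float:
    period_ns = 1000.0 / receiver_mhz
    return period_ns + buffer_stages * period_ns

# e.g. handing DRAM data off to a 333 MHz FSB base clock
print(crossing_penalty_ns(333))   # ~9 ns of extra latency in the worst case
```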

Buffering also does not fake peak bandwidth. If the FSB is already saturated, the RAM's extra bandwidth has a limited effect beyond letting the northbridge service the occasional DMA request.

The problem with counting on the extra bandwidth for that is that, in FSB-limited situations, the CPU may be involved in orchestrating some of those DMA transfers, or it may be working on critical data that determines what the other memory clients should do. That means execution slowed by the FSB slows the DMA traffic anyway.

Why else do most benchmarks show such limited gains from memory bandwidth that exceeds the FSB's bandwidth? Usually, the CPU is doing something pretty important when the FSB is saturated. It's only under some limited circumstances that the other components can keep going without rapidly running out of commands the CPU should be generating.
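Some back-of-the-envelope peak numbers (my assumptions, just typical figures for this class of hardware) make the same point about how little of the memory's bandwidth the CPU can even reach once the FSB is the limit:

```python
# Assumed example: 64-bit FSB at 1333 MT/s versus dual-channel DDR2-800.
def bus_gbps(width_bits: int, transfers_per_sec: float) -> float:
    return width_bits / 8 * transfers_per_sec / 1e9

fsb_peak = bus_gbps(64, 1333e6)    # ~10.7 GB/s over the front-side bus
mem_peak = bus_gbps(128, 800e6)    # ~12.8 GB/s out of the DIMMs

print(f"FSB peak:    {fsb_peak:.1f} GB/s")
print(f"Memory peak: {mem_peak:.1f} GB/s")
print(f"Headroom the CPU never sees: {mem_peak - fsb_peak:.1f} GB/s (DMA traffic only)")
```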

You don't need to hide latency, since there are enough clocks to hide it in. The buffer has at least two stages to separate completed transfers. You can saturate the FSB, but the L2 path? Hard to imagine in the real world...
Benchmarks show limited gains because Core 2 in common configurations is not bandwidth starved. Lowering memory clocks below 666 likewise causes only limited performance drops. Still, it scales slowly but steadily with memory speed up to 2 effective GHz (faster hasn't been tested yet).
 
You don't need to hide latency, since there are enough clocks to hide it in. The buffer has at least two stages to separate completed transfers.
That doesn't work for new requests; it only works for transfers in progress. New fetches will still get the same latency.

Putas said:
You can saturate the FSB, but the L2 path? Hard to imagine in the real world...
Is the L2 cache still driven from the northbridge? Are you sure? Because in the days of the 486 and 586, when the L2 cache wasn't ON the CPU die, this was the case. But we're in the 21st century now, and processors have their L2 caches directly on the die; there's no L2 cache bus from CPU to northbridge and back to CPU -- that would be ludicrous.

Putas said:
Benchmarks show limited gains because Core 2 in common configurations is not bandwidth starved. Lowering memory clocks below 666 likewise causes only limited performance drops. Still, it scales slowly but steadily with memory speed up to 2 effective GHz (faster hasn't been tested yet).

Where are these benchmarks demonstrating scaling of system performance directly tied to memory running faster than the FSB? Do you have any links? Because I can't find anything ANYWHERE on Google that demonstrates what you're claiming. I see people keeping the same CPU speed but altering multipliers and FSB values to increase performance, but that entirely proves the point we're making about memory not gaining an inch over 1:1 speed ratios...
 
Anyway, everything is great with the PC described in the first post; the RAM issue we're arguing about has little consequence (and good 2GB DDR2-666 sticks aren't something to sneeze at).
 
That doesn't work for new requests; it only works for transfers in progress. New fetches will still get the same latency.

Is the L2 cache still driven from the northbridge? Are you sure? Because in the days of the 486 and 586, when the L2 cache wasn't ON the CPU die, this was the case. But we're in the 21st century now, and processors have their L2 caches directly on the die; there's no L2 cache bus from CPU to northbridge and back to CPU -- that would be ludicrous.

Where are these benchmarks demonstrating scaling of system performance directly tied to memory running faster than the FSB? Do you have any links? Because I can't find anything ANYWHERE on Google that demonstrates what you're claiming. I see people keeping the same CPU speed but altering multipliers and FSB values to increase performance, but that entirely proves the point we're making about memory not gaining an inch over 1:1 speed ratios...

New fetches will be in progress at some point, no?
21st century, with Intel's 256-bit L2 paths compared to 64-bit memory paths. I did not say anything about the NB driving the L2.
How about: http://www.anandtech.com/memory/showdoc.aspx?i=3121&p=4

Don't get me wrong, I am not sneezing at DDR2-666. I am just saying you get more performance with faster memory; a synced FSB does not matter.
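For a rough sense of the gap being pointed at there, here's a quick comparison; the 256-bit L2 interface running at core clock is my reading of the point, and the 2.4 GHz core clock and single DDR2-800 channel are assumed figures for illustration:

```python
# Assumed figures: 256-bit L2 <-> core path at a 2.4 GHz core clock versus one
# 64-bit channel of DDR2-800.
def path_gbps(width_bits: int, rate_hz: float) -> float:
    return width_bits / 8 * rate_hz / 1e9

print(f"L2 path:        ~{path_gbps(256, 2.4e9):.0f} GB/s")   # ~77 GB/s
print(f"Memory channel: ~{path_gbps(64, 800e6):.1f} GB/s")    # 6.4 GB/s
```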
 
This isn't some enthusiast that's going to replace CPUs every six months.
Maybe he does want to upgrade to an FSB1600 quad core (if there ever is such a thing) one year down the line. I don't really get why you're arguing so strongly about this. I looked up prices: 2GB of DDR2-800 goes for €30, 2GB of DDR2-667 goes for €26, and that's brand-name stuff (Aeneon, Qimonda's retail brand).
 
New fetches will be in progress at some point, no?
21st century, with Intel's 256-bit L2 paths compared to 64-bit memory paths. I did not say anything about the NB driving the L2.
So then why did you just get done saying this:
Putas said:
You can saturate the FSB, but the L2 path? Hard to imagine in the real world...
We're talking about the bottleneck of northbridge to CPU as it affects the bus between the northbridge and main memory. And since that's what we're talking about, the L2 path never comes into play.

Indeed! I'm quite glad you posted that; let's look at the Sandra UNbuffered scores for the very first row of results for those "Cell Shock" DDR3 modules. In fact, let's look at the very first benchmark score and the very last benchmark score: memory at 400MHz, bus at 333MHz -> memory at 1000MHz, bus at 500MHz.

They increased memory speed by 150% and increased bus speed by 50%. How much did the unbuffered RAM score go up? The first one was 4364, the last one was 6760. What's the difference between these two? 55%.

A fifty-five percent performance increase for a fifty percent bus speed increase and a ONE HUNDRED AND FIFTY percent memory speed increase.

A logical person might consider the possibility that memory bandwidth is sorely limited by the front side bus speed. Are you a logical person?
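For anyone who wants to check the arithmetic, here it is spelled out using the clocks and scores quoted above:

```python
# Percentage increases from the quoted Anandtech numbers.
def pct_increase(old: float, new: float) -> float:
    return (new - old) / old * 100

print(pct_increase(400, 1000))    # memory clock: +150%
print(pct_increase(333, 500))     # bus clock: ~+50%
print(pct_increase(4364, 6760))   # Sandra unbuffered score: ~+55%
```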
 
Maybe he does want to upgrade to an FSB1600 quad core (if there ever is such a thing) one year down the line. I don't really get why you're arguing so strongly about this. I looked up prices: 2GB of DDR2-800 goes for €30, 2GB of DDR2-667 goes for €26, and that's brand-name stuff (Aeneon, Qimonda's retail brand).

But why? Maybe he'll want a whole new computer. Maybe he'll buy a Mac. Maybe he won't even want a computer anymore, because the world will have come to an immediate stop due to some asteroid.

Here's a hint: I know the guy, you don't. He won't upgrade this computer, because he's not a computer guy. He'll play games on it until it dies (just like he has with the last three or four I've known him to own) and will then replace it and give away the broken one.

Why spend even five more euros (what is that, like $10 US? :oops: :cry: ) for something he will never use? The financial goal was to get as close to $1000 as possible without going over, and you're putting it further over. Again, why?
 
... and the fact that memory timings vary a lot doesn't mean a thing?

I wonder how much latency changes with higher clocked RAM.
 
... and the fact that memory timings vary a lot doesn't mean a thing?

I wonder how much latency changes with higher clocked RAM.

Sure, do a benchmark with the same timings if you're so inclined. But the reality is, anyone who can do math can literally spend 30 seconds to figure out why it doesn't matter. The RAM can run at two billion gigahertz, but the fact is that the bus between the CPU and northbridge is the bottleneck at anything higher than 1:1.

This isn't some hypothetical "well, maybe the bits just go around the bus and magically appear"; this is straightforward and long-understood digital signal bussing.
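Here's that 30 seconds of math sketched out (bus widths and transfer rates are my assumed figures): the CPU only sees whatever the slower of the two links can carry, so raising the DRAM side past 1:1 doesn't move the number at all:

```python
# The CPU-visible bandwidth is capped by the slower link in the chain.
def cpu_visible_gbps(fsb_mts: float, mem_mts: float, channels: int = 2) -> float:
    fsb_bw = 8 * fsb_mts / 1e3              # 64-bit FSB, in GB/s
    mem_bw = 8 * mem_mts / 1e3 * channels   # 64-bit per memory channel, in GB/s
    return min(fsb_bw, mem_bw)

for mem in (667, 800, 1066):
    print(mem, round(cpu_visible_gbps(1333, mem), 1))   # stuck at ~10.7 GB/s past 1:1
```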

EDIT
You know what? If increasing the memory speed by 150% doesn't somehow nullify the extra two cycles of latency, then nothing will. If you wanted to make a strong argument for increasing memory speed beyond what the northbridge <-> CPU bus can sustain, maybe it would be for eliminating some amount of latency. And maybe with HIGH-latency RAM (perhaps like DDR3?) it could make an impact.

But that impact will be tiny at best. Go back to the Anand benchmark and look at the gaming benchmarks. Look at how the scores increase... Each 33% memory speed bump netted a LESS THAN 2% increase in performance. But when the bus jumped up? Well, another 2% :p But notice that at that point, increasing memory speed did basically nothing.
 
... and the fact that memory timings vary a lot doesn't mean a thing?

I wonder how much latency changes with higher clocked RAM.

It's often a wash. Higher-clocked RAM at higher timings often results in roughly the same latency as lower-clocked RAM with lower timings. For example, DDR2-800 with CL5 is pretty much the same as DDR2-667 with CL4 or DDR-400 with CL2.5 when it comes to latency.
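For anyone who wants the arithmetic behind that "wash": first-word latency is just the CAS cycles divided by the memory clock (half the effective transfer rate). Using the same example modules:

```python
# CAS latency in nanoseconds: CL cycles / memory clock.
def cas_ns(cl: float, data_rate_mts: float) -> float:
    clock_mhz = data_rate_mts / 2     # DDR: the clock is half the transfer rate
    return cl / clock_mhz * 1000

print(cas_ns(5, 800))     # DDR2-800 CL5  -> 12.5 ns
print(cas_ns(4, 667))     # DDR2-667 CL4  -> ~12.0 ns
print(cas_ns(2.5, 400))   # DDR-400 CL2.5 -> 12.5 ns
```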
 
It's often a wash. Higher-clocked RAM at higher timings often results in roughly the same latency as lower-clocked RAM with lower timings. For example, DDR2-800 with CL5 is pretty much the same as DDR2-667 with CL4 or DDR-400 with CL2.5 when it comes to latency.
It shouldn't be too hard to find 800MHz RAM with the same timings as 667MHz RAM. A higher clock speed also lowers latency a bit.
 
It shouldn't be too hard to find 800MHz RAM with the same timings as 667MHz RAM. A higher clock speed also lowers latency a bit.

Which again is why it's a wash, meaning we're back to square one: the bottleneck is the front-side bus, not latency.
 
We're talking about the bottleneck of northbridge to CPU as it affects the bus between northbridge and main memory. So since that's what we're talking about, L2 path never comes into play.

No L2 path in the game -- I am stoned.

A logical person might consider the possibility that memory bandwidth is sorely limited by the front side bus speed. Are you a logical person?

A logical person would see the bandwidth increase when just the memory clock is higher, out of sync. That's what I am talking about.
 
Which again is why it's a wash, meaning we're back to square one: the bottleneck is the front-side bus, not latency.
I hope you know that the majority of applications are not bandwidth-bottlenecked. If they were, then the K8 would wipe the floor with all Core 2 CPUs, including the quads.
 
Ugh. So then, let's tally up again.

The majority of apps aren't bandwidth-bottlenecked.
The scant few apps that ARE memory-bandwidth-bottlenecked respond several orders of magnitude better when the FSB is increased in step with memory speed versus simply increasing memory speed to any point higher than FSB speed.

So... Back to my original point in this whole memory speed debacle:

Why are we burning up more electricity, spending more money, and introducing more heat into the system by putting in memory that is faster than the bus speed, when by the above two accounts it nets us a gain close enough to zero to be within the margin of testing error?

Why?
 
Why are we burning up more electricity, spending more money, and introducing more heat into the system by putting in memory that is faster than the bus speed, when by the above two accounts it nets us a gain close enough to zero to be within the margin of testing error?

Even within the margin of error, it is not error, since you get the same results every time. The increase in heat and consumption is also within the margin of error anyway. Don't forget that besides 10+ GB/s for the CPU, you have another concurrent 10+ GB/s for PCI and the southbridge.
 
Even within the margin of error, it is not error, since you get the same results every time. The increase in heat and consumption is also within the margin of error anyway. Don't forget that besides 10+ GB/s for the CPU, you have another concurrent 10+ GB/s for PCI and the southbridge.

What are PCI and the southbridge going to do with memory? DMA transfers, maybe? The reality is they're not going to be doing DMA transfers at the exact same time that the CPU is 100% utilizing the memory bus, so it's still a 100% non-issue. And DMA is the only point where the CPU doesn't intervene in memory access between peripheral devices and main memory.

And since power draw is directly tied to frequency, it's not a "margin of error" addition. Nor is heat generation, which is also directly tied to power draw.
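As a rough illustration of that point (my own sketch, and the baseline wattage is a purely assumed figure): CMOS dynamic power goes roughly as C*V^2*f, so at the same voltage a memory clock bump raises that interface's dynamic power about proportionally. Small in absolute watts, but it's a real, repeatable increase rather than measurement noise:

```python
# Dynamic power scales ~linearly with frequency at fixed capacitance and voltage.
def scaled_dynamic_power_w(base_w: float, f_old_mhz: float, f_new_mhz: float) -> float:
    return base_w * (f_new_mhz / f_old_mhz)

# Assumed 3 W baseline for a module's interface at DDR2-667 speeds:
print(scaled_dynamic_power_w(3.0, 667, 800))   # ~3.6 W at DDR2-800 speeds
```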

And it isn't my rig, so I'm not buying jack. And I'm not suggesting he buy anything different, either.
 