Supercomputer

KimB

I'm a bit excited about this:
http://investors.cray.com/phoenix.zhtml?c=98390&p=irol-newsArticle&ID=894829&highlight=

New supercomputer going up at NERSC that will consist of over 19,000 2.6GHz AMD Opteron cores (9,500 dual-core CPUs). Anyway, I'm hoping to have some fun putting these beasts into action for my own projects in the coming year :) It'll be quite interesting thinking of ways to make use of more than a couple dozen processors...

And just for a sense of scale, this might be compared to NERSC's current largest supercomputer, Seaborg, which contains just over 6,000 300MHz PowerPC cores, or to Jacquard, the machine I last worked on, which contains 712 2.2GHz Opteron CPUs. I'm rather excited about having this much more processing power to work with :)
 
Well, perhaps not. From what I understand, the Core 2 chips still don't gain performance from 64-bit code, while the Athlon 64 architecture does. So that should close the performance gap a fair amount.

But the real nail in the coffin is that I don't think Intel even has a Core 2-based Xeon available yet, so the idea is interesting but simply not possible at this time.
 
Opteron is already proven in the supercomputer market. I expect Core 2 chips to even things out a lot in the next year or two, but it's not going to happen overnight.
 
Won't the Xeons have trouble as the number of cores keeps going up? Isn't it still just two cores slapped together, like the Pentium D Xeons? Of course the FSB is faster, but how far can that take you?
 
The HyperTransport interconnect will be useful, but I'm not sure how 9,500 chips are hooked together, or whether HyperTransport makes a significant difference at that scale.
 
They use a proprietary Cray communications interface between nodes.
 
What kind of computations do you do with these toys at NERSC?
Right now, largely parameter estimation for CMB data. Basically, it takes on the order of 5 seconds for a modern PC CPU to calculate the power spectrum of the CMB sky from physical parameters, for the simplest assumptions about the universe. Performing a good MCMC run requires a few hundred thousand of these calculations. Other statistics may require an order of magnitude more.
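
Just to make the arithmetic explicit, here's a back-of-the-envelope sketch using the figures above; the parallel chain counts are illustrative assumptions, not the actual job configuration:

```python
# Rough cost of one MCMC run, using the figures quoted above:
# ~5 s per power-spectrum evaluation, a few hundred thousand evaluations.
seconds_per_eval = 5.0
evals_per_run = 300_000            # "a few hundred thousand"

total_seconds = seconds_per_eval * evals_per_run
print(f"serial, one CPU: {total_seconds / 86400:.0f} days")   # ~17 days

# MCMC chains are sequential, so the easy parallelism is running several
# independent chains at once (chain counts here are purely illustrative).
for n_chains in (8, 16, 32):
    wall_hours = total_seconds / n_chains / 3600
    print(f"{n_chains:>3} parallel chains: ~{wall_hours:.0f} hours wall-clock")
```

With a handful of independent chains the wall-clock time drops from a couple of weeks to roughly a day, which lines up with the numbers below.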

So one CPU might take a couple of weeks to perform an MCMC run, and I may want to do multiple runs with different assumptions made. On Jacquard I can shorten this wait time to a single day. On this new supercomputer, I'll have fun thinking of what kinds of new things I can do with the added computing power :)
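
For anyone curious how that kind of parallelism actually gets used, here's a minimal sketch of the "one independent chain per core" pattern, assuming mpi4py and NumPy; run_chain() and cmb_likelihood() are toy placeholders, not the actual analysis code run at NERSC:

```python
# Minimal sketch: embarrassingly parallel MCMC chains, one per MPI rank.
# Assumes mpi4py and NumPy are available; the likelihood here is a toy
# stand-in for the real power-spectrum calculation (~5 s per evaluation).
import numpy as np
from mpi4py import MPI

def cmb_likelihood(params):
    # Placeholder: in practice this would call a Boltzmann code.
    return -0.5 * np.sum(params**2)

def run_chain(n_steps, rng):
    # Simple random-walk Metropolis over a toy 6-parameter space.
    params = rng.normal(size=6)
    logp = cmb_likelihood(params)
    samples = []
    for _ in range(n_steps):
        proposal = params + 0.1 * rng.normal(size=6)
        logp_new = cmb_likelihood(proposal)
        if np.log(rng.uniform()) < logp_new - logp:
            params, logp = proposal, logp_new
        samples.append(params.copy())
    return np.array(samples)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each core runs its own chain with an independent seed.
rng = np.random.default_rng(seed=rank)
samples = run_chain(n_steps=1000, rng=rng)

# Gather all chains on rank 0 for convergence checks and posterior estimates.
all_chains = comm.gather(samples, root=0)
if rank == 0:
    print(f"collected {len(all_chains)} chains of {samples.shape[0]} samples each")
```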
 