The San Diego Supercomputer Center
One of the highlights of our meeting in La Jolla last week was a quick tour through the computing room at the San Diego Supercomputer Center. WARNING: The following post includes some serious geekery! Those who aren’t turned on by massively parallel computing prowess might want to stop reading now.
My research group is known for its innovative approaches to global circulation modeling, and if there’s anybody in the world who uses a lot of cycles on big computers, it’s definitely us. So it made sense for a few of us to go check out the big machines at the UC San Diego center where our models have run more than a few trillion calculations.
The building and computer rooms are always in a state of flux in this kind of facility. New machines are brought in all the time, new groups form to study new projects, and people come and go. Last week, when I walked into the building, the first door on the right was for a Neural Network (Artificial Intelligence) in-house research group. If anybody is going to create a computer that takes over the world, it would be these people.
We wandered through the computer room, looking at huge supercomputers, both old and new. The newest, biggest machine was the Triton Resource. This computer has 256 nodes with 8 processing cores on each node, which gives it processing power more than 500x that of the most powerful desktop computers. This certainly isn’t the most powerful supercomputer in the world today, but it has some unique features. Each of those 256 eight-core nodes comes with 24 GB of memory, which makes this computer very, very good at sifting through huge amounts of data very quickly.
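For the curious, here’s a quick back-of-envelope tally of what those specs add up to. This is just a sketch in Python using the figures quoted above; the 500x-a-desktop comparison obviously depends on which desktop you pick.

    # Aggregate totals for the Triton Resource, from the specs above.
    nodes = 256
    cores_per_node = 8
    ram_per_node_gb = 24

    total_cores = nodes * cores_per_node    # 2,048 cores
    total_ram_gb = nodes * ram_per_node_gb  # 6,144 GB, roughly 6 TB of memory

    print(f"{total_cores} cores, {total_ram_gb} GB of RAM (~{total_ram_gb / 1024:.0f} TB)")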
This is the specific challenge of supercomputing that UCSD has decided to tackle: the overwhelming tsunami of data that results from these huge model runs. The image above is of a room-sized hard-drive array. These people don’t even really know how much storage they have; the numbers are too big to wrap your brain around. But it’s what we need right now. With climate models doing 200-year runs, and saving the state of the entire world four times per simulated day, the trick is not having the cycles to run the model, but having the space available to store all that data. And UCSD’s Supercomputer Center has it all!
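To see why storage, not cycles, is the bottleneck, here’s a rough sketch of the arithmetic. The per-snapshot size is a made-up placeholder, not a number from our actual model runs.

    # How many snapshots does a 200-year run produce, saving the
    # global state four times per simulated day?
    years = 200
    snapshots_per_day = 4
    snapshots = years * 365 * snapshots_per_day  # 292,000 snapshots

    # Hypothetical: assume each saved global state is about 1 GB.
    snapshot_gb = 1.0
    total_tb = snapshots * snapshot_gb / 1024

    print(f"{snapshots:,} snapshots, roughly {total_tb:.0f} TB for a single run")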
Nice! I used to work in a computing lab in college where we had several clusters. Nothing close to this though!
thanks for the spoiler. i didn’t read it.
Thanks for the comments! Narc, I knew you were a geek, but now I’m starting to understand just how much! And likewise for you, S, except for whatever the opposite of geek is.
In my undergrad days I had a penchant for installing Linux on old computers. If I had continued with the CHAOS project I would now have a model running in more detail than the real universe!
http://tldp.org/LDP/LG/issue30/vrenios.html
Awesome link, dylan! If you can’t get faster cycles, I guess you can always get more! That is an old computer axiom, isn’t it? You get more bandwidth from a dump truck full of tapes than from the fastest fiber optics available. It’s odd, but true, and our group maintains a library of hard drives that we *mail* back and forth, because it takes less time to FedEx the data than it would to FTP it.
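For anyone who doubts the dump-truck axiom, the arithmetic is easy to check. The drive size, shipping time, and link speed below are illustrative guesses, not our group’s actual numbers.

    # Sneakernet vs. the network: effective bandwidth of a shipped drive.
    # All figures here are illustrative assumptions, not measured values.
    drive_tb = 2.0         # capacity of one mailed hard drive, in TB
    shipping_hours = 24.0  # overnight FedEx

    network_mbps = 100.0   # a respectable FTP link, in megabits per second

    shipped_gbps = drive_tb * 8 * 1000 / (shipping_hours * 3600)  # gigabits/s
    network_gbps = network_mbps / 1000

    print(f"shipped drive: ~{shipped_gbps:.2f} Gbit/s effective")
    print(f"network link:  ~{network_gbps:.2f} Gbit/s")

Even with a generous network number, the box in the mail wins.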