Help:CSDMS HPCC
Revision as of 11:07, 30 July 2009
The CSDMS High Performance Computing Cluster (Code name: beach)
The CSDMS High Performance Computing Cluster (HPCC) provides CSDMS researchers a state-of-the-art HPC cluster.
Use of the CSDMS HPCC is available free of charge to the CSDMS community! To get an account on our machine you will need to: become a member of the CSDMS project, and sign up for an account. That's it!
Hardware
The CSDMS High Performance Computing Cluster is an SGI Altix XE 1300 that consists of 64 Altix XE 320 compute nodes (for a total of 512 cores). The compute nodes are configured with two quad-core 3.0 GHz E5473 (Harpertown) processors. 56 of the 64 nodes have 2 GB of memory per core, while the remaining eight nodes have 4 GB of memory per core. Internode communication is accomplished through either Gigabit Ethernet or over a non-blocking InfiniBand fabric.
Each node has 250 GB of local temporary storage. In addition, all nodes are able to access 36 TB of RAID storage through NFS.
The CSDMS system will be tied into the larger 7,000-core (>100 Tflop) Front Range Computing Consortium supercomputer. This supercomputer will consist of 10 Sun Blade 6048 Modular System racks: nine deployed to form a tightly integrated computational plant, and the remaining rack serving as a GPU-based accelerated computing system. In addition, the Grid environment will provide access to NCAR's mass storage system.
Hardware Summary
Node | Type | Processors | Memory | Internal Storage |
---|---|---|---|---|
beach.colorado.edu | Head | 2 Quad-Core Xeon [1] | 16 GB [2] | -- |
cl1n001 - cl1n056 | Compute | 2 Quad-Core Xeon [1] | 16 GB [2] | 250 GB SATA |
cl1n057 - cl1n064 | Compute | 2 Quad-Core Xeon [1] | 32 GB [2] | 250 GB SATA |
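The totals above follow directly from the node counts in the table: each node has two quad-core processors, so a quick bit of shell arithmetic cross-checks the core and memory figures (the node counts are taken from the table, not measured on the machine):

```shell
# Each node: two quad-core processors = 8 cores.
cores_per_node=$((2 * 4))

# 64 compute nodes in total.
total_cores=$((64 * cores_per_node))
echo "total cores: $total_cores"              # 512

# Per-node memory from memory per core.
mem_small=$((2 * cores_per_node))             # 2 GB/core nodes (cl1n001 - cl1n056)
mem_large=$((4 * cores_per_node))             # 4 GB/core nodes (cl1n057 - cl1n064)
echo "node memory: ${mem_small} GB and ${mem_large} GB"

# Aggregate memory across the 56 + 8 compute nodes.
total_mem=$((56 * mem_small + 8 * mem_large))
echo "total compute-node memory: ${total_mem} GB"  # 1152
```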
Software
Below is a list of some of the software that we have installed on beach. If there is a particular software package that is not listed below that you would like to use, please feel free to send an email to us outlining what it is you need.
Compilers
Name | Version | Module Name | Location |
---|---|---|---|
gcc | 4.1 | gcc/4.1 | /usr |
gcc | 4.3 | gcc/4.3 | /usr/local/gcc |
gfortran | 4.1 | gcc/4.1 | /usr |
gfortran | 4.3 | gcc/4.3 | /usr/local/gcc |
icc | 11.0 | intel | /usr/local/intel |
ifort | 11.0 | intel | /usr/local/intel |
mpich2 | 1.1 | mpich2/1.1 | /usr/local/mpich |
mvapich2 | 1.2 | mvapich2/1.2 | /usr/local/mvapich |
openmpi | 1.3 | openmpi/1.3 | /usr/local/openmpi |
Languages
Name | Version | Module Name | Location |
---|---|---|---|
Python[1] | 2.4 | python/2.4 | /usr |
Python[2] | 2.6 | python/2.6 | /usr/local/python |
Java | 1.5 | -- | -- |
Java | 1.6 | -- | -- |
perl | 5.8.8 | -- | /usr |
MATLAB | 2008b | -- | /usr/local/matlab |
Libraries
Name | Version | Module Name | Location |
---|---|---|---|
Udunits | 1.12.9 | udunits | /usr/local/udunits |
netcdf | 4.0.1 | netcdf | /usr/local/netcdf |
hdf5 | 1.8 | hdf5 | /usr/local/hdf5 |
libxml2 | 2.7.3 | libxml2 | /data/progs/lib/libxml2 |
glib-2.0 | 2.18.3 | -- | /usr/local/glib |
petsc | 3.0.0p3 | -- | /usr/local/petsc |
Tools
Name | Version | Module Name | Location |
---|---|---|---|
cmake | 2.6p2 | cmake | /usr/local/cmake |
scons | 1.2.0 | scons | /usr/local/scons |
subversion | 1.6.2 | subversion | /usr/local/subversion |
torque | 2.3.5 | torque | /opt/torque |
Environment modules | 3.2.6 | -- | /usr/local/modules |
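Jobs on beach are submitted through the Torque batch system listed above, and the compilers and libraries in the tables are selected with Environment Modules by the names in the "Module Name" column. The sketch below is a minimal Torque submission script; the job name, node/walltime requests, and the `my_model` executable are illustrative placeholders, not defaults on beach.

```shell
#!/bin/sh
# Minimal Torque submission script (a sketch; adjust resources to your job).
# Submit with: qsub run.sh
#PBS -N my_model           # job name (placeholder)
#PBS -l nodes=2:ppn=8      # two 8-core nodes, i.e. 16 cores
#PBS -l walltime=01:00:00  # one-hour wall-clock limit
#PBS -j oe                 # merge stdout and stderr into one file

cd $PBS_O_WORKDIR          # start where qsub was invoked

# Select a compiler and MPI stack by module name from the tables above.
module load gcc/4.3 openmpi/1.3

# Launch one MPI rank per requested core.
mpirun -np 16 ./my_model
```

Use `module avail` to list everything installed, and `module list` to see what is currently loaded.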