Help:CSDMS HPCC

= The CSDMS High Performance Computing Cluster (Code name: beach) =
__TOC__


The CSDMS High Performance Computing Cluster (HPCC) provides CSDMS researchers a state-of-the-art HPC cluster.


Use of the CSDMS HPCC is available free of charge to the CSDMS community!  To get an account on our machine you will need to [[Join Workinggroup | become a member ]] of the CSDMS project and [[Help:HPCC_account_request | sign up for an account]]. That's it!


== Hardware ==
[[File:sgi_logo_hires.jpg | right | 250px ]]

The CSDMS High Performance Computing Cluster is an SGI Altix XE 1300 that consists of 64 Altix XE 320 compute nodes, for a total of 512 cores. Each compute node is configured with two quad-core 3.0GHz Xeon E5472 (Harpertown) processors. 54 of the 64 nodes have 2 GB of memory per core, while the remaining nodes have 4 GB of memory per core. Internode communication is accomplished through either gigabit ethernet or a non-blocking InfiniBand fabric. Alongside the compute nodes, the system includes a head node and a web server (4 x 2.33GHz E5420 cores, 8 GB RAM).

The CSDMS HPCC (≈ 6 Tflops) is configured with two HPC approaches:
# massive shared memory among fewer processors, and
# the more typical parallel configuration.
All nodes run Red Hat Linux with Fortran, C and C++ compilers. This system offers CSDMS researchers state-of-the-art HPC, once their code has been scaled up to take advantage of the capabilities of these systems.

Each node has 250 GB of local temporary storage.  However, all nodes are able to access 36 TB of RAID storage through NFS.


The CSDMS system will be tied in to the larger 7000-core (>100 Tflop) '''Front Range Computing Consortium'''.  This supercomputer will consist of 10 Sun Blade 6048 Modular System racks: nine deployed to form a tightly integrated computational plant, and the remaining rack serving as a GPU-based accelerated computing system.  In addition, the Grid environment will provide access to NCAR’s mass storage system.


== Software ==
[[Image:HPCC.png | 250px | right | The CSDMS HPCC]]

Below is a list of some of the software that we have installed on beach. If a particular software package that you would like to use is not listed below, please feel free to send an email to [mailto:CSDMSsupport@colorado.edu us] outlining what it is you need.

Compilers:
* [http://gcc.gnu.org/ gcc] 4.1 and 4.3
* [http://gcc.gnu.org/wiki/GFortran gfortran] 4.1 and 4.3
* icc 11.0
* ifort 11.0
* [http://www.mcs.anl.gov/research/projects/mpich2/ mpich2] 1.1
* [http://mvapich.cse.ohio-state.edu/ mvapich] 1.2
* [http://www.open-mpi.org/ openmpi] 1.3
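
Since several MPI implementations are installed (mpich2, mvapich, openmpi), a quick way to check that your compiler and MPI environment work together is to build and run a tiny test program. The following is a minimal sketch, not an officially supported recipe: the file name is hypothetical, and <code>mpicc</code>/<code>mpirun</code> refer to whichever MPI implementation is currently on your PATH (for example via the Environment modules listed under Tools below).
<syntaxhighlight lang=bash>
# Write a minimal MPI "hello world" (hello_mpi.c is a hypothetical example file)
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

# Compile with the MPI wrapper and run on 4 processes; mpicc/mpirun come from
# whichever MPI stack (openmpi, mpich2, mvapich) is active in your environment
mpicc hello_mpi.c -o hello_mpi
mpirun -np 4 ./hello_mpi
</syntaxhighlight>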


Languages:
* Python 2.4
** [http://numpy.scipy.org/ numpy] 1.2.1
** [http://www.scipy.org/ scipy] 0.6.0
** [http://www.pythonware.com/products/pil Python Imaging Library (PIL)]
* Python 2.6
** [http://numpy.scipy.org/ numpy] 1.3.0
** [http://www.scipy.org/ scipy] 0.7.1rc3
* Java 1.5 and 1.6
* perl 5.8.8
* [http://www.mathworks.com/ MATLAB] 2008b
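
Two Python installations are listed above, each with its own numpy/scipy build. If your script depends on a particular version, a quick check such as the one below shows what a given interpreter picks up (the interpreter names here are assumptions based on the versions listed; adjust them to what is actually on your PATH):
<syntaxhighlight lang=bash>
# Print the numpy and scipy versions seen by each Python interpreter
python2.4 -c "import numpy, scipy; print numpy.__version__, scipy.__version__"
python2.6 -c "import numpy, scipy; print numpy.__version__, scipy.__version__"
</syntaxhighlight>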


Libraries:
* [http://www.unidata.ucar.edu/software/udunits Udunits] 1.12.9
* [http://www.unidata.ucar.edu/software/netcdf netcdf] 4.0.1
* [http://www.hdfgroup.org/HDF5 hdf5] 1.8
* [http://xmlsoft.org/index.html libxml2] 2.7.3
* [http://www.gtk.org/ glib-2.0] 2.18.3
* petsc 3.0.0p3
* [http://www.openmotif.org openmotif] 2.3.2
* [http://www.astro.caltech.edu/~tjp/pgplot Pgplot]

Tools:
* [http://www.cmake.org/ cmake] 2.6p2
* [http://www.scons.org/ scons] 1.2.0
* [http://subversion.tigris.org/ subversion] 1.6.2
* [http://www.clusterresources.com/torquedocs21/ torque] 2.3.5
* [http://modules.sourceforge.net/ Environment modules] 3.2.6
* [http://eucalyptus.cs.ucsb.edu/ Eucalyptus]
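
On clusters that provide Environment modules and torque (both listed under Tools above), software is typically made available with <code>module</code> commands and compute work is submitted to the batch scheduler rather than run on the login node. The script below is a minimal, hypothetical sketch: the module name, node/core counts and walltime are assumptions, so check <code>module avail</code> and contact [mailto:CSDMSsupport@colorado.edu CSDMS support] for the queue policies that actually apply on beach.
<syntaxhighlight lang=bash>
# See which modules (compilers, MPI stacks, libraries) are available
module avail

# Example torque/PBS job script (module name and resource requests are
# assumptions; adjust them to your own code and to the policies on beach)
cat > myjob.pbs <<'EOF'
#!/bin/bash
#PBS -N hello_mpi
#PBS -l nodes=2:ppn=8
#PBS -l walltime=00:10:00
#PBS -j oe

cd $PBS_O_WORKDIR        # run from the directory the job was submitted in
module load openmpi      # hypothetical module name; see `module avail`
mpirun -np 16 ./hello_mpi
EOF

# Submit the job and check its status
qsub myjob.pbs
qstat -u $USER
</syntaxhighlight>
With these options, torque typically writes the job output to a file named after the job (e.g. <code>hello_mpi.o&lt;jobid&gt;</code>) in the submission directory.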

== Request an HPCC account ==
The HPCC is for members who participate in one of the three ways outlined below.

# They have submitted code to the CSDMS Repository, either to run their models in the service of advancing science, or to advance developing modeling efforts that will ultimately become part of the Repository.  Provide the community beforehand with [[Models questionnaire|metadata of the model]] you want to run on the HPCC.
# They wish to apply compliant CSDMS models developed by others within the CSDMS framework, to help them advance their science.
# They wish to experiment with new data systems in support of CSDMS models, or to develop visualizations of model runs.

Once you meet the above requirements you can request a [[HPCC account request | '''CSDMS HPCC account''']].

Your HPCC guest account will be valid for ''one year''. You will receive an email as soon as your account expires. Your data (model, source code, simulations, etc.) will be removed from the HPCC if you don't extend your account (by email to [mailto:csdms@colorado.edu CSDMS@colorado.edu]). Unfortunately, we have to charge a fee if data needs to be recovered after an account expires.

== HPCC Access ==
Once you have an account you can access the CSDMS HPCC with any secure-shell (SSH) application (primarily ssh, scp, sftp) from workstations located in the CU Internet domain (*.colorado.edu) or from workstations connected to the colorado.edu domain through a virtual private network (VPN) connection. A VPN account will automatically be created for users outside the colorado.edu domain.

You will need the following software to establish a connection to the colorado.edu domain and to log in to the HPCC:

# VPN: [http://www.colorado.edu/its/vpn/clients.html Download VPN software] if you do not already have it installed on your machine. Choose your operating platform and simply follow the installation procedure. You need your ''IdentiKey username'' and ''password'' (both provided to you when you applied for an HPCC account).
# SSH: [http://www.colorado.edu/its/MSG/filedist/ssh.html Download SSH] if you do not have it installed yet.

Then:
# Start a VPN connection to the University of Colorado (CU).
# Open an SSH window and type at the command prompt: <syntaxhighlight lang="bash" lines="0">
> ssh <username>@beach.colorado.edu </syntaxhighlight> (where <username> is your own username)
# Provide your password when asked and you're connected to the HPCC!
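
The same VPN and SSH setup also covers file transfer with scp or sftp. A minimal sketch (the file names and paths below are placeholders):
<syntaxhighlight lang=bash>
# Copy a local file to your home directory on beach
scp mymodel.tar.gz <username>@beach.colorado.edu:~/

# Copy results from beach back to the current directory on your workstation
scp <username>@beach.colorado.edu:~/output/run1.nc .
</syntaxhighlight>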

== SSH Tunneling X Windows ==
Displaying the graphical desktop of the HPCC master-control node on your personal workstation is possible through SSH tunneling of X Windows. This may require prior installation and configuration of software on your workstation. See the information below on how to operate the graphical desktop for [[#SSH Tunneling X Windows for Mac OSX | Mac]] and for [[#SSH Tunneling X Windows for Windows | Windows]] operating systems.

=== SSH Tunneling X Windows for Mac OSX ===
You will need X11 to tunnel X Windows on a Mac. Fortunately, Mac OSX comes with X11; if you're using an older version of OSX, [http://www.apple.com/downloads/macosx/apple/macosx_updates/x11formacosx.html download X11 from the Apple site].<br>
Open X11, select '''Applications''' and then '''Terminal'''.
In the terminal type:
<syntaxhighlight lang=bash>
> ssh -Y beach.colorado.edu -l <username>
</syntaxhighlight>
Enter your password and that's it. You can now test the tunneling, for example by typing ''matlab''.
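
If the tunnel is working, any X program started on beach opens a window on your Mac. A quick sanity check before launching something heavier (assuming xclock is installed on beach; matlab is listed under Software above):
<syntaxhighlight lang=bash>
# Run these on beach, inside the "ssh -Y" session; each should open a
# window on your local display if X forwarding is working
xclock &
matlab &
</syntaxhighlight>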

=== SSH Tunneling X Windows for Windows ===
Install [http://www.straightrunning.com/XmingNotes/ Xming] on your Windows machine.
''Needs more info''
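
Until this section is filled in, here is a rough, unofficial sketch. A common combination (an assumption, not necessarily the supported CSDMS setup) is Xming plus either PuTTY or a command-line OpenSSH client (e.g. from Cygwin): start Xming, enable X11 forwarding in your SSH client (in PuTTY: Connection → SSH → X11, with the X display location set to localhost:0), and then log in to beach.colorado.edu as usual. From a command-line client the equivalent is:
<syntaxhighlight lang=bash>
# With Xming running locally, point the local DISPLAY at it and forward X11
export DISPLAY=localhost:0
ssh -Y <username>@beach.colorado.edu
</syntaxhighlight>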