Wooki Cluster Quick Start

Wooki is a high-performance, high-throughput computing facility for the research group of Tom Woo. It has over a thousand CPU cores, housed in several racks in the uOttawa Research Data Centre. It is shared with several groups in the department and is also used for teaching a number of classes.

The cluster runs the Rocks distribution of CentOS 6 Linux. Several software packages are managed centrally, including up-to-date compilers for developing your own software. Jobs are managed by the Grid Engine software, although we provide a number of convenience scripts for submitting jobs to the queue system. Please familiarise yourself with day-to-day operations in Linux to make the most of the cluster. It is assumed here that you know how to run commands and navigate a filesystem.

Logging in

You will need an account to access the facilities. Contact one of the system administrators to set up an account for you. Your username will be your first initial followed by your surname, all in lowercase, e.g. Joe Person -> jperson. If you forget your password you must contact an administrator to reset it for you.

Access to the frontend is via an SSH client. On Windows systems, we suggest that you download the freely available client PuTTY. If you are using a Unix-like machine, you can ssh from the command prompt. To access the cluster, ssh to wooki.chem.uolocal or directly to its IP address. This is only available on campus or through the VPN; otherwise see OffCampusAccess.
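
From a Unix-like machine, a typical connection looks like the sketch below. The username jperson is only an example; substitute your own. The connect command is shown as a comment, preceded by a quick check that an SSH client is installed:

```shell
# Check that an SSH client is available on your machine:
command -v ssh

# Then connect to the frontend (on campus or over the VPN):
#   ssh jperson@wooki.chem.uolocal
```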

After connecting to the machine, you will be prompted for your credentials. Once you have logged in, you should change your initial password and make sure that it is known only to you. To change your password, type

  • $ passwd

You will be prompted for your current password and a new password.

Wooki is set up to allow X forwarding, and also provides complete graphical login desktops with the NX protocol (see OpenNX for a client).

Data Storage

Wooki has two distinct data storage areas: home and scratch.

  • /share/scratch/username is the user’s working space. This is fast network storage with no limits on use. You are expected to run the majority of your work from here; it is the most efficient place to work and keeps the cluster and frontend responsive. You will find a symbolic link scratch in your home directory, or you can access it directly. In the future, old files may be deleted, but an email warning will be sent to alert the user beforehand. This space is not backed up.

  • /home/username should be used for permanent storage, less intensive processing of files, and transfers to and from the cluster. This space is rigorously backed up but has quotas on storage space and number of files. Once your calculations are complete, you should move your results to this space, taking care to exclude large intermediate and scratch files. You should use this space for data that you will process interactively on the frontend, as this reduces network traffic. If you have large files or many small files, you can help keep the cluster running smoothly by tarring and gzipping them into fewer compressed files.
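
For example, a finished job directory can be bundled into a single compressed archive before moving it to /home (the directory and file names here are illustrative):

```shell
# Make a small example results directory ("myjob" is illustrative)
mkdir -p myjob
echo "example output" > myjob/output.txt

# Bundle it into one gzipped tar archive
tar -czf myjob.tar.gz myjob/

# List the archive contents to verify it was created correctly
tar -tzf myjob.tar.gz
```

Unpack later with tar -xzf myjob.tar.gz.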

Additional locations that are useful to know are:

  • /state/partition1 is the local drive on each compute node. Use this if possible, as it is the fastest storage and reduces network traffic.

  • /share/scratch/shared is also provided as an area that can be used to share or transfer files between users. Very old files in here might be deleted without warning.

  • /share/apps/ is where all the centralised software is located.

File Transfer

Windows users can use programs like FileZilla, or WinSCP for graphical file transfer, see Wooki/FileTransfers for more information. On the command line you could also use scp or sftp.
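
Command-line transfers might look like the following sketch; the username, host path, and file name are illustrative, and the transfer commands are shown as comments after a check that the tools are installed:

```shell
# Check that the transfer tools are available locally:
command -v scp
command -v sftp

# Copy a local file to your scratch space on the cluster:
#   scp results.tar.gz jperson@wooki.chem.uolocal:/share/scratch/jperson/

# Or open an interactive transfer session:
#   sftp jperson@wooki.chem.uolocal
```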

Interacting with Wooki

The frontend is intended for managing jobs and files, not for heavy computing, since it manages the queue and many other services, including the interactive sessions of all the users. It is, however, a very powerful machine, and tasks that take less than 15 minutes of CPU time, or short tests, are fine to run interactively; anything longer than that is subject to being killed without notice.

Any longer or “heavier” job must be submitted to the compute hosts via the scheduler, which manages the available resources and assigns them to waiting jobs.

Please refer to the Wooki/JobSubmission page for information on how to use the queue and manage jobs, including example job scripts, and Wooki/CheatSheet for a quick reference of commands.

Supported packages and Submitting Jobs

Wooki has a number of supported software packages. The easiest way to see what is available is to run module avail; modules can be further queried for usage instructions using, for example, module help vasp or module help gaussian. If your desired package is not available, please raise a ticket in the support system.

Most packages will provide usage instructions, and will often have a submission script that will construct an SGE script and submit jobs to the queue.

See Submitting Jobs on Wooki for more detailed information.

Code development

The cluster has recent versions of Intel and GCC compilers for programming in Fortran, C and C++, including the Intel MKL. Shared memory (OpenMP) and message-passing (MPI) parallel programming are supported, but due to the highly heterogeneous nature of the cluster, please consult one of the administrators for information on the best way to run your code, or to set up an effective environment for you.

Other languages that are standard on Linux are also available. Python is very well supported, including high-performance libraries from the Anaconda distribution; others, such as Java and Perl, are the standard CentOS versions.

Reporting issues

If you notice any issues, please raise a ticket at http://titan.chem.uottawa.ca/otrs/customer.pl so that you can keep track of the responses and actions to your queries. If this does not work for you, you can also email any of the administrators listed at the bottom of the page.

Administrator Roster