Part I: Configuring the Grid Middleware

The VDT (Virtual Data Toolkit) distributes a variety of grid middleware, making it easy to install Condor-G and DAGMan (as well as Globus, Chimera, and other Grid components). See the official Virtual Data Toolkit web site for details.

Accessing the VDT installation

All of the tools you'll be using are installed as part of the Virtual Data Toolkit (VDT v1.1.13). We won't go through the VDT installation here, but it is quite easy to do it yourself. The standard VDT installation is in /opt/vdt1.1.13. This is what the top-level directory of a VDT installation looks like:

$ ls /opt/vdt1.1.13
MonaLisa   condor  ftsh    licenses      pyglobus-url-copy  vdt
Pacman.db  doc     globus  netlogger     setup.csh          vdt-install.log
caches     edg     gpt     platform      setup.sh
classads   expat   jdk1.4  post-install  vds

Notice the condor and globus directories. This is where Condor (Condor-G) and Globus are installed, respectively. Condor-G is the part of Condor that handles submitting jobs to external Grid resources such as Globus. (These used to be separate distributions, but they are now part of the same Condor installation.)

Setting your environment

VDT provides an easy way to set the environment variables relevant to each piece of software: simply source (using . in bash) the setup.sh script (or setup.csh, depending on your shell). We'll assume the user has a bash shell for the rest of this tutorial.

$ source /opt/vdt1.1.13/setup.sh
Now let's check a few environment variables that have been set:
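The exact variables depend on the VDT release, so the following is only a sketch of the sort of thing setup.sh does. The variable names and paths here are assumptions based on the standard /opt/vdt1.1.13 layout, not the script's actual contents:

```shell
# Sketch of the kind of environment setup performed by setup.sh
# (assumed, not verbatim from the real script):
export VDT_LOCATION=/opt/vdt1.1.13
export GLOBUS_LOCATION=$VDT_LOCATION/globus
export PATH=$GLOBUS_LOCATION/bin:$PATH

# Afterwards you can inspect the result:
echo "$GLOBUS_LOCATION"
```

If echo $GLOBUS_LOCATION prints an empty line, the setup script was not sourced in your current shell.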

Starting up Condor

In this tutorial you will each start a copy of Condor-G. Condor has been configured to only run the SchedD and Master daemons (which is all that is needed for Condor-G). We need to configure Condor to use the configuration files that have been pre-placed in your home directory. These configuration files are identical to the original Condor configuration files that came with VDT, except they are modified to instruct Condor to use a subdirectory of your home directory as its working (local) directory (to prevent different invocations of Condor from conflicting with one another).
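The pre-placed local configuration file can't be reproduced here exactly, but its overrides are along these lines (an illustrative sketch with assumed values, not the actual file):

```
# Illustrative condor_config.local overrides (assumed, not the real file):

# Run only the Master and SchedD daemons -- all that Condor-G needs.
DAEMON_LIST = MASTER, SCHEDD

# Keep all of Condor's working files under the user's home directory,
# so different invocations of Condor don't conflict with one another.
LOCAL_DIR = /home/user??/condor_local
```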

$ cd ~
$ ls condor_local
condor_config  condor_config.local  execute  log  spool
$ export CONDOR_CONFIG=~/condor_local/condor_config
Let's use condor_config_val to verify that some Condor configuration values are set correctly. ("condor_config_val -config" shows which configuration files are actually in use.)
$ condor_config_val -config
Config file: /home/user??/condor_local/condor_config
Local config file: /home/user??/condor_local/condor_config.local

$ condor_config_val LOCAL_DIR
/home/user??/condor_local
$ condor_config_val SCHEDD_LOG
/home/user??/condor_local/log/SchedLog

Now we're ready to start Condor. We do this by running condor_master, a daemon that, among other things, is responsible for starting the other daemons (in this case only the SchedD).
$ condor_master
Verify that the Master and SchedD are running:
$ ps -x
17608 pts/1 S 0:00 -bash
18349 ? S 0:00 condor_master
18350 ? S 0:00 condor_schedd -f
18359 pts/1 R 0:00 ps -x
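
If you want to script this check, something along these lines works (the helper function is our own sketch, not a Condor tool):

```shell
# Hypothetical helper: true if a process with the given name is running.
daemon_running() {
  pgrep -x "$1" >/dev/null
}

if daemon_running condor_master && daemon_running condor_schedd; then
  echo "Condor-G daemons are up"
else
  echo "Condor-G daemons are not running"
fi
```
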
Let's run condor_q, a tool that displays all the jobs in the Condor queue. The queue should be empty at the moment, since we haven't submitted any jobs yet.
$ condor_q
-- Submitter: : <> :
0 jobs; 0 idle, 0 running, 0 held

Initializing the GSI Proxy

Each of you will need to create a GSI proxy for this tutorial. A GSI proxy is the credential that you pass to remote sites to verify your identity. Every remote job submission (whether through Condor-G or the Globus tools) requires the presence of a valid proxy.

grid-proxy-init uses your public and private keys, stored in ~/.globus/usercert.pem and ~/.globus/userkey.pem respectively. These are the credentials required to generate a proxy. Generally you would obtain them from one of the well-known Grid CAs (certificate authorities). (The default proxy lifetime is 12 hours. For a long-lived task, you might create proxies with lifespans of 24 hours, several days, or even several months.)
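Before running grid-proxy-init, you can check that both halves of the credential pair are in place. A small sketch (the helper function name is ours, not a Globus tool):

```shell
# Hypothetical helper: verify that a Globus credential pair exists
# in the given directory (defaults to ~/.globus).
have_credentials() {
  local dir=${1:-$HOME/.globus}
  [ -f "$dir/usercert.pem" ] && [ -f "$dir/userkey.pem" ]
}

if have_credentials; then
  echo "credentials found"
else
  echo "missing usercert.pem or userkey.pem"
fi
```
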

The password to create the proxy is Condor.

By default, a proxy is generated into /tmp/x509up_u???? (where ???? is the user id). This is a standard location, based on Globus GSI standards. This is where Condor-G will expect to find it every time you submit a job (in the upcoming examples).
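Since the file name just encodes your numeric user id, you can compute where Condor-G will look for the proxy:

```shell
# Default GSI proxy location: /tmp/x509up_u<uid>, per the Globus GSI convention.
proxy=/tmp/x509up_u$(id -u)
echo "$proxy"
```

Running ls -l "$proxy" afterwards shows whether a proxy currently exists.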

Create a proxy with the Globus tool grid-proxy-init (you can verify that it is located in your Globus distribution under /opt/vdt1.1.13/globus).

$ which grid-proxy-init
/opt/vdt1.1.13/globus/bin/grid-proxy-init

$ grid-proxy-init -hours 4 -verify
Your identity: /C=US/ST=Wisconsin/L=Madison/O=University of Wisconsin -- Madison/O=Computer Sciences Department/OU=Condor Project/CN=Test Condor-G User 11/
Enter GRID pass phrase for this identity: Condor
Creating proxy ........................................... Done
Proxy Verify OK
Your proxy is valid until Thu Jul 10 16:06:13 2003
At any time you can check the status of your proxy (including the time left on it) using grid-proxy-info:
$ grid-proxy-info -all
subject : /C=US/ST=Wisconsin/L=Madison/O=University of Wisconsin -- Madison/O=Computer Sciences Department/OU=Condor Project/CN=Test Condor-G User ??/
issuer : /C=US/ST=Wisconsin/L=Madison/O=University of Wisconsin -- Madison/O=Computer Sciences Department/OU=Condor Project/CN=Test Condor-G User ??/
identity : /C=US/ST=Wisconsin/L=Madison/O=University of Wisconsin -- Madison/O=Computer Sciences Department/OU=Condor Project/CN=Test Condor-G User ??/
type : full
strength : 512 bits
timeleft : 3:59:57
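In scripts that submit many jobs, you may want to refuse to run when the proxy is nearly expired. A hedged sketch using grid-proxy-info's -timeleft option, which prints the remaining lifetime in seconds:

```shell
# Guard for submission scripts: abort when the proxy is nearly expired.
# We fall back to 0 when grid-proxy-info is unavailable or no proxy exists.
left=$(grid-proxy-info -timeleft 2>/dev/null || echo 0)
if [ "${left:-0}" -lt 3600 ]; then
  echo "proxy missing or about to expire; run grid-proxy-init first"
else
  echo "proxy ok for another $left seconds"
fi
```

The one-hour threshold is arbitrary; pick something comfortably longer than your longest expected job.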

Submitting Your First Job to Globus

Let's submit a simple job to Globus to make sure everything is working. We'll use a Globus tool called globus-job-run. It takes the name of a gatekeeper and an executable as arguments.

It runs the executable remotely and returns the output to the user's terminal. In this tutorial we will be using a Globus gatekeeper running on a server called my-gatekeeper at the University of Wisconsin (to be honest, my-gatekeeper is actually the same host you'll be submitting from, but it usually isn't). In general you have to know the gatekeeper host and port in advance. Furthermore, the gatekeeper must be configured (e.g., by the remote site's system administrator) to accept jobs from you, based on your GSI credentials.
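The general form is globus-job-run <gatekeeper> <executable> [args...]. As a purely illustrative stand-in (not part of Globus), a dry-run function makes the argument structure explicit:

```shell
# Hypothetical dry-run stand-in for globus-job-run (illustration only):
# first argument is the gatekeeper contact, the rest is the remote command.
globus_job_run_dryrun() {
  local contact=$1; shift
  echo "submit to $contact: $*"
}

globus_job_run_dryrun my-gatekeeper /bin/date
```
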

In this example we will run /bin/date (a simple program that displays the date and time) on the remote side.

$ globus-job-run my-gatekeeper /bin/date
Wed Jul 9 17:57:49 CDT 2003

Our basic Globus setup appears to be functioning. We are ready for some real examples!