Submitting a standard universe job

What is the standard universe?

Your first job was considered a vanilla universe job. This meant that it was a plain old job. Condor also supports standard universe jobs. If you have the source code for your program and it meets certain requirements, you can re-link your program and Condor will provide two major features for you: transparent checkpointing and remote system calls (remote I/O).

Your tutorial leader will give you more details about the standard universe, or you can read about it online.

Linking a program for standard universe

First, you need a job to run. We'll use the same job as before. In case you don't have it, here it is. Save it in simple.c:
#include <stdio.h>
#include <stdlib.h>   /* atoi */
#include <unistd.h>   /* sleep */

int main(int argc, char **argv)
{
    int sleep_time;
    int input;
    int failure;

    if (argc != 3) {
        printf("Usage: simple <sleep-time> <integer>\n");
        failure = 1;
    } else {
        sleep_time = atoi(argv[1]);
        input      = atoi(argv[2]);

        printf("Thinking really hard for %d seconds...\n", sleep_time);
        sleep(sleep_time);
        printf("We calculated: %d\n", input * 2);
        failure = 0;
    }
    return failure;
}

Now compile the program using condor_compile. This doesn't change how the program is compiled, just how it is linked. Note that the executable gets a different name, simple.std.

nova 1% condor_compile gcc -o simple.std simple.c
LINKING FOR CONDOR : /usr/bin/ld -L/usr/local/lib/condor/lib -Bstatic 
-m elf_i386 -dynamic-linker /lib/ld-linux.so.2 -o simple.std 
/usr/local/lib/condor/lib/condor_rt0.o /usr/lib/crti.o 
/usr/local/lib/gcc-lib/i686-pc-linux-gnu/2.95.3/crtbegin.o 
-L/usr/local/lib/condor/lib -L/usr/local/lib/gcc-lib/i686-pc-linux-gnu/2.95.3 
-L/usr/local/lib /tmp/ccDAx8Uq.o /usr/local/lib/condor/lib/libcondorzsyscall.a 
/usr/local/lib/condor/lib/libz.a -lgcc -lc -lnss_files -lnss_dns -lresolv
-lc -lnss_files -lnss_dns -lresolv -lc -lgcc 
/usr/local/lib/gcc-lib/i686-pc-linux-gnu/2.95.3/crtend.o /usr/lib/crtn.o 
/usr/local/lib/condor/lib/libcondorc++support.a
/usr/local/lib/condor/lib/libcondorzsyscall.a(condor_file_agent.o): 
    In function `CondorFileAgent::open(char const *, int, int)':
    /home/condor/execute/dir_16680/condor6.6.7/src/condor_ckpt/condor_file_agent.C:99: 
    the use of `tmpnam' is dangerous, better use `mkstemp'
gcc: file path prefix `/usr/local/lib/condor/lib/' never used

nova 2% ls -lh simple.std
-rwxr-xr-x    1 alainroy math       3.4M Dec 18 00:57 simple.std

There are a lot of warnings there--you can safely ignore them. You can also see just how many libraries the program is linked against. It's a lot! And yes, the executable is much bigger now. Partly that's the price of having checkpointing, and partly it's because the program is now statically linked. You can make it considerably smaller by getting rid of the debugging symbols:

nova 3% strip simple.std

nova 4% ls -lh simple.std
-rwxr-xr-x    1 alainroy math         886k Dec 19 01:26 simple.std

Submitting a standard universe program

Submitting a standard universe job is almost the same as submitting a vanilla universe job. Just change the universe to standard. Here is a sample submit file. I suggest making the job run for a longer time, so we can experiment with the checkpointing while it runs. Also, get rid of the multiple queue commands that we had. Here is the complete submit file:

Universe   = standard
Executable = simple.std
Arguments  = 120 10
Log        = simple.log
Output     = simple.out
Error      = simple.error
Queue

Then submit it as you did before, with condor_submit:

nova 5% rm simple.log

nova 6% condor_submit submit.std
Submitting job(s).
Logging submit event(s).
1 job(s) submitted to cluster 6080.

nova 7% condor_q -sub alainroy


-- Submitter: alainroy@cs.tau.ac.il : <132.67.192.133:43609> : nova.cs.tau.ac.il
 ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
6080.0   alainroy       12/19 02:26   0+00:00:13 R  0   0.9  simple.std 120 10 

1 jobs; 0 idle, 1 running, 0 held

...

nova 8% cat simple.log
000 (6080.000.000) 12/19 02:26:34 Job submitted from host: <132.67.192.133:43609>
...
001 (6080.000.000) 12/19 02:26:40 Job executing on host: <132.67.105.243:47512>
...
005 (6080.000.000) 12/19 02:28:40 Job terminated.
        (1) Normal termination (return value 0)
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
        848  -  Run Bytes Sent By Job
        908328  -  Run Bytes Received By Job
        848  -  Total Bytes Sent By Job
        908328  -  Total Bytes Received By Job
...

Notice that the log file has a bit more information this time: because the job is in the standard universe, we can see how much data was transferred to and from the job. The remote usage was not very interesting because the job just slept, but a real job would have some interesting numbers there.
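If you want to pull those transfer totals out of the user log programmatically, a couple of lines of awk will do it. This is just a sketch: the sample lines below are copied from the log shown above, and a real script would point awk at simple.log instead.

```shell
# Sketch: extract the byte-transfer totals from a user log with awk.
# The sample lines are copied from the log above; in practice you would
# run the awk command against simple.log itself.
cat > /tmp/sample.log <<'EOF'
        848  -  Total Bytes Sent By Job
        908328  -  Total Bytes Received By Job
EOF
awk '/Total Bytes Sent/     {print "sent:", $1}
     /Total Bytes Received/ {print "received:", $1}' /tmp/sample.log
# Output:
#   sent: 848
#   received: 908328
```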

Advanced tricks in the standard universe

At this point in the tutorial, I will demonstrate how you can force your job to be checkpointed and what that looks like. We will use a command called condor_checkpoint that you normally never need to use; we use it here purely for demonstration.

Begin by submitting your job, and figuring out where it is running:

nova 9% rm simple.log

nova 10% condor_submit submit.std
Submitting job(s).
Logging submit event(s).
1 job(s) submitted to cluster 6090.

nova 11% condor_q -sub alainroy

-- Submitter: alainroy@cs.tau.ac.il : <132.67.192.133:49346> : nova.cs.tau.ac.il
 ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD               
6090.0   alainroy       12/20 16:58   0+00:00:00 R  0   0.9  simple.std 120 10 

1 jobs; 0 idle, 1 running, 0 held

nova 12% cat simple.log
000 (6090.000.000) 12/20 16:58:13 Job submitted from host: <132.67.192.133:49346>
...
001 (6090.000.000) 12/20 16:58:22 Job executing on host: <132.67.105.236:35676>
...

nova 13% host 132.67.105.236
236.105.67.132.in-addr.arpa domain name pointer plab-155.cs.tau.ac.il.
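If you find yourself doing this often, the IP extraction can be scripted. Here is a minimal sketch using sed; the event line is hard-coded for illustration, whereas a real script would take it from simple.log (for example via grep 'executing on host' simple.log):

```shell
# Sketch: pull the execute machine's IP out of an "executing on host"
# event. The event line here is hard-coded; a real script would read it
# from simple.log.
event='001 (6090.000.000) 12/20 16:58:22 Job executing on host: <132.67.105.236:35676>'
ip=$(echo "$event" | sed 's/.*<\([0-9.]*\):.*/\1/')
echo "$ip"
# Output: 132.67.105.236
```

Feed the result to host (or nslookup) as shown above to get the machine name.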

By looking at the IP address in the log and converting it to a name, we know where the job is running. Move along quickly now, because the job will only run for two minutes. Now let's tell Condor to checkpoint the job and see what happens.

nova 14% condor_checkpoint plab-155

nova 15% cat simple.log
000 (6090.000.000) 12/20 16:58:13 Job submitted from host: <132.67.192.133:49346>
...
001 (6090.000.000) 12/20 16:58:22 Job executing on host: <132.67.105.236:35676>
...
006 (6090.000.000) 12/20 16:58:33 Image size of job updated: 1560
...
003 (6090.000.000) 12/20 16:58:34 Job was checkpointed.
        Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
...
005 (6090.000.000) 12/20 16:58:35 Job terminated.
        (1) Normal termination (return value 0)
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
        690471  -  Run Bytes Sent By Job
        908630  -  Run Bytes Received By Job
        690471  -  Total Bytes Sent By Job
        908630  -  Total Bytes Received By Job
...

Voila! We checkpointed our job correctly.

Advanced note: You might notice that the job finished right after it was checkpointed. Why? The job was checkpointed while executing sleep(), then essentially restarted from the checkpoint (though Condor doesn't consider this a restart, since the job never left the computer). Condor didn't keep track of how much time had elapsed in the sleep call, so the job finished right away. Don't worry--Condor handles other system calls just fine. It's simply not clear how checkpointing sleep() should behave: if your job is interrupted during the sleep and restarted some time later, how much longer should Condor make the job sleep? Do we rely on wall clock time? Run time?
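If your own code needs a delay that survives this kind of interruption, one common workaround (not part of the tutorial, just a sketch in C) is to re-check elapsed wall-clock time in a loop instead of trusting a single long sleep():

```c
/* Sketch: a restart-tolerant delay. Re-check elapsed wall-clock time in
 * a loop, so that after any interruption the process only sleeps for the
 * remaining portion of the interval, rather than finishing early or
 * sleeping the full amount again. */
#include <time.h>
#include <unistd.h>

void sleep_for(int seconds)
{
    time_t start = time(NULL);
    time_t now   = start;
    while (now - start < seconds) {
        sleep((unsigned)(seconds - (now - start)));
        now = time(NULL);
    }
}
```

This makes the "rely on wall clock time" policy explicit in your program, instead of leaving the decision to the checkpointing machinery.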

Normally, you never need to run condor_checkpoint; we used it here only as a demonstration. Condor checkpoints your jobs periodically (by default, every three hours) and whenever a job is forced off a computer to give time to another user.
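For the curious, the three-hour default comes from the pool's configuration, not from the submit file. It is governed by the PERIODIC_CHECKPOINT expression in the Condor configuration; a typical setting looks something like the following, though the exact expression can differ between Condor versions, so treat this as illustrative:

```
## Condor configuration sketch (administrator-side, not a submit file).
## PERIODIC_CHECKPOINT decides when a running standard-universe job is
## checkpointed; this example checkpoints every three hours.
PERIODIC_CHECKPOINT = LastCkpt > (3 * $(HOUR))
```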


Extra Credit: You can customize the behavior of the standard universe quite a bit. For instance, you can force some files to be accessed locally instead of via remote I/O. You can change the buffering of remote I/O to get better performance. You can disable checkpointing. You can kill a job that has been restarted from its checkpoint more than three times. How do you do these things? Hint: look at the condor_submit manual page.
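As a starting point for that exploration, here is a sketch of what some of those knobs look like in a submit file. buffer_size and buffer_block_size are condor_submit commands for tuning remote I/O buffering; the periodic_remove expression and the NumRestarts attribute name are assumptions on my part, so verify them against your version's manual before relying on them:

```
# Submit-file sketch: tuning the standard universe (illustrative values).
buffer_size       = 524288      # total remote I/O buffer, in bytes
buffer_block_size = 32768       # block size used within that buffer
# Assumed expression: remove the job once it has restarted more than
# three times -- check the attribute name for your Condor version.
periodic_remove   = (NumRestarts > 3)
```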

Next: Submitting a Java job