Before you continue, let's switch from your personal Condor to the real installation, which will let you use the entire Condor pool we have set up here. First, shut down your personal pool by telling the condor_master to quit; it will shut down the rest of the daemons for you. Note: edit the kill command below to use your own condor_master's PID!
% ps -x
  PID TTY      STAT   TIME COMMAND
26222 pts/2    S      0:00 bash
26527 ?        S      0:18 condor_master
26528 ?        S      0:00 condor_collector -f
26529 ?        S      0:00 condor_negotiator -f
26530 ?        S      0:07 condor_startd -f
26531 ?        S      0:00 condor_schedd -f
28129 pts/2    R      0:00 ps -x
% kill 26527
% ps -x
  PID TTY      STAT   TIME COMMAND
26222 pts/2    S      0:00 bash
28129 pts/2    R      0:00 ps -x
Now switch to the other Condor:
% export CONDOR_CONFIG=/opt/condor-6.6.10/etc/condor_config
% export PATH=/opt/condor-6.6.10/bin:${PATH}
% export PATH=/opt/condor-6.6.10/sbin:${PATH}
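Note that each export prepends to PATH, so the directory exported last is searched first (here, sbin before bin). If you want to see that mechanism in isolation, here is a generic sketch with nothing Condor-specific in it (/tmp/demo_bin and the hello script are made up for the demonstration):

```shell
# Create a scratch directory containing a tiny script, then prepend it to PATH.
mkdir -p /tmp/demo_bin
printf '#!/bin/sh\necho from-demo\n' > /tmp/demo_bin/hello
chmod +x /tmp/demo_bin/hello
PATH=/tmp/demo_bin:${PATH}
hello   # the shell searches /tmp/demo_bin first and runs our script
# prints: from-demo
```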
Make sure that Condor is running on this computer:
% ps -auwx | grep condor

What is different about this Condor setup? Why is it different?
Run condor_q and condor_status. Does everything look good?
Now we are ready to move forward!
Your first job ran in the vanilla universe: it was a plain job with no special support. Condor also supports standard universe jobs. If you have the source code for your program and it meets certain requirements, you can re-link it against Condor's libraries and Condor will provide two major features for you: transparent checkpointing of your job and remote system calls. Here is the program again; save it as simple.c:
#include <stdio.h>
#include <stdlib.h>   /* atoi */
#include <unistd.h>   /* sleep */

int main(int argc, char **argv)
{
    int sleep_time;
    int input;
    int failure;

    if (argc != 3) {
        printf("Usage: simple <sleep-time> <integer>\n");
        failure = 1;
    } else {
        sleep_time = atoi(argv[1]);
        input = atoi(argv[2]);
        printf("Thinking really hard for %d seconds...\n", sleep_time);
        sleep(sleep_time);
        printf("We calculated: %d\n", input * 2);
        failure = 0;
    }
    return failure;
}
Now compile the program using condor_compile. This doesn't change how the program is compiled, just how it is linked. Take note that the executable gets a different name, simple.std. On these computers, condor_compile seems to be particularly slow; I suspect it's because your home directory is on NFS, since I haven't seen it be this slow before.
% condor_compile gcc -o simple.std simple.c
LINKING FOR CONDOR : /usr/bin/ld -L/unsup/condor/lib -Bstatic --eh-frame-hdr -m elf_i386 -dynamic-linker /lib/ld-linux.so.2 -o simple.std /unsup/condor/lib/condor_rt0.o /usr/lib/crti.o /afs/cs.wisc.edu/s/gcc-3.4.1/i386_rh9/bin/../lib/gcc/i686-pc-linux-gnu/3.4.1/crtbeginT.o -L/unsup/condor/lib -L/afs/cs.wisc.edu/s/gcc-3.4.1/i386_rh9/bin/../lib/gcc/i686-pc-linux-gnu/3.4.1 -L/afs/cs.wisc.edu/s/gcc-3.4.1/i386_rh9/bin/../lib/gcc -L/s/gcc-3.4.1/i386_rh9/lib/gcc/i686-pc-linux-gnu/3.4.1 -L/afs/cs.wisc.edu/s/gcc-3.4.1/i386_rh9/bin/../lib/gcc/i686-pc-linux-gnu/3.4.1/../../.. -L/s/gcc-3.4.1/i386_rh9/lib/gcc/i686-pc-linux-gnu/3.4.1/../../../tmp/cc6zawpv.o /unsup/condor/lib/libcondorsyscall.a /unsup/condor/lib/libz.a /unsup/condor/lib/libcomp_libstdc++.a /unsup/condor/lib/libcomp_libgcc.a /unsup/condor/lib/libcomp_libgcc_eh.a /unsup/condor/lib/libcomp_libgcc_eh.a -lc -lnss_files -lnss_dns -lresolv -lc -lnss_files -lnss_dns -lresolv -lc /unsup/condor/lib/libcomp_libgcc.a /unsup/condor/lib/libcomp_libgcc_eh.a /unsup/condor/lib/libcomp_libgcc_eh.a /afs/cs.wisc.edu/s/gcc-3.4.1/i386_rh9/bin/../lib/gcc/i686-pc-linux-gnu/3.4.1/crtend.o /usr/lib/crtn.o
/unsup/condor/lib/libcondorsyscall.a(condor_file_agent.o)(.text+0x250): In function `CondorFileAgent::open(char const*, int, int)':
/home/condor/execute/dir_12578/co
% ls -lh simple.std
-rwxr-x--- 1 temp-01 temp-01 12M Mar 15 16:32 simple.std*
There are a lot of warnings there, but you can safely ignore them. You can also see just how many libraries we link the program against. It's a lot! And yes, the executable is much bigger now. Partly that's the price of having checkpointing, and partly it's because the program is now statically linked, but you can make it somewhat smaller by getting rid of the debugging symbols:
% strip simple.std
% ls -lh simple.std
-rwxr-x--- 1 temp-01 temp-01 1.6M Mar 15 16:40 simple.std*
Submitting a standard universe job is almost the same as submitting a vanilla universe job: just change the universe to standard. I suggest making the job run for a longer time, so we can experiment with checkpointing while it runs. Also, get rid of the multiple queue commands that we had. Here is the complete submit file; I suggest naming it submit.std.
Universe   = standard
Executable = simple.std
Arguments  = 120 10
Log        = simple.log
Output     = simple.out
Error      = simple.error
Queue
Then submit it as you did before, with condor_submit:
% rm simple.log
% condor_submit submit.std
Submitting job(s).
Logging submit event(s).
1 job(s) submitted to cluster 3.
% condor_q

-- Submitter: ws-03.gs.unina.it : <192.167.2.23:34353> : ws-03.gs.unina.it
 ID      OWNER          SUBMITTED     RUN_TIME ST PRI SIZE CMD
  3.0   temp-01        3/15 16:41   0+00:00:00 I  0   1.3  simple.std 120 10

1 jobs; 1 idle, 0 running, 0 held
% condor_q

-- Submitter: ws-03.gs.unina.it : <192.167.2.23:34353> : ws-03.gs.unina.it
 ID      OWNER          SUBMITTED     RUN_TIME ST PRI SIZE CMD
  3.0   temp-01        3/15 16:41   0+00:00:01 R  0   1.3  simple.std 120 10

1 jobs; 0 idle, 1 running, 0 held

Two minutes pass...

% condor_q

-- Submitter: ws-03.gs.unina.it : <192.167.2.23:34353> : ws-03.gs.unina.it
 ID      OWNER          SUBMITTED     RUN_TIME ST PRI SIZE CMD

0 jobs; 0 idle, 0 running, 0 held
% cat simple.log
000 (003.000.000) 03/15 16:41:24 Job submitted from host: <192.167.2.23:34353>
...
001 (003.000.000) 03/15 16:41:26 Job executing on host: <192.167.2.24:45736>
...
005 (003.000.000) 03/15 16:43:26 Job terminated.
        (1) Normal termination (return value 0)
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
        1170     -  Run Bytes Sent By Job
        1326572  -  Run Bytes Received By Job
        1170     -  Total Bytes Sent By Job
        1326572  -  Total Bytes Received By Job
Notice that the log file has a bit more information this time: because the job ran in the standard universe, we can see how much data was transferred to and from it. The remote usage was not very interesting because this job just slept, but a real job would have some interesting numbers there.
At this point in the tutorial, I will demonstrate how you can force your job to be checkpointed and what that looks like. We will use a command called condor_checkpoint that you normally never need to use; we use it here purely for demonstration. One reason it normally isn't used is that it checkpoints all jobs running on a computer, not just the job you want to checkpoint. Be warned.
Begin by submitting your job, and figuring out where it is running:
% rm simple.log
% condor_submit submit.std
Submitting job(s).
Logging submit event(s).
1 job(s) submitted to cluster 5.
% condor_q

-- Submitter: ws-03.gs.unina.it : <192.167.2.23:34353> : ws-03.gs.unina.it
 ID      OWNER          SUBMITTED     RUN_TIME ST PRI SIZE CMD
  5.0   temp-01        3/15 17:01   0+00:00:03 R  0   1.3  simple.std 120 10

1 jobs; 0 idle, 1 running, 0 held
% cat simple.log
000 (005.000.000) 03/15 17:02:00 Job submitted from host: <192.167.2.23:34353>
...
001 (005.000.000) 03/15 17:02:02 Job executing on host: <192.167.2.30:46393>
...
% host 192.167.2.30
30.2.167.192.in-addr.arpa domain name pointer ws-10.gs.unina.it
By looking at the IP address of the job and converting that to a name, we know where the job is running. Move along quickly now, because the job will only run for two minutes. Now let's tell Condor to checkpoint and see what happens. Update this to be appropriate for your job!
% condor_checkpoint ws-10.gs.unina.it
% cat simple.log
000 (005.000.000) 03/15 17:02:00 Job submitted from host: <192.167.2.23:34353>
...
001 (005.000.000) 03/15 17:02:02 Job executing on host: <192.167.2.30:46393>
...
006 (005.000.000) 03/15 17:02:27 Image size of job updated: 1972
...
003 (005.000.000) 03/15 17:02:28 Job was checkpointed.
        Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
        Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
...
005 (005.000.000) 03/15 17:02:28 Job terminated.
        (1) Normal termination (return value 0)
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
        694993   -  Run Bytes Sent By Job
        1326902  -  Run Bytes Received By Job
        694993   -  Total Bytes Sent By Job
        1326902  -  Total Bytes Received By Job
...
Voila! We checkpointed our job correctly. (Note: if you don't see the checkpoint event in the log, you may be experiencing the same bug that your exercise leader is experiencing. That event is normally listed, and he's not sure why it is missing in some cases. Checkpointing definitely happened: ask him to explain if you want to know how you can tell.)
Normally, you never need to use condor_checkpoint; we used it here only as a demonstration. Condor checkpoints your jobs periodically (by default, every three hours) and whenever your job is forced to leave a computer to give time to another user.
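The three-hour interval comes from the pool's configuration rather than from your job. As a sketch, the relevant knob is, to the best of my knowledge, the PERIODIC_CHECKPOINT startd expression; the names below (LastCkpt, HOUR) follow the stock condor_config, so verify them against your pool before relying on them:

```
## Typical default in the pool-wide condor_config (a sketch, not verified
## on this pool): checkpoint a job once three hours have passed since its
## last checkpoint. You can inspect the live value with:
##   condor_config_val PERIODIC_CHECKPOINT
PERIODIC_CHECKPOINT = $(LastCkpt) > (3 * $(HOUR))
```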