This is an outdated version of the HTCondor Manual. You can find current documentation at http://htcondor.org/manual.


3.2 Installation

This section contains the instructions for installing Condor. The installation will have a default configuration that can be customized. Sections of the manual that follow this one explain customization.

Read this entire section before starting installation.

Please read the copyright and disclaimer information in section [*] on page [*] of the manual. Installation and use of Condor is acknowledgment that you have read and agree to the terms.


3.2.1 Obtaining Condor

The first step to installing Condor is to download it from the Condor web site, http://www.cs.wisc.edu/condor. The downloads are available from the downloads page, at http://www.cs.wisc.edu/condor/downloads/.

The platform-dependent Condor files are currently available from two sites. The main site is at the University of Wisconsin-Madison, Madison, Wisconsin, USA. A second site is the Istituto Nazionale di Fisica Nucleare Sezione di Bologna, Bologna, Italy. Please choose the site nearest to you.

Make note of the location where you download the binary.

The Condor binary distribution is packaged in the following files and directories:

DOC
directions on where to find Condor documentation
INSTALL
these installation directions
LICENSE-2.0.TXT
the licensing agreement. By installing Condor, you agree to the contents of this file
README
general information
condor_configure
the Perl script used to install and configure Condor
condor_install
the Perl script used to install Condor
examples
directory containing C, Fortran and C++ example programs to run with Condor
bin
directory which contains the distribution Condor user programs.
sbin
directory which contains the distribution Condor system programs.
etc
directory which contains the distribution Condor configuration data.
lib
directory which contains the distribution Condor libraries.
libexec
directory which contains the distribution Condor programs that are only used internally by Condor.
man
directory which contains the distribution Condor manual pages.
src
directory which contains the distribution Condor source code for CHIRP and DRMAA.

Before you install, please consider joining the condor-world mailing list. Traffic on this list is kept to an absolute minimum. It is only used to announce new releases of Condor. To subscribe, send a message to majordomo@cs.wisc.edu with the body:

   subscribe condor-world


3.2.2 Preparation

Before installation, make a few important decisions about the basic layout of your pool. The decisions answer the questions:

  1. What machine will be the central manager?
  2. What machines should be allowed to submit jobs?
  3. Will Condor run as root or not?
  4. Who will be administering Condor on the machines in your pool?
  5. Will you have a Unix user named condor and will its home directory be shared?
  6. Where should the machine-specific directories for Condor go?
  7. Where should the parts of the Condor system be installed?
  8. Am I using AFS?
  9. Do I have enough disk space for Condor?

1. What machine will be the central manager?

One machine in your pool must be the central manager. Install Condor on this machine first. This is the centralized information repository for the Condor pool, and it is also the machine that does match-making between available machines and submitted jobs. If the central manager machine crashes, any currently active matches in the system will keep running, but no new matches will be made. Moreover, most Condor tools will stop working. Because of the importance of this machine for the proper functioning of Condor, install the central manager on a machine that is likely to stay up all the time, or on one that will be rebooted quickly if it does crash.

Also consider network traffic and your network layout when choosing your central manager. All the daemons send updates (by default, every 5 minutes) to this machine. Memory requirements for the central manager differ by the number of machines in the pool. A pool with up to about 100 machines will require approximately 25 Mbytes of memory for the central manager's tasks. A pool with about 1000 machines will require approximately 100 Mbytes of memory for the central manager's tasks.

A faster CPU will improve the time to do matchmaking.

2. Which machines should be allowed to submit jobs?

Condor can restrict the machines allowed to submit jobs. Alternatively, it can allow any machine the network allows to connect to a submit machine to submit jobs. If the Condor pool is behind a firewall, and all machines inside the firewall are trusted, the HOSTALLOW_WRITE configuration entry can be set to *. Otherwise, it should be set to reflect the set of machines permitted to submit jobs to this pool. Condor tries to be secure by default, so out of the box, the configuration file ships with an invalid definition for this configuration variable. This invalid value allows no machine to connect and submit jobs, so after installation, change this entry. Look for the entry defined with the value YOU_MUST_CHANGE_THIS_INVALID_CONDOR_CONFIGURATION_VALUE.
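As a sketch, the shipped entry might be changed as follows; the domain pattern shown is hypothetical, so substitute the machines or domains you actually trust:

```
## As shipped (allows no machine to submit jobs):
# HOSTALLOW_WRITE = YOU_MUST_CHANGE_THIS_INVALID_CONDOR_CONFIGURATION_VALUE

## After installation, permitting submission from one (hypothetical) domain:
HOSTALLOW_WRITE = *.example.edu
```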

3. Will Condor run as root or not?

Start up the Condor daemons as the Unix user root. Without this, Condor can do very little to enforce security and policy decisions. You can install Condor as any user; however, there are both serious security and performance consequences. Please see section 3.6.13 on page [*] in the manual for the details and ramifications of running Condor as a Unix user other than root.

4. Who will administer Condor?

Either root will administer Condor directly, or someone else will act as the Condor administrator. If root has delegated the responsibility to another person, keep in mind that as long as Condor is started up as root, whoever has the ability to edit the Condor configuration files can effectively run arbitrary programs as root.

5. Will you have a Unix user named condor, and will its home directory be shared?

To simplify installation of Condor, create a Unix user named condor on all machines in the pool. The Condor daemons will create files (such as the log files) owned by this user, and the home directory can be used to specify the location of files and directories needed by Condor. The home directory of this user can either be shared among all machines in your pool, or could be a separate home directory on the local partition of each machine. Both approaches have advantages and disadvantages. Having the directories centralized can make administration easier, but also concentrates the resource usage such that you potentially need a lot of space for a single shared home directory. See the section below on machine-specific directories for more details.

Note that the user condor must not be an account into which a person can log in. If a person can log in as user condor, it permits a major security breach: the user condor could submit jobs that run as any other user, giving those jobs complete access to the user's data. A standard way to disallow login to an account on Unix platforms is to enter an invalid shell in the password file.
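For illustration, such a password-file entry might look like the following sketch; the uid, gid, and home directory are hypothetical, and the invalid shell in the final field is the important part:

```
condor:x:4001:4001:Condor daemon account:/home/condor:/bin/false
```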

If you choose not to create a user named condor, then you must specify, either via the CONDOR_IDS environment variable or the CONDOR_IDS configuration file setting, which uid.gid pair should be used for the ownership of various Condor files. See section 3.6.13 on UIDs in Condor on page [*] in the Administrator's Manual for details.
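For example, either form below would tell Condor to use a uid.gid pair of 4001.4001 (a hypothetical pair; substitute an unprivileged account that exists on your system):

```
## In the environment, before starting the daemons:
#   CONDOR_IDS=4001.4001; export CONDOR_IDS

## Or in the configuration file:
CONDOR_IDS = 4001.4001
```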

6. Where should the machine-specific directories for Condor go?

Condor needs a few directories that are unique on every machine in your pool. These are spool, log, and execute. Generally, all three are subdirectories of a single machine-specific directory called the local directory (specified by the LOCAL_DIR macro in the configuration file). Each should be owned by the user that Condor is run as.

If you have a Unix user named condor with a local home directory on each machine, the LOCAL_DIR could just be user condor's home directory (LOCAL_DIR = $(TILDE) in the configuration file). If this user's home directory is shared among all machines in your pool, you would want to create a directory for each host (named by host name) for the local directory (for example, LOCAL_DIR = $(TILDE)/hosts/$(HOSTNAME)). If you do not have a condor account on your machines, you can put these directories wherever you'd like. However, where to place them will require some thought, as each one has its own resource needs:
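As a sketch of the shared-home-directory layout above, the per-host local directory and its three subdirectories could be created as follows. BASE stands in for user condor's home directory and defaults to a scratch directory here; on a real install it would be something like /home/condor:

```shell
#!/bin/sh
# Create a per-host local directory holding the three machine-specific
# subdirectories Condor needs: spool, log, and execute.
BASE=${BASE:-$(mktemp -d)}              # stand-in for ~condor
LOCAL_DIR="$BASE/hosts/$(hostname)"     # one directory per host
mkdir -p "$LOCAL_DIR/spool" "$LOCAL_DIR/log" "$LOCAL_DIR/execute"
echo "created $LOCAL_DIR"
```

The matching configuration entry would then be LOCAL_DIR = $(TILDE)/hosts/$(HOSTNAME), as described above.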

execute
This is the directory that acts as the current working directory for any Condor jobs that run on a given execute machine. The binary for the remote job is copied into this directory, so there must be enough space for it. (Condor will not send a job to a machine that does not have enough disk space to hold the initial binary). In addition, if the remote job dumps core for some reason, it is first dumped to the execute directory before it is sent back to the submit machine. So, put the execute directory on a partition with enough space to hold a possible core file from the jobs submitted to your pool.

spool
The spool directory holds the job queue and history files, and the checkpoint files for all jobs submitted from a given machine. As a result, disk space requirements for the spool directory can be quite large, particularly if users are submitting jobs with very large executables or image sizes. By using a checkpoint server (see section 3.8 on Installing a Checkpoint Server on page [*] for details), you can ease the disk space requirements, since all checkpoint files are stored on the server instead of the spool directories for each machine. However, the initial checkpoint files (the executables for all the clusters you submit) are still stored in the spool directory, so you will need some space, even with a checkpoint server.

log
Each Condor daemon writes its own log file, and each log file is placed in the log directory. You can specify what size you want these files to grow to before they are rotated, so the disk space requirements of the directory are configurable. The larger the log files, the more historical information they will hold if there is a problem, but the more disk space they use up. If you have a network file system installed at your pool, you might want to place the log directories in a shared location (such as /usr/local/condor/logs/$(HOSTNAME)), so that you can view the log files from all your machines in a single location. However, if you take this approach, you will have to specify a local partition for the lock directory (see below).

lock
Condor uses a small number of lock files to synchronize access to certain files that are shared between multiple daemons. Because of problems encountered with file locking and network file systems (particularly NFS), these lock files should be placed on a local partition on each machine. By default, they are placed in the log directory. If you place your log directory on a network file system partition, specify a local partition for the lock files with the LOCK parameter in the configuration file (such as /var/lock/condor).

Generally speaking, it is recommended that you do not put these directories (except lock) on the same partition as /var, since if the partition fills up, you will fill up /var as well. This will cause lots of problems for your machines. Ideally, you will have a separate partition for the Condor directories. Then, the only consequence of filling up the directories will be Condor's malfunction, not your whole machine.

7. Where should the parts of the Condor system be installed?

Configuration Files
There are a number of configuration files that allow you different levels of control over how Condor is configured at each machine in your pool. The global configuration file is shared by all machines in the pool. For ease of administration, this file should be located on a shared file system, if possible. In addition, there is a local configuration file for each machine, where you can override settings in the global file. This allows you to have different daemons running, different policies for when to start and stop Condor jobs, and so on. You can also have configuration files specific to each platform in your pool. See section 3.13.4 on page [*] about Configuring Condor for Multiple Platforms for details.

In general, there are a number of places that Condor will look to find its configuration files. The first file it looks for is the global configuration file. These locations are searched in order until a configuration file is found. If none contain a valid configuration file, Condor will print an error message and exit:

  1. File specified in the CONDOR_CONFIG environment variable
  2. /etc/condor/condor_config
  3. /usr/local/etc/condor_config
  4. ~condor/condor_config
  5. $(GLOBUS_LOCATION)/etc/condor_config

If you specify a file in the CONDOR_CONFIG environment variable and there's a problem reading that file, Condor will print an error message and exit right away, instead of continuing to search the other options. However, if no CONDOR_CONFIG environment variable is set, Condor will search through the other options.
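The search order above can be mirrored by a small shell sketch; this is illustrative only (the /nonexistent fallback for an unset GLOBUS_LOCATION is an assumption to keep the test harmless), and Condor itself performs the real search:

```shell
#!/bin/sh
# Report the first location holding a readable global configuration file,
# mirroring Condor's search order. CONDOR_CONFIG is honored first.
find_condor_config() {
    [ -n "$CONDOR_CONFIG" ] && { echo "$CONDOR_CONFIG"; return 0; }
    for f in /etc/condor/condor_config \
             /usr/local/etc/condor_config \
             ~condor/condor_config \
             "${GLOBUS_LOCATION:-/nonexistent}/etc/condor_config"; do
        if [ -r "$f" ]; then echo "$f"; return 0; fi
    done
    return 1
}
find_condor_config || echo "no global configuration file found"
```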

Next, Condor tries to load the local configuration file(s). The only way to specify the local configuration file(s) is in the global configuration file, with the LOCAL_CONFIG_FILE macro. If that macro is not set, no local configuration file is used. This macro can be a list of files or a single file.
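A common arrangement, with hypothetical paths, keeps one local configuration file per host next to the global file:

```
## In the global configuration file:
LOCAL_CONFIG_FILE = /usr/local/condor/etc/$(HOSTNAME).local

## The macro may also name a list of files:
# LOCAL_CONFIG_FILE = /usr/local/condor/etc/$(HOSTNAME).local, \
#                     /usr/local/condor/etc/policy.local
```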

Release Directory

Every binary distribution contains five subdirectories: bin, etc, lib, sbin, and libexec. Wherever you choose to install these five directories is called the release directory (specified by the RELEASE_DIR macro in the configuration file). Each release directory contains platform-dependent binaries and libraries, so you will need to install a separate one for each kind of machine in your pool. For ease of administration, these directories should be located on a shared file system, if possible.

  • User Binaries:

    All of the files in the bin directory are programs that Condor users should expect to have in their path. You could either put them in a well-known location (such as /usr/local/condor/bin) which you have Condor users add to their PATH environment variable, or copy those files directly into a well-known place already in the users' PATHs (such as /usr/local/bin). With the above examples, you could also leave the binaries in /usr/local/condor/bin and put soft links in /usr/local/bin pointing to each program.
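    The soft-link approach can be sketched as below; both example paths are assumptions, so adjust them for your site:

    ```shell
    #!/bin/sh
    # Symlink every Condor user binary into a directory already on
    # users' PATHs, leaving the originals in the release directory.
    link_condor_bins() {
        src=$1    # e.g. /usr/local/condor/bin
        dest=$2   # e.g. /usr/local/bin
        for p in "$src"/*; do
            ln -sf "$p" "$dest/$(basename "$p")"
        done
    }
    # Example: link_condor_bins /usr/local/condor/bin /usr/local/bin
    ```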

  • System Binaries:

    All of the files in the sbin directory are Condor daemons and agents, or programs that only the Condor administrator would need to run. Therefore, add these programs only to the PATH of the Condor administrator.

  • Private Condor Binaries:

    All of the files in the libexec directory are Condor programs that should never be run by hand, but are only used internally by Condor.

  • lib Directory:

    The files in the lib directory are the Condor libraries that must be linked in with user jobs for all of Condor's checkpointing and migration features to be used. lib also contains scripts used by the condor_compile program to help re-link jobs with the Condor libraries. These files should be placed in a location that is world-readable, but they do not need to be placed in anyone's PATH. The condor_compile script checks the configuration file for the location of the lib directory.

  • etc Directory:

    etc contains an examples subdirectory which holds various example configuration files and other files used for installing Condor. etc is the recommended location for the master copy of your configuration files. You can then add soft links from one of the locations mentioned above, which Condor checks automatically to find its global configuration file.

Documentation

The documentation provided with Condor is currently available in HTML, Postscript and PDF (Adobe Acrobat). It can be locally installed wherever is customary at your site. You can also find the Condor documentation on the web at: http://www.cs.wisc.edu/condor/manual.

8. Am I using AFS?

If you are using AFS at your site, be sure to read the section 3.13.1 on page [*] in the manual. Condor does not currently have a way to authenticate itself to AFS. A solution is not ready for Version 7.6.10. This implies that you are probably not going to want to have the LOCAL_DIR for Condor on AFS. However, you can (and probably should) have the Condor RELEASE_DIR on AFS, so that you can share one copy of those files and upgrade them in a centralized location. You will also have to do something special if you submit jobs to Condor from a directory on AFS. Again, read manual section 3.13.1 for all the details.

9. Do I have enough disk space for Condor?

Condor takes up a fair amount of space. This is another reason why it is a good idea to have it on a shared file system. The compressed downloads currently range from a low of about 100 Mbytes for Windows to about 500 Mbytes for Linux. The compressed source code takes approximately 16 Mbytes.

In addition, you will need a lot of disk space in the local directory of any machines that are submitting jobs to Condor. See question 6 above for details on this.


3.2.3 Newer Unix Installation Procedure

The Perl script condor_configure installs Condor. Command-line arguments specify all needed information to this script. The script can be executed multiple times, to modify or further set the configuration. condor_configure has been tested using Perl 5.003. Use this or a more recent version of Perl.

After download, all the files are in a single compressed tar archive. Untar it with

  tar xzf completename.tar.gz
After untarring, the directory will contain the Perl scripts condor_configure and condor_install, as well as ``bin'', ``etc'', ``examples'', ``include'', ``lib'', ``libexec'', ``man'', ``sbin'', ``sql'' and ``src'' subdirectories.

condor_configure and condor_install are the same program with different default behaviors. Running condor_install is identical to running ``condor_configure --install=.''. Both work on the directories listed above (``sbin'', etc.). As the names imply, condor_install is used to install Condor, whereas condor_configure is used to modify the configuration of an existing Condor installation.

condor_configure and condor_install are completely command-line driven; they are not interactive. Several command-line arguments are always needed with condor_configure and condor_install. The argument

  --install=/path/to/release

specifies the path to the Condor release directories (see above). The default for condor_install is ``--install=.''. The argument

  --install-dir=directory

or

  --prefix=directory

specifies the path to the install directory.

The argument

  --local-dir=directory

specifies the path to the local directory.

The --type option to condor_configure specifies one or more of the roles that a machine may take on within the Condor pool: central manager, submit or execute. These options are given in a comma-separated list. So, if a machine is both a submit and an execute machine, the proper command-line option is

--type=submit,execute

Install Condor on the central manager machine first. If Condor will run as root in this pool (Item 3 above), run condor_install as root, and it will install and set the file permissions correctly. On the central manager machine, run condor_install as follows.

% condor_install --prefix=~condor \
	--local-dir=/scratch/condor --type=manager

To update the above Condor installation, for example, to also be submit machine:

% condor_configure --prefix=~condor \
	--local-dir=/scratch/condor --type=manager,submit

As in the above example, the central manager can also be a submit point or an execute machine, but this is only recommended for very small pools. In that case, the --type option changes to manager,execute or manager,submit or manager,submit,execute.

After the central manager is installed, the execute and submit machines should then be configured. Decisions about whether to run Condor as root should be consistent throughout the pool. For each machine in the pool, run

% condor_install --prefix=~condor \
	--local-dir=/scratch/condor --type=execute,submit

See the condor_configure manual page in section 9 on page [*] for details.


3.2.4 Starting Condor Under Unix After Installation

Now that Condor has been installed on the machine(s), there are a few things to check before starting up Condor.

  1. Read through the <release_dir>/etc/condor_config file. There are a lot of possible settings, and you should at least take a look at the first two main sections to make sure everything looks okay. In particular, you might want to set up security for Condor. See section 3.6.1 on page [*] to learn how to do this.

  2. For Linux platforms, run the condor_kbdd to monitor keyboard and mouse activity on all machines within the pool that will run a condor_startd; these are machines that execute jobs. To do this, the subsystem KBDD will need to be added to the DAEMON_LIST configuration variable definition.

    For Unix platforms other than Linux, Condor can monitor the activity of your mouse and keyboard, provided that you tell it where to look. You do this with the CONSOLE_DEVICES entry in the condor_startd section of the configuration file. On most platforms, reasonable defaults are provided. For example, the default device for the mouse is 'mouse', since most installations have a soft link from /dev/mouse that points to the right device (such as tty00 if you have a serial mouse, psaux if you have a PS/2 bus mouse, etc). If you do not have a /dev/mouse link, you should either create one (you will be glad you did), or change the CONSOLE_DEVICES entry in Condor's configuration file. This entry is a comma separated list, so you can have any devices in /dev count as 'console devices' and activity will be reported in the condor_startd's ClassAd as ConsoleIdleTime.

  3. (Linux only) Condor needs to be able to find the utmp file. According to the Linux File System Standard, this file should be /var/run/utmp. If Condor cannot find it there, it looks in /var/adm/utmp. If it still cannot find it, it gives up. So, if your Linux distribution places this file somewhere else, be sure to put a soft link from /var/run/utmp to point to the real location.
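The configuration changes from item 2 might look like the following sketch; the existing DAEMON_LIST varies by machine role, so append KBDD to your list rather than replacing it, and the CONSOLE_DEVICES values shown are illustrative:

```
## Linux execute machines: add the keyboard daemon
DAEMON_LIST = MASTER, STARTD, KBDD

## Other Unix platforms: tell the condor_startd where to look instead
CONSOLE_DEVICES = mouse, console
```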

To start up the Condor daemons, execute <release_dir>/sbin/condor_master. This is the Condor master, whose only job in life is to make sure the other Condor daemons are running. The master keeps track of the daemons, restarts them if they crash, and periodically checks to see if you have installed new binaries (and if so, restarts the affected daemons).

If you are setting up your own pool, you should start Condor on your central manager machine first. If you have done a submit-only installation and are adding machines to an existing pool, the start order does not matter.

To ensure that Condor is running, you can run either:

        ps -ef | egrep condor_
or
        ps -aux | egrep condor_
depending on your flavor of Unix. On a central manager machine that can submit jobs as well as execute them, there will be processes for:

  • condor_master
  • condor_collector
  • condor_negotiator
  • condor_schedd
  • condor_startd

On a central manager machine that neither submits nor executes jobs, there will be processes for:

  • condor_master
  • condor_collector
  • condor_negotiator

For a machine that only submits jobs, there will be processes for:

  • condor_master
  • condor_schedd

For a machine that only executes jobs, there will be processes for:

  • condor_master
  • condor_startd

Once you are sure the Condor daemons are running, check to make sure that they are communicating with each other. You can run condor_status to get a one line summary of the status of each machine in your pool.

Once you are sure Condor is working properly, you should add condor_master into your startup/bootup scripts (i.e. /etc/rc ) so that your machine runs condor_master upon bootup. condor_master will then fire up the necessary Condor daemons whenever your machine is rebooted.

If your system uses System-V style init scripts, you can look in <release_dir>/etc/examples/condor.boot for a script that can be used to start and stop Condor automatically by init. Normally, you would install this script as /etc/init.d/condor and put in soft links from various directories (for example, /etc/rc2.d) that point back to /etc/init.d/condor. The exact location of these scripts and links will vary on different platforms.
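The System-V arrangement can be sketched as below; the rc2.d directory and the S98 link name are assumptions that vary by platform, and the commands must be run as root on a real system:

```shell
#!/bin/sh
# Install the example init script and link it from a run-level directory.
install_condor_init() {
    release_dir=$1   # e.g. /usr/local/condor
    etc_root=$2      # normally /etc
    cp "$release_dir/etc/examples/condor.boot" "$etc_root/init.d/condor"
    chmod 755 "$etc_root/init.d/condor"
    ln -sf ../init.d/condor "$etc_root/rc2.d/S98condor"
}
# Example: install_condor_init /usr/local/condor /etc
```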

If your system uses BSD style boot scripts, you probably have an /etc/rc.local file. Add a line to start up <release_dir>/sbin/condor_master.

Now that the Condor daemons are running, there are a few things you can and should do:

  1. (Optional) Do a full install for the condor_compile script. condor_compile assists in linking jobs with the Condor libraries to take advantage of all of Condor's features. As it is currently installed, it will work by placing it in front of any of the following commands that you would normally use to link your code: gcc, g++, g77, cc, acc, c89, CC, f77, fort77 and ld. If you complete the full install, you will be able to use condor_compile with any command whatsoever, in particular, make. See section 3.13.5 on page [*] in the manual for directions.

  2. Try building and submitting some test jobs. See examples/README for details.

  3. If your site uses the AFS network file system, see section 3.13.1 on page [*] in the manual.

  4. We strongly recommend that you start up Condor (run the condor_master daemon) as user root. If you must start Condor as some user other than root, see section 3.6.13 on page [*].


3.2.5 Installation on Windows

This section contains the instructions for installing the Windows version of Condor. The install program will set up a slightly customized configuration file that may be further customized after the installation has completed.

Please read the copyright and disclaimer information in section [*] on page [*] of the manual. Installation and use of Condor is acknowledgment that you have read and agree to the terms.

Be sure that the Condor tools are of the same version as the daemons installed. The Condor executables for distribution are packaged in a single file named similarly to:

  condor-7.4.3-winnt50-x86.msi
This file is approximately 80 Mbytes in size, and it may be removed once Condor is fully installed.

Before installing Condor, please consider joining the condor-world mailing list. Traffic on this list is kept to an absolute minimum. It is only used to announce new releases of Condor. To subscribe, follow the directions given at http://www.cs.wisc.edu/condor/mail-lists/.

For any installation, Condor services are installed and run as the Local System account. Running the Condor services as any other account (such as a domain user) is not supported and could be problematic.

3.2.5.1 Installation Requirements


3.2.5.2 Preparing to Install Condor under Windows

Before installing the Windows version of Condor, there are two major decisions to make about the basic layout of the pool.

  1. What machine will be the central manager?
  2. Is there enough disk space for Condor?

If the answers to these questions are already known, skip to the Windows installation procedure below, section 3.2.5.3 on page [*]. If unsure, read on.


3.2.5.3 Installation Procedure Using the MSI Program

Installation of Condor must be done by a user with administrator privileges. After installation, the Condor services will be run under the local system account. When Condor is running a user job, however, it will run that user job with normal user permissions.

Download Condor, and start the installation process by running the installer. The Condor installation is completed by answering questions and choosing options within the following steps.

If Condor is already installed.

If Condor has been previously installed, a dialog box will appear before the installation of Condor proceeds. The question asks if you wish to preserve your current Condor configuration files. Answer yes or no, as appropriate.

If you answer yes, your configuration files will not be changed, and you will proceed to the point where the new binaries will be installed.

If you answer no, then there will be a second question that asks if you want to use answers given during the previous installation as default answers.

STEP 1: License Agreement.

The first step in installing Condor is a welcome screen and license agreement. You are reminded that it is best to run the installation when no other Windows programs are running. If you need to close other Windows programs, it is safe to cancel the installation and close them. You are asked to agree to the license. Answer yes or no. If you should disagree with the License, the installation will not continue.

Also fill in name and company information, or use the defaults as given.

STEP 2: Condor Pool Configuration.

The Condor configuration needs to be set based upon whether this is a new pool or whether this machine will join an existing one. Choose the appropriate radio button.

For a new pool, enter a chosen name for the pool. To join an existing pool, enter the host name of the central manager of the pool.

STEP 3: This Machine's Roles.

Each machine within a Condor pool may either submit jobs or execute submitted jobs, or both submit and execute jobs. A check box determines if this machine will be a submit point for the pool.

A set of radio buttons determines whether and under what conditions this machine will execute jobs. There are four choices:

Do not run jobs on this machine.
This machine will not execute Condor jobs.
Always run jobs and never suspend them.
Run jobs when the keyboard has been idle for 15 minutes.
Run jobs when the keyboard has been idle for 15 minutes, and the CPU is idle.

For testing purposes, it is often helpful to use the always run Condor jobs option.

For a machine that is to execute jobs, when the choice is one of the last two in the list, Condor also needs to know what to do with a currently running job once the machine is no longer willing to run it. There are two choices:

Keep the job in memory and continue when the machine meets the condition chosen for when to run jobs.
Restart the job on a different machine.

This choice involves a trade-off. Restarting the job on a different machine is less intrusive for the workstation owner than leaving the job in memory for a later time, but a suspended job left in memory will require swap space, which could be a scarce resource. Leaving a job in memory, however, has the benefit that accumulated run time is not lost for a partially completed job.

STEP 4: The Account Domain.

Enter the machine's accounting (or UID) domain. On this version of Condor for Windows, this setting is only used for user priorities (see section 3.4 on page [*]) and to form a default e-mail address for the user.

STEP 5: E-mail Settings.

Various parts of Condor will send e-mail to a Condor administrator if something goes wrong and requires human attention. Specify the e-mail address and the SMTP relay host of this administrator. Please pay close attention to this e-mail, since it will indicate problems in the Condor pool.

STEP 6: Java Settings.
In order to run jobs in the java universe, Condor must have the path to the jvm executable on the machine. The installer will search for and list the jvm path, if it finds one. If not, enter the path. To disable use of the java universe, leave the field blank.

STEP 7: Host Permission Settings.
Machines within the Condor pool will need various types of access permission. The three categories of permission are read, write, and administrator. Enter the machines or domain to be given access permissions, or use the defaults provided. Wild cards and macros are permitted.

Read
Read access allows a machine to obtain information about Condor such as the status of machines in the pool and the job queues. All machines in the pool should be given read access. In addition, giving read access to *.cs.wisc.edu will allow the Condor team to obtain information about the Condor pool, in the event that debugging is needed.
Write
All machines in the pool should be given write access. It allows the machines you specify to send information to your local Condor daemons, for example, to start a Condor job. Note that for a machine to join the Condor pool, it must have both read and write access to all of the machines in the pool.
Administrator
A machine with administrator access is allowed extended permissions, such as changing other users' priorities, modifying the job queue, turning Condor services on and off, and restarting Condor. The central manager should be given administrator access and is the default listed. This setting is granted to the entire machine, so care should be taken not to make it too open.

For more details on these access permissions, and others that can be manually changed in your configuration file, please see section 3.6.9, titled Setting Up IP/Host-Based Security in Condor, on page [*].
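For illustration only, the corresponding entries in a configuration file might look like the following sketch. The host names are placeholders, not recommendations; the variable names are the HOSTALLOW_* settings covered in section 3.6.9:

```
## Sketch only: host names below are placeholders.
HOSTALLOW_READ          = *.your.domain, *.cs.wisc.edu
HOSTALLOW_WRITE         = *.your.domain
HOSTALLOW_ADMINISTRATOR = condor.your.domain
```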

STEP 8: VM Universe Setting.
A radio button determines whether this machine will be configured to run vm universe jobs utilizing VMware. In addition to having the VMware Server installed, Condor also needs Perl installed. The resources available for vm universe jobs can be tuned with these settings, or the defaults listed may be used.

Version
Use the default value, as only one version is currently supported.
Maximum Memory
The maximum memory that each virtual machine is permitted to use on the target machine.
Maximum Number of VMs
The number of virtual machines that can be run in parallel on the target machine.
Networking Support
The VMware instances can be configured to use network support. There are four options in the pull-down menu.
  • None: No networking support.
  • NAT: Network address translation.
  • Bridged: Bridged mode.
  • NAT and Bridged: Allow both methods.
Path to Perl Executable
The path to the Perl executable.

STEP 9: HDFS Settings.
A radio button enables support for the Hadoop Distributed File System (HDFS). When enabled, a further radio button specifies either name node or data node mode.

Running HDFS requires Java to be installed, and Condor must know where the installation is. Running HDFS in data node mode also requires the installation of Cygwin, and the path to the Cygwin directory must be added to the global PATH environment variable.

HDFS has several configuration options that must be filled in to be used.

Primary Name Node
The full host name of the primary name node.
Name Node Port
The port that the name node is listening on.
Name Node Web Port
The port the name node's web interface is bound to. It should be different from the name node's main port.

STEP 10: Choose Setup Type

The next step determines where the Condor files will be installed. We recommend that Condor be installed in the location shown as the default in the install choice: C:\Condor. This is due to several hard-coded paths in scripts and configuration files. Clicking on the Custom choice permits changing the installation directory.

Installation on the local disk is chosen for several reasons. The Condor services run as local system, and within Microsoft Windows, local system has no network privileges. Therefore, for Condor to operate, Condor should be installed on a local hard drive, as opposed to a network drive (file server).

The second reason for installation on the local disk is that the Windows usage of drive letters has implications for where Condor is placed. The drive letter used must not change, even when different users are logged in. Local drive letters do not change under normal operation of Windows.

While it is strongly discouraged, it may be possible to place Condor on a hard drive that is not local, if a dependency is added to the service control manager such that Condor starts after the required file services are available.


3.2.5.4 Unattended Installation Procedure Using the Included Set Up Program

This section details how to run the Condor for Windows installer in an unattended batch mode. This mode runs entirely from the command prompt, without the GUI.

The Condor for Windows installer uses the Microsoft Installer (MSI) technology, and it can be configured for unattended installs in the same way as any other MSI installer.

The following is a sample batch file that is used to set all the properties necessary for an unattended install.

@echo on
set ARGS=
set ARGS=NEWPOOL="N"
set ARGS=%ARGS% POOLNAME=""
set ARGS=%ARGS% RUNJOBS="C"
set ARGS=%ARGS% VACATEJOBS="Y"
set ARGS=%ARGS% SUBMITJOBS="Y"
set ARGS=%ARGS% CONDOREMAIL="you@yours.com"
set ARGS=%ARGS% SMTPSERVER="smtp.localhost"
set ARGS=%ARGS% HOSTALLOWREAD="*"
set ARGS=%ARGS% HOSTALLOWWRITE="*"
set ARGS=%ARGS% HOSTALLOWADMINISTRATOR="$(IP_ADDRESS)"
set ARGS=%ARGS% INSTALLDIR="C:\Condor"
set ARGS=%ARGS% POOLHOSTNAME="$(IP_ADDRESS)"
set ARGS=%ARGS% ACCOUNTINGDOMAIN="none"
set ARGS=%ARGS% JVMLOCATION="C:\Windows\system32\java.exe"
set ARGS=%ARGS% USEVMUNIVERSE="N"
set ARGS=%ARGS% VMMEMORY="128"
set ARGS=%ARGS% VMMAXNUMBER="$(NUM_CPUS)"
set ARGS=%ARGS% VMNETWORKING="N"
set ARGS=%ARGS% USEHDFS="N"
set ARGS=%ARGS% NAMENODE=""
set ARGS=%ARGS% HDFSMODE="HDFS_NAMENODE"
set ARGS=%ARGS% HDFSPORT="5000"
set ARGS=%ARGS% HDFSWEBPORT="4000"
 
msiexec /qb /l* condor-install-log.txt /i condor-7.1.0-winnt50-x86.msi %ARGS%

Each property corresponds to answers that would have been supplied while running an interactive installer. The following is a brief explanation of each property as it applies to unattended installations:

NEWPOOL = < Y | N >
determines whether the installer will create a new pool with the target machine as the central manager.

POOLNAME
sets the name of the pool, if a new pool is to be created. Possible values are either the name or the empty string "".

RUNJOBS = < N | A | I | C >
determines when Condor will run jobs. This can be set to:
  • N: Never run jobs.
  • A: Always run jobs.
  • I: Run jobs when the keyboard has been idle for 15 minutes.
  • C: Run jobs when the keyboard has been idle for 15 minutes, and the CPU is idle.

VACATEJOBS = < Y | N >
determines what Condor should do when it has to stop the execution of a user job. When set to Y, Condor will vacate the job and start it somewhere else if possible. When set to N, Condor will merely suspend the job in memory and wait for the machine to become available again.

SUBMITJOBS = < Y | N >
will cause the installer to configure the machine as a submit node when set to Y.

CONDOREMAIL
sets the e-mail address of the Condor administrator. Possible values are an e-mail address or the empty string "".

HOSTALLOWREAD
is a list of host names that are allowed to issue READ commands to Condor daemons. This value should be set in accordance with the HOSTALLOW_READ setting in the configuration file, as described in section 3.6.9 on page [*].

HOSTALLOWWRITE
is a list of host names that are allowed to issue WRITE commands to Condor daemons. This value should be set in accordance with the HOSTALLOW_WRITE setting in the configuration file, as described in section 3.6.9 on page [*].

HOSTALLOWADMINISTRATOR
is a list of host names that are allowed to issue ADMINISTRATOR commands to Condor daemons. This value should be set in accordance with the HOSTALLOW_ADMINISTRATOR setting in the configuration file, as described in section 3.6.9 on page [*].

INSTALLDIR
defines the path to the directory where Condor will be installed.

POOLHOSTNAME
defines the host name of the pool's central manager.

ACCOUNTINGDOMAIN
defines the accounting (or UID) domain the target machine will be in.

JVMLOCATION
defines the path to Java virtual machine on the target machine.

SMTPSERVER
defines the host name of the SMTP server that the target machine is to use to send e-mail.

VMMEMORY
an integer value that defines the maximum memory each virtual machine is permitted to use on the target machine.

VMMAXNUMBER
an integer value that defines the number of VMs that can be run in parallel on the target machine.

VMNETWORKING = < N | A | B | C >
determines if VM Universe can use networking. This can be set to:
  • N: No networking support.
  • A: NAT only.
  • B: Bridged only.
  • C: Both NAT and bridged.

USEVMUNIVERSE = < Y | N >
will cause the installer to enable VM Universe jobs on the target machine.

PERLLOCATION
defines the path to Perl on the target machine. This is required in order to use the vm universe.

USEHDFS = < Y | N >
determines if HDFS is run.

HDFSMODE = < HDFS_DATANODE | HDFS_NAMENODE >
sets the mode HDFS runs in.

NAMENODE
sets the host name of the primary name node.

HDFSPORT
sets the port number that the primary name node listens to.

HDFSWEBPORT
sets the port number that the name node web interface is bound to.

After defining each of these properties for the MSI installer, the installer can be started with the msiexec command. The following command starts the installer in unattended mode, and it dumps a journal of the installer's progress to a log file:

msiexec /qb /lxv* condor-install-log.txt /i condor-7.2.2-winnt50-x86.msi [property=value] ...

More information on the features of msiexec can be found at Microsoft's website at http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/msiexec.mspx.
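For administrators who generate the unattended command line from a Unix-side deployment script, the same property list can be assembled in POSIX shell. This is an illustrative sketch only; the property names are those documented above, and the values are examples mirroring the earlier batch file:

```shell
# Sketch: assemble the msiexec property string in POSIX shell.
# Property names are those documented above; values are examples.
ARGS=""
ARGS="$ARGS NEWPOOL=\"N\""
ARGS="$ARGS RUNJOBS=\"C\""
ARGS="$ARGS SUBMITJOBS=\"Y\""
ARGS="$ARGS INSTALLDIR=\"C:\\Condor\""
CMD="msiexec /qb /l* condor-install-log.txt /i condor-7.1.0-winnt50-x86.msi$ARGS"
echo "$CMD"
```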


3.2.5.5 Manual Installation of Condor on Windows

If you are installing Condor on many different machines, you may wish to use some mechanism other than running the Setup program described above on each machine.

WARNING: This is for advanced users only! All others should use the Setup program described above.

Here is a brief overview of how to install Condor manually without using the provided GUI-based setup program:

The Service
The service that Condor will install is called "Condor". The Startup Type is Automatic. The service should log on as System Account, but do not enable "Allow Service to Interact with Desktop". The program that is run is condor_master.exe.

The Condor service can be installed and removed using the sc.exe tool, which is included in Windows XP and Windows 2003 Server. The tool is also available as part of the Windows 2000 Resource Kit.

Installation can be done as follows:

sc create Condor binpath= c:\condor\bin\condor_master.exe

To remove the service, use:

sc delete Condor

The Registry
Condor uses a few registry entries in its operation. The key that Condor uses is HKEY_LOCAL_MACHINE\Software\Condor. The values that Condor puts in this registry key serve two purposes.
  1. The values of CONDOR_CONFIG and RELEASE_DIR are used for Condor to start its service.

    CONDOR_CONFIG should point to the condor_config file. In this version of Condor, it must reside on the local disk.

    RELEASE_DIR should point to the directory where Condor is installed. This is typically C:\Condor, and again, this must reside on the local disk.

  2. The other purpose is storing the entries from the last installation so that they can be used for the next one.
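As an illustration, these registry entries could be created by importing a .reg file such as the following sketch. The paths assume the default C:\Condor install location; adjust them to your installation:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Condor]
"CONDOR_CONFIG"="C:\\Condor\\condor_config"
"RELEASE_DIR"="C:\\Condor"
```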

The File System
The files that are needed for Condor to operate are identical to those of the Unix version of Condor, except that executable files end in .exe. For example, on Unix one of the files is condor_master, and on Windows the corresponding file is condor_master.exe.

These files currently must reside on the local disk for a variety of reasons. Advanced Windows users might be able to put the files on remote resources. The main concern is twofold. First, the files must be there when the service is started. Second, the files must always be in the same spot (including drive letter), no matter who is logged into the machine.

Note also that when installing manually, you will need to create the directories that Condor will expect to be present given your configuration. This normally is simply a matter of creating the log, spool, and execute directories.


3.2.5.6 Starting Condor Under Windows After Installation

After the installation of Condor is completed, the Condor service must be started. If you used the GUI-based setup program to install Condor, the Condor service should already be started. If you installed manually, Condor must be started by hand, or you can simply reboot. NOTE: The Condor service will start automatically whenever you reboot your machine.

To start Condor by hand:

  1. From the Start menu, choose Settings.
  2. From the Settings menu, choose Control Panel.
  3. From the Control Panel, choose Services.
  4. From Services, choose Condor, and Start.

Or, alternatively you can enter the following command from a command prompt:

         net start condor

Run the Task Manager (Control-Shift-Escape) to check that Condor services are running; at a minimum, the condor_master.exe process should be listed.

Also, you should now be able to open up a new cmd (DOS prompt) window, and the Condor bin directory should be in your path, so you can issue the normal Condor commands, such as condor_q and condor_status.


3.2.5.7 Condor is Running Under Windows ... Now What?

Once Condor services are running, try submitting test jobs. Example 2 within section 2.5.1 on page [*] presents a vanilla universe job.


3.2.6 RPMs

RPMs are available in Condor Version 7.6.10. We provide a Yum repository, as well as installation and configuration in one easy step. This RPM installation is currently available for Red Hat-compatible systems only. As of Condor version 7.5.1, the Condor RPM installs into FHS locations.

Yum repositories are at http://www.cs.wisc.edu/condor/yum/ . The repositories are named to distinguish stable releases from development releases, and by Red Hat version number; there are four repositories in all.

Here is an ordered set of steps that will get Condor running using the RPM.

  1. The Condor package will automatically add a condor user/group, if it does not exist already. Sites wishing to control the attributes of this user/group should add the condor user/group manually before installation.

  2. Download and install the meta-data that describes the appropriate YUM repository. This example is for the stable series, on RHEL 5.
      cd /etc/yum.repos.d
      wget http://www.cs.wisc.edu/condor/yum/repo.d/condor-stable-rhel5.repo
    
Note that this step need be done only once; do not add the same repository more than once.

  3. Install Condor. For 32-bit machines:
      yum install condor
    
    For 64-bit machines:
      yum install condor.x86_64
    

  4. As needed, edit the Condor configuration files to customize. The configuration files are in the directory /etc/condor/ . Do not use condor_configure or condor_install for configuration. The installation will be able to find configuration files without additional administrative intervention, as the configuration files are placed in /etc, and Condor searches this directory.

  5. Start Condor daemons:
      /sbin/service condor start
    


3.2.7 Debian Packages

Debian packages are available in Condor Version 7.6.10. We provide an APT repository, as well as installation and configuration in one easy step. These Debian packages of Condor are currently available for Debian 4 (Etch) and Debian 5 (Lenny). As of Condor version 7.5.1, the Condor Debian package installs into FHS locations.

The Condor APT repositories are specified at http://www.cs.wisc.edu/condor/debian/ . See this web page for repository information.

Here is an ordered set of steps that will get Condor running.

  1. The Condor package will automatically add a condor user/group, if it does not exist already. Sites wishing to control the attributes of this user/group should add the condor user/group manually before installation.

  2. If not already present, set up access to the appropriate APT repository; they are distinguished as stable or development release, and by operating system. Ensure that the correct one of the following release and operating system-specific lines is in the file /etc/apt/sources.list .
    deb http://www.cs.wisc.edu/condor/debian/stable/ etch contrib
    deb http://www.cs.wisc.edu/condor/debian/development/ etch contrib
    deb http://www.cs.wisc.edu/condor/debian/stable/ lenny contrib
    deb http://www.cs.wisc.edu/condor/debian/development/ lenny contrib
    
    Note that this step need be done only once; do not add the same repository more than once.

  3. Install and start Condor services:
      apt-get update
      apt-get install condor
    

  4. As needed, edit the Condor configuration files to customize. The configuration files are in the directory /etc/condor/ . Do not use condor_configure or condor_install for configuration. The installation will be able to find configuration files without additional administrative intervention, as the configuration files are placed in /etc, and Condor searches this directory.

    Then, if any configuration changes are made, restart Condor with

      /etc/init.d/condor restart
    


3.2.8 Upgrading - Installing a Newer Version of Condor

Section 3.10.1 on page [*] within the section on Pool Management describes strategies for doing an upgrade: changing the running version of Condor from the current installation to a newer version.


3.2.9 Installing the CondorView Client Contrib Module

The CondorView Client contrib module is used to automatically generate World Wide Web pages to display usage statistics of a Condor pool. Included in the module is a shell script which invokes the condor_stats command to retrieve pool usage statistics from the CondorView server, and generate HTML pages from the results. Also included is a Java applet, which graphically visualizes Condor usage information. Users can interact with the applet to customize the visualization and to zoom in to a specific time frame. Figure 3.2 on page [*] is a screen shot of a web page created by CondorView. To get a further feel for what pages generated by CondorView look like, view the statistics for the University of Wisconsin-Madison pool by visiting the URL http://condor-view.cs.wisc.edu/condor-view-applet.

Figure 3.2: Screen shot of CondorView Client
\includegraphics{admin-man/view-screenshot.ps}

After unpacking and installing the CondorView Client, a script named make_stats can be invoked to create HTML pages displaying Condor usage for the past hour, day, week, or month. By using the Unix cron facility to periodically execute make_stats, Condor pool usage statistics can be kept up to date automatically. This simple model allows the CondorView Client to be easily installed; no Web server CGI interface is needed.


3.2.9.1 Step-by-Step Installation of the CondorView Client

  1. Make certain that the CondorView Server is configured. Section 3.13.7 describes configuration of the server. The server logs information on disk in order to provide a persistent, historical database of pool statistics. The CondorView Client makes queries over the network to this database. The condor_collector includes this database support. To activate the persistent database logging, add the following entries to the configuration file for the condor_collector chosen to act as the ViewServer.
        POOL_HISTORY_DIR = /full/path/to/directory/to/store/historical/data 
        KEEP_POOL_HISTORY = True
    

  2. Create a directory where CondorView is to place the HTML files. This directory should be one published by a web server, so that HTML files which exist in this directory can be accessed using a web browser. This directory is referred to as the VIEWDIR directory.

  3. Download the view_client contrib module. Follow links for contrib modules on the downloads page at http://www.cs.wisc.edu/condor/downloads-v2/download.pl.

  4. Unpack or untar this contrib module into the directory VIEWDIR. This creates several files and subdirectories. Further unpack the jar file within the VIEWDIR directory with:
     
      jar -xf condorview.jar
    

  5. Edit the make_stats script. At the beginning of the file are six parameters to customize. The parameters are

    ORGNAME
A brief name that identifies an organization. An example is ``Univ of Wisconsin''. Do not use any slashes in the name or other special regular-expression characters. Avoid the characters \, ^, and $.

    CONDORADMIN
    The e-mail address of the Condor administrator at your site. This e-mail address will appear at the bottom of the web pages.

    VIEWDIR
    The full path name (not a relative path) to the VIEWDIR directory set by installation step 2. It is the directory that contains the make_stats script.

    STATSDIR
The full path name of the directory which contains the condor_stats binary. The condor_stats program is included in the <release_dir>/bin directory. The value for STATSDIR is added to the PATH parameter by default.

    PATH
    A list of subdirectories, separated by colons, where the make_stats script can find the awk, bc, sed, date, and condor_stats programs. If perl is installed, the path should also include the directory where perl is installed. The following default works on most systems:
     
            PATH=/bin:/usr/bin:$STATSDIR:/usr/local/bin
    

  6. To create all of the initial HTML files, run
            ./make_stats setup
    
    Open the file index.html to verify that things look good.

  7. Add the make_stats program to cron. Running make_stats in step 6 created a cronentries file. This cronentries file is ready to be processed by the Unix crontab command. The crontab manual page contains details about the crontab command and the cron daemon. Look at the cronentries file; by default, it will run make_stats hour every 15 minutes, make_stats day once an hour, make_stats week twice per day, and make_stats month once per day. These are reasonable defaults. Add these commands to cron on any system that can access the VIEWDIR and STATSDIR directories, even on a system that does not have Condor installed. The commands do not need to run as root user; in fact, they should probably not run as root. These commands can run as any user that has read/write access to the VIEWDIR directory. The command
     
      crontab cronentries
    
    can set the crontab file; note that this command overwrites the current, existing crontab file with the entries from the file cronentries.

  8. Point the web browser at the VIEWDIR directory to complete the installation.
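As a hypothetical sketch, a cronentries file matching the default cadence described in step 7 would resemble the following; the generated file will differ in detail, and /path/to/viewdir is a placeholder for your VIEWDIR:

```
# Sketch only: /path/to/viewdir is a placeholder for VIEWDIR.
0,15,30,45 * * * * cd /path/to/viewdir; ./make_stats hour
0 * * * *          cd /path/to/viewdir; ./make_stats day
0 0,12 * * *       cd /path/to/viewdir; ./make_stats week
30 0 * * *         cd /path/to/viewdir; ./make_stats month
```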


3.2.10 Dynamic Deployment

Dynamic deployment is a mechanism that allows rapid, automated installation and start up of Condor resources on a given machine. In this way any machine can be added to a Condor pool. The dynamic deployment tool set also provides tools to remove a machine from the pool, without leaving residual effects on the machine such as leftover installations, log files, and working directories.

Installation and start up is provided by condor_cold_start. The condor_cold_start program determines the operating system and architecture of the target machine, and transfers the correct installation package from an ftp, http, or grid ftp site. After transfer, it installs Condor and creates a local working directory for Condor to run in. As a last step, condor_cold_start begins running Condor in a manner which allows for later easy and reliable shut down.

The program that reliably shuts down and uninstalls a previously dynamically installed Condor instance is condor_cold_stop. condor_cold_stop begins by safely and reliably shutting down the running Condor installation. It ensures that Condor has completely shut down before continuing, and optionally ensures that there are no queued jobs at the site. Next, condor_cold_stop removes and optionally archives the Condor working directories, including the log directory. These archives can be stored to a mounted file system or to a grid ftp site. As a last step, condor_cold_stop uninstalls the Condor executables and libraries. The end result is that the machine is left unchanged after the dynamically deployed Condor instance is removed.

3.2.10.1 Configuration and Usage

Dynamic deployment is designed for the expert Condor user and administrator. Tool design choices were made for functionality, not ease-of-use.

Like every installation of Condor, a dynamically deployed installation relies on a configuration. To add a target machine to a previously created Condor pool, the global configuration file for that pool is a good starting point. Modifications to that configuration can be made in a separate, local configuration file used in the dynamic deployment. The global configuration file must be placed on an ftp, http, grid ftp, or file server accessible by condor_cold_start. The local configuration file is to be on a file system accessible by the target machine.

There are some specific configuration variables that may be set for dynamic deployment. A list of executables and directories which must be present for Condor to start on the target machine may be set with the configuration variables DEPLOYMENT_REQUIRED_EXECS and DEPLOYMENT_REQUIRED_DIRS. If defined, and any of the listed executables or directories are not present, then condor_cold_start exits with an error. Note this does not affect what is installed, only whether start up is successful.

A list of executables and directories which are recommended to be present for Condor to start on the target machine may be set with the configuration variables DEPLOYMENT_RECOMMENDED_EXECS and DEPLOYMENT_RECOMMENDED_DIRS. If defined, and any of the listed executables or directories are not present, then condor_cold_start prints a warning message and continues. Here is a portion of the configuration relevant to a dynamic deployment of a Condor submit node:

DEPLOYMENT_REQUIRED_EXECS    = MASTER, SCHEDD, PREEN, STARTER, \
                               STARTER_STANDARD, SHADOW, \
                               SHADOW_STANDARD, GRIDMANAGER, GAHP, CONDOR_GAHP
DEPLOYMENT_REQUIRED_DIRS     = SPOOL, LOG, EXECUTE
DEPLOYMENT_RECOMMENDED_EXECS = CREDD
DEPLOYMENT_RECOMMENDED_DIRS  = LIB, LIBEXEC

Additionally, the user must specify which Condor services will be started. This is done through the DAEMON_LIST configuration variable. Another excerpt from a dynamic submit node deployment configuration:

DAEMON_LIST  = MASTER, SCHEDD

Finally, the location of the dynamically installed Condor executables is tricky to set, since the location is unknown before installation. Therefore, the variable DEPLOYMENT_RELEASE_DIR is defined in the environment. It corresponds to the location of the dynamic Condor installation. If, as is often the case, the configuration file specifies the location of Condor executables in relation to the RELEASE_DIR variable, the configuration can be made dynamically deployable by setting RELEASE_DIR to DEPLOYMENT_RELEASE_DIR as

RELEASE_DIR = $(DEPLOYMENT_RELEASE_DIR)

In addition to setting up the configuration, the user must also determine where the installation package will reside. The installation package can be in either tar or gzipped tar form, and may reside on an ftp, http, grid ftp, or file server. Create this installation package by tar'ing up the needed binaries and libraries, and place it on the appropriate server. The binaries can be tar'ed in a flat structure or within bin and sbin subdirectories. Here is a list of files that gives an example structure for a dynamic deployment of the condor_schedd daemon.

% tar tfz latest-i686-Linux-2.4.21-37.ELsmp.tar.gz
bin/
bin/condor_config_val
bin/condor_q
sbin/
sbin/condor_preen
sbin/condor_shadow.std
sbin/condor_starter.std
sbin/condor_schedd
sbin/condor_master
sbin/condor_gridmanager
sbin/gt4_gahp
sbin/gahp_server
sbin/condor_starter
sbin/condor_shadow
sbin/condor_c-gahp
sbin/condor_off
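As an illustrative sketch, a package with the layout above could be assembled as follows. Placeholder files stand in for the real binaries here; in practice, copy the binaries from an existing Condor release directory, and adapt the file set to the daemons you intend to run:

```shell
# Sketch: assemble a gzipped tar deployment package with bin/ and
# sbin/ subdirectories. The file names mirror the example listing
# above; placeholder files are created with touch for illustration.
mkdir -p stage/bin stage/sbin
touch stage/bin/condor_q stage/bin/condor_config_val
touch stage/sbin/condor_master stage/sbin/condor_schedd
tar -C stage -czf condor-deploy.tar.gz bin sbin
tar -tzf condor-deploy.tar.gz   # verify the package contents
```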
