If a key machine no longer functions, HTCondor can be configured such that another machine takes over its functions. This is called High Availability. While high availability is generally applicable, there are currently two specialized cases for its use: when the central manager (running the condor_negotiator and condor_collector daemons) becomes unavailable, and when the machine running the condor_schedd daemon (maintaining the job queue) becomes unavailable.
For a pool where all jobs are submitted through a single machine, and there are lots of jobs, that machine becoming nonfunctional means that jobs stop running. The condor_schedd daemon maintains the job queue; with the job queue on a nonfunctional machine, no jobs can be run. Using one machine as the single submission point makes the situation worse: for each HTCondor job taken from the queue and executed, a condor_shadow process runs on the machine from which the job was submitted to handle input/output functionality. If this machine becomes nonfunctional, none of the jobs can continue, and the entire pool stops running jobs.
The goal of High Availability in this special case is to transfer the condor_schedd daemon to run on another designated machine. Jobs that were stopped without finishing can be restarted from the beginning, or can continue execution using the most recent checkpoint. New jobs can enter the job queue. Without High Availability, the job queue would remain intact, but further progress on jobs would wait until the machine running the condor_schedd daemon became available again (after whatever caused its unavailability was fixed).
HTCondor uses its flexible configuration mechanisms to allow the transfer of the condor_schedd daemon from one machine to another. The configuration specifies which machines are chosen to run the condor_schedd daemon. To prevent multiple condor_schedd daemons from running at the same time, a semaphore-like lock is held over the job queue. This lock also synchronizes the case in which control has been transferred to a secondary machine and the primary machine returns to functionality. Configuration variables determine the time interval after which the lock expires, as well as the period between polls that check for expired locks.
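As an illustration, these intervals are controlled by condor_master configuration variables such as HA_LOCK_HOLD_TIME and HA_POLL_PERIOD; the values below are examples, not defaults (see section 3.5.7 for the defaults):

    # Example values only
    # Seconds a held lock remains valid without renewal
    HA_LOCK_HOLD_TIME = 300
    # Seconds between polls of the lock
    HA_POLL_PERIOD = 60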
To specify a single machine that would take over if the machine running the condor_schedd daemon stops working, the following additions are made to the local configuration of any and all machines that are able to run the condor_schedd daemon (becoming the single pool submission point):
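    # A sketch; /share/spool is a placeholder for a shared file space
    MASTER_HA_LIST = SCHEDD
    SPOOL = /share/spool
    HA_LOCK_URL = file:/share/spool
    VALID_SPOOL_FILES = $(VALID_SPOOL_FILES), SCHEDD.lock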
Configuration macro MASTER_HA_LIST identifies the condor_schedd daemon as the daemon that is to be watched to make sure that it is running. Each machine with this configuration must have access to the lock that synchronizes which single machine runs the condor_schedd daemon. The lock and the job queue must both be located in a shared file space, and the lock is currently specified only with a file URL. The configuration specifies the shared space (SPOOL) and the URL of the lock. condor_preen is not currently aware of the lock file and will delete it if it is placed in the SPOOL directory, so be sure to add the file SCHEDD.lock to VALID_SPOOL_FILES.
As HTCondor starts on machines that are configured to run the single condor_schedd daemon, the condor_master daemon of the first machine to look at (poll) the lock notices that no lock is held. This implies that no condor_schedd daemon is running. This condor_master daemon acquires the lock and runs the condor_schedd daemon. Other machines with this same capability to run the condor_schedd daemon look at (poll) the lock, but do not run the daemon, as the lock is held. The machine running the condor_schedd daemon renews the lock periodically.
If the machine running the condor_schedd daemon fails to renew the lock (because the machine is not functioning), the lock times out (becomes stale). The lock is released by the condor_master daemon if condor_off or condor_off -schedd is executed, or when the condor_master daemon knows that the condor_schedd daemon is no longer running. As other machines capable of running the condor_schedd daemon poll the lock, one machine will be the first to notice that the lock has timed out or been released. This machine correctly interprets the situation as meaning that the condor_schedd daemon is no longer running. This machine's condor_master daemon then acquires the lock and runs the condor_schedd daemon.
See the condor_master Configuration File Macros in section 3.5.7 for details on the configuration variables used to set timing and polling intervals.
Remote job submission requires identification of the job queue, submitting with a command similar to:
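    % condor_submit -remote condor@example.com myjob.submit

Here the argument to -remote names the job queue (the condor_schedd daemon); this name and the submit description file myjob.submit are illustrative.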
This implies the identification of a single condor_schedd daemon, running on a single machine. With the high availability of the job queue, there are multiple condor_schedd daemons, of which only one at a time is acting as the single submission point. To make remote submission of jobs work properly, set the configuration variable SCHEDD_NAME in the local configuration to have the same value for each potentially running condor_schedd daemon. In addition, the value chosen for the variable SCHEDD_NAME will need to include the at symbol (@), such that HTCondor will not modify the value set for this variable. See the description of MASTER_NAME in section 3.5.7 on page 648 for defaults and composition of valid values for SCHEDD_NAME. As an example, include in each local configuration a value similar to:
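    SCHEDD_NAME = had-schedd@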
Then, with this sample configuration, the submit command appears as:
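    % condor_submit -remote had-schedd@ myjob.submit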
The HTCondor high availability mechanisms discussed in this section currently do not work well in configurations involving flocking. The individual problems listed below interact to make the situation worse. Because of these problems, we advise against the use of flocking to pools with high availability mechanisms enabled.
The condor_negotiator and condor_collector daemons are the heart of the HTCondor matchmaking system. The availability of these daemons is critical to an HTCondor pool’s functionality. Both daemons usually run on the same machine, most often known as the central manager. The failure of a central manager machine prevents HTCondor from matching new jobs and allocating new resources. High availability of the condor_negotiator and condor_collector daemons eliminates this problem.
Configuration allows one of multiple machines within the pool to function as the central manager. While there may be many active condor_collector daemons, only a single, active condor_negotiator daemon will be running. The machine with the condor_negotiator daemon running is the active central manager. The other potential central managers each have a condor_collector daemon running; these are the idle central managers.
All submit and execute machines are configured to report to all potential central manager machines.
Each potential central manager machine runs the high availability daemon, condor_had. These daemons communicate with each other, constantly monitoring the pool to ensure that one active central manager is available. If the active central manager machine crashes or is shut down, these daemons detect the failure and use a protocol to agree on which of the idle central managers is to become the active one.
In the case of a network partition, the idle condor_had daemons within each partition detect the partitioning (by the lack of communication), and then use the protocol to choose an active central manager. As long as the partition remains, and there exists an idle central manager within a partition, there will be one active central manager within each partition. When the network is repaired, the protocol returns the pool to having a single active central manager.
Through configuration, a specific central manager machine may act as the primary central manager. While this machine is up and running, it functions as the central manager. After a failure of this primary central manager, another idle central manager becomes the active one. When the primary recovers, it again becomes the central manager. This configuration is recommended if one of the central managers is a reliable machine that is expected to have only very short periods of instability. An alternative configuration allows the promoted active central manager to stay active after the failed central manager machine returns.
This high availability mechanism operates by monitoring communication between machines. Note that there is a significant difference in communications between machines when

1. a machine is down, and when
2. a specific daemon (the condor_had daemon in this case) is not running, yet the machine itself is functioning.
The high availability mechanism distinguishes between these two cases, and it operates based only on the first one (when a central manager machine is down). A lack of executing daemons does not cause the protocol to choose or use a new active central manager.
The central manager machine contains state information, including information about user priorities. This information is kept in a single file and is used by the central manager machine. Should the primary central manager fail, a pool with high availability enabled, but without replication, would lose this information (and continue operation, but with re-initialized priorities). Therefore, the condor_replication daemon exists to replicate this file on all potential central manager machines. This daemon distributes the file in a way that is safe from error, and more secure than dependence on a copy in a shared file system.
The condor_replication daemon runs on each potential central manager machine, as well as on the active central manager machine. There is unidirectional communication between the condor_had daemon and the condor_replication daemon on each machine. To properly do its job, the condor_replication daemon must transfer state files. When it needs to transfer a file, the condor_replication daemons at both the sending and receiving ends of the transfer invoke the condor_transferer daemon. These short-lived daemons do the task of file transfer and then exit. Do not place TRANSFERER into DAEMON_LIST, as it is not a daemon that the condor_master should invoke or watch over.
The high availability of central manager machines is enabled through configuration. It is disabled by default. All machines in a pool must be configured appropriately in order to make the high availability mechanism work. See section 3.5.26 for definitions of these configuration variables.
The condor_had and condor_replication daemons use the condor_shared_port daemon by default. If you want to use more than one condor_had or condor_replication daemon with the condor_shared_port daemon under the same master, you must configure those additional daemons to use nondefault socket names. (Set the -sock option in <NAME>_ARGS.) Because the condor_had daemon must know the condor_replication daemon’s address a priori, you will also need to set <NAME>.REPLICATION_SOCKET_NAME appropriately.
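As a sketch, a second condor_had and condor_replication pair under the same condor_master might be configured as follows; the daemon names HAD2 and REPLICATION2 and both socket names are arbitrary choices for this example:

    # HAD2 and REPLICATION2 reuse the standard daemon binaries
    HAD2 = $(HAD)
    HAD2_ARGS = -sock had2
    REPLICATION2 = $(REPLICATION)
    REPLICATION2_ARGS = -sock replication2
    # Tell the second condor_had where its condor_replication listens
    HAD2.REPLICATION_SOCKET_NAME = replication2
    DAEMON_LIST = $(DAEMON_LIST), HAD2, REPLICATION2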
The stabilization period is the time it takes for the condor_had daemons to detect a change in the pool state such as an active central manager failure or network partition, and recover from this change. It may be computed using the following formula:
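    stabilization period = 12 * (number of central managers) *
                           $(HAD_CONNECTION_TIMEOUT)

Here $(HAD_CONNECTION_TIMEOUT) is the condor_had connection timeout configuration variable (see section 3.5.26).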
To disable the central manager high availability mechanism, it is sufficient to remove HAD, REPLICATION, and NEGOTIATOR from the DAEMON_LIST configuration variable on all machines except the single machine that is to run the condor_negotiator daemon, leaving only one condor_negotiator in the pool.
To shut down a currently operating high availability mechanism, follow the given steps. All commands must be invoked from a host which has administrative permissions on all central managers. The first three commands kill all condor_had, condor_replication, and all running condor_negotiator daemons. The last command is invoked on the host where the single condor_negotiator daemon is to run.
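A sketch of this sequence, assuming a version of the tools in which condor_off and condor_on accept the -daemon option:

    % condor_off -all -daemon had
    % condor_off -all -daemon replication
    % condor_off -all -daemon negotiator
    % condor_on -daemon negotiator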
When configuring condor_had to control the condor_negotiator, if the default backoff constant value is too small, it can result in a churning of the condor_negotiator, especially in cases in which the primary negotiator is unable to run due to misconfiguration. In these cases, the condor_master will kill the condor_had after the condor_negotiator exits, wait a short period, then restart condor_had. The condor_had will then win the election, so the secondary condor_negotiator will be killed, and the primary will be restarted, only to exit again. If this happens too quickly, neither condor_negotiator will run long enough to complete a negotiation cycle, resulting in no jobs getting started. Increasing this value via MASTER_HAD_BACKOFF_CONSTANT to be larger than a typical negotiation cycle can help solve this problem.
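For example, if a typical negotiation cycle completes within five minutes, a value such as the following (in seconds) would be a reasonable choice:

    # Example value: larger than a five-minute negotiation cycle
    MASTER_HAD_BACKOFF_CONSTANT = 360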
To run a high availability pool without the replication feature, do the following operations:
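A sketch of the required operations, assuming the HAD_USE_REPLICATION configuration variable controls the feature:

1. Set HAD_USE_REPLICATION to False, disabling replication at the configuration level.
2. Remove REPLICATION from the DAEMON_LIST configuration variable on all central manager machines.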
This section provides sample configurations for high availability.
We begin with a sample configuration using shared port, and then include a sample configuration for not using shared port. Both samples relate to the high availability of central managers.
Each sample is split into two parts: the configuration for the central manager machines, and the configuration for the machines that will not be central managers.
The following shared-port configuration is for the central manager machines.
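The following is a sketch, assuming two potential central managers; the host names cm1.example.com and cm2.example.com are placeholders:

    ## This portion must be identical on every potential central manager
    CENTRAL_MANAGER1 = cm1.example.com
    CENTRAL_MANAGER2 = cm2.example.com
    CONDOR_HOST = $(CENTRAL_MANAGER1), $(CENTRAL_MANAGER2)

    # Run the high availability daemons under the condor_master
    DAEMON_LIST = $(DAEMON_LIST), HAD, REPLICATION

    # condor_had and condor_replication use the shared port daemon
    HAD_USE_SHARED_PORT = True
    HAD_LIST = \
        $(CENTRAL_MANAGER1):$(SHARED_PORT_PORT), \
        $(CENTRAL_MANAGER2):$(SHARED_PORT_PORT)
    REPLICATION_USE_SHARED_PORT = True
    REPLICATION_LIST = \
        $(CENTRAL_MANAGER1):$(SHARED_PORT_PORT), \
        $(CENTRAL_MANAGER2):$(SHARED_PORT_PORT)

    # The first machine in HAD_LIST acts as the primary central manager
    HAD_USE_PRIMARY = True

    # condor_had controls the condor_negotiator
    HAD_CONTROLLEE = NEGOTIATOR
    MASTER_NEGOTIATOR_CONTROLLER = HAD

    # Replicate the accounting (user priority) state file
    HAD_USE_REPLICATION = True
    STATE_FILE = $(SPOOL)/Accountantnew.log

    # Longer than a typical negotiation cycle, to avoid churning
    MASTER_HAD_BACKOFF_CONSTANT = 360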
The following shared-port configuration is for the machines that will not be central managers.
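A sketch, using the same placeholder host names; these machines need only report to all potential central managers:

    CENTRAL_MANAGER1 = cm1.example.com
    CENTRAL_MANAGER2 = cm2.example.com
    CONDOR_HOST = $(CENTRAL_MANAGER1), $(CENTRAL_MANAGER2)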
The following configuration sets fixed port numbers for the central manager machines.
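A sketch, again with placeholder host names; the port numbers are arbitrary example values, and apart from the port settings the remainder mirrors the shared-port sample above:

    CENTRAL_MANAGER1 = cm1.example.com
    CENTRAL_MANAGER2 = cm2.example.com
    CONDOR_HOST = $(CENTRAL_MANAGER1), $(CENTRAL_MANAGER2)

    # Fixed ports for the condor_had and condor_replication daemons
    HAD_PORT = 51450
    REPLICATION_PORT = 41450

    HAD_LIST = \
        $(CENTRAL_MANAGER1):$(HAD_PORT), \
        $(CENTRAL_MANAGER2):$(HAD_PORT)
    REPLICATION_LIST = \
        $(CENTRAL_MANAGER1):$(REPLICATION_PORT), \
        $(CENTRAL_MANAGER2):$(REPLICATION_PORT)

    # Pass the fixed ports to the daemons
    HAD_ARGS = -f -p $(HAD_PORT)
    REPLICATION_ARGS = -p $(REPLICATION_PORT)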
The configuration for machines that will not be central managers is identical for the fixed-port and shared-port cases.