HTCondor uses priorities to determine machine allocation for jobs. This section details the priorities and the allocation of machines (negotiation).
For accounting purposes, each user is identified by username@uid_domain. Each user is assigned a single priority value, even if submitting jobs from different machines in the same domain, or even if submitting from multiple machines in different domains.
The numerical priority value assigned to a user is inversely related to the goodness of the priority. A user with a numerical priority of 5 gets more resources than a user with a numerical priority of 50. There are two priority values assigned to HTCondor users:

- Real User Priority (RUP), which measures the resource usage of the user.
- Effective User Priority (EUP), which determines the number of resources the user may receive.
A user's RUP tracks resource usage over time; at steady state it stabilizes at the number of resources the user is currently using. If the user decreases the number of resources used, the RUP gets better. The rate at which the priority value decays can be set by the macro PRIORITY_HALFLIFE, a time period defined in seconds. For example, if the PRIORITY_HALFLIFE in a pool is set to 86400 (one day), and a user whose RUP is 10 has no running jobs, that user's RUP will be 5 one day later, 2.5 two days later, and so on.
The number of resources that a user may receive is inversely related to the ratio between the EUPs of submitting users. Therefore, a user with EUP=5 will receive twice as many resources as a user with EUP=10, and four times as many resources as a user with EUP=20. However, if a user does not use the full number of resources that may be given, the available resources are repartitioned and distributed among the remaining users according to the same inverse-ratio rule.
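The following is a minimal sketch (not HTCondor's implementation) of this inverse-ratio rule: each user's share of slots is proportional to 1/EUP, and any share a user cannot consume is repartitioned among the rest.

def allocate(total_slots, eups, demands):
    """eups and demands are dicts keyed by user; returns slots per user."""
    alloc = {user: 0.0 for user in eups}
    active = set(eups)
    remaining = float(total_slots)
    while remaining > 1e-9 and active:
        weights = {u: 1.0 / eups[u] for u in active}
        total_weight = sum(weights.values())
        satisfied = set()
        handed_out = 0.0
        for u in sorted(active):
            share = remaining * weights[u] / total_weight
            take = min(share, demands[u] - alloc[u])   # cap at the user's demand
            alloc[u] += take
            handed_out += take
            if alloc[u] >= demands[u] - 1e-9:
                satisfied.add(u)
        remaining -= handed_out
        active -= satisfied
        if not satisfied:          # every active user absorbed a full share
            break
    return alloc

# A user with EUP=5 receives twice the slots of EUP=10, four times EUP=20:
print(allocate(70, {"a": 5, "b": 10, "c": 20}, {"a": 99, "b": 99, "c": 99}))
# {'a': 40.0, 'b': 20.0, 'c': 10.0}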
HTCondor supplies mechanisms to directly support two policies in which EUP may be useful:

- Nice users. A job may be submitted with the submit command nice_user set to True. Such a job has its user's priority boosted by the NICE_USER_PRIO_FACTOR configuration factor, resulting in a very large (bad) EUP, so the job only uses resources that no other HTCondor user wants.
- Remote users. A pool that accepts jobs from other pools (for example, through flocking) can set the REMOTE_PRIO_FACTOR configuration factor to give remote users a worse EUP than local users, so that local users have priority on the pool's resources.
The priority boost factors for individual users can be set with the -setfactor option of condor_userprio. Details may be found in the condor_userprio manual page.
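For example, to set the priority factor of the hypothetical user heavyuser@example.com to 10:

condor_userprio -setfactor heavyuser@example.com 10.0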
Too many preemptions lead to thrashing, a condition in which negotiation for a machine identifies a new job with a better priority almost every cycle. Each job is, in turn, preempted, and no job finishes. To avoid this situation, the PREEMPTION_REQUIREMENTS configuration variable is defined for and used only by the condor_negotiator daemon to specify the conditions that must be met for a preemption to occur. It is usually defined to deny preemption if a currently running job has been running for a relatively short period of time. This effectively limits the number of preemptions per resource per time interval. Note that PREEMPTION_REQUIREMENTS only applies to preemptions due to user priority. It does not have any effect if the machine's RANK expression prefers a different job, or if the machine's policy causes the job to vacate due to other activity on the machine. See section 3.5.9 for a general discussion of limiting preemption.
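As an illustration, here is a minimal configuration sketch of such a policy; the one-hour threshold and the 20% priority margin are illustrative assumptions, not defaults to rely on:

# Sketch: preempt only after the running job has run for at least one
# hour, and only for a candidate user whose EUP is at least 20% better
# (lower) than the EUP of the user currently on the machine.
PREEMPTION_REQUIREMENTS = ( (CurrentTime - EnteredCurrentState) > (1 * $(HOUR)) ) \
                          && ( RemoteUserPrio > SubmitterUserPrio * 1.2 )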
The following ephemeral attributes may be used within policy definitions. Care should be taken when using these attributes, due to their ephemeral nature; they are not always defined, so an expression that tests whether an attribute is defined, such as (RemoteUserPrio =?= UNDEFINED), is likely necessary.
Within these attributes, those with names that contain the string Submitter refer to characteristics about the candidate job's user; those with names that contain the string Remote refer to characteristics about the user currently using the resource. Further, those with names that end with the string ResourcesInUse have values that may change within the time period associated with a single negotiation cycle. Therefore, the configuration variables PREEMPTION_REQUIREMENTS_STABLE and PREEMPTION_RANK_STABLE exist to inform the condor_negotiator daemon that these values may change. See section 3.3.17 for definitions of these configuration variables.
Attributes with names of the form Slot<N>_RemoteUserPrio refer to the user of the job running in slot <N> on the machine.
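As a sketch (the policy expression itself is hypothetical), a PREEMPTION_REQUIREMENTS expression that references one of the ResourcesInUse attributes should be paired with the corresponding STABLE variable:

# Hypothetical policy referencing values that may change within a
# single negotiation cycle...
PREEMPTION_REQUIREMENTS = RemoteUserResourcesInUse > SubmitterUserResourcesInUse
# ...so tell the condor_negotiator not to treat the expression as stable.
PREEMPTION_REQUIREMENTS_STABLE = False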
The RUP of a user $u$ at time $t$, $\pi_r(u,t)$, is calculated every time interval $\delta t$ using the formula

$$\pi_r(u,t) = \beta \times \pi_r(u, t - \delta t) + (1 - \beta) \times \rho(u,t)$$

where $\rho(u,t)$ is the number of resources used by user $u$ at time $t$, and $\beta = 0.5^{\delta t / h}$, with $h$ the half-life period set by PRIORITY_HALFLIFE.

The EUP of user $u$ at time $t$, $\pi_e(u,t)$, is calculated by

$$\pi_e(u,t) = \pi_r(u,t) \times f(u,t)$$

where $f(u,t)$ is the priority boost factor for user $u$ at time $t$.
As mentioned previously, the RUP calculation is designed so that at steady state, each user's RUP stabilizes at the number of resources used by that user. The definition of $\beta$ ensures that $\pi_r(u,t)$ can be calculated over non-uniform time intervals without affecting the result. The time interval $\delta t$ varies due to events internal to the system, but HTCondor guarantees that unless the central manager machine is down, no matches will be unaccounted for due to this variance.
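The following is a minimal sketch (not HTCondor code) of the update rules above, reproducing the PRIORITY_HALFLIFE decay example:

def update_rup(rup_prev, resources_in_use, delta_t, halflife):
    """One accounting interval: the RUP decays toward current usage."""
    beta = 0.5 ** (delta_t / halflife)   # beta = 0.5^(delta_t / h)
    return beta * rup_prev + (1 - beta) * resources_in_use

def eup(rup, priority_factor):
    """EUP is the RUP scaled by the user's priority boost factor."""
    return rup * priority_factor

# RUP of 10, no running jobs, PRIORITY_HALFLIFE = 86400 (one day):
rup = 10.0
for day in (1, 2, 3):
    rup = update_rup(rup, resources_in_use=0, delta_t=86400, halflife=86400)
    print(day, rup)   # 1 5.0, then 2 2.5, then 3 1.25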
Negotiation is the process by which HTCondor periodically matches queued jobs with resources capable of running them. The condor_negotiator daemon is responsible for negotiation.
During a negotiation cycle, the condor_negotiator daemon accomplishes the following ordered list of items:

1. Build a list of all possible resources, regardless of the state of those resources.
2. Obtain a list of all job submitters for the entire pool.
3. Sort the list of job submitters based on EUP; the submitter with the best priority comes first.
4. Iterate until there are either no more machines to match or no more jobs to match: for each submitter, in EUP order, ask that submitter's condor_schedd for jobs and attempt to match each one to a machine.
The condor_negotiator asks the condor_schedd for the "next job" from a given submitter/user. Typically, the condor_schedd returns jobs in the order of job priority. If priorities are the same, job submission time is used; older jobs go first. If a cluster has multiple procs in it and one of the jobs cannot be matched, the condor_schedd will not return any more jobs in that cluster on that negotiation pass. This is an optimization based on the theory that the cluster jobs are similar. The configuration variable NEGOTIATE_ALL_JOBS_IN_CLUSTER disables the cluster-skipping optimization. Use of the configuration variable SIGNIFICANT_ATTRIBUTES will change the definition of what the condor_schedd considers a cluster from the default definition of all jobs that share the same ClusterId.
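For example, to disable the cluster-skipping optimization (a minimal sketch; whether this helps depends on how similar the jobs within a cluster really are):

NEGOTIATE_ALL_JOBS_IN_CLUSTER = True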
HTCondor schedules in a variety of ways. First, it takes all users who have submitted jobs and calculates their priority. Then, it totals the number of resources available at the moment, and using the ratios of the user priorities, it calculates the number of machines each user could get. This is their pie slice.
The HTCondor matchmaker goes in user priority order, contacts each user, and asks for job information. The condor_schedd daemon (on behalf of a user) tells the matchmaker about a job, and the matchmaker looks at available resources to create a list of resources that match the requirements expression. With the list of resources that match, it sorts them according to the rank expressions within ClassAds. If a machine prefers a job, the job is assigned to that machine, potentially preempting a job that might already be running on that machine. Otherwise, the machine is given to the job that ranks the machine highest. If the machine ranked highest is already running a job, the running job may be preempted for the new job. A default policy for preemption states that the user must have a 20% better priority in order for preemption to succeed. If the job has no preferences as to what sort of machine it gets, matchmaking gives it the first idle resource that meets its requirements.
This matchmaking cycle continues until the user has received all of the machines in their pie slice. The matchmaker then contacts the next highest priority user and offers that user their pie slice worth of machines. After contacting all users, the cycle is repeated with any still available resources and recomputed pie slices. The matchmaker continues spinning the pie until it runs out of machines or all the condor_schedd daemons say they have no more jobs.
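Here is a minimal, self-contained sketch (not HTCondor code) of this spinning pie. Machines and jobs are reduced to counts, and allocate() is the inverse-ratio sketch given earlier in this section.

def spin_pie(total_machines, eups, queued_jobs):
    """eups and queued_jobs are dicts keyed by user; returns grants."""
    granted = {user: 0 for user in eups}
    free = total_machines
    while free > 0 and any(queued_jobs.values()):
        slices = allocate(free, eups, queued_jobs)   # recompute pie slices
        progress = 0
        for user in sorted(eups, key=eups.get):      # best (lowest) EUP first
            take = min(int(slices[user]), queued_jobs[user], free)
            granted[user] += take
            queued_jobs[user] -= take
            free -= take
            progress += take
        if progress == 0:        # all slices truncated to zero; stop spinning
            break
    return granted

# Twenty machines would fit in physics' slice, but physics has only 4
# queued jobs, so the leftover machines are re-offered to chem:
print(spin_pie(30, {"physics": 5, "chem": 10}, {"physics": 4, "chem": 40}))
# {'physics': 4, 'chem': 26}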
By default, HTCondor does all accounting on a per-user basis, and this accounting is primarily used to compute priorities for HTCondor's fair-share scheduling algorithms. However, accounting can also be done on a per-group basis. Multiple users can all submit jobs into the same accounting group, and all jobs with the same accounting group will be treated with the same priority. Jobs that do not specify an accounting group have all accounting and priority based on the user, which may be identified by the job ClassAd attribute Owner. Jobs that do specify an accounting group have all accounting and priority based on the specified accounting group.
To use an accounting group, each job inserts an attribute into the job ClassAd which defines the accounting group name for the job. A common name is decided upon and used for the group. The following line is an example that defines the attribute within the job's submit description file:
+AccountingGroup = "group_physics"
The AccountingGroup attribute is a string, and it therefore must be enclosed in double quote marks. The string may have a maximum length of 40 characters. The name should not be qualified with a domain. Certain parts of the HTCondor system do append the value $(UID_DOMAIN) (as specified in the configuration file on the submit machine) to this string for internal use. For example, if the value of UID_DOMAIN is example.com, and the accounting group name is group_physics as above, condor_userprio will show statistics for this accounting group using the appended domain, for example:
                                  Effective
User Name                          Priority
------------------------------    ---------
group_physics@example.com              0.50
user@example.com                      23.11
heavyuser@example.com                111.13
  ...
Additionally, the condor_userprio command allows administrators to remove an entity from the accounting system in HTCondor. The -delete option to condor_userprio accomplishes this if all the jobs from a given accounting group are completed, and the administrator wishes to remove that group from the system. The -delete option identifies the accounting group with the fully-qualified name of the accounting group. For example
condor_userprio -delete group_physics@example.com
HTCondor itself removes entities as they become no longer relevant. Intervention by an administrator to delete entities can be beneficial when the use of thousands of short-term accounting groups leads to scalability issues.
Note that the name of an accounting group may include a period (.). Inclusion of a period character in the accounting group name only has relevance if the portion of the name before the period matches a group name, as described in the next section on group quotas.
Quotas can be applied to the allocation of slots/machines for groups of users. The use of these group quotas modifies the negotiation for available resources (machines) within an HTCondor pool. When accounting groups are used together with group quotas, priorities and usage information are still calculated per user, but numbers of slots/machines can be negotiated for preferentially by group.

This may be useful when different groups (of varying size) own computers, and the groups choose to combine their computers to form an HTCondor pool. Consider an imaginary HTCondor pool with thirty computers; twenty computers are owned by the physics group and ten computers are owned by the chemistry group. One notion of fair allocation could be implemented by configuring the twenty machines owned by the physics group to prefer (using the RANK configuration macro) jobs submitted by users identified as associated with the physics group. Likewise, the ten machines owned by the chemistry group are configured to prefer jobs from users associated with the chemistry group. This routes jobs to execute on specific machines, perhaps causing more preemption than necessary. If these thirty machines have been pooled, the desired (fair allocation) policy is likely somewhat different. The desired policy does not tie users to specific sets of machines, but to numbers of slots/machines (a quota). Given thirty similar machines, the desired policy allows users within the physics group to have preference on up to twenty of the machines within the pool, and those machines can be any of the machines that are available.
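For the imaginary pool above, a static quota configuration might look like the following sketch (the group names are illustrative):

GROUP_NAMES = group_physics, group_chemistry
GROUP_QUOTA_group_physics = 20
GROUP_QUOTA_group_chemistry = 10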
The implementation of quotas is hierarchical, such that quotas may be described for groups, subgroups, sub-subgroups, etc. The hierarchy is described by adherence to a naming scheme set up in advance.
A quota for a set of users requires an identification of the set; members are called group users. Jobs under the group quota specify the group user with the AccountingGroup job ClassAd attribute. This is the same attribute as is used with group accounting.
The submit description file syntax for specifying that a job is part of a group includes a series of names separated by the period character ('.'). Example syntax that shows only 2 levels of a (limited) hierarchy is
+AccountingGroup = "<group>.<subgroup>.<user>"Both
<group>
and <subgroup>
are names chosen for the group.
Group names are case-insensitive for negotiation.
The topmost level group name is not required to begin with the
string "group_",
as in the examples
"group_physics.newton" and "group_chemistry.curie",
but it is a useful convention,
because group names must not conflict with subgroup or user names.
Note that a job specifying a value for the AccountingGroup
ClassAd attribute that lacks at least one period in the specification
will cause the job to not be considered part of a
group when negotiating, even if the group name
(highest within the hierarchy) has a quota.
Furthermore, there will be no warnings that the group quota is not
in effect for the job, as this syntax defines group accounting.
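For example, using one of the group names above, a job belonging to user newton within the group_physics group would place the following in its submit description file:

+AccountingGroup = "group_physics.newton"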
Configuration controls the order of negotiation for groups, subgroups within the hierarchy defined, and individual users, as well as sets quotas (preferentially allocated numbers of machines) for the groups.
Quotas are categorized as either static or dynamic. A static quota specifies an integral number of machines (slots), independent of the size of the pool. A dynamic quota specifies a percentage of the machines (slots), calculated based on the current number of machines in the pool. It is intended that only a static quota or a dynamic quota, not both, be defined for a specified group. If both are defined for a group, the static quota is used and the dynamic quota is ignored. The behavior of pools with both static and dynamic quotas defined is neither defined nor documented.
In the first case, the specification for the numbers of machines of a set of subgroups totals to less than the specification for the group's quota. For example:

GROUP_QUOTA_group_physics = 100
GROUP_QUOTA_group_physics.experiment1 = 20
GROUP_QUOTA_group_physics.experiment2 = 70

In this case, the unused quota of 10 machines is assigned to the group_physics submitters.
In the second case, the specification for the numbers of machines of a set of subgroups totals to more than the specification for the group's quota. For example:
GROUP_QUOTA_group_chemistry = 100
GROUP_QUOTA_group_chemistry.lab1 = 40
GROUP_QUOTA_group_chemistry.lab2 = 80

In this case, a warning is written to the log for the condor_negotiator daemon, and each of the subgroups will have its static quota scaled. In this example, the ratio 100/120 scales each subgroup. lab1 will have a revised (floating point) quota of 33.333 machines, and lab2 will have a revised (floating point) quota of 66.667 machines. As numbers of machines are always integer values, the floating point values are truncated for quota allocation. Fractional remainders resulting from the truncation are summed and assigned to the next higher level within the group hierarchy.
Like static quota specification, there are two cases defined: when the dynamic quotas of all subgroups of a specific group sum to less than 1.0, and when they sum to more than 1.0.
Here is an example configuration in which dynamic group quotas are assigned for a single group and its subgroups.
GROUP_QUOTA_DYNAMIC_group_econ = .6
GROUP_QUOTA_DYNAMIC_group_econ.project1 = .2
GROUP_QUOTA_DYNAMIC_group_econ.project2 = .15
GROUP_QUOTA_DYNAMIC_group_econ.project3 = .2

The sum of dynamic quotas for the subgroups is .55, which is less than 1.0. If the pool has 100 slots, then the project1 subgroup is assigned a quota that equals (100)(.6)(.2) = 12 machines. The project2 subgroup is assigned a quota that equals (100)(.6)(.15) = 9 machines. The project3 subgroup is assigned a quota that equals (100)(.6)(.2) = 12 machines. The 60 - 33 = 27 machines unused by the subgroups are assigned for use by job submitters in the parent group_econ group.
If the calculated dynamic quota of a subgroup results in a non-integer number of machines, an integer number of machines is assigned based on truncation of the non-integer dynamic group quota. The surplus from the fractional remainders is summed and assigned to the next higher level within the group hierarchy.
Here is another example configuration in which dynamic group quotas are assigned for a single group and its subgroups.
GROUP_QUOTA_DYNAMIC_group_stat = .5
GROUP_QUOTA_DYNAMIC_group_stat.project1 = .4
GROUP_QUOTA_DYNAMIC_group_stat.project2 = .3
GROUP_QUOTA_DYNAMIC_group_stat.project3 = .4

In this case, the sum of dynamic quotas for the subgroups is 1.1, which is greater than 1.0. A warning is written to the log for the condor_negotiator daemon, and each of the subgroups will have its dynamic group quota scaled: .4 becomes .4/1.1 = .3636, and .3 becomes .3/1.1 = .2727. If the pool has 100 slots, then each of the project1 and project3 subgroups is assigned a dynamic quota of (100)(.5)(.3636) = 18.1818 machines, and the project2 subgroup is assigned a dynamic quota of (100)(.5)(.2727) = 13.6364 machines. The quota for each of project1 and project3 is truncated to 18 machines, and project2 is truncated to 13 machines, with the 0.1818 + 0.6364 + 0.1818 = 1.0 remaining machine assigned to job submitters in the parent group, group_stat.
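The following is a minimal sketch (not HTCondor code) of the subgroup scaling described above: scale over-subscribed dynamic quotas, truncate to whole machines, and pass the summed fractional remainders up to the parent group.

def subgroup_quotas(pool_size, group_fraction, subgroup_fractions):
    """Returns (machines per subgroup, machines left for the parent group)."""
    group_machines = pool_size * group_fraction
    total = sum(subgroup_fractions.values())
    scale = 1.0 / total if total > 1.0 else 1.0   # scale only if oversubscribed
    quotas = {}
    used = 0
    for name, fraction in subgroup_fractions.items():
        q = int(group_machines * fraction * scale)   # truncate to an integer
        quotas[name] = q
        used += q
    return quotas, int(group_machines) - used        # remainder to the parent

# The group_stat example above: .4 + .3 + .4 = 1.1, scaled by 1/1.1:
print(subgroup_quotas(100, .5, {"project1": .4, "project2": .3, "project3": .4}))
# ({'project1': 18, 'project2': 13, 'project3': 18}, 1)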