Achieving certain behaviors in an HTCondor pool often requires setting the values of a number of configuration macros in concert with each other. We have added configuration templates as a way to do this more easily, at a higher level, without having to explicitly set each individual configuration macro.
Configuration templates are pre-defined; users cannot define their own templates.
Note that the value of an individual configuration macro that is set by a configuration template can be overridden by setting that configuration macro later in the configuration.
Detailed information about configuration templates (such as the macros they set) can be obtained using the condor_config_val use option (see 12). (This document does not contain such information because the condor_config_val command is a better way to obtain it.)
use <category name> : <template name>
The use key word is case insensitive. There are no requirements for white space characters surrounding the colon character. More than one <template name> identifier may be placed within a single use line. Separate the names by a space character. There is no mechanism by which the administrator may define their own custom <category name> or <template name>.
Each predefined <category name> has a fixed, case insensitive name for the sets of configuration that are predefined. Placement of a use line in the configuration brings in the predefined configuration it identifies.
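For example, each of the following lines brings in a predefined set of configuration (FEATURE : GPUs and POLICY : UWCS_Desktop are templates described later in this section):

use FEATURE : GPUs
use POLICY : UWCS_Desktop

Because both the use key word and the names are case insensitive, use feature : gpus is equivalent to the first line.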
As of version 8.5.6, some of the configuration templates take arguments (as described below).
There are four <category name> values. Within a category, a predefined, case insensitive name identifies the set of configuration it incorporates.
Settings needed for when a single machine is the entire pool.
Settings needed to allow this machine to submit jobs to the pool. May be combined with Execute and CentralManager roles.
Settings needed to allow this machine to execute jobs. May be combined with Submit and CentralManager roles.
Settings needed to allow this machine to act as the central manager for the pool. May be combined with Submit and Execute roles.
Enables the use of condor_config_val -rset on the machine with this configuration. Note that there are security implications for use of this configuration, as it potentially permits the arbitrary modification of configuration. Variable SETTABLE_ATTRS_CONFIG must also be defined.
Enables the use of condor_config_val -set on the machine with this configuration. Note that there are security implications for use of this configuration, as it potentially permits the arbitrary modification of configuration. Variable SETTABLE_ATTRS_CONFIG must also be defined.
Enables use of the vm universe with VMware virtual machines. Note that this feature depends on Perl.
Sets configuration based on detection with the condor_gpu_discovery tool, and defines a custom resource using the name GPUs. Supports both OpenCL and CUDA, if detected. Automatically includes the GPUsMonitor feature.
Also adds configuration to report the usage of NVidia GPUs.
Configures a custom machine resource monitor with the given name, mode, period, executable, and metrics. See 4.4.3 for the definitions of these terms.
Sets up a partitionable slot of the specified slot type number and allocation (defaults for slot_type_num and allocation are 1 and 100% respectively). See 3.7.1 for information on partitionable slot policies.
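As a sketch, a single partitionable slot covering the whole machine might be configured as follows (the template name PartitionableSlot and the argument order slot_type_num, allocation are assumed from the description above):

use FEATURE : PartitionableSlot(1, 100%)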
Create a one-shot condor_startd job hook. (See 4.4.3 for more information about job hooks.)
Create a periodic condor_startd job hook. (See 4.4.3 for more information about job hooks.)
Create a (nearly) continuous condor_startd job hook. (See 4.4.3 for more information about job hooks.)
Create a one-shot condor_schedd job hook. (See 4.4.3 for more information about job hooks.)
Create a periodic condor_schedd job hook. (See 4.4.3 for more information about job hooks.)
Create a (nearly) continuous condor_schedd job hook. (See 4.4.3 for more information about job hooks.)
Create a one-shot job hook. (See 4.4.3 for more information about job hooks.)
Create a periodic job hook. (See 4.4.3 for more information about job hooks.)
Create a (nearly) continuous job hook. (See 4.4.3 for more information about job hooks.)
Configuration values used in the UWCS_DESKTOP policy. (Note that these values were previously in the parameter table; configuration that uses these values will have to use the UWCS_Desktop_Policy_Values template. For example, POLICY : UWCS_Desktop uses the FEATURE : UWCS_Desktop_Policy_Values template.)
Always start jobs and run them to completion, without consideration of condor_negotiator generated preemption or suspension. This is the default policy, and it is intended to be used with dedicated resources. If this policy is used together with the Limit_Job_Runtimes policy, order the specification by placing this Always_Run_Jobs policy first.
This was the default policy before HTCondor version 8.1.6. It is intended to be used with desktop machines not exclusively running HTCondor jobs. It injects UWCS into the name of some configuration variables.
An updated reimplementation of the UWCS_Desktop policy, without the UWCS naming of some configuration variables.
Limits running jobs to a maximum of the specified time using preemption. (The default limit is 24 hours.) If this policy is used together with the Always_Run_Jobs policy, order the specification by placing this Limit_Job_Runtimes policy second.
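Following the ordering rule above, a configuration combining the two policies would read (assuming the runtime limit argument is given in seconds; 86400 seconds is the 24-hour default):

use POLICY : Always_Run_Jobs
use POLICY : Limit_Job_Runtimes(86400)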
If the startd observes that the number of CPU cores used by the job exceeds the number of cores in the slot by more than 0.8 on average over the past minute, it preempts the job immediately, ignoring any job retirement time.
If the startd observes that the number of CPU cores used by the job exceeds the number of cores in the slot by more than 0.8 on average over the past minute, it immediately places the job on hold, ignoring any job retirement time. The job will go on hold with a reasonable hold reason in job attribute HoldReason and a value of 101 in job attribute HoldReasonCode. The hold reason and code can be customized by specifying HOLD_REASON_CPU_EXCEEDED and HOLD_SUBCODE_CPU_EXCEEDED respectively.
Standard universe jobs can't be held by startd policy expressions, so this metaknob automatically ignores them.
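As a sketch, the hold reason and subcode could be customized with the two macros named above (the values shown are illustrative):

HOLD_REASON_CPU_EXCEEDED = "Job used more CPU cores than it requested"
HOLD_SUBCODE_CPU_EXCEEDED = 1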
If the startd observes that the memory usage of the job exceeds the memory provisioned in the slot, it preempts the job immediately, ignoring any job retirement time.
If the startd observes that the memory usage of the job exceeds the memory provisioned in the slot, it immediately places the job on hold, ignoring any job retirement time. The job will go on hold with a reasonable hold reason in job attribute HoldReason and a value of 102 in job attribute HoldReasonCode. The hold reason and code can be customized by specifying HOLD_REASON_MEMORY_EXCEEDED and HOLD_SUBCODE_MEMORY_EXCEEDED respectively.
Standard universe jobs can't be held by startd policy expressions, so this metaknob automatically ignores them.
Preempt jobs according to the specified policy. policy_variable must be the name of a configuration macro containing an expression that evaluates to True if the job should be preempted.
See 3.4.4 for an example.
Adds the given policy to the WANT_HOLD expression: if the WANT_HOLD expression is already defined, policy_variable is prepended to the existing expression; otherwise WANT_HOLD is simply set to the value of the policy_variable macro.
Standard universe jobs can't be held by startd policy expressions, so this metaknob automatically ignores them.
See 3.4.4 for an example.
Publishes the number of CPU cores being used by the job into the slot ad as attribute CpusUsage. This value is the average number of cores used by the job over the past minute, sampled every 5 seconds.
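A policy expression can then reference this attribute. For example, a sketch of a preemption expression mirroring the Cpus-exceeded policies above (the 0.8 threshold is taken from those descriptions; this is illustrative, not the templates' actual implementation):

PREEMPT = ($(PREEMPT)) || (CpusUsage > Cpus + 0.8)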
The default security model (based on IPs and DNS names). Do not combine with User_Based security.
Grants permissions to an administrator and uses With_Authentication. Do not combine with Host_Based security.
Requires both authentication and integrity checks.
Requires authentication, encryption, and integrity checks.
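For instance, to require authentication and integrity checks pool-wide, a configuration might contain the following line (the SECURITY category name is assumed here, consistent with the category naming used elsewhere in this section; With_Authentication is the template mentioned above):

use SECURITY : With_Authentication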
For pools that are transitioning to using this new syntax in configuration, while still having some tools and daemons with HTCondor versions earlier than 8.1.6, special syntax in the configuration will cause those older daemons to fail upon start up, rather than use the new, but misinterpreted, syntax. Newer daemons will ignore the extra syntax. Placing the @ character before the use key word causes the older daemons to fail when they attempt to parse this syntax.
As an example, consider the condor_startd as it starts up. A condor_startd previous to HTCondor version 8.1.6 fails to start when it sees:
@use feature : GPUs

Running an older condor_config_val also identifies the @use line as being bad. A condor_startd of HTCondor version 8.1.6 or more recent sees
use feature : GPUs
MEMORY_EXCEEDED = (isDefined(MemoryUsage) && MemoryUsage > RequestMemory)
use POLICY : PREEMPT_IF(MEMORY_EXCEEDED)
MEMORY_EXCEEDED = (isDefined(MemoryUsage) && MemoryUsage > RequestMemory)
use POLICY : WANT_HOLD_IF(MEMORY_EXCEEDED, 102, memory usage exceeded request_memory)
use FEATURE : StartdCronPeriodic(DYNGPU, 15*60, $(LOCAL_DIR)/dynamic_gpu_info.pl, $(LIBEXEC)/condor_gpu_discovery -dynamic)
where dynamic_gpu_info.pl is a simple Perl script that strips off the DetectedGPUs line from the output of condor_gpu_discovery:
#!/usr/bin/env perl
# Run the command given on our command line and echo its output,
# skipping any line that begins with "Detected".
my @attrs = `@ARGV`;
for (@attrs) {
    next if ($_ =~ /^Detected/i);
    print $_;
}