condor_annex -help
condor_annex -setup [/full/path/to/access/key/file [/full/path/to/secret/key/file]]
condor_annex [-aws-on-demand] -annex-name <name of the annex> -count <integer number of instances> [-aws-on-demand-*] [common options]
condor_annex [-aws-spot-fleet] -annex-name <name of the annex> -slots <integer weight> [-aws-spot-fleet-*] [common options]
condor_annex -annex-name <name of the annex> -duration <hours>
condor_annex -annex-name <name of the annex> -status [-classad]
condor_annex -check-setup
condor_annex <condor_annex options> status <condor_status options>
condor_annex adds cloud resources to the pool. (``The pool'' is determined in the usual manner for HTCondor daemons and tools.) Version 8.7.2 supports only Amazon Web Services (``AWS''). To add ``on-demand'' instances, use the third form listed above; to add ``spot'' instances, use the fourth. For an explanation of these terms, consult either the HTCondor manual (chapter 6) or the AWS documentation.
Using condor_annex with AWS requires a one-time setup procedure performed by invoking condor_annex with the -setup flag (the second form listed above). You may check if this procedure has been performed with the -check-setup flag (the seventh form listed above).
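For example, the one-time setup and its verification can be run as follows. The credential-file paths shown are hypothetical; omit them to let condor_annex use its defaults.

```shell
# One-time setup; the two paths below are hypothetical examples of
# where the AWS access key and secret key files might live.
condor_annex -setup ~/.condor/publicKeyFile ~/.condor/privateKeyFile

# Confirm that the setup procedure completed successfully.
condor_annex -check-setup
```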
To reset the lease on an existing annex, invoke condor_annex with only the -annex-name option and -duration flag (the fifth form listed above).
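A lease reset is just the annex name plus a new duration in hours. For instance, assuming an existing annex named MyFirstAnnex (a hypothetical name), the following extends its lease to eight hours from now:

```shell
# Reset the lease on an existing annex; the annex name is hypothetical.
condor_annex -annex-name MyFirstAnnex -duration 8
```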
To determine which of the instances previously requested for a particular annex are not currently in the pool, invoke condor_annex with the -status flag and the -annex-name option (the sixth form listed above). The output of this command is intended to be human-readable; specifying the -classad flag will produce the same information in ClassAd format.
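Continuing with the hypothetical annex name MyFirstAnnex, the two variants of this report look like this:

```shell
# Human-readable report of requested instances not currently in the pool.
condor_annex -annex-name MyFirstAnnex -status

# The same information in ClassAd format, suitable for scripting.
condor_annex -annex-name MyFirstAnnex -status -classad
```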
Starting in 8.7.3, you may instead invoke condor_annex with status as a command argument (the eighth form listed above). This causes condor_annex to use condor_status to present annex instance data. Arguments and options on the command line after status are passed unmodified to condor_status, but not all of them will behave as expected. (See below.) condor_annex constructs an ad for each annex instance and passes that information to condor_status; condor_status will (unless you specify otherwise on its command line) query the collector for more information about the instances. Information from the collector is presented as usual; instances which did not have ads in the collector are presented last, in their own table. These instances cannot be presented in the usual way because the annex instance ads generated by condor_annex do not (and cannot) have the same information in them as ads generated by a condor_startd running in the instance. See the condor_status documentation (section 12) for details about the ``merge'' mode of condor_status used by this command argument. Note that both condor_annex and condor_status have -annex-name options; if you're interested in a particular annex, put this flag on the command line before the status command argument to avoid confusing results.
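For example, using the hypothetical annex name MyFirstAnnex again, the following restricts the report to that annex and passes everything after the status command argument through to condor_status:

```shell
# -annex-name appears before the status command argument, so it is
# handled by condor_annex; anything after 'status' goes to condor_status.
condor_annex -annex-name MyFirstAnnex status
```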
Common options are listed first, followed by options specific to AWS, followed by options specific to AWS' on-demand instances, followed by options specific to AWS' spot instances, followed by options intended for use by experts.
As of v8.7.1, only AWS is supported. The AMI configured by setup runs HTCondor v8.6.0 on Amazon Linux 2016.09, and the default instance type is ``m4.large''. The default AMI has the appropriate software for AWS' ``p2'' family of GPU instance types.
To start an on-demand annex named `MyFirstAnnex' with one core, using the default AMI and instance type, run
condor_annex -count 1 -annex-name MyFirstAnnex
You will be asked to confirm that the defaults are what you want.
As of 2017-04-17, the following example will cost a minimum of $90.
To start an on-demand annex with 100 GPUs that job owners `big' and `little' may use (be sure to include yourself!), run
condor_annex -count 100 -annex-name MySecondAnnex \
    -aws-on-demand-instance-type p2.xlarge -owner "big, little"
condor_annex will exit with a status value of 0 (zero) on success.