HTCondor Week 2015

University of Wisconsin
Madison, Wisconsin
May 19–22, 2015

[Picture of Madison]

Schedule: Tuesday, May 19th, 2015

 8:00 am–9:00 am  Breakfast and Registration
scrambled eggs, bacon, roasted potatoes, whole wheat toast, coffee, tea, orange juice
Session Moderators: Jaime Frey, Greg Thain
 9:00 am–10:15 am  Tutorial: An Introduction to Using HTCondor
Karen Miller
Center for High Throughput Computing
10:15 am–10:35 am  Tutorial: HTCondor and Workflows: An Introduction
Kent Wenger
Center for High Throughput Computing
10:35 am–10:50 am  Break
Clif Bars, trail mix, biscotti, coffee, tea, soda
Session Moderators: Todd Miller, Brian Lin
10:50 am–12:00 pm  Tutorial: Administrating HTCondor
Alan De Smet
Center for High Throughput Computing
12:00 pm–1:00 pm  Lunch
rigatoni primavera, chicken parmesan, Caesar salad, grilled vegetable medley, garlic breadsticks, tiramisu mousse
Session Moderators: Lauren Michael, Zach Miller
 1:00 pm–1:50 pm  Topic: Docker
Greg Thain
Center for High Throughput Computing
 1:55 pm–2:40 pm  Tutorial: HTCondor and Workflows: Advanced Tutorial
Kent Wenger
Center for High Throughput Computing
 2:45 pm–3:15 pm  Break
coffee, tea, soda, pretzels, mixed nuts, cheese platter
Session Moderators: Christopher Green, Todd Miller
 3:15 pm–3:35 pm  New condor_submit features
John "TJ" Knoeller
Center for High Throughput Computing
 3:40 pm–4:00 pm  Look What I Can Do: Unorthodox Uses of HTCondor in the Open Science Grid
HTCondor is more than just a batch system! It has a number of special features that make it an invaluable part of the infrastructure of the Open Science Grid. In this talk I will explore periodic execution, ClassAds and the collector, the job router, and the master, and provide an overview of how these are used to solve various technical challenges in the OSG.
Mátyás Selmeci
Center for High Throughput Computing
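As a flavor of the "periodic execution" feature mentioned in the abstract, a job's submit description can carry periodic expressions that HTCondor evaluates against the job's ClassAd. A minimal sketch (the executable name, file names, and retry limit here are illustrative, not taken from the talk):

```
# Illustrative submit description file; names and limits are assumptions.
executable = analyze.sh
output     = analyze.out
error      = analyze.err
log        = analyze.log

# Automatically release a held job, up to 5 starts, once it has been
# in the held state for more than 10 minutes. The expression is a
# ClassAd evaluated periodically by HTCondor.
periodic_release = (NumJobStarts < 5) && \
                   ((CurrentTime - EnteredCurrentStatus) > 600)

queue
```

The job would be submitted as usual with condor_submit; NumJobStarts and EnteredCurrentStatus are standard job ClassAd attributes.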
 4:05 pm–5:00 pm  Tutorial: Scientific Workflows with Pegasus and DAGMan
In this tutorial, we will focus on how to model a scientific analysis as a workflow that can be executed on the UW–Madison CHTC cluster using Pegasus WMS. Pegasus allows users to design workflows at a high level of abstraction that is independent of the resources available to execute them and of the location of data and executables. It compiles these abstract workflows into executable workflows that can be deployed onto distributed resources such as local campus clusters, computational clouds, and grids such as XSEDE and the Open Science Grid. Through hands-on exercises, we will cover workflow composition, how to design a workflow in a portable way, workflow execution, and how to run workflows efficiently and reliably. An important component of the tutorial will be how to monitor, debug, and analyze workflows using Pegasus-provided tools.
Karan Vahi
USC Information Sciences Institute
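Pegasus compiles its abstract workflows down to DAGMan input files that HTCondor executes. For context, a hand-written DAGMan file for a small diamond-shaped workflow looks like this (node names and submit-file names are illustrative):

```
# Diamond workflow: A fans out to B and C, which join at D.
JOB A a.sub
JOB B b.sub
JOB C c.sub
JOB D d.sub
PARENT A CHILD B C
PARENT B C CHILD D
# Retry the final node up to 3 times on failure.
RETRY D 3
```

Such a file is run with condor_submit_dag, which submits each node's job only after its parents have completed successfully.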

Specific talks and times are subject to change.