Thursday, May 21
8:00 am – 9:00 am | Breakfast and Registration
bagels, cream cheese, fresh fruit, sliced cucumbers and tomatoes, bacon, tea, coffee, orange juice, English muffins with egg and cheese

Session Moderators: Greg Thain, Bill Taylor
9:00 am – 9:20 am | Networking and High Throughput Computing
An overview of how HTCondor uses and abuses the network. The talk covers problems and solutions with firewalls and NATs, IPv6 support, and future extensibility via plugins.
Garhan Attebury, University of Nebraska – Lincoln

9:25 am – 9:45 am | Custom templates for streamlined DAG workflows
Chris Cox, University of Wisconsin – Madison

9:50 am – 10:10 am | Shooting for the sky: Testing the limits of Condor
HTCondor, like any other batch system, does not scale infinitely with the number of resources it must handle. To prepare for LHC Run 2, we probed the scalability limits of new versions and configurations of HTCondor, with the goal of reaching 200,000 simultaneously running jobs in a single, internationally distributed, dynamic pool. We also established the scaling limits of the new OSG Software product, the HTCondor-CE.
Edgar Fajardo, University of California, San Diego

10:10 am – 10:45 am | Break
coffee, tea, soda, croissants, fresh fruit

Session Moderators: Christopher Green, John "TJ" Knoeller
10:45 am – 11:05 am | Building a DeepDive Application Infrastructure
DeepDive is a machine learning data analysis application that allows users to extract relationships from large bodies of text. This talk focuses on fetching, processing, and organizing large numbers of published scientific works to produce application-ready bodies of text.
Ian Ross, University of Wisconsin – Madison

11:10 am – 11:30 am | Exploring Nuclear Fuel Cycle Simulation using HTCondor
Matthew Gidden, University of Wisconsin – Madison

11:35 am – 11:55 am | High-Throughput HPC Computing in LIGO
Peter Couvares, Syracuse University

12:00 pm – 1:00 pm | Lunch
grilled chicken breast, hummus, tzatziki, sweet chutney, curry-dusted roasted vegetables, herb- and lemon-scented farro, mixed greens with cucumbers, tomatoes, and goat cheese, grilled flatbreads, spanakopita

Cloud Computing
Session Moderators: Bill Taylor, Zach Miller
1:00 pm – 1:20 pm | HTCondor at Cycle Computing: Better Answers. Faster.
This talk describes two recent projects at Cycle Computing that use HTCondor to provide high-throughput clusters on Amazon EC2, allowing customers to perform computation quickly and cheaply. Scaling to large core counts and large volumes of data is discussed.
Ben Cotton, Cycle Computing

1:25 pm – 1:45 pm | Optimizations in running large-scale Genomics workloads in Globus Genomics using HTCondor
Ravi Madduri, Argonne National Laboratory

1:50 pm – 2:10 pm | Topic: Amazon Web Services
Jamie Kinney, Amazon Web Services

2:15 pm – 2:45 pm | Panel: The Money Road to the Cloud
Moderator: Miron Livny, Center for High Throughput Computing
Panelists: Bruce Maas, University of Wisconsin – Madison; Steve Elliott, Amazon Web Services; Ravi Madduri, Argonne National Laboratory

2:50 pm – 3:25 pm | Break
coffee, tea, soda, dried fruits, pretzels, bruschetta

Session Moderators: Kent Wenger, Christina Koch
3:25 pm – 3:45 pm | The History and Future of the HTCondor Python Bindings
Brian Bockelman, University of Nebraska – Lincoln

3:50 pm – 4:10 pm | High-Energy Physics workloads on 10k non-dedicated opportunistic cores with Lobster
This talk presents our experience with the design of Lobster, a system for deploying data-intensive, high-throughput applications on non-dedicated clusters for high-energy physics analysis. We demonstrate that Lobster runs effectively on 10,000 cores, producing throughput comparable to that of some of the largest dedicated clusters in the LHC infrastructure.
Benjamin Tovar, University of Notre Dame

4:15 pm – 4:35 pm | A Year of HTCondor Monitoring: Lessons Learnt and Insights Gained
OSG Connect uses HTCondor to provide users with a virtual cluster interface to the distributed resources of the Open Science Grid. In this presentation, we discuss our efforts over the past year to improve the collection, analysis, and visualization of HTCondor job statistics and resource consumption.
Lincoln Bryant & Suchandra Thapa, University of Chicago

4:40 pm – 5:00 pm | Scaling Glidein WMS to manage more jobs on more heterogeneous resources
Marco Mambelli, Fermi National Accelerator Laboratory

Specific talks and times are subject to change.