See the ACM Digital Library for complete proceedings, including full copies of all papers.
WODA 2009 Chairs:
Dynamic Analysis: Looking Back and the Road Ahead [slides]
A Runtime Environment for Online Processing of Operating System Kernel Events [abstract] [slides] [paper]
Various approaches have been proposed for logging operating system kernel events. In general, the resulting log files are huge and must be analyzed by administrators, who try to identify problems and derive appropriate actions. The idea of autonomic computing is to automate such tasks.
As an important step towards this vision, computer systems have to be self-aware, i.e., they must be able to detect their runtime state and react when certain problems are detected. In contrast to control-theory-based approaches to autonomic computing, processing discrete event streams makes it possible to detect singular events, such as attacks or failing components, directly.
Our proposed runtime environment (1) processes event pattern descriptions, (2) combines events generated by user-mode applications and the operating system kernel, (3) can be integrated into the operating system kernel to handle events as close to their source as possible, (4) adaptively chooses relevant events to keep system disturbance low, and (5) provides an API for implementing autonomic-computing ideas in the context of reacting to event patterns.
This paper describes the event pattern specification language and the runtime environment. The prototype implements the envisioned runtime environment in user mode and can search for event patterns in prerecorded event log files. Additionally, an outlook on the planned operating system kernel integration is given.
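As a rough illustration of the event-pattern idea, the Java sketch below matches a simple pattern, "n events of one type within a time window," over an event stream and invokes a reaction callback, loosely corresponding to points (1) and (5) above. All names and the API are invented for illustration; the paper's own specification language and runtime are not reproduced here.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

// Minimal sketch of an event-pattern runtime. Everything here is a
// hypothetical stand-in for the paper's specification language and API.
final class PatternMatcher {
    record Event(String type, long timestampMillis) {}

    private final String type;
    private final int count;
    private final long windowMillis;
    private final Consumer<Deque<Event>> reaction;
    private final Deque<Event> window = new ArrayDeque<>();

    // Fire `reaction` when `count` events of `type` occur within `windowMillis`.
    PatternMatcher(String type, int count, long windowMillis,
                   Consumer<Deque<Event>> reaction) {
        this.type = type;
        this.count = count;
        this.windowMillis = windowMillis;
        this.reaction = reaction;
    }

    void onEvent(Event e) {
        if (!e.type().equals(type)) return;  // drop irrelevant events early
        window.addLast(e);
        // Slide the time window: discard events older than windowMillis.
        while (!window.isEmpty()
               && e.timestampMillis() - window.peekFirst().timestampMillis() > windowMillis) {
            window.removeFirst();
        }
        if (window.size() >= count) reaction.accept(window);  // autonomic reaction hook
    }

    public static void main(String[] args) {
        PatternMatcher m = new PatternMatcher("login_failed", 3, 1000,
                w -> System.out.println("pattern matched: " + w.size() + " failures in window"));
        long t = 0;
        for (int i = 0; i < 3; i++) m.onEvent(new Event("login_failed", t += 100));
    }
}
```

A kernel-integrated version would receive events directly from kernel instrumentation rather than from a user-mode feed, as the outlook above envisions.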
Using AOP for Detailed Runtime Monitoring Instrumentation [abstract] [slides] [paper]
Although AOP has long been used for monitoring purposes, the level of detail that virtually all AOP frameworks support is insufficient for broad classes of monitoring applications. Often the method or function call is the basic unit of code weaving, and where data-access weaving is supported it is usually very limited, e.g., to object field accesses. In this paper we demonstrate the need for AOP to be extended if it is to support broad runtime monitoring needs, and then present two new joinpoint types for AspectJ: the basic block and the loop back edge. These allow much finer-grained weaving of advice, which in turn supports a much larger class of runtime monitoring applications. We demonstrate the feasibility of these extensions with prototype implementations using the AspectBench Compiler (abc).
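Since the concrete pointcut syntax of the proposed joinpoints is not reproduced here, the plain-Java sketch below instead hand-inlines what advice woven at a loop back edge would amount to: an instrumentation action executed each time control flows back to the loop header. The class and counter names are illustrative assumptions, not abc's actual output.

```java
// Hand-inlined illustration of loop-back-edge advice. A compiler such as abc
// would weave the counter update automatically; here it is written by hand.
public class LoopBackEdgeDemo {
    static long backEdgeCount = 0;  // state a monitoring aspect might maintain

    static int sum(int[] data) {
        int total = 0;
        for (int i = 0; i < data.length; i++) {
            total += data[i];
            backEdgeCount++;  // <- the "advice" at the loop back edge
        }
        return total;
    }

    public static void main(String[] args) {
        sum(new int[] {1, 2, 3, 4});
        System.out.println("back edges taken: " + backEdgeCount);
    }
}
```

Method-call joinpoints alone cannot observe these per-iteration events, which is exactly the granularity gap the paper addresses.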
Tiddle: A Trace Description Language for Generating Concurrent Benchmarks to Test Dynamic Analyses [abstract] [slides] [paper]
Dynamic analysis is a promising technique for finding concurrency bugs in multithreaded programs. However, testing a dynamic analysis tool can be difficult: researchers end up writing large numbers of small benchmark programs, and since the benchmarks are themselves concurrent programs, they may execute nondeterministically, complicating testing of the analysis tool.
We propose testing dynamic analyses by writing traces in a simple trace description language, Tiddle. Our implementation, written in Haskell, generates deterministic multithreaded Java programs for testing dynamic analyses. We report that it is substantially easier to write programs exhibiting bugs such as race conditions in Tiddle than in the corresponding Java source code, reducing the amount of source code to maintain and understand. Although our implementation targets Java, the ideas extend to any other language that supports mutable fields and multiple threads.
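As a hedged illustration (the actual Tiddle syntax and generator output are not reproduced here), a two-thread trace along the lines of "thread 1 writes x, then thread 2 writes x" might be compiled to Java as follows, with a semaphore forcing the traced interleaving so every run is deterministic:

```java
import java.util.concurrent.Semaphore;

// Hypothetical generator output for a two-thread trace: t1 writes x,
// then t2 writes x. The semaphore pins down the interleaving, so the
// program runs deterministically; a lockset-style race detector under
// test should still flag the unprotected accesses to x.
public class GeneratedTrace {
    static int x;  // unsynchronized shared field: the seeded race
    static final Semaphore step2 = new Semaphore(0);

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            x = 1;            // trace step 1
            step2.release();  // hand control to the next trace step
        });
        Thread t2 = new Thread(() -> {
            try { step2.acquire(); } catch (InterruptedException e) { return; }
            x = 2;            // trace step 2, always ordered after step 1
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```

Note that this coordination creates happens-before edges, so the sketch suits lockset-based detectors; how Tiddle itself achieves determinism without masking bugs is a design point of the paper, not shown here.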
(Quickly) Testing the Tester via Path Coverage [abstract] [slides] [paper]
The configuration complexity and code size of an automated testing framework may grow to the point that the tester itself becomes a significant software artifact, prone to poor configuration and implementation errors. Unfortunately, testing the tester by using old versions of the software under test (SUT) may be impractical or impossible: test framework changes may have been motivated by interface changes in the tested system, or fault detection on older versions may be too expensive in computing time to justify running until errors are detected. We propose the use of path coverage measures as a “quick and dirty” method for detecting many faults in complex test frameworks. We also note the possibility of using techniques developed to diversify state-space searches in model checking to diversify test focus, together with an associated classification of tester changes into focus-changing and non-focus-changing modifications.
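A minimal sketch of the underlying comparison, using a hypothetical harness and made-up path ids rather than the paper's tooling: identify each path the test framework executes by a hash of its branch sequence, then compare the path sets of two tester versions. A large, unexplained drop suggests a tester fault or an unintended focus change.

```java
import java.util.HashSet;
import java.util.Set;

// Toy comparison of tester path coverage across versions. Path ids would in
// practice come from path-coverage instrumentation of the test framework.
public class TesterPathDiff {
    // Fraction of the old tester's paths still exercised by the new tester.
    static double retained(Set<Long> oldPaths, Set<Long> newPaths) {
        Set<Long> kept = new HashSet<>(oldPaths);
        kept.retainAll(newPaths);
        return oldPaths.isEmpty() ? 1.0 : (double) kept.size() / oldPaths.size();
    }

    public static void main(String[] args) {
        Set<Long> v1 = Set.of(1L, 2L, 3L, 4L);  // path ids from the old tester
        Set<Long> v2 = Set.of(1L, 2L);          // path ids from the modified tester
        double r = retained(v1, v2);
        if (r < 0.8)  // the threshold is an arbitrary illustration
            System.out.printf("warning: %.0f%% of old tester paths lost%n", (1 - r) * 100);
    }
}
```

For a non-focus-changing modification one would expect the retained fraction to stay near 1; a focus-changing modification legitimately shifts the path set.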
Test Case Filtering and Prioritization Based on Coverage of Combinations of Program Elements [abstract] [slides] [paper]
Test case filtering is concerned with selecting from a test suite T a subset T′ that is capable of revealing most of the defects revealed by T. A smaller T′ is desirable since it translates to fewer test runs to be audited manually. Test case prioritization, a related technique, aims at scheduling the tests in T so that defects are revealed as early as possible when T is executed. We propose techniques based on coverage of combinations of program elements of different types. Clearly, exploring all possible combinations induced at runtime is infeasible, which calls for an approximation algorithm. In this paper we investigate the use of a genetic algorithm to select a number of suitable combinations of program elements to be covered. We compared our technique to other coverage-based techniques that consider program elements of the same type and do not take their combinations into account; our preliminary results are promising. For example, after filtering the original test suite T for JTidy, the resulting T′ revealed all the defects in T and was only 14.1% of its size.
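The toy Java sketch below shows the general shape of such a genetic search, with made-up coverage data and an arbitrary fitness function: individuals are test subsets, and fitness rewards covering many distinct pairs of program elements while penalizing suite size. It illustrates the idea only and is not the authors' algorithm.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Toy genetic search for a small test subset covering many element pairs.
public class GaTestFilter {
    static final Random rnd = new Random(42);
    // tests[i] = ids of program elements that test i covers (made-up data).
    static final int[][] tests = {{1, 2, 3}, {2, 3, 4}, {1, 5}, {4, 5, 6}, {6, 7}};

    // Fitness: distinct element pairs covered together, minus a size penalty.
    static int fitness(boolean[] pick) {
        Set<Long> pairs = new HashSet<>();
        int size = 0;
        for (int i = 0; i < pick.length; i++) {
            if (!pick[i]) continue;
            size++;
            int[] t = tests[i];
            for (int a = 0; a < t.length; a++)
                for (int b = a + 1; b < t.length; b++)
                    pairs.add(((long) t[a] << 32) | t[b]);
        }
        return pairs.size() * 10 - size;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[20][tests.length];
        for (boolean[] ind : pop)
            for (int j = 0; j < ind.length; j++) ind[j] = rnd.nextBoolean();
        for (int gen = 0; gen < 100; gen++) {
            Arrays.sort(pop, (a, b) -> fitness(b) - fitness(a));  // elitist selection
            for (int i = pop.length / 2; i < pop.length; i++) {   // crossover + mutation
                boolean[] p1 = pop[rnd.nextInt(pop.length / 2)];
                boolean[] p2 = pop[rnd.nextInt(pop.length / 2)];
                for (int j = 0; j < tests.length; j++)
                    pop[i][j] = (rnd.nextBoolean() ? p1[j] : p2[j]) ^ (rnd.nextInt(20) == 0);
            }
        }
        Arrays.sort(pop, (a, b) -> fitness(b) - fitness(a));
        System.out.println("best subset fitness: " + fitness(pop[0]));
    }
}
```

Tests would then be kept (or scheduled first, for prioritization) according to the combinations the search selects.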
Cooperative Crug Isolation [abstract] [slides] [paper]
With the widespread deployment of multi-core hardware, writing concurrent programs has become inescapable. This has made fixing concurrency bugs (or crugs) critical in modern software systems. Static analysis techniques to find crugs such as data races and atomicity violations are not scalable, while dynamic approaches incur high run-time overheads. Crugs pose a greater challenge since they manifest only under specific execution interleavings that may not arise during in-house testing. Thus there is a pressing need for a low-overhead program monitoring technique that can be used post-deployment.
We present Cooperative Crug Isolation (CCI), a low-overhead instrumentation technique to isolate the root causes of crugs. CCI inserts instrumentation that records occurrences of specific thread interleavings at run-time by tracking whether successive accesses to a memory location were by the same thread or by distinct threads. The overhead of this instrumentation is kept low by using a novel cross-thread random sampling strategy. We have implemented CCI on top of the Cooperative Bug Isolation framework. CCI correctly diagnoses bugs in several nontrivial concurrent applications while incurring only 2–7% run-time overhead.
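A rough sketch of the kind of predicate CCI samples, with a hypothetical probe API (the real instrumentation and its cross-thread random sampling strategy are more refined): at a sampled access, record whether the previous access to the same location came from the same thread or from a different one.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical CCI-style probe: counts same-thread vs. cross-thread
// successive accesses at sampled program points. Counts from many runs
// would be correlated with failures, as in Cooperative Bug Isolation.
final class CciProbe {
    private static final Map<Object, Long> lastThread = new ConcurrentHashMap<>();
    static final AtomicLong sameThread = new AtomicLong();
    static final AtomicLong crossThread = new AtomicLong();

    // Called by instrumentation before each tracked access to `location`.
    static void onAccess(Object location) {
        // ~1% sampling keeps overhead low; note this simplification means
        // "previous" is the previous *sampled* access, whereas CCI's
        // cross-thread sampling is more careful about what it observes.
        if (ThreadLocalRandom.current().nextInt(100) != 0) return;
        long me = Thread.currentThread().getId();
        Long prev = lastThread.put(location, me);
        if (prev == null) return;  // first observed access to this location
        if (prev == me) sameThread.incrementAndGet();
        else crossThread.incrementAndGet();  // a cross-thread interleaving occurred here
    }
}
```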
Limits of Parallelism Using Dynamic Dependency Graphs [abstract] [slides] [paper]
The advance of multi-core processors has led to renewed interest in extracting parallelism from programs. It is sometimes useful to know how much parallelism is exploitable in the limit for general programs, to put the speedups of various parallelisation techniques into perspective. Wall’s study [19] was one of the first to examine limits of parallelism in detail. We present an extension of Wall’s analysis of limits of parallelism, constructing Dynamic Dependency Graphs from execution traces of a number of benchmark programs; this allows better visualisation of the kinds of dependencies that limit parallelism, as well as flexibility in transforming graphs when exploring possible optimisations. We confirm some of the results of Wall and subsequent studies, including that average available parallelism is often above 100, but only with effective measures to resolve control dependencies and to rename memory. We also study how certain compiler artifacts affect the limits of parallelism. In particular, we show that the use of a spaghetti stack, a technique that implicitly renames stack memory and breaks chains of true dependencies on the stack pointer, can double potential parallelism.
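As a toy model of such a limit study (invented trace encoding, unit instruction latency; not the paper's infrastructure), available parallelism can be estimated as the number of traced instructions divided by the critical path length of the dynamic dependency graph:

```java
import java.util.List;

// Toy limit-of-parallelism estimate over a dynamic dependency graph.
public class DdgParallelism {
    // deps.get(i) holds the indices j < i that instruction i depends on.
    static double availableParallelism(List<int[]> deps) {
        int n = deps.size();
        int[] depth = new int[n];  // earliest step at which instruction i can run
        int critical = 0;
        for (int i = 0; i < n; i++) {
            int d = 1;  // unit latency per instruction
            for (int j : deps.get(i)) d = Math.max(d, depth[j] + 1);
            depth[i] = d;
            critical = Math.max(critical, d);
        }
        return n / (double) critical;
    }

    public static void main(String[] args) {
        // Instructions 2 and 3 both depend only on instruction 1, so they overlap.
        List<int[]> trace = List.of(new int[] {}, new int[] {0}, new int[] {1}, new int[] {1});
        System.out.println(availableParallelism(trace));  // 4 / 3 ≈ 1.33
    }
}
```

Removing a dependency edge in such a graph (for example, one broken by stack renaming via a spaghetti stack) shortens the critical path and raises the estimate, which is how graph transformations model potential optimisations.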