Motion Textures: A Two-Level Statistical Model for Character Motion Synthesis, by Li et al., SIGGRAPH 2002

Summary: This paper presents a two-level statistical model of motion that can be learned from example motion data and then used to generate similar motion. Motion generated from the model can be constrained or unconstrained, and can be driven by noise to create theoretically infinite sequences.

Problem: Because capturing motion data is expensive and editing it is difficult, the authors sought an alternative way to capture the rhythms and patterns of human motion so that generating and editing similar motion becomes easier. Special focus was given to the particular class of dancing motions.

Methodology: The authors built a probability-based framework that models the example data at two levels. At the local level, individual textons, each represented as a linear dynamic system together with an initial distribution of "states" (key frames representing starting poses, stored as a simple list in the authors' implementation), capture local linear dynamics. At the global level, a probability graph of transitions between textons (the transition matrix) captures the non-linear dynamics. The authors used an iterative, greedy approach to segment the motion data into textons, with a user-specified fitting error tolerance to keep textons realistically related to the data. Once the textons were found and the transition matrix computed, the model was used to synthesize plausible related motion sequences and to "edit" existing sequences by specifying key-frame constraints. The editing constraints, as well as many aspects of motion generation, were formulated as relatively simple constraint problems with a noise-driven component.
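The two-level generation process described above might be sketched roughly as follows. This is a minimal illustration, not the paper's method: the `Texton` class, the fixed per-texton length, the dynamics matrices, and all numeric values are my own assumptions, and the paper's actual LDS formulation (maximum-likelihood parameters, key-frame distributions) is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical texton: a linear dynamic system (LDS) with dynamics matrix A,
# a driving-noise scale, and a list of stored starting poses (key frames).
class Texton:
    def __init__(self, A, key_frames, sigma=0.01, length=30):
        self.A = A                    # local linear dynamics
        self.key_frames = key_frames  # list of starting poses
        self.sigma = sigma            # driving-noise scale
        self.length = length          # frames generated per visit (illustrative)

    def synthesize(self, rng):
        # Pick a starting pose, then run the noise-driven linear dynamics.
        x = self.key_frames[rng.integers(len(self.key_frames))].copy()
        frames = [x]
        for _ in range(self.length - 1):
            x = self.A @ x + rng.normal(0.0, self.sigma, size=x.shape)
            frames.append(x)
        return np.stack(frames)

def generate(textons, transition_matrix, start=0, n_textons=5, rng=rng):
    """Sample a texton path from the transition matrix (global level),
    then synthesize each visited texton's frames (local level)."""
    clips, current = [], start
    for _ in range(n_textons):
        clips.append(textons[current].synthesize(rng))
        current = rng.choice(len(textons), p=transition_matrix[current])
    return np.concatenate(clips)

# Toy example: two textons over a 3-DOF pose vector.
d = 3
textons = [Texton(0.95 * np.eye(d), [np.ones(d)]),
           Texton(0.90 * np.eye(d), [np.full(d, 0.5)])]
P = np.array([[0.2, 0.8],
              [0.6, 0.4]])  # row-stochastic texton transition matrix
motion = generate(textons, P)
print(motion.shape)  # (150, 3): 5 textons x 30 frames each
```

Note that this sketch ignores the transitions' continuity constraints; in the paper, successive textons are stitched so that the motion stays smooth across texton boundaries.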
Key Ideas: The two levels of the model capture both individual sections of motion texture and the likelihood of transitions between those sections. Users can edit generated motions at either the local or the global level while still maintaining distinct properties of the existing motion.

Contributions: This method captures distinct patterns in motion from input data without user annotation, key-frame selection, segmentation, or any other involvement beyond specifying the texton fitting error tolerance.

Questions: The only motion used in the authors' examples comes from a single 20-minute motion capture session with a dancer. I was curious how well this modeling approach would hold up on other forms of motion or on shorter sequences of motion data. I was also curious how much a smaller or larger initial texton size when partitioning the input data would change the quality and usefulness of the generated model.
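The role of the texton fitting error tolerance mentioned above could be approximated by a simplified greedy pass over the data: grow the current segment while a single LDS still fits it within the tolerance, and start a new texton when it no longer does. The helper `fit_lds_error`, the plain least-squares fit, and the tolerance and minimum-length values below are simplifications of my own, not the authors' exact iterative procedure.

```python
import numpy as np

def fit_lds_error(X):
    """Least-squares fit of x_{t+1} ~ x_t @ A over a segment; return the
    mean squared residual. A stand-in for the paper's LDS fitting."""
    X0, X1 = X[:-1], X[1:]
    A, *_ = np.linalg.lstsq(X0, X1, rcond=None)
    return float(np.mean((X0 @ A - X1) ** 2))

def greedy_segment(motion, tol=1e-3, min_len=4):
    """Greedily grow segments: extend the current segment while one LDS
    still fits within the user-specified error tolerance `tol`."""
    bounds, start, end = [], 0, min_len
    while end <= len(motion):
        if end - start >= min_len and fit_lds_error(motion[start:end]) > tol:
            # Tolerance exceeded: close this texton, overlap one frame
            # with the next so the segments stay connected.
            bounds.append((start, end - 1))
            start = end - 1
            end = start + min_len
        else:
            end += 1
    bounds.append((start, len(motion)))
    return bounds
```

On motion that genuinely follows a single linear system, this returns one segment spanning the whole clip; a tighter tolerance or more varied motion yields more, shorter textons.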