On analyzing video with very small motions

Michael Dixon, Austin Abrams, Nathan Jacobs, and Robert Pless.


Abstract

We characterize an important class of videos consisting of very small, but potentially very complicated, motions. We find that in these scenes, linear appearance variations have a direct relationship to scene motions. We show how to interpret appearance variations captured through a PCA decomposition of the image set as a scene-specific non-parametric motion basis. We propose very fast, robust tools for dense flow estimates that are effective in scenes with very small motions and potentially large image noise. We show example results in a variety of applications, including motion segmentation and long-term point tracking.


Citation




Figure 1. (top) A single frame extracted from a one-minute video of gently waving trees. (middle) The first component of the PCA decomposition of the video shows patterns characteristic of the PCA components of images of moving textures. In such scenes, PCA often fails to provide high-quality image reconstructions; despite this, it still encodes the motion in the scene. (bottom) Using multiple PCA components, and interpreting them in terms of observed scene motion, enables an efficient method for segmenting a scene into regions of coherent motion, among other applications.

M. Dixon, A. Abrams, N. Jacobs, and R. Pless. On analyzing video with very small motions. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, June 2011. [BibTeX]

Download the .pdf
Download the supplemental material


Project

This project is about understanding and parameterizing motion in a class of videos that can fairly be described as very boring: long videos of scenes that change only through very small motions. This class includes video of someone breathing while asleep, of trees waving gently in the wind, or of a car engine vibrating as it starts, as well as video from cameras whose viewpoint jitters because they are handheld or mounted on a shaky support. Within this class of videos, a wide variety of problem domains require understanding and segmenting the motions within the scene.

Figure 2. Although a PCA decomposition is usually interpreted as discovering the underlying variations in appearance, it can also be interpreted as a description of the complex motion in a scene. This re-interpretation allows us to quickly solve for (left) noise-resistant dense optical flow and (right) motion segmentation.

One natural intermediate representation to support these applications is the dense motion field between all frames in the video sequence. The traditional approach to solving for a dense motion field is to combine independent frame-to-frame flow estimates. This approach does not take advantage of the similarities between all frames, and is therefore needlessly slow. In addition, as we will show, it does not always give the best result.
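To make the frame-to-frame baseline concrete, a single flow estimate between two frames can be sketched as a first-order least-squares problem. This is a hypothetical illustration, not the paper's method: the function name is invented, and for brevity it solves for one global (u, v) over the whole image, whereas a dense estimator repeats the same solve in a small window around every pixel, independently for every pair of frames.

```python
import numpy as np

def frame_pair_flow(f0, f1):
    """One global (u, v) estimate between two frames via the
    first-order brightness-constancy model Ix*u + Iy*v = -(f1 - f0).
    Dense flow methods solve this in a local window at each pixel;
    repeating that independently for every frame pair is the slow
    baseline described above."""
    Iy, Ix = np.gradient(f0)                        # spatial derivatives
    It = f1 - f0                                    # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # (num_pixels, 2)
    uv, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return uv

# Synthetic check (illustrative): a Gaussian blob shifted 0.2 px right.
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda dx: np.exp(-((xx - 32 - dx) ** 2 + (yy - 32) ** 2) / 50.0)
u, v = frame_pair_flow(blob(0.0), blob(0.2))
```

Because the motion is a fraction of a pixel, the first-order model is accurate and the recovered (u, v) comes out close to (0.2, 0).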

Thus, this project offers a very fast and robust algorithm for computing dense motion estimates within this class of videos, where the motions are very small and perhaps repeated (periodically or not) over time. The approach is based on computing the PCA decomposition of the set of images in the video. The PCA decomposition creates component images that approximately span the space of image variation; when this variation is caused by small motions, the component images are often similar to local, directional derivative filters.
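The link between PCA components and motion can be sketched as follows. This is an illustrative simplification, not the paper's full algorithm (which recovers dense, per-pixel motion components): the function name is invented, and each component is fit to a single global (u, v) by least squares against the spatial gradients of the mean image, under the first-order model component ≈ -u·dI/dx - v·dI/dy.

```python
import numpy as np

def pca_motion_basis(frames, n_components=2):
    """Interpret top PCA components of a small-motion video as motions.

    frames: (T, H, W) array. Returns the mean image, the top PCA
    component images, and one global (u, v) per component, fit by
    least squares under the first-order small-motion model
    component ~ -u * dI/dx - v * dI/dy (gradients of the mean image).
    """
    T, H, W = frames.shape
    X = frames.reshape(T, -1)
    mean = X.mean(axis=0)
    # Thin SVD of the centered data; rows of Vt are component images.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:n_components].reshape(n_components, H, W)

    I0 = mean.reshape(H, W)
    Iy, Ix = np.gradient(I0)                  # spatial derivatives of the mean
    A = np.stack([-Ix.ravel(), -Iy.ravel()], axis=1)

    motions = []
    for c in components:
        uv, *_ = np.linalg.lstsq(A, c.ravel(), rcond=None)
        motions.append(uv)                    # global (u, v) for this component
    return I0, components, np.array(motions)

# Synthetic check (illustrative): a blob oscillating horizontally by
# sub-pixel amounts; the first component should read as horizontal motion.
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda dx: np.exp(-((xx - 32 - dx) ** 2 + (yy - 32) ** 2) / 50.0)
frames = np.stack([blob(0.3 * np.sin(t)) for t in np.linspace(0, 6, 40)])
_, comps, motions = pca_motion_basis(frames, n_components=1)
u, v = motions[0]
```

For this sequence the first component image is, up to sign, proportional to the x-derivative of the mean image, so the fitted motion is strongly horizontal (|u| much larger than |v|); the sign of a PCA component is arbitrary, which is why only relative magnitudes are meaningful here.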


Results

For each sequence (a)-(e), we show (left) an example image, (top center) the first two principal component images, (bottom center) their corresponding principal motion components, and (right) the computed flow for one image in the sequence.