Publications



Structure from Shadow Motion

Austin Abrams, Ian Schillebeeckx, Robert Pless.


In outdoor images, cast shadows define 3D constraints between the sun, the points casting a shadow, and the surfaces onto which shadows are cast. This cast shadow structure provides a powerful cue for 3D reconstruction, but requires that shadows be tracked over time, and this is difficult as shadows have minimal texture. Thus, we develop a shadow tracking system that enforces geometric consistency for each track and then combines thousands of tracking results to create a 3D model of scene geometry. We demonstrate reconstruction results on a variety of outdoor scenes, including some that show the 3D structure of occluders never directly observed by the camera.
To be presented at the IEEE International Conference on Computational Photography (ICCP) 2014.
Download the .pdf (6 MB). Authors' personal version; the conference version will appear on IEEE Xplore soon. Supplemental Video
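
The core geometric cue fits in a short sketch: once a shadow point has been tracked across frames with known sun directions, the occluder must lie on the sun ray through every tracked shadow position, so it can be recovered as the least-squares intersection of those rays. A minimal Python illustration of just that triangulation step (not the paper's full tracking-and-consistency pipeline; it assumes shadow tracks already lifted to 3D scene coordinates):

    import numpy as np

    def triangulate_occluder(shadow_pts, sun_dirs):
        # shadow_pts: (N, 3) tracked 3D shadow positions, one per frame
        # sun_dirs:   (N, 3) unit vectors toward the sun at each frame's time
        # The occluder lies on every ray shadow_pt + d * sun_dir, so solve for
        # the point minimizing the summed squared distance to all rays.
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for x, s in zip(shadow_pts, sun_dirs):
            P = np.eye(3) - np.outer(s, s)  # projector orthogonal to the ray
            A += P
            b += P @ x
        return np.linalg.solve(A, b)

Aggregating thousands of such tracks is what produces a point cloud of occluder geometry, including structure the camera never directly observes.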

The Episolar Constraint: Monocular Shape from Shadow Correspondence

Austin Abrams, Kylia Miskell, Robert Pless.

Shadows encode a powerful geometric cue: if one pixel casts a shadow onto another, then the two pixels are collinear with the lighting direction. Given many images over many lighting directions, this constraint can be leveraged to recover the depth of a scene from a single viewpoint. For outdoor scenes with solar illumination, we term this the episolar constraint. It yields a convex optimization that solves for the sparse depth of a scene from shadow correspondences, a method to reduce the search space when finding shadow correspondences, and a method to geometrically calibrate a camera using shadow constraints. Our method constructs a dense network of nonlocal constraints which complements recent work on outdoor photometric stereo and cloud-based cues for 3D. We demonstrate results across a variety of time-lapse sequences from webcams "in the wild."
Presented at IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2013.
Download the .pdf (9 MB)
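
The constraint itself is easy to write down: if a caster pixel with viewing ray r_cast at unknown depth d_cast shadows a receiver pixel with ray r_recv at depth d_recv, then d_cast*r_cast - d_recv*r_recv must be parallel to the sun direction. Taking a cross product with the sun direction eliminates the unknown scale along that direction and leaves linear equations in the two depths. A minimal sketch of how one correspondence contributes rows to a depth system (the paper solves the full problem as a convex optimization with further constraints):

    import numpy as np

    def episolar_rows(r_cast, r_recv, sun):
        # r_cast, r_recv: unit viewing rays of the shadow-casting and
        #                 shadow-receiving pixels (camera at the origin)
        # sun:            unit vector toward the sun
        # Collinearity: d_cast*r_cast - d_recv*r_recv = lam*sun for lam > 0.
        # Crossing both sides with sun removes lam:
        #   d_cast*(r_cast x sun) - d_recv*(r_recv x sun) = 0
        return np.cross(r_cast, sun), -np.cross(r_recv, sun)

Stacking these rows over many shadow correspondences and many sun positions yields a sparse homogeneous system whose solution, fixed up to one global scale, is the sparse scene depth.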


Heliometric Stereo: Shape from Sun Position

Austin Abrams, Christopher Hawley, Robert Pless.

In this work, we present a method to uncover shape from webcams "in the wild." We develop a variant of photometric stereo that uses the sun as a distant light source, so that the lighting direction can be computed from known GPS coordinates and timestamps. We propose an iterative, non-linear optimization that minimizes the error in reproducing all images from an extended time-lapse with an image formation model accounting for ambient lighting, shadows, changing light color, dense surface normal maps, radiometric calibration, and exposure. Unlike many approaches to uncalibrated outdoor image analysis, this procedure is automatic, and we report quantitative results by comparing extracted surface normals to Google Earth 3D models. We evaluate this procedure on data from a varied set of scenes and emphasize the advantages of including imagery from many months.
Presented at European Conference on Computer Vision (ECCV) 2012.
Download the .pdf (3 MB) (Authors' version. The original publication is available at www.springerlink.com)
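
Two pieces of the pipeline are compact enough to sketch. First, a GPS location and a timestamp fully determine the sun direction; second, given per-image sun directions, the Lambertian core reduces to per-pixel least squares. The sketch below is illustrative only: it uses the third-party pysolar library for sun geometry (not the paper's code) and omits the shadowing, light color, radiometric calibration, and exposure terms that the paper's iterative optimization handles.

    import numpy as np
    from datetime import datetime, timezone
    from pysolar.solar import get_altitude, get_azimuth  # one option for sun geometry

    def sun_direction(lat, lon, when):
        # Unit vector toward the sun in a local east-north-up frame.
        # NOTE: pysolar's azimuth convention has varied across versions;
        # verify the signs before trusting results.
        alt = np.radians(get_altitude(lat, lon, when))
        az = np.radians(get_azimuth(lat, lon, when))
        return np.array([np.sin(az) * np.cos(alt),   # east
                         np.cos(az) * np.cos(alt),   # north
                         np.sin(alt)])               # up

    def lambertian_fit(L, I):
        # L: (N, 3) sun directions over N images; I: (N, P) intensities at P pixels.
        # Fit I ~ L @ g + ambient per pixel, where g = albedo * surface normal.
        A = np.hstack([L, np.ones((len(L), 1))])
        X, *_ = np.linalg.lstsq(A, I, rcond=None)        # X is (4, P)
        albedo = np.linalg.norm(X[:3], axis=0)
        return X[:3] / np.maximum(albedo, 1e-9), albedo, X[3]

    # e.g. sun_direction(38.65, -90.31, datetime(2012, 6, 1, 17, 0, tzinfo=timezone.utc))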


Web-Accessible Geographic Integration and Calibration of Webcams

Austin Abrams, Robert Pless.

A global network of webcams offers unique viewpoints from tens of thousands of locations. Understanding the geographic context of this imagery is vital to using these cameras for quantitative environmental monitoring or surveillance applications. We derive robust geo-calibration constraints that allow users to geo-register static or pan-tilt-zoom cameras by specifying a few corresponding points, and describe a web interface suitable for novices. We discuss design decisions that support our scalable, publicly accessible web service, which allows webcam textures to be displayed live on 3D geographic models. Finally, we demonstrate several multimedia applications for geo-calibrated cameras.
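
As a rough stand-in for the geo-calibration step, standard perspective-n-point pose estimation from a handful of clicked correspondences conveys the flavor: the user clicks matching points in the webcam frame and on the 3D globe, and the camera pose follows. A hypothetical sketch using OpenCV (the paper derives its own constraints; the fixed focal length and names here are assumptions for illustration):

    import numpy as np
    import cv2

    def georegister(image_pts, world_pts, focal_px, width, height):
        # image_pts: (N, 2) pixels the user clicked in the webcam frame (N >= 4)
        # world_pts: (N, 3) matching points in a local east-north-up frame,
        #            converted from the lat/lon/alt clicked on the globe
        # focal_px:  assumed focal length in pixels
        K = np.array([[focal_px, 0.0, width / 2.0],
                      [0.0, focal_px, height / 2.0],
                      [0.0, 0.0, 1.0]])
        ok, rvec, tvec = cv2.solvePnP(world_pts.astype(np.float64),
                                      image_pts.astype(np.float64),
                                      K, None)           # ignore lens distortion
        return ok, rvec, tvec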


LOST: Longterm Observation of Scenes (with Tracks)

Austin Abrams, Jim Tucek, Joshua Little, Nathan Jacobs, Robert Pless.

We introduce the Longterm Observation of Scenes (with Tracks) dataset. This dataset comprises videos taken from streaming outdoor webcams, capturing the same half hour each day for over a year. LOST contains rich metadata, including geolocation, day-by-day weather annotation, object detections, and tracking results. We believe that sharing this dataset opens opportunities for computer vision research involving very long-term outdoor surveillance, robust anomaly detection, and scene analysis methods based on trajectories. Efficient analysis of changes in scene behavior at very long time scales requires features that summarize large amounts of trajectory data in an economical way. We describe a trajectory clustering algorithm that yields a set of exemplar trajectories, aggregate statistics about these exemplars through time, and show that these statistics exhibit strong correlations with external metadata, such as weather signals and the day of the week.
Presented at IEEE Workshop on Applications of Computer Vision (WACV) 2012.
Download the .pdf (8.1 MB)
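
One common recipe for the exemplar step (illustrative, not necessarily the paper's exact algorithm): resample every track to a fixed number of points so tracks of different lengths become comparable feature vectors, then cluster. A minimal Python sketch:

    import numpy as np
    from sklearn.cluster import KMeans

    def resample(track, k=16):
        # Resample a (T, 2) pixel trajectory to k evenly spaced samples.
        t = np.linspace(0, 1, len(track))
        ti = np.linspace(0, 1, k)
        return np.stack([np.interp(ti, t, track[:, d]) for d in (0, 1)], axis=1)

    def cluster_tracks(tracks, n_exemplars=50, k=16):
        X = np.stack([resample(np.asarray(tr), k).ravel() for tr in tracks])
        km = KMeans(n_clusters=n_exemplars, n_init=10).fit(X)
        return km.labels_, km.cluster_centers_.reshape(n_exemplars, k, 2)

Daily histograms of the cluster labels then give the per-exemplar statistics that can be correlated against weather signals and the day of the week.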


Tools for Richer Crowd Source Image Annotations

Joshua Little, Austin Abrams, Robert Pless.

Crowd-sourcing tools such as Mechanical Turk are popular for annotating large-scale image datasets. Typically, these annotations consist of bounding boxes or coarse outlines of objects, in order to keep the interface as simple as possible and to respect browser constraints. However, since most browsers can now quickly process images and render shapes through JavaScript, better annotations can feasibly be generated in the browser given an easy-to-use interface. In this paper, we develop a suite of annotation tools for high-fidelity object contouring and 3D pose annotation, working within the limitation that, to be accessible to most Mechanical Turk users, the tools must run in the browser with no plug-ins or extra downloads. We show comparative results exploring annotation accuracy relative to existing annotation tools.
Presented at IEEE Workshop on Applications of Computer Vision (WACV) 2012.
Download the .pdf (6.4 MB)
Download the supplemental material (1.7 MB)


On Analyzing Video with Very Small Motions

Michael Dixon, Austin Abrams, Nathan Jacobs, Robert Pless.

We characterize an important class of videos consisting of very small, but potentially very complicated, motions. We find that in these scenes, linear appearance variations have a direct relationship to scene motions. We show how to interpret appearance variations captured through a PCA decomposition of the image set as a scene-specific non-parametric motion basis. We propose very fast, robust tools for dense flow estimates that are effective in scenes with very small motions and potentially large image noise. We show example results in a variety of applications, including motion segmentation and long-term point tracking.
Presented at IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2011.
Download the .pdf (4 MB)   Download the supplemental material (9 MB)
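
The key relation is that, for small displacements, each frame is approximately the mean image minus the image gradient dotted with the displacement field, so each PCA appearance mode corresponds to a motion mode along the local gradient (normal flow). A sketch of that interpretation using a plain SVD, without the paper's robust estimation:

    import numpy as np

    def pca_motion_basis(frames, k=8):
        # frames: (T, H, W) grayscale stack with very small motions.
        # For small displacements u:  I_t(x) ~ I0(x) - grad(I0)(x) . u(x, t),
        # so each PCA appearance mode B implies a displacement mode along the
        # local gradient.
        T, H, W = frames.shape
        mean = frames.mean(axis=0)
        U, S, Vt = np.linalg.svd(frames.reshape(T, -1) - mean.ravel(),
                                 full_matrices=False)
        basis = Vt[:k].reshape(k, H, W)           # spatial appearance modes
        gy, gx = np.gradient(mean)
        g2 = np.maximum(gx**2 + gy**2, 1e-6)
        flows = [(-b * gx / g2, -b * gy / g2) for b in basis]   # (u, v) per mode
        coeffs = U[:, :k] * S[:k]                 # per-frame mode amplitudes
        return mean, basis, flows, coeffs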


Exploratory Analysis of Time-Lapse Imagery with Fast Subset PCA

Austin Abrams, Emily Feder, Robert Pless.

In surveillance and environmental monitoring applications, it is common to have millions of images of a particular scene. While there exist tools to find particular events, anomalies, human actions, and behaviors, there has been little investigation of tools that allow more exploratory searches of the data. This paper proposes modifications to PCA that enable users to quickly recompute low-rank decompositions for selected spatial and temporal subsets of the data. This process returns decompositions orders of magnitude faster than general PCA, and the results are close to optimal in terms of reconstruction error. We show examples of real exploratory data analysis across several applications, including an interactive web application.
Presented at IEEE Workshop on Applications of Computer Vision (WACV) 2011.
Download the .pdf (3 MB)
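
One plausible mechanism behind such speedups (a sketch of the idea; the paper's method also handles re-centering and is more careful about optimality): once a low-rank SVD of the full data is in hand, the decomposition of any temporal subset can be rebuilt from the small coefficient block alone, without touching raw pixels.

    import numpy as np

    def subset_pca(U, S, Vt, cols):
        # Full data (pixels x frames): X ~ U @ diag(S) @ Vt, with k components.
        # Then X[:, cols] ~ U @ C with C = diag(S) @ Vt[:, cols], and
        # re-diagonalizing the tiny (k x m) block C restores an orthogonal
        # basis for the subset at a cost independent of the image size.
        C = S[:, None] * Vt[:, cols]
        Uc, Sc, Vct = np.linalg.svd(C, full_matrices=False)
        return U @ Uc, Sc, Vct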


Webcams in Context: Web Interfaces to Create Live 3D Environments

Austin Abrams, Robert Pless.

Web services supporting deep integration between video data and geographic information systems (GIS) empower a large user base to build on popular tools such as Google Earth and Google Maps. Here we extend web interfaces designed explicitly for novice users to integrate streaming video with 3D GIS, and work to dramatically simplify the task of retexturing 3D scenes from live imagery. We also derive and implement constraints to use corresponding points to calibrate popular pan-tilt-zoom webcams with respect to GIS applications, so that the calibration is automatically updated as web users adjust the camera zoom and view direction. These contributions are demonstrated in a live web application implemented on the Google Earth Plug-in, within which hundreds of users have already geo-registered streaming cameras in hundreds of scenes to create live, updating textures in 3D scenes.
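
The pan-tilt-zoom bookkeeping rests on a standard identity: a camera that only rotates and zooms about its center induces a homography between its views, so a calibration established in one PTZ state can be transported to another. A minimal sketch of that identity (the kind of constraint involved, not the paper's derivation or web-service machinery):

    import numpy as np

    def ptz_homography(K_old, K_new, R_delta):
        # Pixels map between the two PTZ states as
        #   x_new ~ K_new @ R_delta @ inv(K_old) @ x_old,
        # where R_delta is the relative pan/tilt rotation and K_new carries
        # the new focal length after zooming.
        return K_new @ R_delta @ np.linalg.inv(K_old)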


Participatory Integration of Live Webcams into GIS

Austin Abrams, Nick Fridrich, Nathan Jacobs, Robert Pless.

Global satellite imagery provides nearly ubiquitous views of the Earth's surface, and tens of thousands of webcams provide live views from near-Earth viewpoints. Combining these into a single application creates live views in the global context, where cars move through intersections, trees sway in the wind, and students walk across campus in real time. This integration requires camera registration, which takes time, effort, and expertise. Here we report on two participatory interfaces that simplify this registration by allowing anyone to use live webcam streams to create virtual overhead views or to map live texture onto 3D models. We highlight system design issues that affect the scalability of such a service, and offer a case study of how we overcame them in building a system that is publicly available and integrated with Google Maps and the Google Earth Plug-in. Imagery registered to features in GIS applications can be considered richly geotagged, and we discuss opportunities for this rich geotagging.
Presented at COM.Geo 2010: The 1st International Conference on Computing for Geospatial Research & Application.
Download the .pdf (1.2 MB)
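
For the virtual-overhead-view interface, registering a roughly planar scene (roads, plazas) to the map reduces to a homography estimated from a few user-clicked correspondences. A hypothetical sketch with OpenCV (the point values and file name are made up for illustration):

    import numpy as np
    import cv2

    # User-clicked correspondences: webcam pixels -> overhead-map pixels.
    cam_pts = np.array([[102, 341], [517, 298], [433, 120], [88, 154]], np.float32)
    map_pts = np.array([[640, 880], [910, 860], [870, 610], [600, 640]], np.float32)

    H, _ = cv2.findHomography(cam_pts, map_pts)

    frame = cv2.imread("webcam_frame.jpg")       # one live frame from the stream
    overhead = cv2.warpPerspective(frame, H, (1024, 1024))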


The Global Network of Outdoor Webcams: Properties and Applications

Nathan Jacobs, Walker Burgin, Nick Fridrich, Austin Abrams, Kylia Miskell, Bobby H. Braswell, Andrew D. Richardson, Robert Pless.

There are thousands of outdoor webcams that offer live images freely over the Internet. We report on methods for discovering and organizing this already existing and massively distributed global sensor, and argue that it provides an interesting alternative to satellite imagery for global-scale remote sensing applications. In particular, we characterize the live imaging capabilities that were freely available as of the summer of 2009 in terms of the spatial distribution of the cameras, their update rates, and characteristics of the scenes in view. We offer algorithms that exploit the fact that webcams are typically static to simplify the task of inferring relevant environmental and weather variables directly from image data. Finally, we show that organizing and exploiting the large, ad hoc set of cameras attached to the web can dramatically increase the data available for studying particular problems in phenology.
Presented at ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL GIS) 2009.
Download the .pdf.
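
A toy example of why static cameras simplify inference: with a fixed view, a per-pixel temporal median gives a clean background image, and each frame's deviation from it is already a crude signal about changing conditions (fog, snow, lighting). Illustrative only; the paper's estimators are task-specific (e.g., vegetation indices for phenology).

    import numpy as np

    def daily_signal(frames):
        # frames: (N, H, W) grayscale images from one static webcam.
        background = np.median(frames, axis=0)           # scene without transients
        return np.abs(frames - background).mean(axis=(1, 2))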