Who is Kylia Miskell?

Kylia Miskell is an artist, programmer, and designer based in Saint Louis, Missouri. They received their B.S. in computer science (with a minor in art) from Washington University in St. Louis (WUSTL) in 2011. They received their M.S. in computer science in 2015, working in the WUSTL Media and Machines lab under the supervision of Dr. Robert Pless.

Contact Information

You can get in touch with Kylia through their LinkedIn or Twitter accounts or via e-mail at kylia (dot) miskell (at) gmail (dot) com.


The Episolar Constraint: Monocular Shape from Shadow Correspondence

Austin Abrams, Kylia Miskell, Robert Pless

CVPR 2013

Shadows encode a powerful geometric cue: if one pixel casts a shadow onto another, then the two pixels are collinear with the lighting direction. Given many images over many lighting directions, this constraint can be leveraged to recover the depth of a scene from a single viewpoint. For outdoor scenes with solar illumination, we term this the episolar constraint, and from it we derive a convex optimization to solve for the sparse depth of a scene from shadow correspondences, a method to reduce the search space when finding shadow correspondences, and a method to geometrically calibrate a camera using shadow constraints. Our method constructs a dense network of nonlocal constraints which complements recent work on outdoor photometric stereo and cloud-based cues for 3D. We demonstrate results across a variety of time-lapse sequences from webcams "in the wild."
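The collinearity cue above can be sketched as a toy numerical example (this is an illustration of the geometry, not the paper's solver): with the camera at the origin, unknown depths d_p and d_q along the unit rays of a shadow-casting pixel p and a shadowed pixel q must place the two 3D points on a line parallel to the sun direction s, which yields a homogeneous linear constraint on the depths. The rays, depths, and sun direction below are made-up values for demonstration.

```python
import numpy as np

# Toy illustration of the episolar constraint: if pixel p casts a shadow
# onto pixel q, then with camera at the origin and unit rays r_p, r_q,
#     d_q * r_q - d_p * r_p   must be parallel to the sun direction s,
# i.e.  d_q * (s x r_q) - d_p * (s x r_p) = 0,  linear in (d_p, d_q).

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Hypothetical scene: two camera rays and ground-truth depths.
r_p, r_q = unit([0.1, 0.0, 1.0]), unit([0.3, 0.2, 1.0])
d_p_true, d_q_true = 2.0, 3.0

# Sun direction consistent with the shadow correspondence p -> q.
s = unit(d_q_true * r_q - d_p_true * r_p)

# Stack the constraint into a 3x2 matrix and recover the depths (up to
# scale) from its null space via SVD -- a single shadow correspondence
# only fixes the depth ratio, matching the sparse-depth formulation.
A = np.column_stack([-np.cross(s, r_p), np.cross(s, r_q)])
_, _, Vt = np.linalg.svd(A)
d_p, d_q = Vt[-1]

print(d_q / d_p)  # recovers the true ratio d_q_true / d_p_true = 1.5
```

In the paper, many such constraints from many sun directions are combined into one convex program; this sketch just shows why each shadow correspondence is a linear constraint on scene depth.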

Webcam Geolocation Using Aggregate Light Levels

Nathan Jacobs, Kylia Miskell, Robert Pless


We consider the problem of geo-locating static cameras from long-term time-lapse imagery. This problem has received significant attention recently, with most methods making strong assumptions on the geometric structure of the scene. We explore a simple, robust cue that relates overall image intensity to the zenith angle of the sun (which need not be visible). We characterize the accuracy of geo-location based on this cue as a function of different models of the zenith-intensity relationship and the amount of imagery available. We evaluate our algorithm on a dataset of more than 60 million images captured from outdoor webcams located around the globe. We find that using our algorithm with images sampled every 30 minutes yields localization errors of less than 100 km for the majority of cameras.
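The zenith-intensity cue can be illustrated with a minimal sketch (this is not the paper's model): daily aggregate brightness roughly tracks the cosine of the solar zenith angle, so a camera's latitude can be recovered by matching an observed brightness curve against curves predicted at candidate latitudes. The clamped-cosine intensity model, the fixed solar declination, and the 0.5-degree grid search are simplifying assumptions for demonstration; the paper fits richer zenith-intensity models and localizes in both latitude and longitude.

```python
import numpy as np

# Toy sketch of the aggregate-light-level cue: predicted brightness is a
# clamped cos(solar zenith angle), with cos(zenith) given by the standard
# spherical formula at a fixed solar declination (assumed 20 degrees).

def predicted_intensity(lat_deg, hour_angles, decl_deg=20.0):
    lat, d = np.radians(lat_deg), np.radians(decl_deg)
    cz = np.sin(lat) * np.sin(d) + np.cos(lat) * np.cos(d) * np.cos(hour_angles)
    return np.clip(cz, 0.0, None)  # no light when the sun is below the horizon

# "Observed" brightness: one sample every 30 minutes over a day, simulated
# at an unknown latitude (roughly Saint Louis, chosen for illustration).
hours = np.arange(0.0, 24.0, 0.5)
hour_angles = np.radians(15.0 * (hours - 12.0))  # 15 degrees per hour from noon
true_lat = 38.5
observed = predicted_intensity(true_lat, hour_angles)

# Grid search over candidate latitudes for the best-matching curve.
candidates = np.arange(-60.0, 60.5, 0.5)
errors = [np.sum((predicted_intensity(c, hour_angles) - observed) ** 2)
          for c in candidates]
best = candidates[np.argmin(errors)]
print(best)  # recovers the simulated latitude, 38.5
```

The day's brightness curve has an offset sin(lat)sin(decl) and an amplitude cos(lat)cos(decl), so away from the equinox it pins down latitude uniquely; real imagery adds weather and scene effects, which is what the paper's robust models address.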

The Global Network of Outdoor Webcams: Properties and Applications

Nathan Jacobs et al.


There are thousands of outdoor webcams that offer live images freely over the Internet. We report on methods for discovering and organizing this already existing and massively distributed global sensor, and argue that it provides an interesting alternative to satellite imagery for global-scale remote sensing applications. In particular, we characterize the live imaging capabilities that are freely available as of the summer of 2009 in terms of the spatial distribution of the cameras, their update rate, and characteristics of the scene in view. We offer algorithms that exploit the fact that webcams are typically static to simplify the tasks of inferring relevant environmental and weather variables directly from image data. Finally, we show that organizing and exploiting the large, ad hoc set of cameras attached to the web can dramatically increase the data available for studying particular problems in phenology.