Depth Extraction from Video Using Non-parametric Sampling
Karsch, K.; Liu, C.; Kang, S.B.
European Conference on Computer Vision, 2012.
Automatically inferring depth from pictures and videos is a fundamental problem in computer vision. Understanding the spatial layout of a scene is a prerequisite for autonomous systems, and over the last few years, RGBD (RGB + depth) images have become increasingly useful for a large number of computer vision tasks. Using non-parametric learning, we show how to transfer depth values from a dataset of range scans to videos and even single images. Most interestingly, our findings suggest that appearance is highly correlated with depth.
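The core retrieval-and-fusion idea can be sketched in a few lines. This is a minimal illustration, not the published method: it assumes a global appearance feature per image and fuses candidate depths with a per-pixel median, whereas the full system also warps candidates to the query (e.g., via dense correspondence) and optimizes a spatio-temporal objective. All names below are illustrative.

```python
import numpy as np

def transfer_depth(query_feat, db_feats, db_depths, k=3):
    """Non-parametric depth transfer, heavily simplified.

    Retrieve the k database images whose global appearance features
    are closest to the query, then fuse their depth maps (here, a
    per-pixel median) as the depth estimate for the query image.
    """
    # Squared Euclidean distance from the query to every database feature.
    dists = np.sum((db_feats - query_feat) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]        # indices of the k best matches
    candidates = db_depths[nearest]        # shape (k, H, W) depth maps
    return np.median(candidates, axis=0)   # per-pixel fusion

# Toy example: 5 database images with 4-D features and 2x2 depth maps.
rng = np.random.default_rng(0)
db_feats = rng.normal(size=(5, 4))
db_depths = rng.uniform(1.0, 10.0, size=(5, 2, 2))
query = db_feats[2] + 0.01                 # query resembles database image 2
depth = transfer_depth(query, db_feats, db_depths, k=1)
```

With `k=1` the estimate is simply the depth map of the single best appearance match, which is exactly the "appearance correlates with depth" intuition in its crudest form.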
Current image editing software only allows 2D manipulations with no regard to the high level spatial information that is present in a given scene, and 3D modeling tools are sometimes complex and tedious for a novice user. Our goal is to extract 3D scene information from single images to allow for seamless object insertion, removal, and relocation. This process can be broken into three somewhat independent phases: luminaire inference, perspective estimation (depth, occlusion, camera parameters), and texture replacement. We are working on developing novel solutions to each of these phases, in hopes of creating a new class of physically-aware image editors.
By re-parameterizing a typical active contour (snake) with the surface of a mesh rather than a grid of pixels, we believe that we can find interesting properties of meshes that current techniques can only approximate (ridges, valleys, planar maps, and so on). Using these contours, we can also create non-photorealistic renderings such as vector art. By examining the contours over time, we can efficiently create vector animations from 3D mesh objects.
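For readers unfamiliar with snakes, the classic internal energy being minimized is worth writing out. The sketch below evaluates it on a free 2D polyline for simplicity; in our setting the same energy would be evaluated over vertices constrained to lie on the mesh surface. The function name and weights are illustrative.

```python
import numpy as np

def snake_internal_energy(pts, alpha=1.0, beta=1.0):
    """Internal energy of a closed active contour (snake).

    alpha weights stretching (first differences along the contour),
    beta weights bending (second differences). pts is an (N, 2)
    array of contour points, treated as a closed loop.
    """
    d1 = np.roll(pts, -1, axis=0) - pts                             # stretching
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)  # bending
    return alpha * np.sum(d1 ** 2) + beta * np.sum(d2 ** 2)

# A smooth closed contour has lower internal energy than a wiggly one,
# which is what drives the snake toward smooth feature curves.
t = np.linspace(0, 2 * np.pi, 32, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
noisy = circle + 0.1 * np.sin(7 * t)[:, None]
```

Minimizing this energy together with an external term (e.g., curvature of the mesh surface) is what pulls such a contour onto ridges and valleys.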
Most amateur photography has no aesthetic guidance. We wish to create new techniques for aiding users in creating aesthetically pleasing images based on guidelines that artists and photographers have used for years, such as the rule of thirds (among others). Currently, we are working on three specific algorithms: automatic cropping, automatic positioning, and dynamic rim lighting. Our hope is that these methods can be incorporated in physical media (cameras, phones, etc.), as well as in sites filled with user content, such as Facebook.
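As a toy illustration of how a rule-of-thirds guideline can be turned into something an automatic cropper could optimize, the hypothetical scoring function below rewards crops that place the subject near one of the four one-third intersection points. This is only a sketch of the general idea; the actual algorithms in the project are more involved.

```python
import numpy as np

def thirds_score(crop_box, subject):
    """Score a crop by subject proximity to a rule-of-thirds point.

    crop_box: (x, y, w, h); subject: (sx, sy), both in image coords.
    Returns a value in (0, 1]; 1 means the subject sits exactly on
    one of the four one-third intersection ("power") points.
    """
    x, y, w, h = crop_box
    sx, sy = subject
    # The four power points at 1/3 and 2/3 of the crop extents.
    points = [(x + fx * w, y + fy * h)
              for fx in (1 / 3, 2 / 3) for fy in (1 / 3, 2 / 3)]
    d = min(np.hypot(sx - px, sy - py) for px, py in points)
    diag = np.hypot(w, h)              # normalize by crop size
    return 1.0 / (1.0 + d / diag)

# A subject on the upper-left power point scores a perfect 1.0;
# a dead-centered subject scores lower.
best = thirds_score((0, 0, 300, 300), (100, 100))
centered = thirds_score((0, 0, 300, 300), (150, 150))
```

An automatic cropper could then search over candidate crop boxes and keep the highest-scoring one, optionally combined with other aesthetic terms.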
Over the summer of 2009, I worked at the Naval Research Lab with Dr. Mark Livingston. I researched and implemented graphical representations for occluded objects in the Battlefield Augmented Reality System (BARS). To evaluate the effectiveness of these representations, I conducted a user study to determine which representation should be used in an augmented reality system in development for dismounted US military soldiers.
As part of my undergraduate thesis, I researched ways to use Light Detection And Ranging (LIDAR) data for reducing wartime casualties and improving roadway safety. This included developing architectural surface reconstruction algorithms and inferring local and global statistics about point clouds. For this research, I worked with Prof. Ye Duan and Prof. Norbert Maerz.
Most of my undergraduate research with Prof. Ye Duan was focused on medical image segmentation. We began by developing semiautomatic segmentation techniques for 2D MRI slices, and soon incorporated some learning techniques to automate this process. We then extended this work to 3D to create a framework for general brain structure segmentation from MRI.
Working with Prof. Ye Duan, Dr. Judith Miles, and Prof. Shawn Christ, we developed new shape and structural analysis tests for brain structures. We used these techniques and previous MRI segmentation results to obtain new information about autism and phenylketonuria (PKU).
In a separate project, Brian Grinstead and I created a brain volume calculation tool, which is available in an online interface on my downloads page.
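The basic computation behind such a tool is simple once a segmentation is available: count the voxels inside the structure and multiply by the physical volume of one voxel. The sketch below illustrates that idea; the function and variable names are illustrative, not taken from the actual tool.

```python
import numpy as np

def structure_volume_mm3(mask, voxel_size_mm):
    """Volume of a segmented structure from a binary voxel mask.

    mask: 3-D boolean array, True inside the structure.
    voxel_size_mm: (dx, dy, dz) voxel spacing in millimetres.
    Returns the volume in cubic millimetres.
    """
    voxel_vol = float(np.prod(voxel_size_mm))      # mm^3 per voxel
    return np.count_nonzero(mask) * voxel_vol

# Toy mask: a 10x10x10 solid block of 1 x 1 x 2 mm voxels,
# i.e., 1000 voxels of 2 mm^3 each.
mask = np.zeros((20, 20, 20), dtype=bool)
mask[5:15, 5:15, 5:15] = True
vol = structure_volume_mm3(mask, (1.0, 1.0, 2.0))
```

Real MRI data adds complications (anisotropic spacing read from the header, partial-volume effects at boundaries), but the voxel-counting core stays the same.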
I also did some neuroimaging work during my summer internship at Washington University in 2007, where I worked with Dr. Robert Drzymala and Dr. Joseph Deasy. I developed a paperless documentation system for the Gamma Knife, as well as an import tool to visualize and quantify Gamma Knife results (the Gamma Knife is a non-invasive radiation device).