Computer Science & Engineering
Schematic Surface Reconstruction
This paper introduces a schematic representation for architectural scenes together with robust algorithms for reconstruction from sparse 3D point cloud data. The representation is extremely concise, composed of a handful of planar curves, and easily interpretable by humans. By incorporating a displacement map on top of the schematic surface, it is possible to recover fine details. Experiments show the ability to reconstruct extremely clean and simple models from sparse structure-from-motion point clouds of complex architectural scenes.
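The displacement-map idea can be sketched in a few lines (hypothetical NumPy code, not the paper's implementation): sweep a planar schematic curve vertically and offset each sample along the curve's in-plane normal by a scalar displacement to recover fine detail.

```python
import numpy as np

def displaced_surface(profile, heights, displacement):
    """Sweep a planar profile curve vertically and offset each sample
    along the local in-plane normal by a scalar displacement.

    profile: (N, 2) planar polyline (x, y) -- the schematic curve
    heights: (M,) z-values of the vertical sweep
    displacement: (M, N) scalar offsets (the displacement map)
    Returns an (M, N, 3) grid of 3D surface points.
    """
    # Approximate 2D tangents along the profile, then rotate by
    # 90 degrees to obtain in-plane normals.
    tangents = np.gradient(profile, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)

    grid = np.empty((len(heights), len(profile), 3))
    for i, z in enumerate(heights):
        grid[i, :, :2] = profile + displacement[i][:, None] * normals
        grid[i, :, 2] = z
    return grid
```

With an all-zero displacement map the sweep reproduces the bare schematic surface; a nonzero map pushes samples off the base surface to capture relief.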
Multicore Bundle Adjustment
We present the design and implementation of new inexact Newton type Bundle Adjustment algorithms that exploit hardware parallelism for efficiently solving large scale 3D scene reconstruction problems. We explore the use of multicore CPUs as well as multicore GPUs for this purpose. We show that overcoming the severe memory and bandwidth limitations of current generation GPUs not only leads to more space efficient algorithms, but also to surprising savings in runtime. Our CPU based system is up to ten times and our GPU based system is up to thirty times faster than current state of the art methods, while maintaining comparable convergence behavior.
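The inexact Newton step at the core of such solvers can be illustrated with a minimal matrix-free sketch (hypothetical NumPy code, far from the paper's parallel CPU/GPU implementation): solve the damped normal equations approximately with conjugate gradients, using only Jacobian products so the normal-equations matrix is never formed.

```python
import numpy as np

def lm_step_cg(J, r, damping=1e-3, cg_iters=50, tol=1e-12):
    """One damped (Levenberg-Marquardt style) inexact Newton step:
    solve (J^T J + damping * I) delta = -J^T r approximately with
    conjugate gradients, via Jacobian-vector products only."""
    def A(v):  # matrix-free (J^T J + damping * I) @ v
        return J.T @ (J @ v) + damping * v

    b = -J.T @ r
    x = np.zeros(J.shape[1])
    res = b - A(x)
    p = res.copy()
    rs = res @ res
    for _ in range(cg_iters):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        res -= alpha * Ap
        rs_new = res @ res
        if rs_new < tol:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return x
```

Truncating the inner CG loop early is what makes the step "inexact"; the savings compound because each CG iteration needs only sparse Jacobian products, which parallelize well on both CPUs and GPUs.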
We present a novel approach for dense reconstruction from a single view of a repetitive scene structure. We develop an MRF-based framework to balance the high-level constraint of geometric repetition/symmetry against the standard constraints of photometric consistency and spatial smoothness of the reconstructed scene.
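A toy one-dimensional version of such an energy (hypothetical code; the actual formulation differs) makes the balance between the three terms explicit:

```python
import numpy as np

def mrf_energy(depths, photo_cost, repeat_period, w_smooth=1.0, w_repeat=1.0):
    """Toy 1D MRF energy with three competing terms:
    - data: per-pixel photometric cost of the chosen depth label,
    - smooth: pairwise smoothness between neighboring pixels,
    - rep: repetition term tying each pixel to the one a period away.

    depths: (N,) integer depth labels
    photo_cost: (N, L) cost of assigning label l at pixel i
    """
    data = photo_cost[np.arange(len(depths)), depths].sum()
    smooth = np.abs(np.diff(depths)).sum()
    rep = np.abs(depths[repeat_period:] - depths[:-repeat_period]).sum()
    return data + w_smooth * smooth + w_repeat * rep
```

Minimizing such an energy (e.g. with graph cuts or belief propagation) trades photometric evidence against the prior that the scene repeats with the detected period.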
We propose a novel, robust, and efficient framework to analyze large repetitive structures in urban scenes. A particular contribution of the approach is that it finds the salient boundaries of the repeating elements even when the repetition exists along only one direction. To evaluate the repetition quality of an image patch w.r.t. a given repetition interval, we introduce a novel measure that suppresses the ambiguity arising from integer multiples of repetition intervals and determines the region boundary accurately. Experiments demonstrate the robustness and repeatability of the proposed repetition detection. The detected repeating elements can serve as features for scene recognition.
Changchang Wu, Jan-Michael Frahm, Marc Pollefeys, "Detecting Large Repetitive Structures with Salient Boundaries", ECCV 2010 (poster, video)
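The integer-multiple ambiguity can be illustrated on a 1D signal (a hypothetical stand-in for the paper's measure, not its actual definition): a shift-similarity score alone peaks at every multiple of the true interval, so the sketch below penalizes an interval by the best score among its proper divisors.

```python
import numpy as np

def shift_similarity(signal, t):
    """Normalized correlation between the signal and its copy shifted by t."""
    a, b = signal[:-t], signal[t:]
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def interval_score(signal, t):
    """Hypothetical repetition-quality measure: reward similarity at
    shift t, but subtract the best similarity at a proper divisor of t,
    so integer multiples of the true interval do not score highest."""
    divisors = [d for d in range(2, t) if t % d == 0]
    penalty = max((shift_similarity(signal, d) for d in divisors), default=0.0)
    return shift_similarity(signal, t) - max(penalty, 0.0)
```

On a signal with period 4, both shifts 4 and 8 correlate perfectly, but only the score at 4 survives the divisor penalty, so the smallest repeating interval wins.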
We have developed a novel class of viewpoint invariant features to deal with large viewpoint changes in 3D reconstruction. By leveraging the 3D geometry recovered from stereo, we extract features in the orthogonal view of 3D local patches to achieve projective invariance. The 3D information of each VIP feature determines the 3D similarity transformation from a single match, and allows us to develop an efficient 3D matching algorithm by testing hypotheses hierarchically.
Changchang Wu, Brian Clipp, Xiaowei Li, Jan-Michael Frahm, Marc Pollefeys, "3D Model Matching with Viewpoint Invariant Patches (VIPs)", CVPR 2008 (oral, video).
The code package for this project is available at vipcode_v1.zip (GPL).
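Recovering a 3D similarity from a single match can be sketched as follows (hypothetical code; each VIP is simplified here to a 3D position, a local orthonormal frame, and a scale, rather than the full VIP descriptor geometry):

```python
import numpy as np

def similarity_from_vip_match(p1, R1, s1, p2, R2, s2):
    """Recover the 3D similarity (scale, rotation, translation)
    mapping one feature to its match, so that x2 = scale * R @ x1 + t.

    p: 3D position, R: local orthonormal frame (3x3), s: scale.
    """
    scale = s2 / s1
    rotation = R2 @ R1.T               # aligns frame 1 with frame 2
    translation = p2 - scale * rotation @ p1
    return scale, rotation, translation
```

Because one match fully determines the transformation, hypotheses are cheap to generate and can be verified hierarchically against the remaining matches, which is what makes the matching efficient.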
3D Reconstruction of Internet Photo Collections
I worked on the SfM parts of the Modeling and Recognition of Landmark Image Collections Using Iconic Scene Graphs and Building Rome on a Cloudless Day projects, specifically on GPU feature detection, feature matching, incremental reconstruction, and bundle adjustment.