Computer Graphics and Visualization Lab
Department of Computer Science at Purdue University

CGVLab Projects

Acquisition and Modeling
Appearance Editing: Modifying the Appearance of Real-World Objects

Appearance editing offers a unique way to view real-world objects with visually altered appearances or with overlaid visualizations. By carefully controlling how an object is illuminated using digital projectors, we obtain stereoscopic imagery for any number of observers, with everything visible to the naked eye (i.e., no head-mounted displays or goggles are needed). This ability is useful for a variety of applications, including scientific visualization, virtual restoration of cultural heritage, and display systems.
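For intuition, a common per-pixel radiometric compensation model from the projector-camera literature (a sketch only, not necessarily the exact formulation used in this project) relates the camera observation C at a surface point to the projector input P:

  \[ C = V\,P + F, \qquad P = V^{-1}\,(D - F), \]

where V is the color-mixing matrix capturing projector-surface-camera light transport, F is the ambient contribution, and D is the desired (edited) appearance; the computed P is clamped to the projector's gamut.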
Urban Modeling and Visualization

Our project efforts have focused on obtaining digital models of large-scale urban structures in order to enable simulating physical phenomena and human activities in city-size environments. To date, we have developed several algorithms and large-scale software systems using ground-level imagery, aerial imagery, GIS data, and forward and inverse procedural modeling to create/modify 3D and 2D urban models.
Embedding Information into Physical Objects

Our work provides methods to embed information into a physical object for a variety of purposes, including genuinity detection, tamper detection, and multiple appearance generation. Genuinity detection refers to encoding fragile or robust signatures so that a copied or tampered version can be differentiated from the original object. Multiple appearance generation refers to generalizing the encoded information from a signature to a different appearance of the same physical object.
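As a toy illustration of the fragile end of the spectrum (a hypothetical example, not the encoding scheme developed in this work): quantize the scanned geometry of an object and hash it, so that any tampering beyond the quantization step changes the digest.

  # Toy fragile geometric signature (hypothetical example, not this project's method).
  import hashlib

  def fragile_signature(vertices, step=0.1):
      """vertices: iterable of (x, y, z) tuples; step: quantization size (mm)."""
      h = hashlib.sha256()
      for x, y, z in vertices:
          # Quantization makes the signature tolerant to sub-step scanning noise.
          q = (round(x / step), round(y / step), round(z / step))
          h.update(repr(q).encode())
      return h.hexdigest()

  original = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 5.0, 2.5)]
  tampered = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 5.0, 3.5)]
  print(fragile_signature(original) == fragile_signature(tampered))  # False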
A Photogeometric Framework for Capturing 3D Objects

We introduce a photogeometric framework for acquiring 3D objects with sub-millimeter accuracy. The defining characteristic of our framework is leveraging the complementary advantages of photometric and geometric acquisition. The two approaches are tightly integrated in an iterative acquisition process that achieves self-calibration, multi-viewpoint sampling, and a high level of detail.
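A sketch of the photometric side, assuming a Lambertian surface (the classical photometric-stereo model; the framework's contribution lies in integrating this with geometric acquisition, which is not shown here): under K >= 3 known light directions, the observed intensities satisfy

  \[ I_k = \rho\,(\mathbf{n} \cdot \mathbf{l}_k), \qquad k = 1, \dots, K, \]

so the scaled normal \(\rho\,\mathbf{n}\) is recovered by linear least squares from the stacked equations; the normals contribute fine surface detail, while the geometric depth anchors the overall shape.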
Pose-Free 3D Reconstruction

Conventional 3D reconstruction from digital photographs requires the camera pose of each photograph to be known in advance (pre-calibration) or computes it during reconstruction (self-calibration). We have developed a mathematical framework in which the parameters defining the camera poses are eliminated from the nonlinear system of 3D reconstruction equations, which leads to significantly more robust and accurate 3D models.
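For reference, the conventional formulation couples every scene point with the pose of every camera that observes it (a standard pinhole model; the algebraic elimination itself is the contribution and is not reproduced here):

  \[ \lambda_{ij}\,\mathbf{x}_{ij} = K_i\,[\,R_i \mid \mathbf{t}_i\,]\,\mathbf{X}_j, \]

where \(\mathbf{x}_{ij}\) is the observed projection of 3D point \(\mathbf{X}_j\) in image i. Conventional pipelines solve jointly for the poses \((R_i, \mathbf{t}_i)\) and the points \(\mathbf{X}_j\); in the pose-free formulation the pose parameters are eliminated, leaving a nonlinear system in the scene points and image measurements alone.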
Modeling Scenes with Strong Inter-reflections

Structured light is a powerful approach for acquiring 3D models of real-world scenes. The scene is illuminated with a custom pattern of light and imaged with a digital camera. An important challenge in structured-light acquisition comes from glossy and specular objects, which reflect the patterns of light and create false positives. We have developed an iterative and adaptive algorithm that reduces inter-reflections within the scene, which leads to robust pixel classification and to accurate, dense 3D reconstruction.
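A minimal sketch of robust pixel classification, assuming each binary pattern is projected together with its inverse (a standard structured-light practice; the iterative inter-reflection reduction itself is specific to our algorithm and not shown):

  import numpy as np

  def classify_pixels(img_pattern, img_inverse, margin=10):
      """Classify each pixel as lit (1), unlit (0), or uncertain (-1).

      img_pattern, img_inverse: grayscale captures of a binary pattern and its
      inverse. A pixel is labeled only when the two captures differ by more than
      `margin`; low-contrast pixels, e.g. those corrupted by inter-reflections,
      are left uncertain for later passes.
      """
      diff = img_pattern.astype(np.int32) - img_inverse.astype(np.int32)
      labels = np.full(diff.shape, -1, dtype=np.int8)
      labels[diff > margin] = 1
      labels[diff < -margin] = 0
      return labels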
Modeling Repetitive Motions in Real-World 3D Scenes

Most 3D acquisition systems assume that the scene is static. We have taken significant steps towards supporting the acquisition of dynamic scenes by developing algorithms that detect and leverage repetitive motion in the scene (e.g., a person walking or a flag waving). Our approach produces space-time 3D models using as few as two cameras or one camera-projector pair.
ModelCamera: Real-Time Modeling from Dense Color and Sparse Depth

The ModelCamera is a fast, easy-to-use, and inexpensive 3D scene modeling system. It acquires dense color (720×480 video frames) augmented with sparse depth (7×7 to 11×11 depth samples). The frames are registered and merged into an evolving model at a rate of five frames per second, and the model is displayed continuously to provide immediate operator feedback.
Occlusion-Resistant Camera Designs: Acquiring Active Environments

Obtaining image sequences of popular and active environments is often hindered by unwanted occluders that interfere with the capture. In this work, we propose a family of Occlusion-Resistant Camera designs for acquiring such environments. Our cameras explicitly remove interfering occluders from the acquired data in real time, during live capture.
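For intuition only: with a static viewpoint, a pixelwise temporal median over a window of frames suppresses transient occluders that cover a pixel less than half of the time (a generic software baseline, not the occlusion-resistant camera designs proposed in this work):

  import numpy as np

  def median_background(frames):
      """frames: list of aligned frames (H, W[, 3]) from a static viewpoint.
      Returns an occluder-free estimate, assuming each pixel is unoccluded in
      more than half of the frames."""
      return np.median(np.stack(frames, axis=0), axis=0).astype(frames[0].dtype)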
Sea of Images

We present an image-based approach to providing interactive and photorealistic walkthroughs of complex indoor environments. Our strategy is to obtain a dense sampling of viewpoints in a large static environment with omnidirectional images, replacing the challenges of 3D reconstruction with the easier problems of motorized-cart control, dense image-based sampling, and compression.
Rendering
Camera Model Design

The camera model design paradigm advocates designing the set of rays that best suits a given application and optimizing it dynamically according to the data currently sampled. Camera model design is a flexible framework for generating images with multiple viewpoints and a variable sampling rate. Like conventional images, the generated images are continuous and non-redundant, and they can be computed efficiently with the help of graphics hardware.
Study of Shape Perception Using Volumetric 3D Images

We study 3D shape perception using a volumetric 3D display that allows several users to simultaneously observe a sculpture of light without the disadvantages of uncomfortable eyewear, jittery trackers, rendering-pipeline latency, and strained vergence and accommodation. A secondary goal of this project is to evaluate this alternative display technology.
Graphics and Education
Mixed Reality and Tablet PCs

We are developing novel and intuitive interfaces for educational scenarios. In particular, we have developed a mixed-reality tabletop (MRT) and are creating portable Tablet PC applications.
Effective Distance Learning through Sustained Interactivity and Visual Realism

The goal is to research, implement, deploy and assess a distance learning system that extends the real classroom to accommodate remotely located students. The system will convey to both remote and local participants a strong sense that the remote participants are actually present in the classroom.
Scientific Computing and Visualization
High-Fidelity Visualization of Large-Scale Simulations

The goal of our team was to produce a visualization of the September 11, 2001 attacks on the Pentagon and the World Trade Center. The immediate motivation for the project was to understand the behavior of the buildings under impact. The longer-term motivation was to establish a path for producing high-quality visualizations of large-scale simulations.
Geometric Computing with Graphics Hardware Support

The project explores the capabilities of the Larrabee platform. At this time, the project focuses on developing new algorithms suited to the architecture and on evaluating them fairly, in terms of performance and accuracy, against GP-GPU implementations.
Model Reduction for Dynamical Systems

Model reduction and real-time control find applications in diverse areas. These include simulation and control of large-scale structures, weather prediction, air quality management, molecular dynamics simulations, simulation and control of chemical reactors (e.g., Chemical Vapor Deposition), and simulation and control of micro-electro-mechanical systems (e.g., micromirrors), to name but a few. We seek to replace a large-scale system of differential or difference equations by a system of substantially lower dimension that has nearly the same response characteristics. Ultimately, we intend to utilize these new reduced-order modeling techniques to design low-order, real-time controllers for large-scale dynamical systems.
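As a sketch of the underlying projection framework (standard Petrov-Galerkin reduction; the choice of projection subspaces is where the research effort lies and is not specified here), a large linear state-space model

  \[ \dot{x} = A x + B u, \qquad y = C x, \qquad x \in \mathbb{R}^n, \]

is approximated by \(x \approx V x_r\) with \(V \in \mathbb{R}^{n \times r}\), \(r \ll n\), and a test basis W satisfying \(W^{\top} V = I_r\), giving the reduced-order model

  \[ \dot{x}_r = W^{\top} A V\, x_r + W^{\top} B\, u, \qquad y \approx C V\, x_r. \]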
Computational Geometry
Robust Computational Geometry

Robust computational geometry is a fundamental computer science problem with a long research history. The task is to implement analytic geometry with computer arithmetic. We are currently developing computational geometry algorithms that employ floating point arithmetic and numerical algorithms, yet have rigorous running time and error bounds.
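A minimal sketch in the floating-point-filter style common in this area (the error constant below is deliberately conservative, and the rarely taken fallback uses exact rational arithmetic; this is an illustration of the general technique, not our specific algorithms):

  from fractions import Fraction
  import sys

  EPS = sys.float_info.epsilon  # machine epsilon for double precision

  def orient2d(a, b, c):
      """Sign of the signed area of triangle (a, b, c): +1 CCW, -1 CW, 0 collinear.
      Fast floating-point evaluation with a conservative error bound; falls back
      to exact rational arithmetic only when the floating-point result is uncertain."""
      det = (a[0] - c[0]) * (b[1] - c[1]) - (a[1] - c[1]) * (b[0] - c[0])
      errbound = 8 * EPS * (abs((a[0] - c[0]) * (b[1] - c[1])) +
                            abs((a[1] - c[1]) * (b[0] - c[0])))
      if abs(det) > errbound:
          return (det > 0) - (det < 0)
      # Exact evaluation with rationals (slow path).
      ax, ay = Fraction(a[0]) - Fraction(c[0]), Fraction(a[1]) - Fraction(c[1])
      bx, by = Fraction(b[0]) - Fraction(c[0]), Fraction(b[1]) - Fraction(c[1])
      det_exact = ax * by - ay * bx
      return (det_exact > 0) - (det_exact < 0)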
Path Planning

We are developing path planning algorithms for robots in complex environments. We are combining configuration space methods with randomized planning to handle crowded environments and narrow channels, which have proved difficult for pure randomized planners.
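A minimal randomized-planning sketch in a 2D configuration space with disc obstacles (a generic RRT shown only to illustrate the randomized side; the combination with configuration-space methods is the subject of the project):

  import math, random

  def rrt(start, goal, obstacles, step=0.5, iters=5000, goal_tol=0.5, bounds=(0.0, 10.0)):
      """Grow a rapidly-exploring random tree from `start` toward `goal`.
      obstacles: list of (cx, cy, r) discs; collision checking is point-based
      (edges are not checked) to keep the sketch short."""
      free = lambda p: all(math.hypot(p[0] - cx, p[1] - cy) > r for cx, cy, r in obstacles)
      nodes, parent = [start], {0: None}
      for _ in range(iters):
          q = (random.uniform(*bounds), random.uniform(*bounds))        # random sample
          i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
          near = nodes[i]
          d = math.dist(near, q) or 1e-9
          new = (near[0] + step * (q[0] - near[0]) / d,                 # steer one step toward q
                 near[1] + step * (q[1] - near[1]) / d)
          if not free(new):
              continue
          parent[len(nodes)] = i
          nodes.append(new)
          if math.dist(new, goal) < goal_tol:                           # reached the goal region
              path, j = [], len(nodes) - 1
              while j is not None:
                  path.append(nodes[j])
                  j = parent[j]
              return path[::-1]
      return None

  # Example: plan around a single disc obstacle (hypothetical scene).
  print(rrt((1.0, 1.0), (9.0, 9.0), [(5.0, 5.0, 1.5)]))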
Former Projects
Massive Model Rendering

Rendering and visualizing large 3D synthetic models is a crucial component of many engineering disciplines and is becoming increasingly important for simulations, gaming, and education. Although rendering hardware continues to improve, the desire to render ever-larger models continues to grow. Historically, large models could only be rendered on highly specialized computers; however, today's PCs are an attractive platform for interactive rendering as well. In this work, we investigate several approaches to accelerating the rendering of large 3D models.
Computer-Aided Mechanical Assembly Design Using Configuration Spaces

We are developing computer-aided mechanical design software in which all tasks are performed within a single computational paradigm. In particular, we have developed a prototype design environment called HIPAIR for general planar assemblies. HIPAIR supports the key design tasks of simulation, parametric design, and functional tolerancing for a broad range of mechanical systems, such as mechanisms, part feeders, robotic arms, and knee prostheses.
Spatial Geometric Constraints and Design Intent

This project aims to advance the state of the art in geometric constraint solving, especially spatial constraint solving. Key results of the work include an efficient, general-purpose decomposition algorithm for large-scale constraint problems and many new techniques for solving the nonlinear algebraic equation systems that arise in the decomposition.
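As a small example of the nonlinear systems that arise (an illustrative problem, not one taken from the project), fixing three points in space by their pairwise distances yields

  \[ \lVert \mathbf{p}_i - \mathbf{p}_j \rVert^2 = d_{ij}^2, \qquad (i,j) \in \{(1,2),(1,3),(2,3)\}, \quad \mathbf{p}_i \in \mathbb{R}^3, \]

a system whose solution is unique only up to a rigid motion; decomposition isolates many such small, well-constrained subsystems so that each can be solved separately and the results recombined.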
 
