Email

ignaciogarciadorado{at}gmail{dot}com

Ignacio Garcia-Dorado has been working at Google Research since December 2015.

He received his Ph.D. from the Department of Computer Science at Purdue University, USA, where he worked as a research assistant under the supervision of Professor D. Aliaga, focusing on inverse procedural modeling, 3D urban reconstruction, and human vision.

He also holds M.S. degrees in Electrical Engineering (UPM, Spain, 2008), Computer Engineering (LTH, Sweden, 2008), and Computer Science (Purdue University, USA, 2014). From 2008 to 2010, Ignacio worked as a Computer Engineer at the European Space Agency (ESA) in Noordwijk, The Netherlands. From January to May 2010, he worked as a Research Assistant at McGill University, Canada. He was then awarded a Fulbright Scholarship to begin Ph.D. studies at Purdue University. During his Ph.D. at Purdue, he worked as a Research Intern at NVIDIA during the summer of 2013 and as a Research Assistant at U.C. Berkeley during the summer of 2014. After his Ph.D. defense in October 2015, he moved to Mountain View to work on the Computational Photography team at Google Research.


Publications

* The first two authors contributed equally to this work.

Research

Computational Imaging

Handheld Multi-Frame Super-Resolution (TOG).

  Compared to DSLR cameras, smartphone cameras have smaller sensors, which limits their spatial resolution; smaller apertures, which limits their light-gathering ability; and smaller pixels, which reduces their signal-to-noise ratio. The use of color filter arrays (CFAs) requires demosaicing, which further degrades resolution. In this paper, we supplant the use of traditional demosaicing in single-frame and burst photography pipelines with a multi-frame super-resolution algorithm that creates a complete RGB image directly from a burst of CFA raw images. We harness natural hand tremor, typical in handheld photography, to acquire a burst of raw frames with small offsets. These frames are then aligned and merged to form a single image with red, green, and blue values at every pixel site. This approach, which includes no explicit demosaicing step, serves to both increase image resolution and boost the signal-to-noise ratio. Our algorithm is robust to challenging scene conditions: local motion, occlusion, or scene changes. It runs at 100 milliseconds per 12-megapixel RAW input burst frame on mass-produced mobile phones. Specifically, the algorithm is the basis of the Super-Res Zoom feature, as well as the default merge method in Night Sight mode (whether zooming or not) on Google's flagship phone.
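
  The core of the method is the align-and-merge step. Below is a minimal NumPy sketch of the merge idea only, under strong simplifying assumptions: frames are single-channel and already coarsely aligned, the per-frame sub-pixel offsets are given, and an isotropic Gaussian weight stands in for the paper's anisotropic kernel regression and robustness model. All names here are illustrative, not the shipped implementation.

```python
import numpy as np

def merge_burst(frames, offsets, scale=2, sigma=0.5):
    """Merge a burst of coarsely aligned single-channel frames onto a
    finer grid by weighted accumulation (toy version of align-and-merge).

    frames  : list of (H, W) arrays, already coarsely aligned
    offsets : per-frame (dy, dx) residual sub-pixel shifts (e.g. hand tremor)
    scale   : super-resolution factor of the output grid
    """
    H, W = frames[0].shape
    acc = np.zeros((H * scale, W * scale))
    wgt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    for frame, (dy, dx) in zip(frames, offsets):
        # Where each input sample lands on the finer output grid.
        sy, sx = (ys + dy) * scale, (xs + dx) * scale
        iy = np.clip(np.rint(sy).astype(int), 0, H * scale - 1)
        ix = np.clip(np.rint(sx).astype(int), 0, W * scale - 1)
        # Isotropic Gaussian weight by distance to the output pixel centre
        # (the paper uses anisotropic kernels plus a robustness model).
        w = np.exp(-((sy - iy) ** 2 + (sx - ix) ** 2) / (2 * sigma ** 2))
        np.add.at(acc, (iy, ix), w * frame)
        np.add.at(wgt, (iy, ix), w)
    # Output pixels that received no samples remain zero in this toy version.
    return acc / np.maximum(wgt, 1e-8)
```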

Image Stylization: From Predefined to Personalized (IET Computer Vision).

  We present a framework for interactive design of new image stylizations using a wide range of predefined filter blocks. Both novel and off-the-shelf image filtering and rendering techniques are extended and combined to allow the user to unleash their creativity to intuitively invent, modify, and tune new styles from a given set of filters. In parallel to this manual design, we propose a novel procedural approach that automatically assembles sequences of filters, leading to unique and novel styles. An important aim of our framework is to allow for interactive exploration and design, as well as to enable videos and camera streams to be stylized on the fly. In order to achieve this real-time performance, we use the "Best Linear Adaptive Enhancement" (BLADE) framework -- an interpretable shallow machine learning method that simulates complex filter blocks in real time. Our representative results include over a dozen styles designed using our interactive tool, a set of styles created procedurally, and new filters trained with our BLADE approach.
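
  To make the real-time filtering idea concrete, here is a toy sketch of how a BLADE-style filter stage can be applied: each pixel is hashed by a cheap local feature into a bucket, and the learned linear filter for that bucket produces the output pixel. In the actual framework the buckets come from structure-tensor features and the filters are trained by least squares against reference outputs; in this sketch the filter bank is assumed given and the hash uses gradient orientation only.

```python
import numpy as np
from scipy.ndimage import convolve

def blade_apply(img, filter_bank):
    """Toy BLADE-style adaptive filtering: hash each pixel to a bucket by
    local gradient orientation, then apply that bucket's learned filter.

    img         : (H, W) float image
    filter_bank : (n_buckets, k, k) filters, assumed already trained
    """
    n_buckets = len(filter_bank)
    gy, gx = np.gradient(img)
    theta = np.mod(np.arctan2(gy, gx), np.pi)        # orientation in [0, pi)
    bucket = np.minimum((theta / np.pi * n_buckets).astype(int), n_buckets - 1)
    # Filtering the image with every filter is wasteful but keeps the toy
    # simple; a real implementation filters each pixel only once.
    responses = np.stack([convolve(img, f) for f in filter_bank])
    return np.take_along_axis(responses, bucket[None], axis=0)[0]
```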

Urban Procedural Models: Inverse Design and Traffic

Designing Large-Scale Interactive Traffic Animations for Urban Modeling (EG).

  We present an approach to interactively "paint" a desired vehicular traffic behavior and animation; the system then automatically computes a realistic 3D urban model yielding the specified behavior. We used our system to control traffic behaviors such as road occupancy, travel time, and CO emission. Our framework includes a novel traffic microsimulation approach, which yields the high performance needed for our interactive design tool. Our traffic manipulation strategy adapts an MCMC (Markov chain Monte Carlo) method to explore the solution space by performing a set of road network changes.
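
  As a toy illustration of the search strategy, the sketch below runs a Metropolis-Hastings loop over per-road parameters. The `simulate` and `target_cost` callables are placeholders standing in for the paper's traffic microsimulator and the user's painted behavior targets, and the proposal move (adding or removing a lane) is just one example of a road network change.

```python
import math
import random

def mcmc_design(init_params, simulate, target_cost, steps=1000, temp=1.0):
    """Metropolis-Hastings search over road-network parameters (toy sketch).

    init_params : dict mapping road id -> lane count (one possible encoding)
    simulate    : placeholder for the traffic microsimulator; returns metrics
    target_cost : placeholder scoring metrics against the painted target
    """
    params = dict(init_params)
    cost = target_cost(simulate(params))
    for _ in range(steps):
        proposal = dict(params)
        road = random.choice(list(proposal))          # pick a road to edit
        proposal[road] = max(1, proposal[road] + random.choice([-1, 1]))
        new_cost = target_cost(simulate(proposal))
        # Always accept improvements; accept regressions with Boltzmann prob.
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / temp):
            params, cost = proposal, new_cost
    return params
```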

Inverse Design of Urban Procedural Models (TOG).

  We have coupled an automatic inverse design approach for urban procedural modeling with forward procedural modeling. Urban indicators are intuitive metrics for measuring the desirability of urban areas. The relationship of indicators to the procedural model is in general unknown and complex, which has until now hindered their direct specification. We tackle the well-known open problem of controlling procedural modeling by providing a generalized mechanism that allows users to specify arbitrary target indicators and automatically compute the optimal parameters to obtain the desired output.
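
  To make the inverse-design loop concrete, here is a hypothetical sketch that searches procedural-model parameters to match user-specified target indicators. The `generate` and `indicators` callables are placeholders for a forward procedural model and its indicator computation, and the simple random local search stands in for the paper's actual optimizer.

```python
import numpy as np

def invert_indicators(generate, indicators, target, x0,
                      iters=500, step=0.1, seed=0):
    """Random local search for procedural parameters matching target indicators.

    generate   : placeholder forward procedural model, parameters -> layout
    indicators : placeholder mapping a layout to an indicator vector
    target     : desired indicator vector specified by the user
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best = np.linalg.norm(indicators(generate(x)) - target)
    for _ in range(iters):
        cand = x + rng.normal(scale=step, size=x.shape)  # perturb parameters
        err = np.linalg.norm(indicators(generate(cand)) - target)
        if err < best:                                   # keep only improvements
            x, best = cand, err
    return x
```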



Building Reconstruction

Automatic Urban Modeling using Volumetric Reconstruction with Surface Graph Cuts (Computers & Graphics).

  We have presented an automatic urban-scale modeling approach using volumetric reconstruction from calibrated aerial images with surface-graph-cut-based texture generation. Our method generates building proxies using voxel and color consistency, exploits surface graph cuts for recovering occluded facades and ground imagery and for assembling a seamless, plausible texture map, and outputs 3D urban models comparable to those of other public systems.
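
  The voxel-and-color-consistency idea can be sketched as a photo-consistency test: a voxel survives only if the calibrated views that see it agree on its color. The code below is an illustrative simplification (hypothetical threshold `tau`, nearest-pixel sampling, no visibility reasoning), not the paper's pipeline.

```python
import numpy as np

def photo_consistent(voxel_xyz, images, cams, tau=0.05):
    """Toy voxel color-consistency test for volumetric reconstruction.

    voxel_xyz : (3,) world position of a voxel centre
    images    : list of (H, W, 3) float images in [0, 1]
    cams      : list of (3, 4) projection matrices of the calibrated views
    tau       : hypothetical agreement threshold
    """
    colors = []
    p = np.append(voxel_xyz, 1.0)                    # homogeneous point
    for img, P in zip(images, cams):
        x = P @ p
        u, v = int(x[0] / x[2]), int(x[1] / x[2])    # nearest pixel
        if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
            colors.append(img[v, u])
    if len(colors) < 2:
        return False                                 # carve unseen voxels
    # Keep the voxel only if the observed colors agree across views.
    return float(np.std(colors, axis=0).mean()) < tau
```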

Automatic Modeling of Planar-Hinged Buildings (EG).

  We present a framework to automatically model and reconstruct buildings in a dense urban area. Our method is robust to noise and recovers planar features and sharp edges, producing a water-tight triangulation suitable for texture mapping and interactive rendering.
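
  Recovering planar features from noisy data is classically done with RANSAC plane fitting; the sketch below shows that standard building block (illustrative only, not necessarily the exact procedure used in the paper).

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """RANSAC plane fit on a noisy 3D point set.

    points : (N, 3) array; returns a unit normal n and offset d
             such that inliers satisfy |n . p + d| < tol.
    """
    rng = np.random.default_rng(seed)
    best = (None, 0.0, -1)
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                                 # degenerate sample
        n /= norm
        d = -n @ a
        inliers = int(np.sum(np.abs(points @ n + d) < tol))
        if inliers > best[2]:
            best = (n, d, inliers)
    return best[0], best[1]
```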



UrbanVision Project

UrbanVision is an open-source software system for visualizing alternative land use and transportation scenarios at scales ranging from large metropolitan areas to individual neighborhoods. The motivation behind this system is to fill the gap between the outputs of existing land use and transportation models and the automatic generation of 3D urban models and visualizations. The project is a collaborative effort between the University of California, Berkeley and Purdue University, led by Prof. Paul Waddell (Berkeley) and Profs. Daniel Aliaga and Bedrich Benes (both at Purdue). The initial system is deployed in the San Francisco Bay Area, CA, spanning over 7 million people and 1.5 million parcels of land.

My contribution:
  UrbanVision supports automatically generating a plausible set of 3D building envelope models based on GIS input and simulation outputs. We use this information to create a set of parametric building types (e.g., 14 in the case of San Francisco), which are configured using parameters to depict a rich variety of building geometries. While each base type (e.g., school, big-retail buildings, offices, etc.) captures common structural characteristics, in total a much larger number of building styles are possible due to the parameterization.
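
  As a toy example of what a parametric building type looks like, the hypothetical `OfficeType` below fixes a simple structure (a rectangular extrusion with a setback) and exposes the parameters that vary it into many concrete envelopes; the real types and their parameters are richer.

```python
from dataclasses import dataclass

@dataclass
class OfficeType:
    """Hypothetical parametric building type: a rectangular extrusion with
    a setback. The base type fixes the structure; the parameters vary it."""
    lot_w: float               # lot width (m)
    lot_d: float               # lot depth (m)
    floors: int
    floor_height: float = 3.5
    setback: float = 2.0

    def envelope(self):
        # Footprint after the setback, plus total height of the extrusion.
        w = self.lot_w - 2 * self.setback
        d = self.lot_d - 2 * self.setback
        return (w, d, self.floors * self.floor_height)

# The same type instanced with different simulation-driven parameters:
low_rise = OfficeType(20, 30, floors=4).envelope()       # (16, 26, 14.0)
tower = OfficeType(40, 40, floors=25, setback=5).envelope()
```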



Projector-Camera Systems

Fully Automatic Multi-Projector Calibration with an Uncalibrated Camera (CVPRW).

  We describe a fully automated brightness and geometric calibration system for a multi-projector display using an uncalibrated camera. The geometric calibration achieves pixel-accurate results within minutes using an inexpensive camera and vision-based techniques. Because the entire process is automatic, the system can be re-calibrated for each use, if required, without imposing any additional burden on the user. Moreover, our method does not make any assumptions concerning the number of projectors or their configuration.
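
  For a planar screen, the geometric part of projector-camera calibration reduces to estimating a homography between projector pixels and their camera observations. The sketch below shows the standard direct linear transform (DLT) for that step; it illustrates the principle rather than the paper's full multi-projector method.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform (DLT) homography from >= 4 point pairs.

    src : projected calibration points in projector pixel coordinates
    dst : the same points as observed by the camera
    Returns H with dst ~ H @ src (homogeneous), assuming a planar screen.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A: the last right singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```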

CV