Vladimir (Vova) Kim
Senior Research Scientist, Adobe Research, Seattle
I work on geometry analysis algorithms at the intersection of graphics, vision, and machine learning, enabling novel interfaces for creative tasks. My recent research focuses on making it easier to understand, model, manipulate, and process geometric data such as models of 3D objects, interior environments, articulated characters, and fonts.

[
Bio

Dr. Vladimir Kim is a Senior Research Scientist at Adobe Research. His research focuses on the processing and analysis of large geometric datasets through novel machine learning and optimization algorithms. He was a postdoctoral scholar at Stanford University (2013–2015), received his PhD from the Computer Science Department at Princeton University (2008–2013), and received his BA in Mathematics and Computer Science from Simon Fraser University (2003–2008). He has been a member of the International Program Committee for SIGGRAPH (2015) and for the Symposium on Geometry Processing (2013–2018). He regularly reviews for top graphics and vision venues such as SIGGRAPH, Transactions on Graphics, CVPR, ICCV, and ECCV.

] [
Personal

The short version of Vladimir is Vova.
My name reflects my mixed ethnic background. My father, George Kim, is Korean and my mother, Irina Kozyreva, is Russian.
The 'G' in the middle of my name is a patronym; the full Russian version of my name is Vladimir Georgievich Kim.
I am originally from the small town of Kara-Balta, Kyrgyzstan.

]

vokim@adobe.com
vova.g.kim@gmail.com


Publications
OptCuts: Joint Optimization of Surface Cuts and Parameterization
Minchen Li, Danny Kaufman, Vladimir G. Kim, Justin Solomon, and Alla Sheffer
SIGGRAPH Asia 2018

[Code and Data TBA]  [Paper: 56mb  8mb]  [
Abstract

Low-distortion mapping of three-dimensional surfaces to the plane is a critical problem in geometry processing. The intrinsic distortion introduced by such UV mappings is highly dependent on the choice of surface cuts that form seamlines which break mapping continuity. Parameterization applications typically require UV maps with an application-specific upper bound on distortion to avoid mapping artifacts; at the same time they seek to reduce cut lengths to minimize discontinuity artifacts. We propose OptCuts, an algorithm for jointly optimizing the parameterization and cutting of a three-dimensional mesh. OptCuts starts from an arbitrary initial embedding and a user-requested distortion bound. It requires no parameter setting and automatically seeks to minimize seam lengths subject to satisfying the distortion bound of the mapping computed using these seams. OptCuts alternates between topology and geometry update steps that consistently decrease distortion and seam length, producing a UV map with compact boundaries that strictly satisfies the distortion bound. OptCuts automatically produces high-quality, globally bijective UV maps without user intervention. While OptCuts can thus be a highly effective tool to create new mappings from scratch, we also show how it can be employed to improve pre-existing embeddings. Additionally, when semantic or other priors on seam placement are desired, OptCuts can be extended to respect these user preferences as constraints during optimization of the parameterization. We demonstrate the scalable performance of OptCuts on a wide range of challenging benchmark parameterization examples, as well as in comparisons with state-of-the-art UV methods and commercial tools.
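For intuition about the kind of per-triangle distortion energy that parameterization methods like OptCuts bound, here is a minimal sketch using the symmetric Dirichlet energy, a common choice in the parameterization literature (the paper's exact distortion measure may differ):

```python
import numpy as np

def symmetric_dirichlet(J):
    """Symmetric Dirichlet distortion of a 2x2 per-triangle Jacobian J.

    Equals ||J||_F^2 + ||J^{-1}||_F^2, and attains its minimum of 4
    exactly when J is a rotation, i.e. a distortion-free map.
    """
    Ji = np.linalg.inv(J)
    return np.sum(J ** 2) + np.sum(Ji ** 2)

print(symmetric_dirichlet(np.eye(2)))  # → 4.0 for the identity map
```

Since a rigid map scores exactly 4, a user-requested distortion bound for such an energy is a threshold above this minimum.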

]  [
BibTex

@article{Li18,
Author = {Minchen Li and Danny Kaufman and Vladimir G. Kim and Justin Solomon and Alla Sheffer},
Journal = {SIGGRAPH Asia},
Title = {OptCuts: Joint Optimization of Surface Cuts and Parameterization},
Year = {2018}}

]  
3D-CODED : 3D Correspondences by Deep Deformation
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, and Mathieu Aubry
ECCV 2018

[Code and Data]  [Paper: 10mb  2mb]  [
Abstract

We present a new deep learning approach for matching deformable shapes by introducing Shape Deformation Networks, which jointly encode 3D shapes and correspondences. This is achieved by factoring the surface representation into (i) a template that parameterizes the surface, and (ii) a learnt global feature vector that parameterizes the transformation of the template into the input surface. By predicting this feature for a new shape, we implicitly predict correspondences between this shape and the template. We show that these correspondences can be improved by an additional step which improves the shape feature by minimizing the Chamfer distance between the input and transformed template. We demonstrate that our simple approach improves on state-of-the-art results on the difficult FAUST-inter challenge, with an average correspondence error of 2.88 cm. We show, on the TOSCA dataset, that our method is robust to many types of perturbations, and generalizes to non-human shapes. This robustness allows it to perform well on real, unclean meshes from the SCAPE dataset.
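The refinement step above minimizes the Chamfer distance between the input and the transformed template. A minimal NumPy sketch of the symmetric (squared) Chamfer distance between two point sets, not the authors' implementation:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3).

    For each point in one set, take the squared distance to its nearest
    neighbor in the other set, then average both directions.
    """
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Identical point sets have zero Chamfer distance.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(pts, pts))  # → 0.0
```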

]  [
BibTex

@article{Groueix18a,
Author = {Thibault Groueix and Matthew Fisher and Vladimir G. Kim and Bryan C. Russell and Mathieu Aubry},
Journal = {ECCV},
Title = {3D-CODED : 3D Correspondences by Deep Deformation},
Year = {2018}}

]  
Real-Time Hair Rendering using Sequential Adversarial Networks
Lingyu Wei, Liwen Hu, Vladimir G. Kim, Ersin Yumer, and Hao Li
ECCV 2018

[Code and Data]  [Paper: 20mb  1mb]  [
Abstract

We present an adversarial network for rendering photorealistic hair as an alternative to conventional computer graphics pipelines. Our deep learning approach requires neither low-level parameter tuning nor ad-hoc asset design. Our method simply takes a strand-based 3D hair model as input and provides intuitive user control over color and lighting through reference images. To handle the diversity of hairstyles and their appearance complexity, we disentangle hair structure, color, and illumination properties using a sequential GAN architecture and a semi-supervised training approach. We also introduce an intermediate step that converts edge activation maps to orientation fields to ensure a successful CG-to-photoreal transition while preserving the hair structures of the original input data. As we only require a feed-forward pass through the network, our rendering runs in real time. We demonstrate the synthesis of photorealistic hair images on a wide range of intricate hairstyles and compare our technique with state-of-the-art hair rendering methods.

]  [
BibTex

@article{Wei18,
Author = {Lingyu Wei and Liwen Hu and Vladimir G. Kim and Ersin Yumer and Hao Li},
Journal = {ECCV},
Title = {Real-Time Hair Rendering using Sequential Adversarial Networks},
Year = {2018}}

]  
SEETHROUGH: Finding Objects in Heavily Occluded Indoor Scene Images
Moos Hueting, Pradyumna Reddy, Ersin Yumer, Vladimir G. Kim, Nathan Carr, and Niloy J. Mitra
3DV 2018 (oral)

[Code and Data]  [Paper: 30mb  8mb]  [
Abstract

Discovering 3D arrangements of objects from single indoor images is important given its many applications, including interior design and content creation. Although heavily researched in recent years, existing approaches break down under medium or heavy occlusion, as the core object detection module starts failing in the absence of directly visible cues. Instead, we take into account holistic contextual 3D information, exploiting the fact that objects in indoor scenes co-occur mostly in typical near-regular configurations. First, we use a neural network trained on real annotated indoor images to extract 2D keypoints, and feed them to a 3D candidate object generation stage. Then, we solve a global selection problem among these 3D candidates using pairwise co-occurrence statistics discovered from a large 3D scene database. We iterate the process, allowing candidates with low keypoint response to be incrementally detected based on the location of already discovered nearby objects. Focusing on chairs, we demonstrate significant performance improvement over combinations of state-of-the-art methods, especially for scenes with moderately to severely occluded objects.

]  [
BibTex

@article{Hueting18,
Author = {Moos Hueting and Pradyumna Reddy and Ersin Yumer and Vladimir G. Kim and Nathan Carr and Niloy J. Mitra},
Journal = {3DV},
Title = {SEETHROUGH: Finding Objects in Heavily Occluded Indoor Scene Images},
Year = {2018}}

]  
Learning Material-Aware Local Descriptors for 3D Shapes
Hubert Lin, Melinos Averkiou, Evangelos Kalogerakis, Balazs Kovacs, Siddhant Ranade, Vladimir G. Kim, Siddhartha Chaudhuri, and Kavita Bala
3DV 2018

[Code and Data TBA]  [Paper: 6mb  1mb]  [
Abstract

Material understanding is critical for design, geometric modeling, and analysis of functional objects. We enable material-aware 3D shape analysis by employing a projective convolutional neural network architecture to learn material-aware descriptors from view-based representations of 3D points for point-wise material classification or material-aware retrieval. Unfortunately, only a small fraction of shapes in 3D repositories are labeled with physical materials, posing a challenge for learning methods. To address this challenge, we crowdsource a dataset of 3080 3D shapes with part-wise material labels. We focus on furniture models, which exhibit interesting structure and material variability. In addition, we also contribute a high-quality expert-labeled benchmark of 115 shapes from Herman-Miller and IKEA for evaluation. We further apply a mesh-aware conditional random field, which incorporates rotational and reflective symmetries, to smooth our local material predictions across neighboring surface patches. We demonstrate the effectiveness of our learned descriptors for automatic texturing, material-aware retrieval, and physical simulation.

]  [
BibTex

@article{Lin18,
Author = {Hubert Lin and Melinos Averkiou and Evangelos Kalogerakis and Balazs Kovacs and Siddhant Ranade and Vladimir G. Kim and Siddhartha Chaudhuri and Kavita Bala},
Journal = {3DV},
Title = {Learning Material-Aware Local Descriptors for 3D Shapes},
Year = {2018}}

]  
ToonSynth: Example-Based Synthesis of Hand-Colored Cartoon Animations
Marek Dvorožňák, Wilmot Li, Vladimir G. Kim, and Daniel Sýkora
SIGGRAPH 2018

[Code and Data]  [Paper: 8mb]  [
Abstract

We present a new example-based approach for synthesizing hand-colored cartoon animations. Our method produces results that preserve the specific visual appearance and stylized motion of manually authored animations without requiring artists to draw every frame from scratch. In our framework, the artist first stylizes a limited set of known source skeletal animations from which we extract a style-aware puppet that encodes the appearance and motion characteristics of the artwork. Given a new target skeletal motion, our method automatically transfers the style from the source examples to create a hand-colored target animation. Compared to previous work, our technique is the first to preserve both the detailed visual appearance and stylized motion of the original hand-drawn content. Our approach has numerous practical applications including traditional animation production and content creation for games.

]  [
BibTex

@article{Dvoroznak18,
Author = {Marek Dvoro\v{z}\v{n}\'{a}k and Wilmot Li and Vladimir G. Kim and Daniel S\'{y}kora},
Journal = {SIGGRAPH},
Title = {ToonSynth: Example-Based Synthesis of Hand-Colored Cartoon Animations},
Year = {2018}}

]  
ToonCap: A Layered Deformable Model for Capturing Poses From Cartoon Characters
Xinyi Fan, Amit H. Bermano, Vladimir G. Kim, Jovan Popovic, and Szymon Rusinkiewicz
Expressive 2018

[Webpage]  [Paper: 13mb  5mb]  [
Abstract

Characters in traditional artwork such as children’s books or cartoon animations are typically drawn once, in fixed poses, with little opportunity to change the characters’ appearance or re-use them in a different animation. To enable these applications one can fit a consistent parametric deformable model (a puppet) to different images of a character, thus establishing consistent segmentation, dense semantic correspondence, and deformation parameters across poses. In this work, we argue that a layered deformable puppet is a natural representation for hand-drawn characters, providing an effective way to deal with the articulation, expressive deformation, and occlusion that are common to this style of artwork. Our main contribution is an automatic pipeline for fitting these models to unlabeled images depicting the same character in various poses. We demonstrate that the output of our pipeline can be used directly for editing and re-targeting animations.

]  [
BibTex

@article{Fan18,
Author = {Xinyi Fan and Amit Bermano and Vladimir G. Kim and Jovan Popovic and Szymon Rusinkiewicz},
Journal = {Expressive},
Title = {ToonCap: A Layered Deformable Model for Capturing Poses From Cartoon Characters},
Year = {2018}}

]  
Learning Fuzzy Set Representations of Partial Shapes on Dual Embedding Spaces
Minhyuk Sung, Anastasia Dubrovina, Vladimir G. Kim, and Leonidas Guibas
SGP 2018 (Symposium on Geometry Processing)

[Code and Data]  [Paper: 7mb  5mb]  [
Abstract

Modeling relations between components of 3D objects is essential for many geometry editing tasks. Existing techniques commonly rely on labeled components, which requires substantial annotation effort and limits components to a dictionary of pre-defined semantic parts. We propose a novel framework based on neural networks that analyzes an uncurated collection of 3D models from the same category and learns two important types of semantic relations among full and partial shapes: complementarity and interchangeability. The former helps to identify which two partial shapes make a complete plausible object, and the latter indicates that interchanging two partial shapes from different objects preserves the object plausibility. Our key idea is to jointly encode both relations by embedding partial shapes as fuzzy sets in dual embedding spaces. We model these two relations as fuzzy set operations performed across the dual embedding spaces, and within each space, respectively. We demonstrate the utility of our method for various retrieval tasks that are commonly needed in geometric modeling interfaces.
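For intuition, the classical fuzzy-set operations are pointwise max and min over membership values; the paper realizes analogous operations in learned dual embedding spaces, so this sketch only illustrates the standard set-theoretic versions over a toy discrete domain:

```python
import numpy as np

def fuzzy_union(a, b):
    """Pointwise max of membership values (standard fuzzy union)."""
    return np.maximum(a, b)

def fuzzy_intersection(a, b):
    """Pointwise min of membership values (standard fuzzy intersection)."""
    return np.minimum(a, b)

# Two partial shapes represented as membership vectors over a toy domain.
a = np.array([0.9, 0.4, 0.0])
b = np.array([0.5, 0.6, 0.2])
print(fuzzy_union(a, b), fuzzy_intersection(a, b))
```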

]  [
BibTex

@article{Sung18,
Author = {Minhyuk Sung and Anastasia Dubrovina and Vladimir G. Kim and Leonidas Guibas},
Journal = {SGP},
Title = {Learning Fuzzy Set Representations of Partial Shapes on Dual Embedding Spaces},
Year = {2018}}

]  
AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, and Mathieu Aubry
CVPR 2018 (spotlight)

[Code and Data]  [Paper: 5mb  2mb]  [
Abstract

We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization capabilities, and the possibility to generate a shape of arbitrary resolution without memory issues. We demonstrate these benefits and compare to strong baselines on the ShapeNet benchmark for two applications: (i) auto-encoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing its potential for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.
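The idea of a learned parametric surface element can be sketched as a small decoder that maps 2D patch samples plus a shape latent code to 3D points; arbitrary output resolution then amounts to sampling more 2D points. Weights here are random and layer sizes are illustrative, not the trained AtlasNet:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "surface element": an MLP mapping a 2D patch point plus an 8-D
# shape latent to a 3D point. In AtlasNet the weights are learned.
W1 = rng.normal(size=(2 + 8, 32))
W2 = rng.normal(size=(32, 3))

def decode_patch(uv, latent):
    """Map (K,2) patch samples and a latent (8,) to (K,3) surface points."""
    x = np.concatenate([uv, np.tile(latent, (len(uv), 1))], axis=1)
    return np.tanh(x @ W1) @ W2

# Arbitrary resolution: just sample more points in the unit square.
latent = rng.normal(size=8)
coarse = decode_patch(rng.uniform(size=(100, 2)), latent)
fine = decode_patch(rng.uniform(size=(10000, 2)), latent)
print(coarse.shape, fine.shape)  # (100, 3) (10000, 3)
```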

]  [
BibTex

@article{Groueix18,
Author = {Thibault Groueix and Matthew Fisher and Vladimir G. Kim and Bryan C. Russell and Mathieu Aubry},
Journal = {CVPR},
Title = {AtlasNet: A Papier-M\^ach\'e Approach to Learning 3D Surface Generation},
Year = {2018}}

]  
Tags2Parts: Discovering Semantic Regions from Shape Tags
Sanjeev Muralikrishnan, Vladimir G. Kim, and Siddhartha Chaudhuri
CVPR 2018

[Code and Data]  [Paper: 5mb]  [
Abstract

We propose a novel method for discovering shape regions that strongly correlate with user-prescribed tags. For example, given a collection of chairs tagged as either "has armrest" or "lacks armrest", our system correctly highlights the armrest regions as the main distinctive parts between the two chair types. To obtain point-wise predictions from shape-wise tags we develop a novel neural network architecture that is trained with tag classification loss, but is designed to rely on segmentation to predict the tag. Our network is inspired by U-Net, but we replicate shallow U structures several times with new skip connections and pooling layers, and call the resulting architecture "WU-Net". We test our method on segmentation benchmarks and show that even with weak supervision of whole shape tags, our method can infer meaningful semantic regions, without ever observing shape segmentations. Further, once trained, the model can process shapes for which the tag is entirely unknown. As a bonus, our architecture is directly operational under full supervision and performs strongly on standard benchmarks. We validate our method through experiments with many variant architectures and prior baselines, and demonstrate several applications.
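For intuition on how a shape-level tag loss can supervise per-point predictions, here is a toy readout that forces localization: the tag can only be classified correctly if some points carry high "part present" scores. The actual WU-Net aggregation may differ:

```python
import numpy as np

def shape_tag_from_point_scores(scores):
    """Toy weak-supervision readout: the shape-level tag probability is
    the max over per-point scores, so a correct tag prediction requires
    the network to localize the distinctive region."""
    return scores.max()

# A chair with an armrest: a few points score high on "armrest".
scores = np.array([0.05, 0.1, 0.9, 0.85, 0.02])
print(shape_tag_from_point_scores(scores))  # → 0.9
```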

]  [
BibTex

@article{Muralikrishnan18,
Author = {Sanjeev Muralikrishnan and Vladimir G. Kim and Siddhartha Chaudhuri},
Journal = {CVPR},
Title = {Tags2Parts: Discovering Semantic Regions from Shape Tags},
Year = {2018}}

]  
Multi-Content GAN for Few-Shot Font Style Transfer
Samaneh Azadi, Matthew Fisher, Vladimir G. Kim, Zhaowen Wang, Eli Shechtman, and Trevor Darrell
CVPR 2018 (spotlight)

[Code and Data]  [Paper: 8mb]  [
Abstract

In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface. To generate a set of multi-content images following a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model considering content along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in the real world, such as those on movie posters or infographics. We seek to transfer both the typographic stylization (e.g., serifs and ears) as well as the textual stylization (e.g., color gradients and effects). We base our experiments on our collected dataset of 10,000 fonts with different styles and demonstrate effective generalization from a very small number of observed glyphs.

]  [
BibTex

@article{Azadi18,
Author = {Samaneh Azadi and Matthew Fisher and Vladimir G. Kim and Zhaowen Wang and Eli Shechtman and Trevor Darrell},
Journal = {CVPR},
Title = {Multi-Content GAN for Few-Shot Font Style Transfer},
Year = {2018}}

]  
Learning A Stroke-Based Representation for Fonts
Elena Balashova, Amit H. Bermano, Vladimir G. Kim, Stephen DiVerdi, Aaron Hertzmann, and Thomas Funkhouser
Computer Graphics Forum, 2018

[Paper: 27mb  9mb]  [
Abstract

Designing fonts and typefaces is a difficult process for both beginner and expert typographers. Existing workflows require the designer to create every glyph, while adhering to many loosely defined design suggestions to achieve an aesthetically appealing and coherent character set. This process can be significantly simplified by exploiting the similar structure that character glyphs share across different fonts (e.g., the letter ‘e’ is typically drawn with four similar strokes) and the shared stylistic elements within the same font (e.g., serifs, ears, and tails will be similar for ‘b’ and ‘p’ in the same font). To capture these correlations we propose learning a stroke-based font representation from a collection of existing typefaces. To enable this, we develop a stroke-based geometric model for glyphs and a fitting procedure to re-parametrize arbitrary fonts to our representation. We demonstrate the effectiveness of our model through a manifold learning technique that estimates a low-dimensional font space. Our representation captures a wide range of everyday fonts with topological variations and naturally handles discrete and continuous variations, such as the presence and absence of stylistic elements as well as slants and weights. We show that our learned representation can be used for iteratively improving fit quality, as well as for exploratory style applications such as completing a font from a subset of observed glyphs, or interpolating, adding, and removing stylistic elements in existing fonts.

]  [
BibTex

@article{Balashova18,
Author = {Elena Balashova and Amit Bermano and Vladimir G. Kim and Stephen DiVerdi and Aaron Hertzmann and Thomas Funkhouser},
Journal = {CGF},
Title = {Learning A Stroke-Based Representation for Fonts},
Year = {2018}}

]  
Learning Local Shape Descriptors from Part Correspondences With Multi-view Convolutional Networks
Haibin Huang, Evangelos Kalogerakis, Siddhartha Chaudhuri, Duygu Ceylan, Vladimir G. Kim, and Ersin Yumer
Transactions on Graphics, 2018 (Presented at SIGGRAPH 2018)

[Code and Data]  [Paper: 55mb  2mb]  [
Abstract

We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching. Our key insight is that the neighborhood of a point on a shape is effectively captured at multiple scales by a succession of progressively zoomed out views, taken from carefully selected camera positions. We propose a convolutional neural network that uses local views around a point to embed it to a multidimensional descriptor space, such that geometrically and semantically similar points are close to one another. To train our network, we leverage two extremely large sources of data. First, since our network processes 2D images, we repurpose architectures pre-trained on massive image datasets. Second, we automatically generate a synthetic dense correspondence dataset by part-aware, non-rigid alignment of a massive collection of 3D models. As a result of these design choices, our view-based architecture effectively encodes multi-scale local context and fine-grained surface detail. We demonstrate through several experiments that our learned local descriptors are more general and robust compared to state-of-the-art alternatives, and have a variety of applications without any additional fine-tuning.

]  [
BibTex

@article{Huang18,
Author = {Haibin Huang and Evangelos Kalogerakis and Siddhartha Chaudhuri and Duygu Ceylan and Vladimir G. Kim and Ersin Yumer},
Journal = {Transactions on Graphics},
Title = {Learning Local Shape Descriptors from Part Correspondences With Multi-view Convolutional Networks},
Year = {2018}}

]  
ComplementMe: Weakly-supervised Component Suggestion for 3D Modeling
Minhyuk Sung, Hao Su, Vladimir G. Kim, Siddhartha Chaudhuri, and Leonidas Guibas
SIGGRAPH Asia 2017

[Code and Data]  [Paper: 15mb  3mb]  [
Abstract

Assembly-based tools provide a powerful modeling paradigm for non-expert shape designers. However, choosing a component from a large shape repository and aligning it to a partial assembly can become a daunting task. In this paper we describe novel neural network architectures for suggesting complementary components and their placement for an incomplete 3D part assembly. Unlike most existing techniques, our networks are trained on unlabeled data obtained from public online repositories, and do not rely on consistent part segmentations or labels. Absence of labels poses a challenge in indexing the database of parts for the retrieval. We address it by jointly training embedding and retrieval networks, where the first indexes parts by mapping them to a low-dimensional feature space, and the second maps partial assemblies to appropriate complements. The combinatorial nature of part arrangements poses another challenge, since the retrieval network is not a function: several complements can be appropriate for the same input. Thus, instead of predicting a single output, we train our network to predict a probability distribution over the space of part embeddings. This allows our method to deal with ambiguities and naturally enables a UI that seamlessly integrates user preferences into the design process. We demonstrate that our method can be used to design complex shapes with minimal or no user input. To evaluate our approach we develop a novel benchmark for component suggestion systems demonstrating significant improvement over state-of-the-art techniques.
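Predicting a probability distribution over the embedding space, rather than a single point, can be sketched with a toy Gaussian mixture; all weights, means, and widths below are hypothetical, and the paper's predicted distribution is produced by a network:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy predicted mixture over a 2D embedding space: two plausible
# complement regions with different weights.
weights = np.array([0.7, 0.3])
means = np.array([[0.0, 0.0], [5.0, 5.0]])
std = 0.1

def sample_embeddings(n):
    """Draw n candidate embeddings: pick a mixture component by its
    weight, then sample from that component's Gaussian."""
    comp = rng.choice(len(weights), size=n, p=weights)
    return means[comp] + rng.normal(size=(n, 2)) * std

samples = sample_embeddings(1000)
# Roughly 70% of samples should land near the first mode.
near_first = (np.linalg.norm(samples, axis=1) < 1.0).mean()
print(samples.shape, near_first)
```

Sampling several modes lets a retrieval UI surface multiple plausible complements for the same partial assembly instead of a single answer.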

]  [
BibTex

@article{Sung17,
Author = {Minhyuk Sung and Hao Su and Vladimir G. Kim and Siddhartha Chaudhuri and Leonidas Guibas},
Journal = {SIGGRAPH Asia},
Title = {ComplementMe: Weakly-supervised Component Suggestion for 3D Modeling},
Year = {2017}}

]  
GWCNN: A Metric Alignment Layer for Deep Shape Analysis
Danielle Ezuz, Justin Solomon, Vladimir G. Kim, and Mirela Ben-Chen
SGP 2017 (Symposium on Geometry Processing)

[Paper: 19mb  2mb]  [
Abstract

Deep neural networks provide a promising tool for incorporating semantic information in geometry processing applications. Unlike image and video processing, however, geometry processing requires handling unstructured geometric data, and thus data representation becomes an important challenge in this framework. Existing approaches tackle this challenge by converting point clouds, meshes, or polygon soups into regular representations using, e.g., multi‐view images, volumetric grids or planar parameterizations. In each of these cases, geometric data representation is treated as a fixed pre‐process that is largely disconnected from the machine learning tool. In contrast, we propose to optimize for the geometric representation during the network learning process using a novel metric alignment layer. Our approach maps unstructured geometric data to a regular domain by minimizing the metric distortion of the map using the regularized Gromov–Wasserstein objective. This objective is parameterized by the metric of the target domain and is differentiable; thus, it can be easily incorporated into a deep network framework. Furthermore, the objective aims to align the metrics of the input and output domains, promoting consistent output for similar shapes. We show the effectiveness of our layer within a deep network trained for shape classification, demonstrating state‐of‐the‐art performance for nonrigid shapes.

]  [
BibTex

@article{Ezuz17,
Author = {Danielle Ezuz and Justin Solomon and Vladimir G. Kim and Mirela Ben-Chen},
Journal = {SGP},
Title = {GWCNN: A Metric Alignment Layer for Deep Shape Analysis},
Year = {2017}}

]  
Learning Hierarchical Shape Segmentation and Labeling from Online Repositories
Li Yi, Leonidas Guibas, Aaron Hertzmann, Vladimir G. Kim, Hao Su, and Ersin Yumer
SIGGRAPH 2017

[Code and Data]  [Paper: 36mb  10mb]  [
Abstract

We propose a method for converting geometric shapes into hierarchically segmented parts with part labels. Our key idea is to train category-specific models from the scene graphs and part names that accompany 3D shapes in public repositories. These freely-available annotations represent an enormous, untapped source of information on geometry. However, because the models and corresponding scene graphs are created by a wide range of modelers with different levels of expertise, modeling tools, and objectives, these models have very inconsistent segmentations and hierarchies with sparse and noisy textual tags. Our method involves two analysis steps. First, we perform a joint optimization to simultaneously cluster and label parts in the database while also inferring a canonical tag dictionary and part hierarchy. We then use this labeled data to train a method for hierarchical segmentation and labeling of new 3D shapes. We demonstrate that our method can mine complex information, detecting hierarchies in man-made objects and their constituent parts, obtaining finer scale details than existing alternatives. We also show that, by performing domain transfer using a few supervised examples, our technique outperforms fully-supervised techniques that require hundreds of manually-labeled models.

]  [
BibTex

@article{Yi17,
Author = {Li Yi and Leonidas Guibas and Aaron Hertzmann and Vladimir G. Kim and Hao Su and Ersin Yumer},
Journal = {SIGGRAPH},
Title = {Learning Hierarchical Shape Segmentation and Labeling from Online Repositories},
Year = {2017}}

]  
Convolutional Neural Networks on Surfaces via Seamless Toric Covers
Haggai Maron, Meirav Galun, Noam Aigerman, Miri Trope, Nadav Dym, Ersin Yumer, Vladimir G. Kim, and Yaron Lipman
SIGGRAPH 2017

[Code]  [Paper: 73mb  4mb]  [
Abstract

The recent success of convolutional neural networks (CNNs) for image processing tasks is inspiring research efforts attempting to achieve similar success for geometric tasks. One of the main challenges in applying CNNs to surfaces is defining a natural convolution operator on surfaces. In this paper we present a method for applying deep learning to sphere-type shapes using a global seamless parameterization to a planar flat-torus, for which the convolution operator is well defined. As a result, the standard deep learning framework can be readily applied for learning semantic, high-level properties of the shape. An indication of our success in bridging the gap between images and surfaces is the fact that our algorithm succeeds in learning semantic information from an input of raw low-dimensional feature vectors. We demonstrate the usefulness of our approach by presenting two applications: human body segmentation, and automatic landmark detection on anatomical surfaces. We show that our algorithm compares favorably with competing geometric deep-learning algorithms for segmentation tasks, and is able to produce meaningful correspondences on anatomical surfaces where hand-crafted features are bound to fail.

]  [
BibTex

@article{Maron17,
Author = {Haggai Maron and Meirav Galun and Noam Aigerman and Miri Trope and Nadav Dym and Ersin Yumer and Vladimir G. Kim and Yaron Lipman},
Journal = {SIGGRAPH},
Title = {Convolutional Neural Networks on Surfaces via Seamless Toric Covers},
Year = {2017}}

]  
Customized Software to Optimize Circumferential Pharyngoesophageal Free Flap Reconstruction
Oleksandr Butskiy, Vladimir G. Kim, Brent Chang, Donald Anderson, and Eitan Prisman
Laryngoscope, 2017

[Webpage]  [Paper: link]  [
Abstract

This is not a computer science paper. I helped out a friend by building software that generates parametric developable patches to reconstruct part of a human throat. This software was used for some surgeries at Vancouver General Hospital.

]  [
BibTex

@article{Butskiy17,
Author = {Oleksandr Butskiy and Vladimir G. Kim and Brent Chang and Donald Anderson and Eitan Prisman},
Journal = {The Laryngoscope},
Title = {Customized Software to Optimize Circumferential Pharyngoesophageal Free Flap Reconstruction},
Year = {2017},
issn = {1531-4995},
url = {http://dx.doi.org/10.1002/lary.26497},
doi = {10.1002/lary.26497}}

]  
A Scalable Active Framework for Region Annotation in 3D Shape Collections
Li Yi, Vladimir G. Kim, Duygu Ceylan, I-Chao Shen, Mengyan Yan, Hao Su, Cewu Lu, Qixing Huang, Alla Sheffer, and Leonidas Guibas
SIGGRAPH Asia 2016

[Code and Data]  [Paper: 23mb  7mb]  [
Abstract

Large repositories of 3D shapes provide valuable input for data-driven analysis and modeling tools. They are especially powerful once annotated with semantic information such as salient regions and functional parts. We propose a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations. Given a shape collection and a user-specified region label, our goal is to correctly demarcate the corresponding regions with minimal manual work. Our active framework achieves this goal by cycling between manually annotating the regions, automatically propagating these annotations across the rest of the shapes, manually verifying both human and automatic annotations, and learning from the verification results to improve the automatic propagation algorithm. We use a unified utility function that explicitly models the time cost of human input across all steps of our method. This allows us to jointly optimize for the set of models to annotate and for the set of models to verify based on the predicted impact of these actions on the human efficiency. We demonstrate that incorporating verification of all produced labelings within this unified objective improves both accuracy and efficiency of the active learning procedure. We automatically propagate human labels across a dynamic shape network using a conditional random field (CRF) framework, taking advantage of global shape-to-shape similarities, local feature similarities, and point-to-point correspondences. By combining these diverse cues we achieve higher accuracy than existing alternatives. We validate our framework on existing benchmarks, demonstrating it to be significantly more efficient at using human input compared to previous techniques. We further validate its efficiency and robustness by annotating a massive shape dataset, labeling over 93,000 shape parts across multiple model classes, and providing a labeled part collection more than one order of magnitude larger than existing ones.

]  [
BibTex

@article{Yi16,
Author = {Li Yi and Vladimir G. Kim and Duygu Ceylan and I-Chao Shen and Mengyan Yan and Hao Su and Cewu Lu and Qixing Huang and Alla Sheffer and Leonidas Guibas},
Journal = {Transactions on Graphics (Proc. of SIGGRAPH Asia)},
Title = {A Scalable Active Framework for Region Annotation in 3D Shape Collections},
Year = {2016}}

]  
iconData-Driven Shape Analysis and Processing
Kai Xu, Vladimir G. Kim, Qixing Huang, Niloy J. Mitra, and Evangelos Kalogerakis
SIGGRAPH Asia 2016 (course notes)

[Wikipage]  [Paper: 13mb  2mb]  [
Abstract

Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, through reviewing the literature and relating the existing works with both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.

]  [
BibTex

@article{Xu16,
Author = {Kai Xu and Vladimir G. Kim and Qixing Huang and Niloy J. Mitra and Evangelos Kalogerakis},
Journal = {SIGGRAPH Asia Course notes},
Title = {Data-Driven Shape Analysis and Processing},
Year = {2016}}

]  
iconEntropic Metric Alignment for Correspondence Problems
Justin Solomon, Gabriel Peyre, Vladimir G. Kim, and Suvrit Sra
SIGGRAPH 2016

[Code]  [Paper: 31mb  1mb]  [
Abstract

Many shape and image processing tools rely on computation of correspondences between geometric domains. Efficient methods that stably extract "soft" matches in the presence of diverse geometric structures have proven to be valuable for shape retrieval and transfer of labels or semantic information. With these applications in mind, we present an algorithm for probabilistic correspondence that optimizes an entropy-regularized Gromov-Wasserstein (GW) objective. Built upon recent developments in numerical optimal transportation, our algorithm is compact, provably convergent, and applicable to any geometric domain expressible as a metric measure matrix. We provide comprehensive experiments illustrating the convergence and applicability of our algorithm to a variety of graphics tasks. Furthermore, we expand entropic GW correspondence to a framework for other matching problems, incorporating partial distance matrices, user guidance, shape exploration, symmetry detection, and joint analysis of more than two domains. These applications expand the scope of entropic GW correspondence to major shape analysis problems and are stable to distortion and noise.
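The core alternation can be sketched in a few lines: linearize the quadratic Gromov-Wasserstein objective at the current coupling, then re-solve the resulting linear transport problem with Sinkhorn iterations. The NumPy sketch below is a minimal illustration of this idea under uniform measures, not the paper's implementation; the parameter values (`eps`, iteration counts) and function names are illustrative assumptions.

```python
import numpy as np

def sinkhorn(C, p, q, eps, iters=200):
    """Entropy-regularized OT: coupling T minimizing <C,T> - eps*H(T)."""
    K = np.exp(-C / eps)
    u = np.ones_like(p)
    for _ in range(iters):
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]

def entropic_gw(Dx, Dy, p, q, eps=0.1, outer=30):
    """Alternate: linearize the GW cost at the current coupling T, re-solve with Sinkhorn."""
    T = np.outer(p, q)  # independent coupling as initialization
    const = (Dx**2 @ p)[:, None] + (Dy**2 @ q)[None, :]
    for _ in range(outer):
        C = const - 2.0 * Dx @ T @ Dy.T  # gradient of the squared-difference GW cost at T
        T = sinkhorn(C, p, q, eps)
    return T
```

The returned coupling is a soft (probabilistic) correspondence: row `i` gives a distribution over points of the second domain matched to point `i` of the first.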

]  [
BibTex

@article{Solomon16,
Author = {Justin Solomon and Gabriel Peyre and Vladimir G. Kim and Suvrit Sra},
Journal = {Transactions on Graphics (Proc. of SIGGRAPH)},
Title = {Entropic Metric Alignment for Correspondence Problems},
Year = {2016}}

]  
iconPhysics-driven Pattern Adjustment for Direct 3D Garment Editing
Aric Bartle, Alla Sheffer, Vladimir G. Kim, Danny Kaufman, Nicholas Vining, Floraine Berthouzoz
SIGGRAPH 2016

[Webpage]  [Paper: 37mb  3mb]  [Video]  [
Abstract

Designers frequently reuse existing designs as a starting point for creating new garments. In order to apply garment modifications, which the designer envisions in 3D, existing tools require meticulous manual editing of 2D patterns. These 2D edits need to account both for the envisioned geometric changes in the 3D shape, as well as for various physical factors that affect the look of the draped garment. We propose a new framework that allows designers to directly apply the changes they envision in 3D space; and creates the 2D patterns that replicate this envisioned target geometry when lifted into 3D via a physical draping simulation. Our framework removes the need for laborious and knowledge-intensive manual 2D edits and allows users to effortlessly mix existing garment designs as well as adjust for garment length and fit. Following each user specified editing operation we first compute a target 3D garment shape, one that maximally preserves the input garment’s style–its proportions, fit and shape–subject to the modifications specified by the user. We then automatically compute 2D patterns that recreate the target garment shape when draped around the input mannequin within a user-selected simulation environment. To generate these patterns, we propose a fixed-point optimization scheme that compensates for the deformation due to the physical forces affecting the drape and is independent of the underlying simulation tool used. Our experiments show that this method quickly and reliably converges to patterns that, under simulation, form the desired target look, and works well with different black-box physical simulators. We demonstrate a range of edited and resimulated garments, and further validate our approach via expert and amateur critique, and comparisons to alternative solutions.

]  [
BibTex

@article{Bartle16,
Author = {Aric Bartle and Alla Sheffer and Vladimir Kim and Danny Kaufman and Nicholas Vining and Floraine Berthouzoz},
Journal = {Transactions on Graphics (Proc. of SIGGRAPH)},
Title = {Physics-driven Pattern Adjustment for Direct 3D Garment Editing},
Year = {2016}}

]  
iconAutocorrelation Descriptor for Efficient Co-alignment of 3D Shape Collections
Melinos Averkiou, Vladimir G. Kim, and Niloy J. Mitra
Computer Graphics Forum, 2016 (presented at Eurographics 2016)

[Code and Data]  [Paper: 5mb  1mb]  [
Abstract

MISSING ABSTRACT

]  [
BibTex

@article{Averkiou16,
Author = {Melinos Averkiou and Vladimir G. Kim and Niloy J. Mitra},
Journal = {Computer Graphics Forum},
Title = {{Autocorrelation Descriptor for Efficient Co-alignment of 3D Shape Collections}},
Year = {2016}}

]  
iconData-Driven Structural Priors for Shape Completion
Minhyuk Sung, Vladimir G. Kim, Roland Angst, and Leonidas Guibas
SIGGRAPH Asia 2015

[Code and Data]  [Paper: 5mb  3mb]  [
Abstract

Acquiring 3D geometry of an object is a tedious and time-consuming task, typically requiring scanning the surface from multiple viewpoints. In this work we focus on reconstructing complete geometry from a single scan acquired with a low-quality consumer-level scanning device. Our method uses a collection of example 3D shapes to build structural part-based priors that are necessary to complete the shape. In our representation, we associate a local coordinate system to each part and learn the distribution of positions and orientations of all the other parts from the database, which implicitly also defines positions of symmetry planes and symmetry axes. At the inference stage, this knowledge enables us to analyze incomplete point clouds with substantial occlusions, because observing only a few regions is still sufficient to infer the global structure. Once the parts and the symmetries are estimated, both data sources, symmetry and database, are fused to complete the point cloud. We evaluate our technique on a synthetic dataset containing 481 shapes, and on real scans acquired with a Kinect scanner. Our method demonstrates high accuracy for the estimated part structure and detected symmetries, enabling higher quality shape completions in comparison to alternative techniques.
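As a toy illustration of the representation described above, one can fit an independent Gaussian over part-center positions (expressed in a reference part's local frame) and score a partially observed shape, skipping occluded parts. This is a heavily simplified sketch: the paper's model also learns orientations and symmetry planes, and all names below are hypothetical.

```python
import numpy as np

def fit_part_prior(examples):
    """examples: (N, k, 3) positions of k part centers, in one part's local frame,
    across N training shapes. Fit an axis-aligned Gaussian per part."""
    mean = examples.mean(axis=0)
    var = examples.var(axis=0) + 1e-6  # regularize to avoid zero variance
    return mean, var

def log_prior(observed, mean, var):
    """Log-probability of observed part positions under the learned prior.
    Rows containing NaN mark occluded (unobserved) parts and are skipped,
    mirroring how a partial scan still constrains the global structure."""
    mask = ~np.isnan(observed).any(axis=1)
    d = observed[mask] - mean[mask]
    return -0.5 * np.sum(d**2 / var[mask] + np.log(2 * np.pi * var[mask]))
```

Scoring candidate part placements with such a prior is what lets a few observed regions disambiguate the unobserved rest of the shape.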

]  [
BibTex

@article{Sung15,
Author = {Minhyuk Sung and Vladimir G. Kim and Roland Angst and Leonidas Guibas},
Journal = {Transactions on Graphics (Proc. of SIGGRAPH Asia)},
Title = {Data-driven Structural Priors for Shape Completion},
Year = {2015}}

]  
iconData-Driven Shape Analysis and Processing
Kai Xu, Vladimir G. Kim, Qixing Huang, and Evangelos Kalogerakis
Computer Graphics Forum (STAR) 2015

[Wikipage]  [Paper: 13mb  2mb]  [
Abstract

Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, through reviewing the literature and relating the existing works with both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.

]  [
BibTex

@article{Xu15,
Author = {Kai Xu and Vladimir G. Kim and Qixing Huang and Evangelos Kalogerakis},
Journal = {Computer Graphics Forum},
Title = {Data-Driven Shape Analysis and Processing},
Year = {2015}}

]  
iconCreating Consistent Scene Graphs Using a Probabilistic Grammar
Tianqiang Liu, Siddhartha Chaudhuri, Vladimir G. Kim, Qi-Xing Huang, Niloy J. Mitra, and Thomas Funkhouser
SIGGRAPH Asia 2014

[Code and Data]  [Paper: 13mb  1mb]  [
Abstract

Growing numbers of 3D scenes in online repositories provide new opportunities for data-driven scene understanding, editing, and synthesis. Despite the plethora of data now available online, most of it cannot be effectively used for data-driven applications because it lacks consistent segmentations, category labels, and/or functional groupings required for co-analysis. In this paper, we develop algorithms that infer such information via parsing with a probabilistic grammar learned from examples. First, given a collection of scene graphs with consistent hierarchies and labels, we train a probabilistic hierarchical grammar to represent the distributions of shapes, cardinalities, and spatial relationships of semantic objects within the collection. Then, we use the learned grammar to parse new scenes to assign them segmentations, labels, and hierarchies consistent with the collection. During experiments with these algorithms, we find that: they work effectively for scene graphs for in- door scenes commonly found online (bedrooms, classrooms, and libraries); they outperform alternative approaches that consider only shape similarities and/or spatial relationships without hierarchy; they require relatively small sets of training data; they are robust to moderate over-segmentation in the inputs; and, they can robustly transfer labels from one data set to another. As a result, the proposed algorithms can be used to provide consistent hierarchies for large collections of scenes within the same semantic class.

]  [
BibTex

@article{Liu14,
Author = {Tianqiang Liu and Siddhartha Chaudhuri and Vladimir G. Kim and Qi-Xing Huang and Niloy J. Mitra and Thomas Funkhouser},
Journal = {Transactions on Graphics (Proc. of SIGGRAPH Asia)},
Number = {6},
Title = {{Creating Consistent Scene Graphs Using a Probabilistic Grammar}},
Year = {2014}}

]  
iconShape2Pose: Human-Centric Shape Analysis
Vladimir G. Kim, Siddhartha Chaudhuri, Leonidas Guibas, and Thomas Funkhouser
SIGGRAPH 2014

[Code and Data]  [Paper: 8mb  3mb]  [Talk]  [
Abstract

As 3D acquisition devices and modeling tools become widely available there is a growing need for automatic algorithms that analyze the semantics and functionality of digitized shapes. Most recent research has focused on analyzing geometric structures of shapes. Our work is motivated by the observation that a majority of man-made shapes are designed to be used by people. Thus, in order to fully understand their semantics, one needs to answer a fundamental question: “how do people interact with these objects?” As an initial step towards this goal, we offer a novel algorithm for automatically predicting a static pose that a person would need to adopt in order to use an object. Specifically, given an input 3D shape, the goal of our analysis is to predict a corresponding human pose, including contact points and kinematic parameters. This is especially challenging for man-made objects that commonly exhibit a lot of variance in their geometric structure. We address this challenge by observing that contact points usually share consistent local geometric features related to the anthropometric properties of corresponding parts and that the human body is subject to kinematic constraints and priors. Accordingly, our method effectively combines local region classification and global kinematically-constrained search to successfully predict poses for various objects. We also evaluate our algorithm on six diverse collections of 3D polygonal models (chairs, gym equipment, cockpits, carts, bicycles, and bipedal devices) containing a total of 147 models. Finally, we demonstrate that the poses predicted by our algorithm can be used in several shape analysis problems, such as establishing correspondences between objects, detecting salient regions, finding informative viewpoints, and retrieving functionally-similar shapes.

]  [
BibTex

@article{Kim14,
Author = {Vladimir G. Kim and Siddhartha Chaudhuri and Leonidas Guibas and Thomas Funkhouser},
Journal = {Transactions on Graphics (Proc. of SIGGRAPH)},
Number = {4},
Title = {{Shape2Pose: Human-Centric Shape Analysis}},
Year = {2014}}

]  
iconStructure-Aware Shape Processing
Niloy J. Mitra, Michael Wand, Hao Zhang, Daniel Cohen-Or, Vladimir G. Kim, and Qixing Huang
SIGGRAPH 2014 (course notes)

[Webpage]  [Paper: 31mb  4mb]  [Talk]  [
Abstract

Shape structure is about the arrangement and relations between shape parts. Structure-aware shape processing goes beyond local geometry and low level processing, and analyzes and processes shapes at a high level. It focuses more on the global inter and intra semantic relations among the parts of shape rather than on their local geometry. With recent developments in easy shape acquisition, access to vast repositories of 3D models, and simple-to-use desktop fabrication possibilities, the study of structure in shapes has become a central research topic in shape analysis, editing, and modeling. A whole new line of structure-aware shape processing algorithms has emerged that base their operation on an attempt to understand such structure in shapes. The algorithms broadly consist of two key phases: an analysis phase, which extracts structural information from input data; and a (smart) processing phase, which utilizes the extracted information for exploration, editing, and synthesis of novel shapes. In this course, we will organize, summarize, and present the key concepts and methodological approaches towards efficient structure-aware shape processing. We discuss common models of structure, their implementation in terms of mathematical formalism and algorithms, and explain the key principles in the context of a number of state-of-the-art approaches. Further, we attempt to list the key open problems and challenges, both at the technical and at the conceptual level, to make it easier for new researchers to better explore and contribute to this topic. Our goal is to both give the practitioner an overview of available structure-aware shape processing techniques, as well as identify future research questions in this important, emerging, and fascinating research area.

]  [
BibTex

@article{Mitra14,
Author = {Niloy J. Mitra and Michael Wand and Hao Zhang and Daniel Cohen-Or and Vladimir G. Kim and Qi-Xing Huang},
Journal = {SIGGRAPH Course notes},
Title = {{Structure-Aware Shape Processing}},
Year = {2014}}

]  
iconShapeSynth: Parameterizing Model Collections for Coupled Shape Exploration and Synthesis
Melinos Averkiou, Vladimir G. Kim, Youyi Zheng, and Niloy J. Mitra
Eurographics 2014

[Code and Data]  [Paper: 7mb]  [Video]  [
Abstract

Recent advances in modeling tools enable non-expert users to synthesize novel shapes by assembling parts extracted from model databases. A major challenge for these tools is to provide users with relevant parts, which is especially difficult for large repositories with significant geometric variations. In this paper we analyze unorganized collections of 3D models to facilitate explorative shape synthesis by providing high-level feedback of possible synthesizable shapes. By jointly analyzing arrangements and shapes of parts across models, we hierarchically embed the models into low-dimensional spaces. The user can then use the parameterization to explore the existing models by clicking in different areas or by selecting groups to zoom on specific shape clusters. More importantly, any point in the embedded space can be lifted to an arrangement of parts to provide an abstracted view of possible shape variations. The abstraction can further be realized by appropriately deforming parts from neighboring models to produce synthesized geometry. Our experiments show that users can rapidly generate plausible and diverse shapes using our system, which also performs favorably with respect to previous modeling tools.

]  [
BibTex

@article{Averkiou14,
Author = {Melinos Averkiou and Vladimir G. Kim and Youyi Zheng and Niloy J. Mitra},
Journal = {Computer Graphics Forum (Proc. of Eurographics)},
Number = {2},
Title = {{ShapeSynth: Parameterizing Model Collections for Coupled Shape Exploration and Synthesis}},
Year = {2014}}

]  
iconStructure-Aware Shape Processing
Niloy J. Mitra, Michael Wand, Hao Zhang, Daniel Cohen-Or, Vladimir G. Kim, and Qixing Huang
SIGGRAPH Asia 2013 (course notes)

[Paper: 26mb  3mb]  [Talk]  [
Abstract

Shape structure is about the arrangement and relations between shape parts. Structure-aware shape processing goes beyond local geometry and low level processing, and analyzes and processes shapes at a high level. It focuses more on the global inter and intra semantic relations among the parts of shape rather than on their local geometry. With recent developments in easy shape acquisition, access to vast repositories of 3D models, and simple-to-use desktop fabrication possibilities, the study of structure in shapes has become a central research topic in shape analysis, editing, and modeling. A whole new line of structure-aware shape processing algorithms has emerged that base their operation on an attempt to understand such structure in shapes. The algorithms broadly consist of two key phases: an analysis phase, which extracts structural information from input data; and a (smart) processing phase, which utilizes the extracted information for exploration, editing, and synthesis of novel shapes. In this course, we will organize, summarize, and present the key concepts and methodological approaches towards efficient structure-aware shape processing. We discuss common models of structure, their implementation in terms of mathematical formalism and algorithms, and explain the key principles in the context of a number of state-of-the-art approaches. Further, we attempt to list the key open problems and challenges, both at the technical and at the conceptual level, to make it easier for new researchers to better explore and contribute to this topic. Our goal is to both give the practitioner an overview of available structure-aware shape processing techniques, as well as identify future research questions in this important, emerging, and fascinating research area.

]  [
BibTex

@article{Mitra13,
Author = {Niloy J. Mitra and Michael Wand and Hao Zhang and Daniel Cohen-Or and Vladimir G. Kim and Qi-Xing Huang},
Journal = {SIGGRAPH Asia Course notes},
Title = {{Structure-Aware Shape Processing}},
Year = {2013}}

]  
iconUnderstanding the Structure of Large, Diverse Collections of Shapes
Vladimir G. Kim
PhD Dissertation, Princeton University, 2013

[Code and Data]  [Paper: 31mb  11mb]  [Talk]  [
Abstract

My dissertation is mainly based on three papers: Blended Intrinsic Maps, Exploring Collections of 3D Models using Fuzzy Correspondences, and Learning Part-based Templates from Large Collections of 3D Shapes.

]  [
BibTex

@article{Kim13a,
Author = {Vladimir G. Kim},
Journal = {Doctoral Dissertation, Princeton University},
Title = {{Understanding the Structure of Large, Diverse Collections of Shapes}},
Year = {2013}}

]  
iconLearning Part-based Templates from Large Collections of 3D Shapes
Vladimir G. Kim, Wilmot Li, Niloy J. Mitra, Siddhartha Chaudhuri, Stephen DiVerdi, and Thomas Funkhouser
SIGGRAPH 2013

[Code and Data]  [Paper: 13mb  4mb]  [
Note

Errata
Equation 7 has a typo; here is the corrected version:

]  [
Abstract

As large repositories of 3D shape collections continue to grow, understanding the data, especially encoding the inter-model similarity and their variations, is of central importance. For example, many data-driven approaches now rely on access to semantic segmentation information, accurate inter-model point-to-point correspondence, and deformation models that characterize the model collections. Existing approaches, however, are either supervised requiring manual labeling; or employ super-linear matching algorithms and thus are unsuited for analyzing large collections spanning many thousands of models. We propose an automatic algorithm that starts with an initial template model and then jointly optimizes for part segmentation, point-to-point surface correspondence, and a compact deformation model to best explain the input model collection. As output, the algorithm produces a set of probabilistic part-based templates that groups the original models into clusters of models capturing their styles and variations. We evaluate our algorithm on several standard datasets and demonstrate its scalability by analyzing much larger collections of up to thousands of shapes.

]  [
BibTex

@article{Kim13,
Author = {Vladimir G. Kim and Wilmot Li and Niloy J. Mitra and Siddhartha Chaudhuri and Stephen DiVerdi and Thomas Funkhouser},
Journal = {Transactions on Graphics (Proc. of SIGGRAPH)},
Number = {4},
Title = {{Learning Part-based Templates from Large Collections of 3D Shapes}},
Year = {2013}}

]  
iconExploring Collections of 3D Models using Fuzzy Correspondences
Vladimir G. Kim, Wilmot Li, Niloy J. Mitra, Stephen DiVerdi, and Thomas Funkhouser
SIGGRAPH 2012

[Code and Data]  [Paper: 23mb  5mb]  [Talk]  [Video]  [
Abstract

Large collections of 3D models are now commonly available via many public repositories, opening new possibilities for data mining, visualization, and synthesis of new models. However, exploring such collections remains challenging because similarity relationships between points on 3D surfaces are often ambiguous and/or difficult to infer automatically. To address this challenge, we introduce an encoding of similarity relationships using fuzzy point correspondences. Based on the observation that correspondence space is low-dimensional, we propose a robust and efficient computational framework to estimate fuzzy correspondences using only a sparse set of pairwise model alignments. We evaluate our algorithm on a range of correspondence benchmarks and report substantial improvements both in terms of accuracy and speed compared to existing alternatives. Further, we use fuzzy correspondences to process large model collections collectively and demonstrate applications towards view alignment, smart exploration, and faceted browsing.
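A minimal sketch of the low-dimensional observation: stack a sparse set of pairwise alignment matrices into one block matrix over all (model, point) pairs and take a truncated eigenbasis, so that the fuzzy similarity of two points is a dot product of their embedding rows. This illustrates the low-rank idea only, not the paper's optimization; all names below are hypothetical, and models are assumed to have the same number of sample points.

```python
import numpy as np

def fuzzy_embedding(pairwise, n_pts, n_models, rank=10):
    """
    pairwise: dict {(a, b): (n_pts, n_pts) alignment matrix} for a sparse set of
    model pairs. Builds the block correspondence matrix, exploits its low rank
    via a truncated eigendecomposition, and returns per-point embedding rows;
    fuzzy similarity of points (a, i) and (b, j) is a dot product of rows.
    """
    N = n_pts * n_models
    W = np.eye(N)  # each point corresponds to itself
    for (a, b), M in pairwise.items():
        W[a*n_pts:(a+1)*n_pts, b*n_pts:(b+1)*n_pts] = M
        W[b*n_pts:(b+1)*n_pts, a*n_pts:(a+1)*n_pts] = M.T
    vals, vecs = np.linalg.eigh((W + W.T) / 2)
    top = np.argsort(vals)[-rank:]  # keep the dominant spectrum
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))
```

Because the embedding is low-rank, correspondences between model pairs that were never directly aligned still emerge by transitivity through the shared basis.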

]  [
BibTex

@article{Kim12,
Author = {Vladimir G. Kim and Wilmot Li and Niloy J. Mitra and Stephen DiVerdi and Thomas Funkhouser},
Journal = {Transactions on Graphics (Proc. of SIGGRAPH)},
Number = {4},
Title = {{Exploring Collections of 3D Models using Fuzzy Correspondences}},
Year = {2012}}

]  
iconSymmetry-Guided Texture Synthesis and Manipulation
Vladimir G. Kim, Yaron Lipman, and Thomas Funkhouser
Transactions on Graphics, 2012 (Presented at SIGGRAPH 2012)

[Code and Data]  [Paper: 74mb  4mb]  [Talk]  [
Abstract

This paper presents a framework for symmetry-guided texture synthesis and processing. It is motivated by the long-standing problem of how to optimize, transfer, and control the spatial patterns in textures. The key idea is that symmetry representations that measure autocorrelations with respect to all transformations of a group are a natural way to describe spatial patterns in many real-world textures. To leverage this idea, we provide methods to transfer symmetry representations from one texture to another, process the symmetries of a texture, and optimize textures with respect to properties of their symmetry representations. These methods are automatic and robust, as they don't require explicit detection of discrete symmetries. Applications are investigated for optimizing, processing and transferring symmetries and textures.

]  [
BibTex

@article{Kim12a,
Author = {Vladimir G. Kim and Yaron Lipman and Thomas Funkhouser},
Journal = {Transactions on Graphics},
Number = {3},
Title = {{Symmetry-Guided Texture Synthesis and Manipulation}},
Year = {2012}}

]  
iconSimple Formulas For Quasiconformal Plane Deformations
Yaron Lipman, Vladimir G. Kim, and Thomas Funkhouser
Transactions on Graphics, 2012 (Presented at SIGGRAPH 2012)

[Code]  [Paper: 6mb  1mb]  [
Abstract

We introduce a simple formula for 4-point planar warping that produces provably good 2D deformations. In contrast to previous work, the new deformation minimizes the maximum conformal distortion and spreads the distortion equally across the domain. We derive closed-form formulas for computing the 4-point interpolant and analyze its properties. We further explore applications to 2D shape deformations by building local deformation operators that use Thin-Plate Splines to further deform the 4-point interpolant to satisfy certain boundary conditions. Although our theory does not extend to this case, we demonstrate that, practically, these local operators can be used to create compound deformations with fewer control points and smaller worst-case distortions in comparison to the state of the art.

]  [
BibTex

@article{Lipman12,
Author = {Yaron Lipman and Vladimir G. Kim and Thomas Funkhouser},
Journal = {Transactions on Graphics},
Number = {5},
Title = {{Simple Formulas For Quasiconformal Plane Deformations}},
Year = {2012}}

]  
iconFinding Surface Correspondences Using Symmetry Axis Curves
Tianqiang Liu, Vladimir G. Kim, and Thomas Funkhouser
SGP 2012 (Symposium on Geometry Processing)

[Code and Data]  [Paper: 8mb  1mb]  [
Abstract

In this paper, we propose an automatic algorithm for finding a correspondence map between two 3D surfaces. The key insight is that global reflective symmetry axes are stable, recognizable, semantic features of most real-world surfaces. Thus, it is possible to find a useful map between two surfaces by first extracting symmetry axis curves, aligning the extracted curves, and then extrapolating correspondences found on the curves to both surfaces. The main advantages of this approach are efficiency and robustness: the difficult problem of finding a surface map is reduced to three significantly easier problems: symmetry detection, curve alignment, and correspondence extrapolation, each of which has a robust, polynomial-time solution (e.g., optimal alignment of 1D curves is possible with dynamic programming). We investigate this approach on a wide range of examples, including both intrinsically symmetric surfaces and polygon soups, and find that it is superior to previous methods in cases where two surfaces have different overall shapes but similar reflective symmetry axes, a common case in computer graphics.

]  [
BibTex

@article{Liu12,
Author = {Tianqiang Liu and Vladimir G. Kim and Thomas Funkhouser},
Journal = {Computer Graphics Forum (Proc. of SGP)},
Number = {5},
Title = {{Finding Surface Correspondences Using Symmetry Axis Curves}},
Year = {2012}}

]  
iconBlended Intrinsic Maps
Vladimir G. Kim, Yaron Lipman, and Thomas Funkhouser
SIGGRAPH 2011

[Code and Data]  [Paper: 6mb  1mb]  [Talk]  [
Abstract

This paper describes a fully automatic pipeline for finding an intrinsic map between two non-isometric, genus zero surfaces. Our approach is based on the observation that efficient methods exist to search for nearly isometric maps (e.g., Möbius Voting or Heat Kernel Maps), but no single solution found with these methods provides low-distortion everywhere for pairs of surfaces differing by large deformations. To address this problem, we suggest using a weighted combination of these maps to produce a “blended map.” This approach enables algorithms that leverage efficient search procedures, yet can provide the flexibility to handle large deformations.
The main challenges of this approach lie in finding a set of candidate maps mi and their associated blending weights bi(p) for every point p on the surface. We address these challenges specifically for conformal maps by making the following contributions. First, we provide a way to blend maps, defining the image of p as the weighted geodesic centroid of mi(p). Second, we provide a definition for smooth blending weights at every point p that are proportional to the area preservation of mi at p. Third, we solve a global optimization problem that selects candidate maps based both on their area preservation and consistency with other selected maps. During experiments with these methods, we find that our algorithm produces blended maps that align semantic features better than alternative approaches over a variety of data sets.
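The blending step above can be illustrated as follows, using a Euclidean weighted average of the candidate images mi(p) as a stand-in for the paper's weighted geodesic centroid; the function names and the dense per-point weight representation are assumptions for the sketch.

```python
import numpy as np

def blend_maps(target_verts, candidate_maps, weights):
    """
    target_verts:   (m, 3) vertex positions of the target surface.
    candidate_maps: (k, n) int arrays; candidate_maps[i][p] is the target vertex
                    that candidate map i assigns to source point p.
    weights:        (k, n) per-point blending weights b_i(p), e.g. proportional
                    to the area preservation of map i at p.
    Returns (n, 3) blended image points. A Euclidean average stands in for the
    weighted geodesic centroid used in the paper.
    """
    w = weights / weights.sum(axis=0, keepdims=True)  # normalize weights per point
    images = target_verts[candidate_maps]             # (k, n, 3) candidate images of each point
    return (w[:, :, None] * images).sum(axis=0)       # weighted centroid per source point
```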

]  [
BibTex

@article{Kim11,
Author = {Vladimir G. Kim and Yaron Lipman and Thomas Funkhouser},
Journal = {Transactions on Graphics (Proc. of SIGGRAPH)},
Number = {4},
Title = {{Blended Intrinsic Maps}},
Year = {2011}}

]  
iconMöbius Transformations for Global Intrinsic Symmetry Analysis
Vladimir G. Kim, Yaron Lipman, Xiaobai Chen, and Thomas Funkhouser
SGP 2010 (Symposium on Geometry Processing)

[Code and Data]  [Paper: 7mb  1mb]  [Talk]  [
Note

Errata
Geodesic distances were not normalized properly, and thus the first row of Tables 1, 2, and 3 are not meaningful. Please refer to the project website for corrections.

]  [
Abstract

The goal of our work is to develop an algorithm for automatic and robust detection of global intrinsic symmetries in 3D surface meshes. Our approach is based on two core observations. First, symmetry invariant point sets can be detected robustly using critical points of the Average Geodesic Distance (AGD) function. Second, intrinsic symmetries are self-isometries of surfaces and as such are contained in the low dimensional group of Möbius transformations. Based on these observations, we propose an algorithm that: 1) generates a set of symmetric points by detecting critical points of the AGD function, 2) enumerates small subsets of those feature points to generate candidate Möbius transformations, and 3) selects among those candidate Möbius transformations the one(s) that best map the surface onto itself. The main advantages of this algorithm stem from the stability of the AGD in predicting potential symmetric point features and the low dimensionality of the Möbius group for enumerating potential self-mappings. During experiments with a benchmark set of meshes augmented with human-specified symmetric correspondences, we find that the algorithm is able to find intrinsic symmetries for a wide variety of object types with moderate deviations from perfect symmetry.
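Step 2 above relies on the fact that a Möbius transformation is fully determined by three point correspondences, which is what makes enumerating small subsets of feature points tractable. A minimal sketch of the standard cross-ratio construction (this is textbook material, not the paper's code):

```python
import numpy as np

def mobius_from_triplet(z, w):
    """2x2 complex matrix of the Mobius map sending points z[0..2] to w[0..2]
    (points on the complex plane, e.g. after flattening the surface)."""
    def to_standard(p):
        # Matrix of the map taking p[0] -> 0, p[1] -> 1, p[2] -> infinity.
        p0, p1, p2 = p
        return np.array([[p1 - p2, -p0 * (p1 - p2)],
                         [p1 - p0, -p2 * (p1 - p0)]], dtype=complex)
    A, B = to_standard(z), to_standard(w)
    return np.linalg.inv(B) @ A  # composition of Mobius maps = matrix product

def apply_mobius(M, z):
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)
```

Each candidate triplet of symmetric feature points yields one such transformation; the algorithm then keeps the candidates that best map the surface onto itself.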

]  [
BibTex

@article{Kim10,
Author = {Vladimir G. Kim and Yaron Lipman and Xiaobai Chen and Thomas Funkhouser},
Journal = {Computer Graphics Forum (Proc. of SGP)},
Number = {5},
Year = {2010},
Title = {{M\"{o}bius Transformations for Global Intrinsic Symmetry Analysis}}
}

]  
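As a concrete illustration of step 2 above, enumerating candidate Möbius transformations from small subsets of feature points, the sketch below recovers the unique Möbius map determined by three point correspondences in the complex plane (this presumes the surface has already been flattened conformally; the function names and NumPy setup are illustrative, not from the paper's code release):

```python
import numpy as np

def mobius_from_triplet(z, w):
    """Return the 2x2 complex matrix M = [[a, b], [c, d]] of the Mobius map
    f(x) = (a*x + b) / (c*x + d) sending z[i] -> w[i] for i in 0, 1, 2."""
    def to_canonical(p):
        # Matrix of the map sending p[0] -> 0, p[1] -> 1, p[2] -> infinity:
        # g(x) = ((x - p0)(p1 - p2)) / ((x - p2)(p1 - p0))
        p0, p1, p2 = p
        return np.array([[p1 - p2, -p0 * (p1 - p2)],
                         [p1 - p0, -p2 * (p1 - p0)]], dtype=complex)
    A = to_canonical(z)
    B = to_canonical(w)
    # f = B^{-1} o A; the adjugate of B is its inverse up to scale,
    # which leaves the Mobius map unchanged.
    B_adj = np.array([[B[1, 1], -B[0, 1]],
                      [-B[1, 0], B[0, 0]]], dtype=complex)
    return B_adj @ A

def apply_mobius(M, x):
    """Evaluate the Mobius map encoded by matrix M at complex point x."""
    (a, b), (c, d) = M
    return (a * x + b) / (c * x + d)
```

In the full algorithm, each candidate map produced this way would be scored by how well it aligns the remaining feature points with the surface's image of itself, and the best-scoring candidates kept.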
Shape-based Recognition of 3D Point Clouds in Urban Environments
Aleksey Golovinskiy, Vladimir G. Kim, and Thomas Funkhouser
ICCV 2009

[Paper: 3mb  1mb]  [
Abstract

This paper investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The system is decomposed into four steps: locating, segmenting, characterizing, and classifying clusters of 3D points. Specifically, we first cluster nearby points to form a set of potential object locations (with hierarchical clustering). Then, we segment points near those locations into foreground and background sets (with a graph-cut algorithm). Next, we build a feature vector for each point cluster (based on both its shape and its context). Finally, we label the feature vectors using a classifier trained on a set of manually labeled objects. The paper presents several alternative methods for each step. We quantitatively evaluate the system and tradeoffs of different alternatives in a truthed part of a scan of Ottawa that contains approximately 100 million points and 1000 objects of interest. Then, we use this truth data as a training set to recognize objects amidst approximately 1 billion points of the remainder of the Ottawa scan.

]  [
BibTex

@inproceedings{Golovinskiy09,
Author = {Aleksey Golovinskiy and Vladimir G. Kim and Thomas Funkhouser},
Booktitle = {ICCV},
Year = {2009},
Title = {{Shape-based Recognition of 3D Point Clouds in Urban Environments}}
}

]  
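The four-step pipeline in the abstract (locate, segment, characterize, classify) can be sketched end to end on toy data. The greedy proximity clustering, three-number shape descriptor, and nearest-prototype classifier below are deliberate simplifications of the paper's hierarchical clustering, graph-cut segmentation, and trained classifier; all names and parameters are hypothetical.

```python
import numpy as np

def locate_clusters(points, radius=3.0):
    """Greedy proximity clustering in the ground (xy) plane: a crude
    stand-in for the hierarchical clustering that proposes object locations."""
    centroids, clusters = [], []
    for p in points:
        d = [np.linalg.norm(p[:2] - c[:2]) for c in centroids]
        if d and min(d) < radius:
            i = int(np.argmin(d))
            clusters[i].append(p)
            centroids[i] = np.mean(clusters[i], axis=0)
        else:
            centroids.append(p.copy())
            clusters.append([p])
    return [np.array(c) for c in clusters]

def shape_features(cluster):
    """Simple shape descriptor: height, footprint diagonal, point count."""
    extent = cluster.max(axis=0) - cluster.min(axis=0)
    return np.array([extent[2], np.hypot(extent[0], extent[1]), len(cluster)])

def classify(features, prototypes):
    """Label by nearest prototype feature vector (stand-in for a classifier
    trained on manually labeled objects)."""
    labels = list(prototypes)
    dists = [np.linalg.norm(features - prototypes[l]) for l in labels]
    return labels[int(np.argmin(dists))]
```

For example, a tall thin cluster of points lands near a "pole" prototype while a wide low one lands near a "car" prototype; the real system replaces each of these stages with the more robust alternatives evaluated in the paper.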
Talks
Data-Driven Geometry Processing, UBC & SFU, May 2017
Part Structures In Large Collections of 3D Models, Dagstuhl 2017 (Seminar 17021, Functoriality in Geometric Data)
Finding Structure In Large Collections of 3D Models, GI 2016 (Graphics Interface)
Structure and Function in Large Collections of 3D Shapes, Cornell 2015
Program Committee
2018: Eurographics, SGP (Symposium on Geometry Processing), SMI (Shape Modeling International)
2017: SGP, PG (Pacific Graphics), CAD/Graphics
2016: SGP, SMI, PG
2015: SIGGRAPH, SGP, Eurographics, PG, CAD/Graphics
2014: SGP, Eurographics (short papers)
2013: SGP
Reviewer: CVPR, ICCV, ECCV, SIGGRAPH, Transactions on Graphics
Teaching
Deep Learning for Graphics, EG 2018 tutorial
    with Niloy Mitra, Iasonas Kokkinos, Paul Guerrero, Konstantinos Rematas, and Tobias Ritschel
Data-Driven Shape Analysis and Processing, SIGGRAPH Asia 2016 course and EG 2016 tutorial
    with Kai Xu, Qixing Huang, Niloy J. Mitra, and Evangelos Kalogerakis
CS 468 (Data-driven Shape Analysis), Stanford, 2014, Instructor
    with Qixing Huang
Structure-Aware Shape Processing, SIGGRAPH 2014 and SIGGRAPH Asia 2013 Course
    with Niloy J. Mitra, Michael Wand, Hao Zhang, Daniel Cohen-Or, and Qixing Huang
COS 426 (Computer Graphics), Princeton, 2011, TA
COS 116 (The Computational Universe), Princeton, 2010, TA