I work on geometry analysis algorithms at the intersection of graphics, vision, and machine learning, enabling novel interfaces for creative tasks. My recent research focuses on making it easier to understand, model, manipulate, and process geometric data such as models of 3D objects, interior environments, articulated characters, and fonts.

[Bio: Dr. Vladimir Kim is a Research Scientist at Adobe Research. His research interests include the processing and analysis of large geometric datasets via novel machine learning and optimization algorithms. He was a postdoctoral scholar at Stanford University (2013–2015), received his PhD from the Computer Science Department at Princeton University (2008–2013), and his BA in Mathematics and Computer Science from Simon Fraser University (2003–2008). He has served on the International Program Committee for SIGGRAPH (2015) and for the Symposium on Geometry Processing (2013–2018). He regularly reviews for top graphics and vision venues such as SIGGRAPH, Transactions on Graphics, CVPR, ICCV, and ECCV.]

[Personal: The short version of Vladimir is Vova. My name reflects my mixed ethnic background: my father, George Kim, is Korean, and my mother, Irina Kozyreva, is Russian. The 'G' in the middle of my name is a patronymic; the full Russian version of my name is Vladimir Georgievich Kim. I am originally from the small town of Kara-Balta, Kyrgyzstan.]

vokim@adobe.com
vova.g.kim@gmail.com

Publications
3D-CODED : 3D Correspondences by Deep Deformation
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, and Mathieu Aubry
ECCV 2018
[Code and Data]  [Paper: 10mb  2mb]  [Abstract: We present a new deep learning approach for matching deformable shapes by introducing Shape Deformation Networks, which jointly encode 3D shapes and correspondences. This is achieved by factoring the surface representation into (i) a template that parameterizes the surface, and (ii) a learnt global feature vector that parameterizes the …]  [BibTex: @article{Groueix18a,Author = {Thibault Groueix and Matthew Fisher and Vladimir G. Kim and Bryan C. Russell and Mathieu Aubry},Journal = {ECCV},Title = {3D-CODED : 3D Correspondences by Deep Deformation},Year = {2018}}]
Real-Time Hair Rendering using Sequential Adversarial Networks
Lingyu Wei, Liwen Hu, Vladimir G. Kim, Ersin Yumer, and Hao Li
ECCV 2018
[Code and Data]  [Paper: 20mb  1mb]  [Abstract: We present an adversarial network for rendering photorealistic hair as an alternative to conventional computer graphics pipelines. Our deep learning approach requires neither low-level parameter tuning nor ad-hoc asset design. Our method simply takes a strand-based 3D hair model as input and provides intuitive user control for color and lighting through reference images. To handle the diversity of hairstyles and their appearance complexity, …]  [BibTex: @article{Wei18,Author = {Lingyu Wei and Liwen Hu and Vladimir G. Kim and Ersin Yumer and Hao Li},Journal = {ECCV},Title = {Real-Time Hair Rendering using Sequential Adversarial Networks},Year = {2018}}]
SEETHROUGH: Finding Objects in Heavily Occluded Indoor Scene Images
Moos Hueting, Pradyumna Reddy, Ersin Yumer, Vladimir G. Kim, Nathan Carr, and Niloy J. Mitra
3DV 2018 (oral)
[Code and Data]  [Paper: 30mb  8mb]  [Abstract: Discovering 3D arrangements of objects from single indoor images is important given its many applications including interior design, content creation, etc. Although heavily researched in recent years, existing approaches break down under medium or heavy occlusion as the core object detection module starts failing in the absence of directly visible cues. Instead, we take into account holistic contextual 3D information, exploiting the fact that objects in indoor scenes co-occur mostly in typical near-regular configurations. First, we use a neural network trained on real indoor annotated images to extract 2D keypoints, and feed them to a 3D candidate object generation stage. Then, we solve a global selection problem among these 3D candidates using pairwise co-occurrence statistics discovered from a large 3D scene database. We iterate the process, allowing for candidates with low keypoint response to be incrementally detected based on the location of the already discovered nearby objects. Focusing on chairs, we demonstrate significant performance improvement over combinations of state-of-the-art methods, especially for scenes with moderately to severely occluded objects.]  [BibTex: @article{Hueting18,Author = {Moos Hueting and Pradyumna Reddy and Ersin Yumer and Vladimir G. Kim and Nathan Carr and Niloy J. Mitra},Journal = {3DV},Title = {SEETHROUGH: Finding Objects in Heavily Occluded Indoor Scene Images},Year = {2018}}]
Learning Material-Aware Local Descriptors for 3D Shapes
Hubert Lin, Melinos Averkiou, Evangelos Kalogerakis, Balazs Kovacs, Siddhant Ranade, Vladimir G. Kim, Siddhartha Chaudhuri, and Kavita Bala
3DV 2018
[Code and Data TBA]  [Paper: 6mb  1mb]  [Abstract: Material understanding is critical for design, geometric modeling, and analysis of functional objects. We enable material-aware 3D shape analysis by employing a projective convolutional neural network architecture to learn material-aware descriptors from view-based representations of 3D points for point-wise material classification or material-aware retrieval. Unfortunately, only a small fraction of shapes in 3D repositories are labeled with physical materials, posing a challenge for learning methods. To address this challenge, we crowdsource a dataset of 3,080 3D shapes with part-wise material labels. We focus on furniture models which exhibit interesting structure and material variability. In addition, we also contribute a high-quality expert-labeled benchmark of 115 shapes from Herman Miller and IKEA for evaluation. We further apply a mesh-aware conditional random field, which incorporates rotational and reflective symmetries, to smooth our local material predictions across neighboring surface patches. We demonstrate the effectiveness of our learned descriptors for automatic texturing, material-aware retrieval, and physical simulation.]  [BibTex: @article{Lin18,Author = {Hubert Lin and Melinos Averkiou and Evangelos Kalogerakis and Balazs Kovacs and Siddhant Ranade and Vladimir G. Kim and Siddhartha Chaudhuri and Kavita Bala},Journal = {3DV},Title = {Learning Material-Aware Local Descriptors for 3D Shapes},Year = {2018}}]
ToonSynth: Example-Based Synthesis of Hand-Colored Cartoon Animations
Marek Dvorožňák, Wilmot Li, Vladimir G. Kim, and Daniel Sýkora
SIGGRAPH 2018
[Code and Data]  [Paper: 8mb]  [Abstract: We present a new example-based approach for synthesizing hand-colored cartoon animations. Our method produces results that preserve the specific visual appearance and stylized motion of manually authored animations without requiring artists to draw every frame from scratch. In our framework, the artist first stylizes a limited set of known source skeletal animations from which we extract a style-aware puppet that encodes the appearance and motion characteristics of the artwork. Given a new target skeletal motion, our method automatically transfers the style from the source examples to create a hand-colored target animation. Compared to previous work, our technique is the first to preserve both the detailed visual appearance and stylized motion of the original hand-drawn content. Our approach has numerous practical applications including traditional animation production and content creation for games.]  [BibTex: @article{Dvoroznak18,Author = {Marek Dvoro\v{z}\v{n}\'{a}k and Wilmot Li and Vladimir G. Kim and Daniel S\'{y}kora},Journal = {SIGGRAPH},Title = {ToonSynth: Example-Based Synthesis of Hand-Colored Cartoon Animations},Year = {2018}}]
ToonCap: A Layered Deformable Model for Capturing Poses From Cartoon Characters
Xinyi Fan, Amit Bermano, Vladimir G. Kim, Jovan Popovic, and Szymon Rusinkiewicz
Expressive 2018
[Webpage]  [Paper: 13mb  5mb]  [Abstract: Characters in traditional artwork such as children's books or cartoon animations are typically drawn once, in fixed poses, with little opportunity to change the characters' appearance or re-use them in a different animation. To enable these applications one can fit a consistent parametric deformable model (a puppet) to different images of a character, thus establishing consistent segmentation, dense semantic correspondence, and deformation parameters across poses. In this work, we argue that a layered deformable puppet is a natural representation for hand-drawn characters, providing an effective way to deal with the articulation, expressive deformation, and occlusion that are common to this style of artwork. Our main contribution is an automatic pipeline for fitting these models to unlabeled images depicting the same character in various poses. We demonstrate that the output of our pipeline can be used directly for editing and re-targeting animations.]  [BibTex: @article{Fan18,Author = {Xinyi Fan and Amit Bermano and Vladimir G. Kim and Jovan Popovic and Szymon Rusinkiewicz},Journal = {Expressive},Title = {ToonCap: A Layered Deformable Model for Capturing Poses From Cartoon Characters},Year = {2018}}]
Learning Fuzzy Set Representations of Partial Shapes on Dual Embedding Spaces
Minhyuk Sung, Anastasia Dubrovina, Vladimir G. Kim, and Leonidas Guibas
SGP 2018 (Symposium on Geometry Processing)
[Code and Data]  [Paper: 7mb  5mb]  [Abstract: Modeling relations between components of 3D objects is essential for many geometry editing tasks. Existing techniques commonly rely on labeled components, which requires substantial annotation effort and limits components to a dictionary of pre-defined semantic parts. We propose a novel framework based on neural networks that analyzes an uncurated collection of 3D models from the same category and learns two important types of semantic relations among full and partial shapes: complementarity and interchangeability. The former helps to identify which two partial shapes make a complete plausible object, and the latter indicates that interchanging two partial shapes from different objects preserves the object plausibility. Our key idea is to jointly encode both relations by embedding partial shapes as fuzzy sets in dual embedding spaces. We model these two relations as fuzzy set operations performed across the dual embedding spaces, and within each space, respectively. We demonstrate the utility of our method for various retrieval tasks that are commonly needed in geometric modeling interfaces.]  [BibTex: @article{Sung18,Author = {Minhyuk Sung and Anastasia Dubrovina and Vladimir G. Kim and Leonidas Guibas},Journal = {SGP},Title = {Learning Fuzzy Set Representations of Partial Shapes on Dual Embedding Spaces},Year = {2018}}]
AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, and Mathieu Aubry
CVPR 2018 (spotlight)
[Code and Data]  [Paper: 5mb  2mb]  [Abstract: We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization capabilities, and the possibility to generate a shape of arbitrary resolution without memory issues. We demonstrate these benefits and compare to strong baselines on the ShapeNet benchmark for two applications: (i) auto-encoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing its potential for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.]  [BibTex: @article{Groueix18,Author = {Thibault Groueix and Matthew Fisher and Vladimir G. Kim and Bryan C. Russell and Mathieu Aubry},Journal = {CVPR},Title = {AtlasNet: A Papier-M\^ach\'e Approach to Learning 3D Surface Generation},Year = {2018}}]
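The core representation behind surface-element approaches like the one above is simple to sketch: each element is a small learned function that maps 2D parameter points, together with a global shape code, to 3D. The toy numpy sketch below (random placeholder weights stand in for trained parameters; all names are illustrative, not the paper's code) shows why output resolution is unconstrained: the same decoder can be sampled with any number of parameter points.

```python
import numpy as np

rng = np.random.default_rng(0)

# One "surface element": a tiny MLP mapping a 2D parameter point plus a
# global shape code to a 3D surface point. Random weights are placeholders
# standing in for learned parameters.
D_CODE, HIDDEN = 16, 32
W1 = rng.normal(size=(2 + D_CODE, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, 3)) * 0.1
b2 = np.zeros(3)

def decode_patch(uv, code):
    """Map N x 2 parameter samples (with one shared shape code) to N x 3 points."""
    x = np.concatenate([uv, np.tile(code, (len(uv), 1))], axis=1)
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# Sample the unit square at any density: the surface can be generated at
# arbitrary resolution without storing a grid or voxel volume.
code = rng.normal(size=D_CODE)
pts_coarse = decode_patch(rng.uniform(size=(64, 2)), code)
pts_fine = decode_patch(rng.uniform(size=(4096, 2)), code)
```

In the actual method, several such elements are trained jointly so their images cover the target surface like papier-mâché strips; this sketch shows only a single untrained element.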
Tags2Parts: Discovering Semantic Regions from Shape Tags
Sanjeev Muralikrishnan, Vladimir G. Kim, and Siddhartha Chaudhuri
CVPR 2018
[Code and Data]  [Paper: 5mb]  [Abstract: We propose a novel method for discovering shape regions that strongly correlate with user-prescribed tags. For example, given a collection of chairs tagged as either "has armrest" or "lacks armrest", our system correctly highlights the armrest regions as the main distinctive parts between the two chair types. To obtain point-wise predictions from shape-wise tags we develop a novel neural network architecture that is trained with tag classification loss, but is designed to rely on segmentation to predict the tag. Our network is inspired by U-Net, but we replicate shallow U structures several times with new skip connections and pooling layers, and call the resulting architecture "WU-Net". We test our method on segmentation benchmarks and show that even with weak supervision of whole shape tags, our method can infer meaningful semantic regions, without ever observing shape segmentations. Further, once trained, the model can process shapes for which the tag is entirely unknown. As a bonus, our architecture is directly operational under full supervision and performs strongly on standard benchmarks. We validate our method through experiments with many variant architectures and prior baselines, and demonstrate several applications.]  [BibTex: @article{Muralikrishnan18,Author = {Sanjeev Muralikrishnan and Vladimir G. Kim and Siddhartha Chaudhuri},Journal = {CVPR},Title = {Tags2Parts: Discovering Semantic Regions from Shape Tags},Year = {2018}}]
Multi-Content GAN for Few-Shot Font Style Transfer
Samaneh Azadi, Matthew Fisher, Vladimir G. Kim, Zhaowen Wang, Eli Shechtman, and Trevor Darrell
CVPR 2018 (spotlight)
[Code and Data]  [Paper: 8mb]  [Abstract: In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface. To generate a set of multi-content images following a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model considering content along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in the real world such as those on movie posters or infographics. We seek to transfer both the typographic stylization (e.g., serifs and ears) as well as the textual stylization (e.g., color gradients and effects). We base our experiments on our collected dataset including 10,000 fonts with different styles and demonstrate effective generalization from a very small number of observed glyphs.]  [BibTex: @article{Azadi18,Author = {Samaneh Azadi and Matthew Fisher and Vladimir G. Kim and Zhaowen Wang and Eli Shechtman and Trevor Darrell},Journal = {CVPR},Title = {Multi-Content GAN for Few-Shot Font Style Transfer},Year = {2018}}]
Learning Local Shape Descriptors from Part Correspondences With Multi-view Convolutional Networks
Haibin Huang, Evangelos Kalogerakis, Siddhartha Chaudhuri, Duygu Ceylan, Vladimir G. Kim, and Ersin Yumer
Transactions on Graphics, 2018 (presented at SIGGRAPH 2018)
[Code and Data]  [Paper: 55mb  2mb]  [Abstract: We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching. Our key insight is that the neighborhood of a point on a shape is effectively captured at multiple scales by a succession of progressively zoomed out views, taken from carefully selected camera positions. We propose a convolutional neural network that uses local views around a point to embed it to a multidimensional descriptor space, such that geometrically and semantically similar points are close to one another. To train our network, we leverage two extremely large sources of data. First, since our network processes 2D images, we repurpose architectures pre-trained on massive image datasets. Second, we automatically generate a synthetic dense correspondence dataset by part-aware, non-rigid alignment of a massive collection of 3D models. As a result of these design choices, our view-based architecture effectively encodes multi-scale local context and fine-grained surface detail. We demonstrate through several experiments that our learned local descriptors are more general and robust compared to state-of-the-art alternatives, and have a variety of applications without any additional fine-tuning.]  [BibTex: @article{Huang18,Author = {Haibin Huang and Evangelos Kalogerakis and Siddhartha Chaudhuri and Duygu Ceylan and Vladimir G. Kim and Ersin Yumer},Journal = {Transactions on Graphics},Title = {Learning Local Shape Descriptors from Part Correspondences With Multi-view Convolutional Networks},Year = {2018}}]
ComplementMe: Weakly-supervised Component Suggestion for 3D Modeling
Minhyuk Sung, Hao Su, Vladimir G. Kim, Siddhartha Chaudhuri, and Leonidas Guibas
SIGGRAPH Asia 2017
[Code and Data]  [Paper: 15mb  3mb]  [Abstract: Assembly-based tools provide a powerful modeling paradigm for non-expert shape designers. However, choosing a component from a large shape repository and aligning it to a partial assembly can become a daunting task. In this paper we describe novel neural network architectures for suggesting complementary components and their placement for an incomplete 3D part assembly. Unlike most existing techniques, our networks are trained on unlabeled data obtained from public online repositories, and do not rely on consistent part segmentations or labels. Absence of labels poses a challenge in indexing the database of parts for retrieval. We address it by jointly training embedding and retrieval networks, where the first indexes parts by mapping them to a low-dimensional feature space, and the second maps partial assemblies to appropriate complements. The combinatorial nature of part arrangements poses another challenge, since the retrieval network is not a function: several complements can be appropriate for the same input. Thus, instead of predicting a single output, we train our network to predict a probability distribution over the space of part embeddings. This allows our method to deal with ambiguities and naturally enables a UI that seamlessly integrates user preferences into the design process. We demonstrate that our method can be used to design complex shapes with minimal or no user input. To evaluate our approach we develop a novel benchmark for component suggestion systems demonstrating significant improvement over state-of-the-art techniques.]  [BibTex: @article{Sung17,Author = {Minhyuk Sung and Hao Su and Vladimir G. Kim and Siddhartha Chaudhuri and Leonidas Guibas},Journal = {SIGGRAPH Asia},Title = {ComplementMe: Weakly-supervised Component Suggestion for 3D Modeling},Year = {2017}}]
GWCNN: A Metric Alignment Layer for Deep Shape Analysis
Danielle Ezuz, Justin Solomon, Vladimir G. Kim, and Mirela Ben-Chen
SGP 2017 (Symposium on Geometry Processing)
[Paper: 19mb  2mb]  [Abstract: Deep neural networks provide a promising tool for incorporating semantic information in geometry processing applications. Unlike image and video processing, however, geometry processing requires handling unstructured geometric data, and thus data representation becomes an important challenge in this framework. Existing approaches tackle this challenge by converting point clouds, meshes, or polygon soups into regular representations using, e.g., multi-view images, volumetric grids or planar parameterizations. In each of these cases, geometric data representation is treated as a fixed pre-process that is largely disconnected from the machine learning tool. In contrast, we propose to optimize for the geometric representation during the network learning process using a novel metric alignment layer. Our approach maps unstructured geometric data to a regular domain by minimizing the metric distortion of the map using the regularized Gromov-Wasserstein objective. This objective is parameterized by the metric of the target domain and is differentiable; thus, it can be easily incorporated into a deep network framework. Furthermore, the objective aims to align the metrics of the input and output domains, promoting consistent output for similar shapes. We show the effectiveness of our layer within a deep network trained for shape classification, demonstrating state-of-the-art performance for nonrigid shapes.]  [BibTex: @article{Ezuz17,Author = {Danielle Ezuz and Justin Solomon and Vladimir G. Kim and Mirela Ben-Chen},Journal = {SGP},Title = {GWCNN: A Metric Alignment Layer for Deep Shape Analysis},Year = {2017}}]
Learning Hierarchical Shape Segmentation and Labeling from Online Repositories
Li Yi, Leonidas Guibas, Aaron Hertzmann, Vladimir G. Kim, Hao Su, and Ersin Yumer
SIGGRAPH 2017
[Code and Data]  [Paper: 36mb  10mb]  [Abstract: We propose a method for converting geometric shapes into hierarchically segmented parts with part labels. Our key idea is to train category-specific models from the scene graphs and part names that accompany 3D shapes in public repositories. These freely-available annotations represent an enormous, untapped source of information on geometry. However, because the models and corresponding scene graphs are created by a wide range of modelers with different levels of expertise, modeling tools, and objectives, these models have very inconsistent segmentations and hierarchies with sparse and noisy textual tags. Our method involves two analysis steps. First, we perform a joint optimization to simultaneously cluster and label parts in the database while also inferring a canonical tag dictionary and part hierarchy. We then use this labeled data to train a method for hierarchical segmentation and labeling of new 3D shapes. We demonstrate that our method can mine complex information, detecting hierarchies in man-made objects and their constituent parts, obtaining finer scale details than existing alternatives. We also show that, by performing domain transfer using a few supervised examples, our technique outperforms fully-supervised techniques that require hundreds of manually-labeled models.]  [BibTex: @article{Yi17,Author = {Li Yi and Leonidas Guibas and Aaron Hertzmann and Vladimir G. Kim and Hao Su and Ersin Yumer},Journal = {SIGGRAPH},Title = {Learning Hierarchical Shape Segmentation and Labeling from Online Repositories},Year = {2017}}]
Convolutional Neural Networks on Surfaces via Seamless Toric Covers
Haggai Maron, Meirav Galun, Noam Aigerman, Miri Trope, Nadav Dym, Ersin Yumer, Vladimir G. Kim, and Yaron Lipman
SIGGRAPH 2017
[Code]  [Paper: 73mb  4mb]  [Abstract: The recent success of convolutional neural networks (CNNs) for image processing tasks is inspiring research efforts attempting to achieve similar success for geometric tasks. One of the main challenges in applying CNNs to surfaces is defining a natural convolution operator on surfaces. In this paper we present a method for applying deep learning to sphere-type shapes using a global seamless parameterization to a planar flat-torus, for which the convolution operator is well defined. As a result, the standard deep learning framework can be readily applied for learning semantic, high-level properties of the shape. An indication of our success in bridging the gap between images and surfaces is the fact that our algorithm succeeds in learning semantic information from an input of raw low-dimensional feature vectors. We demonstrate the usefulness of our approach by presenting two applications: human body segmentation, and automatic landmark detection on anatomical surfaces. We show that our algorithm compares favorably with competing geometric deep-learning algorithms for segmentation tasks, and is able to produce meaningful correspondences on anatomical surfaces where hand-crafted features are bound to fail.]  [BibTex: @article{Maron17,Author = {Haggai Maron and Meirav Galun and Noam Aigerman and Miri Trope and Nadav Dym and Ersin Yumer and Vladimir G. Kim and Yaron Lipman},Journal = {SIGGRAPH},Title = {Convolutional Neural Networks on Surfaces via Seamless Toric Covers},Year = {2017}}]
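The reason a flat torus makes convolution well defined is that the domain has no boundary: periodic padding closes it up, so a filter never falls off an edge. The minimal numpy illustration below shows convolution on a torus (this is only the generic boundary-free convolution idea, not the paper's seamless parameterization) and checks the resulting shift-equivariance under cyclic translation.

```python
import numpy as np

def conv2d_torus(image, kernel):
    """2D cross-correlation with periodic boundary: the image is treated as
    living on a flat torus, so the kernel window always wraps around."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="wrap")
    H, W = image.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)
k = np.ones((3, 3)) / 9.0
smoothed = conv2d_torus(img, k)
# On the torus, cyclic translation commutes with convolution:
shifted = conv2d_torus(np.roll(img, 1, axis=0), k)
```

Because there is no boundary case, standard CNN machinery (stacking layers, pooling, backprop) carries over unchanged once the surface signal has been transferred to the torus.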
Customized Software to Optimize Circumferential Pharyngoesophageal Free Flap Reconstruction
Oleksandr Butskiy, Vladimir G. Kim, Brent Chang, Donald Anderson, and Eitan Prisman
Laryngoscope, 2017
[Webpage]  [Paper: link]  [Abstract: This is not a computer science paper. I helped out a friend by building software that generates parametric developable patches to reconstruct part of a human throat. This software was used for some surgeries at Vancouver General Hospital.]  [BibTex: @article{Butskiy17,Author = {Oleksandr Butskiy and Vladimir G. Kim and Brent Chang and Donald Anderson and Eitan Prisman},Journal = {The Laryngoscope},Title = {Customized Software to Optimize Circumferential Pharyngoesophageal Free Flap Reconstruction},Year = {2017},issn = {1531-4995},url = {http://dx.doi.org/10.1002/lary.26497},doi = {10.1002/lary.26497}}]
A Scalable Active Framework for Region Annotation in 3D Shape Collections
Li Yi, Vladimir G. Kim, Duygu Ceylan, I-Chao Shen, Mengyan Yan, Hao Su, Cewu Lu, Qixing Huang, Alla Sheffer, and Leonidas Guibas
SIGGRAPH Asia 2016
[Code and Data]  [Paper: 23mb  7mb]  [Abstract: Large repositories of 3D shapes provide valuable input for data-driven analysis and modeling tools. They are especially powerful once annotated with semantic information such as salient regions and functional parts. We propose a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations. Given a shape collection and a user-specified region label our goal is to correctly demarcate the corresponding regions with minimal manual work. Our active framework achieves this goal by cycling between manually annotating the regions, automatically propagating these annotations across the rest of the shapes, manually verifying both human and automatic annotations, and learning from the verification results to improve the automatic propagation algorithm. We use a unified utility function that explicitly models the time cost of human input across all steps of our method. This allows us to jointly optimize for the set of models to annotate and for the set of models to verify based on the predicted impact of these actions on the human efficiency. We demonstrate that incorporating verification of all produced labelings within this unified objective improves both accuracy and efficiency of the active learning procedure. We automatically propagate human labels across a dynamic shape network using a conditional random field (CRF) framework, taking advantage of global shape-to-shape similarities, local feature similarities, and point-to-point correspondences. By combining these diverse cues we achieve higher accuracy than existing alternatives. We validate our framework on existing benchmarks demonstrating it to be significantly more efficient at using human input compared to previous techniques. We further validate its efficiency and robustness by annotating a massive shape dataset, labeling over 93,000 shape parts, across multiple model classes, and providing a labeled part collection more than one order of magnitude larger than existing ones.]  [BibTex: @article{Yi16,Author = {Li Yi and Vladimir G. Kim and Duygu Ceylan and I-Chao Shen and Mengyan Yan and Hao Su and Cewu Lu and Qixing Huang and Alla Sheffer and Leonidas Guibas},Journal = {SIGGRAPH Asia},Title = {A Scalable Active Framework for Region Annotation in 3D Shape Collections},Year = {2016}}]
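The propagation step in methods of this kind rests on a simple principle: labels spread from annotated shapes to unannotated ones along similarity edges. The numpy sketch below is a generic label-propagation illustration over a toy shape-similarity graph (not the paper's CRF, which also fuses local features and point correspondences); annotated shapes are clamped while the rest absorb their neighbors' beliefs.

```python
import numpy as np

# Toy similarity graph over 5 shapes; entry (i, j) = how alike they are.
W = np.array([
    [0.0, 0.9, 0.1, 0.0, 0.0],
    [0.9, 0.0, 0.2, 0.0, 0.0],
    [0.1, 0.2, 0.0, 0.8, 0.7],
    [0.0, 0.0, 0.8, 0.0, 0.9],
    [0.0, 0.0, 0.7, 0.9, 0.0],
])

# Class beliefs: two human-annotated shapes, the rest start undecided.
labels = np.full((5, 2), 0.5)
labels[0] = [1.0, 0.0]   # annotated: class A
labels[4] = [0.0, 1.0]   # annotated: class B

# Iteratively replace each belief by the similarity-weighted average of its
# neighbors' beliefs, re-clamping the human annotations every round.
P = W / W.sum(axis=1, keepdims=True)
for _ in range(100):
    labels = P @ labels
    labels[0] = [1.0, 0.0]
    labels[4] = [0.0, 1.0]

pred = labels.argmax(axis=1)
```

Shape 1 ends up in class A (it is strongly tied to shape 0), while shapes 2 and 3 follow the B side of the graph; the active loop in the paper then decides which of these propagated labels are worth a human verification pass.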
Data-Driven Shape Analysis and Processing
Kai Xu, Vladimir G. Kim, Qixing Huang, Niloy J. Mitra, and Evangelos Kalogerakis
SIGGRAPH Asia 2016 (course notes)
[Wikipage]  [Paper: 13mb  2mb]  [Abstract: Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, through reviewing the literature and relating the existing works with both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.]  [BibTex: @article{Xu16,Author = {Kai Xu and Vladimir G. Kim and Qixing Huang and Niloy J. Mitra and Evangelos Kalogerakis},Journal = {SIGGRAPH Asia Course notes},Title = {Data-Driven Shape Analysis and Processing},Year = {2016}}]
Entropic Metric Alignment for Correspondence Problems
Justin Solomon, Gabriel Peyre, Vladimir G. Kim, and Suvrit Sra
SIGGRAPH 2016
[Code]  [Paper: 31mb  1mb]  [Abstract: Many shape and image processing tools rely on computation of correspondences between geometric domains. Efficient methods that stably extract "soft" matches in the presence of diverse geometric structures have proven to be valuable for shape retrieval and transfer of labels or semantic information. With these applications in mind, we present an algorithm for probabilistic correspondence that optimizes an entropy-regularized Gromov-Wasserstein (GW) objective. Built upon recent developments in numerical optimal transportation, our algorithm is compact, provably convergent, and applicable to any geometric domain expressible as a metric measure matrix. We provide comprehensive experiments illustrating the convergence and applicability of our algorithm to a variety of graphics tasks. Furthermore, we expand entropic GW correspondence to a framework for other matching problems, incorporating partial distance matrices, user guidance, shape exploration, symmetry detection, and joint analysis of more than two domains. These applications expand the scope of entropic GW correspondence to major shape analysis problems and are stable to distortion and noise.]  [BibTex: @article{Solomon16,Author = {Justin Solomon and Gabriel Peyre and Vladimir G. Kim and Suvrit Sra},Journal = {Transactions on Graphics (Proc. of SIGGRAPH)},Title = {Entropic Metric Alignment for Correspondence Problems},Year = {2016}}]
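Solvers for the entropy-regularized GW objective typically alternate between linearizing the quartic objective around the current coupling and an entropic (Sinkhorn) projection back onto the couplings with the prescribed marginals. A compact numpy sketch of that scheme for the squared loss follows; sizes, the regularization strength, and iteration counts are illustrative choices, and on such tiny random inputs the recovered matching is only approximate, so nothing beyond the marginal constraints should be read into it.

```python
import numpy as np

def sinkhorn(K, p, q, iters=200):
    """Diagonally scale K into a coupling with row sums p and column sums q."""
    u = np.ones_like(p)
    for _ in range(iters):
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]

def entropic_gw(D1, D2, p, q, eps=0.1, outer=30):
    """Entropy-regularized Gromov-Wasserstein coupling for the squared loss:
    linearize the GW objective, then Sinkhorn-project, and repeat."""
    T = np.outer(p, q)
    const = np.outer((D1 ** 2) @ p, np.ones_like(q)) \
          + np.outer(np.ones_like(p), (D2 ** 2) @ q)
    for _ in range(outer):
        cost = const - 2.0 * D1 @ T @ D2.T   # gradient of the GW energy at T
        T = sinkhorn(np.exp(-cost / eps), p, q)
    return T

# Two metric-measure spaces: the second is a relabeled copy of the first,
# so a good coupling concentrates mass near that relabeling.
rng = np.random.default_rng(1)
X = rng.uniform(size=(6, 2))
D1 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
perm = rng.permutation(6)
D2 = D1[np.ix_(perm, perm)]
p = q = np.full(6, 1.0 / 6.0)
T = entropic_gw(D1, D2, p, q)
match = T.argmax(axis=1)   # soft matches, hardened per row
```

Note that only intra-domain distance matrices D1 and D2 enter the computation, which is what makes the method applicable whenever a domain can be written as a metric measure matrix.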
 Physics-driven Pattern Adjustment for Direct 3D Garment EditingAric Bartle, Alla Sheffer, Vladimir G. Kim, Danny Kaufman, Nicholas Vining, Floraine BerthouzozSIGGRAPH 2016[Webpage]  [Paper: 37mb  3mb]  [Video]  [AbstractDesigners frequently reuse existing designs as a starting point for creating new garments. In order to apply garment modifications, which the designer envisions in 3D, existing tools require meticulous manual editing of 2D patterns. These 2D edits need to account both for the envisioned geometric changes in the 3D shape, as well as for various physical factors that affect the look of the draped garment. We propose a new framework that allows designers to directly apply the changes they envision in 3D space; and creates the 2D patterns that replicate this envisioned target geometry when lifted into 3D via a physical draping simulation. Our framework removes the need for laborious and knowledge-intensive manual 2D edits and allows users to effortlessly mix existing garment designs as well as adjust for garment length and fit. Following each user specified editing operation we first compute a target 3D garment shape, one that maximally preserves the input garment’s style–its proportions, fit and shape–subject to the modifications specified by the user. We then automatically compute 2D patterns that recreate the target garment shape when draped around the input mannequin within a user-selected simulation environment. To generate these patterns, we propose a fixed-point optimization scheme that compensates for the deformation due to the physical forces affecting the drape and is independent of the underlying simulation tool used. Our experiments show that this method quickly and reliably converges to patterns that, under simulation, form the desired target look, and works well with different black-box physical simulators. 
We demonstrate a range of edited and resimulated garments, and further validate our approach via expert and amateur critique, and comparisons to alternative solutions.]  [BibTex@article{Bartle16,Author = {Aric Bartle and Alla Sheffer and Vladimir Kim and Danny Kaufman and Nicholas Vining and Floraine Berthouzoz},Journal = {Transactions on Graphics (Proc. of SIGGRAPH)},Title = {Physics-driven Pattern Adjustment for Direct 3D Garment Editing},Year = {2016}}]
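The fixed-point idea above can be illustrated in one dimension: repeatedly nudge the pattern by the residual between the simulated drape and the target shape, treating the simulator as a black box. The "drape" operator and shapes below are invented toy stand-ins, not the paper's cloth model.

```python
import numpy as np

def solve_pattern(simulate, target, iters=100):
    # Fixed-point update: adjust the pattern by the residual between the
    # draped (simulated) result and the desired target geometry.
    P = target.copy()  # initialize the pattern at the target shape
    for _ in range(iters):
        P = P + (target - simulate(P))
    return P

# Toy black-box "drape simulator": adds a smooth sag to the pattern.
def toy_drape(P):
    return P + 0.3 * np.sin(P)

target = np.linspace(0.0, 2.0, 20)
P = solve_pattern(toy_drape, target)
```

After convergence the *simulated* result matches the target even though the pattern itself differs from it, mirroring how the 2D patterns must pre-compensate for physical forces.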
 Autocorrelation Descriptor for Efficient Co-alignment of 3D Shape CollectionsMelinos Averkiou, Vladimir G. Kim, and Niloy J. MitraComputer Graphics Forum, 2016 (presented at Eurographics 2016)[Code and Data]  [Paper: 5mb  1mb]  [AbstractCo-aligning a collection of shapes to a consistent pose is a common problem in shape analysis with applications in shape matching, retrieval, and visualization. We observe that resolving among some orientations is easier than others, for example, a common mistake for bicycles is to align front-to-back, while even the simplest algorithm would not erroneously pick orthogonal alignment. The key idea of our work is to analyze rotational autocorrelations of shapes to facilitate shape co-alignment. In particular, we use such an autocorrelation measure of individual shapes to decide which shape pairs might have well-matching orientations; and, if so, which configurations are likely to produce better alignments. This significantly prunes the number of alignments to be examined, and leads to an efficient, scalable algorithm that performs comparably to state-of-the-art techniques on benchmark datasets, but requires significantly fewer computations, resulting in 2-16× speed improvement in our tests.]  [BibTex@article{Averkiou16,Author = {Melinos Averkiou and Vladimir G. Kim and Niloy J. Mitra},Journal = {{Computer Graphics Forum}},Title = {{Autocorrelation Descriptor for Efficient Co-alignment of 3D Shape Collections}},Year = {2016}}]
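A rotational autocorrelation is cheap to compute for a circular shape signature: correlate the signature with rotated copies of itself, which the FFT does in one shot. This is a minimal 2D sketch (a 1D angular signature, not the paper's descriptor); peaks other than zero flag rotations worth testing during co-alignment.

```python
import numpy as np

def rotational_autocorrelation(f):
    # Circular autocorrelation of an angular shape signature via the FFT.
    F = np.fft.rfft(f)
    return np.fft.irfft(F * np.conj(F), n=len(f))

# A signature with 2-fold rotational symmetry: the autocorrelation
# peaks again at a 180-degree rotation.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
f = np.cos(2 * theta)
a = rotational_autocorrelation(f)
```

For this signature the autocorrelation at 180 degrees equals the zero-shift peak, so a front-to-back flip would be flagged as a plausible (and thus risky) alignment.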
 Data-Driven Structural Priors for Shape CompletionMinhyuk Sung, Vladimir G. Kim, Roland Angst, and Leonidas GuibasSIGGRAPH Asia 2015[Code and Data]  [Paper: 5mb  3mb]  [AbstractAcquiring 3D geometry of an object is a tedious and time-consuming task, typically requiring scanning the surface from multiple viewpoints. In this work we focus on reconstructing complete geometry from a single scan acquired with a low-quality consumer-level scanning device. Our method uses a collection of example 3D shapes to build structural part-based priors that are necessary to complete the shape. In our representation, we associate a local coordinate system to each part and learn the distribution of positions and orientations of all the other parts from the database, which implicitly also defines positions of symmetry planes and symmetry axes. At the inference stage, this knowledge enables us to analyze incomplete point clouds with substantial occlusions, because observing only a few regions is still sufficient to infer the global structure. Once the parts and the symmetries are estimated, both data sources, symmetry and database, are fused to complete the point cloud. We evaluate our technique on a synthetic dataset containing 481 shapes, and on real scans acquired with a Kinect scanner. Our method demonstrates high accuracy for the estimated part structure and detected symmetries, enabling higher quality shape completions in comparison to alternative techniques.]  [BibTex@article{Sung15,Author = {Minhyuk Sung and Vladimir G. Kim and Roland Angst and Leonidas Guibas},Journal = {Transactions on Graphics (Proc. of SIGGRAPH Asia)},Title = {Data-driven Structural Priors for Shape Completion},Year = {2015}}]
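The final fusion step relies on a simple geometric operation: once a symmetry plane is estimated from the part structure, the observed points can be mirrored across it to fill occluded regions. A minimal numpy sketch of just that reflection step (the part-based inference itself is omitted):

```python
import numpy as np

def reflect_points(points, plane_normal, plane_point):
    # Mirror a point cloud across the plane defined by a normal vector
    # and any point lying on the plane.
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n          # signed distance to the plane
    return points - 2.0 * d[:, None] * n[None, :]

def complete_with_symmetry(observed, plane_normal, plane_point):
    # Fuse the scan with its mirror image to fill occluded regions.
    mirrored = reflect_points(observed, plane_normal, plane_point)
    return np.vstack([observed, mirrored])

observed = np.array([[1.0, 2.0, 3.0], [0.5, -1.0, 0.0]])
completed = complete_with_symmetry(observed, np.array([1.0, 0.0, 0.0]), np.zeros(3))
```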
 Data-Driven Shape Analysis and ProcessingKai Xu, Vladimir G. Kim, Qixing Huang, and Evangelos KalogerakisComputer Graphics Forum (STAR) 2015[Wikipage]  [Paper: 13mb  2mb]  [AbstractData-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, through reviewing the literature and relating the existing works with both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.]  [BibTex@article{Xu15,Author = {Kai Xu and Vladimir G. Kim and Qixing Huang and Evangelos Kalogerakis},Journal = {Computer Graphics Forum},Title = {Data-Driven Shape Analysis and Processing},Year = {2015}}]
 Creating Consistent Scene Graphs Using a Probabilistic GrammarTianqiang Liu, Siddhartha Chaudhuri, Vladimir G. Kim, Qi-Xing Huang, Niloy J. Mitra, and Thomas FunkhouserSIGGRAPH Asia 2014[Code and Data]  [Paper: 13mb  1mb]  [AbstractGrowing numbers of 3D scenes in online repositories provide new opportunities for data-driven scene understanding, editing, and synthesis. Despite the plethora of data now available online, most of it cannot be effectively used for data-driven applications because it lacks consistent segmentations, category labels, and/or functional groupings required for co-analysis. In this paper, we develop algorithms that infer such information via parsing with a probabilistic grammar learned from examples. First, given a collection of scene graphs with consistent hierarchies and labels, we train a probabilistic hierarchical grammar to represent the distributions of shapes, cardinalities, and spatial relationships of semantic objects within the collection. Then, we use the learned grammar to parse new scenes to assign them segmentations, labels, and hierarchies consistent with the collection. During experiments with these algorithms, we find that: they work effectively for scene graphs for indoor scenes commonly found online (bedrooms, classrooms, and libraries); they outperform alternative approaches that consider only shape similarities and/or spatial relationships without hierarchy; they require relatively small sets of training data; they are robust to moderate over-segmentation in the inputs; and, they can robustly transfer labels from one data set to another. As a result, the proposed algorithms can be used to provide consistent hierarchies for large collections of scenes within the same semantic class.]  [BibTex@article{Liu14,Author = {Tianqiang Liu and Siddhartha Chaudhuri and Vladimir G. Kim and Qi-Xing Huang and Niloy J. Mitra and Thomas Funkhouser},Journal = {Transactions on Graphics (Proc.
of SIGGRAPH Asia)},Number = {6},Title = {{Creating Consistent Scene Graphs Using a Probabilistic Grammar}},Year = {2014}}]
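Scoring a candidate parse under a learned hierarchical grammar reduces to multiplying production probabilities down the tree. A toy PCFG-style sketch with invented labels and probabilities (the paper's grammar also models shapes, cardinalities, and spatial relationships, which are omitted here):

```python
import math

# Toy production probabilities: key = (parent label, tuple of child labels).
# The labels and numbers are invented for illustration.
RULES = {
    ("bedroom", ("bed", "desk")): 0.6,
    ("bedroom", ("bed",)): 0.4,
    ("bed", ("frame", "mattress")): 1.0,
}

def parse_log_prob(tree):
    # tree = (label, [child trees]); leaves have an empty child list.
    label, children = tree
    if not children:
        return 0.0
    lp = math.log(RULES[(label, tuple(c[0] for c in children))])
    return lp + sum(parse_log_prob(c) for c in children)

scene = ("bedroom", [("bed", [("frame", []), ("mattress", [])]), ("desk", [])])
score = parse_log_prob(scene)
```

Parsing then amounts to searching over candidate hierarchies for the one with the highest score.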
 Shape2Pose: Human-Centric Shape AnalysisVladimir G. Kim, Siddhartha Chaudhuri, Leonidas Guibas, and Thomas FunkhouserSIGGRAPH 2014[Code and Data]  [Paper: 8mb  3mb]  [Talk]  [AbstractAs 3D acquisition devices and modeling tools become widely available there is a growing need for automatic algorithms that analyze the semantics and functionality of digitized shapes. Most recent research has focused on analyzing geometric structures of shapes. Our work is motivated by the observation that a majority of man-made shapes are designed to be used by people. Thus, in order to fully understand their semantics, one needs to answer a fundamental question: “how do people interact with these objects?” As an initial step towards this goal, we offer a novel algorithm for automatically predicting a static pose that a person would need to adopt in order to use an object. Specifically, given an input 3D shape, the goal of our analysis is to predict a corresponding human pose, including contact points and kinematic parameters. This is especially challenging for man-made objects that commonly exhibit a lot of variance in their geometric structure. We address this challenge by observing that contact points usually share consistent local geometric features related to the anthropometric properties of corresponding parts and that the human body is subject to kinematic constraints and priors. Accordingly, our method effectively combines local region classification and global kinematically-constrained search to successfully predict poses for various objects. We also evaluate our algorithm on six diverse collections of 3D polygonal models (chairs, gym equipment, cockpits, carts, bicycles, and bipedal devices) containing a total of 147 models.
Finally, we demonstrate that the poses predicted by our algorithm can be used in several shape analysis problems, such as establishing correspondences between objects, detecting salient regions, finding informative viewpoints, and retrieving functionally-similar shapes.]  [BibTex@article{Kim14,Author = {Vladimir G. Kim and Siddhartha Chaudhuri and Leonidas Guibas and Thomas Funkhouser},Journal = {Transactions on Graphics (Proc. of SIGGRAPH)},Number = {4},Title = {{Shape2Pose: Human-Centric Shape Analysis}},Year = {2014}}]
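The "kinematically-constrained search" can be caricatured in two dimensions: brute-force the joint angles of a toy two-link limb so its end effector reaches a predicted contact point, while a prior penalizes deviation from a rest pose. Everything here (the limb, the prior weight, the grid search) is an invented stand-in for the paper's full-body optimization.

```python
import numpy as np

def fit_arm_pose(contact, l1=1.0, l2=1.0, rest=(0.0, 0.0), prior=0.1):
    # Grid-search the two joint angles of a planar 2-link "limb" so the
    # end effector reaches `contact`, regularized toward a rest pose.
    best, best_score = None, np.inf
    for a in np.linspace(-np.pi, np.pi, 121):
        for b in np.linspace(-np.pi, np.pi, 121):
            end = np.array([l1 * np.cos(a) + l2 * np.cos(a + b),
                            l1 * np.sin(a) + l2 * np.sin(a + b)])
            score = (np.linalg.norm(end - contact)
                     + prior * (abs(a - rest[0]) + abs(b - rest[1])))
            if score < best_score:
                best, best_score = (a, b), score
    return best, best_score

(angle1, angle2), residual = fit_arm_pose(np.array([2.0, 0.0]))
```

In the real system the contact points come from a learned local-region classifier rather than being given.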
 Structure-Aware Shape ProcessingNiloy J. Mitra, Michael Wand, Hao Zhang, Daniel Cohen-Or, Vladimir G. Kim, and Qixing HuangSIGGRAPH 2014 (course notes)[Webpage]  [Paper: 31mb  4mb]  [Talk]  [AbstractShape structure is about the arrangement and relations between shape parts. Structure-aware shape processing goes beyond local geometry and low level processing, and analyzes and processes shapes at a high level. It focuses more on the global inter and intra semantic relations among the parts of shape rather than on their local geometry. With recent developments in easy shape acquisition, access to vast repositories of 3D models, and simple-to-use desktop fabrication possibilities, the study of structure in shapes has become a central research topic in shape analysis, editing, and modeling. A whole new line of structure-aware shape processing algorithms has emerged that base their operation on an attempt to understand such structure in shapes. The algorithms broadly consist of two key phases: an analysis phase, which extracts structural information from input data; and a (smart) processing phase, which utilizes the extracted information for exploration, editing, and synthesis of novel shapes. In this course, we will organize, summarize, and present the key concepts and methodological approaches towards efficient structure-aware shape processing. We discuss common models of structure, their implementation in terms of mathematical formalism and algorithms, and explain the key principles in the context of a number of state-of-the-art approaches. Further, we attempt to list the key open problems and challenges, both at the technical and at the conceptual level, to make it easier for new researchers to better explore and contribute to this topic. Our goal is to both give the practitioner an overview of available structure-aware shape processing techniques, as well as identify future research questions in this important, emerging, and fascinating research area.]  
[BibTex@article{Mitra14,Author = {Niloy J. Mitra and Michael Wand and Hao Zhang and Daniel Cohen-Or and Vladimir G. Kim and Qi-Xing Huang},Journal = {SIGGRAPH Course notes},Title = {{Structure-Aware Shape Processing}},Year = {2014}}]
 ShapeSynth: Parameterizing Model Collections for Coupled Shape Exploration and SynthesisMelinos Averkiou, Vladimir G. Kim, Youyi Zheng, and Niloy J. MitraEurographics 2014[Code and Data]  [Paper: 7mb]  [Video]  [AbstractRecent advances in modeling tools enable non-expert users to synthesize novel shapes by assembling parts extracted from model databases. A major challenge for these tools is to provide users with relevant parts, which is especially difficult for large repositories with significant geometric variations. In this paper we analyze unorganized collections of 3D models to facilitate explorative shape synthesis by providing high-level feedback of possible synthesizable shapes. By jointly analyzing arrangements and shapes of parts across models, we hierarchically embed the models into low-dimensional spaces. The user can then use the parameterization to explore the existing models by clicking in different areas or by selecting groups to zoom on specific shape clusters. More importantly, any point in the embedded space can be lifted to an arrangement of parts to provide an abstracted view of possible shape variations. The abstraction can further be realized by appropriately deforming parts from neighboring models to produce synthesized geometry. Our experiments show that users can rapidly generate plausible and diverse shapes using our system, which also performs favorably with respect to previous modeling tools.]  [BibTex@article{Averkiou14,Author = {Melinos Averkiou and Vladimir G. Kim and Youyi Zheng and Niloy J. Mitra},Journal = {Computer Graphics Forum (Proc. of Eurographics)},Number = {2},Title = {{ShapeSynth: Parameterizing Model Collections for Coupled Shape Exploration and Synthesis}}]
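The embed-then-lift loop at the heart of the exploration interface can be sketched with plain PCA: project part-arrangement descriptors to a low-dimensional space, and lift any clicked point back to a full descriptor. This is a stand-in for the paper's hierarchical embedding, using synthetic descriptors.

```python
import numpy as np

def fit_embedding(X, k=2):
    # PCA via SVD: a simple stand-in for the learned low-dimensional
    # parameterization of part arrangements.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def embed(X, mu, basis):
    return (X - mu) @ basis.T

def lift(Z, mu, basis):
    # Any point of the low-dimensional exploration space lifts back to a
    # full descriptor that can drive synthesis.
    return Z @ basis + mu

# Synthetic descriptors lying on a 2D affine subspace of R^5.
rng = np.random.default_rng(1)
X = rng.random((30, 2)) @ rng.random((2, 5)) + rng.random(5)
mu, basis = fit_embedding(X, k=2)
Z = embed(X, mu, basis)
```

Because the synthetic data truly lives on a 2D subspace, lifting the embedded points reconstructs the originals exactly; with real part arrangements the lift is an abstraction that is then realized by deforming nearby models' parts.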
 Structure-Aware Shape ProcessingNiloy J. Mitra, Michael Wand, Hao Zhang, Daniel Cohen-Or, Vladimir G. Kim, and Qixing HuangSIGGRAPH Asia 2013 (course notes)[Paper: 26mb  3mb]  [Talk]  [AbstractShape structure is about the arrangement and relations between shape parts. Structure-aware shape processing goes beyond local geometry and low level processing, and analyzes and processes shapes at a high level. It focuses more on the global inter and intra semantic relations among the parts of shape rather than on their local geometry. With recent developments in easy shape acquisition, access to vast repositories of 3D models, and simple-to-use desktop fabrication possibilities, the study of structure in shapes has become a central research topic in shape analysis, editing, and modeling. A whole new line of structure-aware shape processing algorithms has emerged that base their operation on an attempt to understand such structure in shapes. The algorithms broadly consist of two key phases: an analysis phase, which extracts structural information from input data; and a (smart) processing phase, which utilizes the extracted information for exploration, editing, and synthesis of novel shapes. In this course, we will organize, summarize, and present the key concepts and methodological approaches towards efficient structure-aware shape processing. We discuss common models of structure, their implementation in terms of mathematical formalism and algorithms, and explain the key principles in the context of a number of state-of-the-art approaches. Further, we attempt to list the key open problems and challenges, both at the technical and at the conceptual level, to make it easier for new researchers to better explore and contribute to this topic. Our goal is to both give the practitioner an overview of available structure-aware shape processing techniques, as well as identify future research questions in this important, emerging, and fascinating research area.]  
[BibTex@article{Mitra13,Author = {Niloy J. Mitra and Michael Wand and Hao Zhang and Daniel Cohen-Or and Vladimir G. Kim and Qi-Xing Huang},Journal = {SIGGRAPH Asia Course notes},Title = {{Structure-Aware Shape Processing}},Year = {2013}}]
 Understanding the Structure of Large, Diverse Collections of ShapesVladimir G. KimPhD Dissertation, Princeton University, 2013[Code and Data]  [Paper: 31mb  11mb]  [Talk]  [AbstractMy dissertation is mainly based on three papers: Blended Intrinsic Maps, Exploring Collections of 3D Models using Fuzzy Correspondences, and Learning Part-based Templates from Large Collections of 3D Shapes.]  [BibTex@article{Kim13a,Author = {Vladimir G. Kim},Journal = {Doctoral Dissertation, Princeton University},Title = {{Understanding the Structure of Large, Diverse Collections of Shapes}},Year = {2013}}]
 Learning Part-based Templates from Large Collections of 3D ShapesVladimir G. Kim, Wilmot Li, Niloy J. Mitra, Siddhartha Chaudhuri, Stephen DiVerdi, and Thomas FunkhouserSIGGRAPH 2013[Code and Data]  [Paper: 13mb  4mb]  [NoteErrataEquation 7 has a typo, here is the corrected version: ]  [AbstractAs large repositories of 3D shape collections continue to grow, understanding the data, especially encoding the inter-model similarity and their variations, is of central importance. For example, many data-driven approaches now rely on access to semantic segmentation information, accurate inter-model point-to-point correspondence, and deformation models that characterize the model collections. Existing approaches, however, are either supervised requiring manual labeling; or employ super-linear matching algorithms and thus are unsuited for analyzing large collections spanning many thousands of models. We propose an automatic algorithm that starts with an initial template model and then jointly optimizes for part segmentation, point-to-point surface correspondence, and a compact deformation model to best explain the input model collection. As output, the algorithm produces a set of probabilistic part-based templates that groups the original models into clusters of models capturing their styles and variations. We evaluate our algorithm on several standard datasets and demonstrate its scalability by analyzing much larger collections of up to thousands of shapes.]  [BibTex@article{Kim13,Author = {Vladimir G. Kim and Li, Wilmot and Mitra, Niloy J. and Chaudhuri, Siddhartha and DiVerdi, Stephen and Funkhouser, Thomas},Journal = {Transactions on Graphics (Proc. of SIGGRAPH)},Number = {4},Title = {{Learning Part-based Templates from Large Collections of 3D Shapes}},Year = {2013}}]
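The joint optimization alternates between segmenting points against the current template and updating the template from the segmentation. A k-means-style caricature of that alternation, with each "part" reduced to a single center (the paper's parts are deformable boxes with a learned deformation model):

```python
import numpy as np

def fit_template(points, centers, iters=25):
    # Alternate (i) segmentation: assign each point to its nearest part,
    # and (ii) deformation: move each part to the centroid of its points.
    centers = centers.astype(float).copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            mask = labels == j
            if mask.any():
                centers[j] = points[mask].mean(axis=0)
    return centers, labels

# Two well-separated "parts".
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0, 0.1, (40, 3)), rng.normal(5, 0.1, (40, 3))])
centers, labels = fit_template(pts, np.array([[0.5, 0, 0], [4.5, 5, 5]]))
```

The initial template matters here just as in the paper: a poor initialization can lock the alternation into a bad segmentation.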
 Exploring Collections of 3D Models using Fuzzy CorrespondencesVladimir G. Kim, Wilmot Li, Niloy J. Mitra, Stephen DiVerdi, and Thomas FunkhouserSIGGRAPH 2012[Code and Data]  [Paper: 23mb  5mb]  [Talk]  [Video]  [AbstractLarge collections of 3D models are now commonly available via many public repositories, opening new possibilities for data mining, visualization, and synthesis of new models. However, exploring such collections remains challenging because similarity relationships between points on 3D surfaces are often ambiguous and/or difficult to infer automatically. To address this challenge, we introduce an encoding of similarity relationships using fuzzy point correspondences. Based on the observation that correspondence space is low-dimensional, we propose a robust and efficient computational framework to estimate fuzzy correspondences using only a sparse set of pairwise model alignments. We evaluate our algorithm on a range of correspondence benchmarks and report substantial improvements both in terms of accuracy and speed compared to existing alternatives. Further, we use fuzzy correspondences to process large model collections collectively and demonstrate applications towards view alignment, smart exploration, and faceted browsing.]  [BibTex@article{Kim12,Author = {Kim, Vladimir G. and Li, Wilmot and Mitra, Niloy J. and DiVerdi, Stephen and Funkhouser, Thomas},Journal = {Transactions on Graphics (Proc. of SIGGRAPH)},Number = {4},Title = {{Exploring Collections of 3D Models using Fuzzy Correspondences}},Year = {2012}}]
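One way to see why sparse pairwise alignments suffice: composing maps through intermediate models reinforces consistent correspondences and blurs inconsistent ones. A much-simplified sketch of that intuition (the paper instead exploits the low rank of correspondence space; the two-step composition below is only illustrative):

```python
import numpy as np

def fuzzy_correspondences(pairwise, n_models):
    # pairwise: {(i, j): row-stochastic matrix mapping points of model i
    # to points of model j}. Reinforce each direct map with all two-step
    # compositions i -> k -> j, then renormalize rows: consistent
    # correspondences sharpen, inconsistent ones stay "fuzzy".
    fuzzy = {}
    for (i, j), C in pairwise.items():
        acc = C.copy()
        for k in range(n_models):
            if (i, k) in pairwise and (k, j) in pairwise:
                acc = acc + pairwise[(i, k)] @ pairwise[(k, j)]
        fuzzy[(i, j)] = acc / acc.sum(axis=1, keepdims=True)
    return fuzzy

# Three toy models whose pairwise maps are mutually consistent.
I = np.eye(3)
maps = {(0, 1): I.copy(), (1, 2): I.copy(), (0, 2): I.copy()}
fz = fuzzy_correspondences(maps, 3)
```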
 Symmetry-Guided Texture Synthesis and ManipulationVladimir G. Kim, Yaron Lipman, and Thomas FunkhouserTransactions on Graphics, 2012 (Presented at SIGGRAPH 2012)[Code and Data]  [Paper: 74mb  4mb]  [Talk]  [AbstractThis paper presents a framework for symmetry-guided texture synthesis and processing. It is motivated by the long-standing problem of how to optimize, transfer, and control the spatial patterns in textures. The key idea is that symmetry representations that measure autocorrelations with respect to all transformations of a group are a natural way to describe spatial patterns in many real-world textures. To leverage this idea, we provide methods to transfer symmetry representations from one texture to another, process the symmetries of a texture, and optimize textures with respect to properties of their symmetry representations. These methods are automatic and robust, as they don't require explicit detection of discrete symmetries. Applications are investigated for optimizing, processing and transferring symmetries and textures.]  [BibTex@article{Kim12a,Author = {Vladimir G. Kim and Yaron Lipman and Thomas Funkhouser},Journal = {Transactions on Graphics},Number = {3},Title = {{Symmetry-Guided Texture Synthesis and Manipulation}}]
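A symmetry representation restricted to the group of 2D translations is just the texture's autocorrelation, which the FFT computes in closed form (Wiener-Khinchin); peaks reveal repeating structure without any explicit detection of discrete symmetries. A minimal sketch for that one transformation group:

```python
import numpy as np

def translational_symmetry_map(img):
    # Autocorrelation of a texture over all cyclic 2D translations,
    # computed via the FFT; peaks mark translations that map the
    # texture approximately onto itself.
    F = np.fft.fft2(img - img.mean())
    return np.real(np.fft.ifft2(F * np.conj(F)))

# A texture that repeats every 8 pixels horizontally.
x = np.arange(32)
img = np.tile(np.cos(2 * np.pi * x / 8), (16, 1))
acorr = translational_symmetry_map(img)
```

Transferring or optimizing such a representation, as in the paper, means shaping where these peaks occur rather than copying pixels directly.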
 Simple Formulas For Quasiconformal Plane DeformationsYaron Lipman, Vladimir G. Kim, and Thomas FunkhouserTransactions on Graphics, 2012 (Presented at SIGGRAPH 2012)[Code]  [Paper: 6mb  1mb]  [AbstractWe introduce a simple formula for 4-point planar warping that produces provably good 2D deformations. In contrast to previous work, the new deformation minimizes the maximum conformal distortion and spreads the distortion equally across the domain. We derive closed-form formulas for computing the 4-point interpolant and analyze its properties. We further explore applications to 2D shape deformations by building local deformation operators that use Thin-Plate Splines to further deform the 4-point interpolant to satisfy certain boundary conditions. Although our theory does not extend to this case, we demonstrate that, practically, these local operators can be used to create compound deformations with fewer control points and smaller worst-case distortions in comparison to the state-of-the-art.]  [BibTex@article{Lipman12,Author = {Yaron Lipman and Vladimir G. Kim and Thomas Funkhouser},Journal = {Transactions on Graphics},Number = {5},Title = {{Simple Formulas For Quasiconformal Plane Deformations}},Year = {2012}}]
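The quantity being minimized is easy to state concretely: the conformal distortion of a plane map at a point is the ratio of the singular values of its 2x2 Jacobian (1 means angles are preserved). A short helper for measuring it; the paper's closed-form interpolant itself is not reproduced here.

```python
import numpy as np

def conformal_distortion(J):
    # Conformal distortion of a 2x2 Jacobian: the ratio of its singular
    # values. It equals 1 exactly when the map is locally a similarity
    # (rotation + uniform scale), i.e. angle-preserving.
    s = np.linalg.svd(J, compute_uv=False)
    return s[0] / s[1]
```

A quasiconformal map is one whose distortion is bounded over the domain; the paper's deformation minimizes the worst case of this quantity.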
 Finding Surface Correspondences Using Symmetry Axis CurvesTianqiang Liu, Vladimir G. Kim, and Thomas FunkhouserSGP 2012 (Symposium on Geometry Processing)[Code and Data]  [Paper: 8mb  1mb]  [AbstractIn this paper, we propose an automatic algorithm for finding a correspondence map between two 3D surfaces. The key insight is that global reflective symmetry axes are stable, recognizable, semantic features of most real-world surfaces. Thus, it is possible to find a useful map between two surfaces by first extracting symmetry axis curves, aligning the extracted curves, and then extrapolating correspondences found on the curves to both surfaces. The main advantages of this approach are efficiency and robustness: the difficult problem of finding a surface map is reduced to three significantly easier problems: symmetry detection, curve alignment, and correspondence extrapolation, each of which has a robust, polynomial-time solution (e.g., optimal alignment of 1D curves is possible with dynamic programming). We investigate this approach on a wide range of examples, including both intrinsically symmetric surfaces and polygon soups, and find that it is superior to previous methods in cases where two surfaces have different overall shapes but similar reflective symmetry axes, a common case in computer graphics.]  [BibTex@article{Liu12,Author = {Tianqiang Liu and Vladimir G. Kim and Thomas Funkhouser},Journal = {Computer Graphics Forum (Proc. of SGP)},Number = {5},Title = {{Finding Surface Correspondences Using Symmetry Axis Curves}},Year = {2012}}]
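The curve-alignment step is the classic dynamic-programming pattern: find the cheapest monotone matching between two sampled 1D curves. A minimal DTW-style sketch of that subproblem (the paper aligns closed symmetry axis curves with richer per-point costs):

```python
import numpy as np

def align_curves(A, B):
    # Optimal monotone alignment of two sampled curves by dynamic
    # programming; D[i, j] is the cheapest alignment of the first i
    # samples of A with the first j samples of B.
    n, m = len(A), len(B)
    cost = np.linalg.norm(A[:, None] - B[None], axis=-1)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

curve = np.stack([np.linspace(0, 1, 10), np.zeros(10)], axis=1)
shifted = curve + np.array([0.0, 0.5])
```

This O(nm) table is what makes the alignment stage polynomial time, in contrast to general surface matching.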
 Blended Intrinsic MapsVladimir G. Kim, Yaron Lipman, and Thomas FunkhouserSIGGRAPH 2011[Code and Data]  [Paper: 6mb  1mb]  [Talk]  [AbstractThis paper describes a fully automatic pipeline for finding an intrinsic map between two non-isometric, genus zero surfaces. Our approach is based on the observation that efficient methods exist to search for nearly isometric maps (e.g., Möbius Voting or Heat Kernel Maps), but no single solution found with these methods provides low-distortion everywhere for pairs of surfaces differing by large deformations. To address this problem, we suggest using a weighted combination of these maps to produce a “blended map.” This approach enables algorithms that leverage efficient search procedures, yet can provide the flexibility to handle large deformations. The main challenges of this approach lie in finding a set of candidate maps mi and their associated blending weights bi(p) for every point p on the surface. We address these challenges specifically for conformal maps by making the following contributions. First, we provide a way to blend maps, defining the image of p as the weighted geodesic centroid of mi(p). Second, we provide a definition for smooth blending weights at every point p that are proportional to the area preservation of mi at p. Third, we solve a global optimization problem that selects candidate maps based both on their area preservation and consistency with other selected maps. During experiments with these methods, we find that our algorithm produces blended maps that align semantic features better than alternative approaches over a variety of data sets.]  [BibTex@article{Kim11,Author = {Vladimir G. Kim and Yaron Lipman and Thomas Funkhouser},Journal = {Transactions on Graphics (Proc. of SIGGRAPH)},Number = {4},Title = {{Blended Intrinsic Maps}}]
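The blending step itself is simple once candidate maps and per-point weights exist: normalize the weights and average the candidate images of each point. A Euclidean sketch (the paper blends on the surface using a weighted *geodesic* centroid, and derives the weights from area preservation):

```python
import numpy as np

def blend_maps(images, weights):
    # images: (k, n, 3) positions of n points under k candidate maps.
    # weights: (k, n) per-point blending weights.
    w = weights / weights.sum(axis=0, keepdims=True)
    return (w[:, :, None] * images).sum(axis=0)

# Two candidate maps sending every point to 0 or to 1, respectively.
two = np.stack([np.zeros((4, 3)), np.ones((4, 3))])
blended_equal = blend_maps(two, np.ones((2, 4)))
blended_biased = blend_maps(two, np.array([[3.0] * 4, [1.0] * 4]))
```

The hard part, which this sketch omits, is choosing the candidate conformal maps and weights so the blend is smooth and low-distortion everywhere.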
 Möbius Transformations for Global Intrinsic Symmetry AnalysisVladimir G. Kim, Yaron Lipman, Xiaobai Chen, and Thomas FunkhouserSGP 2010 (Symposium on Geometry Processing)[Code and Data]  [Paper: 7mb  1mb]  [Talk]  [NoteErrataGeodesic distances were not normalized properly, and thus the first row of Tables 1, 2, and 3 are not meaningful. Please, refer to project website for corrections. ]  [AbstractThe goal of our work is to develop an algorithm for automatic and robust detection of global intrinsic symmetries in 3D surface meshes. Our approach is based on two core observations. First, symmetry invariant point sets can be detected robustly using critical points of the Average Geodesic Distance (AGD) function. Second, intrinsic symmetries are self-isometries of surfaces and as such are contained in the low dimensional group of Möbius transformations. Based on these observations, we propose an algorithm that: 1) generates a set of symmetric points by detecting critical points of the AGD function, 2) enumerates small subsets of those feature points to generate candidate Möbius transformations, and 3) selects among those candidate Möbius transformations the one(s) that best map the surface onto itself. The main advantages of this algorithm stem from the stability of the AGD in predicting potential symmetric point features and the low dimensionality of the Möbius group for enumerating potential self-mappings. During experiments with a benchmark set of meshes augmented with human-specified symmetric correspondences, we find that the algorithm is able to find intrinsic symmetries for a wide variety of object types with moderate deviations from perfect symmetry.]  [BibTex@article{Kim10,Author = {Vladimir G. Kim and Yaron Lipman and Xiaobai Chen and Thomas Funkhouser},Journal = {Computer Graphics Forum (Proc. of SGP)},Number = {5},Title = {{M\"{o}bius Transformations For Global Intrinsic Symmetry Analysis}}]
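Step 1 of the algorithm is easy to illustrate on a graph-approximated surface: the AGD of a vertex is its mean shortest-path distance to all other vertices, and its local maxima are stable feature candidates. A small sketch using BFS distances on an unweighted graph (real meshes would use weighted geodesics):

```python
from collections import deque

def bfs_dist(adj, s):
    # Single-source shortest-path distances on an unweighted graph,
    # given as an adjacency list.
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def average_geodesic_distance(adj):
    # AGD(v): mean distance from v to every other vertex.
    n = len(adj)
    return [sum(bfs_dist(adj, v).values()) / (n - 1) for v in range(n)]

def agd_maxima(adj, agd):
    # Symmetry-invariant feature candidates: strict local maxima of AGD.
    return [v for v in range(len(adj)) if all(agd[v] > agd[u] for u in adj[v])]

# A path graph: the two endpoints ("tips") are the AGD maxima.
path = [[1], [0, 2], [1, 3], [2]]
agd = average_geodesic_distance(path)
```

Because intrinsic symmetries are isometries, they permute these critical points, which is what makes them good anchors for enumerating candidate Möbius transformations.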
 Shape-based Recognition of 3D Point Clouds in Urban EnvironmentsAleksey Golovinskiy, Vladimir G. Kim, and Thomas FunkhouserICCV 2009[Paper: 3mb  1mb]  [AbstractThis paper investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The system is decomposed into four steps: locating, segmenting, characterizing, and classifying clusters of 3D points. Specifically, we first cluster nearby points to form a set of potential object locations (with hierarchical clustering). Then, we segment points near those locations into foreground and background sets (with a graph-cut algorithm). Next, we build a feature vector for each point cluster (based on both its shape and its context). Finally, we label the feature vectors using a classifier trained on a set of manually labeled objects. The paper presents several alternative methods for each step. We quantitatively evaluate the system and tradeoffs of different alternatives in a truthed part of a scan of Ottawa that contains approximately 100 million points and 1000 objects of interest. Then, we use this truth data as a training set to recognize objects amidst approximately 1 billion points of the remainder of the Ottawa scan.]  [BibTex@article{Golovinskiy09,Author = {Aleksey Golovinskiy and Vladimir G. Kim and Thomas Funkhouser},Journal = {ICCV},Title = {{Shape-based Recognition of 3D Point Clouds in Urban Environments}}]
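The first stage, clustering nearby points into potential object locations, can be sketched as single-linkage grouping: points within a radius of each other share a cluster. A toy union-find version (the paper uses hierarchical clustering over ~100 million points, where a spatial index is essential):

```python
import numpy as np

def cluster_points(points, radius):
    # Single-linkage components: two points share a cluster if they are
    # within `radius`, directly or through a chain of such points.
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= radius:
                parent[find(i)] = find(j)  # union the two components
    return [find(i) for i in range(n)]

pts = np.array([[0.0, 0, 0], [0.2, 0, 0], [5.0, 5, 5], [5.2, 5, 5]])
labels = cluster_points(pts, radius=0.5)
```

Each resulting cluster then moves on to the segmentation, feature, and classification stages described above.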
 Talks
Data-Driven Geometry Processing, UBC & SFU, May 2017
Part Structures In Large Collections of 3D Models, Dagstuhl 2017 (Seminar 17021, Functoriality in Geometric Data)
Finding Structure In Large Collections of 3D Models, GI 2016 (Graphics Interface)
Structure and Function in Large Collections of 3D Shapes, Cornell 2015
 Program Committee
2018: Eurographics, SGP (Symposium on Geometry Processing), SMI (Shape Modeling International)
2017: SGP, PG (Pacific Graphics), CAD/Graphics
2016: SGP, SMI, PG
2015: SIGGRAPH, SGP, Eurographics, PG, CAD/Graphics
2014: SGP, Eurographics (short papers)
2013: SGP
Reviewer: CVPR, ICCV, ECCV, SIGGRAPH, Transactions on Graphics
 Teaching
Deep Learning for Graphics, EG 2018 tutorial, with Niloy Mitra, Iasonas Kokkinos, Paul Guerrero, Konstantinos Rematas, and Tobias Ritschel
Data-Driven Shape Analysis and Processing, SIGGRAPH Asia 2016 course and EG 2016 tutorial, with Kai Xu, Qixing Huang, Niloy J. Mitra, and Evangelos Kalogerakis
CS 468 (Data-driven Shape Analysis), Stanford, 2014, Instructor, with Qixing Huang
Structure-Aware Shape Processing, SIGGRAPH 2014 and SIGGRAPH Asia 2013 Course, with Niloy J. Mitra, Michael Wand, Hao Zhang, Daniel Cohen-Or, and Qixing Huang
COS 426 (Computer Graphics), Princeton, 2011, TA
COS 116 (The Computational Universe), Princeton, 2010, TA