While generative models for 3D design have advanced rapidly, user interface work in this co-creation domain remains limited. The interface controls and interaction paradigms emerging in this field tend to be unintuitive and hard to standardize, as they often rest on complicated work in latent space disentanglement, dimensionality reduction, and other bespoke computational techniques.
We demo a user interface that gives users intuitive controls for generating basic 3D animal shapes. These controls, a set of six sliders, map to simple and universal operations such as scale and rotation. By adjusting these parameters over animal limbs, users can semantically guide generative models toward their goals, optimizing the mapping between AI action and user intention.
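To make the slider-to-operation mapping concrete, here is a minimal sketch of how two such slider values (a uniform scale and a rotation angle) could be applied to the vertices of a limb. The function name and parameters are illustrative, not the demo's actual API:

```python
import numpy as np

def apply_slider_params(points, scale=1.0, angle_deg=0.0):
    """Apply a uniform scale and a rotation about the z-axis to 3D limb
    vertices. `points` is an (N, 3) array; `scale` and `angle_deg` stand
    in for two of the six slider values (names are hypothetical)."""
    theta = np.radians(angle_deg)
    c, s = np.cos(theta), np.sin(theta)
    rot_z = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    # Rotate each point, then scale uniformly.
    return scale * points @ rot_z.T

# Example: rotate a unit x-axis point 90 degrees and double its size.
out = apply_slider_params(np.array([[1.0, 0.0, 0.0]]),
                          scale=2.0, angle_deg=90.0)
```

Because both operations are familiar geometric transforms, the user's mental model of each slider stays simple even though the downstream generative model is not.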
This demo interface governs a semantic space learned from our implementation of an architecture proposed by Wei et al. We provide a parametric design method that can create arbitrary metashapes (generic low-fidelity shapes), allowing us to apply their generative model to a new nonrigid shape domain: animals. We conclude with an analysis of the benefits and drawbacks of using metashapes as an intermediate abstraction between humans and AI.
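As a rough illustration of what a parametric, low-fidelity metashape primitive might look like, the sketch below samples points on an ellipsoid controlled by three radii. This is an assumption for illustration only; the actual method and its parameterization are described in the paper:

```python
import numpy as np

def ellipsoid_metashape(rx, ry, rz, n=32):
    """Sample an (n*n, 3) point cloud on an ellipsoid, a simple stand-in
    for one body-part primitive of a low-fidelity metashape. A full
    animal metashape would compose several such primitives."""
    u = np.linspace(0.0, 2.0 * np.pi, n)   # azimuthal angle
    v = np.linspace(0.0, np.pi, n)         # polar angle
    uu, vv = np.meshgrid(u, v)
    x = rx * np.cos(uu) * np.sin(vv)
    y = ry * np.sin(uu) * np.sin(vv)
    z = rz * np.cos(vv)
    return np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)

# A torso-like primitive, elongated along x.
torso = ellipsoid_metashape(2.0, 1.0, 1.0)
```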
Recently, we have been testing multimodal methods to explore generative models; by multimodality we mean the combination of visual geometry and natural language. We apply methods from clustering, semantic search, and dimensionality reduction to create the interface below.
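The two of those methods that are easiest to sketch are dimensionality reduction (laying out embeddings in 2D for the interface) and semantic search (ranking shapes against a query embedding). The sketch below uses random stand-in vectors and plain NumPy (PCA via SVD, cosine similarity); the real system's embeddings and pipeline may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for joint shape+language embeddings; real features would come
# from the generative model, not random noise.
embeddings = rng.normal(size=(200, 64))

# Dimensionality reduction: project to 2D with PCA (SVD of the centered
# data), e.g. to position shapes on an interface canvas.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ vt[:2].T

def top_k(query, matrix, k=5):
    """Semantic search: indices of the k rows most similar to `query`
    by cosine similarity."""
    q = query / np.linalg.norm(query)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return np.argsort(-(m @ q))[:k]

# Searching with an item's own embedding returns that item first.
nearest = top_k(embeddings[0], embeddings)
```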