
ABSTRACT
While there have been many advancements in generative models for 3D design, user interface work in this co-creation domain remains limited. The controls and interaction paradigms emerging in the field tend to be unintuitive and hard to standardize, as they are often built on complicated techniques such as latent space disentanglement, dimensionality reduction, and other bespoke computational methods.
We demo a user interface that provides users with intuitive controls for generating basic 3D animal shapes. These controls, a set of six sliders, map to simple and universal operations such as scale and rotation. By adjusting these parameters over animal limbs, users can semantically guide generative models toward their goals, tightening the mapping between AI action and user intention.
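As a sketch of how six such sliders can drive a generative model, the snippet below maps normalized slider values onto a linear edit along learned semantic directions in latent space. The names SEMANTIC_AXES, semantic_dirs, and sliders_to_latent are illustrative assumptions, not our actual implementation.

```python
import numpy as np

# Hypothetical schema: the six axes match the sliders described above,
# but the naming and the linear-editing scheme are assumptions.
SEMANTIC_AXES = ["torso_length", "neck_length", "neck_rotation",
                 "tail_length", "tail_rotation", "leg_length"]

def sliders_to_latent(base_code, slider_values, semantic_dirs):
    """Apply slider values in [-1, 1] as a linear edit in latent space.

    base_code:     (d,) latent code of the current shape
    slider_values: mapping from axis name to a value in [-1, 1]
    semantic_dirs: mapping from axis name to a learned (d,) unit
                   direction for that semantic axis
    """
    edit = np.zeros_like(base_code, dtype=float)
    for axis in SEMANTIC_AXES:
        edit = edit + slider_values.get(axis, 0.0) * semantic_dirs[axis]
    return base_code + edit
```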
This demo user interface governs a semantic space learned from our implementation of an architecture proposed by Wei et al. We provide a parametric design method that can create arbitrary metashapes (generic low-fidelity shapes), allowing us to apply their generative model to a new nonrigid shape domain: animals. We conclude with an analysis of the benefits and drawbacks of using metashapes as an intermediate abstraction between humans and AI.
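To make the metashape idea concrete, here is a minimal sketch of how a parametric method could assemble a low-fidelity animal from capsule primitives, assuming the trimesh library and a frame where x runs tail-to-nose and z is up. The primitives, frame, and parameter ranges are assumptions; our actual construction may differ.

```python
import numpy as np
import trimesh  # assumed dependency; any mesh library with primitives works

def rot_y(mesh, angle):
    """Rotate a mesh about the lateral (y) axis, through the origin."""
    mesh.apply_transform(
        trimesh.transformations.rotation_matrix(angle, [0, 1, 0]))

def animal_metashape(torso_len=1.2, neck_len=0.6, neck_rot=0.5,
                     tail_len=0.5, tail_rot=-0.7, leg_len=0.6):
    """Assemble a generic low-fidelity animal shape out of capsules.

    Lengths are in arbitrary units; rotations are radians from vertical.
    Illustrative stand-in, not the paper's parametric construction.
    """
    parts = []

    # Torso: a capsule (axis +z in trimesh) laid along the x axis.
    torso = trimesh.creation.capsule(height=torso_len, radius=0.22)
    rot_y(torso, np.pi / 2)
    parts.append(torso)

    # Neck: rooted near the anterior (+x) end, tilted forward by neck_rot.
    neck = trimesh.creation.capsule(height=neck_len, radius=0.12)
    rot_y(neck, neck_rot)
    neck.apply_translation([torso_len / 2, 0, 0.1])
    parts.append(neck)

    # Tail: rooted near the posterior (-x) end, tilted backward by tail_rot.
    tail = trimesh.creation.capsule(height=tail_len, radius=0.07)
    rot_y(tail, np.pi + tail_rot)
    tail.apply_translation([-torso_len / 2, 0, 0.1])
    parts.append(tail)

    # Four legs pointing down (-z) from the belly.
    for dx in (-torso_len / 3, torso_len / 3):
        for dy in (-0.15, 0.15):
            leg = trimesh.creation.capsule(height=leg_len, radius=0.07)
            rot_y(leg, np.pi)  # flip the capsule to point downward
            leg.apply_translation([dx, dy, 0])
            parts.append(leg)

    return trimesh.util.concatenate(parts)
```

For example, animal_metashape(neck_len=1.4, neck_rot=0.3) yields a long-necked, giraffe-like blob, which is the level of fidelity a metashape is meant to have.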
FIGURES

Our user interface generates animal metashapes, which are generic low-fidelity animal shapes. Above are nine animal metashapes produced with our user interface. In the top left rectangle, we picture the six "semantic sliders" used to generate these shapes. These sliders give users control over the following semantically meaningful parameters: torso length, neck length, neck rotation, tail length, tail rotation, and leg length. These parameters operate on the output shapes through intuitive mental operations such as scale and rotation.

Cases of bad output animal metashapes along three semantic axes. Left: edits to tail rotation result in changes to neck rotation; the model conflates the anterior extrusion of the neck with the posterior extrusion of the tail. Center: editing tail length leads to a "negative" tail length, which appears as a posterior indent in the animal shape. Right: maximizing the leg length parameter leads to an outward, noisy extension of the legs. The affected areas are saturated in the images, and the corresponding parameters are highlighted.

DEMO

From pilot studies, we learned that users wanted tools for exploration rather than for control.

FORTHCOMING WORK
Recently, we have been testing multimodal methods for exploring generative models. By multimodal, we mean combining visual geometry with natural language. We apply methods from clustering, semantic search, and dimensionality reduction to create the interface below.
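As a sketch of what such a pipeline could look like, the snippet below clusters a collection of shape embeddings, projects them to 2D for a browsable map, and ranks shapes against a text query by cosine similarity. It assumes shape and text embeddings already share a joint vision-language space (e.g., CLIP applied to rendered views); that embedding step, and all names here, are our assumptions rather than details stated above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def build_exploration_map(shape_embs, n_clusters=8):
    """Cluster an (n, d) array of shape embeddings and project it to 2D,
    yielding coordinates and cluster labels for a browsable map."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(shape_embs)
    coords = PCA(n_components=2).fit_transform(shape_embs)
    return coords, labels

def semantic_search(shape_embs, query_emb, k=5):
    """Rank shapes by cosine similarity to an embedded text query."""
    shapes = shape_embs / np.linalg.norm(shape_embs, axis=1, keepdims=True)
    query = query_emb / np.linalg.norm(query_emb)
    return np.argsort(shapes @ query)[::-1][:k]  # indices of the top-k shapes
```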
