3D modeling you can feel

Essential for a number of industries ranging from Hollywood computer-generated imagery to product design, 3D modeling tools often use text or image prompts to dictate different aspects of visual appearance, like color and form. As much as this makes sense as a first point of contact, these systems are still limited in their realism due to their neglect of something central to the human experience: touch.

Fundamental to the distinctiveness of physical objects are their tactile properties, such as roughness, bumpiness, or the feel of materials like wood or stone. Existing modeling methods often require advanced computer-aided design expertise and rarely support tactile feedback that can be crucial for how we perceive and interact with the physical world.

With that in mind, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a new system for stylizing 3D models using image prompts, effectively replicating both visual appearance and tactile properties.

The CSAIL team’s “TactStyle” tool allows creators to stylize 3D models based on images while also incorporating the expected tactile properties of the textures. TactStyle separates visual and geometric stylization, enabling the replication of both visual and tactile properties from a single image input.

PhD student Faraz Faruqi, lead author of a new paper on the project, says that TactStyle could have far-reaching applications, extending from home decor and personal accessories to tactile learning tools. TactStyle enables users to download a base design, such as a headphone stand from Thingiverse, and customize it with the styles and textures they desire. In education, learners can explore diverse textures from around the globe without leaving the classroom, while in product design, rapid prototyping becomes easier as designers quickly print multiple iterations to refine tactile qualities.

“You can imagine using this kind of system for common objects, such as phone stands and earbud cases, to enable more complex textures and enhance tactile feedback in a variety of ways,” says Faruqi, who co-wrote the paper alongside MIT Associate Professor Stefanie Mueller, leader of the Human-Computer Interaction (HCI) Engineering Group at CSAIL. “You can create tactile educational tools to demonstrate a range of different concepts in fields such as biology, geometry, and topography.”

Traditional methods for replicating textures involve using specialized tactile sensors (such as GelSight, developed at MIT) that physically touch an object to capture its surface microgeometry as a “heightfield.” But this requires having a physical object or its recorded surface for replication. TactStyle allows users to replicate the surface microgeometry by leveraging generative AI to generate a heightfield directly from an image of the texture.

On top of that, for platforms like the 3D printing repository Thingiverse, it’s difficult to take individual designs and customize them. Indeed, if a user lacks sufficient technical background, modifying a design manually runs the risk of actually “breaking” it so that it can no longer be printed. All of these factors spurred Faruqi to wonder about building a tool that enables customization of downloadable models at a high level while also preserving functionality.

In experiments, TactStyle showed significant improvements over traditional stylization methods by generating accurate correlations between a texture’s visual image and its heightfield. This allows the replication of tactile properties directly from an image. One psychophysical experiment showed that users perceive TactStyle’s generated textures as similar to both the expected tactile properties from visual input and the tactile features of the original texture, resulting in a unified tactile and visual experience.

TactStyle leverages a preexisting method, called “Style2Fab,” to modify the model’s color channels to match the input image’s visual style. Users first provide an image of the desired texture, and then a fine-tuned variational autoencoder is used to translate the input image into a corresponding heightfield. This heightfield is then applied to modify the model’s geometry to create the tactile properties.
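To picture the geometry step, the sketch below shows one way a generated heightfield could be applied to a mesh as a displacement map, offsetting each vertex along its normal by the height value sampled at that vertex’s UV coordinate. This is an illustrative Python example using the numpy and trimesh libraries, not the team’s actual implementation; the function name, the nearest-neighbor UV lookup, and the scale parameter are assumptions made for the sake of the sketch.

# Illustrative sketch, not TactStyle's code: apply a heightfield to a mesh
# by displacing each vertex along its normal.
import numpy as np
import trimesh

def displace_with_heightfield(mesh, heightfield, uv, scale=1.0):
    """Offset each vertex along its normal by the heightfield value
    sampled (nearest-neighbor) at that vertex's UV coordinate."""
    h, w = heightfield.shape
    # Map UVs in [0, 1] to pixel indices in the heightfield image.
    px = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    offsets = heightfield[py, px] * scale  # per-vertex displacement
    displaced = mesh.vertices + mesh.vertex_normals * offsets[:, None]
    return trimesh.Trimesh(vertices=displaced, faces=mesh.faces, process=False)

# Example usage with placeholder data standing in for the model's output:
sphere = trimesh.creation.icosphere(subdivisions=4)
fake_heightfield = np.random.rand(256, 256) * 0.02   # hypothetical generated heightfield
fake_uv = np.random.rand(len(sphere.vertices), 2)    # hypothetical per-vertex UVs
bumpy = displace_with_heightfield(sphere, fake_heightfield, fake_uv)

In practice, the displacement scale and the UV parameterization would determine how faithfully the printed surface reproduces the roughness or bumpiness encoded in the heightfield.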

The color and geometry stylization modules work in tandem, stylizing both the visual and tactile properties of the 3D model from a single image input. Faruqi says that the core innovation lies in the geometry stylization module, which uses a fine-tuned diffusion model to generate heightfields from texture images, something previous stylization frameworks do not accurately replicate.

Looking ahead, Faruqi says the team aims to extend TactStyle to generate novel 3D models using generative AI with embedded textures. This requires exploring exactly the kind of pipeline needed to replicate both the form and the function of the 3D models being fabricated. They also plan to investigate “visuo-haptic mismatches” to create novel experiences with materials that defy conventional expectations, like something that looks like it’s made of marble but feels like it’s made of wood.

Faruqi and Mueller co-authored the new paper alongside PhD students Maxine Perroni-Scharf and Yunyi Zhu, visiting undergraduate student Jaskaran Singh Walia, visiting master’s student Shuyue Feng, and assistant professor Donald Degraen of the Human Interface Technology (HIT) Lab NZ in New Zealand.