DesignPrompt: Using Multimodal Interaction for Design Exploration with Generative AI
- Authors
- Publication Date
- Jul 01, 2024
- Identifiers
- DOI: 10.1145/3643834.3661588
- OAI: oai:HAL:hal-04652020v1
- Source
- Hal-Diderot
- Language
- English
- License
- Unknown
Abstract
Visually oriented designers often struggle to create effective generative AI (GenAI) prompts. A preliminary study identified specific difficulties in composing and fine-tuning prompts, as well as a need to accurately translate intentions into rich input. We developed DesignPrompt, a moodboard tool that lets designers combine multiple modalities (images, color, text) into a single GenAI prompt and tweak the results. We ran a comparative structured observation study with 12 professional designers to better understand their intent expression, expectation alignment, and perception of transparency when using DesignPrompt versus a text-input GenAI tool. We found that multimodal prompt input encouraged designers to explore and express themselves more effectively. Designers' interaction preferences changed according to their overall sense of control over the GenAI and whether they were seeking inspiration or a specific image. Designers devised innovative uses of DesignPrompt, including composing elaborate multimodal prompts and creating a multimodal prompt pattern that maximizes novelty while ensuring consistency.