Workshops

Creative Tools for Thought

While generative AI (GenAI) unlocks new creative possibilities, it also raises critical questions about agency, sensemaking, interpretability, and expressivity. Our focus is on creativity and interaction design from the perspectives of human-computer partnership and the fundamentals of interaction. Creative Tools for Thought is a collection of creativity support tools that we developed, both with and without GenAI. Through these tools, we explore, from different angles, how technology can enhance creative processes and augment human creativity.

About

From: 2025-04-26
To: 2025-05-01

We explored Generative AI's interpretability in DesignPrompt, which lets visually oriented designers express their intentions in modalities such as image, color, and semantic labels. We also explored how users create their own gesture commands in Fieldward, emphasizing interpretability to ensure gestures are both user-memorable and system-interpretable.

We explored agency in ImageSense, a human-AI collaborative design ideation tool, revealing nuances in designers' preferences for designer-led, system-led, and mixed-initiative approaches.

We explored how creative professionals can preserve expressivity through pen interaction in FusAIn, which lets designers compose visual Generative AI prompts by drawing with different visual details, such as colors, textures, and objects, and "fuse" them into new visuals.

We explored sensemaking in SemanticCollage, a semantically labeled mood boarding tool, where we contextualize creative practice with semantic labels that help designers "reflect in action".

Schedule

From: 2025-04-26
To: 2025-05-01
 

Agency

Agency refers to the level of control and influence a user or designer has in a system. One key challenge is how to share agency in a human-AI collaboration context. How can we design the interaction so that designers benefit from intelligent support but still retain control? What does a satisfying and effective ‘human-computer partnership’ look like for complex, evolving, and open-ended creative tasks such as mood board design? ImageSense lets human designers retain control of the interaction and actively choose the type and level of machine agency.

ImageSense

CSCW 2020

Professional designers create mood boards to explore, visualize, and communicate hard-to-express ideas. We present ImageSense, an intelligent, collaborative ideation tool that combines individual and shared workspaces, as well as collaboration with multiple forms of intelligent agents.

In the collection phase, ImageSense offers fluid transitions between serendipitous discovery of curated images via ImageCascade, combined text- and image-based semantic search, and intelligent AI suggestions for finding new images.
For later composition and reflection, ImageSense provides semantic labels, generated color palettes, and multiple tag clouds to help communicate the intent of the mood board.

 

Interpretability

Interpretability refers to how easily humans and systems can understand and explain the behavior, reasoning, or outputs of a model, system, or interaction. We explored interpretability in the context of creative intention alignment, enabling designers to use visually oriented mediums to communicate their creative intentions with Generative AI in DesignPrompt. We also explored interpretability in gestural command definition, where our design Fieldward aims to balance user comprehension and machine interpretation.

DesignPrompt

DIS 2024

Although current generative AI (GenAI) enables designers to create novel images, its focus on text-based prompting limits how visually oriented designers can express their intentions. We present DesignPrompt, a multimodal prompt composition tool that lets designers communicate their creative intentions to GenAI through modalities such as image, color, and semantic labels, supporting alignment between the designer's intent and the model's output.

Fieldward

CHI 2017

How can we define our own gesture commands? Fieldward displays a colored field that we can explore when designing new gestures that are easy to remember and that the system can also recognize. We help users create gestures that are both personally memorable and reliably recognized by a touch-enabled mobile device. We address these competing requirements with two dynamic guides, Pathward and Fieldward, that use progressive feedforward to interactively visualize the "negative space" of unused gestures.

The Pathward technique suggests four possible completions to the current gesture.

The Fieldward technique uses color gradients to reveal optimal directions for creating recognizable gestures.

We ran a two-part experiment in which 27 participants each created 42 personal gesture shortcuts on a smartphone, using Pathward, Fieldward, or No Feedforward.
The Fieldward technique best supported the most common user strategy: create a memorable gesture first, then adapt it so the system can recognize it. Users preferred Fieldward to Pathward or No Feedforward, and remembered gestures more easily when using it.

Dynamic guides can help developers design novel gesture vocabularies and support users as they design custom gestures for mobile applications.

 

Sensemaking

Sensemaking is an immersive process that involves discovery and learning, what Schön calls ‘reflection-in-action’. During the design process, designers interpret images and groups of images to make sense of larger concepts. SemanticCollage demonstrates how we can create a fluid, intuitive tool that takes advantage of state-of-the-art semantic labeling algorithms to offer designers better support for ideation and sensemaking.

SemanticCollage

DIS 2020

Designers create inspirational mood boards to express their design ideas visually, through collages of images and text. They find appropriate images and reflect on them as they explore emergent design concepts. After presenting the results of a participatory design workshop and a survey of professional designers, we introduce SemanticCollage, a digital mood board tool that attaches semantic labels to images by applying a state-of-the-art semantic labeling algorithm.

SemanticCollage helps designers (1) translate vague, visual ideas into search terms and (2) make better sense of and communicate their designs, (3) all without disrupting their creative flow.

Our study showed that SemanticCollage’s semantic features increase exploration and enjoyment, and most designers find semantic labels useful throughout the design process. Designers felt they were in control, and created highly diverse types of mood boards, including unstructured collections, design spaces, and communicative layouts. Semantic labels also helped designers ‘reflect in action’, transforming their vague ideas into expressive search queries. Reflecting on semantic labels further increased awareness of mood board content, including identifying missing elements on the board, and helped designers discover new relationships among images and find words to communicate their ideas to external stakeholders.

 

Expressivity

Expressiveness in interaction is “the way and degrees to which people can convey thoughts or feelings”. FusAIn enables a compositional means of creating Generative AI prompts, letting designers express themselves by drawing with decomposed visual material, from which Generative AI generates controllable image content that fits the composition. Extracting and reusing inspirational material matches designers’ existing work practices, making GenAI more contextualized for professional design.

FusAIn

(To appear: CHI 2025)

Although current generative AI (GenAI) enables designers to create novel images, its focus on text-based and whole-image interaction limits expressive engagement with visual materials. Based on the design concept of deconstruction and reconstruction of digital visual attributes for visual prompts, we present FusAIn, a GenAI prompt composition tool that lets designers create personalized pens by loading them with objects or attributes such as color or texture. GenAI then fuses the pen’s contents to create new images.

FusAIn lets designers (1) extract visual attributes from inspiration image collections; (2) use pens loaded with visual details to compose and control GenAI visual prompts precisely; (3) switch between local and global generation to take active control of image details; and (4) easily re-edit visual compositions by reusing generated design material.

 

Who are we?

We are researchers at Inria and the Université Paris-Saclay, in France, working on human-centered AI to support creative work.

Janin KOCH

Dr. Janin Koch is a permanent researcher at Inria Lille. Her research interests include collaborative artificial intelligence for exploratory tasks applied to domains such as creativity and search. Her work aims to define, study, and evaluate human-machine interaction to develop ideas and concepts together with intelligent machines. She was a member of the European HumanE AI network and is involved in a number of EU projects that explore creative uses of AI as well as their impacts on sustainability. She is also paper chair of the Creativity and Cognition (C&C) conference in ’25 and ’26, and vice-head of the steering committee of the new conference on Hybrid Human-Artificial Intelligence (HHAI).

Wendy E. MACKAY

I am a Research Director at Inria and Professor Attaché at the Université Paris-Saclay, where I direct the ex)situ research group. We explore the limits of interaction—how extreme users interact with technology in extreme situations. Rather than simplifying technology for novices, we study those who push its boundaries: creative professionals who redefine artistic expression, designers who challenge conventions, and scientists who uncover insights from vast data landscapes. This perspective aligns closely with the workshop’s theme on Generative AI and Human Cognition. Our goal is to create effective human-computer partnerships with GenAI where, instead of deskilling or replacing expert users, we enhance human capabilities over time.

Xiaohan PENG

Xiaohan is a second-year Human-Computer Interaction (HCI) Ph.D. student at Université Paris-Saclay, supervised by Wendy Mackay and Janin Koch. Her current research approaches human-AI interaction from the fundamentals of interaction perspective. She is particularly interested in simple, expressive, and inexpensive interaction that augments design and artistic practice. Her first project, DesignPrompt, was published at DIS ’24, and her follow-up project, FusAIn, is to appear at CHI ’25. She is also web chair of the 24th International Conference on Mobile and Ubiquitous Multimedia (MUM) ’25.