Designing with AI
Who Is Actually in the Room?
Time to Complete: 30 minutes
Download the 5-minute PDF Warm-Up Activity above.
Who This Is For: UX designers, technical communication instructors, product designers, content strategists, and digital learning specialists working inside organizations that have integrated AI tools into early-stage design workflows. These practitioners face a concrete problem: they are under pressure to use generative AI efficiently but lack a principled framework for knowing when to trust it and when to push back. This lesson is also directly relevant to instructional designers building AI-augmented curricula, academic researchers studying human-computer interaction, and professionals in healthcare, ed-tech, and software product development who rely on design thinking to create user-centered digital tools. The shared challenge across all of these roles is that generative AI can accelerate the design process in measurable ways while simultaneously creating blind spots that no prompt refinement can fully resolve.
Real-World Applications
UX research teams at product companies are already using AI tools to generate personas, draft content copy, produce color palettes and icon sets, and analyze early design layouts -- tasks that map directly onto the defining and designing phases examined in this research. A UX lead at a digital health company who uses an AI-enabled collaborative whiteboard to ideate features for a patient-facing app benefits from the speed and breadth that generative AI offers in the early ideation stage; however, that same tool cannot determine whether the app's target users will understand or trust the interface. The gap between generating design ideas and genuinely understanding the people those designs serve is exactly the tension this lesson addresses. Practitioners who learn to name this gap are better positioned to decide when AI assistance adds value and when the work requires the kind of human-centered inquiry no tool can replace.
The Problem and Its Relevance
Generative AI is reshaping how design teams work, but it is not reshaping the part of the process that matters most. Research with undergraduate technical writing students shows that AI tools were effective at defining and designing -- generating mind maps, color palettes, icons, and layout ideas -- but contributed little to empathizing with real users or conducting meaningful usability evaluation. That is not a student problem; it is a structural feature of how these tools operate. No amount of prompt engineering enables AI to conduct situated user research. Organizations adopting generative AI in their UX workflows without accounting for this limitation are not becoming more efficient; they are relocating labor from the visible parts of the design process to the invisible work of repairing decisions that were never grounded in actual user needs.
A second issue cuts even deeper into how practitioners understand AI's role in creative work. Students in the study consistently sought to personalize and revise AI-generated outputs -- a tendency that research on consumer behavior connects to the fundamental human desire to claim authorship over things that carry consequence. That impulse is not a sign of AI skepticism; it is a signal that practitioners understand, at least intuitively, that AI-generated design elements carry no accountability. When the algorithm produces an icon and no one on the team made a deliberate choice about it, no one owns the decision -- and no one will catch the problem when the icon fails its intended users. Treating AI as a creative contributor without establishing who retains judgment and responsibility is not a workflow enhancement; it is a liability transfer.
Core Concepts: Understanding GenAI's Role in Design Thinking
Design Thinking in UX
Design thinking is a structured approach to solving user problems that moves through five stages: empathizing with users, defining their needs, designing potential solutions, evaluating those solutions with real users, and iterating based on what is learned. Research in technical communication pedagogy applies this model to course assignments requiring students to create prototypes and conduct usability testing. Each stage requires a different kind of thinking, and not all stages are equally supported by AI assistance.
Multimodal Generative AI
Multimodal generative AI refers to applications that produce multiple types of output from simple text prompts — including written text, images, interface components such as buttons and icons, and color palettes. In the UX context, tools like AI-enabled collaborative whiteboards and AI design plugins allow practitioners to generate visual and structural design elements without requiring advanced graphic design skills. This multimodal output capacity is what makes generative AI valuable in the defining and designing stages, where generating and exploring many ideas quickly is an advantage.
Human-in-the-Loop
Human-in-the-loop is an approach to AI-assisted workflows that requires human input and judgment at critical decision points, rather than allowing AI to complete tasks autonomously from start to finish. In UX design, this means practitioners evaluate AI suggestions against their knowledge of real user needs, accept suggestions that serve those needs, and discard or revise suggestions that do not. Research found that students instinctively adopted this approach when they recognized that AI-generated icons did not function correctly as navigation elements, or when AI-suggested ideas exceeded the scope of what their project could realistically deliver.
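To make the loop concrete, here is a minimal sketch of how a team might script its review step. Everything named in it (AISuggestion, Review, Decision, review_gate) is illustrative rather than part of any specific tool; the point it demonstrates is that nothing AI generates enters the design until a named person records a decision and a reason grounded in user needs.

from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"    # serves a known user need as generated
    REVISE = "revise"    # useful starting point, but a human must rework it first
    REJECT = "reject"    # conflicts with user needs or project scope

@dataclass
class AISuggestion:
    stage: str      # design-thinking stage where it was generated, e.g. "design"
    content: str    # the generated artifact: icon concept, copy, layout idea

@dataclass
class Review:
    suggestion: AISuggestion
    reviewer: str        # the named person accountable for the call
    decision: Decision
    rationale: str       # stated in terms of an actual user need

def review_gate(suggestion: AISuggestion, reviewer: str,
                decision: Decision, rationale: str) -> Review:
    # Refuse to pass AI output downstream without a documented human judgment.
    if not rationale.strip():
        raise ValueError("No AI suggestion passes the gate without a rationale.")
    return Review(suggestion, reviewer, decision, rationale)

# Hypothetical example: the icon looks on-theme, but testing suggests users misread it.
icon = AISuggestion(stage="design", content="gear icon for the settings tab")
review = review_gate(icon, reviewer="team lead", decision=Decision.REVISE,
                     rationale="Pilot users read the gear as 'tools', not 'settings'.")

The structure matters more than the code: the gate forces someone to own each decision, which is exactly what the students in the research were doing informally when they overrode AI-generated icons that did not work as navigation.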
Critical AI Literacy
Critical AI literacy is the capacity to understand not just how to use AI tools but when to use them, what their limitations are, and what responsibilities remain with the human practitioner after AI has contributed to a task. In UX design, this includes recognizing that AI tools are trained on general patterns and best practices but cannot conduct situated user research — meaning they have no access to the specific, lived experiences of the actual people a design is meant to serve. Students who identified these limitations during their project were demonstrating exactly this kind of literacy.
Local AI Ethics
Local AI ethics refers to the practice-specific norms that individuals and teams negotiate as they integrate AI tools into their work. Rather than applying abstract ethical principles from outside, local AI ethics emerges from the concrete decisions practitioners make about when AI assistance is appropriate, what output is acceptable, and what must remain the product of human judgment. In the research, student teams developed their local AI ethics through group statements documenting their AI use and through the choices they made about when to accept or override AI suggestions.
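That documentation habit can be as lightweight as a shared log. The sketch below shows one hypothetical shape for it; the field names, statement text, and sample entries are assumptions offered for illustration, not a standard. Each accept-or-override choice is recorded next to the team's own statement, and the team can later ask the log where overrides clustered, which is one way a group discovers from its own records which stages AI served well and which it did not.

from collections import Counter

# The team's working agreement, written in its own words (illustrative text).
TEAM_STATEMENT = (
    "We use AI to draft layout ideas and icons during define and design. "
    "Nothing ships until someone on the team has checked it against what "
    "we heard from users, and every override is written down."
)

# One entry per accept-or-override choice. Field names are illustrative.
ai_use_log = [
    {"stage": "define", "artifact": "persona draft", "action": "accept",
     "reason": "matched what users told us in interviews"},
    {"stage": "design", "artifact": "nav icon set", "action": "override",
     "reason": "icons looked on-theme but failed as navigation in testing"},
    {"stage": "design", "artifact": "color palette", "action": "accept",
     "reason": "met contrast needs for older users"},
]

def overrides_by_stage(log):
    # Count where the team had to overrule the AI -- a rough map of its blind spots.
    return Counter(entry["stage"] for entry in log if entry["action"] == "override")

print(overrides_by_stage(ai_use_log))   # e.g. Counter({'design': 1})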
Lesson Activity (25 minutes)
Read all steps before starting.
Step 1 -- Locate the Gap (5 min)
Think of an AI tool you have used or observed in a design or communication context. Identify one specific task the tool handled well and one task it handled poorly. Write a single sentence for each. Do not describe the tool — describe what happened to the output when human judgment was or was not applied.
Step 2 -- Map It to the Framework (8 min)
Using the five stages of design thinking — empathize, define, design, evaluate, iterate — place each of your two examples at the stage where the AI contribution occurred. Then identify which stage was skipped or weakened as a result. The research found that AI tools performed well in defining and designing but struggled in empathizing and evaluating. Does your example confirm or complicate that finding? Write two to three sentences explaining why.
Step 3 -- Draft Your Local AI Ethics Statement (8 min)
Write a four to six sentence statement that describes how you or your team would use AI in a UX design project. The statement must answer three questions: At which specific design stage would you bring AI in? What would you require from human team members before and after AI generates output? What would disqualify an AI suggestion from being used without further revision? Refer to at least one core concept from this lesson in your statement.
Step 4 -- Share and Challenge (4 min)
Exchange your statement with one other participant. Read their statement and identify one design scenario in which their rules would fail — a situation where following their stated guidelines would still produce a design that did not serve its users. Share your challenge in one sentence. The goal is not to find fault; it is to stress-test the statement against a real-world edge case.
Reflection Questions
The research found that students who had prior experience with AI tools like ChatGPT still struggled to apply that experience to a new design context. What does that tell you about the relationship between general AI familiarity and domain-specific AI literacy?
One student team described AI as a tool for breaking the cold-start block, something they used only to get past the blank-page moment. Another team wove AI into every stage of their workflow. Which approach resulted in stronger accountability for design decisions, and why?
A student noted that AI-generated icons looked thematically correct but failed as navigation elements. What does that failure reveal about the difference between visual plausibility and functional design?
If your organization required you to document, for every AI suggestion you accepted, why you accepted it and what user need it served, how would that change how you use AI in your current workflow?
The Bottom Line
Generative AI tools are most valuable in UX design when they accelerate the phases where quantity and variety of ideas matter — but they are least suited to the phases where accuracy about specific, situated human needs determines whether a design actually works. Using AI to generate app ideas and visualize interface components is a legitimate workflow efficiency. Treating those AI-generated artifacts as a substitute for speaking with real users, observing their behaviors, or testing with them is a category error that produces polished prototypes that no one asked for. The quality of the final design is determined by the quality of the empathizing phase, and no AI tool changes that.
The more consequential insight from this research is about authorship and accountability. When AI generates design content and practitioners accept it without deliberate scrutiny, authorial responsibility becomes diffuse in ways that create real professional risk. Research on human behavior with automated systems shows that people consistently want to add their own labor and judgment to AI outputs — not because they distrust the output but because they understand that claiming a design means being accountable for it. Instructors and team leaders who build that expectation into their workflows — by requiring practitioners to document why they accepted or overrode each AI suggestion — are not slowing down the design process. They are ensuring that someone in the room can answer for what was built.
#GenAIinUXDesign #HumanInTheLoop #AILiteracy #DesignThinking #MultimodalAI