Who Designs the Bot Changes Everything

How an AI agent's role, appearance and conversation style shape the value customers receive

Time to complete: 30 minutes 

The 5-Minute Warm-Up Activity (PDF) can be downloaded above.

Who This Is For: This lesson is for anyone who builds, evaluates or is affected by AI-powered customer service tools. That includes UX designers and product managers who configure chatbots for banking, healthcare or retail; marketing strategists deciding how to present AI agents to customers; customer success leads who need to explain why some chatbot interactions feel collaborative while others feel transactional; and policy or compliance officers in financial services trying to anticipate how AI agent design affects customer trust and advice adherence. The shared frustration across these roles is that current guidance on chatbot design is either too technical or too vague: vendors promise engagement, researchers describe mechanisms and practitioners are left guessing which design choice actually moves the needle. This lesson gives you the conceptual vocabulary and evidence-backed logic to make those decisions with confidence. 

Real-World Applications

Bank of America's chatbot Erica operates as a servant agent handling routine service requests, while Woebot functions as a partner agent co-creating mental health support with users. This lesson explains why each design works for its specific context, what happens when the role and appearance of a bot are mismatched and how much customers will pay for each configuration. The same logic applies to any organization using AI in complex service delivery: the design of the bot is not a branding question, it is a value creation question. 

The Problem and Its Relevance

Most organizations deploying AI chatbots treat design as a visual and tone-of-voice exercise. They debate avatar aesthetics and greeting scripts while overlooking the foundational question: what role is this agent meant to play in the customer's life? Role determines the entire architecture of the interaction, yet it remains the least examined design decision in practice. A chatbot that looks human but acts like an automated form is not a design inconsistency, it is a trust hazard. Research shows that mismatches between an agent's assigned role and its appearance actively undermine customer confidence in the agent's capabilities, and that confidence, called proxy efficacy, is the mechanism through which customers decide whether to follow the agent's advice and whether to pay for the service at all. The problem compounds because over-reliance on a well-designed agent carries its own risks. Customers who trust an AI agent too completely begin to lose confidence in their own judgment, and that dependency eventually erodes the very outcome expectancy that made the agent valuable. Organizations that optimize for engagement without understanding this ceiling are building products that will quietly undermine themselves. 

Why This Matters

•   Role alignment shapes customer confidence before a single word is exchanged. Customers read design as a signal of intent.

•   Appearance and conversation style work together, not independently. A human-looking bot with a cold, functional script produces worse results than a coherent pairing in which appearance and script match.

•   The emotional conversation style benefits both partner and servant roles. This finding contradicts the assumption that emotional warmth is only appropriate for collaborative agents.

•   Customer willingness to pay is directly traceable to role and design coherence, not to how sophisticated the underlying AI is.

•   Over-reliance on AI is not a user failure. It is a predictable outcome of effective agent design that organizations must actively manage.

•   The four reasons customers disengage from AI agents (subjective incompatibility, usability and emotional incompatibility, human interaction essentiality, strategic skepticism) each require a different organizational response. 

Three Questions to Hold Before You Begin

•   Can you explain the difference between a partner agent and a servant agent in terms of what the customer is expected to do during the interaction?

•   Do you understand why a chatbot with a robotic appearance and an emotional conversation style outperformed other servant-role configurations?

•   Can you describe what proxy efficacy is and why it predicts customer behavior better than general trust in AI does? 

Conceptual Foundation

What Is an Embodied Conversational Agent?

An embodied conversational agent (ECA) is an AI system with a visible face or body that uses natural language to interact with people. These are not abstract chatbot text windows. They are designed to look and speak like someone. Examples include Bank of America's Erica, Woebot and the digital humans created by Soul Machines. The research examined how the design of these agents (their role, their face, their language) affects what customers believe the agent can do for them and what they ultimately do with its advice. 

The Two Roles: Partner and Servant

Social cognitive theory identifies two fundamental relationships a person can have with a proxy agent: working with someone or working through someone. Applied to AI, this becomes the partner role and the servant role. A partner agent co-creates value alongside the customer. A servant agent takes over a task on the customer's behalf. Neither role is superior in the abstract; each suits a different context, a different customer need and a different design configuration.
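To make the design space concrete, here is a minimal sketch in Python of the three dimensions this section combines: role, appearance and conversation style. The names and the example configuration are invented for this lesson, not taken from the underlying study; the only grounded detail is that the lesson later identifies partner-human-emotional as the highest-priced configuration.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PARTNER = "partner"   # co-creates value with the customer
    SERVANT = "servant"   # takes over a task on the customer's behalf

class Appearance(Enum):
    HUMAN = "human"
    ROBOTIC = "robotic"

class ConversationStyle(Enum):
    EMOTIONAL = "emotional"
    FUNCTIONAL = "functional"

@dataclass(frozen=True)
class AgentDesign:
    role: Role
    appearance: Appearance
    style: ConversationStyle

# The full design space is 2 x 2 x 2 = 8 configurations. This one is the
# configuration the lesson reports as commanding the highest price point.
highest_priced = AgentDesign(Role.PARTNER, Appearance.HUMAN, ConversationStyle.EMOTIONAL)
```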

The Value-by-Proxy Process

Customers decide to trust and use an AI agent through a two-stage process. First, they form a judgment about whether the agent is capable of helping them, which the researchers call proxy efficacy. Second, they form an expectation about whether using the agent will actually produce a result they value, which the researchers call outcome expectancy. Role-congruent design (matching the agent's look and language to its assigned role) drives proxy efficacy. Proxy efficacy drives outcome expectancy. Outcome expectancy drives whether customers book a follow-up, follow the advice and pay for the service.
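Read as a pipeline, the chain is simple enough to sketch in code. Every number below is invented for illustration; the study measured these constructs on survey scales, not with these formulas.

```python
def proxy_efficacy(role_congruent: bool) -> float:
    """Stage 1: the customer's judgment of the agent's capability (0-9 scale).
    Role-congruent design (look and language match the role) raises it."""
    return 7.0 if role_congruent else 4.0  # illustrative values only

def outcome_expectancy(efficacy: float) -> float:
    """Stage 2: the expectation that using the agent produces a valued result.
    Modeled here as simply proportional; the over-reliance section below
    adds the curvature this linear version ignores."""
    return 0.8 * efficacy

def will_adopt(expectancy: float, threshold: float = 4.5) -> bool:
    """Downstream behavior: book a follow-up, follow advice, pay."""
    return expectancy >= threshold

# Congruent design clears the adoption threshold; incongruent design does not.
print(will_adopt(outcome_expectancy(proxy_efficacy(True))))   # True  (5.6 >= 4.5)
print(will_adopt(outcome_expectancy(proxy_efficacy(False))))  # False (3.2 < 4.5)
```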

The Over-Reliance Ceiling

The relationship between proxy efficacy and outcome expectancy is not linear. At moderate levels of reliance, higher confidence in the agent produces higher outcome expectations. But beyond a turning point (measured at 7.5 on the study's 9-point reliance scale), the effect reverses. Customers who rely on the agent for everything begin to doubt whether the outcomes it promises are actually achievable by them. Organizations need to design agents that build customer confidence without creating dependency.
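To make the turning point concrete, the sketch below models the ceiling as an inverted-U quadratic. The coefficients are invented, chosen only so that the vertex lands at 7.5 on the 9-point scale reported above; the study's actual functional form and estimates may differ.

```python
# Inverted-U: outcome_expectancy = A + B*r - C*r**2, with vertex at r* = B / (2*C).
# Illustrative coefficients, picked so the turning point is 7.5 as in the lesson.
A, B, C = 1.0, 1.5, 0.1   # vertex = 1.5 / (2 * 0.1) = 7.5

def outcome_expectancy(reliance: float) -> float:
    """Expected outcome as a function of reliance on the agent (0-9 scale)."""
    return A + B * reliance - C * reliance ** 2

for r in (5.0, 7.5, 9.0):
    print(f"reliance={r}: expectancy={outcome_expectancy(r):.2f}")
# reliance=5.0: 6.00  (still rising)
# reliance=7.5: 6.62  (the peak)
# reliance=9.0: 6.40  (declining past the turning point)
```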

Roadmap

Working individually or in groups, complete the following steps. The full activity is designed to take 25 minutes after the 5-minute warm-up.

1. Select a service context where an AI agent is either already deployed or likely to be deployed. Good examples include personal finance coaching, retail customer support, health and wellness advising or travel planning. Choose a context where customers have a real stake in the outcome.

2. Decide whether your agent should operate in a partner role or a servant role. Write two sentences explaining your choice. Reference the type of customer need your agent will address: is the customer outsourcing a task or seeking collaborative support?

3. Design the agent's appearance and conversation style. Based on the research, identify whether your role choice calls for a human or robotic appearance and an emotional or functional conversation style. Explain specifically why these design choices match the role you assigned.

4. Map the value-by-proxy chain for your agent. Write out what proxy efficacy looks like in your context (what must the customer believe the agent can do?) and what outcome expectancy looks like (what result does the customer expect if they use the agent?).

5. Identify the over-reliance risk. At what point might a customer in your service context become too dependent on your agent? Propose one design feature that would encourage independent judgment at the right moment.

6. Choose one of the four non-engagement reasons (subjective incompatibility, usability and emotional incompatibility, human interaction essentiality, strategic skepticism) and describe how you would address it in your agent's design or onboarding flow.

7. Develop a willingness-to-pay position. Based on the finding that the partner-human-emotional configuration achieved the highest average price point ($7.00 per use), decide what configuration you would use and what pricing model (subscription or pay-per-use) you would recommend. Justify both decisions with evidence from the research; a break-even sketch follows this list.
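Here is the break-even sketch referenced in step 7. The $7.00 per-use figure comes from the research finding above; the monthly subscription price is a hypothetical placeholder you would replace with your own candidate price.

```python
# Break-even between pay-per-use and a flat monthly subscription.
PAY_PER_USE = 7.00            # highest average price point reported in the lesson
subscription_monthly = 25.00  # hypothetical subscription price, for illustration

breakeven_uses = subscription_monthly / PAY_PER_USE
print(f"Subscription wins after {breakeven_uses:.1f} uses per month")
# 25.00 / 7.00 = ~3.6, so a customer expecting 4+ uses/month prefers the subscription.
```

Under these assumptions, a customer expecting four or more uses per month is better served by the subscription; whether that expectation is realistic in your service context is exactly what step 7 asks you to argue.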

Individual Reflection

After completing the activity, consider the following questions individually.

•   Which finding surprised you most and what assumption does it challenge in how you currently think about chatbot design?

•   How does understanding proxy efficacy change what you would measure when evaluating an AI agent's performance in your organization?

•   If you had to redesign a chatbot you interact with regularly, which role would you assign it and why?

•   What is one thing you now believe organizations consistently get wrong about AI agent design, based on the evidence presented here? 

The Bottom Line

The design of an AI agent is a policy decision. When an organization chooses to give a financial chatbot a human face and emotional language, it is encoding a claim about the nature of the relationship it intends to have with its customers. That claim either matches the agent's assigned role or it does not, and the mismatch is directly measurable in customer behavior, advice adherence and revenue. The over-reliance finding carries an implication that most AI deployment guides ignore: a highly effective AI agent, used consistently, can erode the very human capabilities it was designed to support. Designing for engagement without designing against dependency is not cautious innovation, it is a liability organizations are choosing not to see.

#AIAgentDesign   #ProxyEfficacy   #EmbodiedAI   #CustomerEngagement   #AIServiceDesign