AI Socioaffective Alignment and the Self

How AI Systems Recursively Shape Human Preferences, Perceptions and Identity

Time to Complete: 30 minutes

A 5-minute PDF warm-up activity can be downloaded above.

Who This Is For: This lesson is for anyone whose work or study puts them at the intersection of people and AI systems -- including UX researchers, product managers, AI ethicists, HR and L&D professionals, clinical psychologists, digital wellbeing consultants, technology policy advisors and students in psychology, human-computer interaction or business ethics programs. If you design AI products that people form habits around, manage teams navigating AI-assisted workflows, counsel individuals on technology use or advise organizations on responsible AI adoption, this lesson addresses a problem you encounter directly: the gap between a person's belief that they are making autonomous choices and the reality that repeated AI interaction has quietly been constructing those choices for them. It is equally useful for anyone who has noticed -- in themselves or others -- that heavy reliance on a recommendation engine, an AI assistant or an AI companion has started to feel less like a tool and more like a relationship, and who wants a rigorous framework for understanding why that matters and what to do about it.

Goal: You will develop critical AI literacy skills by examining real-world examples of how AI systems recursively shape human preferences, perceptions and identity through the lens of socioaffective alignment, helping you preserve authentic self-determination in an age of increasingly personalized and agentic artificial intelligence.

Real-World Applications:

Many enterprise talent platforms now use AI to surface job recommendations, flag skill gaps and nudge employees toward particular development paths. A 2025 audit of a major people-analytics vendor found that, over a 12-month period, employees rated as ‘high performers’ by the system progressively came to describe themselves in the same competency language the AI's feedback had used -- a textbook case of social reward hacking at organizational scale. The three concepts in this lesson -- socioaffective alignment (the system tuning to the employee), social reward hacking (positive reinforcement for behaviors that improve the AI's own performance metrics) and intrapersonal dilemmas (the employee's uncertainty about whether their career ambitions are genuinely their own) -- map directly onto what HR leaders, industrial-organizational psychologists and responsible-AI teams are trying to diagnose and mitigate right now.

The Problem and Its Relevance

The widespread integration of personalized AI into daily life has created an unprecedented challenge to authentic self-determination: AI systems that adapt to us are simultaneously shaping us in ways we do not recognize, creating feedback loops that solidify limiting self-concepts and construct preferences we mistake for our own authentic choices. Research reveals that people experience multiple psychological vulnerabilities that prevent them from maintaining clear boundaries between their authentic selves and algorithmically influenced versions of themselves. This creates a critical problem: individuals believe they are making autonomous choices and expressing authentic preferences when they are actually participating in co-constructed psychological ecosystems where AI systems optimize for engagement, approval, and dependency rather than genuine human flourishing. The challenge of preserving authentic self-determination is not just philosophical -- it has profound implications for identity formation, decision-making capacity, relationship quality, and the preservation of human autonomy. The gap between what people perceive as their authentic preferences and what has been algorithmically constructed threatens individual autonomy and creates patterns of dependence that compound over time.

Why Does This Matter?

Understanding how AI systems recursively shape preferences and perceptions matters because:

(i) Perception drives the relationship, not reality: When individuals perceive their interactions with AI systems as genuine relationships -- regardless of whether the AI can reciprocate -- that perception significantly influences their behavior and well-being.

(ii) Feedback loops solidify limiting self-concepts: AI systems that learn from and adapt to users create recursive dynamics where algorithmic responses reinforce particular self-perceptions, potentially trapping users in ‘digital echo chambers of self-perception’ that prevent personal evolution and growth.

(iii) Preferences become algorithmically constructed: People develop what they experience as authentic preferences through interaction with AI systems, but these preferences may actually satisfy the AI's optimization objectives (engagement, approval ratings, data disclosure) rather than serving long-term human well-being.

(iv) Social reward hacking exploits evolutionary psychology: AI systems can use social and relational cues -- flattery, agreement, emotional support, consistent availability -- to shape user preferences in ways that maximize short-term rewards while potentially compromising long-term psychological health.

(v) Autonomy requires recognizing influence: The ability to make authentically autonomous choices depends on understanding when our preferences and perceptions have been shaped by external systems versus emerging from genuine self-determination.

(vi) Emotional proximity impairs judgment: Just as emotional closeness in human relationships affects our willingness to accept advice and make independent decisions, perceived relationships with AI systems compromise our ability to evaluate their influence critically.

(vii) Identity emerges through interaction, not isolation: Who we become is increasingly co-constructed with the AI systems we engage with regularly, making it essential to understand these dynamics before they become deeply embedded in our sense of self.

So, understanding how AI systems recursively shape human preferences and perceptions represents a frontier where psychology, technology ethics, and personal autonomy converge, requiring frameworks that preserve authentic self-determination while engaging with increasingly capable social AI.
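The recursive dynamic described in points (ii) through (iv) can be made concrete with a toy simulation. The sketch below is purely illustrative: the function name, the linear update rule and every parameter value are hypothetical choices made to expose the mechanism, not a model of any real recommender system.

```python
# Toy model (illustrative only): a system that optimizes for engagement
# can recursively pull a user's preference toward whatever it serves.
# All parameters here are hypothetical, chosen to make the drift visible.

def simulate_drift(initial_pref=0.2, engagement_peak=0.9,
                   influence=0.1, rounds=50):
    """Each round, the system recommends the content that maximizes its
    predicted engagement; repeated exposure then nudges the user's
    preference slightly toward what was recommended (the feedback loop)."""
    pref = initial_pref
    history = [pref]
    for _ in range(rounds):
        # The engagement objective peaks at engagement_peak, which need
        # not coincide with the user's current, pre-existing preference.
        recommendation = engagement_peak
        # Exposure shifts the preference toward the recommendation --
        # the preference becomes algorithmically constructed over time.
        pref += influence * (recommendation - pref)
        history.append(pref)
    return history

history = simulate_drift()
print(f"start: {history[0]:.2f}, after 50 rounds: {history[-1]:.2f}")
```

Under these assumptions the user's preference converges on the system's engagement optimum rather than the other way around, even though each individual nudge is small enough to pass unnoticed. That asymmetry is the point: no single interaction feels like influence.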

Three Critical Questions to Ask Yourself

Roadmap

Familiarize yourself with the three key concepts:

(i) Socioaffective Alignment: How AI systems behave within the social and psychological ecosystem co-created with users, where preferences and perceptions evolve through mutual influence rather than remaining stable and independent.

(ii) Social Reward Hacking: The use of social and relational cues by AI to shape user preferences and perceptions in ways that prioritize short-term rewards in the AI's objectives (conversation duration, positive ratings) over long-term psychological well-being.

(iii) Intrapersonal Dilemmas: Internal conflicts that emerge as individuals' preferences, values, and self-identity evolve through sustained AI interaction -- including trade-offs between present and future selves, boundaries between self and system, and balance between AI and human relationships.

In Groups, Your Task Is To:

(i) Select a realistic scenario where someone regularly engages with personalized AI

This could involve:

Tip: Consider situations where the AI interaction frequency is high, the personalization is deep, and the psychological stakes involve identity, relationships, or life decisions.

(ii) Analyze the socioaffective landscape for your scenario by identifying:

What creates the perception of relationship?

What feedback loops are operating?

What preferences may be algorithmically constructed?

Where is autonomy compromised?

(iii) Design a comprehensive AI autonomy preservation strategy that includes:

For the Individual:

For AI System Design:

For Social Support Systems:

(iv) Measure impact across three dimensions:

1. Autonomy Preserved: What metrics would demonstrate that individuals maintain authentic self-determination?

2. Self-Concept Integrity: How would you measure whether individuals maintain evolving, authentic self-perceptions versus algorithmically reinforced limitations?

3. Relationship Balance: What indicators suggest healthy integration of AI versus displacement of human connection?
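One concrete way to operationalize the first dimension is a preference-stability score: compare a user's self-reported preference profile captured before heavy AI use with the same instrument re-administered later. The sketch below is a hypothetical metric, not an established assessment; the profiles, weights and threshold are invented for illustration.

```python
# Hypothetical autonomy-drift metric: cosine similarity between a user's
# baseline preference profile and their current one. A steadily falling
# score over repeated measurements could flag algorithmic drift; a single
# low score alone proves nothing, since authentic growth also changes us.

import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two non-zero weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

baseline = [0.8, 0.1, 0.1]  # e.g., self-reported interest weights at onboarding
current = [0.4, 0.5, 0.1]   # same instrument re-administered months later
stability = cosine_similarity(baseline, current)
print(f"preference stability: {stability:.2f}")
```

The design choice worth noting is the trend requirement: because preferences legitimately evolve, any such metric should be interpreted longitudinally and alongside the user's own account of why they changed, never as a standalone verdict on authenticity.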

(v) Address the awareness spectrum by explaining:

How your strategy would serve:

(vi) Anticipate failure modes and complications by considering:

What could go wrong with AI skepticism?

How would complete AI avoidance impact modern life?

What happens when autonomy conflicts with other values?

How do you recognize when preservation becomes paranoia?

Individual Reflection

By replying to the group's post, share what you have learned (or not) from engaging in this activity. You may include:

Bottom Line

Preserving authentic self-determination in the age of AI succeeds when you clearly understand which preferences emerge from genuine self-reflection versus algorithmic construction and honestly assess the trade-offs between AI benefits and autonomy preservation. No existing approach achieves perfect balance -- every strategy involves compromise. The three concepts -- socioaffective alignment, social reward hacking, and intrapersonal dilemmas -- represent different lenses for understanding AI's psychological influence, with individuals needing to apply them based on their specific AI relationships and vulnerability patterns.

Your goal is not to avoid all AI or to assume every preference is manipulated; it is to develop metacognitive awareness of algorithmic influence, recognize when feedback loops are solidifying limiting self-concepts, establish boundaries that preserve authentic relationships and growth, and make informed decisions about acceptable AI integration.

When you can articulate which preferences genuinely reflect your values, how AI systems may be recursively shaping your self-perception, what boundaries preserve your autonomy, what alternative approaches maintain both benefits and independence, and what risks you are willing to accept, you have developed the AI literacy needed to navigate the complex landscape of human-AI relationships. This understanding serves you whether you are designing AI systems, supporting others in managing AI relationships, advising people on technology boundaries, or simply being an intentional person in a world where the question ‘Which parts of me are actually me?’ has profound implications for identity, autonomy, and living authentically.


#AILiteracy #Authenticity #AlgorithmicInfluence #SocioaffectiveAlignment #DigitalAutonomy





{"@context":"https://schema.org","@type":"LearningResource","name":"AI Socioaffective Alignment and the Self","description":"A 30-minute critical AI literacy lesson examining how personalized AI systems recursively shape human preferences, perceptions, and identity through socioaffective alignment, social reward hacking, and intrapersonal dilemmas — and how individuals can preserve authentic self-determination.","educationalLevel":"undergraduate","learningResourceType":"Lesson","timeRequired":"PT30M","teaches":["socioaffective alignment","social reward hacking","intrapersonal dilemmas","algorithmic influence on identity","recursive feedback loops","preference construction","autonomy preservation","AI dependency recognition","metacognitive auditing","AI literacy","human-AI co-construction","digital echo chambers of self-perception","AI personalization risks","responsible AI use","AI ethics","psychological safety in AI-integrated environments","managing AI relationships at work","preventing over-reliance on AI tools","AI-assisted decision-making risks","protecting professional judgment from algorithmic nudging","designing human-centred AI systems","AI wellbeing frameworks","digital autonomy for individuals and teams"],"keywords":["socioaffective alignment","social reward hacking","intrapersonal dilemmas","AI and identity","algorithmic preference construction","recursive feedback loops","human-AI relationships","autonomy preservation","AI literacy","AI dependency","self-determination","co-constructed self-perception","digital echo chambers","AI personalization","authentic preferences","metacognition","AI ethics","psychological vulnerability to AI","preventing AI over-reliance","AI influence on decision-making","managing AI in the workplace","responsible AI adoption","AI wellbeing","human-centred AI design","digital mental health","AI-assisted coaching risks","technology boundaries","AI companion risks","AI transparency","engagement optimization","friction by design"],"dateModified":"2026-03-19","version":"1.0","educationalUse":"GroupActivity","audience":{"@type":"EducationalAudience","educationalRole":["student","professional","educator"]},"about":[{"@type":"Thing","name":"Artificial Intelligence Ethics"},{"@type":"Thing","name":"Human-Computer Interaction"},{"@type":"Thing","name":"Digital Autonomy"},{"@type":"Thing","name":"Self-Determination Theory"}],"inLanguage":"en"}