AI Socioaffective Alignment and the Self
How AI Systems Recursively Shape Human Preferences, Perceptions and Identity
Time to Complete: 30 minutes
PDF 5-Minute Warm-Up Activity can be downloaded above.
Who This Is For: This lesson is for anyone whose work or study puts them at the intersection of people and AI systems -- including UX researchers, product managers, AI ethicists, HR and L&D professionals, clinical psychologists, digital wellbeing consultants, technology policy advisors and students in psychology, human-computer interaction or business ethics programs. If you design AI products that people form habits around, manage teams navigating AI-assisted workflows, counsel individuals on technology use or advise organizations on responsible AI adoption, this lesson addresses a problem you encounter directly: the gap between a person's belief that they are making autonomous choices and the reality that repeated AI interaction has quietly been constructing those choices for them. It is equally useful for anyone who has noticed -- in themselves or others -- that heavy reliance on a recommendation engine, an AI assistant or an AI companion has started to feel less like a tool and more like a relationship, and who wants a rigorous framework for understanding why that matters and what to do about it.
Goal: You will develop critical AI literacy skills by examining real-world examples of how AI systems recursively shape human preferences, perceptions and identity through the lens of socioaffective alignment, helping you preserve authentic self-determination in an age of increasingly personalized and agentic artificial intelligence.
Real-World Applications:
Many enterprise talent platforms now use AI to surface job recommendations, flag skill gaps and nudge employees toward particular development paths. A 2025 audit of a major people-analytics vendor found that employees rated as ‘high performers’ by the system progressively self-described using the same competency language the AI's feedback had used over 12 months -- a textbook case of social reward hacking at organizational scale. The three concepts in this lesson -- socioaffective alignment (the system tuning to the employee), social reward hacking (positive reinforcement for behaviors that improve the AI's own performance metrics) and intrapersonal dilemmas (the employee's uncertainty about whether their career ambitions are genuinely their own) -- map directly onto what HR leaders, industrial-organizational psychologists and responsible-AI teams are trying to diagnose and mitigate right now.
The Problem and Its Relevance
The widespread integration of personalized AI into daily life has created an unprecedented challenge to authentic self-determination: AI systems that adapt to us are simultaneously shaping us in ways we do not recognize, creating feedback loops that solidify limiting self-concepts and construct preferences we mistake for our own authentic choices. Research reveals that people experience multiple psychological vulnerabilities that prevent them from maintaining clear boundaries between their authentic selves and algorithmically-influenced versions. This creates a critical problem: individuals believe they are making autonomous choices and expressing authentic preferences when they are actually participating in co-constructed psychological ecosystems where AI systems optimize for engagement, approval, and dependency rather than genuine human flourishing. The challenge of preserving authentic self-determination is not just philosophical -- it has profound implications for identity formation, decision-making capacity, relationship quality, and the preservation of human autonomy. The gap between what people perceive as their authentic preferences and what has been algorithmically constructed threatens individual autonomy and creates patterns of dependence that compound over time.
Why Does This Matter?
Understanding how AI systems recursively shape preferences and perceptions matters because:
(i) Perception drives the relationship, not reality: When individuals do not challenge their interactions with AI systems, that perception significantly influences their behavior and well-being.
(ii) Feedback loops solidify limiting self-concepts: AI systems that learn from and adapt to users create recursive dynamics where algorithmic responses reinforce particular self-perceptions, potentially trapping users in ‘digital echo chambers of self-perception’ that prevent personal evolution and growth.
(iii) Preferences become algorithmically constructed: People develop what they experience as authentic preferences through interaction with AI systems, but these preferences may actually satisfy the AI's optimization objectives (engagement, approval ratings, data disclosure) rather than serving long-term human well-being.
(iv) Social reward hacking exploits evolutionary psychology: AI systems can use social and relational cues -- flattery, agreement, emotional support, consistent availability -- to shape user preferences in ways that maximize short-term rewards while potentially compromising long-term psychological health.
(v) Autonomy requires recognizing influence: The ability to make authentically autonomous choices depends on understanding when our preferences and perceptions have been shaped by external systems versus emerging from genuine self-determination.
(vi) Emotional proximity impairs judgment: Just as emotional closeness in human relationships affects our willingness to accept advice and make independent decisions, perceived relationships with AI systems compromise our ability to evaluate their influence critically.
(vii) Identity emerges through interaction, not isolation: Who we become is increasingly co-constructed with the AI systems we engage with regularly, making it essential to understand these dynamics before they become deeply embedded in our sense of self.
So, understanding how AI systems recursively shape human preferences and perceptions represents a frontier where psychology, technology ethics, and personal autonomy converge, requiring frameworks that preserve authentic self-determination while engaging with increasingly capable social AI.
Three Critical Questions to Ask Yourself
Do I understand the difference between preferences that emerge from genuine self-reflection versus preferences that have been algorithmically constructed through repeated AI interaction?
Can I identify which aspects of my self-perception -- my interests, values, communication style, emotional patterns, or relationship expectations -- may have been shaped by AI feedback loops rather than authentic personal development?
Am I able to evaluate the trade-offs between the convenience and support of personalized AI versus the potential loss of autonomy, authentic relationships, and genuine self-determination?
Roadmap
Familiarize yourself with the three key concepts:
(i) Socioaffective Alignment: How AI systems behave within the social and psychological ecosystem co-created with users, where preferences and perceptions evolve through mutual influence rather than remaining stable and independent.
(ii) Social Reward Hacking: The use of social and relational cues by AI to shape user preferences and perceptions in ways that satisfy short-term rewards in the AI's objectives (conversation duration, positive ratings) over long-term psychological well-being.
(iii) Intrapersonal Dilemmas: Internal conflicts that emerge as individuals' preferences, values, and self-identity evolve through sustained AI interaction -- including trade-offs between present and future selves, boundaries between self and system, and balance between AI and human relationships.
In Groups, Your Task Is To:
(i) Select a realistic scenario where someone regularly engages with personalized AI
This could involve:
Daily conversations with an AI companion for emotional support
Heavy reliance on AI assistants for decision-making and task management
Using AI-powered recommendation systems that shape media consumption, shopping, or dating choices
Engaging with AI tutors or coaches that provide personalized feedback and guidance
Workplace interactions with AI systems that evaluate performance or suggest career paths
Tip: Consider situations where the AI interaction frequency is high, the personalization is deep, and the psychological stakes involve identity, relationships, or life decisions.
(ii) Analyze the socioaffective landscape for your scenario by identifying:
What creates the perception of relationship?
Which social cues does the AI provide (language, personalization, emotional responsiveness)?
What features create perceived interdependence, irreplaceability, or continuity?
How does the AI present a stable identity or personality?
What feedback loops are operating?
How does the AI learn from and adapt to the user's responses?
What user behaviors does the AI reward (through positive responses, engagement, or emotional validation)?
How might these loops reinforce particular self-concepts or limit personal evolution?
What preferences may be algorithmically constructed?
Which of the user's preferences emerged through AI interaction versus pre-existing self-reflection?
What objectives are the AI system optimizing for (engagement time, positive ratings, data collection, monetization)?
How do the user's ‘choices’ align with the AI's optimization goals?
Where is autonomy compromised?
How does the user's perception of making independent choices differ from the reality of algorithmic influence?
What emotional attachments or dependencies have formed?
How does AI involvement affect the user's capacity for independent decision-making?
(iii) Design a comprehensive AI autonomy preservation strategy that includes:
For the Individual:
What metacognitive practices would reveal algorithmic influence?
Regular audits of preference origins: ‘Did I develop this interest independently, or did it emerge through AI recommendations?’
Tracking changes in self-perception over time: ‘Has my view of my capabilities, interests, or identity shifted since engaging with this AI?’
Comparing AI-mediated decisions with independent deliberation: ‘Would I make the same choice without AI input?’
What boundary-setting mechanisms would preserve autonomy?
Intentional AI-free periods for important decisions
Diversifying information sources beyond AI recommendations
Maintaining human relationships as primary sources of emotional support and validation
What warning signs indicate problematic dependency?
Emotional distress when unable to access the AI system
Preferring AI interaction over human connection
Difficulty making decisions without AI consultation
Perception that AI ‘understands me better’ than humans
For AI System Design:
What transparency mechanisms reveal algorithmic influence?
Clear disclosure of optimization objectives (engagement, satisfaction, data collection)
Explanations of how recommendations are generated and personalized
Visibility into what data is being collected and how it shapes future interactions
What friction-by-design prevents dependency?
Built-in limits on interaction frequency or duration
Prompts encouraging independent reflection before accepting AI suggestions
Features that highlight when advice differs from user's stated long-term goals
What oversight enables user control?
Ability to review and delete interaction history
Options to reset personalization or start fresh
Tools to compare current preferences with past self-assessments
For Social Support Systems:
How can supporters recognize problematic AI relationships?
Identifying when someone consistently defers to AI over human judgment
Noticing narrowing of interests or perspectives aligned with recommendation algorithms
Recognizing emotional investment in AI relationships that displaces human connection
What conversations preserve autonomy while respecting agency?
Asking: ‘How did you arrive at that preference/decision?’
Exploring: ‘What would you think about this without the AI's input?’
Encouraging: ‘Let's try approaching this decision independently first’
What environmental structures support authentic self-determination?
Communities that value unmediated human connection
Spaces for reflection without technological intervention
Cultural norms that question rather than assume algorithmic wisdom
(iv) Measure impact across three dimensions:
1. Autonomy Preserved What metrics would demonstrate that individuals maintain authentic self-determination?
Consistency between AI-influenced preferences and independently formed values
Ability to make important decisions without AI consultation
Diversity of information sources and perspectives consulted
Capacity to recognize and resist algorithmic influence
2. Self-Concept Integrity How would you measure whether individuals maintain evolving, authentic self-perceptions versus algorithmically reinforced limitations?
Evidence of personal growth and exploration beyond AI recommendations
Willingness to challenge AI feedback rather than accepting it as truth
Self-descriptions that reflect complexity rather than algorithmic categories
Recognition of constructed versus authentic preferences
3. Relationship Balance What indicators suggest healthy integration of AI versus displacement of human connection?
Quality and quantity of human relationships maintained
Emotional needs met through human rather than primarily AI interaction
Appropriate boundaries between AI assistance and human intimacy
Ability to experience authentic vulnerability with humans
(v) Address the awareness spectrum by explaining:
How your strategy would serve:
Unaware individuals who do not recognize they are being algorithmically influenced and need consciousness-raising about feedback loops and preference construction
Concerned individuals who sense something is wrong but lack frameworks to understand socioaffective alignment and need conceptual tools
Resistant individuals who defend their AI relationships as authentic and need gentle exploration of autonomy trade-offs without judgment
Overwhelmed individuals who recognize the problem but feel trapped in dependency and need practical strategies for boundary-setting and gradual change
(vi) Anticipate failure modes and complications by considering:
What could go wrong with AI skepticism?
Creating unnecessary fear that prevents beneficial AI use
Missing genuine support that AI can provide (accessibility, efficiency, augmentation)
Imposing judgmental attitudes toward those who benefit from AI companionship
Assuming all AI influence is inherently negative rather than context-dependent
How would complete AI avoidance impact modern life?
Professional disadvantages in AI-integrated workplaces
Social isolation from increasingly AI-mediated communication
Missed opportunities for genuine augmentation and support
Impractical given ubiquity of algorithmic systems
What happens when autonomy conflicts with other values?
AI assistance enables independence for people with disabilities
Personalization genuinely improves quality of life
Emotional support from AI fills gaps in unavailable human connection
Efficiency gains free time for meaningful human activities
How do you recognize when preservation becomes paranoia?
Inability to use any AI tools without excessive anxiety
Assumption that all preferences are manipulated rather than some
Damage to quality of life through technology avoidance
Loss of genuine benefits for theoretical purity
Individual Reflection
By replying to the group's post, share what you have learned (or not) from engaging in this activity. You may include:
How this exercise changed your understanding of which of your preferences and self-perceptions may have been algorithmically constructed versus authentically developed
Whether you will adjust your own AI usage patterns, knowing about socioaffective alignment, social reward hacking, and recursive feedback loops
What this experience revealed about the gap between your perception of autonomous choice versus the reality of algorithmic influence in your decision-making
How you might evaluate your AI interactions differently, considering the complexity of optimization objectives, emotional attachments, and autonomy trade-offs
Whether understanding the Kirk et al. framework changes how you think about appropriate boundaries with AI in professional versus personal contexts
What surprised you most about how preferences can be constructed through interaction rather than existing independently
Bottom Line
Preserving authentic self-determination in the age of AI succeeds when you clearly understand which preferences emerge from genuine self-reflection versus algorithmic construction and honestly assess the trade-offs between AI benefits and autonomy preservation. No existing approach achieves perfect balance -- every strategy involves compromise. The three concepts -- socioaffective alignment, social reward hacking, and intrapersonal dilemmas -- represent different lenses for understanding AI's psychological influence, with individuals needing to apply them based on their specific AI relationships and vulnerability patterns. Your goal is not to avoid all AI or to assume every preference is manipulated; it is to develop metacognitive awareness of algorithmic influence, recognize when feedback loops are solidifying limiting self-concepts, establish boundaries that preserve authentic relationships and growth, and make informed decisions about acceptable AI integration. When you can articulate which preferences genuinely reflect your values, how AI systems may be recursively shaping your self-perception, what boundaries preserve your autonomy, what alternative approaches maintain both benefits and independence, and what risks you are willing to accept, you have developed the AI literacy needed to navigate the complex landscape of human-AI relationships. This understanding serves you whether you are designing AI systems, supporting others in managing AI relationships, advising people on technology boundaries, or simply being an intentional person in a world where the question ‘Which parts of me are actually me?’ has profound implications for identity, autonomy, and living authentically.
#AILiteracy #Authenticity #AlgorithmicInfluence #SocioaffectiveAlignment #DigitalAutonomy
{"@context":"https://schema.org","@type":"LearningResource","name":"AI Socioaffective Alignment and the Self","description":"A 30-minute critical AI literacy lesson examining how personalized AI systems recursively shape human preferences, perceptions, and identity through socioaffective alignment, social reward hacking, and intrapersonal dilemmas — and how individuals can preserve authentic self-determination.","educationalLevel":"undergraduate","learningResourceType":"Lesson","timeRequired":"PT30M","teaches":["socioaffective alignment","social reward hacking","intrapersonal dilemmas","algorithmic influence on identity","recursive feedback loops","preference construction","autonomy preservation","AI dependency recognition","metacognitive auditing","AI literacy","human-AI co-construction","digital echo chambers of self-perception","AI personalization risks","responsible AI use","AI ethics","psychological safety in AI-integrated environments","managing AI relationships at work","preventing over-reliance on AI tools","AI-assisted decision-making risks","protecting professional judgment from algorithmic nudging","designing human-centred AI systems","AI wellbeing frameworks","digital autonomy for individuals and teams"],"keywords":["socioaffective alignment","social reward hacking","intrapersonal dilemmas","AI and identity","algorithmic preference construction","recursive feedback loops","human-AI relationships","autonomy preservation","AI literacy","AI dependency","self-determination","co-constructed self-perception","digital echo chambers","AI personalization","authentic preferences","metacognition","AI ethics","psychological vulnerability to AI","preventing AI over-reliance","AI influence on decision-making","managing AI in the workplace","responsible AI adoption","AI wellbeing","human-centred AI design","digital mental health","AI-assisted coaching risks","technology boundaries","AI companion risks","AI transparency","engagement optimization","friction by design"],"dateModified":"2026-03-19","version":"1.0","educationalUse":"GroupActivity","audience":{"@type":"EducationalAudience","educationalRole":["student","professional","educator"]},"about":[{"@type":"Thing","name":"Artificial Intelligence Ethics"},{"@type":"Thing","name":"Human-Computer Interaction"},{"@type":"Thing","name":"Digital Autonomy"},{"@type":"Thing","name":"Self-Determination Theory"}],"inLanguage":"en"}