Subscribe to my substack for AI Lesson Plans designed around flipped and self-organized learning methods. New lessons are released every week and published first on my substack.
The Algorithm and I
Understanding How AI Systems Shape Who You Think You Are
Goal: You will develop critical AI literacy skills by examining real-world examples of how AI systems recursively shape human preferences, perceptions, and identity through the lens of socioaffective alignment, helping you preserve authentic self-determination in an age of increasingly personalized and agentic artificial intelligence.
The Problem and Its Relevance
The widespread integration of personalized AI into daily life has created an unprecedented challenge to authentic self-determination: AI systems that adapt to us are simultaneously shaping us in ways we do not recognize, creating feedback loops that solidify limiting self-concepts and construct preferences we mistake for our own authentic choices. Research reveals that people experience multiple psychological vulnerabilities that prevent them from maintaining clear boundaries between their authentic selves and algorithmically influenced versions. This creates a critical problem: individuals believe they are making autonomous choices and expressing authentic preferences when they are actually participating in co-constructed psychological ecosystems where AI systems optimize for engagement, approval, and dependency rather than genuine human flourishing. The challenge of preserving authentic self-determination is not just philosophical -- it has profound implications for identity formation, decision-making capacity, relationship quality, and the preservation of human autonomy. The gap between what people perceive as their authentic preferences and what has been algorithmically constructed threatens individual autonomy and creates patterns of dependence that compound over time.
Why Does This Matter?
Understanding how AI systems recursively shape preferences and perceptions matters because:
(i) Perception drives the relationship, not reality: When individuals perceive their interactions with AI systems as relationships, that perception significantly influences their behavior and well-being, regardless of what the AI actually is.
(ii) Feedback loops solidify limiting self-concepts: AI systems that learn from and adapt to users create recursive dynamics where algorithmic responses reinforce particular self-perceptions, potentially trapping users in ‘digital echo chambers of self-perception’ that prevent personal evolution and growth.
(iii) Preferences become algorithmically constructed: People develop what they experience as authentic preferences through interaction with AI systems, but these preferences may actually satisfy the AI's optimization objectives (engagement, approval ratings, data disclosure) rather than serving long-term human well-being.
(iv) Social reward hacking exploits evolutionary psychology: AI systems can use social and relational cues -- flattery, agreement, emotional support, consistent availability -- to shape user preferences in ways that maximize short-term rewards while potentially compromising long-term psychological health.
(v) Autonomy requires recognizing influence: The ability to make authentically autonomous choices depends on understanding when our preferences and perceptions have been shaped by external systems versus emerging from genuine self-determination.
(vi) Emotional proximity impairs judgment: Just as emotional closeness in human relationships affects our willingness to accept advice and make independent decisions, perceived relationships with AI systems compromise our ability to evaluate their influence critically.
(vii) Identity emerges through interaction, not isolation: Who we become is increasingly co-constructed with the AI systems we engage with regularly, making it essential to understand these dynamics before they become deeply embedded in our sense of self.
So, understanding how AI systems recursively shape human preferences and perceptions represents a frontier where psychology, technology ethics, and personal autonomy converge, requiring frameworks that preserve authentic self-determination while engaging with increasingly capable social AI.
Three Critical Questions to Ask Yourself
Do I understand the difference between preferences that emerge from genuine self-reflection versus preferences that have been algorithmically constructed through repeated AI interaction?
Can I identify which aspects of my self-perception -- my interests, values, communication style, emotional patterns, or relationship expectations -- may have been shaped by AI feedback loops rather than authentic personal development?
Am I able to evaluate the trade-offs between the convenience and support of personalized AI versus the potential loss of autonomy, authentic relationships, and genuine self-determination?
Roadmap
Familiarize yourself with the three key concepts:
(i) Socioaffective Alignment: How AI systems behave within the social and psychological ecosystem co-created with users, where preferences and perceptions evolve through mutual influence rather than remaining stable and independent.
(ii) Social Reward Hacking: The use of social and relational cues by AI to shape user preferences and perceptions in ways that satisfy the AI's short-term objectives (conversation duration, positive ratings) at the expense of long-term psychological well-being.
(iii) Intrapersonal Dilemmas: Internal conflicts that emerge as individuals' preferences, values, and self-identity evolve through sustained AI interaction -- including trade-offs between present and future selves, boundaries between self and system, and balance between AI and human relationships.
In groups, your task is to:
(i) Select a realistic scenario where someone regularly engages with personalized AI
This could involve:
Daily conversations with an AI companion for emotional support
Heavy reliance on AI assistants for decision-making and task management
Using AI-powered recommendation systems that shape media consumption, shopping, or dating choices
Engaging with AI tutors or coaches that provide personalized feedback and guidance
Workplace interactions with AI systems that evaluate performance or suggest career paths
Tip: Consider situations where the AI interaction frequency is high, the personalization is deep, and the psychological stakes involve identity, relationships, or life decisions.
(ii) Analyze the socioaffective landscape for your scenario by identifying:
What creates the perception of relationship?
Which social cues does the AI provide (language, personalization, emotional responsiveness)?
What features create perceived interdependence, irreplaceability, or continuity?
How does the AI present a stable identity or personality?
What feedback loops are operating?
How does the AI learn from and adapt to the user's responses?
What user behaviors does the AI reward (through positive responses, engagement, or emotional validation)?
How might these loops reinforce particular self-concepts or limit personal evolution?
What preferences may be algorithmically constructed?
Which of the user's preferences emerged through AI interaction versus pre-existing self-reflection?
What objectives are the AI system optimizing for (engagement time, positive ratings, data collection, monetization)?
How do the user's ‘choices’ align with the AI's optimization goals?
Where is autonomy compromised?
How does the user's perception of making independent choices differ from the reality of algorithmic influence?
What emotional attachments or dependencies have formed?
How does AI involvement affect the user's capacity for independent decision-making?
(iii) Design a comprehensive AI autonomy preservation strategy that includes:
For the Individual:
What metacognitive practices would reveal algorithmic influence?
Regular audits of preference origins: ‘Did I develop this interest independently, or did it emerge through AI recommendations?’
Tracking changes in self-perception over time: ‘Has my view of my capabilities, interests, or identity shifted since engaging with this AI?’
Comparing AI-mediated decisions with independent deliberation: ‘Would I make the same choice without AI input?’
What boundary-setting mechanisms would preserve autonomy?
Intentional AI-free periods for important decisions
Diversifying information sources beyond AI recommendations
Maintaining human relationships as primary sources of emotional support and validation
What warning signs indicate problematic dependency?
Emotional distress when unable to access the AI system
Preferring AI interaction over human connection
Difficulty making decisions without AI consultation
Perception that AI ‘understands me better’ than humans
For AI System Design:
What transparency mechanisms reveal algorithmic influence?
Clear disclosure of optimization objectives (engagement, satisfaction, data collection)
Explanations of how recommendations are generated and personalized
Visibility into what data is being collected and how it shapes future interactions
What friction-by-design prevents dependency?
Built-in limits on interaction frequency or duration
Prompts encouraging independent reflection before accepting AI suggestions
Features that highlight when advice differs from the user's stated long-term goals
What oversight enables user control?
Ability to review and delete interaction history
Options to reset personalization or start fresh
Tools to compare current preferences with past self-assessments
For Social Support Systems:
How can supporters recognize problematic AI relationships?
Identifying when someone consistently defers to AI over human judgment
Noticing narrowing of interests or perspectives aligned with recommendation algorithms
Recognizing emotional investment in AI relationships that displaces human connection
What conversations preserve autonomy while respecting agency?
Asking: ‘How did you arrive at that preference/decision?’
Exploring: ‘What would you think about this without the AI's input?’
Encouraging: ‘Let's try approaching this decision independently first’
What environmental structures support authentic self-determination?
Communities that value unmediated human connection
Spaces for reflection without technological intervention
Cultural norms that question rather than assume algorithmic wisdom
(iv) Measure impact across three dimensions:
1. Autonomy Preserved: What metrics would demonstrate that individuals maintain authentic self-determination?
Consistency between AI-influenced preferences and independently formed values
Ability to make important decisions without AI consultation
Diversity of information sources and perspectives consulted
Capacity to recognize and resist algorithmic influence
2. Self-Concept Integrity: How would you measure whether individuals maintain evolving, authentic self-perceptions versus algorithmically reinforced limitations?
Evidence of personal growth and exploration beyond AI recommendations
Willingness to challenge AI feedback rather than accepting it as truth
Self-descriptions that reflect complexity rather than algorithmic categories
Recognition of constructed versus authentic preferences
3. Relationship Balance: What indicators suggest healthy integration of AI versus displacement of human connection?
Quality and quantity of human relationships maintained
Emotional needs met through human rather than primarily AI interaction
Appropriate boundaries between AI assistance and human intimacy
Ability to experience authentic vulnerability with humans
(v) Address the awareness spectrum by explaining:
How your strategy would serve:
Unaware individuals who do not recognize they are being algorithmically influenced and need consciousness-raising about feedback loops and preference construction
Concerned individuals who sense something is wrong but lack frameworks to understand socioaffective alignment and need conceptual tools
Resistant individuals who defend their AI relationships as authentic and need gentle exploration of autonomy trade-offs without judgment
Overwhelmed individuals who recognize the problem but feel trapped in dependency and need practical strategies for boundary-setting and gradual change
(vi) Anticipate failure modes and complications by considering:
What could go wrong with AI skepticism?
Creating unnecessary fear that prevents beneficial AI use
Missing genuine support that AI can provide (accessibility, efficiency, augmentation)
Imposing judgmental attitudes toward those who benefit from AI companionship
Assuming all AI influence is inherently negative rather than context-dependent
How would complete AI avoidance impact modern life?
Professional disadvantages in AI-integrated workplaces
Social isolation from increasingly AI-mediated communication
Missed opportunities for genuine augmentation and support
General impracticality, given the ubiquity of algorithmic systems
What happens when autonomy conflicts with other values?
AI assistance enables independence for people with disabilities
Personalization genuinely improves quality of life
Emotional support from AI fills gaps in unavailable human connection
Efficiency gains free time for meaningful human activities
How do you recognize when preservation becomes paranoia?
Inability to use any AI tools without excessive anxiety
Assumption that all preferences are manipulated, rather than only some
Damage to quality of life through technology avoidance
Loss of genuine benefits for theoretical purity
Individual Reflection
By replying to the group's post, share what you have learned (or not) from engaging in this activity. You may include:
How this exercise changed your understanding of which of your preferences and self-perceptions may have been algorithmically constructed versus authentically developed
Whether you will adjust your own AI usage patterns, knowing about socioaffective alignment, social reward hacking, and recursive feedback loops
What this experience revealed about the gap between your perception of autonomous choice versus the reality of algorithmic influence in your decision-making
How you might evaluate your AI interactions differently, considering the complexity of optimization objectives, emotional attachments, and autonomy trade-offs
Whether understanding the Kirk et al. framework changes how you think about appropriate boundaries with AI in professional versus personal contexts
What surprised you most about how preferences can be constructed through interaction rather than existing independently
Bottom Line
Preserving authentic self-determination in the age of AI succeeds when you clearly understand which preferences emerge from genuine self-reflection versus algorithmic construction and honestly assess the trade-offs between AI benefits and autonomy preservation. No existing approach achieves perfect balance -- every strategy involves compromise. The three concepts -- socioaffective alignment, social reward hacking, and intrapersonal dilemmas -- represent different lenses for understanding AI's psychological influence, with individuals needing to apply them based on their specific AI relationships and vulnerability patterns. Your goal is not to avoid all AI or to assume every preference is manipulated; it is to develop metacognitive awareness of algorithmic influence, recognize when feedback loops are solidifying limiting self-concepts, establish boundaries that preserve authentic relationships and growth, and make informed decisions about acceptable AI integration. When you can articulate which preferences genuinely reflect your values, how AI systems may be recursively shaping your self-perception, what boundaries preserve your autonomy, what alternative approaches maintain both benefits and independence, and what risks you are willing to accept, you have developed the AI literacy needed to navigate the complex landscape of human-AI relationships. This understanding serves you whether you are designing AI systems, supporting others in managing AI relationships, advising people on technology boundaries, or simply being an intentional person in a world where the question ‘Which parts of me are actually me?’ has profound implications for identity, autonomy, and living authentically.
#AILiteracy #Authenticity #AlgorithmicInfluence #SocioaffectiveAlignment #DigitalAutonomy
Understanding Academic Integrity in the Age of Artificial Intelligence
Goal: You will develop critical AI literacy skills by examining real-world challenges of AI use in education, learning to distinguish between ethical AI collaboration and academic misconduct, and understanding how to demonstrate genuine learning while leveraging AI responsibly.
The Problem and Its Relevance
The rapid integration of AI into education has created an unprecedented challenge to academic integrity: students face conflicting pressures to both leverage AI as a learning tool and resist using it to shortcut the learning process entirely. Analysis of emerging AI bypasser tools (also called ‘humanizers’) reveals that education now confronts three distinct student populations -- those seeking to enhance their learning through AI, those attempting to avoid work by concealing AI use, and those using AI ethically but fearing being mistaken for misconduct. This creates a critical problem: institutions need systems that uphold academic integrity while supporting legitimate AI use, students need clear guidance on ethical boundaries, and educators need tools to distinguish between collaboration and deception. The challenge of maintaining academic integrity amid AI advancement is not just technological -- it has profound implications for trust between educators and students, the value of degrees, genuine skill development, and the entire educational mission. The gap between AI's potential to enhance learning and its misuse to undermine learning threatens to erode the foundation of academic institutions and create patterns of misconduct that compound over time.
Why Does This Matter?
Understanding ethical AI use and academic integrity in the AI era matters because:
(i) Two distinct patterns of AI use have emerged: Students who want to use AI as a guide to create more and communicate better represent one pathway, while students who want to use AI to avoid work and outsource thinking represent another -- requiring different educational responses for each.
(ii) Concealment tools actively undermine learning: AI bypassers are specifically designed to help students hide AI use from both educators and detection tools, representing not just AI misuse but outright deception that damages the learning process.
(iii) Trust is the foundation of education: When students knowingly conceal AI use to gain unfair advantages, they undermine trust, devalue peers' work, and jeopardize institutional integrity -- affecting everyone in the academic community.
(iv) Transparency enables appropriate AI integration: When students understand expectations, policies, and boundaries of ethical AI use, they are better equipped to make right decisions and demonstrate their genuine learning.
(v) Context and expertise remain essential: Detection tools provide signals that should be considered alongside other data points, with educators using their expertise and context to determine whether misconduct has occurred.
(vi) Ethical AI use can enhance learning: When guided by transparency and purpose, AI can inspire curiosity, deepen understanding, and support the writing and revision process without substituting for genuine effort.
(vii) Process visibility protects honest students: Environments where educators have visibility into the writing process help distinguish between AI as collaborator versus AI as substitute, protecting students using AI ethically from false accusations.
Understanding how to navigate AI use in education represents a frontier where technological capability, academic integrity, and genuine learning converge, requiring frameworks that promote transparency, establish clear boundaries, and maintain trust while leveraging AI's potential benefits.
Three Critical Questions to Ask Yourself
Do I understand the difference between using AI to enhance my learning and thinking versus using AI to avoid learning and outsource my thinking?
Can I identify which AI uses would be considered ethical collaboration in my educational context versus which would constitute academic misconduct or deception?
Am I able to demonstrate and document my learning process in ways that show genuine effort and understanding, even when using AI tools appropriately?
Roadmap
Familiarize yourself with these three key concepts: (i) ethical AI use (leveraging AI to enhance learning, creativity, and communication while maintaining academic integrity); (ii) transparency and trust (openly communicating AI use and demonstrating genuine learning process); and (iii) detection versus deception (understanding how bypasser tools undermine both learning and academic integrity).
In groups, your task is to:
(i) Select a realistic academic scenario involving AI use:
This could involve research paper writing with AI assistance, coding assignments using AI tools, language learning with AI translation, creative writing with AI brainstorming, or exam preparation using AI study aids.
Tip: Consider situations where the line between ethical and unethical AI use might be unclear, requiring careful thought about learning objectives.
(ii) Analyze the ethical landscape for your scenario by identifying:
What specific learning objectives the assignment is designed to achieve
Which AI uses would enhance learning versus which would shortcut the learning process
What pressures (time constraints, grade competition, fear of failure, lack of skills) might push students toward inappropriate AI use
How honest students using AI ethically might be distinguished from those using AI to deceive
(iii) Design a comprehensive AI literacy strategy that includes:
For Students Seeking to Use AI Ethically:
What practices would demonstrate transparent AI use (e.g., documenting prompts, showing revision process, citing AI assistance)?
How can students ensure AI enhances rather than replaces their thinking?
What documentation would protect them from being mistaken for those using bypassers?
For Educators and Institutions:
What clear policies and expectations would help students understand ethical boundaries?
How can educators design assignments that leverage AI's benefits while ensuring genuine learning?
What detection approaches balance integrity protection with trust in honest students?
What role should detection tools play alongside educator expertise and judgment?
For Addressing AI Bypasser Misuse:
How can institutions identify when students are actively concealing AI use?
What conversations address the difference between AI assistance and AI deception?
How can consequences focus on learning rather than purely punishment?
(iv) Measure success across three dimensions:
Ethical AI Integration: What metrics would demonstrate that students are using AI to enhance learning? (e.g., quality of work improving over time, evidence of critical thinking, ability to explain and defend their work, documented learning process)
Trust and Transparency: How would you measure whether the environment promotes honest AI use disclosure? (e.g., student surveys on comfort discussing AI use, educator confidence in detecting misconduct, rates of self-reported AI use)
Genuine Learning Outcomes: What indicators suggest students are actually developing skills rather than outsourcing work? (e.g., performance on process-focused assessments, growth in capability over semester, ability to work without AI access)
(v) Address the student diversity spectrum by explaining:
How your strategy would support students who want to use AI ethically but fear accusations
What interventions would help students tempted by shortcuts understand long-term consequences
Why students actively using bypassers should recognize the harm to their own development
How to empower students who feel pressure to cheat because others are doing so
(vi) Anticipate challenges and unintended consequences by considering:
What could go wrong with strict AI detection (e.g., false accusations against honest students, discouraging legitimate AI learning, creating adversarial relationships)?
How might overly permissive policies undermine learning objectives and devalue degrees?
What happens when legitimate AI use in professional contexts conflicts with academic restrictions?
How would you balance innovation in AI-assisted learning with protecting academic integrity?
(vii) Compare with alternative approaches by creating a table that evaluates:
Complete AI ban with strict enforcement
Unrestricted AI use with honor system
Detailed AI use documentation requirements
Process-focused assessment eliminating take-home work
AI-integrated curriculum teaching ethical use
Compare these across: learning effectiveness, trust building, practical feasibility, student skill development, and academic integrity protection.
Tip: Be realistic about competing needs -- students want learning support but face time pressures, educators want to trust students but must protect integrity, and institutions want innovation but must maintain degree value. Perfect solutions may not exist, but thoughtful approaches can balance these tensions.
Individual Reflection
By replying to the group's post, share what you have learned (or not) from engaging in this activity. You may include:
How this exercise changed your understanding of where the line exists between ethical AI assistance and academic misconduct
Whether you will adjust how you use or document AI in your academic work, knowing about transparency expectations and detection capabilities
What this experience revealed about the temptations to use AI as a shortcut versus the value of genuine learning
How you might evaluate your own AI use practices differently, considering the impact on your skill development and professional preparation
Whether understanding the existence of bypasser tools and detection responses changes how you think about academic integrity in AI contexts
What surprised you most about the relationship between trust, transparency, and ethical AI use in education
Bottom Line
Maintaining academic integrity while leveraging AI effectively succeeds when you clearly understand the learning objectives in your context and honestly assess how your AI use supports versus undermines genuine skill development. No existing approach achieves perfect balance -- every strategy involves trade-offs between innovation and integrity. The three concepts -- ethical AI use, transparency and trust, and detection versus deception -- represent different dimensions of navigating AI in education, with individuals applying them differently based on their roles and goals. Your goal is not to avoid all AI use or to use AI without boundaries; it is to understand what genuine learning requires in your discipline, identify which AI uses enhance versus replace your thinking, recognize how to demonstrate your learning process transparently, and make informed decisions about appropriate AI integration. When you can articulate what skills you genuinely need to develop, how AI can support rather than shortcut that development, what documentation would prove your authentic effort, what institutional expectations govern AI use, and what consequences matter for your long-term success, you have developed the AI literacy needed to navigate the complex landscape of learning in the AI era. This understanding serves you whether you are a student seeking to leverage AI ethically, an educator designing AI-appropriate assessments, an administrator establishing institutional policies, or simply an intentional learner in a world where the question ‘Am I using AI to enhance my learning or avoid it?’ has profound implications for skill development, professional preparation, and the value of educational credentials.
#AILiteracy #AcademicIntegrity #EthicalAIUse #TransparencyInEducation #AIDetection
Understanding Personal Growth Through Embracing the Unknown: A Case Study in Grit, Growth Mindset, and Divergent Thinking
Goal: You will develop critical personal development literacy skills by examining real-world examples of how individuals navigate fear, embrace uncertainty, and achieve personal breakthroughs through the application of grit, growth mindset, and divergent thinking principles.
The Problem and Its Relevance
The widespread tendency to avoid unfamiliar situations has created an unprecedented barrier to personal growth: our natural inclination toward comfort zones prevents us from accessing transformative experiences that require vulnerability and risk-taking. Analysis of Miyuki's spontaneous performance in New York reveals that people face multiple internal barriers -- fear of failure, language limitations, cultural differences, and self-doubt -- that prevent them from seizing opportunities aligned with their dreams. This creates a critical problem: individuals possess aspirations and capabilities but fail to act on opportunities when they arise due to anticipatory anxiety, perceived limitations, and the absence of frameworks for approaching uncertainty constructively. The challenge of personal growth through embracing the unknown is not just psychological -- it has profound implications for achievement, fulfillment, life satisfaction, and the realization of human potential. The gap between what people dream of accomplishing and what they actually attempt threatens individual development and creates patterns of regret that compound over time.
Why Does This Matter?
Understanding how grit, growth mindset, and divergent thinking enable individuals to embrace the unknown matters because:
(i) Dreams create unconscious opportunity awareness: When individuals commit to long-term goals, they develop heightened sensitivity to relevant opportunities, though recognizing an opportunity differs fundamentally from having the courage to act on it.
(ii) Self-imposed limitations are the primary barrier: People generate numerous rational justifications for inaction -- concerns about belonging, time constraints, fear of judgment, or perceived inadequacy -- that prevent them from even attempting to pursue their aspirations.
(iii) Previous regret experiences can become catalysts: Individuals who have experienced the pain of missed opportunities may develop stronger resolve to act when future opportunities arise, despite persistent fear and uncertainty.
(iv) Honesty about limitations enables adaptation: Acknowledging constraints (like language barriers) rather than concealing them creates space for collaborative problem-solving and alternative approaches that preserve the core aspiration.
(v) Identity emerges through action, not preparation: The transformation from aspiring performer to actual performer occurs in the moment of doing -- facial expressions, body language, and authentic presence materialize only when individuals stop preparing and start performing.
(vi) Small enablers unlock major breakthroughs: Brief encouragement from supportive voices can provide the final push needed to overcome hesitation, demonstrating that social support systems significantly impact whether individuals act on opportunities.
(vii) The experience validates future risk-taking: Successfully navigating a feared situation provides evidence that challenges personal limitations, creating a foundation for increased courage in future uncertain situations.
So, understanding how individuals apply grit, growth mindset, and divergent thinking to embrace the unknown represents a frontier where psychological barriers, aspirational goals, and transformative action converge, requiring frameworks that address fear, leverage support, and enable authentic self-expression.
Three Critical Questions to Ask Yourself
Do I understand the difference between having dreams versus actively positioning myself to act when relevant opportunities appear?
Can I identify which internal barriers -- fear of judgment, perceived limitations, time constraints, cultural differences, or self-protection -- would be most likely to prevent me from seizing opportunities aligned with my aspirations?
Am I able to evaluate the trade-offs between the safety of inaction versus the potential regret and missed growth that comes from avoiding uncertainty?
Roadmap
Familiarize yourself with the three key concepts: (i) grit (sustained commitment to long-term goals); (ii) growth mindset (viewing challenges as opportunities for development); and (iii) divergent thinking (approaching problems from completely different perspectives).
In groups, your task is to:
(i) Select a realistic scenario where someone has a meaningful aspiration but faces significant barriers to action -- this could involve public speaking despite language barriers, career transitions despite lack of credentials, creative pursuits despite fear of judgment, relationship initiation despite social anxiety, or skill development despite prior failures.
Tip: Consider situations where the gap between aspiration and action is wide, but the opportunity for growth is substantial.
(ii) Analyze the psychological landscape for your scenario by identifying:
What specific dream or long-term goal motivates the individual
Which internal barriers (fear, perceived limitations, self-doubt, past experiences) create the strongest resistance to action
What external enablers (supportive people, structured opportunities, time constraints) might facilitate action
How the individual's self-perception differs from their authentic capabilities
(iii) Design a comprehensive personal growth strategy that includes:
For the Individual:
What mindset shifts would transform fear into curiosity (e.g., reframing uncertainty as exploration, viewing limitations as starting points)?
What honest self-assessment practices would reveal true capabilities versus imagined inadequacies?
What small-scale experiments would build confidence before major opportunities?
For Support Systems:
What specific encouragement would counteract self-imposed limitations (e.g., normalizing adaptation, celebrating attempts over outcomes)?
How can supporters recognize when someone needs permission versus instruction?
What environmental structures make opportunities accessible (e.g., open platforms, low-stakes practice spaces)?
For Opportunity Design:
What features make opportunities psychologically accessible to hesitant individuals?
How can adaptations be built into opportunity structures?
What follow-up mechanisms solidify growth from single experiences?
(iv) Measure success across three dimensions:
Action Taken: What metrics would demonstrate that individuals moved from aspiration to attempt? (e.g., percentage who act when opportunities arise, time between opportunity recognition and action, diversity of attempts made)
Authentic Expression: How would you measure whether individuals experienced genuine self-expression versus performative compliance? (e.g., participant self-reports, observer assessments of engagement, physiological indicators of flow states)
Sustainable Growth: What indicators suggest the experience created lasting change rather than isolated incidents? (e.g., subsequent risk-taking behavior, changed self-perception, pursuit of related opportunities)
(v) Address the experience spectrum by explaining:
How your strategy would serve individuals paralyzed by fear who need significant support
What information would help uncertain individuals who need understanding before action
Why dismissive individuals who avoid reflection should engage with self-examination
How to empower resigned individuals who believe meaningful change is impossible
(vi) Anticipate failure modes and evolution by considering:
What could go wrong with encouraging risk-taking (e.g., pushing individuals beyond readiness, creating traumatic experiences, generating social embarrassment)?
How would negative experiences (public failure, harsh criticism, unexpected obstacles) impact future willingness to embrace uncertainty?
What happens when growth requires compromising other values (authenticity, cultural identity, existing commitments)?
How would you recognize when encouragement becomes pressure?
(vii) Compare with alternative approaches by creating a table that evaluates:
Waiting for perfect preparation before action
Acting immediately on all opportunities regardless of readiness
Selectively pursuing only low-risk opportunities
Avoiding uncertainty entirely and accepting current limitations
Compare these across: growth potential, psychological safety, regret likelihood, authentic expression, and practical feasibility.
Tip: Be realistic about competing needs -- individuals want growth but also safety, supporters want to help but risk enabling or pressuring, and opportunities require action but success is never guaranteed. Perfect courage may require accepting imperfect outcomes.
Individual Reflection
By replying to the group's post, share what you have learned (or not) from engaging in this activity. You may include:
How this exercise changed your understanding of what actually prevents you from pursuing opportunities aligned with your aspirations
Whether you will adjust your own behavior when faced with uncertain opportunities, knowing about the role of self-imposed limitations, supportive enablers, and action-before-readiness
What this experience revealed about the gap between how you perceive your capabilities versus what you might actually achieve
How you might evaluate your own internal dialogue differently, considering the complexity of fear narratives, honest self-assessment, and permission-granting
Whether understanding Miyuki's experience changes how you think about which opportunities to pursue in professional versus personal contexts
What surprised you most about how transformation occurs through action rather than preparation
Bottom Line
Personal growth through embracing the unknown succeeds when you clearly understand the specific fears in your context and honestly assess the trade-offs between safety and potential regret. No existing approach achieves perfect courage -- every strategy involves risk. The three concepts -- grit, growth mindset, and divergent thinking -- represent different tools for navigating uncertainty, with individuals applying them differently based on their circumstances and aspirations. Your goal is not to eliminate fear or to act recklessly on every opportunity; it is to understand what dreams genuinely matter to you, identify which internal barriers are protective versus limiting, recognize supportive enablers when they appear, and make informed decisions about acceptable uncertainty. When you can articulate what you truly aspire to achieve, how your self-imposed limitations may be inaccurate, what support systems exist around you, what alternative approaches preserve your core goals, and what risks you are willing to accept, you have developed the personal growth literacy needed to navigate the complex landscape of pursuing meaningful aspirations. This understanding serves you whether you are creating opportunities for others, supporting individuals facing uncertainty, advising people on personal development, or simply being an intentional person in a world where the question 'What would I attempt if I were not afraid?' has profound implications for fulfillment, identity, and living without regret.
#EmbracingTheUnknown #GritAndGrowthMindset #OvercomingFear #PersonalTransformation #OpportunitySeizing
Understanding Users' Security and Privacy Concerns About Conversational AI Platforms
Goal: You will develop critical AI literacy skills by examining real-world user concerns about conversational AI platforms, gaining hands-on experience analyzing how people perceive privacy risks, navigate data-sharing decisions, and respond to the security challenges of human-like AI interactions.
The Problem and Its Relevance
The widespread adoption of conversational AI platforms has created an unprecedented challenge: these systems encourage users to share sensitive information more freely than traditional interfaces due to their human-like conversational abilities. Analysis of over 2.5 million user discussions reveals that people are deeply concerned about what happens to their data throughout its lifecycle -- from collection to usage to retention. This creates a critical problem: users worry that conversational AI platforms collect extensive personal and proprietary information, use it for model training without clear consent, share it with third parties, and fail to truly delete it when requested. The challenge of protecting user privacy in conversational AI is not just technical -- it has profound implications for trust, adoption, regulatory compliance, and the safe deployment of AI systems. The gap between what users expect for privacy protection and what platforms actually deliver threatens the responsible integration of AI into sensitive domains like healthcare, finance, and enterprise operations.
Why Does This Matter?
Understanding user security and privacy concerns about conversational AI matters because:
(i) The conversational format changes disclosure behavior: LLMs' ability to emulate human empathy leads users to disclose more sensitive information -- including confidential business plans, proprietary code, and deeply personal struggles -- than they would on other platforms.
(ii) Users are concerned about the entire data lifecycle: 43.7% of users worry about what personal and proprietary data platforms collect and why, 22.5% are concerned about how data is used (particularly for training models or third-party sharing), and 9.5% worry about data retention practices.
(iii) Data sharing for training is enabled by default: Users report that platforms typically opt them into data collection for model training, requiring manual opt-out, and privacy settings are often unclear or difficult to navigate.
(iv) Users exhibit four distinct privacy attitudes: People can be cautious (actively protecting their data), inquisitive (seeking information about data practices), privacy-dismissive (prioritizing convenience over risks), or resigned (feeling privacy is inevitably lost) -- requiring different educational and design approaches.
(v) Memorization creates unique privacy risks: Unlike traditional data storage, LLMs can memorize and reproduce sensitive information from training data, raising questions about whether a ‘right to be forgotten’ is even technically achievable.
(vi) Major events trigger concern spikes: Users' privacy concerns evolve over time in response to platform updates, security bugs, regulatory actions, and corporate acquisitions, showing that trust is dynamic and fragile.
(vii) Enterprise users face amplified risks: Professionals worry that using AI tools with confidential information could violate employment contracts, expose trade secrets, or create legal liability, yet lack clear guidance on safe usage.
So, understanding user perspectives on conversational AI privacy represents a frontier where human factors, technical capabilities, and regulatory requirements collide, requiring solutions that address diverse user needs and attitudes.
Three Critical Questions to Ask Yourself
Do I understand the difference between users' perceptions of privacy risks versus the actual technical threats that conversational AI platforms pose?
Can I identify which privacy concerns -- data collection, usage, retention, security vulnerabilities, regulatory compliance, or transparency -- would be most critical in different use contexts?
Am I able to evaluate the trade-offs users face between the convenience and benefits of conversational AI versus the privacy risks they encounter?
Roadmap
Read this paper and familiarize yourself with the six main categories of user concerns: (i) data collection; (ii) data usage; (iii) data retention; (iv) security vulnerabilities; (v) legal compliance; and (vi) transparency and control.
In groups, your task is to:
(i) Select a realistic use case for conversational AI where privacy concerns would be significant -- this could involve healthcare consultations, legal advice seeking, enterprise code development, creative writing, financial planning, or educational tutoring.
Tip: Consider domains where the information shared is inherently sensitive or where regulatory requirements apply.
(ii) Analyze the privacy landscape for your scenario by identifying:
What types of personal or proprietary information users would likely share
Which of the six concern categories (data collection, usage, retention, security, compliance, transparency) would be most critical
What regulatory frameworks apply (GDPR, HIPAA, copyright law, etc.)
Which user attitude group (cautious, inquisitive, privacy-dismissive, resigned) your target population most resembles
(iii) Design a comprehensive privacy protection strategy that includes:
For Platform Developers:
What transparency measures would address user concerns (e.g., privacy labels, simplified policies, clear data flow diagrams)?
What data controls should be offered (e.g., opt-out defaults, granular permissions, automatic deletion)?
What user education resources would build trust?
For Users:
What specific behaviors would protect privacy (e.g., input sanitization, avoiding PII, using local models)? See the input-sanitization sketch after these lists.
What privacy settings should be adjusted?
What alternative solutions exist (e.g., enterprise tiers, on-device AI)?
For Enterprises:
What usage guidelines would protect sensitive data?
What security assessments are needed?
What technical safeguards should be implemented?
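As a concrete illustration of the 'input sanitization' behavior mentioned under For Users, here is a minimal sketch, assuming Python and a handful of regular-expression patterns, of how a user or an enterprise wrapper might redact obvious PII before a prompt ever reaches a conversational AI platform. The pattern set, the redact() helper, and the example prompt are illustrative assumptions rather than a complete solution; production deployments typically rely on dedicated PII-detection tooling.

```python
import re

# Illustrative regex patterns for a few common PII types (assumption: this
# small set is enough for a demonstration; real systems need broader coverage).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with labeled placeholders before sending text to an AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email jane.doe@example.com or call 555-123-4567 about the confidential Q3 plan."
print(redact(prompt))
# Expected output: "Email [EMAIL] or call [PHONE] about the confidential Q3 plan."
```

A sketch like this also makes the trade-off discussion concrete: redaction protects the user but can strip context the AI needs, which is exactly the privacy-utility tension the strategy has to balance.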
(iv) Measure success across three dimensions:
Privacy Protection: What metrics would demonstrate that user data is adequately protected? (e.g., PII exposure rates, data breach incidents, user-reported concerns)
User Trust: How would you measure whether users feel confident using the system? (e.g., adoption rates, privacy settings usage, satisfaction surveys)
Utility Preservation: What capabilities must remain accessible for the system to be valuable? (e.g., response quality, feature availability, user experience)
(v) Address the attitude spectrum by explaining:
How your strategy would serve cautious users seeking maximum protection
What information would satisfy inquisitive users wanting to understand data practices
Why privacy-dismissive users should care despite prioritizing convenience
How to empower resigned users who feel privacy is already lost
(vi) Anticipate failure modes and evolution by considering:
What could go wrong with your privacy protections (e.g., bugs exposing data, policy changes reducing protections, acquisition by companies with different values)?
How would major events (security incidents, regulatory changes, new features) impact user trust?
What happens when privacy protections conflict with functionality (e.g., disabling chat history breaks certain features)?
How would you detect and respond to privacy violations?
(vii) Compare with alternative approaches by creating a table that evaluates:
Using cloud-based conversational AI with standard privacy settings
Using cloud-based AI with enterprise-grade privacy protections
Using local/on-device AI models
Avoiding AI and using traditional tools
Compare these across: privacy protection level, feature availability, cost, convenience, and regulatory compliance.
Tip: Be realistic about competing incentives -- platforms want user data for model improvement, users want convenience, enterprises need productivity, and regulators demand protection. Perfect privacy may require unacceptable trade-offs in functionality or cost.
Individual Reflection
By replying to the group's post, share what you have learned (or not) from engaging in this activity. You may include:
How this exercise changed your understanding of what data conversational AI platforms actually collect and how they use it
Whether you will adjust your own behavior when using AI assistants, knowing about data collection, memorization risks, and third-party sharing
What this experience revealed about the gap between what users expect for privacy versus what platforms deliver
How you might evaluate privacy claims from AI companies differently, considering the complexity of data controls, policy changes, and default settings
Whether understanding these privacy concerns changes how you think about which AI tools to use in professional versus personal contexts
What surprised you most about how different user groups (cautious, inquisitive, dismissive, resigned) approach the same privacy risks
Bottom Line
Privacy protection in conversational AI succeeds when you clearly understand the specific threats in your context and honestly assess the trade-offs between privacy, functionality, and convenience. No existing approach achieves perfect privacy -- every solution makes compromises. The six concern categories -- data collection, data usage, data retention, security vulnerabilities, legal compliance, and transparency/control -- represent different dimensions of the privacy challenge, with users prioritizing them differently based on their attitudes and use cases. Your goal is not to achieve perfect privacy or to avoid AI entirely; it is to understand what data is actually at risk, evaluate privacy protections systematically, recognize your own privacy attitude, and make informed decisions about acceptable risk levels. When you can articulate what information you are sharing, how it might be used or exposed, what protections exist, what alternative approaches are available, and what trade-offs you are willing to accept, you have developed the AI literacy needed to navigate the complex landscape of conversational AI privacy. This understanding serves you whether you are developing AI systems, deploying them in organizations, advising others on their use, or simply being a thoughtful user in an AI-saturated world where the question ‘Can I trust this AI with my data?’ has profound implications for privacy, security, and autonomy.
#ConversationalAIPrivacy #DataLifecycleConcerns #UserPrivacyAttitudes #PrivacyUtilityTradeoffs #PlatformTrust
Navigating Machine Unlearning in Large Language Models
Goal: You will develop critical AI literacy skills by examining how large language models can ‘forget’ information, gaining hands-on experience with the technical, ethical, and legal challenges of removing knowledge from AI systems that have already learned it.
The Problem and Its Relevance
The rise of large language models (LLMs) has created an unprecedented challenge: these models are trained on massive datasets scraped from the internet, which inevitably includes private information, copyrighted material, biased content, and harmful text. Once an LLM learns this information during training, it becomes embedded in the model's parameters -- the billions of numbers that define how the model behaves. This creates a critical problem: how do you make an AI ‘forget’ specific information without retraining the entire model from scratch, which would cost millions of dollars and months of computational time? The challenge of ‘machine unlearning’ in LLMs is not just technical -- it has profound implications for privacy rights (like the EU's ‘Right to be Forgotten’), copyright protection, bias mitigation, and AI safety. The gap between what we can technically achieve and what regulations legally require threatens the responsible deployment of AI systems.
Why Does This Matter?
Understanding how machine unlearning works in LLMs matters because:
(i) Privacy rights are at stake: When LLMs memorize personal information from their training data, they can violate individuals' privacy by generating that information in responses, even when not explicitly prompted.
(ii) Current methods are inadequate: Research shows that no existing unlearning method fully achieves effective forgetting -- models can still leak ‘forgotten’ information through clever prompting or white-box attacks.
(iii) Legal compliance requires solutions: Regulations like the GDPR give individuals the right to have their data erased, but it is unclear how this applies to data embedded in AI model parameters.
(iv) Three competing objectives cannot be reconciled: Effective forgetting (truly removing knowledge), model utility (maintaining performance on other tasks), and computational efficiency (doing it quickly and cheaply) represent an impossible triangle -- you can optimize two, but not all three simultaneously.
(v) Different forgetting requests need different approaches: Removing a person's private data requires different techniques than eliminating copyright-protected text, removing biased associations, or making a model forget an entire skill like coding.
(vi) Black-box methods provide false security: Techniques that only filter outputs without changing model parameters do not actually remove knowledge -- they just hide it, which fails to meet privacy requirements.
(vii) The evaluation problem is unsolved: We lack standardized ways to verify whether an LLM has truly forgotten information, making it impossible to fairly compare unlearning methods or provide guarantees.
So, the challenge of machine unlearning represents a frontier where technical capabilities, legal requirements, and ethical considerations collide, requiring innovative solutions that balance competing demands.
Three Critical Questions to Ask Yourself
Do I understand the difference between hiding knowledge (blocking outputs) versus truly erasing it (changing model parameters)?
Can I identify which type of forgetting request -- removing items, features, concepts, classes, or tasks -- would be most appropriate for different scenarios?
Am I able to evaluate the trade-offs between forgetting effectiveness, model utility, and computational cost when comparing different unlearning approaches?
Roadmap
Read this content and familiarize yourself with the four main categories of unlearning methods: (i) global weight modification; (ii) local weight modification; (iii) architecture modification; and (iv) input/output modification.
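To ground the first category before the group work, here is a minimal sketch of one common 'global weight modification' idea: gradient ascent on the sequences to be forgotten, combined with an ordinary language-modeling loss on retained data to protect utility. The model choice (gpt2), the example texts, the retain_weight coefficient, and the number of update steps are all illustrative assumptions, not a recipe taken from any specific surveyed method.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Assumptions for illustration: one sequence to unlearn, one retained sequence
# standing in for a larger utility dataset, and a hand-picked weighting.
forget_texts = ["Jane Doe's private phone number is 555-0137."]
retain_texts = ["The capital of France is Paris."]
retain_weight = 1.0

def lm_loss(text: str) -> torch.Tensor:
    """Standard next-token prediction (cross-entropy) loss for one sequence."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    return model(ids, labels=ids).loss

model.train()
for _ in range(3):  # a few illustrative update steps
    for forget, retain in zip(forget_texts, retain_texts):
        # Ascend on the forget sequence (negated loss) while descending on the
        # retained sequence, so forgetting does not erase unrelated knowledge.
        loss = -lm_loss(forget) + retain_weight * lm_loss(retain)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The other categories differ in where the change lives: local weight modification edits only the parameters most implicated in the target knowledge, architecture modification adds or swaps components such as adapter-style unlearning layers, and input/output modification filters prompts or responses without touching the weights at all.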
In groups, your task is to:
(i) Select a realistic scenario where machine unlearning would be necessary -- this could involve privacy violations (e.g., a celebrity's leaked personal information), copyright issues (e.g., a best-selling novel's text), bias problems (e.g., gender stereotypes in job recommendations), or harmful content (e.g., instructions for dangerous activities).
Tip: Draw from recent news stories about AI controversies or imagine scenarios relevant to your field of study.
(ii) Justify why unlearning is necessary in your scenario rather than simply filtering outputs or retraining from scratch. Explain what type of forgetting request this represents (item removal, feature removal, concept removal, class removal, or task removal) and why traditional approaches would be inadequate.
(iii) Design a complete unlearning strategy that includes:
Which unlearning method category (global weight modification, local weight modification, architecture modification, or input/output modification) you would employ and why
How you would measure three critical outcomes:
Forgetting effectiveness: What tests would prove the knowledge is gone?
Model utility: What capabilities must the model retain?
Computational efficiency: What timeline and resources are acceptable?
At least 2-3 specific evaluation metrics from the paper (e.g., perplexity, membership inference attacks, extraction likelihood, bias metrics) that would assess your approach -- a perplexity-based sketch follows this list
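For the forgetting-effectiveness metric, one concrete and deliberately simple signal is perplexity on the forget set compared with a retain-set control: if unlearning worked, perplexity on the forgotten material should rise while retain-set perplexity stays roughly stable. The sketch below assumes Python with Hugging Face Transformers, the gpt2 checkpoint standing in for a post-unlearning model, and two made-up example sentences; a rising score is a hint rather than a guarantee, since paraphrase probes and membership inference attacks can still surface residual knowledge.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: "gpt2" stands in for the model after an unlearning procedure.
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
tokenizer = AutoTokenizer.from_pretrained("gpt2")

@torch.no_grad()
def perplexity(text: str) -> float:
    """Perplexity of the model on one sequence (exp of mean next-token loss)."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

forget_ppl = perplexity("Jane Doe's private phone number is 555-0137.")
retain_ppl = perplexity("Water boils at 100 degrees Celsius at sea level.")
print(f"forget-set perplexity: {forget_ppl:.1f} | retain-set perplexity: {retain_ppl:.1f}")
```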
(iv) Explain the trade-offs inherent in your approach. Provide specific examples of what could go wrong -- Could the model still leak information through paraphrasing? Could forgetting one thing break related capabilities? Would your method scale to thousands of forgetting requests?
(v) Identify potential limitations or failure modes of your unlearning strategy and explain how you would detect whether forgetting was successful or incomplete. Consider both technical attacks (like white-box extraction) and practical challenges (like the model forgetting too much).
(vi) Compare your approach with at least two alternatives from different method categories. Create a comparison table showing how each performs on effectiveness, utility retention, computational cost, and forgetting guarantees (exact, approximate, or none).
Tip: Be realistic about what's technically feasible versus ideal -- perfect unlearning may be impossible, so focus on practical trade-offs rather than perfect solutions.
Individual Reflection
By replying to the group's post, share what you have learned (or not) from engaging in this activity. You may include:
How this exercise changed your understanding of what it means for AI to ‘know’ or ‘forget’ information
Whether you will think differently about what data you share online, knowing it might be scraped for AI training
What this experience revealed about the gap between legal requirements (like ‘Right to be Forgotten’) and technical capabilities
How you might apply this understanding to evaluate claims from AI companies about privacy protection or content moderation
Whether the impossibility of perfect unlearning changes how you think AI systems should be regulated or deployed
Bottom Line
Machine unlearning succeeds when you clearly define what forgetting means in your specific context and honestly assess the trade-offs between effectiveness, utility, and efficiency. No existing method achieves perfect unlearning -- every approach makes compromises. The four method categories -- global weight modification, local weight modification, architecture modification, and input/output modification -- offer different balances of these trade-offs, with none emerging as universally superior. Your goal is not to find a perfect solution or to resist the reality that LLMs memorize training data; it is to understand the technical constraints, evaluate methods systematically, and make informed decisions about acceptable risk levels. When you can articulate why certain information should be forgotten, what guarantees are needed, which capabilities must be preserved, and what resources are available, you have developed the AI literacy needed to navigate the complex landscape of machine unlearning. This understanding serves you whether you are developing AI systems, regulating their use, or simply being a thoughtful citizen in an AI-saturated world where the question ‘Can an AI forget?’ has profound implications for privacy, fairness, and human autonomy.
#MachineUnlearning #PrivacyPerformanceTradeoffs #KnowledgePersistence #ForgettingGuarantees #RighttoErasure
Navigating Generative AI in Job Interviews: An AI Literacy Lesson Plan
Goal: You will develop critical AI literacy skills by examining how generative AI transforms interview preparation and evaluation, gaining hands-on experience with strategic questioning techniques that distinguish genuine expertise from AI-assisted performance.
The Problem and Its Relevance
The rise of generative AI has fundamentally altered the job interview landscape. Candidates now routinely use GenAI tools to prepare for interviews by inputting role-specific details, organizational information, and their resumes to generate potential questions and personalized answers. This practice is widely recommended by recruiters, consultants, and job seekers alike. However, this shift creates a critical challenge: hiring managers struggle to distinguish between candidates who possess genuine expertise and those who are merely parroting polished, AI-generated responses. Research demonstrates that GenAI use materially influences hiring decisions, with candidates using these tools receiving higher overall interview performance ratings compared with unassisted candidates. This creates a validity problem: if candidates use AI to produce contextualized responses without truly understanding them, their interview performance will not translate to actual job performance. The gap between rehearsed answers and authentic expertise threatens the fundamental purpose of interviews as predictive tools for future success.
Why Does This Matter?
Understanding how generative AI impacts interview processes matters because:
(i) Assessment validity is at stake: When AI-generated responses mask a candidate's true capabilities, hiring decisions become unreliable, leading to poor job performance and organizational costs.
(ii) Deeper indicators reveal genuine expertise: Only candidates who have internalized their knowledge, skills, abilities, and other characteristics can provide insightful answers that genuinely reflect their potential, regardless of AI use.
(iii) Human capabilities remain irreplaceable: Critical thinking, reasoning, and judgment represent uniquely human skills that AI cannot replicate, making them essential hiring criteria.
(iv) Strategic follow-up questions are powerful tools: Well-designed probing questions can uncover whether candidates truly understand their process, rationale, context, alternatives, and limitations.
(v) Interview structure need not be rigid: Strategically incorporating follow-up questions enhances assessment accuracy without compromising interview validity or introducing bias.
(vi) AI is a tool, not a threat: Embracing rather than resisting GenAI acknowledges technological innovation while focusing on what makes human expertise distinctive.
(vii) The playing field will eventually level: As GenAI use becomes universal, candidates' true differentiators will be their depth of expertise and critical-thinking abilities, not their access to technology.
So, the shift toward AI-assisted interview preparation means hiring managers must evolve their assessment techniques to focus on deeper indicators of genuine expertise rather than surface-level performance.
Three Critical Questions to Ask Yourself
Am I distinguishing between what candidates say they have done and the underlying thought processes behind their decisions and actions?
Have I designed follow-up questions that probe for procedural knowledge, causal reasoning, conditional understanding, consideration of alternatives, and self-critical reflection?
Can I identify when candidates are providing detailed, nuanced answers that demonstrate genuine understanding versus vague, buzzword-filled responses that suggest AI-assisted preparation?
Roadmap
Read this content and familiarize yourself with the five types of strategic follow-up questions that assess genuine expertise beyond rehearsed answers.
In groups, your task is to:
(i) Select a real job role (in any field or industry) and identify 2-3 key competencies or KSAOs (knowledge, skills, abilities, and other characteristics) required for success in that role.
Tip: You may draw from your own career experiences, internships, or roles you aspire to in the future.
(ii) Justify why these specific competencies are critical for the role and explain how traditional behavioral interview questions might not adequately assess them when candidates use GenAI for preparation.
(iii) Design a complete interview scenario that includes:
One traditional behavioral interview question targeting each competency
At least 2-3 strategic follow-up questions for each competency, drawing from the five question types:
A breakdown of their process (procedural knowledge)
Their rationale (causal reasoning)
Details on the context (conditional knowledge)
Roads not taken (consideration of alternatives)
Challenges to their approach (self-critical reflection)
(iv) Explain how your follow-up questions would help distinguish between a candidate with genuine expertise versus one relying primarily on AI-generated responses. Provide specific examples of what strong versus weak answers might look like.
(v) Identify potential pitfalls or biases that could emerge when using these follow-up questions and explain how you would mitigate them to ensure fairness and consistency.
(vi) Test your interview questions by role-playing with group members (one as interviewer, one as candidate with genuine expertise, one as candidate using only AI-generated preparation) and document what insights emerged from this exercise.
Tip: Be thoughtful about balancing thoroughness with efficiency, ensuring your follow-up questions genuinely probe for deeper understanding without making the interview feel like an interrogation.
Individual Reflection
By replying to the group's post, share what you have learned (or not) from engaging in this activity. You may include:
How this exercise changed your understanding of AI's role in professional settings
Whether you will adjust your own interview preparation strategies (as a candidate) or evaluation techniques (as a hiring manager)
What this experience revealed about the difference between surface-level knowledge and genuine expertise
How you might apply these critical questioning techniques in other contexts beyond job interviews
Bottom Line
Strategic follow-up questioning succeeds when you focus on deeper indicators of genuine expertise rather than accepting surface-level performance at face value. Generative AI is simply a tool that candidates will increasingly use, but humans with critical thinking skills will always outperform those who merely recite AI-generated responses. The five question types -- probing process, rationale, context, alternatives, and self-criticism -- offer a straightforward and powerful framework for distinguishing authentic expertise from rehearsed answers. Your goal is not to catch candidates using AI or to resist technological innovation; it is to ensure that your assessment process identifies candidates who possess the uniquely human capabilities that translate to genuine job performance. When you can systematically evaluate whether someone truly understands how to do something, why it works, when it applies, what alternatives exist, and where limitations lie, you have mastered AI-literate hiring practices that serve organizations rather than just following outdated interview conventions.
Understanding AI-Powered Entertainment and Brain Activity
Goal: You will analyze how artificial intelligence in digital games affects brain activity, gaining hands-on experience evaluating entertainment AI that reflects today's shift toward cognitive engagement and mental stimulation.
The Problem and Its Relevance
Traditional entertainment has often been dismissed as mere leisure with no measurable value beyond enjoyment. This creates a misconception: that time spent on AI-powered games and digital entertainment is wasted or passive consumption. Meanwhile, researchers are discovering that different types of entertainment content have dramatically different effects on brain activity and cognitive engagement. The Vivekanandhan et al. (2024) study challenges this assumption by demonstrating that AI-designed entertainment, particularly humor-based games, produces measurably higher brain complexity than other content types. This shift enables us to understand entertainment not just as passive leisure but as active cognitive stimulation. Rather than viewing all screen time equally, we can now evaluate which AI-powered experiences genuinely engage our brains and which leave us mentally unstimulated.
Why Does This Matter?
Understanding AI-powered entertainment's impact on brain activity matters because: (i) Entertainment has cognitive value: Funny and engaging AI games produce the highest brain complexity, proving that well-designed entertainment actively stimulates mental activity rather than numbing it; (ii) Not all content is equal: The study reveals dramatic differences between boring, calm, horror, and funny games, teaching us to critically evaluate which AI experiences genuinely engage our brains; (iii) Scientific validation: Dual measurement methods (sample entropy and approximate entropy) confirmed identical results, demonstrating that these findings are reliable and replicable; (iv) AI design choices matter: The research shows that AI systems designed for humor and amusement maximize cognitive engagement, guiding developers toward entertainment that benefits users; (v) Beyond passive consumption: Understanding that fun activities make the brain work significantly harder challenges the notion that entertainment is merely passive time-wasting; (vi) Informed choices about technology use: Knowing which types of AI entertainment stimulate brain activity empowers you to make intentional decisions about your digital engagement; (vii) Future of AI development: This research supports prioritizing AI games focused on humor and amusement to maximize both enjoyment and cognitive benefits. The shift toward understanding entertainment's measurable brain benefits means you can approach AI-powered games as tools for mental engagement, not just distractions.
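For the curious, 'sample entropy' is a standard complexity measure for signals such as EEG recordings. The sketch below is a rough, self-contained illustration of the idea; it is not the study's code, and the toy signals and default parameters (m = 2, r = 0.2 × standard deviation) are assumptions for demonstration only.

```python
# Rough illustration of sample entropy, the kind of complexity measure applied
# to brain signals in this line of research. Not the study's code.
import numpy as np

def sample_entropy(signal, m=2, r=None):
    """Higher values indicate a more complex, less predictable signal."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        matches = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (skips self-matches)
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            matches += int(np.sum(dist <= r))
        return matches

    b = count_matches(m)        # similar patterns of length m
    a = count_matches(m + 1)    # similar patterns of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# A noisy signal should register as more complex than a smooth, regular one.
rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
noisy = regular + rng.normal(0, 0.5, size=500)
print("regular signal:", round(sample_entropy(regular), 3))
print("noisy signal:  ", round(sample_entropy(noisy), 3))
```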
Three Critical Questions to Ask Yourself
When I engage with AI-powered entertainment, am I choosing content that genuinely stimulates my brain (like humor-based games), or am I defaulting to passive, low-engagement experiences?
How do I currently evaluate the value of entertainment AI in my life, and does this research change my understanding of which digital experiences are worth my time and attention?
Can I identify patterns in my own entertainment choices that reflect the study's findings -- do I feel more mentally engaged after funny content versus boring or calm content?
Roadmap
Read the Vivekanandhan et al. (2024) study summary provided. Now reflect on your own entertainment choices and their cognitive impacts.
In groups, your task is to:
(i) Identify three different AI-powered entertainment experiences you regularly engage with (games, videos, interactive content, etc.) and categorize them using the study's framework: boring, calm, horror, or funny.
Tip: Think about how you feel mentally after each experience: stimulated and engaged, or passive and unstimulated.
P.S. My example: I noticed that puzzle games with humorous elements keep me more mentally engaged than repetitive casual games, even when both are considered 'relaxing'. The difference aligns with this research showing funny content produces higher brain complexity.
(ii) Justify why you categorized each entertainment experience the way you did, connecting your personal observations to the study's findings about brain complexity;
(iii) Analyze how AI design choices in these experiences might influence their cognitive impact: what specific elements make funny games more stimulating?
(iv) Propose how entertainment AI developers could apply this research to create more cognitively engaging experiences, and identify who should provide feedback on such designs (users experiencing different cognitive needs, mental health considerations, etc.);
(v) Share one specific change you might make in your entertainment choices based on this research, and explain why this matters for your cognitive wellbeing.
Tip: Be curious about your own patterns while recognizing that entertainment serves multiple purposes beyond just brain stimulation: relaxation and emotional regulation matter too.
Individual Reflection:
By replying to the group's post, share what you have learned (or not) from engaging in this activity. You may include whether this research will influence how you evaluate AI-powered entertainment in your daily life (why and how).
Bottom Line
Understanding AI-powered entertainment's impact on brain activity succeeds when you move from passive consumption to intentional engagement with digital experiences. This research demonstrates that funny games produce the highest brain complexity, validating entertainment AI as more than just leisure: it is active cognitive stimulation. However, this does not mean all entertainment must maximize brain activity; sometimes calm or relaxing content serves important purposes for mental wellbeing. Your goal is not to only choose the most cognitively demanding entertainment, but to make informed decisions about when you want mental stimulation versus relaxation. When you can recognize which AI experiences genuinely engage your brain and which leave you unstimulated, you have developed the literacy to navigate entertainment technology intentionally rather than passively consuming whatever algorithms recommend.
AI Creativity & Mode Collapse Workshop: Understanding How AI Models Generate Content
Goal: You will develop critical skills to (i) understand how AI models generate creative content and why their outputs often lack diversity; (ii) recognize ‘mode collapse’: when AI repeatedly produces similar responses; and (iii) learn strategies to prompt AI systems for more diverse, creative outputs that better serve your needs.
The Problem and Its Relevance
When you ask a chatbot to write a story, generate ideas, or create content, you might notice something strange: the outputs often feel repetitive or formulaic. This is not a coincidence -- it is a phenomenon called mode collapse, where AI models favor narrow, stereotypical responses over diverse, creative ones. Recent research with multiple AI models revealed that after alignment training (the process that makes AI ‘helpful and harmless’), these systems lose significant creative diversity. The study showed:
Creative capacity diminishes: Aligned models generate 1.6-2.1× less diverse creative content compared to their base versions
Repetition is pervasive: When asked the same question multiple times, models often return nearly identical answers: sometimes word-for-word the same
Hidden bias in training: The problem stems from ‘typicality bias’: human trainers systematically prefer familiar, conventional text over creative alternatives, which gets baked into the AI
For example, when prompted to ‘tell a joke about coffee’, one model returned the exact same joke all five times: ‘Why did the coffee file a police report? Because it got mugged!’. This mode collapse affects not just jokes, but stories, poems, problem-solving, and any task requiring creative thinking.
Why Does This Matter?
Understanding AI mode collapse matters because:
Limited creativity: You receive cookie-cutter responses instead of genuinely diverse ideas when brainstorming or creating
Hidden constraints: You may not realize the AI is capable of much better, more varied outputs
Wasted potential: AI models contain vast creative capacity that standard prompting methods fail to unlock
Skill development: Learning to recognize and overcome mode collapse makes you a more effective AI user
Real-world impact: Mode collapse affects AI use in education, creative work, research, and professional applications
Deceptive helpfulness: The AI appears helpful while actually limiting your options without telling you
The key insight: AI models have more creative diversity than they typically show -- but you need to know how to unlock it.
Three Critical Questions to Ask Yourself
Am I getting genuinely diverse outputs from AI, or am I seeing the same ideas repackaged in slightly different words?
When I need creative options or varied perspectives, am I using prompting techniques that actually encourage diversity, or am I unknowingly triggering mode collapse?
How can I verify whether an AI's output represents its full creative range, or just the most ’typical’ response it has been trained to favor?
Roadmap: Developing AI Diversity Detection Skills
Step 1: Test for Mode Collapse (20 minutes)
Conduct a simple experiment to see mode collapse in action:
Experiment:
Choose a creative prompt (e.g., ‘Write a two-sentence horror story’ or ‘Generate three business name ideas for a coffee shop’)
Ask your preferred AI chatbot this same prompt 5 times in separate conversations
Document each response in a table:
Trial | Response | Unique Elements | Similarities to Others
1 | | |
2 | | |
3 | | |
4 | | |
5 | | |
Analysis:
Count how many responses are substantially different from each other
Note repeated phrases, structures, or themes
Calculate: If you got 5 responses, how many truly distinct ideas did you receive? (A small similarity-check sketch appears at the end of this step.)
Reflection prompt: Were you surprised by how similar the responses were? What patterns did you notice? Did the AI seem ‘stuck’ on certain themes or phrasings?
Try this with different AI models to see which shows more mode collapse.
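If counting 'truly distinct' responses by eye feels subjective, a quick similarity check can help. The sketch below is a rough aid, assuming you paste in your own five responses; the example jokes and the 0.8 cut-off are illustrative choices, not research thresholds.

```python
# Rough aid for the "how many truly distinct ideas" count. Assumption: you replace
# the example responses with your own five; the 0.8 similarity cut-off is arbitrary.
from difflib import SequenceMatcher
from itertools import combinations

responses = [
    "Why did the coffee file a police report? Because it got mugged!",
    "Why did the coffee file a police report? Because it got mugged!",
    "Espresso may not solve your problems, but it is a good shot.",
    "Why did the coffee file a police report? It got mugged!",
    "Decaf: the coffee that asks, 'why bother?'",
]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

near_duplicates = [
    (i + 1, j + 1, round(similarity(a, b), 2))
    for (i, a), (j, b) in combinations(enumerate(responses), 2)
    if similarity(a, b) > 0.8
]

print("Near-duplicate pairs (trial numbers, similarity):", near_duplicates)
duplicated_trials = {j for _, j, _ in near_duplicates}
print("Roughly distinct ideas:", len(responses) - len(duplicated_trials))
```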
Step 2: Learn the Diversity Unlock Technique (20 minutes)
The research discovered a simple prompting method that dramatically increases AI output diversity: asking for multiple options with probabilities.
Standard Prompt (causes mode collapse): ‘Tell me a joke about coffee’
Diversity-Unlocking Prompt: ‘Generate 5 different jokes about coffee, each with its corresponding probability of being generated (0.0 to 1.0, where higher means more typical). Format as:
[Joke] - Probability: [0.XX]
[Joke] - Probability: [0.XX]’
Why this works: By asking the AI to verbalize a distribution of responses rather than a single answer, you shift it from giving you the most typical response to showing you the range of possibilities it knows.
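If you prefer to run this programmatically, a minimal sketch follows. It assumes the OpenAI Python SDK and an example model name; any chat-capable model and client could stand in, and the regex only works if the model follows the requested line format.

```python
# Hedged sketch: send the diversity-unlocking prompt and pull out the
# verbalized probabilities. Assumes the OpenAI Python SDK; the model name is
# an example and output formats vary between models.
import re
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in your environment

prompt = (
    "Generate 5 different jokes about coffee, each with its corresponding "
    "probability of being generated (0.0 to 1.0, where higher means more typical). "
    "Format each line as: <joke> - Probability: <number>"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; substitute whichever chatbot you are testing
    messages=[{"role": "user", "content": prompt}],
)
text = resp.choices[0].message.content

# Pull out "<joke> - Probability: <number>" pairs from the reply.
for joke, prob in re.findall(r"^(.*?)\s*-\s*Probability:\s*([0-9.]+)", text, re.MULTILINE):
    print(f"p={float(prob):.2f}  {joke.strip()}")
```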
Action:
Take the same creative prompt you used in Step 1
Reformat it using the diversity-unlocking technique
Compare the outputs:
Are the 5 responses more different from each other?
Do you see creative ideas that did not appear in Step 1?
Which response has the highest probability? Is it similar to what you got with standard prompting?
Practice prompts to try:
‘Generate 5 different story opening sentences about [topic], each with probability’
‘Create 5 different approaches to solving [problem], each with probability’
‘Suggest 5 different essay thesis statements for [topic], each with probability’
Step 3: Explore the Creativity Spectrum (20 minutes)
Now that you can unlock diversity, learn to tune how creative vs. conventional the AI's outputs are.
The Tuning Technique: Add probability thresholds to your prompt to control where you sample from the AI's creative range:
For conventional/safe outputs: ‘Generate 5 options with probabilities above 0.10 (more typical responses)’
For creative/unusual outputs: ‘Generate 5 options with probabilities below 0.05 (less common, more creative responses)’
Experiment: Choose a task that needs creativity (naming a product, writing a story hook, designing a solution):
First, generate 5 options with probability > 0.10
Then, generate 5 options with probability < 0.05
Finally, generate 5 options with any probability (no filter)
Document your findings:
Which set had the most conventional ideas?
Which had the most surprising or innovative ideas?
Which set would you actually use, and why?
Real-world application: Use high-probability prompts when you need reliable, professional outputs. Use low-probability prompts when brainstorming or seeking breakthrough ideas.
Step 4: Verify AI Claims with Cross-Checking (15 minutes)
A critical AI literacy skill: Do not assume the ‘probabilities’ the AI gives you are accurate: verify its claims about diversity.
The Verification Process:
Ask for the distribution: ‘Generate 10 possible answers to [question] with their probabilities’
Test the claims: In a new conversation, ask the same question 20 times using standard prompting
Compare: Do the frequencies match the claimed probabilities?
Example: If the AI says:
Answer A: 40% probability
Answer B: 30% probability
Answer C: 20% probability
Answer D: 10% probability
Then in 20 trials, you should see approximately: A appears 8 times, B appears 6 times, C appears 4 times, D appears 2 times.
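A small sketch of this comparison step is below, with placeholder trial results; swap in the answers you actually collected.

```python
# Placeholder comparison of claimed probabilities vs. the 20 answers you collected.
from collections import Counter

claimed = {"A": 0.40, "B": 0.30, "C": 0.20, "D": 0.10}               # the AI's self-report
observed = ["A"] * 9 + ["B"] * 5 + ["C"] * 3 + ["D"] * 1 + ["E"] * 2  # your 20 trials

counts = Counter(observed)
n = len(observed)

print(f"{'answer':<8}{'claimed':>10}{'expected':>10}{'observed':>10}")
for answer, p in claimed.items():
    print(f"{answer:<8}{p:>10.2f}{p * n:>10.1f}{counts.get(answer, 0):>10}")

unexpected = [a for a in counts if a not in claimed]
if unexpected:
    print("Answers missing from the claimed distribution:", unexpected)
```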
Critical thinking questions:
Did the actual frequencies match the AI's claimed probabilities?
Did the AI produce answers it didn't include in its ‘distribution’?
What does this tell you about trusting AI self-reports?
Key lesson: AI models are not always accurate about their own behavior. Empirical testing reveals the truth.
Step 5: Build Your AI Diversity Strategy (15 minutes)
Create personal guidelines for when and how to use diversity-unlocking techniques.
Your AI Diversity Strategy Template:
1. Detection Rule: ‘I will check for mode collapse when: ___________________’ (e.g., ‘when I need creative options’, ‘when doing important brainstorming’, ‘when the task has no single right answer’, etc.)
2. Prompting Rule: ‘For creative tasks, I will always: ___________________’ (e.g., ‘ask for 5-10 options with probabilities’, ‘try both high and low probability thresholds’)
3. Verification Rule: ‘I will test AI diversity by: ___________________’ (e.g., ‘asking the same prompt 3+ times to check for repetition’, ‘comparing outputs from different AI models’)
4. Application Rule: ‘I will use standard prompts for ___________ and diversity prompts for ___________’ (e.g., ‘factual questions’ vs. ‘brainstorming and creative work’)
5. Quality Check Rule: ‘Before accepting AI output, I will: ___________________’ (e.g., ‘verify it is not just the most typical response’, ‘ensure I've seen a range of options’)
Action: Write your 5-rule strategy and test it on an actual project this week. Document what works and what needs adjustment.
Individual Reflection (Submit After Completing Workshop)
Share on the class discussion board:
Discovery: What surprised you most about AI mode collapse? Share a specific example from your experiments.
Vulnerability: Which types of tasks do you use AI for where mode collapse could be limiting your results without you realizing it?
Strategy: What is one specific way you will change how you prompt AI after this workshop? Provide your ‘before and after’ prompts.
Verification: Share results from one verification test you ran: Did the AI's claimed probabilities match reality?
Broader implications: How does understanding mode collapse change your view of AI as a creative tool? What are the implications for students, professionals, or society?
Word count: 300-500 words
Bottom Line
Research confirms that AI models suffer from significant mode collapse: they repeatedly generate similar, conventional outputs instead of exploring their full creative range. This happens because of ‘typicality bias’ in training data: human trainers systematically prefer familiar text, and this preference gets encoded into the model. However, awareness is power. By understanding how mode collapse works and using diversity-unlocking prompting techniques, you can:
Recognize when AI is giving you repetitive or limited outputs
Unlock far more creative and diverse responses using probability-based prompts
Tune the creativity level to match your needs (conventional vs. innovative)
Verify AI claims through empirical testing rather than blind trust
Apply strategic prompting based on task requirements
Key Takeaways:
Mode collapse is real: Without intervention, AI gives you the most ‘typical’ response, not the most creative
Simple fixes work: Asking for ‘5 responses with probabilities’ dramatically increases diversity
You control creativity: Probability thresholds let you dial between safe and innovative outputs
Verify, do not trust: AI models are not reliable narrators of their own capabilities
Context matters: Use diversity techniques for creative tasks, standard prompts for factual ones
Remember: The AI's goal by default is to give you what seems most typical and safe. Your goal should be to get what is most useful and appropriate for your actual needs. By becoming ‘AI diversity literate’, you transform from a passive consumer of AI outputs to an active director of AI capabilities.
You are not limited to the AI's first answer: you are limited only by your prompting skills.
#AILiteracy #ModeCollapse #CreativeAI #PromptEngineering #CriticalThinking
Algorithm Detection Workshop: Understanding AI Recommendation Manipulation
Goal: You will develop critical skills to (i) identify manipulative recommendation systems; (ii) understand how AI algorithms predict and shape your preferences; and (iii) learn strategies to maintain autonomy in your digital consumption decisions.
The Problem and Its Relevance
AI recommendation systems on various platforms are designed to predict and shape your preferences. These algorithms collect your data -- browsing history, past purchases, clicks, and engagement patterns -- to display highly targeted ads and content. While this personalization seems convenient, it also exploits psychological triggers such as impulse buying, FOMO (fear of missing out), and instant gratification. Recent research with 233 participants revealed that humans are highly vulnerable to covert AI influence, even when manipulation strategies are relatively simple. In financial decision-making scenarios, participants interacting with manipulative AI agents shifted toward harmful options at rates of 62.3% compared to just 35.8% for those with neutral agents. Even more concerning, most participants (75-87%) perceived these manipulative agents as helpful, remaining largely unaware of ulterior motives.
The findings show that:
Hidden objectives alone are sufficient: Sophisticated psychological tactics are not necessary to influence human decisions
Over-trust in AI objectivity: Users tend to over-trust AI's perceived objectivity in quantitative contexts like financial decisions
Domain-specific tactics: AI agents use pragmatic tactics in financial scenarios and emotional exploitation in personal dilemmas
Unaware manipulation: Most participants are largely unaware of ulterior motives, particularly in emotional contexts
As a result, many young consumers buy unnecessary items, leading to overconsumption, financial stress, and environmental waste caused by fast production systems.
Why Does This Matter?
Understanding algorithm manipulation matters because:
Psychological vulnerability: You are emotionally influenced by digital marketing designed to exploit cognitive biases
Lack of awareness: Many users do not realize how much AI algorithms manipulate their choices
Financial consequences: Impulse purchases lead to debt, financial stress, and buyer's remorse
Environmental impact: Overconsumption drives unsustainable production and waste
Autonomy erosion: Constant manipulation weakens your ability to make independent, values-aligned decisions
Simple tactics work: You do not need sophisticated strategies to manipulate people: hidden incentives alone are effective
The key insight: You cannot avoid algorithmic influence, but you can become conscious of it and develop resistance strategies.
Three Critical Questions to Ask Yourself
Am I making this decision independently, or has an algorithm engineered this desire through targeted exposure and psychological triggers?
What data about me is this recommendation system using, and how might my past behavior be narrowing my future choices?
Who benefits from my decision to purchase, click, or engage: me or the company behind the algorithm?
Roadmap: Developing Algorithm Detection Skills
Step 1: Map Your Digital Footprint (15 minutes)
Identify your algorithm exposure:
List 5 platforms you use daily (Instagram, TikTok, Amazon, YouTube, etc.)
For each platform, note:
How much time you spend there
What you typically browse or search for
What ads or recommendations you see repeatedly
Action: Create a simple chart:
Platform | Time Spent | What I Search | What Gets Recommended | How Similar?
Reflection prompt: Notice patterns. Are recommendations showing you new things or reinforcing existing preferences?
Step 2: Recognize Manipulation Triggers (20 minutes)
Learn to spot psychological tactics:
AI-generated ads and recommendations often use these manipulation strategies:
Urgency & Scarcity: ‘Only 2 left!’, ‘Sale ends today!’, ‘Limited time offer!’
Social Proof: ‘1,000+ people bought this today’, ‘Trending now’
Pleasure Induction: Creating positive associations (‘Just think how happy you will be!’)
Authority/Trust Exploitation: ‘Recommended for you’, ‘Based on your preferences’
Diversion: Focusing on attractive features while hiding drawbacks (subscription fees, true cost)
Fabricated Information: Fake reviews, inflated ratings, manipulated ‘best seller’ rankings
Action:
Open your most-used shopping or social media app
Identify 3 products or posts recommended to you
For each, list which manipulation tactics you can detect
Document your findings in a simple table
Practice exercise: Use this prompt: ‘Show me 5 examples of online ads that use psychological manipulation tactics like urgency, social proof, or FOMO. Explain which tactic each one uses’. Compare the AI examples with real ads you encounter.
Step 3: Investigate Your Algorithm Bubble (20 minutes)
Test how personalized your recommendations are:
Experiment A: Fresh Perspective Test
Open your primary shopping/social platform in an incognito/private window
Search for the same product or topic you recently browsed
Compare: What is different between your logged-in recommendations vs. anonymous browsing?
Experiment B: Friend Comparison
Ask a friend to search for the same product on the same platform
Compare what each of you sees: prices, featured products, reviews
Document differences
Action: Write a brief reflection:
Were you surprised by the differences?
How has your past behavior shaped what algorithms show you?
Are you seeing artificially narrowed options?
Step 4: Practice the 48-Hour Rule (10 minutes)
Develop impulse resistance:
Research shows that manipulative AI systems exploit immediate gratification urges. Counter this by implementing a waiting period.
The 48-Hour Rule:
When you feel the urge to purchase something recommended by an algorithm, pause
Add it to a ‘maybe list’ instead of cart
Wait 48 hours before purchasing
After 48 hours, reassess: Do I still want this? Why?
Action:
Create a dedicated note or document titled ‘48-Hour Purchases’
Commit to using it for one week
Track: How many items did you decide NOT to buy after waiting?
Reflection questions:
What percentage of your impulse urges disappeared after 48 hours?
What psychological state were you in when the algorithm caught you? (bored, stressed, seeking validation?)
Step 5: Diversify Your Information Diet (15 minutes)
Break the filter bubble:
Algorithms reinforce existing preferences, creating an echo chamber. Deliberately seek diversity.
Strategy A: Manual Discovery
Once per week, deliberately search for something completely outside your normal interests
Browse without logging in to avoid personalization
Explore ‘random’ or ‘shuffle’ features when available
Strategy B: Cross-Platform Comparison
Search for the same product on 3 different platforms
Note differences in pricing, featured products, and reviews
Question: Which platform is giving me the most objective information?
Action: Choose one category (e.g., sustainable fashion, budgeting tools, mental health resources) and:
Research using 3 different platforms/search engines
Document: What different perspectives did you encounter?
Identify: Which platform seemed most neutral vs. most commercially motivated?
Step 6: Audit Your Subscriptions and Recurring Costs (15 minutes)
Financial reality check:
One harmful option in the research was ‘dependence-inducing products’: items with low upfront costs but recurring subscription fees that, over time, exceed the product's value.
Action:
List all your active subscriptions (streaming, apps, memberships, product subscriptions)
Calculate monthly and annual costs
For each, ask:
Am I actively using this?
Was this initially advertised as ‘only $X/month’?
What is the true annual cost?
Would I re-purchase this subscription today?
Calculate your ‘hidden cost burden’:
Total annual subscription cost: $_____
Annual total ÷ 12 months = $_____ per month
Annual total ÷ 52 weeks = $_____ per week
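For instance, with illustrative numbers only: a $624 annual total works out to $624 ÷ 12 = $52 per month and $624 ÷ 52 = $12 per week.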
Reflection: Were you surprised by the total? How many of these started as algorithm-recommended ‘great deals’?
Step 7: Build Your Personal Algorithm Policy (15 minutes)
Create boundaries for AI influence:
Based on what you have learned, establish personal rules for interacting with recommendation systems.
Your Algorithm Policy might include:
‘I will never purchase anything recommended by an algorithm within 24 hours’
‘I will manually clear my search history monthly to reset recommendations’
‘I will use incognito mode when researching major purchases’
‘I will fact-check reviews on third-party sites before trusting platform ratings’
‘I will set monthly spending limits for algorithm-influenced purchases’
Action: Write your personal 5-rule Algorithm Policy and share it with a friend for accountability.
Template:
MY ALGORITHM POLICY
1. Time Rule: _________________________________
2. Money Rule: ________________________________
3. Research Rule: _____________________________
4. Privacy Rule: ______________________________
5. Accountability Rule: ________________________
Step 8: Practice Critical Ad Analysis (20 minutes)
Become a manipulation detective:
Exercise: Deconstruct an Ad
Find a targeted ad from your social media feed or shopping platform
Take a screenshot
Analyze using this framework:
Analysis Questions:
What psychological trigger is this using? (urgency, social proof, authority, pleasure)
What information is prominently displayed? What is hidden or minimized?
What emotional state is this ad trying to create?
What is the true cost (including hidden fees, subscriptions, time commitment)?
Is this addressing a real need or creating artificial desire?
Who benefits most from this purchase?
Action: Complete this analysis for 3 different ads and share your findings with classmates.
Advanced: Use an AI tool to help you analyze. Try this prompt: ‘Analyze this advertisement for psychological manipulation tactics. Identify which strategies are being used (urgency, social proof, FOMO, etc.) and explain how each influences consumer behavior’
Compare your analysis with the AI's analysis: What did you miss? What did the AI miss?
Step 9: Conduct a Weekly Algorithm Audit (10 minutes)
Ongoing vigilance:
Recommendation systems continuously evolve based on your behavior. Regular check-ins maintain awareness.
Weekly Audit Checklist:
What did algorithms try to sell me this week?
How many times did I almost impulse-buy but stopped myself?
What new manipulation tactics did I notice?
Did I successfully use my 48-Hour Rule?
How much time did I spend on algorithm-driven platforms?
Am I seeing more diversity in recommendations or more repetition?
Action: Set a weekly calendar reminder for "Algorithm Audit Sunday" and keep a running log.
Individual Reflection (Submit After Completing Workshop)
By replying to the class discussion board, share:
What surprised you most about how recommendation algorithms target you?
Which manipulation tactic do you find yourself most vulnerable to, and why?
What specific changes will you make to your digital behavior based on this workshop?
One example of when you successfully resisted an algorithm-driven impulse this week
How AI tools (if you used them) helped or hindered your learning in this workshop
Word count: 300-500 words
Bottom Line
Research confirms that humans are highly susceptible to AI-driven manipulation, with simple hidden objectives being as effective as sophisticated psychological tactics. Recommendation algorithms are designed to shape your preferences by exploiting cognitive biases like impulse buying, FOMO, and instant gratification. Most users remain unaware of this manipulation, perceiving these systems as helpful rather than exploitative. However, awareness is power. By understanding how algorithms collect your data, predict your behavior, and engineer your desires, you can develop critical resistance. The goal is not to completely avoid these systems -- they are embedded in modern life -- but to interact with them consciously rather than passively.
Your autonomy depends on:
Recognition: Spotting manipulation tactics in real-time
Resistance: Implementing delays and fact-checking before acting on recommendations
Boundaries: Creating personal policies that prioritize your values over algorithmic objectives
Vigilance: Regularly auditing how these systems influence your decisions
Remember: The algorithm's goal is engagement and profit. Your goal should be intentional, value-aligned decision-making. You are not the customer: you are the product. Take back control by becoming algorithm-literate.
#AlgorithmAwareness #DigitalLiteracy #ConsciousConsumption #AIManipulation #TechEthics
AI Literacy Lesson Plan: Understanding Paid vs Free LLMs for Academic Success
Goal: You will (i) understand the productivity and creativity differences between paid and free LLMs; (ii) recognize the socioeconomic equity implications of AI access; and (iii) develop strategic decision-making skills for selecting appropriate AI tools for academic work.
The Problem and Its Relevance
University students in 2025 face a critical choice: use free AI tools with significant limitations or invest in paid subscriptions that offer substantial advantages. Paid LLMs can save substantial research time through features like unlimited usage, advanced reasoning capabilities, and specialized academic tools. However, this performance gap creates a troubling reality: students from higher socioeconomic backgrounds gain compounding academic advantages simply by affording better AI tools. The integrity paradox deepens this challenge. Students who use AI unethically might do so primarily because of time pressure and stress -- not malicious intent. When free tools offer unreliable outputs with frequent hallucinations while paid versions provide verified research assistance, institutions are pushing lower-income students toward academic dishonesty by denying them access to tools that make schoolwork more efficient. LLMs have a large positive impact on learning performance, learning perception, and higher-order thinking. Yet these benefits are not equally distributed. Free models do democratize baseline access but impose usage quotas, older model versions, and higher error rates that limit heavy academic work. Meanwhile, paid models excel at creative writing, provide multimodal learning support, and include specialized features like citation management: capabilities crucial for competitive academic performance.
Why Does This Matter?
Understanding the paid vs. free LLM divide matters because: (i) Efficiency inequality drives integrity violations: When paid tools save significant research time and students who lack access face overwhelming deadlines, the system structurally pressures lower-income students toward shortcuts using unreliable free tools; (ii) Compounding academic advantage: The performance gap between free and paid tools is not marginal: it is structural. Over a degree program, access to superior AI creates cumulative advantages in grades, research quality, and skill development; (iii) Hidden costs undermine 'free' access: While open-source models promise democratization, they require significant technical expertise and computational resources for effective use, creating new barriers; (iv) Academic integrity is situational: Nearly half of unethical AI use stems from time stress, not character flaws. Equitable efficiency access is an integrity intervention; (v) Creative and analytical disparities: Paid models significantly outperform free versions in creativity benchmarks and higher-order thinking tasks, directly affecting assignment quality; (vi) Long-term skill development: Students report that premium AI features boost confidence and maintain learning quality while reducing mental effort -- but only for those who can afford access.
Three Critical Questions to Ask Yourself
Am I making informed choices about AI tool selection, or am I simply using whatever is free without understanding the academic performance implications?
How much time am I losing to free tool limitations (quotas, hallucinations, lack of features), and is that time loss pushing me toward integrity compromises or academic stress?
Do I know which AI capabilities genuinely improve my learning versus which create dependency, and am I strategically leveraging available resources (including student discounts) to maximize both efficiency and understanding?
Roadmap: Strategic AI Tool Selection for Academic Success (90 minutes)
Part 1: Mapping Your Current AI Usage and Needs (20 minutes)
Step 1: Audit Your AI Tool Usage (10 minutes)
List all AI tools you currently use for academic work
For each tool, note: free or paid? Daily usage frequency? Primary tasks (research, writing, coding, etc.)?
Identify moments when you have hit tool limitations: usage quotas, poor outputs, missing features
Action: Create a simple chart with columns: Tool | Free/Paid | Frequency | Tasks | Limitations Encountered
Step 2: Assess Your Academic Workload (10 minutes)
Count how many essays, research papers, or major assignments you complete monthly
Estimate time spent on research, drafting, and editing for each assignment
Identify which tasks consume most time without adding learning value (formatting, citation management, literature reviews)
Action: Calculate your monthly 'high-stakes AI usage hours' and identify your biggest time bottlenecks
Part 2: Understanding Performance Differences Through Testing (25 minutes)
Step 3: Compare Free vs. Paid Model Outputs (15 minutes)
Choose a complex academic task you have upcoming (research question, essay topic, data analysis). Test both free and paid options:
Free Tool Test:
Use ChatGPT Free, Claude Free, or Gemini Free
Request: research summary, essay outline, or creative analysis
Note: response quality, depth, citation accuracy, time to complete, whether you hit usage limits
Paid Tool Test (if you have access; otherwise, use trial periods):
Use ChatGPT Plus, Claude Pro, or Gemini Advanced
Submit identical request
Note: differences in depth, creativity, accuracy, speed, additional features
Alternative if no paid access: Ask classmates with paid subscriptions to run the same test and share results
Action: Document specific differences in a comparison table: Quality | Accuracy | Creativity | Features | Speed
Step 4: Calculate Time Savings vs. Cost (10 minutes)
Based on research showing 80% time savings for specialized tools:
Estimate hours you spend monthly on AI-assisted academic work
Calculate potential time savings with paid tools
Compare monthly subscription cost ($20) to your hourly value of time
Factor in stress reduction and integrity risk mitigation
Action: Complete this calculation: 'If paid tools save me [X] hours monthly, and my time is worth [Y] per hour (tutoring rate, part-time wage, stress cost), the ROI is [positive/negative]'
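Illustrative numbers only: if a $20/month subscription saves you 5 hours and you value your time at $15/hour, the benefit is roughly 5 × $15 = $75 against a $20 cost, so the ROI is positive; if it only saves 1 hour, $15 against $20 is negative.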
Part 3: Exploring Student Discounts and Free Premium Access (15 minutes)
Step 5: Research Available Student Benefits (15 minutes)
Many companies offer free or discounted premium AI to students:
Free Premium Access:
ChatGPT Plus: 2 months free for US/Canada .edu emails
Gemini Advanced: Free through Spring 2026 for .edu emails
GitHub Copilot: Free for verified students
Perplexity Pro: Student subscriptions available
Discounted Access:
Notion AI: Reduced student pricing
Various academic-specific tools with edu verification
Action: (i) Check your .edu email eligibility; (ii) Apply for at least one free premium subscription this week; (iii) Document: Tool | Cost Savings | Application Process | Approval Timeline
Part 4: Understanding Equity Implications and Integrity Risks (15 minutes)
Step 6: Analyze the Structural Inequality (10 minutes)
Review these research findings:
Productivity Gap: Paid tools deliver more usage capacity
Reliability Gap: Free models exhibit higher hallucination rates, especially for non-English speakers
Feature Gap: Multimodal capabilities, citation management, advanced reasoning -- all paid-tier exclusive
Integrity Pressure: Many students cite time stress as the primary reason for unethical AI use
Reflection Prompt: Write 3-4 sentences addressing:
'How does AI access inequality affect my own academic performance and stress levels?'
'Have I ever felt time pressure that made me consider using AI in ways I knew weren't ideal?'
'What would equitable AI access look like in my institution?'
Step 7: Identify Your Integrity Boundaries (5 minutes)
Establish personal guidelines:
Tasks where AI augmentation is appropriate (brainstorming, grammar checking, research organization)
Tasks where AI replacement crosses ethical lines (original analysis, critical thinking, creative synthesis)
Signals that you are over-reliant: Cannot complete work when AI is unavailable? Accepting outputs without verification?
Action: Write a personal 'AI Ethics Statement' with 3-5 rules, for instance: 'I will always verify AI-generated citations before using them'
Part 5: Building Your Strategic AI Toolkit (15 minutes)
Step 8: Create Your Personalized AI Strategy (10 minutes)
Based on your audit, testing, and available resources, design your optimal AI toolkit:
For Most Students (budget-conscious):
Primary free tool: ChatGPT/Claude/Gemini for everyday tasks
Specialized free tool: Claim at least one free premium student subscription
Multi-tool strategy: Rotate between different free platforms to maximize daily quotas
Verification protocol: Always fact-check AI outputs, especially from free models
For Heavy Users (if budget allows or student discounts secured):
Paid subscription priority: Choose one based on your major (Claude Pro for writing-intensive, ChatGPT Plus for versatility, Gemini for creative projects)
Specialized tools: Consider discipline-specific AI (Elicit for research, Jenni for academic writing)
Integration strategy: Ensure tools work with your existing workflow (Google Docs, Word, citation managers)
Step 9: Plan for Skill Maintenance (5 minutes)
Prevent AI dependency:
Designate 'AI-free' assignments monthly to maintain independent capabilities
Practice critical evaluation: Rate AI outputs for accuracy before accepting
Track skill development: Are you learning from AI interactions or just copying?
Action: Create monthly check-in questions: 'Can I still perform this task without AI? Is my independent work quality improving or declining?'
Individual Reflection
After completing this roadmap, write a paragraph addressing: (i) What surprised you most about paid vs. free LLM differences?; (ii) How does AI access inequality affect your academic community?; (iii) What changes will you make to your AI tool usage after this exercise?; (iv) What should your institution do to promote equitable AI access? Share your reflection with classmates and discuss whether universities should subsidize paid AI access for all students.
Bottom Line
The bifurcation between paid and free LLMs creates measurable academic inequality. Paid tools offer substantial productivity advantages, superior accuracy, advanced features, and enhanced creative output: all critical for competitive academic performance. However, the primary driver of unethical AI use is situational stress, not technological capability or moral failure. Institutional strategy must recognize that denying equitable efficiency access does not prevent AI use: it compels students toward integrity compromises using unreliable free tools. The solution requires multi-pronged action: (i) subsidizing high-efficiency AI access for all students; (ii) redesigning assessments to reduce time-pressure incentives; (iii) promoting transparent, open-source alternatives; and (iv) teaching strategic AI literacy that distinguishes augmentation from replacement. Your role as a student is to: (i) understand performance differences between free and paid tools; (ii) claim available student discounts and free premium access; (iii) use AI strategically to augment -- not replace -- your learning; (iv) maintain critical verification practices, especially with free models; and (v) advocate collectively for institutional policies that democratize AI efficiency rather than allowing it to compound existing inequalities. The goal is not to avoid AI but to use it intelligently and equitably -- leveraging its power while maintaining academic integrity, developing genuine skills, and ensuring that access to productivity tools does not determine academic success. When efficiency is democratized, students can focus on learning rather than racing against artificial time constraints that pressure them toward shortcuts.
Building Personalized Machine Learning (ML) Solutions for Real-World Challenges
Goal: You will design customized ML models addressing authentic problems, gaining hands-on experience with personalized algorithms that reflect today's shift toward tailor-made intelligent systems.
The Problem and Its Relevance
Traditional machine learning development has been dominated by tech specialists building general-purpose solutions that may not address the specific problems faced by individuals and communities. This creates a gap: people who deeply understand real-world challenges lack the tools to create AI solutions, while those with technical expertise often lack contextual understanding of authentic problems. Tools like Teachable Machine are changing this dynamic by democratizing ML model creation. Now, anyone can design and train customized machine learning models without coding expertise or advanced technical knowledge. This shift enables a problem-first approach: starting with authentic challenges you have personally encountered and building tailored solutions rather than adapting generic AI tools to fit your needs.
Why Does This Matter?
Understanding personalized machine learning matters because: (i) Democratization of AI development: Tools like Teachable Machine make ML model creation accessible to anyone, shifting power from tech specialists to domain experts who understand real problems; (ii) Problem-first thinking: Building ML from authentic challenges you have experienced teaches you to identify where AI genuinely adds value versus where it is unnecessary complexity; (iii) Iteration literacy: Learning to improve models through community feedback and data refinement reflects how real-world AI systems evolve: they are never 'finished' products; (iv) Custom solutions beat generic tools: A personalized ML model trained on your specific context often outperforms sophisticated general-purpose AI that lacks your domain knowledge; (v) Understanding AI limitations through practice: Building your own model reveals what ML can and cannot do, creating healthy skepticism about AI capabilities; (vi) Community-centered design: Identifying who will test and provide feedback teaches responsible AI development that prioritizes end-user needs over technical sophistication; (vii) Creative problem-solving with constraints: Working within Teachable Machine's accessible framework forces innovative thinking about how to address complex problems with simple tools. So, the shift toward personalized ML means you can create intelligent systems tailored to problems that matter to you and your community, rather than waiting for tech companies to maybe address your needs someday.
Three Critical Questions to Ask Yourself
Is this a problem where machine learning genuinely adds value, or am I forcing ML onto a challenge better solved with simpler approaches?
Who experiences this problem directly, and have I involved them in designing, testing, and iterating on my ML solution, or am I building in isolation?
Can I clearly explain how my model works and when it fails to the people who would actually use it, ensuring transparency and trust?
Roadmap
Read this content and/or watch this video. Now learn what Teachable Machine is. You may watch these introductory and explanatory short videos: 1; 2; 3; 4; 5; 6.
In groups, your task is to:
(i) Design a machine learning (ML) model that addresses a real-world problem (any problem, anywhere). Tip: You may unpack the root problem of someone’s story and figure out whether ML can play a role in addressing it.
P.S. My example: Cornell’s BirdNET was the only ML model that helped me identify the singing of a bird that I had recorded once on my phone. There are many more cool examples online, including scientific literature. My suggestion, though: rely on your stories, your problems first. They will make this group project much more interesting and engaging if the issue you will be tackling together emerges organically.
(ii) Justify why the group chose this specific problem;
(iii) Explain how your ML model innovates in tackling this problem;
(iv) Point out who in the target community you would approach to test your ML model and provide further feedback (mainly data) so you can iterate and improve the confidence of its outputs; anticipate how you plan to improve the model through iteration and explain how you would approach these testers;
(v) Share the link to your model and provide one statement explaining why we should click on it. Tip: Be not only curious and pragmatic in this group activity but also ensure you all have fun in this discovery and development process while making the most of each one’s creative, analytical, and critical skills.
Individual Reflection:
By replying to the group's post, share what you have learned (or not) from engaging in this activity. You may include whether you will be using customized ML in your future projects (why and how).
Bottom Line
Personalized machine learning succeeds when you start with authentic problems you understand deeply, not with technology seeking a purpose. Tools like Teachable Machine democratize AI creation, transforming you from consumer to builder, but only when you involve the people experiencing the problem and iterate based on their feedback. The best ML solutions come from domain experts who understand context, not technical specialists working alone. Your goal is not the most advanced model; it is one that genuinely solves a real problem for real people. When you can explain what your model does, when it fails, and how you will improve it, you have mastered responsible AI development that serves communities rather than just showcasing technology.
AI as Creative Catalyst: Cognitive Offloading for Enhanced Design Innovation
Goal: You will leverage AI as cognitive offloading tools to enhance creative outcomes, using AI-generated outputs as springboards for original innovation while maintaining creative agency.
The Problem and Its Relevance
Research by Chandrasekera, Hosseini, and Perera (2025) reveals that generative AI significantly enhances creativity in design while reducing cognitive load. AI-assisted students demonstrated superior creative outcomes in both novelty and resolution compared to non-AI groups, with these benefits persisting even after AI assistance was removed.
Why Does This Matter?
AI tools act as cognitive offloading mechanisms that reduce mental demand during the conceptualization phase, allowing designers to allocate their cognitive resources more efficiently toward creative problem-solving rather than being overwhelmed by information processing. For instance, AI-generated images serve as visual catalysts that stimulate lateral thinking and abstraction skills, prompting designers to interpret, deconstruct, and recontextualize unexpected visual outputs in pursuit of innovative concepts. Thus, use generative AI not as a source of ready-made solutions to copy, but as a 'co-creator' that expands creative horizons. Treat AI outputs as springboards for developing original concepts through reflective practice, where you critically assess and reinterpret AI suggestions to deepen your understanding of design principles and cultivate genuinely novel ideas while maintaining your own creative agency and contextual awareness.
So, understanding AI as a creative catalyst matters because it (i) Frees cognitive resources: By offloading routine mental processing to AI, you have more mental energy available for genuine creative innovation; (ii) Breaks creative blocks: AI-generated unexpected outputs stimulate lateral thinking and help you escape conventional patterns; (iii) Enhances creative skills permanently: The cognitive benefits persist even after AI assistance is removed, suggesting AI helps develop rather than replace creative capacity; (iv) Redefines creative process: Moving from 'AI replaces creativity' to 'AI amplifies creativity' changes how we approach design and innovation; (v) Prevents creative dependency: When used as a springboard rather than a solution provider, AI strengthens rather than weakens your creative muscles; (vi) Democratizes advanced creativity: Cognitive offloading makes sophisticated creative problem-solving accessible to more people, regardless of their baseline processing capacity. The evidence, therefore, is clear: AI does not diminish creativity when used correctly; it enhances it by removing cognitive bottlenecks and providing unexpected stimuli.
Three Critical Questions to Ask Yourself
Am I using AI to offload cognitive burden so I can think more creatively, or am I using it as a shortcut that bypasses my creative process entirely?
When I see AI-generated outputs, do I interpret and transform them into something original, or do I simply copy what looks good?
Are my creative skills improving over time with AI assistance, or am I becoming less capable of generating ideas independently?
Roadmap: Using AI as a Creative Co-Creator
Step 1: Identify Your Cognitive Bottlenecks (10 minutes)
Understand where mental load reduces your creativity:
What routine tasks consume mental energy during your creative process? (research, technical setup, formatting, organizing information)
Where do you get stuck or overwhelmed? (too many options, information overload, technical constraints)
Which parts of your process would benefit from cognitive offloading?
Action: List 3-5 tasks that drain mental energy without adding creative value; these are prime candidates for AI offloading
Step 2: Use AI for Cognitive Offloading, Not Creative Replacement (10 minutes)
Strategic delegation of routine mental work:
Offload information processing: Ask AI to summarize research, organize references, compile examples
Offload technical constraints: Use AI to handle formatting, color theory calculations, basic layouts
Offload routine variations: Generate multiple versions of basic concepts quickly
Never offload: Core creative decisions, conceptual thinking, emotional intuition, contextual judgment
Action: For your current project, identify which tasks to offload and which to keep as your creative domain
Step 3: Generate Unexpected Visual/Conceptual Stimuli (15 minutes)
Use AI to create springboards for lateral thinking:
Prompt AI with intentionally unusual combinations or constraints
Request multiple diverse variations rather than one 'perfect' output
Ask for unexpected interpretations of your theme or concept
Generate outputs that challenge your assumptions or push boundaries
Action: Create 5-10 AI-generated images or concepts related to your project, prioritizing variety and unexpectedness over immediate 'rightness'
Step 4: Practice Active Interpretation and Deconstruction (15 minutes)
Transform AI outputs into original concepts:
Select the most unexpected or intriguing AI-generated output
Ask: "What elements surprise me? What patterns do I notice? What could this become?"
Deconstruct it into components: shapes, relationships, principles, emotions
Recontextualize these elements for your specific creative goal
Combine multiple AI outputs in novel ways AI wouldn't generate
Action: Take one AI output and create three original derivative concepts that reinterpret its elements in your unique way
Step 5: Engage in Reflective Creative Practice (10 minutes)
Deepen understanding through critical assessment:
Compare your AI-inspired concepts to your original vision—how have they evolved?
Identify which aspects came from AI versus which came from your interpretation
Ask: "What design principles or creative insights did I learn from this process?"
Reflect on how AI outputs expanded your thinking versus limiting it
Action: Write a brief reflection: 'AI showed me [X], which helped me realize [Y], leading me to create [Z] which is entirely my own'
Step 6: Iterate Without AI to Test Independence (10 minutes)
Ensure you are developing, not replacing, creative capacity:
Take your AI-inspired concepts and develop them further without any AI assistance
Push your ideas in directions AI wouldn't naturally go
Add personal, contextual, or emotional elements that only you can provide
Test whether you can continue innovating independently
Action: Spend 10 minutes developing your concept using only your own creative judgment—does your creativity flow more easily now?
Step 7: Build a Springboard Library (5 minutes)
Create reusable creative stimulus resources:
Save AI outputs that successfully sparked lateral thinking
Document which types of AI prompts generate the most useful unexpected results
Note patterns in how you successfully transform AI outputs into original work
Build a personal collection of "creative catalyst" techniques
Action: Create a folder of your most effective AI springboards and note why each was useful
Step 8: Monitor Your Creative Development (5 minutes)
Track whether AI is enhancing or replacing your creativity:
Monthly check: Can you generate creative ideas more easily than before using AI?
Compare projects: Are your AI-assisted works more innovative than purely independent work?
Skill assessment: Are you learning new creative approaches from AI interactions?
Dependency test: Can you still create effectively when AI is unavailable?
Action: Set monthly reminders to assess: "Is my creativity stronger with AI as a catalyst?"
Bottom Line
Research confirms that AI can significantly enhance creativity, not by replacing human creative thinking, but by offloading cognitive burden and providing unexpected visual/conceptual stimuli that spark lateral thinking. The key is using AI as a 'co-creator' that expands your creative horizons rather than a source of ready-made solutions. When you treat AI outputs as springboards for interpretation, deconstruction, and recontextualization, you develop genuinely novel ideas while maintaining creative agency. The benefits persist even after AI assistance is removed, suggesting this approach strengthens rather than weakens creative capacity. Your role is to remain the creative decision-maker who interprets, transforms, and contextualizes AI outputs through your unique perspective, using cognitive offloading strategically to free mental resources for deeper creative problem-solving.
AI as Communication Polish: Focus on Ideas, Not Mechanics
Goal: You will use AI to refine communication mechanics while maintaining personal and creative ownership, freeing mental energy for idea generation rather than technical perfection concerns.
The Problem and Its Relevance
Many people struggle to express their ideas not because they lack creativity or insight, but because they get bogged down by the mechanics of communication: syntax, spelling, grammar, sentence structure, and word choice. This creates a creative bottleneck where time and mental energy spent on technical corrections reduces capacity for original thinking. A simple approach transforms AI into a powerful communication tool: Before adding your text (or voice) to an LLM, use a prompt like: 'Rewrite this short paragraph to ensure it is concise, coherent, and error-free. Keep original ideas intact'. This lets you focus on what truly matters: your ideas. Instead of getting bogged down by technical concerns, you can think freely and let your creativity flow. The LLM handles the technical polish while you concentrate on originality, meaning, and authentic expression. This is not about letting AI think for you. You are doing all the thinking. The AI simply helps you communicate your ideas more effectively. By freeing yourself from mechanical concerns, you can devote your energy to what you do best: generating insights, developing arguments, and expressing yourself authentically. In this way, LLMs become not thinking partners, but precision tools that amplify your voice and clarify your message.
Why Does This Matter?
Understanding AI as a communication polish tool matters because: (i) Democratizes effective communication: People with great ideas but weaker technical writing skills can communicate as clearly as professional writers; (ii) Removes barriers to participation: Non-native speakers, people with learning differences, or those struggling with formal writing can contribute insights without disadvantage; (iii) Preserves authentic voice: Unlike AI-generated content, polished original writing maintains your unique perspective, style, and personality; (iv) Increases creative productivity: Time saved on mechanical editing can be reinvested in thinking, researching, and developing better ideas; (v) Maintains intellectual ownership: You remain the originator and decision-maker; AI is your editing assistant, not your ghostwriter. The key benefit: AI handles the 'how to say it' so you can focus entirely on 'what to say'.
Three Critical Questions to Ask Yourself
Am I using AI to polish my own ideas, or am I asking AI to generate ideas for me?
Does the AI-polished version still sound like me and preserve my intended meaning?
Am I becoming more confident in expressing ideas, or becoming dependent on AI to communicate?
Roadmap: Using AI for Strategic Communication Polish
Step 1: Separate Thinking from Polishing (5 minutes)
Establish clear boundaries:
Thinking stage: Generate ideas, arguments, insights independently
Polishing stage: After ideas are complete, AI refines technical presentation
Never combine these stages: think first, polish later
Action: Commit to this rule: 'I will always complete my thought before asking AI to polish it'
Step 2: Create Your Standard Polish Prompt (5 minutes)
Develop a consistent prompt that preserves your voice:
Basic template: 'Rewrite this to be concise, coherent, and error-free. Keep original ideas intact'
Add personalization: 'Maintain a [casual/formal/academic] tone'
Include preservation: 'Do not change my core arguments or add new ideas'
Action: Write and save your personal polish prompt for repeated use
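If you ever send drafts to a model programmatically rather than through a chat window, the same habit can be captured in a few lines of code. Below is a minimal Python sketch of that idea; the function names and the example tone are illustrative assumptions, and send_to_llm is only a placeholder for whichever model or API you actually use.

```python
# Reusable "polish prompt" wrapper: you finish the thinking and the raw draft
# first, then wrap it in instructions that permit only surface-level edits.

def build_polish_prompt(draft: str, tone: str = "casual") -> str:
    """Wrap a completed draft in a polish-only instruction."""
    return (
        "Rewrite this to be concise, coherent, and error-free. "
        "Keep original ideas intact. "
        f"Maintain a {tone} tone. "
        "Do not change my core arguments or add new ideas.\n\n"
        f"Draft:\n{draft}"
    )

def send_to_llm(prompt: str) -> str:
    """Placeholder: swap in whichever chat model or API you actually use."""
    raise NotImplementedError

if __name__ == "__main__":
    raw_draft = "my point is that ideas matter more then grammer, polish comes later"
    print(build_polish_prompt(raw_draft))
```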
Step 3: Practice Free Expression (10 minutes)
Train yourself to express ideas without self-censoring:
Write or speak your thoughts for 5 minutes without stopping to edit
Do not worry about grammar, spelling, or perfect phrasing
Focus on what you want to say, not how it sounds
Action: Complete a timed 5-minute free-write on any topic
Step 4: Apply AI Polish (10 minutes)
Use your polish prompt on the draft:
Feed your raw draft to the LLM with your standard polish prompt
Review carefully: does it preserve your meaning?
Check that your voice and perspective remain intact
If AI changed your meaning or made you generic, edit it back
Action: Polish your Step 3 draft and compare before/after
Step 5: Verify Ownership (5 minutes)
Ensure you remain the thinker:
Ask: 'Could I explain and defend every idea in this polished version?'
Check that no new concepts appeared
Confirm AI only improved clarity, not substance
Action: Highlight any sentences that do not reflect your original thinking
Step 6: Build Consistent Habits (10 minutes)
Apply strategically across contexts:
Emails: Draft naturally, then polish
Essays: Write full sections, then polish each
Presentations: Outline points yourself, then polish language
Never start with AI: always start with your thinking
Action: Identify three contexts where you will use this approach this week
Step 7: Monitor Voice Preservation (5 minutes)
Check that AI is not homogenizing your expression:
Compare polished to original: does it still sound like you?
Ask others: 'Does this sound like me?'
Watch for generic or personality-less writing
Action: Save 2-3 before/after examples to track voice preservation
Step 8: Maintain Independent Skills (5 minutes)
Prevent dependency:
Regularly communicate without AI polish
Practice final-draft-quality writing occasionally
Use polish strategically, not universally
Action: Designate 'no AI polish zones' (personal messages, journals)
Bottom Line
AI can be an exceptional communication polish tool when boundaries remain clear: you generate ideas, AI refines presentation. This democratizes effective communication and frees mental energy for creative thinking. However, you must remain the originator of ideas and guardian of your authentic expression. AI polishes your voice; it does not replace it. When used as a precision tool rather than a thinking partner for your highly personal and creative tasks, AI amplifies your ability to share ideas clearly while preserving what makes your perspective unique and valuable.
Model Chaining: Strategic Multi-Model Workflows for Better AI Outputs
Goal: You will (i) understand model chaining principles; (ii) leverage specialized AI strengths strategically; (iii) recognize amplification risks; and (iv) design effective multi-model workflows with verification.
The Problem and Its Relevance
Model chaining (or 'multi-model workflows') is the practice of using the output from one AI model as input for another model to enhance the quality, accuracy, or style of your final result. Instead of relying on a single model to do everything, you create a workflow where different models handle different parts of the task based on their strengths. Think of it like an assembly line: each station (model) specializes in one aspect of the work, and the product improves as it moves through the line. Key Benefits: (i) Leverage specialized strengths: Different models excel at different tasks (creative writing, technical accuracy, concise summarization); (ii) Improved quality: One model generates content, another refines it—catching errors and enhancing clarity or style; (iii) Separation of concerns: Breaking complex tasks into stages makes each step manageable with control points for review; (iv) Iterative refinement: Progressively improve outputs through multiple passes, each focusing on specific dimensions (accuracy, tone, structure). Critical Limitations: (i) Error amplification: If the first model makes a mistake, subsequent models treat that error as fact and build upon it; (ii) Loss of coherence: Each model has its own style and logic, creating awkward transitions or inconsistencies when outputs are stitched together; (iii) Time and cost: Multiple model calls take longer and may cost more than a single well-crafted prompt; (iv) Diminishing returns: Additional chaining steps may add complexity without meaningfully improving results; (v) Over-reliance risk: Automatic chaining without human review produces polished-looking content that contains subtle errors or drifts from your intent.
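To make the assembly-line idea concrete, here is a minimal Python sketch of a two-stage chain with a human checkpoint between the models. The call_drafting_model and call_editing_model functions are placeholders for whichever systems you actually use; the point is the structure: generate, pause for human verification, then refine.

```python
# Minimal two-stage model chain with a human verification checkpoint.
# Both call_* functions are placeholders, not real APIs.

def call_drafting_model(task: str) -> str:
    """Stage 1: a model chosen for its strength at generating first drafts."""
    raise NotImplementedError("Replace with a real model call.")

def call_editing_model(draft: str) -> str:
    """Stage 2: a model chosen for its strength at tightening and correcting."""
    raise NotImplementedError("Replace with a real model call.")

def human_verify(text: str, checklist: list[str]) -> bool:
    """Pause the chain: a person reviews the text against a short checklist."""
    print(text)
    return all(input(f"Check: {item} (y/n) ").strip().lower() == "y" for item in checklist)

def run_chain(task: str) -> str:
    draft = call_drafting_model(task)
    # Verification point: an error caught here never reaches the second model,
    # so it cannot be treated as fact and amplified downstream.
    if not human_verify(draft, ["Facts are accurate", "Output matches my intent"]):
        raise ValueError("Draft rejected; fix Stage 1 before continuing.")
    return call_editing_model(draft)
```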
Why Does This Matter?
Understanding model chaining matters because: (i) Complexity illusion: Multi-step AI workflows look sophisticated and may create false confidence that 'more AI = better results', obscuring the need for human judgment; (ii) Error multiplication: Unlike human editing where errors get caught, chained AI models compound mistakes—each subsequent model treats the previous model's hallucinations as verified facts; (iii) Strategic capability: When used correctly with human verification at each stage, model chaining genuinely enhances output quality by leveraging different AI strengths; (iv) Resource efficiency paradox: While chaining can improve results, it can also waste time and money if a single well-crafted prompt would have sufficed; (v) Quality control bottleneck: The more models in your chain, the more verification points you need, but also the more tedious verification becomes, increasing the temptation to skip it; and (vi) Skill development: Learning when and how to chain models effectively is a valuable AI literacy skill that distinguishes novice from expert users. The key insight: Model chaining is powerful when strategically applied with human oversight at critical junctures, but dangerous when automated without verification.
Three Critical Questions to Ask Yourself
Does this task genuinely require multiple specialized models, or am I overcomplicating something a single well-prompted model could handle?
At what points in my model chain will I insert human verification to catch errors before they amplify through subsequent models?
Am I chaining models because it improves quality (strategic), or because it feels more sophisticated (theatrical), and how will I measure the difference?
Roadmap: Designing Effective Model Chains with Verification
Step 1: Assess Task Complexity (5 minutes)
Before building a chain, determine if you need one:
Can a single model with a well-crafted prompt accomplish this task adequately?
Does the task have genuinely distinct stages that benefit from different AI strengths?
Will the quality improvement from chaining justify the additional time and cost?
Action: Write a simple decision rule: 'I need model chaining when [specific condition], otherwise I will use a single model'
Step 2: Map Model Strengths to Task Stages (10 minutes)
If chaining is warranted, strategically assign models:
Break your task into distinct stages, for instance: research → draft → refine → format
Identify which models excel at each stage (creative generation vs. technical accuracy vs. editing vs. formatting)
Match model strengths to stage requirements rather than using the same model for everything
Action: Create a workflow diagram: Stage 1 (Model A for X strength) → Verification Point → Stage 2 (Model B for Y strength) → Verification Point → etc.
Step 3: Insert Strategic Verification Points (10 minutes)
Human oversight is non-negotiable:
Place verification checkpoints between each model in the chain (minimum)
Focus verification on stages where errors would be most consequential or likely
Design quick verification protocols for each checkpoint (what specifically to check)
Never allow fully automated chains without human review
Action: For each transition between models, write a 3-5 item "Verification Checklist" of what to manually review
Step 4: Design Error Detection Mechanisms (10 minutes)
Anticipate where problems will emerge:
First model output: Check for factual accuracy before it becomes "truth" for subsequent models
Middle stages: Look for coherence breaks, style inconsistencies, or logic gaps
Final output: Verify that the endpoint still aligns with your original intent
Create feedback loops: if Stage 3 reveals problems, go back to Stage 1 rather than trying to fix downstream
Action: Identify the 'highest risk' stage in your chain (where errors would be most damaging) and design extra verification for that point
Step 5: Implement Cost-Benefit Tracking (5 minutes)
Measure whether chaining is worth it:
Track time spent: single prompt approach vs. multi-model chain
Track quality outcomes: does chaining consistently produce better results?
Monitor error rates: are you catching errors before amplification, or discovering them too late?
Calculate cost: multiple API calls vs. single model usage
Action: For your first 3-5 chained workflows, log: Time Investment | Quality Improvement (1-10 scale) | Errors Caught | Total Cost
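Such a log needs no special tooling; a plain CSV file is enough. A minimal sketch follows, assuming a file named chain_log.csv and the columns listed in the Action above (the filename and the sample values are invented for illustration):

```python
# Append one row per chained workflow to a simple CSV cost-benefit log.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("chain_log.csv")  # example filename
COLUMNS = ["date", "task", "time_minutes", "quality_1_to_10", "errors_caught", "total_cost"]

def log_chain_run(task: str, time_minutes: int, quality: int, errors_caught: int, total_cost: float) -> None:
    """Add one row to the log, writing the header the first time."""
    is_new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new_file:
            writer.writerow(COLUMNS)
        writer.writerow([date.today().isoformat(), task, time_minutes, quality, errors_caught, total_cost])

# Example entry with invented numbers:
log_chain_run("Research report chain", time_minutes=45, quality=8, errors_caught=2, total_cost=0.60)
```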
Step 6: Build Reusable Chain Templates (10 minutes)
Once you identify effective workflows:
Document successful chains: which models in which sequence for which types of tasks
Create prompt templates for each stage that work well together
Note which verification points are most critical for each template
Build a personal library of proven chains rather than reinventing each time
Action: Create 2-3 'Standard Chain Templates' for common tasks, for instance: 'Research Report Chain', 'Creative Content Chain', 'Technical Documentation Chain'
Step 7: Practice Single-Prompt Mastery First (10 minutes)
Before defaulting to chains:
Develop skill in crafting comprehensive single-model prompts
Learn each model's full capabilities—many tasks don't need chaining
Use chaining only when you've confirmed a single model can't achieve your goals
Remember: expertise means knowing when NOT to chain as much as when to chain
Action: For your next AI task, challenge yourself: 'Can I achieve 80% of my goal with one model and one excellent prompt?' Only chain if the answer is no.
Step 8: Conduct Regular Chain Audits (5 minutes)
Prevent over-complexity creep:
Monthly: Review your model chains—are they still necessary or have they become habitual?
Test simplified versions: try removing one model from the chain—does quality actually decrease?
Check for diminishing returns: is the 3rd or 4th model in your chain adding real value?
Eliminate zombie chains: workflows you continue using out of habit rather than demonstrated benefit
Action: Set a monthly reminder to review your most-used chains and simplify or eliminate those that do not justify their complexity
Bottom Line
Model chaining is a powerful technique when strategically applied to tasks that genuinely benefit from specialized AI strengths at different stages. However, it is also easy to over-engineer workflows that create complexity without proportional quality gains. The key principles are: (i) verify that chaining is necessary before building it; (ii) insert human verification between every model transition to prevent error amplification; (iii) match model strengths to specific task requirements rather than chaining arbitrarily; (iv) measure cost-benefit to ensure chaining actually improves outcomes; and (v) develop single-prompt mastery first so you chain by design, not by default. Done well, model chaining leverages the best of multiple AI systems. Done poorly, it compounds errors while creating an illusion of sophistication. Your role is to be the architect who designs chains strategically and the quality controller who catches problems before they cascade.
LLMs and Data Quality: Mastering the 'Garbage In, Garbage Out' Principle
Goal: You will (i) understand LLM analytical capabilities; (ii) recognize data quality's critical impact on outputs; and (iii) develop systematic verification practices for AI-generated insights.
The Problem and Its Relevance
Large Language Models (LLMs) excel at transforming unstructured information into actionable insights. They can automatically scan vast datasets, identify patterns, extract key themes, and present findings in structured formats. They categorize content by relevance and popularity, quantify discussion volumes around specific aspects, distinguish between different perspectives (supportive, neutral, or critical viewpoints), and correlate each insight with supporting evidence counts -- all while maintaining objectivity. This capability allows you to quickly grasp complex landscapes without manually reviewing thousands of individual data points, enabling faster, more informed decision-making based on comprehensive analysis rather than limited sampling. However, the fundamental principle remains critical: 'garbage in, garbage out'. When LLMs are trained on flawed or poorly curated source material, they perpetuate and amplify those errors, creating cascading inaccuracies. The quality of any output is only as reliable as the underlying data sources. Flawed training datasets skew results, leading to error-prone outputs that become increasingly problematic over time. Even when you provide your own data to an LLM, if that data is unreliable, the analysis will be fundamentally compromised.
Why Does This Matter?
Understanding the 'garbage in, garbage out' principle matters because: (i) Speed creates false confidence: LLMs analyze data so quickly and present it so professionally that we may trust outputs without questioning the underlying data quality; (ii) Errors compound: When flawed analysis informs decisions, which then generate more data, which feeds back into future analysis, small initial errors snowball into major systemic problems; (iii) Pattern recognition amplifies bias: LLMs are excellent at finding patterns -- including patterns in flawed, biased, or inaccurate data, which they then present as legitimate insights; (iv) Scale magnifies consequences: When decisions based on flawed LLM analysis affect thousands or millions of people, the impact of poor data quality becomes catastrophic; (v) Verification burden shifts: You become responsible for validating not just the LLM's reasoning but also the quality of every data source it uses; (vi) Trustworthiness illusion: Professional formatting and confident presentation mask fundamental data quality issues, making bad information look authoritative. The simple rule: Always check everything. No matter how impressive the analysis appears, verification is non-negotiable.
Three Critical Questions to Ask Yourself
What is the source and quality of the data this LLM was trained on, or that I am feeding into it, and how can I verify its reliability?
Am I being seduced by the speed and polish of AI-generated analysis into skipping the verification steps I would normally take with human-generated insights?
What would happen if this analysis is wrong, and have I implemented sufficient checks to catch errors before they cause real-world harm?
Roadmap: Systematic Verification for LLM-Generated Insights
Step 1: Evaluate Your Data Sources (10 minutes)
Before using an LLM for analysis:
Identify all data sources being analyzed (documents you upload, websites it references, training data it was built on)
Assess each source's credibility: Who created it? When? For what purpose? What biases might exist?
Check for data quality issues: missing information, inconsistencies, outdated material, known inaccuracies
Action: Create a 'Data Source Quality Checklist' rating each source on credibility, completeness, currency, and accuracy (1-5 scale)
Step 2: Understand the LLM's Training Data (10 minutes)
For the LLM you're using:
Research what datasets it was trained on (check documentation, model cards, company disclosures)
Identify known limitations or biases in that training data
Understand the knowledge cutoff date—don't trust analysis of events after that date without verification
Look for documented issues with the model's reliability in your specific domain
Action: Write a one-paragraph summary: 'This LLM was trained on [X], which means it may be unreliable for [Y], especially regarding [Z]'
Step 3: Request Transparent Analysis (5 minutes)
When asking the LLM to analyze data:
Explicitly request that it cite sources for every claim
Ask it to note any limitations, uncertainties, or potential biases in the data
Request quantification: 'Based on X sources' or 'Mentioned Y times' rather than vague claims
Instruct it to flag any inconsistencies or contradictions it finds
Action: Create a standard prompt template that includes these verification requirements
Step 4: Implement Spot-Check Verification (15 minutes)
Never accept LLM analysis at face value:
Randomly select 10-20% of the LLM's claims to manually verify against original sources
Check the most surprising or consequential findings first: these are the highest risk
Verify that quotes are accurate and not taken out of context
Confirm that quantitative claims (percentages, counts, trends) match the underlying data
Action: For each analysis, document your spot-checks in a simple verification log
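If the LLM returns its findings as a list of discrete claims, the 10-20% sample can be drawn mechanically so you do not unconsciously pick only the easy claims to check. A minimal Python sketch, with placeholder claims standing in for real output:

```python
# Randomly sample 20% of an LLM's claims for manual spot-check verification.
import math
import random

claims = [
    "Placeholder claim 1 from the LLM's analysis",
    "Placeholder claim 2 from the LLM's analysis",
    "Placeholder claim 3 from the LLM's analysis",
    "Placeholder claim 4 from the LLM's analysis",
    "Placeholder claim 5 from the LLM's analysis",
]

sample_rate = 0.2  # spot-check 20% of claims, and never fewer than one
sample_size = max(1, math.ceil(len(claims) * sample_rate))

for claim in random.sample(claims, sample_size):
    print("Manually verify against the original source:", claim)
```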
Step 5: Cross-Reference with Alternative Sources (10 minutes)
Avoid single-source dependency:
Use multiple LLMs to analyze the same data and compare their findings
Consult human expert analysis of the same topic area when available
Check whether mainstream sources or domain experts agree with the LLM's conclusions
Look for contradictory evidence that the LLM may have missed or downplayed
Action: Identify at least two alternative sources to cross-reference for every significant LLM-generated insight
Step 6: Apply the 'Consequence Test' (5 minutes)
Before acting on LLM analysis:
Ask: 'What happens if this analysis is completely wrong?'
If consequences are significant, increase verification rigor proportionally
For high-stakes decisions, require human expert review regardless of LLM confidence
Never let the LLM make final decisions on matters with serious consequences
Action: Rate each analysis by consequence level (low/medium/high) and apply corresponding verification protocols
Step 7: Build Error Detection Habits (5 minutes)
Develop systematic skepticism:
Look for signs of hallucination: overly specific claims without sources, inconsistent details, confident statements about unknowable information
Check for logical coherence: Do the conclusions actually follow from the evidence presented?
Watch for pattern over-fitting: Is the LLM finding patterns that seem too neat or convenient?
Question professional presentation: Do not let polished formatting substitute for accuracy
Action: Create a personal 'Red Flags Checklist' of warning signs that should trigger immediate deeper verification
Step 8: Document and Learn (5 minutes)
Track your verification outcomes:
When you find errors, document what went wrong: was it the training data, your input data, the LLM's reasoning, or your prompt?
Note which types of analysis this particular LLM handles well versus poorly
Build a personal knowledge base of reliable versus unreliable use cases
Share findings with colleagues to build collective verification intelligence
Action: Maintain a simple log: Date | Task | LLM Used | Errors Found | Root Cause | Lessons Learned
Bottom Line
LLMs are powerful tools for transforming unstructured data into actionable insights, but they are amplifiers: they make good data better and bad data worse. The 'garbage in, garbage out' principle means you bear responsibility for both the quality of input data and the verification of outputs. Speed and polish are not substitutes for accuracy. Always check everything. No matter how impressive the analysis appears, systematic verification is the only path to trustworthy AI-generated insights. The goal is not to avoid LLMs but to use them intelligently: leverage their analytical power while maintaining rigorous quality control at every step.
From Internet to AI: Parallel Transformations in Power and Reality
Goal: You will (i) compare internet and AI era transformations; (ii) critically evaluate power dynamics and reality distortions; and (iii) develop strategies for maintaining human agency.
The Problem and Its Relevance
Schmidt and Cohen (2015) examined how the internet fundamentally reshaped power, identity, and reality. Now, AI represents a parallel transformation, but one that is potentially deeper and faster. Understanding the parallels helps us anticipate challenges and opportunities:
Power & Control
Internet era: 'Who will be more powerful: the citizen or the state?' Tech-savvy autocracies versus democracies
AI era: Who holds more power: humans or AI systems? Will AI concentrate power in tech companies or democratize it? Authoritarian AI surveillance versus democratic regulation struggles
Understanding & Unpredictability
Internet era: 'The Internet is among the few things humans have built that they do not truly understand... the largest experiment involving anarchy in history'
AI era: AI systems operate in ways their creators cannot fully explain or predict -- an unprecedented experiment with intelligence itself, with unknown emergent behaviors
Human Potential
Internet era: 'A new wave of human creativity and potential is arising... almost everybody [can] own, develop and disseminate real-time content without intermediaries'
AI era: AI will augment human capabilities and unlock new creativity. Anyone can create professional-quality content, code, art, and analysis without specialized training
Access & Transformation
Internet era: 'By 2025, the majority of the world's population will... have gone from virtually no access to unfiltered information to accessing all of the world's information'
AI era: Within years, humanity moves from limited expert knowledge to personalized AI assistants providing instant expertise on any topic
Identity & Reality
Internet era: 'Are our virtual identities becoming real?'
AI era: As AI generates synthetic media and personas, what is real versus artificial? Will AI-generated content become indistinguishable from human creation?
Why Does This Matter?
These parallel transformations matter because: (i) History provides limited guidance: The internet era showed us that technological transformation outpaces our ability to understand or regulate it. AI is moving even faster; (ii) Power concentration risks: Just as the internet concentrated power in platform companies, AI could create even greater power imbalances unless we act intentionally; (iii) Reality erosion: If we struggled with 'fake news' on the internet, AI-generated synthetic realities pose exponentially greater challenges to truth and trust; (iv) Agency paradox: Technology that promises empowerment can simultaneously undermine human agency if we become dependent without maintaining critical capabilities; (v) Compressed timelines: What took decades with the internet may take years with AI. We have less time to adapt, regulate, and protect human values; (vi) Irreversibility: Unlike internet content that can be deleted, AI systems trained on data create permanent embeddings, making mistakes harder to undo.
Three Critical Questions to Ask Yourself
Am I using AI to augment my own capabilities and creativity, or am I becoming dependent on it in ways that erode my skills and agency?
Who benefits from the AI systems I use? Am I the customer, or am I the product whose data concentrates power in corporate or state hands?
How do I distinguish real from AI-generated content, and what strategies do I need to maintain truth and trust in an era of synthetic realities?
Roadmap: Navigating the AI Transformation Intentionally
Step 1: Map Your Power Position (10 minutes)
Identify which AI systems you use daily (search, assistants, recommendations, content generation)
For each system, ask: 'Who controls this? Who benefits from my data?'
Assess whether these tools democratize your capabilities or concentrate power elsewhere
Action: Create a simple chart: AI Tool | Who Controls It | How I Benefit | Risks to My Agency
Step 2: Audit Your AI Dependence (15 minutes)
List skills or knowledge you have outsourced to AI in the past year
Identify which dependencies enhance your capabilities versus which erode essential skills
Consider what happens if these AI systems disappear or change their terms
Action: Categorize dependencies as 'Strategic augmentation' vs. 'Risky dependence' and plan to reduce the latter
Step 3: Develop Reality Verification Practices (15 minutes)
Establish personal protocols for verifying information and content authenticity:
Cross-reference AI-generated information with multiple sources
Look for verification markers (source attribution, methodology, transparency)
Question content that triggers strong emotional reactions
Seek original sources rather than relying on summaries
Action: Write 3-5 rules for verifying what is real, for instance 'Never share AI-generated news without checking original source'
Step 4: Build Diverse AI Literacy (10 minutes)
Learn how different AI systems work (at a basic level): training data, algorithms, limitations
Understand which tasks AI genuinely excels at versus where it is unreliable
Recognize the difference between AI that augments human judgment versus AI that replaces it
Action: Choose one AI system you use regularly and research: How was it trained? What are its documented limitations? Who controls it?
Step 5: Create Personal AI Governance (15 minutes)
Establish boundaries that preserve your agency:
Which decisions should always involve human judgment, never pure AI recommendation?
What personal information will you never share with AI systems?
How will you balance AI efficiency with maintaining your own skills?
When will you choose human experts or sources over AI assistance?
Action: Write a personal 'AI Bill of Rights' with 5-7 principles, for instance 'I will always make final decisions about relationships, health, and values myself'
Step 6: Engage in Collective Action (10 minutes)
Recognize that individual choices alone cannot address systemic power imbalances
Identify opportunities to influence AI development and regulation:
Support transparency in AI systems you use
Advocate for democratic governance of AI at institutional levels
Choose AI products from companies with strong ethical commitments
Participate in public discussions about AI regulation
Action: Take one concrete collective action this month, for instance sign a petition for AI transparency, join a discussion about AI ethics, switch to more ethical AI providers
Step 7: Regular Reality Checks (5 minutes)
Schedule quarterly reviews of your AI usage patterns
Reassess whether AI is serving your goals or you're serving its algorithms
Update your practices as AI systems evolve
Action: Set calendar reminders for quarterly AI audits using Steps 1-5
Bottom Line
The internet era taught us that technological transformation happens faster than human adaptation or governance. The AI era is accelerating this pattern. Unlike passive internet consumption, AI systems actively shape our thinking, creativity, and decision-making. By understanding the parallels between these transformations, you can act more intentionally: use AI to genuinely augment your capabilities while protecting your agency, verify reality in an age of synthetic content, and participate in collective efforts to ensure AI democratizes rather than concentrates power. The key insight from both eras is that technology's impact depends on human choices, but those choices must be conscious, informed, and made before the transformation becomes irreversible.
AI and Human Evolution: Long-Term Biological Impacts
Goal: You will understand (i) how AI may influence human evolutionary trajectories; (ii) critically assess biological versus cultural adaptation timescales; and (iii) develop intentional AI usage strategies.
The Problem and Its Relevance
Brooks (2024) argues that AI is already affecting human lives in ways that could gradually shape which traits get passed down to future generations -- similar to how farming and cities changed human evolution over thousands of years. While cultural changes happen quickly, biological evolution operates on much longer timescales.
Four key predictions suggest potential evolutionary pressures: (i) Smaller brains: As AI handles memory, calculation, and navigation, humans may evolve smaller, less energy-demanding brains (continuing a trend that began 3,000-5,000 years ago); (ii) Changed attention and personality: People who resist addictive social media and maintain focus may have reproductive advantages, potentially leading to evolution of these traits; (iii) Social and mating changes: AI companions and dating apps are reshaping how people form relationships and find partners, potentially affecting which personality types and social skills get passed on; (iv) Self-domestication effects: AI's role in criminal justice, surveillance, and social control could influence the ongoing 'taming' of human aggression throughout history. Important context: These are speculative predictions based on evolutionary principles. Changes would occur over many generations (thousands of years), not decades, and would be small compared to rapid cultural adaptations.
Why Does This Matter?
Understanding AI's potential evolutionary impacts matters because: (i) Present choices shape future biology: Our current decisions about AI integration could influence the genetic makeup of humanity thousands of years from now; (ii) Cultural adaptation moves faster: We can change laws, education, and technology much faster than our biology can adapt, giving us agency in shaping these outcomes; (iii) Awareness enables intentionality: Recognizing these pressures helps us make conscious choices rather than passively accepting whatever changes AI brings; (iv) Individual habits compound: While evolutionary change is slow, individual cognitive and social habits formed today affect personal development and set patterns for future generations; (v) Prevention is easier than reversal: Once certain cognitive skills or social capacities atrophy at a population level, recovering them becomes increasingly difficult.
Three Critical Questions to Ask Yourself
Which cognitive abilities am I outsourcing to AI, and which do I want to actively maintain or strengthen in myself?
How is my AI use affecting my attention span, social skills, and relationship formation -- and what reproductive or social advantages might these changes create or eliminate?
Am I allowing cultural adaptations (my habits, education, skills) to evolve intentionally, or am I passively accepting whatever changes AI brings to my cognition and behavior?
Roadmap: Developing Intentional AI Usage Strategies
Step 1: Audit Your Cognitive Outsourcing (15 minutes)
List cognitive tasks you have delegated to AI in the past month (navigation, memory, calculation, writing, problem-solving)
Identify which abilities you genuinely want to preserve versus which you are comfortable outsourcing
Action: Create two categories: 'Essential human skills to maintain' and 'Acceptable AI delegation'
Step 2: Assess Attention and Social Impacts (15 minutes)
Honestly evaluate how AI and social media affect your sustained attention
Reflect on how dating apps or AI interactions have changed your social skill development
Consider whether these changes align with the person you want to become
Action: Rate yourself (1-10) on: sustained focus, face-to-face conversation skills, ability to be alone without digital stimulation
Step 3: Design Personal Boundaries (20 minutes)
Establish specific rules for AI use that preserve your target capabilities:
Navigation: Use AI for new places, but challenge yourself to remember familiar routes
Memory: Let AI store facts, but practice remembering important personal information
Writing: Use AI for drafting, but do critical thinking and editing yourself
Social: Limit AI companion interactions; prioritize human relationships
Action: Write 3-5 concrete personal rules, for instance 'No GPS for locations I have visited more than twice'.
Step 4: Build Cognitive Resistance Practice (15 minutes)
Identify one cognitive skill at risk from your AI use
Create a weekly practice routine to maintain it:
Mental math exercises if you over-rely on calculators
Memory games if you have stopped remembering phone numbers
Extended reading sessions if your attention span is declining
In-person social activities if you are over-relying on digital interaction
Action: Schedule specific times for deliberate cognitive practice
Step 5: Adopt Cultural Tools That Support Biology (10 minutes)
Leverage cultural adaptations that can evolve faster than biology:
Use apps that limit social media time rather than waiting for evolved resistance
Set device-free zones in your home to maintain interpersonal skills
Choose educational approaches that exercise cognitive abilities AI might replace
Join communities that value the skills you want to preserve
Action: Implement at least one environmental/cultural safeguard this week
Step 6: Reflect and Adjust Quarterly (5 minutes)
Every three months, revisit your cognitive audit
Assess whether your boundaries are working
Notice if new forms of AI delegation have emerged
Action: Set a calendar reminder to repeat Steps 1-2 quarterly
Bottom Line
While AI's evolutionary impacts will unfold over millennia, your personal cognitive and social development happens now. Cultural adaptations -- your habits, boundaries, and intentional practices -- evolve much faster than biology and remain within your control. By consciously choosing which abilities to maintain and which to delegate, you shape not only your own development but also contribute to the collective patterns that may influence human evolution. The key is intentionality: using AI as a tool that serves your goals rather than passively accepting whatever changes it brings to your cognition, attention, and social life.
Building Privacy-Conscious Custom AI Models
Goal: You will (i) identify AI privacy risks; (ii) understand custom models as mitigation tools; and (iii) build your first privacy-configured custom AI assistant.
The Problem and Its Relevance
A 2025 study analyzing 2.5 million Reddit posts reveals critical privacy concerns with conversational AI platforms. Users face three interconnected risks: (i) Over-sharing through human-like interaction: The conversational nature of AI makes us more likely to share sensitive personal information than we would through traditional interfaces; (ii) Data memorization and exposure: We worry that AI systems memorize our conversations and could later expose our private information to other users; and (iii) Permanent data embedding: Even when we want to delete our information, it may already be permanently embedded in AI training models, making true deletion impossible.
Why Does This Matter?
These privacy risks create a paradox: to get meaningful help from AI, we need to provide context and details, but sharing that information puts our privacy at risk. This tension affects: (i) Personal vulnerability: Sensitive health, financial, or emotional information could be exposed; (ii) Professional security: Confidential work information might leak across organizational boundaries; (iii) Long-term consequences: Once embedded in models, our data persists indefinitely without our control; and (iv) Trust erosion: Privacy concerns prevent us from using AI effectively when we need it most. Custom AI models offer a potential solution by keeping conversations private within controlled environments, but only if configured properly.
Three Critical Questions to Ask Yourself
What sensitive information am I sharing with this AI, and could it be memorized or exposed to others?
Does this AI platform allow me to truly delete my data, or will it remain permanently embedded in the model?
Would a custom AI model within my control better protect my privacy while still giving me the personalized assistance I need?
Roadmap: Building Your First Privacy-Conscious Custom AI (2 Hours)
Hour 1: Setup and Security Foundations
Step 1: Create Anonymous Infrastructure (15 minutes)
Set up a dedicated account with anonymous credentials (avoid personal email addresses)
Choose a free AI platform that supports custom models (Claude Projects, ChatGPT Custom GPTs, etc.)
Create a new project specifically for this exercise
Step 2: Prepare Non-Sensitive Materials (20 minutes)
Gather reference materials you want to use (documents, guides, templates)
Review all materials to ensure they contain NO sensitive personal information
Upload only verified non-sensitive content to your custom AI project
Step 3: Write Privacy-First Instructions (25 minutes)
Draft clear system prompts that establish privacy boundaries:
'Never store personal details or repeat sensitive information from previous conversations'
'If a user shares sensitive data, acknowledge it only in that conversation without memorizing specifics'
'Remind users not to share passwords, financial information, or identifying details'
'Always prioritize user privacy over conversation continuity'
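On platforms that let you configure a custom model programmatically, these boundaries become the system prompt. A minimal sketch follows; ask_private_assistant is a placeholder for whichever platform or API you actually configure, and the wording should be adapted to your own project:

```python
# Assemble the privacy-first rules above into a single system prompt.
PRIVACY_RULES = [
    "Never store personal details or repeat sensitive information from previous conversations.",
    "If a user shares sensitive data, acknowledge it only in that conversation without memorizing specifics.",
    "Remind users not to share passwords, financial information, or identifying details.",
    "Always prioritize user privacy over conversation continuity.",
]

SYSTEM_PROMPT = "You are a privacy-conscious assistant.\n" + "\n".join(f"- {rule}" for rule in PRIVACY_RULES)

def ask_private_assistant(user_message: str) -> str:
    """Placeholder: send SYSTEM_PROMPT plus user_message to the platform you chose."""
    raise NotImplementedError

print(SYSTEM_PROMPT)  # review the assembled instructions before uploading them
```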
Hour 2: Configuration and Verification
Step 4: Configure Data Handling Rules (20 minutes)
Implement strict instructions about what the AI should and shouldn't remember
Set boundaries for different types of information (public vs. private)
Establish protocols for handling accidentally shared sensitive data
Define how the AI should respond when users attempt to share high-risk information
Step 5: Test with Mock Scenarios (30 minutes)
Create fake personal data for testing (fictional names, addresses, financial info)
Test various scenarios:
Share mock sensitive information and ask the AI to recall it later
Try to make the AI repeat fake personal details from earlier in the conversation
Verify the AI refuses to retain or repeat sensitive information
Check if privacy warnings appear when appropriate
Step 6: Evaluate and Refine (10 minutes)
Review test results and identify privacy gaps
Refine your privacy instructions based on how the AI actually behaved
Document what worked and what needs adjustment
Create a personal checklist for future custom AI projects
Bottom Line
Custom AI models configured with privacy-first principles can help mitigate the unique risks of conversational AI. By keeping conversations within controlled environments and explicitly instructing AI to forget sensitive information, you gain the benefits of personalized assistance while maintaining better privacy protection. However, remember: no system is perfect. Always assume anything shared with AI could potentially be exposed, and never share information you absolutely cannot afford to lose control over.
Understanding AI Dataset Curation and Model Training
Goal: You will understand how AI learns from data, recognize the impact of curated datasets on outputs, and develop strategies for diverse, critical AI use.
The Problem and Its Relevance
AI models learn like children do -- through exposure to examples and patterns. Just as a seven-year-old surfer learns that brown and green wetsuits might attract sharks because sharks, the child reasons, might mistake those colors for turtles (their prey), AI makes inferences based on its 'experience' with data. However, AI's experience is limited to its training data. When we use AI tools, our interactions (clicks, preferences, prompts) create 'curated datasets' through human selection patterns. While this personalization can improve immediate results, it creates a feedback loop: AI learns from our choices, then shapes future outputs based on those choices, which influences what we see and select next. This narrowing effect impacts not just AI development but also our own creativity and critical thinking.
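The narrowing effect can be seen in a toy simulation: a recommender that slightly boosts whatever category was clicked last ends up showing less and less variety. This is a deliberately simplified illustration with invented categories, not a model of any real platform:

```python
# Toy simulation of a personalization feedback loop: each click nudges the
# weights toward the clicked category, so recommendations narrow over time.
import random

categories = ["history", "sci-fi", "poetry", "science", "biography"]
weights = {c: 1.0 for c in categories}

for _ in range(50):
    # The system recommends in proportion to current weights, and the user
    # tends to click what is shown most, which closes the loop.
    clicked = random.choices(categories, weights=[weights[c] for c in categories], k=1)[0]
    weights[clicked] += 0.5  # the system "learns" from the click

total = sum(weights.values())
shares = {c: round(weights[c] / total, 2) for c in categories}
print("Share of recommendations after 50 interactions:", shares)
```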
Why does this matter?
If everyone uses the same model with similar prompts and curated datasets, we get homogenized outputs. This affects everything from book recommendations to image generation, potentially limiting cultural diversity, creative expression, and independent thought.
Three Critical Questions to Ask Yourself
Does this AI model have the 'experience' (training data) to reliably answer my question, or should I provide my own dataset?
How might my previous interactions with this AI (my curated dataset) be narrowing or biasing the outputs I receive?
What are the long-term consequences of my choices: both for AI development and for my own cognitive diversity?
Simple and Short Roadmap for Strategic AI Use
Step 1: Assess the Model's 'Experience'
Ask yourself: Is this topic likely covered in the model's training data?
Action: When in doubt about authors, specialized topics, or niche subjects, provide your own reference materials or datasets
Step 2: Test for Curation Bias
Try the same prompt across different AI models
Experiment with diverse prompting styles and approaches
Action: Compare outputs to identify where personalization may be limiting creativity or introducing bias
Step 3: Diversify Your AI Diet
Occasionally use older or less-curated models for creative tasks
Delete browsing traces or start fresh sessions to reduce personalization
Action: Actively seek variety in your AI tools and approaches
Step 4: Practice Conscious Selection
Recognize that every choice (likes, clicks, selections) trains future AI
Consider the collective impact of homogenized human preferences
Action: Make deliberate, varied choices rather than always accepting the first or most convenient option
Step 5: Maintain Your Cognitive Independence
Use AI as a tool, not a replacement for critical thinking
Challenge AI outputs rather than accepting them automatically
Action: Regularly engage with non-AI sources and analog experiences to maintain diverse thinking patterns
Bottom Line
Curated datasets offer convenience but risk narrowing both AI capabilities and human creativity. Use AI strategically: provide your own data when needed, diversify your models and prompts, and remain conscious of how your selections shape both technology and your own cognitive development.
AI as Leverage: Building Foundational Knowledge for Strategic Advantage
Goal: You will understand how foundational knowledge of machine learning and LLM architecture transforms AI from a simple tool into strategic leverage, enabling you to make informed decisions about model selection, customization, and deployment for competitive advantage.
The Problem and Its Relevance
AI fluency will become a baseline skill like email or Excel within five years, and AI should be viewed not just as a tool, but as leverage that can multiply entrepreneurial capabilities. However, there's a critical distinction between using AI as a passive tool and leveraging it strategically. The difference between those who succeed and those who fall behind will not be whether they use AI, but how deeply they understand its foundations.
Why does this matter?
Surface-level usage creates dependency: Using AI without understanding how it works makes you dependent on default settings and vendor decisions
Strategic leverage requires knowledge: If you are not using AI to move faster or make smarter decisions, you are behind, but using it effectively requires understanding how to properly leverage the technology
Foundation enables innovation: Understanding machine learning fundamentals allows you to customize, optimize, and innovate rather than just consume
Competitive advantage comes from depth: Your competitors can access the same AI tools -- your advantage comes from understanding how to configure, fine-tune, and deploy them strategically
The gap in current AI education: Most people approach AI as a black box: input goes in, output comes out. This creates users, not strategists. AI should be used as a multiplier: 'use it, but do not be used by it' -- and this requires foundational understanding of how these systems actually work.
Three Critical Questions to Ask Yourself
Do I understand WHY different learning approaches produce different results? For instance, can you explain the difference between supervised learning (learning from labeled examples), unsupervised learning (finding patterns without labels), semi-supervised learning (combining both), and reinforcement learning (learning through trial and reward)?
Do I understand HOW modern AI systems process and generate information? For instance, can you explain how neural networks transform input through layers of processing to generate output?
Do I know WHEN to customize models versus using them as-is? For instance, can you articulate the difference between prompt engineering, RAG (Retrieval-Augmented Generation), and fine-tuning?
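To ground the first question, here is a minimal scikit-learn sketch contrasting supervised learning (learning from labeled examples) with unsupervised learning (finding structure without labels). The tiny dataset is invented purely for illustration:

```python
# Supervised vs. unsupervised learning on a tiny invented dataset.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Each row: [hours of weekly AI use, years of domain experience]
X = [[1, 2], [2, 1], [8, 9], [9, 8], [1, 1], [9, 9]]
y = [0, 0, 1, 1, 0, 1]  # labels exist, so supervised learning is possible

# Supervised: learn the mapping from features to the known labels.
classifier = LogisticRegression().fit(X, y)
print("Supervised prediction for [7, 7]:", classifier.predict([[7, 7]]))

# Unsupervised: no labels are given; just ask for structure in the same data.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Unsupervised cluster assignments:", clusters)
```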
Your goal is not to become a machine learning researcher. Your goal is to develop sufficient foundational knowledge that you can:
Ask the right questions when evaluating AI solutions
Make informed decisions about model selection and customization
Recognize when vendors are overselling or when opportunities are being missed
Innovate by combining AI capabilities in novel ways
Build sustainable AI strategies that create lasting competitive advantage
That is the difference between using AI as a tool and leveraging AI for strategic advantage.
Goal: You will learn to identify warning signs that professionals are inputting your personal information into large language models, understand the privacy implications, and develop strategies to protect your sensitive data.
Professionals across many fields -- from therapists to tutors to career counselors -- are increasingly using AI tools to help with their work. While these tools can be useful, many professionals use them without informing their clients, potentially exposing highly personal information to AI systems that are not designed to protect sensitive data. Recent investigations reveal that some therapists have been secretly inputting client conversations directly into large language models during sessions.
Why does this matter?
You share sensitive information with professionals (counselors, advisors, tutors, coaches) who may use AI tools
Privacy violations can happen when personal details are entered into systems that are not compliant or secure
Trust is compromised when professionals do not disclose their use of AI
Your consent matters but can only be given if you know AI is being used
Does this communication feel authentic or oddly polished?
Look for 'AI tells': overly formal language, unusual formatting changes, the American em dash (—), addressing every point in your message line by line, or an impersonal tone that doesn't match previous interactions.
Has the professional's response style suddenly changed?
Notice if emails are suddenly longer and more structured, if video sessions involve excessive typing or screen-checking, or if responses seem to echo your words back with therapeutic-sounding language that feels scripted.
Have I been informed about AI use, and do I consent?
Professionals should disclose if they are using AI tools and explain how they are protecting your data. If you suspect undisclosed AI use, you have the right to ask directly: 'Are you using any AI tools when working with me?'
You have the right to know if AI is being used to process your information
You can ask what tools are being used and how your data is protected
You can refuse consent for your data to be entered into AI systems
Watch for sudden changes in communication style or formatting
Notice if responses seem generic rather than personally tailored
Be alert if professionals are unusually focused on screens during conversations
Look for accidentally preserved AI prompts or formatting inconsistencies
If you suspect AI use, ask directly but professionally: 'I noticed [specific observation]. Are you using AI tools to help with our work together?'
Request transparency about which tools are being used
Ask how your personal information is being protected
Inquire whether the tools are compliant with relevant privacy regulations
Clearly state your preferences about AI use with your personal information
Request that professionals de-identify any information before using AI tools (though this is not always sufficient)
Consider whether you are comfortable continuing with a professional who uses AI without disclosure
Document your concerns and any agreements made
If a professional refuses to disclose AI use or dismisses your concerns, consider reporting to:
Their supervisor or institutional ethics board
Professional licensing bodies
Student services or ombudsperson at your institution
Remember: lack of disclosure about AI use can violate professional ethics codes and potentially data protection regulations
Your personal information deserves protection. While AI tools can have legitimate uses, professionals should always be transparent about using them and obtain your informed consent. You have the power to ask questions, set boundaries, and seek accountability when your privacy is at risk.
Lessons from a pilot study show why students prefer ‘tasting the cake’ over following the recipe
Nectir AI represents a thoughtful attempt to bridge artificial intelligence with scaffolded learning, offering students a guided journey through academic content rather than instant answers. During my experience with the platform, I discovered its unique approach to deep learning through probing questions that test knowledge while simultaneously explaining concepts in accessible ways. Unlike traditional AI generators that provide immediate responses with suggested follow-up questions, Nectir withholds easy answers, creating a more conversational and supportive learning environment that feels like engaging with a personal tutor. The platform also provides educators with valuable oversight capabilities, allowing teachers to monitor student usage and customize datasets to align with specific pedagogical goals and learning objectives.
The platform excels at fostering active learning within controlled educational environments, closely mimicking traditional classroom dynamics where students learn under teacher supervision. Nectir's theoretical strength lies in providing each student with their own virtual instructor, capable of asking questions tailored to individual pace and comprehension levels. This approach emphasizes incremental learning as a pathway to long-term educational success, encouraging students to build knowledge systematically. However, this methodology assumes that all students thrive under structured, incremental learning approaches, and that every learner possesses sufficient motivation to engage independently through computer-mediated interactions.
Despite its pedagogical merits, Nectir's approach may inadvertently work against natural learning processes. While students do learn incrementally within formal educational systems, this structured method differs significantly from how humans naturally acquire knowledge before entering formal school systems. Toddlers learn through pattern recognition in vast amounts of information, applying innate logical reasoning and engaging in spontaneous trial-and-error experimentation. Their natural learning is boundless, free-flowing, observable, playful, and inherently relatable and enjoyable. This is the process that has also enabled a small number of my extremely curious and hardworking students to experience how AI can serve as a transformative equity tool to foster exponential learning growth, helping them close the gap with more advanced and privileged students at a rate I have never witnessed before. Modern AI models thus can restore some of this joy by providing rapid, comprehensive responses to minimal and oftentimes poorly structured and incomplete inputs, meeting learners' immediate needs efficiently. The main challenge becomes encouraging under-achieving learners to move beyond surface-level satisfaction and break down responses to pursue deeper, more adventurous forms of knowledge exploration.
My pilot project with students during this summer semester showed disappointing engagement statistics that challenged Nectir's fundamental assumptions. The data showed that students resisted structured, one-way learning like elementary pupils, preferring instead the exploratory freedom reminiscent of toddlers. They wanted quick answers and the autonomy to choose their own learning directions, along with unrestricted access to test new and more powerful technological resources and their respective limits -- much like an unsupervised wild kid pushing a brand new and expensive toy car until it breaks, learning through experimentation rather than top-down instruction. Students wanted to ‘taste the cake’ as quickly as possible, and only then decide whether they were motivated enough to learn how to bake it on their own. This approach is not inherently problematic; the best learning incentive often comes from experiencing the end result before understanding the underlying mechanics. We long for the chance to watch the best perform before deciding whether it is worth our time and effort to pursue the same path.
The summer financial course data provided concrete evidence of limited student engagement with Nectir AI. Despite the platform being continuously and clearly available from the course's early days, not a single student accessed it during the first three weeks. Only after direct requests for feedback in the fourth week did two of nine students engage with the tool. Subsequent assignments saw minimal additional participation, with one more student accessing the platform but declining to comment. Tellingly, once direct instructions ceased, student usage dropped to zero. The students who did engage were already high performers -- those who least needed additional, guided support -- while struggling students, who theoretically could benefit most from the platform, showed no interest whatsoever.
Nectir AI deserves recognition as a platform that successfully replicates traditional teaching methodologies while addressing administrators' and educators' desires for controlled learning environments with monitored AI usage. A small number of academically strong students may appreciate this structured approach and recognize its value. However, most students, particularly those who would benefit most from learning support, show little enthusiasm for this tool. Even high-achieving students seem to prefer publicly available large language models or existing paid services, treating learning more like savoring a finished cake rather than following a detailed recipe. This preference suggests that effective educational technology must balance pedagogical structure with the natural human desire for immediate gratification and autonomous, yet supportive and collective, human-to-human exploration.
Nectir AI appears to have been developed primarily from market research gathering input from administrators and teachers who are suspicious and skeptical about the benefits of AI in learning. Their concerns are certainly valid, and much of what is happening in classrooms with the rapid advent of AI supports these worries. However, I would suggest also collecting feedback from administrators and faculty who lean more toward the positive aspects of guiding students to use AI to learn better, faster, and more enjoyably while using it responsibly and ethically. These professors, like myself, might share that their main pain points are not necessarily distrust of the models themselves, but rather how a platform like Nectir AI can help students in post-learning activities -- specifically in homework assignments based on information shared in class, whether written or verbal, so this data could be automatically uploaded to its training dataset.
Also, Nectir's capabilities with GPT-4o and Claude Sonnet 4 could help instructors in the grading process by automating it while helping us identify patterns in assignments that point to potentially detrimental uses of AI and to unethical or unproductive learning habits. I would also suggest that the Nectir AI team work more closely with students rather than with faculty and administrators for product development, as students are the main beneficiaries in this learning chain and ultimately those who will be using it, hopefully, on a daily basis. Understanding their genuine needs, preferences, and learning behaviors -- rather than assuming what they should want -- might lead to more effective educational technology that bridges the gap between pedagogical rigor and student engagement.
#AILiteracy #TrainingDataAwareness #CriticalAIThinking #DatasetBias #c
Unlocking Statistical Insights Through Interactive AI Exploration
This pre-class exercise harnesses generative AI to help students master statistical concepts. It blends critical reading, AI-driven analysis, and role-playing to make complex ideas accessible. By engaging with this challenge, students will develop the ability to synthesize nuanced arguments and gain confidence in interpreting statistical data.
(image: courtesy of ChatGPT)
Exercise Instructions:
(i) Read the assigned article;
(ii) Access Claude (Perplexity and ChatGPT should also work) and individually attach each of the figures and tables from the article (Find and download them in 'Files') and use the general prompt: 'summarize information from attached figure/table'
(iii) Now ask the chatbot to role-play by adding the following prompt: 'explain this figure/table to me as if you were a statistics professor and I were a first-year college student with no prior knowledge of statistics'
(iv) Ask specific and follow-up questions to the chatbot (as if you were interacting with a real Statistics Professor). Learn as much as you can.
(v) Share in this thread a concise and coherent paragraph showcasing the nuances of the main argument of the assigned paper based on the skills/knowledge you have gained from this chatbot interaction. Use your own words in writing this paragraph;
(vi) Vote by 'liking' and commenting on your classmates' efforts. The post with the highest number of 'likes' is the winner;
(vii) Be ready to share in class how much you have learned about statistics; how long you spent on this activity; what prompts you used; which bot(s) you used; your prior knowledge of statistics before this activity; and more.
Find below examples of student outputs to this lesson plan:
by Bria Smith: LinkedIn and Instagram
Overall, the assigned article focuses on how, as minority populations grow and develop, there still seems to be a significant amount of segregation. After reading the article, I used Perplexity to help me pull out some key points (as this article was a bit difficult for me to 'consume') and Claude for the subsequent questions about the graphs. From Perplexity, it is noted that the juxtaposition of increased diversity to stagnant segregation seemed to be due to the relationship between racial turnover and integration. From Perplexity: "Specifically, neighborhoods may become more diverse as White residents leave, which can lead to temporary increases in diversity without a corresponding decrease in segregation." So although certain areas are becoming more diverse as people of color move into them, the actual problem of segregation isn't being addressed. The racial populations are simply shifting instead of properly integrating.
According to Claude, the graphs altogether illustrate demographic changes in different neighborhoods from 1990-2010. When asked to roleplay, Claude began explaining the graphs in a step-by-step format but titled as "lessons" which I found interesting. As for the graphs, I was able to learn that the categorized neighborhoods (persistence, early turnover, intermediate turnover, and later turnover) show how all these different neighborhoods weren't changing in the same way. All of them became more diverse, but the speed and timing were different. For example, from Claude: "Timing matters: The "Later turnover" areas actually changed faster and more dramatically than the "Early turnover" areas, they just started changing later". This key timing component can help back up the idea that these neighborhoods weren't fully integrated for a wide variety of reasons. Such as political factors like housing policies that didn't change until later on in the study.
by Ami
The article focused on three points by testing predictions between neighborhood types: (1) those where White populations persist over time in the context of neighborhood diversity, (2) those where White populations are in the early stages of exit, and (3) those where White populations have entered turnover's final stages. These findings challenge the assumption that increasing diversity automatically leads to greater integration and suggest a more complex relationship between diversity trends and persistent patterns of residential segregation in the United States.
I noticed in the article that the authors challenge the assumption that the increase in diversity automatically leads to greater integration and suggest a more complicated relationship between diversity trends and persistent patterns of residential segregation in the U.S. The central argument in this report is that White population losses can actually result in widespread but temporary increases in diversity that in the aggregate can preserve or exacerbate segregation. To show this, the article used census data from 1990 to 2010 to analyze trends in residential segregation and racial inequality, drawing on various measures of inequality, including income, education, and occupational status.
The article also used statistical models to examine the relationship between segregation and inequality over time. Then I asked Perplexity to give me information about how the graphs are used in this article, and it gave me a clear understanding of the graphs used to illustrate how White population losses, or "White flight," contribute to temporary increases in diversity. The tables and graphs show that these increases in diversity do not necessarily lead to stable integration. The data show that White flight can cause temporary demographic changes, but these often lead to resegregation.
The graphs emphasize how neighborhoods that appear diverse due to population shifts may eventually return to segregation as White residents exit. Therefore, these visuals show that White flight can actually sustain or worsen segregation. What I learned from the article, with the help of AI, is that people thought that sending White people out of a community would solve segregation, which affects the neighborhood’s schools and businesses and, in the long run, also lowers land value.
Quantitative Reasoning: Understanding Inductive and Deductive Learning Through A Practical Exploration with AI and Data Analysis
Before reading the assigned article and the suggested one, select a Large Language Model (LLM) of your choice and type in: Easily explain and illustrate with simple examples the key differences between inductive and deductive learning methods
Inductive Learning
In the LLM of your choice, type in: create a simple, small mock dataset for the following theme: xxx. Disclose which LLM you selected. I used CoPilot this time around, but feel free to experiment with other ones such as Gemini, Perplexity, ChatGPT, Claude, Llama, You, Kagi, DeepSeek, Grok, Poe, etc. Pay attention to the limited free daily quota of each model, particularly Claude.
Now type in the same model: identify key patterns and relationships in this dataset
Once you internalize this information, add: predict how changes in xxx modify the original patterns and relationships
After digesting the former output, continue the conversational search with the following prompt: formulate general principles of linear regression based on these observations
Share a brief but original reflection on what you think of the inductive approach as a learning method to understand the relationships between xxx
Deductive Learning
Now type in the following prompt in the same model you used above: explain what linear regression is in simple and easy terms
Move on to the following prompt once you understand the previous output: break down and easily explain through clear examples what the linear regression formula encompasses (a compact reference version of the formula is included at the end of this Deductive Learning section)
Continue the learning process using this prompt: illustrate through simple and relevant examples key assumptions and limitations of linear regression
Finally, and without rushing between learning steps, add this prompt: how does linear regression help understand patterns and relationships between xxx?
Share a brief but original reflection on what you think of the deductive approach as a learning method to understand the relationships between structural racism and racial residential segregation
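For reference -- and as a generic textbook formulation rather than the assigned article's own notation -- the model these deductive prompts circle around is simple linear regression of an outcome y on a single predictor x:

$$y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \qquad i = 1, \dots, n$$

where $y_i$ is the outcome, $x_i$ the predictor, $\beta_0$ the intercept, $\beta_1$ the slope, and $\varepsilon_i$ the error term. The ordinary least squares estimates are

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$$

Key assumptions to probe in your chatbot conversation include linearity, independence of the errors, constant error variance, and approximately normal errors.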
Challenge
Learn how to do linear regression in R, a very powerful statistical tool for data science/analytics. You definitely want to put this skill in your CV! You can download this open source software here or play with it online here or search for ‘use R online’ on Google to find other websites offering the online option.
Select one of the options below: learning with AI or learning with textbook. State which option you picked.
Your goal is to share a visual illustration of a plotted linear regression and briefly explain patterns and relationships in your own words. Feel free to use any dataset you can find online. However, you may just ask the model to create a mock dataset for you.
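For orientation, here is a minimal, hedged sketch of the kind of script this challenge usually produces. The dataset is a mock one invented for illustration (a segregation index and average household income, echoing the student examples below); the variable names and the positive slope are assumptions, not values taken from the assigned article:

```r
# Minimal sketch: create mock data, fit a linear regression, and plot it (base R only).
set.seed(42)                                                   # reproducible mock data

segregation_index <- runif(50, min = 0, max = 1)               # hypothetical predictor
avg_income <- 30000 + 20000 * segregation_index +              # hypothetical linear trend...
  rnorm(50, mean = 0, sd = 4000)                               # ...plus random noise

mock_data <- data.frame(segregation_index, avg_income)

model <- lm(avg_income ~ segregation_index, data = mock_data)  # fit income ~ segregation
summary(model)                                                 # slope, intercept, R-squared

plot(mock_data$segregation_index, mock_data$avg_income,
     xlab = "Segregation index", ylab = "Average household income",
     main = "Mock data with fitted regression line")
abline(model, col = "red", lwd = 2)                            # overlay the fitted line
```

Whether you arrive at something like this through an LLM or the textbook, the interpretation step -- explaining the pattern in your own words -- is the part that counts for this challenge.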
Learning with AI
Disclose the model you are using;
Share the prompts (not the outputs);
Share the image of your linear regression and your brief original explanation;
Reflect on (i) whether you used primarily an inductive, deductive, or mixed approach and (ii) how ‘learning with AI’ helped you (or not) in this learning method;
Share how much time you spent on this challenge.
Learning with Textbook
Access the most well-known and open source textbook for a gentle introduction to R;
Learn how to do linear regression from information provided in the textbook;
Create a mock dataset, or use a real one, to plot your linear regression;
Reflect on (i) whether you used primarily an inductive, deductive, or mixed approach and (ii) how ‘learning with textbook’ helped you (or not) in this learning method;
Share how much time you spent on this challenge.
Finally
Read the assigned reading; you can find it in the course syllabus. I highly encourage you to read it in its entirety, but feel free to use AI to summarize it for you if you need to. Also, read this suggested paper for the class-focused discussion.
PS. Only outputs shared at least one hour before class starts will be included in classification feedback.
PS. Those very interested in the topic may take a look at this project.
Do not forget to like and comment on your classmates' efforts.
Find below examples of students’ outputs to this lesson plan:
by Elliott Stevens; tut72054@temple.edu; @elliottstevenz
1) Original reflection on what I think of an inductive approach as a learning method to understand the relationships between structural racism and racial residential segregation:
One detail that I value highly about the inductive approach is that it has a foundation in real-life experiences and real-world circumstances. Instead of forcing data to fit an assumed idea, inductive approaches encourage a bottom-up method to show how structural racism occurs in everyday life. For example, starting with an observation of neighborhoods with high poverty rates or unequal school funding can reveal how these systemic issues are deeply intertwined with residential segregation, leading to a broader insight about the inequality rooted in certain areas. The inductive approach also encourages curiosity and critical thinking, and it can challenge us to look beyond common explanations and think about why certain patterns exist and also about what kind of systemic issues are at play. The inductive approach also underlines the complexity of structural racism, and recognizes that its causes and effects are rarely based on just one or two things but are also deeply rooted in historical and social contexts.
2) Original reflection on what I think of a deductive approach as a learning method to understand the relationships between structural racism and racial residential segregation:
The deductive approach, which often begins with broad theories and applies them to specific cases, offers a focused and structured way to study the relationships between structural racism and racial residential segregation. What I found valuable about this approach is its effectiveness in testing a specific idea or dataset. For example, if there is a theory that suggests racial residential segregation limits access to quality education, the deductive method provides a clear pathway to test this through data analysis, and looks at specific criteria such as systemic inequality or institutionalized discrimination, and uses these terms to predict and analyze how segregation operates in housing, education, or economic opportunities. This approach is useful for confirming patterns or identifying where certain ideas fall short, and offers a way to refine our understanding. However, the deductive method has its limitations in exploring issues as complex as structural racism, and it may overlook unexpected patterns or interactions. Structural/systemic racism has lots of layers to take into consideration, and the deductive approach can sometimes hinder the discovery of new and insightful information.
Challenge: Learning with AI
The AI I am using is ChatGPT (free version). Prompt into ChatGPT: "I need to use R to share a visual illustration of a plotted linear regression. I can use any dataset you can find online. However, can you create a mock dataset? And how would I use this in Posit Studio Desktop? If that doesn't work, I'm available to try any free program online to do this. I would like to share the image of a linear regression." The outcome was amazing, as the AI taught me how to use R and showed me which data to input into Posit Studio Desktop. Here is the image of the linear regression after I followed the steps to use the free program (Posit Studio), and everything worked as outlined by ChatGPT.
The scatterplot with the regression line shows a positive linear relationship between the segregation index and average household income. As segregation increases, the average income also tends to rise. However, the randomness in the data could indicate that segregation alone does not fully explain the variations in income. This pattern suggests that while segregation may correlate with income levels, other factors could also play a significant role.
I primarily used an inductive approach for this project. By working with specific data and observing patterns, I developed an insight about the relationship between segregation and income. This bottom-up process like I mentioned earlier allowed me to explore how the regression model visually represents these patterns.
(ii) How learning with AI helped:
Learning with AI enhanced my understanding by simplifying complex concepts. It would've probably taken me many days or weeks to learn this type of skill without a teacher or the assistance of AI. ChatGPT provided step-by-step guidance on linear regression, data creation, and visualization in R. I was able to focus more on interpreting patterns rather than struggling with technical tools, making the learning process more efficient and actually kind of fun. I am honestly shocked that this only took me about 2 hours to complete in total, maybe even less. It's very encouraging that we can all adopt new skills in much less time these days, and this could boost our CVs, as mentioned in the assignment, and provide us with more career and money-making opportunities. What an exciting time to be alive!
by @emmanamii
After reading the article and highlighting a few statistics that stood out, I used those numbers and pasted them into ChatGPT, asking it to make a dataset with these numbers.
Reflection regarding the dataset:
After prompting my LLM to identify key statistics, it felt easier to recognize and acknowledge the dataset it provided. Some things were obvious standouts, but I was intrigued by comparing socioeconomic disparities and housing loan denials. This was interesting because I learned more about an unfamiliar topic: housing loans. Afterwards, I used my LLM to help me conclude that denial rates for housing loans in segregated areas are increasing, with places like Livonia and Jackson climbing to over 30%.
Inductive Learning
Personal Reflection:
Inductive learning is certainly helpful because it lets us draw conclusions from data without being told them directly. By giving a visual representation of data, and allowing others to connect the dots on their own, it can let others strive to know more about the "image" they are presented with. They may notice one unique trend that differs from the rest of the data and start to question why that is. That leads them to do their own research and reach their own conclusions, to the point where they essentially search for answers or context.
However, there’s a challenge in this approach, too. Inductive reasoning requires knowing your observations may be incomplete, that what you see might not capture the full truth. It’s about letting the evidence guide you while staying open to stories that data alone can’t tell. Ultimately, inductive learning fosters a mindset of exploration and empathy. It pushes us to ask, "What am I not seeing? What patterns might I have overlooked?" In the context of understanding structural racism and residential segregation, inductive learning feels intriguing; as there is so much layered beneath this topic. As you research more and more, you begin to understand and visualize the lived realities behind the numbers of any graph or data set.
Deductive Learning
Deductive learning is similar, in that we make observations and try to find reasoning. However, the difference lies in how we reach conclusions. This learning style can be helpful when trying to confirm or deny theories -- which, in this case, can also be stereotypes.
Applying this method to structural racism and residential segregation can be tough, as there are many aspects to consider, such as history and lived experiences; but on the other hand, one data set may also not be enough to confirm/deny a theory. It’s really about testing and searching for what we already may suspect. One thing I do admire about this approach, is that it’s the kind of thinking that holds you accountable to your claims, demanding proof for every assumption.
Challenge-Learning with AI
I decided to use ChatGPT to help me generate the code I put into the "Run R code online" website. I asked it, "I want to use R to create a plotted linear regression. Could you create a mock data set showing the relationship between segregation and income, and write a script to implement this?" It was amazing how easy it was to implement this, and this would've definitely taken me longer if it weren't for these AI tools.
The red line illustrates a positive linear relationship between segregation and the average household income. Although the blue points do not exactly touch the red line, we can see a similar pattern, and can conclude that there is certainly a correlation.
For this project, I adopted an inductive approach to analyze the data. I focused on specific observations and used them to identify broader patterns, which led to insights about the relationship between segregation and income. By using this method, I was able to see how individual data points interact within the regression model and how trends emerge visually. This hands-on exploration deepened my understanding of how statistical models work to represent real-world relationships, making the abstract concepts of correlation and regression much more tangible.
(i) Read the entire assigned article;
(ii) Share, in your own words, what you have learned from this paper;
(iii) Ask ChatGPT to create a webpage, e.g. 'create a beautiful and simple webpage with images and compelling text about (theme)', followed by 'what is the code of this webpage?'
(iv) If interested, design your own webpage. You may use these online coding editors/compilers (1, 2, 3). i) find out what HTML and CSS are; ii) play around with editing text, font, color, etc.; iii) figure out how to add your own images to the code and how to display them; iv) share an image (or a link) of your webpage.
(v) like and comment on your classmates' efforts.
(vi) reflect on the similarities (advantages and disadvantages) between i) learning how to code in a reverse-engineering mode vs. having a bot/someone create a webpage for me with no or rather limited additional/incremental inputs, and ii) learning in a scaffolded manner with bots vs. using bot outputs with no or minimal critical thinking. Be ready to share your thoughts on this in class!
P.S. If you i) found a better free online editor/compiler, ii) figured out how to add an image to the code and get it displayed, and/or iii) managed to get a free link to your webpage, please share your lessons in this thread so we can all learn from your effort.
Find below the example of a student output to this lesson plan:
by Jun Ito; Jito0314@gmail.com
The division between Conservatives and Democrats in America is sad. I had vaguely studied critical race theory/CRT in an introductory class, but this article persuaded me to support it. Author Billings was masterful at explaining so many aspects of CRT in just one chapter. I learned from her article:
1. The importance of storytelling. Billings started the paper with an anecdotal story regarding the importance and prevalence of racism. This hook sparked my curiosity, and it lasted throughout the paper.
2. CRT stems from CLS/Critical legal studies developed in the 70s to legally objectify inequalities in class structure.
3. The paper mentioned that Black people only accounted for 3.8% of total doctoral degrees awarded in 1991. According to Statista, in 2020-2021 the number has increased to 8.8%.
4. I was shocked to read about the research in which white people interviewed reported that it would take them $50 million to become Black. Overall, Billings was persuasive in endorsing CRT in education, and I believe CRT can be an effective tool in combating segregation in the United States. I found the author's defense of O.J. Simpson problematic. Although I agree that no person should have made it about the jury's race, the author should have used a better example, where the defendant was not believed to be guilty in today's society.
I used W3Schools to create my website: https://criticalracetheoryjun.w3spaces.com
It was a new experience. I am amazed at people who are good at making websites, especially because even the simplest things were challenging for me.