The AI Report Card
What 23 students revealed about generative AI in college courses
The dataset for this analysis comes from an anonymous online poll conducted at the end of the spring 2026 semester in two online college courses I teach at TUJ. Twenty-three students voted on 10 original statements drawn from recent peer-reviewed empirical research on generative AI in higher education. The polling methodology groups participants by patterns of agreement and disagreement, which produced three distinct opinion groups. The findings are exploratory and not statistically generalizable.
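For readers curious about the mechanics, grouping of this kind works by clustering participants whose vote patterns resemble each other. The sketch below is a minimal illustration in Python, not the actual pipeline the polling tool uses; the random vote matrix, the choice of k-means and the three-cluster setting are all assumptions made purely for demonstration.

```python
# A minimal sketch of opinion grouping by vote pattern (illustrative only).
# Each row is a participant, each column one of the 10 statements,
# coded as +1 (agree), -1 (disagree), 0 (pass).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
votes = rng.choice([-1, 0, 1], size=(23, 10))  # placeholder for real responses

# Cluster participants whose agree/disagree patterns look alike.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(votes)

for group in range(3):
    members = votes[kmeans.labels_ == group]
    print(f"Group {group}: {len(members)} participants")
    # The per-statement mean vote shows which way the group leans.
    print("  mean vote per statement:", np.round(members.mean(axis=0), 2))
```

With real responses in place of the random matrix, the per-statement means are what let you characterize each group, as the profiles below do.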
What Everyone Agreed On
Two findings cut across all participants with minimal dissent.
21/23 agreed that universities should provide clear and consistent rules about when and how AI can be used in coursework. Moorhouse, Yeo and Wan (2023) found that 57% of top universities had AI guidelines, but enforcement remained inconsistent.
20/23 agreed that knowing how to use AI well will be essential for getting and keeping a job after graduation. The WEF Future of Jobs Report 2025 projects that AI will create 170 million new roles while displacing 92 million. PwC's 2025 Global AI Jobs Barometer confirmed a significant wage premium for workers with AI skills.
15/23 disagreed that most students who use AI for assignments are honest with their professors about it. A 2024-2025 study of King's Business School students by Gonsalves (2025), published in Assessment & Evaluation in Higher Education, found that 74% of students failed to declare AI use despite mandatory disclosure requirements, primarily out of fear of instructor judgment. Kasneci et al. (2023) argue that governance frameworks with clear policies, procedures and controls are essential for the safe and successful adoption of large language models.
Three Mindsets
The opinion groups that emerged from the data are cognitive profiles: what separates them is what their members believe AI does to them and for them.
The Optimistic Majority: 11 participants
Career necessity (Statement 4): 11/11 agree
AI aids comprehension (Statement 6): 11/11 agree
AI gives equal advantage (Statement 2): 9/11 agree
Trust AI without verifying (Statement 9): 1/11 agree
This is the largest group and the most consistently enthusiastic about AI in learning. All 11 agreed that knowing how to use AI will be essential for employment, and all agreed that AI makes it easier to understand complex concepts quickly. Ten of 11 disagreed that this course had taught them nothing about responsible AI use, suggesting they felt the course addressed these questions substantively. Only one of 11 trusted AI-generated information enough to skip external verification in academic work. This combination is significant. The Optimistic Majority believes AI is powerful and career-defining, yet it also believes AI creates real risks for academic integrity and does not blindly trust AI outputs. This pattern resembles the learner profile that Kasneci et al. (2023) described as ideal for productive AI integration: students who use AI as a tool for engagement and exploration rather than as a replacement for independent thinking.
Nine of 11 members of The Optimistic Majority agreed that AI gives every student the same advantage regardless of background. This equity belief runs counter to a substantial body of research. An OECD working paper (Varsik & Vosberg, 2024) warns that without targeted intervention, AI risks deepening rather than closing educational disparities. A British Educational Research Association analysis by Allison (2025) points out that AI systems trained predominantly on Global North datasets fail to reflect the linguistic and cultural needs of many student populations. The Optimistic Majority may experience AI as equalizing because, for its members, it has been; that experience may reflect a particular position on the access and literacy spectrum rather than the broader reality of who benefits from AI tools and who does not.
The Structural Skeptics: 7 participants
Critical thinking poses no risk (Statement 1): 7/7 disagree
Universities need clear rules (Statement 3): 7/7 agree
AI gives equal advantage (Statement 2): 6/7 disagree
Trust AI without verifying (Statement 9): 0/7 agree
This is the most internally consistent and the most critical group. All members disagreed that regularly using AI for thinking poses no risk to long-term critical thinking skills. All either disagreed with or passed on trusting AI information without checking external sources. All agreed that universities should set clear rules. Five members agreed that AI fluency will matter for employment, with two passing. The group also shows the sharpest disagreement with The Optimistic Majority on the question of equity: only 1 of 7 agreed that AI gives every student the same advantage. This puts The Structural Skeptics in strong alignment with the academic literature on the AI literacy divide. Hadar Shoval (2025) argues that students who are already technologically adept gain compounding advantages through AI-enhanced learning while students with less exposure fall further behind. The Structural Skeptics' unanimous concern about critical thinking also aligns directly with Gerlich (2025), whose study of 666 participants found a significant negative correlation between frequent AI tool use and critical thinking scores. The mediating mechanism was cognitive offloading, the tendency to delegate mental work to an external system, and younger participants showed the strongest effects.
The Uncertain Pragmatists: 5 participants
AI improves problem-solving (Statement 7): 4/5 pass
Trust AI without verifying (Statement 9): 3/5 agree
Course taught nothing about AI responsibility (Statement 8): 2 agree, 1 disagree, 2 pass
The Uncertain Pragmatists are the smallest and most ambivalent group. The clearest signal is the frequency of ‘pass’ votes, particularly on whether AI tools help improve problem-solving skills: four of the five members chose not to vote either way. This was the highest pass rate of any group on any statement and contributed to problem-solving being flagged as an overall ‘area of uncertainty’ in the poll analysis. The Uncertain Pragmatists do not know what to think about this, and that uncertainty is itself informative. The most consequential characteristic of this group is its relationship with AI accuracy. Three of the five agreed that AI-generated information can be trusted in academic work without checking external sources. No other group came close to this position. Research on AI hallucination makes this finding significant. Bhattacharyya et al. (2023) found that in ChatGPT-generated medical articles, 46% of references were fabricated and only 7% were authentic and accurate. Athaluri et al. (2023) found that 28 of 178 references in ChatGPT-generated research proposals did not exist. Shao (2025) found that users form trust in AI outputs based on fluency, tone and perceived authority, frequently overlooking factual accuracy when AI presents information with confidence. The Uncertain Pragmatists' relatively high trust in AI outputs places them in a risk category that none of the research supports. On most other questions, these members resemble the broader sample.
What This Data Is Asking Administrators and Instructors To Do
The results tell us that students are already engaging with these questions, and they point to three responses:
Build AI verification as a taught skill
Three members of The Uncertain Pragmatists trusted AI outputs without checking sources. That is not a character flaw but a knowledge gap. Students need to practice source verification the same way they practice citation: as a specific, teachable and assessable competency. As instructors, we need to create and instill this habit, for example through exercises like the one sketched below.
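One concrete way to make the habit teachable is to have students run the references an AI tool gives them through an existence check before trusting them. The sketch below is a hypothetical classroom exercise, not something drawn from the poll; it assumes the citation carries a DOI, uses the public Crossref REST API, and the DOI shown is only a placeholder.

```python
# A minimal sketch of a source-verification exercise: checking whether a DOI
# cited by an AI tool is actually registered. The Crossref REST API returns
# metadata for registered DOIs and a 404 for ones that do not exist.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI is registered with Crossref."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    candidate = "10.1234/example-doi"  # placeholder; students paste the DOI from the AI-generated citation
    print(candidate, "->", "found" if doi_exists(candidate) else "not found")
```

An existence check like this is only one layer of verification: a DOI that resolves can still be attached to the wrong claim, so students should also confirm that the cited source actually says what the AI output attributes to it.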
Treat equity as a concrete institutional responsibility
The Structural Skeptics' mistrust of claims about AI and equity is grounded in a substantial body of research. Institutions that deploy AI tools without auditing who has effective access, who can use them critically and who benefits from their outputs are not solving the equity problem. They are deferring it to the students who can least afford the delay.
Make AI policy a student-facing conversation
Near-universal agreement on the need for university rules is not a demand for prohibition but a request for clarity. Students across all three groups want to understand what is acceptable. Institutions that treat this desire as a starting point for shared policy development will produce more durable and legitimate frameworks than those that issue mandates and wait for compliance.
Main Takeaways
The most troubling number in this small dataset is not the percentage of students who might be cheating but the near-universal belief that cheating is already happening, combined with the near-universal agreement that no one is being honest about it. That combination does not describe a crisis of student ethics; it describes a crisis of institutional credibility. What this poll ultimately shows is that students are not waiting for faculty or administrators to figure out how to respond to generative AI. They are already living inside these complex, career-defining questions, sorting themselves into distinct relationships with these tools and into different assumptions about equity, cognition and trust. Those differences will not resolve themselves without structured, evidence-based engagement from the institutions serving them. The classroom, we need to remember, is not just a place where knowledge is transferred but a place where assumptions about knowledge are negotiated. Generative AI has made that negotiation unavoidable.
#GenerativeAI #AIinEducation #HigherEdAI #AILiteracy #FutureOfLearning
References
Allison, J. (2025). Digital equity in the age of generative AI: Bridging the divide in educational technology. British Educational Research Association (BERA) blog. https://www.bera.ac.uk/blog/digital-equity-in-the-age-of-generative-ai-bridging-the-divide-in-educational-technology
Athaluri, S. A., Manthena, S. V., Kesapragada, V. K. M., Yarlagadda, V., Dave, T., & Duddumpudi, R. T. S. (2023). Exploring the boundaries of reality: investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus, 15(4). https://www.cureus.com/articles/148687-exploring-the-boundaries-of-reality-investigating-the-phenomenon-of-artificial-intelligence-hallucination-in-scientific-writing-through-chatgpt-references#!/
Bhattacharyya, M., Miller, V. M., Bhattacharyya, D., Miller, L. E., & Miller, V. (2023). High rates of fabricated and inaccurate references in ChatGPT-generated medical content. Cureus, 15(5). https://pmc.ncbi.nlm.nih.gov/articles/PMC10277170/
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://www.mdpi.com/2075-4698/15/1/6
Gonsalves, C. (2025). Addressing student non-compliance in AI use declarations: implications for academic integrity and assessment in higher education. Assessment & Evaluation in Higher Education, 50(4), 592-606. https://www.tandfonline.com/doi/full/10.1080/02602938.2024.2415654
Hadar Shoval, D. (2025). Artificial intelligence in higher education: Bridging or widening the gap for diverse student populations? Education Sciences, 15(5), 637. https://www.mdpi.com/2227-7102/15/5/637
Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., ... & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://www.sciencedirect.com/science/article/pii/S1041608023000195
Moorhouse, B. L., Yeo, M. A., & Wan, Y. (2023). Generative AI tools and assessment: Guidelines of the world's top-ranking universities. Computers and Education Open, 5, 100151. https://www.sciencedirect.com/science/article/pii/S2666557323000290
PwC. (2025). 2025 Global AI jobs barometer. https://www.pwc.com/gx/en/services/ai/ai-jobs-barometer.html
Shao, A. (2025). New sources of inaccuracy? A conceptual framework for studying AI hallucinations. Harvard Kennedy School Misinformation Review. https://misinforeview.hks.harvard.edu/article/new-sources-of-inaccuracy-a-conceptual-framework-for-studying-ai-hallucinations/
Varsik, S., & Vosberg, L. (2024). The potential impact of Artificial Intelligence on equity and inclusion in education. OECD Artificial Intelligence Papers. https://www.oecd.org/en/publications/the-potential-impact-of-artificial-intelligence-on-equity-and-inclusion-in-education_15df715b-en.html
WEF. (2025). Future of jobs report 2025. World Economic Forum. https://www.weforum.org/publications/the-future-of-jobs-report-2025/