The Divergent Realities of Generative AI in Education: Insights from a pol.is Survey
by Marvin Starominski-Uehara, April 24th, 2025
This pol.is survey on Generative Artificial Intelligence, run in the two courses I taught this past semester, reveals a striking dichotomy in how students perceive and interact with AI tools. The data, drawn from 24 participants divided into two opinion groups, highlights both the transformative potential and the deep-seated anxieties surrounding AI in learning environments. What emerges is a nuanced mix of enthusiasm, skepticism, and uncertainty, one that challenges educators to tread carefully in integrating AI into academia.
The Enthusiasts vs. The Skeptics
The most compelling finding is the clear polarization between Group A (18 participants) and Group B (6 participants). Group A overwhelmingly embraces Generative AI, with 80% excited about its learning potential; 93% using it for readings; and 87% reporting it increased their knowledge (see tables below). For them, AI is a catalyst for efficiency, creativity, and engagement. In contrast, Group B is marked by fear and rejection: 100% fear AI’s use in education; 83% deem it ‘useless for learning’; and 100% believe it hinders cognitive development. This stark divide underscores a critical lesson: AI’s value is not self-evident; it is deeply contingent on individual perspectives and experiences.
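(To make the notion of ‘opinion groups’ concrete: pol.is derives groups like A and B by clustering participants according to their vote patterns. The sketch below is a minimal, hypothetical illustration of that idea using k-means on an invented toy vote matrix; it is not pol.is’s actual pipeline.)

```python
# Illustrative sketch only (not pol.is's actual pipeline). Votes are encoded
# as agree = 1, disagree = -1, pass = 0, and participants with similar
# patterns are grouped together.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical vote matrix: rows are participants, columns are statements.
votes = np.array([
    [ 1,  1,  1, -1],   # enthusiastic pattern
    [ 1,  1,  0, -1],
    [ 1,  0,  1, -1],
    [-1, -1, -1,  1],   # skeptical pattern
    [-1, -1,  0,  1],
])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)
print(groups)  # e.g. [0 0 0 1 1]: two opinion groups recovered from the votes
```

In practice the survey involves far more statements and participants, but the principle is the same: participants who vote alike end up in the same group.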
The Paradox of Consensus and Division
While some statements garnered broad agreement -- e.g. 85% agreed the courses taught them to use AI effectively -- others revealed irreconcilable splits. For instance, ‘Generative AI helps me have more fun while learning’ saw 94% agreement in Group A but 80% disagreement in Group B. Such contradictions suggest that even when training or exposure is uniform, pre-existing attitudes may dictate outcomes. This warns against one-size-fits-all AI integration strategies; what empowers some may alienate others.
The Shadows of Uncertainty
The survey also exposes gaps in collective understanding. Statements about specific AI tools (e.g. ‘Grok is the best for students’) elicited high pass rates (65–70%), indicating many lacked strong opinions or knowledge. Similarly, 57% passed on whether paid AI subscriptions improve performance, hinting at unresolved debates about equity and access. These ‘areas of uncertainty’ are fertile ground for education -- not just about AI’s mechanics but its ethical and practical implications.
Limitations and Open Questions
The data’s richness is tempered by its limitations. With only 24 participants, generalizability is questionable. The imbalance between Groups A and B (18 vs. 6) skews ‘majority’ findings, potentially masking minority concerns. Additionally, the survey captures snapshots of sentiment, not causality. For example, does using AI make learning more fun, or do already engaged students simply embrace AI more readily? Such nuances demand deeper, longitudinal study.
Conclusion: Navigating the AI Divide
This pol.is survey paints a picture of a community at a crossroads. For educators, the takeaway is twofold:
Celebrate consensus where it exists (e.g. AI’s utility for research) while acknowledging divisive points (e.g. cognitive risks).
Address uncertainty through dialogue, ensuring students understand AI’s capabilities and limits.
Ultimately, the data rejects simplistic narratives. Generative AI is neither a panacea nor a peril -- it is a mirror, reflecting the diverse values and fears of those who use it. The challenge lies in harnessing its potential without deepening the divides it reveals.
Embracing Generative AI in Education: Navigating Opportunities and Risks
By Marvin Starominski-Uehara, January 10th, 2025
For the past year and a half, I have fully embraced artificial intelligence (AI) as a tool to assist both learning and teaching. During this time, I have witnessed firsthand not only the perils but also the immense opportunities this breakthrough technology offers. Contrary to expectations (Mollick 2023), the rise of Generative AI -- or large language models -- has not universally enhanced critical thinking skills among my students. Instead, it has dramatically widened the gap in creativity and originality. This troubling trend, however, should not be interpreted as a condemnation of AI’s role in education. Rather, it presents an opportunity to reevaluate current practices and ensure that all stakeholders (administrators, faculty, parents, and students) collaborate to recalibrate the aggregated learning curve. The goal is to extend the benefits of AI, currently enjoyed by a select few students who use it effectively, to all learners through (i) personalized guidance; (ii) open and free access; and (iii) structured support.
(The percentages cited in this article are drawn from an anonymous online poll of sixty students across two online courses I taught during the Fall 2024 semester; see the table below.)
Context: Seventy-two percent of my students believe AI accelerates their learning, while 54% worry it may undermine their originality. This duality highlights a critical challenge: leveraging AI’s potential while preserving educational integrity and depth. In this article, I propose initial strategies for navigating this dilemma by responding to the unique concerns of each stakeholder group, fostering a framework that balances innovation with accountability.
For Administrators: Prioritizing Integrity and Equity: Administrators rightly focus on safeguarding academic integrity. The poll reveals that 51% of my students fear falling behind peers who use AI, a concern that risks incentivizing misuse. Many institutions have opted to combat low-quality AI-generated work by upgrading detection tools, hoping to deter malpractice and uphold traditional educational standards. However, scholars caution that (i) overreliance on detection systems could stifle originality (Ardito 2023); (ii) such systems exhibit concerning discrepancies between false positives and false negatives (Gegg-Harrison & Quarterman 2024); (iii) they may not align with what ‘constitutes plagiarism in the digital age’ (Hutson 2024:21); and (iv) they can exacerbate systemic inequities (Perkins et al. 2024:18). To mitigate these risks, administrators are hosting workshops on ethical AI use and revising honor codes to reinforce academic excellence. Some are proactively spotlighting success stories (Ouellette 2024) to demonstrate how responsible adoption can speed up personalized learning experiences while enhancing institutional reputation.
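(To illustrate why false positive rates matter at institutional scale, here is a rough, hypothetical back-of-the-envelope calculation; the 1% rate is an assumption chosen for illustration, not a figure from the studies cited above.)

```python
# Illustrative arithmetic only; the rate below is an assumption, not a figure
# from Gegg-Harrison & Quarterman (2024) or any particular detector.
honest_submissions = 1000
false_positive_rate = 0.01   # assumed: 1% of honest work flagged as AI-generated

falsely_flagged = honest_submissions * false_positive_rate
print(falsely_flagged)  # 10.0 -> roughly ten honest students flagged per 1,000 submissions
```

Even a seemingly low error rate translates into a steady stream of honest students having to defend their own work.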
For Faculty: Redesigning Pedagogy: Faculty face the challenge of reimagining curricula and content delivery in the AI era. With 64% of my students using AI for reading and 32% for writing, my assignments must evolve. Rather than banning AI, I now design assessments that prioritize critical analysis -- for example, peer reviews of AI-generated content or projects comparing human and machine outputs. The 42% of my students who believe AI hinders cognitive development benefit from exercises dissecting its limitations. By framing AI as a collaborative tool, not a replacement, I cultivate skills no algorithm can replicate.
For Parents: Ensuring Fairness and Creativity: Parents seek reassurance about fairness and creativity. While 65% of my students report using AI in unique ways -- suggesting its potential to inspire innovation -- unequal access to evolving tools and uneven awareness of their limitations have, as I have observed, dramatically widened learning gaps. Schools can address this by (i) demystifying AI models; (ii) subsidizing access to leading tools; and (iii) offering hands-on training sessions. Transparent communication, such as emphasizing that 69% of my students find AI enhances learning, can reassure parents that guided use fosters -- not stifles -- independent thought.
For Students: Balancing Tool and Crutch: Students grapple with using AI as a supplement rather than a crutch. While 63% still prefer Google for learning, many of my students rely on AI for tasks like research (52%) or writing (32%). To curb over-dependence, I argue that institutions should teach 'AI literacy', clarifying when to use AI (e.g., clarification and illustration of key concepts) versus when to think independently (e.g., refining original arguments). Peer mentorship programs, where students share strategies to preserve originality (addressing the 54% anxious about losing their voice), foster accountability.
Conclusion: Collaboration Over Resistance: AI tools, particularly large language models, are neither villains nor saviors -- they are collaborators. By aligning policies with stakeholder needs -- detecting misuse, redesigning assignments, ensuring equity, and promoting mindful use -- we can transform risks into opportunities. The goal is not to resist AI but to empower a generation that wields it wisely, ensuring technology amplifies -- not diminishes -- each student’s potential.
Reference list:
Ardito, C. G. (2023). Contra generative AI detection in higher education assessments. arXiv preprint arXiv:2312.05241.
Gegg-Harrison, W., & Quarterman, C. (2024). AI Detection's High False Positive Rates and the Psychological and Material Impacts on Students. In Academic Integrity in the Age of Artificial Intelligence (pp. 199-219). IGI Global.
Hutson, J. (2024). Rethinking Plagiarism in the Era of Generative AI. Journal of Intelligent Communication, 4(1), 20-31.
Mollick, E. (2023, September 24). Everyone is above average. Retrieved from https://www.oneusefulthing.org/p/everyone-is-above-average
Ouellette, K. (2024, April 29). MIT faculty, instructors, students experiment with generative AI in teaching and learning. MIT News.
Perkins, M., Roe, J., Vu, B. H., Postma, D., Hickerson, D., McGaughran, J., & Khuat, H. Q. (2024). Simple techniques to bypass GenAI text detectors: implications for inclusive education. International Journal of Educational Technology in Higher Education, 21(1), 53.
The scariest place: Temple University Japan (TUJ) Cafeteria
By Marvin Starominski-Uehara, October 31st, 2024
Yes, you read it right. TUJ cafeteria is a scary place, a very scary one. It is so scary because it is not ordinary. It is not even unique, since places like this are not supposed to exist. TUJ Cafeteria is a place that 99.99% of the world population has never experienced and can hardly even imagine exists. You step in there and instantly feel out of place, as if something strange is going on. On any given day, you will find in this cafeteria around two hundred college students from more than eighty countries and over a hundred different regions of the world casually walking around and having a good time with people who are, theoretically, so different from them. How is it possible that so many young adults from all these corners of the world can hang out together as if it is the most common thing to do, day in, day out?
Well, it might help to mention that in this environment, everyone has a very good command of verbal communication in English. I would also guess that more than half of those people bonding in this cafeteria are bilingual, and a third of them can speak multiple languages. Okay, you might be thinking now that all those young adults are a bunch of privileged individuals coming from elite education systems around the globe. I cannot deny that. Among these foreign students, there are children of political and business leaders who relocated to Tokyo. There are also many non-native English speakers whose parents invested heavily in international education for their children from a very young age. So, you might argue that this cafeteria is not representative of the global population because of the economic disparities within and between countries and regions. This is a fair point! I live in Miyazaki, one of the poorest prefectures in Japan (by Japanese standards), and I have yet to meet a student in my TUJ classes who was born and raised here.
But that is just the face value of the ‘scary’ diversity you feel when you are in this cafeteria. In this environment, there are also a number of students who come from low-income single-parent households. You will come across many students who decided to bet on themselves and take out loans to invest in their future. You will listen to many stories of personal struggle from students coming from marginalized, minority, and persecuted groups. These are real testimonials of resilience and perseverance, and it is not uncommon for them to be shared over casual conversations. So, there are many other elements, beyond the financial and economic ones, that explain the mix of people enjoying themselves in the TUJ Cafeteria. But this more nuanced understanding of this diverse community does not alleviate the anxiety one will certainly feel when being there for the very first time. It might even add more stress for those used to operating under rigid, expected norms and attitudes. Like I said, TUJ Cafeteria is a scary place, a very scary one.
TUJ Cafeteria is scary because it is not what the world is but what it should be. Most parents would be totally overwhelmed stepping into this cafeteria. They would have little to no clue of what is happening in this environment. How can so many different looks, different clothes, different genders, different colors, different languages, and struggles co-exist? How is that even possible? This is not how 99.99% of the parents around the globe, including myself, grew up. This is not what we were told. This is far removed from what we have ever experienced or even imagined in our wildest dreams of diversity or of simply enjoying a glamorous cosmopolitan lifestyle. But here is the flip side: most of these same parents, like me, would be incredibly proud to see their children confidently navigating a world that values people for who they truly are, what they say, and how they act with respect and empathy toward everyone around them, especially the most vulnerable and marginalized. Sure, culture, physical traits, beliefs, and nationalities all shape who we are, what we believe, how we connect, and what we can dream of. But these institutional labels, supposedly meant to help us feel more comfortable as we grow into a predictable world of rules and customs, are unceremoniously left at the door by those entering the TUJ Cafeteria. And for many of us, being stripped of what we have believed and experienced throughout our lives is quite unnerving. At TUJ Cafeteria, 99.99% of the world population is asked to be naked. How much money you or your family have does not matter. What does matter, and what really helps you navigate diversity with confidence and make the most of it, is how proactive you can be, how willing you are to listen and learn, to compromise, to turn ideas into actions that help communities beyond those doors become less divided and more united. TUJ Cafeteria is indeed a very scary place, a place where dreams are never too scary to be dreamed of and achieved. And as a student of mine recently said in class: ‘Experiencing diversity can change the trajectory of your life. It really can!’
The Origins of the Stigmergy Network Theory
In my mid-twenties, as I returned home from work one day, the janitor of the building complex I lived in asked me to call the police immediately: my mother had just been run over and was in hospital. After some frantic searches around public hospitals, I found her and learned she was clinically fine, but the shock of the experience led me, days later, to visit the exact location where she had been hit by a motorcycle in the early evening. From this informal ‘inspection’ and conversations with residents and business owners, I learned that road accidents involving pedestrians occurred frequently at this intersection; however, no structural measures had been taken to mitigate these known risks. This experience left me wondering: What if I could share the risks, and the evidence, of this intersection with a large audience in an easy, safe, and trusted way? What if I could learn what had happened before and monitor the structural and nonstructural measures taken by local officials -- and residents -- to mitigate these risks? What about creating something like the SenseCityVity project?
Ten years later, a toddler in Queensland died after being neglected and abused. That piece of news ‘hit me’ very hard, as I had just become a father the year before. Pondering what I could do to help this vulnerable group, as well as their caretakers, I put together a research proposal to develop a mobile application that would collect ‘perceived risk inputs’ and provide, as a ‘weighted probabilistic output’, a set of recommendations for prevention and early intervention. Unfortunately, I never had the chance to pursue this project, titled ‘Reducing the Cost Error of False Risks with Artificial Intelligence’. The main takeaway from this unrealized project was that it led my curiosity toward the field -- and potential -- of artificial intelligence in the context of Error Management Theory, learnings I could later incorporate into my future research projects.
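(To make the ‘weighted probabilistic output’ idea concrete, here is a minimal sketch of the kind of Error Management Theory style decision rule the proposal gestured at: it weighs the asymmetric costs of missing a real risk against raising a false alarm. The function name, threshold logic, and cost values are hypothetical illustrations, not the proposal’s actual design.)

```python
# Hypothetical sketch of an Error Management Theory style decision rule; the
# cost values and threshold logic are illustrative assumptions, not the
# original proposal's design.

def recommend_intervention(p_risk: float,
                           cost_false_negative: float = 10.0,
                           cost_false_positive: float = 1.0) -> bool:
    """Recommend acting when the expected cost of doing nothing exceeds
    the expected cost of intervening, given a perceived risk probability."""
    expected_cost_of_inaction = p_risk * cost_false_negative
    expected_cost_of_action = (1.0 - p_risk) * cost_false_positive
    return expected_cost_of_inaction > expected_cost_of_action

# Even a modest perceived risk triggers a recommendation when missing a real
# case is judged ten times costlier than raising a false alarm.
print(recommend_intervention(0.15))  # True under these assumed costs
```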
I wrote the research proposal on child maltreatment because, around that same time, I was looking into the inherent and residual risks associated with the 2010/2011 Queensland Floods for my doctoral thesis. During my initial investigations into the early signs of this evolving risk, I came across a forum post from a concerned citizen urging residents living downstream along the Brisbane River to evacuate due to the imminent water release from the Wivenhoe Dam. I became particularly interested in this message because it was uploaded days before authorities released their official warning. As the underlying themes of my dissertation were risk perception and decision making, I could not stop wondering whether the number of deaths and missing victims would have been different, despite hindsight bias and related ‘behavioral traps’, had this post reached the people who needed it. How could this online forum have ensured that such an individual was reputable and trustworthy? How could this individual have increased the trustworthiness of the information he or she was sharing online? How could at-risk communities be informed about an emerging threat by an outlier? How could such an outlier be ‘intelligently’ validated through an automated classification and ranking system? This set of questions led me to explore the possibilities that heuristics, or rules of thumb, and networks create for informing individual risk perception and shaping decision making under uncertainty, which I later included in my thesis and which informed my assessment of the role of eyewitnesses in spurring collective action during the COVID-19 pandemic, titled ‘Powering Social Media: Simple Guide for the Most Vulnerable to Make Emergencies Visible’.
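(To illustrate what an ‘automated classification and ranking system’ for outlier warnings might look like in its simplest form, here is a hypothetical sketch that combines a few crude trust heuristics into a priority score for human review. The signals, weights, and example posts are invented for illustration; this is not the method used in my thesis.)

```python
# Hypothetical illustration only (not the thesis method): combine crude trust
# signals into a priority score so that outlier warnings can be surfaced for
# human review, rather than verified automatically.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    account_age_days: int            # older accounts weigh slightly higher
    prior_confirmed_reports: int     # past reports later confirmed by authorities
    independent_corroborations: int  # other users reporting the same hazard

def trust_score(post: Post) -> float:
    """Blend the heuristics into a rough 0-to-1 priority score."""
    age = min(post.account_age_days / 365, 1.0) * 0.3
    history = min(post.prior_confirmed_reports / 5, 1.0) * 0.3
    corroboration = min(post.independent_corroborations / 3, 1.0) * 0.4
    return age + history + corroboration

posts = [  # invented examples
    Post("Wivenhoe Dam release imminent, evacuate low-lying areas", 800, 2, 1),
    Post("Heard the river might rise soon", 30, 0, 0),
]
# Rank outlier warnings so the most credible ones surface first for review.
for p in sorted(posts, key=trust_score, reverse=True):
    print(round(trust_score(p), 2), p.text)
```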
Towards the end of my doctorate, and due to my growing interest in finding ways to efficiently collect and ‘intelligently’ rectify and validate individual risk perceptions for decision making, I audited ‘statistics’, ‘machine learning’, ‘algorithms’, and ‘artificial intelligence’ courses at the University of Queensland (UQ). Around that time, I was looking for a suitable setting in which to test the hypotheses and limitations of the ideas I had in mind. After consulting with machine learning scholars in the U.S., I thought about designing a supervised machine learning application to help border agents detect high-risk travelers. (Note: I fully understand that such a project amounts to a form of ‘social profiling’ and can further heighten social inequalities and indiscriminately target the most vulnerable, which is why the European Union and many countries have banned this type of artificial intelligence project. My intent with this project, however, was to further investigate not only the potential but also the limits and risks of breakthrough discoveries and disruptive technology and how, I believe, they intersect with my specific research interests. The title of this proposal is ‘Detecting High Risk Threats and Improving Border Cross Experience with Machine Learning’.)
After the submission of my dissertation, UQ awarded me a further scholarship to stay in Australia for another year -- making me the first UQ graduate student to receive the Career Development Scholarship -- so that I could continue pursuing alternative avenues to build bridges between the market and academia. I then joined UQ Ventures and, together with a doctoral candidate in the School of Engineering, co-founded a startup to help online marketplaces classify and rank reviews using Natural Language Processing to offer a better experience to their users. This enterprise attracted the attention of many engineering students from different programs across Queensland, as well as of CSIRO, which selected us as the first UQ startup to join its ON Prime accelerator program. CSIRO awarded us further monetary support after our team excelled in incorporating the design thinking principles it had equipped us with into testing and validating our ‘Minimum Viable Product’, or simple prototype. In the meantime, I had the privilege of meeting many key stakeholders in the startup communities of Brisbane and Sydney, including angel investors, venture capitalists, and decision makers such as the then NSW Minister for Innovation, Science and Technology, the Honourable Matthew Kean MP. Unfortunately, this endeavor came to an abrupt end when my co-founder and I realized that the timeline for funding the enterprise was not aligned with our families’ financial obligations once our scholarships expired. Nevertheless, this transformational experience taught me, and instilled in me, valuable soft skills, which I have since applied in developing my project- and inquiry-based classes, as well as in the way I approach and collaborate with my colleagues.
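(For readers curious what ‘classifying and ranking reviews using Natural Language Processing’ can mean in its simplest form, here is a minimal, hypothetical sketch that ranks reviews by how much distinctive content they carry, using TF-IDF weights as a crude proxy for informativeness. It is not the startup’s actual pipeline, the reviews are invented, and it assumes scikit-learn is available.)

```python
# Hypothetical sketch only (not the startup's actual pipeline); assumes
# scikit-learn is installed. Reviews are ranked by the total TF-IDF weight
# of their terms, a crude proxy for how much distinctive content they carry.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

reviews = [  # invented examples
    "Great product, five stars!",
    "Battery lasted two weeks of daily use; the strap broke but support replaced it quickly.",
    "Bad.",
    "Setup took ten minutes; the companion app crashes on older phones.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(reviews)

scores = np.asarray(tfidf.sum(axis=1)).ravel()  # one informativeness score per review
for score, text in sorted(zip(scores, reviews), reverse=True):
    print(f"{score:.2f}  {text}")
```

A production system would of course weigh many more signals (sentiment, helpfulness votes, reviewer history), but the core task of turning free text into a ranking score is the same.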
Finally, during the summer break of the Tokyo Olympics, I spent most of my time sitting in a coffee shop trying to figure out whether there were any patterns that would help me explain and understand some of the most relevant global political events occurring at that time. During this inquiry, I came across the work of Francis Heylighen and the provocative, somewhat radical book ‘Binding Chaos’ by Heather Marsh. Much of their research and many of their propositions were quite original, but it was the concept of ‘stigmergy’, which I had never heard of, that caught my attention. I then decided to conduct my own research into what stigmergy entailed and whether it could help me understand and, at least, explain some of the complexities I was witnessing around the globe. The more I explored the fundamentals of this ‘indirect coordination by autonomous agents in a mediated environment’, the more it helped me evaluate contemporary social and political phenomena from a different and novel perspective. I then started questioning whether it would be possible to replicate the autonomous, exploratory behavior of ants and termites, which build and maintain highly complex, evolving systems by quickly responding to and recovering from unexpected and massive environmental disturbances. This is when my epistemological -- and ontological -- interest in ‘human stigmergy’ was born, and I became increasingly determined to explore its possibilities -- and limitations -- in explaining, predicting, and transforming our risk societies into resilient ones. The result of this inquiry led me to design the fundamentals of the ‘Stigmergy Network Theory’.
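(For readers unfamiliar with stigmergy, the toy simulation below illustrates the core mechanism of ‘indirect coordination by autonomous agents in a mediated environment’: agents deposit and follow marks in a shared environment rather than communicating directly. It is a deliberately simple illustration of the concept, not part of the Stigmergy Network Theory itself; all parameters are arbitrary.)

```python
# Toy illustration of stigmergy only (not part of the Stigmergy Network
# Theory itself): agents coordinate indirectly by depositing and following
# "pheromone" marks left in a shared environment. All parameters are arbitrary.

import random

CELLS = 20            # a one-dimensional ring of cells
STEPS = 300
EVAPORATION = 0.02    # marks fade unless reinforced
FOOD_CELL = 15        # a resource agents mark whenever they find it

pheromone = [0.0] * CELLS
agents = [random.randrange(CELLS) for _ in range(10)]

for _ in range(STEPS):
    for i, pos in enumerate(agents):
        left, right = (pos - 1) % CELLS, (pos + 1) % CELLS
        if random.random() < 0.2:
            pos = random.choice([left, right])                            # explore
        else:
            pos = left if pheromone[left] > pheromone[right] else right   # follow marks
        if pos == FOOD_CELL:
            pheromone[pos] += 1.0                                         # deposit a mark
        agents[i] = pos
    pheromone = [p * (1 - EVAPORATION) for p in pheromone]

# The shared environment, not direct communication, now encodes where the
# resource is: the strongest mark sits at the food cell.
print(max(range(CELLS), key=lambda c: pheromone[c]))  # almost always 15
```

No agent ever tells another where the resource is; the coordination emerges entirely from traces left in the environment, which is the property that first drew me to the concept.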