The Humanities' Disconcerting Position on AI


November 2025

Log onto social media or seek out government policy reviews of the humanities. Notice anything? The humanities fall under serious scrutiny, and much of that scrutiny returns to one core concern: many in the humanities have failed to make their work accessible, inclusive, and relevant to learners, civil society, and policymakers and taxpayers overall.

Humanities faculty's response to artificial intelligence (AI) has barely shifted since GenAI was introduced into the mainstream in 2021 and 2022. It has become increasingly divisive, with academics positioning themselves firmly in one of two camps: "I use AI" (with the unnecessary caveat that they use it ethically, in order to deter critique from peers) and "I oppose AI, and I think it's the worst thing on the planet." There is no nuance. There is no in-between. And, what's worse, many of the most vocal opponents lack the most basic technical skills and have very little knowledge of the differences between predictive AI and GenAI, of how AI is actually employed to transform communities and improve learners' confidence, or even of its use as an assistive technology tool.

I return often to a reflection by a popular humanities academic on LinkedIn. They dismissed AI because they were concerned that it removed human creativity and adversely impacted critical thinking. The academic framed their argument as 'the humanities are about humanity.' But whose humanity are they concerned with? Certainly it does not appear to include my own, or that of millions of other people. Therein lies one of the many reasons why the humanities face such stark condemnation from politicians, scholars, thinkers, and leaders; their grievances are not unjustified.

As I began drafting this post, I reflected on feedback on my scholarship that I received from humanities academics who are racialized as white and who are unfamiliar with decolonial research. Their feedback was mostly positive; however, I was reminded of their critique of my prose, which they deemed not palatable enough to (white) academic peers. My writing, my creative flow, and more specifically my intentional syntax, informed by my distinct creativity and by Black studies, was dismissed, too. I was told to write less like the Black scholars and thinkers who shaped my scholarship. My arguments were dismissed. Ultimately, I rejected their feedback; it had nothing to do with the substance of my work, and it sought to shape not the merit of what I communicated but my personal (my human) way of communicating, despite it being conveyed professionally, tailored to my audience, and easily understood.

These trained humanities academics denied my ability to be creative and to adhere to my own ethical way of communicating and sharing knowledge. And it most certainly was not the first time that has happened. It is dehumanizing. It is epistemicidal. It is reductive. And, as a PhD researcher and a learner, I must also admit that it had a harmful effect on my confidence.

And therein lies yet another fundamental issue: the humanities' refusal to break away from their colonial hold means that they center Eurocentric and white constructs of knowledge, communication, and power. They treat engagement with Eurocentric and colonial scholarship as rigorous whilst dehumanizing and dismissing the work of non-Eurocentric scholars and those working against the grain. The power dynamics I note below illustrate an environment we continue to reproduce, one in which the status quo cannot be challenged, at least not safely. I argue that this is further evidenced by the humanities' rejection of AI, despite the humanities standing on soft, crumbling ground. Even prior to AI, the conversation was 'humanity for me, but not for thee' and 'creativity is only permitted if it adheres to [Eurocentric, colonial] practices'.

Understanding the demographics

As I have written in this blog before, the humanities as they stand today are overwhelmingly white and lack the presence of BAME scholars and researchers and those from the Global South. Many of the same academics who oppose AI on the basis that it somehow violates humanity champion morally sound practices like diversity, equity/equality, and inclusion, but do so on the condition that individuals adhere to the same rigid rules that produced (and reproduce) the colonized communities and learning environments in which those academics thrive.

Acceptance is, therefore, conditional. In the humanities, according to the American Academy of Arts & Sciences, 79% of humanities department chairs identify as white, 6% as Black, 5% as Asian American, 3% as Hispanic, and 1% as Native American. Across all faculty in American postsecondary education, 69-72% identify as white. In the UK, 10% of professors identify as BAME, with only 0.6% identifying as Black. Some 96% of historians at UK universities identify as white, while fewer than 1% identify as Black.

I mention these demographics because scholars and thinkers, along with communities, members of the public, and taxpayers, all hold different approaches to knowledge creation that are wholly legitimate. One of the best examples is Indigenous peoples' reliance on community knowledge sharing, practiced orally and collectively and passed down through generations. Here, knowledge is defined less by written adherence to some Eurocentric, colonized approach to writing and assessment than by respect for, and adherence to, responsive cultural means of communication, knowledge production, and knowledge sharing.

And yet, because these individuals from ethnic minority communities lack senior positions of power, their views are somehow always set aside. BAME scholars and researchers must therefore adhere to colonial ways of knowing, of being, and of communicating in order to exist in the academy, or else. Our logic, however grounded it is in scholarship, community, or engagement, is dismissed, rendering us siloed and reproducing the same colonial approach to learning, teaching, and being that scholars in the humanities claim they are so concerned with documenting and addressing in their scholarship and research.

Existence in the humanities is conditional on an adherence to Eurocentric practice.

Meanwhile, the humanities observe, critique, and extract from the same BAME communities that they exclude from knowledge production and knowledge creation. It is ongoing epistemicide, and it is the continued reproduction of harms that have caused pain to people who look like me that I find so deeply troubling for academia. Take GenAI as an example. The loudest anti-AI voices are not diverse; they come from the most homogenous part of the academy. They frame their concerns around 'creativity' and 'humanity' and yet ignore the ongoing epistemicide in their own educational and research practice. So much so that they will revert to the teaching and assessment modes of the past, modes we know caused harm to many learners, all for the sake of preserving their control. It is an assertion of dominance that they will not debate, even when presented with counterarguments and evidence. (Trust me, I've tried.)

Learners of color are amongst the broadest and most effective adopters of AI. And yet they are told that AI use is illegitimate, even though their voice, their ideas, and their critical engagement and analysis are reflected in their scholarship. It is not dissimilar to the work of Black and brown researchers being wholly dismissed, even as they face active exclusion, demonization, dehumanization, and precarious employment in academic settings at higher rates than their white peers. They are overpoliced. It becomes unsafe for them to speak up and to speak out.

I recently attended a forum on GenAI and the academy. An academic I know, who is very interested in ethical AI and in responsible uses of it for confronting bias and algorithmic abuse, raised two strong points that are worth addressing in the remainder of this blog. First, the academic is not wholly opposed to GenAI but skews toward opposition, concerned that it may reproduce harms inflicted on marginalized communities. Second, they argued that the bias reflected in AI responses means GenAI causes far more harm than good.

I took issue with the academic's position, largely because it reproduces the same sort of academy in which marginalized people have been actively excluded and dismissed. The academic was not concerned with the system as it stands, as fractured and as harmful as it is. They were more concerned with protecting a version of Eurocentricity that does not attract people from marginalized communities into these spaces. It is important to note, however, that individuals from marginalized communities are indeed capable of entering and thriving in the academy as it problematically stands today. Many have, and many more will. But why should they be interested in existing within an institution that causes them great distress and harms the communities to which they belong? An institution that extracts from them (their pastoral care of learners, their rigorous research, their community-based coalition building and knowledge sharing) and yet dismisses their intellect, their ways of being, and their ways of knowing and sharing knowledge.

On the second point, the academic's position implies a degree of human inaction that should make most individuals familiar with AI uncomfortable. I return to a term I have introduced on this blog before: the "epistemology of ignorance". What is the difference between a student who navigates a library and avoids every book that they know disproves their position, and a student who actively and intuitively engages GenAI to explore, identify, and engage with scholarship, theories, and their own ideas? A student who utilizes GenAI to translate non-English scholarship and as a tool for critical brainstorming and data analysis? (I discuss these uses in another blog post here.) The position also ignores the degree of social responsibility (or, dare I say, civic-mindedness) and pragmatism that is required and expected of humanities scholars: AI is firmly here to stay.

AI has transformed organizational processes, improved learners' confidence and outcomes, and empowered life-saving research in medicine, engineering, and the life sciences. Governments have deployed AI to produce more responsive and engaging work on behalf of taxpayers. And yet the humanities insist that society's humanity is being stripped away? The solution is not to turn our backs on AI, but to find ways to engage with it critically. Educators have an obligation, a social, civic, and moral responsibility, to prepare learners to be mindful and relevant participants in the society to which they belong. One could (and many did) make the argument that accessible computers would pose a threat to human civilization; and yet we use them anyway. Technological advances like the modern computer have been hugely transformative and have aided civil society tremendously. With the advent of GenAI, workers find that support with processes and operations enhances their scope for creativity and innovation. In short, less focus on process yields more time to innovate and to make an impact. It leaves more time to be creative, reflective, and engaged in the world we live in.

Irresponsible (*and uninformed) educating: a return to the past

Some humanities academics across the world have called for a return to pre-digital assessment (paper or oral): handwritten assignments, facilitated discussion, and memory exercises to assess learner progress. What is remarkable is that these methods of assessment are products of the past; so, instead of looking forward and making assessments more accessible, more inclusive, more engaging (and removing incentives to cheat), these academics are focused on returning to a past in which the humanities produced less qualified thinkers for a society to which today's students do not belong. It is both counterintuitive and irresponsible.

In spite of worsening marketization in the academy, where class sizes have ballooned, jobs are more precarious, wages have remained stagnant, the disconnect between the humanities and society has worsened, and staff have less time to support learners, some humanities academics want to drive the classroom backwards. Without community input. Without employer input. Without alumni or student input. And without input from policymakers.

Currently, some of the most vocal anti-AI voices in the humanities make the dishonest argument that AI prevents students from learning to write and that academic writing is the best means of learning and knowledge production. But writing is not a practice owned by the humanities, and thinking is never replaced when AI is used ethically. Problematically, the firm anti-AI stance that many in the humanities have espoused reproduces the same dismissive, Eurocentric approaches that the academic I mentioned earlier seemed concerned with preventing.

When I meet humanities scholars and academics who hold some technological skills and promote AI use, I find that they are community-minded individuals who are interested in systemic change and who adopt a positive, outcomes-based approach in their work that is reflective, thoughtful, and people-focused.

This is one of many reasons why I view AI as a handy tool in the decolonial academic's toolkit. AI helps provide scholars with bespoke, interactive, and inclusive learning that empowers multilingual students, including those for whom academic English, or similar registers, is not common. AI does not replace their thinking, nor does it provide their arguments; viewed as a translation device and a cognitive scaffold, it simply helps different thinkers express themselves. It is not incumbent upon decolonial scholars to seek acceptance for a tool in a colonial academic environment. Those academics concerned with banning AI should be as transparent about their (lack of) reasoning as they demand scholars be about their AI use.

Conclusion

The idea of banning AI reinforces the colonial ideal of what it means to be a proper writer, researcher, and communicator of knowledge. In banning it, academics adopt a one-size-fits-all exclusion of those who are not neurotypical and those who do not subscribe to Eurocentric modes of knowledge and knowledge production. Blanket AI bans, grounded in the epistemology of ignorance, exhibit linguistic and epistemic violence, yet humanities departments reject that argument on the basis that… "AI is bad for the environment and critical thinking," despite the growing body of evidence on its benefits for neurodiverse and non-Eurocentric audiences.

I worry that some of the response to AI in universities has been to retreat into older forms of assessment, as if they are automatically safer or more authentic. That is a trap. Higher education moved on from those modes of assessment for a reason, and going backwards will not magically make assessment more meaningful now.

In the humanities, if we are serious about preparing students for a diverse, rapidly evolving world, our focus has to be on critical skills, reflexivity, and adaptability, not on recreating assessment models from the 1980s and 1990s. Reverting to those formats in the name of "integrity" is not neutral. It can be cruel, it excludes many students, and it exposes a real gap in pedagogical standards. One would imagine that with the advent of accessible GenAI, educators would instead seek to use it to make their lectures more interactive, update their syllabi more than once every five years, provide agentic tools to support learners' development in real time, make classroom and online activities accessible for neurodiverse and neurotypical audiences alike, translate content for multilingual speakers (including those in the Global South), and locate more scholarship and contributions from the Global South more generally to engage with.

If educators are uninterested in engaging with new tools or improving assessment design, that is a problem. Education is a profession that requires ongoing learning. Refusing to modernize assessment out of discomfort with AI is like a teacher insisting that students should not use computers, Word, or online sources. It does not protect learning; it limits it. It is time for university leaders and policymakers to act before another generation of thinkers is severely disadvantaged by individuals with a social, ethical, and moral obligation to empower them and to enrich their personal and self-development, not hamper it. The late feminist educator bell hooks wrote that education is a practice of freedom (Teaching to Transgress: Education as the Practice of Freedom). Is 'freedom' clinging to Eurocentric epistemological and ontological traditions that have long reproduced, and often continue to reproduce, the harms we encounter today?

We already know that the future of higher education is shifting towards skill development, workplace readiness, and applied learning. Students are clear about this. More importantly, they want to build skills that matter in the world they are entering. When humanities academics and educators refuse to innovate or outright refuse to engage with modern tools, they are not protecting learning; they are failing learners.

The world has changed. Employers have changed. Students’ expectations have changed. If humanities departments want to remain relevant and continue training thoughtful, capable, and socially minded graduates, they have to participate in that change rather than retreat from it. The humanities can and should be leading on creativity, not shutting it down. Critical thinking still matters. Students do need time to develop those skills before relying heavily on AI – of course. But that does not mean they cannot use AI to explore ideas, test their thinking, or understand the world around them. When I engage with revolutionary and transformative educators in the humanities, they are excited about AI and focused on leading conversations that center sound research ethics and are focused on learner outcomes and social impact-based scholarship and engagement.

For more seasoned students and for scholars, though, AI can be part of an augmented learning environment because, in line with pedagogical foundations, educators know that transparency and intention matter. If AI helps learners strengthen their skills, deepen their understanding, or express their creativity with confidence, we should recognize that as a benefit. Clinging to rigid, Eurocentric modes of writing, research or assessment does not prepare students for the world they live in. Instead, it only keeps the humanities locked within values and practices that no longer align with the realities of society, work, or civic life. What use is our transformative feminist, decolonial, inclusive scholarship, and DEI practice in the academy if we fail to transform the system and continue to rely on a conditional mode of learning and engaging within the academy?

This vision applies beyond the humanities. Engineering students are excited when they encounter modules that encourage creativity and curiosity; the same is true in reverse. If we open the door to innovation in the humanities, we will see more passion, better critical engagement, and stronger thinking from students across disciplines. The next step is not rigidity. It is a culture that welcomes creativity, transparency, and critical engagement with the world as it exists today.
