
Resisting AI and Refocusing on the Human

by Nigel A. Caplan

As educators, we are seeing a deluge of articles, blog posts, and even institutional guidance touting the benefits of “generative artificial intelligence (AI)” products such as OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. Proponents say that today’s AI is a revolution radically transforming all aspects of society, and that educators must not only adopt this new technology but also adapt to it. A previously niche form of machine learning, known as a large language model, has spread at an impressive speed. However, the deafening buzz does not make generative AI inevitable, or even desirable, in education. Instead, as TESOL professionals, we need to pause and critically evaluate the implications of this attractive but potentially dangerous technology.

The term “AI” is notoriously vague and problematic, a catchall that falsely equates chatbots like ChatGPT with familiar forms of automation, algorithmic software, and machine learning to make them sound “sophisticated, powerful, or magical,” as Emily Bender has shown. Bender, a computational linguist, prefers the term synthetic media machines—media because they can generate images and videos as well as text, and machines to emphasize that they are built by humans and deployed by companies.

It is also vital to call such machines products, not tools, because they are indeed commercial products developed or backed by major technology corporations with little effective regulation and largely unenforceable promises of self-regulation. Their incorporation into educational technology is driven more by the allure of a lucrative market than by demonstrated benefits to learning.

My aim here is to raise awareness of the real and present harms that synthetic content generators are causing or may cause in education so that as TESOL professionals, we can make informed choices rather than following the hype. These risks, which have been identified by journalists and researchers, include the following:

    • Copyright infringement: Language models are trained using enormous quantities of human content, often obtained without consent and, by the corporations’ own admission, in violation of authors’ and artists’ copyright.
    • Exploitative labor: AI products are not truly artificial: they rely on humans, in the global South and even in prisons, to do traumatic work for little pay.
    • Environmental impact: Large language models require massive amounts of energy and water for training and operation at a time when we desperately need to combat climate change and conserve water.
    • Fabrication: Synthetic content generators are designed for plausibility, not accuracy, and their responses are often confidently wrong, especially when asked to provide citations. It cannot be overstated that the output of a chatbot is unreliable and can inadvertently mislead users or be deliberately harnessed for spam, fraud, and disinformation.
    • Bias: The training data for large language models reflects existing racial and gender biases, which are then reproduced in the output.
    • Privacy: Users of chatbots may be exposing their own data and compromising digital security. In the U.S., there are serious fears about the spread of artificially generated and sexually explicit images of children and celebrities. Other AI products that are marketed in education raise threats of surveillance, discrimination, and profiling.
    • Deskilling: As schools and universities look to cut costs, it is already tempting to turn to “AI tutors,” which will likely dilute educators’ attention, divert funding to technology corporations, and deny some students access to human teachers. One education policy expert calls this trend “degenerative AI.” Although there is no evidence that automated feedback is equal, let alone superior, to human instruction, the hyperbole surrounding AI can easily be used as cover for increased class sizes and workloads, particularly for contingent instructors. Ben Williamson, a chancellor’s fellow at the University of Edinburgh Centre for Research in Digital Education, warns that “AI is likely to reproduce the worst aspects of schooling,” such as the five-paragraph essay.
    • Academic integrity: Above all, because content generators can be used to circumvent writing, thinking, and learning, their use is incompatible with the mission of many classes, programs, and institutions, especially in TESOL.

For all these reasons, I am skeptical of claims that we need to teach “AI literacy” by allowing or encouraging the use of synthetic content generators in classes and assignments. I cannot in good conscience use and endorse products with such deleterious ethical, environmental, social, and academic impacts. Tressie McMillan Cottom calls on us to evaluate educational technologies by asking not what we can do with them but “what we should do with that shiny stuff.” To me, AI literacy instead means looking beneath the shiny surface of the chatbot interface and deciding that we should just say no.

The defense of AI in education often hinges on two assumptions: that its use is inevitable, and that students need to learn how to interact with chatbots for future employment. However, the future of synthetic media machines is far from certain. We have the choice and the power not to uncritically accept technodeterminism. As educational technology researcher Neil Selwyn has recently argued, “the future of AI and education is not a foregone conclusion that we simply need to adapt to. Instead, the incursion of AI into education is definitely something that can be resisted and reimagined.”

Clearly, there are applications for machine learning that will likely become integral in some fields, and they will almost certainly be marketed as AI. But these will probably be narrow products that require specialist training to produce and implement and will not look or behave like today’s chatbots. For our purposes as language teachers, the purported benefits pale next to the risks—not only the immediate harms I have discussed here, but also the threat to language education itself. What distinguishes our field is the human connection, something which AI technology can only supplant, not supplement.

Today’s machine translation and text generation are impressive on the surface, but languages are learned through human interaction for the purpose of human interaction. To cede that space to AI would be a mistake that devalues our profession and leaves students woefully unprepared for a multilingual future. At the very least, I call on my colleagues to engage critically with new technologies, to question how and why they will be used, and to make deliberate and informed choices rather than accept risky technologies as inevitable.

About the author

Nigel A. Caplan

Nigel A. Caplan, PhD, is a professor at the University of Delaware English Language Institute and graduate program coordinator of the MA TESL degree. No generative AI products were used in writing this column.
