A famous aphorism by the psychologist Abraham Maslow suggests that once an individual has a hammer, they tend to see every problem as a nail (Maslow 1966). This idea epitomizes a great challenge associated with the integration of new technologies into educational settings. If unguided by teachers and not embedded in responsive pedagogies, many students will not only overestimate a technology’s power to solve a great variety of problems but also start to recognize only those problems that are solvable with the technologies available to them. Despite this essay’s plea to advance the integration of AI-powered technologies into educational settings, Maslow’s aphorism guides our thoughts on technology and education. We believe that with the advent of AI-powered technologies, the responsibility of educators to help learners operate their new hammers in productive and ethical ways is greater than ever. Moreover, not integrating contemporary technologies into language and humanities classrooms will prevent the next generation from developing a nuanced appreciation for the affordances and limitations of these tools.1
This essay is written from the perspective of two educational linguists, language teachers, humanities educators, and language program administrators whose research focuses on identifying and optimizing factors that govern second language learning, literacy development, and intercultural awareness in a variety of educational settings. Over the last decade, our research agenda has been dominated by investigations of the affordances and limitations of a variety of technologies that support the learning processes students experience in brick-and-mortar classrooms, behind computer screens, and in study-abroad contexts.
The observations outlined in this essay emerged initially in direct response to a research project addressing a practical dilemma that many language educators faced toward the end of the second decade of the twenty-first century: Should they continue to ban rapidly improving online machine translation services such as Google Translate2 from their classrooms, or would they serve their students better by modifying their teaching approaches and integrating machine translation applications into their teaching? Not surprisingly, many language educators continue to hold negative beliefs regarding online machine translation services (Merschel and Munné 2022). Many feel that the use of technology is undermining the development of linguistic competencies despite growing research that points to the opposite (Hellmich 2019; Jolley and Maimone 2022). This cautious reaction, we discovered in the context of our research, is not unique to our field. In fact, thirty years earlier, mathematics educators articulated very similar concerns toward an emerging technology, the pocket calculator, and two and a half millennia earlier, the educational establishment in Ancient Greece feared the disruptive impact of literacy, a technology that emerged about 3000 BC in Southern Mesopotamia and from there conquered the eastern Mediterranean. We argue that these historic cases can help us better understand and predict the future roles of chatbot technologies such as ChatGPT3 in writing-intensive language and humanities classrooms.
In the first part of this essay, we offer three vignettes on the perceptions of technological innovation among educators vis-à-vis three technologies: literacy, the pocket calculator, and Google Translate. The starting point of this essay is Socrates, founder of western philosophy, and his negative perception of an innovation that was relatively new in his time: literacy. But we will also show that his skepticism toward reading and writing had a long afterlife in western thought by pointing out that the French philosopher Jean-Jacques Rousseau articulated similar beliefs. We will then consider two technologies that have impacted society in recent decades. The pocket calculator changed the ways people solve numeric problems, and machine translation applications are transforming the ways millions read, write, speak, and listen across languages and cultures. We will show that the successful integration of the pocket calculator into mathematics education over a span of twenty years from the 1970s to the 1990s relied on a fundamental rethinking of educational objectives that advocated for a shift away from independent arithmetic accuracy toward teaching problem-solving through human–machine partnership. This process ultimately advanced mathematics education. We will argue that many language educators are currently engaged in a similar shift and are developing approaches to integrate machine translation applications productively into the curriculum as the field gradually moves from condemnation to acceptance. This integration of the new technology aims to expand students’ linguistic repertoire and communicative range without sacrificing the development of a strong foundation of language proficiency that continues to enable students to communicate autonomously without the technology.
Although the field is far from having embraced the presence of Google Translate in our classrooms, we notice promising signs that the process from rejection to acceptance and embrace is underway.
After having established this pattern from rejection to acceptance and embrace in these three vignettes in the first part of this essay, we will turn to generative AI and, in particular, focus on chatbot technologies based on large language models such as ChatGPT. We will argue that there are clear signs that language educators, writing teachers, and humanities instructors will go through a similar pattern from rejection to acceptance and embrace as they grapple with the arrival of a new technology that initially seemed, to many educators, to disrupt their disciplines. But we will also argue that ChatGPT and future AI-powered technologies can only be successfully integrated into language, writing, and humanities classrooms if language and humanities educators, like their colleagues in STEM fields several decades earlier, are supported and led by their professional organizations as they rethink learning objectives in the wake of a technology shift. Whereas skepticism, resignation, and fear continue to dominate the sentiment among many practitioners, others are more proactive in rethinking their pedagogical practice in the presence of generative AI tools. We predict that the proliferation and increasing sophistication and accuracy of chatbot technologies will eventually encourage educators in writing-intensive language and humanities learning environments to reformulate educational objectives in their fields, which will in turn enable an intentional and thoughtful integration of future chatbot technologies. In closing, we will articulate our optimism that this rethinking is already underway and that generative AI technologies will eventually become uncontroversial realities in a wide range of humanities classrooms.
Moreover, we also believe that if guided effectively, this development will result in deeper intellectual engagement, enhanced critical thinking, stronger written communication abilities, and overall greater learning opportunities for undergraduate students. But we also predict that the process will take time and needs to be coordinated by forward-thinking institutions and led by innovative educators who are encouraged to push the traditional boundaries of their fields.
The mix of arrogance, skepticism, resignation, and fear that we see in some educators responding to new technologies is nothing new. In fact, as we will show in this first vignette, writing and reading themselves were perceived as threats to the very fabric of the educational enterprise by members of the intellectual establishment in Ancient Greece. Socrates, who was most likely illiterate himself, famously rejected reading and writing in a dialogue recorded by Plato (1925) that, ironically, was preserved only thanks to the invention of literacy.
For this invention [literacy] will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise. (p. 275)
Socrates faulted literacy for weakening the necessity and power of memory and for allowing the pretense of understanding in his students rather than true understanding. In the dialogue, he even predicted that this new paradigm would have a negative effect on the behavior of young people, who as a result of reading will not only fail to achieve “true wisdom” but even become “ignorant and hard to get along with” (p. 275). At the same time, Socrates failed to recognize how literacy enhances cognitive abilities, intellectual depth, and the democratization of knowledge.
Over the past two millennia, we have witnessed a reversal of views related to literacy. The consensus among educational psychologists and cognitive scientists is that reading and writing do not weaken the mind but extend our cognitive, mental, and emotional range. Written language not only plays an important role in mediating cognition; literacy skills also extend our knowledge of the world. And this is not just a philosophical argument. Today, neuroscientists can empirically demonstrate that the impact of literacy is reflected in different spheres of cognitive functioning. Learning to read reinforces and modifies certain fundamental abilities, such as verbal and visual memory, phonological awareness, and even visuomotor skills. Functional imaging studies now show empirically that literacy itself influences the pathways used by the brain for problem-solving (Ardila et al. 2010).
Despite today’s scientific consensus that literacy enhances human cognition and thus occupies a central position in our curricula, Socrates’ critique of literacy has had an afterlife and sporadically reappeared in western intellectual history. Jean-Jacques Rousseau’s treatise Émile, or On Education, first published in the original French in 1762, is arguably one of the most influential literary works in the Western Canon, with an immeasurable impact on post-Enlightenment thought and educational philosophy. And yet, Rousseau’s magnum opus states ideas that resonate with Socrates’ negative perception of literacy: “I hate books. They only teach one to talk about what one does not know” (Rousseau 1979, p. 184). “The child who reads does not think, he only reads; he is not informing himself, he learns words” (Rousseau 1979, p. 168).
The invention of encoding ideas in writing and the practice of decoding ideas recorded for us across time and space, whether on clay, on paper, or as pixels on a screen, arguably represent a quantum leap in the development of human civilization. Yet the illiterate Socrates and the highly literate Rousseau warned of the dangers of literacy and of an education that relies heavily on secondary experiences through book knowledge. Literacy comes at a cost, according to Socrates and Rousseau: loss of memory, loss of discipline, loss of the wisdom that grows out of oral dialogue, and loss of the knowledge gained through unmediated interaction and experience with the world around us.
When the pocket calculator entered American households in the late 1970s, the technology was far from novel. The underlying principle of digital calculators using binary-coded decimal arithmetic operationalized through a system of on/off switches had been developed in the 1940s with electrical tubes and transistors, before the semiconductor allowed engineers to shrink the technology to a pocketable device (O’Regan 2008). Through mass production in low-wage countries, consumer electronics such as the electronic pocket calculator became affordable for the middle class in the United States. By 1975, a four-function electronic calculator was available to American consumers for under $20, circa $100 in today’s dollars, and could be found in 11% of American households (Weaver 1977).
In response to this development, mathematics educators faced a dilemma that is not unlike the dilemma that philosophers faced in antiquity and that today’s language and humanities educators started to face in the 2010s with Google Translate or since November 2022 with ChatGPT: Should they teach their students to use new technologies in a productive and ethical manner, or would they serve learners better by discouraging or even banning them?
The initial reaction among many math educators in the 1970s was to prohibit the use of pocket calculators. Commentaries in newsletters suggest that many feared that students would be unable to learn basic arithmetic skills (Pendleton 1977). In addition, educators expressed concerns that a digital divide would emerge as a result of the initial costs of the device and the operating costs due to the early models’ high consumption of battery power (Pendleton 1977). However, innovative teachers and educational psychologists showed that children could use calculators in meaningful ways without compromising the development of basic arithmetic skills. Instruction shifted toward problem-solving and the development of mathematical thinking through human–machine collaboration (Pendleton 1977).
However, systemic change regarding the status of the pocket calculator occurred only incrementally. The standards for curriculum and instruction issued by the National Council of Teachers of Mathematics document the gradual introduction of the pocket calculator. Whereas the recommendations from 1980 merely allowed the calculator, the guidelines from 1989 and 1999 strongly encouraged its use (National Council of Teachers of Mathematics 1980, 1989, 1999). At the same time, the organization endorsed a new pedagogy that focused on problem-solving and the development of mathematical thinking. In other words, the introduction of the pocket calculator in American classrooms inspired a rethinking of educational objectives.
We will conclude this section by highlighting three insights from this case that will help us better understand the reception of machine translation and chatbot technologies among today’s educators. First, the pocket calculator needed about 20 years to become an established, uncontroversial tool used by millions of students and embraced by many of their teachers across middle and high school classrooms in America. Second, the gradual and intentional implementation of the pocket calculator into the educational system coincided with a rethinking of educational objectives that moved the focus from basic math skill development to teaching children to solve problems through a combination of mathematical thinking and human–machine collaboration. Third, this process was intentionally guided by the profession, and as a result of the thoughtful integration of the pocket calculator into a dynamic learning environment, the technology did not disrupt math education. The pocket calculator enhanced math education and resulted in a shift toward mathematical thinking through human–machine collaboration.
Google Translate poses to language educators a challenge similar to the one that math educators faced 40 years ago. Should they ban the technology, or are there ways to integrate machine translation applications that enhance the communicative range of their students without compromising basic skill development?
The service, introduced in 2006 and substantially updated in 2016, is the undisputed market leader among free consumer-oriented online translation platforms. The technology is the result of decades-long efforts in the areas of computational linguistics and artificial intelligence research (Poibeau 2017; Lewis-Kraus 2016). Current figures are not available, but already in 2016, the service had more than 500 million users who translated 100 billion words per day with Google Translate (Turovsky 2016). The technology impacts verbal behavior on a global scale. According to a New York Times report, the 2018 FIFA World Cup in Russia is widely considered to be the first global event that demonstrated the emerging role of mobile translation applications: Hundreds of thousands of international soccer fans used machine translation to navigate the host country (Smith 2018). The technology promises the ability to communicate across languages without the effort and resources necessary to invest years in language study. Translation errors and occasional intercultural misunderstandings that would horrify language teachers are tolerated by users and regarded as a trade-off for the convenience of the technology.
The rapid improvement of Google Translate in the second half of the last decade creates a real challenge to conventional models of communicative language education. Whereas language educators until very recently could simply demonstrate the technology’s problematic output and thus appeal to their students’ reason to avoid a crude tool (Steding 2009), technological progress in recent years has made this strategy less effective. And whereas language educators understand the technology’s limited ability to produce satisfactory translations of texts that are only comprehensible through an understanding of their cultural embeddedness, many students and monolingual adults may not. As a result, the technology does more than just transform actual linguistic behavior. Google Translate triggers attitudinal change among learners, members of society, and even decision-makers who manage shrinking educational budgets. The sheer existence of Google Translate helps to legitimize policy decisions that further devalue language education as a public good and reduce resources for those students from non-English speaking homes who rely on our assistance most (Salzman 2023). We do not think that it is controversial to state that Google Translate has already transformed the way millions communicate across languages and modified these users’ perceptions of the value of language proficiency. But will Google Translate also disrupt language education?
We argue that to confront the challenge that Google Translate poses to language education, teachers need to find ways to integrate the technology into their classes in an intentional manner. But this change should not be the sole responsibility of individual teachers or educational researchers. Instead of sporadic experimentation, we need systemic leadership. In contrast to the leadership the National Council of Teachers of Mathematics provided in response to the calculator in the last century, the American Council on the Teaching of Foreign Languages (ACTFL) has not yet issued guidelines regarding the use of Google Translate. This is unfortunate given the growing empirical evidence and an emerging consensus in the research community that machine translation applications have the potential to enhance learning in certain language learning contexts if implemented in a thoughtful and intentional manner (Aikawa 2018; Vinall and Hellmich 2022). The fact that these research findings have not been translated into curricular recommendations from organizations such as ACTFL is demoralizing for language teachers with positive attitudes toward machine translation, who are left to create sensible and consistent policies for their programs on their own. But finding consensus on such a controversial and emotional issue among the few teachers who work together in a language program is challenging. Frequently, the orthodoxy of the status quo simply dominates in such educational settings, and as a result of the lack of leadership from ACTFL, a large majority of programs continue to address the challenge by simply prohibiting the use of Google Translate both in the classroom and for homework assignments. Nevertheless, we are optimistic that in the long term, Google Translate will not disrupt language education any more than the pocket calculator disrupted math education.
We predict that once professional organizations confront the issue and develop sensible policy recommendations regarding the use of Google Translate in language classes at the PK–12 and post-secondary levels, we will see a shift toward acceptance and embrace of a new technology similar to the one we have observed among math educators in the last century and among philosophers over the past two and a half millennia.
After having sketched three cases of technology adoption that show similar patterns, we will now turn to the impact of generative AI on education, focusing on ChatGPT’s impact on writing-intensive learning environments in language and humanities classes at the post-secondary level.
Over the past decade, most language and humanities educators were in a state of blissful ignorance regarding the potential impact that generative AI might one day have on their learning environments and professional community. This changed overnight in late 2022 with the release of OpenAI’s chatbot ChatGPT, based on a large language model, and the accompanying media hype (Bartholomew and Mehta 2023). The new chatbot composed essays of previously unthinkable sophistication that sent shockwaves through the world.
The initial reactions among educators were characterized by the same mixture of fear and arrogance that we outlined in the first part vis-à-vis the invention of literacy, the pocket calculator, and the proliferation of improved machine translation applications. Although commentators reached a quick consensus regarding the shallowness of the chatbot’s output, many predicted the death of the college essay (Marche 2022) and the end of high school English (Herman 2022). The timing and tone of these voices contributed to a decades-long tradition in the humanities of doomsday thinking and a sense of permanent crisis that culminated in March 2023 with the publication of an alarming and much-discussed essay entitled “The End of the English Major.” Independently of the recent technological advancements, the text chronicled in great detail, almost like an obituary, the demise of a discipline that until fairly recently enjoyed a privileged position at the very center of the liberal arts curriculum across a wide spectrum of colleges and universities in the Anglophone world (Heller 2023).
The deeply pessimistic scenarios painted by many humanities educators in their immediate responses to ChatGPT were not put into perspective by emerging evidence that pointed toward shortcomings of the technology. Quite the contrary: Hallucinations, inaccuracies, errors, and shallowness further fueled their concerns. For humanities scholars, often equipped with only the vaguest understanding of the technology, ChatGPT quickly morphed from an existential threat to the writing-intensive college classroom to a hazard to democracy, an agent of nuclear proliferation, and, thus, an apocalyptic force with the potential to bring human life on the planet to an end.4
The concern that resulted in immediate educational policy responses to the new technology, however, sounded relatively benign compared to these apocalyptic doomsday scenarios, and it resonates with Socrates’ assumption that literacy causes disruptive behavior among students: School administrators, most swiftly at the K–12 level, cited concerns about compromised learning opportunities as they banned the technology over the winter break by blocking access to OpenAI’s website from their school systems’ networks (Elsen-Rooney 2023). As institutions of higher education explored measures to prohibit ChatGPT and still struggled with questions of how to enforce such a ban, a variety of services were launched that promised to detect cheaters by identifying texts composed by chatbot technology (Hsu and Myers 2023). The problem appeared to be contained.
But how did our students respond to this situation? Despite the alleged risks to life on earth, hastily implemented bans, and the possibility of getting caught by detection technologies, many students quickly integrated ChatGPT into the growing repertoire of technologies they use in the classroom and for homework (Klar 2023). For students, it appears, collaborating with ChatGPT became as natural as it was for their parents 25 years ago to use their word processor’s spell checker to increase orthographic precision, a practice that many teachers at the time considered academically dishonest. As students displayed noncompliance with ChatGPT bans, the technologies that claimed to reliably detect the illegal use of ChatGPT turned out to be buggy, easy to circumvent, and prone to produce false positives (Sadasivan et al. 2023).
These two realities, noncompliance with prohibitions and inaccurate enforcement technologies, coincided with a shift in the debate on the impact of ChatGPT on education from demonization toward acceptance and embrace. Instructors in and beyond STEM fields started to experiment in their classrooms with the intentional and productive integration of ChatGPT into their teaching practices to help students develop a critical appreciation of the affordances and limitations of the technology.
In the twelve months after the release of ChatGPT, the perceptions among some educators shifted from rejection to acceptance and embrace (Roose 2023). Academic conferences in a wide range of disciplines now provide opportunities for researchers and instructors to explore productive ways to integrate ChatGPT into their pedagogies. This shift also manifested itself in educational policy: School administrators reversed their hasty decisions from early 2023 to ban the technology (Singer 2023; Serrano 2023). These are clear signals that an attitudinal shift from rejection to acceptance has already started, at least among some progressive and forward-thinking thought leaders in language and humanities education.
But how long will it take for ChatGPT and future chatbots to become an uncontroversial presence in writing-intensive language and humanities classrooms? Bill Gates famously suggested that we tend to overestimate the pace of innovation that will occur within the next year, but that we underestimate the extent of change that will happen over the next decade (Gates 1996). This observation, as well as our analysis of the reception of the pocket calculator and Google Translate, guides our prediction: In the next few years, we anticipate ongoing disorientation, confusion, and controversies among educators confronted with ChatGPT. However, we are certain that over a longer time horizon, chatbot technologies will become an integral part of language and writing-intensive humanities education.
Whereas we cannot predict a timeline, we are certain of a critical condition that needs to be met for ChatGPT to become an uncontroversial presence in our writing-intensive learning environments. Major professional organizations for collegiate language and humanities education, including ACTFL, the Modern Language Association of America, the American Historical Association, and the American Philosophical Association, must show leadership and guide their constituencies, despite the controversial nature of the issue, toward an understanding that ChatGPT and future chatbots will enhance the intellectual depth of language and humanities classrooms if students are taught how to collaborate effectively and appropriately with these technologies.
Inspired by the developments in mathematics education in the past century, we call on the above-mentioned professional organizations to guide their membership toward a positive and productive understanding of the role of machine translation and generative AI technologies in language and humanities classrooms. We know from the pocket calculator case that guidance from such organizations is critical for the intentional and strategic integration of online translators and chatbot technologies into learning environments. These organizations can also help their membership understand that once a general prohibition of online translators and chatbots is partially lifted, the technology will not become an uncontrolled omnipresence in the classroom. Assignments and classroom activities will vary and will not become ‘easier’ when completed through human–machine collaboration. To guide such a systematic and intentional process, professional organizations must formulate new learning outcomes for AI-enhanced classrooms.
An overall objective of humanities education is to help students develop the skill of asking questions. The philosopher Slavoj Žižek reminds his audience in a short video lecture that the core purpose of his field is to ask good questions (Žižek, n.d.). Harvard’s Graduate School of Education offers a professional development program aimed at teachers and professors to provide them with instructional strategies that enable their students to raise better questions (Harvard Graduate School of Education, n.d.). Asking good questions is teachable and learnable, and so is engineering effective prompts. Humanities classrooms of the future will be about both asking good questions and translating these questions into effective prompts. Students will also learn to critically interrogate the output of ChatGPT and enhance their critical thinking when processing information both in their classrooms and as citizens. This shift toward a ChatGPT-enhanced humanities classroom has the potential to reinvigorate the humanities. ChatGPT may become the innovation that leads the humanities out of a decades-long crisis.
We believe language and humanities educators also have an ethical obligation to integrate AI-powered technologies into their teaching. We are currently witnessing arguably the most significant technological and social transformation since the proliferation of the World Wide Web in the early 1990s. These AI technologies are already creating new knowledge ecosystems that impact the workplace. In the 1970s, innovations in the field of applied robotics eliminated millions of blue-collar manufacturing jobs. In the 1980s and 1990s, personal computers and computer networks eliminated many low-level clerical positions. Of course, these transformations also generated new jobs. Today, the rapid advancement of generative AI threatens jobs for highly skilled, creative white-collar workers across industries from publishing to software development through gains in productivity. Our students need to be ready for this transformation. They must learn to be effective and ethical collaborators with AI-powered technologies. If they do not systematically develop the abilities to engage in meaningful human–machine collaboration during their years as undergraduate students, their professional futures are in jeopardy. Therefore, it is the duty of educators across the academic spectrum to integrate these technologies into our classrooms.
ChatGPT is not a threat to language education and the humanities. On the contrary: It presents great opportunities to reenergize the humanities. Educators will develop strategies to integrate the technology into the classroom without compromising writing development by teaching young people to engage in human–machine collaboration. In the future, humanities scholars will teach young people how to use AI critically and effectively through techniques associated with the new field of prompt engineering. They will also teach young people to understand the limitations of computer-generated texts, to identify stylistic features that may distinguish them from human-generated texts, and to question the factual accuracy of both machine- and human-generated texts. Thus, fostering a critical awareness of authorship and authenticity will become an important dimension of language and humanities teaching in the future. These are important roles for language and humanities education, but these outcomes cannot be achieved if we ban, ignore, or discourage the use of machine translation and generative AI in our classes. Language and humanities educators must welcome the arrival of ChatGPT because it will provide the humanities with an opportunity to reconsider and update their vision of how to prepare our students for critical citizenship in a world in which human- and machine-generated texts, artworks, and ideas will become increasingly intertwined.