
Will ChatGPT give us a lesson in education?

There may be a learning curve as AI tools grow in popularity, but the technology offers teachers opportunities to help pupils build new skills in formulating questions and in critical thinking.

Published September 21, 2023

After super-powerful chatbots such as ChatGPT-4 became widely available this year, school administrators around the world moved to ban the technology from classroom education. Nearly half a dozen US districts blocked access to ChatGPT and other multimodal large language models (MLLMs) on school devices and networks, and some Australian schools turned to pen-and-paper exams after students were caught using chatbots to write essays.

Teacher resistance peaked when ChatGPT-4 was released in March 2023. Developed by San Francisco-based OpenAI, this generative AI can write poetry and songs, and it passed the US bar exam in the 90th percentile. MLLMs can process images as well as text, and they answer queries by drawing on patterns learned from vast quantities of online data.

When asked why Seattle schools had moved to restrict ChatGPT-4 on district-owned devices, Tim Robinson, a spokesperson for the district, responded: “Generative AI makes it possible to produce non-original work, and the school district requires original work and thought from students.”

However, confronted with AI’s seemingly inevitable growth, many schools are now reversing course, albeit carefully. “There’s still a fear that students will use large language models as shortcuts instead of practicing to become better writers,” says Tamara Tate, a project scientist at the University of California, Irvine’s Digital Learning Lab. She adds that if AI is here to stay, then students might be better served by educational strategies that promote creative uses of the technology. “These tools can provide students with in-the-moment learning partners on a huge range of topics.”

In the view of Tate and other experts, MLLMs have several positive educational roles to play, including encouraging students to evaluate answers rather than automatically accepting them. Careful thought is needed to ensure that these potential upsides are realized, however, and to mitigate any potential downsides. How might AI-assisted education unfold?

Classroom gains and losses

Proponents of the educational uses of generative AI point to several advantages. For one thing, ChatGPT-4 has an extraordinary command of proper sentence structure, which Tate says could be especially useful for non-native speakers seeking insight into how to correctly incorporate words and phrases in real-world settings.

Xiaoming Zhai, a visiting professor who studies applications for machine learning in science education at the University of Georgia in Athens, believes that teachers also stand to benefit from using models like ChatGPT as teaching aids. The models can generate personalized lesson plans and other resources geared to the needs of individual students while assisting with grading and other mundane tasks. In Zhai’s view, that capability frees time so that teachers can provide students with more one-on-one feedback. By efficiently automating basic tasks like searching out relevant literature and materials and summarizing content, the models allow students and teachers alike to “focus more on creative thinking”.

Creative thinking will help people get the most from MLLMs. “Large language models are like search engines: garbage in, garbage out,” Tate wrote in a recent preprint paper.

Teachers can help their students develop expert prompting and search optimization strategies to generate the most helpful content. “To use the technology effectively, students need to double down on the work of revision,” Tate says. “ChatGPT-4 can generate a fluent first-draft response, but not a lot of deep content. The responses can be vague and often wrong.”
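Tate’s point about prompting can be made concrete. As a hypothetical sketch (the function name and prompt template below are illustrative, not drawn from the article or any specific chatbot’s API), a teacher might show students how to frame prompts that ask the model for questions and critique rather than finished prose, keeping the revision work with the student:

```python
# Hypothetical sketch of a revision-focused prompt template.
# Nothing here is a real API call; it only illustrates how a prompt
# can be structured to elicit feedback instead of a rewritten draft.

def build_revision_prompt(draft: str, focus: str) -> str:
    """Wrap a student's draft in instructions that ask a chatbot to
    critique the text rather than rewrite it."""
    return (
        "Here is a draft paragraph:\n"
        f"{draft}\n\n"
        "Do not rewrite it. Instead, list three specific questions "
        f"about {focus} that would help the author revise it themselves."
    )

# Example: a prompt that steers the model toward feedback, not finished text.
prompt = build_revision_prompt(
    "The industrial revolution changed everything about cities.",
    "evidence and specificity",
)
print(prompt)
```

The design choice mirrors Tate’s advice: the instruction “do not rewrite it” pushes the model into the role of an in-the-moment learning partner, so the student still does the work of revision.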

While researching this article, we asked ChatGPT-4 to tell us, in its own words, why it would be a helpful tool for education. Seconds later, the model provided a detailed answer in which it claimed it had access to vast amounts of knowledge and could respond instantly to questions in multiple languages at any time. But the model was also candid about its limitations, pointing out that if ChatGPT-4 doesn’t understand the nuances of a particular question, then it might deliver incomplete or erroneous information that could be problematic for students who rely solely on the model for answers.

Because MLLMs may fail to support their claims with reasons or evidence, teachers have an opening to demonstrate the need for critical reasoning. “Students need to think about who said what and why in a given response,” Tate says.

Lea Bishop, a law professor at Indiana University’s Robert H. McKinney School of Law in Indianapolis, agrees that potential inaccuracies will require students to scrutinize the model’s output. “You have to develop the habit of questioning everything you see,” she says. “That means asking probing follow-up questions and triangulating with other sources of knowledge to see what matches up. I need you to show me that you’re better than the computer.”

Dealing with cheating and secrecy

Some experts worry that, for less motivated students, these sorts of models provide a tempting source of ready-made content that diminishes critical thinking skills. The predecessors to ChatGPT-4 proved themselves capable of generating essays and responses to short-answer and multiple-choice exam questions. “We already have a lot of problems with students who feel that learning equates to searching, copying and pasting,” says Paulo Blikstein, an associate professor of communications, media, and learning technologies at Columbia University in New York. “With AI, we have an even greater risk that some will take the shortest and easiest path, and incorporate those heuristics and methods as a default mode.”

Teachers can try to flag AI-generated content with software packages called output detectors. But these packages have questionable reliability, and in July 2023, OpenAI discontinued its own output detector, citing low accuracy. Experts worry that models like ChatGPT-4 will increasingly put teachers into the unwanted role of having to police students who break rules on AI-generated content.

Such concerns are valid, and contributed to the initial negative responses. Blikstein says early school restrictions may be seen as a “knee-jerk reaction against something that is still very hard to understand”.

And although these bans are gradually being lifted, ChatGPT-4 is not yet in the clear: its workings remain opaque, even to the experts. Between its inputs and outputs are billions of ‘black-box’ computations. ChatGPT-4 is said to be OpenAI’s most secretive release yet. The company has disclosed little about how the model was trained, and proprietary systems developed by competing companies are now driving an AI ‘arms race’ — advancing at mind-boggling speed.

Defining core skills

Does the rise of MLLMs mean writing itself will go the way of older skills, in much the same way that calculators diminished the need for mental arithmetic? Experts offer a range of opinions. Taking a bullish stance, Bishop argues that functional writing skills such as spelling, grammar, and knowledge of how to organize a standard essay “will be totally obsolete two years from now”. Others see a need for caution. “Without practice writing their own content, it will be hard for students to predict where and how writing mistakes are made — and then spot them in AI-generated content,” Tate says.

In Blikstein’s view, this grey area underscores a need to proceed slowly. “The stakes are high with language,” he says, adding that generative AI can be a powerful partner for enhancing — not replacing — a student’s cognition. But important questions remain. “For instance, we don’t have a good model for authorship in the area of AI-generated content,” he says. “The text appears out of the ether, and we have no idea where it came from.” For accomplished professionals, using AI to boost writing skills may not pose much of a problem. “But that’s not true for younger people who don’t understand the craft of writing to begin with,” he adds.

Blikstein also worries that AI might perpetuate educational inequities. Wealthier school districts have resources to apply the technology with an emphasis on human interaction and project-based learning, while poorer schools might move increasingly towards automation to save money. “If you settle for something cheap, it can take over your whole system,” he says. “Then five years later, it’s the new normal.”

Ultimately, AI could offer an evolution in educational norms that sends educators back to basics. “We have to identify the core competencies that we want our students to have,” says Zhai. “How are we going to incorporate models like ChatGPT into the learning process? We are preparing future citizens, and if AI will be available, then we need to think about how we build competence in education so that students can be successful.”

Explore FII’s publications site for more thought-provoking articles and podcasts about artificial intelligence and the impact of technology on society.

Produced by

FII Institute

FII Institute is a global nonprofit foundation with an investment arm and one agenda: Impact on Humanity. Committed to ESG principles, we foster the brightest minds and transform ideas into real-world solutions in four focus areas: AI and Robotics, Education, Healthcare and Sustainability. We are in the right place at the right time – when decision makers, investors and an engaged generation of youth come together in aspiration, energized and ready for change. We harness that energy into three pillars – THINK, XCHANGE, ACT – and invest in the innovations that make a difference globally.
