OpenAI wants ChatGPT in classrooms, despite the obvious risks of misuse and confusion. The company has suggested ways teachers can put the system to work beyond its best-known role: research assistant for procrastinating students.
That dubious use case, of unknown prevalence, is what makes the chatbot controversial. Teachers worldwide have caught, or suspected, students using ChatGPT to write essays or answer take-home quizzes. Depending on your educational philosophy, that may be cheating, fair play, or something in between, but either way it is disrupting lesson plans around the world.
OpenAI wants to rehabilitate its image in education, so it has offered a number of compelling ways to put ChatGPT to use in the classroom.
ChatGPT can help English learners translate their work and write more clearly. The system may invent facts, but because it draws on a corpus of mostly well-written text, its grammar stays sound even when it hallucinates. I've heard as much from non-native English speakers, and that kind of help is as useful to a fifth grader as it is to an adult.
OpenAI also relays experts’ suggestions that ChatGPT could help create new test questions or role-play as a job interviewer.
The best advice, from Geetha Venugopal in Chennai, India, is to teach kids not to trust everything a computer says:
In her classroom, she advises students to remember that ChatGPT’s answers may not be credible or accurate all the time, to think critically about whether to trust them, and then to confirm the information through other primary resources. The goal, she says, is to help them “understand the importance of constantly working on their original critical thinking, problem solving and creativity skills.”
If those kids learn that, they’ll do what half the world can’t!
In an accompanying FAQ, the first (and likely most asked) question is about how to recognize AI-generated content passed off as student work.
OpenAI’s words are clear:
Do AI detectors work?
In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.
They also advise against asking ChatGPT or other systems questions like “did you write this?”, since the model has no way of knowing and may simply make something up. And they admit that “small edits” are enough to evade detection, for example removing the telltale “As an AI, I…” preamble that lazy plagiarists overlook.
OpenAI instead recommends having students show their work and their drafts, including their conversations with AI models, to demonstrate that they aren’t blindly trusting the output. The kids in Ms. Venugopal’s class would already know that.
The company also provides extensive prompts for casting ChatGPT as a tutor or teaching assistant: “You are a friendly and helpful instructional coach helping teachers plan a lesson,” and so on. Educators may not want to use these verbatim, but reading them gives a sense of how much direction the agent needs in order to be helpful without being too helpful.
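For anyone who wants to experiment, here is a minimal sketch of how one of those prompts could be dropped into the OpenAI Python SDK’s chat API. Only the system prompt comes from OpenAI’s materials; the model name and the sample teacher request are placeholders of my own.

```python
# Minimal sketch (not OpenAI's own example code): feed the suggested
# instructional-coach prompt to the chat API via the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any chat-capable model works
    messages=[
        {
            # System prompt quoted from OpenAI's teacher guide
            "role": "system",
            "content": (
                "You are a friendly and helpful instructional coach "
                "helping teachers plan a lesson."
            ),
        },
        {
            # Hypothetical teacher request, purely for illustration
            "role": "user",
            "content": (
                "Help me plan a 45-minute lesson on the water cycle "
                "for 5th graders."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```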
AI agents like ChatGPT will clearly be part of education going forward, even though they can be abused. After all, who among us never installed games on their TI-83 graphing calculator or copied a Napoleon report straight from Encarta? Plenty of younger readers, probably. I may be dating myself, but the parallels are clear, and the tools will only adapt to the classroom if students and teachers adopt and customize them.