ChatGPT (Generative Pre-trained Transformer) is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI’s GPT-3.5 family of large language models and is fine-tuned with both supervised and reinforcement learning techniques.
ChatGPT was launched as a prototype on November 30, 2022, and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge. Its uneven factual accuracy, however, was identified as a significant drawback.
Positive reactions
ChatGPT was met in December 2022 with generally positive reviews; The New York Times labeled it “the best artificial intelligence chatbot ever released to the general public”.
Samantha Lock of The Guardian noted that it was able to generate “impressively detailed” and “human-like” text.
Technology writer Dan Gillmor used ChatGPT on a student assignment and found that its generated text was on par with what a good student would deliver; he opined that “academia has some very serious issues to confront”.
Alex Kantrowitz of Slate lauded ChatGPT’s pushback against questions related to Nazi Germany, including the claim that Adolf Hitler built highways in Germany, which the chatbot countered with information about Nazi Germany’s use of forced labor.
In The Atlantic’s “Breakthroughs of the Year” for 2022, Derek Thompson included ChatGPT as part of “the generative-AI eruption” that “may change our mind about how we work, how we think, and what human creativity really is”.
Kelsey Piper of Vox wrote that “ChatGPT is the general public’s first hands-on introduction to how powerful modern AI has gotten, and as a result, many of us are [stunned]” and that “ChatGPT is smart enough to be useful despite its flaws”. In a tweet, tech mogul Elon Musk wrote that “ChatGPT is scary good. We are not far from dangerously strong AI”.
In a December 2022 opinion piece, economist Paul Krugman wrote that ChatGPT would affect the demand for knowledge workers.
The Verge’s James Vincent saw the viral success of ChatGPT as evidence that artificial intelligence had gone mainstream.
Negative reactions
Journalists have commented on ChatGPT’s tendency to hallucinate (confidently giving false answers that do not appear to be grounded in its training data).
Mike Pearl of Mashable tested ChatGPT with multiple questions. In one example, he asked the model for “the largest country in Central America that isn’t Mexico”. ChatGPT answered Guatemala, when the correct answer is Nicaragua.
When CNBC asked ChatGPT for the lyrics to “The Ballad of Dwight Fry”, ChatGPT supplied invented lyrics rather than the actual lyrics.
Researchers cited by The Verge compared ChatGPT to a “stochastic parrot”, as did Professor Anton Van Den Hengel of the Australian Institute for Machine Learning.
In December 2022, the question-and-answer website Stack Overflow banned the use of ChatGPT for generating answers to questions, citing the factually ambiguous nature of ChatGPT’s responses.
Economist Tyler Cowen expressed concerns regarding its effects on democracy, citing its ability to generate automated comments in an effort to influence the process of deciding on new regulations.
The Guardian questioned whether any content found on the Internet after ChatGPT’s release “can be truly trusted” and called for government regulation.
Ax Sharma of Bleeping Computer noted that ChatGPT was capable of writing malware and phishing emails. Sam Altman, CEO of ChatGPT creator OpenAI, wrote that advancing software could pose “(for example) a huge cybersecurity risk” and went on to predict that “we could get to real AGI in the next decade, so we have to take the risk of that extremely seriously”.
Implications for education
In The Atlantic, Stephen Marche noted that its effect on academia, and especially on application essays, has yet to be understood. California high school teacher and author Daniel Herman wrote that ChatGPT would usher in “The End of High School English”.
In Nature, Chris Stokel-Walker pointed out that teachers should be concerned about students using ChatGPT to outsource their writing, but that education providers will adapt to place greater emphasis on critical thinking and reasoning.
Emma Bowman with NPR wrote of the danger of students plagiarizing through an AI tool that may output biased or nonsensical text with an authoritative tone: “There are still many cases where you ask it a question and it’ll give you a very impressive-sounding answer that’s just dead wrong.”
Joanna Stern with The Wall Street Journal described cheating in an American high school English class by submitting an essay generated by the tool.