ChatGPT: friend or foe?
Socrates once said, “The secret of change is to focus all of your energy not on fighting the old, but on building the new.”
Perhaps, on some level, he was urging us to accept the fact that, in the sacred field of education, changes will inevitably arise. And that maybe, the key isn’t to fight the changes, but to embrace and then build on them. Was he talking about AI bots? Well, we’ll never really know. But maybe!
ChatGPT has been blowing up the internet over the last couple of months, much like the ALS ice bucket challenge did in 2014 (remember that?).
What is ChatGPT?
A product of the company OpenAI, ChatGPT is an artificial intelligence chatbot that can respond to prompts, instructions, and questions (and do a lot more). By all appearances, ChatGPT can carry out some pretty impressive tasks, including debugging code, writing essays, taking debate stances, and solving problem sets.
Officially, ChatGPT is a “large language model,” which means that it can instantly generate readable text in different styles and for different purposes. It uses OpenAI’s GPT-3 technology, which stands for Generative Pre-trained Transformer 3.
The chatbot is user-friendly and (currently) free, making it pretty appealing to students and people in various fields.
General uses of ChatGPT
To put it simply, ChatGPT can do a lot. Some of its functions include:
- Writing short-form content such as poems
- Writing long-form content such as research papers
- Explaining topics
- Brainstorming ideas
- Expressing personalized communication such as email responses
- Summarizing content
- Translating language
- Writing marketing content
But of course, as with any new product, there are clear limitations to ChatGPT. And if you’re planning to use the software, you should definitely consider these (which are only a few):
- It has limited knowledge of events that occurred in the past year.
- It can misinterpret your question.
- It can output incorrect information.
How does ChatGPT work?
ChatGPT generates human-like text when given a question, prompt, or instruction.
To do this, the software is built on a neural-network design called the "transformer" architecture; essentially, ChatGPT learned patterns from a ton of data, made up of billions of words, and draws on those patterns to produce text.
In order to train ChatGPT, the OpenAI team used text databases from the internet; specifically, they fed 300 billion words into the system. This included information from books, articles, Wikipedia, and various pieces of writing on the internet.
ChatGPT works on probability: it guesses what the next word in a sentence should be. To reach this stage, it went through a supervised training phase, in which the team fed the software inputs, for instance, "what color are tomatoes?"
If the bot provided incorrect answers, the team inputted the correct answer into the system—in turn, teaching ChatGPT the correct answers and building its knowledge.
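The next-word guessing described above can be sketched with a toy example. Everything here is hypothetical for illustration: a real language model assigns probabilities over its entire vocabulary and computes them with a neural network rather than a hard-coded table.

```python
# Toy next-word predictor. The context and probabilities are made up for
# illustration; a real model learns these values from training data.
next_word_probs = {
    ("tomatoes", "are"): {"red": 0.6, "ripe": 0.25, "blue": 0.15},
}

def predict_next_word(context):
    """Return the most probable next word for a context (greedy decoding)."""
    probs = next_word_probs[context]
    return max(probs, key=probs.get)

print(predict_next_word(("tomatoes", "are")))  # -> red
```

Picking the single highest-probability word, as above, is called greedy decoding; real chatbots usually sample from the distribution instead, which is why the same prompt can produce different answers.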
In the second training stage, the team offered the bot multiple answers while a team member ranked the answers from best to worst, training the software on comparisons.
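This comparison-based second stage can likewise be illustrated with a toy sketch. The ranked answers below are hypothetical; the point is that a single human ranking yields several preference pairs that a model can then be trained on.

```python
# A human ranks candidate answers from best to worst (hypothetical examples).
human_ranking = [
    "Tomatoes are typically red when ripe.",  # best
    "Tomatoes are red.",
    "Tomatoes are blue.",                     # worst
]

def pairwise_preferences(ranking):
    """Expand a best-to-worst ranking into (preferred, rejected) pairs."""
    return [(ranking[i], ranking[j])
            for i in range(len(ranking))
            for j in range(i + 1, len(ranking))]

for preferred, rejected in pairwise_preferences(human_ranking):
    print(f"prefer: {preferred!r} over {rejected!r}")
```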
What are the main concerns about ChatGPT in education?
As humans, it seems natural to feel somewhat threatened if we believe that a bot may be able to carry out human functions better than we can.
And with ChatGPT, of course, educators have concerns about the new software’s potential negative impacts in the classroom. For instance, many teachers fear that students will never need to learn to write if they can simply rely on ChatGPT.
And some worry that the new software will lead to the end of the college essay.
Plus, in general, teachers fear that inviting ChatGPT into the classroom will erode the trust between teachers and students. With the accessibility of ChatGPT, it's not surprising that teachers are worried about students taking advantage of the software.
Meanwhile, others fear that the reliance on ChatGPT will make it difficult, if not impossible, for students to develop critical-thinking and problem-solving skills.
What is the role of ChatGPT in the classroom?
Of course, with the arrival of this incredibly intelligent software, educators and teachers have voiced their fears.
But let’s look at all the ways in which schools can positively utilize ChatGPT. After all, we can probably say that, regardless of whether or not ChatGPT sticks, artificial intelligence is here to stay.
Feedback and editing tool
ChatGPT can provide specific feedback about grammar, sentence structure, and vocabulary choice, and it can also scan text for consistency. Considering the high volume of work that teachers do outside of the classroom, the software could be quite handy here, freeing up time for the teachers to put their energy elsewhere.
Quiz generator
Based on classroom material and discussions, teachers can utilize the software to create quizzes to check on students' knowledge. This could be particularly useful during a high-stress time such as finals week, when students need to review a lot of material.
Writing/discussion prompt generator
ChatGPT can also help teachers generate prompts, which can be very useful in a writing or English class, where open discussions about concepts, language, and texts are common.
Reading comprehension tool
In addition, ChatGPT can help students improve reading comprehension skills. Teachers can instruct ChatGPT to generate a prompt to which students can provide answers. This offers teachers a valuable way to assess students’ comprehension and ultimately determine where they may need extra help.
Vocabulary builder
By generating sentences that use words unfamiliar to students, ChatGPT can help students widen their vocabulary.
ChatGPT can also generate sentences using a particular word; teachers can then ask students to guess the word's meaning from the context of the sentence.
Translation tool
Significantly, ChatGPT can serve as a translator. This may be useful to a student whose native language isn't English. They can enter their thoughts into the software in their native language, and ChatGPT can translate them into English. In turn, this empowers students who aren't native English speakers to better connect with their classmates.
Products that detect ChatGPT-produced content are emerging
As educators strive to consider how this new software may change the classroom experience, they may be happy to see positive developments related to minimizing unethical use of ChatGPT.
For instance, OpenAI recently launched a tool called "AI Text Classifier," which labels pasted-in text as "very unlikely," "unlikely," "unclear if it is," "possibly," or "likely" to be AI-generated.
Additionally, a Princeton student, Edward Tian, created an app, GPTZero, that aims to determine whether essays are AI-generated. Unfortunately, the website crashed due to high traffic (a testament to how much ChatGPT is blowing up), but the app is available on Tian's Streamlit page.
Tian shared videos comparing the app's analysis of a New Yorker article with that of a ChatGPT-produced letter; the app correctly identified the AI-generated text.
The problem isn’t “what”, it’s “how”
Over the years, researchers have studied the impacts of new technologies, and they usually come to the same general conclusion: how we use a technological product matters more than the product itself.
For instance, ever since social media's rise, there's been talk about its negative impacts. However, researchers have illustrated time and time again that social media isn't inherently negative; rather, it can cause negative effects when we use it in certain ways.
For instance, when we use social media to compare ourselves to airbrushed, facetuned models, yes, we may feel depressed.
On the flip side, when we use social media for positive outlets, such as connecting with like-minded people, we may actually reap benefits.
Artificial intelligence isn’t going anywhere—it’ll likely just keep growing. So it’s in our best interests to determine how we can leverage these products to optimize the learning experience.
Software like ChatGPT won’t replace critical thinking
In school, one of the most critical skills that students can learn is critical thinking. Critical thinking empowers students to develop their own opinions, effectively solve problems, and make important decisions.
In almost every profession, critical thinking is important. While a computer or bot can recite facts, spit out data, and draw conclusions, humans have the incredible ability to process information while weighing all the other factors involved, such as circumstances and emotions.
ChatGPT, according to anecdotal evidence, cannot execute critical thinking like we can.
ChatGPT, similar to social media and really, any product, is not innately bad. Since it doesn’t have a conscience, it obviously isn’t able to have morals or true motives.
Remember: it’s not what it is, but how we use it.
Teachers will have to change with the changing times
What we must keep in mind is that homework aids are certainly not new. For years, students have been utilizing sites such as Chegg and Quizlet as homework helpers. And as technology continues to develop, we can confidently assume that students will continue to take advantage of it.
As noted above, ChatGPT can’t replace critical thinking. So instead of trying to restrict students from using the software (which will probably be an impractical effort), educators should start considering how they may modify their assignments to tap more into critical thinking skills.
After all, if assignments aren’t building skills, how effective are they in the first place?
The point of school isn’t to craft students into robots who can memorize and recite facts. Rather, schools build students up to be able to think for themselves.
AI tools will continue to arise
If history can teach us anything, it's that technology will keep advancing. And if you try to resist these changes, well, you're pretty much putting yourself at a disadvantage.
That is, if you refuse to welcome these new technologies and understand how they can be a positive force, you’re failing to see what could be.
As someone who spends a lot of time writing, it often seems necessary to remind myself why I write. One reason I always come back to is that writing provides the opportunity to share my unique voice.
And having that canvas to share oneself is truly a privilege for people who don’t best express themselves verbally.
So of course, reading about how this powerful AI product could diminish the art of writing leaves me with some apprehension. But when it comes down to it, I don't really believe that ChatGPT's arrival translates to the end of writing.
It seems, as humans, we have the innate desire to relate to one another. And, significantly, we have the need to express ourselves. We are complex beings, and a complex bot won't take that away from us.
And while the new, shockingly intelligent software seems like a huge deal now, remember this: we’re prone to believing that something will have significantly more of a negative impact than it actually ends up having.
When faced with major changes, we often freak out initially; research has found that our experience of uncertainty and/or change is extremely similar to our experience of failure.
But if you look at history, what usually happens is we adapt. We grow with change, and show more resilience than we ever thought possible—namely, we show our collective ability to exhibit a growth mindset.
Just around 40 years ago, the internet became publicly available. There are literally countless ways in which the internet provides opportunity for wrongdoings—and while these do admittedly still occur, we have evolved, and have continuously put in place strategies to mitigate potential consequences.
As Charles Darwin once said, “It is not the strongest of the species that survive, nor the most intelligent, but the one most responsive to change.”