ChatGPT is an AI tool that has swept into our collective consciousness very recently and appears to be taking the world by storm, with numerous articles, tweets, and blog posts hailing or lamenting its arrival. Briefly, ChatGPT will take any text input and provide within seconds a fully formed response. This can be anything from a simple direct question (e.g. give me the top three reasons to take up exercise) to more multi-layered requests (e.g. write an essay on the effects of COVID-19 on the mental health of first responders and present it as a dialogue between Hamlet and Darth Vader). The results are amazing if not always spot on.
Where a search engine like Google will respond to a query by pointing you to resources, ChatGPT will make an attempt to give you the solution directly. Many of us are probably used to the basic functionality of chatbots from Siri or Alexa. You can ask what the weather will be tomorrow, but anything beyond that gets you wishing for the good ole days when the robots didn’t talk back. Alexa can barely play a song request without getting the artist wrong three or four times, to the point where I have to speak like a stern schoolteacher: crisp and direct and seething with frustration. It’s easier and way cooler to go put on a record, so we’ve already been pretty blasé about chatbots, even though it’s wild to recognize that we are talking to our machines.
Nonetheless, these things continue to improve, and ChatGPT takes that improvement to another level of Wow. First-time users will have fun experimenting with unique prompts for which ChatGPT unbelievably churns out a response in seconds. But, frankly, this gets boring until one realizes that ChatGPT is more than just a toy and can actually provide a useful service with the right direction. So on to Phase 2.
Commentators have expressed some alarm that students will use ChatGPT to write their assignments for them. This is indeed a possibility, and we would be fools to pretend that students won’t try it. At the same time, there are many potential uses for ChatGPT in educational contexts that will demand creative and critical thinking and that may very well help learners become better writers and thinkers, so we can be rightfully concerned but curious as well. Users, both educators and students, are only beginning to experiment with its potential. If you are following along, it seems that people are offering new ideas and suggestions on a daily basis.
A few frontrunners of opinion have emerged. For instance, some educators want to ban the technology, and this has happened in school districts here and there. On the face of it, this appears to be a Luddite reaction to the sheer brightness of new technology, and there will be plenty of finger-wagging at this approach, but it is understandable insofar as there is no authoritative understanding of the potential and limits of the technology, not to mention the hazards. You know that obnoxious tech phrase “Move fast and break things”? Well, here we are. Still obnoxious. So I don’t blame anyone for wanting to call a timeout when we have barely had time to process how the technology works or how to use it wisely, never mind what the ethics were in building it. However, banning is already off the table. First of all, banning what exactly? ChatGPT isn’t the only game in town. This is like trying to stem the tide with a shovel. Related to this is the idea of taking writing off computers altogether so that it’s neither copiable nor detectable: get students to brush up on their penmanship (sic). I am not sure anyone really thinks this is a good idea, but it has popped up.
Another concerning reaction is the clarion call to join the ed tech surveillance arms race: advocating for ChatGPT detection tools in the manner of Turnitin and SafeAssign, going after cheaters and plagiarists with a build-a-better-mousetrap mentality while quietly employing students, at no cost, as data entry workers who help strengthen the tool itself. This has serious problems, not least the adversarial depiction of the student-teacher relationship and the embedded assumption that everyone is a cheater in the absence of top-down control from the authority in charge. The good news is that this discussion has been around for a long time, and there are cogent responses to the hazards of over-teching your academic environment through detection and surveillance. Let’s just not go there.
So if you can’t outsmart the outsmarters, what’s left? This leads us to the third chorus of voices: the If-You-Can’t-Beat-’Em-Join-’Em crowd. These are the folks who continue to play with the prompts and make interesting discoveries that they feel deserve to be shared with the public. But they also include those educators who are trying to come to grips with the reality of ChatGPT and other AI tools and are looking for ways to work with, not for, the technology. In the resources below, I highlight a few articles and presentations where people have already compiled lists of useful activities and strategies using ChatGPT and other AI tools.
There might also be a growing fourth school of thought out there: those who are still thinking and testing and questioning, not simply accepting the tool with resignation but trying to see where AI has come from and where it has arrived for us at this moment in our history. There has also been some discussion about what the purpose of writing is in higher education. Why are we assigning essays in the first place? It’s perfectly okay to ask these questions. These folks may have fewer exclamation-driven blog posts out there, but their analyses and speculations are the ones I’m trying to keep an eye on for my own ongoing attempts at understanding.
Past educational technologies have come and gone, but few have arrived with such a brash entrance, the piano going quiet as the saloon doors swing in silent anticipation of what’s next. What appears to be missing while ChatGPT heralds a kind of armageddon is that Artificial Intelligence (AI) has merrily been giving us more tools to work with all along. This is another development that appears to have been cast aside for the moment while we freak out about essay-writing bots and the end of Education. Educators may be worried about students popping their research questions into ChatGPT, but students are not the intended market for this tool. You are, dear reader.
AI has been helping writers, designers, educators, and anyone else who operates a computer or wields a device, especially as part of their job. ChatGPT draws up lesson plans and creates test questions with the same level of enthusiasm as it spits out essays. So are you going to use it? This is what Large Language Models do: drawing on enormous sets of training data, they assemble predictable strings of text. Of course, this is going to have an impact on anyone who has writing in their job description. I watched a promo for the tool known as Jasper, which promises to take the pain out of blog posting by writing your posts for you…because blog posts are sooooo painful to write. The promo includes a special reassurance that Jasper “can create original content without plagiarizing anything!” I appreciate the offer of help, but it’s precisely this overblown cheerleading that gives pause: “So you can get the tech to do your writing for you, but don’t worry, it’s totally legit!” The logic is that if you can get something rather than someone to do your work, it’s all fine. Or what about Elicit, a tool to help you refine your research questions, gather resources, and develop your lit review? Is that cheating or is it just better searching functionality? Lots of people already use these tools to clean up their reference lists, fix their grammar and sentence structure, and now compose the kind of tired sentences that might appear in a mission statement, strategic report, or job description, not to mention written assignments, as long as you want your writing to be predictable and as soothing as a really kick-ass Muzak track. The list of tools goes on and on, more than most of us are probably aware of. We’ve come a long way from the Spellchecker.
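If “assembling predictable strings of text” sounds abstract, here is a deliberately tiny sketch of the idea in Python: count which word tends to follow which in some sample text, then keep picking a likely next word. To be clear, this is not how ChatGPT actually works under the hood (real large language models use neural networks trained on billions of documents, not a word-pair tally), and the training text below is invented for the example; it just captures the predict-the-next-word spirit.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: a bigram model that predicts the next word from
# the word before it. Large language models are vastly more sophisticated,
# but the core move -- predict a likely next token -- is similar in spirit.
# The training text is made up for this example.
training_text = (
    "students write essays and students write reports "
    "teachers read essays and teachers write feedback "
    "chatbots write essays and chatbots write feedback"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start_word, length=8, seed=0):
    """Generate text by repeatedly choosing a likely next word."""
    random.seed(seed)
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # no observed continuation; stop here
        # Sample the next word in proportion to how often it was observed.
        choices, weights = zip(*candidates.items())
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

print(generate("students"))
# e.g. "students write essays and teachers write feedback ..."
```

The point of the toy is that the output is always a plausible-sounding recombination of what the model has already seen, which goes some way to explaining both the fluency and the Muzak-like predictability.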
It’s in times like these that it’s important to get some perspective. Neil Postman argued that a new technology is ecological rather than additive. That is, it doesn’t just fit into its environment; it changes it. He also referred to technological change as a Faustian bargain, and so it is worth considering what lies beyond the rush of gratification when a new tool does something that seems miraculous. One way to see this is to recognize that technology can improve things in one area and cause havoc or harm in another, and we may currently have no idea how that kind of result will be distributed. Who is benefiting from this technology? Who is losing out, or worse? This is playing out right before our eyes at the moment, so let’s take heed and be measured about what we’re giving up through the accommodations we make as we integrate AI into our working lives.
For those who need to catch up, my colleagues and I have been sharing stories, posts, and presentations. Many of them appear here or in the lists below. The readings below offer some insight into ChatGPT with a focus on its application in educational settings. I’ve added brief summaries after each entry to make your scanning even more efficient. But keep in mind, many of these sources will be out of date in about a month.
Bates, T. (2023). Playing with ChatGPT: now I’m scared (a little). Online Learning and Distance Education Resources. Contact North. https://www.tonybates.ca/2023/01/02/playing-with-chatgpt-now-im-scared/
A typical first impression of ChatGPT. There are a million of these types of articles out there, especially in the blogosphere. Bates is a thoughtful guy, so it’s worth reading his reactions. ChatGPT will impress, but it sticks to generalities and ideas that you can anticipate with a little common sense and experience.
Contact North (2023). AI in Higher Education Resource Hub. https://teachonline.ca/ai-resources
Consider this a starting point if you want to know more about AI in higher education. It has a growing collection of useful articles, including tips for how to work with ChatGPT in educational contexts. This is a good example of how the resources are growing and becoming more informative as we collectively figure things out.
Marcus, G. (2022). AI platforms like ChatGPT are easy to use but potentially dangerous. Scientific American. https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/
Very impressive, but often inaccurate or off base. A potential powder keg of misinformation if not used carefully.
Wiley, D. (2023). AI, Instructional Design and OER. Improving learning. https://opencontent.org/blog/archives/7129
Wiley looks at the AI issue from the perspective of the (instructional) designer and sees light at the end of the tunnel. As with a lot of powerful new technologies, he wants to see how something like ChatGPT can influence our practices, especially for those whose work overlaps with ChatGPT’s promise. My colleague, Steven O’Hearn, takes a similar position, celebrating the wonder and then anticipating the potential applications, in this demonstration-of-concept infographic. Both Wiley and O’Hearn are imagining the practice once we’ve got past the initial shock of the tools’ entry, but with a distinctly optimistic bent.
Alby, C. (2023). ChatGPT: Understanding the new landscape and short-term solutions. Shared Google Doc. https://docs.google.com/document/d/1ERCgdylG2LyOeL93aWrK6Jf97N_m1qaueN9W4kzO0Rk/edit?usp=sharing
This new technology gives us an opportunity to reflect on what we’re doing with assignments and how we might want to rethink some of our approaches. Like many thoughtful educators who have tackled this topic, Alby is more concerned with how students learn generally and wants to explore how we navigate this change rather than seeking to block it.
Contact North. (Jan. 2023). When ChatGPT makes it easier to cheat. Teach Online. Contact North. https://teachonline.ca/tools-trends/when-chatgpt-makes-it-easier-cheat
A bit of a rehash of the suggestions made when educators were seeking to avoid using plagiarism detection software. Not to say it’s not good advice. It is, but we’ve been here before, so educators should take heart.
Greene, P. (2022). No, ChatGPT is not the end of high school English. Instead, here are the useful tools it offers teachers. Forbes Magazine. https://www.forbes.com/sites/petergreene/2022/12/11/no-chatgpt-is-not-the-end-of-high-school-english-but-heres-the-useful-tool-it-offers-teachers/?sh=5330de941437
Maybe the 5-paragraph essay is dead. Perhaps it should be. ChatGPT is not good for highly localized or personalized information. This could be a key to its utility. It also makes stuff up when it doesn’t have an answer…just like the bluff meisters of old.
Marche, S. (2022). The college essay is dead. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/
Is the title an example of the glass half full or half empty? ChatGPT works amazingly well. This may be a good time to rethink our assessments anyway. It’s also a good moment to pause and consider why it’s not great when humanities and the sciences go their separate ways.
Metzler, K. and ChatGPT (2022). How ChatGPT could transform higher education. Social Science Space. https://www.socialsciencespace.com/2022/12/how-chatgpt-could-transform-higher-education/
Well-informed and helpful article about potential uses of ChatGPT and how this is only the latest development in the use of AI for writing tasks. Note the author attribution!
Mollick, E. (2022). How to use AI to teach some of the hardest skills. One Useful Thing. Substack. https://oneusefulthing.substack.com/p/how-to-use-ai-to-teach-some-of-the
Takes an alternative perspective by looking into how to employ ChatGPT in productive ways in an educational setting. There is potential to encourage creative thinking among students, as well as to challenge them to add more in-depth explanations to AI-derived responses. Encouraging. The author references his recent research on new modes of learning using AI: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4300783.
Prochaska, E. (Jan. 2023). Embrace the bot: Designing writing assignments in the face of AI. Faculty Focus. https://www.facultyfocus.com/articles/course-design-ideas/embrace-the-bot-designing-writing-assignments-in-the-face-of-ai/?st=FFdaily%3Bsc%3DFF230123%3Butm_term%3DFF230123&mailingID=4402
It’s here. Get on with it. Instructors use technology to help with their work, and so do students. The author finishes with some practical strategies for writing assignments.
Roose, K. (2023). Don’t ban ChatGPT in schools. Teach with it. The New York Times. https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html?searchResultPosition=2
We may worry about the cheating potential afforded by ChatGPT, but what about its learning potential? Let’s try to be more informed and reflective so that we can teach better.
Trust, T. (2023). ChatGPT and Education. https://docs.google.com/presentation/d/1Vo9w4ftPx-rizdWyaYoB-pQ3DzK1n325OgDgXsnt0X0/preview?slide=id.g1c66bf93982_1_5
An excellent primer on ChatGPT, with very good suggestions about its use in educational settings.
Veletsianos, G. (Jan. 2023). AI use in class policy. George Veletsianos, Phd. https://www.veletsianos.com/2023/01/12/ai-use-in-class-policy/?utm_source=rss&utm_medium=rss&utm_campaign=ai-use-in-class-policy
A UPenn professor, Ryan Baker, shares his policy on the use of AI in his class. This covers not only ChatGPT but lots of other examples, such as DALL-E and Stability AI. He takes a commonsense approach and briefly highlights the cautions students need to keep in mind.
Clarke Gray, B. and Lamb, B. (2022). We are not good thought leaders. The You Got This! podcast. Season 3, Episode 9. https://yougotthis.trubox.ca/podcast/season-3-episode-9-we-are-not-good-thought-leaders-ft-brian-lamb/
Algorithmic processing at first seems amazing and limitless, but as you use the tool more and more, you begin to notice how it narrows, rather than expands, your thinking. As educators, aren’t we supposed to be heading in the other direction?
Roose, K. and Newton, C. (2022). Can ChatGPT make this podcast? Hard Fork podcast. https://www.nytimes.com/2022/12/09/podcasts/can-chatgpt-make-this-podcast.html?action=click&module=audio-series-bar&region=header&pgtype=Article
Short answer to the title’s question: not really. However, is text-based AI more exciting than its image-based counterparts? Oh yes!
Roose, K. and Newton, C. (2023). A teacher who loves ChatGPT and is M3GAN real? Hard Fork podcast. https://www.nytimes.com/2023/01/13/podcasts/hard-fork-chatgpt-teachers-gen-z-cameras-m3gan.html?action=click&module=audio-series-bar&region=header&pgtype=Article
Some teachers are quickly learning to adapt ChatGPT for their classes, to help students improve their writing and critical thinking skills.
Rushkoff, D. (2023). How to read papers written by AI. Team Human Podcast. https://www.teamhuman.fm/episodes/235-jesamyn-west
Rushkoff describes how he detected an AI-written paper in one of his classes. He then goes on to reflect on the nature of assessment, evaluation, motivation, and engagement in higher education, and why those are the real issues for educators, not the technology.
Stachowiak, B. and Alby, C. (Jan. 2023). How artificial intelligence is impacting higher education. Teaching in Higher Ed Podcast. https://teachinginhighered.com/podcast/how-artificial-intelligence-is-impacting-higher-education/
Great interview with one of the authors of Learning that Matters. Maybe we should read more of Alby’s book and stop goofing around with ChatGPT prompts!
Dave Smulders is the Program Manager for Faculty Development at CTLI. He just wants to help.