ChatGPT was the first of what are now many generative AI chatbots: tools that produce coherent written responses and computer code from a user's natural language prompts and adapt their responses based on feedback or follow-up prompts. Many are available for free (usually with user registration), but some of the most advanced tools, such as GPT-4, require a paid subscription.
These tools have already been used to produce essays, exam solutions, letters, fiction, blog posts, and many other kinds of writing, at varying levels of quality and accuracy. Their output is usually clearly structured and grammatically correct. They can also generate code, fix coding bugs, tackle certain problem sets, and summarize readings, including material from PDFs.
While there is a lot generative AI can do, it also has varied and changing limitations. These tools periodically generate what have come to be called "hallucinations": made-up information that looks plausible. Some, but not all, of the tools struggle with referencing sources: they may produce citations that look correct but do not refer to an actual article or book, or present information with no way to verify where it comes from or whether it is valid. Similarly, generative AI can produce harmful, biased, misleading, or simply inaccurate content. All of this means that knowing a field well helps you use generative AI effectively.
Additionally, many tools lack knowledge of events after their training cutoff (2021 for ChatGPT; 2023 for Claude), and only some can search the internet.
Alongside these performance concerns, generative AI raises concerns about privacy and intellectual property. Penn's Office of Privacy, Audit and Compliance notes instructors' responsibility to protect students' privacy when using generative AI:
- Do not enter any information that could identify a student, particularly student names or detailed descriptions of student work or engagement with the class.
- Do not enter student work (papers or projects), even anonymized, without the student's permission.
- Do not require that students enter their own work or other personal information into generative AI.
If you want to better understand how these chatbots work, The New York Times series How to Become an Expert on A.I. can help.