How are faculty using generative AI in their classrooms? 

By Elizabeth Bernasko, Publishing Intern at Sage

Over the past few months, I have been investigating how faculty are using generative AI in their classrooms. After conducting interviews that yielded some interesting insights into AI’s potential as a multidisciplinary in-classroom assistant, I carried out some desk research and found a wealth of examples, mostly based in the US. Here’s an overview of how AI is being incorporated.  

Faculty in the social sciences integrated AI into research tasks to teach students its interpretive limitations.

For instance, Brian Macdonald, Senior Lecturer and Research Scientist in Statistics & Data Science at Yale, asked his students to use ChatGPT to produce an answer to “What is Poisson Regression?” and, based on the response, identify and correct inaccuracies. In the assignment brief, Macdonald asked his students to assess ChatGPT’s responses and note what wasn’t ‘quite accurate’. This instruction pushes students to use AI thoughtfully: the model generates a first draft, but verifying and refining it demands human reflection and precision. 
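To ground that kind of fact-check, a student could go beyond prose and test a claim about Poisson regression empirically. The sketch below is my own illustration, not part of Macdonald’s brief: it fits a Poisson model with a log link by Newton-Raphson on simulated count data and checks that the known coefficients are recovered.

```python
import numpy as np

def fit_poisson(X, y, n_iter=25):
    """Fit a Poisson regression (log link) by Newton-Raphson.
    X must include an intercept column; y holds non-negative counts."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # expected counts under current fit
        grad = X.T @ (y - mu)            # score vector
        hess = X.T @ (X * mu[:, None])   # Fisher information
        beta += np.linalg.solve(hess, grad)
    return beta

# Simulate counts with known coefficients (0.5 intercept, 0.8 slope),
# then check that the fit recovers them.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
X = np.column_stack([np.ones_like(x), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x))
beta = fit_poisson(X, y)
```

A student could then compare these recovered estimates against whatever ChatGPT claims about how Poisson coefficients should be interpreted.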

With a similar focus on identifying the imperfections of AI, faculty in the social sciences also used AI to highlight and dig deeper into problems within their research projects.

For instance, Justin Farrell, Professor of Sociology at the Yale School of the Environment, assigned his students to use ChatGPT to ‘sharpen ideas’ and ‘dig deeper’ into the problem statements they posed within their research projects. The primary assignment instruction was “answer your problem statement and research question with an essay from ChatGPT”. Students posed a question relevant to their problem statement and research question to ChatGPT, then annotated the resulting essay, focusing on the ways the AI-produced write-up might be inaccurate, misleading, incomplete, and/or unethical. The students also considered how ChatGPT helped them refine their research project or see their problem in a new way. Interestingly, the assignment’s instructions highlight the advantage of using AI as a tool, as a laser even, to quickly identify steps that optimize our path to desired outcomes. 

Faculty in Political Studies used AI not only to explore research questions but to investigate how context affects the scope and persuasiveness of AI-generated writing.

For instance, Alexander Cooley, the Claire Tow Professor of Political Science at Barnard College, created an assignment that uses ChatGPT to explore how ‘wealthy/politically influential individuals with controversial pasts (corruption scandals or political controversies) actively manage their global public profiles’. Students were to produce a positive profile and a critical profile. Then, students were invited to critically assess the relative strengths of the two profiles, without using ChatGPT. Cooley wanted to demonstrate that the positive profile would be the more convincing of the two, because the publicly available information ChatGPT draws on tends to be more sanitized and plentiful than the controversial information, which is often deleted or suppressed. He also wanted to provoke evaluation of which writing voice ChatGPT handles more effectively.  

Faculty in the Humanities department used ChatGPT to illustrate that creativity in writing cannot be replicated by AI. 

Ryan Wepler, Director of the Graduate Writing Lab at the Poorvu Center for Teaching and Learning, gave students an assignment to prompt ChatGPT to write an essay on the same premise as one of their previously written essays, with the eyebrow-raising request to “make the essay funny”. Students then compared their essay with ‘the robot essay’ and wrote a reflection on how ChatGPT’s take on their premise differed and what felt distinctive about their own comedic writing voice in comparison to ChatGPT’s.  

The inclusion of humour in this assignment prompted a class discussion on the implications of AI for the authoring industry and for the practice of writing more broadly. Professors often point out that generative AI lacks what I label ‘creative criticality’: the capacity for agile, contrasting, experience-based analysis. Overall, Wepler found the students’ reflections on seeing an AI write their own paper "deep and compelling." 

A research methods professor, meanwhile, integrated AI directly into the process of working with data. 

Wendy Castillo, Econometrics and Research Methods professor at the Princeton School of Public and International Affairs, experimented with LLMs in teaching a data course. After testing an assignment with ChatGPT’s new Code Interpreter, a tool that lets users upload data (in any format) and use conversational language to execute code, Castillo noted inaccuracies in what ChatGPT produced and concluded that a human touch is still required to earn a passing grade.  
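Castillo’s exact workflow isn’t documented here, but the habit her finding encourages, re-deriving any figure the model reports before trusting it, is easy to sketch. The toy dataset and the ‘AI-reported’ mean below are invented purely for illustration:

```python
import statistics

# Toy dataset a student might have uploaded to Code Interpreter.
incomes = [32_000, 45_500, 51_200, 38_750, 60_300, 47_900]

# Suppose the chatbot reported this mean income (hypothetical figure).
reported_mean = 46_500

# Recompute the statistic by hand before trusting the AI's number.
actual_mean = statistics.mean(incomes)
discrepancy = abs(actual_mean - reported_mean)
```

If `discrepancy` is non-trivial, the AI-generated analysis fails the check, which is exactly the kind of human verification step Castillo found was still necessary.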

Faculty in the History department used ChatGPT innovatively to encourage the skill of fact-checking and corroborating information across multiple sources.  

Benjamin Breen, History Professor at UC Santa Cruz, asked students to use ChatGPT to simulate historical events. Knowing that the simulations sometimes contain inaccuracies or ‘hallucinations’, students were tasked with mapping information from different sources and working with counterfactuals to establish the facts of an event, a key skill budding historians must develop. Breen also believes the continued use of LLMs will elevate the importance of the humanities: because these models rely on textual data, including historical primary sources spanning hundreds of languages, they align directly with the skills emphasized in university humanities courses, such as analysing texts at an abstract level and making critical comparisons. Looking ahead, Breen plans to incorporate ChatGPT into an upcoming world history class; he will task students with prompting the AI to act as an advanced history simulator, reconstructing and describing historical settings. This exercise promises to challenge students’ ability to distinguish fact from fiction while leveraging AI to deepen their understanding of the past. 

Further examples of how faculty are integrating AI into their classrooms:

Note: Blog image was generated by copilot.microsoft.com.



About the author

Elizabeth Bernasko is an accomplished AuDHD academic coach and tutor, supporting both secondary and higher education students in achieving their academic goals. With a passion for the written word, they also excel as a creative non-fiction editor and writer, crafting engaging narratives. Currently, Elizabeth serves as an Editorial Intern for the New Product Development team, honing their editorial skills and contributing to cutting-edge projects. 
