Using AI To Implement Effective Teaching
Abstract: This paper provides guidance for using AI to quickly and easily implement evidence-
based teaching strategies that instructors can integrate into their teaching. We discuss five
teaching strategies that have proven value but are hard to implement in practice due to time and
effort constraints. We show how AI can help instructors create material that supports these
strategies and improve student learning. The strategies include providing multiple examples and
explanations; uncovering and addressing student misconceptions; frequent low-stakes testing;
assessing student learning; and distributed practice. The paper provides guidelines for how AI
can support each strategy, and discusses both the promises and perils of this approach,
arguing that AI may act as a “force multiplier” for instructors if implemented cautiously and
thoughtfully in service of evidence-based teaching practices.
Students need many examples when learning complicated concepts (Kirschner et
al., 2022). When confronted with new and complex ideas, adding many and varied
examples helps students better understand them. If students are presented with only
one example, they may focus on the superficial details of that example and not get at
the deeper concept. Multiple examples of a single concept can help students
decontextualize the idea from the example, leading to better recall and understanding.
Giving students examples when teaching new ideas provides a number of benefits:
examples enhance understanding by providing a real-world context in which to
ground an abstract concept; examples help students remember concepts by serving as
anchors in the form of an analogy or story, grounding the concept in engaging details
that illustrate a general principle (Atkinson et al., 2000); examples also help students
think critically by prompting analysis and evaluation across different examples; and
examples can help surface the complexity of a concept by highlighting its different
facets in varied contexts.
While students may gravitate towards the superficial aspects of an example, such as
the narrative details, it is crucial to strike the right balance between complexity and
simplicity. Overly complex examples can lead to confusion, while oversimplified ones
may fail to convey the full scope of a concept. Consequently, educators must carefully
craft examples that are both accessible and informative, while taking into
consideration the diverse needs of their students.
Producing many examples of one concept is a time-consuming task, and one that can
be outsourced to the AI, which can generate numerous examples in very little time.
Here is how:
Prompt for GPT-4/Bing: You can use the following link to get Bing to generate
examples: https://sl.bing.net/bePdl4o9xf2
It passes the following prompt to Bing: I would like you to act as an example generator for students.
When confronted with new and complex concepts, adding many and varied examples helps students
better understand them.
Once evaluated, the AI's output can be deployed in the classroom in a number of
ways. For instance, instructors can weave examples into a lecture and post them as
additional notes or material for a lesson in their Learning Management System.
Instructors can also use the examples to give students additional practice; they can
provide a number of examples to students and ask students to explicitly name the
core conceptual principle as an in-class or outside-of-class exercise: These examples have
one thing in common: what do they demonstrate? Similarly, instructors can ask students to
evaluate how each example highlights different aspects of a concept: Compare and
contrast these examples: what different aspects of [concept X] does each highlight? If students have
a knowledge base about the topic, instructors can also use any wrong or subtly wrong
output as an advanced exercise: Which of these examples demonstrate concept X? Which do
not? Explain your reasoning.
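For instructors comfortable with a little scripting, this workflow can also be automated so that examples for many concepts can be requested in one pass. The Python sketch below assembles an example-generator prompt in the spirit of the one above and sends it to a chat model via the OpenAI SDK. The prompt wording, the `build_example_prompt` helper, and the model name are illustrative assumptions, not the exact prompt linked above; as the paper stresses, every generated example must be vetted by the instructor before reaching students.

```python
# Minimal sketch: programmatically requesting varied examples of a concept.
# The prompt wording and model name are assumptions; adapt to your provider.

def build_example_prompt(concept: str, audience: str, n_examples: int = 5) -> str:
    """Assemble an example-generator prompt for a given concept and audience."""
    return (
        "I would like you to act as an example generator for students. "
        f"Provide {n_examples} varied, concrete examples of the concept "
        f"'{concept}' suitable for {audience}. Each example should use a "
        "different real-world context so students can separate the deep "
        "principle from surface details."
    )

def request_examples(concept: str, audience: str) -> str:
    """Send the prompt to a chat model (requires OPENAI_API_KEY to be set)."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative; use whichever model you have access to
        messages=[{"role": "user",
                   "content": build_example_prompt(concept, audience)}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(build_example_prompt("opportunity cost", "first-year economics students"))
```

The same loop can then feed the vetted output into the classroom exercises described above, such as asking students to name the common principle across the generated examples.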
The AI can also help instructors craft explanations: it can generate multiple
explanations from a variety of perspectives, produce explanations that use a
step-by-step approach, and add detail to existing explanations. If students are confused by a concept,
the AI can produce a simpler summary of the concept that may help students grasp
the topic. Similarly, instructors can give the AI their current explanation of a topic and
ask it to simplify it, add more examples, or explain it using a step-by-step approach.
Note that all AI-generated explanations are a starting point and must be vetted by the
instructor before they reach students.
Here is how:
It passes the following prompt to Bing: You generate clear, accurate examples of concepts for
students. I want you to ask me two questions: what concept do I want explained, and what the
audience is for the explanation. Then look up the concept and examples of the concept. Provide a
clear, multiple-paragraph explanation of the concept using specific examples, and give me five analogies
I can use to understand the concept in different ways.
Once you have my answers you will look up the topic and construct several multiple choice questions to
quiz the audience on that topic. The questions should be highly relevant and go beyond just facts.
Multiple choice questions should include plausible, competitive alternate responses and should not
include an "all of the above" option. At the end of the quiz, you will provide an answer key and
explain the right answer.
Several classroom assessment techniques can help instructors and students monitor
their learning and understanding of the course material. These are important because
they can provide immediate feedback to both instructors and students about what
students know and, crucially, what students are confused by. Known as the one-minute
paper or the muddiest-point exercise, these assessments encourage active learning and
reflection by asking students to summarize and interrogate their knowledge and
identify areas of confusion (Angelo & Cross, 1993). Any gaps can be addressed in
future classes. These exercises also increase students' engagement and motivation by
showing students that instructors are responsive to their needs and that their
questions and opinions matter (Wolvoord, 2010).
Instructors can decide what they want to focus on to design this assessment. For
instance, instructors might focus on a specific activity, topic, or class discussion. Then,
they can write a question for students to answer that will uncover what students
understand and what confuses them. For instance, the question might be: What was the most important
idea or concept covered in class today? Why do you think this idea is important? What is the most
difficult class concept so far? What did you struggle to understand? What concept or problem would
you like to see explored in more detail? (Angelo & Cross, 2012).
Prompt for ChatGPT/GPT-4 (Note: Bing’s 2,000-character limit makes it unsuitable
for this strategy)
To have the AI help quickly summarize student responses, instructors can create a
Google Doc or any shared document and ask students to submit their responses.
Then, instructors can submit a set of collective responses to the AI with the following
prompt:
I am a teacher who wants to understand what students found most important about my class and
what they are confused by. Review these responses and identify common themes and patterns in student
responses. Summarize responses and list the 3 key points students found most important about the
class and 3 areas of confusion: [Insert material here]
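When a class produces dozens of responses, pasting them in by hand gets tedious; the batch can be assembled with a short script. The sketch below, a minimal Python example, numbers the collected responses and splices them into the summary prompt above. Reading from a plain text file with one response per line is an assumption for illustration; responses could equally come from an LMS export or a shared Google Doc.

```python
# Minimal sketch: collecting anonymous student responses into the summary
# prompt quoted above. The file format (one response per line) is an assumption.

SUMMARY_PROMPT = (
    "I am a teacher who wants to understand what students found most important "
    "about my class and what they are confused by. Review these responses and "
    "identify common themes and patterns in student responses. Summarize "
    "responses and list the 3 key points students found most important about "
    "the class and 3 areas of confusion:\n\n{responses}"
)

def build_summary_prompt(responses: list[str]) -> str:
    """Drop blank lines, number each response, and splice the batch into the template."""
    cleaned = [r.strip() for r in responses if r.strip()]
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(cleaned, 1))
    return SUMMARY_PROMPT.format(responses=numbered)

if __name__ == "__main__":
    with open("responses.txt", encoding="utf-8") as f:
        print(build_summary_prompt(f.readlines()))
```

The assembled prompt can then be pasted into ChatGPT/GPT-4, or sent through an API in the same way as the example-generator prompt.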
While this type of exercise pushes students to name key ideas and identify points of
confusion explicitly, students may worry about revealing their struggles. Instructors
may consider assigning this question via an anonymous survey or an anonymous
discussion board in a Learning Management System; the latter has the added benefit
of giving students a sense that they are not alone in their concerns. One note: when
using a discussion board, set the board to reveal responses only after students have
posted their own; otherwise, students' initial posts may be influenced by their
classmates.
One way to include this exercise dynamically in a class is to assign the question not at
the end of class but mid-class. Have students post their responses, upload those
responses to the AI, and show students the AI-generated output. Then discuss those
results: What did the AI point out? What patterns did it highlight? What common areas of
confusion do students hold? This can lead to a discussion of the learning outcome for the
class and to students answering each other's questions and clearing up that confusion
through a facilitated conversation.
Conclusion
Using AI to implement five evidence-based teaching strategies – using multiple
examples, varied explanations, diagnosing and addressing misconceptions, distributed
practice, and low-stakes testing – can help instructors develop more effective lessons
and enhance student learning. These strategies require extensive work on the part of
instructors to implement effectively, work that AI and specifically Large Language
Models can now help with to varying degrees. With careful vetting and oversight, AI
can generate explanations, examples, practice problems, and diagnostic questions to
support instructors, helping them spend less time on developing materials and more
time focusing on students. AI can also respond to student questions and grade
student work, though these uses demand the same careful human oversight.
References
Adesope, O. O., Trevisan, D. A., & Sundararajan, N. (2017). Rethinking the use of tests: A meta-
analysis of practice testing. Review of Educational Research, 87(3), 659-701.
Angelo, T. A. & Cross, K. P. (1993). Classroom assessment techniques (2nd ed.). San Francisco, CA:
Jossey-Bass.
Angelo, T. A., & Cross, K. P. (2012). Classroom assessment techniques. Jossey Bass Wiley.
Atkinson, R. K., Derry, S. J., Renkl, A., & Wortham, D. (2000). Learning from examples:
Instructional principles from the worked examples research. Review of Educational Research, 70(2), 181-
214.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn (Vol. 11). Washington, DC:
National Academy Press.
Bransford, J., Derry, S., Berliner, D., Hammerness, K., & Beckett, K. L. (2005). Theories of learning
and their roles in teaching. Preparing teachers for a changing world: What teachers should learn and be able to
do, 40, 87.
Chi, M. T. (2006). Two approaches to the study of experts' characteristics. The Cambridge handbook of
expertise and expert performance, 21-30.
Dehaene, S. (2021). How we learn: Why brains learn better than any machine... for now. Penguin.
Ebersbach, M., & Nazari, K. B. (2020). Implementing distributed practice in statistics courses:
Benefits for retention and transfer. Journal of Applied Research in Memory and Cognition, 9(4), 532-541.
Ericsson, K. A., & Lehmann, A. C. (1996). Expert and exceptional performance: Evidence of
maximal adaptation to task constraints. Annual Review of Psychology, 47(1), 273-305.
Felten, E., Raj, M., & Seamans, R. (2023). How will Language Modelers like ChatGPT Affect
Occupations and Industries?. arXiv preprint arXiv:2303.01157.
Fiorella, L., & Mayer, R. E. (2016). Eight ways to promote generative learning. Educational Psychology
Review, 28, 717-741.
Fries, L., Son, J. Y., Givvin, K. B., & Stigler, J. W. (2021). Practicing connections: A framework to
guide instructional design for developing understanding in complex domains. Educational Psychology
Review, 33(2), 739-762.
Karpicke, J. D., & Roediger III, H. L. (2008). The critical importance of retrieval for
learning. Science, 319(5865), 966-968.
Kirschner, P. A., Hendrick, C., & Heal, J. (2022). How Teaching Happens: Seminal Works in Teaching and
Teacher Effectiveness and What They Mean in Practice. Routledge.
Korinek, A. (2023). Language Models and Cognitive Automation for Economic Research (No. w30957).
National Bureau of Economic Research.
Little, J. L., Bjork, E. L., Bjork, R. A., & Angello, G. (2012). Multiple-choice tests exonerated, at least
of some charges: Fostering test-induced learning and avoiding test-induced forgetting. Psychological
Science, 23(11), 1337-1344.
Murre, J. M., & Dros, J. (2015). Replication and analysis of Ebbinghaus' forgetting curve. PLoS
ONE, 10(7), e0120644.
Perkins, D. N., & Salomon, G. (1992). Transfer of learning. International encyclopedia of education, 2,
6452-6457.
Pomerance, L., Greenberg, J., & Walsh, K. (2016). Learning about Learning: What Every New
Teacher Needs to Know. National Council on Teacher Quality.
Rohrer, D., & Pashler, H. (2007). Increasing retention without increasing study time. Current
Directions in Psychological Science, 16(4), 183-186.
Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory
depth. Cognitive Science, 26(5), 521-562.
Schwartz, D. L., & Goldstone, R. (2015). Learning as coordination: Cognitive psychology and
education. In Handbook of educational psychology (pp. 75-89). Routledge.
Terwiesch, C. (2023). Would Chat GPT Get a Wharton MBA? A Prediction Based on Its
Performance in the Operations Management Course. Mack Institute for Innovation Management.
Retrieved from https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Would-
ChatGPT-get-a-Wharton-MBA.pdf
Wiliam, D. (2015). Designing Great Hinge Questions. Educational Leadership, 73(1), 40-44.
Willingham, D. T. (2003). Ask the Cognitive Scientist: Students Remember... What They Think
About. American Educator, Summer 2003, 16, 77-81.
Willingham, D. T. (2017). A mental model of the learner: Teaching the basic science of educational
psychology to future teachers. Mind, Brain, and Education, 11(4), 166-175.