The 2023-24 academic year is coming to a close, with just a bit of marking left to complete before a summer of research, writing, preparation for next year, and time off. In this post I want to reflect on some of the new things I have tried this year, and on the new trends I have seen in my students. Some of this builds on what I have been writing about in the last few months, some is AI based, and some are new observations.
AI rather than search
When working on research activities in my classes, I observe that virtually all my students begin their research with AI rather than with a Google search. This has positives and negatives. On the positive side, search will often surface results from low-quality sources that many students would take too seriously, with modern trends in sponsored results and search engine optimisation making this more of a problem. With reasonably good prompting, an LLM-generated answer will be fairly comprehensive and give a quick overview of many important points.
On the downside, the AI may hallucinate or misinterpret what is needed, especially with naive prompting on less capable models, though in reality I have rarely found this to be the case and I think using AI for these purposes stands up pretty well against more conventional search in my experience.
What is needed is a synthesis of the two: AI can give the overview, specific facts can be verified by search, and both can be used in tandem for more depth. Learning to do this well with the emerging tools available is a crucial skill for modern students and for anyone doing research.
OpenAI on top
Speaking of AI use, almost all students are using ChatGPT as their go-to resource. I recently put some effort into pushing students to use other models more, in particular as Claude emerged as possibly the best free-to-use LLM. I ran a class activity where different groups compared and analysed explanations of key economic concepts, with each group assigned a different LLM, and we had a lecture activity role-playing a historical figure using Claude. However, with the release of the new GPT-4o model and its roll-out to users of the free version, OpenAI are back in the driving seat, and other models are likely to take a backseat for the large majority of students.
Attention
In my most recent course I made more of a point than usual of having students put their phones away during workshop sessions, unless they were specifically needed for a task. I think this is helpful and I will continue to do it; however, it had limited success in achieving high levels of concentration, because every student has a laptop in class.
In one way this is a good thing: I can easily run activities that involve research, use AI, work on shared documents, or include electronic quizzes, knowing that access to technology is not a problem. However, there is then the constant possibility of distraction, and getting students to be fully focused on a task can become difficult. What will I do differently next year?
Review activities to make sure they are sufficiently interesting and of the right difficulty (challenging without being so difficult they cannot be done).
Sequence periods of the class where no laptops (or phones) are needed, and make sure all electronics are away during those periods. Make these periods long enough that we are not constantly asking students to switch between having laptops/phones out and having everything away.
I enforce this in class, but my eyes can't be everywhere and I like to talk to students properly whilst they are doing activities, so the first element, having interesting and suitably challenging activities, is going to be really important.
Mundane Utility with AI
Two things I now do routinely with AI that give marginal improvements to my teaching resources:
Every lecture I give is recorded (this is university policy and has been for a while). I don't think many students really want to listen to a 50-minute recording (I will have more to say on the use of videos soon), so what I now do is download a text transcript captured from the recording and pass it into an LLM to produce a summary, which I then share with students. I do the same for other videos that I provide to students as purely online lectures.
I could always have done this, but use of AI makes it much easier. I should note that I always check and tweak these summaries before publishing, never just copy and paste.
Using the same method, when passing the text of the recording through the LLM, I also ask it to generate possible multiple-choice questions about the lecture/video, which I then insert into my recordings to make them more interactive. Our recording software has quite a lot of functionality around this; at the moment I have it set so that you cannot progress in the video until you have answered the questions correctly.
In both cases I do not have strong evidence about the effectiveness of this, which is a drawback I will try to rectify. Going purely on how often students view the AI-drafted summaries, this does at least seem to be something they appreciate.
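For anyone wanting to try the transcript-to-summary-and-quiz workflow described above, a rough sketch is below. Everything here is an illustrative assumption rather than my exact setup: the prompt wording, the `lecture_week3.txt` file name, and the model choice are all invented, and the commented-out API call follows the OpenAI Python SDK.

```python
# Sketch: build the two prompts (summary + multiple-choice questions)
# for one lecture transcript. Prompt wording is an illustrative assumption.

SUMMARY_PROMPT = (
    "Summarise this lecture transcript for students in 5-8 bullet points, "
    "keeping key terms and definitions:\n\n{transcript}"
)
MCQ_PROMPT = (
    "Write 3 multiple-choice questions (4 options each, with the answer "
    "marked) testing understanding of this lecture transcript:\n\n{transcript}"
)

def build_prompts(transcript: str) -> dict[str, str]:
    """Return the prompts to send to the LLM for one transcript."""
    return {
        "summary": SUMMARY_PROMPT.format(transcript=transcript),
        "mcq": MCQ_PROMPT.format(transcript=transcript),
    }

# Usage with the OpenAI Python SDK (needs an API key, so commented out):
# from pathlib import Path
# from openai import OpenAI
# client = OpenAI()
# transcript = Path("lecture_week3.txt").read_text()
# for name, prompt in build_prompts(transcript).items():
#     resp = client.chat.completions.create(
#         model="gpt-4o",
#         messages=[{"role": "user", "content": prompt}],
#     )
#     print(name, resp.choices[0].message.content)
```

Whatever the exact prompts, the key point is to keep a human in the loop: the summary and questions are drafts to check and tweak, not finished materials.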
Engaging Lectures
I have written extensively about making lectures interactive, and especially giving them a social element, for them to have value over and above what they create by adding some structure to a student’s weekly timetable.
In my most recent course I have had both successes and failures in my efforts. I used plenty of quizzes and electronic voting, which I always find works well. I also ran an interesting AI activity where the students generated a historical character to talk to, and I used the ‘think-pair-share’ method to get some discussion going in the lecture as well.
I had only a small group (around 20 students total, so normally only a dozen in the lecture) so just some question and answering worked OK as well, as several of the students were willing to engage. The small group limited me with some activities however. I wanted to run ‘pass the pointer’ a couple of times, but the small number in the room made it feel not worthwhile, so I reverted to a more standard Q&A to see what needed clarification. This goes to show that some things will work better in some circumstances, and it pays to be flexible depending on how the session is going.
I did stick to my promise to myself that every lecture would have at least something engaging, and that it would never just be me talking for the full session.
AI in workshops
I have mentioned an AI activity in lectures; what did I try in my workshops?
The main assessment for the module was a group presentation. We did a workshop session on using AI for planning the presentation, talking through some prompting techniques. I also gave them my materials on using AI for presentation feedback and discussed how to do this.
As part of the aforementioned efforts to get students to try out some other LLMs, we did an activity using Claude to do a textual analysis of Francis Fukuyama’s ‘End of History’ essay, comparing it to our own analysis (Claude does pretty well!).
One activity aimed to foster critical thinking about AI output. Having given the students an explanation of comparative advantage and done some activities, we generated explanations from the AI. We then analysed these for accuracy, exploring prompting techniques to see if we could get better explanations, and compared different models. I hoped this activity would show that AI can be useful in explaining things, but that the output needs to be considered critically and treated as a complement to other sources of knowledge.
On the same topic of comparative advantage, we played a game which I designed with the help of AI, where students play the role of countries that have to make decisions about allocating resources and then how to trade. I found this very valuable for reinforcing the concept whilst having fun in the classroom, though the negotiation element brought out the bad side of some students!
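For readers outside economics, here is a tiny worked example of the comparative advantage idea behind that game. The numbers are my own invention for illustration, not taken from the classroom activity.

```python
# Hours of labour needed to produce one unit of each good.
# These figures are invented purely to illustrate the calculation.
hours = {
    "Home":    {"cloth": 2, "wine": 4},
    "Foreign": {"cloth": 6, "wine": 5},
}

def opportunity_cost(country: str, good: str, other: str) -> float:
    """Units of `other` given up to produce one unit of `good`."""
    return hours[country][good] / hours[country][other]

# Home gives up 0.5 wine per unit of cloth; Foreign gives up 1.2.
# So Home has the comparative advantage in cloth, Foreign in wine,
# and both can gain by specialising and trading.
for country in hours:
    cost = opportunity_cost(country, "cloth", "wine")
    print(f"{country}: 1 cloth costs {cost} wine")
```

The instructive part is that Foreign needs more hours than Home for both goods, yet its opportunity cost of wine is lower (5/6 of a cloth versus 2 cloths for Home), so it still pays for Foreign to specialise in wine: it is the asymmetry in opportunity costs, not absolute productivity, that makes trade mutually beneficial.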
Attendance
My final comment on this most recent course is that I enjoyed some of the best attendance rates I have ever had. Hopefully the sessions were interesting and reasonably well taught, which will have helped, but the main element was the group assessment. Students worked in their groups during class time, and many activities were directly related to completing the assessment. I therefore told students that non-attendance was equivalent to not contributing to their group, and that if it persisted they would be removed from the group and have to complete an individual piece of work, which would effectively leave them with more to do.
This ‘stick’ approach had to be enforced: a couple of students who rarely turned up did end up being told to do separate work. For the rest, I sent warning emails after every missed session, and then followed up in person at the next session I saw them. At least partly due to this approach, I had very good attendance, a lively classroom atmosphere, and generally good engagement with the module.
I have written before about attendance, and the possible downside that this sort of compelled attendance can backfire if you force students to come in who do not want to be there. I felt this to some degree, and this relates to the previous point on making sure I have their attention throughout, but overall the impact of having plenty of people in attendance was positive, especially with a piece of group work for them to focus on completing.
Conclusion and practical steps
Overall I was happy with how the course went, though as I discussed in my last post, I do worry about the degree to which students will retain what we learnt. If I ask them about the ideas again in 6 months' time, how much will they remember? What about in 2 years' time? Next year I will be thinking about sequencing repetition of central ideas to try to get better retention, along with continued adaptation to emerging AI technology. Some practical steps:
Are your students using AI or more traditional search? In both cases are they taking steps to get the best out of it? If you are worried they are not, either you or someone else teaching on the programme needs to guide them.
Consider producing lecture summaries and using some AI help to make recordings more interactive.
There are lots of interesting ideas about using AI in workshops (some of which I will write about soon). If you aren’t doing this yet, consider some of the things I have mentioned and think how these could complement what you are doing already.
Does your assessment strategy encourage attendance? If not, can you change it? Offer both a carrot (well-designed, engaging sessions) and a stick (lost marks or other things that make life difficult). Only do this if you really think you need people in the room; there are cases where this may not be so important.