The 2024-25 academic year will be the third since the wave of AI tools began with the release of ChatGPT (GPT-3.5) in late 2022. In the last year alone, many new tools and updates have been launched, most of which have implications for education.
In this article I will document how I believe the higher education sector should change in the coming years. This is my Higher Education AI Manifesto1.
Making predictions about the future is inherently difficult, and in this field there are several uncertainties. AI tools may become much more powerful quite quickly, leading to further re-evaluation. On the other hand, new regulatory and legal systems may stymie further development and make some current uses more difficult. Throughout, we can distinguish between things we already do that can now be done better, and things we can now do that could not be done before.
Let us start by considering assessment.
Assessment
Assessment was the first topic to gain a lot of attention in higher education after the release of ChatGPT (GPT-3.5) in late 2022. I will discuss it first because I think it is important, but it is also the least interesting topic when considering the possibilities for AI.
Firstly, it is worth reminding ourselves why we do assessment:
Formative assessment that has the primary aim of helping students understand how well they are learning the material and how well they can use that learning to address tasks.
Summative assessment that has the primary aim of helping students signal their ability, and therefore of helping future employers sort stronger from weaker students. This signalling element is often neglected, but it is a vital part of the value created by higher education.
For formative assessment, AI opens up new possibilities. Specifically, it is now much easier to generate questions and answers, and to use AI to provide feedback. The primary impediment to doing lots of formative assessment has always been time: setting work, marking it, and giving good-quality feedback is very time-intensive, but this is no longer the case. Of course AI will not provide assessments and feedback in exactly the same way that you might, but the quality currently available is good and, in my experience, often better than what actual academics are providing. As with all AI use, some thought about prompts and practice with the systems will get you better-quality results.
Students may hand in work that they have used AI to help produce. If this is how you would ask them to produce the summative work, then this is simply part of the process. If it is not, then they are just robbing themselves of accurate feedback on their own work, and it is their loss.
MANIFESTO POINT 1
Students should have more opportunity for formative work and for feedback on that work. This will be AI-powered; the job of the tutor is to guide students in how to test themselves and how to get the most accurate feedback.
For summative work, there are more difficult issues to consider. The reason assessment was the first issue to be discussed is that academics worried about cheating. With continuing advances in the capabilities of AI models, we are right to worry. A specific selling point of GPT-4 was its ability to do better in various tests than the previous 3.5 model. It is easy to try out the main AI tools with essays you have set in the past and see how they do. You may find that they do not absolutely ace it, but they will probably get a pass, and likely quite a good one. For now, AI tools may perform less well at more technical tasks involving maths, diagrams and code, but this is changing rapidly, so these types of assignments are also becoming achievable with little effort. Broadly speaking, I would argue that almost any assessment you can set for a student to take home and work on over a period of weeks will be done better by someone using AI tools than by someone not using them. The first question this provokes is: does it matter?
Yes, it probably does, but it is not as obvious as it first seems. If all students have access to these tools, and so all could do better by using them, might we still see the best students getting the best marks and the worst students getting the worst marks? Perhaps, though this would still matter for the purposes of making comparisons with previous generations that did not have AI access. It would also matter in the context of the UK degree system, where there is very little differentiation between students because so many get a 2:1 or a first.
The real worry is that very weak students can look like good students by using AI tools. Experiments with assessments suggest this is likely to be the case. It would be hard to fail a standard essay assignment with some fairly basic use of AI.
I will write a full article on how to AI-proof assessment, with application to my own practice, but for this manifesto I will declare:
MANIFESTO POINT 2
Summative assessment should look fundamentally different to how it did in the pre-AI age. Assessments like take-home essays or online exams will become increasingly redundant.
Content
Whilst the big changes in assessment are an interesting problem to reflect on, more interesting for me is to think about the content of our courses. Do AI developments change what we should be teaching? I believe the answer is yes.
What we teach students reflects the body of knowledge in our subjects and, to varying degrees in different subjects, this then connects to what students will be doing in their subsequent careers. Although AI is not yet having a very large effect on job markets in most areas, it feels inevitable that it will, and what we teach at university will have to reflect this. There is less reason to teach students how to do tasks that might soon be fully automated by AI in most practical situations.
Working out what this new world of work might look like will be an ongoing challenge in the coming years. What is it that AI will do best, and what is it that humans will do best? Academics need to find a way to stay up to date with these industry developments if higher education is to remain relevant as jobs in some areas change rapidly.
As Ethan Mollick also argues, some elements of the world of work, as much as some elements of higher education, are tasks that have meaning because they take effort. Now that at least some of these no longer take effort, this sense of meaning, and the signal they create, will change. As academics we have the ability to shape this new reality as well as simply respond to it.
MANIFESTO POINT 3
What we teach should reflect at least in part what students need for jobs. We need to stay up to date with these changes.
Part of this shift in content will also be our responsibility for teaching students how to use AI. This includes using AI for study (how it can help find and summarise literature, provide explanations, and give feedback), but also how AI will be used in industry. Whilst there may currently be few areas with a settled industry standard on AI integration into processes or on specific AI products, there likely soon will be. Effective use of AI tools should become as normal as knowing how to use the library, or email, or the Microsoft Office suite, or whatever bespoke software and equipment your subject area currently uses.
MANIFESTO POINT 4
We are responsible for teaching students how to use AI in the best way, both for study, and for future employability.
Another aspect of this argument is the issue of personalised learning. This has been the source of much reflection by academics, all the more so in recent years as inclusion has been emphasised more. With AI assistance, students more than ever have the ability to shape their own learning, with our guidance. Rather than everyone reading the same textbook chapter, listening to the same lecture, or doing the same practice assessments, each student can engage with these processes in a way that best suits their needs. Take this example of a book by Tyler Cowen. Here the book is designed to be interacted with via an AI chatbot. You do not simply read the text, but ask questions of it. This is a new way of thinking about the reading experience, but it is one that allows students with different educational backgrounds, different interests and different abilities to all get something from a text.
MANIFESTO POINT 5
Education should be deeply personalised; the reading, video-watching and assessment tasks of different students need not, and should not, be the same.
How to Teach
Academia has already been slow to catch up with past technological developments. In a world where good-quality free YouTube videos on many topics are available, how many academics are still giving lectures that cover exactly the same points, but not as well?
AI poses similar, and perhaps more fundamental, questions than previous technological developments did about how we teach. The fundamentals of the human brain and the learning process have not changed (though they remain not fully understood). The fundamental question of what we do with our time, and what we ask students to do with theirs, remains.
Specifically, I believe we should focus our time with students on things that harness the power of the human social environment. Students at university are already expected to do most of their learning individually, using books, videos, and now AI tutors. Beyond teaching students how to do these things well and crafting assessments that focus their efforts, what adds the most value to their learning in the time we have them together as a group?
If it is not true already, it will soon be the case that AI chatbots are viewed as the key source of knowledge and explanations for the ideas we want to teach our students. Our role as academics, even more than it is already, is to curate knowledge and guide students through their learning, rather than to be the source of explanation and knowledge ourselves. Things that cannot be well expressed in words should be our focus; ideas where an experience will help the student learn will be our strength. The days of the academic as the best or only person who can explain an idea are done.
MANIFESTO POINT 6
Teaching should harness the power of human social interaction. Activities and exercises involving groups, teams, real-world interaction and exploration are what truly add value in a world where AI tutors complement other resources as the key source of information.
One hope for AI is that it makes mundane tasks easier. Forms can be filled in, references written, reading summarised, code written, so that we have more time available. One thing we should be doing with that time is giving more personalised help to our students.
I have emphasised so far that AI can do a lot of good in education, but I also believe we are not redundant: students need our guidance on how to use AI well, our clarification of what will be in assessments, and our subject expertise on what they should be learning. Students will still want to see us, and we will still want to see students; if some of our time is freed up, we should be doing more of this.
MANIFESTO POINT 7
Efficiency gains from AI should be used to create time for more office hours, one-to-one meetings with students or smaller group teaching.
Degrees
Ted Gioia, and many others, make the point that AI can fundamentally alter our view of what is real and what isn’t, what to trust and who to trust. One dystopian outcome of an AI future is one where we assume everything can be false, with no possibility of verifying what is real and what isn’t.
One way to think of a degree is that it is a trusted source of verification of the skills and abilities of a student. Having a degree should guarantee a certain level of competence, a certain body of knowledge and of skills.
Even before the mass adoption of AI, this was imperilled by the rapid expansion of higher education and by grade inflation; AI potentially breaks this trust altogether. A challenge we must face is how to maintain this trust in ourselves and our students.
MANIFESTO POINT 8
In the AI age, the education we provide, and the qualification that goes with it, must continue to be a trusted source of value for society. Only by adapting to the new reality can we achieve this.
In future articles I will provide more practical advice on navigating these changes, beginning with how to produce AI-proof assessments as I navigate this process in my own teaching.
Given the nature of the article, I am perhaps being more provocative than normal. Certainly my AI-reviewer GPT was not impressed!