Reimagining learning in the age of AI

Many academics worry AI might hinder learning and enable cheating. I sympathise with these concerns, but if I were still in academia, my inclination wouldn’t be to try to build walls against AI.

Instead, I’d be looking for ways to leverage AI in a way that enhances learning.

The reality is that AI is becoming increasingly prevalent in the world students are entering. Trying to ignore AI in the classroom feels a bit like preparing students for a world that no longer exists.

My focus would therefore be on figuring out how to adapt teaching and grading in a way that fosters understanding and critical thinking with AI as a readily available resource.

Below are some ideas to this end.

Traditional assignments are still valuable

Core educational practices remain vital. Assignments needing manual work, such as essays, problem sets, and research papers, aren’t just old methods. They build skills AI can’t fully replicate.

Actually working through an equation, writing an essay, or building a presentation develops critical thinking and logic. Manual assignments are also critical for developing skills such as reviewing the literature, selecting evidence, structuring arguments, and writing clearly.

Therefore, in my view, these kinds of assignments remain incredibly valuable.

The challenge with traditional assignments is, of course, ensuring students do not circumvent them with AI. Common measures include emphasising process drafts and revisions, requiring in-class work, designing assignments needing personal context, and including in-person presentations as part of the curriculum. It might also be worth revisiting the lecture/seminar ratio. Seminars encourage in-person discussion that is hard to circumvent with AI.

None of these measures are novel. It shouldn’t be a problem to arrange for them. If a university struggles to, for instance, arrange a few in-person examinations, the issue likely relates to management and resources, not AI.

This is not to say, however, that all assignments should be traditional in-person non-AI tasks. The logistical hurdles of implementing in-class work, presentations, or seminars for every single assessment can be prohibitive.

This is where academics should also realise the need to involve AI.

Rethinking learning for a world of AI

Flexibility is needed: one cannot adapt without changing. Traditional experiences are still valuable, but there is also a need to acknowledge that everyone needs to adapt a little.

Two areas where adaptation is critical are teaching and assessment.

Assessments in a world of AI

My general feeling about assessments is that the goal should NOT always be to avoid students using AI. As noted above, traditional assessments and experiences remain valuable. But not all teaching and grading should be traditional. Some assessments (in my personal view, most) should take place under the assumption that students will use AI.

The goal should therefore be to ensure the use of AI does not circumvent the learning experience, as opposed to trying (and failing) to avoid AI usage.

A common suggestion, indeed one so common that AI itself offers it when prompted on this topic, is to move past assessments that primarily test memory recall and instead emphasise analysis, evaluation, synthesis, and creative thinking. Examples include complex case studies that require students to apply concepts, open problem-based learning scenarios that necessitate innovative solutions, and analytical research projects that focus on critical evaluation and the development of original arguments.

All these assessments can be done with AI, no doubt. As noted, my goal wouldn’t be to prevent students from using AI. My point is that these kinds of assessments make it harder for students to generate done-and-ready solutions. If a student wants to use AI, they will need to use it well.

Going beyond, another seemingly helpful strategy is to do more than simply allow students to use AI by actually encouraging engagement with AI. For instance, in a research-intensive field, this might involve students evaluating the outputs of AI-generated research. Similarly, in a writing-intensive discipline, students could be tasked with critically revising a piece of text initially drafted by an AI, focusing on elements like argumentation, style, and clarity, and justifying their editorial choices.

Whether students use AI or not for these assignments is irrelevant. The idea is to encourage analytical and critical engagement with the technology in a professional context. In other words, the idea is to help them be proactive in how they relate to AI, rather than passive consumers of AI output.

Finally, let’s address a necessary debate: AI detection.

Some academics advocate for the use of AI detection software to identify AI-generated content. The potential of such tools might warrant exploration.

That said, I favour an entirely different viewpoint, namely, that the focus should unequivocally be on the quality of the outcome.[^1]

I despise mediocrity. While encouraging AI, I am not encouraging academics to lower their standards. Only the use of AI that ultimately satisfies strict quality standards should be deemed acceptable. Accordingly, severe penalties should be applied to submitted content that allows hallucinations to go undetected, fabricates sources, or otherwise fails to meet the expected levels of academic accuracy and integrity.

Teaching in a world of AI

Teaching also needs a rethink, in at least two ways.

AI-specific modules. There is a need for modules that specifically address AI and its appropriate uses in a professional context.

A key skill that needs to be incorporated into these modules is critical evaluation of AI output. Just as we teach students to scrutinise sources in traditional research, students must also learn to question the information generated by AI. This includes understanding the potential for bias, inaccuracies (hallucinations, as discussed earlier), and the importance of verifying AI-generated content with reliable sources.

Furthermore, teaching in the age of AI also necessitates a focus on developing metacognitive skills. Students need to understand their own learning processes and how AI can either support or hinder them. This involves encouraging self-reflection on when and how AI might be a useful tool, and when more traditional methods of learning are more appropriate. It’s about fostering a sense of agency and informed decision-making in their learning journey.

Among other things.

I would love to design a comprehensive syllabus on this topic, but since I no longer get paid for it, I do not have the time. The examples above will need to suffice.

Exemplary AI practices across modules. Second, all educators also need to lead by example, irrespective of what they teach.

What I’m saying, and I’ll be blunt, is that if you are amongst those who support the introduction of AI modules as long as they do not touch your own modules, you are in the wrong.

When it comes to AI, every responsible adult needs to be an example of how it can be used appropriately. Otherwise, students will get their AI practices from other figures: online influencers, marketers, and those desperate to sell them AI at any cost.

So, were I still an academic, I would be open with students about how I advocate for and use AI myself, regardless of what I teach.

Conclusion

Use AI, but use it well. That’s the best approach to the challenges AI poses to academia.

P.S. Read the footnote!


Footnotes

  1. Did I use AI to write this blog? Of course I did! I even intentionally left some giveaway words and idioms to mess with the minds of those who reject good arguments simply because they were made with AI. Thinking the origin of an argument matters more than the argument itself is such immature behaviour!