Written by a human being – Integrating AI technologies in teaching, learning and assessment, Part 2

By Katrine Wong

Responsible use of AI in learning and teaching

Colleagues at UM have recently been advised that our students can ‘use ChatGPT or other generative-AI systems to enhance their learning’ and that ‘students should be aware that they must be authors of their own work’ (email ‘Notes on the use of generative-AI systems’, 11 April 2023).

Granted, generative AI tools can benefit education in ways such as providing real-time feedback, facilitating independent and personalised learning, generating summaries and translations of documents, and even creating interactive e-learning materials such as flashcards, crosswords and videos. Nevertheless, colleagues are concerned about the impact upon principles of academic integrity. While the initial panic is well understood, instead of fearing that large language models (LLMs) like ChatGPT will kill students’ abilities to write essays and work out answers to concept-checking questions, we can embrace the opportunities afforded by this technological advancement and rethink assessment design.

It is high time that we prioritise training our students to develop reasoning and problem-solving skills. Assessment tasks that involve writing summaries and completing online quizzes may no longer yield an effective, indicative measurement of student achievement. For instance:

  1. We can design tasks that focus on cognitive processes such as analytical, critical and synthetic thinking. Students can demonstrate to us whether or not they have achieved the respective intended learning outcomes through activities such as debates, concept diagrams and mind maps.
  2. We can, if time allows, supplement essay writing with Q&A sessions to evaluate students’ actual learning.
  3. We can, when feasible, adjust the distribution of assessment types in our courses and rely less on open-book, take-home tasks.

In addition, part of our job as university teachers is to educate our students to become responsible global citizens who will live and work with AI technologies in a reliable, honest and respectable manner. This is no different from living and working as a responsible citizen in any other sphere. While AI ethics is an increasingly important topic (Laupichler et al., 2022; Ng et al., 2021; Shih et al., 2021), this blog post does not set out to discuss how we can teach it.

Instead, I would like to suggest a few questions that we can encourage our students to consider actively when it comes to generative AI technology:

  • Why do I, as a student, need to use AI technology to answer this question? Can I use the knowledge and skills that I already have to answer questions on an assignment?
  • How reliable is the information gathered and generated? Is it correct? Have I checked the AI-generated text against reliable sources of information?
  • Is the information adequate to inform the scope of my discussion? What biases might be present in responses generated by AI technology?
  • What value does this AI-generated text/answer bring to my learning?
  • How can I ethically work with this AI-generated text/answer?

These and similar questions will help students begin to become responsible users of generative AI tools. At the same time, here are a few things that students should understand:

  • That they are responsible for their own submitted assignments;
  • That they are responsible for any inaccuracies or omissions in AI-generated materials;
  • That AI-generated materials may contain fabricated content (including fake references); and
  • That they are advised to acknowledge the use of AI in written assignments and to cite the sources appropriately.

In my next blog post, I will write more about assessment design in the age of AI.


References:

Laupichler, M. C., Aster, A., Schirch, J., & Raupach, T. (2022). Artificial intelligence literacy in higher and adult education: A scoping literature review. Computers and Education: Artificial Intelligence, 3. https://doi.org/10.1016/j.caeai.2022.100101

Ng, D. T. K., Leung, J. C. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2. https://doi.org/10.1016/j.caeai.2021.100041

Shih, P. K., Lin, C. H., Wu, L. Y., & Yu, C. C. (2021). Learning ethics in AI: Teaching non-engineering undergraduates through situated learning. Sustainability, 13(7). https://doi.org/10.3390/su13073718