Week 13 (2017-07-24)


Developing Good Multiple Choice Tests


This week’s tip focuses on multiple-choice questions (MCQs).  Multiple-choice questions are commonly used for quizzes and often on final exams, especially when the number of students taking the exam is large.  Multiple-choice tests have several attributes that make them attractive: they are easy to administer and grade, questions can be obtained from the publisher or from Internet test banks, students are comfortable taking MCQ exams since they are the standard format for high-stakes placement exams, and students prefer MCQ exams since they believe they are easier, amenable to a memory-based study approach, and allow the opportunity to guess. In contrast, constructed-response exams (short answer or essay), which require students to construct answers, are easier to write but more difficult to grade.  One limitation of MCQ tests is that they often fail to measure deeper understanding and assess only the lowest levels of Bloom’s taxonomy, e.g., surface understanding.  There is a significant body of literature on how to design and write good MCQs; one need only Google “multiple-choice questions” to access a number of guides.  An easy-to-read 2016 publication by Xiaomeng Xu provides helpful background information and advice (downloadable from https://www.researchgate.net/profile/Xiaomeng_Xu).  I refer you to her publication for more detailed information and help.  Here are tips to think about and try when developing MCQ tests.

Use three-choice questions.  Questions with four or five choices are no more effective in assessing student understanding than a three-choice format.  Three-choice questions are easier to write and allow for more questions per exam or more time per question.  Be sure each of the choices has the same structure and length.   Avoid questions that use negatives either in the question stem or in the choices; also avoid choices such as “all of the above”, “none of the above”, or combinations such as “A and B” or “A and B but not C”.  Clearly state that students are to select the most correct answer; this reduces student complaints and allows you to include partially correct answers among the choices, which increases discrimination of understanding.

Check the questions to see which levels of Bloom’s taxonomy are assessed (see https://get.quickkeyapp.com/multiple-choice-blooms-taxonomy/ ).  Writing good multiple-choice questions that assess higher levels of Bloom’s taxonomy is possible but challenging, and well worth the effort.

Check the alignment of the questions with the learning outcomes of your course.  Ask yourself which learning outcomes are assessed and at what level.  Are the MCQs consistent with what was covered in the course, rather than focused on a subset of lectures or concepts?

Do an item analysis to see which questions the majority of the students got correct, which only a few students got correct, and those in which one of the incorrect answers was the predominant selection.  There are a number of software applications that will automate this process once the data have been collected for each question, and many automatic-grading systems (e.g., Scantron software) routinely provide these data. I generally throw out questions where more than 50% of the students select an incorrect choice.
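If your grading software does not provide an item analysis, it is straightforward to compute one yourself.  Below is a minimal Python sketch, assuming student responses and the answer key are available as simple dictionaries (the data layout, function name, and sample data are all hypothetical); it reports the percent correct and the choice counts for each question, and flags questions where more than half the class selected an incorrect option.

```python
from collections import Counter

def item_analysis(responses, answer_key):
    """Summarize each question: percent correct and how often each choice was picked.

    responses  -- list of dicts, one per student, mapping question id -> chosen letter
    answer_key -- dict mapping question id -> correct letter
    (Both structures are hypothetical; adapt to whatever your grading software exports.)
    """
    report = {}
    for qid, correct in answer_key.items():
        picks = Counter(s.get(qid) for s in responses)
        n = sum(picks.values())
        pct_correct = 100 * picks[correct] / n if n else 0
        # Flag questions where more than half the class chose an incorrect option.
        report[qid] = {"percent_correct": round(pct_correct, 1),
                       "choice_counts": dict(picks),
                       "review": pct_correct < 50}
    return report

# Example with made-up data: question Q2 would be flagged for review.
key = {"Q1": "A", "Q2": "B"}
students = [{"Q1": "A", "Q2": "C"},
            {"Q1": "A", "Q2": "C"},
            {"Q1": "B", "Q2": "B"}]
for qid, stats in item_analysis(students, key).items():
    print(qid, stats)
```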

Since it is easier for students to cheat on MCQ tests, reduce the temptation to cheat by having students sign an honor statement at the top of the exam.  Having students sign a pledge has been shown to reduce cheating.  Consider using multiple versions of the same test in which the order of the questions or of the correct responses (A, B, C) is altered.   Two approaches I’ve used in large introductory courses (> 300 students) are to print the versions on different colored paper, or to use the same color paper and add a hidden code, such as a double period (..) at the end of the first question, to mark the alternate version. The two versions are alternated when the tests are handed out.   Students are informed that there are two or more versions of the test and that they cannot distinguish the versions because the code is hidden.
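For producing the alternate versions, a small script can shuffle the question order and the choice order reproducibly.  The sketch below is one possible approach in Python, assuming a hypothetical question-bank format; a fixed seed per version lets you regenerate the matching answer key later.

```python
import random

def make_version(questions, seed):
    """Return one exam version with question order and choice order shuffled.

    `questions` is a hypothetical list of dicts like
    {"stem": "...", "choices": {"A": "...", "B": "...", "C": "..."}, "answer": "A"}.
    A fixed seed makes each version reproducible, so its answer key can be rebuilt.
    """
    rng = random.Random(seed)
    version = []
    for q in rng.sample(questions, k=len(questions)):   # shuffle question order
        letters = list(q["choices"])
        texts = [q["choices"][letter] for letter in letters]
        rng.shuffle(texts)                               # shuffle the choice texts
        new_choices = dict(zip(letters, texts))
        # Find which letter now holds the originally correct text.
        correct_text = q["choices"][q["answer"]]
        new_answer = next(letter for letter, text in new_choices.items()
                          if text == correct_text)
        version.append({"stem": q["stem"], "choices": new_choices, "answer": new_answer})
    return version

# Two versions (e.g., printed on different colored paper) from the same question bank.
bank = [{"stem": "Example stem 1", "choices": {"A": "x", "B": "y", "C": "z"}, "answer": "B"},
        {"stem": "Example stem 2", "choices": {"A": "p", "B": "q", "C": "r"}, "answer": "A"}]
version_1 = make_version(bank, seed=1)
version_2 = make_version(bank, seed=2)
```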

Finally, but importantly, have a colleague or TA proof the exam and, if possible, take it, marking those questions they feel are unclear, tricky, or trivial, and recording the time it took them to complete it.  Generally, if they can complete the exam in one third of the allocated time, then the length is probably appropriate.