Pointers to conceptual understanding
- Publication Date: Jan 01, 2017
- Source: Queensland University of Technology ePrints Archive
- License: Unknown
Abstract
Concept inventories are tests used to elicit student misunderstandings and misconceptions. Traditionally they take the form of multiple-choice questions (MCQs), comprising the correct option and a set of distractors (Libarkin, 2008). The multiple-choice format allows for faster marking and feedback; however, it reveals neither conceptual misunderstandings nor whether a student has simply guessed the correct answer. By adding a space for students to provide a textual justification (Goncher, Jayalath, & Boles, 2016), their answers can be checked to ensure that the underlying concepts are correctly understood. Automated textual analysis can uncover insights from these justifications and speed up grading, giving feedback to students and informing educators. As part of that process, we address the following questions:

1. What pointers can be identified that indicate a student's conceptual understanding?
2. What conclusions can we draw from these identified pointers to conceptual understanding?

Over the past four years, two concept inventories have been deployed, both with multiple-choice questions and a free-text field for students to give reasoning and explanation. We combine several machine learning techniques to analyse the textual response data (a sketch of one such pipeline follows this abstract):

- Word2vec, which models words as vectors for easier computation (Mikolov, Chen, Corrado, & Dean, 2013)
- LDA (Latent Dirichlet Allocation), which groups responses by topic and area (Blei et al., 2003)
- SVMs (support vector machines), which classify responses and group similar ones

Four pointers were identified that help to automatically determine whether conceptual understanding is present. The first three can be determined with certainty; the fourth, "validity of the response", is traditionally determined by a human marker. Compared against an expert-marked dataset, the algorithm for this pointer achieved 75% accuracy. Using the four identified pointers, we can detect whether a student has correctly grasped the concept being tested in a particular question. The pointers allow some leniency if one of them is not achieved, and they also let us draw conclusions about where the issues in a student's understanding lie. This presents several opportunities, such as individualised feedback for students and whole-class feedback for educators.
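
To make the pipeline concrete, the following is a minimal sketch, not the authors' implementation, of how the three techniques named above can be combined: averaged word2vec embeddings feed an SVM that labels a free-text justification as a valid or invalid explanation (the fourth pointer), and LDA groups the responses by topic. The toy dataset, tokenisation, and hyperparameters are illustrative placeholders; the sketch assumes the gensim and scikit-learn libraries.

```python
# Assumed pipeline sketch: word2vec + SVM for response validity, LDA for topics.
import numpy as np
from gensim import corpora
from gensim.models import LdaModel, Word2Vec
from sklearn.svm import SVC

# Hypothetical expert-marked dataset: tokenised student justifications with a
# validity label (1 = the concept is correctly explained, 0 = it is not).
responses = [
    ["current", "splits", "between", "the", "parallel", "resistors"],
    ["voltage", "is", "the", "same", "across", "parallel", "branches"],
    ["adding", "resistance", "makes", "the", "bulb", "brighter"],
    ["more", "series", "resistors", "means", "less", "current"],
]
labels = np.array([1, 1, 0, 1])

# Word2vec: model each word as a dense vector (Mikolov et al., 2013).
w2v = Word2Vec(sentences=responses, vector_size=50, min_count=1, seed=1)

def embed(tokens, model):
    """Average the word vectors to get one fixed-length vector per response."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

X = np.array([embed(r, w2v) for r in responses])

# SVM: classify response vectors against the expert validity labels.
clf = SVC(kernel="rbf").fit(X, labels)

# Score a new, unseen justification (a toy check, not a real evaluation).
new = embed(["parallel", "branches", "share", "the", "same", "voltage"], w2v)
print("predicted validity:", clf.predict([new])[0])

# LDA: group the free-text responses into topics (Blei et al., 2003).
dictionary = corpora.Dictionary(responses)
bow = [dictionary.doc2bow(r) for r in responses]
lda = LdaModel(bow, num_topics=2, id2word=dictionary, random_state=1)
print(lda.print_topics())
```

In the study itself, the validity classifier was compared against an expert-marked dataset and reached 75% accuracy; the toy data above only illustrates the shape of such a pipeline, not the reported result.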