Many educators design multiple-choice question examinations. How do we know that these tests are valid and reliable? How can we improve a test by modifying, revising, or deleting items based on student responses?
In a paper in the highly regarded Journal of Engineering Education, Jorion et al. (2016) developed "an analytical framework for evaluating the validity of concept inventory claims". We believe this framework can also help educators design their own multiple-choice tests, especially when those tests serve as the final mastery examination in a course. Open-source software for analyzing a multiple-choice examination would encourage educators with minimal programming experience to apply these methods and would invite contributions from those who wish to extend the program.
This R package provides useful interfaces and functions to assist with the analysis of data from a typical multiple-choice test. The user needs only to provide an answer key that optionally identifies the concept or topic of each question and a table of the responses given by each student to the questions in the test. MCTestAnalysis provides a Shiny web app interface and an automatic report-generation tool featuring concepts from Classical Test Theory (CTT), Item Response Theory (IRT) and structural analysis. We regard this package as a work in progress and encourage contributions. At this time, CTT results include item difficulty, item discrimination, Cronbach's alpha, and alpha-with-item-deleted. Item response functions include model estimation and item characteristic curves. Tetrachoric and scree plots are included with introductory exploratory factor analysis.
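To make the Classical Test Theory quantities concrete, the following base-R sketch computes item difficulty, a corrected item-total discrimination index, and Cronbach's alpha for a small simulated scored response matrix. It is only an illustration of the statistics the package reports, not the package's own implementation, and the simulated data and variable names are assumptions made for the example.

```r
# Illustrative sketch (not the package's internal code): basic CTT statistics
# from a scored response matrix. Rows are students, columns are items;
# 1 = correct, 0 = incorrect.
set.seed(42)
n_students <- 50
n_items <- 10
responses <- matrix(rbinom(n_students * n_items, 1, 0.7),
                    nrow = n_students,
                    dimnames = list(NULL, paste0("Q", seq_len(n_items))))

# Item difficulty: proportion of students answering each item correctly
difficulty <- colMeans(responses)

# Item discrimination: correlation between each item and the total score
# computed from the remaining items (corrected item-total correlation)
total_scores <- rowSums(responses)
discrimination <- sapply(seq_len(n_items), function(i) {
  cor(responses[, i], total_scores - responses[, i])
})

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total)
k <- n_items
item_var <- apply(responses, 2, var)
alpha <- (k / (k - 1)) * (1 - sum(item_var) / var(total_scores))

round(data.frame(difficulty, discrimination), 2)
round(alpha, 2)
```

In practice the package computes these quantities from the user's answer key and response table and presents them in the Shiny interface and generated report, alongside the IRT and exploratory factor analysis results described above.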
Links to try the tool online are provided.