David Foster, President & CEO, Caveon Test Security
Excellence Edition
Dr. David Foster
How SmartItems Improve Tests
Use the latest scientific research conducted by Caveon to evaluate all of your options and make the best decisions for your testing program.
Reliability

In the beginning, one of my biggest concerns about SmartItems was how they would contribute (or not) to the reliability of an exam. As you may have learned by now, each SmartItem can be presented in tens of thousands, even hundreds of thousands, of ways. No two test takers see the same thing, and this effect is compounded by the number of SmartItems on each test. Logically and psychometrically, I thought it would all work out, but I was looking forward to seeing the first reliability calculations from an information technology certification test, one of the first to use SmartItems. The test had 62 SmartItems, and the reliability statistic was based on the responses of 70 individuals to those items. The reliability was calculated at .83, a decent and acceptable result. Whew!

Item Fitness

For this IT certification exam, a field test was conducted and item analyses were run to evaluate the performance of the SmartItems. For each SmartItem, we calculated the p-value (the proportion of examinees answering the item correctly) to see which questions were too easy or too difficult. We also used the point-biserial correlation as a measure of the SmartItem's ability to discriminate between more competent and less competent candidates. These are statistics based on classical test theory (CTT) and are routinely used to evaluate items in most of the high-stakes tests in use today.

My first peek at these data was gratifying. The analysis looked… well… normal. I’ve seen hundreds of these analyses over my career, and this set looked like the others: some items performed better than others, with most performing in acceptable ranges. A small number performed poorly, which is to be expected. Of the 62 SmartItems, only 4 had p-values below .20, indicating that these 4 might be too difficult to keep on the exam. In addition, only 8 items had correlations close to zero. The large majority, therefore, performed as designed and built. Given that these were SmartItems, and therefore were viewed differently by each candidate, it was wonderful to see that they could serve competently on a high-stakes certification exam. Based on the individual SmartItem statistics, it isn’t too surprising that the resulting reliability was high.

Comparability of SmartItems with Traditional Items

After this, our interest was piqued; we wanted to know how well multiple-choice SmartItems compared with traditional multiple-choice items covering the same 21 skills. So we created a 21-item test based on the popular HBO series Game of Thrones (GoT). Questions covered the content of Season 1 and included items designed to measure a variety of cognitive skills. The traditional multiple-choice item for each of the 21 skills was created from the first rendering of the corresponding SmartItem, so it covered the same skill. From almost 1,200 GoT fans who volunteered to take this test, here are some classical statistics averages:

Table 1. Average classical item statistics for SmartItems and traditional multiple-choice items.

Based on the above data, here are some conclusions I would draw:
  1. The average p-value for the SmartItems is an average across the renderings randomly generated for the roughly 1,200 participants. It is not expected to match the traditional MC average, because the traditional MC items (being fixed, and being the first rendering of each SmartItem) could easily have turned out slightly more difficult in actual content. This does NOT mean that SmartItems are generally easier, or more difficult, than their traditional MC counterparts.
  2. There is no difference in measures of discrimination.
  3. There is no difference in measures of average item duration.
  4. The advantages of SmartItems over traditional items are many. In addition to the findings above, SmartItems almost completely remove the security problems found with traditional multiple-choice items, and they drastically reduce test development and maintenance costs. SmartItems can even motivate appropriate test preparation by learners and teachers alike.
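For readers curious how these classical test theory statistics are typically computed, here is a minimal sketch in Python. It is not Caveon's actual analysis code: the response matrix is fabricated, and the sample sizes simply mirror the 70 examinees and 62 items mentioned above.

```python
# Minimal illustrative sketch (not Caveon's analysis code): classical test theory
# item statistics from a scored 0/1 response matrix, using only numpy.
# The data are fabricated stand-ins: 70 examinees x 62 items.
import numpy as np

rng = np.random.default_rng(0)
responses = (rng.random((70, 62)) > 0.4).astype(float)

n_people, n_items = responses.shape
total_scores = responses.sum(axis=1)

# p-value: proportion of examinees answering each item correctly
p_values = responses.mean(axis=0)

# Point-biserial discrimination: correlation of each item score with the
# rest-of-test score (item removed from the total, a common refinement).
point_biserials = np.array([
    np.corrcoef(responses[:, j], total_scores - responses[:, j])[0, 1]
    for j in range(n_items)
])

# Coefficient alpha (equivalent to KR-20 for dichotomously scored items),
# a standard internal-consistency reliability estimate.
item_variances = responses.var(axis=0, ddof=1)
total_variance = total_scores.var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Mean p-value:         {p_values.mean():.2f}")
print(f"Mean point-biserial:  {np.nanmean(point_biserials):.2f}")
print(f"Reliability (alpha):  {alpha:.2f}")
```

Run against real scored responses, the same calculations yield the item-level p-values and point-biserials discussed in the item fitness analysis, plus a single reliability coefficient like the .83 reported for the certification exam.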
If you’d like to learn more about SmartItems, we’d love to help. Please reach out to Alison at alison.foster@caveon.com and she will gladly show you more about the Caveon SmartItem, and how it can help your testing program.
Learn more about SmartItems HERE.