Notes from the Field: SmartItems
Tara Williams, Chief Editor, Caveon Test Security
Tara Williams, a member of the SmartItem development team, discusses what it’s like to create these items, and outlines several of the lessons she has learned on this psychometric journey.
For the past year, I’ve been working as a specialist on exam development projects that utilize a new concept in testing – the SmartItem. We’ve created Game of Thrones, Geography, and IT exams, as well as items for many other content areas. It’s been challenging, rewarding, and eye-opening – a process of perpetual discovery. I’ve personally taught subject matter experts how to write SmartItems and editors how to review them, and I’ve recently been thinking about the ways SmartItems will influence – and possibly improve – our test development processes. So here I am, reporting in from the field to those who are curious about what it’s like to create these items. I will share a couple of the lessons we’ve learned on the ground as we’ve embarked on this psychometric journey.

Avid Lockbox readers have likely heard of SmartItems. For anyone new to this e-zine, or to the concept in general: SmartItems use special technology during development and delivery so that the item changes each time it is administered and covers the objective completely. For example, let’s say you have the following objective:

Know the order of the planets from the sun.

In traditional exam development, the writer creates a select number of items for this objective, per the blueprint stipulations. The writer chooses, based on his or her own experience or personal preferences, which pieces of the skill to measure – in this case, which planets to test on. The final product might be a single item that looks something like this:

Which planet is second from the sun?
A. Earth
B. Saturn
C. Mercury
D. Venus

In contrast, a SmartItem is created within a computerized tool to encompass the entire objective, not just a piece of it. Here’s what the scaffolding of a SmartItem for this objective might look like:

Which planet is [first, second, third, fourth, fifth, sixth, seventh, eighth] from the sun?

The computer chooses one entry from this list, at random, to present within the question. One candidate may see “Which planet is first from the sun?”; another may see “Which planet is third from the sun?”; and so on. The correct option depends on which list entry is presented: if “third” appears in the stem, for example, “Earth” is the correct option. The incorrect options can be any planets in the solar system other than the correct one, and they are also presented at random. As you can see, this item covers our objective entirely, because it includes each planetary position. (For the astronomy buffs out there, please feel free to voice your concerns on whether Pluto should or should not be included in this list. Email: TaraWilliams.CE@caveon.com. Any and all impassioned stances on Pluto are welcome.)
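To make the mechanics concrete, here is a minimal sketch in Python of how a delivery engine might render this SmartItem. It is illustrative only – the function and data names are my own, not Caveon’s actual SmartItem technology: pick a planetary position at random, derive the key from it, and draw the distractors from the remaining planets.

```python
import random

# Assumed content boundary for the example: the eight planets in
# order from the sun (Pluto deliberately excluded).
PLANETS = ["Mercury", "Venus", "Earth", "Mars",
           "Jupiter", "Saturn", "Uranus", "Neptune"]
ORDINALS = ["first", "second", "third", "fourth",
            "fifth", "sixth", "seventh", "eighth"]

def render_smartitem(rng: random.Random) -> dict:
    """Render one delivery of the planets SmartItem: choose a position
    at random, derive the key from it, and draw three distractors from
    the remaining planets."""
    position = rng.randrange(len(PLANETS))
    stem = f"Which planet is {ORDINALS[position]} from the sun?"
    key = PLANETS[position]
    distractors = rng.sample([p for p in PLANETS if p != key], 3)
    options = distractors + [key]
    rng.shuffle(options)  # so the key isn't always the last option
    return {"stem": stem, "options": options, "key": key}

if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed for a reproducible demo
    for _ in range(2):
        item = render_smartitem(rng)
        print(item["stem"])
        for label, option in zip("ABCD", item["options"]):
            print(f"  {label}. {option}")
        print(f"  Key: {item['key']}\n")
```

Run twice, the sketch prints two different renderings of the same item – two candidates see two different slices of the same objective.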
"A SmartItem is created within a computerized tool to encompass the entire objective, not just a piece of it."
While the above introduction to SmartItems offers only a glimpse into this item format, it’s not a stretch to see that the SmartItem concept has the potential to revolutionize how we create exams. But what does exam development with SmartItems look like? Which processes change; which ones remain the same? Based on my experience, I believe that as SmartItems are adopted, programs will discover that, beyond the core benefits of reducing test theft and cheating, SmartItems offer fringe benefits – most notably, built-in quality control during the development process. I’d like to discuss two ways our team has noticed SmartItems can enhance the quality of the traditional exam development process:
  1. SmartItems Demand Clear Learning Objectives

In traditional exam development, our goal is to create high-quality objectives, but this doesn’t always happen. Learning objectives with “loose” or undefined content boundaries pass through the design phase and all the way through the development of an exam. Writers must often infer what information belongs in their items and what doesn’t. SmartItems demand that our learning objectives be clear, well-defined, and unambiguous. Why? If we ask a writer to cover an entire objective in one SmartItem, the writer naturally needs to know exactly what content to include and what to leave out. A poorly-defined objective is antithetical to creating a SmartItem; the content boundary has to be made explicit before the item can be built, as the sketch below illustrates. In our SmartItem writing workshop, we ensured that at least one expert who created the objective domain attended. We witnessed spirited, important conversations between that expert and the writers about the scope of the exam, about what we were trying to measure in each objective, and why. Observing these conversations was heartening, as we were witnessing further refinement of the objective domain. We were witnessing quality control of the exam, demanded by the SmartItem format.
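To put that in concrete terms, here is a hypothetical sketch (the class and field names are my own invention, not Caveon’s tooling) of the idea that a SmartItem template turns an objective’s content boundary into explicit, checkable data – a loosely defined objective surfaces immediately as a list the writer cannot finish enumerating:

```python
from dataclasses import dataclass

@dataclass
class SmartItemTemplate:
    """Hypothetical template spec: the variable slot must enumerate
    the objective's entire content boundary as data."""
    objective: str
    stem: str               # stem text with a {slot} placeholder
    slot_values: list[str]  # every value the slot may take

    def check_coverage(self, expected: int) -> None:
        """Fail fast when the enumerated boundary doesn't match what
        the blueprint promises -- the telltale symptom of a loosely
        defined objective."""
        if len(self.slot_values) != expected:
            raise ValueError(
                f"{self.objective!r}: enumerated "
                f"{len(self.slot_values)} slot values, "
                f"expected {expected}")

template = SmartItemTemplate(
    objective="Know the order of the planets from the sun",
    stem="Which planet is {slot} from the sun?",
    slot_values=["first", "second", "third", "fourth",
                 "fifth", "sixth", "seventh", "eighth"],
)
template.check_coverage(expected=8)  # passes only once Pluto's status is settled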
  2. SmartItems Limit Writer Subjectivity
Similarly, in asking a writer to cover an entire objective in a single item, we remove the subjectivity that can creep into traditional exam development when a writer decides which part of the objective to write toward. Writers likely base this decision on many things – personal preference, personal experience, or perhaps whatever is easiest to write. In aiming to cover the entire objective, we reduce this possibility of writer subjectivity. In our workshops, writers felt empowered as they worked back and forth with the creators of the objective domain to ensure that the content in their SmartItems encapsulated the entire objective. Additionally, being passionate about their areas of expertise, they liked the idea that candidates would have to study the material more deeply and thoroughly, since exam items cover all content associated with the objectives.

Our team has already learned much about SmartItems, and we will continue to make new discoveries over time. I look forward to sharing them with you. But for now, over and out from the field.