The Embrace Change Edition
Dragos Iliescu, University of Bucharest
An Uneasy Alliance
Dragos Iliescu of the University of Bucharest discusses the important alliance between testing and technology, and the need for this complex relationship to move from "uneasy" to "enthusiastic"
Testing is arguably the most technology-related of all activities conducted by psychologists. Most other activities typical of psychology are based on direct one-on-one human interaction and build on rapport, a fundamentally personal relationship that is hard enough to generate through direct contact, and for which technology is, with very rare exceptions, more a deterrent than a help. This is the case for psychotherapy, clinical work, counseling, coaching, personal development, and many others. By contrast, testing has always been conducted with the help of some kind of technology. Even when completed only on a paper-based form or answer sheet (not terribly advanced technology), assessment has always been done through a technological proxy.
In spite of this, testing has always had an uneasy alliance with technology; technology has been held at arm’s length. On one hand, it is certainly true that testing has quickly absorbed some types of technology. For the past few decades, testing has made use of the advantages offered by computers, networks, mobile devices, and the like. Interestingly, though, these technologies have never been more than fancy page-turners for testing. In essence, testing and assessment have remained true to their paper-and-pencil approach, except that the screen, mouse, and keyboard have replaced the paper sheet and pencil. In principle, technology (such as a computer) was used for authentication, stimulus display, interaction with stimuli, scoring, and reporting ... but not in any way fundamentally different from how these were done without a computer.

On the other hand, such an approach shows a lack of adaptability and is certainly a recipe for long-term failure. Max McKeown, one of the influential innovation ‘gurus’, captured the gist of the issue by stating that “adaptability is about the powerful difference between adapting to cope and adapting to win”. From this point of view, testing and assessment have certainly not adapted to change. This is painfully visible in the fact that the field has learned to cope with technological change, but never enough to win the technological battle. Technology is absorbed and used, but testing companies are not the players driving change forward in such matters as deep-learning algorithms (or, more generally, computational psychometrics), assessment through wearable sensors, the use of chatbots, and others. Au contraire, technology companies now habitually encroach on territory traditionally claimed by psychological and educational testing and assessment.
In the end, the objective of any testing procedure is to understand, describe, and predict human behavior. Traditional “testing” has certainly been found effective in this respect. But this end may also be achieved – and is in fact achieved – through a number of novel procedures based on the intensive analysis of large data sets for the prediction of human behavior. Procedures labeled “computational learning”, “artificial intelligence”, “artificial neural networks”, “machine learning”, or “deep learning” have become increasingly commonplace. These approaches were originally developed as fields of computer science, based on the use of statistical techniques for the analysis of large data sets, aiming to enable computer systems to improve their performance on a specific task – i.e., to (better) “learn” tasks on which they were not specifically programmed. They have proven effective in domains connected to human behavior, such as natural language processing, customer relationship management, advertising, financial fraud detection, admission testing, personnel selection, and others. We could argue that these approaches are not new to psychometrics, and that such statistical approaches as factor analysis and item response theory fundamentally qualify as machine learning. The significant difference in modern machine learning resides in the computing capacity of modern computer systems, which enables them to crunch and detect patterns in very large data sets.
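The claim that item response theory already qualifies as machine learning can be made concrete with a small sketch. The following is a hypothetical illustration (simulated data, NumPy only, not any particular product’s code): it estimates Rasch-model item difficulties with the same gradient-ascent loop used to train a logistic-regression classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 persons answering 10 items under a Rasch (1PL) model:
# P(correct) = sigmoid(theta_person - b_item)
n_persons, n_items = 500, 10
theta_true = rng.normal(0, 1, n_persons)      # person abilities
b_true = np.linspace(-1.5, 1.5, n_items)      # item difficulties
p_true = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
responses = (rng.random((n_persons, n_items)) < p_true).astype(float)

# "Machine learning" estimation: joint maximum likelihood via gradient
# ascent -- the same update loop used for logistic regression.
theta = np.zeros(n_persons)
b = np.zeros(n_items)
lr = 0.5
for _ in range(500):
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))  # model predictions
    resid = responses - p          # gradient of the Bernoulli log-likelihood
    theta += lr * resid.mean(axis=1)
    b -= lr * resid.mean(axis=0)
    b -= b.mean()                  # identification: difficulties sum to zero

# Recovered difficulties track the true (centered) ones closely.
corr = np.corrcoef(b, b_true - b_true.mean())[0, 1]
print(f"correlation with true difficulties: {corr:.2f}")
```

The point of the sketch is that nothing here is foreign to psychometrics: the “learning algorithm” is simply maximum-likelihood estimation of a latent-trait model, and what modern machine learning adds is chiefly the scale of data and compute at which such loops now run.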

This is certainly a change dominated by computer science. Psychometrics is a bystander – a concerned bystander, to be sure, watching the inevitable draw near. Few psychometricians, if any, have jumped on this bandwagon. There are some notable exceptions, but by and large, psychometricians are excluded from this evolution for several reasons. For example, they do not have access to the data sets needed to develop these algorithms, and they do not particularly adhere to the statistical or mathematical approach these algorithms require. But more than anything, they have a specific mindset that often makes them look down on the newcomer (it’s not “true” psychometrics), and it is this mindset that keeps them away from the fascinating evolutions in the field.

Of course, the question rightly emerges of how this change could be embraced by testing professionals. I believe testing professionals need to become less attached to their traditional methods. Tests have looked a very specific way for the past 100 years (some would argue for the entire history of modern testing) – so much so that the measurement and subsequent prediction of human characteristics are associated entirely with the form and far less with the principle behind it. I also believe that psychometricians need to see the value in what they could bring to the table, and take note of the contribution they could make to this debate and to the future evolution of testing. Being convinced of one’s own potential to contribute is an excellent source of enthusiasm, while lacking such conviction leads to a self-fulfilling prophecy, in which those who consider themselves ill-equipped to play the game will play the game poorly. In fact, I believe that psychometricians can contribute not from a secondary support position but from the forefront of the field, through their ability to drive robust and important advances.

For example, a major critique of artificial intelligence algorithms is that they are fundamentally atheoretical; they lack “construct validity” and are often little more than unguided fishing expeditions based on data-crunching – a black box that takes an empirical rather than theoretical approach to confirmation. Psychometricians could certainly contribute by envisioning ways in which algorithm creation could be guided by scientific theories – in this way, algorithms would not replace our current scientific approach to the prediction of human behavior, but add to it.
"Adaptability is about the powerful difference between adapting to cope and adapting to win"

- Max McKeown
Copyright© 2018 Caveon, LLC. All rights reserved.