Process Overlap Theory

Scores on diverse cognitive tests correlate positively. This is called the positive manifold, which can be statistically accounted for by a general factor, g, often identified with a domain-general cognitive ability. An alternative explanation, process overlap theory (POT), assumes that any item or task requires a number of domain-specific as well as domain-general cognitive processes. Domain-general processes involved in executive attention are activated by a large number of test items, alongside domain-specific processes tapped only by specific types of items. Besides the positive manifold, the theory accounts for a number of other phenomena, such as the higher across-domain variance in low-ability groups (differentiation). Process overlap theory is translated into a multidimensional item response model, bridging psychometrics and cognitive psychology. Simulations based on the POT model (i.e., without positing a unitary ability) confirm that such data fit a higher-order general factor model. Hence the ability to extract g does not imply that g has any causal effect on cognitive test performance.

Kovacs, K., & Conway, A. R. A. (2016). Process overlap theory: A unified account of the general factor of intelligence. Psychological Inquiry, 27, 151–177.

Kovacs, K., & Conway, A. R. A. (2019). What is IQ? Life beyond general intelligence. Current Directions in Psychological Science, 28, 189–194.

Psychological aspects of test-taking

We investigate the psychological aspects of test-taking from two perspectives. The first focuses on computerized adaptive testing (CAT), a method of testing based on item response theory that selects test items according to the ability estimates of examinees. Although many studies have provided evidence of the psychometric advantages of CAT over traditional linear tests, evidence on psychological aspects is limited. We investigate the effect of adaptive testing on the psychological reactions of examinees. This includes systematic reviews and meta-analyses of empirical articles studying the effects of CAT on motivation and anxiety compared to linear testing. Additionally, we are conducting empirical studies on the effect of item selection on test-taking experience. In particular, in CAT, items for which examinees have a 50% chance of a correct answer are often the most effective from a psychometric perspective. However, from the perspective of the examinees, this success rate may not be preferable, especially in the case of young examinees, for whom it might result in an uncomfortable experience. We are exploring the effects of success rate on examinees' test-taking experience. The second perspective focuses on the effect of the order of test administration on user experience: we found that in multi-test assessment settings, administering non-ability tests first and ability tests later has a positive effect on participant motivation.
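Why CAT gravitates toward a 50% success rate can be shown with a short sketch (the item bank and ability estimate below are made up for illustration): under a Rasch model, Fisher information for an item peaks exactly where item difficulty equals the examinee's ability, i.e., where the probability of a correct answer is 0.5, so maximum-information item selection keeps serving items the examinee has even odds of solving.

```python
import numpy as np

def p_correct(theta, b):
    """Rasch model: probability of a correct answer given
    ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item, p * (1 - p);
    maximal when p = 0.5, i.e., when b = theta."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

# Hypothetical item bank of difficulty parameters and a current
# ability estimate for one examinee.
item_bank = np.linspace(-3, 3, 61)
theta_hat = 0.7

# Maximum-information selection picks the item whose difficulty is
# closest to theta_hat, giving a predicted success rate near 50%.
chosen = item_bank[np.argmax(item_information(theta_hat, item_bank))]
print(round(chosen, 2), round(p_correct(theta_hat, chosen), 2))
```

This is the psychometric logic behind the "50% chance" items mentioned above; the research question is whether a selection rule that deliberately targets a higher success rate trades some measurement precision for a better test-taking experience.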

Akhtar, H., Silfiasari, Vekety, B., & Kovacs, K. (2022). The effect of computerized adaptive testing on motivation and anxiety: A systematic review and meta-analysis. Assessment, 10731911221100995.

Akhtar, H., & Kovacs, K. (2023). Which tests should be administered first, ability or non-ability? The effect of test order on careless responding. Personality and Individual Differences, 207, 112157.