The Iowa gambling task (IGT) is one of the most popular tasks used to study decision-making deficits in clinical populations. In order to decompose performance on the IGT into its constituent psychological processes, several cognitive models have been proposed (e.g., the Expectancy Valence (EV) and Prospect Valence Learning (PVL) models). Here we present a comparison of three models—the EV and PVL models, and a combination of these models (EV-PU)—based on the method of parameter space partitioning. This method allows us to assess the choice patterns predicted by the models across their entire parameter space. Our results show that the EV model is unable to account for a frequency-of-losses effect, whereas the PVL and EV-PU models are unable to account for a pronounced preference for the bad decks with many switches. All three models underrepresent pronounced choice patterns that are frequently seen in experiments. Overall, our results suggest that the search for an appropriate IGT model has not yet come to an end.
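The core idea of parameter space partitioning can be illustrated with a deliberately simplified sketch. The snippet below is not the MCMC-based PSP algorithm used in the paper; it is a grid-search caricature, and the payoff means, the valence rule (loosely inspired by the EV model's weighting of wins versus losses), and the two-way "good vs. bad deck" pattern classification are all assumptions made for illustration only.

```python
from collections import Counter

# Assumed mean win and mean loss per trial for the four IGT decks
# (A/B are the "bad" high-win, high-loss decks; C/D are the "good" decks).
WINS = {"A": 100.0, "B": 100.0, "C": 50.0, "D": 50.0}
LOSSES = {"A": 125.0, "B": 125.0, "C": 25.0, "D": 25.0}

def choice_pattern(w):
    """Classify the qualitative choice pattern implied by loss-attention w.

    Valence of a deck is (1 - w) * mean win - w * mean loss, a simplified
    version of an EV-style utility. The pattern is 'bad-deck preference'
    if the best bad deck outvalues the best good deck, else 'good-deck
    preference'.
    """
    valence = {d: (1.0 - w) * WINS[d] - w * LOSSES[d] for d in WINS}
    bad = max(valence["A"], valence["B"])
    good = max(valence["C"], valence["D"])
    return "bad-deck preference" if bad > good else "good-deck preference"

def partition_parameter_space(n_grid=101):
    """Grid-sample w in [0, 1], classify each point, and report the
    fraction of parameter space occupied by each qualitative pattern."""
    grid = [i / (n_grid - 1) for i in range(n_grid)]
    counts = Counter(choice_pattern(w) for w in grid)
    return {pattern: c / len(grid) for pattern, c in counts.items()}

volumes = partition_parameter_space()
```

With these assumed payoffs the boundary between the two regions falls at w = 1/3: low attention to losses yields a bad-deck preference, high attention a good-deck preference, and the dictionary `volumes` reports each region's share of the parameter space. The actual PSP method performs this kind of region discovery over all model parameters jointly, using MCMC rather than a grid.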
Steingroever, Helen; Wetzels, Ruud; and Wagenmakers, Eric-Jan, "A Comparison of Reinforcement Learning Models for the Iowa Gambling Task Using Parameter Space Partitioning," The Journal of Problem Solving: Vol. 5, Iss. 2, Article 2.
Available at: http://docs.lib.purdue.edu/jps/vol5/iss2/2