Anchoring vignettes and self-assessed ratings: Monte-Carlo evidence on specification
Despite the growing popularity of the vignette methodology for addressing reporting heterogeneity, formal evaluation of the utility of the approach remains a topic of ongoing research. For the approach to be valid, two assumptions need to hold. The first, termed vignette equivalence, implies that the level of the variable represented by any one vignette is perceived by all respondents in the same way and on the same unidimensional scale; that is, all respondents agree on the underlying latent level described by the vignette, up to random error. The second, termed response consistency, implies that individuals use the same mapping from the underlying latent scale to the available ordered response categories when responding to both the self-assessment and the vignette questions. This assumption allows the relationship between reporting behaviour and respondent characteristics, identified from the responses to the vignettes, to be used to adjust respondents' self-reports of the underlying construct of interest (e.g. health).
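The response-consistency assumption can be made concrete with a minimal sketch. The cut-points, latent levels, and two-respondent setup below are purely illustrative assumptions, not a specification from the paper: each respondent maps latent health to an ordered category via personal thresholds, and consistency means the same thresholds govern both the self-assessment and the vignette rating.

```python
import numpy as np

def report(latent, thresholds):
    """Map a latent value to an ordered category (0..K) given cut-points."""
    return int(np.searchsorted(thresholds, latent))

# Hypothetical respondent-specific cut-points on the latent health scale.
cuts_a = np.array([-1.0, 0.0, 1.0])   # respondent A
cuts_b = np.array([-0.5, 0.5, 1.5])   # respondent B: a "stricter" reporting style

true_health = 0.3                      # same underlying level for both respondents
vignette_level = 0.3                   # a vignette describing that same level

# Under response consistency, each respondent applies ONE set of cut-points
# to both the self-assessment and the vignette question ...
assert report(true_health, cuts_a) == report(vignette_level, cuts_a)
assert report(true_health, cuts_b) == report(vignette_level, cuts_b)

# ... so differing vignette ratings across respondents (here, 2 vs 1 for the
# same latent level) reveal differences in reporting behaviour, not in health.
print(report(true_health, cuts_a), report(true_health, cuts_b))
```

Vignette equivalence is the complementary requirement that `vignette_level` is the same constant for all respondents, up to random error.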
The empirical literature investigating the validity of these two assumptions of the hierarchical ordered probit (HOPIT) model is equivocal. This paper uses a Monte-Carlo design to assess the bias introduced into model parameter estimates when the two assumptions fail to hold. In addition, we investigate the extent to which the HOPIT approach is able to reconcile self-assessed data with its objective underlying counterpart under varying degrees of failure of the underlying assumptions. This addresses the practical question of how useful the HOPIT approach is in adjusting self-assessments for reporting behaviour when response consistency and vignette equivalence do not hold.
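The logic of such a Monte-Carlo exercise can be sketched as follows. This is an illustrative data-generating process invented for exposition (the covariates, coefficient values, and threshold shift are assumptions, not the paper's actual design): latent health is simulated, reporting thresholds are shifted for one group, and the resulting self-reports show spurious group differences that an unadjusted analysis would mistake for health differences.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Illustrative DGP: latent health h depends on a covariate x;
# reporting style differs by group indicator z.
x = rng.normal(size=n)
z = rng.binomial(1, 0.5, size=n)         # group with a different reporting style
h = 1.0 * x + rng.normal(size=n)         # true latent health (true beta = 1)

# Group-specific cut-points: z = 1 shifts all thresholds upward, i.e. that
# group rates the same latent health more negatively (reporting heterogeneity).
base_cuts = np.array([-1.0, 0.0, 1.0])
cuts = base_cuts + 0.8 * z[:, None]

# Observed ordered self-reports (categories 0..3).
y = (h[:, None] > cuts).sum(axis=1)

# Naive comparison: among respondents with near-identical true health,
# the shifted group still reports systematically lower categories.
mask = np.abs(h) < 0.1
naive_gap = y[mask & (z == 1)].mean() - y[mask & (z == 0)].mean()
print(f"reported-category gap at equal true health: {naive_gap:.2f}")
```

In the full design, a HOPIT estimator would be fitted to draws like these, with the threshold shift, vignette noise, or both switched on, to measure the bias in the recovered coefficients relative to the true values.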