Anchoring vignettes and self-assessed ratings: Monte-Carlo evidence on specification

Tuesday, June 24, 2014: 9:10 AM
Waite Phillips 103 (Waite Phillips Hall)

Author(s): Nigel Rice

Discussant: Teresa Bago d'Uva

Anchoring vignettes have become a popular means of adjusting self-assessed data for systematic differences in reporting behaviour. Vignettes are hypothetical descriptions of fixed levels of a latent construct such as health status. Respondents are asked to rate the level described by a given vignette, and differences in their ratings are assumed to reflect differences in reporting behaviour. Vignettes therefore offer a useful means of assessing systematic variation in ratings by relating respondents' assessments to their socioeconomic and demographic characteristics. This information can then be used to adjust respondents' self-assessments of health status to achieve greater cross-respondent comparability. The majority of studies that address reporting heterogeneity using vignettes have adopted the hierarchical ordered probit (HOPIT) model. The HOPIT model extends the standard ordered probit model by allowing the cut-point thresholds that separate the response categories to vary across individuals as a function of respondent characteristics. In so doing, the model allows reporting behaviour to vary systematically across respondents.
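A compact way to state the model is as follows (a sketch using notation common in this literature; the exact threshold parameterization varies across studies):

```latex
% Latent self-assessed health for respondent i
H_i^* = x_i'\beta + \varepsilon_i, \qquad \varepsilon_i \sim N(0,1)

% Observed ordered response y_i \in \{1,\dots,K\}
y_i = k \quad \text{iff} \quad \tau_i^{k-1} < H_i^* \le \tau_i^k,
\qquad \tau_i^0 = -\infty, \; \tau_i^K = +\infty

% HOPIT: thresholds depend on respondent characteristics z_i
\tau_i^1 = z_i'\gamma^1, \qquad
\tau_i^k = \tau_i^{k-1} + \exp\!\left(z_i'\gamma^k\right), \quad k = 2,\dots,K-1
```

The exponential increments keep the thresholds ordered; setting all \gamma^k slope coefficients to zero recovers the standard ordered probit.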

Despite the growing popularity of the vignette methodology for addressing reporting heterogeneity, formal evaluation of the approach remains a topic of ongoing research. For the approach to be valid, two assumptions need to hold. The first, termed vignette equivalence, implies that the level of the variable represented by any one vignette is perceived by all respondents in the same way and on the same unidimensional scale. This assumes that all respondents agree on the underlying latent level described by the vignette, except for random error. The second, termed response consistency, implies that individuals use the same mapping from the underlying latent scale to the available ordered response categories when responding to both the self-assessment and the vignette questions. This assumption allows the relationship between reporting behaviour and respondent characteristics, obtained from the vignette responses, to be used to adjust respondents' self-reports of the underlying construct of interest (e.g. health).
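Under a common formalization, the two assumptions can be written as follows (a sketch; the notation is illustrative, with \tau_i^k denoting the respondent-specific thresholds of the HOPIT model):

```latex
% Vignette equivalence: the latent level \theta_j of vignette j is the
% same for all respondents i, up to random error
V_{ij}^* = \theta_j + \varepsilon_{ij}

% Response consistency: the vignette rating y_{ij}^{v} uses the SAME
% thresholds \tau_i^k as respondent i's self-assessment
y_{ij}^{v} = k \quad \text{iff} \quad \tau_i^{k-1} < V_{ij}^* \le \tau_i^k
```

Violations correspond to letting \theta_j vary with i (failure of vignette equivalence) or letting the vignette thresholds differ from the self-assessment thresholds (failure of response consistency).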

The empirical literature investigating the validity of these two assumptions of the HOPIT model is equivocal. This paper uses a Monte-Carlo design to assess the bias introduced into model parameter estimates when the two assumptions fail to hold. In addition, we investigate the extent to which the HOPIT approach is useful in reconciling self-assessed data with its objective underlying counterpart for different degrees of failure of the underlying model assumptions. This addresses the practical issue of how useful the HOPIT approach is for adjusting self-assessments for reporting behaviour when response consistency and vignette equivalence fail to hold.
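The mechanism such a Monte-Carlo design exploits can be sketched in a few lines of Python (an illustrative simulation with assumed parameter values, not the paper's actual design): when two groups have identical true latent health but different reporting thresholds, raw self-reports differ across groups, and vignette ratings of a fixed latent level reveal the reporting shift.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Assumed setup: two groups with IDENTICAL true latent health but different
# reporting behaviour, implemented as a group-specific shift of the cut-points.
group = rng.integers(0, 2, size=n)       # binary respondent characteristic
h_true = rng.normal(0.0, 1.0, size=n)    # latent health, same distribution in both groups
shift = 0.5 * group                      # group 1 applies stricter (higher) cut-points

base_cuts = np.array([-1.0, 0.0, 1.0])   # K = 4 ordered response categories

def rate(latent, shift):
    """Map latent values to categories 0..3 using respondent-specific cut-points."""
    cuts = base_cuts[None, :] + shift[:, None]
    return (latent[:, None] > cuts).sum(axis=1)

# Self-assessments: true health is identical, yet raw group means differ
y_self = rate(h_true, shift)
gap_self = y_self[group == 0].mean() - y_self[group == 1].mean()

# Vignette: every respondent rates the SAME fixed latent level theta,
# so any rating gap across groups is pure reporting heterogeneity
theta = 0.3
y_vign = rate(np.full(n, theta), shift)
gap_vign = y_vign[group == 0].mean() - y_vign[group == 1].mean()

print(f"self-report gap: {gap_self:.2f}")  # nonzero despite equal true health
print(f"vignette gap:    {gap_vign:.2f}")  # isolates the reporting shift
```

Because the vignette fixes the latent level, the cross-group gap in vignette ratings identifies the threshold differences that the HOPIT model parameterizes; a full experiment would then estimate the model on such data and vary how far the two assumptions are violated.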