Survey Methodology, Active Learning for Voting Advice Applications
How much faster can Voting Advice Applications deliver recommendations of the same quality when they select questions dynamically based on users’ previous answers (Bachmann et al., 2024)?
How does such an adaptive questionnaire affect user behavior in Voting Advice Applications (Bachmann et al., 2026)?
Which algorithms can estimate the quality of recommendations before questionnaire completion, and how do users perceive such previews (Bachmann et al., 2026)?
More information will follow soon.
2026
under review
Adaptive Questionnaires for Voting Advice Applications: Three User Experiments on Recommendation Quality, Transparency, and Predictive Influence
Fynn Bachmann, Cristina Sarasua, and Abraham Bernstein
Adaptive Questionnaires (AQs) were recently proposed to accelerate the recommendation process in Voting Advice Applications (VAAs). Often supported by statistical models, AQs select the most informative next question based on users’ individual response profiles, thereby increasing the information gained per question. However, the user perspective on AQs has been studied to a lesser extent. To address this research gap, we conduct three online experiments focusing on how users (i) assess candidate recommendations, (ii) understand model explanations, and (iii) interpret their predicted responses. Our results show that highly engaged users are more satisfied with recommendations in AQs than in equally long, static questionnaires. While the study also reveals that some users have difficulties understanding the logic of the AQ’s statistical model, we find that they nevertheless rely on its predictions when explicitly displayed in the interface. This evidence suggests that AQs can contribute to political education in VAAs while improving the user experience.
Pol. Gov.
Estimating the Recommendation Certainty in Candidate-based Voting Advice Applications
Fynn Bachmann, Daan Van Der Weijden, Cristina Sarasua, and Abraham Bernstein
Voting Advice Applications (VAAs) typically require users to answer questionnaires before receiving party or candidate recommendations. As users answer more questions, the recommendations naturally become more accurate. However, when users do not complete the questionnaire, the certainty of these recommendations is unknown. In this work, we quantify this certainty by introducing an algorithm that estimates the Candidate Recommendation Accuracy (CRA) – the overlap between early and final recommendations – after each question. Through simulations based on existing voter data, we find that our algorithm is more accurate than heuristic estimates. Additionally, it can identify stable recommendations – candidates who are likely to be among the final recommendations – with fewer false positives. Furthermore, we conduct a user experiment investigating different ways of communicating recommendation certainty to users. Our results show that users answer more questions when they see a preview of stable recommendations, but quit the questionnaire earlier when we display an artificially high CRA estimate. Moreover, we find that users appreciate the interface’s simplicity over its accuracy. We conclude that displaying personalized stable recommendations can spark curiosity in VAAs while providing a robust estimate of recommendation certainty for users who submit incomplete questionnaires.
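As a minimal illustration of the quantity the CRA algorithm targets, the overlap between an early and a final top-k recommendation list can be computed directly once both are known. The function name and the top-k formulation below are assumptions for illustration; the paper’s contribution is an algorithm that *estimates* this value before the final answers exist.

```python
def candidate_recommendation_accuracy(early_ranking, final_ranking, k=5):
    """Fraction of the top-k early recommendations that also appear
    in the top-k final recommendations (illustrative sketch only)."""
    early_top = set(early_ranking[:k])
    final_top = set(final_ranking[:k])
    return len(early_top & final_top) / k

# Three of the five early candidates survive into the final top-5:
candidate_recommendation_accuracy(
    ["A", "B", "C", "D", "E"],
    ["A", "B", "C", "X", "Y"],
    k=5,
)  # -> 0.6
```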
2024
ECML/PKDD
Fast and Adaptive Questionnaires for Voting Advice Applications
Fynn Bachmann, Cristina Sarasua, and Abraham Bernstein
In Machine Learning and Knowledge Discovery in Databases. Applied Data Science Track
The effectiveness of Voting Advice Applications is often compromised by the length of their questionnaires. To address user fatigue and incomplete responses, some applications (such as the Swiss Smartvote) offer a condensed version of their questionnaire. However, these condensed versions cannot ensure the accuracy of recommended parties or candidates, which we show to remain below 40%. To address these limitations, this work introduces an adaptive questionnaire approach that selects subsequent questions based on users’ previous answers, aiming to enhance recommendation accuracy while reducing the number of questions posed to the voters. Our method uses an encoder and decoder module to predict missing values at any completion stage, leveraging a two-dimensional latent space that is reflective of the traditional methods used in political science for visualizing ideology. Additionally, a selector module is proposed to determine the most informative subsequent question based on the voter’s current position in the latent space and the remaining unanswered questions. We validated our approach using the Smartvote dataset from the Swiss Federal elections in 2019, testing various spatial models and selection methods to optimize the system’s predictive accuracy. Our findings indicate that employing the IDEAL model both as encoder and decoder, combined with a PosteriorRMSE method for question selection, significantly improves the accuracy of recommendations, achieving 74% accuracy after asking the same number of questions as in the condensed version.