Published online by Cambridge University Press: 16 July 2025
This article proposes a new approach for measuring the quality of answers in political question-and-answer sessions. We assess the quality of an answer based on how easily and accurately it can be recognized among a random set of candidate answers given the question’s text. This measure reflects the answer’s relevance and depth of engagement with the question. Drawing a parallel with semantic search, we implement this approach by training a language model on the corpus of observed questions and answers, without additional human-labeled data. We showcase and validate our methodology within the context of the Question Period in the Canadian House of Commons. Our analysis reveals that while some answers bear only a weak semantic connection to the questions, suggesting some evasion or obfuscation, they are generally at least moderately relevant, far exceeding what we would expect from random replies. We also find meaningful correlations between the quality of answers and the party affiliation of the members of Parliament asking the questions.
Edited by: Margaret E. Roberts
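The abstract describes scoring an answer by how easily it can be recognized among random candidate answers given the question's text. The sketch below is one plausible way to operationalize that idea; it is not the authors' implementation. It substitutes a pretrained sentence encoder ("all-MiniLM-L6-v2") for the model the authors fine-tune on their question-answer corpus, and it uses a reciprocal-rank score as an illustrative quality measure.

```python
# Minimal sketch: rank the observed answer among random distractor answers
# by semantic similarity to the question, and score it by reciprocal rank.
# Assumptions (not from the article): the pretrained encoder, the pool of
# distractors drawn from other observed answers, and the reciprocal-rank score.
import random

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")


def answer_quality(question, true_answer, answer_pool, n_candidates=100, seed=0):
    """Score how easily `true_answer` is recognized among random candidates."""
    rng = random.Random(seed)
    distractors = rng.sample(
        [a for a in answer_pool if a != true_answer],
        k=min(n_candidates, len(answer_pool) - 1),
    )
    candidates = [true_answer] + distractors

    # Embed the question and all candidate answers, then compare by cosine similarity.
    q_emb = model.encode(question, convert_to_tensor=True)
    c_emb = model.encode(candidates, convert_to_tensor=True)
    sims = util.cos_sim(q_emb, c_emb)[0]

    # Rank of the true answer (index 0): 1 means it is the most question-like candidate.
    true_sim = sims[0]
    rank = int((sims > true_sim).sum().item()) + 1
    return 1.0 / rank  # 1.0 when the true answer is easiest to recognize
```

In this sketch, a high score indicates an answer that engages closely with the question's content, while a score near the random baseline would suggest evasion or obfuscation of the kind the article discusses.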