What is the best way to compare results for an annual survey when the questions change year over year?

  • I manage an annual survey that currently has 42 questions.  The responses to all the questions are averaged together to get a meta score, and my survey analysis primarily consists of comparing this year’s score against last year’s score.  Between last year and this year, we eliminated 3 questions, added 6 questions and changed 9 questions slightly.  I am having trouble understanding the appropriate way to compare the two data sets:

    1) I could ignore the question changes and simply compare meta scores for the two years.
    2) I could count only the questions that stayed constant year over year and recalculate last year’s score with that smaller subset of questions.
    3) I could eliminate the questions that were removed from last year’s results and recalculate last year’s score; in this case I would count all the new questions in this year’s score.
    4) Or is there a fourth and better option?

  • Answer:

    #4: Proceed with extreme caution and give serious thought to what you're doing. Otherwise, it's garbage in, garbage out. Anyone with a background in social statistics is going to look at such a "meta score" with suspicion. Even the term is suspect.

    First off, you need to demonstrate that your INDEX (that's the more accepted term, as opposed to "meta score") actually measures a single underlying concept/construct. Reliability analysis (Cronbach's alpha) and confirmatory factor analysis are two commonly used approaches here. With 42 items, my best guess is that you're measuring more than one underlying construct, meaning that your "meta score" isn't a coherent measure of anything (here's hoping no one's bonus is tied to those scores...). Even if you do have an acceptable index, 42 items makes for a lot of redundant measurement. You can get by with fewer than 10, sometimes as few as 3 or 4. Again, my best guess is that you'd end up throwing out a lot of questions, or creating a series of indexes instead of just one.

    Also, you need to evaluate whether your "slightly" changed questions are in fact equivalent to the original questions -- that is, you need to test for measurement invariance. What might seem like a slight change of wording can be interpreted quite differently by survey respondents. The onus is on the researcher to show that two questions elicit the same or similar responses.

    Lastly, even if you have a workable index built from the subset of questions that remained the same across both waves (as envisioned in your option #2), you still need to contend with question order effects: simply put, asking some survey questions before others (potentially) influences the responses to the later questions -- change the survey context and you (potentially) change the responses. So #2 is viable, and I've seen it done many times, but it really warrants a caveat about the changed survey context.

    And what's wrong with comparing scores question by question between years? That's probably the least problematic and most easily understood procedure to employ.
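
Below is a minimal, illustrative sketch in Python (not part of the answer above) of two of the checks it mentions: Cronbach's alpha computed on the questions asked in both years, and a question-by-question comparison across waves. The simulated data, sample sizes, and the choice of Welch's t-test per item are assumptions made for the example, not details from the original survey.

# Minimal sketch of two checks discussed above; all data here is simulated/hypothetical.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-questions score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of questions
    item_vars = items.var(axis=0, ddof=1)       # per-question variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def simulate_likert(n_respondents: int, n_items: int, rng: np.random.Generator) -> np.ndarray:
    """Hypothetical correlated 1-5 responses driven by a single latent trait."""
    trait = rng.normal(0.0, 1.0, size=(n_respondents, 1))
    noise = rng.normal(0.0, 1.0, size=(n_respondents, n_items))
    return np.clip(np.rint(3.0 + trait + noise), 1, 5)

rng = np.random.default_rng(0)
last_year = simulate_likert(200, 30, rng)   # 30 questions asked in both waves
this_year = simulate_likert(180, 30, rng)

# Reliability check: do the common items hang together as a single index?
# A common rule of thumb treats alpha around 0.7 or higher as acceptable.
print(f"alpha, last year: {cronbach_alpha(last_year):.2f}")
print(f"alpha, this year: {cronbach_alpha(this_year):.2f}")

# Question-by-question comparison across waves (Welch's t-test per common item),
# instead of comparing a single pooled "meta score".
for q in range(last_year.shape[1]):
    t_stat, p_value = stats.ttest_ind(this_year[:, q], last_year[:, q], equal_var=False)
    if p_value < 0.05:
        print(f"question {q + 1}: mean {last_year[:, q].mean():.2f} -> "
              f"{this_year[:, q].mean():.2f} (p = {p_value:.3f})")

A low alpha on the common items would suggest they do not form one coherent index, in which case the per-item comparison (or several smaller indexes) is the safer route, as the answer notes.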

Tim Gravelle at Quora
