
Topic

    Survey Questions Scoring Strategy?
    Posted July 13, 2010 by peitience, last edited October 29, 2011
    3413 Views, 2 Comments

    Hi!

    I was curious as to how any of you currently score your survey questions. At one point in time, before we understood how scoring could be important, we made positive responses worth more and negative responses worth less (e.g. "Were we helpful?" No = 1 point, Yes = 2 points).

    Since then, we have completely turned this on its head and developed the concept where NEGATIVE feedback is scored HIGHER. For example, one question asking about specific categories our customers may have had issues in has each individual answer choice worth 50 points. This allows us to pull only surveys with very high scores and review that negative feedback for ways we can improve. We have a "Win Back" team that specifically reaches out to the customers who have given negative surveys to resolve these issues.
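
    To illustrate, here's a rough sketch of what that pull amounts to (plain Python with a made-up data shape, just to show the logic; the field names and threshold aren't from the product):

        surveys = [
            {"id": 101, "score": 0},    # no issue categories selected
            {"id": 102, "score": 150},  # three issue categories at 50 points apiece
            {"id": 103, "score": 50},   # one issue category
        ]

        # Anything carrying at least one issue category's worth of points gets reviewed.
        WIN_BACK_THRESHOLD = 50
        win_back_queue = [s for s in surveys if s["score"] >= WIN_BACK_THRESHOLD]

        for s in win_back_queue:
            print(f"Survey {s['id']} (score {s['score']}) -> 'Win Back' follow-up")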

    I'm not sure this is exactly industry standard, but thus far it has worked for our purposes.

    Anyone else willing to share their survey scoring strategy?

    Answer


    • Aaron

      We answered this in our expert seminar and I will touch on the main points here as well.

      First of all, the scoring should go hand-in-hand with the survey methodology (e.g. if using a Net Promoter type of question, the scoring should be based on the standard 0-10 scale). This will allow you to compare your results to published industry standards.
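
      For reference, the Net Promoter calculation itself is straightforward. Here's a quick sketch in Python (the 9-10 promoter / 0-6 detractor cutoffs are the standard published NPS definition; the sample responses are made up):

          def nps(responses):
              # Responses are 0-10 answers to a "likelihood to recommend" question.
              promoters = sum(1 for r in responses if r >= 9)   # 9-10 = promoter
              detractors = sum(1 for r in responses if r <= 6)  # 0-6 = detractor
              return 100.0 * (promoters - detractors) / len(responses)

          print(nps([10, 9, 8, 7, 6, 3, 10]))  # 3 promoters, 2 detractors -> ~14.3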

      To dig deeper, I understand the thought process around making negative scores higher so that the ‘Win Back’ team can easily identify these for follow-up.  However, it seems like this will cause issues whenever anyone outside of the process needs to understand survey results.  Any time the results need to be communicated at an executive level, they would need to be accompanied by a caveat explaining why high scores are very negative.  This may also cause an unnecessary learning curve for new employees or internal transfers, as it seems counter-intuitive.

      There should be enough functionality in the product to keep a normal scoring system and still catch low (negative) scores.  A few options include the following:

      • Create custom reports that sort on score ascending.  This will ensure that all low scores are shown first when running the reports.
      • Utilize exceptions in reports that specifically target a score below some threshold.  This, combined with scheduled reports that run regularly, should surface the negative responses that require follow-up (see the sketch after this list).
      • Have follow-up processes built into the survey itself.  In advanced mode, it is possible to branch based on the response to any question in the survey and then create incidents or notifications which can be assigned to the ‘Win Back’ team.  This automates the process, ensuring that no one has to manually spot the negative score in a report.
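
      To make the first two options concrete, here's a generic sketch (plain Python with an assumed data shape, not the product's report API) of an exception report that sorts ascending and flags anything under a threshold:

          EXCEPTION_THRESHOLD = 3  # assumed cutoff on a normal 1-5 scale

          surveys = [
              {"id": 201, "score": 5},
              {"id": 202, "score": 2},  # under threshold -> follow-up
              {"id": 203, "score": 1},  # under threshold -> follow-up
          ]

          # Sort ascending so the worst responses surface first (option 1),
          # then keep only those under the threshold (option 2).
          flagged = sorted(
              (s for s in surveys if s["score"] < EXCEPTION_THRESHOLD),
              key=lambda s: s["score"],
          )

          for s in flagged:
              print(f"Assign survey {s['id']} (score {s['score']}) to the 'Win Back' team")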
    • LarryC

      I know this post is pretty old, but I figured I'd present our strategy and hope to get some feedback as well.

      We currently score our satisfaction responses in the following manner:

      • Extremely Satisfied = 5
      • Somewhat Satisfied = 4
      • Neither Satisfied nor Dissatisfied = 3
      • Somewhat Dissatisfied = 2
      • Extremely Dissatisfied = 1

      Essentially, we tally up the scores with the weights above, figure out the average out of a perfect '5', then convert that to a percentage of overall satisfaction.  However, there are some major flaws with this methodology:

      1. If someone scores a 1 on all aspects, this is still 20% overall.  That may be okay from a Pass/Fail or Successful/Unsuccessful perspective, but your total scores would then only effectively range from 20%-100%.

      2. If we changed our score values from 1-5 to 0-4, then, in my mind, the mapped percentages for each score would fall like this:

      • 0=0%
      • 1=25%
      • 2=50%
      • 3=75%
      • 4=100%

      So, looking at this model, say we get all 2's in our surveys throughout the month (Neither Satisfied nor Dissatisfied); then our CSAT for the month would be 50%.  Reporting this back up to our Senior Execs, they will look at the score and think 'COMPLETELY FAILED', as we have certain goals at 90% CSAT and they most likely will not understand the scoring methodology behind the scenes.  They will most likely interpret it as a school grade:

      A = 90%, B = 80%, and so on down to <60% = F.

      Therefore, I'm not sure it's appropriate to score “Neither Satisfied nor Dissatisfied” as a 50%.  If we were to follow the Net Promoter strategy, I'm not sure how we'd present the NPS score in a meaningful way or how to relate that to our 90% CSAT goals.
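
      To make both flaws concrete, here's the arithmetic as a quick Python sketch (the helper names are mine, nothing product-specific):

          def csat_1_to_5(scores):
              # Current method: average the 1-5 weights against a perfect 5.
              return 100.0 * (sum(scores) / len(scores)) / 5

          def csat_0_to_4(scores):
              # Alternative: shift the same answers to 0-4 so the floor is a true 0%.
              return 100.0 * (sum(s - 1 for s in scores) / len(scores)) / 4

          all_ones = [1, 1, 1, 1]    # Extremely Dissatisfied across the board
          all_threes = [3, 3, 3, 3]  # Neither Satisfied nor Dissatisfied

          print(csat_1_to_5(all_ones))    # 20.0 -> flaw 1: the floor is 20%, not 0%
          print(csat_0_to_4(all_ones))    # 0.0
          print(csat_1_to_5(all_threes))  # 60.0
          print(csat_0_to_4(all_threes))  # 50.0 -> flaw 2: all-neutral reads as a failing 50%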

      Any thoughts or feedback?

