User outcome survey
Author: Andy Braren | Last edit: May 01, 2024
What is it?
A survey that asks a target user group two key questions about a set of user outcome statements:
- How important is the following statement to you?
- Extremely important
- Very important
- Moderately important
- Slightly important
- Not at all important
- How satisfied are you with your ability to achieve this today?
- Extremely satisfied
- Somewhat satisfied
- Neither satisfied nor dissatisfied
- Somewhat dissatisfied
- Extremely dissatisfied
- (Optionally) Please briefly explain your ratings
These two ratings are then combined with a formula (described below) into an Opportunity Score, allowing the outcome statements to be stack-ranked and categorized as one of the following:
- Overserved: satisfied by existing solutions, not a lot of opportunity
- Appropriately-served: not super satisfied, but also not super important
- Underserved: not fully satisfied and important; opportunities to provide new value
- Table stakes: so important and satisfied that it’s the minimum bar to be considered; a product should support users in achieving this
Every user outcome statement our team has tested with any user type is collected in our user outcome repository for everyone’s reference.
Why run a user outcome survey?
User outcome surveys are a great way to:
- Understand which parts of a user’s job-to-be-done they’re unsatisfied with today
- Help teams prioritize roadmap items that address underserved areas, increasing the likelihood that they’ll be appreciated by users
- Get a baseline quantitative satisfaction metric that can be measured over time as solutions are shipped that intend to improve the user’s experience (see the Understanding and defining outcomes and KPIs method)
- Inform the product’s marketing toward user needs that will likely resonate
- Inform the product’s documentation toward helping users find out how to achieve the outcomes they care about most
- Learn whether users of different products are satisfied with their ability to achieve certain outcomes or not
- Find participants to schedule follow-up user interviews with
- Inform competitive analysis based on solutions that address “table stakes” user outcomes
Including the optional question asking respondents to explain their ratings can hint at why users rated each user outcome the way they did, but additional follow-up user interviews are recommended to develop a deeper understanding.
When should you run a user outcome survey?
Running a brand new user outcome survey can take a while and involve many people, which could feel distracting to teams under deadline pressure. Try to run these surveys either early in the product lifecycle, while the team is still exploring the problem space, or after launch, when the team is figuring out what to prioritize next.
Thankfully, you may not actually need to run a new user outcome survey. Check whether any of the user outcomes in our user outcome repository are relevant enough to be reused instead of creating a new survey.
How to run a user outcome survey
- Identify the user outcomes that you’re interested in testing with a new survey
- Meet with a ResearchOps member to discuss budget, target audience, and timeline
- Use the UXD Screener Index (coming soon) to inform your screener questions
- Ask a UX Researcher to create a copy of this Qualtrics survey and add in your screener, themes, and each user outcome statement
- Share the survey with potential participants
Try to avoid including more than 40-50 user outcome statements in one survey, especially if you ask participants to elaborate on their answers. Aim to gather responses from 30+ people who meet your screening criteria.
Once you’ve collected enough responses, use the following formulas to determine the Importance, Satisfaction, and Opportunity Score:
- Importance = (# who responded Very or Extremely Important) / (total responses)
- Satisfaction = (# who responded Extremely or Somewhat Satisfied) / (total responses)
- Opportunity Score = Importance + max(Importance - Satisfaction, 0)
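If you'd like to sanity-check the arithmetic before updating the repository, here's a minimal Python sketch of the calculation. The rating labels mirror the survey scales above, and the example data is made up for illustration; it is not the actual Qualtrics export format.

```python
# Minimal sketch of the Importance, Satisfaction, and Opportunity Score formulas.
# The rating labels mirror the survey scales; the example responses are invented.

def top_two_importance(ratings):
    """Share of respondents who rated the outcome Extremely or Very important."""
    top = {"Extremely important", "Very important"}
    return sum(r in top for r in ratings) / len(ratings)

def top_two_satisfaction(ratings):
    """Share of respondents who are Extremely or Somewhat satisfied today."""
    top = {"Extremely satisfied", "Somewhat satisfied"}
    return sum(r in top for r in ratings) / len(ratings)

def opportunity_score(importance, satisfaction):
    """Opportunity Score = Importance + max(Importance - Satisfaction, 0)."""
    return importance + max(importance - satisfaction, 0)

# Five hypothetical responses for a single outcome statement
importance_ratings = ["Extremely important", "Very important", "Very important",
                      "Moderately important", "Slightly important"]
satisfaction_ratings = ["Somewhat dissatisfied", "Neither satisfied nor dissatisfied",
                        "Somewhat satisfied", "Extremely satisfied", "Somewhat dissatisfied"]

importance = top_two_importance(importance_ratings)        # 0.6
satisfaction = top_two_satisfaction(satisfaction_ratings)  # 0.4
print(round(opportunity_score(importance, satisfaction), 2))  # 0.8
```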
Add each statement and its scores to our user outcome repository to determine whether it's Underserved, Overserved, Appropriately-served, or Table Stakes. You can also use Google Sheets to create a chart like this one to share in readout presentations.
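If you'd like a rough sense of how the two scores map to the four categories, the sketch below shows one possible reading. The 0.6 cut-off is a placeholder assumption for illustration only; the repository spreadsheet holds the thresholds our team actually uses.

```python
# Hypothetical categorization sketch; the cut-off value is an assumption,
# not the threshold used in the user outcome repository.

def categorize(importance, satisfaction, high=0.6):
    if importance >= high and satisfaction >= high:
        return "Table stakes"          # important and already well satisfied
    if importance >= high:
        return "Underserved"           # important but not fully satisfied
    if satisfaction > importance:
        return "Overserved"            # more satisfied than it is important
    return "Appropriately-served"      # neither very important nor very satisfying

print(categorize(0.6, 0.4))  # Underserved
```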
Examples
- Platform Engineering user outcome survey Qualtrics form
- Platform Engineering user outcome survey readout
- User outcome prioritization survey and Miro activity
Additional resources
- Tony Ulwick’s outcome-driven innovation framework
Get in Touch!
Spotted a typo? Any feedback you want to share? Do you want to collaborate? Get in touch with the UXD Methods working group using one of the links below!
Drop us a message on the #uxd-hub Slack channel
Submit Feedback