User interviews
Author: Matt Carrano | Last edit: July 29, 2024
What are user interviews?
User interviews are semi-structured and moderated discussions with your target users. These discussions are used to gather qualitative data by asking users about their current activities, pain points, and desired outcomes or goals. During these sessions, the interviewer asks questions that are designed to probe topics of interest. While interviewers will typically work from a predetermined script, questions should be open-ended so that they provoke a conversation and can be adapted according to the interviewee’s responses. Interviews are intended to be performed live (in-person or remote) so that the interviewer can pick up on non-verbal cues that may provide additional insights into the interviewee’s attitude and feelings regarding a particular issue.
Interviews are a foundational part of several other user research methods, like usability testing, contextual inquiry, and outcome-driven design. In fact, the skills developed by becoming an experienced interviewer are applicable to any and all forms of qualitative user research. The data gathered through user interviews helps inform important user research artifacts, like personas, journey maps, and user scenarios.
Why perform user interviews?
User interviews give you the opportunity to gain insight into user preferences and behaviors. They also allow UX designers to build empathy with target users, which is a key ingredient for understanding users and appreciating their current pain points. This insight and understanding allow you to develop better potential solutions.
When should you do interviews?
User interviews are best performed during the discovery phase at the start of a new project or at the end of a project as a means to measure the applicability and effectiveness of new features. They are ideal when you are:
- Trying to decide what to work on next.
- Wanting to discover new opportunities.
- Needing to define your target users.
- In search of deeper insight into user behavior.
The following table illustrates how user interviews fit into the bigger picture of user research methods.
| Phase | Discovery | Exploratory | Testing | Listening and measuring |
| --- | --- | --- | --- | --- |
| When you need to: | Find out how people are doing things now. Learn about the real users and environment. | Understand opportunities and gaps. Prioritize who you are supporting and what their goals are. | See how people will react to something. Find ways to make something more user-friendly. | Understand why they feel the way they do. Establish metrics to measure the impact of changes. |
| Appropriate tools can include: | Field study, diary study, competitive analysis, stakeholder/user interviews | Journey maps/service blueprint, user stories, survey, card sort | Usability/prototype test, accessibility test, heuristic evaluation, tree jack, benchmark test | Semi-structured interview, sentiment scale (e.g. NPS, SUS, etc.), telemetry/log analysis |
| Outputs can include: | Insights summary, competitive comparison, empathy, personas, distinguishing what people say from what they actually do | Dendrogram, prioritized backlog items, structured data visualizations | User quotes/clips, usability issues by severity, where people click/look for things, measured impact of changes in product | Satisfaction metrics, broad understanding of user behaviors |
How to perform user interviews
Who to interview
Good results from an interview study can be obtained by talking to as few as 5-8 participants. Remember that the goal of these sessions is to gain deeper understanding, not to make statistical inferences. While it’s always good to talk with more people, the additional time and energy required to recruit and interview more than 8 participants may not justify the effort. Your goal should be to extract trends in participant responses, which you should start to see after talking with only a few people.
Exceptions to this rule might occur when your users can be divided into separate sub-groups or roles, like developers vs system administrators, for example. In that case you should allow for at least 3-5 participants per group.
If – at the end of your interviews with a small pool of participants – you still feel uncertain about the answers to your questions, you might consider expanding the study to include additional people.
Planning your interview study
Start by defining your research question(s). These are not questions that you will ask directly of participants, but rather they define what you want to learn from the study. What questions are you hoping can be answered at the end of the study? For example,
- Who are the target users for my application and what are their typical behaviors, goals, and pain points?
- What are the most frequently performed tasks using this software?
- How easy is it to install and set up this application?
You may generate several questions at this point. This is where people can get stuck, or where studies get bloated. Note all of your questions down somewhere, then prioritize them for the first round of interviews. Ask yourself: “What do I need to know FIRST?”
Once you’ve identified the questions you want answered, follow these simple steps to plan your study:
- Draft study plan (and maybe your final readout deck).
- Draft the actual study questions.
- Recruit the participants.
- Schedule the sessions.
- Prepare the moderator and observers/notetakers (optional Miro template for notetakers).
- Conduct the sessions.
- Thank your participants.
- Analyze and share the results.
You will likely find that the most time is spent preparing for sessions (developing the plan, recruiting, and scheduling participants) and analyzing results. Plan accordingly.
Writing your study questions
You need to translate your research questions into a set of interview questions that you will ask participants. Start by brainstorming. To keep your interview focused, only ask what you need to know and make sure your questions are relevant to your research goals.
You may find that your initial set of questions consists mainly of close-ended questions that can be answered with a short or one-word response. While these may be appropriate for survey research, they are inadequate for generative user research since they don’t allow unanticipated stories or statements to emerge.
One common technique for broadening interview questions and making them more open-ended is to ask participants to describe a specific incident, event, or experience that they recently had. For example, if you are trying to gather information about how to make it easier for people to prepare meals in their homes, you might ask: “Tell me about the last time you cooked at home.” The response to this question (or really a prompt) is likely to be in the form of a story. In response to what the participant tells you, you may ask follow-up questions to further clarify or dig deeper into topics that are of interest in relation to your study goals.
Another common technique is to give participants the “magic wand.” You might ask, “If you could wave a magic wand and change ANYTHING, so that it would just automagically do what you needed, what would it do?” This is a very powerful question. It frees participants up to dream and to go outside the bounds of your line of questioning, and even of reality. THIS IS WHERE GREAT IDEAS COME FROM (sometimes).
User Interviews publishes a great list of sample interview questions that can provide further ideas.
In constructing your interview questions, the following are some additional pitfalls to avoid:
- Avoid questions that put words in the participant’s mouth, such as “Why do you hate this product?”, “How hard was it for you to use?”, or “How awesome is this product in your opinion?” If the participant didn’t tell you that they hate the product or that it was hard to use, you shouldn’t assume that. Other questions can reveal what you’re looking for specifically: “We’re trying to learn whether you noticed this button or not” or “How would you rate the user experience of this product?”
- Make sure you’re not using any words that may create tension between you and the interviewee. For example: “How much do you make?” or “Why did you do that?” (instead of “I noticed you did XX, can you share your process with me?”).
- Avoid asking two questions in one: “What did you like and dislike about this product?” This may create confusion or prompt a half-baked answer. Another example: “How many times did you use OpenShift, or did you always use VMware?”
- Avoid complex questions. Questions need to be direct and clear. Otherwise, the participant may not understand the question correctly.
The following table illustrates some of the best practices in writing interview questions, including what to do and what to avoid.
| Common problems | Bad example(s) | Why this is problematic | A better approach |
| --- | --- | --- | --- |
| Leading questions | “Tell me about some of the problems that you’ve had printing with this software.” | The question assumes that there are problems with printing and that the interviewee must be experiencing them. This may prompt them to try to think of problems even if they have not had any. | “Tell me about your experience printing with this software.” |
| Suggesting what you want to hear | “Would you use this application? Do you like it?” | Without even realizing it, people often modify their responses to align with what they think you want to hear, which reduces candor and reliability. | “Thinking about the way you work, would this be of any value in your everyday tasks or not?” “How much or little does this feature actually matter to you? [Scale of 1-5] Why?” |
| Double-barreled questions | “Are security controls and dark mode options important to you?” | These muddle responses because they cram two or more queries into one. They can even create contradictions. | “Please rate each of the following features from high to low based on their importance to you.” |
| Artificial memory compression | “When you’re responding to a downtime alert, what do you usually do?” | This forces people to average out their experiences, which yields vague answers that lack detail or can even be inaccurate. For example, people don’t calculate the percentage of time they order alcohol with a meal (unless it’s never or always). | “Think about the last time you responded to a downtime alert. What did you do?” “Can you remember another time you responded to an alert? How was it the same or different?” |
When you’ve drafted your questions, order them by specificity: start with general questions that become more specific as the interview progresses, then open back up with general questions that summarize the session as you near the end.
This video provides further guidance on how to write and ask good questions during a user interview.
Recruiting and scheduling participants
Once you’ve established your study goals, identified your target user, and written your interview questions, you will need to identify people to interview. Recruiting can be one of the most difficult and time-consuming parts of planning a user interview study. As is the case for conducting any external research, you should review Red Hat's policies on handling personally identifying information (PII) and providing compensation and gifts to participants. You can find more information about Red Hat's standard recruiting practices here (VPN required).
When scheduling interviews, prepare participants with the following information:
- How long the session will be.
- If they will be expected to be on camera and/or share their screen.
- If they must be on a specific device type (e.g. if something will only work on a laptop, or on a Chrome browser).
- If there will be observers or notetakers (just refer to them as observers or “others who are curious about what you have to say” or something comfortable, and maybe don’t number how many are coming or draw attention to it).
- Include contact info if they have trouble connecting or need to reschedule.
- Consider reminding them of the appointment the day or morning before.
Refer to these outreach email templates for more guidance. You should start with this one about communicating with participants, but there are other gems in our collection. You might want to set up a Google Calendar so participants can self-schedule (this reduces drop-off rates!).
When conducting studies with external (non-Red Hat) participants, it is important to keep the following considerations in mind:
- Keep participant identity private. For legal, ethical, and compliance reasons, you must review and apply these practices for handling PII whenever engaging with human participants in research. If you are recruiting participants who are not fully anonymous (NO name, email, recording of audio or video), there are certain forms that must be completed both to protect Red Hat and the participants. As these are subject to change over time, reach out to UXD Ops for support in this area.
- Be aware of guidelines for paying participants (or other exchange of value in any form). If you choose to compensate participants in order to motivate them to participate there are strict compliance requirements on the terminology you use when offering and sending this compensation. If you go this route you should reach out to UXD Ops for help.
If you need help or have any questions about recruitment, be sure to reach out to the UXD Research Operations team.
Conducting interview sessions
Once you are ready to begin conducting your user interview sessions, here are some practices to follow and things to avoid to make sure things run smoothly:
- Conduct a practice session first, and pre-test any technology that you will be using to ensure that the sessions run smoothly. Pull in a colleague and ask them to go through the study as a participant, then give you feedback. You will be more prepared, and you might learn some ways to make the participant feel more comfortable.
- Introduce yourself and put the participant at ease. Explain the purpose of your study and cover any legal and compliance topics. Do NOT rush this. While this seems to “eat up” some of the time for your session, establishing friendly rapport is key to help the participant share with you.
- Embrace awkward silences. Silence is golden – don’t try to fill in the silence. If you’ve asked a question, give the respondent some time to formulate their response. If you’re nervous, try counting slowly to 15 in your head. You could even say “Take your time, and let me know if you’d like me to clarify my question.” The BEST stuff comes out when people have a minute to think. They may be debating whether something is worth mentioning, and your intervention might make them think you want to rush along.
- Give regular encouragement, but keep reactions neutral. You don’t want them to think that you “like” a response, or are looking for a specific response. Give feedback like:
- “Thanks for explaining that so clearly. I feel like I can picture it!”
- “That context is very helpful.”
- “These responses are clear and thorough. I think the team I’m taking this back to will appreciate your candor.”
- Ask clarifying follow-up questions. If participants state a like or dislike, always try to probe why they expressed that opinion.
- Respect the participant's time:
- End the session on time
- Sometimes ending on time will cut off the respondent, though! If it looks like the session could run over, ask something like, “It looks like I have 15 minutes worth of questions left, but we only have 10 minutes and I want to respect your time. Would you rather I drop some questions so we end right on time, or would you prefer to keep going at this pace even if we take a few extra minutes?”
- When talking to participants, call it a “discussion”, NOT an interview. We’ve had folks get confused and think it’s a job interview (really!) or feel very nervous like they are being quizzed. The term “discussion” is much more approachable.
- Avoid bias. You should always be conscious of keeping bias out of your interview practice. Bias can creep into the questions we ask, the participants we select, and even how we interpret the data. Be aware, but don’t despair. Just by observing and interacting with the human subjects, we are altering their behavior. Even seasoned researchers with hundreds of hours of live moderation experience may say something that introduces bias. To avoid this, be sure to check your own assumptions and preconceptions at the door. Ask open-ended questions that use words like “how, why, and what” to guide interviewees towards their own answers rather than leading them in a direction that you want or expect them to go.
- Be willing to have your assumptions challenged. We may all enter a research study with preconceived notions about what is true. Be open minded and willing to hear information that refutes previous assumptions and opens your mind to forming a new understanding of the problem at hand.
- Be prepared to engage with different personalities. Some participants might be verbose, while others require more prodding to open up. Keep talkative participants focused, and be prepared with multiple follow-up questions to keep more introverted types talking.
- Practice neutral responses and reflective listening. Instead of saying, “I’m so glad you think so!” or “Oh, no!”, try “Thanks for explaining that to me” or “I appreciate you sharing that.” Reflective listeners repeat back what they heard to make sure they understood. A reflective response to something you just heard might be, “Let me make sure I understood that. Are you saying [brief summary of what they said]?”
- Be sure to thank participants at the end. A lot of people find participating in research rewarding if they feel like their input will have an impact. You’ve worked hard to reduce bias the whole time, but now that it’s all done, go back into human mode. Tell them how helpful it was and how you will share it with excited folks. The end of an experience is a big factor in the emotions associated with recall of it, so take a minute to be pleasant again. Refer to these templates for ideas on sending a thank you note.
Watch this video for some additional advice and tips for performing good user interviews.
Analyzing your results
It’s important to collect and organize individual test findings in a way that can help you identify top problems and trends. You want to convert the data into insights that lead to action. Collect observations (while they’re fresh), including:
- Moderator notes
- Observer/notetaker notes
- Recordings
It’s always helpful to have a separate note-taker present during your interview sessions. This will allow you to focus on engaging the participant while someone else is taking notes. As a notetaker, capture as much as you can without making people feel like bugs under microscopes. It can help to record with permission or take notes (sample template provided) in a non-intrusive way. For more note taking tips, refer to our interview toolkit.
As you analyze your user interview data, your goal should be to extract trends that are repeatable across sessions. It’s easy to get lost in the large amount of data that your interviews may create. Keep your analysis focused on answering your initial research questions, and:
- Review and group similar observations or problems.
- Look for similar or related remarks.
- Group any identified issues by using affinity mapping or a similar technique.
- For each group, try to summarize the problem in one short statement.
- Find a way to place groups in “buckets” and rank the findings.
- Move groups with only 1-2 data points into a “could dig into this later” bucket.
- Other bucket examples could include: “Things they love”, “Daily Problems”, or “Workarounds”.
- Rank findings in a hierarchy that demonstrates the “size” of the problem. For example, you could rank findings as “t-shirt sized problems” from XS to XL.
Sorting observations at a high level might make your results more manageable and digestible by the team. Some possible categories could include:
- “Opportunities to improve our products and services”
- Simple fixes: these should be kept somewhere so they can be added to the backlog and pulled into sprints at some point.
- Wicked problems: these can go on a research backlog to dig into when the team is ready to try to tackle them. They may need to better define the problem or test potential solutions, for example.
- “More specific definitions of our ‘user’”
- Was it hard to find the people you expected to talk to? Is it possible that the people you had in mind don’t exist? Were they doing the tasks but maybe had a different job title?
- Refine your team’s understanding of assumptions about users.
- “Stuff we already knew”
- No harm done in tracking confirmed knowledge!
- “Surprises”
- Whoa. We weren’t expecting that. Cool!
- “New questions”
- Welcome to the eternal Möbius strip of seeking clarity!
- Again, put these on the research backlog to revisit when time permits.
You might also consider collecting your data points in some sort of spreadsheet or database. Put any notes you took and links to recordings right in the file too so they’re easy to find. Entering the data in one central location will also make it easier to group data points into insights as you analyze the data. If it’s a large study, this data will be the input to your statistical analysis software.
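If you do track observations in a spreadsheet, the grouping-and-ranking steps above can be sketched programmatically. The following is a minimal illustration only; the participant IDs and theme names are hypothetical, and this is not part of any Red Hat template:

```python
from collections import Counter

# Hypothetical tagged observations: (participant, theme) pairs produced
# by affinity mapping. A real study would load these from a spreadsheet.
observations = [
    ("P1", "Workarounds"), ("P1", "Daily problems"),
    ("P2", "Daily problems"), ("P2", "Workarounds"),
    ("P3", "Daily problems"), ("P3", "Things they love"),
    ("P4", "Daily problems"), ("P4", "Workarounds"),
    ("P5", "Surprises"),
]

# Count how many data points support each theme.
counts = Counter(theme for _, theme in observations)

# Themes with only 1-2 data points go into a "could dig into this later"
# bucket; the rest are ranked by how often they recurred across sessions.
later = sorted(t for t, n in counts.items() if n <= 2)
ranked = sorted(
    ((t, n) for t, n in counts.items() if n > 2),
    key=lambda item: item[1], reverse=True,
)

print("Ranked findings:", ranked)
print("Dig into later:", later)
```

The same tally also answers the “how many people did you hear from” question for your readout, since each finding carries its supporting count.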
Sharing the findings
The final step to the user interview process is to share your findings with your stakeholders! It is important to communicate the work that you’ve completed and the new information you’ve uncovered. Not only does this help demonstrate the value of a user interview study, but it also provides data-driven action points that can guide future work projects.
Before sharing your user interview findings, ask yourself the following questions:
- Who needs to know what?
- Maybe you learned stuff that doesn’t apply to your specific role or area. Or, perhaps your findings have a broader application. If you find yourself wondering if people are interested, default to oversharing!
- If you learned approaches that don’t work or other valuable lessons and techniques then share this with your audience. It’s helpful for others to learn from your work so that they can build off this new knowledge in future projects.
- Be sure to attribute/credit others who contributed to the project.
- Who is my audience?
- You might need both a mini version and a deep dive version of your results if you’re presenting to multiple audiences.
- Make your presentation quick and engaging, but have options to expand when time permits.
- Pro tips:
- Always make it clear how many people you heard from.
- Use data visualizations with care (for example, a pie chart might not be the best if you only spoke to 4 people).
Feel free to use or refer to the template shareout deck the UXD Research team uses. Make your own copy and scale down and shape it from there. Please always share your final findings with UXD Research and share them in the Research area on UX Hub so that others can benefit from your findings.
Finally, take extra care to remove any personally identifiable information and unnecessary detail before sharing with others. Examples include:
- Deleting names and email addresses from any spreadsheets or lists.
- Referring to individuals as “Respondent 1” or “Participant 2.”
- Disclosing only context that is pertinent to the project.
- Avoiding details like age, race, gender identity, etc., that could introduce bias or reveal individual identities.
Examples
Red Hat sales rep enablement discovery:
This was an internal study that involved discussion plus asking participants to walk through some common workflows to see how they approached key tasks. Here are the findings from the discussion plus a related survey.
Developer outcome discovery:
This was a deep dive interview into several underserved areas that were identified and prioritized from a survey. Here is the readout of the deep dive interview.
Additional resources
The following are internal and external resources for learning more about conducting user interviews:
- User Interviews for UX Research: What, Why & How - this is a great overall guide for performing a user interview study.
- Writing an Effective Guide for a UX Interview - an in-depth look at how to write good research questions.
- User Interview Discussion Guide - from User Interviews - an example interview script that follows Teresa Torres’s Continuous Discovery Habits framework.
- Conducting User Interviews - PWDR Enablement - user interview training (deck and recording).
- Interview Techniques for UX Practitioners - a handy e-book on the topic if you want to delve further into this topic.
- How to do analysis on “generative” research - while this guide is written with a specific focus on “Outcomes” interviews, the principles apply to most interviews.
- Debrief Template in Miro - will help the moderator and all observers sort through their hot takes from the sessions.
- Design-Thinking and Prioritization Miro - once you have sorted through the data and found some themes, make a copy of this or use it as inspiration for your own situation, to identify general solutions and future-state stories and agree on what should come next.
- Red Hat UXDR PII and research best practices - a guide to Red Hat's requirements around handling personally identifying information (PII).
- How to Ask the Right Questions - guidance on constructing good interview questions.
- Overcoming Communications Barriers in Research - how to confront communication challenges that may occur with different sets of target users/participants.
Get in Touch!
Spotted a typo? Any feedback you want to share? Do you want to collaborate? Get in touch with the UXD Methods working group using one of the links below!
Drop us a message on the #uxd-hub Slack channel
Submit Feedback