
I was recently asked to give an online guest lecture for the University of Wisconsin-Madison User Experience Design Capstone Certificate, or “Mad UX,” program.  I had mentioned to Kristin Eschenfelder, the Director of the iSchool there, that I thought there were some non-traditional ways of using surveys to do UX research.  So she asked me to give a short guest lecture on the topic.  This post is a summary of that talk.

Surveys are especially useful early in a UX project to learn more about the users, their needs, and perhaps their expectations with regard to a system or product.  This is by no means a summary of all the ways you can use surveys to do UX research; instead, it’s a handful of tips about using surveys in ways that you might not have thought about.

Tip #1: Consider the MaxDiff Method

If you haven’t heard of the MaxDiff method, you might want to take a look at it.  MaxDiff, which stands for Maximum Difference, is a way of getting users to prioritize their needs.  For example, let’s assume you’re working on the design of an app that would allow users to search for hotel rooms using various criteria, and you want to know which of those criteria users actually care about most.  In a traditional survey approach you might ask users to rate the importance of each criterion, using something like the example below. But it’s quite possible that many of your respondents would give ratings like these:

Survey where everything is rated Most Important

Of course, it’s possible that all of those attributes are very important to the users, but it’s also possible that the respondents just weren’t being all that thoughtful in their responses.  The MaxDiff method is a way of getting users to prioritize.  Respondents are shown a subset of the possible attributes and are asked to indicate only the least important and the most important attribute in that subset. This process is then repeated for additional subsets.  For example, the first two subsets might be something like this:

MaxDiff Example

The key is that each time the user must choose one attribute as the least important and one as the most important out of that subset.  Generally, no more than about five attributes are shown in each subset.  In this particular example, the process continued for a total of four subsets. You can use an online calculator to determine the number of sets you need to present given the total number of attributes and the number shown per set.  Across all the respondents, various subsets of the attributes are shown. You can see an online example of a MaxDiff question type built using SurveyGizmo.
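If you don’t have one of those calculators handy, a common rule of thumb (and it’s just a heuristic I’m assuming here, not necessarily the formula any particular calculator uses) is that each attribute should be shown to each respondent about three times.  Here’s a quick Python sketch of that idea, with made-up numbers:

import math

def maxdiff_num_sets(num_attributes, items_per_set, exposures_per_attribute=3):
    # Rule-of-thumb estimate of how many MaxDiff sets each respondent should see,
    # assuming each attribute should appear about `exposures_per_attribute` times.
    return math.ceil(num_attributes * exposures_per_attribute / items_per_set)

# Example: 10 hotel-search attributes, 5 shown per set
print(maxdiff_num_sets(10, 5))   # 6 sets per respondent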

The results you get from this type of exercise show you, for each attribute, the percentage of the time it was chosen as Most Important, Least Important, and not chosen. In addition, you get a score for each attribute which represents its overall importance. The typical approach to scoring is (# of times chosen as Best – # of times chosen as Worst)/(# of times presented).
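To make that scoring concrete, here’s a small Python sketch that applies the Best-minus-Worst formula to some made-up counts (the attribute names and numbers are purely illustrative):

# Best-Worst ("count") scoring:
# (# times chosen Best - # times chosen Worst) / (# times presented)
responses = {
    # attribute: (times chosen Most Important, times chosen Least Important, times presented)
    "Price":             (140,  10, 200),
    "Location":          (120,  15, 200),
    "Free cancellation":  (60,  40, 200),
    "Pool":               (20, 110, 200),
}

scores = {attr: (best - worst) / shown
          for attr, (best, worst, shown) in responses.items()}

# Print the attributes from most to least important
for attr, score in sorted(scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{attr:18s} {score:+.2f}")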

MaxDiff is provided as a predefined question type in several online survey tools, including Qualtrics, SurveyGizmo, CheckMarket, and QuestionPro.

Tip #2: Show Your Confidence (Intervals, That Is)

Most people who know me know that I’m a big fan of confidence intervals. Confidence intervals show the likely range of values for a mean, as in this example:

Confidence Intervals for Design Choices

The 90% confidence interval is shown for each mean as error bars. They help both the researcher and any viewer of the graph understand how confident you can be in each value and get at least some sense of which differences actually matter.

When dealing with survey responses, confidence intervals are more commonly referred to as margin of error.  Technically, the margin of error is half of the total confidence interval, but, just to confuse matters, when Excel calculates a confidence interval using the CONFIDENCE function, it returns that “half” value, so it’s really more like a margin of error.  In any event, we’re talking about a value that needs to be both added to and subtracted from the mean to get the total confidence interval.

The assumption here is that we’re dealing with responses to multiple-choice questions where the user can choose one answer (e.g., “Which of these four designs do you most prefer?”).  There are at least two types of confidence intervals, or margin of error, that we see when dealing with surveys.  One is the overall margin of error for the entire survey and the other is for individual questions.

The basic formula for calculating margin of error is:

Margin of Error = z * sqrt(p * (1-p) / n)

Where:

z = the z-value for your chosen confidence level (e.g., 1.645 for a 90% confidence level or 1.96 for 95%)
p = the sample proportion
n = your sample size

So what’s the “sample proportion”?  Well, if you’re calculating the margin of error for the overall survey, the traditional approach is to just assume that it’s 0.5 (or 50%), because p * (1-p) reaches its maximum value of 0.25 when p is 0.5, which gives the largest possible margin of error.  So if you want a 95% confidence interval, the formula boils down to:

Margin of Error = 1.96 * sqrt(0.25 / n)

For example, with a sample size of 100:

Margin of Error = 1.96 * sqrt(0.25/100), or
Margin of Error = 0.098, or 9.8%

So the overall margin of error for this 100-person survey is ± 9.8%.  (That’s the type of margin of error you hear reported for things like political polls or opinion surveys.)

If you’re looking at the responses to an individual question you can let the actual proportions determine the “sample proportion” in that calculation.  So, let’s assume that 22% said they prefer Design B, the sample size was 100, and we want a 95% confidence level.  That would give us:

Margin of Error = 1.96 * sqrt(.22 * (1-.22) / 100), or
Margin of Error = 0.081, or 8.1%

Note that the margin of error will decrease the further you get from 50%, either above or below.
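If you’d rather script these calculations than use an online calculator, the formula is only a few lines of Python.  This sketch just reproduces the two worked examples above:

import math

def margin_of_error(p, n, z=1.96):
    # Margin of error for a proportion: z * sqrt(p * (1 - p) / n)
    # z = 1.96 for a 95% confidence level, 1.645 for 90%
    return z * math.sqrt(p * (1 - p) / n)

# Overall survey margin of error (worst case, p = 0.5) with n = 100
print(round(margin_of_error(0.5, 100), 3))    # 0.098, or 9.8%

# Margin of error for a single question where 22% chose Design B
print(round(margin_of_error(0.22, 100), 3))   # 0.081, or 8.1%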

There are a number of online calculators for margin of error.

You’ll note that some of those calculators ask for the total population size.  My assumption has been that the total population is something much larger (e.g., 100,000) than the sample.  In that case, the total population size doesn’t really matter.

Tip #3: Collect Performance Data

Most of us don’t think of surveys as a way of collecting performance data. They’re primarily a tool for collecting subjective or self-reported data like ratings, verbatim responses, etc.  But with some online survey tools you can also get performance data, particularly how long it takes respondents to answer a question. This can let you use a survey to conduct something more like an online usability study.

For example, take a look at this survey question which asks the respondent to use the Bentley University website to find some information about their UX Certificate program:

Online study built with survey

The key thing is that, behind the scenes, the survey is timing how long it takes the user to answer the question. To accomplish this, a “Timing” element was placed on the page in addition to the actual question. This is a hidden element not shown to the respondent. It’s important that there be only one question per page, since the timer runs until the user submits the page.  And, since there’s a single correct answer, you can get accuracy data in addition to the time.  You now have the same kind of data that you can get from an online usability study.

Taking this a step further, you could also direct half of the respondents to one design (site) and half to a different design.  These might both be prototypes, or perhaps two competitors’ sites.  Then you could compare the performance data for the two designs, as in this example:

Time Data

Here Design A yielded significantly shorter mean task times than Design B. And this data was all captured with an online survey tool.  Two of the tools I know of that include this kind of timing capability are SurveyGizmo and Qualtrics.
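If your survey tool lets you export the individual response times, the comparison itself is easy to script.  Here’s a sketch using made-up task times and a two-sample t-test from SciPy (in practice you might also want to log-transform the times, since task times tend to be skewed):

from scipy import stats

# Hypothetical task times in seconds, one list per design,
# as exported from the survey tool
design_a_times = [34, 41, 29, 52, 38, 45, 31, 40, 36, 48]
design_b_times = [55, 62, 49, 71, 58, 66, 60, 53, 64, 57]

t_stat, p_value = stats.ttest_ind(design_a_times, design_b_times)

print(f"Design A mean: {sum(design_a_times) / len(design_a_times):.1f} sec")
print(f"Design B mean: {sum(design_b_times) / len(design_b_times):.1f} sec")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")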

Tip #4: Get Feedback about Mobile Designs

These days anyone doing UX design, especially web design, is probably also doing mobile design. In fact, according to the global stats from StatCounter.com, more than half of web sessions overall are from a mobile device. So if you’re not testing your website on mobile devices you probably should be.

This technique for testing mobile designs with an online survey relies on a simple fact: many people who are sitting at a desktop or laptop computer also have their smartphone with them. So the basic approach is simple: deliver the online survey on the desktop/laptop while they’re accessing the mobile design(s) on their smartphone.  (They could both be delivered on the smartphone, but the limited screen real estate makes it impractical.)  From the user’s perspective, the setup looks something like this:

Mobile Survey Setup

The online survey (potentially with performance-type tasks) is presented on the desktop and the user completes tasks or finds the answers to questions using their mobile device. Once they find the answer or complete the task on their mobile device they go back to the survey on their desktop to answer the question, provide ratings, etc.

The key to making this work successfully is syncing the online survey with the mobile design.  You could just provide a URL at the beginning of the survey with directions for the user to type it on their smartphone.  But that could be error-prone, even with a shortened URL.  These days, a better approach is to provide a QR code (probably in addition to a shortened URL) at the beginning of the survey.  That would look something like this:

Mobile Survey with QR Code

The user would then simply point their smartphone’s camera at the QR code to be taken to the appropriate URL automatically. A number of QR-code generators are available online.  The generator gives you an image which you simply put at the beginning of your survey along with appropriate instructions. Note that for iPhones the ability to recognize QR codes was added with iOS 11. Some Android phones may have it built in, but there are also plenty of free apps for reading QR codes.
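If you’d rather generate the QR code yourself than use one of the online generators, the Python qrcode package (just one option I happen to use here, not anything tied to a particular survey tool) will do it in a couple of lines:

import qrcode   # pip install qrcode[pil]

# Hypothetical shortened URL pointing at the mobile design being tested
url = "https://example.com/mobile-study"

img = qrcode.make(url)            # returns an image of the QR code
img.save("mobile_study_qr.png")   # add this image to the start of the survey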

Tip #5: Consider using “Speed Bumps” in Longer Surveys

This technique has been used in market research for quite some time, but you might not be familiar with it in the context of UX research.  Basically, it’s a way of detecting respondents who aren’t really taking your survey seriously and are just trying to get to the end so they can receive whatever incentive is being offered.  These are often referred to as “speeders and cheaters”. They’re mainly a problem with longer surveys and with the use of commercial panels to recruit your respondents.

The technique for trying to “catch” these speeders and cheaters is simply to insert a question where you explicitly tell them what answer to give, such as this example:

Speed Bump Question

If the user doesn’t give the answer that you tell them to give, there’s a pretty good chance they’re not paying attention and you should consider deleting their data.  Jeff Sauro did an analysis of five studies that contained these types of “speed bump” questions and he found that the percentage of respondents who failed the test ranged from 2% to 20% with a mean of 12%.  That analysis was from 2010 and my own experience has been that the percentage of these speeders and cheaters, at least in commercial panels, has been steadily increasing since then.  So being able to weed them out of your data is probably pretty important and useful.  There are other techniques for detecting them as well, such as looking at the total time that someone took to complete the survey or checking for illogical answers (e.g., when asked what brand of dog food they use, they select an option you provided which doesn’t actually exist).
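Once the data is exported, flagging these respondents can be as simple as a filter.  Here’s a sketch using pandas, with made-up column names and thresholds (the real ones will depend on your survey tool’s export and the length of your survey):

import pandas as pd

# Hypothetical export: one row per respondent; column names are illustrative
df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "speed_bump":    ["Somewhat disagree", "Strongly agree",
                      "Somewhat disagree", "Somewhat disagree"],
    "total_seconds": [412, 388, 95, 501],
})

# The answer the speed-bump question told respondents to choose,
# and a minimum plausible completion time for this survey
EXPECTED_ANSWER = "Somewhat disagree"
MIN_PLAUSIBLE_SECONDS = 120

df["flagged"] = ((df["speed_bump"] != EXPECTED_ANSWER) |
                 (df["total_seconds"] < MIN_PLAUSIBLE_SECONDS))

clean = df[~df["flagged"]]
print(f"Kept {len(clean)} of {len(df)} respondents")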

These kinds of speed bumps are particularly useful in matrix-type questions, such as the following System Usability Scale (SUS) example:

SUS with Speed Bump

In a matrix like this, it might be particularly tempting for a respondent to just tick off the same rating for each item, such as all down the middle.  Inserting a speed bump (fourth question from the bottom in this example) helps catch this.  By the way, some would argue that this is one of the reasons for alternating positive and negative statements in SUS: if a respondent answers “Strongly agree” or “Strongly disagree” to all the statements, that’s not logical.  But answering “Neutral” to every statement is logical.

I doubt that any of these tips are particularly earth-shattering, but I hope you find at least some of them useful.
