First of all, thank you to everyone who attended our successful webinar, Everything You Need to Know About MaxDiff, which aired on October 28, 2014. If you were unable to attend live, you can access the slides and replay here. A special thanks goes out to Paul Richard McCullough of MACRO Consulting Inc., Bert Willard of Ferguson, and our very own Esther LaVielle for an insightful presentation about MaxDiff applications in the real world.
Now, we are going to address some of the questions that came up during the presentation and answer them for you here. As always, if you have more questions that come up, or if you are interested in trying MaxDiff out for yourself, don't hesitate to reach out to us.
Q1: Where can MaxDiff be used?
A: MaxDiff can be used virtually anywhere you would use a grid with a rating scale; any such grid can essentially be replaced with a MaxDiff exercise. MaxDiff also does a much better job than traditional rating scales because it forces respondents to choose between the items that truly matter to them.
Q2: Why should I use MaxDiff?
A: MaxDiff discriminates better. It forces respondents to make hard choices about what is most important to them. Rather than every factor being rated as important, a relative importance is established across the items. This is what differentiates MaxDiff from other question types.
Q3: Is there ever an issue with the "thumbs up" and "thumbs down" imagery translating globally?
A: In past research, we ran surveys that used "thumbs up" and "thumbs down" symbols, and they failed in East Asian countries because those symbols carry a different meaning there. For example, a "thumbs down" was read as a symbol of shame, so respondents interpreted the question differently than intended. This effectively ruined the rest of the survey because they did not feel they could answer openly. In general, you always have to be careful with symbols when fielding in countries other than your own.
Q4: Is the argument for MaxDiff that you can see many things juxtaposed to each other at the same time, or is it just a sort of ease of cognitive stress?
A: The primary reason is to collect this data as time-efficiently as possible. If you take, say, 5 different brands and have to run 18 separate choice tasks per brand, that is extremely inefficient. MaxDiff alleviates this problem by showing all 5 items at once, so respondents don't have to reorient themselves each time they are shown an item.
Q5: Do sample sizes need to be adjusted when you are using a MaxDiff model versus a traditional rating scale?
A: It is roughly the same sample size. You don't really want to look at ratings data with a sample smaller than about 100, because your means are not stable. However, you can compensate for a small sample by adjusting the number of items per task and the number of tasks per respondent. In other words, if you only have 60 people, you can increase the number of tasks each of those 60 people completes to bring your precision up to the level you'd like it to be.
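The compensation described above is simple arithmetic: hold the total number of choice tasks roughly constant as the sample shrinks. A minimal sketch, with a hypothetical helper name and purely illustrative numbers (the 100-respondent baseline comes from the answer; the 10-task plan is an assumption):

```python
import math

def tasks_needed(planned_respondents, planned_tasks, actual_respondents):
    """Tasks per respondent needed to match the planned total number
    of choice tasks with a smaller-than-planned sample."""
    total_tasks = planned_respondents * planned_tasks
    return math.ceil(total_tasks / actual_respondents)

# Planned: 100 respondents x 10 tasks = 1,000 choice tasks in total.
# With only 60 respondents, each one must complete:
print(tasks_needed(100, 10, 60))  # -> 17
```

This keeps the volume of choice data comparable, though each respondent carries a heavier load, so task count still has to stay within a reasonable cognitive budget.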
Q6: How do you deal with the process of dividing up tasks per user or items per task?
A: In terms of the number of items, 4 or 5 per task is usually best. You will learn more about respondents' preferences by asking them about 4 brands at once than about 10 at once, because it is much easier to break down preferences 4 at a time than 10. In terms of the number of tasks, the limiting factor is cognitive burden. Luckily, with MaxDiff this burden is very light; the exercise is easy on the respondent, and actually kind of fun to take. This is another reason MaxDiff works so well: it results in virtually zero survey fatigue.
Q7: Can MaxDiff be used for Customer Satisfaction?
A: Yes, I think customer satisfaction is one of the primary uses for MaxDiff. It is used in that area quite often.
Q8: What if you have 60-100 attributes that you want to test?
A: In this case, I would divide respondents into quota groups. For example, with 60 attributes you could split the list across 6 quota groups, making sure there is an even spread of attributes across those groups so that no respondent sees all of the attributes at once. Although sometimes 60 attributes is just too many, and you may need to go back to the drawing board.
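The blocking idea above can be sketched in a few lines. This is a hypothetical helper, not the speakers' actual procedure: it randomly partitions a long attribute list into equal blocks, one per quota group, so each group sees only a manageable subset (placeholder attribute names are illustrative):

```python
import random

def block_attributes(attributes, n_blocks, seed=0):
    """Randomly partition an attribute list into n_blocks blocks,
    one block per quota group of respondents."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = attributes[:]
    rng.shuffle(shuffled)
    # Deal the shuffled list round-robin into n_blocks blocks.
    return [shuffled[i::n_blocks] for i in range(n_blocks)]

attrs = [f"attribute_{i}" for i in range(60)]
blocks = block_attributes(attrs, 6)
print([len(b) for b in blocks])  # -> [10, 10, 10, 10, 10, 10]
```

In practice you would also want to check that the blocks are balanced on any attribute groupings that matter to the analysis, not just on count.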
Q9: What is the rule of thumb for how many tasks you need to perform each time?
A: A good rule of thumb is that each respondent should see each item at least 3 times. So with 5 items per task, 3 exposures per item per respondent is the target.
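That rule of thumb translates directly into a task count: total exposures needed divided by items shown per task. A minimal sketch (the function name is hypothetical; 5 items per task and 3 exposures are the defaults from the answer above):

```python
import math

def n_tasks(n_items, items_per_task=5, exposures=3):
    """Minimum tasks per respondent so that every item appears
    at least `exposures` times (3 exposures is the rule of thumb)."""
    return math.ceil(n_items * exposures / items_per_task)

print(n_tasks(20))  # 20 items, 5 per task, 3 exposures -> 12 tasks
print(n_tasks(30))  # 30 items -> 18 tasks
```

This is a lower bound; actual experimental designs also balance how often each pair of items appears together, which specialized design software handles.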
Did you enjoy this post? Sign up for our weekly blog digest.