The art of formulating good questions

Löfgren (2007) laments the scientific community's "lack of interest in the wording [...] in view of the consensus that the wording is one of the most critical issues in conducting a study".

Take your time to turn your ideas into Kano survey questions. From good questions come good results.

Clearly define what you want to know. Make it easy for your customers to understand what you want to know from them. Don’t tell the customer how a feature works, just explain what it does for them. Do this very clearly and unequivocally.

What is it that you want to know?

Think hard about what exactly you want to know. It's no use asking your customers questions whose answers won't take you any further.

For every question you create, play around with the possible answers. Does every possible answer make sense?

Also have a look at every possible Kano category. If the feature falls into one of the categories, what does that tell you? Does the category tell you what you wanted to know?

Say you did a survey containing this pair of questions:

When opening a new bank account, how would you feel if you were automatically given access to our mobile banking service too?

and

When opening a new bank account, how would you feel if you were not automatically given access to our mobile banking service?

Suppose that after doing the survey, the feature appears to be a Must-Be feature. Customers feel that automatically getting access to the mobile banking service when opening an account is only natural. They'd be dissatisfied if they weren't given access to it.

Is this what you wanted to know? Or were you in fact trying to find out how you should provide access to the mobile service? If that’s what you wanted to know, the survey results won’t tell you that.

Suppose the feature turns out to be a One-Dimensional feature, meaning the better the automatic provisioning of the mobile banking service, the more satisfied the customers will be.

I’ve seen teams interpret this as a blank check to start developing a range of features for the mobile banking service. But look again at the question and the category: the survey result does not mean that customers want the mobile banking service as such to be hyper-performant. That’s another matter entirely and that was not what you asked.

Instead, customers indicate that the fact of automatically being provided with a mobile banking service is what will contribute to their satisfaction with your service. So delivering access to the service should be as effortless as possible: the less paperwork, the better; the more automatically it happens, the better; the faster, the better; and so on.

Were the feature an Attractive feature, that would mean customers would be positively surprised by automatically getting access to the mobile banking service.

It means you should start working on the process of automatically giving access and emphasise it more in your marketing communication. It does not mean customers feel the mobile banking service as such is attractive. It only means customers feel that getting access to the service is attractive. If you wanted to know how customers felt about the mobile banking service, you didn’t ask the right question.

Similarly, if the attribute falls into the Indifferent category, it would not mean customers do not care about the mobile banking service. It means customers don’t care about being automatically given access to the service when opening a new account. Again, that’s a completely different matter.

Suppose the answer is a Reverse answer. It would mean surveyees do not like the idea of getting the mobile banking service automatically. It does not mean you should close down your mobile banking department. It means people want to choose whether they are going to use the service or not.

You get the gist. Run through the answer categories when formulating your questions to see whether they will tell you what you actually want to know.
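To make this concrete, here is a minimal sketch (in Python, purely illustrative) of how one respondent's functional/dysfunctional answer pair is typically mapped to a Kano category. It uses the answer options from the survey examples later in this chapter and follows the commonly used Kano evaluation table; your own study may use slightly different wording or rules.

```python
# Illustrative sketch only: the commonly used Kano evaluation table, mapping
# one respondent's pair of answers (functional question, dysfunctional
# question) to a Kano category. Answer labels mirror the survey examples in
# this chapter; your study may use slightly different wording or rules.

NEUTRAL = {"I take it for granted", "I don't care", "I can live with it"}

def kano_category(functional: str, dysfunctional: str) -> str:
    """Classify a single answer pair into a Kano category."""
    if functional == dysfunctional and functional in {"I like it", "I dislike it"}:
        return "Questionable"      # contradictory answers
    if functional == "I like it":
        return "One-Dimensional" if dysfunctional == "I dislike it" else "Attractive"
    if functional in NEUTRAL and dysfunctional == "I dislike it":
        return "Must-Be"
    if functional in NEUTRAL and dysfunctional in NEUTRAL:
        return "Indifferent"
    return "Reverse"               # the respondent prefers the feature's absence

# Example: likes automatic access, dislikes not getting it -> One-Dimensional
print(kano_category("I like it", "I dislike it"))
```

Running through this mapping with each of your draft questions in mind is a quick way to check whether every possible category would actually tell you something useful.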

Focus on the outcome or presence, not on the how

There are three types of questions you can ask morning commuters about changes to their milkshake: technical questions (which you shouldn't ask), outcome-based questions and presence-based questions.

Technical questions (don't ask these)

You can ask customers how they would feel if extra fats were added to their milkshakes after the ice cream was aged for a few days. (Because that’s how you make ice cream thicker).

But:

The customer is not interested in how but which of his problems will be solved. If one asks about the technical solutions of a product, it can easily happen that the question is not correctly understood (Matzler, 1998).

You don’t want to ask technical questions. Customers don’t understand the value of aged ice cream and added fats. They do understand the value of a milkshake that takes a bit longer to finish or keeps them from getting hungry before lunchtime, so ask about that instead. People aren't interested in the solution; they're interested in what it helps them achieve.

Of course, when your customers are very technically inclined, questions containing technical information are not prohibited. But always make sure your question, even if it touches on a technical topic, clearly indicates what problem it solves for the customer.

Ask outcome-based questions...

Instead of focusing on the how ("added fats"), you can concentrate on the outcome by asking customers how they would feel if it took longer for them to finish the milkshake.

Outcome questions are great questions to spur innovation. If you know which outcomes are most valuable to customers, you know in what context you should be devising features and solutions.

Outcome-based questions are

measurable, controllable, actionable, devoid of solutions and stable over time (Ulwick, 2016)

So don't ask: "The brakes work really well", but "When applying the brakes fully, the car goes from 100 to 0 mph in 3 seconds".

... or ask presence-based questions

You will have discussions with your team about the usefulness of a Kano study based on outcomes, because it does not really seem to help with deciding how to reach that outcome.

"If we ask customers whether they would like to enjoy their milkshakes longer, how are we going to know whether we need to make the ice-cream thicker or provide smaller straws?", the team might object.

In other words: what features would be most valuable in achieving the outcome? You can ask presence-based questions ("thicker milkshake", "smaller straw", ...) in your Kano survey to find out.

There is a thin line between feature-based questions and technical questions. The important thing to remember is that you want to discover how customers perceive the value of a feature. Some people may find the idea of thicker milkshakes icky. Some may think a thinner straw is frustrating. That's what you want to know. You're not asking whether the technical solution is preferable, you're asking about how people would feel about the feature.

This means that what constitutes a technical question and what constitutes a presence-based question depends entirely on your audience. Race car engineers hold informed opinions about the more technical aspects of a car's braking system; Sunday drivers don't. The former can reliably express their perception of the value of carbon disk-based brakes, while the latter cannot.

When you're asking presence-based questions, make sure the outcome is clear to the customer. Also make sure it is something they care about. If you are able to do more than one survey, first test outcomes, and then do a separate survey with presence- or feature-based questions for the most desired outcomes. If you can't do more than one survey, help the customer understand the context of your questions (for instance by introducing them with "In order to make you enjoy your milkshake longer, we're considering these changes").

In any case, don't mix outcome-based questions with presence-based questions in the same survey. To avoid confusion, your survey questions must maintain unity.

Maintaining unity in the types of questions

Before you write your survey, categorize and cluster your questions. In such a schema, for example, Level 1 contains desired outcomes, while Level 2 contains the features that help achieve them.

If you’re trying to determine whether customer satisfaction will increase when it takes longer to finish the milkshake, that question should figure next to other desired outcomes like “one milkshake will keep me feeling full until noon”. It should not be in the same survey as presence-based questions like "The milkshake contains bits of fruit".

Stick to the same hierarchy

If it turns out that “I can enjoy my milkshake for longer” is an outcome that contributes to customer satisfaction, you can look into it further with a separate survey. (Because you should never mix outcome-based questions with presence-based questions.)

In that follow-up survey, you can go to a lower level in the hierarchy: the one where you try to find out which attributes of a milkshake that takes longer to finish will contribute most to customer satisfaction. A question about extra bits of fruit will figure next to questions regarding thickness, larger containers or thinner straws.

Keep to the same hierarchical levels and cluster features in your survey around the outcomes they help achieve. You'll confuse the survey participant otherwise, and that will lead to less useful answers.
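As a purely illustrative sketch (the structure is an assumption, not a prescribed format), this is one way to organise the milkshake items from this chapter into such a hierarchy before writing the surveys: Level 1 holds the desired outcomes, Level 2 holds the features clustered under the outcome they help achieve, and each cluster becomes its own survey.

```python
# Hypothetical clustering of survey items by hierarchy level. Level 1 holds
# desired outcomes; Level 2 holds the features that help achieve each outcome.
# The milkshake items come from this chapter; never mix levels in one survey.

question_hierarchy = {
    # Level 1 outcome                      -> Level 2 candidate features
    "The milkshake takes longer to finish": [
        "The milkshake is thicker",
        "The milkshake comes with a thinner straw",
        "The milkshake comes in a larger container",
        "The milkshake contains extra bits of fruit",
    ],
    "One milkshake keeps you full until noon": [
        # (features for this outcome would go in their own follow-up survey)
    ],
}

# One survey tests the Level 1 outcomes; each non-empty Level 2 cluster then
# gets a separate follow-up survey of its own.
outcome_survey = list(question_hierarchy)
follow_up_surveys = {o: feats for o, feats in question_hierarchy.items() if feats}
```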

Suppose you put the question about milkshakes taking longer to finish next to the question about thinner straws. And suppose a customer does not like the idea of needing more time to finish a milkshake.

The customer's reasoning could be “If it would take longer to finish the milkshake — which I do not like, but anyway — I would expect that to be because the milkshake is thicker, and not because of a thinner straw”. So she'd answer positively to the presence of a thicker milkshake. Her answer is too rational; it does not come from the heart.

The customer could also answer that she doesn't care about a milkshake being thicker, because she dislikes the idea of needing more time to finish a milkshake. Her answer is not about the presence of thickness only, it's also about needing more time to finish it. Again, this answer is not a true reflection of her perception of value of the feature.

The team won’t know whether her answer was concerning the thickness of the milkshake or about the time it takes to finish the milkshake. Worse still, the team may not even know it doesn't know.

To prevent this confusion and the less useful answers that come with it, cluster your features and create a hierarchy. To get the best results from your Kano study, make sure each survey's questions are on the same hierarchical level.

So if you must use features from different levels, put them in separate surveys. If you want to know whether your ideas for increasing the time it takes to finish a milkshake and your ideas for keeping the customer full until noon are valuable, do two separate surveys.

Introduce your presence-based surveys with a statement like "We're making a milkshake that takes longer to finish". That way, the customer will know the context of the questions that follow. She'll know that the milkshake taking longer to finish is a given. Her feelings about the time it takes to finish a milkshake won't interfere with her answers to your questions about how you plan on making it so.

Don't mix categories

The advice about not mixing hierarchies applies to feature categories too. Don't mix questions that pertain to the underlying milkshake quality of taking longer to finish with questions that have to do with fending off hunger until noon.

The best thing to do is create separate surveys per category. Sometimes that's impractical, however: there may be too few questions for a given category, or you may only have one chance to survey customers.

If you must ask questions from different categories in the same survey, make it very clear that the questions are clustered around a category. Use a heading, an introduction or a separator to make it clear to the customer that she's moving on to another mental construct.

Go from the abstract to the concrete

If you feel you’re still leaning on too many assumptions for your survey, move to a higher level of the hierarchy.

If you’re not sure whether a milkshake that takes longer to finish would increase customer satisfaction, do a survey on that level first. Validate the overarching idea before you delve into its details.

When dealing with abstract topics, try to do your surveys in person. Oral interviews give you the opportunity to clarify and explain broader concepts and more abstract questions. You don't have that luxury with surveys where you cannot talk with the surveyee. All clarity must come from your written introduction, your questions and the possible answers. The more concrete these are, the easier they will be for the participant to understand.

Here's another way of thinking about it. Suppose you're making an app for a local movie theater chain and you're wondering what to work on first. Your surveys should handle the more abstract feature levels first before zooming in on the details in separate surveys.

Do not use "I can..." or "You can..."

You want the customer to form a mental picture of your product or feature so you can elicit a value judgement. The more objectively you describe a feature, the less leading your questions become and the better the quality of your answers will be.

Instead of asking "You can see the exact date of delivery", use something like "The product page shows the exact date of delivery". It is easier for a customer to build a mental picture of the product when your questions are phrased this way. The clearer that mental picture, the straighter from the heart the answers will be.

Ask one question at a time

Consider this set of questions:

If the milkshake has extra bits of fruit and a thinner straw, how do you feel?
( ) I like it
( ) I take it for granted
( ) I don't care
( ) I can live with it
( ) I dislike it

If the milkshake does not have extra bits of fruit and a thinner straw, how do you feel?
( ) I like it
( ) I take it for granted
( ) I don't care
( ) I can live with it
( ) I dislike it

Suppose that this turns out to be a “Must-Be” quality of the milkshake. What does that even mean? Do customers expect extra bits of fruit and a thinner straw? Or do they expect one or the other? Should the team start working on both, or on only one of the two?

This is what is known as a double-barrelled question. You're asking several questions at once, and the answers you'll get will be ambiguous.

Split up your questions or push them to a higher level of abstraction. (And then make sure your other questions are on that same level of abstraction too of course).
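As a small illustrative sketch (hypothetical code, not part of any survey tool), splitting the double-barrelled example above simply means generating one functional/dysfunctional pair per individual feature:

```python
# Split the double-barrelled example into one question pair per feature.
features = ["extra bits of fruit", "a thinner straw"]

for feature in features:
    print(f"If the milkshake has {feature}, how do you feel?")            # functional
    print(f"If the milkshake does not have {feature}, how do you feel?")  # dysfunctional
    print()
```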

Don't use user stories as questions

User stories have value, but you should not use them verbatim in a Kano survey. Suppose you have this user story and you have used it as a question in your survey:

As a user, I want to be able to invite colleagues, so that we can work together on the document.

There are a lot of issues with this:

  • The question is not about a feature, it describes a need. A Kano survey is not meant to validate needs, it's meant to measure the perceived value of a feature;

  • The question is double-barrelled (triple-barrelled even): the customer can have different feelings about the ability to invite colleagues, about collaborating on a document and about using invitations to start collaborating with colleagues;

  • The question mixes presence (ability to invite) and outcome (working together on a document);

  • Go ahead and try to make the dysfunctional version of this question...

User stories are a good source for your Kano survey questions, but don't just copy and paste them in your survey.

Functional and dysfunctional need not always be exact opposites

You're doing a Kano survey because you want to know how customers feel about certain attributes of your product or service. You may want to know how they feel about the level of performance of the feature.

Say you're responsible for customer services at a public transport company. For technological reasons, you're not always able to present precise information about arrival times to travelers. The issue could be fixed, but it will be costly. So you have to decide what aspects to tackle first and how.

One parameter in your decision will of course be the impact on customer satisfaction. To weigh that, you want to know how customers will feel if the arrival times are sometimes wrong.

So instead of asking:

Functional: If the arrival times are correct, how do you feel?

Dysfunctional: If the arrival times are incorrect, how do you feel?

ask this:

Functional: If the arrival times are always correct, how do you feel?

Dysfunctional: If the arrival times are sometimes incorrect, how do you feel?

Remember it's all about what you want to know. You want to know whether customer satisfaction is impacted when the information is not always up-to-date, not when the information is never up-to-date.

Be clear and be specific

Compare these two examples from Mikulić and Prebezac (2011):

When opening a new bank account, how would you feel if you were provided with a mobile banking service?

When opening a new bank account, how would you feel if you were not provided with a mobile banking service?

and

When opening a new bank account, how would you feel if you were provided with a mobile banking service that works very well?

When opening a new bank account, how would you feel if you were provided with a mobile banking service that works very poorly?

For the first set of questions, the majority of answers (59%) were in the Attractive category. In the second version, that share dropped to 22%.

Some (like Grapentine, 2015) consider this a sign of the Kano model’s unreliability. But it is not. This study is just another reminder that you should think hard about what you want to know and how you formulate your questions.

The initial set of questions poses a binary option: either the feature is present or it isn't. You're asking customers how they would feel about the mobile banking service's presence, not about its performance. You want to know whether you should even investigate the idea of a mobile banking service.

From the results (59% Attractive), it’s clear that a mobile banking service would be a competitive differentiator.

There are three major problems with the second set of questions:

  • They gauge customer satisfaction for the delivery and the performance of the mobile banking service. That's a double-barrelled question, and that's a big no-no;

  • The question not only contains two separate questions; they are also different in nature. One is presence-based (the service is there or it isn't) and the other is outcome-based (it works very well or it doesn't). How can you expect a customer to answer such a question unequivocally?

  • The outcome-based aspect of the question (the service works very well or it doesn't) is not precise. Everyone has their own ideas about what defines "very well", so the answers are not very useful.

You can point to a mobile banking service and ask customers whether they agree that what you’re showing them is a mobile banking service. Everyone will agree it is indeed a mobile banking service. That’s why “How would you feel if a mobile banking service were present?” is a good question.

But what is “bad performance” or “good performance”? Some customers will think financial errors are a sign of bad performance, while others may consider slow loading times frustrating enough to call it bad. To get useful results, the researcher should have split up the "good performance" feature into its parts. A separate survey with questions on financial errors and the app's loading times would have delivered far better results.
