

How to create good questions and tasks

Tips on how to make good tests

Asking the right questions

Much depends on the questions you’re asking your test users.

Good questions are crucial for getting honest feedback from your test, and honest feedback gives you a strong foundation for future product decisions. Here's a quick review of the kinds of questions and tasks you should use.

Open questions

So, what is an open question? Simply put, it’s a question that can’t be answered with a “yes” or “no”.

"What do you expect from a language learning app?"

This is a good open question. Good open questions encourage users to convey their opinions and ideas without guidance from you, the person asking the questions, and that makes a big difference.

Open questions lead to open answers, and you may discover features or ideas that weren't on the table before. A good open question gets people talking, but they're all going to say different things, so be prepared for great variety in your feedback.

Remember, the more feedback you get, the better. It may take more time to go through, but the results will give you a much better understanding of your product and your customers.

Avoid bias and get rid of leading questions

You've probably heard before that leading questions should be avoided, yet they still sneak in. Here's a quick refresher.

"How easy did you find the information about our product?"

This is an example of a leading question. The word "easy" makes it positively leading, so you should always balance your question with neutral alternatives:

How easy or difficult ...

How fast or slow ...

How did you feel?

Remember, feedback will affect your product decisions, so it's very important to get unbiased feedback. And it all begins with a neutral question.

Ask, listen and always follow up

Follow-up questions give you insight into the tester's motives and obstacles. They also let you verify that the tester was paying attention and able to complete the task. Your understanding of each task, and how difficult it is, will be greatly improved.

Question: "Would you pay $99 for this service?"

Follow-up: "If yes, why? If not, why not?"

Your testers can get distracted, rush through the test or lose track of their progress. It's your job to keep them focused.

A good way to do this is with closed follow-up questions. That way you can verify that they were paying attention:

Test: "Find out how much Volvo XX costs."

Closed follow-up: "How much did Volvo XX cost?"

Another way is to ask the tester about the task they just performed:

Test: "Find out how much Volvo XX costs."

Follow-up: "How easy or difficult was it to complete this task, and why?"

This gives you valuable insight into the interests of the tester and what they experience during the process.

Remember, all feedback is good feedback.

Make it super easy

When designing your test, try to make it so simple that even a child can understand it.

When the test is not moderated, it's easy for testers to misunderstand a question, and nobody is nearby to explain it to them. If a task or a question is too complicated, you don’t have the flexibility to modify the test or assist the tester. So don’t leave anything to chance.

And one test at a time

This should be the guiding principle for all the tests you conduct. But what does it mean?

Well, it means that you should never test web and mobile applications/sites at the same time, for example, and you should choose one function – let's say a sales transaction – for each test.

Keep in mind, your tester only has about 20 minutes, so you need to make the most of that time. Remote testing also means that you do not have the flexibility to change tasks or provide any assistance during the test.

Do you have a big question or a big task? Break. Them. Down.

"What did you think about our ordering process?"

That is a big question. A typical ordering process contains several steps:

  • Find a product
  • Add it to the cart
  • Go to checkout
  • Create an account
  • Payment details and address

A tester can forget one step in the process, skip parts of other steps or just give a short, general answer like "it was good." But you can break the question into a series of sub-questions and use follow-ups to get the detail you need:

  • Was it easy or difficult to find the item you were looking for?
  • Did you have trouble creating an account, or was it a smooth process?
  • When you submitted your payment information, did you trust the site, or did you feel that something was wrong?
  • Why did you feel this way?
  • What were your thoughts on the ordering process in general?
  • Did the ordering process contain too many steps, or was it just right?
  • If there were too many steps, which steps do you think should be removed or changed?
  • Why?

If you want answers to specific sections of the user's journey, you must ask specific questions. Otherwise, you may not get feedback on your focus areas.

Remember, testers come from all kinds of backgrounds, so never assume they know as much as you about UX. They may not even have any knowledge of technology in general, so avoid jargon and terminology wherever you can:

Micro content = error message

User Interface = page

Below the fold = further down the page

Questions: how many are too many?

In a 20-minute test, you should aim for 15-20 questions to make the most of the tester's time. If you give them fewer, they will finish well before the 20 minutes are up, and you will miss out on valuable feedback.

The complexity of the test is also something to consider. You should always spend more time on a more complicated test, which brings us to the next point ...

Specifying tasks

By asking the tester to perform some kind of action on your platform, like using a specific feature, you can measure how user-friendly your platform really is and where you can improve it. There are two types of tasks you can specify:

  • Open tasks
  • Specific tasks

Open tasks

An open task means that you ask the tester to do something without having a specific goal in mind. For example, you can ask them to spend 10 minutes discovering your product, browsing freely and exploring whatever they want.

Open tasks are great for collecting new data and insights that you didn't have before. And when it comes to remote testing without observation (like your test), open tasks are an effective method that provides realistic results in a natural environment.

For example, you may discover that users use the product in a way you hadn't thought about, or that they couldn't use a key feature because it was unclear to them. A good open task asks the tester to explore your product naturally with a minimum of instructions.

Specific tasks

A specific task gives the end-user a goal, for example, to create an account and find your refer-a-friend discount code. With this task, you want to see if users can easily complete a specific goal.

Specific tasks are great for measuring usability and information architecture (such as navigation) on your site.

For example, if it should only take three clicks to find the refer-a-friend section, but your tester uses six, you have a communication issue in your navigation flow that needs to be addressed.

Remember, before you issue a specific task, you should already have a reference point that you can compare with your tester's results.

Always, always test the test

We know. You have done the work of making the test. You have spent countless hours picking the right questions. You're ready to run the test…

But if you want the correct test duration, a flawless questionnaire and trusted feedback, try your test on one or two people first.

You can either try the test yourself, proofreading and clicking links along the way, or ask a colleague – or ideally someone who doesn't know your product – and observe them while they answer the questions and perform the tasks. You will definitely discover things that will improve your test.

Remember, you pay to get good feedback in the right areas, so it's important to test your questions first.

Warm-up and conclusion

Some testers are not used to test situations and may find them a little uncomfortable at first. Therefore, start all tests with one or two introductory questions that are easy to answer.

This helps them get used to the format, increases their confidence and reminds them that the test has started:

"What do you do for a living?"

"How often do you shop online?"

"Tell me about the last time you purchased something online."

You should also complete the process with some simple, concluding interview questions, such as:

"How did you experience shopping on our site?"

"Was there anything special you liked or disliked?"

You can also use this as an opportunity to gather valuable feedback about the tester's experience of the test format itself.

Follow this guide, and the accuracy of your test results will increase remarkably. For more examples of good test questions, check out our templates.

Some quick tips

Open questions

Open questions should tell you a story, while closed questions provide short answers without many details.

Include a time frame

If you want responses about how often your product is used, include a time frame in your question. For example, if you want to know how often a customer places orders in your store, here is how you should ask.

Good example: "How many orders have you placed in our store during the last three months?"

Bad example: "How many orders have you placed at our store?"

The bad example creates doubt. A customer may have placed just one order but only found your store a week ago. Frequency questions that are not time-limited leave too many unknown variables.

Focus on the goal of your test

Always keep what you want to achieve in mind when writing questions and tasks. If a question or task doesn't contribute to your goal, don't use it.

For example, let's say your test goal is to see if a product page contains enough information for the tester to make a purchase decision on the spot. A bad open question would be:

Bad example: "Do you think there is enough information on the product page, for you to decide whether you want to purchase the product or not?"

This question is too specific; it limits the tester's response and, accordingly, what information you can get. A better question would be:

Good example: "How easy or difficult was it to find information about our products?"

Or: "What did you think about the amount of information available on the product page?"

Example of follow-up questions:

"Did it help you make a purchase decision or not?"

"Was it what you were looking for or not?"

These questions give the tester greater freedom to provide an unbiased answer. Not only will you get an answer to your test goal, but you will also uncover things you might not have thought about.
