This is part two of a guide to User research.
Start at part one: How to conduct user research: A Step-by-step guide
Continue with part three: How to conduct effective product testing
You’ve done your user research, you’ve pinned down what kind of product you’re about to design and you know exactly why your target market needs it… right?
If you can’t give me a solid “Hell yeah!”, then check out my user research guide before continuing with this article. It’s a prequel to product testing and it’s pretty much essential.
If you’ve laid down a solid foundation, that’s awesome! Now it’s time to build prototypes and wireframes, then relentlessly test and polish them until you have a product with silky smooth UX.
That’s how you’ll maximize your product’s chances of success. Let’s take a closer look at how it’s done!
Test early, test often
The general rule of thumb during development is to test things as soon as you can. You need to trial your prototype among your target users, collect feedback, refine your prototype and repeat the process.
Catching mistakes and misassumptions at an early stage means that you can make changes as quickly and cheaply as possible. If you don’t, they will pile up and make revisions a nightmare.
You can begin testing in the week following your user research. Hold a workshop and end it by creating a medium-fidelity prototype with tools such as Sketch or InVision, or anything else that works for you. Test it with users and confirm that you’re on the right path from the very start.
Not only is this useful, it’s also super rewarding. Going from an idea to a working concept and receiving actual feedback always feels great, and it’s a fantastic way to kickstart product development.
But what kind of feedback are you looking for? And which three qualities should you always strive for?
The Big 3 – Effectiveness, Efficiency and Satisfaction
You’re designing products to be effective, efficient and satisfying.
But what exactly does this mean? Let’s break it down.
Effectiveness
Does the product allow users to achieve their goals? Do users understand how the product works?
These are the main two questions you’re looking to answer when testing for effectiveness.
To measure it, keep track of accuracy, user error rates and the quality of completion for the task at hand.
Efficiency
Can users find what they are looking for? How much effort did it take for users to get there?
Nowadays, time is the most precious currency and so efficiency is crucial. People won’t do something if it takes longer than it needs to.
That’s why you should measure efficiency by monitoring the time it takes to complete the task and by asking users if they feel that the task took too long to finish.
Satisfaction
What are the users’ attitudes towards the product? Do the features look and feel right?
This is the most intangible attribute of the three. Everything might function perfectly, but that means nothing if most users dislike your product.
You can measure satisfaction through questionnaires and scale ratings, as well as asking users for their opinions. Let them explain what they dislike about the product and why.
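If you're logging your test sessions, the three measures above can be rolled up with a few lines of code. Here's a minimal sketch in Python; the session fields and numbers are made up for illustration:

```python
# Hypothetical session logs from a usability test; field names are illustrative.
sessions = [
    {"completed": True,  "errors": 0, "seconds": 42,  "satisfaction": 5},
    {"completed": True,  "errors": 2, "seconds": 95,  "satisfaction": 3},
    {"completed": False, "errors": 4, "seconds": 180, "satisfaction": 2},
    {"completed": True,  "errors": 1, "seconds": 60,  "satisfaction": 4},
]

def big_three(sessions):
    n = len(sessions)
    return {
        # Effectiveness: share of users who completed the task at all.
        "completion_rate": sum(s["completed"] for s in sessions) / n,
        # Effectiveness: how error-prone the attempts were.
        "avg_errors": sum(s["errors"] for s in sessions) / n,
        # Efficiency: average time on task, in seconds.
        "avg_time_s": sum(s["seconds"] for s in sessions) / n,
        # Satisfaction: mean rating on a 1-5 scale.
        "avg_satisfaction": sum(s["satisfaction"] for s in sessions) / n,
    }

print(big_three(sessions))
```

Even a tiny summary like this makes it easy to compare one round of testing against the next.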
Conduct holistic product testing
Some teams test and validate their concept early on, and that’s pretty much where their testing ends.
Even when a project is underway, you should still be running regular testing sessions as you create higher fidelity prototypes, longer flows, richer features and so on.
As your development process matures, so too should your product testing. Adapt it so that it serves your current needs. Make it fluid, not static or too rigid.
Here’s how the testing focus could shift throughout your project:
- Concept validation: Validate the product idea with your target users.
- Feature prioritization: Prioritize features, especially when building prototypes.
- Information architecture: Test out different approaches to information architecture.
- User flow: Test how proposed flows affect the users’ experience and behavior.
- Visual design: Test your visual language to ensure that it is consistent, clear and brand-aligned. Do this only if it serves a purpose – because taste is subjective, testing the general appeal of visual design may be misleading.
Make sure that each test is guided towards a specific outcome. If you’re testing the visual design of the product, focus your assumptions and goals on items directly related to this phase.
Of course, you’re still going to encounter some feedback pertaining to other things. Write this down and integrate it into the product roadmap later on.
Avoid the dreaded user churn
User churn is the percentage of users who stopped using your company’s product or service within a certain time frame. Product testing plays a huge role in minimizing it.
The most common cause of a high churn rate is a poor new-user experience. If the early UX is bad, you risk losing a potential user or customer almost immediately. For example, if visitors are asked to provide credit card details just to reach the next step (one that isn’t checkout), most of them will simply leave the site, even if it isn’t shady. Likewise, if the visual design of the checkout is inconsistent with the rest of the product, customers can get confused and abandon their carts.
Such cases create a fast-churn effect, whereby you might be acquiring new customers at an impressive rate but losing just as many every day.
For this reason, churn is very much a design problem; you need to get the experience right via testing or you may lose your users after just a short period of time.
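The metric itself is simple: churn for a period is the share of users you lost out of those you started with. A quick sketch with made-up numbers:

```python
def churn_rate(users_at_start, users_lost):
    """Share of users who stopped using the product during the period."""
    return users_lost / users_at_start

# Illustrative example: 1,000 active users at the start of the month,
# 50 of them gone by the end of it.
monthly_churn = churn_rate(1000, 50)
print(f"{monthly_churn:.1%}")  # prints 5.0%
```

Track it per cohort and per time frame so you can see whether design fixes are actually moving the number.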
Where should you test your product?
You can conduct a Home Usage Test (HUT) or a Central Location Test (CLT).
A HUT means that your product is shipped to target users, so the research can be conducted in their own homes.
The main advantage of HUT is that the product is being used in its natural environment – a real-life setting. It enables the user to have a longer testing period (up to a few weeks) and as a result, it produces a more accurate product evaluation. Products such as cooking ingredients, face and body cosmetics, domestic appliances, electronics and products for children can hardly be evaluated after 30 minutes of use in a controlled environment.
This method of testing can be more expensive, as it can only be carried out with high-fidelity prototypes or physical products that are close to their final form.
HUTs are usually conducted via mobile and online market research platforms or through specialist companies.
A CLT means that the test is conducted within a chosen environment, such as a lab, a mall or your office building.
CLTs offer the opportunity for a fast turnaround of results – necessary tests can be completed in the space of a day, with results following in the same week. They are also easier to moderate and allow more participants to be involved.
You can either conduct CLTs yourself or hire a market research company.
Choosing between a HUT and a CLT really comes down to the current stage of your product development, how quickly you need the results, the size of your budget and which product testing method you choose.
Speaking of methods, let’s take a look at some of the ones at your disposal.
Product testing methods
Usability testing
In usability tests, users are asked to complete specific tasks using your prototype. The user is usually asked to think out loud during the process, or to retrospectively explain their thought process while performing the task. The main goal is to detect flaws and then iterate based on direct feedback.
Usability tests can be moderated or unmoderated, in person or remote.
MODERATED USABILITY TESTING
Pros:
- Detect facial expressions
- Capture body language
Cons:
- Inaccessible for small teams
- Body language is not perceived (in remote sessions)
- Spontaneity is lost

UNMODERATED USABILITY TESTING
Cons:
- No support in real time
- Can’t use low-fidelity wireframes
- Less realistic behaviour
Usability testing of low-fidelity prototypes allows you to obtain feedback on the product at an early stage in the development process. It’s a very flexible method that can be used to test a variety of features, flows and concepts. You can (and should) return to usability testing during your development process, as it uncovers new insights every time you make a significant change.
If you’re using low-fidelity prototypes, you’ll probably need someone to guide the users through the test, which could impact their decision making. If conducted in a controlled environment, usability testing might not be 100% representative of the real-life scenario in which a user would engage with your product.
Tips:
- If you need a moderator, make sure that they won’t inadvertently guide the user.
- Prepare three to five clear tasks that take no longer than 15 to 30 minutes to complete.
- Think about outsourcing usability testing during the final stage of development, especially if you’re creating a digital service or product. There are a lot of affordable online services out there and their feedback may uncover issues that you haven’t yet encountered.
- Strive to create clickable prototypes. They make testing much easier as users don’t have to imagine too many things at once.
When to use it: As soon as possible, and whenever you need to validate a concept, test a feature or make a significant change.
Heuristic analysis
Heuristic analysis is a usability inspection method in which one or more usability experts compare a digital product’s design to a list of recognized design or usability principles (called heuristics).
Experts go over the features and flow, then identify where the product is not following these principles to highlight potential issues. It’s basically a critical review of your product.
Pros: Quicker and easier to set up than extensive usability testing. Focuses on the most relevant areas and reveals the most important problems.
Cons: Heuristic experts can be expensive, and the analysis is only as good as the people performing it. Sometimes, the problems they identify are not considered critical (or even noticed!) by actual users.
Tips:
- Be careful who you hire – there are many guidelines on how to perform a heuristic evaluation, and some people might not be real experts at all.
- Try to find analysts who are familiar with your niche.
When to use it: When you have most of the features and flows fleshed out. You don’t need a high-fidelity prototype, because the experts won’t actually be using the product. I’d advise conducting the analysis before the final round of usability testing.
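To make the output of a heuristic analysis actionable, many teams log each finding against a named heuristic with a severity score. A small illustrative sketch – the findings are invented, while the heuristic names and the 0–4 severity scale follow Nielsen’s commonly used conventions:

```python
# Logging heuristic-evaluation findings. The heuristic names follow
# Nielsen's widely used list; the findings themselves are invented.
findings = [
    {"heuristic": "Visibility of system status",
     "issue": "No progress indicator during checkout", "severity": 3},
    {"heuristic": "Error prevention",
     "issue": "Deleting an account has no confirmation step", "severity": 4},
    {"heuristic": "Consistency and standards",
     "issue": "Two different icons both mean 'save'", "severity": 2},
]

# Severity uses Nielsen's 0-4 scale (4 = usability catastrophe), so the
# team can fix the worst problems first.
for f in sorted(findings, key=lambda f: f["severity"], reverse=True):
    print(f"[{f['severity']}] {f['heuristic']}: {f['issue']}")
```

Sorting by severity gives you a ready-made priority list to feed into the product roadmap.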
Card sorting
You can help to ensure that your product or service is easy to use by organizing information so that people can find what they’re looking for. That’s where card sorting comes into play. It’s a method where users are asked to group topics and labels in a way that makes sense to them.
Card sorting helps you to create an information architecture that matches users’ expectations, allowing you to label your categories accordingly. It’s a quick, cheap and easy method that serves as a guide for the structure of your product or service.
The method can sometimes produce inconsistent results from user to user. If this does happen, the analysis might become more time-consuming.
Tips:
- Decide if you want your users to sort the cards first and then name the groups (open card sorting), or if you want to create labels beforehand and ask users to organize the cards into these categories (closed card sorting).
- Avoid topics and cards that contain the same word. People tend to group these together automatically.
- Tell users that the size and number of piles don’t matter (at least in the beginning). Make sure that they know it’s okay to change their mind as they work.
- If you want users to name the groups themselves, make sure that they do it after they finish sorting. If they do it as they work, they can lock themselves into certain categories.
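Analyzing an open card sort often starts with a co-occurrence count: how many participants put each pair of cards in the same pile. Pairs grouped together by most participants are strong candidates for a single category. A minimal sketch with invented cards and piles:

```python
from collections import defaultdict
from itertools import combinations

# Results of a hypothetical open card sort: each participant's piles.
participants = [
    [{"Shipping", "Returns"}, {"Pricing", "Discounts"}],
    [{"Shipping", "Returns", "Discounts"}, {"Pricing"}],
    [{"Shipping", "Returns"}, {"Pricing", "Discounts"}],
]

# Count how often each pair of cards lands in the same pile.
together = defaultdict(int)
for piles in participants:
    for pile in piles:
        for a, b in combinations(sorted(pile), 2):
            together[(a, b)] += 1

# Most frequently co-grouped pairs first.
for pair, count in sorted(together.items(), key=lambda kv: -kv[1]):
    print(pair, count)
```

Here "Shipping" and "Returns" were grouped together by all three participants, so they almost certainly belong in the same category.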
When to use it: At the start of the development process.
A/B testing
A/B testing is a controlled experiment in which you compare two or more versions of a page or flow to optimize a certain result or metric. It must include a hypothesis, which can be a simple statement such as: “If we change the subscription button from small and green to big and red, we will get more subscribers.”
In order to confirm or reject the hypothesis, you need to measure the user interaction with both versions of your app or site. After a certain number of participants have interacted with each version, you’ll get a result.
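Deciding whether the difference between the two versions is real or just noise is a statistics question. One common approach (not prescribed here, just a standard choice) is a two-proportion z-test; a self-contained sketch with made-up conversion numbers:

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: do versions A and B convert at different rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical experiment: 120 of 2,000 visitors subscribed with the small
# green button, 165 of 2,000 with the big red one.
p = two_proportion_p_value(120, 2000, 165, 2000)
print(f"p = {p:.4f}")  # below 0.05 here, so the difference is unlikely to be chance
```

A small p-value supports the hypothesis; a large one means you can’t tell the versions apart with this much traffic.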
A/B testing is a good method for polishing your product, but you need to have a solid story, copy and UX design to make it really effective.
Pros: You can improve design decisions with minimal risk. The method can be used to continually enhance important aspects of your site or app.
Cons: It’s only useful for certain kinds of products, and it requires time and a certain amount of traffic to produce statistically meaningful results. Without them, the test won’t tell you much.
Tips:
- Test only the most significant variables during the development stage.
- If user feedback is split on a key aspect, it may be a good idea to perform an A/B test.
When to use it: During the final stages of development, or when the product is already live.
How many participants should you include per testing session?
You’ve probably heard of the “five users” rule before. It’s so well known because it applies to usability testing, by far the most widely used product testing method.
Having five users is enough to shed light on the majority of the issues you need to know about (around 85%).
With that said, you need five users per target group. You’ll know your target groups once you’ve finished your user research, and sometimes there will be more than one. If so, test with participants from each group – otherwise you might end up with a product that’s perfect for 60% of your users and confusing for the other 40%.
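The “around 85%” figure comes from a well-known model by Nielsen and Landauer: with an average probability p ≈ 0.31 that a single user surfaces a given problem, n users find about 1 − (1 − p)^n of all problems. A quick sketch:

```python
# Nielsen and Landauer's classic model for usability testing: the share of
# problems found by n testers is 1 - (1 - p)^n, where p is the average
# probability (about 0.31 in their data) that one user hits a given problem.
P_SINGLE_USER = 0.31

def problems_found(n_users, p=P_SINGLE_USER):
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n} users -> {problems_found(n):.0%} of problems")
```

Diminishing returns kick in quickly, which is why several small rounds of testing beat one big one.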
Voila, a user-centered product!
User research + Product testing = Maximized chances of success
That’s the equation! Leave nothing to chance – go after that feedback and create a marvelous product that will delight your users. Trust me, it’ll all be worth it when everyone’s raving about your product. 🙂
About the author
Oh hey, I’m Romina Kavcic, Design strategist
I am a Design Strategist who holds a Master of Business Administration. I have 14+ years of career experience in design work and consulting across both tech startups and several marquee tech unicorns such as Stellar.org, Outfit7, Databox, Xamarin, Chipolo, Singularity.NET, etc. I currently advise, coach and consult with companies on design strategy & management, visual design and user experience. My work has been published on Forbes, Hackernoon, Blockgeeks, Newsbtc, Bizjournals, and featured on Apple iTunes Store.
More about me * Let’s connect on LinkedIn * Let’s connect on Twitter