
Usability Testing

Efficiently optimize your UX with confidence

Usable digital products drive success—boosting sales, enhancing brand value, reducing churn, and cutting support costs. We collaborate with your UX team to deliver expert usability testing, providing clarity and direction across strategy, design, and development. Our expertise spans websites, intranets, mobile apps, software, legacy systems, and (medical) hardware, ensuring future-ready solutions.

EXPERIENCES VALIDATED
  • Evaluate if your product experience aligns with user expectations and goals
  • Determine whether users can efficiently find information and complete tasks with ease
  • Identify high-impact areas for optimization and opportunities for innovation
— GET IN TOUCH WITH US

HOW WE DO IT

  1. Working with your stakeholders and UX team, we build an understanding of your users—their goals, challenges, and how they interact with your product or service to achieve success.

  2. We design practical testing protocols, define meaningful experience metrics, recruit the right participants, and conduct iterative testing sessions that integrate seamlessly into your design and development workflow.

  3. Using results-driven analysis, we identify root causes with precision, guided by user tasks and proven industry heuristics.

  4. We transform insights into actionable strategies, delivering test results through engaging workshops where we co-create impactful short- and long-term improvements.

  5. Our usability testing recommendations are crafted to inspire action—clear, practical, and tailored for seamless communication across your product teams.

Want to learn more?
Contact us today

WHAT YOU GET

Gain a comprehensive and insightful usability testing report that validates your digital product's real-world performance and uncovers opportunities for growth. Each report includes:

  • A detailed analysis of both quantitative and qualitative usability test data, providing a holistic view of user interactions
  • In-depth identification of the root causes behind usability challenges to drive meaningful solutions
  • Strategically prioritized, actionable usability testing recommendations to enhance performance and user satisfaction
  • A forward-looking exploration of opportunities for innovation, ensuring your product remains ahead of the curve
Clients we've helped with Intentional Experiences
Our foundation
Experience Thinking perspective

Experience Thinking underpins every project we undertake. It recognizes users and stakeholders as critical contributors to the design cycle. The result is powerful insights and intuitive design solutions that meet real users' and customers' needs.

Have usability testing questions?

Check out our Q&As. If you don't find the answer you're looking for, send us a message at contact@akendi.com.

What exactly is usability testing and why do we need it?

Usability testing is a research method where real users perform specific tasks with your product to identify what works and what doesn't. It measures effectiveness, efficiency, and satisfaction. Following our Experience Thinking framework, usability testing connects your product experience to user needs while ensuring all four quadrants (brand, content, product, and service) work together seamlessly.

Tip: Test early with paper prototypes before any code is written to catch major issues when they're still inexpensive to fix.

How is usability testing different from other research methods?

Unlike surveys or focus groups that gather opinions, usability testing observes actual behavior. It's not about what users say they would do, but what they actually do when trying to complete tasks. This behavioral evidence removes opinion and conjecture from design decisions, providing objective data about user performance.

Tip: Combine usability testing with other methods - use surveys for attitudes and usability testing for behavior.

What can usability testing tell us about our product?

Usability testing reveals task completion rates, error patterns, navigation paths, time on task, and points of user confusion. It shows whether your product design supports user goals effectively. However, it won't validate product value or market fit - that requires different research approaches with larger sample sizes.

Tip: Be clear about what you're testing - usability testing answers 'how easy is it to use', not 'do people want this product'.

When should we conduct usability testing in our development process?

Test early and often throughout development. Start with paper prototypes during concept stage, test again with early interactive prototypes, and conduct final validation before launch. In Experience Thinking terms, this ensures your product experience aligns with brand promise and content strategy from the earliest stages.

Tip: Schedule testing at three key points: concept stage, interactive prototype stage, and pre-launch validation.

What's the difference between formative and summative usability testing?

Formative testing happens during design and development to inform decisions and improve the product. Summative testing occurs later as quality assurance to determine if the product meets usability standards before release. Formative testing allows for iteration; summative testing provides pass/fail metrics.

Tip: Use formative testing to guide design decisions and summative testing to verify you've met usability benchmarks.

How do we know if our usability testing approach is working?

Effective usability testing produces actionable insights that directly inform design decisions. You should see measurable improvements in user performance metrics between testing rounds. The test results should align with business goals and user needs identified in your Experience Thinking strategy.

Tip: Track improvement metrics across testing rounds - if the same issues keep appearing, adjust your research approach.

What makes usability testing different from user acceptance testing?

Usability testing focuses on how easy and efficient it is for users to complete tasks, while user acceptance testing verifies that the system meets business requirements. Usability testing is observational and behavioral; user acceptance testing is functional and technical. Both are necessary but serve different purposes.

Tip: Schedule usability testing before user acceptance testing to ensure the product is both functional and usable.

How do we prepare for our first usability testing session?

Start by defining clear objectives and tasks that reflect real user goals. Recruit participants who match your target audience. Create a testing protocol that balances structure with natural user behavior. Consider the Experience Thinking framework - ensure your test covers how users experience your brand, content, product, and service elements.

Tip: Write your test objectives as questions you need answered, not features you want validated.

What level of prototype fidelity do we need for testing?

The fidelity depends on what you're testing. For navigation and information architecture, low-fidelity wireframes work well. For detailed interactions and content comprehension, you need higher fidelity with realistic content. Avoid Lorem Ipsum when testing content understanding - it distracts users from the actual experience.

Tip: Match your prototype fidelity to your research questions - test navigation with wireframes, test content with realistic copy.

How do we recruit the right participants for testing?

Recruit based on user characteristics relevant to your product, not just demographics. Consider domain expertise, technical comfort level, frequency of similar product use, and context of use. If you've developed personas through Experience Thinking research, recruit participants who match those persona profiles.

Tip: Screen participants based on behavior and experience, not just age and job title.

What questions should we ask during usability testing?

Focus on understanding user thought processes rather than collecting opinions. Ask about actions: 'What are you looking for?' or 'What would you expect to happen next?' Avoid leading questions or asking for design recommendations. Let user behavior guide your observations more than their verbal feedback.

Tip: Ask 'What are you thinking?' instead of 'Do you like this?' to understand user mental models.

How do we create realistic tasks for testing?

Base tasks on actual user goals and scenarios from your user research. Tasks should have clear endpoints and reflect how users would naturally interact with your product. In Experience Thinking terms, tasks should span the user journey and touch multiple experience quadrants when relevant.

Tip: Write tasks as scenarios with context, not just feature lists - 'Find information about...' not 'Use the search function'.

What environment works best for usability testing?

The environment should match natural usage context as closely as possible. Lab testing allows for controlled observation; remote testing captures natural behavior in real environments. Consider where and how users actually interact with your product when choosing your testing setup.

Tip: Test in the environment that most closely matches where users will actually use your product.

How do we balance structure with natural user behavior?

Create a structured protocol but allow flexibility for natural user exploration. Have set tasks but let users approach them their own way. The goal is observing authentic user behavior within a controlled research framework. Too much structure eliminates natural discovery patterns.

Tip: Give users clear tasks but avoid telling them exactly how to complete them.

How many participants do we need for usability testing?

For qualitative usability testing, 5-8 participants per user group typically reveal 80% of usability issues. This assumes you're recruiting correctly and testing represents your target users. More participants are needed for quantitative metrics like task completion times, but qualitative insights emerge quickly with fewer participants.

Tip: Run multiple small rounds rather than one large study - test with 5 users, fix issues, then test again.
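The "5 participants reveal about 80% of issues" figure comes from the problem-discovery model reported by Nielsen and Landauer. As a rough sketch (the per-participant detection probability p ≈ 0.31 is an average from published studies and varies by product and task, so treat the output as illustrative, not a guarantee):

```python
# Problem-discovery model (Nielsen & Landauer): probability that a given
# usability issue is observed at least once across n independent sessions,
# assuming each participant encounters it with probability p.
def discovery_rate(n: int, p: float = 0.31) -> float:
    """Expected share of issues found after n sessions.
    p = 0.31 is the commonly cited average detection probability."""
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 8):
    print(f"{n} participants -> {discovery_rate(n):.0%} of issues found")
```

Note how quickly the curve flattens: going from 5 to 8 participants adds far less than the first 5 did, which is why iterative small rounds beat one large study.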

Why do some sources recommend different participant numbers?

Different research goals require different sample sizes. Qualitative usability testing (finding problems) needs fewer participants than quantitative testing (measuring performance). Statistical significance for performance metrics requires 20-50+ participants, while qualitative insights emerge with 5-8 participants per user segment.

Tip: Choose participant numbers based on your research goals - qualitative insights or quantitative metrics.

Should we test with more participants if we have multiple user types?

Yes, test with 5-8 participants from each distinct user segment. Different user types may encounter different usability issues based on their experience level, domain knowledge, or usage context. Your Experience Thinking research should identify these key user segments for targeted testing.

Tip: Identify 2-3 key user segments and test with 5-8 participants from each segment.

How do we maximize insights with limited testing budget?

Run iterative testing rounds with smaller participant groups. Test with 5 participants, implement fixes, then test again with 5 more. This approach reveals more issues over time than testing with 10 participants once. Each iteration builds on previous insights and validates improvements.

Tip: Split your participant budget across multiple testing rounds for better long-term insights.

What if we discover new issues with additional participants?

Finding new issues with more participants is normal and expected. The goal isn't to find every possible issue but to identify the most critical problems affecting user success. Focus on patterns that appear across multiple participants rather than isolated individual difficulties.

Tip: Look for issues that affect multiple participants rather than trying to solve every individual struggle.

How do we know when we've tested with enough participants?

You've tested enough when additional participants aren't revealing new critical issues and the patterns are clear. If the same problems keep emerging and you have actionable insights for improvement, you can stop testing and start implementing fixes. Quality of insights matters more than quantity of participants.

Tip: Stop testing when you have clear, actionable insights rather than trying to reach a specific participant number.

What's the best approach for moderating usability testing sessions?

Balance guidance with natural user behavior. For early-stage products, use more conversational moderation to understand user thinking. For mature products, minimize moderator intervention to capture authentic performance. The think-aloud protocol helps reveal user mental models during task completion.

Tip: Adjust your moderation style based on product maturity - more guidance for prototypes, less for finished products.

How do we handle participants who get stuck during tasks?

Distinguish between productive struggle (learning the interface) and unproductive confusion (fundamental usability problems). Allow some struggle time to observe natural recovery patterns, but intervene if participants become frustrated or the session stalls completely. Document both the struggle and your intervention.

Tip: Give participants 2-3 minutes to work through challenges before offering guidance.

Should we tell participants this is a test of the product or their performance?

Always emphasize that you're testing the product, not the participant's abilities. This reduces performance anxiety and encourages honest feedback about difficulties. Frame it as helping improve the product rather than evaluating user competence. This creates a collaborative rather than evaluative atmosphere.

Tip: Start each session by explaining that any difficulties are problems with the design, not user mistakes.

How do we capture both quantitative and qualitative data?

Use screen recording for task completion patterns, time on task metrics, and error documentation. Combine this with observation notes about user behavior, verbal feedback, and emotional responses. The Experience Thinking approach recognizes that user experience includes both measurable performance and emotional satisfaction.

Tip: Record screens and audio, but also take detailed notes about user behavior and emotional responses.

What should we do when participants provide conflicting feedback?

Focus on behavioral patterns rather than verbal opinions. Different participants may have conflicting preferences but similar behavioral challenges. Look for consistent usability problems across participants, even if their stated preferences differ. Behavior is more reliable than opinion for design decisions.

Tip: Trust what participants do more than what they say they would do.

How do we test complex workflows that span multiple sessions?

Break complex workflows into meaningful task segments that can be tested individually. Consider the complete user journey from the Experience Thinking perspective, ensuring each segment connects logically to the next. Test critical decision points and transitions between workflow stages.

Tip: Map the complete workflow first, then identify the most critical segments for focused testing.

What's the role of pre-test and post-test interviews?

Pre-test interviews establish participant context, experience level, and expectations. Post-test interviews capture overall impressions and clarify observed behaviors. Keep both focused and brief - the actual task performance is your primary data source, not participant opinions about preferences.

Tip: Use pre-test interviews to understand participant background and post-test interviews to clarify specific behaviors you observed.

How do we analyze usability testing results effectively?

Look for patterns across participants rather than isolated incidents. Categorize issues by severity and frequency. Connect usability findings to business impact and user goals from your Experience Thinking strategy. Focus on actionable insights that can directly inform design improvements.

Tip: Group similar issues together and prioritize based on how frequently they occurred and their impact on user success.

What makes a usability issue worth fixing?

Prioritize issues that prevent task completion, cause significant delays, or create user frustration. Consider both frequency (how many users encountered it) and severity (impact on user success). Issues that contradict your brand experience or core value proposition should be high priority regardless of frequency.

Tip: Fix issues that block task completion first, then address those that slow users down or cause confusion.

How do we present usability findings to stakeholders?

Connect findings to business goals and user experience strategy. Use video clips to illustrate key issues, but focus on patterns rather than individual struggles. Frame recommendations within the Experience Thinking framework, showing how usability improvements support brand experience and business objectives.

Tip: Lead with business impact, support with user evidence, and provide specific recommendations for improvement.

What if usability testing reveals fundamental design problems?

Major usability issues often indicate misalignment between user mental models and design approach. This connects to Experience Thinking's emphasis on understanding user experience holistically. Fundamental problems may require revisiting information architecture, interaction patterns, or content strategy across multiple experience quadrants.

Tip: Don't just fix surface symptoms - investigate whether fundamental usability problems indicate deeper design approach issues.

How do we track usability improvements over time?

Establish baseline metrics for key tasks and measure improvement in subsequent testing rounds. Track both performance metrics (completion rates, time on task) and satisfaction scores. Consistent improvement across testing cycles indicates effective application of usability insights to design decisions.

Tip: Use the same core tasks across testing rounds to measure improvement while adding new tasks as the product evolves.
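Tracking the same core tasks across rounds can be as lightweight as a baseline-versus-follow-up comparison. A minimal sketch, with hypothetical task names and completion rates:

```python
# Task completion rates for the same core tasks, measured in two rounds.
baseline = {"find_pricing": 0.55, "complete_signup": 0.70}  # round 1
followup = {"find_pricing": 0.85, "complete_signup": 0.75}  # round 2, after fixes

for task, before in baseline.items():
    after = followup[task]
    change_pts = (after - before) * 100
    print(f"{task}: {before:.0%} -> {after:.0%} ({change_pts:+.0f} pts)")
```

If a task shows no movement (or regresses) between rounds, that is a signal to revisit either the fix or the research approach, as the tip above suggests.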

How do we connect usability findings to overall user experience strategy?

Map usability issues to specific quadrants of the Experience Thinking framework. Navigation problems may indicate content experience issues; interaction difficulties may reflect product experience gaps. Consider how usability improvements support your brand promise and service delivery goals across the complete user journey.

Tip: Review usability findings through the lens of brand, content, product, and service experience to identify broader strategic implications.

What do we do when usability testing contradicts stakeholder assumptions?

Use behavioral evidence to challenge assumptions while maintaining collaborative relationships. Present findings objectively, focus on user goals and business impact, and involve stakeholders in interpreting results. The goal is building shared understanding based on user evidence rather than internal opinions.

Tip: Invite stakeholders to observe testing sessions so they can see user behavior firsthand.

How do we test products used in specialized or regulated environments?

Specialized environments require understanding both standard usability principles and domain-specific constraints. For medical devices, financial systems, or industrial applications, safety and compliance are paramount. Test with domain experts who understand both the technical requirements and user workflow realities.

Tip: Recruit participants with actual domain expertise, not just demographic matches.

What's different about testing mobile apps versus websites?

Mobile testing requires attention to touch interactions, screen size constraints, and usage context. Consider how mobile usage patterns differ from desktop behavior. Test in realistic mobile contexts including various lighting conditions, distractions, and one-handed use scenarios that reflect natural mobile behavior.

Tip: Test mobile products on actual devices in realistic usage contexts, not just desktop simulators.

How do we test accessibility alongside usability?

Include participants with disabilities in your testing to understand real accessibility challenges. Standard usability testing may miss barriers that affect users with visual, motor, or cognitive differences. Consider how assistive technologies interact with your product and test with actual assistive technology users.

Tip: Recruit participants who actually use assistive technologies rather than simulating disability conditions.

What about testing products that require specialized knowledge?

Professional tools and specialized software require participants with relevant domain expertise. Generic users cannot provide meaningful feedback about professional workflows. Focus on task efficiency and error prevention since expert users prioritize productivity over discoverability in specialized tools.

Tip: Test professional tools with actual professionals who use similar products in their daily work.

How do we test products used in high-stress situations?

Emergency response systems, trading platforms, and crisis management tools require testing under realistic stress conditions. Consider how time pressure, multitasking, and emotional stress affect usability. Design testing scenarios that reflect actual usage pressure without creating unsafe conditions.

Tip: Simulate realistic stress conditions during testing while maintaining participant safety and comfort.

What special considerations apply to testing with children?

Testing with children requires different protocols, shorter sessions, and age-appropriate tasks. Consider developmental differences in attention span, reading level, and motor skills. Focus on natural play patterns and learning behaviors rather than adult task completion efficiency.

Tip: Adapt your testing approach to child attention spans and communication styles rather than using adult protocols.

How do we ensure usability testing leads to actual design improvements?

Connect testing insights directly to design decisions and development priorities. Establish clear processes for implementing recommendations and track which changes are made. The Experience Thinking approach emphasizes connected experiences, so ensure usability improvements support broader experience goals across all touchpoints.

Tip: Create a tracking system that connects specific usability findings to implemented design changes.

What's the best way to integrate usability testing into agile development?

Plan testing cycles that align with sprint schedules. Test early wireframes or prototypes before full development begins. Use rapid testing methods and focus on critical user paths that can be validated quickly. Integrate findings into backlog prioritization and sprint planning processes.

Tip: Schedule testing early in each sprint cycle so findings can influence current development work.

How do we measure the business impact of usability improvements?

Track metrics that connect to business goals: conversion rates, support tickets, task completion rates, and user satisfaction scores. Connect usability improvements to customer retention, revenue impact, and operational efficiency. Document both user experience improvements and business value generated.

Tip: Establish baseline business metrics before implementing usability improvements so you can measure impact.

What if we don't have budget for regular usability testing?

Start with low-cost methods like guerrilla testing, remote unmoderated testing, or internal hallway testing. Even informal testing provides valuable insights. Focus on testing critical user paths and high-impact design decisions. Build the business case for formal testing by demonstrating value from initial low-cost efforts.

Tip: Begin with informal testing methods to demonstrate value, then invest in formal testing as budget allows.

How do we build organizational commitment to usability testing?

Demonstrate clear connections between usability improvements and business outcomes. Share user feedback and testing videos with stakeholders to build empathy for user struggles. Celebrate successes when usability improvements lead to measurable business results. Make user-centered design part of organizational culture.

Tip: Share compelling user stories and business results to build stakeholder investment in ongoing usability testing.

What's the relationship between usability testing and ongoing user experience strategy?

Usability testing provides crucial feedback for refining your Experience Thinking strategy across all four quadrants. Testing reveals whether your product experience aligns with brand promise, whether content supports user goals, and whether service touchpoints create seamless experiences. Use testing insights to evolve your holistic experience approach.

Tip: Review usability findings as part of regular user experience strategy reviews to ensure continuous improvement.

How do we scale usability insights across multiple products or teams?

Document patterns and principles that apply across products, not just specific fixes. Create shared guidelines based on testing insights that inform design decisions across teams. Establish communities of practice where teams share usability findings and solutions. Build organizational knowledge about user behavior patterns.

Tip: Create reusable design patterns and guidelines based on usability testing insights that can guide multiple product teams.

How can we help you?