
AI UX Research

Get to know your users deeply and how they use your AI-driven digital application.

Creating human-centered AI is simpler when you understand who the users are and how they behave with AI-driven experiences. We conduct AI UX research that uncovers specific insights into your AI ecosystem, be they customers, users, or partners.

INSIGHTS AND ANSWERS
  • Who are your AI users, what are their journeys and goals?
  • When is your audience using your AI products, and what are their experiences?
  • What insights do your stakeholders expect to receive from your analytics?
— GET IN TOUCH WITH US

HOW WE DO IT

  1. Start with AI UX discovery workshops to capture the current knowledge your stakeholders and team have about your AI users.

  2. Conduct AI UX research to gather in-depth qualitative and quantitative data. Approaches include interviews, survey research, ethnographic research, focus groups, data analytics, and other research techniques.

  3. Analyze and synthesize the research data into the key insights that will drive AI agent and product strategies.

  4. Communicate those AI UX research findings to the team in a visual format that brings the insights to life and has a lasting impact on your AI agent and product definition.

Want to learn more?
Contact us today

WHAT YOU GET

You'll gain deep insights into your AI users and their specific experiences. You'll get:

  • Data visualizations that capture the AI UX research findings in an engaging form that can be shared throughout the organization. This communication keeps users top-of-mind during planning, design, and development.
  • Clarity on who your AI users are, how they think, and what they need to complete journeys and achieve their goals.
  • Validation of assumptions about your AI users; clarity on the user stories; and new insights into opportunities to surprise and delight them.
  • An accelerated digital transformation for your organization, with reduced risk from employee churn and loss of focus.
SELECTED PROJECTS
Latest Posts
Explore Our Blog
Clients we've helped with Intentional Experiences
Our foundation
Experience Thinking perspective

Experience Thinking underpins every project we undertake. It recognizes users and stakeholders as critical contributors to the design cycle. The result is powerful insights and intuitive design solutions that meet real users' and customers' needs.

Have AI UX research questions?

Check out our Q&As. If you don't find the answer you're looking for, send us a message at contact@akendi.com.

What exactly is AI UX research and why do we need it?

AI UX research helps you deeply know your users and how they use your AI-driven digital applications. Creating human-centered AI is simpler when you understand who the users are and how they behave with AI-driven experiences. We conduct AI UX research that uncovers specific insights into your AI ecosystem, be they customers, users, or partners. This research addresses unique challenges of human-AI interaction including trust, transparency, and emotional intelligence.

Tip: Start with understanding your users' current mental models and expectations before introducing AI features to avoid creating experiences that feel foreign or untrusted.

How does AI UX research differ from traditional UX research?

AI UX research must account for the unique aspects of human-AI interaction, including user perceptions of AI emotional intelligence, trust in AI recommendations, and expectations around AI transparency. Traditional usability testing methods are enhanced with AI-specific evaluations like conversational flow testing, bias detection, and trust measurement. The research focuses on how users interpret AI behavior and what they expect from an AI personality.

Tip: Include both functional testing (does the AI work correctly) and emotional testing (how do users feel about the AI responses) in your research plan.

How does Akendi's Experience Thinking framework apply to AI UX research?

Experience Thinking ensures AI UX research examines how users experience AI across four connected areas: brand (how AI reflects your brand personality), content (how AI delivers information), product (how AI enhances functionality), and service (how AI supports customer service). This holistic approach ensures AI doesn't create disconnected experiences but strengthens the overall user journey across all touchpoints.

Tip: Map your AI touchpoints across all four experience areas to identify research opportunities and ensure your AI creates cohesive rather than fragmented user experiences.

What unique challenges does AI UX research address?

AI UX research addresses challenges like user trust in AI systems, transparency expectations, bias detection, conversational flow effectiveness, and AI personality assessment. Users evaluate AI differently than traditional interfaces - they often anthropomorphize AI and expect more human-like understanding. Research must capture both functional performance and emotional responses to AI interactions.

Tip: Design research methods that capture both conscious user feedback and unconscious behavioral responses to AI interactions, as users may not always articulate their true feelings about AI systems.

What types of AI applications benefit most from specialized UX research?

Conversational AI, recommendation systems, predictive analytics tools, and autonomous systems benefit most from specialized research. Any AI application where users must trust AI decisions, understand AI reasoning, or engage in natural language interactions requires research beyond traditional usability testing. The higher the stakes or more complex the interaction, the more important specialized AI UX research becomes.

Tip: Prioritize AI UX research for applications where user trust is critical or where AI failure could have significant consequences for users or your business.

How do you research AI systems that are still learning and evolving?

Research methods must account for AI systems that change over time through machine learning. This includes testing AI performance across different training stages, understanding how user behavior changes as AI improves, and designing research frameworks that can adapt to evolving AI capabilities. Longitudinal research becomes essential for understanding long-term user-AI relationships.

Tip: Plan for multiple research phases that align with your AI development timeline rather than treating AI UX research as a one-time activity.

What should organizations consider before starting AI UX research?

Organizations should have clear AI use cases, defined success metrics, and realistic timelines for AI development. Research requires access to representative users, AI prototypes or simulations for testing, and stakeholder alignment on research objectives. Consider your AI's intended personality, transparency level, and role in the user experience before designing research studies.

Tip: Define your AI's intended personality and behavior characteristics before starting research to ensure you're testing against clear criteria rather than just gathering general feedback.

What research methods work best for AI UX research?

Effective AI UX research combines traditional methods with AI-specific approaches. We use contextual inquiry to understand AI integration into user workflows, conversational interface testing for AI dialogue systems, trust assessment studies, and emotional response evaluation. Methods include user interviews, usability testing, A/B testing of AI responses, and longitudinal studies to track user-AI relationship development over time.

Tip: Use multiple research methods to triangulate findings since users may behave differently with AI than they report in interviews.
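
To make the quantitative side of this triangulation concrete, an A/B test of two AI response styles can be summarized with a simple two-proportion z-test. This is a minimal sketch; the variant labels and counts below are hypothetical, not from any real study:

```python
from math import sqrt

# Hypothetical A/B test: task-success counts for two AI response variants.
successes_a, n_a = 132, 200   # variant A: concise AI answers
successes_b, n_b = 151, 200   # variant B: answers with brief explanations

p_a, p_b = successes_a / n_a, successes_b / n_b
p_pool = (successes_a + successes_b) / (n_a + n_b)      # pooled success rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
z = (p_b - p_a) / se                                    # two-proportion z statistic

print(f"success A={p_a:.2f}, B={p_b:.2f}, z={z:.2f}")   # z above ~1.96 suggests a real difference
```

A result like this would then be checked against interview findings to confirm *why* the better-performing variant works, not just that it does.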

How do you test conversational AI and chatbot interfaces?

Conversational AI testing focuses on dialogue flow effectiveness, natural language understanding, error recovery, and user satisfaction with AI responses. We test conversation starters, response accuracy, personality consistency, and how well users can accomplish tasks through natural language. Testing includes both scripted scenarios and open-ended conversations to understand AI performance limits.

Tip: Test conversational AI with both structured tasks and unstructured conversations to understand how users naturally interact with your AI when not following specific prompts.

What's your approach to testing AI emotional intelligence and personality?

We test AI emotional intelligence through scenario-based research where users interact with AI in different emotional states and contexts. This includes testing how AI responds to user frustration, confusion, or success, and evaluating whether AI personality feels authentic and helpful. We measure user perceptions of AI empathy, trustworthiness, and social intelligence through both behavioral observation and attitudinal surveys.

Tip: Test AI personality across different user emotional states, not just positive interactions, to understand how your AI performs when users are stressed or frustrated.

How do you research trust and transparency in AI systems?

Trust research examines what information users need to feel confident in AI decisions, how much explanation they want about AI reasoning, and what builds or destroys trust over time. We test different levels of AI transparency, from simple confidence indicators to detailed explanations of AI decision-making. Research includes both initial trust formation and long-term trust maintenance.

Tip: Research trust needs specific to your AI's role and consequences - users need different levels of transparency for entertainment recommendations versus financial advice.
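
One lightweight way to turn trust measurement into a trackable number is to average Likert-scale trust items into a single index per study round. This is only a sketch with invented survey responses, not a validated trust instrument:

```python
# Hypothetical 5-point Likert trust survey (1 = strongly distrust, 5 = strongly trust).
# Each row is one participant's ratings on three trust items
# (e.g., accuracy, transparency, willingness to rely on the AI).
responses = [
    [4, 5, 4],   # P1
    [3, 3, 2],   # P2
    [5, 4, 4],   # P3
    [2, 3, 3],   # P4
]

per_participant = [sum(r) / len(r) for r in responses]    # mean item score per person
trust_index = sum(per_participant) / len(per_participant) # study-level trust index
print(f"trust index: {trust_index:.2f} / 5")
```

Repeating the same items across releases lets you see whether trust is building or eroding over time.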

What methods help identify bias in AI user experiences?

Bias research involves testing AI performance across diverse user groups, examining AI responses for unfair treatment, and understanding how different users experience AI systems. We use demographic analysis, comparative testing across user segments, and qualitative research to understand lived experiences with AI. This includes testing for both algorithmic bias and experiential bias in AI interactions.

Tip: Include diverse user groups in your research from the beginning rather than treating bias testing as an afterthought - early detection prevents costly fixes later.
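
The comparative-testing idea above can be sketched as a simple per-group breakdown of task outcomes. The group labels and session data here are hypothetical, and the gap metric is only a crude first-pass disparity signal, not a formal fairness test:

```python
from collections import defaultdict

# Hypothetical session log: (user_group, task_completed) pairs from an AI study.
sessions = [
    ("native_speaker", True), ("native_speaker", True), ("native_speaker", False),
    ("native_speaker", True), ("non_native", True), ("non_native", False),
    ("non_native", False), ("non_native", True), ("non_native", False),
]

totals, wins = defaultdict(int), defaultdict(int)
for group, completed in sessions:
    totals[group] += 1
    wins[group] += completed          # bool adds as 0 or 1

rates = {g: wins[g] / totals[g] for g in totals}      # completion rate per group
gap = max(rates.values()) - min(rates.values())        # crude disparity signal
print(rates, f"gap={gap:.2f}")
```

A large gap would prompt qualitative follow-up with the lower-performing group to understand what in the AI interaction is failing them.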

How do you test AI systems that provide recommendations or predictions?

Recommendation system research focuses on relevance, user control, explanation quality, and long-term satisfaction. We test how users interpret AI recommendations, what information they need to make decisions, and how recommendation quality affects user trust and engagement. This includes testing both individual recommendations and the overall recommendation strategy.

Tip: Test not just recommendation accuracy but also user understanding of why recommendations were made and their ability to influence future recommendations.

What's your approach to longitudinal AI UX research?

Longitudinal research tracks how user-AI relationships develop over time, including changes in trust, usage patterns, and satisfaction. We study how users adapt to AI capabilities, how AI learning affects user experience, and how user expectations evolve. This research is crucial for understanding AI systems that improve through use and for planning AI capability expansion.

Tip: Plan longitudinal research to capture both honeymoon and long-term usage phases, as user attitudes toward AI often change significantly after initial novelty wears off.

How do you design AI UX research studies that produce actionable insights?

Study design starts with clear research objectives tied to specific AI performance metrics and user experience goals. We create realistic testing scenarios that reflect actual AI usage contexts, recruit representative users, and design tasks that reveal both functional and emotional aspects of AI interaction. Studies balance structured testing with open exploration to capture unexpected user behaviors.

Tip: Design studies that test AI performance in realistic contexts rather than ideal conditions to understand how your AI will actually perform for users.

What's your approach to participant recruitment for AI UX research?

Recruitment focuses on finding users who represent your AI's intended audience while also including diverse perspectives that reveal potential bias or accessibility issues. We consider participants' prior AI experience, domain expertise, and demographic diversity. Recruitment strategies account for potential AI anxiety or enthusiasm that might bias results.

Tip: Include both AI-experienced and AI-naive users in your research to understand how different comfort levels with AI affect user experience.

How do you create realistic scenarios for testing AI interactions?

Scenarios reflect real user goals and constraints, including time pressure, distractions, and incomplete information that users experience in actual AI usage. We design scenarios that test both successful AI interactions and common failure modes. Scenarios consider the emotional context of AI usage, such as seeking help when frustrated or making important decisions.

Tip: Base scenarios on actual user journeys and use cases rather than idealized interactions to get meaningful insights about AI performance.

What's your approach to testing AI prototypes versus live systems?

Prototype testing allows controlled evaluation of specific AI features before full development, while live system testing reveals real-world performance and user adaptation. We use Wizard of Oz testing for early AI concepts, interactive prototypes for usability testing, and live system testing for validation. Each approach provides different insights about AI user experience.

Tip: Use prototype testing to validate core AI concepts early, but always validate findings with live system testing to understand real-world performance.

How do you balance qualitative and quantitative approaches in AI UX research?

AI UX research requires both qualitative insights about user attitudes and quantitative measures of AI performance. Qualitative methods reveal user mental models, emotions, and trust factors, while quantitative methods measure task completion, accuracy, and usage patterns. The combination provides a complete picture of AI user experience effectiveness.

Tip: Use qualitative research to understand why users behave certain ways with AI, then use quantitative research to validate those insights at scale.

What ethical considerations apply to AI UX research?

Ethical considerations include informed consent about AI testing, protection of user privacy and data, transparency about AI capabilities being tested, and ensuring research doesn't inadvertently train AI on user data without consent. We also consider the potential impact of AI research on user perceptions and expectations of AI systems.

Tip: Be transparent with research participants about how their data will be used and whether their interactions will contribute to AI training or improvement.

How do you ensure AI UX research findings are generalizable?

Generalizability requires diverse participant samples, multiple testing contexts, and validation across different AI implementations. We test AI performance across various user scenarios, environmental conditions, and use cases. Research designs account for the fact that AI performance can vary significantly based on context and user characteristics.

Tip: Test your AI with edge cases and diverse user groups to understand the boundaries of your research findings rather than just optimal conditions.

How do you analyze and interpret AI UX research data?

Analysis combines traditional UX metrics with AI-specific measures like trust scores, conversation quality ratings, and AI personality assessments. We examine both user task performance and emotional responses to AI interactions, looking for patterns that indicate successful AI user experiences. Analysis includes both individual user journeys and aggregated performance metrics.

Tip: Look for patterns in both successful and failed AI interactions to understand what makes AI experiences work well for users.

What metrics matter most in AI UX research?

Key metrics include task completion rates, user satisfaction with AI responses, trust in AI recommendations, time to complete tasks, error recovery success, and long-term usage patterns. We also measure AI-specific metrics like conversation flow effectiveness, perceived AI intelligence, and user comfort with AI decision-making. The most important metrics depend on your AI's specific role and goals.

Tip: Define success metrics that align with your AI's intended role rather than using generic usability metrics that may not capture AI-specific user experience factors.
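
Aggregating these metrics from study data is straightforward; the sketch below uses made-up session records to show one way the core numbers roll up:

```python
# Hypothetical per-session study results for an AI assistant.
sessions = [
    {"completed": True,  "seconds": 95,  "satisfaction": 4},
    {"completed": True,  "seconds": 70,  "satisfaction": 5},
    {"completed": False, "seconds": 180, "satisfaction": 2},
    {"completed": True,  "seconds": 110, "satisfaction": 4},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
mean_time = sum(s["seconds"] for s in sessions) / len(sessions)
mean_satisfaction = sum(s["satisfaction"] for s in sessions) / len(sessions)

print(f"completion {completion_rate:.0%}, "
      f"avg time {mean_time:.0f}s, satisfaction {mean_satisfaction:.2f}/5")
```

In practice each record would also carry AI-specific fields (trust rating, perceived intelligence) so the same rollup covers the metrics that matter for your AI's role.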

How do you identify when AI UX research reveals bias or fairness issues?

Bias detection involves comparing AI performance across different user groups, examining language and response patterns for unfair treatment, and understanding how different users experience AI interactions. We look for statistical differences in AI performance and qualitative differences in user experience quality. Analysis includes both obvious bias and subtle differences that might affect user experience.

Tip: Analyze research data by demographic groups and use cases to identify patterns that might indicate bias, even when overall performance seems acceptable.

What's your approach to understanding AI conversation quality?

Conversation quality analysis examines dialogue flow, response relevance, personality consistency, and user satisfaction with AI communication style. We analyze both successful conversations and breakdowns to understand what makes AI interactions effective. This includes examining conversation length, user effort required, and whether users achieve their goals through AI dialogue.

Tip: Analyze complete conversation flows rather than individual AI responses to understand how conversation quality affects overall user experience.
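
As a minimal illustration of flow-level analysis, a transcript can be reduced to a few effort and outcome signals. The conversation below is invented, and the goal-completion check is deliberately crude; real analysis would code outcomes by hand or with a trained classifier:

```python
# Hypothetical transcript: (speaker, utterance) turns from one AI support chat.
transcript = [
    ("user", "Where is my order?"),
    ("ai", "Could you share your order number?"),
    ("user", "It's 4821."),
    ("ai", "Order 4821 ships tomorrow."),
    ("user", "thanks"),
]

user_turns = sum(1 for s, _ in transcript if s == "user")
ai_turns = sum(1 for s, _ in transcript if s == "ai")
turns_to_resolution = len(transcript)            # proxy for user effort
goal_reached = "ships" in transcript[-2][1]      # crude goal-completion check

print(user_turns, ai_turns, turns_to_resolution, goal_reached)
```

Comparing these signals across many conversations surfaces where dialogues stall or require excessive user effort.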

How do you interpret user feedback about AI personality and emotional intelligence?

AI personality analysis examines user perceptions of AI character traits, emotional appropriateness, and social intelligence. We analyze both direct feedback about AI personality and indirect indicators like user comfort, trust, and willingness to continue interacting with AI. This includes understanding how AI personality affects user task performance and satisfaction.

Tip: Look for consistency between what users say about AI personality and how they actually behave with the AI, as reported preferences may differ from actual usage patterns.

What patterns in AI UX research indicate successful user experiences?

Success patterns include users achieving goals efficiently, expressing trust in AI recommendations, showing willingness to use AI again, and demonstrating appropriate mental models of AI capabilities. We look for evidence that users understand AI limitations while still finding AI helpful. Successful AI experiences often show users adapting their behavior to work effectively with AI.

Tip: Identify patterns that show users developing appropriate expectations for AI performance rather than expecting either too much or too little from AI systems.

How do you translate AI UX research findings into design recommendations?

Translation involves connecting research insights to specific AI behavior changes, interface modifications, and interaction design improvements. We provide recommendations for AI response patterns, transparency levels, error handling, and personality adjustments based on user feedback. Recommendations balance user preferences with technical constraints and business objectives.

Tip: Prioritize recommendations that address user trust and understanding of AI capabilities, as these fundamental factors affect all other aspects of AI user experience.

How do you research AI in customer service and support applications?

Customer service AI research focuses on problem resolution effectiveness, escalation handling, empathy demonstration, and user satisfaction with AI support. We examine how AI handles complex customer emotions, when users prefer human assistance, and how AI support affects customer relationships. Research includes both routine support scenarios and emotionally charged situations.

Tip: Test AI customer service with users who are genuinely frustrated or stressed, not just calm research participants, to understand how AI performs in realistic support scenarios.

What's your approach to researching AI in healthcare and high-stakes applications?

High-stakes AI research emphasizes safety, trust, and user understanding of AI limitations. We examine how users interpret AI recommendations, what information they need for informed decision-making, and how AI affects professional workflows. Research includes extensive testing of edge cases and failure modes that could have serious consequences.

Tip: Include domain experts and end users in healthcare AI research to understand both clinical accuracy and practical usability in real medical contexts.

How do you research AI-powered search and information retrieval systems?

Search AI research examines query understanding, result relevance, personalization effectiveness, and user satisfaction with AI-enhanced search experiences. We test how users formulate queries for AI systems, how they interpret AI-generated results, and how AI search affects information discovery patterns. This includes testing both precise queries and exploratory search behaviors.

Tip: Test AI search with both specific information needs and exploratory search tasks to understand how AI affects different types of information-seeking behavior.

What's your approach to researching AI in creative and content generation applications?

Creative AI research focuses on output quality, user control over AI generation, creative process integration, and user satisfaction with AI-assisted creativity. We examine how users collaborate with AI for creative tasks, what level of AI autonomy users prefer, and how AI affects creative workflows. Research includes both novice and expert creative users.

Tip: Include both creative professionals and general users in AI creativity research to understand different expectations and use cases for AI-generated content.

How do you research AI in educational and learning applications?

Educational AI research examines learning effectiveness, engagement, personalization quality, and user understanding of AI tutoring or assistance. We test how AI adapts to different learning styles, how learners interact with AI feedback, and how AI affects motivation and learning outcomes. Research includes both formal and informal learning contexts.

Tip: Test educational AI with learners of different skill levels and learning preferences to understand how AI personalization affects learning effectiveness.

What's your approach to researching AI in e-commerce and recommendation systems?

E-commerce AI research focuses on recommendation relevance, user control over personalization, privacy comfort, and purchase decision support. We examine how users interact with AI-powered product recommendations, how AI affects browsing behavior, and how recommendation explanations influence purchase decisions. Research includes both individual product recommendations and overall shopping experience.

Tip: Test e-commerce AI across different purchase contexts, from routine purchases to high-consideration decisions, to understand how AI affects different shopping behaviors.

How do you research AI accessibility and inclusion across different user groups?

Accessibility research examines how AI works for users with different abilities, technical skills, and cultural backgrounds. We test AI performance across diverse user groups, examine language and interaction barriers, and understand how AI can either improve or hinder accessibility. This includes testing AI with assistive technologies and diverse interaction methods.

Tip: Include accessibility testing throughout AI development rather than treating it as a final check, as AI systems can create new accessibility barriers that traditional testing might miss.

How does AI UX research impact business outcomes and user satisfaction?

AI UX research typically improves user adoption rates, reduces support costs, increases user trust and satisfaction, and enables more effective AI implementations. Research helps avoid costly AI failures and ensures AI systems deliver genuine user value. Well-researched AI experiences often show higher user retention and more positive brand perception.

Tip: Track business metrics like user retention and support ticket volume alongside traditional usability metrics to demonstrate AI UX research value.

What return on investment can we expect from AI UX research?

ROI includes both cost savings from avoiding AI failures and revenue increases from better user experiences. Research prevents expensive post-launch fixes, reduces user support needs, and increases user adoption of AI features. The ROI is typically highest for AI applications with high user interaction frequency or business-critical functions.

Tip: Calculate ROI by comparing the cost of research to the potential cost of AI system failures or poor user adoption rather than just direct development costs.
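
The avoided-failure framing in the tip can be sketched as a simple expected-value calculation. Every number below is a placeholder assumption you would replace with your own estimates:

```python
# Hypothetical ROI sketch: research cost vs. expected avoided-failure cost.
research_cost = 40_000            # assumed cost of one research engagement
failure_cost = 250_000            # assumed cost of a failed AI launch
failure_prob_without = 0.30       # assumed launch-failure risk without research
failure_prob_with = 0.10          # assumed risk after research de-risks the design

expected_saving = (failure_prob_without - failure_prob_with) * failure_cost
roi = (expected_saving - research_cost) / research_cost
print(f"expected saving ${expected_saving:,.0f}, ROI {roi:.0%}")
```

Even rough probabilities make the comparison more honest than weighing research cost against direct development cost alone.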

How does AI UX research support competitive advantage and market differentiation?

Research enables AI experiences that users find genuinely helpful rather than just technically impressive. This creates competitive advantages through superior user experiences that are difficult for competitors to replicate. Research helps identify unique AI applications that provide real user value rather than following technology trends.

Tip: Focus research on understanding unmet user needs that AI can address uniquely rather than just improving existing AI implementations.

What risks does AI UX research help mitigate?

Research mitigates risks including user rejection of AI systems, bias and fairness issues, privacy concerns, and AI systems that don't solve real user problems. Early research prevents costly redesigns and protects brand reputation from AI failures. Research also helps identify potential regulatory or ethical issues before they become problems.

Tip: Use research to identify potential AI risks specific to your industry and user base rather than just general AI concerns.

How does AI UX research inform AI development roadmaps and feature prioritization?

Research provides evidence for which AI features users actually want and need, helping prioritize development efforts for maximum impact. User insights guide AI capability development, personality design, and interaction patterns. Research helps balance technical possibilities with user value and business objectives.

Tip: Use research findings to challenge assumptions about what AI features users want rather than just validating planned AI capabilities.

What's the long-term strategic value of AI UX research capability?

AI UX research capability becomes a strategic asset that enables rapid adaptation to changing user needs and AI technology advances. Organizations with strong AI UX research capabilities can identify new AI opportunities faster and implement AI more successfully than competitors. This capability compounds over time through accumulated user insights and research expertise.

Tip: Build internal AI UX research capabilities alongside external consulting to ensure you can continue optimizing AI experiences as technology evolves.

How do you measure the success of AI UX research initiatives over time?

Success measurement includes both immediate research outcomes and long-term business impact. We track how research insights influence AI development decisions, user satisfaction improvements, and business performance gains. Success also includes organizational learning and improved AI design capabilities that result from research investments.

Tip: Establish baseline measurements before AI UX research begins so you can track specific improvements in user experience and business outcomes.

What makes Akendi's AI UX research approach unique?

Our Experience Thinking framework ensures AI UX research examines how AI affects the complete user experience across brand, content, product, and service touchpoints. We combine traditional UX research expertise with AI-specific methodologies, focusing on human needs rather than just AI capabilities. Our research-led analysis balances user experience insights with business objectives and technical constraints.

Tip: Choose research partners who understand both AI technology and human experience design rather than specialists who only focus on technical AI performance.

How do you customize AI UX research for different industries and AI applications?

Customization considers industry-specific user expectations, regulatory requirements, and AI risk tolerance. We adapt research methods, participant recruitment, and success metrics based on your AI application type and business context. Healthcare AI research differs significantly from entertainment AI research in terms of user needs and success criteria.

Tip: Work with researchers who have relevant industry experience with AI applications rather than general UX researchers who may not understand your specific context.

What's your approach to collaborating with AI development teams?

Collaboration involves integrating research insights with AI development workflows, providing user feedback during AI training, and validating AI performance from the user's perspective. We work closely with data scientists, AI engineers, and product teams to ensure research insights can be implemented effectively. This includes understanding technical constraints and possibilities for AI improvement.

Tip: Establish clear communication channels between research and AI development teams to ensure user insights can influence AI system design and training.

How do you handle AI UX research projects with evolving requirements?

AI development often involves changing requirements as technology capabilities and user needs become clearer. We design flexible research approaches that can adapt to evolving AI capabilities while maintaining focus on user experience outcomes. Research plans include contingencies for different AI development scenarios and timeline changes.

Tip: Plan AI UX research with flexibility to adapt to changing AI capabilities rather than rigid research protocols that might become outdated.

What ongoing support do you provide after initial AI UX research?

Ongoing support includes research guidance for AI iterations, user feedback integration, and optimization based on live AI performance data. We provide consultation on new AI features, help interpret user behavior changes, and support continuous improvement of AI user experiences. Our goal is building long-term partnerships that support evolving AI development.

Tip: Plan for ongoing research support rather than treating AI UX research as a one-time project, as AI systems require continuous optimization based on user feedback.

How do you ensure our team can apply AI UX research insights effectively?

Knowledge transfer includes training team members on AI UX research methods, providing frameworks for ongoing user feedback collection, and establishing processes for translating research insights into AI improvements. We build internal capabilities for continued AI UX optimization rather than creating dependency on external research support.

Tip: Invest in building internal AI UX research capabilities to ensure you can continue optimizing AI experiences as your systems evolve and user needs change.

What's your vision for the future of AI UX research and how do you help us prepare?

The future of AI UX research involves more sophisticated methods for understanding human-AI relationships, better integration of research with AI development, and increased focus on ethical AI experiences. We help organizations build research capabilities that remain valuable as AI technology advances, focusing on timeless human needs rather than specific AI implementations.

Tip: Focus on building research capabilities that understand human needs and behaviors rather than just current AI technology, as human needs remain more stable than AI capabilities.

How can we help you?