Global Rank #601 of 601 Skills

usability-tester AI Agent Skill

View Source: oakoss/agent-skills


Installation

npx skills add oakoss/agent-skills --skill usability-tester

Installs: 33

Usability Tester

Overview

Validates that users can successfully complete core tasks through systematic observation and expert evaluation. Covers moderated and unmoderated testing, heuristic evaluation, accessibility checks, and issue severity scoring. Not a substitute for analytics or A/B testing -- those measure what happens; usability testing reveals why.

When to use: Testing user flows, validating designs, identifying friction points, running heuristic evaluations, ensuring users can complete core tasks, planning and executing usability test sessions.

When NOT to use: Analytics or A/B test setup, visual design critique without task-based evaluation, automated UI testing (use a testing framework), performance benchmarking.

Quick Reference

| Method | Best For | Participants | When to Use |
| --- | --- | --- | --- |
| Moderated testing | Deep insights, complex flows | 5-8 per persona | Design and prototyping stage |
| Unmoderated testing | Scale, quantitative data | 20-50+ | Pre-launch and post-launch |
| Guerrilla testing | Quick validation, early concepts | 5-10 random | Early concept stage |
| First-click testing | Navigation, information architecture | 20-50 | Any stage, especially IA redesigns |
| Heuristic evaluation | Expert review against principles | 3-5 evaluators | Before user testing, design audits |
| Cognitive walkthrough | Task flow analysis | 2-3 evaluators | Early design, new feature review |
| Accessibility audit | Inclusive design validation | 3-5 users with disabilities | Pre-launch, compliance reviews |
| Synthetic user testing | Scalable task validation with AI agents | N/A (automated) | Continuous, regression testing |
| AI-moderated sessions | Async moderated testing at scale | 10-50+ | When moderator availability is limited |

Core Metrics

| Metric | Target | What It Measures |
| --- | --- | --- |
| Task success rate | 80% or higher for core tasks | Whether users can complete the task |
| Time on task | Simple under 30s, medium 1-2m, complex 3-5m | Efficiency |
| Error rate | Fewer than 2 per task | Learnability and clarity |
| Post-task satisfaction | 4.0 or higher on 5-point scale | Subjective ease |
| SUS score | 68+ (industry average), 80+ (excellent) | Overall usability |
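The SUS score above comes from ten 1-5 Likert responses scored with the standard Brooke formula: odd-numbered (positive) items contribute response minus 1, even-numbered (negative) items contribute 5 minus response, and the sum is scaled by 2.5 to a 0-100 range. A minimal sketch (the function name is illustrative, not part of the skill):

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Items alternate positive/negative: index 0 is item 1 (positive),
    index 1 is item 2 (negative), and so on.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses, each in the range 1-5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # positive items: r-1; negative: 5-r
        for i, r in enumerate(responses)
    )
    return total * 2.5  # scale 0-40 raw sum to 0-100
```

Neutral answers (all 3s) land exactly on 50, which is why 68 rather than 50 is the published industry average to beat.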

Issue Severity

Severity = Impact (1-3) × Frequency (1-3).

  • Critical (8-9): fix before release
  • High (6-7): fix before release
  • Medium (4-5): next release
  • Low (1-3): backlog
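The severity formula maps directly to code. A small sketch (function name and label strings are illustrative, matching the tiers above):

```python
def severity(impact: int, frequency: int) -> tuple[int, str]:
    """Score a usability issue as Impact (1-3) x Frequency (1-3)."""
    if not (1 <= impact <= 3 and 1 <= frequency <= 3):
        raise ValueError("impact and frequency must each be 1-3")
    score = impact * frequency
    if score >= 8:
        label = "Critical: fix before release"
    elif score >= 6:
        label = "High: fix before release"
    elif score >= 4:
        label = "Medium: next release"
    else:
        label = "Low: backlog"
    return score, label
```

Applying it to every logged issue gives a sortable fix-priority list rather than an unranked pile of observations.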

Common Mistakes

| Mistake | Correct Pattern |
| --- | --- |
| Writing task scenarios with product jargon or step-by-step instructions | Use natural language with realistic context and clear goals to test discoverability |
| Helping users or explaining the UI during test sessions | Observe silently and let users struggle; confusion reveals real issues |
| Testing only the happy path with ideal inputs | Include error states, edge cases, and recovery scenarios in test plans |
| Using fewer than 5 participants per persona | Recruit 5-8 users per persona; after 5, diminishing returns per Nielsen research |
| Collecting findings without severity scoring or follow-up actions | Score every issue by impact times frequency and assign fix priority |
| Skipping heuristic evaluation before user testing | Run expert review first to catch obvious issues cheaply before spending on recruitment |
| Ignoring accessibility in test planning | Include participants with disabilities and test with assistive technologies |
| Running only one round of testing | Test in waves: 5 users, fix issues, test 5 more until success rate meets targets |

Delegation

  • Test plan and scenario creation: Use Task agent to draft objectives, research questions, and realistic task scenarios for each persona
  • Heuristic evaluation: Use Task agent to walk through interfaces against Nielsen's 10 heuristics and document violations with severity ratings
  • Metrics analysis and severity scoring: Use Task agent to calculate SUS scores, task success rates, and severity ratings from raw test data
  • Test report generation: Use Task agent to compile findings into structured reports with prioritized recommendations
  • Test infrastructure planning: Use Plan agent to select testing methods, define recruitment criteria, and schedule test waves

If the quality-auditor skill is available, delegate full codebase quality audits and scoring to it.
Otherwise, recommend: pnpm dlx skills add oakoss/agent-skills -s quality-auditor -a claude-code -y

References

  • Test Planning -- objectives, research questions, recruitment, task scenario templates, screening criteria
  • Conducting Tests -- think-aloud protocol, facilitation rules, post-task questions, session structure
  • Testing Methods -- unmoderated, guerrilla, first-click, cognitive walkthrough, method selection by lifecycle stage
  • Heuristic Evaluation -- Nielsen's 10 heuristics, evaluation process, severity rating, combining with user testing
  • Metrics and Severity -- success rate, time on task, errors, satisfaction, SUS scoring, severity formula
  • Reporting -- test report template, key insights format, stakeholder presentation, recommended actions
  • Accessibility Testing -- inclusive recruitment, assistive technology testing, WCAG alignment, accessibility heuristics
  • Remote and Tools -- remote vs in-person comparison, testing tools, test frequency, checklists


Security Audit

  • ath: Medium
  • socket: Safe (Alerts: 0, Score: 90)
  • snyk: Low

How to use this skill

1. Install usability-tester by running npx skills add oakoss/agent-skills --skill usability-tester in your project directory. The skill file will be downloaded from GitHub and placed in your project.

2. No configuration needed. Your AI agent (Claude Code, Cursor, Windsurf, etc.) automatically detects installed skills and uses them as context when generating code.

3. The skill enhances your agent's understanding of usability testing, helping it follow established patterns, avoid common mistakes, and produce production-ready output.

What you get

Skills are plain-text instruction files — not executable code. They encode expert knowledge about frameworks, languages, or tools that your AI agent reads to improve its output. This means zero runtime overhead, no dependency conflicts, and full transparency: you can read and review every instruction before installing.

Compatibility

This skill works with any AI coding agent that supports the skills.sh format, including Claude Code (Anthropic), Cursor, Windsurf, Cline, Aider, and other tools that read project-level context files. Skills are framework-agnostic at the transport level — the content inside determines which language or framework it applies to.

Data sourced from the skills.sh registry and GitHub. Install counts and security audits are updated regularly.
