TINAA Best Practices¶
Follow these best practices to get the most out of TINAA's AI-powered browser automation and testing capabilities.
Test Execution Best Practices¶
1. Focused Test Requests¶
When using the HTTP API, be specific about what you want to test:
Good:
{
  "action": "exploratory",
  "parameters": {
    "url": "https://example.com/login",
    "focus_area": "forms",
    "depth": 3
  }
}
Better:
{
  "action": "exploratory",
  "parameters": {
    "url": "https://example.com/login",
    "focus_area": "authentication",
    "scenarios": ["valid_login", "invalid_credentials", "forgot_password"]
  }
}
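These request bodies are sent to TINAA's HTTP API. A minimal sketch of submitting one, assuming the /test/exploratory endpoint shown later in this guide and the request body saved as request.json:
# Submit the exploratory test request defined in request.json
curl -X POST http://localhost:8765/test/exploratory \
  -H "Content-Type: application/json" \
  -d @request.json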
2. Leverage WebSocket for Real-time Feedback¶
Monitor test progress in real-time:
// Stream progress events from TINAA's WebSocket endpoint
const ws = new WebSocket('ws://localhost:8765/ws');
ws.onmessage = (event) => {
  const progress = JSON.parse(event.data);
  if (progress.type === 'error') {
    console.error('Test failed:', progress.details);
  }
};
3. Use Playbooks for Complex Scenarios¶
Create playbooks for multi-step test scenarios:
{
  "name": "checkout-flow",
  "actions": [
    {"type": "navigate", "url": "https://shop.example.com"},
    {"type": "exploratory", "focus": "product-selection"},
    {"type": "form_analysis", "target": "checkout"},
    {"type": "accessibility", "standard": "WCAG2AA"}
  ]
}
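A playbook like this is then submitted to the server for execution. The route below is an assumption for illustration; check the API reference for the actual playbook endpoint:
# Submit the playbook (the /playbook/execute route is assumed, not confirmed)
curl -X POST http://localhost:8765/playbook/execute \
  -H "Content-Type: application/json" \
  -d @checkout-flow.json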
Performance Optimization¶
Parallel Test Execution¶
Run multiple test types simultaneously:
# Run in separate terminals, or launch both from a script (see below)
curl -X POST http://localhost:8765/test/exploratory \
  -H "Content-Type: application/json" -d '{"url": "https://example.com/page1"}'
curl -X POST http://localhost:8765/test/accessibility \
  -H "Content-Type: application/json" -d '{"url": "https://example.com/page2"}'
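A minimal sketch of the script approach, launching both requests in the background and waiting for them to finish (URLs and output file names are placeholders):
#!/usr/bin/env bash
# Fire both test requests concurrently, then block until both have returned
curl -s -X POST http://localhost:8765/test/exploratory \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/page1"}' > exploratory.json &
curl -s -X POST http://localhost:8765/test/accessibility \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/page2"}' > accessibility.json &
wait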
Resource Management¶
- Close browser contexts after tests
- Use headless mode for CI/CD environments
- Implement proper timeouts for long-running tests (see the example below)
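For example, curl's --max-time flag enforces a client-side timeout; the headless field in the body is an assumed parameter and may not match the actual request schema:
# Abort if the test has not returned within 10 minutes (client-side timeout)
curl --max-time 600 -X POST http://localhost:8765/test/exploratory \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "headless": true}'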
Integration Best Practices¶
1. CI/CD Integration¶
# GitHub Actions example
- name: Run TINAA Tests
  run: |
    docker-compose -f docker-compose.prod.yml up -d
    ./scripts/wait-for-tinaa.sh
    curl -X POST http://localhost:8765/test/accessibility \
      -H "Content-Type: application/json" \
      -d '{"url": "${{ env.STAGING_URL }}"}'
2. MCP Integration with Claude¶
When using with Claude Desktop:
- Be specific in your requests
- Ask for screenshots to verify actions
- Request test summaries after execution
Example prompts:
- "Navigate to example.com and check all form fields for accessibility"
- "Take screenshots of each step during the checkout process"
- "Analyze the login page for security best practices"
Test Result Analysis¶
1. Review Accessibility Reports¶
Pay attention to:
- Critical violations (must fix)
- Serious violations (should fix)
- Moderate violations (consider fixing)
2. Analyze Exploratory Test Insights¶
Look for:
- Broken links discovered
- Form validation issues
- JavaScript errors
- Performance bottlenecks
Maintenance Best Practices¶
1. Regular Health Checks¶
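Periodically verify that the TINAA service is up and responding. A minimal check, assuming a /health endpoint on the default port (adjust the path to whatever your deployment exposes):
# Liveness check (the /health path is an assumption)
curl -sf http://localhost:8765/health || echo "TINAA is not responding" >&2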
2. Log Analysis¶
Monitor TINAA logs for:
- Failed test patterns
- Performance degradation
- Resource usage
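For a Docker-based deployment, tail the container logs directly; the tinaa service name is an assumption and should match the service defined in docker-compose.prod.yml:
# Follow logs and surface common failure keywords ("tinaa" service name is assumed)
docker-compose -f docker-compose.prod.yml logs -f tinaa | grep -iE "error|timeout|failed"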
3. Update Resources¶
Keep test patterns and heuristics updated:
- Review /resources/test_patterns.json (see the validation example after this list)
- Update accessibility rules as standards evolve
- Refine exploratory heuristics based on findings
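After editing, confirm the file is still well-formed JSON; this assumes the patterns file is accessible at resources/test_patterns.json relative to the project root:
# Fail fast on JSON syntax errors after editing the patterns file
jq empty resources/test_patterns.json && echo "test_patterns.json is valid JSON"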
Common Pitfalls to Avoid¶
- Running tests without specific focus - Always specify what to test
- Ignoring WebSocket feedback - Use real-time updates for better insights
- Not reviewing test results - Analyze reports for actionable improvements
- Overloading the system - Limit concurrent tests based on resources
Security Considerations¶
- Don't expose TINAA publicly - Keep it on internal networks
- Validate input URLs - Ensure you're testing authorized domains (see the sketch after this list)
- Secure sensitive test data - Don't hardcode credentials
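A minimal sketch of an allowlist check before submitting a test request; the domains, endpoint, and script name are illustrative:
#!/usr/bin/env bash
# check-and-test.sh <url> - only submit test requests for authorized domains
ALLOWED_DOMAINS="example.com staging.example.com"
TARGET_URL="$1"
HOST=$(echo "$TARGET_URL" | sed -E 's#^https?://([^/]+).*#\1#')

for domain in $ALLOWED_DOMAINS; do
  if [ "$HOST" = "$domain" ]; then
    # Authorized: hand the URL to TINAA
    curl -X POST http://localhost:8765/test/exploratory \
      -H "Content-Type: application/json" \
      -d "{\"url\": \"$TARGET_URL\"}"
    exit 0
  fi
done
echo "Refusing to test unauthorized domain: $HOST" >&2
exit 1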