# TINAA Enterprise Guide

## Enterprise-Grade Deployment and Management

Complete guide for deploying, scaling, and managing TINAA in enterprise environments.

## Table of Contents

- Enterprise Architecture
- Production Deployment
- Security & Compliance
- Scaling & Performance
- Monitoring & Observability
- Team Management
- Integration Patterns
- Governance & Best Practices
- Support & Maintenance
## Enterprise Architecture

### Reference Architecture

```mermaid
graph TB
    subgraph "Load Balancing Layer"
        LB[Load Balancer<br/>NGINX/HAProxy]
        CDN[CDN/WAF<br/>CloudFlare/AWS CloudFront]
    end

    subgraph "API Gateway Layer"
        AG[API Gateway<br/>Kong/AWS API Gateway]
        AUTH[Authentication<br/>OAuth2/SAML]
    end

    subgraph "Application Layer"
        TINAA1[TINAA Instance 1]
        TINAA2[TINAA Instance 2]
        TINAA3[TINAA Instance 3]
    end

    subgraph "Data Layer"
        REDIS[Redis Cache<br/>Session/Results]
        POSTGRES[PostgreSQL<br/>Metadata/Audit]
        S3[Object Storage<br/>Screenshots/Reports]
    end

    subgraph "Monitoring Layer"
        PROM[Prometheus<br/>Metrics]
        GRAF[Grafana<br/>Dashboards]
        ELK[ELK Stack<br/>Logging]
    end

    CDN --> LB
    LB --> AG
    AG --> AUTH
    AUTH --> TINAA1
    AUTH --> TINAA2
    AUTH --> TINAA3
    TINAA1 --> REDIS
    TINAA1 --> POSTGRES
    TINAA1 --> S3
    TINAA1 --> PROM
    PROM --> GRAF
    TINAA1 --> ELK
```
### Component Specifications

| Component | Purpose | Recommended Technology |
|---|---|---|
| Load Balancer | Traffic distribution | NGINX Plus, HAProxy, AWS ALB |
| API Gateway | Rate limiting, auth | Kong, AWS API Gateway, Zuul |
| Container Orchestration | Service management | Kubernetes, Docker Swarm |
| Cache Layer | Performance optimization | Redis Cluster, Memcached |
| Database | Persistent storage | PostgreSQL, MySQL |
| Object Storage | File storage | AWS S3, Azure Blob, MinIO |
| Monitoring | Observability | Prometheus + Grafana |
| Logging | Audit and debugging | ELK Stack, Splunk |
## Production Deployment

### Kubernetes Deployment

#### 1. Namespace and ConfigMap

```yaml
# k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tinaa-production
  labels:
    app: tinaa
    environment: production
---
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tinaa-config
  namespace: tinaa-production
data:
  TINAA_MODE: "http"
  LOG_LEVEL: "INFO"
  REDIS_URL: "redis://redis-service:6379"
  # NOTE: in production, inject the database password via a Secret, not a ConfigMap
  POSTGRES_URL: "postgresql://tinaa:password@postgres-service:5432/tinaa"
  MAX_CONCURRENT_TESTS: "10"
  DEFAULT_TIMEOUT: "60000"
  SCREENSHOT_STORAGE: "s3"
  S3_BUCKET: "tinaa-screenshots"
```
#### 2. Deployment Configuration

```yaml
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tinaa-deployment
  namespace: tinaa-production
  labels:
    app: tinaa
    version: v1.0.0
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: tinaa
  template:
    metadata:
      labels:
        app: tinaa
        version: v1.0.0
    spec:
      containers:
        - name: tinaa
          image: tinaa-playwright-msp:1.0.0
          ports:
            - containerPort: 8765
              name: http
          # Individual keys can be referenced explicitly...
          env:
            - name: TINAA_MODE
              valueFrom:
                configMapKeyRef:
                  name: tinaa-config
                  key: TINAA_MODE
          # ...or the whole ConfigMap can be imported at once
          envFrom:
            - configMapRef:
                name: tinaa-config
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
            limits:
              memory: "4Gi"
              cpu: "2000m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8765
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /health
              port: 8765
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 3
          volumeMounts:
            - name: browser-cache
              mountPath: /ms-playwright
            - name: tmp-storage
              mountPath: /tmp
      volumes:
        - name: browser-cache
          emptyDir:
            sizeLimit: 2Gi
        - name: tmp-storage
          emptyDir:
            sizeLimit: 1Gi
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
```
#### 3. Service and Ingress

```yaml
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tinaa-service
  namespace: tinaa-production
  labels:
    app: tinaa
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8765
      protocol: TCP
      name: http
  selector:
    app: tinaa
---
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tinaa-ingress
  namespace: tinaa-production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    # ingress-nginx rate limiting (requests per minute)
    nginx.ingress.kubernetes.io/limit-rpm: "100"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - tinaa.yourcompany.com
      secretName: tinaa-tls
  rules:
    - host: tinaa.yourcompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tinaa-service
                port:
                  number: 80
```
#### 4. Horizontal Pod Autoscaler

```yaml
# k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tinaa-hpa
  namespace: tinaa-production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tinaa-deployment
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 25
          periodSeconds: 60
```
### Docker Swarm Deployment

```yaml
# docker-compose.prod.yml
version: '3.8'

services:
  tinaa:
    image: tinaa-playwright-msp:1.0.0
    networks:
      - tinaa-network
    environment:
      - TINAA_MODE=http
      - LOG_LEVEL=INFO
      - REDIS_URL=redis://redis:6379
      - POSTGRES_URL=postgresql://tinaa:${POSTGRES_PASSWORD}@postgres:5432/tinaa
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
        order: start-first
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      resources:
        limits:
          memory: 4G
          cpus: '2'
        reservations:
          memory: 2G
          cpus: '0.5'
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8765/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    networks:
      - tinaa-network
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s

  redis:
    image: redis:7-alpine
    networks:
      - tinaa-network
    volumes:
      - redis_data:/data
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure

  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: tinaa
      POSTGRES_USER: tinaa
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - tinaa-network
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure

networks:
  tinaa-network:
    driver: overlay
    attachable: true

volumes:
  redis_data:
  postgres_data:
```
## Security & Compliance

### Authentication & Authorization

#### 1. OAuth2 Integration

```python
# app/auth/oauth2.py
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
import jwt

oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

class EnterpriseAuth:
    def __init__(self, config):
        self.config = config

    async def authenticate_user(self, token: str = Depends(oauth2_scheme)):
        """Authenticate a user from an OAuth2 bearer token."""
        try:
            payload = jwt.decode(
                token,
                self.config.JWT_SECRET,
                algorithms=[self.config.JWT_ALGORITHM]
            )
            username = payload.get("sub")
            if username is None:
                raise HTTPException(
                    status_code=status.HTTP_401_UNAUTHORIZED,
                    detail="Invalid authentication credentials"
                )
            # Verify the user exists and is active
            user = await self.get_user(username)
            if not user or not user.is_active:
                raise HTTPException(
                    status_code=status.HTTP_401_UNAUTHORIZED,
                    detail="User not found or inactive"
                )
            return user
        except jwt.PyJWTError:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="Could not validate credentials"
            )

    async def check_permissions(self, user, required_permissions: list):
        """Check whether the user holds all required permissions."""
        user_permissions = await self.get_user_permissions(user.id)
        if not all(perm in user_permissions for perm in required_permissions):
            raise HTTPException(
                status_code=status.HTTP_403_FORBIDDEN,
                detail="Insufficient permissions"
            )
        return True

# Role-based access control
class Permission:
    READ_TESTS = "tests:read"
    WRITE_TESTS = "tests:write"
    EXECUTE_TESTS = "tests:execute"
    ADMIN_ACCESS = "admin:access"
    MANAGE_USERS = "users:manage"

# Protected endpoints
@app.post("/test/exploratory")
async def exploratory_test(
    request: TestRequest,
    user = Depends(auth.authenticate_user)
):
    await auth.check_permissions(user, [Permission.EXECUTE_TESTS])
    # Implementation
    pass
```
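
The decode path above needs a matching issuer. A minimal token-creation sketch with PyJWT, assuming the same `JWT_SECRET` and `JWT_ALGORITHM` settings; the module path and TTL are illustrative:

```python
# app/auth/tokens.py -- hypothetical companion to EnterpriseAuth
from datetime import datetime, timedelta, timezone
import jwt

ACCESS_TOKEN_TTL = timedelta(minutes=30)  # assumption: tune per security policy

def create_access_token(username: str, config) -> str:
    """Issue a signed JWT whose 'sub' claim authenticate_user() reads back."""
    now = datetime.now(timezone.utc)
    payload = {
        "sub": username,
        "iat": now,                       # PyJWT converts datetimes to timestamps
        "exp": now + ACCESS_TOKEN_TTL,
    }
    return jwt.encode(payload, config.JWT_SECRET, algorithm=config.JWT_ALGORITHM)
```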
#### 2. SAML SSO Integration

```python
# app/auth/saml.py
from saml2 import BINDING_HTTP_POST, BINDING_HTTP_REDIRECT
from saml2.client import Saml2Client
from saml2.config import Config as Saml2Config

class SAMLAuth:
    def __init__(self, config):
        self.saml_config = Saml2Config()
        self.saml_config.load({
            'entityid': config.SAML_ENTITY_ID,
            'service': {
                'sp': {
                    'endpoints': {
                        'assertion_consumer_service': [
                            (config.SAML_ACS_URL, BINDING_HTTP_POST),
                        ],
                    },
                    'allow_unsolicited': True,
                    'authn_requests_signed': False,
                    'logout_requests_signed': True,
                    'want_assertions_signed': True,
                    'want_response_signed': False,
                },
            },
            'metadata': {
                'remote': [
                    {
                        'url': config.SAML_METADATA_URL,
                    },
                ],
            },
        })
        self.saml_client = Saml2Client(config=self.saml_config)

    async def initiate_login(self):
        """Initiate the SAML login process."""
        reqid, info = self.saml_client.prepare_for_authenticate()
        redirect_url = None
        for key, value in info['headers']:
            if key == 'Location':
                redirect_url = value
                break
        return redirect_url, reqid

    async def process_response(self, saml_response, request_id):
        """Process a SAML response and create the user session."""
        authn_response = self.saml_client.parse_authn_request_response(
            saml_response,
            BINDING_HTTP_POST,
            outstanding_queries={request_id: "/"}
        )
        if authn_response is None:
            raise ValueError("Invalid SAML response")
        # Extract user attributes
        user_info = authn_response.get_identity()
        return {
            "username": user_info.get("username", [None])[0],
            "email": user_info.get("email", [None])[0],
            "groups": user_info.get("groups", []),
        }
```
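
How these two methods hang together in the web layer is not shown above. A minimal FastAPI wiring sketch, assuming an `app` instance and a `saml_auth = SAMLAuth(config)` singleton; the route paths and in-memory request tracking are illustrative only:

```python
# Hypothetical routes wiring SAMLAuth into the login flow
from fastapi import Request
from fastapi.responses import RedirectResponse

pending_requests: dict = {}  # use Redis or a session store in production

@app.get("/saml/login")
async def saml_login():
    redirect_url, request_id = await saml_auth.initiate_login()
    pending_requests[request_id] = True   # remember the outstanding request
    return RedirectResponse(redirect_url)

@app.post("/saml/acs")
async def saml_acs(request: Request):
    form = await request.form()
    # pysaml2 validates the response against the outstanding request id;
    # taking the only pending one is illustrative, not production logic
    request_id = next(iter(pending_requests), None)
    user_info = await saml_auth.process_response(form["SAMLResponse"], request_id)
    pending_requests.pop(request_id, None)
    return {"user": user_info}
```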
### Data Protection & Privacy

#### 1. Data Encryption

```python
# app/security/encryption.py
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

class DataEncryption:
    def __init__(self, master_key: str = None):
        if master_key:
            self.key = self._derive_key(master_key)
        else:
            self.key = Fernet.generate_key()
        self.cipher_suite = Fernet(self.key)

    def _derive_key(self, password: str) -> bytes:
        """Derive an encryption key from a password."""
        # Set ENCRYPTION_SALT per deployment; the fallback is for development only
        salt = os.environ.get('ENCRYPTION_SALT', 'default_salt').encode()
        kdf = PBKDF2HMAC(
            algorithm=hashes.SHA256(),
            length=32,
            salt=salt,
            iterations=100000,
        )
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    def encrypt(self, data: str) -> str:
        """Encrypt sensitive data."""
        encrypted_data = self.cipher_suite.encrypt(data.encode())
        return base64.urlsafe_b64encode(encrypted_data).decode()

    def decrypt(self, encrypted_data: str) -> str:
        """Decrypt sensitive data."""
        decoded_data = base64.urlsafe_b64decode(encrypted_data.encode())
        decrypted_data = self.cipher_suite.decrypt(decoded_data)
        return decrypted_data.decode()

# Usage for sensitive test data
encryption = DataEncryption(os.environ.get('MASTER_KEY'))

async def store_test_credentials(username: str, password: str):
    """Store test credentials securely."""
    encrypted_username = encryption.encrypt(username)
    encrypted_password = encryption.encrypt(password)
    # Store in the database
    await db.store_credentials(encrypted_username, encrypted_password)

async def get_test_credentials(credential_id: str):
    """Retrieve and decrypt test credentials."""
    encrypted_creds = await db.get_credentials(credential_id)
    return {
        "username": encryption.decrypt(encrypted_creds.username),
        "password": encryption.decrypt(encrypted_creds.password)
    }
```
#### 2. Audit Logging

```python
# app/security/audit.py
import time
from datetime import datetime
from typing import Dict, Any

import structlog
from fastapi import Request

class AuditLogger:
    def __init__(self, config):
        self.config = config
        self.logger = structlog.get_logger("audit")

    async def log_action(
        self,
        user_id: str,
        action: str,
        resource: str,
        result: str,
        details: Dict[str, Any] = None
    ):
        """Log a user action for the audit trail."""
        audit_record = {
            "timestamp": datetime.utcnow().isoformat(),
            "user_id": user_id,
            "action": action,
            "resource": resource,
            "result": result,
            "details": details or {},
            "session_id": getattr(self, 'current_session_id', None),
            "ip_address": getattr(self, 'current_ip', None),
            "user_agent": getattr(self, 'current_user_agent', None)
        }

        # Emit to structured logging
        self.logger.info("audit_event", **audit_record)

        # Persist to the audit database
        await self._store_audit_record(audit_record)

    async def _store_audit_record(self, record: Dict[str, Any]):
        """Store an audit record in the database."""
        # Implementation depends on your database choice
        pass

audit_logger = AuditLogger(config)

# Audit middleware
async def audit_middleware(request: Request, call_next):
    """Middleware to capture audit information."""
    start_time = time.time()

    # Extract request information
    user_id = getattr(request.state, 'user_id', 'anonymous')
    action = f"{request.method} {request.url.path}"

    try:
        response = await call_next(request)
        # Log the successful action
        await audit_logger.log_action(
            user_id=user_id,
            action=action,
            resource=str(request.url),
            result="success",
            details={
                "status_code": response.status_code,
                "duration_ms": round((time.time() - start_time) * 1000, 2)
            }
        )
        return response
    except Exception as e:
        # Log the failed action
        await audit_logger.log_action(
            user_id=user_id,
            action=action,
            resource=str(request.url),
            result="error",
            details={
                "error": str(e),
                "duration_ms": round((time.time() - start_time) * 1000, 2)
            }
        )
        raise

app.middleware("http")(audit_middleware)
```
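
The `current_session_id`, `current_ip`, and `current_user_agent` attributes read via `getattr` above are never set in this snippet. One way to populate them from the incoming request, as a hedged sketch (the helper name is hypothetical, and a per-request context would be safer than mutating a shared logger under concurrency):

```python
# Illustrative: bind request context onto the audit logger before logging
def bind_request_context(logger: AuditLogger, request: Request) -> None:
    logger.current_session_id = request.cookies.get("session_id")
    logger.current_ip = request.client.host if request.client else None
    logger.current_user_agent = request.headers.get("user-agent")

# In audit_middleware, before calling log_action():
# bind_request_context(audit_logger, request)
```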
### Compliance Features

#### GDPR Compliance

```python
# app/compliance/gdpr.py
import hashlib
from datetime import datetime, timedelta
from typing import Dict

class GDPRCompliance:
    def __init__(self, config):
        self.config = config
        self.data_retention_days = config.DATA_RETENTION_DAYS

    async def anonymize_user_data(self, user_id: str):
        """Anonymize user data for GDPR compliance."""
        # Use a stable digest: Python's built-in hash() is salted per process
        anon_id = "anon_" + hashlib.sha256(user_id.encode()).hexdigest()[:16]
        anonymized_data = {
            "user_id": anon_id,
            "anonymized_at": datetime.utcnow(),
            "original_user_id": None  # Remove the reference
        }
        # Update all related records
        await self._anonymize_test_results(user_id, anonymized_data["user_id"])
        await self._anonymize_audit_logs(user_id, anonymized_data["user_id"])
        return anonymized_data

    async def export_user_data(self, user_id: str) -> Dict:
        """Export all user data for GDPR data portability."""
        user_data = {
            "user_info": await self._get_user_info(user_id),
            "test_results": await self._get_user_test_results(user_id),
            "audit_logs": await self._get_user_audit_logs(user_id),
            "exported_at": datetime.utcnow().isoformat()
        }
        return user_data

    async def delete_expired_data(self):
        """Delete data older than the retention period."""
        cutoff_date = datetime.utcnow() - timedelta(days=self.data_retention_days)
        # Delete old test results
        deleted_count = await self._delete_old_test_results(cutoff_date)
        return {
            "deleted_records": deleted_count,
            "cutoff_date": cutoff_date.isoformat()
        }
```
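
`delete_expired_data()` only helps if something calls it. A simple option is a background task that sweeps daily; a sketch, assuming the service runs a long-lived asyncio loop (the interval and task wiring are assumptions):

```python
import asyncio
import structlog

logger = structlog.get_logger("tinaa.compliance")
SWEEP_INTERVAL_SECONDS = 24 * 60 * 60  # once a day; adjust to your retention SLA

async def retention_sweeper(gdpr: GDPRCompliance):
    """Periodically purge data past the configured retention window."""
    while True:
        result = await gdpr.delete_expired_data()
        logger.info("retention_sweep_completed", **result)
        await asyncio.sleep(SWEEP_INTERVAL_SECONDS)

# At application startup:
# asyncio.create_task(retention_sweeper(GDPRCompliance(config)))
```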
## Scaling & Performance

### Horizontal Scaling Strategy

#### 1. Load Balancer Configuration

```nginx
# nginx.conf (http context)
# Rate-limiting zones must be declared at the http level
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

upstream tinaa_backend {
    least_conn;
    server tinaa-1:8765 max_fails=3 fail_timeout=30s;
    server tinaa-2:8765 max_fails=3 fail_timeout=30s;
    server tinaa-3:8765 max_fails=3 fail_timeout=30s;

    # Keep idle upstream connections open for reuse
    keepalive 32;
}

server {
    listen 80;
    listen 443 ssl http2;
    server_name tinaa.yourcompany.com;

    # SSL configuration
    ssl_certificate /etc/ssl/certs/tinaa.crt;
    ssl_certificate_key /etc/ssl/private/tinaa.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE+AESGCM:ECDHE+CHACHA20:DHE+AESGCM:DHE+CHACHA20:!aNULL:!MD5:!DSS;

    # Rate limiting
    limit_req zone=api burst=20 nodelay;

    # Compression
    gzip on;
    gzip_types application/json text/plain text/css application/javascript;

    location / {
        proxy_pass http://tinaa_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 300s;

        # Buffer settings
        proxy_buffering on;
        proxy_buffer_size 8k;
        proxy_buffers 16 8k;
    }

    # WebSocket support
    location /ws/ {
        proxy_pass http://tinaa_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }

    # Health check endpoint
    location /health {
        access_log off;
        proxy_pass http://tinaa_backend;
    }
}
```
#### 2. Redis Cluster Setup

```yaml
# redis-cluster.yml
version: '3.8'

services:
  redis-node-1:
    image: redis:7-alpine
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - ./redis-cluster.conf:/usr/local/etc/redis/redis.conf
      - redis-1-data:/data
    networks:
      - redis-cluster
  redis-node-2:
    image: redis:7-alpine
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - ./redis-cluster.conf:/usr/local/etc/redis/redis.conf
      - redis-2-data:/data
    networks:
      - redis-cluster
  redis-node-3:
    image: redis:7-alpine
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - ./redis-cluster.conf:/usr/local/etc/redis/redis.conf
      - redis-3-data:/data
    networks:
      - redis-cluster

networks:
  redis-cluster:
    driver: overlay

volumes:
  redis-1-data:
  redis-2-data:
  redis-3-data:
```

```conf
# redis-cluster.conf
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
```

Once the three nodes are up, form the cluster one time with `redis-cli --cluster create` against the node addresses; a Redis Cluster requires at least three masters.
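
Note that a clustered Redis speaks a different protocol than a single node: clients must follow MOVED/ASK redirects. The `DistributedCache` helper in the next subsection uses a plain connection; against this topology you would use a cluster-aware client instead. A minimal sketch using redis-py's cluster client (assumes redis-py 4.x or later):

```python
# Minimal sketch: connecting with a cluster-aware client
from redis.cluster import RedisCluster

# Any one node works as the entry point; the client discovers the rest
rc = RedisCluster(host="redis-node-1", port=6379)
rc.set("healthcheck", "ok")
print(rc.get("healthcheck"))
```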
### Performance Optimization

#### 1. Caching Strategy

```python
# app/cache/distributed.py
import hashlib
import json
import logging
import pickle
from typing import Any, Optional

import aioredis

logger = logging.getLogger("tinaa.cache")

class DistributedCache:
    def __init__(self, redis_url: str):
        self.redis_url = redis_url
        self.redis = None

    async def connect(self):
        self.redis = aioredis.from_url(
            self.redis_url,
            encoding="utf-8",
            decode_responses=False,
            max_connections=20
        )

    async def get(self, key: str) -> Optional[Any]:
        """Get a cached value."""
        try:
            cached_data = await self.redis.get(key)
            if cached_data:
                # NOTE: pickle is only safe when the cache holds trusted data
                return pickle.loads(cached_data)
        except Exception as e:
            logger.error(f"Cache get error: {e}")
        return None

    async def set(self, key: str, value: Any, ttl: int = 300):
        """Set a cached value with a TTL."""
        try:
            serialized_data = pickle.dumps(value)
            await self.redis.setex(key, ttl, serialized_data)
        except Exception as e:
            logger.error(f"Cache set error: {e}")

    async def invalidate_pattern(self, pattern: str):
        """Invalidate all keys matching a pattern."""
        try:
            keys = await self.redis.keys(pattern)
            if keys:
                await self.redis.delete(*keys)
        except Exception as e:
            logger.error(f"Cache invalidation error: {e}")

# Cache usage in services
cache = DistributedCache("redis://redis-cluster:6379")

async def cached_test_execution(test_config: dict):
    """Cache test execution results."""
    # Generate a cache key from the test configuration
    config_hash = hashlib.md5(
        json.dumps(test_config, sort_keys=True).encode()
    ).hexdigest()
    cache_key = f"test_result:{config_hash}"

    # Try the cache first
    cached_result = await cache.get(cache_key)
    if cached_result:
        return cached_result

    # Execute the test and cache the result
    result = await execute_test(test_config)
    await cache.set(cache_key, result, ttl=3600)  # Cache for 1 hour
    return result
```
#### 2. Connection Pool Management

```python
# app/database/pool.py
import aioredis
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from sqlalchemy.orm import sessionmaker

class DatabasePool:
    def __init__(self, config):
        self.config = config
        self.postgres_engine = None
        self.redis_pool = None

    async def initialize(self):
        """Initialize database connections."""
        # PostgreSQL connection pool
        self.postgres_engine = create_async_engine(
            self.config.POSTGRES_URL,
            pool_size=20,
            max_overflow=0,
            pool_pre_ping=True,
            pool_recycle=3600
        )
        # Redis connection pool
        self.redis_pool = aioredis.ConnectionPool.from_url(
            self.config.REDIS_URL,
            max_connections=50,
            retry_on_timeout=True
        )

    async def get_postgres_session(self) -> AsyncSession:
        """Get a PostgreSQL session."""
        async_session = sessionmaker(
            self.postgres_engine,
            class_=AsyncSession,
            expire_on_commit=False
        )
        return async_session()

    async def get_redis_connection(self):
        """Get a Redis connection."""
        return aioredis.Redis(connection_pool=self.redis_pool)

    async def close(self):
        """Close all connections."""
        if self.postgres_engine:
            await self.postgres_engine.dispose()
        if self.redis_pool:
            await self.redis_pool.disconnect()
```
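
The pool is only useful if it is opened and closed with the application's lifecycle. A sketch wiring it into FastAPI startup and shutdown hooks, assuming the `app` instance and a `config` object from earlier sections:

```python
db_pool = DatabasePool(config)

@app.on_event("startup")
async def open_pools():
    # Establish PostgreSQL and Redis pools before serving traffic
    await db_pool.initialize()

@app.on_event("shutdown")
async def close_pools():
    # Release connections cleanly on shutdown
    await db_pool.close()

# Inside request handlers:
# session = await db_pool.get_postgres_session()
# redis = await db_pool.get_redis_connection()
```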
## Monitoring & Observability

### Prometheus Metrics

```python
# app/monitoring/metrics.py
import time
from functools import wraps

from fastapi import Response
from prometheus_client import Counter, Histogram, Gauge, generate_latest

# Define metrics
REQUEST_COUNT = Counter('tinaa_requests_total', 'Total requests', ['method', 'endpoint', 'status'])
REQUEST_DURATION = Histogram('tinaa_request_duration_seconds', 'Request duration')
ACTIVE_TESTS = Gauge('tinaa_active_tests', 'Number of active tests')
BROWSER_INSTANCES = Gauge('tinaa_browser_instances', 'Number of browser instances')
TEST_RESULTS = Counter('tinaa_test_results_total', 'Test results', ['test_type', 'result'])

def track_metrics(func):
    """Decorator to track function execution metrics."""
    @wraps(func)
    async def wrapper(*args, **kwargs):
        start_time = time.time()
        ACTIVE_TESTS.inc()
        try:
            result = await func(*args, **kwargs)
            TEST_RESULTS.labels(test_type=func.__name__, result='success').inc()
            return result
        except Exception:
            TEST_RESULTS.labels(test_type=func.__name__, result='error').inc()
            raise
        finally:
            ACTIVE_TESTS.dec()
            REQUEST_DURATION.observe(time.time() - start_time)
    return wrapper

# Metrics endpoint
@app.get("/metrics")
async def get_metrics():
    """Prometheus metrics endpoint."""
    return Response(generate_latest(), media_type="text/plain")

# Usage
@track_metrics
async def run_exploratory_test(url: str):
    """Tracked exploratory test execution."""
    # Implementation
    pass
```
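
`REQUEST_COUNT` is declared above but nothing increments it, even though the Grafana dashboard below queries `tinaa_requests_total`. A small HTTP middleware can populate it; a sketch, assuming the FastAPI `app` from earlier sections:

```python
from fastapi import Request

@app.middleware("http")
async def count_requests(request: Request, call_next):
    """Record every request in the tinaa_requests_total counter."""
    response = await call_next(request)
    REQUEST_COUNT.labels(
        method=request.method,
        endpoint=request.url.path,
        status=str(response.status_code),
    ).inc()
    return response
```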
### Grafana Dashboard Configuration

```json
{
  "dashboard": {
    "title": "TINAA Production Dashboard",
    "panels": [
      {
        "title": "Request Rate",
        "type": "graph",
        "targets": [
          {
            "expr": "rate(tinaa_requests_total[5m])",
            "legendFormat": "{{method}} {{endpoint}}"
          }
        ]
      },
      {
        "title": "Response Time",
        "type": "graph",
        "targets": [
          {
            "expr": "histogram_quantile(0.95, rate(tinaa_request_duration_seconds_bucket[5m]))",
            "legendFormat": "95th percentile"
          }
        ]
      },
      {
        "title": "Active Tests",
        "type": "singlestat",
        "targets": [
          {
            "expr": "tinaa_active_tests"
          }
        ]
      },
      {
        "title": "Error Rate",
        "type": "graph",
        "targets": [
          {
            "expr": "rate(tinaa_test_results_total{result=\"error\"}[5m])",
            "legendFormat": "Error Rate"
          }
        ]
      }
    ]
  }
}
```
### Logging Strategy

```python
# app/logging/structured.py
import logging.config

import structlog
from pythonjsonlogger import jsonlogger

def configure_logging(config):
    """Configure structured logging."""
    LOGGING_CONFIG = {
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "json": {
                "()": jsonlogger.JsonFormatter,
                "format": "%(asctime)s %(name)s %(levelname)s %(message)s"
            }
        },
        "handlers": {
            "console": {
                "class": "logging.StreamHandler",
                "formatter": "json",
                "level": config.LOG_LEVEL
            },
            "file": {
                "class": "logging.FileHandler",
                "filename": "logs/tinaa.json",
                "formatter": "json",
                "level": config.LOG_LEVEL
            }
        },
        "loggers": {
            "tinaa": {
                "handlers": ["console", "file"],
                "level": config.LOG_LEVEL,
                "propagate": False
            }
        }
    }
    logging.config.dictConfig(LOGGING_CONFIG)

    # Configure structlog
    structlog.configure(
        processors=[
            structlog.stdlib.filter_by_level,
            structlog.stdlib.add_logger_name,
            structlog.stdlib.add_log_level,
            structlog.stdlib.PositionalArgumentsFormatter(),
            structlog.processors.TimeStamper(fmt="iso"),
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            structlog.processors.UnicodeDecoder(),
            structlog.processors.JSONRenderer()
        ],
        context_class=dict,
        logger_factory=structlog.stdlib.LoggerFactory(),
        cache_logger_on_first_use=True,
    )

# Usage throughout the application
logger = structlog.get_logger("tinaa.app")

async def process_test_request(request):
    logger.info(
        "test_request_started",
        url=request.url,
        test_type=request.test_type,
        user_id=request.user_id
    )
    try:
        result = await execute_test(request)
        logger.info(
            "test_request_completed",
            url=request.url,
            success=result.success,
            duration=result.duration
        )
        return result
    except Exception as e:
        logger.error(
            "test_request_failed",
            url=request.url,
            error=str(e),
            exc_info=True
        )
        raise
```
## Team Management

### User Roles and Permissions

Configure role-based access control for your teams:

```yaml
# config/roles.yaml
roles:
  admin:
    permissions: ['*']
    description: 'Full system access'
  test_engineer:
    permissions:
      - 'tests:create'
      - 'tests:read'
      - 'tests:execute'
      - 'reports:read'
    description: 'Create and run tests'
  viewer:
    permissions:
      - 'tests:read'
      - 'reports:read'
    description: 'Read-only access'
```
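
At request time these roles have to be resolved against the permission checks shown in the Security section. A sketch of loading `roles.yaml` and expanding the `'*'` wildcard, assuming PyYAML is available; the helper name is illustrative:

```python
import fnmatch
import yaml

with open("config/roles.yaml") as f:
    ROLES = yaml.safe_load(f)["roles"]

def role_has_permission(role: str, required: str) -> bool:
    """True if any of the role's permission patterns covers the required one."""
    patterns = ROLES.get(role, {}).get("permissions", [])
    return any(fnmatch.fnmatch(required, pattern) for pattern in patterns)

assert role_has_permission("admin", "users:manage")          # '*' matches everything
assert role_has_permission("test_engineer", "tests:execute")
assert not role_has_permission("viewer", "tests:execute")
```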
### Team Collaboration Features
- Shared test suites and resources
- Real-time collaboration on test creation
- Version control integration
- Code review workflows
## Integration Patterns

### CI/CD Integration

```yaml
# .github/workflows/test.yml
name: TINAA Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run TINAA Tests
        uses: tinaa/action@v1
        with:
          api-key: ${{ secrets.TINAA_API_KEY }}
          test-suite: 'regression'
```
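
If your CI system cannot use the action, you can call TINAA's HTTP API directly. A sketch using `requests` against the `/test/exploratory` endpoint shown earlier; the payload fields and response shape are illustrative assumptions:

```python
import os
import requests

TINAA_URL = os.environ["TINAA_URL"]      # e.g. https://tinaa.yourcompany.com
API_KEY = os.environ["TINAA_API_KEY"]

resp = requests.post(
    f"{TINAA_URL}/test/exploratory",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"url": "https://app.example.com", "test_suite": "regression"},  # illustrative payload
    timeout=300,
)
resp.raise_for_status()   # fail the CI job on a non-2xx response
print(resp.json())
```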
### Third-Party Integrations

- Slack notifications (example below)
- Jira issue creation
- PagerDuty alerts
- Confluence reports
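
As an example of the notification pattern, posting a test summary to a Slack incoming webhook takes only a few lines; the webhook URL and message text are assumptions:

```python
import os
import requests

def notify_slack(summary: str) -> None:
    """Post a test-run summary to a Slack incoming webhook."""
    webhook_url = os.environ["SLACK_WEBHOOK_URL"]  # configured per workspace
    requests.post(webhook_url, json={"text": summary}, timeout=10).raise_for_status()

notify_slack("TINAA regression suite finished: 142 passed, 3 failed")
```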
## Governance & Best Practices

### Testing Standards

1. **Test Naming Conventions**
   - Use descriptive names
   - Include feature/component prefix
   - Version test suites

2. **Code Review Process**
   - All tests must be reviewed
   - Automated quality checks
   - Performance benchmarks

3. **Documentation Requirements**
   - Test purpose and scope
   - Prerequisites and dependencies
   - Expected outcomes
### Compliance Frameworks
- SOC 2 Type II compliance
- GDPR data protection
- HIPAA for healthcare
- PCI DSS for payments
## Support & Maintenance

### Support Channels

1. **Enterprise Support**
   - 24/7 phone support
   - Dedicated account manager
   - Priority issue resolution
   - Custom SLAs

2. **Community Support**
   - GitHub discussions
   - Discord community
   - Stack Overflow
### Maintenance Best Practices
- Regular dependency updates
- Security patch management
- Performance monitoring
- Capacity planning
This guide covers production deployment, security, scaling, and monitoring for TINAA in enterprise environments, with practical implementation examples and enterprise-grade configurations in each section.