SOC Metrics and Dashboards
Measure and visualize SOC performance with meaningful metrics and dashboards.
Last updated: February 2026
Purpose and Scope
Metrics help SOC teams understand their performance, identify improvement areas, and demonstrate value to leadership. This playbook covers selecting meaningful metrics, building useful dashboards, and avoiding common measurement pitfalls.
Prerequisites
- Data sources: SIEM, ticketing system, EDR, and other tools that capture operational data
- Baseline data: Historical data to compare against
- Stakeholder input: Understanding of what leadership wants to know
- Visualization tools: Dashboarding capability in SIEM or separate tool
Metric Categories
Detection Metrics
Measure the effectiveness of threat detection:
- Mean time to detect (MTTD): Average time from threat activity to detection (computed in the sketch after this list)
- Detection coverage: Percentage of MITRE ATT&CK techniques with detection rules
- True positive rate: Percentage of alerts that are actual threats
- False positive rate: Percentage of alerts that are not threats
- Alert volume: Total alerts generated over time
- Detection sources: Which tools or rules generate detections
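A minimal sketch of the core calculations, assuming triaged alerts carry a threat-activity timestamp, a detection timestamp, and an analyst disposition; the field names and values are illustrative, not any particular SIEM's schema:

```python
from datetime import datetime, timedelta

# Illustrative alert records; field names are assumptions, not a SIEM schema.
alerts = [
    {"activity": datetime(2026, 2, 1, 9, 0),
     "detected": datetime(2026, 2, 1, 9, 45),
     "disposition": "true_positive"},
    {"activity": datetime(2026, 2, 2, 14, 0),
     "detected": datetime(2026, 2, 2, 14, 20),
     "disposition": "false_positive"},
    {"activity": datetime(2026, 2, 3, 8, 0),
     "detected": datetime(2026, 2, 3, 10, 0),
     "disposition": "true_positive"},
]

# MTTD: average gap between threat activity and detection.
gaps = [a["detected"] - a["activity"] for a in alerts]
mttd = sum(gaps, timedelta()) / len(gaps)

# True/false positive rates over all triaged alerts.
tp_rate = sum(a["disposition"] == "true_positive" for a in alerts) / len(alerts)

print(f"MTTD: {mttd}")
print(f"True positive rate: {tp_rate:.0%}, false positive rate: {1 - tp_rate:.0%}")
```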
Response Metrics
Measure the speed and effectiveness of response:
- Mean time to respond (MTTR): Time from detection to initial response action
- Mean time to contain (MTTC): Time from detection to threat containment (computed in the sketch after this list)
- Mean time to resolve: Time from detection to incident closure (often also abbreviated MTTR; pick one expansion of the acronym and use it consistently in your reporting)
- Escalation rate: Percentage of alerts escalated to incidents
- Incidents by severity: Distribution across severity levels
- Containment success rate: Percentage of incidents successfully contained
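A companion sketch for the response side, assuming incident records carry detection and containment timestamps; the escalation figures are made up for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical incident lifecycle timestamps.
incidents = [
    {"detected": datetime(2026, 2, 1, 9, 45),
     "contained": datetime(2026, 2, 1, 11, 0)},
    {"detected": datetime(2026, 2, 3, 10, 0),
     "contained": datetime(2026, 2, 3, 16, 30)},
]

# MTTC: average gap between detection and containment.
gaps = [i["contained"] - i["detected"] for i in incidents]
mttc = sum(gaps, timedelta()) / len(gaps)

# Escalation rate: alerts promoted to incidents over all triaged alerts.
alerts_triaged, alerts_escalated = 250, 12
escalation_rate = alerts_escalated / alerts_triaged

print(f"MTTC: {mttc}, escalation rate: {escalation_rate:.1%}")
```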
Operational Metrics
Measure SOC operations and efficiency:
- Alert queue depth: Number of alerts awaiting triage
- Analyst utilization: Time spent on investigation vs. other tasks
- Alerts per analyst: Workload distribution
- Triage time: Average time to classify an alert
- Ticket aging: How long incidents remain open
- SLA compliance: Percentage of tickets meeting response time targets (see the sketch after this list)
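A sketch of an SLA compliance check, assuming per-severity first-response targets and ticket timestamps; the targets and field names are assumptions, and still-unanswered tickets are counted as breaches for simplicity:

```python
from datetime import datetime

# Assumed first-response SLA targets, in minutes, by severity.
SLA_MINUTES = {"critical": 15, "high": 60, "medium": 240}

# Hypothetical tickets; first_response is None while a ticket still waits.
tickets = [
    {"created": datetime(2026, 2, 10, 9, 0),
     "first_response": datetime(2026, 2, 10, 9, 10),
     "severity": "critical"},
    {"created": datetime(2026, 2, 10, 11, 0),
     "first_response": datetime(2026, 2, 10, 13, 0),
     "severity": "high"},
    {"created": datetime(2026, 2, 10, 15, 0),
     "first_response": None,
     "severity": "medium"},
]

def within_sla(ticket):
    """True when the first response landed inside the severity's target."""
    if ticket["first_response"] is None:
        return False  # unanswered counts as a breach in this simplification
    elapsed_min = (ticket["first_response"] - ticket["created"]).total_seconds() / 60
    return elapsed_min <= SLA_MINUTES[ticket["severity"]]

compliance = sum(within_sla(t) for t in tickets) / len(tickets)
print(f"SLA compliance: {compliance:.0%}")
```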
Improvement Metrics
Track continuous improvement efforts:
- Rules tuned: Detection rules adjusted to reduce noise
- New detections: Detection rules added
- Playbooks created: Documented response procedures
- Automation implemented: Tasks automated via SOAR
- Post-incident action completion: Percentage of improvement items completed
Selecting the Right Metrics
Align with Goals
Choose metrics that support organizational objectives:
- If the goal is faster detection, focus on MTTD
- If alert fatigue is a problem, track false positive rate
- If the goal is demonstrating value, show threats blocked and risks reduced
- If the goal is improving efficiency, measure automation adoption
Balance Quantity and Quality
- Volume metrics (alert count) alone do not show effectiveness
- Combine with quality metrics (true positive rate)
- Track trends over time, not just point-in-time values
Avoid Vanity Metrics
Some metrics look impressive but do not drive improvement:
- "Threats blocked" without context of severity or impact
- Alert volume that does not distinguish signal from noise
- Metrics that can be gamed without improving security
Dashboard Design
Executive Dashboard
For leadership audiences, show:
- Overall security posture (risk score or summary)
- Incident trends (count by severity over time)
- Key metrics vs. targets (MTTD, MTTR)
- Notable incidents or threats
- Improvement progress
Keep it high level: 5 to 7 key visualizations maximum.
Operational Dashboard
For SOC managers and leads:
- Current queue status and workload
- Alert and incident volume by source
- Analyst performance and utilization
- SLA status and aging tickets
- Detection rule performance
Analyst Dashboard
For individual analysts:
- Personal queue and assigned items
- Recent alerts by category
- Quick access to investigation tools
- Active incidents requiring attention
Building Effective Visualizations
- Use appropriate chart types: Line charts for trends, bar charts for comparisons, tables for detail
- Include context: Show targets, baselines, and historical comparisons
- Highlight anomalies: Use color to draw attention to outliers (the sketch after this list shows both a target line and outlier highlighting)
- Keep it simple: Avoid cluttered visualizations
- Update automatically: Dashboards should refresh without manual effort
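As one illustration of these principles, a minimal matplotlib sketch: a weekly MTTD trend with a dashed target line for context and red markers on the weeks that breach it. The values are made up:

```python
import matplotlib.pyplot as plt

# Made-up weekly MTTD values (hours) and a target baseline.
weeks = ["W1", "W2", "W3", "W4", "W5", "W6"]
mttd_hours = [5.2, 4.8, 4.5, 7.9, 4.1, 3.8]
target = 5.0

fig, ax = plt.subplots()
ax.plot(weeks, mttd_hours, marker="o", label="MTTD")
ax.axhline(target, linestyle="--", color="gray", label="Target")

# Color the weeks that breach the target so outliers draw the eye.
over = [(w, v) for w, v in zip(weeks, mttd_hours) if v > target]
if over:
    xs, ys = zip(*over)
    ax.scatter(xs, ys, color="red", zorder=3, label="Over target")

ax.set_ylabel("Hours")
ax.set_title("Mean time to detect, weekly trend vs. target")
ax.legend()
plt.show()
```

Feeding the same chart from a live SIEM query instead of hard-coded values turns it into a self-refreshing panel.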
Reporting Cadence
Real-Time
- Operational dashboards for SOC floor
- Alert queue status
- Active incident status
Daily
- Shift handoff reports
- Daily activity summary
- Anomaly highlights
Weekly
- SOC performance review
- Trend analysis
- Notable incidents summary
Monthly
- Executive reporting
- Goal progress review
- Improvement initiative status
Quarterly
- Board-level reporting
- Strategic review
- Budget and resource planning input
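Whatever the cadence, report generation should be scripted rather than assembled by hand (see the manual-reporting pitfall below). A hypothetical sketch that renders a weekly summary from precomputed figures; the numbers and output path are illustrative:

```python
from datetime import date

# Hypothetical weekly figures, e.g. pulled from SIEM and ticketing APIs.
summary = {
    "week_ending": date(2026, 2, 13),
    "alerts_triaged": 1240,
    "incidents_opened": 9,
    "mttd_hours": 4.1,
    "mttr_hours": 2.3,
    "sla_compliance": 0.94,
}

report = (
    f"SOC weekly summary, week ending {summary['week_ending']}\n"
    f"Alerts triaged: {summary['alerts_triaged']}\n"
    f"Incidents opened: {summary['incidents_opened']}\n"
    f"MTTD: {summary['mttd_hours']} h | MTTR: {summary['mttr_hours']} h\n"
    f"SLA compliance: {summary['sla_compliance']:.0%}\n"
)

# Write to a file; in practice the same text might be mailed or posted to chat.
with open("soc_weekly_summary.txt", "w") as f:
    f.write(report)
print(report)
```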
Data Collection Challenges
- Inconsistent data: Ensure alerts and incidents are logged consistently
- Manual entry: Automate data collection where possible
- Tool silos: Aggregate data from multiple sources into one schema (see the normalization sketch after this list)
- Data quality: Validate and clean data before reporting
- Historical gaps: Start collecting data now for future baselines
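For the tool-silo problem, a common approach is to normalize each source into a shared schema before any reporting runs. A sketch with two hypothetical feeds whose field names and severity scales differ:

```python
# Two hypothetical feeds with different field names and severity scales.
siem_alerts = [
    {"ts": "2026-02-10T09:00:00Z", "rule": "PS-Encoded-Command", "sev": 3},
]
edr_alerts = [
    {"eventTime": "2026-02-10T09:05:00Z", "detection": "CredentialDump",
     "severity": "high"},
]

# Map the SIEM's numeric severity scale onto one shared vocabulary.
SIEM_SEV = {1: "low", 2: "medium", 3: "high"}

def from_siem(r):
    return {"time": r["ts"], "name": r["rule"],
            "severity": SIEM_SEV[r["sev"]], "source": "siem"}

def from_edr(r):
    return {"time": r["eventTime"], "name": r["detection"],
            "severity": r["severity"], "source": "edr"}

unified = [from_siem(r) for r in siem_alerts] + [from_edr(r) for r in edr_alerts]
for alert in unified:
    print(alert)
```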
Using Metrics for Improvement
- Establish baselines for key metrics
- Set realistic improvement targets
- Review metrics regularly in team meetings
- Investigate significant changes or anomalies (see the baseline sketch after this list)
- Connect metrics to specific improvement actions
- Celebrate progress and adjust targets as needed
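One way to make "investigate significant changes" concrete is to compare each new value against a rolling baseline and flag large deviations. A sketch using mean and standard deviation; the two-sigma threshold and the numbers are illustrative:

```python
import statistics

# Illustrative weekly MTTD baseline (hours) and the newest value.
baseline = [5.2, 4.8, 4.5, 5.0, 4.7, 4.9, 5.1, 4.6]
this_week = 7.9

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
z = (this_week - mean) / stdev

# Flag anything beyond roughly two standard deviations for investigation.
if abs(z) > 2:
    print(f"Investigate: {this_week} h is {z:.1f} sigma from the {mean:.1f} h baseline")
else:
    print(f"Within the normal range (z = {z:.1f})")
```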
Common Pitfalls
- Measuring everything: Too many metrics dilute focus
- Gaming metrics: Optimizing numbers instead of outcomes
- Ignoring context: Numbers without interpretation are misleading
- Static targets: Not adjusting goals as capability matures
- Manual reporting: Unsustainable effort that degrades over time
Benchmarking
Compare your metrics to industry benchmarks:
- Verizon Data Breach Investigations Report for incident trends
- SANS SOC surveys for operational benchmarks
- Peer organizations in your industry
- Vendor benchmark data (with appropriate skepticism)
References
- SANS SOC Survey: sans.org/white-papers
- NIST Cybersecurity Framework: Measurement guidance
- MITRE ATT&CK (coverage metrics): attack.mitre.org
- Verizon DBIR: verizon.com/business/resources/reports/dbir