Frequently Asked Questions
What kind of data can I query with Warestack?
Warestack aggregates and normalizes operational data across your engineering ecosystem — GitHub, Slack, Linear, Jira, and more.
You can query everything from pull requests and deployment events to discussion threads, commits, and rule violations — all in one schema.
Instead of manually checking each platform, you can write a single prompt like:
“List PRs where files under infra/ were modified and the build failed.”
Warestack automatically correlates data from multiple sources and returns structured, timestamped insights that can be exported or reused in audits.
How are patterns created?
Warestack detects operational patterns using agentic AI that learns from commits, pull requests, workflows, and discussion data.
These learned patterns represent consistent developer behaviors or process signals — for example, long review delays, repeated test failures, or after-hours deploys.
Each pattern can then be used as a rule or condition in checks.
What does “pattern-enriched metadata” mean in Warestack?
Pattern-enriched metadata means Warestack attaches additional context to every code-related event — from pull requests and commits to CI/CD pipelines and discussions.
Instead of tracking just what happened (like a PR being merged), Warestack also captures how it happened — review latency, file types, revert count, and related discussions.
Warestack’s reporting engine interprets your query or prompt, finds relevant entities (PRs, commits, reviews), and generates a context-aware summary.
It combines deterministic SQL reasoning with neural summarization, so outputs are both explainable and reproducible.
Example:
“Show me all PRs that changed authentication code in the past month.”
Warestack returns both the high-level summary and the underlying records with timestamps and metadata.
What kind of metadata does Warestack attach to each event?
Each entity (PR, commit, issue, or deployment) is enriched with metrics such as:
- Review latency – how long a review took
- LOC change volume – code churn and size metrics
- Reviewer density – number and type of reviewers involved
- Risk and anomaly scores – based on learned historical data
- Cross-tool context – Slack mentions, Linear/Jira issues, CI test failures
This metadata supports cross-system reasoning and early detection of regressions or risky behavior.
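As an illustration, an enriched pull-request event might carry metadata along these lines — a hypothetical sketch only, with field names chosen for readability rather than taken from Warestack’s actual schema:

```yaml
# Hypothetical sketch of pattern-enriched metadata on a PR event.
# All field names are illustrative, not Warestack's documented schema.
event_type: pull_request
id: PR-482                      # example identifier reused from the correlation example
metadata:
  review_latency_hours: 36      # how long the review took
  loc_changed: 512              # code churn and size
  reviewer_density: 2           # number of reviewers involved
  risk_score: 0.71              # anomaly score learned from historical data
  cross_tool_context:
    slack_mentions: 3           # linked Slack discussion activity
    ci_failures: 1              # related CI test failures
```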
Can I access or verify the raw data behind every report?
Yes. Every LLM-generated report links to its raw JSON event data in the Warestack schema.
You can expand, download, or query it using deterministic SQL for validation.
This design makes reports explainable, auditable, and reproducible — suitable for compliance or root-cause analysis.
How does the YAML check system work?
Warestack allows you to define pattern checks declaratively in YAML. Each check enforces a rule or dependency — for example:
```yaml
- description: "Pull requests that modify .sql files must include a .migration.sql file."
  event_types: ["pull_request"]
  parameters:
    file_pattern_dependency:
      source_pattern: "*.sql"
      dependent_pattern: "*.migration.sql"
```

These checks can be activated from the UI, CLI, or API and are portable, version-controlled, and easy to reuse across repositories.
How does automatic scheduling and alerting work?
Any query or prompt can be turned into a scheduled report or alert.
Examples:
- “Send me every Friday a summary of PRs that touched auth/ or secrets/.”
- “Alert me if any PR exceeds 500 lines or stays open for more than 48 hours.”
Warestack runs these continuously and sends alerts through Slack or email.
You can define frequency, filters, and recipients — enabling proactive monitoring and continuous intelligence.
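A scheduled alert like the Friday summary above could be written down declaratively. The sketch below assumes a hypothetical `prompt`/`schedule`/`notify` syntax for illustration; it is not Warestack’s documented configuration format:

```yaml
# Hypothetical sketch: a prompt turned into a scheduled Slack report.
# Keys (prompt, schedule, notify) are illustrative, not a documented schema.
- description: "Weekly summary of PRs touching auth/ or secrets/"
  prompt: "Summarize PRs from the past week that modified files under auth/ or secrets/."
  schedule: "every Friday 09:00"
  notify:
    channel: slack
    recipients: ["#platform-team"]
```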
How are cross-tool correlations established?
Warestack maintains a shared reference graph using temporal and semantic keys (like PR IDs, commit hashes, and message references).
This allows Slack messages, Jira issues, or Linear tasks to be automatically linked to their related PRs or deployments.
Example:
A Slack message mentioning “fix in PR-482” is linked to the GitHub PR, merge commit, and deployment — forming a complete trace from discussion → code → production.
How are notifications and alerts triggered from detected patterns?
Each check can be configured to trigger actions when specific operational patterns occur.
Example:
If a rule detects that “PRs larger than 500 LOC took more than 3 days to review,”
Warestack can automatically notify reviewers via Slack, add a comment in Linear, or create a follow-up task.
These triggers make checks active guardrails, helping maintain consistency and accountability.
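The review-latency rule above could be sketched as a check with attached actions. The `actions` block here is a hypothetical extension of the YAML check format shown earlier, not a documented Warestack schema:

```yaml
# Hypothetical sketch of a check that triggers actions when it fires.
# The parameters and actions keys are illustrative, not a documented schema.
- description: "PRs larger than 500 LOC took more than 3 days to review"
  event_types: ["pull_request"]
  parameters:
    loc_threshold: 500
    review_latency_days: 3
  actions:
    - notify: slack            # ping the assigned reviewers
    - comment: linear          # add a comment on the linked Linear issue
    - create_followup_task: true
```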
What are the usage and data access limits?
All users can access agentic checks, reports, and queries.
Limits depend only on data volume and query frequency, based on your current plan.
How can teams query and integrate Warestack data?
All normalized and enriched data is exposed through REST and SQL APIs.
You can query with natural language or deterministic SQL for reproducible results.
Warestack also supports event streaming for dashboards and automation triggers.
Can Warestack’s pattern checks integrate with my existing tools?
Yes. Detected patterns can trigger actions across your tools, including:
- Posting alerts to Slack or Microsoft Teams
- Creating Linear or Jira issues automatically
- Commenting on GitHub pull requests
- Updating dashboards or exporting structured reports
Integrations can be configured via YAML, API, or through the Warestack UI.
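A YAML integration configuration for a detected pattern might look like the following — a sketch under assumed key names (`type`, `action`, and their values are illustrative, not Warestack’s documented options):

```yaml
# Hypothetical sketch of integration actions for a detected pattern;
# key names and values are illustrative, not a documented schema.
integrations:
  - type: slack
    action: post_alert
    channel: "#eng-alerts"
  - type: jira
    action: create_issue
    project: OPS
  - type: github
    action: comment_on_pr
```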
| Company | Check type | Description | Context factors | Severity |
|---|---|---|---|---|
| | Code Review | Non-trivial changes require design docs | Developer experience, change complexity, project phase | Medium |
| | Code Review | All changes need team review; core systems need domain experts | System impact, risk level | High |
| Netflix | Service Dependencies | Service interface changes need owner + affected teams | Backward compatibility, migration strategy | High |
| Netflix | Performance | Performance-impacting changes need performance team review | User impact, scale requirements | High |
| Uber | Database | Schema changes need data team + migration scripts | Migration complexity, rollback risk | Critical |
| Uber | API | API changes must maintain backward compatibility | API impact, client impact | High |
| Microsoft | Security | Security changes need security team + threat modeling | File patterns, risk assessment | Critical |
| Microsoft | Privacy | User data changes need privacy team review | GDPR compliance, data minimization | Critical |
| Amazon | Reliability | Reliability-impacting changes need incident procedures | System criticality, monitoring coverage | High |
| Amazon | Cost | Cost-impacting changes need finance team review | Resource utilization, scaling requirements | Medium |
| Meta | Performance | User-facing changes need performance testing | Feature complexity, scale requirements | High |
| Meta | Scalability | Scalability changes need infrastructure team review | Resource requirements, growth projection | High |
| Apple | UX | UI changes need design team + UX testing | User experience, accessibility | High |
| Apple | Accessibility | All user-facing changes must comply with accessibility standards | WCAG compliance, assistive technology | High |
| Airbnb | Testing | Code changes must maintain test coverage | Code complexity, feature scope | Medium |
| Airbnb | Integration | Multi-service changes need integration testing | Service impact, dependency changes | High |