
Frequently Asked Questions

What kind of data can I query with Warestack?

Warestack aggregates and normalizes operational data across your engineering ecosystem — GitHub, Slack, Linear, Jira, and more.
You can query everything from pull requests and deployment events to discussion threads, commits, and rule violations — all in one schema.

Instead of manually checking each platform, you can write a single prompt like:
“List PRs where files under infra/ were modified and the build failed.”

Warestack automatically correlates data from multiple sources and returns structured, timestamped insights that can be exported or reused in audits.

How are patterns created?

Warestack detects operational patterns using agentic AI that learns from commits, pull requests, workflows, and discussion data.
These learned patterns represent consistent developer behaviors or process signals — for example, long review delays, repeated test failures, or after-hours deploys.
Each pattern can then be used as a rule or condition in checks.
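As an illustration, a learned pattern such as long review delays could be expressed as a check condition. The YAML below is a hypothetical sketch: the field names (`review_latency_threshold` and friends) are illustrative, not Warestack's actual schema.

```yaml
# Hypothetical sketch: using a learned "review delay" pattern as a check condition.
# Field names are illustrative, not Warestack's actual schema.
- description: "Flag pull requests whose first review took longer than 2 days."
  event_types: ["pull_request"]
  parameters:
    review_latency_threshold: "48h"
```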

What does “pattern-enriched metadata” mean in Warestack?

Warestack’s reporting engine interprets your query or prompt, finds relevant entities (PRs, commits, reviews), and generates a context-aware summary.
It combines deterministic SQL reasoning with neural summarization, so outputs are both explainable and reproducible.

Pattern-enriched metadata means Warestack attaches additional context to every code-related event — from pull requests and commits to CI/CD pipelines and discussions.
Instead of tracking just what happened (such as a PR being merged), Warestack also captures how it happened — review latency, file types, revert count, and related discussions.

Example:
“Show me all PRs that changed authentication code in the past month.”
Warestack returns both the high-level summary and the underlying records with timestamps and metadata.

What kind of metadata does Warestack attach to each event?

Each entity (PR, commit, issue, or deployment) is enriched with metrics such as:

  • Review latency – how long a review took
  • LOC change volume – code churn and size metrics
  • Reviewer density – number and type of reviewers involved
  • Risk and anomaly scores – based on learned historical data
  • Cross-tool context – Slack mentions, Linear/Jira issues, CI test failures

This metadata supports cross-system reasoning and early detection of regressions or risky behavior.
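For example, an enriched pull-request event might carry metadata along these lines. This is an illustrative sketch only; the field names and issue key are hypothetical, not the exact Warestack schema.

```yaml
# Illustrative sketch of pattern-enriched metadata on a merged PR.
# Field names and the issue key are hypothetical; the actual schema may differ.
event_type: pull_request
action: merged
metadata:
  review_latency: "26h"        # how long the review took
  loc_changed: 512             # code churn volume
  reviewer_count: 2            # reviewer density
  risk_score: 0.71             # learned risk/anomaly score
  cross_tool:
    slack_mentions: 3
    linked_issues: ["OPS-114"] # hypothetical Linear/Jira key
    ci_failures: 1
```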

Can I access or verify the raw data behind every report?

Yes. Every LLM-generated report links to its raw JSON event data in the Warestack schema.
You can expand, download, or query it using deterministic SQL for validation.

This design makes reports explainable, auditable, and reproducible — suitable for compliance or root-cause analysis.

How does the YAML check system work?

Warestack allows you to define pattern checks declaratively in YAML. Each check enforces a rule or dependency — for example:

```yaml
- description: "Pull requests that modify .sql files must include a .migration.sql file."
  event_types: ["pull_request"]
  parameters:
    file_pattern_dependency:
      source_pattern: "*.sql"
      dependent_pattern: "*.migration.sql"
```

These checks can be activated from the UI, CLI, or API and are portable, version-controlled, and easy to reuse across repositories.

How does automatic scheduling and alerting work?

Any query or prompt can be turned into a scheduled report or alert.

Examples:

  • “Send me every Friday a summary of PRs that touched auth/ or secrets/.”
  • “Alert me if any PR exceeds 500 lines or stays open for more than 48 hours.”

Warestack runs these continuously and sends alerts through Slack or email.
You can define frequency, filters, and recipients — enabling proactive monitoring and continuous intelligence.
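Declaratively, the second alert above might look something like this. The keys are hypothetical, shown only to illustrate how frequency, filters, and recipients could fit together:

```yaml
# Hypothetical sketch of a scheduled alert; key names are illustrative.
- description: "Alert on PRs over 500 lines or open longer than 48 hours."
  event_types: ["pull_request"]
  schedule: "continuous"
  parameters:
    loc_threshold: 500
    open_duration_threshold: "48h"
  notify:
    channels: ["slack", "email"]
    recipients: ["#eng-alerts"]   # illustrative Slack channel
```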

How are cross-tool correlations established?

Warestack maintains a shared reference graph using temporal and semantic keys (like PR IDs, commit hashes, and message references).
This allows Slack messages, Jira issues, or Linear tasks to be automatically linked to their related PRs or deployments.

Example:
A Slack message mentioning “fix in PR-482” is linked to the GitHub PR, merge commit, and deployment — forming a complete trace from discussion → code → production.
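The resulting trace can be pictured as linked records keyed by shared identifiers. This is a simplified, hypothetical representation of the reference graph, not an actual data export:

```yaml
# Simplified, hypothetical view of the reference graph for the example above.
slack_message:
  text: "fix in PR-482"
  links_to: pull_request
pull_request:
  id: PR-482
  links_to: merge_commit
merge_commit:
  links_to: deployment
deployment:
  status: deployed
```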

How are notifications and alerts triggered from detected patterns?

Each check can be configured to trigger actions when specific operational patterns occur.

Example:
If a rule detects that "PRs larger than 500 LOC took more than 3 days to review,"
Warestack can automatically notify reviewers via Slack, add a comment in Linear, or create a follow-up task.

These triggers make checks active guardrails, helping maintain consistency and accountability.
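A check with triggers attached could be sketched like this. The action names (`notify_slack` and the rest) are hypothetical, shown only to illustrate the shape of such a configuration:

```yaml
# Hypothetical sketch: a check that fires actions when its pattern matches.
# Action names are illustrative, not Warestack's actual configuration keys.
- description: "PRs larger than 500 LOC took more than 3 days to review."
  event_types: ["pull_request"]
  parameters:
    loc_threshold: 500
    review_latency_threshold: "72h"
  actions:
    - notify_slack: { channel: "#code-review" }
    - comment_linear: true
    - create_followup_task: true
```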

What are the usage and data access limits?

All users can access agentic checks, reports, and queries.
Limits depend only on data volume and query frequency, based on your current plan.

How can teams query and integrate Warestack data?

All normalized and enriched data is exposed through REST and SQL APIs.
You can query with natural language or deterministic SQL for reproducible results.
Warestack also supports event streaming for dashboards and automation triggers.

Can Warestack’s pattern checks integrate with my existing tools?

Yes. Detected patterns can trigger actions across your tools, including:

  • Posting alerts to Slack or Microsoft Teams
  • Creating Linear or Jira issues automatically
  • Commenting on GitHub pull requests
  • Updating dashboards or exporting structured reports

Integrations can be configured via YAML, API, or through the Warestack UI.

| Company | Check type | Description | Context factors | Severity |
| --- | --- | --- | --- | --- |
| Google | Code Review | Non-trivial changes require design docs | Developer experience, change complexity, project phase | Medium |
| Google | Code Review | All changes need team review; core systems need domain experts | System impact, risk level | High |
| Netflix | Service Dependencies | Service interface changes need owner + affected teams | Backward compatibility, migration strategy | High |
| Netflix | Performance | Performance-impacting changes need performance team review | User impact, scale requirements | High |
| Uber | Database | Schema changes need data team + migration scripts | Migration complexity, rollback risk | Critical |
| Uber | API | API changes must maintain backward compatibility | API impact, client impact | High |
| Microsoft | Security | Security changes need security team + threat modeling | File patterns, risk assessment | Critical |
| Microsoft | Privacy | User data changes need privacy team review | GDPR compliance, data minimization | Critical |
| Amazon | Reliability | Reliability-impacting changes need incident procedures | System criticality, monitoring coverage | High |
| Amazon | Cost | Cost-impacting changes need finance team review | Resource utilization, scaling requirements | Medium |
| Meta | Performance | User-facing changes need performance testing | Feature complexity, scale requirements | High |
| Meta | Scalability | Scalability changes need infrastructure team review | Resource requirements, growth projection | High |
| Apple | UX | UI changes need design team + UX testing | User experience, accessibility | High |
| Apple | Accessibility | All user-facing changes must comply with accessibility standards | WCAG compliance, assistive technology | High |
| Airbnb | Testing | Code changes must maintain test coverage | Code complexity, feature scope | Medium |
| Airbnb | Integration | Multi-service changes need integration testing | Service impact, dependency changes | High |
© 2024, Warestack