Most infrastructure tools see your stack for the first time every time you log in. RubixKube remembers. The Memory Engine is what turns each incident into lasting knowledge your team and your agents can lean on. It is private to your workspace, not shared across tenants, and gets sharper the longer it runs.
The Memory Engine powers the “eternal context” idea on the RubixKube homepage. Every signal, session, and resolution builds a deeper model of your system. That model belongs to you.

What the Memory Engine stores

Incident resolutions

Every RCA report, the action that resolved it, and the verification outcome.

Operator context

Short notes operators add on resolution (“peak traffic repeats every Tuesday”). These are gold for future matches.

Chat sessions

Investigation threads, the questions asked, and which answers proved useful.

Graph deltas

Topology changes over time. What services existed when, and what depended on what.

Rejections

Which recommendations your team turned down, and why. The system learns what not to suggest.

Policies

Guardian decisions and their outcomes, so policy drift stays measurable.
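
The record types above could be modeled roughly as tagged entries scoped to a workspace. This is a minimal sketch for orientation only; the names and fields are illustrative assumptions, not RubixKube's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class RecordKind(Enum):
    RESOLUTION = "resolution"          # RCA report + fix + verification outcome
    OPERATOR_NOTE = "operator_note"    # short context notes added on resolution
    CHAT_SESSION = "chat_session"      # investigation threads and useful answers
    GRAPH_DELTA = "graph_delta"        # topology changes over time
    REJECTION = "rejection"            # turned-down recommendations, with reasons
    POLICY_DECISION = "policy_decision"  # Guardian decisions and outcomes

@dataclass
class MemoryRecord:
    kind: RecordKind
    workspace_id: str                  # memory is private per workspace
    created_at: datetime
    payload: dict = field(default_factory=dict)

note = MemoryRecord(
    kind=RecordKind.OPERATOR_NOTE,
    workspace_id="ws-demo",            # hypothetical workspace id
    created_at=datetime(2025, 1, 7),
    payload={"text": "peak traffic repeats every Tuesday"},
)
```

The `workspace_id` field reflects the tenancy rule described later: every record carries its scope, so nothing can match across customers.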

Why memory compounds

Every incident produces new connections in the graph. Three compounding effects are worth naming explicitly.
1. Pattern recognition sharpens

The second time a similar signal shape appears, the RCA Pipeline starts with the previous causal chain as a hypothesis. Investigation time drops.
2. Recommendation quality rises

Rejected recommendations are weighted down. Applied-and-verified recommendations are weighted up. Your team’s style shapes the defaults.
3. Context survives staff turnover

Tribal knowledge gets written into the memory instead of trapped in one engineer’s head. When they move on, the context stays.
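
The second effect, feedback-driven weighting, can be sketched with a simple multiplicative update. RubixKube's actual weighting scheme is not documented here; this only illustrates the idea that rejections push a recommendation down and applied-and-verified outcomes push it up:

```python
def update_weight(weight: float, outcome: str) -> float:
    """Adjust a recommendation's weight from operator feedback.

    Multiplicative updates with a cap and a floor are one simple
    choice; the constants here are illustrative assumptions.
    """
    if outcome == "applied_and_verified":
        return min(weight * 1.25, 10.0)   # weight up, capped
    if outcome == "rejected":
        return max(weight * 0.5, 0.01)    # weight down, floored
    return weight                          # no feedback, no change

w = 1.0
w = update_weight(w, "rejected")              # team turned it down
w = update_weight(w, "applied_and_verified")  # later, a verified success
```

Because the penalty is larger than the boost, a recommendation the team keeps rejecting fades from the defaults quickly, which matches the "learns what not to suggest" behaviour described above.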

A concrete example

An OOMKilled event in payments-api. First time RubixKube sees it, the RCA walks the full causal chain from scratch: memory curves, traffic patterns, recent deploys, code changes. Ten minutes of signal correlation. Ninety days later, the same shape appears in a different namespace. The Memory Engine surfaces the original RCA, the fix that worked, and the operator note (“raise the limit from 512Mi to 1Gi, campaign traffic is bursty”). Investigation time drops from ten minutes to thirty seconds. Action is approved, verification lands inside the next stabilisation window. The second incident took less effort than the first. The third takes less than the second. That is the compounding.
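
The cross-namespace match in this example hinges on comparing the shape of a signal rather than its exact location. A toy sketch of that idea, with made-up fields and history (not RubixKube's matching logic):

```python
def signature(event: dict) -> tuple:
    # Shape = what failed and on what kind of workload, deliberately
    # ignoring namespace so matches carry across environments.
    return (event["reason"], event["workload"], event["container"])

# One prior incident, as the Memory Engine might recall it.
history = [{
    "event": {"reason": "OOMKilled", "workload": "Deployment",
              "container": "payments-api", "namespace": "prod"},
    "fix": "raise memory limit 512Mi -> 1Gi",
    "note": "campaign traffic is bursty",
}]

def recall(event: dict) -> list:
    """Return prior incidents whose signal shape matches this event."""
    return [h for h in history if signature(h["event"]) == signature(event)]

# Ninety days later, same shape in a different namespace.
new_event = {"reason": "OOMKilled", "workload": "Deployment",
             "container": "payments-api", "namespace": "staging"}
matches = recall(new_event)
```

Here `matches` surfaces the original fix and the operator note immediately, instead of re-running the full causal walk, which is where the ten-minutes-to-thirty-seconds drop comes from.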

Privacy and tenancy

The Memory Engine is scoped to your workspace. Nothing is shared across tenants. There is no cross-pollination of operator notes or incident history between customers.
The Free tier keeps 7 days of history, Business keeps 30 days, and Enterprise retention is unlimited. You can export or delete memory at any time.
Signals that contain secrets or personal data can be excluded from memory by rule. The Observer supports per-environment redaction.
RCAs, sessions, and resolution notes export as Markdown or JSON. The graph itself exports as a structured dump for your archive.
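
Per-environment redaction, as described above, amounts to scrubbing matching patterns before a signal enters memory. A minimal sketch, assuming a rule set keyed by environment (the rule syntax here is hypothetical, not the Observer's actual configuration):

```python
import re

# Hypothetical redaction rules: patterns scrubbed per environment
# before a signal is written to memory.
REDACTION_RULES = {
    "production": [re.compile(r"(?i)(password|token|secret)=\S+")],
}

def redact(env: str, line: str) -> str:
    """Apply the environment's redaction rules to one signal line."""
    for pattern in REDACTION_RULES.get(env, []):
        line = pattern.sub("[REDACTED]", line)
    return line

safe = redact("production", "retry failed: token=abc123")
```

Environments without rules pass signals through untouched, so redaction cost is only paid where secrets actually appear.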

How to get the most out of memory

Small habits make the memory far more useful.

Write short resolution notes

One or two sentences on what really worked, why it worked, and any context that would not be obvious to a newcomer.

Reject politely

When a recommendation is wrong, reject with a reason. The memory uses the reason to tune future suggestions.

Rename recurring incidents

Give repeating issues a clear name (“campaign-tuesday-memory-spike”). Future matches surface faster.

Link to runbooks

Attach links to internal runbooks on the RCA. Next incident of the same shape shows them automatically.

Common questions

What does the Memory Engine use under the hood?

It uses embeddings, plus a graph model that tracks causal relationships and operator feedback. The two stay in sync, so similarity search and relationship queries both work against the same underlying data.

Can I query memory directly?

Yes, through Chat. Questions like “have we seen this before” or “what did we do last time X happened” are first-class. The CLI supports the same.

What happens to memory for resources that no longer exist?

Decommissioned resources fall out of the active Knowledge Graph but stay in memory for retention purposes. Historical RCAs still reference them correctly, so post-mortems of past incidents do not break.

Does memory carry across environments?

Yes. If a pattern learned on staging applies to production, the memory surfaces it. If a pattern from one AWS account applies to another, the same holds. Scoping stays inside your workspace.
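
The embeddings-plus-graph pairing means two query styles run against the same records: nearest-neighbour similarity (“have we seen this before”) and relationship lookups (“which incidents touched this service”). A toy sketch with made-up vectors and edges, not RubixKube's internals:

```python
from math import sqrt

# Each record carries both an embedding and graph edges.
records = {
    "rca-001": {"vector": [0.9, 0.1], "touches": ["payments-api", "redis"]},
    "rca-002": {"vector": [0.1, 0.9], "touches": ["ingest-worker"]},
}

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def most_similar(query_vec: list) -> str:
    """Similarity search: 'have we seen this before?'"""
    return max(records, key=lambda k: cosine(records[k]["vector"], query_vec))

def incidents_touching(service: str) -> list:
    """Relationship query: which incidents involved this service?"""
    return [k for k, r in records.items() if service in r["touches"]]
```

Because both functions read the same `records`, an update to one record is immediately visible to both query paths, which is the "stay in sync" property the answer above describes.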

Knowledge Graph

The live model of your stack that the Memory Engine references.

Root Cause Analysis

How RCAs leverage prior art from memory.