
How it Works

Understand the primitives that power Memori's active state management.

1. Active Memory Layer

Unlike a traditional vector database, where you push and pull vectors manually, Memori acts as middleware: it sits between your application code and the LLM, intercepting requests to inject relevant context.

App Code ──► Memori Layer ──► LLM

2. Auto-Augmentation

This is the "magic" part. When you call register(client), Memori monkey-patches the client's completion methods to add a pre-processing and a post-processing step.
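The monkey-patching pattern itself can be sketched in a few lines. This is illustrative only: the `Client` shape and the `pre`/`post` hook signatures are assumptions, not Memori's actual internals.

```typescript
// A minimal sketch of the monkey-patching pattern register() relies on.
// The Client shape and hook names are illustrative, not Memori's real API.
type Message = { role: string; content: string };
type Client = { complete: (messages: Message[]) => Promise<string> };

function register(
  client: Client,
  pre: (m: Message[]) => Message[], // runs before the request (retrieval + injection)
  post: (m: Message[], reply: string) => void, // runs after (storage)
): void {
  const original = client.complete.bind(client);
  client.complete = async (messages) => {
    const augmented = pre(messages); // pre-processing step
    const reply = await original(augmented);
    post(messages, reply); // post-processing step
    return reply;
  };
}
```

Because the original method is captured and re-bound, the application keeps calling `client.complete(...)` exactly as before; the augmentation is transparent.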

Step 1: Retrieval (Pre-flight). The user's prompt is embedded and compared against the local vector index to find relevant memories.

Step 2: Injection. The retrieved context is inserted into the system message, invisible to the end user but visible to the model.

Step 3: Storage (Post-flight). The resulting conversation turn is saved, embedded, and indexed for future reference.
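The retrieval step hinges on vector similarity. A toy version, with hand-made vectors standing in for a real embedding model, might look like:

```typescript
// Toy cosine-similarity retrieval over an in-memory index.
// In practice the vectors would come from an embedding model; here they
// are supplied by hand purely for illustration.
type Entry = { text: string; vec: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Return the k entries most similar to the query vector.
function topK(query: number[], index: Entry[], k: number) {
  return index
    .map((e) => ({ text: e.text, score: cosine(query, e.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The highest-scoring entries are what gets injected into the system message in Step 2.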

3. Attribution & Scoping

For multi-tenant applications, you must ensure User A never sees User B's memories. Memori handles this via Attribution.

```typescript
// Global scope (default)
const memori = new Memori();

// User scope
memori.attribution("user_123");
await memori.addMemory("My name is John");

// Switch user
memori.attribution("user_456");
const memories = await memori.search("What is my name?");
// -> Returns [] (Empty, because John's memory is isolated)
```
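Under the hood, scoping like this amounts to partitioning the memory store by an attribution key, so a search in one scope can never touch another scope's data. The class below is a minimal sketch of that idea, not Memori's actual implementation; its naive substring search stands in for real vector retrieval.

```typescript
// A minimal sketch of attribution-based scoping: memories are partitioned
// by scope key, so searches in one scope never see another scope's data.
// Illustrative only; not Memori's real implementation.
class ScopedStore {
  private byScope = new Map<string, string[]>();
  private scope = "global";

  attribution(id: string): void {
    this.scope = id;
  }

  add(memory: string): void {
    const bucket = this.byScope.get(this.scope) ?? [];
    bucket.push(memory);
    this.byScope.set(this.scope, bucket);
  }

  search(query: string): string[] {
    // Naive word-overlap match, restricted to the current scope only.
    const q = query.toLowerCase();
    return (this.byScope.get(this.scope) ?? []).filter((m) =>
      m.toLowerCase().split(/\W+/).some((w) => w.length > 0 && q.includes(w)),
    );
  }
}
```

The isolation guarantee falls out of the data layout: `user_456`'s bucket simply does not contain `user_123`'s entries, so no filtering at query time is needed.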