Memori
SYSTEM_ACTIVE: V1.0.57

The SQL-Native AI Memory Fabric for JavaScript & TypeScript

✨ Inspired by the memorilabs.ai Python library

memori-js is an active memory layer that lives inside your application, automatically managing context for your AI agents.

ZERO-CONFIG · TYPE-SAFE · LOCAL-FIRST

20ms LATENCY
SQLITE / PG
NODE / BUN
$ npm install memori-js
core/agent.ts

import { Memori } from 'memori-js';

// Initialize active memory layer
const memori = new Memori();

// Patch LLM client (OpenAI/Anthropic)
memori.llm.register(client);

// Context is auto-injected on every call
const res = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: query }]
});
THE_PROBLEM

If you're building an AI app today, you usually have to wire up every step yourself (sketched in code after this list):

1. Set up a vector DB (Pinecone, Qdrant, Weaviate...)
2. Manually chunk and embed user input
3. Query the DB
4. Inject user context into the system prompt
5. Call the LLM
6. Save the new conversation back to the DB
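
A rough sketch of what those six steps look like in practice. The VectorStore interface below is a hypothetical stand-in for a Pinecone/Qdrant/Weaviate client; the OpenAI embedding and chat calls are standard SDK methods. (Step 1, provisioning the vector DB itself, happens before any of this code runs.)

typescript
import OpenAI from 'openai';

// Hypothetical stand-in for a vector DB client (Pinecone, Qdrant, ...)
interface VectorStore {
  query(embedding: number[], topK: number): Promise<string[]>;
  upsert(embedding: number[], text: string): Promise<void>;
}

async function chatWithManualRag(client: OpenAI, store: VectorStore, query: string) {
  // 2. Embed the user input
  const emb = await client.embeddings.create({
    model: 'text-embedding-3-small',
    input: query,
  });
  const vector = emb.data[0].embedding;

  // 3. Query the DB for relevant context
  const context = await store.query(vector, 5);

  // 4 + 5. Inject context into the system prompt, then call the LLM
  const res = await client.chat.completions.create({
    model: 'gpt-4',
    messages: [
      { role: 'system', content: `Relevant context:\n${context.join('\n')}` },
      { role: 'user', content: query },
    ],
  });

  // 6. Save the new conversation back to the DB
  const answer = res.choices[0].message.content ?? '';
  await store.upsert(vector, `Q: ${query}\nA: ${answer}`);
  return answer;
}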
THE_SOLUTION

With memori-js, you just do this:

// 1 line to register memory
memori.llm.register(client);
// Call your LLM as normal
await client.chat.completions.create({ ... });
That's it. Memory is now automatic.
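
Put together as a runnable file, a minimal end-to-end sketch (assuming the OpenAI SDK; the query string is illustrative):

typescript
import OpenAI from 'openai';
import { Memori } from 'memori-js';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const memori = new Memori(); // zero-config: creates a local memori.db

// 1 line to register memory
memori.llm.register(client);

// Call your LLM as normal; relevant context is injected automatically
const res = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'What did we decide about the schema?' }],
});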

Compare legacy keyword search vs. semantic vector search: 52% faster with semantic search (1064ms → 515ms).

Legacy v1.0 · 1064ms · Keyword / Basic SQL · None

[Legacy SQL Match] Found keyword 'what' in unrelated record #8492 (Match: 1.5%)
System Error: Index out of sync. (Match: 0.5%)

Current v1.0.57 · 515ms · Vector Similarity (Cosine) · HNSW / IVFFlat (via sqlite-vec)

The speed of light is approximately 299,792 km/s. (Relevance: 0.252)
Photosynthesis converts light into chemical energy. (Relevance: -0.071)
The Great Wall of China is visible from space (mostly a myth). (Relevance: -0.095)
The Earth is the third planet from the Sun. (Relevance: -0.100)
DNA carries genetic instructions for life. (Relevance: -0.110)
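
For intuition: the relevance scores above are cosine similarities between the query embedding and each stored memory's embedding. A minimal sketch of that math in plain TypeScript, not Memori's internals (which delegate indexing to sqlite-vec):

typescript
// Cosine similarity: 1 = same direction, 0 = unrelated, -1 = opposite
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored memories against a query embedding, most relevant first
function rank(query: number[], memories: { text: string; embedding: number[] }[]) {
  return memories
    .map((m) => ({ text: m.text, relevance: cosineSimilarity(query, m.embedding) }))
    .sort((a, b) => b.relevance - a.relevance);
}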

WHY_MEMORI

Memori vs. Standard Vector DBs

Stop building complex RAG pipelines. Let Memori handle context injection automatically.

Feature     | Standard Vector DB                                  | 🧠 Memori-JS
------------|-----------------------------------------------------|--------------------------------------------------------------------------
Setup       | Requires Docker, API keys, or cloud infrastructure. | Zero-Config. Creates a local memori.db SQLite file instantly.
Scalability | Manual migration needed.                            | Pluggable. Scale from local SQLite to Postgres/Supabase seamlessly.
Integration | You write the RAG pipeline logic manually.          | Auto-Augmentation. Patches the LLM client to inject memory automatically.
Complexity  | High (Embeddings, Chunking, Retrieval).             | Low. Handles embedding generation and retrieval internally.
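
To illustrate the Scalability row: the idea is that application code stays the same while the storage backend changes. This is a hypothetical configuration sketch; the connectionString option name is an assumption, not confirmed memori-js API.

typescript
import { Memori } from 'memori-js';

// Local-first default: a SQLite file next to your app
const local = new Memori();

// Hypothetical: the exact option name is an assumption, not confirmed API
const hosted = new Memori({
  connectionString: process.env.DATABASE_URL, // e.g. postgres://...
});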

CORE_CAPABILITIES

Explore the primitives that power stateful agents.

Context Injection via Middleware

ACTIVE_MEMORY

Memori lives inside your app, actively managing context for your agents.

typescript
// Auto-injects relevant context
memori.llm.register(openai);
await openai.chat.completions.create({ /* ... */ });

PROVIDER_AGNOSTIC

Works seamlessly with OpenAI, Google GenAI, and Anthropic. One line to register, zero config to maintain.

typescript
import OpenAI from "openai";
import { Memori } from "memori-js";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const memori = new Memori({ googleApiKey: process.env.GOOGLE_API_KEY });

// Register for auto-augmentation
memori.llm.register(client, "openai");

// Now, every call is memory-augmented!
const response = await client.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "What is my favorite color?" }],
});
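
The same one-line registration should extend to other providers. In the sketch below, the "anthropic" provider key is an assumption mirroring the "openai" key above:

typescript
import Anthropic from "@anthropic-ai/sdk";
import { Memori } from "memori-js";

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const memori = new Memori();

// Provider key "anthropic" is an assumption mirroring "openai" above
memori.llm.register(anthropic, "anthropic");

// Calls through the Anthropic SDK work as normal
const msg = await anthropic.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "What is my favorite color?" }],
});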
"Memory should be invisible."

Most "Memory" libraries are just complex wrappers around vector stores. Memori-JS takes a different approach: as a developer, you shouldn't care how the relevant context is found, only that your agent has it.

SQLite/Postgres Native · Client-Side Patching · Zero Configuration