Persistent Memory for AI Coding Agents

Let your AI remember your full project.

Lore is a transparent LLM proxy that adds three-tier memory to any AI coding client. Point Claude Code, Cursor, or any Anthropic/OpenAI-compatible tool at localhost:3207 and your agent remembers everything — across sessions.

Lore · @loreai/gateway · Anthropic + OpenAI · Local SQLite + FTS5

85% Coding Recall Accuracy
7.4× Cheaper per Correct Answer
19× Compression Ratio

The Workflow

How Lore scales knowledge

01

Intercept

The gateway proxy sits between your AI client and the upstream API. It captures every message — no client changes needed, just a different base URL.
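The intercept step can be sketched as a pure function: record the request's messages, then hand back the upstream URL to forward to unchanged. This is an illustrative sketch, not Lore's actual implementation; the names `intercept`, `UPSTREAM`, and `ChatRequest` (and the model string) are assumptions.

```typescript
// Hypothetical sketch of the intercept step in a gateway proxy.
const UPSTREAM = "https://api.anthropic.com";

interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
}

// Capture every message for the memory tiers, then return the upstream URL
// the request is forwarded to untouched: the client sees a normal API.
function intercept(
  path: string,
  body: ChatRequest,
  store: ChatRequest["messages"][]
): string {
  store.push(body.messages); // capture for later distillation
  return `${UPSTREAM}${path}`; // forward unchanged
}

// A client pointed at localhost:3207 sends an ordinary request:
const store: ChatRequest["messages"][] = [];
const url = intercept(
  "/v1/messages",
  {
    model: "claude-sonnet", // illustrative model name
    messages: [{ role: "user", content: "fix the login redirect bug" }],
  },
  store
);
// url is the unchanged upstream endpoint; store now holds one capture
```

Because the proxy only reads the body before forwarding it verbatim, the client needs zero code changes.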

02

Context Management

No more compaction — a gradient context manager replaces it entirely. Distillations run as batch background requests at 50% lower cost, and an embedded vector search lets your agent recall any detail on demand.
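One way to picture a gradient context manager: the newest messages stay verbatim while older ones are swapped for their distilled observations, so context shrinks gradually instead of being compacted in one lossy pass. The `buildContext` function and the window heuristic below are assumptions for illustration, not Lore's internals.

```typescript
// Hypothetical sketch of a gradient over context detail by message age.
interface Msg {
  text: string; // full original message
  distilled: string; // compact observation derived from it
}

// Keep the most recent `verbatimWindow` messages in full; represent
// everything older by its distilled observation.
function buildContext(history: Msg[], verbatimWindow: number): string[] {
  return history.map((m, i) =>
    history.length - i <= verbatimWindow ? m.text : m.distilled
  );
}

const history: Msg[] = [
  { text: "long debugging transcript", distilled: "obs: fixed null deref in auth.ts" },
  { text: "long refactor transcript", distilled: "obs: renamed User to Account" },
  { text: "what should I do next?", distilled: "obs: asked for next step" },
];

// Oldest two entries appear as observations; the newest stays verbatim.
const ctx = buildContext(history, 1);
```

Vector search over the stored originals would then let the agent pull any elided detail back in on demand.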

03

Distill

Messages are incrementally distilled into timestamped observation logs — preserving file paths, error messages, exact decisions, and bug fixes instead of lossy summaries.
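A toy distillation pass might pull file paths and error lines out of a message and stamp them as observations, keeping exact operational detail rather than a paraphrase. The regexes and `Observation` shape here are illustrative assumptions, not Lore's distillation logic.

```typescript
// Hypothetical sketch: extract operational detail into timestamped observations.
interface Observation {
  ts: string; // ISO timestamp
  kind: "path" | "error";
  detail: string; // preserved verbatim, not summarized
}

function distill(message: string, now: Date): Observation[] {
  const ts = now.toISOString();
  const obs: Observation[] = [];
  // Preserve file paths exactly (toy pattern for a few extensions).
  for (const m of message.match(/[\w.\/-]+\.(ts|js|py|rs)/g) ?? []) {
    obs.push({ ts, kind: "path", detail: m });
  }
  // Preserve error lines exactly.
  for (const line of message.split("\n")) {
    if (/error/i.test(line)) obs.push({ ts, kind: "error", detail: line.trim() });
  }
  return obs;
}

const out = distill(
  "TypeError: x is undefined\nfixed in src/auth/session.ts",
  new Date("2025-01-01T00:00:00Z")
);
// out holds one "error" and one "path" observation, both timestamped
```

The point of the sketch: the error text and the path survive verbatim, where a summary would likely collapse both into "fixed a bug".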

Core Tech

Operational Intelligence

Lore preserves the operational details that summarization destroys: file paths, error messages, exact commands, and why decisions were made. Distillation, not summarization.

Architecture

Three-Tier Memory

Temporal storage (FTS5-indexed messages), distilled observation logs (timestamped, priority-tagged), and long-term curated knowledge — following Nuum and Mastra's research.
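The three tiers above can be sketched as a type plus a routing rule: raw traffic lands in temporal storage, distilled notes in the observation log, and only curated knowledge is promoted to long-term. Tier names and the `route` function are assumptions for illustration, not Lore's schema.

```typescript
// Hypothetical sketch of the three-tier memory data model.
type Tier = "temporal" | "observation" | "longterm";

interface MemoryRecord {
  tier: Tier;
  text: string;
}

// Route content to a tier by how processed it is: raw captures stay in
// temporal storage, distilled notes go to the observation log, and
// curated knowledge is promoted to the long-term store.
function route(kind: "raw" | "distilled" | "curated", text: string): MemoryRecord {
  const tier: Tier =
    kind === "raw" ? "temporal" :
    kind === "distilled" ? "observation" :
    "longterm";
  return { tier, text };
}

const note = route("distilled", "obs: auth bug fixed in session.ts");
// note.tier === "observation"
```

In practice the temporal tier would back this with an FTS5 index for keyword recall, while the observation log carries timestamps and priority tags as described above.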

Integration

Works with Any AI Client

Transparent proxy on port 3207 — supports Anthropic and OpenAI protocols. Point Claude Code, Cursor, Copilot, Windsurf, or any compatible client at the gateway. Zero client changes.

# Start the gateway
$ npx @loreai/gateway

# Point your client at it
$ export ANTHROPIC_BASE_URL=http://localhost:3207

Get early access to Lore.

Join the waitlist and be the first to know when new features ship.

No spam. Unsubscribe anytime.
