Lore is a transparent LLM proxy that adds three-tier memory to any AI coding client. Point
Claude Code, Cursor, or any Anthropic/OpenAI-compatible tool at localhost:3207 and
your agent remembers everything — across sessions.
The Workflow
The gateway proxy sits between your AI client and the upstream API. It captures every message — no client changes needed, just change the base URL.
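The capture step can be sketched in a few lines of Node. Everything below is illustrative, not Lore's actual implementation: the `tapMessages` helper, the upstream URL, and the header handling are assumptions; only the Anthropic Messages API request shape (`messages: [{role, content}]`) follows the public spec.

```typescript
import http from "node:http";

// Assumed upstream; a real gateway would make this configurable.
const UPSTREAM = "https://api.anthropic.com";

// Hypothetical helper: pull the conversation turns out of an
// Anthropic-style request body so they can be stored before forwarding.
export function tapMessages(body: string): { role: string; text: string }[] {
  const parsed = JSON.parse(body);
  return (parsed.messages ?? []).map((m: any) => ({
    role: m.role,
    text: typeof m.content === "string" ? m.content : JSON.stringify(m.content),
  }));
}

// Minimal transparent proxy: read the body, tap it, relay it unchanged.
const server = http.createServer(async (req, res) => {
  let body = "";
  for await (const chunk of req) body += chunk;

  tapMessages(body); // hand off to the memory tiers (storage omitted here)

  const upstream = await fetch(UPSTREAM + req.url, {
    method: req.method,
    headers: { "content-type": "application/json" },
    body: body || undefined,
  });
  res.writeHead(upstream.status);
  res.end(await upstream.text());
});

// server.listen(3207); // the port the client's base URL points at
```

Because the proxy only reads the body and relays it, the client sees the exact upstream response; this is what makes the integration zero-change.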
No more compaction — a gradient context manager replaces it entirely. Distillations run as batch background requests at 50% lower cost, and an embedded vector search lets your agent recall any detail on demand.
Messages are incrementally distilled into timestamped observation logs — preserving file paths, error messages, exact decisions, and bug fixes instead of lossy summaries.
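A toy version of that distillation pass is sketched below. The `Observation` shape and the keyword heuristic are assumptions for illustration; the point is that matching lines are kept verbatim, timestamped, and priority-tagged rather than paraphrased.

```typescript
// Hypothetical observation-log entry; field names are illustrative,
// not Lore's actual schema.
type Observation = {
  ts: string;                    // ISO timestamp of the source message
  priority: "high" | "normal";   // priority tag for later curation
  text: string;                  // verbatim detail: path, error, decision
};

// Naive distillation: keep lines carrying operational detail
// (file paths, errors, fixes, decisions) exactly as written.
export function distill(message: string, ts: string): Observation[] {
  const detail = /(\/[\w./-]+|error|fix|decid)/i;
  return message
    .split("\n")
    .filter((line) => detail.test(line))
    .map((line) => ({
      ts,
      priority: /error/i.test(line) ? "high" : "normal",
      text: line.trim(),
    }));
}
```

Chit-chat lines fall out of the log entirely, while an error line is kept word-for-word and flagged high priority, which is the property a lossy summary cannot give you.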
Core Tech
Lore preserves the operational details that summarization destroys: file paths, error messages, exact commands, and why decisions were made. Distillation, not summarization.
Architecture
Three tiers: temporal storage (FTS5-indexed messages), distilled observation logs (timestamped, priority-tagged), and long-term curated knowledge, drawing on memory research from Nuum and Mastra.
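The tiering can be modeled as below. This is a minimal in-memory sketch under assumed names (`ThreeTierMemory`, `recall`); the substring match stands in for the FTS5 query the real gateway would run against SQLite.

```typescript
// Illustrative three-tier model, not Lore's actual API:
// "temporal" = raw messages, "observation" = distilled logs,
// "knowledge" = long-term curated facts.
type Tier = "temporal" | "observation" | "knowledge";

interface MemoryItem {
  tier: Tier;
  ts: string;
  text: string;
}

export class ThreeTierMemory {
  private items: MemoryItem[] = [];

  add(tier: Tier, ts: string, text: string): void {
    this.items.push({ tier, ts, text });
  }

  // Search the most-curated tier first, falling back to raw messages,
  // so a recall hits the densest representation available.
  recall(query: string): MemoryItem | undefined {
    const order: Tier[] = ["knowledge", "observation", "temporal"];
    const q = query.toLowerCase();
    for (const tier of order) {
      const hit = this.items.find(
        (i) => i.tier === tier && i.text.toLowerCase().includes(q),
      );
      if (hit) return hit;
    }
    return undefined;
  }
}
```

Searching curated tiers first keeps recall cheap and precise, while the raw temporal tier remains available when a detail was never distilled.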
Integration
Transparent proxy on port 3207 — supports Anthropic and OpenAI protocols. Point Claude Code, Cursor, Copilot, Windsurf, or any compatible client at the gateway. Zero client changes.
# Start the gateway
$ npx @loreai/gateway

# Point your client at it
$ export ANTHROPIC_BASE_URL=http://localhost:3207
Join the waitlist and be the first to know when new features ship. No spam; unsubscribe anytime.