Releases: aiming-lab/SimpleMem

v0.2.0 — Omni-SimpleMem: Multimodal Memory

03 Apr 04:11

What's New

🧠 Omni-SimpleMem: Multimodal Memory

SimpleMem now supports text, image, audio & video memory.

  • +411% F1 on the LoCoMo benchmark over previous baselines
  • +214% F1 on Mem-Gallery
  • 5.81 queries/s retrieval throughput (3.5× faster)
  • Built on three principles: Selective Ingestion, Progressive Retrieval, Knowledge Graph Augmentation
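
The three principles above can be sketched in miniature. This is a hypothetical illustration only, not the Omni-SimpleMem API: `MemoryUnit`, `OmniMemorySketch`, and all method names here are invented for exposition, and real salience scoring and retrieval would use learned models rather than entity sets.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryUnit:
    modality: str                      # "text", "image", "audio", or "video"
    content: str                       # compact description or transcript
    entities: set = field(default_factory=set)

class OmniMemorySketch:
    """Toy sketch of the three principles; names are illustrative, not the real API."""

    def __init__(self, salience_threshold: float = 0.5):
        self.units: list[MemoryUnit] = []
        self.graph: dict[str, set] = {}    # entity -> co-mentioned entities
        self.threshold = salience_threshold

    def ingest(self, unit: MemoryUnit, salience: float) -> bool:
        # Selective Ingestion: keep only units above a salience threshold.
        if salience < self.threshold:
            return False
        self.units.append(unit)
        # Knowledge Graph Augmentation: link entities that co-occur in a unit.
        for e in unit.entities:
            self.graph.setdefault(e, set()).update(unit.entities - {e})
        return True

    def retrieve(self, query_entities: set, k: int = 3) -> list[MemoryUnit]:
        # Progressive Retrieval, stage 1: direct entity overlap.
        hits = [u for u in self.units if u.entities & query_entities]
        if len(hits) >= k:
            return hits[:k]
        # Stage 2: expand the query via graph neighbours and retry.
        expanded = set(query_entities)
        for e in query_entities:
            expanded |= self.graph.get(e, set())
        return [u for u in self.units if u.entities & expanded][:k]
```

The point of the sketch is the ordering: cheap filtering at ingest time keeps the store small, and retrieval only falls back to the (more expensive) graph expansion when the direct match is too sparse.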

Other Changes

  • README restructured into parallel SimpleMem (text) and Omni-SimpleMem (multimodal) sections
  • Added a roadmap for upcoming multimodal infrastructure (cross-session, MCP, Docker)
  • All 13 language READMEs updated
  • Bug fixes and stability improvements

Full Changelog: v0.1.0...v0.2.0

v0.1.0 — Initial Release (Text Only)

10 Mar 03:55
7da777f

An efficient memory framework for LLM agents, built on semantically lossless compression.

Features

  • Semantic Structured Compression — converts dialogues into compact, atomic memory units
  • Online Semantic Synthesis — consolidates related memory fragments to eliminate redundancy
  • Intent-Aware Retrieval Planning — dynamic search intent inference with parallel multi-view retrieval
  • 43.24% F1 on the LoCoMo benchmark (a 26% improvement over Mem0)
  • ~550 tokens per query — 30× fewer than full-context approaches
  • MCP protocol support — compatible with Claude, Cursor, LM Studio, and more
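
A minimal sketch of the first two features, compression and synthesis, under stated assumptions: the real SimpleMem uses an LLM to extract atomic memory units from dialogue, whereas this toy stand-in parses pre-tagged `subject | predicate | object` strings. `AtomicFact`, `compress_dialogue`, and `synthesize` are invented names for illustration, not the library's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AtomicFact:
    subject: str
    predicate: str
    obj: str

def compress_dialogue(turns: list[str]) -> list[AtomicFact]:
    # Semantic Structured Compression (toy stand-in): SimpleMem extracts
    # facts with an LLM; here each turn is already "subject | predicate | object".
    facts = []
    for turn in turns:
        s, p, o = (part.strip() for part in turn.split("|"))
        facts.append(AtomicFact(s, p, o))
    return facts

def synthesize(store: set, new_facts: list[AtomicFact]) -> list[AtomicFact]:
    # Online Semantic Synthesis (simplified): fold new facts into the store,
    # dropping exact duplicates so redundant fragments are consolidated.
    added = [f for f in new_facts if f not in store]
    store.update(added)
    return added
```

Because each unit is a small, hashable fact rather than raw dialogue, the store stays compact, which is how queries can be answered from ~550 tokens of memory instead of the full conversation history.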