Agentic memory framework for LLMs and AI Agents!
Instead of stuffing context or relying only on vector search, MemU lets agents read and reason over memory files directly.
Memory is not an index.
It’s something the model can understand.
MemU ingests multimodal inputs, extracts structured textual memory items, and autonomously organizes them into thematic Markdown files.
How memory is structured:
Raw resources → memory items → memory category files
Documents, conversations, images, and audio are preserved in their original form, without deletion or modification. Facts are then extracted and organized into human-readable memory category files.
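The raw-resources → memory-items → category-files pipeline can be sketched roughly as below. This is an illustrative toy, not MemU's actual API: `MemoryItem` and `write_category_files` are hypothetical names invented here.

```python
from dataclasses import dataclass

# Hypothetical sketch of the extraction pipeline, NOT MemU's real API:
# facts are extracted from raw resources and grouped into Markdown files.

@dataclass
class MemoryItem:
    fact: str      # structured textual fact extracted from a raw resource
    source: str    # pointer back to the original, unmodified resource
    category: str  # thematic bucket, e.g. "preferences", "projects"

def write_category_files(items: list[MemoryItem]) -> dict[str, str]:
    """Group extracted memory items into human-readable Markdown category files."""
    files: dict[str, str] = {}
    for item in items:
        body = files.setdefault(item.category, f"# {item.category}\n")
        # Each fact keeps a trace back to its raw resource for traceability.
        files[item.category] = body + f"- {item.fact} (source: {item.source})\n"
    return files

items = [
    MemoryItem("User prefers concise answers", "chat-2024-05-01", "preferences"),
    MemoryItem("Project deadline is Friday", "email-17", "projects"),
]
print(write_category_files(items)["preferences"])
```

Because the output is plain Markdown, the model can read a category file directly instead of querying an opaque index.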
Key features:
• Dual-mode retrieval, including LLM-based (non-embedding) search for higher accuracy
• File-system based memory where each category is a Markdown file
• Hierarchical memory layers that preserve traceability
• Native multimodal memory for text, images, audio, and video
• Lightweight and developer-friendly, no heavy graph constraints
• Fully configurable prompts for high extensibility
Why this architecture matters:
Most memory systems force developers to decide what matters.
MemU lets the agent decide.
It learns what to remember, promotes frequently used knowledge, and reorganizes memory as usage evolves. Retrieval works top-down and falls back gracefully when needed.
The result is better temporal reasoning, fewer hallucinations, and memory that actually scales across sessions.
The best part? It’s 100% open source. GitHub: NevaMind-AI/memU
Labels:
News
