Solving the Agentic Memory "Hoarding" Problem with Biological Pruning #725
Posted by sachitrafa in Show and tell
Agentic AI memory is a hot topic, but we are looking at it the wrong way.
I’ve always viewed agentic memory as a dynamic programming problem: we should store sub-problem results so the agent can reach a final solution without recomputation. Current "hoarder" architectures, however, treat memory like an infinite hard drive, saving every raw log. This leads to massive token bloat and a collapsing signal-to-noise ratio.
In most cases, we don’t need the result of every sub-problem forever. We need the relationships between facts, and a way to prune the noise.
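To make the dynamic programming framing concrete, here is a minimal sketch of memoization with a bounded cache: sub-problem results are reused instead of recomputed, and the least-recently-used ones are evicted rather than hoarded. The `plan_step` function and its cost counter are hypothetical stand-ins for an expensive agent sub-task, not part of YourMemory's actual API.

```python
from functools import lru_cache

calls = {"count": 0}  # tracks how often the expensive work actually runs

@lru_cache(maxsize=2)  # bounded cache: old sub-results get evicted, not hoarded
def plan_step(task: str) -> str:
    # stand-in for an expensive sub-problem (an LLM call, a tool run, ...)
    calls["count"] += 1
    return f"result::{task}"

plan_step("parse-spec")   # computed
plan_step("parse-spec")   # reused from cache, no recomputation
plan_step("write-tests")  # computed
plan_step("deploy")       # computed; evicts the LRU entry ("parse-spec")
plan_step("parse-spec")   # recomputed after eviction
```

LRU eviction is just one pruning policy; the point is that a memory layer needs *some* policy rather than unbounded accumulation.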
That is why I built YourMemory.
It is a biologically inspired persistent memory engine (available as an MCP server and API) that mimics how human memory works. Instead of being just another vector database, it uses an auto-pruning mechanism based on the Ebbinghaus forgetting curve.
Why this approach?
The Benchmarks
I tested this against the LoCoMo dataset (1,534 multi-session QA pairs) to see how it stacks up against traditional memory layers. The results suggest that "smarter forgetting" actually leads to better recall:
By focusing on high-signal data and pruning the rest, we achieved nearly 2x the accuracy of established tools.
I’d love to get your thoughts on this "forgetting" philosophy. Are we over-relying on raw context? How are you all handling memory bloat in your agentic workflows?
Check out the repo here: https://github.com/sachitrafa/YourMemory
A star on the repo would also be appreciated!
#AI #AgenticAI #MachineLearning #OpenSource #MCP