I understand your frustrations. I feel your approach works well for a narrower profile: one person, a stable context, and fewer files.
I've seen many users who need a wider breadth of memory across more topics, where the structure and organization of that memory plays a big part in the LLM's performance.
My response to that was a local system that I ended up turning into Sig <https://sig-ai.app/>
It has some overlap with how you've approached it, but differs in other obvious ways.
Having said all that, I'm just highlighting another use case for memory. I think your approach is very valid for a lot of people. I appreciate the simplicity and lack of lock-in.